Roman Sandals

July 29, 2008

Super-user excuses

Filed under: musing, sysadmin, technology — Craig Lawton @ 1:12 pm

System administrators always get super-user access. Third parties, increasingly located wherever, are often granted super-user access as well, usually to smooth project implementations. Super-user access is thrown around willy-nilly because it’s a hell of a lot easier than documenting privileges, which is really, really boring work.

This leads to poor outcomes: downtime, systems in “undocumentable” states, security holes etc.

The horrible truth is that somebody somewhere must be able to gain super-user access when required. It can’t be avoided.

The other horrible truth is that when you allow super-user access only because properly defining a particular role is hard, you are, in effect, giving up control of your environment. This is amplified when more than one team shares super-user access. It only takes one cowboy, or an innocent slip-up, to undermine confidence in an environment.
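
To make “properly defining a role” a little more concrete, here is a toy sketch in Python (purely illustrative; the role names, hosts and commands are made up) of what a documented privilege set might look like, as opposed to a blanket grant of root:

```python
# Toy illustration: a documented role is an explicit, reviewable list of what a
# party may do, instead of an implicit "everything" via super-user access.
ROLES = {
    "db-vendor": {
        "hosts": ["dbprod01", "dbprod02"],
        "commands": ["/usr/bin/db_restart", "/usr/bin/db_logtail"],
    },
    "app-deployer": {
        "hosts": ["appprod01"],
        "commands": ["/opt/app/bin/deploy"],
    },
}

def is_permitted(role: str, host: str, command: str) -> bool:
    """Allow a command only if the role explicitly grants it on that host."""
    grant = ROLES.get(role)
    return bool(grant) and host in grant["hosts"] and command in grant["commands"]
```

The point isn’t the particular mechanism (sudo, RBAC tooling, whatever you use); it’s that the grant is written down, narrow, and auditable.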

In this increasingly abstracted IT world, where architecture mandates shared, re-usable applications, where global resourcing mandates virtual, remotely-located teams, where IT use and server numbers increase exponentially, and where businesses increasingly interact through gateways, security increasingly looks like a feature tacked on at the last minute.

Security costs a lot and adds nothing to the bottom line – though lack of it can and will lead to some big bottom line subtractions.

The mainframe guys had this licked ages ago. The super-user excuse is looking rather thin.

The Age of Authorization is upon us…

Update: An amazing story from San Francisco, which outlines how lack of IT knowledge at the top of an organisation, and too much power devolved to too few IT staff, can cause much grief.

May 23, 2008

St. George and the IT dragon

Filed under: business, musing — Craig Lawton @ 3:08 pm

Over the last few months I’ve read a couple of articles in the AFR relating to IT spend in M&A activity.

It’s amazing to consider that about half of the business integration costs for the proposed merger between Westpac and St. George will be in IT (0.5 * $451,000,000).

Consider that the Commonwealth Bank is planning on spending $580,000,000 to re-engineer its aging platforms (to me this means cleaning out all the legacy crap), and NAB is looking at doing the same.

A merged Westpac/St. George would be $225,500,000 behind the eight-ball, before it could even contemplate a project of this scale.
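
Spelling out the arithmetic (figures as reported above; just a rough Python back-of-envelope, nothing more):

```python
# Figures (AUD) as reported in the AFR coverage cited above.
integration_cost = 451_000_000       # estimated Westpac/St. George integration cost
it_share = 0.5                       # roughly half of that is reportedly IT
it_integration_cost = integration_cost * it_share   # 225,500,000

cba_replatform_budget = 580_000_000  # CBA's planned core re-engineering spend

print(f"IT integration bill: ${it_integration_cost:,.0f}")
print(f"A CBA-style re-platforming would come on top of that: ${cba_replatform_budget:,.0f}")
```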

Also, to make the merger more attractive, or because of uncertainty, either side could be tempted to put off required upgrades, lay off staff (possibly key staff), and run down maintenance.

Accenture recently concluded a survey of 150 CIOs and found that poor IT integration was the leading cause of failure to meet the stated objectives of a merger or acquisition (38%).

It makes you wonder if this whole “IT thing” is going to collapse under the weight, and expense, of its own complexity!

Frustrating in-house systems

Filed under: musing, technology — Craig Lawton @ 1:25 pm

I’m constantly amazed at the crappy performance of in-house applications at the places I’ve worked. Customer-facing applications must perform, or business is lost. In-house applications, it seems, are never tuned for performance, and this makes work that much harder.

This difficulty is related to the level of brain-memory you are using for your current task. Very short-term memory is great, and necessary, when you are flying through a well-understood task. But short system interruptions (usually involving the hourglass) force you to hold things in memory for longer, making the effort that much larger, and less enjoyable.

There are other types of interruptions of course, which have a similar effect, such as people-interruptions (“What are you doing this weekend?”) and self-inflicted-interruptions (such as twitter alerts).

If your system hangs for long enough you may start a new task altogether (so as not to look stoned at your desk) and therefore lose track completely of where you were at.

This forces unnecessary re-work and brain exhaustion!

I see lots of people with “notepad” or “vi” open constantly so they can continually record their work states. This is a good idea but takes practice and is an overhead.

It comes down to this. I want a system which can keep up with me! :-)

And is that unreasonable, with gazillions of hertz and giga-mega-bits of bandwidth available?

May 1, 2008

Going with the cloud

Filed under: management, musing, technology — Craig Lawton @ 5:01 pm

Really interesting article on the Reg’ which should put data centre fretters’ feet firmly back on the ground. It seems the “thought leaders” don’t see data centres disappearing anytime soon because:

  • Security – “… there are data that belongs in the public cloud and data that needs to go behind a firewall. … data that will never be put out there. Period. Not going to happen. Because no matter how you encrypt it, no matter how you secure it, there will be concerns.”
  • Interoperability – “…figure out ways for systems that are … behind the firewall … to interoperate with systems that are in the public cloud”
  • Application licensing complexity.
  • Wrangling code to work over the grid – getting any code written that exploits parallel infrastructure seems to be very difficult (see the short sketch after this list).
  • Compliance – “What happens when government auditors come knocking to check the regulatory complicity of an application living in the cloud?”
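
To illustrate the “wrangling code” point: even the most trivial parallel job forces you to restructure code into independent, self-contained units of work, and real applications with shared state, ordering and failure handling are far messier. A minimal Python sketch (illustrative only, not from the article):

```python
# Minimal sketch: even a trivial parallel map requires the work to be split
# into independent, side-effect-free, picklable functions.
from multiprocessing import Pool

def score(record: int) -> int:
    """Stand-in for an expensive, independent unit of work."""
    return record * record

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(score, range(1_000))
    print(sum(results))
```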

Also, they didn’t cover jurisdictional issues, such as who you take to court, and in what country, when there is an issue with data misuse “in the cloud”.

It makes you wonder why cloud computing will be any different from grid computing, or thin desktop clients: a great idea, but without enough momentum to overcome ingrained corporate behaviour.

March 27, 2008

Infrastructure Money

Filed under: musing, technology — Craig Lawton @ 2:33 pm

It’s interesting to note that in big IT shops software often accounts for about 50% of the operating budget. Compare this to human resources, or “people” as they are sometimes referred to, which come in at about 10% of most budgets.

In the last downturn, server hardware got hammered on price, hurting the likes of SUN Microsystems and helping the likes of Dell. People got what they could done, with what they could budget for.

If the current economic storm clouds start hurting IT budgets, will software (and possibly also storage costs) be the sacrificial lamb, opening doors for the likes of MySQL, Debian, Tomcat, Nagios, etc. to make big inroads into the corporate world? Could SUN make some inroads with ZFS (over VxFS)?

November 22, 2007

C: Drive

Filed under: musing, technology — Craig Lawton @ 4:08 pm

I just had a thought. I should really back up my work laptop. I should back up my C: drive. I fired up the XP Backup tool.

Even plebs know what a C: drive is! But why is it so? Surely it should be the A: drive, if it’s that important. But no, the A: drive was originally assigned to the first floppy disk drive and, from memory, the B: drive was for a second floppy drive. But nobody has floppy disk drives anymore!

As if reading my mind, the XP Backup tool told me that after creating the back-up file it would ask me for a floppy to create a boot disk. Of course, you’d think it’d check to see if I had a floppy drive before asking. I don’t.
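
For what it’s worth, the check the tool skips is trivial. A rough sketch in Python (illustrative only; the real tool would presumably do the Win32 equivalent):

```python
# Rough sketch (Windows only): check whether a floppy-style drive is present
# before prompting the user for a boot floppy.
import os

def has_floppy() -> bool:
    # A: and B: are the drive letters historically reserved for floppy drives.
    return any(os.path.exists(f"{letter}:\\") for letter in "AB")

if not has_floppy():
    print("No floppy drive found - skip the boot-disk prompt.")
```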

Interestingly, Wikipedia lists all the operating systems that use drive letter assignment. It reads like one of those Human Rights Watch charts, the ones that list which countries kill more of their own citizens than others: (more…)
