Posts Tagged ‘programming’

Science and Art

August 29, 2014 Leave a comment

My loyal readers might have noticed that I haven’t written a blog post for a while despite the abundance of source material I could have used. There is a simple explanation for this: I am working on too many other projects just at the moment and have tended to spend time on those instead. Contrary to what you might think I do spend a reasonable amount of time researching, writing, and revising each blog post and they’re not just tossed together in 5 minutes!

Most of what I am working on currently are programming projects which all seem to have become critical at the same time. But that doesn’t really worry me because (and I’m sorry if this sounds really geeky) programming is fun. It’s one of those rare creative activities which results in something which is actually useful (well, at least in most cases).

When I create a new system (and my current projects all involve web-based databases and apps written using PHP and MySQL) I like to create something which is easier to use, more reliable, faster, and just generally more elegant than the alternatives. There are some pretty impressive web-based systems out there now but there is a much greater number of truly terrible ones, so in general I just hope to raise the average a bit.

It’s quite amusing using another person’s web system and noticing all the design and functional errors they have made and smugly thinking “amateurs! my projects never suffer from that problem!” Of course, I shouldn’t be too smug because every system has its faults.

As I have said in the past, programming is a great combination of art and science, or at least it should be because both are required to get the best outcome. The art component doesn’t just involve superficial factors like graphics and typography, it is deeper than that and requires creation of a friendly, logical, and flexible user interaction. The science component should be obvious: programs must be technically correct, perform calculations accurately, but also more subtly be fault tolerant, easy to enhance, and interact with other systems properly.

All of this is not easy to achieve and I have made plenty of mistakes myself, so it is even better when something does magically come together in a positive way. And that description is significant because, the way I work, a project is an evolving, organic thing which often changes form and function as it progresses. I always have a plan, diagrams for the database structure, flow diagrams for the general functional flow of the program, and technical notes on how certain functions should be performed before I start coding, but by the time the project is finished all of these have changed.

And I am often asked to write technical documentation while I am creating a new system but that is useless because I change the details so often that it’s better just to write that documentation when the project is complete.

When I look back at old projects I am sometimes amused at the naive techniques I used “back in the day” but more often I am quite amazed at some of the awesome, complex code and clever techniques I have used. It’s not usually that I set out to write really clever, complex code, it’s more that as more functions and features evolved the code became more and more impressive. But it is too easy in that situation to let things become convoluted and clumsy. In that case I toss that section out and start again. Sometimes my systems take a little bit longer to complete but they always work properly!

And that brings me to my last design philosophy. I don’t re-use a lot of code, I rarely recycle libraries and classes, and I definitely avoid using other people’s code. Also I don’t use rapid prototyping tools and I don’t use graphical tools to create markup code like HTML. No, it’s all done “on the bare metal”.

In fact that’s not really true, of course. I was recently tidying up some shelves in my office and found some old machine code programs I wrote back in the 80s. Now that was really coding on the bare metal! Multiplying two numbers together was a big job in that environment (the 6502 had no multiply instruction) so PHP and hand-coded HTML are pure luxury compared with that!
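For anyone curious what that big job involved: with no multiply instruction, 6502 programmers hand-coded a shift-and-add loop. Here is the same technique sketched in Python (just an illustration of the method, not actual 6502 code):

```python
def multiply(a, b):
    # Shift-and-add multiplication: the technique a 6502 programmer
    # would hand-code, since the CPU has no multiply instruction.
    result = 0
    while b:
        if b & 1:        # lowest bit of the multiplier is set...
            result += a  # ...so add the (shifted) multiplicand
        a <<= 1          # shift multiplicand left (double it)
        b >>= 1          # shift multiplier right (halve it)
    return result

print(multiply(13, 11))  # 143
```

On a real 6502 the shifts and adds would be done a byte at a time with carry flags, which is why even this simple idea took a surprising amount of assembly code.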

Well that’s enough talking about it, it’s time to get back to doing it. I’ve got a nasty bit of database backup code to debug right now. Some sort of privileges error I think, time for some science and not so much art.


Another IT Debacle

June 26, 2013 1 comment

Many people don’t have a great deal of trust in computers and in the people who work with them. Often IT consultants are seen as charlatans more intent on exploiting the computer owner’s ignorance and making some easy money rather than genuinely trying to fix problems and make things better for the user.

I have reached this conclusion through anecdotes rather than any scientific evidence because I can’t find any good surveys on the subject. The best match I could find rated engineers highly, but I don’t think most people rate computer professionals as a type of engineer. And yes, I am a computer professional myself and I am *not* talking about personal experience! As far as I know my clients don’t think I’m ripping them off!

So where has this poor reputation come from? Maybe it’s related to the seemingly continuous stream of computer-related disasters which we hear about. If you are interested, I have blogged about the “Novopay” fiasco here in the past, in blog entries such as “Corporate Newspeak” from 2013-03-21, “Doomed to Failure” from 2012-12-20, and “Talentless Too, No Pay” from 2012-11-24.

And today I heard an update on another ongoing computer disaster, the new student management system for New Zealand’s biggest school (with 20,000 students), the NZ Correspondence School.

They spent $12 million on this new system and the management are convinced it is working well, so everything seems good, doesn’t it? I mean, what could possibly be wrong here?

Well, here are some comments left in a recent survey by actual users of the system: “I’m personally nearly at breaking point because of this system”, “There are no positives with this system. It has put our school on a downward spiral”, “This antiquated dinosaur system is an abomination. We need to start looking at a replacement SMS now.”

Do these sound like happy users of a system which cost millions of dollars and the management assures us is a great success?

There are many obvious parallels with Novopay and other botched systems here. First, an overseas company, which had no experience in the area, was brought in to do the work. Second, the actual users weren’t consulted much on the system and had very little input into how it worked. Third, the system was hacked together from an existing, antiquated system and extra functions were added on top. Fourth, the old system was shut down and the new system was put into full operation before it was thoroughly tested. Fifth, the introduction of the new system was delayed many times. And last (but certainly not least), the senior management involved are denying the problems exist and are either totally dishonest or out of touch with reality.

I suspect the actual programming team weren’t consulted much on how the project should proceed either. I strongly suspect they were told they just had to hack together something using existing parts and build a sort of software equivalent of a Frankenstein monster which was never going to work efficiently. Few IT professionals really want to work that way. Generally they would rather create something new and efficient from the start.

But the management team would have done what management do everywhere: make stupid, greedy, ignorant decisions, and then blame everyone else. Again, I emphasise this is my reading of the situation and I have no proof of this. The senior people involved with the project have refused to be interviewed and have just issued meaningless statements instead, so speculation is always going to be necessary.

My contempt for management in general should be well known to anyone who reads this blog. I don’t reject the idea of some form of management in every case, and I don’t think all managers are necessarily incompetent, although the results seem to indicate most of them are. But I prefer to look at different situations on a case by case basis, and in general this leads to my conclusion of general incompetence.

When you think about it, in many cases the sort of person who wants to be a manager is probably similar to the sort of person who wants to be a politician or a used car salesman. Either they see an opportunity to make a lot of money for doing very little, or they want undeserved control over other people, or they just can’t get a real job.

So you know what they say: those who can, do; those who can’t, teach; and those who can’t even teach, manage! (there, I’ve offended both teachers and managers!)

And as far as its effect on my profession is concerned, I think a lot of real IT professionals just take it as an additional challenge: not only do they have to contend with constantly changing products, complex software interactions, computer security issues, unreliable infrastructure, intricate programming, and awkward users, but also with silly management decisions. But then, many of us do like a challenge!

Geek Jokes, Part 2

April 29, 2013 Leave a comment

One of the most viewed blog posts I have ever done (at least on the WordPress version of my blog) is one titled “Geek Jokes” from 2011-05-12. It was a collection of jokes about science, engineering, and programming, and included an explanation of some of them.

So because that was so popular, and because I have been a big negative in recent posts (Market Schmarket, Two Complete Morons, etc) I thought it was time for something a bit lighter but also very cool (well cool in a geeky way, at least). So here is Geek Jokes, Part 2…

Joke 1

Heisenberg and Schrödinger get pulled over for speeding.
The cop asks, “do you know how fast you were going?”
Heisenberg replies, “no, but I know where I am.”
The cop thinks this is a strange reply, calls for a search, and opens the trunk.
The cop says, “do you know you have a dead cat in your trunk?”
Schrödinger says, “well, I do now!”

Analysis of Joke 1

Many of these jokes seem to derive their humour from a sense of superiority the geek might gain from understanding the joke when others wouldn’t. Of course, many would say that geeks actually are naturally superior and deserve to be just a little bit smug as a consequence, however I couldn’t possibly comment on the idea.

Anyway, Heisenberg and Schrödinger were two famous physicists who were involved in important work and discoveries in the early days of quantum physics.

Heisenberg is most well known for the Heisenberg Uncertainty Principle which states that it is impossible to precisely know both the position and momentum of an object. The more accurately the position is known, the less accurately the momentum (and therefore the speed) can be known. This isn’t just a failure in the measuring technique, it’s a fundamental property of the quantum world.

Schrödinger used a “thought experiment” involving a cat locked in a box with a vial of poison which could be released based on a truly quantum event (such as radioactive decay). Because it could not be known whether the event had occurred, it could also not be known whether the cat was alive or dead. But again, the truth (or at least one interpretation of the meaning of the phenomenon) is far more subtle. According to one interpretation of quantum physics the cat isn’t just in an unknown state (dead or alive), it is actually simultaneously in both states until the box is opened.

So with that understanding, the joke is now obvious, right? In fact this is an enhanced version of the original, which only mentioned Heisenberg. Schrödinger was added a bit later to double the geeky goodness of the joke.

Joke 2

How do you recognize a field service engineer on the side of the road with a flat tire?
He’s changing each tire to see which one is flat.
And the related problem:
How do you recognize a field service engineer on the side of the road who has run out of gas?
He’s changing each tire to see which one is flat.

Analysis of Joke 2

A field engineer is a person who is sent into the field (the client’s workplace usually) to solve problems. This joke seems to fit best with software engineers and related helpdesk and support staff so I’m going to analyse the joke based on that. Part of my job involves this sort of work so I particularly identify with this. I’m not saying I’m guilty of doing it, but I do see it a lot in other people!

Many “lesser” support staff try to solve all problems in pretty much the same way. They might either have a list of instructions they have to go through that they have been given as part of their job, or they might have limited experience and only know a few possible responses to all problems. They also go from one step to the next even when it should be possible to go directly to the source of the problem.

So naturally when your computer has a problem they ask you to restart it, or re-install the operating system, or reset the parameter RAM, or one of a few other common actions. These are real solutions to particular problems but they are often used in situations which are completely inappropriate.

So the analogy with fixing a flat tire is obvious. Anyone with a bit of real knowledge (and the permission from his company to use it) can just be a bit smart about it and analyse the problem and change the correct tire immediately. But that’s not the way most people work.

Maybe this is humorous because it is a situation many people find themselves participating in as the owner of the computer (or car in the joke) and maybe it’s even more humorous for superior software engineers like myself who actually analyse the problem and often come up with the correct solution first time as a result!

Joke 3

Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.

Analysis of Joke 3

Bandwidth is a term used to describe the speed at which data can be transmitted. If your internet connection works at 10 Megabits per second, for example, it can transmit about a million characters (single letters or digits) in a second (note that it takes 8 bits to make a single byte – the most common way to represent a character – plus a bit of overhead for control, so the number reduces by a factor of about 10).

But electronic transmission isn’t the only thing the concept of bandwidth can be applied to. A pigeon which takes an hour to deliver a 100 word written message has a bandwidth of 100 words per hour, for example. And a computer technician who takes 5 minutes to deliver a 16 GB flash drive by carrying it to the required destination (sometimes known as sneaker-net) has a bandwidth of about 430 Megabits per second.

Of course those two solutions do vary in speed depending on the distance they must cover, plus there is a second concept which comes into play: latency. That is the time spent waiting for the transmission to begin. In the case of the flash drive the data comes in quite quickly but it takes 5 minutes to start!

So the joke is that sometimes the old way is best (in general, as well as in the specific case of data transmission). It might be possible to fit a thousand 100 Megabyte tapes into a station wagon, and even if it takes an hour to reach its destination that is still a bandwidth of over 200 Megabits per second. That might be faster than sending the data down a high speed data link!
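A quick back-of-the-envelope check of those figures (rough approximations only, assuming 8 bits per byte and ignoring overhead):

```python
def bandwidth_mbps(megabytes, seconds):
    """Average bandwidth in megabits per second."""
    return megabytes * 8 / seconds

# A 16 GB flash drive carried over in 5 minutes ("sneaker-net"):
flash = bandwidth_mbps(16_000, 5 * 60)       # roughly 430 Mbit/s

# A thousand 100 Megabyte tapes driven for an hour in a station wagon:
wagon = bandwidth_mbps(1000 * 100, 60 * 60)  # roughly 220 Mbit/s

print(round(flash), round(wagon))
```

Latency doesn’t appear in these numbers at all, which is exactly the point of the joke: the station wagon has huge bandwidth but terrible latency.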

Joke 4

There are 10 types of people in the world. Those who understand binary, those who don’t,
and those who knew we were using ternary.

Analysis of Joke 4

This is an extension of the classic joke I mentioned in the previous geek jokes post. I didn’t explain it there so I will here, including the added extra component, of course.

Initially it looks like the claim is that there are ten types of people in the world because that’s usually what “10” means. But if you are working in a different base then 10 means something quite different. In fact in every case it means the number of the base. So in base ten (our usual base) it means ten. But in base two it means two and is more properly called “one zero” rather than ten or two.

Computers work in base two at the most basic level because it is easiest to handle signals which are either off (0) or on (1) very quickly. Most programming can be done in base ten, our normal base, because the computer (or more correctly a program called the compiler or interpreter) does the conversion to binary. But in many cases it is useful to understand binary and any half decent programmer can work in binary with some proficiency.

But just to fool anyone who thinks they are smart enough to assume the number is binary, the joke goes on to claim it could be ternary (base 3), in which case 10 means three. Of course, that is unlikely because ternary isn’t used in computing applications, at least not as far as I am aware!
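The joke is easy to check in Python, where int() can parse a digit string in any base:

```python
# The digit string "10" means a different number in every base:
# it is always equal to the base itself.
for base in (2, 3, 10, 16):
    print(f'"10" in base {base} is {int("10", base)}')
```

So “10” really does mean two to the binary crowd, three to the ternary crowd, and ten to everyone else.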

Finally, on a similar theme I present joke 5, which is in the form of a geek love poem…

Joke 5

Roses are #ff0000
Violets are #0000ff
All my base
Are belong to you!

Analysis of Joke 5

Base 2 can be quite clumsy to use because it involves long sequences of zeros and ones (for example one thousand in base 2 is 1111101000) so it’s usually best to use higher bases. But ten isn’t suitable because ten isn’t a power of two, and 8 bits (known as a byte) is a common unit, so base sixteen (where two digits make exactly a byte) is more useful. Because base sixteen requires more than the ten digits 0 to 9, we extend them with the letters A to F. So fifteen is F, sixteen is 10, and two hundred and fifty five (the biggest number which can be stored in a byte) is FF.

When we represent colour on a computer (or any other device for that matter) we usually make use of the fact that the human eye has three colour sensors: for red, green, and blue light. By mixing different amounts of these three “primary” colours any other colour can be created. For example red and green make yellow and all three colours make white.

Note that devices which use ink instead of light use a different set of primary colours – cyan, magenta and yellow – which are the secondary colours of light. Also note that your printer uses a fourth colour, black, but it doesn’t strictly need it because theoretically black can be made from cyan, magenta and yellow mixed. However in real life that usually looks more like a muddy brown, plus it uses a lot of ink to produce the most common colour.

So light producing devices, such as computer displays and TVs, use RGB (red green blue) colour, and ink devices such as printers use CMYK (cyan magenta yellow black – black is K because B was already used for blue).

If we want to specify a colour for the screen we just use three numbers for the amount of red, green and blue, and because we usually use a byte (a number from 0 to 255) for each colour, a two character base sixteen number makes sense. So ff0000 means 255 (maximum) red, no green, no blue (pure red) and 0000ff means no red, no green and 255 blue (pure blue). And my favourite colour? That would be #3797ff, a rather nice sky blue.
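Splitting a colour like #3797ff back into its three bytes is a one-liner in most languages; here is a small Python sketch (the function name is just mine):

```python
def hex_to_rgb(colour):
    """Split a '#rrggbb' colour string into its red, green, blue bytes."""
    value = colour.lstrip("#")
    # Each pair of hex digits is one byte (0 to 255).
    return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#ff0000"))  # pure red: (255, 0, 0)
print(hex_to_rgb("#3797ff"))  # that sky blue: (55, 151, 255)
```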

That explains the first two lines (roses are red, violets are blue) but the other two are a bit more involved! Well, not really.

In 1989 a Japanese video game called “Zero Wing” was released, and it was later translated into English. If the game beat you an evil character appeared announcing that he had taken over all of your bases. The translation was a bit odd though and came out as “All your base are belong to us”. For some reason this phrase sort of caught on in the geek world and that is the origin of the final two lines.

A true geek would understand all the jokes without any effort at all. I wrote these explanations entirely without reference to other sources, and I seem to have spent far more time discussing geeky tech stuff than the actual jokes, so I claim uber-geek status based on that. And finally, I would like to add my two bits to this whole discussion: 1 0. Thank you.

Talentless Too, No Pay

November 24, 2012 Leave a comment

About a month ago, in a blog entry titled “Make PowerPoint Illegal”, I discussed IT disasters, specifically what I referred to as the “Ministry of Social Development public computer kiosk security fiasco” and the “Ministry of Education payroll disaster”.

Since then the payroll disaster has just carried on getting worse and worse. There haven’t necessarily been more errors – although it’s possible there have been more but the facts aren’t totally clear – it’s more that many errors don’t get fixed even though there are assurances they have been, and new types of errors keep appearing.

The company which created the Novopay “system” is called Talent2. The title of this blog entry represents most people’s real thoughts on this company and their products: Talentless Too, No Pay. I also considered “Novopray” because by this time many teachers are probably offering prayers that they will be paid.

The ministry and Talent2 say the problems are being fixed and the vast majority of teachers are now being paid properly. I doubt it. According to principals’ representatives 90% of schools are still reporting errors and I suspect this is just the tip of the iceberg. For example my wife, who is a teacher, can’t get reliable payslips. At various times they come through blank, or with a couple of random lines, or with half the required details. We can’t tell if they’re right or wrong but we’ve never officially complained about it so we wouldn’t be counted in the stats.

I’m sure many people would blame the ministry for the problems because that’s just so easy, but I think the company, Talent2, is primarily to blame. They are the ones who were supposed to have created this system. They are the (alleged) professionals. They are the ones being paid over $100 million to create and maintain the system.

And despite their assurances their payroll systems in other organisations really aren’t that great. My friend Fred reports that a Talent2 system is also used in the organisation he works for and it’s a bit of a joke. He does admit that it usually gets the pay right but many features were never implemented and the ones that do work look like the user-interface was designed by a programmer from the 1970s. It really is that primitive.

No one is saying that producing a payroll on this scale is easy, but Talent2 supposedly specialise in this sort of work, they have 150 people on the team, they have charged tens of millions for the work, and they have spent at least 2 extra years working on it. Plus they paid another company (presumably equally incompetent and corrupt) almost a million dollars to test the system. What did we get for that money? What did that other company do for a million dollars? Absolutely nothing as far as I can see. The whole thing must be close to being fraud.

I think a lot of the problem is caused by the distorted view people have of big “professional” corporations. They think that because those corporations have fancy office buildings and their staff always wear expensive suits that they are true professionals. Well, they may look professional on the surface but the quality of their products and services doesn’t seem to be great value for money in my opinion. And when you consider that some New Zealand companies were also part of the tender process you really have to wonder why these clowns got the work.

Hopefully the system will eventually work but I suspect a major debacle is looming for the Christmas payroll. Just the time when teachers really don’t want “no pay” from a system created by a “talentless” company!

Recursion: See Recursion

August 28, 2012 2 comments

Today I listened to a Radio NZ podcast which discussed computer science. One of the topics they talked about was recursion, but in my humble opinion they didn’t explain it very well. Many years ago I did a computer science degree and have worked as a computer consultant and programmer ever since, so I wanted to offer my own contribution on the topic here: recursion, and programming in general.

First I want to say why I love programming so much. To me it is the ideal combination of art and science. That suits my personality best because I am an analytical and precise person but I also like to be creative. There is no other profession that I know of which combines both of those elements in quite the same way. Writing a program involves solving a problem in an analytical way but many of the elements of creating a program also involve a lot of creativity and “beauty”.

In programming (as in many fields where beauty seems a strange word to use as a description, such as maths) beauty refers to the elegance, simplicity, and subtlety of a solution to a problem rather than any outward manifestation of the item being created. In programming there is often an opportunity to create a visual interface which can be described as beautiful but that’s not really what I am talking about here. It’s deeper and more subtle than that.

When I write a program I don’t just try to solve the initial problem, I try to make the solution extendable, tolerant of errors, fast, compact, and easy to understand. Usually a short program is a far more impressive achievement than a long one which has the same function. And every moderately complex problem has an infinite (or so close to infinite that it doesn’t matter) number of possible solutions, some of which are elegant and beautiful and some which aren’t.

Of course there is a certain amount of subjectivity in judging how good a program is but, as in most areas of expertise, skilled programmers will generally agree on what is good and what isn’t.

Now, getting back to recursion. First of all, what is it? Well, it’s a way to solve a problem by creating a series of steps (what computer scientists call an algorithm) and allowing that algorithm to refer to itself. The nerdy joke in the computer world is that if you look up recursion in the dictionary the definition will include “see recursion”. There are also little “in” jokes where some languages have recursive names. For example the name of the scripting language “PHP” is a recursive acronym for “PHP: Hypertext Preprocessor”, and GNU is an acronym for “GNU’s Not Unix”.

The serious example given in the podcast was an algorithm to climb stairs which went like this: (I have given the following steps the name “climb”)…

algorithm “climb”:
go up one step
are you at the top?
– if no: “climb”
end (of climb)

You can see in this example that the algorithm called “climb” has a step which refers to itself (the last step “climb”). But this is a bad example because it could also be done like this…

start:
go up one step
are you at the top?
– if no: go back to “start”

This is what we call an iterative algorithm: it iterates or “loops” around until it stops at a particular step. Generally these are more efficient than recursive algorithms.

By the way, I realise that neither of these work properly if you start at the top of the steps already. That sort of thing is a common programming problem and one which is obvious and quite easy to fix here but often not in more complex algorithms.
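Both versions can be sketched in a few lines of Python, with the starting-at-the-top problem fixed by checking before stepping:

```python
def climb_recursive(position, top):
    # Check first, so starting at the top already works correctly.
    if position >= top:
        return position
    return climb_recursive(position + 1, top)  # "climb" refers to itself

def climb_iterative(position, top):
    while position < top:  # loop back to the start instead of recursing
        position += 1
    return position

print(climb_recursive(0, 5), climb_iterative(0, 5))  # both reach step 5
```

The iterative version is the better choice here: it does the same job without the overhead of a function call for every step.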

So what about an example where recursion does make sense? There is a classic case often used in computing which involves processing a binary tree. So what is a binary tree? It’s a structure in a computer’s memory which contains information in a way which makes it easy to search, sort, and manipulate in other ways. Imagine a series of words and with each word are two links to two other words (or the link could be empty). The words are put in the tree so that the left link always goes to words alphabetically before the current word, and the right link to words after.

If the first word is “computers” for example and the second word is “are”, the second word would be accessed from the left link from “computers”. If the third word was “fun” then that would go on the right link from “computers”. If the fourth word was “sometimes” it couldn’t go on the right link from “computers” because “fun” is already there, so it would go on the right link from “fun” instead (“s” comes after “f”). If the next word was “but” that would go right from “are”. Continuing the sentence with the words “can be tricky too” we would get this…

computers
– left: are
–– right: but
––– left: be
––– right: can
– right: fun
–– right: sometimes
––– right: tricky
–––– left: too

Now let’s say I wanted to display the words in alphabetical order. First I make a link pointing to the top of the tree. Now I create some steps called “sort” which I give a link to the current word (initially “computers”). Here are the steps…

Algorithm “sort” using “link”:
Is there a left link at the word for the link you are given?
– if yes: “sort” with the left link
Display the current word
Is there a right link?
– if yes: “sort” with the right link
end (of sort)

That’s it! That will display the words alphabetically. The first link points to “computers” but there is a left link so we send a link to that which points to “are”. There is no left link so we print “are” and send the right link (to “but”) to sort. There is a left link so we send that (to “be”) to sort. There is no left link so it prints “be” and finds no right link. At that point that sort algorithm ends, but the previous sort which caused this sort to start is still active so we go back to that. That sort pointed to “but” and we had just taken the left link, so we carry on from that: print “but” and then check the right link using sort. That takes the next sort to “can”, etc…
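The “sort” steps (known in computer science as an in-order traversal) translate almost line for line into Python. This sketch also builds the tree from the sentence above; the Node and insert names are just mine:

```python
class Node:
    def __init__(self, word):
        self.word = word
        self.left = None   # link to words alphabetically before this one
        self.right = None  # link to words alphabetically after

def insert(node, word):
    # Follow left/right links down the tree until an empty one is found.
    if node is None:
        return Node(word)
    if word < node.word:
        node.left = insert(node.left, word)
    else:
        node.right = insert(node.right, word)
    return node

def sort(node, output):
    if node.left is not None:
        sort(node.left, output)   # "sort" with the left link
    output.append(node.word)      # display the current word
    if node.right is not None:
        sort(node.right, output)  # "sort" with the right link

root = None
for word in "computers are fun sometimes but can be tricky too".split():
    root = insert(root, word)

result = []
sort(root, result)
print(result)  # the words come out in alphabetical order
```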

The key thing here is that each time “sort” is used it sticks around until the end, so there can be any number of sorts waiting to continue where they left off before a “sort” further down the tree was started.

Wow, that sounds so complicated but it’s really quite simple. I did all of this from memory and it’s quite easy when you understand the concept. Without recursion, sorting a binary tree would be difficult because there is no reverse link back up the tree and no easy way to remember what has already been done at each word. With recursion, when one version of sort launches another, its own information is kept on hold, creating a way to go back up the tree when the sort it started has finished running.

The recursive algorithm in this case is efficient and elegant. It’s also very simple because all of the complexity that might be required otherwise is available as an intrinsic part of how recursion works. It’s a simple example of “beauty” in programming.


August 8, 2012 Leave a comment

The Mac’s current operating system, Mac OS X, has been quite successful since its introduction about 10 years ago. It has powered a wide range of Mac computers, from single core G3 PowerPC machines all the way up to modern 12 core Intel machines. And iOS, a variation of Mac OS X (well not technically, but based on the same core technology at least), has been powering the iPhone and iPad for years too.

But while Mac OS X is far more reliable and capable than the systems which preceded it, the way that users interact with it isn’t that much different. As Mac OS X (now OS X) becomes more mature it’s natural to wonder what will come next. What will Apple give us in OS XI?

I think I see where they are going. A conspicuous trend with the latest iteration of the system, Mountain Lion, has been to split bigger programs into smaller, single function apps. For example Mail no longer handles notes or RSS feeds, and iCal doesn’t handle reminders any more. Other programs perform these functions instead, so instead of a few big programs there are now many small ones.

This can be convenient because it gives the user the ability to choose which program to use for a single function instead of being locked in to a single program like Microsoft’s Outlook which does email, calendars, notes, address books, and reminders. Because the programs know how to communicate with each other most of the convenience and interoperability of a single big program is maintained. And several smaller programs are generally easier to use, more reliable, and faster than one big program which tries to do everything.

But there are disadvantages as well. For example, it can be inconvenient to swap between programs to get to different functions.

So what is the answer? I think Apple are heading towards component software, similar to what they tried to do with OpenDoc back in the 90s. With OpenDoc the user could create their own program by mixing small components. The interface was centred around the document, and whatever software components were required could be combined to produce a single complex document which, in a conventional system, would have to be assembled from many smaller parts.

At the time the operating system and hardware weren’t really up to the task and OpenDoc failed, but what about now?

The object architecture of OS X is a natural fit for this approach. There are already system components, such as WebKit (the Mac’s built-in web engine), which can be used by programmers. Why not extend that idea to a higher level, so that users themselves can combine components to make their own programs?

When I create web databases and apps I usually have a web browser, a text editor, a PDF viewer for documentation, and a graphics program open. I would like to create my own web projects using a tool I design myself by mixing my favourite apps which have those abilities. Not only would the whole thing run inside a single window, but all the components could freely exchange information. As I typed the name of a PHP function, for example, the PDF viewer would show the syntax for that function.
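To make the idea concrete, here is a toy sketch in Python (entirely my own illustration; Bus, editor_types, and the one-entry docs dictionary are invented for the example, and Apple has no such API): small single-purpose components that publish events to each other instead of living inside one monolithic program.

```python
# A toy publish/subscribe hub: the glue that would let independent
# components (editor, documentation viewer, etc.) exchange information.

class Bus:
    """Minimal event bus the components plug into."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.handlers.get(topic, []):
            handler(payload)

bus = Bus()

# An "editor" component announces what the user is typing...
def editor_types(name):
    bus.publish("function-typed", name)

# ...and a "documentation viewer" component reacts by looking it up.
docs = {"strlen": "strlen(string $string): int"}  # illustrative entry
shown = []
bus.subscribe("function-typed", lambda name: shown.append(docs.get(name, "?")))

editor_types("strlen")
print(shown[0])  # the viewer displays the syntax for the typed function
```

The point of the sketch is the decoupling: the editor knows nothing about the viewer, yet typing a function name makes its documentation appear, which is exactly the behaviour described above.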

Some of this functionality is already available in monolithic tools but I already have a text editor and other programs I want to use. Why can’t I link up my existing programs to make them work together more smoothly?

I discussed this idea a few years back with a fairly senior Apple engineer and he seemed skeptical so maybe there are good reasons it can’t be done, or maybe Apple just think it’s a bad idea from a user perspective. Maybe they don’t want their users to have too much control. That certainly seems to be the case superficially.

But that is where I think OS XI should go. Eventually I would like only one app on my whole computer which I created by mixing components I like to use. It’s quite a neat idea and if Apple want to use it in a future OS, hey they can have it for free! I just want to be able to create that sort of environment on my next Mac (and iPhone and iPad).

IT Support Fun

November 4, 2011 2 comments

Being a computer consultant and programmer provides its fair share of challenges. First, there is the temperamental nature of some computers, then there is the constantly changing nature of the IT world, and then there is the ultimate challenge: the users!

I work almost entirely with Macs so I’m not exposed to the same level of troublesome behaviour that my PC colleagues have to put up with. I’m not necessarily saying Macs are totally free from odd and unexplained problems (they certainly aren’t) but Apple’s control over the hardware, operating system, and some of the software means that most Mac systems suffer less from bizarre behaviour than Windows PCs.

The constant change in the computer world can be seen as both its greatest challenge and as its greatest attraction. Having new technologies appearing so quickly does make working in IT interesting but it also makes it hard to keep up. Supporting whole new technology areas, such as iPads and the extremely capable smart phones we now have, is a challenge but would we really want to do without these cool new toys?

And then there’s the users. Few people realise how difficult it can be to support some computer users. It’s not so bad if you have direct access to the computer in need of your intervention, or even if you have screen sharing or terminal access to it, but trying to support computer users by “remote control” over the phone is probably the ultimate exercise in frustration!

It’s not just computers where this happens, because other forms of technology can suffer from similar problems. A friend recently described an experience she had trying to explain, over the phone, how to change the settings on a new TV. And it’s probably significant that TVs (along with almost everything else) are actually controlled by small computers, so their on-screen control systems suffer from similar issues to conventional computers.

Ironically, it was easier in the “old days” when the primary way to control a computer was through a command-line interface. Asking someone to type a command like “cd /” is often easier than asking them to find the icon for the hard disk and double-click on it. Issues with the “visual” approach include: is the HD icon visible? What does it look like? What is it called? Can the user double-click at the correct speed? What view mode is the hard disk window set to? (And, no doubt, many more.) And yes, I know you can control modern computers through a command line (I love the Mac terminal), but explaining how to launch that can be a major process in itself!
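A short sketch of why dictating commands works so well (the commands are generic examples, not a transcript of any real support call): each one is unambiguous, and each one prints something the user can read back so both sides stay in sync.

```shell
# Commands that are easy to dictate over the phone, one at a time:
cd /     # "go to the top of the startup disk" -- no icon to hunt for
pwd      # "read me what it printed" -- confirms exactly where we are
ls       # "read me the first few names" -- plain names, no view modes
```

Compare that with the visual equivalent: there is no reliable way to ask “read me what the icon looks like” and get an answer you can act on.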

I sometimes wonder what users are thinking. These aren’t stupid people, but when it comes to working on their computers they can do some odd things. Here are a few examples which illustrate the problem…

First there’s the phenomenon of inappropriate use of terminology. A user I was trying to help once told me something like “I pointed my font at the box and clicked but the mouse didn’t appear.” Say what? I recognise all of those words but I have no idea what they mean in that context!

Then there are the users who just can’t respond appropriately when asked a question. I once asked a user, “Is the Finder at the front? You can tell because the first menu at the top-left (next to the Apple) is called Finder.” I was assured it was, so it seemed safe to say “go to the Go menu and choose Connect to Server”. But there was no Go menu. That was odd. So I tried a new approach. I said “press command-K” and was informed “it just beeped”. Stranger still! Anyway, after a while I said: look at the top-left of the screen and read out what it says. The response was “an Apple symbol, then Mail, then…” What? Did you say the second word was Mail? I thought you said it was Finder? Who knows what the explanation for that slight inconsistency was. It’s still a mystery!

Many users can’t describe real physical objects much better. Recently I was trying to find out what type of computer a person had. She said it was something like a Mac 72. A Mac 72? What is that? The closest thing I could think of was a Power Mac 7200 but that was from the distant past. Anyway it turned out it had a built-in screen, was quite heavy, and didn’t have a CD drive. That didn’t seem to fit anything either but then the name “eMac” was recalled. So I showed this person an old eMac waiting to be recycled and I was assured it was like that except blue and with no CD drive. When I finally saw the computer it was a white iMac with a CD drive. And one other thing: the person wanted to replace the old machine because it had no ethernet to connect to a broadband router. Except, of course, all iMacs have ethernet built-in! Another mystery!

So, as you can see, working with users is a real treat. It’s like a game where they try to deceive you as much as possible and it’s your job to help them despite their best efforts to stop you from doing so. It’s great fun and I really enjoy it when I finally see through the deception and the truth is fully revealed!