Archive for the ‘computers’ Category

1K of RAM

July 25, 2017

One of my first computers had just 1K of RAM. That’s enough to store… well, almost nothing. It could store 0.01% of a (JPEG compressed) digital photo I now take on my dSLR or 0.02% of a short (MP3 compressed) music track. In other words, I would need 10 thousand of these devices (in this case a Sinclair ZX80) to store one digital photo!
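Just to sanity-check those percentages, here’s a quick back-of-the-envelope calculation in Python (the file sizes are my assumptions: roughly 10 MB for a dSLR JPEG and 5 MB for an MP3 track):

ram = 1024                         # 1 KB of RAM, in bytes
photo = 10 * 1024 * 1024           # assumed ~10 MB dSLR JPEG
track = 5 * 1024 * 1024            # assumed ~5 MB MP3 track

print(f"{ram / photo:.2%} of a photo")            # -> 0.01% of a photo
print(f"{ram / track:.2%} of a track")            # -> 0.02% of a track
print(f"{photo // ram} ZX80s to hold one photo")  # -> 10240, i.e. about 10 thousand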

I know the comparison above is somewhat open to criticism in that I am comparing RAM with storage and that early computers could have their memory upgraded (to a huge 16K in the case of the ZX80) but the point remains the same: even the most basic computer today is massively superior to what we had in the “early days” of computers.

It should be noted that, despite these limitations, you could still do stuff with those early computers. For example, I wrote a fully functioning “Breakout” game in machine code on the ZX80 (admittedly with the memory expansion) and it was so fast I had to put a massive delay loop in the code to slow it down. That was despite the fact that the ZX80 had a single 8-bit processor running at 3.25 MHz, somewhat inferior to my current laptop (now a few years out of date) which has four 64-bit cores (8 threads) running at 2.5 GHz.

The reason I am discussing this point here is that I read an article recently titled “The technology struggles every 90s child can relate to”. I wasn’t exactly a child in the 90s but I still struggled with this stuff!

So here’s the list of struggles in the article…

1. Modems

Today I “know everything” because in the middle of a discussion on any topic I can search the internet for any information I need and have it within a few seconds. There are four components to this which weren’t available in the 90s. First, I always have at least one device with me. It’s usually my iPhone but I often have an iPad or laptop too. Second, I am always connected to the internet no matter where I am (with rare exceptions). Third, the internet is full of useful (and not so useful) information on any topic you can imagine. And finally, Google makes finding that information easy (most of the time).

None of that was available in the 90s. To find a piece of information I would need to walk to the room where my desktop computer lived, boot it, launch a program (usually an early web browser), hope no one else was already using the phone line, wait for the connection to start, and laboriously look for what I needed (possibly using an early search engine), allowing for the distinct possibility that it didn’t exist.

In reality, although that information retrieval was possible both then and now, it was so impractical and slow in the 90s that it might as well have not existed at all.

2. Photography

I bought a camera attachment for one of my early cell phones and thought how great it was going to be taking photos anywhere without the need to take an SLR or compact (film) camera with me. So how many photos did I take with that camera? Almost none, because it was so slow, the quality was so bad, and, because it was an attachment to an existing phone, it tended to get detached and left behind.

Today my iPhone has a really good camera built-in. Sure it’s not as good as my dSLR but it is good enough, especially for wide-angle shots where there is plenty of light. And because my iPhone is so compact and easy to take everywhere (despite its astonishing list of capabilities) I really do have it with me always. Now I take photos every day and they are good enough to keep permanently.

3. Input devices

The original item here was mice, but I have extended it to mean all input devices. Mice haven’t changed much superficially but modern, wireless mice with no moving parts are certainly a lot better than their predecessors. More importantly, alternative input devices are also available now, most notably touch interfaces and voice input.

Before the iPhone no one really knew how to create a good UI on a phone but after that everything changed, and multi-touch interfaces are now ubiquitous and (in general, with a few unfortunate exceptions) are very intuitive and easy to use.

4. Ringtones

This was an item in the article but I don’t think things have changed that much now so I won’t bother discussing this one.

5. Downloads

Back in the day we used to wait hours (or days) for stuff to download from online services. Some of the less “official” services were extremely well used back then; that seems to have tailed off a bit now, although downloading music and movies is still popular, and a lot faster.

The big change here is probably the shift from downloads to streaming. The other difference is that material can now be acquired legally for a reasonable price rather than risking the dodgy and possibly virus-infected downloads of the past.

6. Clunky Devices

In the 90s I would have needed many large, heavy, expensive devices just to do what my iPhone does now. I would need a gaming console, a music player with about 100 CDs to play in it, a hand-held movie player (if they even existed), a radio, a portable TV, an advanced calculator, a GPS unit, a compass, a barometer, an altimeter, a torch, a note pad, a book of maps, a small library of fiction and reference books, several newspapers, and a computer with functions such as email, messaging, etc.

Not only does one iPhone replace all of those functions, saving thousands of dollars and about a cubic meter of space, but it actually does things better than a lot of the dedicated devices. For example, I would rather use my iPhone as a GPS unit than a “real” GPS device.

7. Software

Software was a pain, but it is still often a pain today so maybe this isn’t such a big deal! At least it’s now easy to update software (it often happens with no user intervention at all) and installing over the internet is a lot easier than from 25 floppy disks!

Also, all software is installed in one place and doesn’t involve running from disk or CD. In fact, optical media (CDs and DVDs) are practically obsolete now which isn’t a bad thing because they never were particularly suitable for data storage.

8. Multi-User, Multi-Player

The article here talks about the problem of having multiple players on a PlayStation, but I think the whole issue of multiplayer games (and multi-user software in general) is now taken for granted. I play against other people on my iPhone and iPad every day. There’s no real extra effort at all, and playing against other people is just so much more rewarding, especially when smashing a friend in a “friendly” race in a game like Real Racing 3!

So, obviously things have improved greatly. Some people might be tempted to get nostalgic and ask if things are really that much better today. My current laptop has 16 million times as much memory, hundreds of thousands of times as much CPU power, and 3000 times as many pixels as my ZX80, but does it really do that much more? Hell, yes!


The Internet is Best!

March 17, 2017

I hear a lot of debate about whether the internet is making us dumb, uninformed, or more closed-minded. There are several problems with a lot of these debates: first, saying the internet has produced the same outcome for everyone is too simplistic; second, these opinions are usually offered with no justification other than that they are “common sense” or “obvious”; and third, they rarely ask whether, whatever its deficiencies, the internet is better or worse than no internet at all.

There is no doubt that some people could be said to be more dumb as the result of their internet use. By “dumb” I mean being badly informed (believing things which are unlikely to be true) or not knowing basic information at all, and by “internet use” I mean all internet services people use to gather information: web sites, blogs, news services, email newsletters, podcasts, videos, etc.

How can this happen when information is so ubiquitous? Well, information isn’t knowledge, or at least it isn’t necessarily truth, and it certainly isn’t always useful. Consider the study (which was unreplicated, so should be viewed with some suspicion) showing that people who watch Fox News are worse informed about the news than people who watch no news at all.

That study demonstrates three interesting points: first, people can be given information but gather no useful knowledge as a result; second, non-internet sources can be just as bad a source as the internet itself; and third, this study (being unreplicated and politically loaded) might itself be an example of an information source which is potentially misleading.

So clearly any information source can potentially make people dumber. Before the internet people might have been made dumber by reading printed political newsletters, or watching trashy TV, or by listening to a single opinion at the dinner table, or by reading just one type of book.

And some people will misuse information sources where others will gain a lot from them: some will get dumber while others get a lot smarter using exactly the same sources.

And (despite the Fox News study above) if the alternative to having an information source which can be misused is having no information source at all, then I think taking the flawed source is the better option.

Anecdotes should be used with extreme caution, but I’m going to provide some anyway, because this is a blog, not a scientific paper. I’m going to say why I think the internet is a good thing from my own, personal perspective.

I’m interested in everything. I don’t have a truly deep knowledge of anything but I like to think I have a better than average knowledge of most things. My hero amongst Greek philosophers is Eratosthenes, who was sometimes known as “Beta” because he was said to be second best at everything (beta being the second letter of the Greek alphabet, which I can recite in full, by the way).

The internet is a great way to learn a moderate amount about many things. Actually, it’s also a great way to learn a lot about one thing too, as long as you are careful about your sources, and it is a great way to learn nothing about everything.

I work in a university and I get into many discussions with people who are experts in a wide range of different subjects. Obviously I cannot match an expert’s knowledge about their precise area but I seem to be able to at least have a sensible discussion, and ask meaningful questions.

For example, in recent times I have discussed the political situation in the US, early American punk bands, the use of drones and digital photography in marine science, social science study design, the history of Apple computers, and probably many others I can’t recall right now.

I hate not knowing things, so when I hear a new word, or a new idea, I immediately Google it on my phone. Later, when I have time, I retrieve that search on my tablet or computer and read a bit more about it. I did this recently with the Gibbard–Satterthwaite theorem (a mathematical theorem about the fairness of voting systems) which was mentioned in a podcast I was listening to.
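For the curious, here is the theorem roughly as I understand it (an informal restatement, so treat the details with caution):

\text{Let } f : L(A)^n \to A \text{ be a voting rule: it takes the } n \text{ voters' preference rankings over a candidate set } A \text{ and picks a winner.} \\
\text{If } |A| \ge 3 \text{, every candidate can win under some profile (} f \text{ is onto), and } f \text{ is not a dictatorship,} \\
\text{then } f \text{ is manipulable: some voter, in some situation, gets an outcome they prefer by misreporting their ranking.}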

Last night I was randomly browsing YouTube and came across some videos of extreme engines being started and run. I’ve never seen so much flame and smoke, and heard so much awesome noise. But now I know a bit about big and unusual engine designs!

The videos only ran for 5 or 10 minutes each (I watched 3) so you might say they were quite superficial. A proper TV documentary on big engines would probably have lasted an hour and had far more detail, as well as having a more credible source, but even if a documentary like that exists, would I have seen it? Would I have had an hour free? What would have made me seek out such an odd topic?

The great thing about the internet is not necessarily the depth of its information but just how much there is. I could have watched hundreds of movies on big engines if I had the time. And there are more technical, detailed, mathematical treatments of those subjects if I want them. But the key point is that I would probably know nothing about the subject if the internet didn’t exist.

Here are a few other topics I have got interested in thanks to YouTube: maths (the Numberphile series is excellent), debating religion (I’m a sucker for an Atheist Experience video, or anything by Christopher Hitchens), darts (who knew the sport of darts could be so dramatic?), snooker (because that’s what happens after darts), Russian jet fighters, Formula 1 engines, and classic British comedy (Fawlty Towers, Father Ted, etc.).

What would I do if I wasn’t doing that? Watching conventional TV maybe? Now what were my options there: a local “current affairs” program with the intellectual level of an orangutan (with apologies to our great ape cousins), some frivolous reality TV nonsense, a really unfunny American sitcom? Whatever faults the internet has, it sure is a lot better than any of that!

Are You Getting It?

January 10, 2017

Ten years ago Apple introduced one of the most important devices in the history of technology. It has changed many people’s lives more than almost anything else, and nothing has really supplanted it in the years since then. Obviously I’m talking about the iPhone, but you already knew that.

Like every new Apple product, it wasn’t the first attempt at creating this type of device, it didn’t have the best technical specifications, and it didn’t sell at a particularly good price. In fact, looking at the device superficially, many people (the CTO of RIM included) thought it would quickly fail.

I got an iPhone when Apple introduced the first revision, the iPhone 3G, and it replaced my Sony phone, which was the best available when I bought it. The Sony phone had a flip screen, plus a smaller screen on the outside of the case, a conventional phone keypad, a rotating camera, and an incredibly impressive list of functions including email and web browsing.

In fact the feature list of the Sony phone was much more substantial than that of the early iPhones. But the difference was that the iPhone’s features were something you could actually use, where the Sony’s existed in theory but were so awkward, slow, and unintuitive that I never used them.

And that is a theme which has been repeated with all of Apple’s devices which revolutionised a particular product category (Apple II, Mac, iPod, iPhone, iPad). Looking at the feature list, specs, and price compared with competitors, none of these products should have succeeded.

But they did. Why? Well I’m going to say something here which is very Apple-ish and sounds like a marketing catch-phrase rather than a statement of fact or opinion, so prepare yourself. It is because Apple creates experiences, not products.

OK, sorry about that, but I can explain that phrase. The Sony versus iPhone situation I described above is a perfect example. Looking at the specs and features the Sony would have won most comparisons, but the ultimate purpose for a consumer device is to be used. Do the comparison again, but this time with how those specs and features affect the user and the iPhone wins easily.

And it was the same with the other products I mentioned above. Before the Mac, computers were too hard to use. The Mac couldn’t do much initially, but what it could do was so much more accessible than on PCs. The iPod was very expensive considering its capacity and list of functions, but it was much easier to use and manage than other MP3 players. And the iPad had a limited feature list, but its operating system was highly customised to provide an intuitive touch interface for the user.

When Steve Jobs introduced the iPhone 10 years ago he teased the audience like this: “[We are introducing] an iPod, a phone and an Internet communicator. An iPod, a phone – are you getting it? These are not separate devices. This is one device. And we are calling it iPhone.”

Today I made a list of the functions my iPhone 6S regularly performs for me, where it replaces other devices, technologies and media. This list includes: watch, stopwatch, alarm clock, point and shoot camera, video camera, photo album, PDA, calculator, GPS, map, music player, portable video player, calendar, appointment diary, book library, ebook reader, audiobook player, magazine, newspaper, recipe book, email client, note pad, drawing tablet, night sky star map, web browser, portable gaming console, radio, TV, audio recorder, TV and audio remote control, landline, and mobile phone.

Not only does it do all of those things but it does a lot of them better than the specialised devices it replaces! And, even though the iPhone isn’t cheap, if you look at the value of the things it replaces it is a bargain. My guess at the value of all the stuff I listed above is $3000 – $5000 which is at least twice the cost of the phone itself.

My iPhone has one million times the storage of the first computer I programmed on. Its processors are tens of thousands of times faster. Its screen displays 25 times more pixels. And, again, it costs a lot less, even when not allowing for inflation.

Most of what I have said would apply to any modern smartphone, but the iPhone deserves a special place amongst the others for two reasons. First, it is a purer example of ease of use and user-centered functionality than other phones; and second, it was the one phone which started the revolution.

Look at pictures of the most advanced phones before and after the iPhone and you will see a sudden transition. Apple led the way – not on how to make a smartphone – but on how to make a smartphone that people would actually want to use. And after that, everything changed.

The Next Big Thing

January 8, 2017

Many (and I really do mean many) years ago, when I was a student, I started a postgrad diploma in computer science. One of the papers was on artificial intelligence and expert systems, an area which was thought (perhaps naively) to have great potential back in the “early days” of computing. Unfortunately, very little in that area was achieved for many years after that. But now I predict things are about to change. I think AI (artificial intelligence, also very loosely described as “thinking computers”) is the next big thing.

There are early signs of this in consumer products already. Superficially it looks like some assistants and other programs running on standard computers, tablets, and phones are performing AI. But these tend to work in very limited ways, and I suspect they follow fairly conventional techniques in producing the appearance of “thinking” (you might notice I keep putting that word in quotes because no one really knows what thinking actually is).

The biggest triumph of true AI last year was Google’s AlphaGo program, which won a match 4 games to 1 against Lee Sedol, one of the world’s greatest human players. That word “human” is significant, I think, because in future it will be necessary to distinguish between AI and human players. If an AI can already beat a brilliant human player at what is maybe the world’s most complex and difficult game, then how long will it be before humans are hopelessly outclassed in every game?

Computers which play Chess extremely well generally rely on “brute force” techniques. They check every possible outcome of a move many steps ahead and then choose the move with the best outcome. But Go cannot be solved that way because there are simply too many moves: a typical Go position has around 250 legal moves, against roughly 35 in Chess, so the game tree explodes far too quickly. So AlphaGo uses a different technique. It actually learns how to play Go through playing games against humans, itself, and other AIs, and develops its own strategy for winning.
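To make the “brute force” idea concrete, here is a minimal sketch in Python of minimax search on a toy game (Nim: take 1 to 3 stones, whoever takes the last stone wins). This is my own illustration, not AlphaGo or a real chess engine, which would add alpha-beta pruning, evaluation heuristics, and an enormous amount of engineering:

from functools import lru_cache

@lru_cache(maxsize=None)
def minimax(stones, my_turn):
    # Score a position by trying every move and assuming the opponent
    # always replies with their best move.
    if stones == 0:
        # The previous player took the last stone, so whoever is "to
        # move" now has already lost.
        return -1 if my_turn else +1
    outcomes = [minimax(stones - take, not my_turn)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if my_turn else min(outcomes)

def best_move(stones):
    # Pick the take whose resulting position scores best for us.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(21))  # prints 1: leave a multiple of 4 and you cannot lose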

So while a conventional Chess playing program and AlphaGo might seem similar, in important ways they are totally different. Of course, the techniques used to win Go could be applied to any similar game, including Chess, it’s just that the pure brute force technique was sufficient and easier to implement when that challenge was first met.

Also last year a computer “judge” predicted the verdicts of European Court of Human Rights cases with 79% accuracy. What does that really mean? Well, it means that the computer effectively judged the cases and reached the same result as a human judge in about 80% of them. I have no data on this, but I suspect two human judges might disagree with each other to a similar degree.

So computers can perform very “human” functions like judging human rights cases, and that is quite a remarkable achievement. I haven’t seen what techniques were used in that case but I suspect deep learning methods like neural networks would be required.
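As I said, I haven’t seen the techniques used, so here is just a hedged sketch of how such a task might be framed: treat each case’s text as a bag of word n-grams and train a linear classifier to predict the outcome. The tiny corpus below is a made-up placeholder, and the real study may have used quite different features and models:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder corpus -- a real experiment would use the
# published case texts and evaluate on a proper held-out test set.
cases = [
    "applicant detained without review for many months",
    "search of the home conducted with a valid warrant",
    "prolonged detention and no access to a lawyer",
    "proceedings concluded within a reasonable time",
]
verdicts = ["violation", "no violation", "violation", "no violation"]

# Word and word-pair weights in, predicted verdict out.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, verdicts)

print(model.predict(["detained for months with no access to a lawyer"]))
# likely -> ['violation']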

So what does all this mean? I think it was science fiction author Arthur C Clarke who said that a thinking machine would be the last invention humans would ever have to create, because after that the machines themselves would do the inventing. I don’t think we are close to that stage yet, but this is a clear start, and I think the abilities of AIs will escalate exponentially over the next few decades until Clarke’s idea is fulfilled.

And, along with another technology which is just about ready to become critical, 3D printing, society will be changed beyond recognition. The scenario portrayed in so many science fiction stories will become reality. The question is which type of science fiction story will prove most accurate: the utopian or the dystopian. It could go either way.

They’re Taking Over!

August 31, 2016

As an IT professional and technology enthusiast I generally feel quite positive about advances where computers become better than humans at yet another thing. Many people thought that a computer would never beat a human at chess, but now it is accepted that computers will always be better. When our silicon creations beat us at chess we moved on to another, more complex, game: Go. But now computers have beaten the world champion at that too, and in the process made a move that an expert described as “beautiful and mysterious”.

So what’s next? Well, how about one of the most esteemed jobs in our society, and one which most people (who don’t really understand what is going on) might say would be the last a mere machine could tackle. I’m talking about law, and even the top tier of the legal profession: being a judge.

Before I start on that I would like to make an important distinction between the approaches to the two games above, Chess and Go. Most computers solve Chess problems by using brute force, that is, considering millions of possible moves and counter-moves and taking the move that leads to the best outcome. But that wasn’t practical for Go, so the program instead learns how to play by playing against other players and against itself. It really could be said to be learning like a human would, and that is the approach future AI will probably use.
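Here is a toy sketch in Python of that “learning by playing against itself” idea: a simple self-play reinforcement learning loop on tic-tac-toe, keeping a table of move values and nudging them towards the final result of each game. AlphaGo’s actual methods (deep neural networks plus tree search) are far more sophisticated; this is just my illustration of the principle:

import random
from collections import defaultdict

values = defaultdict(float)  # (board, move) -> how good the move looks
ALPHA, EPSILON = 0.3, 0.1    # learning rate, exploration rate

def moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board):
    legal = moves(board)
    if random.random() < EPSILON:                       # sometimes explore...
        return random.choice(legal)
    return max(legal, key=lambda m: values[(board, m)]) # ...otherwise exploit

def self_play_game():
    board, player, history = " " * 9, "X", []
    while moves(board) and not winner(board):
        m = choose(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    w = winner(board)
    for state, m, p in history:
        # Nudge every move made in the game towards the final result.
        reward = 0.0 if w is None else (1.0 if p == w else -1.0)
        values[(state, m)] += ALPHA * (reward - values[(state, m)])

for _ in range(50000):
    self_play_game()
print("learned values for", len(values), "board/move pairs")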

An experiment was done in the UK which replicated court cases and compared the AI’s decision with a judge’s. The computer agreed with the judge in 31 out of the 32 cases – maybe the judge got the last case wrong!

Computers do well evaluating complex and technical areas such as international trade dispute law, but are also useful for more everyday areas such as divorce and child custody. Plus computers are much better and faster at the research tasks that law firms currently use legal professionals for. Another highly rated job that won’t exist much longer, maybe?

An expert has stated that creating a computer system that can answer all legal questions is easy, but getting that system used in most societies (which might be quite resistant to change) is the difficult part!

I find the idea of replacing lawyers and judges with computers quite appealing for a few reasons. First, traditionally it has been poorly paid manual workers who have been under threat of being replaced, so it is nice to see society’s elite aren’t immune. Second, there are so many cases of terrible decisions being made by judges that having an unbiased computer do the work instead seems like a potentially good idea. And third, if highly rated jobs like these can be replaced then the idea of replacing other jobs becomes easier to accept (the medical profession will be next).

It all sounds quite exciting, as long as you can get over the rather obsolete idea that all humans should exist just to work. But there are a few more unsettling possibilities which are also being tested now. One is to predict whether people convicted of crimes are likely to re-offend in future. There are already claims that this system is biased against black defendants. Unfortunately the algorithm in use is secret, so no one can ever check.

And that brings me to what is maybe the key point I want to make in how I think this technology should be implemented. Allowing computers to control important aspects of our society, like law, needs to be transparent and accountable. We cannot trust corporations who will inevitably hide the details of what their programs do through copyright and patents. So all the code needs to be open source so that we all know exactly what we are getting.

Many people will just deny that the computer takeover I am describing can happen, and many will say that even if it can happen we shouldn’t let it. I say it can happen and it should happen, but only if it is done properly. Private business has no place in something so critical. We need a properly resourced and open public organisation to do this work. And everything they do should be completely open to view by anyone.

If we do this properly the computer takeover can be a good thing. And yes, I know this is a cliche, but I will say it: I, for one, welcome our silicon overlords!

Pokemon No!

July 30, 2016

I am a proud computer (and general technology) geek and I see all things geeky as being a big part of my culture. So I don’t really identify much with my nationality of New Zealander, or of traditional Pacific or Maori values (I’m not Maori anyway but many people still think that should be part of my culture), or of the standard interests of my compatriots like rugby, outdoor activities, or beer – well OK, maybe I do identify with the beer!

Being a geek transcends national boundaries and traditional values. I go almost everywhere with my 4 main Apple products: a MacBook Pro laptop, an iPad Pro, an iPhone 6S, and an Apple Watch. They are all brilliant products and I do use them all every day.

For me, the main aspects of being a geek involve “living on the internet” and sourcing most of my information from technology sources, and participating in geek events and activities.

By “living on the internet” I mean that I can’t (or maybe just don’t) go for any period of time (I mean a few hours) without participating in social media, checking internet information sources (general news, new products, etc), or seeking out random material on new subjects from sites such as Quora.

I mainly stay informed not by watching TV (although I still do watch TV news once per day) or listening to radio news (again, I do spend a small amount of time on that too) but by listening to streaming material and podcasts. In fact, podcasts are my main source of information because I can listen to them at any time, avoid most advertising, and listen again to anything which was particularly interesting.

And finally there are the events and activities. Yeah, I mainly mean games. I freely admit that I spend some time every day playing computer games. Sometimes it is only 5 minutes but it is usually more, and sometimes a lot more. Some people think a mature (OK, maybe getting on towards “old”) person like me shouldn’t be doing that and that I should “grow up”. Needless to say I think these people are talking crap.

And so we come to the main subject of this post, the latest computer (or more accurately phone and tablet) game phenomenon: Pokemon GO. The game was released first in the US, Australia, and New Zealand and instantly became a huge hit. Of course, since it was a major new component of geek culture, I felt I should be playing it, but I didn’t want it to become a big obsession.

And I think I did well avoiding it for almost 3 days, but yes, I’m playing it now, with moderate intensity (level 17 after a couple of weeks). Today I explained the gameplay to an older person who never plays games and he asked: but what is the point? Well, there is no real, practical point of course, but I could ask that about a lot of things.

For example, if an alien landed and I took him to a rugby game he might ask what the point is of those guys running around throwing a ball to each other. Obviously, there’s no point. And what’s the point of sitting in front of a TV and watching some tripe like “The Block” or some crappy soap opera? Again, there’s no point. In reality, what’s the point of living? Well, let’s not go there until I do another post about philosophy.

So anyone who criticises playing computer games because they have no practical point should think a little bit more about what they are really saying and why.

And there’s another factor in all of this that bugs me too. It’s the fact that almost universally the people who criticise games like Pokemon GO not only have never played them but know almost nothing about them either. They are just letting their petty biases and ignorance inform their opinions. It’s quite pathetic, really.

So to all those people who criticise me for playing Pokemon GO, Real Racing 3 (level 162 after many years play, and yes, it is the greatest game of all time), Clash of Clans (level 110 after 4 years play), and a few others, I say get the hell over it. And if you do want to criticise me just get a bit better informed first. And maybe you should stop all those pointless habits you have (and that I don’t criticise you for) like watching junk programs on TV.

And now, if you’ll excuse me, I’ve got to go find some more Pokemon. Gotta catch ’em all!

Why We Have Bad Software

July 25, 2016

Many people get extremely frustrated with their interactions with technology, especially computers. I notice this a lot because I work with IT where I am a Mac generalist: I do general support, programming, a bit of server management, and a bunch of other stuff as well.

And when I say “many people” get frustrated I should add myself to that list as well because, either directly or indirectly (by trying to help frustrated users) I am also exposed to this phenomenon.

The strange thing is that generally the problems don’t happen because people are trying to do something unusual, or using some virtually unknown piece of software, or trying to do things in an overly complex way. Most of the frustration happens just trying to get the basics working. By that I mean things like simple word processing in Microsoft Word, simple file access on servers, and simple synchronisation of calendars.

None of these things should be hard, but they often are. In comparison, doing complex stuff like creating web apps, or doing complicated graphics manipulations, or completing advanced maths or stats processing often works without a single problem.

Why is this? Well I guess I need to concede (before I offer my own theory) that one reason is that there are far more people doing the simple things and they’re doing them far more often, so if there was a certain failure rate with any process it would show up more for the stuff that is done a lot.

But those simple tasks, like word processing, have been with us on computers for several decades now, so it might be reasonable to ask why they haven’t been refined to a greater degree. Is it really so hard to create a word processor which works in a more intuitive, reliable, and responsive way than what we have now? (Yes, I’m talking to you, Microsoft.)

Well, there is a way to avoid a lot of this frustration. But it involves doing something a lot of people don’t want to do: staying away from the big, dominant companies in IT, especially Microsoft. Well, not entirely, because realistically you need to run either Windows or macOS (Linux just doesn’t really work on the desktop) and you need to buy some hardware from Dell, Apple, etc. But what about after that?

Recently I have tried to keep away from the dominant companies in software. For example, I operate a zero-Microsoft policy and am progressing well on my zero-Adobe policy as well. In addition I avoid all the big corporates’ products (Oracle, Cisco, etc) wherever possible.

I don’t think it’s healthy to take this to extremes or to where it becomes more a political thing than a practical one, because then I might end up like the open source fanatics whose decisions are based more on ideology than pragmatism. But it is still a useful guideline.

And I am pragmatic because I do have Microsoft Office and Adobe Creative Suite (all fully licensed) on my machine, I just almost never use them. And, of course, I do use a Mac and therefore use the hardware and operating system made by Apple, the biggest computer corporation in the world.

Although I readily admit to being an Apple “fanboy” I do have to say that, considering the huge resources they have available, they do often fail to perform as well as they should. For example, software is often released with fairly obvious bugs. How much does it cost to hire a few really good bug checkers?

And sometimes Apple products take too long to properly implement some features. With all the programmers they could hire why is this?

I don’t want to pick on Apple alone, so I really have to ask the following: Microsoft, why is Office 2016 for Mac such a pile of junk? Why is it so slow? Why is it so ugly? Why is it so lacking in functionality? (That is one area where Microsoft usually does well: their software is crap in almost every way except that it has an impressive feature set.)

And just to complete bashing the big three, what’s happening at Adobe? Why does InDesign take a week to launch on anything except the latest hardware? Why are there so many poor user interface design choices in Adobe software? And why is the licensing so annoying?

I think the failure of the big companies to create products as good as they should comes back to several factors…

First, large teams of programmers (and probably teams of anything else too) will always be less efficient than smaller teams, simply because more time has to be spent coordinating the team rather than doing the core work: the number of communication channels grows roughly as the square of the team size.

Second, in large teams there will be inevitable “disconnections” between the components of a major project that different individuals make. This might result in an inconsistent user experience or maybe even bugs when the components don’t work together properly.

Third, it is likely that many decisions in a large team will be made by managers and that is almost always a bad thing, because managers are generally technically ignorant and have different priorities such as meeting time constraints, fitting in with non-technical corporate aims, or cutting corners in various ways, rather than producing the best technical result.

Fourth, large companies often have too many rules and policies which are presumably formulated to solve a particular problem but more often can be applied without any real thought for any specific situation.

Many software projects are too large for a single programmer or a small team so some of the issues I have listed cannot be fully avoided. But at least if computer users all understand that big companies usually don’t produce the best products they won’t be surprised the next time they have a horrible experience using Microsoft Word.

And maybe they might just look at alternatives.