
Archive for the ‘computers’ Category

Utopia or a Dystopia?

February 5, 2018

I have been interested in artificial intelligence for years, without being too deeply involved in it, and it seemed that until recently there was just one disappointment after another from this potentially revolutionary area of technology. But now it seems that almost every day there is some fascinating, exciting, and often worrying news about the latest developments in the area.

One recent item which might be more significant than it initially seems is the latest iteration of AlphaGo, Google’s Go-playing AI. I wrote about AlphaGo in a post, “Sadness and Beauty”, from 2016-03-16, after it beat the world champion in the game Go, which many people thought a computer could never master.

Now AlphaGo Zero has beaten AlphaGo by 100 games to zero. But the significant thing here is not about an incremental improvement, it is about a change in the way the “Zero” version works. The zero in the name stands for zero human input, because the system learned how to win at Go entirely by itself. The only original input was the rules of the game.

While learning winning strategies AlphaGo Zero “re-created” many of the classic moves humans had already discovered over the last few thousand years, but it went further than this and created new moves which had never been seen before. As I said in my previous post on this subject, the original AlphaGo was already probably better than any human, but the new version seems to be completely superior to even that.
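To make the idea of a system teaching itself from nothing but the rules a little more concrete, here is a toy sketch (in Python) of self-play learning applied to a far simpler game, Nim. This is emphatically not DeepMind’s method – AlphaGo Zero uses a deep neural network and Monte Carlo tree search – but the basic loop of playing against yourself, scoring the result, and updating your own evaluations is the same idea in miniature.

```python
# A toy illustration (not DeepMind's method) of learning a game purely from
# self-play: the only input is the rules, here the simple game of Nim.
import random

PILE = 10          # starting number of stones
MOVES = (1, 2, 3)  # each turn a player takes 1, 2 or 3 stones; taking the last stone wins

# values[stones_left] = estimated chance that the player about to move wins
values = {0: 0.0}  # no stones left means the player to move has already lost

def choose_move(stones, explore=0.1):
    """Pick the move that leaves the opponent in the worst-looking position."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < explore:
        return random.choice(legal)
    return min(legal, key=lambda m: values.get(stones - m, 0.5))

def self_play(games=20000, lr=0.05):
    for _ in range(games):
        stones, positions = PILE, []
        while stones > 0:
            positions.append(stones)
            stones -= choose_move(stones)
        # The player who took the last stone won; credit alternates back up the game.
        result = 1.0
        for pos in reversed(positions):
            old = values.get(pos, 0.5)
            values[pos] = old + lr * (result - old)
            result = 1.0 - result

self_play()
print({pos: round(v, 2) for pos, v in sorted(values.items())})
# Positions that are multiples of 4 come out with low values: the classic losing
# positions in Nim, "rediscovered" from nothing more than the rules and self-play.
```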

And the truly scary thing is that AlphaGo Zero did all this in such a short period of time. DeepMind’s published account describes just days of self-play to surpass the version that beat Lee Sedol, and a few weeks to surpass every earlier version. So in that time a single AI has learned far more about a game than millions of humans have in thousands of years. That’s scary.

Remember that AlphaGo Zero was created by programmers at Alphabet’s Google DeepMind in London. But in no way did the programmers write a Go playing program. They wrote a program that could learn how to play Go. You could say they had no more input into the program’s success than a parent does into the success of a child whom they abandon at birth. It is sort of like supplying the genetics but not the training.

You might wonder why Alphabet (Google’s parent company) has spent so much time and money creating a system which plays an obscure game. Well the point, of course, is to create techniques which can be used in more general and practical situations. There is some debate amongst experts at the moment about how easily these techniques could be used to create a general intelligence (one which can teach itself anything, instead of just a specific skill) but even if it only works for specific skills it is still very significant.

There are many other areas where specialised intelligence by AIs has exceeded humans. For example, at CERN (the European nuclear research organisation) they are using AI to detect particles, labs are developing AIs which are better than humans at finding the early signs of cancer, and AIs are now good at detecting bombs at airports.

So even if a human level general intelligence is still a significant time away, these specialised systems are very good already, even at this relatively early time in their development. It’s difficult to predict how quickly this technology might advance, because there is one development which would make a revolutionary rather than evolutionary change: that is an AI capable of designing AIs – you might call this a meta-AI.

If that happens then all bets are off.

Remember that an AI isn’t anything physical, because it is just a program. In every meaningful way creating an AI program is just like playing a game of Go. It is about making decisions and creating new “moves” in an abstract world. It’s true that the program requires computer hardware to run on, but once the hardware reaches a reasonable standard of power that is no more important than the Go board is to how games proceed. It limits what can be done in some ways, but the most interesting stuff is happening at a higher level.

If AlphaGo Zero can learn more in a week than every human who ever played Go could learn in thousands of years, then imagine how much progress a programming AI could make compared with every computer scientist and programmer who ever existed. There could be new systems which are orders of magnitude better developed in weeks. Then they could create the next generation which is also orders of magnitude better. The process would literally be out of control. It would be like artificial evolution running a trillion times faster than the natural version, because the generation time is so short and the “mutations” are planned rather than being random.

When I discussed the speed that AlphaGo Zero had shown when it created the new moves, I used the word “scary”, because it literally is. If that same ability existed for creating new AIs then we should be scared, because it will be almost impossible to control. And once super-human intelligence exists it will be very difficult to reverse. You might think something like, “just turn off the computer”, but how many backups of itself will exist by then? Simple computer viruses are really difficult to eliminate from a network, so imagine how much more difficult a super-intelligent “virus” would be to remove.

Where that leaves humans, I don’t know for sure. I said in my previous post that humans will be redundant, but now I’m not totally sure that is true. Maybe there will be a niche for us, at least temporarily, or maybe humans and machines will merge in some way. Experts disagree on how much of a threat AI really is. Some predict a “doomsday” where human existence is fundamentally threatened, while others predict a bright future for us, free from the tedious tasks which machines can do better, where we can pursue the activities we *want* to do rather than what we *have* to do.

Will it be a utopia or a dystopia? No one knows. All we know is that the world will never be the same again.


Random Clicking

January 14, 2018

Nowadays, most people need to access information through computers, especially through web sites. Many people find this process quite challenging, and that isn’t restricted to older people who aren’t “digital natives”, or to people with no interest in, or predisposition towards, technology.

In fact, I have found that many young people find some web interfaces bizarre and unintuitive. For example, my daughter (in her early 20s) thinks Facebook is badly designed and often navigates using “random clicking”. And I am a computer programmer with decades of experience but even I find some programs and some web sites completely devoid of any logical design, and I sometimes revert to the good old “random clicking” too!

For example, I received an email notification from Inland Revenue last week and was asked to look at a document on their web site. It should have taken 30 seconds but it took closer to 30 minutes and I only found the document using RC (random clicking).

Before I go further, let me describe RC. You might be presented with a web site or program/app interface and you want to do something. There might be no obvious way to get to where you want to go, or you might take the obvious route only to find it doesn’t go where you expected. Or, of course, you might get a random error message like “page not available” or “internal server error”, or even the dreaded “this app has quit unexpectedly”, the blue screen of death, or the spinning activity wheel.

So to make progress it is necessary just to do some RC on different elements, even if they make no sense, until you find what you are looking for. Or in more extreme cases you might even need to “hack” the system by entering deliberately fake information, changing a URL, etc.

What’s going on here? Surely the people involved with creating major web sites and widely used apps know what they are doing, don’t they? After all, many of these are the creations of large corporations with virtually unlimited resources and budgets. Why are there so many problems?

Well, there are two explanations: first, that errors do happen occasionally, no matter how competent the organisation involved is, and because we use these major sites and apps so often we tend to see their errors more often too; and second, that large corporations create stuff through a highly bureaucratic and obscure process, and consistency and attention to detail are difficult to attain under such a scheme.

When I encounter errors, especially on web sites, I like to keep a record of them by taking a screenshot. I keep these in a folder to make me feel better if I make an error on any of my own projects, because it reminds me that sites created by organisations with a hundred programmers and huge budgets often have more problems than those created by a single programmer with no budget.

So here are some of the sites I currently have in my errors folder…

APN (couldn’t complete your request due to an unexpected error – they’re the worst type!)
Apple (oops! an error occurred – helpful)
Audible (we see you are going to x, would you rather go to x?)
Aurora (trying to get an aurora prediction, just got a “cannot connect to database”)
BankLink (page not found, oh well I didn’t really want to do my tax return anyway)
BBC (the world’s most trusted news source, but not the most trusted site)
CNet (one of the leading computer news sources, until it fails)
DCC (local body sites can be useful – when they work)
Facebook (a diabolical nightmare of bad design, slowness, and bugginess)
Herald (NZ’s major newspaper, but their site generates lots of errors)
InternetNZ (even Internet NZ has errors on their site)
IRD (Inland Revenue has a few good features, but their web site is terrible overall)
Medtech (yeah, good luck getting essential medical information from here)
Mercury (the messenger of the gods dropped his message)
Microsoft (I get errors here too many times to mention)
Fast Net (not so fast when it doesn’t work)
Origin (not sure what the origin of this error was)
Porsche (great cars, web site not so great)
State Insurance (state, the obvious choice for a buggy web site)
Ticketmaster (I don’t have permission for the section of the site needed to buy tickets)
TradeMe (NZ’s equivalent of eBay is poorly designed and quite buggy)
Vodafone (another ISP with web site errors)
WordPress (the world’s leading blogging platform, really?)
YesThereIsAGod (well if there is a god, he needs to hire better web designers)

Note that I also have a huge pile of errors generated by sites at my workplace. Also, I haven’t even bothered storing examples of bad design, or of problems with apps.

As I said, there are two types of errors, and those caused by temporary outages are annoying but not disastrous. The much bigger problem is the sites and apps which are just inherently bad. The two most prominent examples are Facebook and Microsoft Word. Yes, those are probably the most widely used web site and most widely used app in the world. If they are so bad why are they so popular?

Well, popularity can mean two things: first, that something is very widely used, even if it is not necessarily well liked; and second, that something is used because people genuinely like it. So you could say tax or work is popular because almost everyone participates in them, but that drinking alcohol, smoking dope, sex, or eating burgers are popular because people actually enjoy them!

Facebook and Word are popular but most people think they could be made so much better. Also many people realise there are far better alternatives but they just cannot be used because of reasons not associated with quality. For example, people use Facebook because everyone else does, and if you want to interact with other people you all need to use the same site. And Word is widely used because that is what many workplaces demand, and many people aren’t even aware there are alternatives.

The whole thing is a bit grim, isn’t it? But there is one small thing I would suggest which could make things better: if you are a developer with a product which has a bad interface, don’t bother redesigning it unless you are almost certain you can improve it significantly. People can get used to badly designed software, but coping with a different, equally bad interface in a new version is just annoying.

The classic example is how Microsoft has changed the interface between Office 2011 and Office 2016 (these are the Mac versions, but the same issue exists on Windows). The older version has a terrible, primitive user interface but after many years people have learned to cope with it. The newer version has an equally bad interface (maybe worse) and users have to re-learn it for no benefit at all.

So, Microsoft, please just stop trying. You have a captive audience for your horrible software so just leave it there. Bring out a new version so you can steal more money from the suckers who use it, but don’t try to improve the user interface. Your users will thank you for it.

Is Apple Doomed?

December 20, 2017

I’m a big Apple fanboy. As I sit here writing this blog post (flying at 10,000 meters on my way to Auckland, because I always write blog posts when I fly) I am actively using 4 Apple products: a MacBook Pro computer, an iPad Pro tablet, an iPhone 6S Plus smartphone, and an Apple Watch. At home I have many Apple computers, phones, and other devices. I also have one Windows PC but I very rarely use that.

So the general state of Apple’s “empire” is pretty important to me. Many of the skills I have (such as general trouble-shooting, web programming, scripting, configuration, and general software use) could be transferred to Windows, but I just don’t want to. I really like the elegance of Apple’s devices on the surface, combined with the power of Unix in the background.

But despite my enthusiasm for their products I have developed a growing sense of concern about Apple’s direction. There is the vague feeling that they have stopped innovating to the extent they did in the past. Then there is the observation that the quality control of both hardware and software isn’t what it was. And then there is the general perception that Apple is getting too greedy, selling products at too high a price and not offering adequate support to the people who use them.

These opinions are nothing new, but what is new is that people who both know a lot about the subject, and would normally be more positive about Apple, are starting to join in the criticism. Sometimes this is through a slight sense of general concern, and other times through quite strident direct criticism.

I would belong to the former class of critics. I think I have noticed an increase in the number of errors Apple is making, at the same time as I notice an apparent general decrease in the overall reliability of their products, and to make matters worse, these are accompanied by what seems to be higher prices.

You will notice I used a lot of qualifiers in the sentence above. I did this deliberately because I have no real data or objective statistics to demonstrate any of these trends. They might not be real because it is very easy to start seeing problems when you look for them, and negative events often “clump” into groups. Sometimes there might be a series of bad things which happen after a long period with no problems, but that doesn’t mean there is any general trend involved.

But now is the time for anecdotes! These don’t mean much, of course, but I want to list a few just to give an idea of where my concern is coming from.

Recently I set up two new Mac laptop computers in a department where there was a certain amount of pressure from management to switch to Microsoft Surface laptops. The Surface has a really poor reputation for reliability and is quite expensive, so it shouldn’t be difficult to demonstrate the superiority of Apple products in this area, right?

Well, no. Wrong, actually. At least in this case. Both laptops had to go for service twice within the first few weeks. I have worked with Apple hardware for decades and have never seen anything remotely as bad as this. And the fact that it was in a situation where Apple was under increased scrutiny didn’t help!

In addition, the laptops had inadequate storage, because even though these are marketed as “pro” devices the basic model still has only 128 GB of SSD storage. That wasn’t Apple’s fault, because the person doing the purchasing should have got it right, but it didn’t help!

Also, Apple has recently suffered from some really embarrassing security flaws. One allowed root access to a Mac without a password, and another allowed malicious control of automated home-control devices. There were also a few other lesser issues in the same time period. As far as I know, none of these were exploited to any great extent, but it is still a bad look.

Another issue which seems to be becoming more prominent recently is their repair and replacement service. In general I have had fairly good service from Apple repair centers, but I have heard of several people who aren’t as happy.

When you buy a premium device at the premium price Apple demands I don’t think it is unreasonable to expect a little bit of extra help if things go wrong. So unless there is clear evidence of fraud, repairs and replacements should be done without the customer having to resort to threats and demands for the intervention of higher levels of staff.

And even if a device only has one year of official warranty (which seems ridiculous to begin with), Apple should offer a similar level of support for a reasonable period without the customer having to resort to quoting consumer law.

Even if Apple wasn’t interested in doing what was morally right, they should be able to see that providing superior service for what they claim is a superior product at a superior price is just good business, because it maintains a positive relationship with the customer.

My final complaint regards Apple’s design direction. This is critical because whatever else they stand for, surely good design is their primary advantage over the opposition. But some Apple software recently has been obscure at best and incomprehensibly bizarre at worst, and iTunes has become a “gold standard” for cluttered, confusing user interfaces.

When I started programming Macs in the 1980s there was a large section in the programming documentation about user interface design. The rules were really strict, but resulted in consistent and clear software which came from many different developers, including Apple. I don’t do that sort of programming any more but if a similar section exists in current programming manuals there is little sign that people – even Apple themselves – are taking much notice!

So is Apple doomed? Well, probably not. They are (by some measures) the world’s biggest, richest, and most innovative company. They are vying with a few others to become the first trillion dollar company. And, in many ways, they still define the standard against which all others are judged. For example, every new smartphone which appears on the market is framed by some people as an “iPhone killer”. None of them ever are, but the fact that products aspire to be that, instead of a “Samsung killer” or a “Huawei killer”, says a lot about the iPhone.

But despite the fact that Apple isn’t likely to disappear in the immediate future, I still think they need to be more aware of their real and perceived weaknesses. If they aren’t there is likely to be an extended period of slow decline and reduced relevance. And a slow slide into mediocrity is, in many ways, worse than a sudden collapse.

So, Tim Cook, if you are reading this blog post (and why wouldn’t you?), please take notice. Here’s just one suggestion: when your company releases a new laptop with connections that are unusable without dongles, throw a few dongles in with the computer and keep the price the same as the model it replaces. And please try to make the machines reliable, and if they aren’t, make sure the service and replacement process is quick and easy.

It’s really not that hard to avoid doom.

1K of RAM

July 25, 2017

One of my first computers had just 1K of RAM. That’s enough to store… well, almost nothing. It could store 0.01% of a (JPEG compressed) digital photo I now take on my dSLR or 0.02% of a short (MP3 compressed) music track. In other words, I would need 10 thousand of these devices (in this case a Sinclair ZX80) to store one digital photo!

I know the comparison above is somewhat open to criticism in that I am comparing RAM with storage and that early computers could have their memory upgraded (to a huge 16K in the case of the ZX80) but the point remains the same: even the most basic computer today is massively superior to what we had in the “early days” of computers.
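Just to sanity-check those percentages, here is the back-of-the-envelope arithmetic, assuming a roughly 10 MB JPEG from a dSLR and a roughly 5 MB MP3 track (my assumed file sizes, since the exact figures obviously vary):

```python
# Back-of-the-envelope check of the figures above, using assumed typical
# file sizes: ~10 MB for a dSLR JPEG and ~5 MB for an MP3 track.

zx80_ram = 1 * 1024            # 1K of RAM, in bytes
jpeg = 10 * 1024 * 1024        # ~10 MB photo (assumption)
mp3 = 5 * 1024 * 1024          # ~5 MB music track (assumption)

print(f"Photo fraction: {zx80_ram / jpeg:.2%}")   # ~0.01%
print(f"Track fraction: {zx80_ram / mp3:.2%}")    # ~0.02%
print(f"ZX80s per photo: {jpeg // zx80_ram}")     # ~10,000 machines
```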

It should be noted that, despite these limitations, you could still do stuff with those early computers. For example, I wrote a fully functioning “Breakout” game in machine code on the ZX80 (admittedly with the memory expansion) and it was so fast I had to put a massive loop in the code to slow it down. That was despite the fact that the ZX80 had a single 8 bit processor running at 3.25 MHz which is somewhat inferior to my current laptop (now a few years out of date) which has four 64 bit cores (8 threads) running at 2.5 GHz.

The reason I am discussing this point here is that I read an article recently titled “The technology struggles every 90s child can relate to”. I wasn’t exactly a child in the 90s but I still struggled with this stuff!

So here’s the list of struggles in the article…

1. Modems

Today I “know everything” because in the middle of a discussion on any topic I can search the internet for any information I need and have it within a few seconds. There are four components to this which weren’t available in the 90s. First, I always have at least one device with me. It’s usually my iPhone but I often have an iPad or laptop too. Second, I am always connected to the internet no matter where I am (with rare exceptions). Third, the internet is full of useful (and not useful) information on any topic you can imagine. And finally, Google makes finding that information easy (most of the time).

None of that was available in the 90s. To find a piece of information I would need to walk to the room where my desktop computer lived, boot it, launch a program (usually an early web browser), hope no one else was already using the phone line, wait for the connection to start, and laboriously look for what I needed (possibly using an early search engine), allowing for the distinct possibility that it didn’t exist.

In reality, although that information retrieval was possible both then and now, it was so impractical and slow in the 90s that it might as well have not existed at all.

2. Photography

I bought a camera attachment for one of my early cell phones and thought how great it was going to be taking photos anywhere without the need to take an SLR or compact (film) camera with me. So how many photos did I take with that camera? Almost none, because it was so slow, the quality was so bad, and because it was an attachment to an existing phone it tended to get detached and left behind.

Today my iPhone has a really good camera built-in. Sure it’s not as good as my dSLR but it is good enough, especially for wide-angle shots where there is plenty of light. And because my iPhone is so compact and easy to take everywhere (despite its astonishing list of capabilities) I really do have it with me always. Now I take photos every day and they are good enough to keep permanently.

3. Input devices

The original item here was mice, but I have extended it to mean all input devices. Mice haven’t changed much superficially but modern, wireless mice with no moving parts are certainly a lot better than their predecessors. More importantly, alternative input devices are also available now, most notably touch interfaces and voice input.

Before the iPhone no one really knew how to create a good UI on a phone but after that everything changed, and multi-touch interfaces are now ubiquitous and (in general, with a few unfortunate exceptions) are very intuitive and easy to use.

4. Ringtones

This was an item in the article but I don’t think things have changed that much now so I won’t bother discussing this one.

5. Downloads

Back in the day we used to wait hours (or days) for stuff to download from on-line services. Some of the less “official” services were extremely well used back then and that seems to have reduced a bit now, although downloading music and movies is still popular, and a lot faster now.

The big change here is maybe the change from downloads to streaming. And the other difference might be that now material can be acquired legally for a reasonable price rather than risking the dodgy and possibly virus infected downloads of the past.

6. Clunky Devices

In the 90s I would have needed many large, heavy, expensive devices just to do what my iPhone does now. I would need a gaming console, a music player with about 100 CDs to play in it, a hand-held movie player (if they even existed), a radio, a portable TV, an advanced calculator, a GPS unit, a compass, a barometer, an altimeter, a torch, a note pad, a book of maps, a small library of fiction and reference books, several newspapers, and a computer with functions such as email, messaging, etc.

Not only does one iPhone replace all of those functions, saving thousands of dollars and about a cubic meter of space, but it actually does things better than a lot of the dedicated devices. For example, I would rather use my iPhone as a GPS unit than a “real” GPS device.

7. Software

Software was a pain, but it is still often a pain today, so maybe this isn’t such a big deal! At least it’s now easy to update software (it often happens with no user intervention at all) and installing over the internet is a lot easier than from 25 floppy disks!

Also, all software is installed in one place and doesn’t involve running from disk or CD. In fact, optical media (CDs and DVDs) are practically obsolete now which isn’t a bad thing because they never were particularly suitable for data storage.

8. Multi-User, Multi-Player

The article here talks about the problem of having multiple players on a PlayStation, but I think the whole issue of multiple player games (and multi-user software in general) is now taken for granted. I play against other people on my iPhone and iPad every day. There’s no real extra effort at all, and playing against other people is just so much more rewarding, especially when smashing a friend in a “friendly” race in a game like Real Racing 3!

So, obviously things have improved greatly. Some people might be tempted to get nostalgic and ask if things are really that much better today. My current laptop has 16 million times as much memory, hundreds of thousands times as much CPU power, and 3000 times as many pixels as my ZX80 but does it really do that much more? Hell, yes!

The Internet is Best!

March 17, 2017

I hear a lot of debate about whether the internet is making us dumb, uninformed, or more closed-minded. The problems with a lot of these debates are these: first, saying the internet has produced the same outcome for everyone is too simplistic; second, these opinions are usually offered with no justification other than that they are “common sense” or “obvious”; and third, they rarely ask whether, whatever its deficiencies, the internet leaves us better or worse off than having no internet at all.

There is no doubt that some people could be said to be more dumb as the result of their internet use. By “dumb” I mean being badly informed (believing things which are unlikely to be true) or not knowing basic information at all, and by “internet use” I mean all internet services people use to gather information: web sites, blogs, news services, email newsletters, podcasts, videos, etc.

How can this happen when information is so ubiquitous? Well information isn’t knowledge, or at least it isn’t necessarily truth, and it certainly isn’t always useful. It is like the study (which was unreplicated so should be viewed with some suspicion) showing that people who watch Fox News are worse informed about news than people who watch no news at all.

That study demonstrates three interesting points: first, people can be given information but gather no useful knowledge as a result; second, non-internet sources can be just as bad a source as the internet itself; and third, this study (being unreplicated and politically loaded) might itself be an example of an information source which is potentially misleading.

So clearly any information source can potentially make people dumber. Before the internet people might have been made dumber by reading printed political newsletters, or watching trashy TV, or by listening to a single opinion at the dinner table, or by reading just one type of book.

And some people will misuse an information source where others will gain a lot from exactly the same source: some will get dumber while others get a lot smarter.

And (despite the Fox News study above) if the alternative to having an information source which can be mis-used is having no information source at all, then I think taking the flawed source is the best option.

Anecdotes should be used with extreme caution, but I’m going to provide some anyway, because this is a blog, not a scientific paper. I’m going to say why I think the internet is a good thing from my own, personal perspective.

I’m interested in everything. I don’t have a truly deep knowledge about anything but I like to think I have a better than average knowledge about most things. My hero amongst Greek philosophers is Eratosthenes, who was sometimes known as “Beta”. This was because he was second best at everything (beta is the second letter in the Greek alphabet which I can recite in full, by the way).

The internet is a great way to learn a moderate amount about many things. Actually, it’s also a great way to learn a lot about one thing too, as long as you are careful about your sources, and it is a great way to learn nothing about everything.

I work in a university and I get into many discussions with people who are experts in a wide range of different subjects. Obviously I cannot match an expert’s knowledge about their precise area but I seem to be able to at least have a sensible discussion, and ask meaningful questions.

For example, in recent times I have discussed the political situation in the US, early American punk bands, the use of drones and digital photography in marine science, social science study design, the history of Apple computers, and probably many others I can’t recall right now.

I hate not knowing things, so when I hear a new word, or a new idea, I immediately Google it on my phone. Later, when I have time, I retrieve that search on my tablet or computer and read a bit more about it. I did this recently with the Gibbard–Satterthwaite theorem (a mathematical result about the fairness of voting systems), which was mentioned in a podcast I was listening to.

Last night I was randomly browsing YouTube and came across some videos of extreme engines being started and run. I’ve never seen so much flame and smoke, and heard so much awesome noise. But now I know a bit about big and unusual engine designs!

The videos only ran for 5 or 10 minutes each (I watched 3) so you might say they were quite superficial. A proper TV documentary on big engines would probably have lasted an hour and had far more detail, as well as having a more credible source, but even if a documentary like that exists, would I have seen it? Would I have had an hour free? What would have made me seek out such an odd topic?

The great thing about the internet is not necessarily the depth of its information but just how much there is. I could have watched hundreds of movies on big engines if I had the time. And there are more technical, detailed, mathematical treatments of those subjects if I want them. But the key point is that I would probably know nothing about the subject if the internet didn’t exist.

Here’s a few other topics I have got interested in thanks to YouTube: maths (the numberphile series is excellent), debating religion (I’m a sucker for an atheist experience video, or anything by Christopher Hitchens), darts (who knew the sport of darts could be so dramatic?), snooker (because that’s what happens after darts), Russian jet fighters, Formula 1 engines, classic British comedy (Fawlty Towers, Father Ted, etc).

What would I do if I wasn’t doing that? Watching conventional TV maybe? Now what were my options there: a local “current affairs” program with the intellectual level of an orangutan (with apologies to our great ape cousins), some frivolous reality TV nonsense, a really un-funny American sitcom? Whatever faults the internet has, it sure is a lot better than any of that!

Are You Getting It?

January 10, 2017

Ten years ago Apple introduced one of the most important devices in the history of technology. It has changed many people’s lives more than almost anything else, and nothing has really supplanted it in the years since then. Obviously I’m talking about the iPhone, but you already knew that.

Like every new Apple product, this wasn’t the first attempt at creating this type of device, it didn’t have the best technical specifications, and it didn’t sell at a particularly good price. In fact, looking at the device superficially many people (the CTO of RIM included) thought it should have immediately failed.

I got an iPhone when Apple introduced the first revision, the iPhone 3G, and it replaced my Sony phone, which was the best available when I bought it. The Sony phone had a flip screen, plus a smaller screen on the outside of the case, a conventional phone keypad, a rotating camera, and an incredibly impressive list of functions including email and web browsing.

In fact the feature list of the Sony phone was much more substantial than that of the early iPhones. But the difference was that the iPhone’s features were things you could actually use, whereas the Sony’s existed in theory but were so awkward, slow, and unintuitive that I never used them.

And that is a theme which has been repeated with all of Apple’s devices which revolutionised a particular product category (Apple II, Mac, iPod, iPhone, iPad). Looking at the feature list, specs, and price compared with competitors, none of these products should have succeeded.

But they did. Why? Well I’m going to say something here which is very Apple-ish and sounds like a marketing catch-phrase rather than a statement of fact or opinion, so prepare yourself. It is because Apple creates experiences, not products.

OK, sorry about that, but I can explain that phrase. The Sony versus iPhone situation I described above is a perfect example. Looking at the specs and features the Sony would have won most comparisons, but the ultimate purpose for a consumer device is to be used. Do the comparison again, but this time with how those specs and features affect the user and the iPhone wins easily.

And it was the same with the other products I mentioned above. Before the Mac, computers were too hard to use. The Mac couldn’t do much initially, but what it could do was so much more easily accessible than with PCs. The iPod was very expensive considering its capacity and list of functions, but it was much easier to use and manage than other MP3 players. And the iPad had a limited feature list, but its operating system was highly customised to creating an intuitive touch interface for the user.

When Steve Jobs introduced the iPhone 10 years ago he teased the audience like this: “[We are introducing] an iPod, a phone and an Internet communicator. An iPod, a phone – are you getting it? These are not separate devices. This is one device. And we are calling it iPhone.”

Today I made a list of the functions my iPhone 6S regularly performs for me, where it replaces other devices, technologies and media. This list includes: watch, stopwatch, alarm clock, point and shoot camera, video camera, photo album, PDA, calculator, GPS, map, music player, portable video player, calendar, appointment diary, book library, ebook reader, audiobook player, magazine, newspaper, recipe book, email client, note pad, drawing tablet, night sky star map, web browser, portable gaming console, radio, TV, audio recorder, TV and audio remote control, landline, and mobile phone.

Not only does it do all of those things but it does a lot of them better than the specialised devices it replaces! And, even though the iPhone isn’t cheap, if you look at the value of the things it replaces it is a bargain. My guess at the value of all the stuff I listed above is $3000 – $5000 which is at least twice the cost of the phone itself.

My iPhone has one million times the storage of the first computer I programmed on. Its processors are tens of thousands of times faster. Its screen displays 25 times more pixels. And, again, it costs a lot less, even when not allowing for inflation.

Most of what I have said would apply to any modern smart-phone, but the iPhone deserves a special place amongst the others for two reasons. First, it is a purer example of ease of use and user-centered functionality than other phones; and second, it was the one phone which started the revolution.

Look at pictures of the most advanced phones before and after the iPhone and you will see a sudden transition. Apple led the way – not on how to make a smartphone – but on how to make a smartphone that people would actually want to use. And after that, everything changed.

The Next Big Thing

January 8, 2017

Many (and I really do mean many) years ago, when I was a student, I started a postgrad diploma in computer science. One of the papers was on artificial intelligence and expert systems, an area which was thought (perhaps naively) to have great potential back in the “early days” of computing. Unfortunately, very little in that area was achieved for many years after that. But now I predict things are about to change. I think AI (artificial intelligence, also very loosely described as “thinking computers”) is the next big thing.

There are early signs of this in consumer products already. Superficially it looks like the digital assistants and other programs running on standard computers, tablets, and phones are performing AI. But these tend to work in very limited ways, and I suspect they follow fairly conventional techniques in producing the appearance of “thinking” (you might notice I keep putting that word in quotes because no one really knows what thinking actually is).

The biggest triumph of true AI last year was Google’s AlphaGo program which won a match 4 games to 1 against Lee Sedol, one of the world’s greatest human players. That previous sentence was significant, I think, because in future it will be necessary to distinguish between AIs and humans. If an AI can already beat a brilliant human player in what is maybe the world’s most complex and difficult game, then how long will it be before humans will be hopelessly outclassed in every game?

Computers which play Chess extremely well generally rely on “brute force” techniques. They check every possible outcome of a move many steps ahead and then choose the move with the best outcome. But Go cannot be solved that way because there are simply too many moves. So AlphaGo uses a different technique. It actually learns how to play Go through playing games against humans, itself, and other AIs, and develops its own strategy for winning.
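To get a feel for why the brute force approach runs out of steam, consider how quickly the number of positions to examine grows with the look-ahead depth. The branching factors below (roughly 35 legal moves per turn in Chess, roughly 250 in Go) are commonly quoted approximations, not exact figures:

```python
# Rough illustration of why brute force breaks down for Go: the number of
# positions to examine grows as (average moves per turn) ** depth.
# The branching factors are approximate, commonly quoted values.

chess_branching, go_branching = 35, 250

for depth in (2, 4, 6, 8):
    print(f"depth {depth}: chess ~{chess_branching ** depth:.1e} positions, "
          f"go ~{go_branching ** depth:.1e} positions")
```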

So while a conventional Chess playing program and AlphaGo might seem similar, in important ways they are totally different. Of course, the techniques used to win Go could be applied to any similar game, including Chess, it’s just that the pure brute force technique was sufficient and easier to implement when that challenge was first met.

Also last year a computer “judge” predicted the verdicts of European Court of Human Rights cases with 79% accuracy. What does that really mean? Well, it means that the computer effectively judged the cases and reached the same result as the human judges in about 80% of them. I have no data on this, but I suspect two human judges might only agree with each other to a similar degree.

So computers can perform very “human” functions like judging human rights cases, and that is quite a remarkable achievement. I haven’t seen what techniques were used in that case but I suspect deep learning methods like neural networks would be required.
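For what it’s worth, this kind of system is usually framed as text classification: turn each case document into features and train a classifier to predict the outcome. Here is a deliberately tiny, purely illustrative sketch using scikit-learn – not the pipeline the actual study used, and with made-up case summaries standing in for real training data:

```python
# Purely illustrative: outcome prediction framed as text classification.
# The case summaries below are invented, and a real system would need
# thousands of labelled cases rather than four.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cases = [
    "applicant detained without review for two years",
    "authorities provided prompt judicial review of detention",
    "journalist fined for publishing criticism of officials",
    "newspaper complaint dismissed after fair domestic proceedings",
]
violation = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(cases, violation)

print(model.predict(["applicant held for months without any court review"]))
```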

So what does all this mean? I think it was the science fiction author Arthur C Clarke who said that a thinking machine would be the last invention humans would ever have to create, because after that the machines themselves would do the inventing. I don’t think we are close to that stage yet, but this is a clear start, and I think the abilities of AIs will escalate exponentially over the next few decades until that idea is fulfilled.

And, along with another technology which is just about ready to become critical – 3D printing – society will be changed beyond recognition. The scenario portrayed in so many science fiction stories will become reality. The question is which type of science fiction story will prove most accurate: the utopian or the dystopian? It could go either way.