Archive

Posts Tagged ‘technology’

Utopia or a Dystopia?

February 5, 2018

I have been interested in artificial intelligence for years, without being too deeply involved in it, and it seemed that until recently there was just one disappointment after another from this potentially revolutionary area of technology. But now it seems that almost every day there is some fascinating, exciting, and often worrying news about the latest developments in the area.

One recent item which might be more significant than it seems at first is the latest iteration of AlphaGo, Google’s Go-playing AI. I wrote about AlphaGo in a post “Sadness and Beauty” from 2016-03-16, after it beat the world champion at Go, a game which many people thought a computer could never master.

Now AlphaGo Zero has beaten AlphaGo by 100 games to zero. But the significant thing here is not about an incremental improvement, it is about a change in the way the “Zero” version works. The zero in the name stands for zero human input, because the system learned how to win at Go entirely by itself. The only original input was the rules of the game.

While learning winning strategies, AlphaGo Zero “re-created” many of the classic moves humans had already discovered over the last few thousand years, but it went further than this and created new moves which had never been seen before. As I said in my previous post on this subject, the original AlphaGo was already probably better than any human, but the new version seems to be completely superior even to that.

And the truly scary thing is that AlphaGo Zero did all this in such a short period of time: DeepMind reported that only about three days of self-play training were needed to surpass the version which beat the world champion. So in that time a single AI has learned far more about a game than millions of humans have in thousands of years. That’s scary.

Remember that AlphaGo Zero was created by programmers at Alphabet’s Google DeepMind in London. But in no way did the programmers write a Go playing program. They wrote a program that could learn how to play Go. You could say they had no more input into the program’s success than a parent does into the success of a child whom they abandon at birth. It is sort of like supplying the genetics but not the training.
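
Just to make the idea of “zero human input” more concrete, here is a toy sketch of the same principle applied to tic-tac-toe: the program is given nothing but the rules and a way to score finished games, and it improves purely by playing against itself. To be clear, this is only an analogy written for illustration; the real AlphaGo Zero combines a deep neural network with Monte Carlo tree search and is vastly more sophisticated.

```python
# Toy illustration only: a tabular self-play learner for tic-tac-toe.
# The real AlphaGo Zero pairs a deep neural network with Monte Carlo tree
# search; this sketch just shows the core idea of "zero human input",
# where the program is given the rules and improves by playing itself.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # estimated value of each position, from X's point of view

def choose(board, player, explore=0.1):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < explore:
        return random.choice(moves)          # occasional random move, to keep exploring
    def score(m):
        return values[board[:m] + player + board[m+1:]]
    # X prefers positions with high value, O prefers positions with low value
    return max(moves, key=score) if player == "X" else min(moves, key=score)

def self_play_game(learning_rate=0.2):
    board, player, history = " " * 9, "X", []
    while True:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append(board)
        w = winner(board)
        if w is not None or " " not in board:
            result = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
            for state in history:            # nudge every visited position towards the result
                values[state] += learning_rate * (result - values[state])
            return result
        player = "O" if player == "X" else "X"

if __name__ == "__main__":
    results = [self_play_game() for _ in range(20000)]
    print("positions evaluated:", len(values))
    print("draws in the last 1000 games:", sum(r == 0.0 for r in results[-1000:]))
```

Even this trivial version tends to pick up standard tactics, such as blocking an opponent’s line, purely from the win/lose signal, which is the small-scale analogue of AlphaGo Zero re-creating classic human moves.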

You might wonder why Alphabet (Google’s parent company) has spent so much time and money creating a system which plays an obscure game. Well the point, of course, is to create techniques which can be used in more general and practical situations. There is some debate amongst experts at the moment about how easily these techniques could be used to create a general intelligence (one which can teach itself anything, instead of just a specific skill) but even if it only works for specific skills it is still very significant.

There are many other areas where specialised intelligence by AIs has exceeded humans. For example, at CERN (the European nuclear research organisation) they are using AI to detect particles, labs are developing AIs which are better than humans at finding the early signs of cancer, and AIs are now good at detecting bombs at airports.

So even if a human level general intelligence is still a significant time away, these specialised systems are very good already, even at this relatively early time in their development. It’s difficult to predict how quickly this technology might advance, because there is one development which would make a revolutionary rather than evolutionary change: that is an AI capable of designing AIs – you might call this a meta-AI.

If that happens then all bets are off.

Remember that an AI isn’t anything physical, because it is just a program. In every meaningful way creating an AI program is just like playing a game of Go. It is about making decisions and creating new “moves” in an abstract world. It’s true that the program requires computer hardware to run on, but once the hardware reaches a reasonable standard of power that is no more important than the Go board is to how games proceed. It limits what can be done in some ways, but the most interesting stuff is happening at a higher level.

If AlphaGo Zero can learn more in a week than every human who ever played Go could learn in thousands of years, then imagine how much progress a programming AI could make compared with every computer scientist and programmer who ever existed. There could be new systems which are orders of magnitude better developed in weeks. Then they could create the next generation which is also orders of magnitude better. The process would literally be out of control. It would be like artificial evolution running a trillion times faster than the natural version, because the generation time is so short and the “mutations” are planned rather than being random.

When I discussed the speed that AlphaGo Zero had shown when it created the new moves, I used the word “scary”, because it literally is. If that same ability existed for creating new AIs then we should be scared, because it will be almost impossible to control. And once super-human intelligence exists it will be very difficult to reverse. You might think something like, “just turn off the computer”, but how many backups of itself will exist by then? Simple computer viruses are really difficult to eliminate from a network, so imagine how much more difficult a super-intelligent “virus” would be to remove.

Where that leaves humans, I don’t know for sure. I said in the previous post that humans would be redundant, but now I’m not totally sure that is true. Maybe there will be a niche for us, at least temporarily, or maybe humans and machines will merge in some way. Experts disagree on how much of a threat AI really is. Some predict a “doomsday” where human existence is fundamentally threatened, while others predict a bright future for us, free from the tedious tasks which machines can do better, and where we can pursue the activities we *want* to do rather than what we *have* to do.

Will it be a utopia or a dystopia? No one knows. All we know is that the world will never be the same again.


The Future of Driving

January 31, 2018

In a recent post, I talked about how electric power seems to be the inevitable future of cars. This is probably not too surprising to most people given the way electric cars have become so much more popular recently, and how the company Tesla has successfully captured a lot of headlines (in many cases deservedly so, because of its technical advances, and in other cases mainly because of the star status of its founder, Elon Musk).

But a much greater revolution is also coming: self-driving cars. In the future, people will not be able to comprehend how we allowed humans to drive, or how we tolerated the massive inefficiency and the huge number of accidents and deaths which resulted.

In my previous post I commented on how I am a “petrol-head” and enjoy driving, as well as liking the “insane fury” of current petrol powered supercars. I commented on how electric cars have no “soul” and this would appear to apply even more to self-driving cars. Before I provide the answer to how this travesty can be avoided, I want to present some points on how good self-driving cars should be.

First, there is every indication that computers will be far better than humans at driving, especially in terms of safety. Even current versions of self-driving systems are far better than the average human, and they will surely become even better in the future once the algorithms are refined and more infrastructure is in place for them.

Whether computer controlled cars are currently better than the best humans is debatable, because I have seen no data on this, but that doesn’t really matter because being better than the actual, flawed, unskilled humans doing most of the driving now is all that is required.

In fact, the majority of accidents involving self-driving systems so far can be attributed to errors by human drivers which the AI could not compensate for; self-driving cars still have to obey the laws of physics, and not all accidents can be avoided, even by a perfect AI.

So if we switched to self-driving cars, how would things change? Well, to get the full benefit of this technology all cars would need to be self-driving. While some cars are still driven by humans there will always be an element of unpredictability in the system. Plus all the extra infrastructure needed by humans (see later for examples) will need to be kept in place.

Ultimately, as well as all cars being self-driven, the system would also require all vehicles to be able to communicate with each other. This would allow information to be shared and maybe for a central controller to make the system run more efficiently. It might also be possible, and maybe preferable, to have a distributed intelligence instead, where the individual components (vehicles) make decisions in cooperation with other units nearby.

The most obvious benefit would be to free up time for humans who could do something more useful than driving. They could read a book, read a newspaper, watch a movie, write their blog, do some work, etc, because the car would be fully automated.

But it goes far beyond that, because all of the rules we have in place today to control human drivers would be unnecessary. There would be no need for speed limits, for example, because the cars would drive at the speed best for the exact conditions at the time. They would use factors like the traffic density and weather conditions and set their speed appropriately.

There’s no doubt that even today traffic could move much faster than it does if proper driving techniques were used. The problem is that drivers aren’t good enough to drive quickly. But speed and safety can co-exist, as shown by Germany’s autobahns, where there is often no speed limit, yet the accident rate is lower than in the US.

There would be no need to have lanes and other symbols marked on roads, and even the direction vehicles are travelling in the lane could be swapped depending on traffic density. All the cars would know the rules and always obey them. Head-on crashes would be almost impossible even when a lane swaps the direction the traffic is flowing in.

The same would apply to turning traffic. A car could make a turn into a stream of traffic because communications with the other cars in that stream would ensure the space was available. There would be no guessing if another driver would be polite enough to create a gap, and no guessing exactly how much time was needed because all distances and speeds would be known exactly.
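
To give an idea of how mechanical that judgement becomes once every distance and speed is known exactly, here is a hypothetical sketch of the sort of check a turning car could make. The data structure, numbers, and safety margins are all invented for illustration; no real self-driving system is this simple.

```python
# Hypothetical sketch of a turn/merge decision when every vehicle's position
# and speed is known exactly. The data structure and margins are invented
# for illustration, not taken from any real self-driving system.
from dataclasses import dataclass

@dataclass
class Vehicle:
    distance_m: float      # distance from the intersection, in metres
    speed_ms: float        # speed towards the intersection, in metres per second

def can_turn(oncoming, clearance_time_s=4.0, safety_margin_s=1.5):
    """Return True if every oncoming car arrives later than the time we
    need to complete the turn, plus a safety margin."""
    for car in oncoming:
        if car.speed_ms <= 0:          # stationary or moving away: ignore
            continue
        arrival_s = car.distance_m / car.speed_ms
        if arrival_s < clearance_time_s + safety_margin_s:
            return False               # the gap is too small
    return True

# Example: two cars approaching at 20 m/s (72 km/h)
stream = [Vehicle(distance_m=160, speed_ms=20), Vehicle(distance_m=90, speed_ms=20)]
print(can_turn(stream))   # False: the nearer car arrives in 4.5 s, within the margin
```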

I could imagine a scene where traffic was flowing, turning, and merging seemingly randomly at great speed, in a way that would look suicidal today but was in reality precisely coordinated.

Then there’s navigation. Most humans can follow GPS instructions fairly well, but how much better would this be when all the cars shared knowledge about traffic congestion and other delays, and planned the routes based on that, as well as the basic path?
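
As a rough sketch of what “planning routes based on congestion as well as the basic path” could look like, here is a standard shortest-path search (Dijkstra’s algorithm) in which each road’s travel time is scaled by a congestion factor reported by other cars. The road network and figures are invented.

```python
# Minimal sketch: route planning where shared congestion reports scale each
# road's travel time. The network and congestion figures are invented;
# real systems are far more sophisticated, but the principle is the same.
import heapq

# base travel time in minutes for each road segment
roads = {
    "home":    {"main_st": 5, "back_rd": 9},
    "main_st": {"city": 6},
    "back_rd": {"city": 7},
    "city":    {},
}

# congestion multipliers shared by the fleet (1.0 = free flowing)
congestion = {("home", "main_st"): 2.5, ("main_st", "city"): 1.8}

def quickest_route(start, goal):
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, base in roads[node].items():
            factor = congestion.get((node, neighbour), 1.0)
            heapq.heappush(queue, (time + base * factor, neighbour, path + [neighbour]))
    return None

print(quickest_route("home", "city"))  # picks the back road once main street is congested
```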

Finally there’s parking. No one would need to own a car because after completing the journey the car could go and be used by someone else. It would never need to park, except for recharging and maintenance, which could also be automatic. All the payments could be done transparently and the whole system should be much cheaper than personally owning and using a car, like we do now.

The whole thing sounds great, and there are almost no disadvantages, but I still don’t like it in some ways: my car is part of my identity, I enjoy driving, and the new world of self-driving electric cars, however efficient, seems to lack any personality or fun.

But that won’t matter, because there will be two ways to overcome this deficiency. First, there might be lots of tracks where people can go to test their driving skills in traditional human driven – maybe even petrol powered – cars as a recreational activity, sort of like how some people ride horses today. And second, and far more likely, virtual reality will be so realistic that it will be almost indistinguishable from real driving, but without the risks.

And while I am on the subject of VR, it should be far less necessary to travel in the future because so much could be done remotely using VR and AR systems. So less traffic should be another factor making the roads far more efficient and safe.

In general the future in this area looks good. I suspect this will all happen in about 20 years, and when it does, people will be utterly shocked that we used to control our vehicles ourselves, especially when they look at the number of accidents and fatalities, and the amount of time wasted each day. Why would we drive when a machine can do it so much better, and we could use that time for something far more valuable?

The Future of Cars

January 28, 2018

I have mixed feelings about the idea of electric and self driving cars. I am a bit of a “petrol-head” (car enthusiast) myself and enjoy driving fast, reading about fast cars, and watching supercar videos, so the new generation of cars is not necessarily welcome to me.

There is no doubt that electric power and self-driving cars are the future, but both of these remove the fun factor from driving. Of course, that might be thought of as a small price to pay for the huge advantages the future will bring, but it’s still kind of sad.

But I should talk a little bit about how great the future will be with these two technologies first before I discuss the disadvantages. So here’s what is so great about electric cars (I’ll deal with self-driving technology later)…

Electric is fast. I said I was a “petrol head” and liked driving fast, but I guess I could adapt to fast driving in electric cars as well. After all, no petrol car can get close to an electric for initial acceleration off the line. Electric motors produce maximum torque from zero RPM. My twin-turbo petrol car (and every other conventional car) takes a lot longer to reach peak torque.

Electric is cheap. Well, when I say it is cheap I mean it is cheap to run. Unfortunately at the moment the initial cost is far too high, mainly because high capacity batteries are not being mass produced in enough quantity to bring the price down. Some countries have subsidies to encourage the use of electrics, but this shouldn’t be necessary, and hopefully one day won’t be.

Electric is simple. Modern petrol powered cars are ridiculously complex. Depending on what you count as essential components, a petrol car might have hundreds or thousands of moving parts, against just a few in an electric (again, the number of parts depends on whether you count cooling fans for the batteries, air conditioning, and other extra components). Despite this, modern petrol engines (and transmissions) are incredibly reliable. But an electric can have one moving part (essentially the rotor of the motor) connected directly to the wheel. That’s one moving part for the whole drive train! There are no camshafts, valves, turbos, gearboxes, differentials, or CV joints. Once electric cars become better established their reliability just has to be far greater.

Electric is quiet. The sound of a high performance petrol engine might be music to the ears of a true enthusiast like me, but to many people it is just an annoyance. Electric cars are so quiet it almost becomes a hazard, but this will soon become normal.

Electric is environmentally sound. The advantages to the environment of electric cars aren’t quite as obvious as is often imagined, but they are still significant. There is little doubt that electricity generated centrally and used to charge batteries for cars is superior to burning fossil fuels in an engine – especially when an increasing fraction of electricity generation is from renewable sources – but the production of batteries, and their disposal after they lose efficiency, is an extra environmental issue which is sometimes not considered. This makes the environmental advantage of electric cars a bit less certain, but the consensus seems to be that it is still significant.

Electric is the future. Even if you debate the points I have made above it seems that electric cars are an idea whose time has come. Even though they still make up a small fraction of the total fleet, there is a clear trend to them becoming more common on our roads. And, most importantly, they are now an obvious option for anyone buying a new car, where in the past they were a fringe possibility that few people would take seriously.

Of course, there are big disadvantages too. I have already mentioned the initial cost, but the other major factors are range, slow recharging, and a lack of recharging points. The first two are inherent to the technology but are improving rapidly. The last is a sort of “Catch-22” situation: there aren’t enough recharging points because there aren’t enough electric cars needing recharging, and there aren’t enough electric cars partly because there aren’t enough charging points for them.

There’s nothing quite like the sound of a high performance petrol car being thrashed – the sight and sound of a Lamborghini or McLaren exhaust system spitting flames is just awesome – and there’s no doubt that petrol cars have more “soul” than electrics. But people said the same thing about steam engines before they were replaced with electrics. I guess petrol cars will go the same way, so we might as well accept the inevitability of technical progress and just get used to it.

I started this post by mentioning both electric and self-driving cars and I don’t seem to have got onto the self-driving part yet, which is actually far more controversial and revolutionary. So I might leave that to a future entry, since it deserves a post to itself.

So, until I switch to an electric myself I will continue to enjoy driving my current car – but I won’t try to race a Tesla away from the lights!

Random Clicking

January 14, 2018

Nowadays, most people need to access information through computers, especially through web sites. Many people find the process involved with this quite challenging, and this isn’t necessarily restricted to older people who aren’t “digital natives”, or to people with no interest in, or predisposition towards technology.

In fact, I have found that many young people find some web interfaces bizarre and unintuitive. For example, my daughter (in her early 20s) thinks Facebook is badly designed and often navigates using “random clicking”. And I am a computer programmer with decades of experience but even I find some programs and some web sites completely devoid of any logical design, and I sometimes revert to the good old “random clicking” too!

For example, I received an email notification from Inland Revenue last week and was asked to look at a document on their web site. It should have taken 30 seconds but it took closer to 30 minutes and I only found the document using RC (random clicking).

Before I go further, let me describe RC. You might be presented with a web site or program/app interface and you want to do something. There might be no obvious way to get to where you want to go, or you might take the obvious route only to find it doesn’t go where you expected. Or, of course, you might get a random error message like “page not available” or “internal server error”, or even the dreaded “this app has quit unexpectedly”, the blue screen of death, or the spinning activity wheel.

So to make progress it is necessary just to do some RC on different elements, even if they make no sense, until you find what you are looking for. Or in more extreme cases you might even need to “hack” the system by entering deliberately fake information, changing a URL, etc.

What’s going on here? Surely the people involved with creating major web sites and widely used apps know what they are doing, don’t they? After all, many of these are the creations of large corporations with virtually unlimited resources and budgets. Why are there so many problems?

Well, there are two explanations: first, errors do happen occasionally, no matter how competent the organisation involved is, and because we use these major sites and apps so often we tend to see their errors more often too; and second, large corporations create things through a highly bureaucratic and obscure process, and consistency and attention to detail are difficult to attain under such a scheme.

When I encounter errors, especially on web sites, I like to keep a record of them by taking a screenshot. I keep these in a folder to make me feel better if I make an error in any of my own projects, because it reminds me that sites created by organisations with a hundred programmers and huge budgets often have more problems than those created by a single programmer with no budget.

So here are some of the sites I currently have in my errors folder…

APN (couldn’t complete your request due to an unexpected error – they’re the worst type!)
Apple (oops! an error occurred – helpful)
Audible (we see you are going to x, would you rather go to x?)
Aurora (trying to get an aurora prediction, just got a “cannot connect to database”)
BankLink (page not found, oh well I didn’t really want to do my tax return anyway)
BBC (the world’s most trusted news source, but not the most trusted site)
CNet (one of the leading computer news sources, until it fails)
DCC (local body sites can be useful – when they work)
Facebook (a diabolical nightmare of bad design, slowness, and bugginess)
Herald (NZ’s major newspaper, but their site generates lots of errors)
InternetNZ (even Internet NZ has errors on their site)
IRD (Inland Revenue has a few good features, but their web site is terrible overall)
Medtech (yeah, good luck getting essential medical information from here)
Mercury (the messenger of the gods dropped his message)
Microsoft (I get errors here too many times to mention)
Fast Net (not so fast when it doesn’t work)
Origin (not sure what the origin of this error was)
Porsche (great cars, web site not so great)
State Insurance (state, the obvious choice for a buggy web site)
Ticketmaster (I don’t have permission for the section of the site needed to buy tickets)
TradeMe (NZ’s equivalent of eBay is poorly designed and quite buggy)
Vodafone (another ISP with web site errors)
WordPress (the world’s leading blogging platform, really?)
YesThereIsAGod (well if there is a god, he needs to hire better web designers)

Note that I also have a huge pile of errors generated by sites at my workplace. Also, I haven’t even bothered storing examples of bad design, or of problems with apps.

As I said, there are two types of errors, and those caused by temporary outages are annoying but not disastrous. The much bigger problem is the sites and apps which are just inherently bad. The two most prominent examples are Facebook and Microsoft Word. Yes, those are probably the most widely used web site and most widely used app in the world. If they are so bad why are they so popular?

Well, popularity can mean two things: first, something is very widely used, even if it is not necessarily very well appreciated; and second, something which is well-liked by users and is utilised because people like it. So you could say tax or work is popular because almost everyone participates in them, but that drinking alcohol, or smoking dope, or sex, or eating burgers is popular because everyone likes them!

Facebook and Word are popular but most people think they could be made so much better. Also many people realise there are far better alternatives but they just cannot be used because of reasons not associated with quality. For example, people use Facebook because everyone else does, and if you want to interact with other people you all need to use the same site. And Word is widely used because that is what many workplaces demand, and many people aren’t even aware there are alternatives.

The whole thing is a bit grim, isn’t it? But there is one small thing I would suggest which could make things better: if you are a developer with a product which has a bad interface, and you can’t be almost certain that you can improve it significantly, don’t bother trying. People can get used to badly designed software, but coping with changes to an equally bad but different interface in a new version is annoying.

The classic example is how Microsoft has changed the interface between Office 2011 and Office 2016 (these are the Mac versions, but the same issue exists on Windows). The older version has a terrible, primitive user interface but after many years people have learned to cope with it. The newer version has an equally bad interface (maybe worse) and users have to re-learn it for no benefit at all.

So, Microsoft, please just stop trying. You have a captive audience for your horrible software so just leave it there. Bring out a new version so you can steal more money from the suckers who use it, but don’t try to improve the user interface. Your users will thank you for it.

Introduction to the Elements

December 29, 2017

The Greek philosophers were incredibly smart people, but they didn’t necessarily know much. By this I mean that they were thinking about the right things in very intelligent and perceptive ways, but some of the conclusions they reached weren’t necessarily true, simply because they didn’t have the best tools to investigate reality.

Today we know a lot more, and even the most basic school science course will impart far more real knowledge to the average school student than what even the greatest philosophers, like Aristotle, could have known.

I have often thought about what it would be like to talk to one of the ancient Greeks about what they thought about the universe and what we have found out since, including how we know what we know. Coincidentally, this might also serve as a good overview of our current knowledge to any interested non-experts today.

Of course, modern technology would be like total magic to any ancient civilisation. In fact, it would seem that way to a person from just 100 years ago. But in this post I want to get to more fundamental concepts than just technology, mostly the ancient and modern ideas about the elements, so let’s go…

The Greeks, as well as several other ancient cultures, had arrived at the concept of there being elements: fundamental substances from which everything else was made. The classic four elements were fire, air, water, and earth. In addition, a fifth element, aether, was added to account for the non-material and heavenly realm.

This sort of made sense because you might imagine that those components resulted when something changed form. So burning wood releases fire and air (smoke) and some earth (ash) which seemed to indicate that they were original parts of the wood. And sure, smoke isn’t really like air but maybe that’s because it was made mainly from air, with a little bit of earth in it too, or something similar.

So I would say to a philosopher visiting from over 2000 years ago that they were on the right track – especially the atomists – but things aren’t quite the way they thought.

Sure, there are elements, but none of the original 4 are elements by the modern definition. In fact, those elements aren’t even the same type of thing. Fire is a chemical reaction, air is a mixture of gases, water is a molecule, and earth is a mixture of fine solids. The ancient elements correspond more to modern states of matter, maybe matching quite well with plasma, gas, liquid and solid.

The modern concept of elements is a bit more complicated. There are 92 of them occurring naturally, and they are the basic components of all of the common materials we see, although not everything in the universe as a whole is made of elements. The elements can occur by themselves or, much more commonly, combine with other elements to make molecules.

The elements are all made of atoms but, despite the name (which comes from a Greek word meaning indivisible), atoms are not the smallest particles, because they are in turn made from electrons, protons, and neutrons, and the protons and neutrons are themselves made of quarks. As far as we know, these cannot be divided any further. But to complicate matters a bit more, there are many other indivisible particles. The most well known of these from everyday life is the photon, which makes up light.

Different atoms all have the same structure: classically thought of as a nucleus containing a certain number of protons and neutrons surrounded by a cloud of electrons. There are the same number of protons (which have a positive charge) and electrons (which have a negative charge) in all neutral atoms. It is the number of protons which determines which atom (or element) is which. So one proton means hydrogen, 2 helium, etc, up to uranium with 92. That number is called the “atomic number”.

The number of neutrons (which have no charge) varies, and the same element can have different forms because they have a different number of neutrons. When this happens the different forms are called isotopes.

Protons and neutrons are big and heavy and electrons are light, so the mass of an atom is made up almost entirely of the protons and neutrons in the nucleus. The electrons are low mass and “orbit” the nucleus at a great distance compared with the size of the nucleus itself, so a hydrogen atom (for example, but this applies to all atoms and therefore everything made of atoms, which is basically everything) is 99.9999999999996% empty space!

When I say protons are big and heavy I mean this only relatively, because there are 50 million trillion atoms in a single grain of sand (which means a lot more protons because silicon and oxygen, the two main elements in sand, both have multiple protons per atom).
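
Both of those numbers are easy to sanity-check with a back-of-the-envelope calculation. The sketch below assumes a hydrogen (Bohr) radius of about 5.3e-11 m, a proton radius of about 0.85e-15 m, and a quartz sand grain roughly 1 mm across; different assumptions shift the sand figure a little, but not the general conclusion.

```python
# Back-of-the-envelope checks for the two figures above. The inputs are
# rough assumptions (Bohr radius, proton radius, a ~1 mm quartz grain),
# so the results are order-of-magnitude estimates, not precise values.
from math import pi

atom_radius = 5.3e-11         # metres, hydrogen (Bohr radius)
proton_radius = 0.85e-15      # metres, approximate

filled_fraction = (proton_radius / atom_radius) ** 3    # ratio of volumes
print(f"empty space: {100 * (1 - filled_fraction):.13f}%")   # ~99.9999999999996%

# Atoms in a grain of sand, treated as a 1 mm sphere of quartz (SiO2)
grain_radius_cm = 0.05                        # 1 mm diameter
volume_cm3 = 4 / 3 * pi * grain_radius_cm ** 3
mass_g = volume_cm3 * 2.65                    # quartz density, about 2.65 g/cm3
molecules = mass_g / 60.1 * 6.022e23          # SiO2 molar mass ~60 g/mol, Avogadro's number
atoms = molecules * 3                         # one silicon plus two oxygen atoms each
print(f"atoms in the grain: {atoms:.1e}")     # a few times 10^19, i.e. tens of millions of trillions
```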

When atoms combine we describe it using chemistry. This involves the electrons near the edge of an atom (the electrons form distinct “shells” around the nucleus) combining with another atom’s outer electrons. How atoms react is determined by the number of electrons in the outer shell. Atoms “try” to fill this shell and when they do they are most stable. The easiest way to fill a shell is to borrow and share electrons with other atoms.

Atoms with one electron in the outer shell or with just one missing are very close to being stable and are very reactive (examples: sodium, potassium, fluorine, chlorine). Atoms with that shell full don’t react much at all (examples: helium, neon).

There are far more energetic reactions which atoms can also participate in, when the nucleus splits or combines instead of the electrons. We call these nuclear reactions and they are much harder to start or maintain, but they generate huge amounts of energy. There are two types: fusion, where small atoms combine to make bigger ones, and fission, where big atoms break apart. The Sun is powered by fusion, and current nuclear power plants by fission.

After the splitting or combining, the resulting atoms have less mass/energy (they are the same thing, but that’s another story) than the original atoms, and that extra energy is released according to the formula E=mc^2 discovered by Einstein. This means you can calculate how much energy (E) comes from a certain amount of mass (m) by multiplying by the speed of light squared (about 90 thousand trillion in metric units). This number is very high, which means that a small amount of mass creates a huge amount of energy.
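
To show just how extreme that conversion factor is, here is the textbook calculation for one gram of mass converted entirely into energy, with a rough household comparison added (the 20 kWh per day figure is only an illustrative assumption).

```python
# E = m * c^2 for one gram of mass. The household comparison assumes an
# average use of about 20 kWh per day, which is just a rough illustrative figure.
c = 3.0e8                      # speed of light, metres per second
m = 0.001                      # one gram, expressed in kilograms

energy_joules = m * c ** 2
print(f"energy from 1 g of mass: {energy_joules:.1e} J")        # 9.0e+13 J

kwh = energy_joules / 3.6e6                                     # 1 kWh = 3.6 million joules
print(f"that is {kwh:,.0f} kWh, or roughly {kwh / 20 / 365:.0f} years "
      f"of a typical household's electricity")
```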

Most reactions involve a bit of initial energy to start them, and then they release energy as the reaction proceeds. That’s why lighting a match next to some fuel starts a reaction which releases a lot more energy.

So water is a molecule made from one oxygen atom and two hydrogen atoms. But gold is an element all by itself and doesn’t bond well with others. And when two elements bind and form a molecule they are totally different from a simple mixture of the two elements. Take some hydrogen and oxygen and mix them and you don’t get water. But light a match and you get a spectacular result, because the hydrogen burns in the oxygen forming water in the process. The energy content of water is lower than the two constituent gases which explains all that extra energy escaping as fire. But the fire wasn’t an elementary part of the original gases and neither was the water. You can see how the Greeks might have reached that conclusion though.
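
For anyone who wants the numbers behind that spectacular result: forming a mole of liquid water releases roughly 286 kJ (the standard enthalpy of formation), which works out to about 142 kJ for every gram of hydrogen burned. Here is the arithmetic, using textbook approximations:

```python
# The energy bookkeeping for burning hydrogen in oxygen: 2 H2 + O2 -> 2 H2O.
# Forming one mole of liquid water releases roughly 286 kJ; the figures
# here are textbook approximations, good enough for a rough comparison.
energy_per_mole_kj = 286       # kJ released per mole of water formed
molar_mass_h2 = 2.016          # grams per mole of hydrogen gas

kj_per_gram_of_h2 = energy_per_mole_kj / molar_mass_h2
print(f"burning 1 g of hydrogen releases about {kj_per_gram_of_h2:.0f} kJ")   # ~142 kJ

# For comparison, petrol releases roughly 46 kJ per gram, so hydrogen is
# about three times as energy dense by mass.
print(f"ratio vs petrol (~46 kJ/g): {kj_per_gram_of_h2 / 46:.1f}x")
```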

Basic classical physics and chemistry like this make a certain amount of intuitive sense, and the visiting philosopher would probably understand how it works fairly quickly. But then I would need to reveal that it is all really just an approximation of what reality is actually like.

There would be a couple of experiments I could mention which would be very puzzling and almost impossible to explain based on the classical models. One would be the Michelson–Morley experiment, and the other would be the infamous double-slit experiment. These lead to the inevitable conclusion that the universe is far stranger than we imagined, and new theories – in this case relativity and quantum theory – must be used.

Whether our philosopher friend could ever gain the maths skills necessary to fully understand these would be difficult to know. Consider that the Greeks didn’t really accept the idea of zero and you can see that they would have a long way to go before they could use algebra and calculus with any competence.

But maybe ideas like time and space being dynamic, gravity being a phenomenon caused by warped space-time, particles behaving like waves and waves behaving like particles depending on the experiment being performed on them, single particles being in multiple places at the same time, and particles becoming entangled, might be comprehensible without the maths. After all, I have a basic understanding of all these things and I only use maths like algebra and calculus at a simple level.

It would be fun to list some of the great results of the last couple of hundred years of experimental science and ask for an explanation. For example, the observations made by Edwin Hubble showing the red-shifts of galaxies would be interesting to interpret. Knowing what galaxies actually are, what spectra represent, and how galactic distances can be estimated, would seem to lead to only one reasonable conclusion, but it would be interesting to see what an intelligent person with no pre-conceived ideas might think.
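
For the record, the relationship Hubble found is remarkably simple: for nearby galaxies, recession velocity is proportional to distance, v = H0 × d, with H0 somewhere around 70 km/s per megaparsec (the exact value is still argued over). A small sketch using that approximate value:

```python
# Hubble's law for nearby galaxies: recession velocity is proportional to
# distance, v = H0 * d. The value of H0 used here (70 km/s per megaparsec)
# is approximate; measuring it precisely is still an active area of research.
H0 = 70.0            # km/s per megaparsec
c = 300000.0         # speed of light, km/s

def recession_velocity(redshift):
    """Approximate velocity from redshift; valid only for small redshifts."""
    return redshift * c

def distance_mpc(redshift):
    return recession_velocity(redshift) / H0

z = 0.01             # a modest redshift, typical of a relatively nearby galaxy
print(f"z = {z}: v ~ {recession_velocity(z):.0f} km/s, "
      f"distance ~ {distance_mpc(z):.0f} Mpc")   # ~3000 km/s, ~43 Mpc
```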

As I wrote this post I realised just how much background knowledge is necessary as a prerequisite to understanding our current knowledge of the universe. I think it would be cool to discuss it all with a Greek philosopher, like Aristotle, or my favourite Eratosthenes. And it would be nice to point out where they were almost right, like Eratosthenes’ remarkable attempt at calculating the size of the Earth, but it would also be interesting to see their reaction to where they got things badly wrong!

Is Apple Doomed?

December 20, 2017

I’m a big Apple fanboy. As I sit here writing this blog post (flying at 10,000 meters on my way to Auckland, because I always write blog posts when I fly) I am actively using 4 Apple products: a MacBook Pro computer, an iPad Pro tablet, an iPhone 6S Plus smartphone, and an Apple Watch. At home I have many Apple computers, phones, and other devices. I also have one Windows PC but I very rarely use that.

So the general state of Apple’s “empire” is pretty important to me. Many of the skills I have (such as general trouble-shooting, web programming, scripting, configuration, and general software use) could be transferred to Windows, but I just don’t want to. I really like the elegance of Apple’s devices on the surface, combined with the power of Unix in the background.

But despite my enthusiasm for their products, I have developed an increasing sense of concern about Apple’s direction. There is the indistinct idea that they have stopped innovating to the extent they did in the past. Then there is the observation that the quality control of both hardware and software isn’t what it was. Then there is just a general perception that Apple are getting too greedy by selling products at too high a price and not offering adequate support for the users of their products.

These opinions are nothing new, but what is new is that people who both know a lot about the subject, and would normally be more positive about Apple, are starting to join in the criticism. Sometimes this is through a slight sense of general concern, and other times through quite strident direct criticism.

I would belong to the former class of critics. I think I have noticed an increase in the number of errors Apple is making, at the same time as I notice an apparent general decrease in the overall reliability of their products, and to make matters worse, these are accompanied by what seems to be higher prices.

You will notice I used a lot of qualifiers in the sentence above. I did this deliberately because I have no real data or objective statistics to demonstrate any of these trends. They might not be real because it is very easy to start seeing problems when you look for them, and negative events often “clump” into groups. Sometimes there might be a series of bad things which happen after a long period with no problems, but that doesn’t mean there is any general trend involved.

But now is the time for anecdotes! These don’t mean much, of course, but I want to list a few just to give an idea of where my concern is coming from.

Recently I set up two new Mac laptop computers in a department where there was a certain amount of pressure from management to switch to Microsoft Surface laptops. The Surface has a really poor reputation for reliability and is quite expensive, so it shouldn’t be difficult to demonstrate the superiority of Apple products in this area, right?

Well, no. Wrong, actually. At least in this case. Both laptops had to go for service twice within the first few weeks. I have worked with Apple hardware for decades and have never seen anything remotely as bad as this. And the fact that it was in a situation where Apple was under increased scrutiny didn’t help!

In addition, the laptops had inadequate storage, because even though these are marketed as “pro” devices the basic model still has only 128 GB of SSD storage. That wasn’t Apple’s fault, because the person doing the purchasing should have got it right, but it didn’t help!

Also recently Apple has suffered from some really embarrassing security flaws. One allowed root access to a Mac without a password, and another allowed malicious control of automated home-control devices. There were also a few other lesser issues in the same time period. As far as I know none of these were exploited to any great extent, but it is still a bad look.

Another issue which seems to be becoming more prominent recently is their repair and replacement service. In general I have had fairly good service from Apple repair centres, but I have heard of several people who aren’t as happy.

When you buy a premium device at the premium price Apple demands I don’t think it is unreasonable to expect a little bit of extra help if things go wrong. So unless there is clear evidence of fraud, repairs and replacements should be done without the customer having to resort to threats and demands for the intervention of higher levels of staff.

And even if a device only has one year of official warranty (which seems ridiculous to begin with), Apple should offer a similar level of support for a reasonable period without the customer having to resort to quoting consumer law.

Even if Apple wasn’t interested in doing what was morally right they should be able to see that providing superior service for what they claim is a superior product at a superior price is just good business because it maintains a positive relationship with the customer.

My final complaint regards Apple’s design direction. This is critical because whatever else they stand for, surely good design is their primary advantage over the opposition. But some Apple software recently has been obscure at best and incomprehensibly bizarre at worst, and iTunes has become a “gold standard” for cluttered, confusing user interfaces.

When I started programming Macs in the 1980s there was a large section in the programming documentation about user interface design. The rules were really strict, but resulted in consistent and clear software which came from many different developers, including Apple. I don’t do that sort of programming any more but if a similar section exists in current programming manuals there is little sign that people – even Apple themselves – are taking much notice!

So is Apple doomed? Well probably not. They are (by some measures) the world’s biggest, richest, and most innovative company. They are vying with a few others to become the first trillion dollar company. And, in many ways they still define the standard against which all others are judged. For example, every new smart phone which appears on the market is framed by some people as an “iPhone killer”. They never are, but the fact that products aspire to be that, instead of a Samsung or Huawei killer says a lot about the iPhone.

But despite the fact that Apple isn’t likely to disappear in the immediate future, I still think they need to be more aware of their real and perceived weaknesses. If they aren’t there is likely to be an extended period of slow decline and reduced relevance. And a slow slide into mediocrity is, in many ways, worse than a sudden collapse.

So, Tim Cook, if you are reading this blog post (and why wouldn’t you), please take notice. Here’s just one suggestion: when your company releases a new laptop with connections that are unusable without dongles, throw a few in with the computer, and keep the price the same as the model it replaces, and please, try to make them reliable, and if they aren’t, make sure the service and replacement process is quick and easy.

It’s really not that hard to avoid doom.

Making Us Smart

June 28, 2017

Many people think the internet is making us dumb. They think we don’t use our memory any more because all the information we need is on the web in places like Wikipedia. They think we don’t get exposed to a variety of ideas because we only visit places which already hold the same views as we do. And they think we spend too much time on social media discussing what we had for breakfast.

Is any of this stuff true? Well, in some cases it is. Some people live very superficial lives in the virtual world but I suspect those same people are just naturally superficial and would act exactly the same way in the real world.

For example, very few people, before the internet became popular, remembered a lot of facts. Back then, some people owned the print version of the Encyclopaedia Britannica, and presumably these were people who valued knowledge, because the print version wasn’t cheap!

But a survey run by the company found that the average owner only used that reference once per year. If they only referred to an encyclopedia once a year it doesn’t give them much to remember really, does it?

Today I probably refer to Wikipedia multiple times per day. Sure I don’t remember many of the details of what I have read, but I do tend to get a good overview of the subject I am researching or get a specific fact for a specific purpose.

And finding a subject in Wikipedia is super-easy. Generally it only takes a few seconds, compared with much longer looking in an index, choosing the right volume, and finding the correct page of a print encyclopedia.

Plus Wikipedia has easy to use linking between subjects. Often a search for one subject leads down a long and interesting path to other, related topics which I might never learn about otherwise.

Finally, it is always up to date. The print version was usually years old but I have found information in Wikipedia which refers to an event which happened just hours before I looked.

So it seems to me that we have a far richer and more accessible information source now than we have ever had in the past. I agree that Wikipedia is susceptible to a certain extent to false or biased information, but how often does that really happen? Very rarely in my experience, and a survey done a few years back indicated the number of errors in Wikipedia was fairly similar to Britannica (which is also a web-based source now, anyway).

Do we find ourselves mis-remembering details or completely forgetting something we have just seen on the internet? Sure, but that isn’t much to do with the source. It’s because the human brain is not a very good memory device. If it was true that we are remembering less (and I don’t think it is) that might even be a good thing because it means we have to get our information from a reliable source instead!

And it’s not even that this is a new thing. Warnings about how new technologies are going to make us dumb go back many years. A similar argument was made when mass production of books became possible. Few people would agree with that argument now and few people will agree with it being applied to the internet in future.

What about the variety of ideas issue? Well people who only interact with sources that tell them what they want to believe on-line would very likely do the same thing off-line.

If someone is a fundamentalist Christian, for example, they are very unlikely to be in many situations where they will be exposed to views of atheists or Muslims. They just wouldn’t spend much time with people like that.

In fact, again there might be a greater chance of being exposed to a wider variety of views on-line, although I do agree that the echo chambers of like-minded opinion which Facebook and other sites tend to become are a problem.

And a similar argument applies to the presumption that most discussion on-line is trivial. I often hear people say something like “I don’t use Twitter because I don’t care what someone had for breakfast”. When I ask how much time they have spent on Twitter I am not surprised to hear that it is usually zero.

Just to give a better idea of what value can come from social media, here are the topics of the top few entries in my current Twitter feed…

I learned that helium is the only element that was discovered in space before found on earth. (I already knew that because I am an amateur astronomer, but it is an interesting fact, anyway).

New Scientist reported that the ozone layer recovery will be delayed by chemical leaks (and it had a link if I want details).

ZDNet (a computer news and information site) tweeted the title of an article: “Why I’m still surprised the iPhone didn’t die.” (and again there was a link to the article).

New Scientist also tweeted that a study showed that “Urban house finches use fibres from cigarette butts in their nests to deter parasites” (where else would you get such valuable insights!)

Guardian Science reported that “scientists explain rare phenomenon of ‘nocturnal sun'” (I’ll probably read that one later).

ZDNet reported the latest malware problem with the headline “A massive cyberattack is hitting organisations around the world” (I had already read that article)

Oxford dictionaries tweeted a link to an article about “33 incredible words ending in -ible and -able” (I’ll read that and add it to my interesting English words list).

The Onion (a satirical on-line news site) tweeted a very useful article on “Tips For Choosing The Right Pet” including advice such as “Consider a rabbit for a cuddly, low cost pet you can test your shampoo on”.

Friedrice Nietzsche tweeted “five easy pentacles” (yes, I doubt this person is related to the real Nietzsche, and I also have no idea what it means).

Greenpeace NZ linked to an article “Read the new report into how intensive livestock farming could be endangering our health” (with a link to the report).

Otago Philosophy tweeted that “@Otago philosopher @jamesmaclaurin taking part in the Driverless Future panel session at the Institute of Public Works Engineers Conference” (with a link).

I don’t see a lot of trivial drivel about breakfast there. And where else would I get such an amazing collection of interesting stuff? Sure, I get that because I chose to follow people/organisations like science magazines, philosophers, and computer news sources, but there is clearly nothing inherently useless about Twitter.

So is the internet making us dumb? Well, like any tool or source, if someone is determined to be misinformed and ignorant the internet can certainly help, but it’s also the greatest invention of modern times, the greatest repository of information humanity has ever had, and something that, when treated with appropriate respect, will make you really smart, not dumb!