Posts Tagged ‘technology’

Home or Away?

April 2, 2018 Leave a comment

Last night I went to a concert featuring the popular performer Ed Sheeran. Now, I do have to say that I'm not necessarily a big fan, and it was really an event my wife wanted to go to rather than me, but he is a competent musician and some of his material is quite good. Yes, I'm sort of damning with faint praise there, a bit!

The small city I live in invested in a covered stadium – the only one in New Zealand – a few years back, and it has been a real asset in many ways, attracting music events which would otherwise have been unlikely to come here. Ed Sheeran was one, and I also saw Robbie Williams and Black Sabbath there recently.

But what's the point? Well, I do have to say that live concerts featuring leading performers like Ed Sheeran (and Ozzy Osbourne!) are quite special, and there's something unique about actually being at a real event. A similar argument applies to watching movies in a real movie theatre instead of at home. But at the same time, the standard of entertainment experience I now have at home is pretty exceptional too!

I was listening to some music on my AV system today and a particular song played which was beautifully recorded in the old-fashioned way: without a lot of digital processing or fancy techniques, just a few mics recording directly onto a fairly high quality medium (probably analog). The sound was just so pure and true, and orders of magnitude better than anything I have heard at a live concert, where the sound quality is actually pretty poor (especially in a roofed stadium, where echo can blur the sound).

I have a fairly sophisticated AV system with a good quality multi-channel receiver, speakers, and other components. It’s nowhere near as high-end as a true fanatic with plenty of money might have, but it is far better than the average system. Anyway, when the source is good it really can sound great. There’s plenty of power, good bass from the sub, and I have fine-tuned everything to optimise the sound. The biggest issue is that I have some items in the room which vibrate when the bass gets too extreme – but my wife won’t let me remove them. I mean, does the wood burner really need a chimney (especially one that vibrates at about 30 Hz)? I don’t think so!

I also recently upgraded my TV to a UHD (4K) model with HDR. The screen is only moderately big at 58 inches, but the room isn’t big enough to make anything bigger practical. But again, the picture quality can be magnificent. With a really good source, recorded in UHD, at a high frame rate, and optimised for HDR, it’s almost like the picture is a real thing you can reach out and touch. The blacks are really deep, the whites are super bright, and the colours can be really saturated but also be subtle and realistic. Again, I spent a fair bit of time optimising the many settings the TV has to get it working the way I like.

So my point is: why would I want to go to a movie or a live concert? The system I have at home offers a far better experience. Even if I ignore the tedium of the tasks associated with the outside experience – like finding parking, buying movie tickets, and driving home through massive traffic jams after concerts – the home system still looks and sounds better. And, if you ignore the initial cost of the equipment (over $15,000 at original full price), it is far cheaper too.

As I said above, there is something special about live events, so I will probably continue going to them, but home-based AV systems are certainly a great alternative, especially when combined with services like Apple Music and Netflix.


Threat or Opportunity

March 12, 2018 Leave a comment

I have discussed the idea that our whole universe could be a simulation in past blog posts. I have also mentioned recent progress in virtual reality systems on multiple occasions. And finally, I have enthused over thought experiments at least once. How do all of these factors fit together? Well, read on to find out.

Pong was one of the first computer games. It was ultra-simple: each player moved a bar up and down the screen to "hit" a ball – a bit like table tennis (or ping pong, hence the name). Move forward almost 50 years and look at what we have now. People report being totally immersed in virtual reality (VR) games to the point where they almost accept them as reality.

So what will the simulations of reality be like in another 50 years? If we can already produce experiences which are almost indistinguishable from reality, surely in 50 years the experience will literally be impossible to distinguish from reality. Will that make it a type of reality in itself?

Apart from the philosophical question about what reality even is, if we assume the VR is not actually real, is it as good as, or even better than actual reality? And would people prefer to live in an artificial reality rather than the real world?

Most people will say no, but before they do I would ask them some questions, as a sort of thought experiment (remember, thought experiments are one of my favourite things).

First I would ask this: if your "real" life wasn't that great, would you choose to live in a virtual world instead? This might be one where your body simply exists in a facility, your life artificially maintained, while you "live" in a virtual world. Most people would reject this idea.

But what about this: you are living your life which is pretty good, but you suddenly discover that it is really an artificial reality, and your life is far worse in the real world. Would you choose to terminate the simulation? In this case I think most people would be more hesitant.

If you had been paralysed by an accident, for example, why not live in a simulation where you are fully mobile? That might be tempting. And what if you are really poor with a poor quality of life – would you live in a simulation where you have whatever you need (or at least a comfortable life, because realistic simulations probably shouldn't stray too far into fantasy)? Maybe that might not be so appealing.

Many people will say that they need real human contact in the real world. But do they? People already enjoy interacting with their friends and family using phones, Skype, and other systems. If VR could make these interactions totally convincing, what would be the point of being in the same location as the other person?

And if people are happy to interact with other people through artificial means, is it a big step to interact with artificial people instead, assuming they were indistinguishable from actual humans? In science fiction people often form bonds with non-humans and machines, although the machines are usually portrayed as quirky humans (think of the android Lieutenant Commander Data in Star Trek); surely the technology would be sufficient to make them just like any real human.

So if Data’s personality just existed in a computer and could be portrayed through VR then we have (paradoxically) an entirely artificial but totally authentic experience.

Emotionally these ideas seem distasteful to many people now, but I think they might be inevitable in the future, and I don’t think that future is far away. Would I want to live in a simulation? Well, I also have that emotional negative response but if it really is indistinguishable from reality then why not?

There are plenty of science fiction stories where characters live in artificial realities. Generally these have dystopian themes where the character wants to “escape” back to reality. But I wonder whether that would be the most likely response. I also wonder how soon this potential dystopia could become a real threat… or opportunity.

Utopia or a Dystopia?

February 5, 2018 Leave a comment

I have been interested in artificial intelligence for years, without being too deeply involved in it, and it seemed that until recently there was just one disappointment after another from this potentially revolutionary area of technology. But now it seems that almost every day there is some fascinating, exciting, and often worrying news about the latest developments in the area.

One recent item which might be more significant than it seems initially is the latest iteration of AlphaGo, Google’s Go playing AI. I wrote about AlphaGo in a post “Sadness and Beauty” from 2016-03-16 after it beat the world champion in the game Go which many people thought a computer could never master.

Now AlphaGo Zero has beaten AlphaGo by 100 games to zero. But the significant thing here is not the incremental improvement; it is the change in the way the "Zero" version works. The zero in the name stands for zero human input, because the system learned how to win at Go entirely by itself. The only original input was the rules of the game.

While learning winning strategies AlphaGo Zero “re-created” many of the classic moves humans had already discovered over the last few thousand years, but it went further than this and created new moves which had never been seen before. As I said in my previous post on this subject, the original AlphaGo was already probably better than any human, but the new version seems to be completely superior to even that.
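The core idea – rules in, skill out via self-play – can be sketched with a toy game. This is only an illustrative analogy, not DeepMind's actual method (which uses deep networks and Monte Carlo tree search); the game, numbers, and names here are all invented. The program is told only the rules of a tiny matchstick game (take 1 or 2 from a pile; whoever takes the last one wins), plays against itself, and rediscovers the classic winning strategy ("always leave a multiple of 3") with no human examples:

```python
import random

random.seed(1)
PILE, EPISODES, EPSILON = 12, 30000, 0.2
Q = {}  # Q[(pile, move)] -> running average outcome for the player to move
N = {}  # visit counts for the running average

def choose(pile, explore=True):
    moves = [m for m in (1, 2) if m <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(moves)        # occasionally try something new
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

for _ in range(EPISODES):
    pile, history = PILE, []
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    outcome = 1.0  # the player who took the last match won
    for state in reversed(history):
        N[state] = N.get(state, 0) + 1
        Q[state] = Q.get(state, 0.0) + (outcome - Q.get(state, 0.0)) / N[state]
        outcome = -outcome  # players alternate, so flip the result each ply

# After training, the greedy policy leaves the opponent a multiple of 3:
for pile in (4, 5, 7, 8):
    print(pile, choose(pile, explore=False))
```

Nothing in the code mentions the "multiple of 3" rule; it emerges purely from the averaged outcomes of self-play, which is the (vastly simplified) point of the Zero approach.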

And the truly scary thing is that AlphaGo Zero did all this in such a short period of time: DeepMind reported that it took about three days of self-play to surpass the version which beat Lee Sedol, and around 40 days to surpass every previous version. So in that time a single AI learned far more about the game than millions of humans have in thousands of years. That's scary.

Remember that AlphaGo Zero was created by programmers at Alphabet’s Google DeepMind in London. But in no way did the programmers write a Go playing program. They wrote a program that could learn how to play Go. You could say they had no more input into the program’s success than a parent does into the success of a child whom they abandon at birth. It is sort of like supplying the genetics but not the training.

You might wonder why Alphabet (Google’s parent company) has spent so much time and money creating a system which plays an obscure game. Well the point, of course, is to create techniques which can be used in more general and practical situations. There is some debate amongst experts at the moment about how easily these techniques could be used to create a general intelligence (one which can teach itself anything, instead of just a specific skill) but even if it only works for specific skills it is still very significant.

There are many other areas where specialised intelligence by AIs has exceeded humans. For example, at CERN (the European nuclear research organisation) they are using AI to detect particles, labs are developing AIs which are better than humans at finding the early signs of cancer, and AIs are now good at detecting bombs at airports.

So even if a human level general intelligence is still a significant time away, these specialised systems are very good already, even at this relatively early time in their development. It’s difficult to predict how quickly this technology might advance, because there is one development which would make a revolutionary rather than evolutionary change: that is an AI capable of designing AIs – you might call this a meta-AI.

If that happens then all bets are off.

Remember that an AI isn’t anything physical, because it is just a program. In every meaningful way creating an AI program is just like playing a game of Go. It is about making decisions and creating new “moves” in an abstract world. It’s true that the program requires computer hardware to run on, but once the hardware reaches a reasonable standard of power that is no more important than the Go board is to how games proceed. It limits what can be done in some ways, but the most interesting stuff is happening at a higher level.

If AlphaGo Zero can learn more in a week than every human who ever played Go could learn in thousands of years, then imagine how much progress a programming AI could make compared with every computer scientist and programmer who ever existed. There could be new systems which are orders of magnitude better developed in weeks. Then they could create the next generation which is also orders of magnitude better. The process would literally be out of control. It would be like artificial evolution running a trillion times faster than the natural version, because the generation time is so short and the “mutations” are planned rather than being random.
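The compounding arithmetic above is worth making concrete. The numbers below are pure assumptions for illustration – suppose each generation of AI designs a successor 10 times as capable, and each design cycle takes half as long as the last:

```python
# Illustrative arithmetic only: both the 10x improvement factor and the
# halving generation time are invented assumptions, not predictions.
capability, months, elapsed = 1.0, 12.0, 0.0
for generation in range(1, 9):
    elapsed += months          # time spent designing this generation
    capability *= 10           # assumed improvement per generation
    months /= 2                # assumed speed-up of the next design cycle
    print(f"gen {generation}: capability x{capability:.0f} after {elapsed:.2f} months")
```

Under those assumptions, a hundred-million-fold improvement arrives in under two years, with each later generation landing faster than the last – which is what "out of control" means here: the interval between generations shrinks toward zero.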

When I discussed the speed that AlphaGo Zero had shown when it created the new moves, I used the word “scary”, because it literally is. If that same ability existed for creating new AIs then we should be scared, because it will be almost impossible to control. And once super-human intelligence exists it will be very difficult to reverse. You might think something like, “just turn off the computer”, but how many backups of itself will exist by then? Simple computer viruses are really difficult to eliminate from a network, so imagine how much more difficult a super-intelligent “virus” would be to remove.

Where that leaves humans, I don't know for sure. I have said in a previous post that humans will be redundant, but now I'm not totally sure that is true. Maybe there will be a niche for us, at least temporarily, or maybe humans and machines will merge in some way. Experts disagree on how much of a threat AI really is. Some predict a "doomsday" where human existence is fundamentally threatened, while others predict a bright future for us, free from the tedious tasks which machines can do better, where we can pursue the activities we *want* to do rather than those we *have* to do.

Will it be a utopia or a dystopia? No one knows. All we know is that the world will never be the same again.

The Future of Driving

January 31, 2018 Leave a comment

In a recent post, I talked about how electric power seems to be the inevitable future of cars. This is probably not too surprising to most people given the way electric cars have become so much more popular recently, and how the company Tesla has successfully captured a lot of headlines (in many cases deservedly so, because of its technical advances, and in other cases mainly because of the star status of its founder, Elon Musk).

But a much greater revolution is also coming: self-driving cars. In the future people will not be able to comprehend how we allowed humans to drive, and how we tolerated the massive inefficiency and the huge number of accidents and deaths which resulted.

In my previous post I commented on how I am a “petrol-head” and enjoy driving, as well as liking the “insane fury” of current petrol powered supercars. I commented on how electric cars have no “soul” and this would appear to apply even more to self-driving cars. Before I provide the answer to how this travesty can be avoided, I want to present some points on how good self-driving cars should be.

First, there is every indication that computers will be far better than humans at driving, especially in terms of safety. Even current versions of self-driving systems are better than the average human, and they will surely be better still once the algorithms are refined and more infrastructure is in place for them.

Whether computer controlled cars are currently better than the best humans is debatable, because I have seen no data on this, but that doesn’t really matter because being better than the actual, flawed, unskilled humans doing most of the driving now is all that is required.

In fact, the majority of accidents involving self-driving systems so far can be attributed to errors by human drivers which the AI couldn't avoid – a self-driving car still has to obey the laws of physics, and not every accident can be avoided, even by a perfect AI.

So if we switched to self-driving cars, how would things change? Well, to get the full benefit of this technology all cars would need to be self-driving. While some cars are still driven by humans there will always be an element of unpredictability in the system. Plus all the extra infrastructure needed by humans (see later for examples) will need to be kept in place.

Ultimately, as well as all cars being self-driven, the system would also require all vehicles to be able to communicate with each other. This would allow information to be shared and maybe for a central controller to make the system run more efficiently. It might also be possible, and maybe preferable, to have a distributed intelligence instead, where the individual components (vehicles) make decisions in cooperation with other units nearby.

The most obvious benefit would be to free up time for humans who could do something more useful than driving. They could read a book, read a newspaper, watch a movie, write their blog, do some work, etc, because the car would be fully automated.

But it goes far beyond that, because all of the rules we have in place today to control human drivers would be unnecessary. There would be no need for speed limits, for example, because the cars would drive at the speed best for the exact conditions at the time. They would use factors like the traffic density and weather conditions and set their speed appropriately.
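A car setting its own speed from conditions might work roughly like this sketch. Every factor and weighting below is made up for illustration; a real controller would use measured data and far more inputs:

```python
# Hedged sketch of condition-based speed selection (all factors invented).
def target_speed(free_flow_kmh, density, wet, visibility):
    """density and visibility are fractions in [0, 1]; returns km/h."""
    speed = free_flow_kmh * (1 - 0.5 * density)  # slow down as traffic thickens
    speed *= visibility                          # slow down in fog or darkness
    if wet:
        speed *= 0.8                             # extra margin on wet roads
    return round(speed)

print(target_speed(130, 0.2, False, 1.0))  # light traffic, clear and dry
print(target_speed(130, 0.7, True, 0.6))   # heavy traffic, wet and foggy
```

The point is that the "limit" becomes a continuous function of live conditions rather than a number on a sign.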

There's no doubt that even today traffic could move much faster than it does if proper driving techniques were used. The problem is that most drivers aren't good enough to drive quickly. But speed and safety can co-exist, as suggested by Germany's autobahns, where there is often no speed limit yet accident rates are comparable to, or better than, those on US highways.

There would be no need to have lanes and other symbols marked on roads, and even the direction vehicles are travelling in the lane could be swapped depending on traffic density. All the cars would know the rules and always obey them. Head-on crashes would be almost impossible even when a lane swaps the direction the traffic is flowing in.

The same would apply to turning traffic. A car could make a turn into a stream of traffic because communications with the other cars in that stream would ensure the space was available. There would be no guessing if another driver would be polite enough to create a gap, and no guessing exactly how much time was needed because all distances and speeds would be known exactly.

I could imagine a scene where traffic was flowing, turning, and merging seemingly randomly at great speed, in a way that would look suicidal today but would in reality be precisely coordinated.

Then there’s navigation. Most humans can follow GPS instructions fairly well, but how much better would this be when all the cars shared knowledge about traffic congestion and other delays, and planned the routes based on that, as well as the basic path?

Finally there’s parking. No one would need to own a car because after completing the journey the car could go and be used by someone else. It would never need to park, except for recharging and maintenance, which could also be automatic. All the payments could be done transparently and the whole system should be much cheaper than personally owning and using a car, like we do now.

The whole thing sounds great, and there are almost no disadvantages. But I still don't like it in some ways: my car is part of my identity, I like driving, and the new world of self-driving electric cars sounds very efficient but seems to lack any personality or fun.

But that won’t matter, because there will be two ways to overcome this deficiency. First, there might be lots of tracks where people can go to test their driving skills in traditional human driven – maybe even petrol powered – cars as a recreational activity, sort of like how some people ride horses today. And second, and far more likely, virtual reality will be so realistic that it will be almost indistinguishable from real driving, but without the risks.

And while I am on the subject of VR, it should be far less necessary to travel in the future because so much could be done remotely using VR and AR systems. So less traffic should be another factor making the roads far more efficient and safe.

In general the future in this area looks good. I suspect this will all happen in about 20 years, and when it does, people will be utterly shocked that we used to control our vehicles ourselves, especially when they look at the number of accidents and fatalities, and the amount of time wasted each day. Why would we drive when a machine can do it so much better, and we could use that time for something far more valuable?

The Future of Cars

January 28, 2018 Leave a comment

I have mixed feelings about the idea of electric and self driving cars. I am a bit of a “petrol-head” (car enthusiast) myself and enjoy driving fast, reading about fast cars, and watching supercar videos, so the new generation of cars is not necessarily welcome to me.

There is no doubt that electric power and self-driving cars are the future, but both of these remove the fun factor from driving. Of course, that might be thought of as a small price to pay for the huge advantages the future will bring, but it’s still kind of sad.

But I should talk a little bit about how great the future will be with these two technologies first before I discuss the disadvantages. So here’s what is so great about electric cars (I’ll deal with self-driving technology later)…

Electric is fast. I said I was a "petrol head" and liked driving fast, but I guess I could adapt to fast driving in electric cars as well. After all, no petrol car can get close to an electric for initial acceleration off the line. Electric motors produce maximum torque from zero RPM; my twin turbo petrol car (and every other conventional car) takes a lot longer to reach peak torque.
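The flat torque curve is the whole story, and a crude launch model shows why. Everything here is invented for illustration (no drag, no wheelspin, fixed gearing, made-up torque figures); the only real physics is force = torque x gearing / wheel radius and F = ma:

```python
# Crude straight-line launch model: how long to reach ~100 km/h (27.8 m/s)?
def time_to_speed(torque_at, target_ms=27.8, mass=1800.0, gear=9.0,
                  wheel_radius=0.33, dt=0.01):
    v = t = 0.0
    while v < target_ms:
        force = torque_at(v) * gear / wheel_radius  # motor torque -> tractive force
        v += (force / mass) * dt                    # simple Euler integration
        t += dt
    return t

electric = lambda v: 450.0                           # near-flat torque from 0 RPM
petrol   = lambda v: 450.0 * min(1.0, 0.3 + v / 15)  # torque builds as revs/boost rise

print(f"electric: {time_to_speed(electric):.1f} s, petrol: {time_to_speed(petrol):.1f} s")
```

Same peak torque in both cases, yet the electric wins off the line simply because it has all of that torque from the first instant.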

Electric is cheap. Well, when I say it is cheap I mean it is cheap to run. Unfortunately at the moment the initial cost is far too high, mainly because high capacity batteries are not being mass produced in enough quantity to bring the price down. Some countries have subsidies to encourage the use of electrics, but this shouldn’t be necessary, and hopefully one day won’t be.

Electric is simple. Modern petrol powered cars are ridiculously complex. Depending on what you count as essential components, a petrol car might have hundreds or thousands of moving parts, against just a few on an electric (again, the number of parts depends on whether you count cooling fans for the batteries, air conditioning, and other extra components). Despite this, modern petrol engines (and transmissions) are incredibly reliable. But an electric can have one moving part (essentially the rotor of the engine) connected directly to the wheel. That’s one moving part for the whole drive train! There are no cam shafts, valves, turbos, gearboxes, differentials, or CV joints. Once electric cars become better established their reliability just has to be far greater.

Electric is quiet. The sound of a high performance petrol engine might be music to the ears of a true enthusiast like me, but to many people it is just an annoyance. Electric cars are so quiet that the silence almost becomes a hazard, but this will soon come to seem normal.

Electric is environmentally sound. The advantages to the environment of electric cars aren’t quite as obvious as is often imagined, but they are still significant. There is little doubt that electricity generated centrally and used to charge batteries for cars is superior to burning fossil fuels in an engine – especially when an increasing fraction of electricity generation is from renewable sources – but the production of batteries, and their disposal after they lose efficiency, is an extra environmental issue which is sometimes not considered. This makes the environmental advantage of electric cars a bit less certain, but the consensus seems to be that they are still significant.

Electric is the future. Even if you debate the points I have made above it seems that electric cars are an idea whose time has come. Even though they still make up a small fraction of the total fleet, there is a clear trend to them becoming more common on our roads. And, most importantly, they are now an obvious option for anyone buying a new car, where in the past they were a fringe possibility that few people would take seriously.

Of course, there are big disadvantages too. I have already mentioned the initial cost, but the other major factors are range, slow recharging, and the lack of recharging points. The first two are inherent to the technology but are improving rapidly. The last is a sort of "Catch-22" situation: there aren't enough recharging points because there aren't enough electric cars needing recharging, because there aren't enough charging points for them.

There's nothing quite like the sound of a high performance petrol car being thrashed – the sight and sound of a Lamborghini or McLaren exhaust system spitting flames is just awesome – and there's no doubt that petrol cars have more "soul" than electrics. But people said the same thing about steam engines before they were replaced. I guess petrol cars will go the same way, so we might as well accept the inevitability of technical progress and just get used to it.

I started this post by mentioning both electric and self-driving cars and I don’t seem to have got onto the self-driving part yet, which is actually far more controversial and revolutionary. So I might leave that to a future entry, since it deserves a post to itself.

So, until I switch to an electric myself I will continue to enjoy driving my current car – but I won’t try to race a Tesla away from the lights!

Random Clicking

January 14, 2018 Leave a comment

Nowadays, most people need to access information through computers, especially through web sites. Many people find the process involved with this quite challenging, and this isn’t necessarily restricted to older people who aren’t “digital natives”, or to people with no interest in, or predisposition towards technology.

In fact, I have found that many young people find some web interfaces bizarre and unintuitive. For example, my daughter (in her early 20s) thinks Facebook is badly designed and often navigates using “random clicking”. And I am a computer programmer with decades of experience but even I find some programs and some web sites completely devoid of any logical design, and I sometimes revert to the good old “random clicking” too!

For example, I received an email notification from Inland Revenue last week and was asked to look at a document on their web site. It should have taken 30 seconds but it took closer to 30 minutes and I only found the document using RC (random clicking).

Before I go further, let me describe RC. You might be presented with a web site or program/app interface and want to do something. There might be no obvious way to get where you want to go, or you might take the obvious route only to find it doesn't lead where you expected. Or, of course, you might get a random error message like "page not available" or "internal server error", or even the dreaded "this app has quit unexpectedly", the blue screen of death, or a spinning activity wheel.

So to make progress it is necessary just to do some RC on different elements, even if they make no sense, until you find what you are looking for. Or in more extreme cases you might even need to “hack” the system by entering deliberately fake information, changing a URL, etc.

What’s going on here? Surely the people involved with creating major web sites and widely used apps know what they are doing, don’t they? After all, many of these are the creations of large corporations with virtually unlimited resources and budgets. Why are there so many problems?

Well, there are two explanations: first, errors happen occasionally no matter how competent the organisation involved, and because we use these major sites and apps so often we tend to see their errors more often too; and second, large corporations create their products through a highly bureaucratic and obscure process, and consistency and attention to detail are difficult to attain under such a scheme.

When I encounter errors, especially on web sites, I like to keep a record of them by taking a screenshot. I keep these in a folder to make me feel better if I make an error in one of my own projects, because it reminds me that sites created by organisations with a hundred programmers and huge budgets often have more problems than those created by a single programmer with no budget.

So here are some of the sites I currently have in my errors folder…

APN (couldn’t complete your request due to an unexpected error – they’re the worst type!)
Apple (oops! an error occurred – helpful)
Audible (we see you are going to x, would you rather go to x?)
Aurora (trying to get an aurora prediction, just got a “cannot connect to database”)
BankLink (page not found, oh well I didn’t really want to do my tax return anyway)
BBC (the world’s most trusted news source, but not the most trusted site)
CNet (one of the leading computer news sources, until it fails)
DCC (local body sites can be useful – when they work)
Facebook (a diabolical nightmare of bad design, slowness, and bugginess)
Herald (NZ’s major newspaper, but their site generates lots of errors)
InternetNZ (even Internet NZ has errors on their site)
IRD (Inland Revenue has a few good features, but their web site is terrible overall)
Medtech (yeah, good luck getting essential medical information from here)
Mercury (the messenger of the gods dropped his message)
Microsoft (I get errors here too many times to mention)
Fast Net (not so fast when it doesn’t work)
Origin (not sure what the origin of this error was)
Porsche (great cars, web site not so great)
State Insurance (state, the obvious choice for a buggy web site)
Ticketmaster (I don’t have permission for the section of the site needed to buy tickets)
TradeMe (NZ’s equivalent of eBay is poorly designed and quite buggy)
Vodafone (another ISP with web site errors)
WordPress (the world’s leading blogging platform, really?)
YesThereIsAGod (well if there is a god, he needs to hire better web designers)

Note that I also have a huge pile of errors generated by sites at my workplace. Also, I haven’t even bothered storing examples of bad design, or of problems with apps.

As I said, there are two types of errors, and those caused by temporary outages are annoying but not disastrous. The much bigger problem is the sites and apps which are just inherently bad. The two most prominent examples are Facebook and Microsoft Word. Yes, those are probably the most widely used web site and most widely used app in the world. If they are so bad why are they so popular?

Well, popularity can mean two things: first, something is very widely used, even if it is not necessarily very well appreciated; and second, something which is well-liked by users and is utilised because people like it. So you could say tax or work is popular because almost everyone participates in them, but that drinking alcohol, or smoking dope, or sex, or eating burgers is popular because everyone likes them!

Facebook and Word are popular but most people think they could be made so much better. Also many people realise there are far better alternatives but they just cannot be used because of reasons not associated with quality. For example, people use Facebook because everyone else does, and if you want to interact with other people you all need to use the same site. And Word is widely used because that is what many workplaces demand, and many people aren’t even aware there are alternatives.

The whole thing is a bit grim, isn't it? But there is one small thing I would suggest which could make things better: if you are a developer with a product which has a bad interface, don't try to change it unless you are almost certain you can improve it significantly. People can get used to badly designed software, but coping with a different yet equally bad interface in a new version is just annoying.

The classic example is how Microsoft has changed the interface between Office 2011 and Office 2016 (these are the Mac versions, but the same issue exists on Windows). The older version has a terrible, primitive user interface but after many years people have learned to cope with it. The newer version has an equally bad interface (maybe worse) and users have to re-learn it for no benefit at all.

So, Microsoft, please just stop trying. You have a captive audience for your horrible software so just leave it there. Bring out a new version so you can steal more money from the suckers who use it, but don’t try to improve the user interface. Your users will thank you for it.

Introduction to the Elements

December 29, 2017 Leave a comment

The Greek philosophers were incredibly smart people, but they didn’t necessarily know much. By this I mean that they were thinking about the right things in very intelligent and perceptive ways, but some of the conclusions they reached weren’t necessarily true, simply because they didn’t have the best tools to investigate reality.

Today we know a lot more, and even the most basic school science course will impart far more real knowledge to the average school student than even the greatest philosophers, like Aristotle, could have possessed.

I have often thought about what it would be like to talk to one of the ancient Greeks about what they thought about the universe and what we have found out since, including how we know what we know. Coincidentally, this might also serve as a good overview of our current knowledge to any interested non-experts today.

Of course, modern technology would be like total magic to any ancient civilisation. In fact, it would seem that way to a person from just 100 years ago. But in this post I want to get to more fundamental concepts than just technology, mostly the ancient and modern ideas about the elements, so let’s go…

The Greeks, as well as several other ancient cultures, had arrived at the concept of elements: the fundamental substances from which everything else was made. The classic 4 elements were fire, air, water, and earth. In addition, a fifth element, aether, was added to account for the non-material and heavenly realm.

This sort of made sense because you might imagine that those components resulted when something changed form. So burning wood releases fire and air (smoke) and some earth (ash) which seemed to indicate that they were original parts of the wood. And sure, smoke isn’t really like air but maybe that’s because it was made mainly from air, with a little bit of earth in it too, or something similar.

So I would say to a philosopher visiting from over 2000 years ago that they were on the right track – especially the atomists – but things aren’t quite the way they thought.

Sure, there are elements, but none of the original 4 are elements by the modern definition. In fact, those elements aren’t even the same type of thing. Fire is a chemical reaction, air is a mixture of gases, water is a molecule, and earth is a mixture of fine solids. The ancient elements correspond more to modern states of matter, maybe matching quite well with plasma, gas, liquid and solid.

The modern concept of elements is a bit more complicated. There are 92 of them occurring naturally, and they are the basic components of all of the common materials we see, although not everything in the universe as a whole is made of elements. The elements can occur by themselves or, much more commonly, combine with other elements to make molecules.

The elements are all atoms, but despite the name, these are not the smallest indivisible particles, because atoms are in turn made from electrons, protons, and neutrons, and the protons and neutrons are in turn made of quarks. As far as we know, these cannot be divided any further. But to complicate matters a bit more, there are many other indivisible particles. The best known of these from everyday life is the photon, which makes up light.

Different atoms all have the same structure: classically thought of as a nucleus containing a certain number of protons and neutrons surrounded by a cloud of electrons. There are the same number of protons (which have a positive charge) and electrons (which have a negative charge) in all neutral atoms. It is the number of protons which determines which atom (or element) is which. So one proton means hydrogen, 2 helium, etc, up to uranium with 92. That number is called the “atomic number”.

The number of neutrons (which have no charge) varies, and the same element can have different forms because they have a different number of neutrons. When this happens the different forms are called isotopes.
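The two rules above (protons pick the element, neutrons pick the isotope) can be sketched in a few lines of Python. The element table here is just a tiny illustrative sample of the 92 naturally occurring elements, not a full periodic table:

```python
# The atomic number (proton count) alone determines which element an
# atom is; the neutron count only selects an isotope of that element.
ELEMENT_BY_PROTONS = {1: "hydrogen", 2: "helium", 6: "carbon",
                      8: "oxygen", 26: "iron", 92: "uranium"}

def describe(protons, neutrons):
    """Name the element (from the proton count) and its isotope,
    using the conventional element-massnumber notation."""
    name = ELEMENT_BY_PROTONS.get(protons, "unknown in this sample")
    return f"{name}-{protons + neutrons}"

print(describe(1, 0))     # hydrogen-1 (ordinary hydrogen)
print(describe(1, 1))     # hydrogen-2 (deuterium: same element, different isotope)
print(describe(92, 146))  # uranium-238
```

Note how adding a neutron to hydrogen still gives hydrogen, just a heavier isotope, while changing the proton count would give a different element entirely.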

Protons and neutrons are big and heavy and electrons are light, so the mass of an atom is made up almost entirely of the protons and neutrons in the nucleus. The electrons are low mass and “orbit” the nucleus at a great distance compared with the size of the nucleus itself, so a hydrogen atom (for example, but this applies to all atoms and therefore everything made of atoms, which is basically everything) is 99.9999999999996% empty space!
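That empty-space figure can be checked with a rough back-of-the-envelope calculation. Assuming a proton radius of about 0.84 femtometres and a hydrogen atom radius (the Bohr radius) of about 53 picometres, and comparing volumes:

```python
# How "empty" is a hydrogen atom? Compare the volume of the nucleus
# (a single proton) with the volume of the whole atom.
proton_radius = 0.84e-15   # metres (approximate proton charge radius)
atom_radius = 5.3e-11      # metres (Bohr radius)

# Volume scales with the cube of the radius, so the filled fraction is:
filled_fraction = (proton_radius / atom_radius) ** 3
empty_percent = (1 - filled_fraction) * 100

print(f"{empty_percent:.13f}% empty")  # 99.9999999999996% empty
```

The filled fraction comes out at about 4 parts in a thousand trillion, which is where that long string of nines comes from.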

When I say protons are big and heavy I mean this only relatively, because there are 50 million trillion atoms in a single grain of sand (which means a lot more protons because silicon and oxygen, the two main elements in sand, both have multiple protons per atom).
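That sand-grain number can be sanity-checked too. This is only an order-of-magnitude sketch, and the grain size, shape, and density here are my own assumptions (a roughly spherical quartz grain about 1 mm across):

```python
import math

# Order-of-magnitude estimate of atoms in a grain of sand,
# assumed to be a sphere of quartz (SiO2) about 1 mm in diameter.
AVOGADRO = 6.022e23   # molecules per mole
density = 2.65        # grams per cm^3, typical for quartz
molar_mass = 60.08    # grams per mole of SiO2
radius = 0.05         # cm (1 mm diameter grain)

volume = (4 / 3) * math.pi * radius ** 3      # cm^3
mass = density * volume                       # grams
molecules = (mass / molar_mass) * AVOGADRO    # SiO2 units in the grain
atoms = molecules * 3                         # 1 silicon + 2 oxygen per unit

print(f"about {atoms:.1e} atoms")  # roughly 4e19, i.e. tens of millions of trillions
```

A slightly bigger grain easily pushes this to the 50 million trillion (5 × 10^19) mentioned above, so the figure is the right order of magnitude.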

When atoms combine we describe it using chemistry. This involves the electrons near the edge of an atom (the electrons form distinct “shells” around the nucleus) combining with another atom’s outer electrons. How atoms react is determined by the number of electrons in the outer shell. Atoms “try” to fill this shell and when they do they are most stable. The easiest way to fill a shell is to borrow and share electrons with other atoms.

Atoms with one electron in the outer shell or with just one missing are very close to being stable and are very reactive (examples: sodium, potassium, fluorine, chlorine). Atoms with that shell full don’t react much at all (examples: helium, neon).

There are far more energetic reactions which atoms can also participate in, when the nucleus splits or combines instead of the electrons. We call these nuclear reactions and they are much harder to start or maintain but generate huge amounts of energy. There are two types: fusion, where small atoms combine to make bigger ones, and fission, where big atoms break apart. The Sun is powered by fusion, and current nuclear power plants by fission.

After the splitting or combining, the resulting atom(s) have less mass/energy (they are the same thing, but that’s another story) than the original atom(s), and the difference is released according to the formula E=mc^2 discovered by Einstein. This means you can calculate how much energy (E) comes from a certain amount of mass (m) by multiplying by the speed of light squared (about 90 thousand trillion in SI units). That number is so large that a tiny amount of mass yields a huge amount of energy.
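To put some numbers on E=mc^2, here is what converting just one gram of mass would yield:

```python
# Energy released if one gram of mass were fully converted, via E = m * c^2.
c = 2.998e8              # speed of light, metres per second
mass = 0.001             # kilograms (one gram)

energy = mass * c ** 2   # joules

print(f"c squared is about {c ** 2:.2e}")        # ~9e16, the "90 thousand trillion"
print(f"one gram yields about {energy:.2e} J")   # ~9e13 joules
```

That 9 × 10^13 joules is roughly what a large one-gigawatt power station produces in a whole day, which shows why nuclear reactions, which convert only a small fraction of the mass involved, are still so powerful.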

Most reactions need a bit of initial energy to get them started, then release energy as they proceed. That’s why lighting a match next to some fuel starts a reaction which releases far more energy than the match supplied.

So water is a molecule made from one oxygen atom and two hydrogen atoms. But gold is an element all by itself and doesn’t bond well with others. And when two elements bind and form a molecule, the result is totally different from a simple mixture of the two elements. Take some hydrogen and oxygen and mix them and you don’t get water. But light a match and you get a spectacular result, because the hydrogen burns in the oxygen, forming water in the process. The energy content of water is lower than that of the two constituent gases, which explains all that extra energy escaping as fire. But the fire wasn’t an elementary part of the original gases, and neither was the water. You can see how the Greeks might have reached that conclusion, though.
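The key point, that a reaction only rearranges atoms rather than creating or destroying them, can be checked for the hydrogen-burning example (2 H2 + O2 → 2 H2O) with a small sketch:

```python
from collections import Counter

def count_atoms(terms):
    """Total up atoms across a list of (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, atoms in terms:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

# Burning hydrogen in oxygen: 2 H2 + O2 -> 2 H2O
reactants = [(2, {"H": 2}), (1, {"O": 2})]   # two H2 molecules, one O2
products = [(2, {"H": 2, "O": 1})]           # two H2O molecules

print(count_atoms(reactants))  # Counter({'H': 4, 'O': 2})
print(count_atoms(products))   # Counter({'H': 4, 'O': 2})
```

The same four hydrogen atoms and two oxygen atoms appear on both sides; only the way they are bonded changes, and the energy difference between those arrangements is what we see as fire.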

Basic classical physics and chemistry like this make a certain amount of intuitive sense, and the visiting philosopher would probably understand how it works fairly quickly. But then I would need to reveal that it is all really just an approximation to what reality is really like.

There would be a couple of experiments I could mention which would be very puzzling and almost impossible to explain based on the classical models. One would be the Michelson–Morley experiment, and the other would be the infamous double-slit experiment. These lead to the inevitable conclusion that the universe is far stranger than we imagined, and new theories – in this case relativity and quantum theory – must be used.

Whether our philosopher friend could ever gain the maths skills necessary to fully understand these is hard to say. Consider that the Greeks didn’t really accept the idea of zero, and you can see that they would have a long way to go before they could use algebra and calculus with any competence.

But maybe ideas like time and space being dynamic, gravity being a phenomenon caused by warped space-time, particles behaving like waves and waves behaving like particles depending on the experiment being performed on them, single particles being in multiple places at the same time, and particles becoming entangled, might be comprehensible without the maths. After all, I have a basic understanding of all these things, and I only use maths like algebra and calculus at a simple level.

It would be fun to list some of the great results of the last couple of hundred years of experimental science and ask for an explanation. For example, the observations made by Edwin Hubble showing the red-shifts of galaxies would be interesting to interpret. Knowing what galaxies actually are, what spectra represent, and how galactic distances can be estimated, would seem to lead to only one reasonable conclusion, but it would be interesting to see what an intelligent person with no pre-conceived ideas might think.

As I wrote this post I realised just how much background knowledge is necessary as a prerequisite to understanding our current view of the universe. I think it would be cool to discuss it all with a Greek philosopher, like Aristotle, or my favourite, Eratosthenes. And it would be nice to point out where they were almost right, like Eratosthenes’ remarkable attempt at calculating the size of the Earth, but it would also be interesting to see their reaction to where they got things badly wrong!