Archive

Posts Tagged ‘science’

Utopia or a Dystopia?

February 5, 2018

I have been interested in artificial intelligence for years, without being too deeply involved in it, and it seemed that until recently there was just one disappointment after another from this potentially revolutionary area of technology. But now it seems that almost every day there is some fascinating, exciting, and often worrying news about the latest developments in the area.

One recent item which might be more significant than it seems initially is the latest iteration of AlphaGo, Google’s Go playing AI. I wrote about AlphaGo in a post “Sadness and Beauty” from 2016-03-16, after it beat the world champion at Go, a game many people thought a computer could never master.

Now AlphaGo Zero has beaten AlphaGo by 100 games to zero. But the significant thing here is not the incremental improvement; it is the change in the way the “Zero” version works. The zero in the name stands for zero human input, because the system learned how to win at Go entirely by itself. The only original input was the rules of the game.

While learning winning strategies AlphaGo Zero “re-created” many of the classic moves humans had already discovered over the last few thousand years, but it went further than this and created new moves which had never been seen before. As I said in my previous post on this subject, the original AlphaGo was already probably better than any human, but the new version seems to be completely superior to even that.

And the truly scary thing is that AlphaGo Zero did all this in such a short period of time. I haven’t heard what the time period actually was, but judging by the dates of news releases, etc, it was probably just days or weeks. So in this time a single AI has learned far more about a game than millions of humans have in thousands of years. That’s scary.

Remember that AlphaGo Zero was created by programmers at Alphabet’s Google DeepMind in London. But in no way did the programmers write a Go playing program. They wrote a program that could learn how to play Go. You could say they had no more input into the program’s success than a parent does into the success of a child whom they abandon at birth. It is sort of like supplying the genetics but not the training.
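To make the idea of “a program that could learn how to play Go” a little more concrete, here is a heavily simplified sketch of the same principle applied to tic-tac-toe: the only input is the rules, and a table of position values is learned purely from self-play. This is emphatically not DeepMind’s actual method (which combines a deep neural network with tree search); every detail below is an illustrative assumption.

```python
# A toy "learn from the rules alone" sketch: tic-tac-toe values learned purely
# by self-play. Not DeepMind's method; just an illustration of the principle.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

values = {}                     # board state -> estimated value for player "X"
ALPHA, EPSILON = 0.2, 0.1       # learning rate and exploration rate

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)                  # occasional exploration
    def score(move):
        nxt = board[:]
        nxt[move] = player
        v = values.get("".join(nxt), 0.5)
        return v if player == "X" else 1 - v         # "O" prefers low X-value states
    return max(moves, key=score)

def self_play_game():
    board, player, history = ["."] * 9, "X", []
    while winner(board) is None:
        board[choose(board, player)] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": 0.0, "draw": 0.5}[winner(board)]
    for state in history:                            # push the final result back through the game
        old = values.get(state, 0.5)
        values[state] = old + ALPHA * (result - old)

for _ in range(20000):
    self_play_game()
print("positions evaluated:", len(values))
```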

You might wonder why Alphabet (Google’s parent company) has spent so much time and money creating a system which plays an obscure game. Well the point, of course, is to create techniques which can be used in more general and practical situations. There is some debate amongst experts at the moment about how easily these techniques could be used to create a general intelligence (one which can teach itself anything, instead of just a specific skill) but even if it only works for specific skills it is still very significant.

There are many other areas where specialised intelligence by AIs has exceeded humans. For example, at CERN (the European nuclear research organisation) they are using AI to detect particles, labs are developing AIs which are better than humans at finding the early signs of cancer, and AIs are now good at detecting bombs at airports.

So even if a human level general intelligence is still a significant time away, these specialised systems are very good already, even at this relatively early time in their development. It’s difficult to predict how quickly this technology might advance, because there is one development which would make a revolutionary rather than evolutionary change: that is an AI capable of designing AIs – you might call this a meta-AI.

If that happens then all bets are off.

Remember that an AI isn’t anything physical, because it is just a program. In every meaningful way creating an AI program is just like playing a game of Go. It is about making decisions and creating new “moves” in an abstract world. It’s true that the program requires computer hardware to run on, but once the hardware reaches a reasonable standard of power that is no more important than the Go board is to how games proceed. It limits what can be done in some ways, but the most interesting stuff is happening at a higher level.

If AlphaGo Zero can learn more in a week than every human who ever played Go could learn in thousands of years, then imagine how much progress a programming AI could make compared with every computer scientist and programmer who ever existed. There could be new systems which are orders of magnitude better developed in weeks. Then they could create the next generation which is also orders of magnitude better. The process would literally be out of control. It would be like artificial evolution running a trillion times faster than the natural version, because the generation time is so short and the “mutations” are planned rather than being random.

When I discussed the speed that AlphaGo Zero had shown when it created the new moves, I used the word “scary”, because it literally is. If that same ability existed for creating new AIs then we should be scared, because it will be almost impossible to control. And once super-human intelligence exists it will be very difficult to reverse. You might think something like, “just turn off the computer”, but how many backups of itself will exist by then? Simple computer viruses are really difficult to eliminate from a network, so imagine how much more difficult a super-intelligent “virus” would be to remove.

Where that leaves humans, I don’t know for sure. I said in a previous post that humans will be redundant, but now I’m not totally sure that is true. Maybe there will be a niche for us, at least temporarily, or maybe humans and machines will merge in some way. Experts disagree on how much of a threat AI really is. Some predict a “doomsday” where human existence is fundamentally threatened, while others predict a bright future for us, free from the tedious tasks which machines can do better, and where we can pursue the activities we *want* to do rather than what we *have* to do.

Will it be a utopia or a dystopia? No one knows. All we know is that the world will never be the same again.


The Future of Driving

January 31, 2018

In a recent post, I talked about how electric power seems to be the inevitable future of cars. This is probably not too surprising to most people given the way electric cars have become so much more popular recently, and how the company Tesla has successfully captured a lot of headlines (in many cases deservedly so, because of its technical advances, and in other cases mainly because of the star status of its founder, Elon Musk).

But a much greater revolution is also coming: self-driving cars. In the future, people will not be able to comprehend how we allowed humans to drive, and how we tolerated the massive inefficiency and the huge number of accidents and deaths that resulted.

In my previous post I commented on how I am a “petrol-head” and enjoy driving, as well as liking the “insane fury” of current petrol powered supercars. I commented on how electric cars have no “soul” and this would appear to apply even more to self-driving cars. Before I provide the answer to how this travesty can be avoided, I want to present some points on how good self-driving cars should be.

First, there is every indication that computers will be far better than humans at driving, especially in terms of safety. Even current versions of self-driving systems are far better than the average human, and they will surely get better still once the algorithms are refined and more infrastructure is in place for them.

Whether computer controlled cars are currently better than the best humans is debatable, because I have seen no data on this, but that doesn’t really matter because being better than the actual, flawed, unskilled humans doing most of the driving now is all that is required.

In fact, the majority of accidents involving today’s self-driving systems can be attributed to human errors which the AI couldn’t cope with; self-driving cars still have to obey the laws of physics, and not every accident can be avoided, even by a perfect AI.

So if we switched to self-driving cars, how would things change? Well, to get the full benefit of this technology all cars would need to be self-driving. While some cars are still driven by humans there will always be an element of unpredictability in the system. Plus all the extra infrastructure needed by humans (see later for examples) will need to be kept in place.

Ultimately, as well as all cars being self-driven, the system would also require all vehicles to be able to communicate with each other. This would allow information to be shared and maybe for a central controller to make the system run more efficiently. It might also be possible, and maybe preferable, to have a distributed intelligence instead, where the individual components (vehicles) make decisions in cooperation with other units nearby.

The most obvious benefit would be to free up time for humans who could do something more useful than driving. They could read a book, read a newspaper, watch a movie, write their blog, do some work, etc, because the car would be fully automated.

But it goes far beyond that, because all of the rules we have in place today to control human drivers would be unnecessary. There would be no need for speed limits, for example, because the cars would drive at the speed best for the exact conditions at the time. They would use factors like the traffic density and weather conditions and set their speed appropriately.

There’s no doubt that even today traffic could move much faster than it does if proper driving techniques were used. The problem is that drivers aren’t good enough to drive quickly. But speed and safety can co-exist, as shown by Germany’s autobahns, where there is often no speed limit but the accident rate is lower than in the US.

There would be no need to have lanes and other symbols marked on roads, and even the direction vehicles are travelling in the lane could be swapped depending on traffic density. All the cars would know the rules and always obey them. Head-on crashes would be almost impossible even when a lane swaps the direction the traffic is flowing in.

The same would apply to turning traffic. A car could make a turn into a stream of traffic because communications with the other cars in that stream would ensure the space was available. There would be no guessing if another driver would be polite enough to create a gap, and no guessing exactly how much time was needed because all distances and speeds would be known exactly.
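As a purely hypothetical illustration of that last point (the numbers and safety margin are made up, and no real vehicle-to-vehicle protocol is implied), the decision to turn into a stream of traffic could reduce to simple arithmetic on the shared positions and speeds:

```python
# Hypothetical sketch: deciding whether a turn into oncoming traffic is safe,
# given exactly known distances and speeds shared by the other cars.

def safe_to_turn(gap_to_oncoming_m, oncoming_speed_ms, turn_time_s, margin_s=2.0):
    """True if the oncoming car will still be a safe margin away after the turn."""
    distance_needed = oncoming_speed_ms * (turn_time_s + margin_s)
    return gap_to_oncoming_m > distance_needed

# Oncoming car at 20 m/s (72 km/h); our turn takes 3 seconds.
print(safe_to_turn(80, 20, 3))    # needs 100 m of clear road -> False
print(safe_to_turn(120, 20, 3))   # 120 m available -> True
```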

I could imagine a scene where traffic was flowing, turning, and merging seemingly randomly at great speed, in a way that would look suicidal today but would in reality be precisely coordinated.

Then there’s navigation. Most humans can follow GPS instructions fairly well, but how much better would this be when all the cars shared knowledge about traffic congestion and other delays, and planned the routes based on that, as well as the basic path?

Finally there’s parking. No one would need to own a car because after completing the journey the car could go and be used by someone else. It would never need to park, except for recharging and maintenance, which could also be automatic. All the payments could be done transparently and the whole system should be much cheaper than personally owning and using a car, like we do now.

The whole thing sounds great, and there are almost no disadvantages. But I still don’t like it in some ways: my car is part of my identity, I like driving, and the new world of self-driving electric cars sounds very efficient but seems to lack any personality or fun.

But that won’t matter, because there will be two ways to overcome this deficiency. First, there might be lots of tracks where people can go to test their driving skills in traditional human driven – maybe even petrol powered – cars as a recreational activity, sort of like how some people ride horses today. And second, and far more likely, virtual reality will be so realistic that it will be almost indistinguishable from real driving, but without the risks.

And while I am on the subject of VR, it should be far less necessary to travel in the future because so much could be done remotely using VR and AR systems. So less traffic should be another factor making the roads far more efficient and safe.

In general the future in this area looks good. I suspect this will all happen in about 20 years, and when it does, people will be utterly shocked that we used to control our vehicles ourselves, especially when they look at the number of accidents and fatalities, and the amount of time wasted each day. Why would we drive when a machine can do it so much better, and we could use that time for something far more valuable?

Introduction to the Elements

December 29, 2017

The Greek philosophers were incredibly smart people, but they didn’t necessarily know much. By this I mean that they were thinking about the right things in very intelligent and perceptive ways, but some of the conclusions they reached weren’t necessarily true, simply because they didn’t have the best tools to investigate reality.

Today we know a lot more, and even the most basic school science course will impart far more real knowledge to the average student than even the greatest philosophers, like Aristotle, could have known.

I have often thought about what it would be like to talk to one of the ancient Greeks about what they thought about the universe and what we have found out since, including how we know what we know. Coincidentally, this might also serve as a good overview of our current knowledge to any interested non-experts today.

Of course, modern technology would be like total magic to any ancient civilisation. In fact, it would seem that way to a person from just 100 years ago. But in this post I want to get to more fundamental concepts than just technology, mostly the ancient and modern ideas about the elements, so let’s go…

The Greeks, as well as several other ancient cultures, had arrived at the concept of there being elements, which were fundamental substances which everything else was made from. The classic 4 elements were fire, air, water, and earth. In addition, a fifth element, aether, was added to account for the non-material and heavenly realm.

This sort of made sense because you might imagine that those components resulted when something changed form. So burning wood releases fire and air (smoke) and some earth (ash) which seemed to indicate that they were original parts of the wood. And sure, smoke isn’t really like air but maybe that’s because it was made mainly from air, with a little bit of earth in it too, or something similar.

So I would say to a philosopher visiting from over 2000 years ago that they were on the right track – especially the atomists – but things aren’t quite the way they thought.

Sure, there are elements, but none of the original 4 are elements by the modern definition. In fact, those elements aren’t even the same type of thing. Fire is a chemical reaction, air is a mixture of gases, water is a molecule, and earth is a mixture of fine solids. The ancient elements correspond more to modern states of matter, maybe matching quite well with plasma, gas, liquid and solid.

The modern concept of elements is a bit more complicated. There are 92 of them occurring naturally, and they are the basic components of all of the common materials we see, although not everything in the universe as a whole is made of elements. The elements can occur by themselves or, much more commonly, combine with other elements to make molecules.

Elements are made of atoms but, despite the name, atoms are not the smallest indivisible particles, because they are in turn made from electrons, protons, and neutrons, and the protons and neutrons are made of quarks. As far as we know, these cannot be divided any further. But to complicate matters a bit more, there are many other indivisible particles. The most well known of these from everyday life is the photon, which makes up light.

Different atoms all have the same structure: classically thought of as a nucleus containing a certain number of protons and neutrons surrounded by a cloud of electrons. There are the same number of protons (which have a positive charge) and electrons (which have a negative charge) in all neutral atoms. It is the number of protons which determines which atom (or element) is which. So one proton means hydrogen, 2 helium, etc, up to uranium with 92. That number is called the “atomic number”.

The number of neutrons (which have no charge) varies, and the same element can have different forms because they have a different number of neutrons. When this happens the different forms are called isotopes.

Protons and neutrons are big and heavy and electrons are light, so the mass of an atom is made up almost entirely of the protons and neutrons in the nucleus. The electrons are low mass and “orbit” the nucleus at a great distance compared with the size of the nucleus itself, so a hydrogen atom (for example, but this applies to all atoms and therefore everything made of atoms, which is basically everything) is 99.9999999999996% empty space!
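That figure can be checked with a rough back-of-envelope calculation, using approximate textbook sizes for the hydrogen atom and the proton (the exact values vary a little between sources):

```python
# Rough check of the "empty space" figure for a hydrogen atom.
# The radii below are approximate textbook values.
atom_radius = 5.3e-11       # metres (the Bohr radius)
proton_radius = 8.4e-16     # metres (approximate proton charge radius)

filled_fraction = (proton_radius / atom_radius) ** 3   # volume scales with radius cubed
empty_percent = (1 - filled_fraction) * 100
print(f"{empty_percent:.13f}% empty")                  # about 99.9999999999996%
```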

When I say protons are big and heavy I mean this only relatively, because there are 50 million trillion atoms in a single grain of sand (which means a lot more protons because silicon and oxygen, the two main elements in sand, both have multiple protons per atom).

When atoms combine we describe it using chemistry. This involves the electrons near the edge of an atom (the electrons form distinct “shells” around the nucleus) combining with another atom’s outer electrons. How atoms react is determined by the number of electrons in the outer shell. Atoms “try” to fill this shell and when they do they are most stable. The easiest way to fill a shell is to borrow and share electrons with other atoms.

Atoms with one electron in the outer shell or with just one missing are very close to being stable and are very reactive (examples: sodium, potassium, fluorine, chlorine). Atoms with that shell full don’t react much at all (examples: helium, neon).
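As a simplified sketch of that rule (using the basic school-chemistry shell model, which glosses over a lot of real chemistry):

```python
# Simplified "outer shell" rule for a few example elements.
# Shell sizes follow the basic school model (2 for the first shell, 8 after that).
outer_electrons = {"helium": 2, "neon": 8, "sodium": 1, "potassium": 1,
                   "fluorine": 7, "chlorine": 7}
full_shell_size = {"helium": 2}          # helium's outer (first) shell is full at 2

def reactivity(element):
    n = outer_electrons[element]
    full = full_shell_size.get(element, 8)
    if n == full:
        return "inert (outer shell already full)"
    if n == 1 or n == full - 1:
        return "very reactive (one electron away from a full shell)"
    return "moderately reactive"

for element in outer_electrons:
    print(element, "->", reactivity(element))
```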

There are far more energetic reactions which atoms can also participate in, when the nucleus splits or combines instead of the electrons. We call these nuclear reactions, and they are much harder to start or maintain but generate huge amounts of energy. There are two types: fusion, where small atoms combine to make bigger ones, and fission, where big atoms break apart. The Sun is powered by fusion, and current nuclear power plants by fission.

After the splitting or combining, the resulting atom(s) have less mass/energy (they are the same thing, but that’s another story) than the original atom(s), and that extra energy is released according to the formula E=mc^2 discovered by Einstein. This means you can calculate how much energy (E) comes from a certain amount of mass (m) by multiplying by the speed of light squared (about 90 thousand trillion, using metres and seconds). This number is very high, which means that a small amount of mass creates a huge amount of energy.
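As a worked example, here is the calculation for a single gram of mass, using the rounded value of the speed of light:

```python
# E = m * c^2 for one gram of mass.
c = 3.0e8                  # speed of light, metres per second (rounded)
m = 0.001                  # one gram, expressed in kilograms
E = m * c ** 2             # energy in joules
print(f"{E:.1e} joules")   # 9.0e+13 J, roughly a 20 kiloton explosion
```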

Most reactions need a bit of initial energy to start them, then release energy as the reaction proceeds. That’s why lighting a match next to some fuel starts a reaction which makes a lot more energy.

So water is a molecule made from one oxygen atom and two hydrogen atoms. But gold is an element all by itself and doesn’t bond well with others. And when two elements bind and form a molecule they are totally different from a simple mixture of the two elements. Take some hydrogen and oxygen and mix them and you don’t get water. But light a match and you get a spectacular result, because the hydrogen burns in the oxygen forming water in the process. The energy content of water is lower than the two constituent gases which explains all that extra energy escaping as fire. But the fire wasn’t an elementary part of the original gases and neither was the water. You can see how the Greeks might have reached that conclusion though.
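To put a rough number on that energy release: the standard textbook figure is that forming a mole (about 18 grams) of liquid water from hydrogen and oxygen releases roughly 286 kJ, so even a modest amount of gas gives the spectacular result described. A quick sketch, treating that figure as an approximation:

```python
# Approximate energy released by burning hydrogen in oxygen to make water.
kj_per_mole_water = 286               # textbook value for liquid water, approximate
grams_of_water_made = 100             # about half a glass
moles = grams_of_water_made / 18.0    # molar mass of H2O is about 18 g/mol

energy_kj = moles * kj_per_mole_water
print(f"{energy_kj:.0f} kJ released") # about 1590 kJ, enough to bring a few litres of water to the boil
```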

Basic classical physics and chemistry like this make a certain amount of intuitive sense, and the visiting philosopher would probably understand how it works fairly quickly. But then I would need to reveal that it is all really just an approximation to what reality is actually like.

There would be a couple of experiments I could mention which would be very puzzling and almost impossible to explain based on the classical models. One would be the Michelson–Morley experiment, and the other would be the infamous double-slit experiment. These lead to the inevitable conclusion that the universe is far stranger than we imagined, and new theories – in this case relativity and quantum theory – must be used.

Whether our philosopher friend could ever gain the maths skills necessary to fully understand these would be difficult to know. Consider that the Greeks didn’t really accept the idea of zero and you can see that they would have a long way to go before they could use algebra and calculus with any competence.

But maybe ideas like time and space being dynamic, gravity being a phenomenon caused by warped space-time, particles behaving like waves and waves behaving like particles depending on the experiment being performed on them, single particles being in multiple places at the same time, and particles becoming entangled, might be comprehensible without the math. After all, I have a basic understanding of all these things and I only use maths like algebra and calculus at a simple level.

It would be fun to list some of the great results of the last couple of hundred years of experimental science and ask for an explanation. For example, the observations made by Edwin Hubble showing the red-shifts of galaxies would be interesting to interpret. Knowing what galaxies actually are, what spectra represent, and how galactic distances can be estimated, would seem to lead to only one reasonable conclusion, but it would be interesting to see what an intelligent person with no pre-conceived ideas might think.

As I wrote this post I realised just how much background knowledge is necessary as a prerequisite to understanding our current knowledge of the universe. I think it would be cool to discuss it all with a Greek philosopher, like Aristotle, or my favourite Eratosthenes. And it would be nice to point out where they were almost right, like Eratosthenes’ remarkable attempt at calculating the size of the Earth, but it would also be interesting to see their reaction to where they got things badly wrong!

Cosmological Musings

November 30, 2017

Recently I have listened to a few podcasts featuring some of the most well known scientists of today. Specifically, I mean Lawrence Krauss, Sean Carroll, and Neil deGrasse Tyson. These aren’t general scientists obviously, since they all specialise in physics and cosmology, but that’s the area I want to concentrate on in this post.

I admire these three in particular for a number of reasons: first, they are clearly brilliant and highly intelligent people, or they wouldn’t have got to the positions they have; second, they are good public communicators of the often difficult subjects they specialise in; and third, they aren’t scared to call out BS where they see it, and Carroll and Krauss in particular are very critical of religion and other forms of irrationality.

But it isn’t the politically or socially controversial topics I want to cover here, it is the scientifically contentious or speculative stuff instead. So let’s get started talking about some of the more speculative ideas I have heard discussed recently. Note that these aren’t necessarily directly attributable to the people I mentioned above, and they represent my interpretation of what I have heard, and I am not an expert in this subject. But that has never stopped me before, so let’s go!

The origin, and underlying nature, of the universe is not well understood. This has been a problem for a while, because the actual point where the Big Bang started is hidden in a singularity of infinite density. Physics breaks down there, just like it does in a black hole, so nothing much can be said about it with any certainty. It is possible to use existing theories to get really close to time zero – a tiny fraction of a second – but beyond that is inaccessible to current theories.

And the best direct evidence we have comes from the light of early galaxies and the cosmic microwave background (CMB). But the CMB only formed after about 380,000 years, which is a small fraction of the age of the universe (13.7 billion years) but still not as early as we would like.

So clearly this is a difficult subject, but here are a few observations and speculations about the universe which might assist in understanding what is going on…

The first point is that the total energy of the universe might be zero. This seems totally absurd on the surface, because of all the obvious energy sources we see, like stars, and all the mass which we know is the equivalent of energy through the famous equation E=mc^2. But that’s where a convention in physics makes the reality quite different from what most people intuitively believe.

Gravitational energy has always been thought of as negative. This is nothing to do with the Big Bang or cosmology, it is just a natural consequence of the maths. If we accept this it turns out that the gravitational energy of the universe cancels the other energy exactly. So the universe has zero energy which means that any process making a universe can do so easily, meaning there could quite conceivably be an infinite number of them.
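The sign convention comes straight out of the Newtonian formula for gravitational potential energy, which is negative for any two masses a finite distance apart. The toy calculation below only illustrates that sign using the Earth and Sun; it is not the (much more speculative) cosmological bookkeeping itself:

```python
# Why gravitational energy counts as negative: U = -G*M*m/r for any two masses.
# Earth-Sun numbers used purely to illustrate the sign.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
r = 1.496e11         # average Earth-Sun distance, metres

U = -G * M_sun * M_earth / r
print(f"gravitational potential energy: {U:.2e} joules (negative)")
```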

While some people dismiss this as a “trick” it really isn’t. If cosmologists had said something like “we need to get the total energy to zero so let’s just say gravity is negative and voila!” then that would be a trick. But this was an established fact long before the total energy of the universe was being considered and this gives it far more credibility.

And while we are thinking about the idea of more than one universe, what about the idea that there could be many universes – each with slightly different properties – which might explain why many of the properties of our universe seem to be quite well tuned for the existence of life?

What I am saying here is that various constants seem to have values which make chemistry possible and that, in turn, makes life possible. But there seems to be no reason why the constants could not have totally different values and this could lead to a universe where stars could not form, and no stars means no energy source for life.

And the old argument about life which is entirely different from the type we see now doesn’t really save us because any form of life needs both energy and heavy atoms, and stars are the only likely source for these.

But if there are an infinite, or very large, number of universes, with different constants, then it is inevitable that some will have the values which make life possible. In fact, it’s possible to imagine a universe which is even better than ours for life, so there could be many which have life. In fact, if there are an infinite number of universes, there will be an infinite number with life as well!

A concept I have sometimes heard in both pop science and science fiction is the idea that at very large scales and very small scales there might be other universes hidden. For example, an atom could be a universe made of its own tiny atoms which in turn could be universes, etc. And going the other way, our universe could be an atom in a bigger universe above ours, ad infinitum. This idea might arise from the popular notion that an atom is like a miniature solar system (which it isn’t).

It’s a cute idea, but unfortunately it can be ruled out by applying the laws of physics. Sub-atomic particles have no details and no uniqueness. For example, every electron is a single point (or “cloud” of probability) with no structure and which is completely indistinguishable from every other electron. This doesn’t seem like a good candidate for a whole universe!

What about the “oscillating universe” or “big crunch” theory? This is the idea that the universe expands but the expansion slows down until it stops at a certain point, then it starts contracting again, reaches a singularity, and is “reborn” in a new Big Bang. At this point any vestige of the old universe is erased and all the energy is replenished. This would be a process which recurs infinitely in both the future and past.

This is quite an appealing notion, because it tells us what came before the current Big Bang, and it fitted with the earlier belief that gravity was gradually slowing the rate of expansion. Unfortunately for this theory, new evidence shows that the rate of expansion is actually increasing, because of dark energy, so the contraction and “Big Crunch” can never happen.

There’s nothing fundamental in physics which seems to stop processes running backwards in time. I have heard an idea that maybe the universe was created as a result of a signal sent backward from a future form of the universe itself. This removes the need for an initial cause which in turn might need a cause, leading to an infinite regress of causes.

Signals going back in time should be considered somewhat controversial, of course, because of the principles of causality, so I would be hesitant to take this too seriously unless some clarification on the exact mechanism arose.

Here’s another one: new universes appear inside black holes created in existing universes. These universes all have slightly different attributes than the universe they came from, but inherit the starting parameters from them.

This is nice because it sets up an “evolution” model where “the survival of the fittest” applies to whole universes! Only universes which can make black holes will create new universes. To create a black hole the universe needs to have a fairly long life, a way to concentrate matter, a way to allow matter to “condense” out of energy, etc. These attributes also lead to laws and constants suitable for the development of life.

Clearly this is difficult to evaluate because we don’t know what happens inside black holes, because as I said above, the infinite density of matter causes current theories to break down. There is no compelling reason to think universes are formed from black holes so it’s probably best to disregard this idea unless some new, relevant information becomes available.

Finally, how about the idea that the purpose of a universe is to allow intelligent life to form which, in turn, advances to the point where it figures out how to make universes?

This also sets up a potential evolutionary scenario, but we have no idea whether any intelligent life form, no matter how advanced, could create a universe, so again this seems to be somewhat unworthy of spending too much time speculating about at this stage.

Well, wasn’t that fun! Obviously we don’t know the truth about the origin or fundamental nature of the universe, not because we have no ideas, but because we have too many! I’m fairly sure that when real theories are created to explain these phenomena, none of what I have said here will be the real explanation, but it’s still fun to speculate!

The Hard Problem

September 3, 2017

Recently, while purchasing a few items at a wholesaler I was asked what I was listening to on my phone (because I had my Apple earphones on). I told the person I was listening to a podcast, and when that got a blank response I explained it was like a recorded radio program automatically downloaded from the internet, and that this one was by a philosopher and was mainly about politics. I was asked “are you listening to parliament?” and decided it was best to not try to explain further by making a joke like: “I wouldn’t listen to that because I want to retain what small scraps of sanity I still have.”

But it did emphasise how little most people know or care about many of the things that interest me, including some of the most difficult and obscure problems in science and philosophy today. Now, please don’t think I’m being elitist or arrogant because I know that I am no expert on any of this stuff, I just find it interesting, and knowing more about it is part of my aim to be good at everything but brilliant at nothing!

More recently I listened to another podcast in the same series which dealt with a subject exactly of the type I mentioned above: the hard problem of consciousness. What is consciousness, where does it come from, and what else possesses it, apart from me?

Before I continue I will say what I mean by consciousness here. Basically it is the feeling that I (and presumably others) have that I am an individual, that I have some continuity of existence from the past, that I have some form of free will (or at least the illusion of that) to control the world to some extent. Where does this come from?

The idea which I find most compelling, and the one which I think is generally accepted by the majority of scientists is that consciousness is an emergent phenomenon of the processes which occur inside a brain of sufficient complexity. But some people, especially some philosophers and a lot of theologians, believe it is better explained through dualism. That is the idea that there is something beyond the physical processes of thought occurring in the brain. Maybe that there is a “soul” (not necessarily in the religious sense) which is in final control of the physical processes.

At this stage, all the neuroscience I have heard of gives me no reason to think that anything beyond the purely material exists. But I want to ignore the good, solid stuff like that and consider some idle speculation and thought experiments instead!

Imagine my personal identity, my mind, my consciousness is an emergent property of my brain processes. What would happen if an exact copy of me was made (in something like a Star Trek transporter which copied the original person instead of moving him)? Where would my consciousness then lie? The copy would be identical, with an identical brain and identical processes. If my thoughts arose from physical processes, would I experience them in both bodies simultaneously?

Alternatively, imagine it was possible to “back up” all the information in a brain and upload it to a computer, then re-establish it after death or injury. What would happen if it was downloaded into a different brain? What would happen if it ran on an artificial brain in the computer itself?

Another disturbing question is how complex does a brain need to be before it becomes conscious? It certainly seems that many animals are self-aware. Surely chimps, dolphins, etc have similar levels of consciousness to humans. What about cats and dogs? Rats and mice? Flies? Where does it end?

And if consciousness arises through the processing power of a brain, can it also arise in an artificial brain, like a sufficiently complex and properly programmed computer? Or does it only arise in “naturally arising” entities? What about in an alien? What if that alien evolved a silicon brain very similar to a computer?

We know that our cells are constantly being replaced, don’t we? Well no, that isn’t exactly true. Different cells have different “life spans”, from a few days up to apparently the life of the individual. Significantly, it is some types of brain neurons which are never replaced. Is it these cells which give us our individual identity?

Now let’s imagine that dualism is a better explanation. There are some anecdotes indicating that consciousness apparently exists independently of the body. There are out of body experiences, and various phenomena such as ESP, reincarnation, and near death experiences. Some of these seem quite compelling, but they have never been confirmed by any rigorous scientific study.

Maybe the brain is just an interface between the non-physical seat of consciousness and the body. If the brain is damaged or dies the consciousness still exists but has no way to interact with the world. It would be difficult to distinguish between that and the emergent phenomenon hypothesis I outlined above so maybe this is one of those theories which is “not even wrong”.

Finally there is computation and maths. The way maths seems to reflect and even predict reality has been a puzzle since the article called “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” was published almost 60 years ago. Some physicists have noted that reality seems to almost arise from a form of computation, which would explain the effectiveness of maths.

So now we seem to be getting back to the idea that the universe might be a simulation (see my blog post titled “Life’s Just a Game” from 2016-07-06). If it is then the universe was created by someone (or something). Would that thing be a god? And if the individual entities are “just” part of a simulation do they have any less moral rights as a result?

Maybe all of this stuff is “not even wrong” and maybe it is pointless to even speculate about it, but sometimes doing pointless things is OK, just as long as we don’t take it too seriously.

So I think I will continue to listen to philosophical musings rather than the rather more mundane business of politics I hear in parliament. Actually, I think there is room for both, because politics is also a subject I include in my “good at everything” strategy. And one thing is clear: in most subjects being above average isn’t difficult!

Do It Yourself

March 3, 2017

I was going to post this comment as part of an anti-creationist rant but I realised that there was so much to it that I really needed to post it as a separate item. The issue I wanted to tackle was how many believers in mysticism base their beliefs on revealed sources, such as holy books, but the same criticism could be made against “rational” people, like myself, because I also use sources (such as science books, Wikipedia, etc).

So basically what I wanted to do was to show that anyone can discover significant things about the real world by themselves without relying on any information from existing sources, and that they can show anyone how to do the same observation/experiment which would prove their point beyond any reasonable doubt.

I decided to choose the age of the universe as a suitable subject, because it was a controversial subject (there are many young Earth creationists), and it was relatively easy to test. Of course, as I intimated above, it got more complex than I imagined. However, here is my proof – which anyone with a bit of time and a small budget can follow – that the universe, and therefore the Earth, is much older than the 6000 years the young Earth creationists claim.

I could start by trying to establish the age of the oldest things I know of. I could use biology, archaeology, chemistry or physics here, but I know a bit more about astronomy, so let’s use that.

We know the light from stars travels through space at the speed of light. If the stars are far enough away that the light took more than 6000 years to get here, then the universe must be more than 6000 years old, so young Earth creationism is wrong. I know there are some possible objections to these initial assumptions, but let’s leave those aside for now.

First, how fast is the speed of light? Can I figure this out for myself or do I need to take it on trust (some would say faith) from a book? Well it is actually quite easy to figure this out because we can use a highly regular event at a known distance to calculate the time it took for light to reach us. The most obvious choice is timings of Jupiter’s moons.

The moons of Jupiter (there are 4 big ones) take precise times to complete an orbit. I can figure that time out by just watching Jupiter for a few weeks. But we would expect a delay in the times, because the light from an event (like a moon going in front of or behind Jupiter) will take a while to reach us.

Conveniently, the distance from the Earth to Jupiter varies, because sometimes the Earth and Jupiter are on the same side of the Sun, and at other times on opposite sides. So when they are on the same side the distance is the radius of Jupiter’s orbit minus the radius of the Earth’s, and when they are on opposite sides it is the radius of Jupiter’s orbit PLUS the radius of the Earth’s. Note that the size of Jupiter’s orbit doesn’t matter, because the difference between the two cases is just double the radius of the Earth’s orbit (that is, its diameter).

So now we need to know the size of the Earth’s orbit. How would we do that? There is a technique called parallax which requires no previous assumptions; it is just simple geometry. If you observe the position of an object from two locations, the angle to the object will vary.

It’s simple to demonstrate… Hold your finger up in front of your eyes and look at it through one eye and then the other. The apparent position against a distant background wall will change. Move your finger closer and the change will be bigger. If you measure that change you can calculate the distance to your finger with some simple maths.

In astronomy we can do the same thing, except for distant objects the change is small… really small. And we also need two observing locations a large distance apart (the further apart they are, the bigger the change and therefore the easier it is to measure). Either side of the Earth is OK for close objects, like the Moon (a mere 384,000 kilometers away), but for stars (the closest is 42 trillion kilometers away) we need something more. Usually astronomers use the Earth on either side of its orbit (a distance of 300 million kilometers), so the two observations will be 6 months apart.

So getting back to our experiment. You might think we could measure the distance to a star, or a planet like Jupiter, or the Sun using this technique but it’s not quite so simple because the effect is so small. What we do instead is measure the distance to the Moon (which is close) using parallax from two widely separated parts on the Earth. I admit this needs a collaborator on the other side of the Earth, so it involves more than just one individual person, but the principle is the same.

Once we know that distance, it can be used to measure other distances. For example, if we measure the angle between the Moon and Sun when the Earth-Moon-Sun angle is a right angle, we can use trigonometry to get the distance to the Sun. It’s not easy, because the angle measured at the Earth is very close to 90 degrees (the Earth-Sun side of the triangle is much longer than the Earth-Moon side), but it can be done.
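A sketch of that trigonometry, using the modern value of the angle (about 89.85 degrees, which is exactly why the measurement is so delicate); the ancient attempt by Aristarchus used a much less accurate angle and came out roughly 20 times too small:

```python
# Earth-Sun distance from the Moon-Sun angle at half moon, when the
# Earth-Moon-Sun angle (at the Moon) is a right angle.
import math

earth_moon_km = 384_000          # from the parallax measurement described above
moon_sun_angle_deg = 89.85       # angle measured at the Earth, approximate modern value

earth_sun_km = earth_moon_km / math.cos(math.radians(moon_sun_angle_deg))
print(f"Earth-Sun distance: about {earth_sun_km / 1e6:.0f} million km")
```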

So now we know the difference in distance between the Earth and Jupiter in the two situations I mentioned at the start of this post. If we carefully measure the difference between the timings of Jupiter’s moons when the Earth is on either side of its orbit, we get about 16 minutes. So light takes half of that time to travel from the Sun to the Earth. We know that distance from the previous geometric calculations, so we know the speed of light.
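Putting the rounded figures from this post together gives a respectable estimate:

```python
# Rough speed-of-light estimate from the 16 minute delay in the timings of
# Jupiter's moons, which corresponds to light crossing the diameter of
# Earth's orbit (about 300 million km).
orbit_diameter_km = 300_000_000
delay_seconds = 16 * 60

speed_of_light_kms = orbit_diameter_km / delay_seconds
print(f"speed of light: about {speed_of_light_kms:,.0f} km/s")   # ~312,500 km/s
```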

Note that none of this is open to any reasonable criticism. It is simple, makes no assumptions which can fairly be questioned, and anyone can do it without relying on existing knowledge. Note that if you want to derive the basic trig calculations that is fairly easy too, but few people would argue about those.

So the Sun is 8 light minutes away, meaning the light we see from the Sun left it 8 minutes ago. We are seeing the Sun literally as it was 8 minutes in the past, which means it must have existed 8 minutes in the past. But who cares? Well, this is interesting, but looking at more distant objects – those not just light minutes away but light years, thousands of light years, or millions of light years away – says far more about the true age of the universe.

So we can use this idea in reverse. Above we calculated a distance based on a time difference and the speed of light. Now we will calculate a time based on distance and the speed of light. If a star is 10,000 light years away the light left it 10,000 years ago, so it existed 10,000 years ago, so the universe is at least 10,000 years old.

There is only one direct method to calculate distance, and that is parallax. But even from opposite sides of the Earth’s orbit – a baseline of 300 million kilometers – parallax angles are ridiculously small. But with a moderate size telescope (one which many amateurs could afford), and careful observation, they can be measured. The parallax angle of the closest star is about 800 milliarcseconds (0.8 arcseconds, or roughly 0.0002 degrees). That is the equivalent of the width of a small coin about 5 kilometers away.

Do this observation, then a simple calculation, and the nearest star turns out to be 40 trillion kilometers (4 light years) away. When we see that star we see it as it was 4 years ago. In that time the star could have gone out or been swallowed by a black hole (very unlikely) and we wouldn’t know.
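The calculation is just the finger demonstration scaled up. One caveat: by convention the quoted parallax angle corresponds to a baseline of one Earth-Sun distance (half the diameter of the orbit), and that convention is assumed in the sketch below:

```python
# Distance to the nearest star from its parallax angle.
import math

earth_sun_km = 150_000_000                  # baseline of one Earth-Sun distance
parallax_arcsec = 0.8                       # roughly the figure quoted above
parallax_rad = math.radians(parallax_arcsec / 3600)

distance_km = earth_sun_km / math.tan(parallax_rad)
light_year_km = 9.46e12
print(f"{distance_km:.1e} km, or {distance_km / light_year_km:.1f} light years")
# about 3.9e13 km: roughly 40 trillion km, or 4 light years
```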

The greatest distance so far detected using parallax is 10,000 light years, but that was with the Hubble Space Telescope, so that is beyond the direct experience of the average person! However note that using this direct, uncontroversial technique, the universe is already at least 10,000 years old, making young Earth creationism impossible.

Another rather obvious consequence of these distance measures is that stars are like our Sun. So if we know how bright stars are we can compare that with how bright they appear to be and get a distance approximation. If a star looks really dim it must be at a great distance. The problem is, of course, that stars vary greatly in brightness and we can’t assume they are all the same brightness as the Sun.

There is another feature of stars which even an amateur can make use of though – that is the spectrum. Examining the spectrum can show what type of star produced the light. The amateur observer can even calibrate his measurements using common chemicals in a lab. The chemicals in the star are the same and give the same signatures (approximately, at least).

So knowing the type of star gives an approximation of the brightness and that can be used to get the distance. The most distant star visible to the naked eye is 16,000 light years away. This would be bright enough to get a spectrum in a telescope, determine the type of star, and estimate the distance. Of course, it would be hit and miss trying to find a distant star to study (because we’re not supposed to use any information already published) but enough persistence would pay off eventually.
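The underlying idea is just the inverse square law: if the spectrum suggests a star is about as luminous as the Sun, its apparent brightness tells us how far away it is. A sketch with an illustrative, made-up brightness measurement:

```python
# Distance from apparent brightness, assuming (from its spectrum) that the star
# is roughly as luminous as the Sun. The flux ratio below is illustrative only.
import math

earth_sun_km = 150_000_000
flux_ratio = 1e-13            # the star appears 10^13 times fainter than the Sun

# Brightness falls off as 1/d^2, so the distance ratio is sqrt(1 / flux_ratio).
distance_km = earth_sun_km * math.sqrt(1 / flux_ratio)
print(f"about {distance_km / 9.46e12:.0f} light years")   # about 50 light years
```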

There are objects in the sky called globular clusters. These are collections of a few hundred thousand to a few million stars, quite close together. To the naked eye they look like a fuzzy patch but through a small telescope they can be seen to be made of individual stars. A simple calculation based on their apparent brightness shows they are tens of thousands of light years away. A similar technique can be applied to galaxies but these give distances of millions of light years.

In addition, an amateur with a fairly advanced telescope and the latest digital photography equipment – all of which is available at a price many people could afford – could do the investigation of red-shifts originally done by Edwin Hubble about 90 years ago.

A red shift is the shift in the spectrum of an object caused by its movement away from us. As I said above, the spectra of common chemicals can be tested in the lab and compared with the spectrum seen from astronomical objects. As objects get more distant they are found to be moving away more quickly and have higher red shifts. So looking at a red shift gives an approximate measure of distance.

This technique can only be used for really distant objects, like galaxies, so it is a bit more challenging for an amateur, but it will give results of millions to billions of light years, meaning the objects are at least millions or billions of years old.
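In modern terms this is Hubble’s law: recession velocity is proportional to distance, so a measured red shift gives a distance estimate. A rough sketch, using an approximate value for the Hubble constant and an illustrative red shift (the simple formula only holds for modest red shifts):

```python
# Distance estimate from a red shift using Hubble's law, v = H0 * d.
H0 = 70.0                      # km/s per megaparsec, approximate
c = 300_000.0                  # speed of light, km/s
megaparsec_ly = 3.26e6         # light years per megaparsec

z = 0.01                       # illustrative measured red shift
velocity_kms = z * c           # 3,000 km/s
distance_mpc = velocity_kms / H0
print(f"about {distance_mpc * megaparsec_ly:.2e} light years")   # ~1.4e8 light years
```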

There are some possible objections to everything I have discussed above. First, maybe the speed of light was much faster in the past meaning that the light could have travelled the vast distances in less time than assumed, meaning the universe could still be just 6000 years old.

Second, the light from the objects could have been created in transit. So a galaxy 2 million light years away could have been created recently, with its light created already 99% of the way to the Earth.

Finally, maybe there is a supernatural explanation that cannot be explained through science or logic, or maybe all of the evidence above is just the malicious work of the devil trying to lead us all astray.

The second and third objections aren’t generally supported, even by most creationists, because they imply that nothing we see can be trusted, and God is not usually thought to be deliberately misleading.

The first one isn’t totally ridiculous though, and there is some serious science suggesting the speed of light might have been faster in the past. But do the calculations and that speed would have to be ridiculously fast – millions of times faster than it is now. If it was changing at that rate then we would see changes over recorded history. So that claim could also be checked by anyone who was prepared to dig into old sources for timings of eclipses, the length of the day, etc.
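The “ridiculously fast” claim is easy to check with round numbers:

```python
# How much faster light would need to have travelled for light from the most
# distant visible objects to have reached us within 6000 years.
distance_light_years = 13_000_000_000   # roughly the farthest light we can see
available_years = 6_000

speedup_needed = distance_light_years / available_years
print(f"light would need to be about {speedup_needed:,.0f} times faster")
# roughly two million times faster than it is now
```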

Astronomy is an interesting science because so much of it is still do-able by amateurs. Follow the steps above and not only will you get a perspective on some of the greatest work done in the past, but you will also make for yourself a truly fundamental discovery about the universe: that it is really old.

It requires no faith in authority, no reference to trusted texts, and no unfounded assumptions. It just involves a few years of dedicated observation and study. I admit I haven’t done all of this myself, but it’s good to know I could if I wanted to.

The Fermi Paradox Again

February 23, 2017

NASA recently announced the discovery of 7 Earth-like planets orbiting the relatively close star, Trappist-1, and that 3 are in the “Goldilocks Zone” (not too hot, not too cold). It is now expected (at least I have heard this although I don’t think it is officially stated anywhere) that almost all stars have planets and that a significant fraction of them might have conditions similar to Earth.

This is significant because for many years no one knew how many planets existed in the universe (although there were some discoveries going back to 1988, it was only Kepler, HARPS, and some other new advanced telescopes that more recently led to significant numbers of discoveries). So it was generally assumed that planets were common, but there was no way of knowing.

Another great mystery of the universe is how likely life is to arise, and under what conditions. Here we are even worse off than with the planets, because we are literally working with a sample size of 1. No other life has been discovered outside of the Earth, and although there have been some interesting discoveries on Mars, none have led to any proof of even primitive life.

It is generally assumed that life will have to be broadly similar to what we have here on Earth. I don’t mean similar in any superficial sense but in broad principles. So it will be based on carbon, because carbon is the only element in the universe which bonds to other atoms (and itself) with sufficient complexity to form molecules suitable to base life on. We also know that the elements we know about are the only ones which can exist in the universe.

The chemistry of life also requires a solvent, and water is the obvious choice. So these chemical requirements limit the temperature and other factors that life would need, which is why we are so interested in “Earth-like” planets which are big enough to have strong gravity, are the right temperature to allow liquid water, and have solid surfaces allowing water to pool and to provide the other elements that life might need.

Note that it is possible that life might be able to exist in a wider variety of conditions but I’ll stick to these, fairly conservative, assumptions.

Even when all the conditions are just right, or within certain limits, it’s hard to know how often life might arise. Experiments in the lab and some observations of molecules in space indicate it might be really likely, but the failure to find life on Mars seems to contradict this.

But even if there was only one chance in a billion of life arising when conditions were suitable, that still means there should be a lot of it in our galaxy alone, and a lot more in the universe as a whole.

There are about half a trillion stars in our galaxy (although this number has gone up and down a bit, the latest number I heard was at this high end), each star seems to have multiple planets (let’s say 10 as an approximation), and it’s likely that at least one might be in the correct temperature zone (some stars might have none in this zone, but others, like Trappist-1, have many). This seems to indicate that there are as many Earth-like planets as there are stars.

A recent Hubble survey indicated there might be 2 trillion galaxies in the observable universe. So we have 2 trillion galaxies x 500 billion stars x 10 planets x 1/10 Earth-like, giving one trillion trillion places where life might evolve in the observable universe.

These numbers could be off by many orders of magnitude but who cares? Even if we are a billion times too optimistic that still means a thousand trillion places!
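Here is that arithmetic written out explicitly; every number is the same rough assumption used above:

```python
# Back-of-envelope count of possible places for life in the observable universe.
galaxies = 2e12                 # rough Hubble survey estimate quoted above
stars_per_galaxy = 5e11
planets_per_star = 10
fraction_earth_like = 0.1

earth_like_planets = galaxies * stars_per_galaxy * planets_per_star * fraction_earth_like
print(f"{earth_like_planets:.0e} possible places for life")   # 1e+24: a trillion trillion

# Even if this is a billion times too optimistic:
print(f"{earth_like_planets / 1e9:.0e} places")                # 1e+15: a thousand trillion
```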

I have talked about the Fermi Paradox – the fact that according to best calculations there should be a lot of advanced life around, yet we never see it – in previous blog posts so I won’t go into that again here except to say we aren’t much further ahead in resolving it!

There is hope though. As telescope technology advances there will be techniques available which seemed impossible in the past. Detecting a planet orbiting another star is an incredible achievement in itself (the stars are really big and bright but at the distances of other stars the planets are very dim and small). But it should be possible to actually study their atmospheres in the future by analysing the light shining through the atmosphere from the star.

In that case it should be possible to learn a lot more about conditions on the planet (temperature, pressure, what elements are present, etc) and to even detect the chemical signatures of life.

And there are even serious proposals now to design small, robotic spacecraft which can be sent to close stars in a reasonable time (by reasonable here we mean decades, rather than the tens of thousands of years needed by current spacecraft). We know the closest star, a mere 4.2 light years (42 trillion kilometers) away, has a planet, though it is unlikely to be suitable for life; other relatively close stars could also be explored this way.

So how long will it be before we know that life exists on other planets? I predict hints of its existence within 10 years, strong evidence within 30, and proof within 50. And at that point, depending on the circumstances, it should be obvious just how likely life is. I predict we will start finding evidence for it everywhere.

But I still can’t get past the problem presented by the Fermi Paradox. If life arises frequently, why don’t we see signs of advanced, intelligent life? Maybe intelligence isn’t a good evolutionary trait. And, especially given the state of the world at the moment, that is a worrying thought.