Posts Tagged ‘internet’

Making Us Smart

June 28, 2017 Leave a comment

Many people think the internet is making us dumb. They think we don’t use our memory any more because all the information we need is on the web in places like Wikipedia. They think we don’t get exposed to a variety of ideas because we only visit places which already hold the same views as we do. And they think we spend too much time on social media discussing what we had for breakfast.

Is any of this stuff true? Well, in some cases it is. Some people live very superficial lives in the virtual world but I suspect those same people are just naturally superficial and would act exactly the same way in the real world.

For example, very few people, before the internet became popular, remembered a lot of facts. Back then, some people owned the print version of the Encyclopaedia Britannica, and presumably these were people who valued knowledge, because the print version wasn’t cheap!

But a survey run by the company found that the average owner only used that reference once per year. If they only referred to an encyclopedia once a year it doesn’t give them much to remember really, does it?

Today I probably refer to Wikipedia multiple times per day. Sure I don’t remember many of the details of what I have read, but I do tend to get a good overview of the subject I am researching or get a specific fact for a specific purpose.

And finding a subject in Wikipedia is super-easy. Generally it only takes a few seconds, compared with much longer looking in an index, choosing the right volume, and finding the correct page of a print encyclopedia.

Plus Wikipedia has easy-to-use links between subjects. Often a search for one subject leads down a long and interesting path to other, related topics which I might never learn about otherwise.

Finally, it is always up to date. The print version was usually years old but I have found information in Wikipedia which refers to an event which happened just hours before I looked.

So it seems to me that we have a far richer and more accessible information source now than we have ever had in the past. I agree that Wikipedia is susceptible, to a certain extent, to false or biased information, but how often does that really happen? Very rarely in my experience, and a survey done a few years back indicated the number of errors in Wikipedia was fairly similar to Britannica (which is also a web-based source now, anyway).

Do we find ourselves mis-remembering details or completely forgetting something we have just seen on the internet? Sure, but that isn’t much to do with the source. It’s because the human brain is not a very good memory device. If it was true that we are remembering less (and I don’t think it is) that might even be a good thing because it means we have to get our information from a reliable source instead!

And it’s not even that this is a new thing. Warnings about how new technologies are going to make us dumb go back many years. A similar argument was made when mass production of books became possible. Few people would agree with that argument now and few people will agree with it being applied to the internet in future.

What about the variety of ideas issue? Well people who only interact with sources that tell them what they want to believe on-line would very likely do the same thing off-line.

If someone is a fundamentalist Christian, for example, they are very unlikely to be in many situations where they will be exposed to views of atheists or Muslims. They just wouldn’t spend much time with people like that.

In fact, again there might be a greater chance of being exposed to a wider variety of views on-line, although I do agree that the echo chambers of like-minded opinion which Facebook and other sites often become are a problem.

And a similar argument applies to the presumption that most discussion on-line is trivial. I often hear people say something like “I don’t use Twitter because I don’t care what someone had for breakfast”. When I ask how much time they have spent on Twitter I am not surprised to hear that it is usually zero.

Just to give a better idea of what value can come from social media, here are the topics of the top few entries in my current Twitter feed…

I learned that helium is the only element that was discovered in space before being found on Earth. (I already knew that because I am an amateur astronomer, but it is an interesting fact, anyway).

New Scientist reported that the ozone layer recovery will be delayed by chemical leaks (and it had a link if I want details).

ZDNet (a computer news and information site) tweeted the title of an article: “Why I’m still surprised the iPhone didn’t die.” (and again there was a link to the article).

New Scientist also tweeted that a study showed that “Urban house finches use fibres from cigarette butts in their nests to deter parasites” (where else would you get such valuable insights!)

Guardian Science reported that “scientists explain rare phenomenon of ‘nocturnal sun'” (I’ll probably read that one later).

ZDNet reported the latest malware problem with the headline “A massive cyberattack is hitting organisations around the world” (I had already read that article).

Oxford dictionaries tweeted a link to an article about “33 incredible words ending in -ible and -able” (I’ll read that and add it to my interesting English words list).

The Onion (a satirical on-line news site) tweeted a very useful article on “Tips For Choosing The Right Pet” including advice such as “Consider a rabbit for a cuddly, low cost pet you can test your shampoo on”.

Friedrice Nietzsche tweeted “five easy pentacles” (yes, I doubt this person is related to the real Nietzsche, and I also have no idea what it means).

Greenpeace NZ linked to an article “Read the new report into how intensive livestock farming could be endangering our health” (with a link to the report).

Otago Philosophy tweeted that “@Otago philosopher @jamesmaclaurin taking part in the Driverless Future panel session at the Institute of Public Works Engineers Conference” (with a link).

I don’t see a lot of trivial drivel about breakfast there. And where else would I get such an amazing collection of interesting stuff? Sure, I get that because I chose to follow people/organisations like science magazines, philosophers, and computer news sources, but there is clearly nothing inherently useless about Twitter.

So is the internet making us dumb? Well, like any tool or source, if someone is determined to be misinformed and ignorant the internet can certainly help, but it’s also the greatest invention of modern times, the greatest repository of information humanity has ever had, and something that, when treated with appropriate respect, will make you really smart, not dumb!


I’m a Troll

May 19, 2017 Leave a comment

In the old Norwegian fairy tale, Three Billy Goats Gruff, the three goats must try to cross a bridge to get to richer meadows, but are challenged by a fearsome and hideous troll. This guy is both territorial and aggressive, and has a habit of trying to eat anything that dares to cross the bridge.

Is this a good metaphor for our friend, the internet troll? Maybe it is. But the word “troll” is another one on my list of words I try to avoid using, and my reader, Derek Ramsey, indicated he would like to see my reasons why, probably because he (along with many others) thinks I might indulge in a certain amount of trolling activity myself!

Here’s the definition of an internet troll, from Wikipedia: “…a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community … with the intent of provoking readers into an emotional response or of otherwise disrupting normal, on-topic discussion often for the troll’s amusement.”

Having read this I have to admit that I do sometimes stir up trouble just for the fun of it. But even then I do have a higher purpose, and I would like to think that the majority of the time I am accused of “trolling” I am actually trying to make people think in a different way, or trying to make people question their fundamental beliefs, or even offering my opinion with the possibility that it will be proved wrong.

So trolling is more a matter of intent than form, and it is just too easy for people with unpopular or alternative views to be dismissed by the majority because they are “just a troll”.

The first time I was excluded from an on-line community due to “excess trolling” was many years ago when I used to offer “alternative commentary” on a site called “GodTube” (I know it looks like I made that up, but it is a real site). This site offers “Christian, funny, inspirational, music, ministry, educational, and cute videos” with a religious perspective.

Of course, that is fine and people are welcome to have communities which represent their interests, but I also think that the internet makes it too easy to enter an “echo chamber” of like-minded people who exclusively parrot the standard dogma of the group and prevent a wider perspective from emerging.

And then there are the blatant lies. In particular I found a lot of anti-science and anti-atheism material on GodTube that I felt I should offer an alternative perspective on. I knew this would cause some of the effects described in the definition of a troll. I knew it would sow discord, I knew it would upset people, I knew it was inflammatory, and I knew it would likely evoke an emotional response and disrupt normal, on-topic discussion.

And, to be honest, it was to a certain extent, for my own amusement.

Hey, now that I read all that I realise that I am a troll! But that is the whole point. In that situation I don’t think that being a troll was bad, and that’s why I don’t like the word.

After many instances of challenging videos on GodTube which rejected evolution, tried to show that the Christian god was supported by real evidence, pretended that events like the Flood, Exodus, etc were actually real, and generally denigrated atheism and science, I was kicked off the community. I could have created a new account and carried on, but I thought a break would be good and I moved on to other projects. After all, a troll’s work is never done!

More recently I have been un-friended on Facebook for daring to challenge left-wing ideology which I believe is not based on reality. Since I clearly identify with the political left myself this might seem strange, but I think it is even more important that the “team” I support is credible than that the “other team” is. After all, I can just laugh at the idiotic ideas held by conservatives or fundamentalist Christians, but when a similar criticism could be applied to those I would normally support it becomes difficult.

So when a whole bunch of “lefties” are talking about how dreadful society is, in response to yet another post about misogyny based on absolutely zero real-world evidence, I naturally like to point out that they are doing exactly what they accuse conservatives of, and exactly what turns moderates away from their perspective: they are unquestioningly accepting ideology as fact.

It could very well be that the phenomenon is real, but simple-minded support for a silly political doctrine in an echo chamber of far-left political correctness is no proof, and is certainly no way to approach a problem in an honest way.

And that’s where a bit of what could be uncharitably called trolling or more positively called challenging ideas is called for. And that’s what I do. If people don’t like it they can point out where I am wrong (and that has happened on rare occasions) or they can just shut me down because I’m a “troll”. But how does that second approach achieve anything worthwhile?

It doesn’t, and that’s why we need people to challenge established beliefs. We don’t need this in an extreme or dishonest form such as that practiced by a genuine troll, but it is hard to say which is which – when does a fair challenge to majority beliefs become trolling? It’s too hard to say, so the idea of trolling itself is best avoided.

We don’t need to ban the troll, we need to ban the excuse of ignoring someone by labelling them a troll. That’s my point. Who disagrees with that?

The Internet is Best!

March 17, 2017 Leave a comment

I hear a lot of debate about whether the internet is making us dumb, uninformed, or more close-minded. The problems with a lot of these debates are these: first, saying the internet has resulted in the same outcome for everyone is too simplistic; second, these opinions are usually offered with no justification other than it is just “common sense” or “obvious”; and third, they rarely ask whether, whatever the deficiencies of the internet, we are better or worse off than we would be without it.

There is no doubt that some people could be said to be more dumb as the result of their internet use. By “dumb” I mean being badly informed (believing things which are unlikely to be true) or not knowing basic information at all, and by “internet use” I mean all internet services people use to gather information: web sites, blogs, news services, email newsletters, podcasts, videos, etc.

How can this happen when information is so ubiquitous? Well, information isn’t knowledge, or at least it isn’t necessarily truth, and it certainly isn’t always useful. It is like the study (which was unreplicated so should be viewed with some suspicion) showing that people who watch Fox News are worse informed about news than people who watch no news at all.

That study demonstrates three interesting points: first, people can be given information but gather no useful knowledge as a result; second, non-internet sources can be just as bad a source as the internet itself; and third, this study (being unreplicated and politically loaded) might itself be an example of an information source which is potentially misleading.

So clearly any information source can potentially make people dumber. Before the internet people might have been made dumber by reading printed political newsletters, or watching trashy TV, or by listening to a single opinion at the dinner table, or by reading just one type of book.

And some people will mis-use information sources where others will gain a lot by using the same source. Some will get dumber while others get a lot smarter by using the same sources.

And (despite the Fox News study above) if the alternative to having an information source which can be mis-used is having no information source at all, then I think taking the flawed source is the best option.

Anecdotes should be used with extreme caution, but I’m going to provide some anyway, because this is a blog, not a scientific paper. I’m going to say why I think the internet is a good thing from my own, personal perspective.

I’m interested in everything. I don’t have a truly deep knowledge about anything but I like to think I have a better than average knowledge about most things. My hero amongst Greek philosophers is Eratosthenes, who was sometimes known as “Beta”. This was because he was second best at everything (beta is the second letter of the Greek alphabet, which, by the way, I can recite in full).

The internet is a great way to learn a moderate amount about many things. Actually, it’s also a great way to learn a lot about one thing, as long as you are careful about your sources, and, if you are careless, a great way to learn nothing about everything.

I work in a university and I get into many discussions with people who are experts in a wide range of different subjects. Obviously I cannot match an expert’s knowledge about their precise area but I seem to be able to at least have a sensible discussion, and ask meaningful questions.

For example, in recent times I have discussed the political situation in the US, early American punk bands, the use of drones and digital photography in marine science, social science study design, the history of Apple computers, and probably many others I can’t recall right now.

I hate not knowing things, so when I hear a new word, or a new idea, I immediately Google it on my phone. Later, when I have time, I retrieve that search on my tablet or computer and read a bit more about it. I did this recently with the Gibbard-Satterthwaite Theorem (a mathematical theorem which involves the fairness of voting systems) which was mentioned in a podcast I was listening to.

Last night I was randomly browsing YouTube and came across some videos of extreme engines being started and run. I’ve never seen so much flame and smoke, and heard so much awesome noise. But now I know a bit about big and unusual engine designs!

The videos only ran for 5 or 10 minutes each (I watched 3) so you might say they were quite superficial. A proper TV documentary on big engines would probably have lasted an hour and had far more detail, as well as having a more credible source, but even if a documentary like that exists, would I have seen it? Would I have had an hour free? What would have made me seek out such an odd topic?

The great thing about the internet is not necessarily the depth of its information but just how much there is. I could have watched hundreds of movies on big engines if I had the time. And there are more technical, detailed, mathematical treatments of those subjects if I want them. But the key point is that I would probably know nothing about the subject if the internet didn’t exist.

Here are a few other topics I have got interested in thanks to YouTube: maths (the Numberphile series is excellent), debating religion (I’m a sucker for an Atheist Experience video, or anything by Christopher Hitchens), darts (who knew the sport of darts could be so dramatic?), snooker (because that’s what happens after darts), Russian jet fighters, Formula 1 engines, classic British comedy (Fawlty Towers, Father Ted, etc).

What would I do if I wasn’t doing that? Watching conventional TV maybe? Now what were my options there: a local “current affairs” program with the intellectual level of an orangutan (with apologies to our great ape cousins), some frivolous reality TV nonsense, a really un-funny American sitcom? Whatever faults the internet has, it sure is a lot better than any of that!

Pokemon No!

July 30, 2016 Leave a comment

I am a proud computer (and general technology) geek and I see all things geeky as being a big part of my culture. So I don’t really identify much with my nationality of New Zealander, or of traditional Pacific or Maori values (I’m not Maori anyway but many people still think that should be part of my culture), or of the standard interests of my compatriots like rugby, outdoor activities, or beer – well OK, maybe I do identify with the beer!

Being a geek transcends national boundaries and traditional values. I go almost everywhere with my 4 main Apple products: a MacBook Pro laptop, an iPad Pro, an iPhone 6S, and an Apple Watch. They are all brilliant products and I do use them all every day.

For me, the main aspects of being a geek involve “living on the internet” and sourcing most of my information from technology sources, and participating in geek events and activities.

By “living on the internet” I mean that I can’t (or maybe just don’t) go for any period of time (I mean a few hours) without participating in social media, checking internet information sources (general news, new products, etc), or seeking out random material on new subjects from sites such as Quora.

I mainly stay informed not by watching TV (although I still do watch TV news once per day) or listening to radio news (again, I do spend a small amount of time on that too) but by listening to streaming material and podcasts. In fact, podcasts are my main source of information because I can listen to them at any time, avoid most advertising, and listen again to anything which was particularly interesting.

And finally there are the events and activities. Yeah, I mainly mean games. I freely admit that I spend some time every day playing computer games. Sometimes it is only 5 minutes but it is usually more, and sometimes a lot more. Some people think a mature (OK, maybe getting on towards “old”) person like me shouldn’t be doing that and that I should “grow up”. Needless to say I think these people are talking crap.

And so we come to the main subject of this post, the latest computer (or more accurately phone and tablet) game phenomenon: Pokemon GO. The game was released first in the US, Australia, and New Zealand and instantly became a huge hit. Of course, since it was a major new component of geek culture, I felt I should be playing it, but I didn’t want it to become a big obsession.

And I think I did well avoiding it for almost 3 days, but yes, I’m playing it now, with moderate intensity (level 17 after a couple of weeks). Today I explained the gameplay to an older person who never plays games and he asked: but what is the point? Well, there is no real, practical point of course, but I could ask that about a lot of things.

For example, if an alien landed and I took him to a rugby game he might ask what’s the point of those guys running around throwing a ball to each other. Obviously, there’s no point. And what’s the point of sitting in front of a TV and watching some tripe like “The Block” or some crappy soap opera? Again, there’s no point. In reality, what’s the point of living? Well, let’s not go there until I do another post about philosophy.

So anyone who criticises playing computer games because they have no practical point should think a little bit more about what they are really saying and why.

And there’s another factor in all of this that bugs me too. It’s the fact that almost universally the people who criticise games like Pokemon GO not only have never played them but know almost nothing about them either. They are just letting their petty biases and ignorance inform their opinions. It’s quite pathetic, really.

So to all those people who criticise me for playing Pokemon GO, Real Racing 3 (level 162 after many years play, and yes, it is the greatest game of all time), Clash of Clans (level 110 after 4 years play), and a few others, I say get the hell over it. And if you do want to criticise me just get a bit better informed first. And maybe you should stop all those pointless habits you have (and that I don’t criticise you for) like watching junk programs on TV.

And now, if you’ll excuse me, I’ve got to go find some more Pokemon. Gotta catch ’em all!

The Best News Source

February 21, 2016 Leave a comment

As you may know (and there’s no way you couldn’t if you follow this blog) I listen to a lot of podcasts. The great thing about these is that they can be created and distributed quite easily and because of the history of the technology they tend to be both created and used by technically and scientifically literate people.

But many people consider them a lesser source of news and information – lesser than traditional sources like TV and radio news for example. But are they? I don’t think so.

I consume quite a lot of information on many topics and from many sources. Some of the topics I would consider myself quite knowledgeable about and others not so much. The thing is, that when I listen to material on topics I know a fair bit about from “conventional” sources – even fairly respectable sources like New Zealand’s RNZ National – I notice a lot of errors. I don’t tend to notice this so much with internet sources like podcasts.

There are some complicating factors here. First, most of the RNZ material I listen to is actually in the form of podcasts, but I don’t count them in that category because they are really just recordings of radio items. The “true” podcasts are audio (or sometimes video) programs created specifically for that purpose. And it’s the true podcasts I am promoting as a superior source of information. Second, there are a lot of terrible podcasts, which are probably even less accurate than the traditional sources, but those aren’t the ones I’m listening to.

So it seems to me that if I listen to an item from a traditional source about computers, or astronomy, or an area of science I’m interested in, and almost always notice errors, then it’s likely that there are errors in all the other material too. I just don’t notice it so much in relation to the other topics because I’m not expert enough on those.

So it is a bit of a concern, isn’t it? The sources of news and information that most people use are not accurate.

I think there are a few factors which have led to this unfortunate situation. First, there is a strong emphasis on providing unchallenging, simplified, entertaining presentations of information today. Second, many items in mainstream sources (TV, radio, newspapers) are created by journalists who may or may not have a good level of expertise in the subject area they are covering. Third, most mainstream sources are commercial and many have a clear bias.

Podcasts, on the other hand, tend to be created by small groups or individuals (although more and more are being created by larger companies) who balance entertainment with information, are experts in the area they have decided to create podcasts about, and don’t have a strong commercial incentive in what they do.

Of course, other internet information sources like blogs – and I mean blogs which concentrate on providing accurate factual information rather than those (like mine) which mainly present opinions – are also good sources. I prefer podcasts simply for the convenience of being able to consume them while doing other stuff like driving, walking around, mowing the lawns, etc.

I think it’s inevitable that traditional news and information sources will continue to gradually decline in both number and quality. News rooms are being downsized to save money and as big business takes over it will inevitably emphasise profit over quality. The internet is probably the biggest cause of this decline (although some people debate that point) but luckily it is also the internet which can provide a solution.

Sure, look on the internet and some of the sources are truly awful but that is also the case with traditional sources. For example, a few years back a study showed that people who watch Fox News (a US channel mainly associated with the political right) are less well informed than people who don’t watch any news at all!

If people are determined to be ignorant on a topic (I could mention climate change as an example) they will find plenty of material supporting whatever state of ignorance they wish to attain on both the internet and other sources (more so in larger countries and not so much in New Zealand because we are too small to have many obviously biased sources). But if people want to really know the truth I would suggest that the highest quality internet sources are where to go.

But that’s the problem: how to tell what is good and what is bad. I believe Google is looking at a reliability and quality rating system for web sites. If that is well done (and in search most of what Google does is brilliant) then at least that will be a good tool for those who actually want to know the truth.

As for those who want to remain ignorant, maybe they will need an alternative search engine which takes them to sites which reinforce their ignorance. There’s already an example of a similar service. It’s an alternative to Wikipedia called “Conservapedia” which is described as a “Wiki encyclopaedia with articles written from a Christian fundamentalist viewpoint” – in other words, it’s full of lies.

Yeah, I know Wikipedia isn’t perfect, and neither are podcasts or blogs. But at least the best examples of those start from a perspective of wanting to present good information, unlike many of the options.

The internet isn’t perfect, but it’s the best we have.

The Enigma

November 4, 2015 Leave a comment

I seem to have had a theme of blogging about people recently. First it was Grace Hopper, then Richard Feynman, and today I’m discussing Alan Turing, the famous computer pioneer, mathematician, and World War II code breaker.

I am currently listening to an audio book biography of his life called “Alan Turing: The Enigma” (a reference to his enigmatic personality and the German code he broke: Enigma) and the thing which I have found most interesting is the way he advocated for, and in some cases invented, many of the ideas and technologies we have today. He did this work after the code breaking he is probably best known for.

So I’ll list a few of his innovations here. I should say that he can’t claim sole credit for some of these because he often worked by himself and “reinvented” ideas (often in a better form) which other early theoreticians and inventors had already found. For example, some ideas go back to Charles Babbage who didn’t have the theory or the technology to exploit them at the time.

Anyway, here’s the list…

Instructions and data should both be stored in main memory.

Many people see these two as being quite separate and on many machines the instructions would be read linearly from paper tape or cards and data would be stored in fast, random access memory. By putting the code in memory too it could be accessed much more quickly, plus there were two other benefits: any instruction could be accessed at any time so conditional jumps and loops could be done, and instructions could be modified by other instructions (see below for details).

It’s just taken for granted today that code is loaded into working memory (RAM). That’s (mainly) what’s happening when you launch a program (the program’s instructions are being copied from the disk to main memory) or boot your system (the operating system is being loaded into memory) but in the early days (the 1940s and 1950s) this wasn’t obvious.
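The stored-program idea is easy to sketch. Here is a minimal, invented machine (the opcode names and the memory layout are my own illustration, not any real instruction set) where the code and the data it works on sit in the same memory array:

```python
# Minimal sketch of the stored-program idea: instructions and data live in
# the same memory, so the machine fetches its next instruction from memory
# just as it fetches data. Opcodes and layout here are invented.

memory = [
    ("LOAD", 6),    # 0: put memory[6] into the accumulator
    ("ADD", 7),     # 1: add memory[7] to the accumulator
    ("STORE", 8),   # 2: write the accumulator back to memory[8]
    ("HALT", 0),    # 3: stop
    None, None,     # 4-5: unused
    40, 2, 0,       # 6-8: data, sharing the same memory as the code
]

acc, pc = 0, 0
while True:
    op, arg = memory[pc]   # fetch the instruction from memory, like data
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[8])  # 42
```

Because the program lives in ordinary memory, nothing stops an instruction from reading, or even overwriting, another instruction, which is exactly what makes the jumps and self-modification Turing exploited possible.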

Programs stored in main memory allow conditional jumps and loops.

Conditional statements and loops allow a lot of efficiency and flexibility. A conditional statement allows the computer to run a set of instructions if a certain condition occurs. For example it could test if a bank balance is less than zero and show a warning if it is. Loops allow a chunk of code to be executed multiple times. For example, survey results for 100 participants could be analysed one at a time by skipping back to the start of the analysis code 100 times.

Any modern programmer would find it bizarre not to have access to conditional statements and loops, but some early machines didn’t have these abilities.
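The two examples in the text, a bank-balance test and a loop over survey results, can be sketched in a few lines (the balances and scores are made-up illustrative data):

```python
# A sketch of the two ideas from the text: a conditional test and a loop.
# The account balance and survey scores are invented for illustration.

def check_balance(balance):
    """Conditional: warn only when the balance is below zero."""
    if balance < 0:
        return "warning: overdrawn"
    return "ok"

def analyse_survey(results):
    """Loop: process each participant's result one at a time."""
    total = 0
    for score in results:   # the machine "skips back" once per participant
        total += score
    return total / len(results)

print(check_balance(-50))          # warning: overdrawn
print(analyse_survey([3, 4, 5]))   # 4.0
```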

Code in memory allows self modifying code.

If the programming code is in main memory it can be read and written freely, just like any other data. This allows instructions to be modified by other instructions. Turing used this for incrementing memory locations and other simple stuff but potentially it can be used for far more complex tasks.

I can remember when I did computer science being told that self modifying code was a bad thing because it made code hard to understand and debugging difficult, but it has its place and I use it a lot in modern interpreted languages.
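As a rough modern analogue, an interpreted language can build a string of source code at run time and then execute it. This sketch (the function names are my own invention) generates and runs a new function on the fly:

```python
# In an interpreted language, code can construct and execute new code at
# run time, a tame descendant of true self-modifying code. Here a string
# of source is built, compiled, and run to define a function dynamically.

def make_incrementer(step):
    # Build the source text for a new function; 'step' is baked into it.
    source = f"def bump(x):\n    return x + {step}\n"
    namespace = {}
    exec(source, namespace)   # compile and run the generated code
    return namespace["bump"]

bump3 = make_incrementer(3)
print(bump3(10))  # 13
```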

Simple, fast processing units and more memory is the best strategy.

Some early American computer designs tried to provide a lot of complex operations built into the main processing unit. This made them more complicated and required more valves (vacuum tubes; this was before transistors or integrated circuits, of course) for the main processor and fewer for memory. Turing advocated simpler instruction sets which would allow for more memory and more efficient execution, and the programmer could write the complex code using simpler instructions.

This sounds very much like the modern concept of RISC (reduced instruction set computing) processors which provide a limited range of very fast, efficient instructions and use the extra space on the CPU for cache memory. The more complex instructions are generated by combining simpler ones by the compiler.
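To illustrate composing a complex operation from simple ones, here is multiplication built from nothing but add, shift, and a bit test, roughly the way a compiler might expand it for a machine with no multiply instruction:

```python
# Sketch: a "complex" operation (multiply) built from "simple" ones
# (add, shift, bit test), the way code targets a reduced instruction set.

def multiply(a, b):
    """Multiply non-negative integers using only add, shift, and test."""
    result = 0
    while b:
        if b & 1:        # test the lowest bit of b
            result += a  # add
        a <<= 1          # shift left: a * 2
        b >>= 1          # shift right: b // 2
    return result

print(multiply(6, 7))  # 42
```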

Microcode and pipelines.

Turing’s computer, the ACE, ran at 1 MHz (one million cycles per second) which was the fastest of any machine at the time. But interpreting each instruction (figuring out what it meant, like where the data should come from) took several cycles and actually carrying out the function took several more. To make things go faster he interpreted the next instruction while the current one was being executed.

Modern processors have a “pipeline” where several stages of processing can be performed simultaneously. Today we also have deep, multiple pipelines (several streams of code, each broken into many steps, all processed at once) and branch prediction (figuring out which instruction will be needed next), but the basic idea is the same.
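Some toy arithmetic shows why the overlap matters. Assuming made-up figures of 3 cycles to interpret an instruction and 3 cycles to execute it:

```python
# Compare a machine that interprets and executes strictly in
# sequence against one that interprets the next instruction
# while the current one is executing (Turing's trick).
INTERPRET, EXECUTE = 3, 3   # cycles per phase (illustrative guesses)
n = 100                     # instructions to run

sequential = n * (INTERPRET + EXECUTE)
# Overlapped: after the first interpret, each execute phase hides
# the next interpret, so only the executes (plus one interpret) remain.
overlapped = INTERPRET + n * EXECUTE

print(sequential, overlapped)  # 600 303
```

Nearly a factor of two for free, which is exactly why every modern CPU pipelines.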

Subroutines and libraries.

Most CPUs could only do very basic things. For example they could add whole numbers (like 42) but not multiply at all or work with real numbers (those with decimals, like 3.33). But many programs needed these operations, so instead of reinventing them over and over for each new program Turing created libraries of subroutines.

A library is a collection of useful chunks of code to do particular things, like work with real numbers. Modern processors have this function built in but more complex tasks, like reading a packet of data from the internet, still require long sequences of code.
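Here is a sketch of what such a library subroutine might look like: multiplication built from nothing but the addition the hardware does provide, using the shift-and-add method. This is my own illustrative version, not Turing's actual code:

```python
def multiply(a, b):
    """Multiply two non-negative whole numbers using only
    addition and doubling (shift-and-add)."""
    result = 0
    while b > 0:
        if b & 1:           # is the lowest bit of b set?
            result += a     # add the current doubled copy of a
        a += a              # double a (shift left one bit)
        b >>= 1             # halve b (shift right one bit)
    return result

print(multiply(6, 7))  # 42
```

A whole collection of routines like this — multiply, divide, real-number arithmetic — is all a library really is.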

Today computers typically have hundreds of libraries of thousands of subroutines (a rather old term for a chunk of code which can perform a task then return to what the computer was doing before it was run) and in many ways that is mostly what a modern operating system is: a collection of useful libraries.

Computers could be accessed remotely.

Turing thought that since there were so few computers around, it made sense to let people access one by remote control when they needed it. He thought this could be done with special devices attached to the phone system.

Simple modems and other serial interfaces allowed this, and now we have the internet. Even though computers are no longer rare (I have 14 conventional computers at home plus many other devices, like iPads and iPhones, which are effectively computers) it is still useful to be able to access other computers easily.

Computers for entertainment.

Turing thought that “ladies would take their computers to the park and say ‘my little computer said something so funny this morning'” (or something similar to this).

I couldn’t help but think of this when my daughter showed me an amusing cat video on her iPhone today. Yes, ladies carry their computers everywhere and are constantly entertained by the funny little things they say.

No one’s perfect.

So what did he get wrong? Well, a few things, actually. For example, he wasn’t a great enthusiast for making the computer easy to use. Input had to be entered, and output read, in base 32 expressed using a series of obscure symbols, and he wrote binary with the least significant bit first.

Perhaps the most important change in the last thirty years has been making computers easier to use. Turing can’t claim much credit in this trend. Still, I see where he was coming from: if it’s hard to build a computer and hard to write code, it should be hard to use them too!

Just a Second

August 25, 2015 Leave a comment

I recently read through a list of interesting things which take just one second to complete. Some of the information was mind-boggling and some was just worrying. So let’s take a look at some of these statistics now…

Statistic 1: Every second, on our planet 4.3 people are born and 1.8 die.

The difference is 2.5 per second meaning the human population is still increasing quite rapidly. The problem of overpopulation used to be a major discussion point but in the last few decades it seems to be increasingly ignored.
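Scaling that up is easy to check:

```python
# Net population growth, scaled from one second to one year.
growth_per_second = 4.3 - 1.8               # births minus deaths
seconds_per_year = 365 * 24 * 60 * 60       # 31,536,000
growth_per_year = growth_per_second * seconds_per_year
print(round(growth_per_year))  # 78840000 — roughly 79 million extra people a year
```

That is the equivalent of adding a country larger than Germany to the planet every year.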

My theory is that modern economies require growth and that is easily achieved through increasing population. But that is a stupid approach because it can’t last, and as the population increases there will always be problems with allocating fixed resources to an increasing number of people. Again we have let a particular form of economics become the master rather than shaping it to our requirements.

Statistic 2: Warren Buffett, the world’s highest earner, makes $402 in 1 second, while someone on the global poverty line makes $0.0000144.
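Multiplying both figures out to a full year puts the gap in perspective:

```python
# Scale both per-second incomes up to a year.
seconds_per_year = 365 * 24 * 60 * 60
top_per_year = 402 * seconds_per_year            # about $12.7 billion
poverty_per_year = 0.0000144 * seconds_per_year  # about $454
print(top_per_year, round(poverty_per_year))
```

One income is roughly 28 million times the other.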

Inequality is being recognised as one of the major problems of our era. A good case can be made that some people should be paid more because they have greater responsibility, work harder, or make a greater contribution to society. But when a tiny fraction of the population makes what can only be described as an obscene income, there needs to be rational change before the exploited majority forces it.

Statistic 3: The Large Hadron Collider collects 6,000,000,000,000,000 bytes of data in just 1 second.

In fact I’m fairly sure this isn’t true, because there is just no way to capture that much data! According to CERN’s web site the LHC generates 30 petabytes per year, which means, on average, it generates about one gigabyte per second. Of course the data is generated in short bursts, but at 6 petabytes per second it would take only 5 seconds to create the whole 30 petabytes for the year.
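The check is simple arithmetic:

```python
# Average LHC data rate implied by 30 petabytes per year.
petabyte = 10**15
per_year = 30 * petabyte
seconds_per_year = 365 * 24 * 60 * 60
per_second = per_year / seconds_per_year
print(per_second / 10**9)  # about 0.95 gigabytes per second

# And if it really captured 6 PB every second, the whole yearly
# total would arrive in just 5 seconds:
print(per_year / (6 * petabyte))  # 5.0
```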

But whatever the real number is, it is an astonishing amount of data, and the computer equipment at the LHC is just another amazing part of a totally incredible project, which I believe would be on the short list of humanity’s greatest engineering achievements.

Statistic 4: The International Space Station travels 7700 km in 1 second during its orbit around Earth, and New Horizons, the fastest spacecraft ever, takes just 1 second to travel 16.26 km.

Well there’s something wrong here, obviously. These numbers did come from a source I would normally trust but I do wonder whether they checked them very thoroughly!

The ISS orbits a few hundred kilometers up and (if my memory is correct) takes about 90 minutes to complete an orbit. That means the circumference of its orbit is the diameter of the Earth (plus twice its altitude) multiplied by pi, which is about 41000 kilometers. Dividing that out gives me about 7.6 kilometers per second. Maybe they meant 7700 meters, not kilometers?
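The same calculation in Python, with round numbers (a 400 km altitude and a 90 minute period, both from memory):

```python
import math

# Rough orbital speed of the ISS: circumference of the orbit
# divided by the orbital period.
earth_diameter_km = 12742
altitude_km = 400                      # "a few hundred kilometers up"
circumference = math.pi * (earth_diameter_km + 2 * altitude_km)
period_seconds = 90 * 60               # about 90 minutes per orbit
speed = circumference / period_seconds
print(round(speed, 1))  # about 7.9 km/s — so the statistic was out by a factor of 1000
```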

Still, that’s pretty quick, but not as fast as New Horizons. That mission to Pluto, which took 10 years even at that speed, is another impressive technical achievement. So much could have gone wrong in that time and the audacity of shutting down communications during the brief flyby (which only lasted a day after 10 years of travelling) was extraordinary.

Statistic 5: 48,745 Google searches happen in just 1 second, and 2,393,470 emails are sent in just 1 second.

Considering each search is potentially looking for text on any page on the whole web – a total of about 5 billion pages – this is an extraordinary achievement. If you estimate 2000 words per page (a guess based on a few random pages I looked at) times 5 billion pages times about 50,000 searches, you get 500 quadrillion word comparisons per second, 24 hours a day, every day.
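Multiplying those estimates out:

```python
# Back-of-envelope "brute force" search cost: words per page,
# times pages on the web, times searches per second.
words_per_page = 2000          # a guess, as above
pages = 5 * 10**9
searches_per_second = 50000    # rounded from 48,745
word_comparisons = words_per_page * pages * searches_per_second
print(word_comparisons)  # 500000000000000000, i.e. 5 x 10^17
```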

Of course Google don’t use a simple “brute force” algorithm like this but I think Google search (and to a lesser extent others) is the one service which has made the internet genuinely useful.

And even though there are now plenty of alternatives to email for communicating on the internet, that is still a lot of messages. Add the tweets, Skype calls, Messenger messages, and other forms of communication and it is even more impressive, no doubt.

Statistic 6: The world’s fastest computer performs 33,860,000,000,000,000 calculations in just 1 second.

That is almost 34 quadrillion floating point calculations in a second. In the past I have used comparisons with how long this would take to do by hand so this time I will use an alternative. Imagine each calculation is printed on one line in a small font on a paper tape (quaint concept, I know). How long would the tape be?

Well that would be 3 (let’s say the printed line is 3 mm high) times 34 x 10^15 divided by 1000 (to get meters) then divided by 1000 again (to get kilometers). That is 102 billion kilometers. The distance to the Sun is 150 million kilometers so the tape would reach to the Sun and back 340 times! That’s just one second of calculations for one computer.
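The same tape arithmetic, spelled out:

```python
# One 3 mm printed line per calculation, using the rounded figure
# of 34 quadrillion calculations per second.
calculations = 34 * 10**15
mm = 3 * calculations           # tape length in millimetres
km = mm / 1000 / 1000           # to metres, then to kilometres
sun_km = 150_000_000            # Earth-Sun distance
round_trips = km / (2 * sun_km) # to the Sun and back
print(round(km / 10**9), round(round_trips))  # 102 340
```

102 billion kilometres of tape, or 340 round trips to the Sun, per second.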

So yes, you can get a lot done in a second. Makes me feel kind of bad that it took me half an hour to produce just this one blog post!