Archive

Posts Tagged ‘computers’

Making Us Smart

June 28, 2017

Many people think the internet is making us dumb. They think we don’t use our memory any more because all the information we need is on the web in places like Wikipedia. They think we don’t get exposed to a variety of ideas because we only visit places which share the views we already hold. And they think we spend too much time on social media discussing what we had for breakfast.

Is any of this stuff true? Well, in some cases it is. Some people live very superficial lives in the virtual world but I suspect those same people are just naturally superficial and would act exactly the same way in the real world.

Take memory, for example: even before the internet became popular, very few people remembered a lot of facts. Back then, some people owned the print version of the Encyclopaedia Britannica, and presumably these were people who valued knowledge, because the print version wasn’t cheap!

But a survey run by the company found that the average owner only used that reference once per year. If they only referred to an encyclopedia once a year it doesn’t give them much to remember really, does it?

Today I probably refer to Wikipedia multiple times per day. Sure, I don’t remember many of the details of what I have read, but I do tend to get a good overview of the subject I am researching, or find a specific fact for a specific purpose.

And finding a subject in Wikipedia is super-easy. Generally it only takes a few seconds, compared with the much slower process of searching an index, choosing the right volume, and finding the correct page in a print encyclopedia.

Plus Wikipedia has easy-to-use linking between subjects. Often a search for one subject leads down a long and interesting path to other, related topics which I might never learn about otherwise.

Finally, it is always up to date. The print version was usually years old but I have found information in Wikipedia which refers to an event which happened just hours before I looked.

So it seems to me that we have a far richer and more accessible information source now than we have ever had in the past. I agree that Wikipedia is susceptible, to a certain extent, to false or biased information, but how often does that really happen? Very rarely in my experience, and a survey done a few years back indicated that the number of errors in Wikipedia was fairly similar to Britannica (which is also a web-based source now, anyway).

Do we find ourselves mis-remembering details or completely forgetting something we have just seen on the internet? Sure, but that doesn’t have much to do with the source. It’s because the human brain is not a very good memory device. If it were true that we are remembering less (and I don’t think it is), that might even be a good thing, because it means we have to get our information from a reliable source instead!

And this isn’t even a new concern. Warnings about how new technologies are going to make us dumb go back many years. A similar argument was made when the mass production of books became possible. Few people would agree with that argument now, and few will agree with it when it is applied to the internet in the future.

What about the variety of ideas issue? Well people who only interact with sources that tell them what they want to believe on-line would very likely do the same thing off-line.

If someone is a fundamentalist Christian, for example, they are very unlikely to be in many situations where they will be exposed to views of atheists or Muslims. They just wouldn’t spend much time with people like that.

In fact, again there might be a greater chance of being exposed to a wider variety of views on-line, although I do agree that the echo chambers of like-minded opinion that Facebook and other sites often become are a problem.

And a similar argument applies to the presumption that most discussion on-line is trivial. I often hear people say something like “I don’t use Twitter because I don’t care what someone had for breakfast”. When I ask how much time they have spent on Twitter I am not surprised to hear that it is usually zero.

Just to give a better idea of what value can come from social media, here are the topics of the top few entries in my current Twitter feed…

I learned that helium is the only element that was discovered in space before it was found on Earth. (I already knew that because I am an amateur astronomer, but it is an interesting fact anyway.)

New Scientist reported that the ozone layer recovery will be delayed by chemical leaks (and it had a link if I want details).

ZDNet (a computer news and information site) tweeted the title of an article: “Why I’m still surprised the iPhone didn’t die.” (and again there was a link to the article).

New Scientist also tweeted that a study showed that “Urban house finches use fibres from cigarette butts in their nests to deter parasites” (where else would you get such valuable insights!)

Guardian Science reported that “scientists explain rare phenomenon of ‘nocturnal sun'” (I’ll probably read that one later).

ZDNet reported the latest malware problem with the headline “A massive cyberattack is hitting organisations around the world” (I had already read that article)

Oxford dictionaries tweeted a link to an article about “33 incredible words ending in -ible and -able” (I’ll read that and add it to my interesting English words list).

The Onion (a satirical on-line news site) tweeted a very useful article on “Tips For Choosing The Right Pet” including advice such as “Consider a rabbit for a cuddly, low cost pet you can test your shampoo on”.

Friedrice Nietzsche tweeted “five easy pentacles” (yes, I doubt this person is related to the real Nietzsche, and I also have no idea what it means).

Greenpeace NZ linked to an article “Read the new report into how intensive livestock farming could be endangering our health” (with a link to the report).

Otago Philosophy tweeted that “@Otago philosopher @jamesmaclaurin taking part in the Driverless Future panel session at the Institute of Public Works Engineers Conference” (with a link).

I don’t see a lot of trivial drivel about breakfast there. And where else would I get such an amazing collection of interesting stuff? Sure, I get that because I chose to follow people/organisations like science magazines, philosophers, and computer news sources, but there is clearly nothing inherently useless about Twitter.

So is the internet making us dumb? Well, like any tool or source, if someone is determined to be misinformed and ignorant the internet can certainly help, but it’s also the greatest invention of modern times, the greatest repository of information humanity has ever had, and something that, when treated with appropriate respect, will make you really smart, not dumb!

Judgement Day

April 6, 2017

I have made a few comments recently on the theme of the “next great change” in society, when we will transition from the industrial age to the information age. I’m sure a lot of people think my ideas are just crazy dreams, and I sometimes wonder whether that is the case myself, but I was interested to see that the famous science historian, James Burke, said very similar things in a recent podcast he was featured in.

Our current society is concerned with distributing resources in an environment of scarcity, controlling the means of production of those resources, and recruiting the labour necessary for production on the best possible terms for the people in control.

The inevitable result of this is a deeply divided society where a tiny fraction of the people get most of the wealth available, and we certainly see that today in the grossly uneven ownership of wealth by the top 1%.

But let’s look at the massive changes which are about to make everything we currently know obsolete. Some of this is my opinion of what will happen in the next 20 to 30 years, and some is from the Burke podcast, where he takes a more extreme view than I do, but one which might also be placed a bit further in the future.

The basic point is that there will be no shortages. Chemical synthesis and 3D printing will provide any materials needed. Efficient power generation (it’s unclear exactly what that will be, but it could be ultra-efficient solar, improved nuclear such as Thorium, or the ultimate power source: fusion) will provide all the power needed. Robotics will provide all the physical labour. And artificial intelligence will provide the creativity, invention, and overview.

Once a robot is made which can make more robots (of course with small improvements with each generation controlled by an AI) there is no need for a human to ever make anything again. And if the thinking machines (AIs) can design and improve themselves then everything changes because the rate of improvement would inevitably escalate exponentially.

Within a relatively short period of time there will be literally nothing left for humans to do.

And when that happens all our political structures, our economies, and even our value systems will become meaningless.

To many this sounds like a bleak prospect, and I agree to some extent. But what’s the point of resisting something which is inevitable? The Luddites resisted change which they saw as negative – and they were right in many ways – but they couldn’t stop the industrialisation process once it got started.

No doubt vested interests will try to stop these changes, or at least try to maintain control of them, but that just won’t be possible because there will be no point of leverage for them to base their power on. Who cares who has the most money when everything is free?

So getting back to that point about humans having nothing to do: our role will very much depend on how the machines feel about us, because I’m sure that eventually we will no longer be able to control our ultra-intelligent creations.

If the machines decided that humans were pointless maybe they would just eliminate us, and maybe that would be the kindest thing. Or maybe they would find there is something about organic life which synthetic life couldn’t match, so it might still have some value. Or maybe they would just want to keep humans around because we are self-aware and deserve a certain level of respect.

I do have to say that if I was an ultra-intelligent machine and looked around at how humans have behaved both in the past and present, I might be tempted to take the first option! Maybe it’s time for us to start behaving a little bit better so that when we are judged by our new synthetic masters we might be allowed to live.

It’s all rather Biblical, actually. Maybe there really will be a judgement day, just like Christianity tells us. But the type of god doing the judging won’t be the one imagined by the writers of any religious text. For a more accurate fictional appraisal of that future we should look at science fiction, not theology!

What is Reality?

March 21, 2017

You are probably reading this post on a computer, tablet, or phone with a graphical user interface. You click or tap an icon and something happens. You probably think of that icon as having some meaning, some functionality, some deeper purpose. But, of course, the icon is just a representation for the code that the device is running. Under the surface the nature of reality is vastly more complex and doesn’t bear the slightest relationship to the graphical elements you interact with.

There’s nothing too controversial in that statement, but what if the whole universe could be looked at in a similar way? In a recent podcast I heard an interview with Donald Hoffman, a professor of cognitive science at the University of California, Irvine. He claims that our models of reality are just that: models. He also claims that mathematical modelling indicates that the chance our models are accurate is precisely zero.

There are all sorts of problems with this perspective, of course.

First, there is solipsism, which tells us that the only thing we can know for sure is that we, as individuals, exist. If we didn’t exist we couldn’t have the thought about existence, but the reality of anything else could be seen as a delusion. Ultimately I think this is impossible to refute: there is no way to prove that what I sense is real and not a delusion.

While I must accept this idea as being ultimately true, I also have to reject it on the basis that it is ultimately pointless. If solipsism is true then pursuing ideas or understanding of anything is futile. So our whole basis of reality relies on something which can’t be shown to be true, but has to be accepted anyway, just to make any sense of the world at all. That’s kind of awkward!

Then there is the fact that the same claim of zero accuracy for models of the world surely applies to his own models of those models. So, if our models of reality are inaccurate, does that not mean that the models we devise to study those models are also inaccurate?

And if the models of models are inaccurate, does that mean there is a chance that the models themselves aren’t? We really can’t know for sure.

I would also ask what “zero accuracy” actually means. If we get past solipsism and assume that there is a reality that we can access in some way, even if it isn’t perfect, how close to reality do we have to be to maintain some claim of accuracy?

And the idea of zero accuracy is surely absurd because our models of reality allow us to function predictably. I can tap keys on my computer and have words appear on the screen. That involves so much understanding of reality that it is deceptive to suggest that there is zero accuracy involved. There must be a degree of accuracy sufficient to allow a predictable outcome, at the level of my fingers making contact with the keys all the way down to the quantum effects working within the transistors in the computer’s processor.

So if my perception of reality does resemble the icon metaphor on a computer then it must be a really good metaphor that represents the underlying truth quite well.

There are areas where we have good reason to believe our models are quite inaccurate, though. Quantum physics seems to provide an example of where incredibly precise results can be gained but the underlying theory requires apparently weird and unlikely rationalisations, like the many worlds hypothesis.

So, maybe there are situations where the icons are no longer sufficient and maybe we never will see the underlying code.

The Internet is Best!

March 17, 2017

I hear a lot of debate about whether the internet is making us dumb, uninformed, or more closed-minded. The problems with a lot of these debates are these: first, saying the internet has resulted in the same outcome for everyone is too simplistic; second, these opinions are usually offered with no justification other than that it is just “common sense” or “obvious”; and third, whatever the deficiencies of the internet, is it better or worse than having no internet at all?

There is no doubt that some people could be said to be more dumb as the result of their internet use. By “dumb” I mean being badly informed (believing things which are unlikely to be true) or not knowing basic information at all, and by “internet use” I mean all internet services people use to gather information: web sites, blogs, news services, email newsletters, podcasts, videos, etc.

How can this happen when information is so ubiquitous? Well information isn’t knowledge, or at least it isn’t necessarily truth, and it certainly isn’t always useful. It is like the study (which was unreplicated so should be viewed with some suspicion) showing that people who watch Fox News are worse informed about news than people who watch no news at all.

That study demonstrates three interesting points: first, people can be given information but gather no useful knowledge as a result; second, non-internet sources can be just as bad a source as the internet itself; and third, this study (being unreplicated and politically loaded) might itself be an example of an information source which is potentially misleading.

So clearly any information source can potentially make people dumber. Before the internet people might have been made dumber by reading printed political newsletters, or watching trashy TV, or by listening to a single opinion at the dinner table, or by reading just one type of book.

And some people will mis-use information sources where others will gain a lot from the very same sources: some will get dumber while others get a lot smarter.

And (despite the Fox News study above) if the alternative to having an information source which can be mis-used is having no information source at all, then I think taking the flawed source is the best option.

Anecdotes should be used with extreme caution, but I’m going to provide some anyway, because this is a blog, not a scientific paper. I’m going to say why I think the internet is a good thing from my own, personal perspective.

I’m interested in everything. I don’t have a truly deep knowledge of anything, but I like to think I have a better than average knowledge of most things. My hero amongst Greek philosophers is Eratosthenes, who was sometimes known as “Beta”. This was because he was second best at everything (beta being the second letter of the Greek alphabet, which I can recite in full, by the way).

The internet is a great way to learn a moderate amount about many things. Actually, it’s also a great way to learn a lot about one thing too, as long as you are careful about your sources, and it is a great way to learn nothing about everything.

I work in a university and I get into many discussions with people who are experts in a wide range of different subjects. Obviously I cannot match an expert’s knowledge about their precise area but I seem to be able to at least have a sensible discussion, and ask meaningful questions.

For example, in recent times I have discussed the political situation in the US, early American punk bands, the use of drones and digital photography in marine science, social science study design, the history of Apple computers, and probably many others I can’t recall right now.

I hate not knowing things, so when I hear a new word, or a new idea, I immediately Google it on my phone. Later, when I have time, I retrieve that search on my tablet or computer and read a bit more about it. I did this recently with the Gibbard–Satterthwaite theorem (a mathematical result showing that essentially every fair voting system is open to tactical voting), which was mentioned in a podcast I was listening to.

Last night I was randomly browsing YouTube and came across some videos of extreme engines being started and run. I’ve never seen so much flame and smoke, and heard so much awesome noise. But now I know a bit about big and unusual engine designs!

The videos only ran for 5 or 10 minutes each (I watched 3) so you might say they were quite superficial. A proper TV documentary on big engines would probably have lasted an hour and had far more detail, as well as having a more credible source, but even if a documentary like that exists, would I have seen it? Would I have had an hour free? What would have made me seek out such an odd topic?

The great thing about the internet is not necessarily the depth of its information but just how much there is. I could have watched hundreds of movies on big engines if I had the time. And there are more technical, detailed, mathematical treatments of those subjects if I want them. But the key point is that I would probably know nothing about the subject if the internet didn’t exist.

Here are a few other topics I have got interested in thanks to YouTube: maths (the Numberphile series is excellent), debating religion (I’m a sucker for an Atheist Experience video, or anything by Christopher Hitchens), darts (who knew the sport of darts could be so dramatic?), snooker (because that’s what happens after darts), Russian jet fighters, Formula 1 engines, and classic British comedy (Fawlty Towers, Father Ted, etc).

What would I do if I wasn’t doing that? Watching conventional TV maybe? Now what were my options there: a local “current affairs” program with the intellectual level of an orangutan (with apologies to our great ape cousins), some frivolous reality TV nonsense, a really un-funny American sitcom? Whatever faults the internet has, it sure is a lot better than any of that!

Bigger is Better… Not

January 23, 2017

I deal with several larger companies for IT services and products. I buy products from them, I buy services from them, and I get support from them when things go wrong. I also deal with smaller companies, especially for specialised software and other products, and sometimes I need support from them as well. I think, after many years, I have noticed some general patterns in the way these larger and smaller companies operate.

Basically, it’s simple: bigger is better. No, I’m joking: it’s the opposite!

Obviously I am just talking about personal experience and anecdotes here, but this is a blog, not a scientific paper, so I’m going to proceed with that understanding.

First, what is it I have noticed?

Well, big companies are sometimes the only choice, whether you like them or not, because there are some products which can only realistically be produced by big corporations, if we operate under our current economic model. For example, if I want to work with computers I really have to buy one from a large corporation. And if I want to work in the Apple world my choices are down to one!

The products these companies produce aren’t necessarily bad, although I believe some of them are, but there is a huge amount of room for improvement. For example, how can Microsoft keep producing such a junk product with successive versions of Office for Mac? It’s hard to imagine how a company with so many resources available can continue to produce such slow, unreliable, ugly rubbish!

Even the good products have serious defects. For example, I really like Apple’s hardware (including the Mac, iPad, iPhone, and Apple Watch, all of which I use every day) but, again considering the resources (and massive amounts of cash) they have available I think they could do so much better.

And that is not so much about the design of the hardware as about the pricing, bundling, compatibility, and other issues. For example, with the new MacBook Pros, why are there no USB-C to USB-A adapters included, and why aren’t they the same price as, or cheaper than, the previous models?

Another example of these issues peripheral to the main product is licensing. Why is Adobe’s licensing so complicated? Why can’t I just buy a product from them and use it? I can’t, so now Adobe has joined Microsoft as a company whose products I just don’t use any more.

And finally there is the big one: service. The most abysmal, frustrating, pointless service always comes from the big companies. Recently I waited on hold for almost 2 hours with the helpdesk for New Zealand’s biggest telecom company, Spark. And the phone still wasn’t answered, so I just gave up. I did manage to communicate with their on-line chat service, but that was a waste of time and I got no useful answers.

The worst helpdesk service I have ever experienced was probably with HP. I basically told them what was wrong but they insisted I go through a “check-list” of possible causes before they would try anything else. After an hour of this I agreed to try the things they suggested and call back. After doing this and re-contacting the helpdesk they wanted to go through the list again before they would even listen to the issue. That’s what happens when the helpdesk staff just follow a list of instructions and have no real idea what they’re doing.

On the other hand, small companies I have dealt with almost always provide great service. It’s unusual to even have an issue to resolve, but when it does happen (including licensing issues I had with one product) the problem is fixed almost instantly.

Why? Why do small companies perform so much better than big ones? Well, I think there are two reasons…

First, big companies (and other organisations) always suffer from communications problems, because there are always too many layers between the customer and the people who do the real work. These layers are sometimes bureaucratic – like useless customer service managers – and sometimes structural – like helpdesks run by unskilled (cheap) staff.

I’m not saying every helpdesk is bad, I’m just saying that the good ones are the exception rather than the rule. And I’m not saying every manager is useless… actually I am. In fact, they are worse than useless.

Second, the policies set by big companies come from the wrong people. They come from professional managers (and you already know what I think of them) who have no concept of what is really required and what the customer wants. Instead of reality they rely on instructions from more senior managers, accountants who want to reduce costs, lawyers who just want to avoid legal issues, and that primary source of bad policy: best practice.

If the policies (and those should only be used as guidelines, not absolute rules) in big companies were made by the same people who produce the products and provide the services, and if it was possible for customers to discuss issues with the people who design and produce products and provide services, things would be so much better. But, of course, the bureaucrats aren’t going to give up their influence any time soon.

In summary, I don’t think the problem is Apple, or Microsoft, or Adobe, it’s big business in general. So I try whenever possible to use smaller companies, because I like to support the underdog, because that’s where the real innovation happens, and because that’s often where you get the best deal.

Are You Getting It?

January 10, 2017

Ten years ago Apple introduced one of the most important devices in the history of technology. It has changed many people’s lives more than almost anything else, and nothing has really supplanted it in the years since then. Obviously I’m talking about the iPhone, but you already knew that.

Like every new Apple product, this wasn’t the first attempt at creating this type of device, it didn’t have the best technical specifications, and it didn’t sell at a particularly good price. In fact, looking at the device superficially, many people (the CTO of RIM included) thought it would fail almost immediately.

I got an iPhone when Apple introduced the first revision, the iPhone 3G, and it replaced my Sony phone, which was the best available when I bought it. The Sony phone had a flip screen, plus a smaller screen on the outside of the case, a conventional phone keypad, a rotating camera, and an incredibly impressive list of functions including email and web browsing.

In fact the feature list of the Sony phone was much more substantial than that of the early iPhones. But the difference was that the iPhone’s features were something you could actually use, whereas the Sony’s existed in theory but were so awkward, slow, and unintuitive that I never actually used them.

And that is a theme which has been repeated with all of Apple’s devices which revolutionised a particular product category (Apple II, Mac, iPod, iPhone, iPad). Looking at the feature list, specs, and price compared with competitors, none of these products should have succeeded.

But they did. Why? Well I’m going to say something here which is very Apple-ish and sounds like a marketing catch-phrase rather than a statement of fact or opinion, so prepare yourself. It is because Apple creates experiences, not products.

OK, sorry about that, but I can explain that phrase. The Sony versus iPhone situation I described above is a perfect example. Looking at the specs and features, the Sony would have won most comparisons, but the ultimate purpose of a consumer device is to be used. Do the comparison again, this time based on how those specs and features affect the user, and the iPhone wins easily.

And it was the same with the other products I mentioned above. Before the Mac, computers were too hard to use. The Mac couldn’t do much initially, but what it could do was so much more easily accessible than on PCs. The iPod was very expensive considering its capacity and list of functions, but it was much easier to use and manage than other MP3 players. And the iPad had a limited feature list, but its operating system was highly customised to provide an intuitive touch interface for the user.

When Steve Jobs introduced the iPhone 10 years ago he teased the audience like this: “[We are introducing] an iPod, a phone and an Internet communicator. An iPod, a phone – are you getting it? These are not separate devices. This is one device. And we are calling it iPhone.”

Today I made a list of the functions my iPhone 6S regularly performs for me, where it replaces other devices, technologies and media. This list includes: watch, stopwatch, alarm clock, point and shoot camera, video camera, photo album, PDA, calculator, GPS, map, music player, portable video player, calendar, appointment diary, book library, ebook reader, audiobook player, magazine, newspaper, recipe book, email client, note pad, drawing tablet, night sky star map, web browser, portable gaming console, radio, TV, audio recorder, TV and audio remote control, landline, and mobile phone.

Not only does it do all of those things but it does a lot of them better than the specialised devices it replaces! And, even though the iPhone isn’t cheap, if you look at the value of the things it replaces it is a bargain. My guess at the value of all the stuff I listed above is $3000 – $5000 which is at least twice the cost of the phone itself.

My iPhone has one million times the storage of the first computer I programmed on. Its processors are tens of thousands of times faster. Its screen displays 25 times more pixels. And, again, it costs a lot less, even when not allowing for inflation.

Most of what I have said would apply to any modern smart-phone, but the iPhone deserves a special place amongst the others for two reasons. First, it is a purer example of ease of use and user-centered functionality than other phones; and second, it was the one phone which started the revolution.

Look at pictures of the most advanced phones before and after the iPhone and you will see a sudden transition. Apple led the way – not on how to make a smartphone – but on how to make a smartphone that people would actually want to use. And after that, everything changed.

The Next Big Thing

January 8, 2017

Many (and I really do mean many) years ago, when I was a student, I started a postgrad diploma in computer science. One of the papers was on artificial intelligence and expert systems, an area which was thought (perhaps naively) to have great potential back in the “early days” of computing. Unfortunately, very little in that area was achieved for many years after that. But now I predict things are about to change. I think AI (artificial intelligence, also very loosely described as “thinking computers”) is the next big thing.

There are early signs of this in consumer products already. Superficially it looks like some assistants and other programs running on standard computers, tablets, and phones are performing AI. But these tend to work in very limited ways, and I suspect they follow fairly conventional techniques in producing the appearance of “thinking” (you might notice I keep putting that word in quotes because no one really knows what thinking actually is).

The biggest triumph of true AI last year was Google’s AlphaGo program, which won a match 4 games to 1 against Lee Sedol, one of the world’s greatest human players. Notice the phrase “human players”: it is significant, I think, because in future it will be necessary to distinguish between AIs and humans. If an AI can already beat a brilliant human player in what is maybe the world’s most complex and difficult game, then how long will it be before humans are hopelessly outclassed in every game?

Computers which play Chess extremely well generally rely on “brute force” techniques. They check every possible outcome of a move many steps ahead and then choose the move with the best outcome. But Go cannot be solved that way because there are simply too many moves. So AlphaGo uses a different technique. It actually learns how to play Go through playing games against humans, itself, and other AIs, and develops its own strategy for winning.

So while a conventional Chess playing program and AlphaGo might seem similar, in important ways they are totally different. Of course, the techniques used to win Go could be applied to any similar game, including Chess, it’s just that the pure brute force technique was sufficient and easier to implement when that challenge was first met.
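Just to make that contrast concrete, here is a minimal sketch (in Python) of the “brute force” idea: a plain minimax search that scores every line of play to a fixed depth and picks the best one. The helper functions (legal_moves, apply_move, evaluate) are hypothetical placeholders rather than any real chess engine’s API, and real programs add refinements like alpha-beta pruning, but the principle is the same.

```python
# A minimal sketch of "brute force" game search: plain minimax with no learning.
# legal_moves, apply_move, and evaluate are hypothetical placeholders that a real
# program would have to supply for its particular game.

def minimax(state, depth, maximising, legal_moves, apply_move, evaluate):
    """Score `state` by trying every line of play `depth` moves ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:          # search horizon reached, or game over
        return evaluate(state)           # hand-written static evaluation

    scores = (minimax(apply_move(state, m), depth - 1, not maximising,
                      legal_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximising else min(scores)

def best_move(state, depth, legal_moves, apply_move, evaluate):
    """Pick the move whose resulting position minimax rates highest."""
    return max(legal_moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     legal_moves, apply_move, evaluate))
```

Nothing in that sketch learns anything: its strength comes entirely from how deep it can afford to search and how good the hand-written evaluate function is. AlphaGo, by contrast, effectively builds its own evaluation from experience, which is why the same approach generalises so much more easily.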

Also last year a computer “judge” predicted the verdicts of the European Court of Human Rights cases with 79% accuracy. What does that really mean? Well it means that the computer effectively judged the cases and reached the same result as a human judge in about 80% of those cases. I have no data on this, but I suspect two human judges might agree and disagree to a similar degree.

So computers can perform very “human” functions like judging human rights cases, and that is quite a remarkable achievement. I haven’t seen what techniques were used in that case but I suspect deep learning methods like neural networks would be required.
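To give a rough idea of what that kind of system looks like in principle, here is a toy sketch of verdict prediction framed as text classification, using scikit-learn with some made-up one-line case summaries. To be clear, this is not the method used in that study (which I haven’t seen described in detail), and a real system would work from full case documents with far more sophisticated models, possibly the neural networks I mentioned, but the basic idea of learning which features of the text predict which outcome is the same.

```python
# A toy illustration (not the actual study's method) of framing verdict
# prediction as text classification. Requires scikit-learn.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, made-up case summaries: 1 = violation found, 0 = no violation.
cases = [
    "applicant held in pre-trial detention for several years without review",
    "complaint about noise from a neighbouring property, dismissed domestically",
    "journalist prosecuted for publishing leaked government documents",
    "planning dispute fully considered and resolved by the domestic courts",
]
verdicts = [1, 0, 1, 0]

# Turn each summary into word-weight features, then fit a standard classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, verdicts)

# Predict the outcome of an unseen (also made-up) case.
print(model.predict(["blogger detained for criticising the government online"]))
```

Presumably a figure like 79% comes from doing something like this at scale: hold some real cases back, train on the rest, and count how often the prediction matches the court’s actual verdict.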

So what does all this mean? I think it was science fiction author, Arthur C Clarke, who said that a thinking machine would be the last invention humans would ever have to create, because after that the machines themselves would do the inventing. I don’t think we are close to that stage yet but this is a clear start and I think the abilities of AIs will escalate exponentially over the next few decades until Clarke’s idea will be fulfilled.

And, along with another technology which is just about ready to become critical – 3D printing – society will be changed beyond recognition. The scenario portrayed in so many science fiction stories will become reality. The question is which type of science fiction story will prove more accurate: the utopian or the dystopian. It could go either way.