"Undermining my electoral viability since 2001."

Expect the Unexpected, or why I don't believe in the Singularity

Well, the "Mayan Apocalypse" passed without incident. Given that life is apparently going to go on, I'd like to take a minute to register some thoughts on another end-of-the-world (as we know it) theory popular among technophiles: the Singularity.

At its most generic, the term "Singularity" refers to a point in the future at which change begins to occur so rapidly that it's completely impossible to predict what will happen. It's the equivalent of a black hole's event horizon, the point past which light can no longer make its way out. After that point, we have no idea.

In that simple context, it's an interesting question to ponder — at what point does our ability to predict the future become so poor as to be essentially worthless? I'd actually argue that the answer to that question is a lot sooner than most Futurologists think, but more on that later.

The problem is that the popular interest in the Singularity is based on notions of accelerating computing power and the replication of human intelligence, or some other kind of "Strong AI" with the potential to self-evolve. Essentially, some kind of artificial mind takes the driver's seat for technological development, at which point all bets are off because it will move much faster than we can imagine. Maybe we'll be immortal. Maybe we'll become post-human. Maybe SkyNet will kill us all.

It's fun to speculate about such things, and I'm not arguing against futurism or science-fiction. I enjoy both quite a bit. However, I do see a number of somewhat obvious flaws in this increasingly popular gestalt that I feel the need to point out, if only to make way for more interesting or pertinent speculation.

Remember: it's the "End of the World as We Know It", not the End of the World

As a starter, there's a strain of apocalyptic thinking in a lot of this Singularity stuff. For instance, there's an "institute" that collects donations, largely based on the argument that without proper safeguards, Strong AI will kill us all.

So, a reminder: resist apocalyptic thinking. People have always believed that the Revelation Is At Hand, and it's never true. The end of the world is not coming. Things will definitely change over the next hundred years in ways that will render the world nigh unrecognizable to us today, but it's not the rapture, or even the Red Dawn. Anyone who tells you different is selling something.

Simulating the Brain isn't Happening Soon

Part of the inspiration for this post was this really excellent comment on Reddit, which quite eloquently makes the point:

That's just it though, we don't understand the physical phenomena of how the brain works. We get the general principle; impulses go in, impulses go out (you can't explain that!), but the specifics of how neurons interact, how the different systems link together and especially how the whole mess creates the emergent phenomena of a person is still beyond us... What I think is frequently glossed-over is that yes, we're about to hit the point where computers have equivalent processing power to what we believe the human brain to have, but that computational power WILL NOT let us emulate the brain...

There are so many holes in the "we're about to have computers as strong as your brain" argument.

For starters, all estimates of the "computing power" of the brain are complete wild-ass guesses. That's because, as per the above, we have absolutely no idea how the brain works. The baseline yardstick that a lot of this is based on is a total hand-wave.

As for the "raw power" argument — that we don't need to understand how a brain works because we can simulate a virtual representation of its physical molecules — this is also quite far off. The author of the comment uses a neat nerd-analogy about how much harder it is to simulate old hardware (e.g. you need a fast modern PC to emulate a gaming console from 20 years ago). It's a good one for "getting it", but it misses the scale of the challenge by several orders of magnitude.

Current molecular-level simulations can only handle a handful of atoms at a time with real accuracy. Our grasp of the behavior of physical matter currently requires a multi-billion-dollar device 27km around just to study the basic forces at work in individual particles. Putting aside the difference in computing power required for "hardware emulation" of the brain vs "software emulation", the tools to accurately model physical reality (the "hardware") on even the simplest of scales are vastly beyond our current information technology.

"Strong AI" is also a long-shot

There's a somewhat better argument to be made about our ability to develop genetic algorithms — that is to say, self-modifying programs — which can progress along paths human engineers would never explicitly devise in order to accomplish seemingly impossible tasks, or simply to get better at existing ones. This is a component of how many speech recognition systems work, and some of the most exciting research in computer science is in this area.

However, the most recent advances in modern speech recognition (think Siri or Google Now) haven't been built on the back of mystical self-generated code. They're based on huge data warehouses and lots of parallel processes finding good matches — applications straight out of traditional (aka "dumb") computer science. It feels like magic because the machine can quickly process the signal of your speech, run a search to figure out what the equivalent typed text would be for the words you just said into the phone, and then come up with some response that's actually useful. But this is vastly different from the machine accurately translating phonemes to letters on its own, let alone actually understanding anything you're saying.
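
To caricature the difference, here's a toy sketch in Python — with entirely made-up "acoustic fingerprints", and obviously nothing like what Siri or Google actually run — of the "match against a big pile of stored examples" idea. The point is that the basic move is similarity search, not comprehension.

```python
# A cartoon of "big data + matching": recognition as similarity search over
# stored examples, with no understanding anywhere. The fingerprints below are
# invented placeholders, not real acoustic features.
from difflib import SequenceMatcher

# Pretend this is a huge warehouse of (acoustic fingerprint, transcript) pairs.
WAREHOUSE = {
    "w-eh-dh-er t-uh-m-aa-r-ow": "weather tomorrow",
    "s-eh-t ae-n ah-l-aa-r-m": "set an alarm",
    "k-ao-l m-aa-m": "call mom",
}

def transcribe(fingerprint: str) -> str:
    # Pick whichever stored fingerprint is most similar to the incoming one.
    best = max(
        WAREHOUSE,
        key=lambda known: SequenceMatcher(None, fingerprint, known).ratio(),
    )
    return WAREHOUSE[best]

print(transcribe("w-eh-dh-er t-oo-m-or-ow"))  # -> "weather tomorrow"
```

Scale that table up by a few billion rows and throw a data center's worth of parallelism at the search, and you get something that feels uncannily smart without understanding a single word.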

Still, some kind of genetic algorithm is probably the key to a breakthrough in self-directed computing. And it'll be dang cool. There are just two problems:

First, a "learning system" is not really learning in the way that we think of it — it's responding to a human-designed set of criteria and then getting a human-defined "right" or "wrong", and then making an adjustment to try and improve. This means that the things a genetic algorithm can learn are limited to the kinds of problems engineers can design.

Bear in mind again that this isn't "learning" in the human context. It's not teachers making a quiz: it's engineers designing a problem — strictly logical, binary, right or wrong. There's no comprehension, just correctness or the lack thereof. This presents a number of obvious challenges for developing anything that hopes to be "self-directing", or even to take a step beyond primitive pattern/signal processing. While we can probably someday train up some software that makes a pretty good personal assistant, that's a far cry from software that can evaluate experimental phenomena and conduct its own research.

Secondly, there's an implicit and unproven assumption that robots learn faster than humans. While it is the case that once software has "learned" something, that knowledge can in theory be copied, it still takes quite a while for genetic algorithms to come up with useful results. And anyway, humans already "copy" knowledge much faster than they invent it. Think about how long it took for us to arrive at Newton and Leibniz figuring out calculus vs the fact that well-taught teenagers can learn it in six months.

There's no good reason to assume that a computer can assimilate knowledge at a faster rate than a human being. If you think about it, the brain is (supposedly) orders of magnitude more advanced than the most gargantuan supercomputers, operating some kind of "software" we have absolutely no clue how to engineer, hooked up to a system of data inputs that are thousands of times more sensitive than anything we can model. Yet it still takes a couple of decades to get educated.

Not to mention that the scope of an individual human's education is likely the wrong comparison for Artificial Intelligence. Even supposing we can develop a combination of processor and software that can truly "learn", it's not a matter of "loading it up" with knowledge. This new intelligence would be completely unknown to us, and vice-versa. It might have a digital hookup to download Wikipedia, but that doesn't mean it could parse and comprehend all that data as fast as it could copy it. While an AI might (might!) learn on a faster-than-individual-human timescale, the way in which it accumulates knowledge would be much more akin to the human species' development of mathematics leading up to the discovery of calculus in the late 17th century than to the experience of an advanced math student at your local high school.

Anyway, as compelling as jargon about "learning at a geometric rate" may be, I find the notion that Strong AI would run wild to be more than a little suspect.

Futurism or Narcissism?

There's a common thread running through a lot of the Singularity-think that I find suspect, and it boils down to a kind of narcissism. We self-centeredly believe that our own experience (the simulation of our own minds) holds the key to future advances in technology. More broadly, we see the future in terms of the technological advances that have shaped our own lives, even though change is often wrought from unexpected vectors.

Late industrial-era futurists saw the expansion of our mechanical capabilities creating unlimited possibilities (flying cars!) and challenging what it means to be human (robots!). Similarly, I see contemporary futurists extrapolating what we have experienced in the past 30 years — a revolution in information technology — to posit a similar set of amazing possibilities.

There's nothing wrong with that per se, but it's not necessarily wise. Sure, it feels revolutionary that I can transfer the entire content of an encyclopedia from some server on the internet to my phone in under a minute — and it is! — but that revolutionary feeling is a misleading basis for extrapolating the future if we don't appreciate the difference between transferring the data (the bits that make up the words that make up the encyclopedia) and transferring the knowledge. Not only is the latter presumably vastly larger in scale, we're fundamentally ignorant as to how it could ever be done.

The futurists of yesterday missed a number of now-obvious problems — the energy requirements for rocket-powered flying cars make them non-starters; robotically mimicking the ranges of motion that come naturally to biological organisms turns out to be enormously complex; and so on. Similarly, I think today's Singularity enthusiasts hand-wave past a lot of important questions.

The idea that enough processing power and data will deliver us a digital mind is a kind of modern Alchemy. It's a set of ideas that might one day yield good results, but is at the moment a kind of pseudo-science, or at worst a scam.

Expect the Unexpected

More importantly, the lesson of history is that world-change tends to come not from the ultimate extrapolation of the current technology trend, but rather from disruptive development along new and unexpected lines. For my money, the much weirder things in our future will come from our increasing ability to engineer biological systems that do our bidding rather than from artificial intelligence. But even that feels kind of obvious, and probably therefore wrong.

If you think of the difference between the world of 1912 and today, a person situated in 1912 would have had a zero percent chance of predicting me writing and you reading this blog post. That's true of the broad functionality, let alone the specifics of how all this stuff actually works. You're talking about a person living in a world where electrification was still an ongoing project, where "radio" was a bleeding-edge technology with no public commercial applications, and where literacy rates were around 40%.

That last point sticks out to me too. Futurists tend to over-emphasize gadgetry and underestimate the social context surrounding innovation. It's not just the raw technology that makes change happen, it's what human beings do (and how they change) as a result.

If you weigh the impact of 30 more years of CPU development vs bringing global connectivity and literacy up to 80 or 90%, which is more likely to completely transform the way we live in ways that are impossible to predict, and honestly kind of terrifying? If you're interested in a phase-shift in the human experience, I think there's a pretty obvious one already underway.

This is why I think the event-horizon for future predictions is a lot nearer than most imagine, but it's not a tipping point of computing power that causes the future to run away from us. It's that any time we think we understand what's going on, we're kidding ourselves. Nobody can keep track of seven billion people, and it's been the case throughout history that technologically-driven change is extremely difficult to predict.

Even near-term predictions about relatively well-understood but complex systems (like the economy) are hugely challenging. I mean, people think Warren Buffett is a genius for realizing 12 years ago that there might be a building boom and investing in carpet companies. Figuring out what the game will look like based on guesses about how the rules may or may not change is at least an order of magnitude more complex.

Telling the future is hard, but it's also fun. Still, I'm much more interested in focusing on the calls we can make based on current science — e.g. that we need to get on top of actively managing the geosphere, or figure out how not to end up in a society that grows ever richer but in which all the gains in wealth accrue to whoever owns the robots — than in projecting the unknown impacts of as-yet-unknown technologies.

If geek culture could redirect the cultural energy — not to mention donations, volunteerism, whatever — currently given over to Singularity Alchemy towards more pressing issues, issues that might benefit from some science, it'd be pretty sweet. At a minimum, it'd make for some pretty interesting speculative fiction.
