"Undermining my electoral viability since 2001."

World Mind vs AI and Big Brother

People's versions of the Apocalypse are particular to their culture. When I lived in rebel Humboldt County, it was all about the red dawn, visions of economic and/or ecological collapse, etc. Down in Silicon Valley, you get a lot more people talking about a technocalypse, some variation on Singularity theory, concern that AI will undo us all. Additionally, the recent revelations of the NSA's vast surveillance programs have cast a shadow over the optimistic vibe that comes along with a growing internet.

In this post, I want to talk about why I believe humanity will likely not be overmatched by machines, with bonus observations on how digital democracy can still thrive in an era of Big Data Big Brother.

Moore's Law Has Been Broken For About Ten Years

There is no good account of how "powerful" the human mind is as an information processing system. There are random-ass guesses from futurists and AI researchers, but nobody really knows what computations the mind is capable of running, let alone how to compare it to silicon-based computers. That said, the random-ass guesses generally conclude that it will take a lot of CPU power to model a brain. Like, more than all the computing power that exists in the world today.

No big deal, say the preachers of AI - computing power is growing ever more rapidly, because Moore's law, etc. But that's not actually true. Moore's "law" was more of a smart observation: that circuit density was doubling about every 18 months. However, this hasn't been true for a while — Moore's law is collapsing, because of the physical limits of silicon.
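To put rough numbers on what the old trend implied, here's a quick back-of-the-envelope sketch in Python (my own illustration, assuming the classic 18-month doubling period):

```python
# Back-of-the-envelope: what the classic Moore's law trend predicts,
# assuming circuit density doubles every 18 months (1.5 years).

def moores_law_factor(years, doubling_period_years=1.5):
    """Projected multiple on circuit density after `years` years."""
    return 2 ** (years / doubling_period_years)

# Over the 40 years from 1965 to 2005, the trend roughly held:
print(f"{moores_law_factor(40):,.0f}x")  # ~107,000,000x density increase

# One more decade of the trend would mean another hundred-fold jump:
print(f"{moores_law_factor(10):,.0f}x")  # ~102x
```

That hundred-million-fold run-up is the curve the Singularity crowd extrapolates from. The argument here is that the curve has bent.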

Modern chips are made by composing enormously oversized diagrams of their circuitry, which are then written onto a master pattern called a photomask; that pattern is projected onto silicon wafers to stamp out chips via photolithography. Basically it's the opposite of the way an optical photo enlarger works: the giant diagram is mini-sized by a projector using deep-ultraviolet light, which leaves the imprint. There's a limit to how small one can shrink these diagrams based on the wavelength of the light used in the process, and manufacturers are running up against that limit now.
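For the curious, that wavelength limit has a standard back-of-the-envelope form, the Rayleigh criterion: the smallest printable feature is roughly k1 × wavelength / numerical aperture. A minimal sketch, with illustrative numbers rather than any particular fab's:

```python
# Rayleigh criterion for projection lithography:
#   minimum feature size ~ k1 * wavelength / numerical_aperture
# k1 is a process-dependent factor; ~0.25 is the practical floor
# for single-exposure line/space patterns.

def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Approximate smallest printable feature, in nanometers."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm deep-UV light through a high-NA (1.35) immersion lens:
print(min_feature_nm(193, 1.35))  # ~35.7 nm
```

Shrinking features much below that takes tricks like multiple patterning, or light with a shorter wavelength, which is exactly the wall the industry is grappling with.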

So, while computers are becoming more energy efficient, allowing us to add more CPU cores to devices, the exponential raw computing power increases that typified the 40 years from 1965 to 2005 have leveled out. Advances in materials science may change this in the future, but it will take quite some time before an entirely new computing technology reaches anywhere near the scale of today's silicon wafers. Not to mention that most applications today are not CPU-bound — the keys to building better things for people, which is to say the things that markets will invest in, are more about reducing power consumption, increasing mobility, and improving the user experience, not crunching raw data. There's not really a strong economic incentive to make faster computers at the moment.

To put it another way, a lot of Singularity-ism is based on trends from decades past:

If these trends continue...

These trends have changed. It doesn't mean we won't someday have the processing power to model a human mind; it just means that it's probably going to take a heck of a lot longer than boosters currently envision.

The Organic Alternative

The other part of my objection is that the notion of AI running the table seems to be based on a kind of Deep Blue or Watson challenge. People seem to have an assumption that at the point at which an AI can beat a human, we're done. We'll turn over the keys to the superior system and that's all she wrote. That's not how it works.

Though it's rarely conceptualized this way, our systems for operating civilization are not the product of individuals: they leverage the power of millions of brains, each one with more processing power than every CPU on the planet. The collective intelligence of these systems shouldn't be underestimated, given the enormous complexity they grapple with. Clearly bureaucracies don't operate the brains they connect anywhere near peak efficiency, but frankly I think it's much easier to imagine near-term advances in human organizational capability, driven by connectivity and computing, than computing itself taking over.

Let's say for a second that we were actually able to get everyone online and generally pointed in the same direction, working together to solve our common problems. What would it mean to have seven billion minds working towards common goals? It would be formidable, to say the least: the potential processing power of the interconnected World Mind dwarfs anything we can reasonably expect to emerge from computer science for the foreseeable future.

Note that for now I'm making a morally neutral argument. Being better organized and connected doesn't make you democratic or humanitarian, though I'll get into that later. The point is that there's a cognitive bias among technology people to gravitate towards solutions made in silicon, and to under-value the horsepower of the organic systems that already run the show.

To me the real story is the actual computing power of ourselves, and the potential power of ourselves connected and working together better. That's real shit, happening right now, and our (mis)management of the situation has real world impacts on real lives. You don't need to posit villainous AIs that want to exterminate humanity — just look at how our existing human systems create and perpetuate brutal misery worldwide.

The Morality of Inequality in Processing Power

I was just in Costa Rica for a Drupal conference where I was generously invited to give the keynote presentation, and I talked about our potential to make the world better by building a vibrant and democratic world wide web. As part of the presentation I made the claim that widespread access to the means to capture, share, and discuss experience (plus the right to encrypt) was our best hope in combating the creepy Big Brother feeling you get online, which we've found is all too real in the revelations about the NSA's various internet surveillance programs.

Afterwards, Marco Villegas, a really nice guy from Peru who's worked quite a lot on internal Drupal infrastructure, challenged me on this assumption. He noted that even if cellphones with internet access and cameras do make us a "world of witnesses," the fact that governments are the only entities with the resources to capture and mine all the data they can get (or steal) from the internet creates a potentially despotic imbalance of power.

In particular, he pointed out that the ability of governments to track what online communities might be doing to organize (e.g. to challenge said government in some way) could allow them to disrupt those communities before they were able to take action: for instance, by inciting flamewars, sowing dissent, etc.

This is a valid point, and it's something governments already have a history of doing. In the United States we had COINTELPRO, which ran from the mid-'50s into the early '70s and actively sought to disrupt civil rights and anti-war organizing. Its tactics ran from implanting agents to agitate and provoke real-world disagreement (including taking egregiously provocative action at public demonstrations to bring on a harsh police response and damage a movement's credibility) up to straight-up assassinating people as part of police "raids".

These infiltration tactics, by the way, are known to have continued as recently as a few years ago.

I find this kind of behavior deplorable, but I also don't think it requires a giant data mining operation in Utah to pull off. While agents can infiltrate online communities and start flame wars, the solution is to build better online communities with stronger norms, ones more resistant to this kind of manipulation. Governments are going to try it anyway, so the only answer is to find systems that allow us to trust one another.

The more pressing threat from a Big Brother perspective is that NSA-style surveillance can be used to blackmail people by uncovering embarrassing personal communications.

The only long-term solution that I can see is personal liberation: in a world where people are comfortable with themselves and their actions, blackmail is blunted as a tool. If you're not afraid of stories coming out, then Big Brother loses his leverage. However, given the range of cultural mores and the fact that people make mistakes and do embarrassing things, this will take some time.

Additionally, that answer is unsatisfying because it's kind of a variation on the "well, if you don't have anything to hide, why worry about your privacy" argument. There's a difference between facing legal consequences and facing social consequences (total police surveillance and the ability to "subpoena the past" versus agents threatening people with the release of embarrassing photos), but in the best of all possible worlds we shouldn't have to fear either. Privacy is an important component of living free.

Unfortunately, right now we are in fact not free (or not as free as we could be), and people who want to combat that by being agents of change are accepting certain risks. Violation of privacy is almost certain to happen to anyone who makes waves, which sucks, but that can't be a reason to stay on the sidelines either.

People Are The Cause Of, And Solution To, All The World's Problems

This leads back to my main point — the question is not what computing does on its own, it's what people do with it. Technology is an amplifier for human intentions, good and bad. There's nothing inherently moral about it one way or another. That's what makes the present moment so dangerous, and so fascinating.

We're at a point now where we can potentially communicate and observe all around the world. We're at a point where it should be very hard for people to pull off genocide or mass oppression or other shenanigans without people knowing. We're at a point where we should be able to say no to shenanigans.

Can. Potentially. Should. We're a long ways from the promised land. But the potential is very real. We should seize it.

And longer term, we should consciously embrace the world-altering power of technology. Forget modeling human brains for AI, let's simulate the planet and figure out how to accurately model our climate. We've given ourselves an accelerated timetable by conducting a large-scale accidental experiment in altering the carbon balance of the atmosphere, but it was our destiny to meddle with these systems eventually — we wouldn't take another naturally occurring ice age lying down, nor should we take the impacts of climate change.

Heck, we should start by modeling our economy. Project Cybersyn was 30 years ahead of its time, and I look forward to the day when we apply the tools of our trade to improving our managerial acumen in ways that are more generative and fruitful than high-frequency trading.

Ultimately I distrust the Singularity flavor of apocalypse for the same reason I distrust the Red Dawn flavor. It's kind of a cop-out to discount the present reality, in all its horror and promise, in favor of some kind of futurist calamity or utopia. People are drawn to this because of how messy and intractable the human condition can seem, which I can understand, but it's no less of a dodge.

To me the big story is the power of the World Mind. It's coming online, if we want it. Just like war is over. Don't forget.
