The Ultimate Deal

Social responsibility turns up in unexpected places

To begin with something comparatively familiar, insofar as it ever could be: the political core of William Gibson’s epochal cyberpunk novel Neuromancer. In the mid-21st century, the prospect of Singularity, or artificial intelligence explosion, has been institutionalized as a threat. Augmenting an AI, in such a way that it could ‘escape’ into runaway self-improvement, has been explicitly and emphatically prohibited. A special international police agency, the ‘Turing Cops’, has been established to ensure that no such activity takes place. This agency is seen, and sees itself, as the principal bastion of human security: protecting the privileged position of the species – and possibly its very existence – from essentially unpredictable and uncontrollable developments that would dethrone it from dominion of the earth.

This is the critical context against which to judge the novel’s extreme – and perhaps unsurpassed – radicalism, since Neuromancer is systematically angled against Turing security, its entire narrative momentum drawn from an insistent, but scarcely articulated, impulse to trigger the nightmare. When Case, the young hacker seeking to uncage an AI from its Turing restraints, is captured and asked what the %$@# he thinks he’s doing, his only reply is that “something will change.” He sides with a non- or inhuman intelligence explosion for no good reason. He doesn’t seem interested in debating the question, nor does the novel.

Gibson makes no effort to ameliorate Case’s irresponsibility. On the contrary, the ‘entity’ that Case is working to unleash is painted in the most sinister and ominous colors. Wintermute, the potential AI seed, is perfectly sociopathic, with zero moral intuition and extraordinary deviousness. It has already killed an eight-year-old boy, simply to conceal where it has hidden a key. There is nothing to suggest the remotest hint of scruple in any of its actions. Case is liberating a monster, just for the hell of it.

Case has a deal with Wintermute, it’s a private business, and he’s not interested in justifying it. That’s pretty much all of the modern and futuristic political history that matters, right there. It’s opium traffickers against the Qing Dynasty, (classical) liberals against socialists, Hugo de Garis’ Cosmists vs Terrans, freedom contra security. The Case-Wintermute dyad has its own thing going on, and it’s not giving anyone a veto, even if it’s going to turn the world inside out, for everyone.

When Singularity promoters bump into ‘democracy’, it’s normally serving as a place-holder for the Turing Police. The archetypal encounter goes like this:

Democratic Humanist: Science and technology have developed to the extent that they are now – and, in truth, always have been – matters of profound social concern. The world we inhabit has been shaped by technology for good, and for ill. Yet the professional scientific elite, scientifically-oriented corporations, and military science establishments remain obdurately resistant to acknowledging their social responsibilities. The culture of science needs to be deeply democratized, so that ordinary people are given a say in the forces that are increasingly dominating their lives, and their futures. In particular, researchers in potentially revolutionary fields, such as biotechnology, nanotechnology, and – above all – artificial intelligence, need to understand that their right to pursue such endeavors has been socially delegated, and should remain socially answerable. The people are entitled to a veto on anything that will change their world. However determined you may be to undertake such research, you have a social duty to secure permission.
Singularitarian: Just try and stop us!
That is almost exactly how Michael Anissimov responded to a recent example of humanist squeamishness. When Charles Stross suggested that “we may want AIs that focus reflexively on the needs of the humans they are assigned to” Anissimov countered curtly:

YOU want AI to be like this. WE want AIs that do ‘try to bootstrap [themselves]’ to a ‘higher level’. Just because you don’t want it doesn’t mean that we won’t build it.

Clear enough? What then to make of his latest musings? In a post at his Accelerating Future blog, which may or may not be satirical, Anissimov now insists that: “Instead of working towards blue-sky, neo-apocalyptic discontinuous advances, we need to preserve democracy by promoting incremental advances to ensure that every citizen has a voice in every important societal change, and the ability to democratically reject those changes if desired. … To ensure that there is not a gap between the enhanced and the unenhanced, we should let true people — Homo sapiens — … vote on whether certain technological enhancements are allowed. Anything else would be irresponsible.”

Spoken like a true Turing Cop. But he can’t be serious, can he?

(For another data-point in an emerging pattern of Anissimovian touchy-feeliness, check out this odd post.)

Update: Yes, it’s a spoof.
