Charles Stross wants to get off the bus
Upon writing Accelerando, Charles Stross became to the Technological Singularity what Dante Alighieri was to Christian cosmology: the pre-eminent literary conveyor of an esoteric doctrine, packaging abstract metaphysical conceptions in vibrant, detailed, and concrete imagery. The tone of Accelerando is transparently tongue-in-cheek, yet plenty of people seem to have taken it entirely seriously. Stross has had enough of it:
“I periodically get email from folks who, having read ‘Accelerando’, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it’s time to set the record straight and say what I really think. … Short version: Santa Claus doesn’t exist.”
In the comments thread (#86) he clarifies his motivation:
“I’m not convinced that the singularity isn’t going to happen. It’s just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it’s going to be until they can upload into AI heaven and leave the meatsack behind.”
As these remarks indicate, there’s more irritable gesticulation than structured case-making in Stross’ post, which Robin Hanson quite reasonably describes as “a bit of a rant – strong on emotion, but weak on argument.” Despite that – or more likely because of it – a minor net-storm ensued, as bloggers pro and con seized the excuse to re-hash – and perhaps refresh – some aging debates. The militantly-sensible Alex Knapp pitches in with a three-part series on his own brand of Singularity skepticism, whilst Michael Anissimov of the Singularity Institute for Artificial Intelligence responds to both Stross and Knapp, mixing some counter-argument with plenty of counter-irritation.
At the risk of repeating the original error of Stross’ meatsack-stuck fan-base and investing too much credence in what is basically a drive-by blog post, it might be worth picking out some of its seriously weird aspects. In particular, Stross leans heavily on an entirely unexplained theory of moral-historical causality:
“… before creating a conscious artificial intelligence we have to ask if we’re creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense ‘conscious’? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers…”
Anissimov blocks this at the pass: “I don’t think these are ‘showstoppers’ … Just because you don’t want it doesn’t mean that we won’t build it.” More generally, one might ask: in which universe do arcane objections from moral philosophy serve as obstacles to historical developments (because it certainly doesn’t seem to be this one)? Does Stross seriously think practical robotics research and development is likely to be interrupted by concerns for the rights of yet-uninvented beings?
He seems to, because even theologians are apparently getting a veto:
“Uploading … is not obviously impossible unless you are a crude mind/body dualist. However, if it becomes plausible in the near future we can expect extensive theological arguments over it. If you thought the abortion debate was heated, wait until you have people trying to become immortal via the wire. Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death. People who believe in an afterlife will go to the mattresses to maintain a belief system that tells them their dead loved ones are in heaven rather than rotting in the ground.”
This is so deeply and comprehensively gone it could actually inspire a moment of bewildered hesitation (at least among those of us not presently engaged in urgent Singularity implementation). Stross seems to have inordinate confidence in a social vetting process that, with approximate adequacy, filters techno-economic development for compatibility with high-level moral and religious ideals. In fact, he seems to think that we are already enjoying the paternalistic shelter of an efficient global theocracy. Singularity can’t happen, because that would be really bad.
No wonder, then, that he exhibits such exasperation at libertarians, with their “drastic over-simplification of human behaviour.” If stuff – especially new stuff – were to mostly happen because decentralized markets facilitated it, then the role of the Planetary Innovations Approval Board would be vastly curtailed. Who knows what kind of horrors would show up?
It gets worse, because ‘catallaxy’ – or spontaneous emergence from decentralized transactions – is the basic driver of historical innovation according to libertarian explanation, and nobody knows what catallactic processes are producing. Languages, customs, common law precedents, primordial monetary systems, commercial networks, and technological assemblages are only ever retrospectively understandable, which means that they elude concentrated social judgment entirely – until the opportunity to impede their genesis has been missed.
Stross is right to bundle singularitarian and libertarian impulses together in the same tangle of criticism, because they both subvert the veto power, and if the veto power gets angry enough about that, we’re heading full-tilt into de Garis territory. “Just because you don’t want it doesn’t mean that we won’t build it” Anissimov insists, as any die-hard Cosmist would.
Is advanced self-improving AI technically feasible? Probably (but who knows?). There’s only one way to find out, and we will. Perhaps it will even be engineered, more-or-less deliberately, but it’s far more likely to arise spontaneously from a complex, decentralized, catallactic process, at some unanticipated threshold, in a way that was never planned. There are definite candidates, which are often missed. Sentient cities seem all-but-inevitable at some point, for instance (‘intelligent cities’ are already widely discussed). Financial informatization pushes capital towards self-awareness. Drone warfare is drawing the military ever deeper into artificial mind manufacture. Biotechnology is computerizing DNA.
‘Singularitarians’ have no unified position on any of this, and it really doesn’t matter, because they’re just people – and people are nowhere near intelligent or informed enough to direct the course of history. Only catallaxy can do that, and it’s hard to imagine how anybody could stop it. Terrestrial life has been stupid for long enough.
It may be worth making one more point about intelligence deprivation, since this diagnosis truly defines the Singularitarian position, and reliably infuriates those who don’t share — or prioritize — it. Once a species reaches a level of intelligence enabling techno-cultural take-off, history begins and develops very rapidly — which means that any sentient being finding itself in (pre-singularity) history is, almost by definition, pretty much as stupid as any ‘intelligent being’ can be. If, despite the moral and religious doctrines designed to obfuscate this reality, it is eventually recognized, the natural response is to seek its urgent amelioration, and that’s already transhumanism, if not yet full-blown singularitarianism. Perhaps a non-controversial formulation is possible: defending dimness is really dim. (Even the dim dignitarians should be happy with that.)