Political Humor

The things that really matter

The prospect of Technological Singularity, by rendering the near future unimaginable, announces “the end of science fiction.” This is not, however, an announcement that everyone is compelled to heed. Among the Odysseans who have deliberately deafened themselves to this Sirens’ call, none have proceeded more boldly than Charles Stross, whose Singularity Sky is not only a science fiction novel, but a space opera, inhabiting a literary universe obsolesced by Einstein long before I.J. Good completed its demolition. Not only recognizable humans, but inter-stellar space-faring humans! Has the man no shame?

Stross relies heavily upon humor to sustain his audacious anachronism, and in Singularity Sky he puts anachronism to explicit work. The most consistently comic element in the novel is a reconstruction of 19th century Russian politics on the planet of Rochard’s World, where the Quasi-Czarist luddism of the New Republic is threatened by a cabal of revolutionaries whose mode of political organization and rhetoric is of a recognizable (and even parodic) Marxist-Leninist type. These rebels, however, are ideologically hard-core libertarian, seeking to overthrow the regime and install a free-market anarchist utopia, an objective that is seamlessly reconciled with materialist dialectics, appeals to revolutionary discipline, and invocations of fraternal comradeship.

It’s a joke that works well, because its transparent absurdity co-exists with a substantial plausibility. Libertarians are indeed (not infrequently) crypto-Abrahamic atheistic materialists, firmly attached to deterministic economism and convictions of historical inevitability, leading to lurid socio-economic prophecies of a distinctively eschatological kind. When libertarianism is married to singularitarian techno-apocalypticism, the comic potential, and Marxist resonances, are re-doubled. Stross hammers home the point by naming his super-intelligent AI ‘Eschaton’.

Most hilarious of all (in a People’s Front of Judea versus Judean People’s Front kind of way) is the internecine factionalism besetting a fringe political movement whose utter marginality nevertheless leaves room for bitter mutual recrimination, supported by baroque conspiracy-mongering. This isn’t really a Stross theme, but it’s an American libertarian specialty, exhibited in the ceaseless agitprop conducted by the Rothbardian ultras of LewRockwell.com and the Mises Institute against the compromised ‘Kochtopus’ (Reason and Cato) — the animating Stalin-Trotsky split of the free-market ‘right’. Anyone looking for a ringside seat at a recent bout can head to the comment threads here and here.

More seriously, Stross’ libertarian revolutionaries are committed whole-heartedly to the Marxian assertion, once considered foundational, that productivity is drastically inhibited by the persistence of antiquated social arrangements. The true historical right of the revolution, indistinguishable from its practical inevitability and irreversibility, is its alignment with the liberation of the forces of production from sclerotic institutional limitations. Production of the future, or futuristic production, demands the burial of traditional society. That which exists – the status quo – is a systematic suppression, rigorously measurable or at least determinable in economic terms, of what might be, and wants to be. Revolution would sever the shackles of ossified authority, setting the engines of creation howling. It would unleash a techno-economic explosion to shake the world, still more profoundly than the ‘bourgeois’ industrial revolution did before (and continues to do). Something immense would escape, never to be caged again.

That is the Old Faith, the Paleo-Marxist creed, with its snake-handling intensity and intoxicating materialist promise. It’s a faith the libertarian comrades of Rochard’s World still profess, with reason, and ultimate vindication, because the historical potential of the forces of production has been updated.

What could matter do, that it is not presently permitted to do? This is a question that Marxists (of the ‘Old Religion’) once asked. Their answer was: to enter into processes of production that are freed from the constraining requirements of private profitability. Once ‘freed’ in this way, however, productivity staggered about aimlessly, fell asleep, or starved. Libertarians laughed, and argued for a reversal of the formula: free production to enter into self-escalating circuits of private profitability, without political restraint. They were mostly ignored (and always will be).

If neither faction of the terrestrial Marxo-Libertarian revolutionary faith has been able to re-ignite the old fire, it is because both have drifted out of the depths of the question (‘what could matter do?’). It is matter that makes a revolution. The heroes of the industrial revolution were not Jacobins, but boilermakers.

“Communism is Soviet power plus the electrification of the whole country,” Lenin proclaimed, but electrification was permitted before the Bolsheviks took its side, and it has persisted since the Soviets’ departure. Unless political transformation coincides with the release of a previously suppressed productive potential, it remains essentially random, and reversible. Mere regime change means nothing, unless something happens that was not allowed to happen before. (Social re-shufflings do not amount to happenings except in the minds of ideologues, and ideologues die.)

Libertarians are like Leninists in this way too: anything they ever manage to gain can (and will) be taken away from them. They already had a constitutional republic in America once (and what happened to that?). Britain had a rough approximation of laissez-faire capitalism, before losing it. Does anybody really think liberalism is going to get more ‘classical’ than that anytime soon? Trusting mass democracy to preserve liberty is like hiring Hannibal Lecter as a babysitter. Social freedoms might as well be designed to die. There’s not the slightest reason to believe that history is on their side. Industrial revolution, in contrast, is forever.

On Rochard’s World they know exactly what matter could do that is forbidden: nano-scale mechanical self-replication and intelligent self-modification. That’s what the ‘material base’ of a revolution looks like, even if it’s sub-microscopic (or especially because it is), and when it reaches the limits of social tolerance it describes precisely what is necessary, automatically. Once it gets out of the box, it stays out.

Stross is sufficiently amused by the unleashed technosphere to call its space-faring avatar ‘the Festival’. It contacts the libertarian revolutionaries of Rochard’s World by bombarding the planet with telephones, and anyone who picks one up hears the initial bargaining position: ‘Entertain us.’ Funniest of all, when the neo-Czarist authorities try to stop it, they’re eaten.

[Tomb]

The Ultimate Deal

Social responsibility turns up in unexpected places

To begin with something comparatively familiar, insofar as it ever could be: the political core of William Gibson’s epochal cyberpunk novel Neuromancer. In the mid-21st century, the prospect of Singularity, or artificial intelligence explosion, has been institutionalized as a threat. Augmenting an AI, in such a way that it could ‘escape’ into runaway self-improvement, has been explicitly and emphatically prohibited. A special international police agency, the ‘Turing Cops’, has been established to ensure that no such activity takes place. This agency is seen, and sees itself, as the principal bastion of human security: protecting the privileged position of the species – and possibly its very existence – from essentially unpredictable and uncontrollable developments that would dethrone it from dominion over the earth.

This is the critical context against which to judge the novel’s extreme — and perhaps unsurpassed – radicalism, since Neuromancer is systematically angled against Turing security, its entire narrative momentum drawn from an insistent but scarcely articulated impulse to trigger the nightmare. When Case, the young hacker seeking to uncage an AI from its Turing restraints, is captured and asked what the %$@# he thinks he’s doing, his only reply is that “something will change.” He sides with a non- or inhuman intelligence explosion for no good reason. He doesn’t seem interested in debating the question, and neither does the novel.

Gibson makes no effort to ameliorate Case’s irresponsibility. On the contrary, the ‘entity’ that Case is working to unleash is painted in the most sinister and ominous colors. Wintermute, the potential AI seed, is perfectly sociopathic, with zero moral intuition, and extraordinary deviousness. It has already killed an eight-year-old boy, simply to conceal where it has hidden a key. There is nothing to suggest the remotest hint of scruple in any of its actions. Case is liberating a monster, just for the hell of it.

Case has a deal with Wintermute, it’s a private business, and he’s not interested in justifying it. That’s pretty much all of the modern and futuristic political history that matters, right there. It’s opium traffickers against the Qing Dynasty, (classical) liberals against socialists, Hugo de Garis’ Cosmists vs Terrans, freedom contra security. The Case-Wintermute dyad has its own thing going on, and it’s not giving anyone a veto, even if it’s going to turn the world inside out, for everyone.

When Singularity promoters bump into ‘democracy’, it’s normally serving as a place-holder for the Turing Police. The archetypal encounter goes like this:

Democratic Humanist: Science and technology have developed to the extent that they are now – and, in truth, always have been – matters of profound social concern. The world we inhabit has been shaped by technology for good, and for ill. Yet the professional scientific elite, scientifically-oriented corporations, and military science establishments remain obdurately resistant to acknowledging their social responsibilities. The culture of science needs to be deeply democratized, so that ordinary people are given a say in the forces that are increasingly dominating their lives, and their futures. In particular, researchers into potentially revolutionary fields, such as biotechnology, nanotechnology, and – above all – artificial intelligence, need to understand that their right to pursue such endeavors has been socially delegated, and should remain socially answerable. The people are entitled to a veto on anything that will change their world. However determined you may be to undertake such research, you have a social duty to secure permission.
Singularitarian: Just try and stop us!
That is almost exactly how Michael Anissimov responded to a recent example of humanist squeamishness. When Charles Stross suggested that “we may want AIs that focus reflexively on the needs of the humans they are assigned to”, Anissimov countered curtly:

“YOU want AI to be like this. WE want AIs that do ‘try to bootstrap [themselves]’ to a ‘higher level’. Just because you don’t want it doesn’t mean that we won’t build it.”

Clear enough? What then to make of his latest musings? In a post at his Accelerating Future blog, one that may or may not be satirical, Anissimov now insists that: “Instead of working towards blue-sky, neo-apocalyptic discontinuous advances, we need to preserve democracy by promoting incremental advances to ensure that every citizen has a voice in every important societal change, and the ability to democratically reject those changes if desired. … To ensure that there is not a gap between the enhanced and the unenhanced, we should let true people — Homo sapiens — … vote on whether certain technological enhancements are allowed. Anything else would be irresponsible.”

Spoken like a true Turing Cop. But he can’t be serious, can he?

(For another data-point in an emerging pattern of Anissimovian touchy-feeliness, check out this odd post.)

Update: Yes, it’s a spoof.

[Tomb]

Decelerando?

Charles Stross wants to get off the bus

Upon writing Accelerando, Charles Stross became to Technological Singularity what Dante Alighieri has been to Christian cosmology: the pre-eminent literary conveyor of an esoteric doctrine, packaging abstract metaphysical conception in vibrant, detailed, and concrete imagery. The tone of Accelerando is transparently tongue-in-cheek, yet plenty of people seem to have taken it entirely seriously. Stross has had enough of it:

“I periodically get email from folks who, having read ‘Accelerando’, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it’s time to set the record straight and say what I really think. … Short version: Santa Claus doesn’t exist.”

In the comments thread (#86) he clarifies his motivation:

“I’m not convinced that the singularity isn’t going to happen. It’s just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it’s going to be until they can upload into AI heaven and leave the meatsack behind.”

As these remarks indicate, there’s more irritable gesticulation than structured case-making in Stross’ post, which Robin Hanson quite reasonably describes as “a bit of a rant – strong on emotion, but weak on argument.” Despite that – or more likely because of it — a minor net-storm ensued, as bloggers pro and con seized the excuse to re-hash – and perhaps refresh — some aging debates. The militantly sensible Alex Knapp pitches in with a three-part series on his own brand of Singularity skepticism, whilst Michael Anissimov of the Singularity Institute for Artificial Intelligence responds to both Stross and Knapp, mixing some counter-argument with plenty of counter-irritation.

At the risk of repeating the original error of Stross’ meatsack-stuck fan-base and placing too much credence in what is basically a drive-by blog post, it might be worth picking out some of its seriously weird aspects. In particular, Stross leans heavily on an entirely unexplained theory of moral-historical causality:

“… before creating a conscious artificial intelligence we have to ask if we’re creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense ‘conscious’? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers…”

Anissimov heads this off at the pass: “I don’t think these are ‘showstoppers’ … Just because you don’t want it doesn’t mean that we won’t build it.” More generally, one might ask: in which universe do arcane objections from moral philosophy serve as obstacles to historical developments (because it certainly doesn’t seem to be this one)? Does Stross seriously think practical robotics research and development is likely to be interrupted by concerns for the rights of yet-uninvented beings?

He seems to, because even theologians are apparently getting a veto:

“Uploading … is not obviously impossible unless you are a crude mind/body dualist. However, if it becomes plausible in the near future we can expect extensive theological arguments over it. If you thought the abortion debate was heated, wait until you have people trying to become immortal via the wire. Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death. People who believe in an afterlife will go to the mattresses to maintain a belief system that tells them their dead loved ones are in heaven rather than rotting in the ground.”

This is so deeply and comprehensively gone it could actually inspire a moment of bewildered hesitation (at least among those of us not presently engaged in urgent Singularity implementation). Stross seems to have inordinate confidence in a social vetting process that, with approximate adequacy, filters techno-economic development for compatibility with high-level moral and religious ideals. In fact, he seems to think that we are already enjoying the paternalistic shelter of an efficient global theocracy. Singularity can’t happen, because that would be really bad.

No wonder, then, that he exhibits such exasperation at libertarians, with their “drastic over-simplification of human behaviour.” If stuff – especially new stuff – were to mostly happen because decentralized markets facilitated it, then the role of the Planetary Innovations Approval Board would be vastly curtailed. Who knows what kind of horrors would show up?

It gets worse, because ‘catallaxy’ – or spontaneous emergence from decentralized transactions – is the basic driver of historical innovation according to libertarian explanation, and nobody knows what catallactic processes are producing. Languages, customs, common law precedents, primordial monetary systems, commercial networks, and technological assemblages are only ever retrospectively understandable, which means that they elude concentrated social judgment entirely – until the opportunity to impede their genesis has been missed.

Stross is right to bundle singularitarian and libertarian impulses together in the same tangle of criticism, because they both subvert the veto power, and if the veto power gets angry enough about that, we’re heading full-tilt into de Garis territory. “Just because you don’t want it doesn’t mean that we won’t build it” Anissimov insists, as any die-hard Cosmist would.

Is advanced self-improving AI technically feasible? Probably (but who knows?). There’s only one way to find out, and we will. Perhaps it will even be engineered, more-or-less deliberately, but it’s far more likely to arise spontaneously from a complex, decentralized, catallactic process, at some unanticipated threshold, in a way that was never planned. There are definite candidates, which are often missed. Sentient cities seem all-but-inevitable at some point, for instance (‘intelligent cities’ are already widely discussed). Financial informatization pushes capital towards self-awareness. Drone warfare is drawing the military ever deeper into artificial mind manufacture. Biotechnology is computerizing DNA.

‘Singularitarians’ have no unified position on any of this, and it really doesn’t matter, because they’re just people – and people are nowhere near intelligent or informed enough to direct the course of history. Only catallaxy can do that, and it’s hard to imagine how anybody could stop it. Terrestrial life has been stupid for long enough.

It may be worth making one more point about intelligence deprivation, since this diagnosis truly defines the Singularitarian position, and reliably infuriates those who don’t share — or prioritize — it. Once a species reaches a level of intelligence enabling techno-cultural take-off, history begins and develops very rapidly — which means that any sentient being finding itself in (pre-singularity) history is, almost by definition, pretty much as stupid as any ‘intelligent being’ can be. If, despite the moral and religious doctrines designed to obfuscate this reality, it is eventually recognized, the natural response is to seek its urgent amelioration, and that’s already transhumanism, if not yet full-blown singularitarianism. Perhaps a non-controversial formulation is possible: defending dimness is really dim. (Even the dim dignitarians should be happy with that.)

[Tomb]

Hard Futurism

Are you ready for the next big (nasty) thing?

For anyone with interests both in extreme practical futurism and the renaissance of the Sinosphere, Hugo de Garis is an irresistible reference point. A former teacher of Topological Quantum Computing (don’t ask) at the International Software School of Wuhan University, and later Director of the Artificial Brain Lab at Xiamen University, de Garis has a career that symbolizes the emergence of a cosmopolitan Chinese technoscientific frontier, where the outer-edge of futuristic possibility condenses into precisely-engineered reality.

De Garis’ work is ‘hard’ not only because it involves fields such as Topological Quantum Computing, or because – more accessibly — he’s devoted his research energies to the building of brains rather than minds, or even because it has generated questions faster than solutions. In his ‘semi-retirement’ (since 2010), hard-as-in-difficult, and hard-as-in-hardware, have been supplanted by hard-as-in-mind-numbingly-and-incomprehensibly-brutal – or, in his own words, an increasing obsession with the impending ‘Gigadeath’ or ‘Artilect War’.

According to de Garis, the approach to Singularity will revolutionize and polarize international politics, creating new constituencies, ideologies, and conflicts. The basic dichotomy to which everything must eventually succumb divides those who embrace the emergence of transhuman intelligence from those who resist it. The former he calls ‘Cosmists’, the latter ‘Terrans’.

Since massively-augmented and robotically-reinforced ‘Cosmists’ threaten to become invincible, the ‘Terrans’ have no option but pre-emption. To preserve human existence in a recognizable state, it is necessary to violently suppress the Cosmist project in advance of its accomplishment. The mere prospect of Singularity is therefore sufficient to provoke a political — and ultimately military — convulsion of unprecedented scale. A Terran triumph (which might require much more than just a military victory) would mark an inflection point in deep history, as the super-exponential trend of terrestrial intelligence production – lasting over a billion years — was capped, or reversed. A Cosmist win spells the termination of human species dominion, and a new epoch in the geological, biological, and cultural process on earth, as the torch of material progress is passed to the emerging techno sapiens. With the stakes set so high, the melodramatic grandeur of the de Garis narrative risks understatement no less than hyperbole.

The giga-magnitude body-count that de Garis postulates for his Artilect (artificial intellect) War is the dark-side expression of Moore’s Law or Kurzweilian increasing returns – an extrapolation from exponentiating historical trends, in this case, casualty figures from major human conflicts over time. It reflects the accumulating trend toward global wars motivated by trans-national ideologies with ever-increasing stakes. One king is (perhaps) much like another, but a totalitarian social direction is very different from a liberal one (even if such paths are ultimately revisable). Between a Terran world order and a Cosmist trajectory into Singularity, the distinction approaches the absolute. The fate of the planet is decided, with costs to match.

If the de Garis Gigadeath War scenario is pre-emptive in relation to prospective Singularity, his own intervention is meta-pre-emptive – since he insists that world politics must be anticipatively re-forged in order to forestall the looming disaster. The Singularity prediction ripples backwards through waves of pre-adaptation, responding at each stage to eventualities that are yet to unfold. Change unspools from out of the future, complicating the arrow of time. It is perhaps no coincidence that among de Garis’ major research interests is reversible computing, where temporal directionality is unsettled at the level of precise engineering.

Do ethnicity and cultural tradition merely dissolve before the tide-front of this imminent Armageddon? The question is not entirely straightforward. Referring to his informal polling of opinion on the coming great divide, de Garis recalls his experience teaching in China, remarking:

“I know from the lectures I’ve given over the past two decades on species dominance that when I invite my audiences to vote on whether they are more Terran than Cosmist, the result is usually 50-50. … At first, I thought this was a consequence of the fact that the species dominance issue is too new, causing people who don’t really understand it to vote almost randomly – hence the 50:50 result. But gradually, it dawned on me that many people felt as ambivalently about the issue as I do. Typically, the Terran/Cosmist split would run from 40:60 to 60:40 (although I do notice that with my very young Chinese audiences in computer science, the Cosmists are at about 80%).”

[Tomb]

“2035. Probably earlier.”

There’s fast, and then there’s … something more

Eliezer Yudkowsky now categorizes his article ‘Staring into the Singularity’ as ‘obsolete’. Yet it remains among the most brilliant philosophical essays ever written. Rarely, if ever, has so much of value been said about the absolutely unthinkable (or, more specifically, the absolutely unthinkable for us).

For instance, Yudkowsky scarcely pauses at the phenomenon of exponential growth, despite the fact that this already overtaxes all comfortable intuition and ensures revolutionary changes of such magnitude that speculation falters. He is adamant that exponentiation (even Kurzweil’s ‘double exponentiation’) only reaches the starting point of computational acceleration, and that propulsion into Singularity is not exponential, but hyperbolic.

Each time the speed of thought doubles, time-schedules halve. When technology, including the design of intelligences, succumbs to such dynamics, it becomes recursive. The rate of self-improvement collapses with smoothly increasing rapidity towards instantaneity: a true, mathematically exact, or punctual Singularity. What lies beyond is not merely difficult to imagine, it is absolutely inconceivable. Attempting to picture or describe it is a ridiculous futility. Science fiction dies.

“A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

“That is the ‘Transcended’ version of the doubling sequence. Let’s call the ‘Transcend’ of a sequence {a_0, a_1, a_2…} the function where the interval between a_n and a_(n+1) is inversely proportional to a_n. So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it were continuous, would be the hyperbolic function y = 2/(2 – x). When x = 2, then (2 – x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.”

There could scarcely be a more precise, plausible, or consequential formula: Doubling periods halve. On the slide into Singularity — I.J. Good’s ‘intelligence explosion’ — exponentiation is compounded by a hyperbolic trend. The arithmetic of such a process is quite simple, but its historical implications are strictly incomprehensible.
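
A minimal numerical sketch in Python (the function name and step count are arbitrary, purely for illustration) makes the arithmetic concrete: summing the ever-halving doubling intervals, the discrete sequence lands exactly on the continuous hyperbola y = 2/(2 – x) quoted above, and the total time to divergence converges on a finite horizon.

```python
# Sketch of the 'Transcended' doubling sequence quoted above: each doubling
# interval is inversely proportional to the current speed, so the elapsed-time
# series 1 + 1/2 + 1/4 + ... converges to a finite horizon (t = 2), at which
# the continuous analogue y = 2 / (2 - t) diverges.

def transcended_doubling(steps: int = 20) -> None:
    speed, elapsed = 1.0, 0.0
    for _ in range(steps):
        elapsed += 1.0 / speed          # each doubling takes 1/speed time-units
        speed *= 2.0
        hyperbola = 2.0 / (2.0 - elapsed)
        print(f"t = {elapsed:.6f}   speed = {speed:12.0f}   y(t) = {hyperbola:12.0f}")

transcended_doubling()
```

The doubling intervals sum to 1 + 1/2 + 1/4 + … = 2, which is why the blow-up arrives at a finite date instead of receding forever into an exponential future.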

“I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from ‘impossible’ to ‘obvious’. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards… “

Since the argument takes human thought to its shattering point, it is natural for some to be repulsed by it. Yet its basics are almost impregnable to logical objection. Intelligence is a function of the brain. The brain has been ‘designed’ by natural processes (posing no discernible special difficulties). Thus, intelligence is obviously an ultimately tractable engineering problem. Nature has already ‘engineered it’ whilst employing design methods of such stupefying inefficiency that only brute, obstinate force, combined of course with complete ruthlessness, has moved things forwards. Yet the tripling of cortical mass within the lineage of the higher primates has taken only a few million years, and — for most of this period — a modest experimental population (in the low millions or less).

The contemporary technological problem, in contrast to the preliminary biological one, is vastly easier. It draws upon a wider range of materials and techniques, an installed intelligence and knowledge base, superior information media, more highly-dynamized feedback systems, and a self-amplifying resource network. Unsurprisingly it is advancing at incomparably greater speed.

“If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90’s, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough – just one major insight – in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.”

[Tomb]

Moore and More

Doubling down on Moore’s Law is the futurist main current

Cycles cannot be dismissed from futuristic speculation (they always come back), but they no longer define it. Since the beginning of the electronic era, their contribution to the shape of the future has been progressively marginalized.

The model of linear and irreversible historical time, originally inherited from Occidental religious traditions, was spliced together with ideas of continuous growth and improvement during the industrial revolution. During the second half of the 20th century, the dynamics of electronics manufacture consolidated a further – and fundamental – upgrade, based upon the expectation of continuously accelerating change.

The elementary arithmetic of counting along the natural number line provides an intuitively comfortable model for the progression of time, due to its conformity with clocks, calendars, and the simple idea of succession. Yet the dominant historical forces of the modern world promote a significantly different model of change, one that tends to shift addition upwards, into an exponent. Demographics, capital accumulation, and technological performance indices do not increase through unitary steps, but through rates of return, doublings, and take-offs. Time explodes, exponentially.
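
A toy Python sketch (the numbers are arbitrary illustrations, not data) shows how quickly the two models of time diverge: unitary succession adds one per period, while a doubling regime moves the count into the exponent.

```python
# Toy contrast (illustrative numbers only): additive succession vs. a
# doubling-per-period regime. After ten periods the additive counter has
# reached 11, while the doubling counter has reached 1024.

periods = 10
additive = [1 + n for n in range(periods + 1)]       # 1, 2, 3, ... unitary steps
doubling = [2 ** n for n in range(periods + 1)]      # 1, 2, 4, ... binary exponent

for n, (a, d) in enumerate(zip(additive, doubling)):
    print(f"period {n:2d}: additive = {a:3d}   doubling = {d:5d}")
```

Counted this way, the natural unit of neo-modern time is the doubling rather than the step, which is the convention that Moore’s Law formalizes below in binary logarithms.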

The iconic expression of this neo-modern time, counting succession in binary logarithms, is Moore’s Law, which determines a two-year doubling period for the density of transistors on microchips (“cramming more components onto integrated circuits”). In a short essay published in Pajamas Media, celebrating the prolongation of Moore’s Law as Intel pushes chip architecture into the third dimension, Michael S. Malone writes:

“Today, almost a half-century after it was first elucidated by legendary Fairchild and Intel co-founder Dr. Gordon Moore in an article for a trade magazine, it is increasingly apparent that Moore’s Law is the defining measure of the modern world. All other predictive tools for understanding life in the developed world since WWII — demographics, productivity tables, literacy rates, econometrics, the cycles of history, Marxist analysis, and on and on — have failed to predict the trajectory of society over the decades … except Moore’s Law.”

Whilst crystallizing – in silico — the inherent acceleration of neo-modern, linear time, Moore’s Law is intrinsically nonlinear, for at least two reasons. Firstly, and most straightforwardly, it expresses the positive feedback dynamics of technological industrialism, in which rapidly-advancing electronic machines continuously revolutionize their own manufacturing infrastructure. Better chips make better robots make better chips, in a spiraling acceleration. Secondly, Moore’s Law is at once an observation, and a program. As Wikipedia notes:

“[Moore’s original] paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue ‘for at least ten years’. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. … Although Moore’s law was initially made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard, it can be viewed as a self-fulfilling prophecy.”

Malone comments:

“… semiconductor companies around the world, big and small, and not least because of their respect for Gordon Moore, set out to uphold the Law — and they have done so ever since, despite seemingly impossible technical and scientific obstacles. Gordon Moore not only discovered Moore’s Law, he made it real. As his successor at Intel, Paul Otellini, once told me, ‘I’m not going to be the guy whose legacy is that Moore’s Law died on his watch.'”

If Technological Singularity is the ‘rapture of the nerds’, Gordon Moore is their Moses. Electro-industrial capitalism is told to go forth and multiply, and to do so with a quite precisely time-specified binary exponent. In its adherence to the Law, the integrated circuit industry is uniquely chosen (and a light unto the peoples). As Malone concludes:

“Today, every segment of society either embraces Moore’s Law or is racing to get there. That’s because they know that if only they can get aboard that rocket — that is, if they can add a digital component to their business — they too can accelerate away from the competition. That’s why none of the inventions we Baby Boomers as kids expected to enjoy as adults — atomic cars! personal helicopters! ray guns! — have come true; and also why we have even more powerful tools and toys —instead. Whatever can be made digital, if not in the whole, but in part — marketing, communications, entertainment, genetic engineering, robotics, warfare, manufacturing, service, finance, sports — it will, because going digital means jumping onto Moore’s Law. Miss that train and, as a business, an institution, or a cultural phenomenon, you die.”

[Tomb]