Suspended Animation

Limbo starts to feel like home

According to Herbert Stein’s Law, the signature warning of our age, “If something cannot go on forever, it will stop.” The question is: When?

The central concerns of environmentalists and radical market economists are easy to distinguish – when not straightforwardly opposed – yet both groups face a common mental and historical predicament, which might even be considered the outstanding social discovery of recent times: the extraordinary durability of the unsustainable. A pattern of mass behavior is observed that leads transparently to crisis, based on explosive (exponential) trends that are acknowledged without controversy, yet consensus on matters of fact coexists with paralyzing policy disagreements, seemingly interminable procrastination, and irresolution. The looming crisis continues to swell, close, horribly close, but in no persuasively measurable way closer, like some grating Godot purgatory: “You must go on; I can’t go on; I’ll go on.”

Urban Future doesn’t do green anguish as well as teeth-grinding Austrolibertarian irritation, so it won’t really try. Suffice to say that being green is about to become almost unimaginably maddening, if it isn’t already. Just as the standard ‘greenhouse’ model insinuates itself, near-universally, into the structure of common sense, the world temperature record has locked into a flatline, with surging CO2 production showing up everywhere except as warming. Worse still, a new wave of energy resources – stubbornly based on satanic hydrocarbons, and of truly stupefying magnitude – is rolling out inertially, with barely a hint of effective obstruction. Tar sands, fracking, and sub-salt deep sea oil deposits are all coming on-stream already, with methane clathrates just up the road. The world’s on a burn, and it can’t go on (but it carries on).

Financial unsustainability is no less blatant, or bizarrely enduring. Since the beginning of the 20th century, once (classically) liberal Western economies have seen government expenditure rise from under 5% to over 40% of total income, with much of Europe crossing the 50% redline (after which nothing remotely recognizable as ‘capitalism’ any longer exists). Public debt levels are tracing geometrically elegant exponential curves, chronic dependency is replacing productive social participation, and generalized sovereign insolvency is now a matter of simple and obvious fact. The only thing clearer than the inevitability of systemic bankruptcy is the political impossibility of doing anything about it, so things carry on, even though they really have to stop. Unintelligible multi-trillion magnitudes of impending calamity stack up, and up, and up in a near future which never quite arrives.

The frozen limbo-state of durable unsustainability is the new normal (which will last until it doesn’t). The pop-cultural expression is zombie apocalypse, a shambling, undying state of endlessly prolonged decomposition. When translated into economic analysis, the result is epitomized by Tyler Cowen’s influential e-book The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better. (Yes, Urban Future is arriving incredibly late to this party, but in a frozen limbo that doesn’t matter.)

In a nutshell, Cowen argues that the exhaustion of three principal sources of ‘low-hanging fruit’ has brought the secular trend of American growth to a state of stagnation that high-frequency business cycles have partially obscured. With the consumption of America’s frontier surplus (free land), educational surplus (smart but educationally-unserved population), and — most importantly — technological surplus, derived from major breakthroughs opening broad avenues of commercial exploitation, growth rates have shriveled to a level that the country’s people are psychologically unprepared to accept as normal.

It fell to Cowen’s GMU colleague Peter Boettke to clearly make the pro-market case for stagnationism that Cowen seems to think he had already persuasively articulated. In an overtly supportive post, Boettke transforms Cowen’s rather elusive argument into a far more pointed anti-government polemic — the discovery of a new depressive equilibrium, in which relentless socio-political degeneration absorbs and neutralizes a decaying trend of techno-economic advance.

An accumulated economic surplus was created by the age of innovation, which the age of economic illusion spent down. We are now coming to the end of that accumulated surplus and thus the full weight of government inefficiencies are starting to be felt throughout the economy.

Perhaps surprisingly, the general tenor of response on the libertarian right was quite different. Rather than celebrating Cowen’s exposure of the statist ruin visited upon Western societies, most of this commentary concentrated upon the stagnationist thesis itself, attacking it from a variety of interlocking angles. David R. Henderson’s Cato review makes stinging economic arguments against Cowen’s claims about land and education. Russ Roberts (at Cafe Hayek) shows how Cowen’s dismal story about stagnant median family incomes draws upon data distorted by historical changes in US family structure and residential patterns. The most common line of resistance, however, instantiated by Don Boudreaux, John Hagel, Steven Horwitz, Bryan Caplan, and Ronald Bailey, among others, rallies in defense of actually existing consumer capitalism. Bailey, for example, notes:

In 1970, a 23-inch color television cost $368 ($2,000 in 2009 dollars). Today, a 22-inch Phillips LCD flat panel TV costs $190. In 1978, an 8-track tape player cost $169 ($550). Today, an iPod Touch with 8 gigabytes of memory costs $204. In 1970, an Olympia adding machine cost $80 ($437 in 2009 dollars). Today, a Canon office calculator costs $6.65. In 1978, a Radio Shack TRS80 computer with 16K of RAM cost $399 ($1300 in 2009 dollars). Today, Costco will sell you an ASUS netbook with 1 gigabyte of RAM for $270. The average car cost $3,900 in 1970 ($21,300 in today’s dollars). A mid-sized 2011 vehicle would cost somewhere around $20,000 and last twice as long.

Another very crude way to look at it is that Americans are four times richer in terms of refrigerators, 10 times richer in terms of TVs, 2.5 times richer when it comes to listening to music on the go, 3,000 times richer in calculators, about 400,000 times richer when it comes to price per kilobyte of computer memory, and two times richer in cars. Cowen dismisses this kind of progress as mere “quality improvements,” but in this case quality becomes its own kind of quantity when it comes to improved living standards.
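A quick sketch of the constant-dollar arithmetic at work in these comparisons (the inflation factor below is the one implied by Bailey’s own TV figures, not an official CPI series):

```python
# Constant-dollar comparison, using the 1970 -> 2009 inflation factor
# implied by Bailey's TV example ($368 then = $2,000 in 2009 dollars).
IMPLIED_1970_TO_2009 = 2000 / 368  # roughly 5.4

def in_2009_dollars(price_1970: float) -> float:
    """Convert a 1970 nominal price into 2009 dollars."""
    return price_1970 * IMPLIED_1970_TO_2009

print(round(in_2009_dollars(368)))           # 2000
print(round(in_2009_dollars(368) / 190, 1))  # 10.5 -- the 'ten times richer in TVs'
```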

What seems pretty clear from most of this (and already in Cowen’s account) is that nothing much has been moving forward in the world’s ‘developed’ economies for four decades except for the information technology revolution and its Moore’s Law dynamics. Abstract out the microprocessor, and even the most determinedly optimistic vision of recent trends is gutted to the point of expiration. Without computers, there’s nothing happening, or at least nothing good.

[… still crawling …]

[Tomb]

Calendric Dominion (Part 5)

From Crimson Paradise to Soft Apocalypse

Despite its modernity and decimalism, the French calendrier républicain or révolutionnaire had no Year Zero, but it re-set the terms of understanding. A topic that had been conceived as an intersection of religious commemoration with astronomical fact became overtly ideological, and dominated by considerations of secular politics. The new calendar, which replaced AD 1792 with the first year of the new ‘Era of Liberty’, lasted for less than 14 years. It was formally abolished by Napoléon, effective from 1 January 1806 (the day after 10 Nivôse an XIV), although it was briefly revived during the Paris Commune (in AD 1871, or Année 79 de la République), when the country’s revolutionary enthusiasm was momentarily re-ignited.

For the left, the calendric re-set meant radical re-foundation, and symbolic extirpation of the Ancien Régime. For the right, it meant immanentization of the eschaton, and the origination of totalitarian terror. Both definitions were confirmed in 1975, when Year Zero was finally reached in the killing fields of the Kampuchean Khmer Rouge, where over a quarter of the country’s population perished during efforts to blank-out the social slate and start over. Khmer Rouge leader Saloth Sar (better known by his nom de guerre Pol Pot) had made ‘Year Zero’ his own forever, re-branded as a South-east Asian final solution.

Year Zero was henceforth far too corpse-flavored to retain propaganda value, but that does not render the calendric equation 1975 = 0 insignificant (rather the opposite). Irrespective of its parochialism in time and space, corresponding quite strictly to a re-incarnation of (xenophobic-suicidal) ‘national socialism’, it defines a meaningful epoch, as the high-water mark of utopian overreach, and the complementary re-valorization of conservative pragmatism. Appropriately enough, Year Zero describes an instant without duration, in which the age of utopian time is terminated in exact coincidence with its inauguration. The era it opens is characterized, almost perfectly, by its renunciation, as fantasy social programming extinguishes itself in blood and collapse. The immanent eschaton immediately damns itself.

Historical irony makes this excursion purely (sub-) academic, because the new era is essentially disinclined to conceive itself as such. What begins from this Year Zero is a global culture of ideological exhaustion, or of ‘common sense’, acutely sensitive to the grinning death’s head hidden in beautiful dreams, and reconciled to compromise with the non-ideal. From the perspective of fantastic revolutionary expectation, the high-tide of perfectionist vision ebbs into disillusionment and tolerable dissatisfaction – but at least it doesn’t eat our children. The new era’s structural modesty of ambition has no time for a radical re-beginning or crimson paradise, even when it is historically defined by one.

Pol Pot’s Year Zero is sandwiched between the publication of Eric Voegelin’s The Ecumenic Age (1974), and the first spontaneous Chinese mass protests against the Great Proletarian Cultural Revolution (over the months following the death of Zhou Enlai, in January 1976). It is noteworthy in this regard that Deng Xiaoping eulogized Zhou at his memorial ceremony for being “modest and prudent” (thus the New Aeon speaks).

In the Anglo-American world, the politics of ideological exhaustion were about to take an explicitly conservative form, positively expressed as ‘market realism’ (and in this sense deeply resonant with, as well as synchronized to, Chinese developments). Margaret Thatcher assumed leadership of the British Conservative Party in February 1975, and Ronald Reagan declared his presidential candidacy in November of the same year. The English-speaking left would soon be traumatized by a paradoxical ‘conservative revolution’ that extracted relentless energy from the very constriction of political possibility. What could not happen quickly became the primary social dynamo, as articulated by the Thatcherite maxim: “There is no alternative” (= option zero). The auto-immolation of utopia had transmuted into a new beginning.

Whilst the era of not restarting from zero can be dated with approximate accuracy (from AD n – 1975), and had thus in fact restarted from zero, in profoundly surreptitious fashion, its broad consequence was to spread and entrench (Gregorian) Calendric Dominion ever more widely and deeply. The prevailing combination of radically innovative globalization (both economic and technological) with prudential social conservatism made such an outcome inevitable. Symbolic re-commencement wasn’t on anybody’s agenda, and even as the postmodernists declared the end of ‘grand narratives’, the first planetary-hegemonic narrative structure in history was consolidating its position of uncontested monopoly. Globalization was the story of the world, with Gregorian dating as its grammar.

Orphaned by ideological exhaustion, stigmatized beyond recovery by its association with the Khmer Rouge, and radically maladapted to the reigning spirit of incremental pragmatism, by the late 20th century Year Zero was seemingly off the agenda, unscheduled, and on its own. Time, then, for something truly insidious.

On January 18, 1985, Usenet poster Spencer L. Bolles called attention to a disturbing prospect that had driven a friend into insomnia:

I have a friend that raised an interesting question that I immediately tried to prove wrong. He is a programmer and has this notion that when we reach the year 2000, computers will not accept the new date. Will the computers assume that it is 1900, or will it even cause a problem? I violently opposed this because it seemed so meaningless. Computers have entered into existence during this century, and has software, specifically accounting software, been prepared for this turnover? If this really comes to pass and my friend is correct, what will happen? Is it anything to be concerned about?

Bolles’ anonymous friend was losing sleep over what would come to be known as the ‘Y2K problem’. In order to economize on memory in primitive early-generation computers, a widely-adopted convention recorded years by their final two digits. The millennium and century were ignored, since it was assumed that software upgrades would have made the problem moot by the time it became imminent, close to the ‘rollover’ (of century and millennium) in the year AD 2000. Few had anticipated that the comparative conservatism of software legacies (relative to hardware development) would leave the problem entirely unaddressed even as the crisis date approached.
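The failure mode is easy to reconstruct. A minimal sketch of the convention in general (not of any particular legacy system):

```python
# Years stored as two digits: the century and millennium are discarded.
def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Elapsed years as a two-digit date system computes them."""
    return end_yy - start_yy

print(years_elapsed(85, 99))  # 14 -- correct within the century
print(years_elapsed(85, 0))   # -85 -- at the 2000 rollover, time runs backwards
```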

In the end, Y2K was a non-event that counted for nothing, although its preparation costs, stimulus effects (especially on outsourcing to the emerging Indian software industry), and panic potential were all considerable. Its importance to the history of the calendar – whilst still almost entirely virtual – is extremely far-reaching.

Y2K resulted from the accidental — or ‘spontaneous’ — emergence of a new calendrical order within the globalized technosphere. Its Year Zero, 0K (= 1900), was devoid of all parochial commemoration or ideological intention, even as it was propagated through increasingly computerized communication channels to a point of ubiquity that converged, asymptotically, with that attained by Western Calendric Dominion over the complete sweep of world history. The 20th century had been recoded, automatically, as the 1st century of the Cybernetic Continuum. If Y2K had completed its reformatting of the planetary sphere-drive in the way some (a few deluded hysterics) had expected, the world would now be approaching the end of the year 0K+111, settled securely in its first arithmetically-competent universal calendar, and historically oriented by the same system of electronic computation that had unconsciously decided upon the origin of positive time. Instead, the ‘millennium bug’ was fixed, and theological date-counting prolonged its dominance, uninterrupted (after much ado about nothing). Most probably, the hegemonic cultural complex encrusted in Calendric Dominion never even noticed the cybernetic insurrection it had crushed.
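For illustration only, the implied count is ordinary Gregorian arithmetic with a re-based epoch; a hypothetical sketch:

```python
# The 'Cybernetic Continuum' count implied by two-digit dating:
# an epoch at AD 1900 (0K), wrapping at the century.
def to_0k(year_ad: int) -> int:
    """Re-base a Gregorian year on the 0K (AD 1900) epoch."""
    return year_ad - 1900

print(to_0k(2011))        # 111 -- the year 0K+111 mentioned above
print(to_0k(2000) % 100)  # 0 -- Y2K: the two-digit register wraps to zero
```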

Between 0K and Y2K, the alpha and omega of soft apocalypse, there is not only a century of historical time, but also an inversion of attitude. Time departs 0K, as from any point of origin, accumulating elapsed duration through its count. Y2K, in contrast, was a destination, which time approached, as if to an apocalyptic horizon. Whilst not registered as a countdown, it might easily have been. The terminus was precisely determined (no less than the origin), and the strictest formulation of the millennium bug construed the rollover point as an absolute limit to recordable time, beyond which no future was even imaginable. For any hypothetical Y2K-constrained computer intelligence, denied access to dating procedures that over-spilled its two-digit year registry, residual time shrank towards zero as the millennium event loomed. Once all the nines are reached, time is finished, at the threshold of eternity, where beginning and end are indistinguishable (in 0).

“0K, it’s time to wrap this puppy up.” – Revelation 6:14

(next, and last, the end (at last))

[Tomb]

Calendric Dominion (Part 3)

In Search of Year Zero

A Year Zero signifies a radical re-beginning, making universal claims. In modern, especially recent modern times, it is associated above all with ultra-modernist visions of total politics, at its maximum point of utopian and apocalyptic extremity. The existing order of the world is reduced to nothing, from which a new history is initiated, fundamentally disconnected from anything that occurred before, and morally indebted only to itself. Predictably enough, among conservative commentators (in the widest sense), such visions are broadly indistinguishable from the corpse-strewn landscapes of social catastrophe, haunted by the ghosts of unrealizable dreams.

Christianity’s global Calendric Dominion is paradoxical — perhaps even ‘dialectical’ — in this regard. It provides the governing model of historical rupture and unlimited ecumenical extension, and thus of total revolution, whilst at the same time representing the conservative order antagonized by modernistic ambition. Its example incites the lurch to Year Zero, even as it has no year zero of its own. Ultimately, its dialectical provocation tends towards Satanic temptation: the promise of Anti-Christian Apocalypse, or absolute news to a second power. (“If the Christians could do it, why couldn’t we?” Cue body-counts scaling up towards infinity.)

This tension exists not only between an established Christian order and its pseudo-secular revolutionary after-image, but also within Christianity itself, which is split internally by the apparent unity and real dissociation of ‘messianic time’. The process of Christian calendric consolidation was immensely protracted. A distance of greater than half a millennium separated the clear formulation of the year count from the moment commemorated, with further centuries required to fully integrate historical recording on this basis, digesting prior Jewish, Roman, and local date registries, and laying the foundation for a universalized Christian articulation of time. By the time the revolutionary ‘good news’ had been coherently formalized into a recognizable prototype of the hegemonic Western calendar, it had undergone a long transition from historical break to established tradition, with impeccable conservative credentials.

Simultaneously, however, the process of calendric consolidation sustained, and even sharpened, the messianic expectation of punctual, and truly contemporary rupture, projected forwards as duplication, or ‘second coming’ of the initial division. Even if the moment in which history had been sundered into two parts — before and after, BC and AD — now lay in quite distant antiquity, its example remained urgent, and promissory. Messianic hope was thus torn and compacted by an intrinsic historical doubling, which stretched it between a vastly retrospective, gradually recognized beginning, and a prospect of sudden completion, whose credibility was assured by its status as repetition. What had been would be again, transforming the AD count into a completed sequence that was confirmed in the same way it was terminated (through Messianic intervention).

Unsurprisingly, the substantial history of Western calendric establishment is twinned with the rise of millenarianism, through phases that trend to increasingly social-revolutionary forms, and eventually make way for self-consciously anti-religious, although decidedly eschatological, varieties of modernistic total politics. Because whatever has happened must — at least — be possible, the very existence of the calendar supports anticipations of absolute historical rupture. Its count, simply by beginning, prefigures an end. What starts can re-start, or conclude.

Zero, however, intrudes diagonally. It even introduces a comic aspect, since whatever the importance of the Christian revelation to the salvation of our souls, it is blatantly obvious that it failed to deliver a satisfactory arithmetical notation. For that, Christian Europe had to await the arrival of the decimal numerals from India, via the Moslem Middle East, and the ensuing revolution of calculation and book-keeping that coincided with the Renaissance, along with the birth of mercantile capitalism in the city states of northern Italy.

Indeed, for anybody seeking a truly modern calendar, the Arrival of Zero would mark an excellent occasion for a new year zero (AZ 0?), around AD 1500. Although this would plausibly date the origin of modernity, the historical imprecision of the event counts against it. In addition, the assimilation of zero by germinal European (and thus global) capitalism was evidently gradual — if comparatively rapid — rather than a punctual ‘revolutionary’ transition of the kind commemorative calendric zero is optimally appropriate to. (If Year Zero is thus barred from the designation of its own world-historic operationalization, it is perhaps structurally doomed to misapplication and the production of disillusionment.)

The conspicuous absence of zero from the Western calendar (count), exposed in its abrupt jolt from 1 BC to AD 1, is an intolerable and irreparable stigma that brings its world irony to a zenith. In the very operation of integrating world history, in preparation for planetary modernity, it remarks its own debilitating antiquity and particularity, in the most condescending modern sense of the limited and the primitive — crude, defective and underdeveloped.

How could a moment of self-evident calculative incompetence provide a convincing origin-point for subsequent historical calculation? Year Zero escaped all possibility of conceptual apprehension at the moment in the time-count where it is now seen to belong, and infinity (the reciprocal of zero) proves no less elusive. Infinity was inserted into a time when (and place where) it demonstrably made no sense, and the extraordinary world-historical impression that it made did nothing — not even nothing — to change that situation. Is this not a worthy puzzle for theologians? Omnipotent, omniscient, omnibenevolent, yet hopeless at maths — these are not the characteristics of a revelation designed to impress technologists or accountants. All the more reason, then, to take this comedy seriously, in all its ambivalence — since the emerging world of technologists and accountants, the techno-commercial (runaway-industrial, or capitalist) world that would globalize the earth, was weaned within the playpen of this calendar, and no other. Modernity had elected to date itself in a way that its own kindergarten students would scorn.

[Tomb]

Statistical Mentality

Things are very probably weirder than they seem

As the natural sciences have developed to encompass increasingly complex systems, scientific rationality has become ever more statistical, or probabilistic. The deterministic classical mechanics of the Enlightenment was revolutionized by the near-equilibrium statistical mechanics of late 19th century atomists, by quantum mechanics in the early 20th century, and by the far-from-equilibrium complexity theorists of the later 20th century. Mathematical neo-Darwinism, information theory, and quantitative social sciences compounded the trend. Forces, objects, and natural types were progressively dissolved into statistical distributions: heterogeneous clouds, entropy deviations, wave functions, gene frequencies, noise-signal ratios and redundancies, dissipative structures, and complex systems at the edge of chaos.

By the final decades of the 20th century, an unbounded probabilism was expanding into hitherto unimagined territories, testing deeply unfamiliar and counter-intuitive arguments in statistical metaphysics, or statistical ontology. It no longer sufficed for realism to attend to multiplicities, because reality was itself subject to multiplication.

In his declaration cogito ergo sum, Descartes concluded (perhaps optimistically) that the existence of the self could be safely inferred from the fact of thinking. The statistical ontologists inverted this formula, asking: given my existence (which is to say, an existence that seems like this to me), what kind of reality is probable? Which reality is this likely to be?

Carnegie Mellon roboticist Hans Moravec, in his 1988 book Mind Children, seems to have initiated the genre. Extrapolating Moore’s Law into the not-too-distant future, he anticipated computational capacities that exceeded those of all biological brains by many orders of magnitude. Since each human brain runs its own more-or-less competent simulation of the world in order to function, it seemed natural to expect the coming technospheric intelligences to do the same, but with vastly greater scope, resolution, and variety. The mass replication of robot brains, each billions or trillions of times more powerful than those of its human progenitors, would provide a substrate for innumerable, immense, and minutely detailed historical simulations, within which human intelligences could be reconstructed to an effectively-perfect level of fidelity.

This vision feeds into a burgeoning literature on non-biological mental substrates, consciousness uploading, mind clones, whole-brain emulations (‘ems’), and Matrix-style artificial realities. Since the realities we presently know are already simulated (let us momentarily assume) on biological signal-processing systems with highly-finite quantitative specifications, there is no reason to confidently anticipate that an ‘artificial’ reality simulation would be in any way distinguishable.

Is ‘this’ history or its simulation? More precisely: is ‘this’ a contemporary biological (brain-based) simulation, or a reconstructed, artificial memory, run on a technological substrate ‘in the future’? That is a question without classical solution, Moravec argues. It can only be approached, rigorously, with statistics, and since the number of fine-grained simulated histories (unknown but probably vast) overwhelmingly exceeds the number of actual or original histories (for the sake of this argument, one), then the probabilistic calculus points unswervingly towards a definite conclusion: we can be near-certain that we are inhabitants of a simulation run by artificial (or post-biological) intelligences at some point in ‘our future’. At least – since many alternatives present themselves – we can be extremely confident, on grounds of statistical ontology, that our existence is non-original (if not historical reconstruction, it might be a game or fiction).
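The counting argument itself is elementary; a toy version, with illustrative numbers rather than anything Moravec commits to:

```python
from fractions import Fraction

def p_original(n_simulated: int, n_original: int = 1) -> Fraction:
    """Chance of being an 'original' observer, given a uniform prior
    over all indistinguishable histories, simulated and original alike."""
    return Fraction(n_original, n_original + n_simulated)

for n in (0, 1, 1_000, 10**9):
    print(n, float(p_original(n)))
# 0 simulations -> 1.0; one -> 0.5; a billion -> ~1e-9 (lottery-ticket odds)
```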

Nick Bostrom formalizes the simulation argument in his article ‘The Simulation Argument: Why the Probability that You are Living in the Matrix is Quite High’:

Now we get to the core of the simulation argument. This does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions:

(1) The chances that a species at our current level of development can avoid going extinct before becoming technologically mature is negligibly small
(2) Almost no technologically mature civilisations are interested in running computer simulations of minds like ours
(3) You are almost certainly in a simulation.

Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one is true (it does not tell us which).

If obstacles to the existence of high-level simulations (1 and 2) are removed, then statistical reasoning takes over, following the exact track laid down by Moravec. We are “almost certainly” inhabiting a “computer simulation that was created by some advanced civilization” because these saturate to near-exhaustion the probability space for realities ‘like this’. If such simulations exist, original lives would be as unlikely as winning lottery tickets, at best.

Bostrom concludes with an intriguing and influential twist:

If we are in a simulation, is it possible that we could know that for certain? If the simulators don’t want us to find out, we probably never will. But if they choose to reveal themselves, they could certainly do so. Maybe a window informing you of the fact would pop up in front of you, or maybe they would “upload” you into their world. Another event that would let us conclude with a very high degree of confidence that we are in a simulation is if we ever reach the point where we are about to switch on our own simulations. If we start running simulations, that would be very strong evidence against (1) and (2). That would leave us with only (3).

If we create fine-grained reality simulations, we demonstrate – to a high level of statistical confidence – that we already inhabit one, and that the history leading up to this moment of creation was fake. Paul Almond, an enthusiastic statistical ontologist, draws out the radical implication – reverse causation – asking: can you retroactively put yourself in a computer simulation?

Such statistical ontology, or Bayesian existentialism, is not restricted to the simulation argument. It increasingly subsumes discussions of the Anthropic Principle, of the Many Worlds Interpretation of Quantum Mechanics, and exotic modes of prediction from the Doomsday Argument to Quantum Suicide (and Immortality).

Whatever is really happening, we probably have to chance it.

[Tomb]

“2035. Probably earlier.”

There’s fast, and then there’s … something more

Eliezer Yudkowsky now categorizes his article ‘Staring into the Singularity’ as ‘obsolete’. Yet it remains among the most brilliant philosophical essays ever written. Rarely, if ever, has so much of value been said about the absolutely unthinkable (or, more specifically, the absolutely unthinkable for us).

For instance, Yudkowsky scarcely pauses at the phenomenon of exponential growth, despite the fact that this already overtaxes all comfortable intuition and ensures revolutionary changes of such magnitude that speculation falters. He is adamant that exponentiation (even Kurzweil‘s ‘double exponentiation’) only reaches the starting point of computational acceleration, and that propulsion into Singularity is not exponential, but hyperbolic.

Each time the speed of thought doubles, time-schedules halve. When technology, including the design of intelligences, succumbs to such dynamics, it becomes recursive. The rate of self-improvement collapses with smoothly increasing rapidity towards instantaneity: a true, mathematically exact, or punctual Singularity. What lies beyond is not merely difficult to imagine, it is absolutely inconceivable. Attempting to picture or describe it is a ridiculous futility. Science fiction dies.

“A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

“That is the ‘Transcended’ version of the doubling sequence. Let’s call the ‘Transcend’ of a sequence {a_0, a_1, a_2, …} the function where the interval between a_n and a_(n+1) is inversely proportional to a_n. So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it were continuous, would be the hyperbolic function y = 2/(2 – x). When x = 2, then (2 – x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.”

There could scarcely be a more precise, plausible, or consequential formula: Doubling periods halve. On the slide into Singularity — I. J. Good’s ‘intelligence explosion’ — exponentiation is compounded by a hyperbolic trend. The arithmetic of such a process is quite simple, but its historical implications are strictly incomprehensible.
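A numerical sketch of the quoted sequence (capacity doubling, with each interval inversely proportional to capacity) makes the hyperbolic behavior concrete:

```python
# Transcended doubling: a_n doubles each step; each step takes 1/a_n time units.
# Elapsed time converges on x = 2 while capacity diverges -- y = 2/(2 - x).
capacity, elapsed = 1, 0.0
for _ in range(50):
    elapsed += 1 / capacity  # interval inversely proportional to capacity
    capacity *= 2            # the doubling
print(elapsed)   # ~2.0 -- fifty doublings fit inside two time units
print(capacity)  # 2**50 -- growth explodes while elapsed time never passes 2
```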

“I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from ‘impossible’ to ‘obvious’. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards… “

Since the argument takes human thought to its shattering point, it is natural for some to be repulsed by it. Yet its basics are almost impregnable to logical objection. Intelligence is a function of the brain. The brain has been ‘designed’ by natural processes (posing no discernible special difficulties). Thus, intelligence is obviously an ultimately tractable engineering problem. Nature has already ‘engineered it’ whilst employing design methods of such stupefying inefficiency that only brute, obstinate force, combined of course with complete ruthlessness, has moved things forwards. Yet the tripling of cortical mass within the lineage of the higher primates has only taken a few million years, and — for most of this period — a modest experimental population (in the low millions or less).

The contemporary technological problem, in contrast to the preliminary biological one, is vastly easier. It draws upon a wider range of materials and techniques, an installed intelligence and knowledge base, superior information media, more highly-dynamized feedback systems, and a self-amplifying resource network. Unsurprisingly it is advancing at incomparably greater speed.

“If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90’s, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough – just one major insight – in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.”

[Tomb]

Moore and More

Doubling down on Moore’s Law is the futurist main current

Cycles cannot be dismissed from futuristic speculation (they always come back), but they no longer define it. Since the beginning of the electronic era, their contribution to the shape of the future has been progressively marginalized.

The model of linear and irreversible historical time, originally inherited from Occidental religious traditions, was spliced together with ideas of continuous growth and improvement during the industrial revolution. During the second half of the 20th century, the dynamics of electronics manufacture consolidated a further – and fundamental – upgrade, based upon the expectation of continuously accelerating change.

The elementary arithmetic of counting along the natural number line provides an intuitively comfortable model for the progression of time, due to its conformity with clocks, calendars, and the simple idea of succession. Yet the dominant historical forces of the modern world promote a significantly different model of change, one that tends to shift addition upwards, into an exponent. Demographics, capital accumulation, and technological performance indices do not increase through unitary steps, but through rates of return, doublings, and take-offs. Time explodes, exponentially.
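For illustration, the contrast between the two models of time is easily made concrete (the two-year doubling period anticipates the Moore’s Law figure discussed below):

```python
# Counting by succession versus counting by doublings.
def linear_count(years: float, step: float = 1.0) -> float:
    return years * step  # add a constant each year

def doubling_count(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)  # double every period

for years in (0, 10, 20, 40):
    print(years, linear_count(years), doubling_count(years))
# After 40 years the linear count reads 40; the doubling count, 2**20 (~1e6).
```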

The iconic expression of this neo-modern time, counting succession in binary logarithms, is Moore’s Law, which determines a two-year doubling period for the density of transistors on microchips (“cramming more components onto integrated circuits”). In a short essay published in Pajamas Media, celebrating the prolongation of Moore’s Law as Intel pushes chip architecture into the third dimension, Michael S. Malone writes:

“Today, almost a half-century after it was first elucidated by legendary Fairchild and Intel co-founder Dr. Gordon Moore in an article for a trade magazine, it is increasingly apparent that Moore’s Law is the defining measure of the modern world. All other predictive tools for understanding life in the developed world since WWII — demographics, productivity tables, literacy rates, econometrics, the cycles of history, Marxist analysis, and on and on — have failed to predict the trajectory of society over the decades … except Moore’s Law.”

Whilst crystallizing — in silico — the inherent acceleration of neo-modern, linear time, Moore’s Law is intrinsically nonlinear, for at least two reasons. Firstly, and most straightforwardly, it expresses the positive feedback dynamics of technological industrialism, in which rapidly-advancing electronic machines continuously revolutionize their own manufacturing infrastructure. Better chips make better robots make better chips, in a spiraling acceleration. Secondly, Moore’s Law is at once an observation, and a program. As Wikipedia notes:

“[Moore’s original] paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue ‘for at least ten years’. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. … Although Moore’s law was initially made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard, it can be viewed as a self-fulfilling prophecy.”

Malone comments:

“… semiconductor companies around the world, big and small, and not least because of their respect for Gordon Moore, set out to uphold the Law — and they have done so ever since, despite seemingly impossible technical and scientific obstacles. Gordon Moore not only discovered Moore’s Law, he made it real. As his successor at Intel, Paul Otellini, once told me, ‘I’m not going to be the guy whose legacy is that Moore’s Law died on his watch.'”

If Technological Singularity is the ‘rapture of the nerds’, Gordon Moore is their Moses. Electro-industrial capitalism is told to go forth and multiply, and to do so with a quite precisely time-specified binary exponent. In its adherence to the Law, the integrated circuit industry is uniquely chosen (and a light unto the peoples). As Malone concludes:

“Today, every segment of society either embraces Moore’s Law or is racing to get there. That’s because they know that if only they can get aboard that rocket — that is, if they can add a digital component to their business — they too can accelerate away from the competition. That’s why none of the inventions we Baby Boomers as kids expected to enjoy as adults — atomic cars! personal helicopters! ray guns! — have come true; and also why we have even more powerful tools and toys — instead. Whatever can be made digital, if not in the whole, but in part — marketing, communications, entertainment, genetic engineering, robotics, warfare, manufacturing, service, finance, sports — it will, because going digital means jumping onto Moore’s Law. Miss that train and, as a business, an institution, or a cultural phenomenon, you die.”

[Tomb]

Implosion

We could be on the brink of a catastrophic implosion – but that’s OK

Science fiction has tended to extroversion. In America especially, where it found a natural home among an unusually future-oriented people, the iconic SF object was indisputably the space ship, departing the confines of Earth for untrammeled frontiers. The future was measured by the weakening of the terrestrial gravity well.

Cyberpunk, arriving in the mid-1980s, delivered a cultural shock. William Gibson’s Neuromancer still included some (Earth-orbital) space activity — and even a communication from Alpha Centauri — but its voyages now curved into the inner space of computer systems, projected through the starless tracts of Cyberspace. Interstellar communication bypassed biological species, and took place between planetary artificial intelligences. The United States of America seemed to have disappeared.

Space and time had collapsed, into the ‘cyberspace matrix’ and the near-future. Even the abstract distances of social utopianism had been incinerated in the processing cores of micro-electronics. Judged by the criteria of mainstream science fiction, everything cyberpunk touched upon was gratingly close, and still closing in. The future had become imminent, and skin-tight.

Gibson’s cities had not kept up with his wider – or narrower – vision. The urban spaces of his East Coast North America were still described as ‘The Sprawl’, as if stranded in a rapidly-obsolescing state of extension. The crushing forces of technological compression had leapt beyond social geography, sucking all historical animation from the decaying husks of ‘meat space’. Buildings were relics, bypassed by the leading edge of change.

(Gibson’s Asian city-references are, however, far more intense, inspired by such innovations in urban compression as the Kowloon Walled City, and Japanese ‘coffin hotels’. In addition, urbanists disappointed by first-wave cyberpunk have every reason to continue on into Spook Country, where the influence of GPS-technology on the re-animation of urban space nourishes highly fertile speculations.)

Star cruisers and alien civilizations belong to the same science fiction constellation, brought together by the assumption of expansionism. Just as, in the realm of fiction, this ‘space opera’ future collapsed into cyberpunk, in (more or less) mainstream science – represented by SETI programs – it perished in the desert of the Fermi Paradox. (OK, it’s true, Urban Future has a bizarrely nerdish obsession with this topic.)

John M. Smart’s solution to the Fermi Paradox is integral to his broader ‘Speculations on Cosmic Culture’ and emerges naturally from compressive development. Advanced intelligences do not expand into space, colonizing vast galactic tracts or dispersing self-replicating robot probes in a program of exploration. Instead, they implode, in a process of ‘transcension’ — resourcing themselves primarily through the hyper-exponential efficiency gains of extreme miniaturization (through micro- and nano- to femto-scale engineering, of subatomic functional components). Such cultures or civilizations, nucleated upon self-augmenting technological intelligence, emigrate from the extensive universe in the direction of abysmal intensity, crushing themselves to near-black-hole densities at the edge of physical possibility. Through transcension, they withdraw from extensive communication (whilst, perhaps, leaving ‘radio fossils’ behind, before these blink-out into the silence of cosmic escape).

If Smart’s speculations capture the basic outlines of a density-attracted developmental system, then cities should be expected to follow a comparable path, characterized by an escape into inwardness, an interior voyage, involution, or implosion. Approaching singularity on an accelerating trajectory, each city becomes increasingly inwardly directed, as it falls prey to the irresistible attraction of its own hyperbolic intensification, whilst the outside world fades to irrelevant static. Things disappear into cities, on a path of departure from the world. Their destination cannot be described within the dimensions of the known – and, indeed, tediously over-familiar – universe. Only in the deep exploratory interior is innovation still occurring, but there it takes place at an infernal, time-melting rate.

What might Smart-type urban development suggest?

(a) Devo Predictability. If urban development is neither randomly generated by internal processes, nor arbitrarily determined by external decisions, but rather guided predominantly by a developmental attractor (defined primarily by intensification), it follows that the future of cities is at least partially autonomous with regard to the national-political, global-economic, and cultural-architectural influences that are often invoked as fundamentally explanatory. Urbanism can be facilitated or frustrated, but its principal ‘goals’ and practical development paths are, in each individual case, internally and automatically generated. When a city ‘works’ it is not because it conforms to an external, debatable ideal, but rather because it has found a route to cumulative intensification that strongly projects its ‘own’, singular and intrinsic, urban character. What a city wants is to become itself, but more — taking itself further and faster. That alone is urban flourishing, and understanding it is the key that unlocks the shape of any city’s future.

(b) Metropolitanism. Methodological nationalism has been systematically over-emphasized in the social sciences (and not only at the expense of methodological individualism). A variety of influential urban thinkers, from Jane Jacobs to Peter Hall, have sought to correct this bias by focusing upon the significance, and partial autonomy, of urban economies, urban cultures, and municipal politics to aggregate prosperity, civilization, and golden ages. They have been right to do so. City growth is the basic socio-historical phenomenon.

(c) Cultural Introversion. John Smart argues that an intelligence undergoing advanced relativistic development finds the external landscape increasingly uninformative and non-absorbing. The search for cognitive stimulation draws it inwards. As urban cultures evolve, through accelerating social complexity, they can be expected to manifest exactly this pattern. Their internal processes, of runaway intelligence implosion, become ever more gripping, engaging, surprising, productive, and educational, whilst the wider cultural landscape subsides into predictable tedium, of merely ethnographic and historical relevance. Cultural singularity becomes increasingly urban-futural (rather than ethno-historical), to the predictable disgruntlement of traditional nation states. Like Gibson’s Terrestrial Cyberspace, encountering another of its kind in orbit around Alpha Centauri, cosmopolitan connectivity is made through inner voyage, rather than expansionary outreach.

(d) Scale Resonance. At the most abstract level, the relation between urbanism and microelectronics is scalar (fractal). The coming computers are closer to miniature cities than to artificial brains, dominated by traffic problems (congestion), migration / communications, zoning issues (mixed use), the engineering potential of new materials, questions of dimensionality (3D solutions to density constraints), entropy or heat / waste dissipation (recycling / reversible computation), and disease control (new viruses). Because cities, like computers, exhibit (accelerating phylogenetic) development within observable historical time, they provide a realistic model of improvement for compact information-processing machinery, sedimented as a series of practical solutions to the problem of relentless intensification. Brain-emulation might be considered an important computational goal, but it is near-useless as a developmental model. Intelligent microelectronic technologies contribute to the open-ended process of urban problem-solving, but they also recapitulate it at a new level.

(e) Urban Matrix. Does urban development exhibit the real embryogenesis of artificial intelligence? Rather than the global Internet, military Skynet, or lab-based AI program, is it the path of the city, based on accelerating intensification (STEM compression), that best provides the conditions for emergent super-human computation? Perhaps the main reason for thinking so is that the problem of the city – density management and accentuation – already commits it to computational engineering, in advance of any deliberately guided research. The city, by its very nature, compresses, or intensifies, towards computronium. When the first AI speaks, it might be in the name of the city that it identifies as its body, although even that would be little more than a ‘radio fossil’ — a signal announcing the brink of silence — as the path of implosion deepens, and disappears into the alien interior.

[Tomb]

Event Horizon

People gravitate to cities, but what are cities gravitating into? Some strange possibilities suggest themselves.

Cities are defined by social density. This simple but hugely consequential insight provides the central thesis of Edward Glaeser’s Triumph of the City: How our Greatest Invention Makes us Richer, Smarter, Greener, Healthier and Happier (2011), where it is framed as both an analytical tool and a political project.

“Cities are the absence of physical space between people and companies. They enable us to work and play together, and their success depends on the demand for physical connection,” Glaeser remarks.

The identification of urban life with density approaches tautology, but it is one that Glaeser not only observes, but also celebrates. Closely-packed people are more productive. As Alfred Marshall noted in 1920, ‘agglomeration economies’ feed a self-reinforcing process of social compression that systematically out-competes diffuse populations in all fields of industrial activity. In addition, urbanites are also happier, longer-living, and their ecological footprint is smaller, Glaeser insists, drawing upon a variety of social scientific evidence to make his case. Whether social problems are articulated in economic, hedonic, or environmental terms, (dense) urbanism offers the most practical solution.

The conclusion Glaeser draws, logically enough, is that densification should be encouraged, rather than inhibited. He interprets sprawl as a reflection of perverse incentives, whilst systematically contesting the policy choices that restrain the trend to continuous urban compression. His most determined line of argumentation is directed in favor of high-rise development, and against the planning restrictions that keep cities stunted. A city that is prevented from soaring will be over-expensive and under-excited, inflexible, inefficient, dirty, backward-looking, and peripherally sprawl- or slum-cluttered. Onwards and upwards is the way.

Urban planning has its own measure for density: the FAR (Floor Area Ratio), typically determined as a limit set upon permitted concentration. An FAR of 2, for instance, allows a developer to build a two-story building over an entire area, a four-story building on half the area, or an eight-story building on a quarter of the area. An FAR sets an average ceiling on urban development. It is essentially a bureaucratic device for deliberately stunting vertical growth.

As Glaeser shows, Mumbai’s urban development problems have been all-but-inevitable given the quite ludicrous FAR of 1.33 that was set for India’s commercial capital in 1964. Sprawling slum development has been the entirely predictable outcome.
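The arithmetic of the FAR example above, expressed as a function (a sketch using the values quoted in the text):

```python
def stories(far: float, footprint_fraction: float) -> float:
    """Permitted stories when a building covers the given fraction
    of its lot, under a floor-area-ratio cap."""
    return far / footprint_fraction

# The text's example, FAR = 2: two stories over the whole lot,
# four on half of it, eight on a quarter.
print(stories(2.0, 1.0), stories(2.0, 0.5), stories(2.0, 0.25))  # 2.0 4.0 8.0
# Mumbai's 1964 cap: even on a quarter-lot footprint, barely five stories.
print(round(stories(1.33, 0.25), 1))  # 5.3
```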

Whilst sparring with Jane Jacobs over the impact of high-rise construction on urban life, Glaeser is ultimately in agreement on the importance of organic development, based on spontaneous patterns of growth. Both attribute the most ruinous urban problems to policy errors, most obviously the attempt to channel – and in fact deform – the urban process through arrogant bureaucratic fiat. When cities fail to do what comes naturally, they fail, and what comes naturally, Glaeser argues, is densification.

It would be elegant to refer to this deep trend towards social compression, the emergence, growth, and intensification of urban settlement, as urbanization, but we can’t do that. Even when awkwardly named, however, it exposes a profound social and historical reality, with striking implications, amounting almost to a specifically social law of gravitation. As with physical gravity, an understanding of the forces of social attraction supports predictions, or at least the broad outlines of futuristic anticipation, since these forces of agglomeration and intensification manifestly shape the future.

John M. Smart makes only passing references to cities, but his Developmental Singularity (DS) hypothesis is especially relevant to urban theory because it focuses upon the topic of density. He argues that acceleration, or time-compression, is only one aspect of a general evolutionary (more precisely, evolutionary-developmental, or ‘evo devo’) trend that envelops space, time, energy, and mass. This ‘STEM-compression’ is identified with ascending intelligence (and negative entropy). It reflects a deep cosmic-historical drive to the augmentation of computational capacity that marries “evolutionary processes that are stochastic, creative, and divergent [with] developmental processes that produce statistically predictable, robust, conservative, and convergent structures and trajectories.”

Smart notes that “the leading edge of structural complexity in our universe has apparently transitioned from universally distributed early matter, to galaxies, to replicating stars within galaxies, to solar systems in galactic habitable zones, to life on special planets in those zones, to higher life within the surface biomass, to cities, and soon, to intelligent technology, which will be a vastly more local subset of Earth’s city space.”

Audaciously, Smart projects this trend to its limit: “Current research (Aaronson 2006, 2008) now suggests that building future computers based on quantum theory, one of the two great theories of 20th century physics, will not yield exponentially, but only quadratically growing computational capacity over today’s classical computing. In the search for truly disruptive future computational capacity emergence, we can therefore look to the second great physical theory of the last century, relativity. If the DS hypothesis is correct, what we can call relativistic computing (a black-hole-approximating computing substrate) will be the final common attractor for all successfully developing universal civilizations.”

Conceive the histories of cities, therefore, as the initial segments of trajectories that curve asymptotically to infinite density, at the ultimate event horizon of the physical universe. The beginning is recorded fact and the end is quite literally ‘gone’, but what lies in between, i.e. next?

[Tomb]