Forward!

Maximum warp into Left Singularity

That was all thoroughly unambiguous. It turns out that Obama really is the FDR for this turn of the gyre. Nate Silver and Paul Krugman are vindicated. The New York Times is the gospel of the age. Conservatism is crushed and humiliated. The brake pedal has been hurled out of the window. There’s no stopping it now.

The day before the election, Der Spiegel described “the United States as a country that doesn’t understand the signs of the times and has almost willfully — flying in the face of all scientific knowledge — chosen to be backward.” For the magazine’s staff writers, the problem was utterly straightforward. “The hatred of big government has reached a level in the United States that threatens the country’s very existence.” Retrogressive forces were impeding the country’s progress by refusing to grasp the obvious identity of Leviathan and social advancement. It should now be obvious to everyone – even charred tea partiers gibbering shell-shocked in the ruins — that contemporary American democracy provides all the impetus necessary to bulldoze such obstructionism aside. The State is God, and all shall bend to its will. Forward!

With the ascension of USG to godhood, a new purity is attained, and a fantastic (and Titanic) experiment progresses to a new stage. It is no longer necessary to enter into controversy with the shattered detritus of the right; henceforth all that matters is the test of strength between concentrated political motivation and the obduracy of reality itself. Which is to say: the final resistance to be overcome is the insolent idea of a reality principle, or outside. Once there is no longer any way of things that exists independently of the State’s sovereign desire, Left Singularity is attained. This is the eschatological promise that sings its hallelujahs in every progressive breast. It translates perfectly into the colloquial chant: yes we can!

Of course, it needs to be clearly understood that ‘we’ – now and going forward – means the State. Through the State we do anything and everything, which we can, if not really, then at least truly, as promised. The State is ‘us’ as God. Hegel already saw all this, but it took progressive educational systems to generalize the insight. Now our time has come, or is coming. All together now: yes we can! Nothing but a brittle reactionary realism stands in our way, and that is something we can be educated out of (yes we can). We have! See our blasted enemies strewn in utter devastation before us.

The world is to be as we will it to be. Surely.

[Tomb]

Suspended Animation (Part 2)

Whatever happened to hell?

“It can’t carry on like this … but how many weeks have we said that for?”
— Justin Urquhart Stewart, director at Seven Investment Management (via James Pethokoukis here)

To make a protracted topic out of this phenomenon is to offer a hostage to fortune. Everything could go over the cliff tomorrow. Perhaps it already has (and we’re just waiting, like Wile E. Coyote, for the consummating splatter).

Greens have been dealing with exactly this question, for a while. After Paul Ehrlich had his credibility torched by Julian Simon, in the most intellectually consequential wager in history, he responded in frustration: “The bet doesn’t mean anything. Julian Simon is like the guy who jumps off the Empire State Building and says how great things are going so far as he passes the 10th floor.”

If environmental catastrophe is structured like this, according to a pattern of durable unsustainability, or disconcerting postponement, there is no obvious theory to account for the fact. With economics, things are different, to such an extent that the entire political economy of the world, along with the overwhelming preponderance of professionalized economic ‘science’, has been geared over the course of a little under a century to crisis postponement as a dominant objective. If the New World Order follows a master plan, this is it.

For ideological purists on the free-market right, laissez-faire capitalism is the ‘unknown ideal’ (although early 20th century Shanghai approached it, as did its student, Hong Kong, in later decades), but it requires no purism whatsoever to acknowledge that the Great Depression effectively buried it as an organizing principle of the world, and that the system which replaced it found political and intellectual expression in the ideas of John Maynard Keynes. Commercial self-organization, which built industrial capitalism before anyone had even the sketchiest understanding of what was happening, gave way to the technocracy of macroeconomics, guided by the radically original belief that governments had a responsibility to manage the oscillations of economic fortune.

In the words of Peter Thiel (drawn straight from the free-market id):

… the trend has been going the wrong way for a long time. To return to finance, the last economic depression in the United States that did not result in massive government intervention was the collapse of 1920–21. It was sharp but short, and entailed the sort of Schumpeterian “creative destruction” that could lead to a real boom. The decade that followed — the roaring 1920s — was so strong that historians have forgotten the depression that started it. The 1920s were the last decade in American history during which one could be genuinely optimistic about politics. Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women — two constituencies that are notoriously tough for libertarians — have rendered the notion of “capitalist democracy” into an oxymoron.

As Cato’s Daniel J. Mitchell puts it, more narrowly:

A vibrant and dynamic economy requires the possibility of big profits, but also the discipline of failure. Indeed, capitalism without bankruptcy is like religion without hell.

Because hell’s a hard sell, political and economic rationality have been heading in different directions for 80 years. Even the tropical latitudes of purgatory have proven to be socially combustible, and popularly sensitized politics – which need not be formally ‘democratic’ – tend (strongly) to flee Molotov cocktails in the direction of macroeconomic management. The crucial Keynesian maxim, “In the long run we are all dead,” is especially pertinent to regimes. Who’s going to regenerate deep economic recovery, if the route to it lies through gulfs of fire and brimstone that are fundamentally incompatible with political survival? History, redundantly, provides the obvious answer: nobody is.

The accursed path not taken, across the infernal abyss, has become so neglected and overgrown with weeds that it is rarely noticed, but it is still graphically marked by the advice that Treasury Secretary Andrew Mellon gave to Herbert Hoover as the way to navigate the Great Depression (advice that was, of course, dismissed):

… liquidate labor, liquidate stocks, liquidate farmers, liquidate real estate… it will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people.

In recalling this recommendation, as an unacceptable option, Hoover commemorates the precise moment that capitalism ceased to exist as a politically credible social possibility. The alternative – which has many names, although ‘corporatism’ will do – was defined by its systematic refusal of the ‘liquidationist’ path. Coming out stronger on the other side meant nothing, because the passage would probably kill us – it would certainly destroy our political careers. In any case, it was a long run solution to a short term problem, scheduled by volatile popular irritability and election cycles, and in the long run we are all dead. Better, by far, to use ‘macroeconomic policy’ (monetary mind-control) to artificially prolong unsustainable economic euphoria – or even its jaded, hung-over simulation – than to plunge into a catastrophe that might imaginably have been delayed.

It doesn’t take a Schumpeterian fanatic to suspect that such ‘creative destruction (but without the destruction)’ is unlikely to provide a sustainable recipe for economic vitality. When evaluated realistically, it is a formula that programs a trend to perpetual stagnation. Stagnation as a choice.

Because money serves as a general equivalent, and thus as a neutral, non-specific, purely quantitative medium of exchange, it is very supportive of certain highly-consequential economic illusions, of a kind that macroeconomics has been especially prone to. It can easily seem as if ‘the economy’ consists essentially of undifferentiated, quantitative aggregates, such as ‘demand’, ‘gross domestic product’, ‘money supply’, ‘land’, ‘labor’, and ‘capital’. In fact, none of these things exist, except as high-level abstractions, precipitated by the monetary function of general exchangeability.

An understanding of Schumpeterian creative destruction requires, as a preliminary, the recognition that capital is heterogeneous. When expressed in a monetary form, it can appear as a homogeneous quantity, susceptible to simple accumulation, but in its productive social reality it consists of technological apparatus – tools, machines, infrastructures, and installations – representing irretrievable investments, of qualitatively distinctive kinds. The monetary equivalent of such industrial capital is derived from the market values attributed to its various components, and these are extremely dynamic, virtual, and speculative. Since the value retrievable from liquidation (and ultimately from scrap) is generally a small fraction, or lower bound, of capital asset value, the ‘capital stock’ is estimated with reference to its productive usage, rather than its intrinsic worth. Schumpeter was careful to break this down into two very different aspects.

Firstly, and most straightforwardly, industrial capital is a resource that depreciates at a regular and broadly predictable rate as a function of output. It is consumed in the process of production, like any other material input, but at a slower rate. Creative destruction, however, refers to a second, far more drastic type of capital depreciation, resulting from technological obsolescence. In this case, capital stock is ‘destroyed’ – suddenly and unpredictably – by an innovation, taking place elsewhere in the economy, which renders its anticipated use unprofitable. In this way, large ‘quantities’ of ‘accumulated’ capital can be depreciated overnight to scrap values, and the investments they represent are annihilated. The hallucination of homogeneous capital is instantaneously vaporized, as painstakingly built fortunes are written down to nothing.
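The contrast between the two kinds of capital depreciation can be sketched in a few lines of code (the figures are purely illustrative, not drawn from Schumpeter):

```python
# Illustrative sketch (hypothetical numbers): two ways capital value is destroyed.
# (1) Ordinary depreciation: a predictable fraction of value consumed per period.
# (2) Creative destruction: a sudden write-down to scrap when an innovation
#     elsewhere makes the asset's anticipated use unprofitable.

def depreciate(value, rate, periods):
    """Ordinary wear: value declines by a fixed fraction each period."""
    for _ in range(periods):
        value *= (1 - rate)
    return value

def obsolesce(value, scrap_fraction):
    """Creative destruction: an overnight write-down to scrap value."""
    return value * scrap_fraction

book_value = 1_000_000  # hypothetical plant, valued on its anticipated use
print(round(depreciate(book_value, 0.10, 5)))  # gradual: 590490 after 5 periods
print(round(obsolesce(book_value, 0.05)))      # sudden: 50000 of scrap
```

The first path is what aggregate capital-stock accounting anticipates; the second is the one it cannot see coming.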

Several points suggest themselves:

1. The violence of creative destruction is directly proportional to its fecundity. The greater, deeper, and more far-reaching the innovation, the more colossal is the resulting capital destruction. At the extreme, profound technological revolutions lay waste not only to specific machines and skills, but to entire infrastructures, industries, occupational categories, and financial systems.

2. The cultural implication of creative destruction far exceeds issues of ‘moral hazard’ and ‘time preference’. The victims of industrial change waves – whether businesses, workers, or financiers – are not being punished by the market for imprudence, slackness, or short-sightedness. They are ruined by pure hazard, as the reciprocal of the absolutely unanticipated nature of technological invention (occurring elsewhere). Neither the creation, nor the destruction, is remotely ‘fair’ – or ever could be. (Although Darwinian ‘virtue’ lies in flexible adaptability — Hong Kong always does OK.)

3. Massive capital destruction expresses technological revolution. Macroeconomic analysis (measuring homogeneous aggregates) will always miss the most significant episodes in industrial evolution, since these do not register primarily as growth, but rather the opposite. Hell is a hothouse.

4. A policy environment designed to preserve macroeconomic aggregates (e.g. ‘wealth’ or ’employment’) necessarily opposes itself to the basic historical process of industrial revolution, because destruction of the existing economy is strictly indistinguishable from industrial renewal. For that old stuff to be worth anything (beyond scrap) we have to keep using it, which means that we’re not switching over. To cross the gulf, we have to enter the gulf. (Like most things in this universe: harsh but true.)

5. Real historical advance is now politically unacceptable. Either politics wins (eternal stagnation) or history does (political collapse). Interesting times (or not).

The world couldn’t take the heat, so it got out of the kitchen. There’s cold porridge for dinner, and it’s going to be cold porridge for breakfast. Eventually the porridge will run out, but that could take a while …

… and here’s Ben Bernanke on topic: “I’m not a believer in the Old Testament theory of business cycles. I think that if we can help people, we need to help people.” (via Mike Krieger at ZH)

Cold porridge politics forever. Yum!

[Tomb]

Suspended Animation

Limbo starts to feel like home

According to Herbert Stein’s Law, the signature warning of our age, “If something cannot go on forever, it will stop.” The question is: When?

The central concerns of environmentalists and radical market economists are easy to distinguish – when not straightforwardly opposed – yet both groups face a common mental and historical predicament, which might even be considered the outstanding social discovery of recent times: the extraordinary durability of the unsustainable. A pattern of mass behavior is observed that leads transparently to crisis, based on explosive (exponential) trends that are acknowledged without controversy, yet consensus on matters of fact coexists with paralyzing policy disagreements, seemingly interminable procrastination, and irresolution. The looming crisis continues to swell, close, horribly close, but in no persuasively measurable way closer, like some grating Godot purgatory: “You must go on; I can’t go on; I’ll go on.”

Urban Future doesn’t do green anguish as well as teeth-grinding Austrolibertarian irritation, so it won’t really try. Suffice to say that being green is about to become almost unimaginably maddening, if it isn’t already. Just as the standard ‘greenhouse’ model insinuates itself, near-universally, into the structure of common sense, the world temperature record has locked into a flatline, with surging CO2 production showing up everywhere except as warming. Worse still, a new wave of energy resources – stubbornly based on satanic hydrocarbons, and of truly stupefying magnitude – is rolling out inertially, with barely a hint of effective obstruction. Tar sands, fracking, and sub-salt deep sea oil deposits are all coming on-stream already, with methane clathrates just up the road. The world’s on a burn, and it can’t go on (but it carries on).

Financial unsustainability is no less blatant, or bizarrely enduring. Since the beginning of the 20th century, once (classically) liberal Western economies have seen government expenditure rise from under 5% to over 40% of total income, with much of Europe crossing the 50% redline (after which nothing remotely familiar as ‘capitalism’ any longer exists). Public debt levels are tracing geometrically elegant exponential curves, chronic dependency is replacing productive social participation, and generalized sovereign insolvency is now a matter of simple and obvious fact. The only thing clearer than the inevitability of systemic bankruptcy is the political impossibility of doing anything about it, so things carry on, even though they really have to stop. Unintelligible multi-trillion magnitudes of impending calamity stack up, and up, and up in a near future which never quite arrives.

The frozen limbo-state of durable unsustainability is the new normal (which will last until it doesn’t). The pop cultural expression is zombie apocalypse, a shambling, undying state of endlessly prolonged decomposition. When translated into economic analysis, the result is epitomized by Tyler Cowen’s influential e-book The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better. (Yes, Urban Future is arriving incredibly late to this party, but in a frozen limbo that doesn’t matter.)

In a nutshell, Cowen argues that the exhaustion of three principal sources of ‘low-hanging fruit’ has brought the secular trend of American growth to a state of stagnation that high-frequency business cycles have partially obscured. With the consumption of America’s frontier surplus (free land), educational surplus (smart but educationally-unserved population), and — most importantly — technological surplus, from major breakthroughs opening broad avenues of commercial exploitation, growth rates have shriveled to a level that the country’s people are psychologically unprepared to accept as normal.

It fell to Cowen’s GMU colleague Peter Boettke to clearly make the pro-market case for stagnationism that Cowen seems to think he had already persuasively articulated. In an overtly supportive post, Boettke transforms Cowen’s rather elusive argument into a far more pointed anti-government polemic — the discovery of a new depressive equilibrium, in which relentless socio-political degeneration absorbs and neutralizes a decaying trend of techno-economic advance.

An accumulated economic surplus was created by the age of innovation, which the age of economic illusion spent down. We are now coming to the end of that accumulated surplus and thus the full weight of government inefficiencies are starting to be felt throughout the economy.

Perhaps surprisingly, the general tenor of response on the libertarian right was quite different. Rather than celebrating Cowen’s exposure of the statist ruin visited upon Western societies, most of this commentary concentrated upon the stagnationist thesis itself, attacking it from a variety of interlocking angles. David R. Henderson’s Cato review makes stinging economic arguments against Cowen’s claims about land and education. Russ Roberts (at Cafe Hayek) shows how Cowen’s dismal story about stagnant median family incomes draws upon data distorted by historical changes in US family structure and residential patterns. The most common line of resistance, however, instantiated by Don Boudreaux, John Hagel, Steven Horwitz, Bryan Caplan, and Ronald Bailey, among others, rallies in defense of actually existing consumer capitalism. Bailey, for example, notes:

In 1970, a 23-inch color television cost $368 ($2,000 in 2009 dollars). Today, a 22-inch Philips LCD flat panel TV costs $190. In 1978, an 8-track tape player cost $169 ($550). Today, an iPod Touch with 8 gigabytes of memory costs $204. In 1970, an Olympia adding machine cost $80 ($437 in 2009 dollars). Today, a Canon office calculator costs $6.65. In 1978, a Radio Shack TRS-80 computer with 16K of RAM cost $399 ($1300 in 2009 dollars). Today, Costco will sell you an ASUS netbook with 1 gigabyte of RAM for $270. The average car cost $3,900 in 1970 ($21,300 in today’s dollars). A mid-sized 2011 vehicle would cost somewhere around $20,000 and last twice as long.

Another very crude way to look at it is that Americans are four times richer in terms of refrigerators, 10 times richer in terms of TVs, 2.5 times richer when it comes to listening to music on the go, 3,000 times richer in calculators, about 400,000 times richer when it comes to price per kilobyte of computer memory, and two times richer in cars. Cowen dismisses this kind of progress as mere “quality improvements,” but in this case quality becomes its own kind of quantity when it comes to improved living standards.

What seems pretty clear from most of this (and already in Cowen’s account) is that nothing much has been moving forward in the world’s ‘developed’ economies for four decades except for the information technology revolution and its Moore’s Law dynamics. Abstract out the microprocessor, and even the most determinedly optimistic vision of recent trends is gutted to the point of expiration. Without computers, there’s nothing happening, or at least nothing good.

[… still crawling …]

[Tomb]

Radical Manufacturing

Seeing the future in three dimensions

The Industrial Revolution invented the factory, where ever-larger concentrations of labor, capital, energy and raw materials could be brought together under a unified management structure to extract economies of scale from mass production, based on the standardization of inputs and outputs, including specialized, routinized work, and — ultimately – precisely programmed, robotically-serviced assembly lines. It was in the factory that workers became ‘proletarian’, and through the factory that productive investment became ‘big business’. As the system matured, its vast production runs fostered the mass consumerism (along with the generic ‘consumer’) required to absorb its deluge of highly-standardized goods. As the division of labor and aggregation of markets over-spilled national boundaries, economic activities were relentlessly globalized. This complex of specialization, standardization, concentration, and expansion became identified with the essence of modernized production (in both its ‘capitalist’ and ‘socialist’ variants).

Initially, electronics seems only to have perpetuated – which is to say, intensified – this tendency. Electronic goods, and their components, are standardized to previously unimagined levels of resolution, through ultra-specialized production processes, and manufactured in vast, immensely expensive ‘fabs’ that derive scale economies from production runs that only integrated global markets can absorb. The personalization of computing hinted at productively empowered home-workers and disaggregated markets (‘long tails’), but this promise remained basically virtual. The latest tablet computer incarnates the familiar forces of factory production just as a Ford automobile once did, only more so.

Personal networked computing has proven to be a catalyst for cultural fragmentation, breaking up mass media, and eroding the broadcast model (which is steadily supplanted by niche and peer-to-peer ‘content’). It cannot radically disrupt – or revolutionize – the industrial system, however, because computers cannot reproduce themselves. Only robots can do that. Such robots are now coming into focus, and inspiring excited public discussion, even though their implicit nature and potential remains partially disguised by legacy nomenclature that subsumes them under obscure manufacturing processes: rapid prototyping, additive manufacturing, and 3D printing.

As this disparate terminology suggests, the revolutionized manufacturing technology that is appearing on the horizon can be understood in a number of different and seemingly incongruous ways, depending upon the particular industrial lineage it is attributed to. It can be conceived as the latest episode in the history of printing, as the culmination of CAD (computer-aided design) capability, or as an innovative type of productive machine-tool (building up an object ‘additively’ rather than milling it ‘subtractively’). It enables ideas to be materialized in objects, objects to be scanned and reproduced, or clumsily ‘sculpted’ objects to be replaced by precisely assembled alternatives.

Typically, 3D printing materializes a digitally-defined object by assembling it in layers. The raw material might be powdered metal, plastic, or even chocolate, deposited in steps and then fused together by a reiterated process of sintering, adhesion, or hardening. As very flexible machines (tending to universality), 3D printers encourage minute production runs, customization, and bespoke or boutique manufacturing. Changing the output requires no more than switching or tweaking the design (program), without the requirement for retooling.

Describing additive manufacturing as “The Next Trillion Dollar Industry,” Pascal-Emmanuel Gobry celebrates “potentially the biggest change in how we make things since the invention of assembly lines made the modern era possible.” Whilst its early-adopters represent the fairly narrow constituencies of rapid prototypers, specialty manufacturers, and hobbyists, he pointedly notes that “the first people who cared about things like cars, planes and personal computers were hobbyists.”

Gobry sees the market growing rapidly: “And the printer in every home scenario isn’t that far-fetched either — only as far-fetched as ‘a computer in every home’ was in 1975. Like any other piece of technology, 3D printers are always getting cheaper and better. 3D printers today can be had for about $5,000.”

Rich Karlgaard at Forbes reinforces the message: “The cost of 3D printers has dropped tenfold in five years. That’s the real kicker here — 3D printing is riding the Moore’s Law curve, just as 2D printing started doing in the 1980s.”

With the price of 3D printers having fallen by two orders of magnitude in a decade, comparisons with other runaway consumer electronics markets seem anything but strained. “It’s not hard to envision a world in which, 10 or 20 years from now, every home will have a 3D printer,” remarks dailymarkets.com. Mass availability of near-universal manufacturing capabilities promises the radical decentralization of industrial activity, a phenomenon that is already drawing the attention of mainstream news media. At techliberation.com, Adam Marcus highlights the impending legal issues, in the fields of intellectual property and (especially) product liability.

To comprehend the potential of 3D printing in its full radicality, however, the most indispensable voice is that of Adrian Bowyer, at the Centre for Biomimetic and Natural Technology, Department of Mechanical Engineering, University of Bath, UK. Bowyer is the instigator of RepRap – “a project to build a replicating rapid prototyper. This machine, if successful, will be an instance of a von Neumann Universal Constructor, which is a general-purpose manufacturing device that is also capable of reproducing itself, like a biological cell.”

He elaborates:

There is a sense in which a well-equipped manufacturing workshop is (just about) a universal constructor – it could make many of the machine tools that are in it. The trouble is that the better-equipped the workshop is the easier it becomes to make any one item, but the greater the number and diversity of the items that need to be made. It is certainly the case that human engineering considered as a whole is a universal constructor; it self-propagates with no external input. … RepRap will be a mechatronic device using entirely conventional (indeed simple) engineering. But it is really a piece of biology. This is because it can self-replicate with the symbiotic assistance of a person. Anything that can copy itself immediately and inescapably becomes subject to Darwinian selection, but RepRap has one important difference from natural organisms: in nature, mutations are random, and only a tiny fraction are improvements; but with RepRap, every mutation is a product of the analytical thought of its users. This means that the rate of improvement should be very rapid, at least at the start; it is more analogous to selective breeding – the process we used to make cows from aurochs and wheat from wild grass. Evolution can be relied on to make very good designs emerge quickly. It will also gradually eliminate items from the list of parts that need to be externally supplied. Note also that any old not-so-good RepRap machine can still make a new machine to the latest and best design.

A self-replicating and symbiotically assembled Universal Constructor would proliferate exponentially, placing stupendous manufacturing capability into a multitude of hands, at rapidly shrinking cost. In addition, the evolutionary dynamics of the process would result in an explosive growth in utility, comparable to that attained from the domestication of plants and animals, but at a greatly accelerated pace.
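The proliferation dynamics are simple to model in outline: if every existing machine (with human help) can assemble one copy of itself per replication cycle, the installed base doubles each cycle. The starting count and cycle length here are hypothetical:

```python
# Toy model of the proliferation Bowyer describes: one-copy-per-machine
# replication per cycle means the installed base doubles each cycle.
# Starting count and cycle length are hypothetical illustration only.

def installed_base(start, cycles):
    """Machines in existence after `cycles` rounds of self-replication."""
    return start * 2 ** cycles

print(installed_base(1, 10))  # 1024 machines after ten cycles
print(installed_base(1, 20))  # 1048576 -- over a million after twenty
```

Exponential doubling is what makes the manufacturing capability “stupendous” at “rapidly shrinking cost”: the marginal machine costs little more than its feedstock and the symbiotic human hours.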

The implications of the project for political economy are fascinating but obscure. Bowyer describes it as an exercise in “Darwinian Marxism,” whilst fellow RepRapper Forrest Higgs describes himself as a “technocratic anarchist.” In any case, there seems no reason to expect the ideological upheavals from (additive and distributed) Industrialism 2.0 to be any less profound than those from (subtractive and concentrated) Industrialism 1.0. The fall of the factory is set to be the biggest event in centuries, and robot politics might already be taking shape.

[Tomb]

Decelerando?

Charles Stross wants to get off the bus

Upon writing Accelerando, Charles Stross became to Technological Singularity what Dante Alighieri has been to Christian cosmology: the pre-eminent literary conveyor of an esoteric doctrine, packaging abstract metaphysical conception in vibrant, detailed, and concrete imagery. The tone of Accelerando is transparently tongue-in-cheek, yet plenty of people seem to have taken it entirely seriously. Stross has had enough of it:

“I periodically get email from folks who, having read ‘Accelerando’, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it’s time to set the record straight and say what I really think. … Short version: Santa Claus doesn’t exist.”

In the comments thread (#86) he clarifies his motivation:

“I’m not convinced that the singularity isn’t going to happen. It’s just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it’s going to be until they can upload into AI heaven and leave the meatsack behind.”

As these remarks indicate, there’s more irritable gesticulation than structured case-making in Stross’ post, which Robin Hanson quite reasonably describes as “a bit of a rant – strong on emotion, but weak on argument.” Despite that – or more likely because of it — a minor net-storm ensued, as bloggers pro and con seized the excuse to re-hash – and perhaps refresh — some aging debates. The militantly-sensible Alex Knapp pitches in with a three-part series on his own brand of Singularity skepticism, whilst Michael Anissimov of the Singularity Institute for Artificial Intelligence responds to both Stross and Knapp, mixing some counter-argument with plenty of counter-irritation.

At the risk of repeating the original error of Stross’ meatsack-stuck fan-base and investing too much credence in what is basically a drive-by blog post, it might be worth picking out some of its seriously weird aspects. In particular, Stross leans heavily on an entirely unexplained theory of moral-historical causality:

“… before creating a conscious artificial intelligence we have to ask if we’re creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense ‘conscious’? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers…”

Anissimov blocks this at the pass: “I don’t think these are ‘showstoppers’ … Just because you don’t want it doesn’t mean that we won’t build it.” The question might be added, more generally: In which universe do arcane objections from moral philosophy serve as obstacles to historical developments (because it certainly doesn’t seem to be this one)? Does Stross seriously think practical robotics research and development is likely to be interrupted by concerns for the rights of yet-uninvented beings?

He seems to, because even theologians are apparently getting a veto:

“Uploading … is not obviously impossible unless you are a crude mind/body dualist. However, if it becomes plausible in the near future we can expect extensive theological arguments over it. If you thought the abortion debate was heated, wait until you have people trying to become immortal via the wire. Uploading implicitly refutes the doctrine of the existence of an immortal soul, and therefore presents a raw rebuttal to those religious doctrines that believe in a life after death. People who believe in an afterlife will go to the mattresses to maintain a belief system that tells them their dead loved ones are in heaven rather than rotting in the ground.”

This is so deeply and comprehensively gone it could actually inspire a moment of bewildered hesitation (at least among those of us not presently engaged in urgent Singularity implementation). Stross seems to have inordinate confidence in a social vetting process that, with approximate adequacy, filters techno-economic development for compatibility with high-level moral and religious ideals. In fact, he seems to think that we are already enjoying the paternalistic shelter of an efficient global theocracy. Singularity can’t happen, because that would be really bad.

No wonder, then, that he exhibits such exasperation at libertarians, with their “drastic over-simplification of human behaviour.” If stuff – especially new stuff – were to mostly happen because decentralized markets facilitated it, then the role of the Planetary Innovations Approval Board would be vastly curtailed. Who knows what kind of horrors would show up?

It gets worse, because ‘catallaxy’ – or spontaneous emergence from decentralized transactions – is the basic driver of historical innovation according to libertarian explanation, and nobody knows what catallactic processes are producing. Languages, customs, common law precedents, primordial monetary systems, commercial networks, and technological assemblages are only ever retrospectively understandable, which means that they elude concentrated social judgment entirely – until the opportunity to impede their genesis has been missed.

Stross is right to bundle singularitarian and libertarian impulses together in the same tangle of criticism, because they both subvert the veto power, and if the veto power gets angry enough about that, we’re heading full-tilt into de Garis territory. “Just because you don’t want it doesn’t mean that we won’t build it” Anissimov insists, as any die-hard Cosmist would.

Is advanced self-improving AI technically feasible? Probably (but who knows?). There’s only one way to find out, and we will. Perhaps it will even be engineered, more-or-less deliberately, but it’s far more likely to arise spontaneously from a complex, decentralized, catallactic process, at some unanticipated threshold, in a way that was never planned. There are definite candidates, which are often missed. Sentient cities seem all-but-inevitable at some point, for instance (‘intelligent cities’ are already widely discussed). Financial informatization pushes capital towards self-awareness. Drone warfare is drawing the military ever deeper into artificial mind manufacture. Biotechnology is computerizing DNA.

‘Singularitarians’ have no unified position on any of this, and it really doesn’t matter, because they’re just people – and people are nowhere near intelligent or informed enough to direct the course of history. Only catallaxy can do that, and it’s hard to imagine how anybody could stop it. Terrestrial life has been stupid for long enough.

It may be worth making one more point about intelligence deprivation, since this diagnosis truly defines the Singularitarian position, and reliably infuriates those who don’t share — or prioritize — it. Once a species reaches a level of intelligence enabling techno-cultural take-off, history begins and develops very rapidly — which means that any sentient being finding itself in (pre-singularity) history is, almost by definition, pretty much as stupid as any ‘intelligent being’ can be. If, despite the moral and religious doctrines designed to obfuscate this reality, it is eventually recognized, the natural response is to seek its urgent amelioration, and that’s already transhumanism, if not yet full-blown singularitarianism. Perhaps a non-controversial formulation is possible: defending dimness is really dim. (Even the dim dignitarians should be happy with that.)

[Tomb]

Hard Futurism

Are you ready for the next big (nasty) thing?

For anyone with interests both in extreme practical futurism and the renaissance of the Sinosphere, Hugo de Garis is an irresistible reference point. A former teacher of Topological Quantum Computing (don’t ask) at the International Software School of Wuhan University, and later Director of the Artificial Brain Lab at Xiamen University, de Garis’ career symbolizes the emergence of a cosmopolitan Chinese technoscientific frontier, where the outer-edge of futuristic possibility condenses into precisely-engineered reality.

De Garis’ work is ‘hard’ not only because it involves fields such as Topological Quantum Computing, or because – more accessibly — he’s devoted his research energies to the building of brains rather than minds, or even because it has generated questions faster than solutions. In his ‘semi-retirement’ (since 2010), hard-as-in-difficult, and hard-as-in-hardware, have been supplanted by hard-as-in-mind-numbingly-and-incomprehensibly-brutal – or, in his own words, an increasing obsession with the impending ‘Gigadeath’ or ‘Artilect War’.

According to de Garis, the approach to Singularity will revolutionize and polarize international politics, creating new constituencies, ideologies, and conflicts. The basic dichotomy to which everything must eventually succumb divides those who embrace the emergence of transhuman intelligence, and those who resist it. The former he calls ‘cosmists’, the latter ‘terrans’.

Since massively-augmented and robotically-reinforced ‘cosmists’ threaten to become invincible, the ‘terrans’ have no option but pre-emption. To preserve human existence in a recognizable state, it is necessary to violently suppress the cosmist project in advance of its accomplishment. The mere prospect of Singularity is therefore sufficient to provoke a political — and ultimately military — convulsion of unprecedented scale. A Terran triumph (which might require much more than just a military victory) would mark an inflection point in deep history, as the super-exponential trend of terrestrial intelligence production – lasting over a billion years — was capped, or reversed. A Cosmist win spells the termination of human species dominion, and a new epoch in the geological, biological, and cultural process on earth, as the torch of material progress is passed to the emerging techno sapiens. With the stakes set so high, the melodramatic grandeur of the de Garis narrative risks understatement no less than hyperbole.

The giga-magnitude body-count that de Garis postulates for his Artilect (artificial intellect) War is the dark side expression of Moore’s Law or Kurzweilean increasing returns – an extrapolation from exponentiating historical trends, in this case, casualty figures from major human conflicts over time. It reflects the accumulating trend to global wars motivated by trans-national ideologies with ever-increasing stakes. One king is (perhaps) much like another, but a totalitarian social direction is very different from a liberal one (even if such paths are ultimately revisable). Between a Terran world order and a Cosmist trajectory into Singularity, the distinction approaches the absolute. The fate of the planet is decided, with costs to match.

If the de Garis Gigadeath War scenario is pre-emptive in relation to prospective Singularity, his own intervention is meta-pre-emptive – since he insists that world politics must be anticipatively re-forged in order to forestall the looming disaster. The Singularity prediction ripples backwards through waves of pre-adaptation, responding at each stage to eventualities that are yet to unfold. Change unspools from out of the future, complicating the arrow of time. It is perhaps no coincidence that among de Garis’ major research interests is reversible computing, where temporal directionality is unsettled at the level of precise engineering.

Do ethnicity and cultural tradition merely dissolve before the tide-front of this imminent Armageddon? The question is not entirely straightforward. Referring to his informal polling of opinion on the coming great divide, de Garis recalls his experience teaching in China, remarking:

I know from the lectures I’ve given over the past two decades on species dominance that when I invite my audiences to vote on whether they are more Terran than Cosmist, the result is usually 50-50. … At first, I thought this was a consequence of the fact that the species dominance issue is too new, causing people who don’t really understand it to vote almost randomly – hence the 50:50 result. But gradually, it dawned on me that many people felt as ambivalently about the issue as I do. Typically, the Terran/Cosmist split would run from 40:60 to 60:40 (although I do notice that with my very young Chinese audiences in computer science, the Cosmists are at about 80%).

[Tomb]

Anthropocene

Human history is geology on speed

Complex systems, characterized by high (and rising local) negative entropy, are essentially historical. The sciences devoted to them tend inevitably to become evolutionary, as exemplified by the course of the earth- and life-sciences – which had become thoroughly historicized by the late 19th century. Perhaps the most elegant, abstract, or ‘cosmic’ comprehension of this necessity is found in the work of Vladimir Ivanovich Vernadsky (1863-1945), whose visionary writings sought to establish the basis for an integrated understanding of terrestrial history, conceived as a process of material acceleration through geochemical epochs.

Despite the philosophical power of his ideas, Vernadsky’s scientific training as a chemist anchored his thoughts in concrete, literal reality. The acceleration of the terrestrial process was more than an anthropocentric impression, registering socially and culturally significant change (such as the cephalization of the primate lineage leading to mankind). Geochemical evolution was physically expressed through the average velocity of particles, as biological metabolism (biosphere), and eventually human cultures (noosphere), introduced and propagated ever more intense networks of chemical reactions. Life is matter in a hurry, culture even more so.

Whilst Vernadsky has been sporadically rediscovered and celebrated, his importance – based on the profundity, rigor, and supreme relevance of his work — has yet to be fully and universally acknowledged. Yet it is possible that his time is finally arriving.

The May 28 – June 3 edition of The Economist devotes an editorial and major feature story to the Anthropocene – a distinctive geological epoch proposed by Paul Crutzen in 2000, now under consideration by the International Commission on Stratigraphy (the “ultimate adjudicator of the geological time scale”). Recognition of the Anthropocene would be an acknowledgement that we inhabit a geological epoch whose physical signature has been fundamentally re-shaped by the technological forces of the ‘noosphere’ or ‘ethosphere’ – in which human intelligence has been introduced as a massive (and even dominant) force of nature. Radical metamorphosis (and acceleration) of the earth’s nitrogen and carbon cycles is an especially pronounced Anthropocene signal.

“The term ‘paradigm shift’ is bandied around with promiscuous ease,” The Economist notes. “But for the natural sciences to make human activity central to its conception of the world, rather than a distraction, would mark such a shift for real.”

Third Reich master architect Albert Speer is notorious for his promotion of ‘ruin value’ – the persistent grandeur of monumental constructions, encountered by archaeologists in the far future. The Anthropocene introduces a similar perspective on a still vaster scale. As The Economist remarks:

The most common way of distinguishing periods of geological time is by means of the fossils they contain. On this basis picking out the Anthropocene in the rocks of days to come will be pretty easy. Cities will make particularly distinctive fossils. A city on a fast-sinking river delta (and fast-sinking deltas, undermined by the pumping of groundwater and starved of sediment by dams upstream, are common Anthropocene environments) could spend millions of years buried and still, when eventually uncovered, reveal through its crushed structures and weird mixtures of materials that it is unlike anything else in the geological record.

As terrestrial history accelerates, the distinctive units of geological time are compressed. The Archean and Proterozoic aeons are measured in billions of years, the Palaeozoic and Mesozoic eras in hundreds of millions, the Palaeogene and Neogene periods in tens of millions. The Holocene epoch lasts less than 10,000 years, and the Anthropocene (epoch or mere phase?) only centuries – because its recognition is already an indication of its end.

Beyond the Anthropocene lies the Technocene, distinguished by nanotechnological manipulation of matter — a geochemical revolution of such magnitude that only the assembly of (RNA and DNA) replicator molecules is comparable in implication. Within the coming Technocene (lasting mere decades?), the carbon cycle is relayed through sub-microscopic manufacturing processes that utilize it as the ultimate industrial resource – feedstock for diamondoid nanomachine fabrication. The consequences for geological deposition, and thus for the discoveries of potential distant-future geologists, are substantial but opaque. On the far side of the nanomachined age, femtomachines await, precisely assembled from quarks, and decomposing chemistry into nuclear physics.

For the moment, however, even the origination of the Anthropocene – never mind its termination – remains a matter of live controversy. Assuming that it coincides with industrialization (which is not universally accepted), geologists will find themselves enmeshed in a debate among historians, as the fraught term ‘modernity’ takes on a geochemical definition. Whatever the outcome, Vernadsky is back.

[Tomb]

“2035. Probably earlier.”

There’s fast, and then there’s … something more

Eliezer Yudkowsky now categorizes his article ‘Staring into Singularity’ as ‘obsolete’. Yet it remains among the most brilliant philosophical essays ever written. Rarely, if ever, has so much of value been said about the absolutely unthinkable (or, more specifically, the absolutely unthinkable for us).

For instance, Yudkowsky scarcely pauses at the phenomenon of exponential growth, despite the fact that this already overtaxes all comfortable intuition and ensures revolutionary changes of such magnitude that speculation falters. He is adamant that exponentiation (even Kurzweil’s ‘double exponentiation’) only reaches the starting point of computational acceleration, and that propulsion into Singularity is not exponential, but hyperbolic.

Each time the speed of thought doubles, time-schedules halve. When technology, including the design of intelligences, succumbs to such dynamics, it becomes recursive. The rate of self-improvement collapses with smoothly increasing rapidity towards instantaneity: a true, mathematically exact, or punctual Singularity. What lies beyond is not merely difficult to imagine, it is absolutely inconceivable. Attempting to picture or describe it is a ridiculous futility. Science fiction dies.

“A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

“That is the ‘Transcended’ version of the doubling sequence. Let’s call the ‘Transcend’ of a sequence {a_0, a_1, a_2…} the function where the interval between a_n and a_(n+1) is inversely proportional to a_n. So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it were continuous, would be the hyperbolic function y = 2/(2 – x). When x = 2, then (2 – x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.”

There could scarcely be a more precise, plausible, or consequential formula: Doubling periods halve. On the slide into Singularity — I. J. Good’s ‘intelligence explosion’ — exponentiation is compounded by a hyperbolic trend. The arithmetic of such a process is quite simple, but its historical implications are strictly incomprehensible.
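The arithmetic is easy to check numerically. A minimal sketch (purely illustrative, using the quoted normalization of one initial time-unit):

```python
# Yudkowsky's 'Transcended' doubling sequence: each doubling of speed
# halves the objective time the next doubling takes.
def transcended_doubling(steps):
    speed = 1          # computing power, arbitrary units
    elapsed = 0.0      # objective time elapsed
    interval = 1.0     # the first doubling takes 1 time-unit
    for _ in range(steps):
        elapsed += interval
        speed *= 2
        interval /= 2  # doubling period halves
    return speed, elapsed

speed, elapsed = transcended_doubling(30)
# elapsed converges on 2 time-units (1 + 1/2 + 1/4 + ...), exactly the
# singular point of the continuous form y = 2/(2 - x) at x = 2.
```

Thirty doublings bring the clock to within a billionth of a time-unit of the singular point; the divergence lies in the speed, never in the elapsed time.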

“I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from ‘impossible’ to ‘obvious’. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards… “

Since the argument takes human thought to its shattering point, it is natural for some to be repulsed by it. Yet its basics are almost impregnable to logical objection. Intelligence is a function of the brain. The brain has been ‘designed’ by natural processes (posing no discernible special difficulties). Thus, intelligence is obviously an ultimately tractable engineering problem. Nature has already ‘engineered it’ whilst employing design methods of such stupefying inefficiency that only brute, obstinate force, combined of course with complete ruthlessness, has moved things forwards. Yet the tripling of cortical mass within the lineage of the higher primates has only taken a few million years, and — for most of this period — a modest experimental population (in the low millions or less).

The contemporary technological problem, in contrast to the preliminary biological one, is vastly easier. It draws upon a wider range of materials and techniques, an installed intelligence and knowledge base, superior information media, more highly-dynamized feedback systems, and a self-amplifying resource network. Unsurprisingly it is advancing at incomparably greater speed.

“If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90’s, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough – just one major insight – in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.”

[Tomb]

Moore and More

Doubling down on Moore’s Law is the futurist main current

Cycles cannot be dismissed from futuristic speculation (they always come back), but they no longer define it. Since the beginning of the electronic era, their contribution to the shape of the future has been progressively marginalized.

The model of linear and irreversible historical time, originally inherited from Occidental religious traditions, was spliced together with ideas of continuous growth and improvement during the industrial revolution. During the second half of the 20th century, the dynamics of electronics manufacture consolidated a further – and fundamental – upgrade, based upon the expectation of continuously accelerating change.

The elementary arithmetic of counting along the natural number line provides an intuitively comfortable model for the progression of time, due to its conformity with clocks, calendars, and the simple idea of succession. Yet the dominant historical forces of the modern world promote a significantly different model of change, one that tends to shift addition upwards, into an exponent. Demographics, capital accumulation, and technological performance indices do not increase through unitary steps, but through rates of return, doublings, and take-offs. Time explodes, exponentially.
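The contrast between the two models of change is easily made concrete. An illustrative sketch (the rate and figures here are assumed for illustration, not drawn from the text):

```python
# Linear ('counting') time adds a fixed increment per period;
# neo-modern ('rate-of-return') time multiplies by a fixed factor.
def linear_growth(start, increment, periods):
    return start + increment * periods

def exponential_growth(start, rate, periods):
    return start * (1 + rate) ** periods

# Fifty periods at a 7% rate of return (doubling roughly every ten
# periods) leaves any unitary step far behind:
linear = linear_growth(100, 10, 50)            # 600
compound = exponential_growth(100, 0.07, 50)   # ~2946
```

The growing gap between the two trajectories is the arithmetic signature of time shifted "into an exponent."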

The iconic expression of this neo-modern time, counting succession in binary logarithms, is Moore’s Law, which determines a two-year doubling period for the density of transistors on microchips (“cramming more components onto integrated circuits”). In a short essay published in Pajamas Media, celebrating the prolongation of Moore’s Law as Intel pushes chip architecture into the third dimension, Michael S. Malone writes:

“Today, almost a half-century after it was first elucidated by legendary Fairchild and Intel co-founder Dr. Gordon Moore in an article for a trade magazine, it is increasingly apparent that Moore’s Law is the defining measure of the modern world. All other predictive tools for understanding life in the developed world since WWII — demographics, productivity tables, literacy rates, econometrics, the cycles of history, Marxist analysis, and on and on — have failed to predict the trajectory of society over the decades … except Moore’s Law.”

Whilst crystallizing – in silico — the inherent acceleration of neo-modern, linear time, Moore’s Law is intrinsically nonlinear, for at least two reasons. Firstly, and most straightforwardly, it expresses the positive feedback dynamics of technological industrialism, in which rapidly-advancing electronic machines continuously revolutionize their own manufacturing infrastructure. Better chips make better robots make better chips, in a spiraling acceleration. Secondly, Moore’s Law is at once an observation, and a program. As Wikipedia notes:

“[Moore’s original] paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue ‘for at least ten years’. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. … Although Moore’s law was initially made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard, it can be viewed as a self-fulfilling prophecy.”

Malone comments:

“… semiconductor companies around the world, big and small, and not least because of their respect for Gordon Moore, set out to uphold the Law — and they have done so ever since, despite seemingly impossible technical and scientific obstacles. Gordon Moore not only discovered Moore’s Law, he made it real. As his successor at Intel, Paul Otellini, once told me, ‘I’m not going to be the guy whose legacy is that Moore’s Law died on his watch.'”

If Technological Singularity is the ‘rapture of the nerds’, Gordon Moore is their Moses. Electro-industrial capitalism is told to go forth and multiply, and to do so with a quite precisely time-specified binary exponent. In its adherence to the Law, the integrated circuit industry is uniquely chosen (and a light unto the peoples). As Malone concludes:

“Today, every segment of society either embraces Moore’s Law or is racing to get there. That’s because they know that if only they can get aboard that rocket — that is, if they can add a digital component to their business — they too can accelerate away from the competition. That’s why none of the inventions we Baby Boomers as kids expected to enjoy as adults — atomic cars! personal helicopters! ray guns! — have come true; and also why we have even more powerful tools and toys — instead. Whatever can be made digital, if not in the whole, but in part — marketing, communications, entertainment, genetic engineering, robotics, warfare, manufacturing, service, finance, sports — it will, because going digital means jumping onto Moore’s Law. Miss that train and, as a business, an institution, or a cultural phenomenon, you die.”

[Tomb]

Scaly Creatures

Cities are accelerators and there are solid numbers to demonstrate it

Among the most memorable features of Shanghai’s 2010 World Expo was the quintet of ‘Theme Pavilions’ designed to facilitate exploration of the city in general (in keeping with the urban-oriented theme of the event: ‘Better City, Better Life’). Whilst many international participants succumbed to facile populism in their national pavilions, these Theme Pavilions maintained an impressively high-minded tone.

Most remarkable of all for philosophical penetration was the Urban Being Pavilion, with its exhibition devoted to the question: what kind of thing is a city? Infrastructural networks received especially focused scrutiny. Pipes, cables, conduits, and transport arteries compose intuitively identifiable systems – higher-level wholes – that strongly indicate the existence of an individualized, complex being. The conclusion was starkly inescapable: a city is more than just an aggregated mass. It is a singular, coherent entity, deserving of its proper – even personal – name, and not unreasonably conceived as a composite ‘life-form’ (if not exactly an ‘organism’).

Such intuitions, however plausible, do not suffice in themselves to establish the city as a rigorously-defined scientific object. “[D]espite much historical evidence that cities are the principle engines of innovation and economic growth, a quantitative, predictive theory for understanding their dynamics and organization and estimating their future trajectory and stability remains elusive,” remark Luís M. A. Bettencourt, José Lobo, Dirk Helbing, Christian Kühnert, and Geoffrey B. West, in their prelude to a 2007 paper that has done more than any other to remedy the deficit: ‘Growth, innovation, scaling, and the pace of life in cities’.

In this paper, the authors identify mathematical patterns that are at once distinctive to the urban phenomenon and generally applicable to it. They thus isolate the object of an emerging urban science, and outline its initial features, claiming that: “the social organization and dynamics relating urbanization to economic development and knowledge creation, among other social activities, are very general and appear as nontrivial quantitative regularities common to all cities, across urban systems.”

Noting that cities have often been analogized to biological systems, the paper extracts the principle supporting the comparison. “Remarkably, almost all physiological characteristics of biological organisms scale with body mass … as a power law whose exponent is typically a multiple of 1/4 (which generalizes to 1/(d + 1) in d-dimensions).” These relatively stable scaling relations allow biological features, such as metabolic rates, life spans, and maturation periods, to be anticipated with a high level of confidence given body mass alone. Furthermore, they conform to an elegant series of theoretical expectations that draw upon nothing beyond the abstract organizational constraints of n-dimensional space:

“Highly complex, self-sustaining structures, whether cells, organisms, or cities, require close integration of enormous numbers of constituent units that need efficient servicing. To accomplish this integration, life at all scales is sustained by optimized, space-filling, hierarchical branching networks, which grow with the size of the organism as uniquely specified approximately self-similar structures. Because these networks, e.g., the vascular systems of animals and plants, determine the rates at which energy is delivered to functional terminal units (cells), they set the pace of physiological processes as scaling functions of the size of the organism. Thus, the self-similar nature of resource distribution networks, common to all organisms, provides the basis for a quantitative, predictive theory of biological structure and dynamics, despite much external variation in appearance and form.”

If cities are in certain respects meta- or super-organisms, however, they are also the inverse. Metabolically, cities are anti-organisms. As biological systems scale up, they slow down, at a mathematically predictable rate. Cities, in contrast, accelerate as they grow. Something approximating to the fundamental law of urban reality is thus exposed: larger is faster.

The paper quantifies its findings, based on a substantial base of city data (with US cities over-represented), by specifying a ‘scaling exponent’ (or ‘β’, beta) that defines the regular correlation between urban scale and the factor under consideration.

A beta of one corresponds to linear correlation (of a variable to city size). For instance, housing supply, which remains constantly proportional to population across all urban scales, is found – unsurprisingly – to have β = 1.00.

A beta of less than one indicates consistent economies of scale. Such economies are found systematically among urban resource networks, exemplified by gasoline stations (β = 0.77), gasoline sales (β = 0.79), length of electrical cables (β = 0.87), and road surface (β = 0.83). The sub-linear correlation of resource costs to urban scale makes city life increasingly efficient as metropolitan intensity soars.

A beta of greater than one indicates increasing returns to scale. Factors exhibiting this pattern include inventiveness (e.g. ‘new patents’ β = 1.27, ‘inventors’ β = 1.25), wealth creation (e.g. ‘GDP’ β = 1.15, wages β = 1.12), but also disease (‘new AIDS cases’ β = 1.23), and serious crimes (β = 1.16). Urban growth is accompanied by a super-linear rise in opportunity for social interaction, whether productive, infectious, or malicious. More is not only better, it’s much better (and, in some respects, worse).
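The betas translate directly into growth multipliers: since the scaling relation has the power-law form Y = c·N^β, doubling a city’s population multiplies the quantity Y by 2^β. A short sketch using the paper’s reported exponents:

```python
# Urban scaling (Bettencourt et al. 2007): Y = c * N**beta, so a
# doubling of population N multiplies quantity Y by 2**beta.
BETAS = {
    "housing": 1.00,            # linear: proportional to population
    "gasoline stations": 0.77,  # sub-linear: economies of scale
    "road surface": 0.83,
    "new patents": 1.27,        # super-linear: increasing returns
    "GDP": 1.15,
    "serious crimes": 1.16,
}

def doubling_multiplier(beta):
    """Factor by which a quantity grows when city population doubles."""
    return 2.0 ** beta

# The doubled city needs only ~71% more gasoline stations
# (2**0.77 is about 1.71), but generates ~2.4x the patents (2**1.27).
```

The same exponents, read in the other direction, say why "larger is faster": the infrastructural costs grow more slowly than the city, while its interactive output grows faster.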

“Our analysis suggests uniquely human social dynamics that transcend biology and redefine metaphors of urban ‘metabolism’. Open-ended wealth and knowledge creation require the pace of life to increase with organization size and for individuals and institutions to adapt at a continually accelerating rate to avoid stagnation or potential crises. These conclusions very likely generalize to other social organizations, such as corporations and businesses, potentially explaining why continuous growth necessitates an accelerating treadmill of dynamical cycles of innovation.”

Bigger city, faster life.

[Tomb]