Two unusual little girls test the limits of identity

At the leading edge of information technology – and amongst the ‘transhumanist’ commentary it stimulates – the idea of self-identity is undergoing relentless interrogation. Cultures substantially influenced by Abrahamic religious traditions, in which the resilient integrity and fundamental individuality of the ‘soul’ are strongly emphasized, are especially vulnerable to the prospect of radical and disconcerting conceptual revision.

The computerization of the natural sciences – including neurosciences – ensures that the investigation of the human brain and the innovation of artificial intelligence systems advance in parallel, whilst cross-linking and mutually reinforcing each other. Increasingly, the understanding of the brain and its digital emulation tend to fuse into a single, complex research program. As this program emerges, archaic metaphysics and spiritual doctrines become engineering problems. Individual identity seems ever less like a basic property, and more like a precarious achievement – or challenge – determined by processes of self-reference, and by relative communicative isolation. (‘Split-brain’ cases have vividly illustrated the instability and artificiality of the self-identifying individual.)

Would an AI program – or brain – that was tightly coupled to the Internet by high-bandwidth connections still consider itself to be strictly individuated? Do cyborgs – or uploads – dissolve their souls? Could a networked robot say ‘I’ and mean it? Because such questions are becoming ever more prominent, and practical, it is not surprising that a New York Times story by Susan Dominus, devoted to craniopagus conjoined twins Krista and Tatiana Hogan, has generated an unusual quantity of excitement and Internet-linkage.

The twins are not only fused at the head (craniopagus), their brains are connected by a ‘neural bridge’ that enables signals to pass from one to the other. Neurosurgeon Douglas Cochrane proposes “that visual input comes in through the retinas of one girl, reaches her thalamus, then takes two different courses, like electricity traveling along a wire that splits in two. In the girl who is looking at the strobe or a stuffed animal in her crib, the visual input continues on its usual pathways, one of which ends up in the visual cortex. In the case of the other girl, the visual stimulus would reach her thalamus via the thalamic bridge, and then travel up her own visual neural circuitry, ending up in the sophisticated processing centers of her own visual cortex. Now she has seen it, probably milliseconds after her sister has.”

The twins’ brains, or a twin-brain? The Hogan case is so extraordinary that irreducible ambiguity arises:

The girls’ brains are so unusually formed that doctors could not predict what their development would be like: each girl has an unusually short corpus callosum, the neural band that allows the brain’s two cerebral hemispheres to communicate, and in each girl, the two cerebral hemispheres also differ in size, with Tatiana’s left hemisphere and Krista’s right significantly smaller than is typical. “The asymmetry raises intriguing questions about whether one can compensate for the other because of the brain bridge,” said Partha Mitra, a neuroscientist at Cold Spring Harbor Laboratory, who studies brain architecture. The girls’ cognition may also be facing specific challenges that no others have experienced: some kind of confusing crosstalk that would require additional energy to filter and process. In addition to sorting out the usual sensory experiences of the world, the girls’ brains, their doctors believe, have been forced to adapt to sensations originating with the organs and body parts of someone else. … Krista likes ketchup, and Tatiana does not, something the family discovered when Tatiana tried to scrape the condiment off her own tongue, even though she was not eating it.

As they struggle to make sense of their boundaries, the twins are avatars of an impending, universal confusion:

Although each girl often used “I” when she spoke, I never heard either say “we,” for all their collaboration. It was as if even they seemed confused by how to think of themselves, with the right language perhaps eluding them at this stage of development, under these unusual circumstances — or maybe not existing at all. “It’s like they are one and two people at the same time,” said Feinberg, the professor of psychiatry and neurology at Albert Einstein College of Medicine. What pronoun captures that?



East-plus-West at the frontier of freedom

In accordance with the widely-held belief that digital communication technologies ‘destroy distance’, James C. Bennett coined the term ‘Anglosphere’ to describe the arena of comparatively frictionless cultural proximity binding spatially-dispersed Anglophone populations. His contention was that the gathering trends exemplified by the development of the Internet would continue to promote cultural ties, whilst eroding the importance of spatial neighborhoods. In the age of the World Wide Web, cultural solidarity trumps geographical solidarity.

Whilst alternative culture-spheres – expressly including the Sinosphere – were mentioned in passing, they were not the focus of Bennett’s account. His attention was directed to English-speaking peoples, scattered geographically, yet bound together by threads of common understanding that derived from a shared language, English common law and limited-government traditions, highly-developed civil societies, individualism, and an unusual tolerance for disruptive social change. He predicted both that these commonalities would become increasingly consequential in the years to come, and that their general tenor would prove highly adaptive as the rate of social change accelerated worldwide.

Bennett’s concern with large-scale cultural systems can be seen as part of an intellectual trend, comparable in significant respects to Samuel Huntington’s influential ‘Clash of Civilizations’ thesis. In a world that is undergoing tectonic shifts in the distribution of wealth, power, and hegemony, such preoccupations are understandable. In these circumstances, it would be surprising if the partisans of Anglospheric and Sinospheric cultural traditions were not aroused to ardent advocacy of their relative merits and demerits, and – if Bennett is taken seriously – such discussions will take place in zones of cultural communion that are, at least relatively, increasingly introverted. The rapid emergence of a highly-autonomous ‘Chinese Internet’ in recent years adds weight to such expectations.

In March, the Z/Yen Group released the ninth in its series of Global Financial Centres Index rankings, in which Shanghai leapt to shared fifth place with Tokyo (on GFCI ratings of 694). London (775), New York (769), Hong Kong (759), and Singapore (722) led the pack. (The top 75 can be seen here).

Both Anglosphereans and Sinosphereans can find ready satisfaction in these ratings. The persistent supremacy of London and New York attests to a 250-year history of world economic dominance, whilst the ascent of Chinese-ethnicity commercial cities to the remaining top slots clearly indicates the shift of economic gravity to the western Pacific region. Yet the most interesting pattern lies in-between. Neither Hong Kong nor Singapore belongs unambiguously to a Sinosphere (still less to a broad Anglosphere). Instead, they are characterized by distinctive forms of Chinese-Anglophone hybridity – an immensely successful cultural synthesis. It would be difficult to maintain that Shanghai was entirely untouched by a comparable phenomenon, inherited in that case from the synthetic mentality of its concession-era International Settlement, and reflected in its singular Haipai or ‘ocean culture’.

The existence of an identifiable Sino-Anglosphere – or Singlosphere – is further suggested by the Heritage Foundation’s 2011 Index of Economic Freedom (rated on a scale of 0-100). On that list, the top two places are taken by Hong Kong (89.7) and Singapore (87.2), followed by Australia (82.5) and New Zealand (82.3). The Anglospherean and Sinospherean territorial cores fare less impressively, with none meeting the Heritage criteria for free economies — the United States comes ninth (77.8), the United Kingdom 16th (74.5), and mainland China 135th (52.0). It seems that the Singlosphere has learnt something about economic freedom that exceeds the presently-manifested wisdom of both cultural root-stocks – setting a model for the Sinosphere, and leaving the Anglosphere trailing in its wake.

As the deep secular trend of Chinese ascent and (relative if not absolute) American decline leads to ever more ominous rumblings and threats of geostrategic tension, it is especially important to note a quite different, non-confrontational pattern – based upon cultural merging and reciprocal liberation. Within the Singlosphere, an emergent, synthetic ethnicity exhibits a dynamically adaptive, cosmopolitan competence without peer, as distinct traditions of spontaneous order fuse and reinforce each other. Adam Smith meets Laozi, and the profound amalgamation of the two results in an unfolding, innovative culture that increasingly dominates world rankings of economic capability.

A remarkable study by Christian Gerlach excavates the Daoist roots of European laissez-faire (or wu wei) ideas, and anarcho-capitalist maverick Murray Rothbard was attracted to the same ‘Ancient Chinese Libertarian Tradition’. Ken McCormick calls it The Tao of Laissez-Faire. (Those disturbed by this identification might be more comfortable with Silja Graupe’s leftist critique of ‘Market Daoism’.)

McCormick concludes his essay:

The recent ascendance of free-market ideas around the world probably owes more to the practical historical success of those ideas than to the persuasiveness of any theory or philosophy. Yet one might speculate that the startling success of economic liberalization in the People’s Republic of China might in part be explained by the fact that the idea of free markets is embedded in the culture. In fact, the Confucianism that long dominated China was actually a synthesis of competing schools of thought, including Taoism … Hence, while laissez-faire has frequently been absent from Chinese practice, it is not at all alien to Chinese culture. The recent free-market reforms in China might therefore be interpreted not so much as an importation of a foreign ideology but as a reawakening of a home-grown concept.

The Singlosphere sets both East and West on the right track. The more that Shanghai recalls and learns from it – and the deeper its participation – the faster its ascent will be.


Peak People

Could we be facing the ultimate resource crunch?

Over at Zero Hedge, Sean Corrigan unleashes a fizzing polemic against the (M. King Hubbert) ‘Peak Oil’ school of resource doomsters (enjoy the article if you’re laissez-faire inclined, or the comments if you’re not).

Of particular relevance to density advocates is Corrigan’s “exercise in contextualization” (a kind of de-stressed Stand on Zanzibar) designed to provide an image of the planet’s ‘demographic burden’:

For example, just as an exercise in contextualisation, consider the following:-

The population of Hong Kong: 7 million. Its surface area: 1,100 km2

The population of the World: nigh on 7 billion, i.e., HK x 1000

1000 x area of HK = 110,000 km2 = the area of Cuba or Iceland

Approximate area of the Earth’s landmass = 150 million km2

Approximate total surface area = 520 million km2

So, were we to build one, vast city of the same population density as Hong Kong to cover the entirety of [Cuba], this would accommodate all of humanity, and take up just 0.07% of the planet’s land area and 0.02% of the Earth’s surface.

Anybody eagerly anticipating hypercities, arcologies, and other prospective experiments in large-scale social packing is likely to find this calculation rather disconcerting, if only because – taken as a whole – Hong Kong actually isn’t that dense. For sure, the downtown ‘synapse’ connecting Hong Kong Island with Kowloon is impressively intense, but most of the Hong Kong SAR (Special Administrative Region) is green, rugged, and basically deserted. Its mean density of 6,364 / km2 doesn’t get anywhere close to that of the top 100 cities (Manila’s 43,000 / km2 is almost seven times greater). Corrigan isn’t envisaging a megalopolis, but a Cuba-scale suburb.
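The density figures are easy to check. A minimal sketch, using only the population and area numbers quoted above:

```python
# Hong Kong's mean density, from the figures in Corrigan's exercise.
hk_population = 7_000_000
hk_area_km2 = 1_100

hk_density = hk_population / hk_area_km2
print(f"Hong Kong mean density: {hk_density:,.0f} / km2")

# Manila, the densest of the top-100 cities mentioned above.
manila_density = 43_000
print(f"Manila / Hong Kong: {manila_density / hk_density:.1f}x")
```

The ratio comes out at roughly 6.8 – Manila’s density is indeed ‘almost seven times greater’ than Hong Kong’s whole-territory mean.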

Whether densitarians are more or less likely than average to worry about Peak Oil or related issues might be an interesting question (the New Urbanists tend to be quite greenish). If they really want to see cities scale the heights of social possibility, however, they need to start worrying about population shortage. With the human population projected to level off at around 10 billion, there might never be enough people to make cities into the ultra-dense monsters that futuristic imagination has long hungered for.

Bryan Caplan is sounding the alarm. At least we have teeming Malthusian robot hordes to look forward to.


Statistical Mentality

Things are very probably weirder than they seem

As the natural sciences have developed to encompass increasingly complex systems, scientific rationality has become ever more statistical, or probabilistic. The deterministic classical mechanics of the Enlightenment was revolutionized by the near-equilibrium statistical mechanics of late 19th century atomists, by quantum mechanics in the early 20th century, and by the far-from-equilibrium complexity theorists of the later 20th century. Mathematical neo-Darwinism, information theory, and quantitative social sciences compounded the trend. Forces, objects, and natural types were progressively dissolved into statistical distributions: heterogeneous clouds, entropy deviations, wave functions, gene frequencies, noise-signal ratios and redundancies, dissipative structures, and complex systems at the edge of chaos.

By the final decades of the 20th century, an unbounded probabilism was expanding into hitherto unimagined territories, testing deeply unfamiliar and counter-intuitive arguments in statistical metaphysics, or statistical ontology. It no longer sufficed for realism to attend to multiplicities, because reality was itself subject to multiplication.

In his declaration cogito ergo sum, Descartes concluded (perhaps optimistically) that the existence of the self could be safely inferred from the fact of thinking. The statistical ontologists inverted this formula, asking: given my existence (which is to say, an existence that seems like this to me), what kind of reality is probable? Which reality is this likely to be?

Carnegie Mellon roboticist Hans Moravec, in his 1988 book Mind Children, seems to have initiated the genre. Extrapolating Moore’s Law into the not-too-distant future, he anticipated computational capacities that exceeded those of all biological brains by many orders of magnitude. Since each human brain runs its own more-or-less competent simulation of the world in order to function, it seemed natural to expect the coming technospheric intelligences to do the same, but with vastly greater scope, resolution, and variety. The mass replication of robot brains, each billions or trillions of times more powerful than those of its human progenitors, would provide a substrate for innumerable, immense, and minutely detailed historical simulations, within which human intelligences could be reconstructed to an effectively-perfect level of fidelity.

This vision feeds into a burgeoning literature on non-biological mental substrates, consciousness uploading, mind clones, whole-brain emulations (‘ems’), and Matrix-style artificial realities. Since the realities we presently know are already simulated (let us momentarily assume) on biological signal-processing systems with highly-finite quantitative specifications, there is no reason to confidently anticipate that an ‘artificial’ reality simulation would be in any way distinguishable.

Is ‘this’ history or its simulation? More precisely: is ‘this’ a contemporary biological (brain-based) simulation, or a reconstructed, artificial memory, run on a technological substrate ‘in the future’? That is a question without classical solution, Moravec argues. It can only be approached, rigorously, with statistics, and since the number of fine-grained simulated histories (unknown but probably vast) overwhelmingly exceeds the number of actual or original histories (for the sake of this argument, one), the probabilistic calculus points unswervingly towards a definite conclusion: we can be near-certain that we are inhabitants of a simulation run by artificial (or post-biological) intelligences at some point in ‘our future’. At least – since many alternatives present themselves – we can be extremely confident, on grounds of statistical ontology, that our existence is non-original (if not historical reconstruction, it might be a game or fiction).

Nick Bostrom formalizes the simulation argument in his article ‘The Simulation Argument: Why the Probability that You are Living in the Matrix is Quite High’ (found here):

Now we get to the core of the simulation argument. This does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions:

(1) The chances that a species at our current level of development can avoid going extinct before becoming technologically mature is negligibly small
(2) Almost no technologically mature civilisations are interested in running computer simulations of minds like ours
(3) You are almost certainly in a simulation.

Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one is true (it does not tell us which).

If obstacles to the existence of high-level simulations (1 and 2) are removed, then statistical reasoning takes over, following the exact track laid down by Moravec. We are “almost certainly” inhabiting a “computer simulation that was created by some advanced civilization” because these saturate to near-exhaustion the probability space for realities ‘like this’. If such simulations exist, original lives would be as unlikely as winning lottery tickets, at best.
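The underlying arithmetic is elementary. If N indistinguishable simulated histories run for every original one, indifference across all N + 1 candidates assigns probability N/(N + 1) to being simulated. A minimal sketch (the values of N are purely illustrative; the argument itself fixes no number):

```python
# Statistical-ontology arithmetic: N simulated histories per single original.
def p_simulated(n_simulations: int) -> float:
    """Probability of inhabiting a simulation, assuming indifference
    across all n_simulations + 1 indistinguishable histories."""
    return n_simulations / (n_simulations + 1)

for n in (1, 999, 1_000_000_000):
    print(f"N = {n:>13,}: P(simulated) = {p_simulated(n):.9f}")
```

As N grows, an original life becomes the ‘winning lottery ticket’ of the text above: its probability, 1/(N + 1), shrinks towards zero.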

Bostrom concludes with an intriguing and influential twist:

If we are in a simulation, is it possible that we could know that for certain? If the simulators don’t want us to find out, we probably never will. But if they choose to reveal themselves, they could certainly do so. Maybe a window informing you of the fact would pop up in front of you, or maybe they would “upload” you into their world. Another event that would let us conclude with a very high degree of confidence that we are in a simulation is if we ever reach the point where we are about to switch on our own simulations. If we start running simulations, that would be very strong evidence against (1) and (2). That would leave us with only (3).

If we create fine-grained reality simulations, we demonstrate – to a high level of statistical confidence – that we already inhabit one, and that the history leading up to this moment of creation was fake. Paul Almond, an enthusiastic statistical ontologist, draws out the radical implication – reverse causation – asking: Can you retroactively put yourself in a computer simulation?

Such statistical ontology, or Bayesian existentialism, is not restricted to the simulation argument. It increasingly subsumes discussions of the Anthropic Principle, of the Many Worlds Interpretation of Quantum Mechanics, and exotic modes of prediction from the Doomsday Argument to Quantum Suicide (and Immortality).

Whatever is really happening, we probably have to chance it.


“2035. Probably earlier.”

There’s fast, and then there’s … something more

Eliezer Yudkowsky now categorizes his article ‘Staring into the Singularity’ as ‘obsolete’. Yet it remains among the most brilliant philosophical essays ever written. Rarely, if ever, has so much of value been said about the absolutely unthinkable (or, more specifically, the absolutely unthinkable for us).

For instance, Yudkowsky scarcely pauses at the phenomenon of exponential growth, despite the fact that this already overtaxes all comfortable intuition and ensures revolutionary changes of such magnitude that speculation falters. He is adamant that exponentiation (even Kurzweil’s ‘double exponentiation’) only reaches the starting point of computational acceleration, and that propulsion into Singularity is not exponential, but hyperbolic.

Each time the speed of thought doubles, time-schedules halve. When technology, including the design of intelligences, succumbs to such dynamics, it becomes recursive. The rate of self-improvement collapses with smoothly increasing rapidity towards instantaneity: a true, mathematically exact, or punctual Singularity. What lies beyond is not merely difficult to imagine, it is absolutely inconceivable. Attempting to picture or describe it is a ridiculous futility. Science fiction dies.

“A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

“That is the ‘Transcended’ version of the doubling sequence. Let’s call the ‘Transcend’ of a sequence {a₀, a₁, a₂…} the function where the interval between aₙ and aₙ₊₁ is inversely proportional to aₙ. So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it were continuous, would be the hyperbolic function y = 2/(2 – x). When x = 2, then (2 – x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.”

There could scarcely be a more precise, plausible, or consequential formula: Doubling periods halve. On the slide into Singularity – I. J. Good’s ‘intelligence explosion’ – exponentiation is compounded by a hyperbolic trend. The arithmetic of such a process is quite simple, but its historical implications are strictly incomprehensible.
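The ‘Transcended’ doubling sequence can be tabulated directly. A minimal sketch, summing the halving intervals and watching elapsed time converge on the singularity at t = 2:

```python
# Transcended doubling: the interval between a_n and a_(n+1) is 1/a_n,
# so each successive doubling takes half as long as the one before.
a, t = 1, 0.0
for step in range(10):
    t += 1 / a       # time spent reaching the next doubling
    a *= 2
    print(f"a = {a:5d} at t = {t:.6f}")
# t = 1 + 1/2 + 1/4 + ... never passes 2: the hyperbola y = 2/(2 - x)
# diverges exactly there.
```

Ten doublings take 1 + 1/2 + … + 1/512 ≈ 1.998 time-units; infinitely many fit before t = 2.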

“I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from ‘impossible’ to ‘obvious’. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards… “

Since the argument takes human thought to its shattering point, it is natural for some to be repulsed by it. Yet its basics are almost impregnable to logical objection. Intelligence is a function of the brain. The brain has been ‘designed’ by natural processes (posing no discernible special difficulties). Thus, intelligence is obviously an ultimately tractable engineering problem. Nature has already ‘engineered’ it whilst employing design methods of such stupefying inefficiency that only brute, obstinate force, combined of course with complete ruthlessness, has moved things forwards. Yet the tripling of cortical mass within the lineage of the higher primates has taken only a few million years, and – for most of this period – a modest experimental population (in the low millions or less).

The contemporary technological problem, in contrast to the preliminary biological one, is vastly easier. It draws upon a wider range of materials and techniques, an installed intelligence and knowledge base, superior information media, more highly-dynamized feedback systems, and a self-amplifying resource network. Unsurprisingly it is advancing at incomparably greater speed.

“If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90’s, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough – just one major insight – in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.”


Moore and More

Doubling down on Moore’s Law is the futurist main current

Cycles cannot be dismissed from futuristic speculation (they always come back), but they no longer define it. Since the beginning of the electronic era, their contribution to the shape of the future has been progressively marginalized.

The model of linear and irreversible historical time, originally inherited from Occidental religious traditions, was spliced together with ideas of continuous growth and improvement during the industrial revolution. During the second half of the 20th century, the dynamics of electronics manufacture consolidated a further – and fundamental – upgrade, based upon the expectation of continuously accelerating change.

The elementary arithmetic of counting along the natural number line provides an intuitively comfortable model for the progression of time, due to its conformity with clocks, calendars, and the simple idea of succession. Yet the dominant historical forces of the modern world promote a significantly different model of change, one that tends to shift addition upwards, into an exponent. Demographics, capital accumulation, and technological performance indices do not increase through unitary steps, but through rates of return, doublings, and take-offs. Time explodes, exponentially.
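The contrast between the two models of time is easy to make concrete. A toy comparison of counting along the number line against doubling at fixed intervals (all numbers purely illustrative):

```python
# Linear succession vs. exponential doubling over the same span of years.
years = 46                   # an illustrative span
doubling_period = 2          # illustrative: one doubling every 2 years

linear_index = years                                  # one step per year
exponential_index = 2 ** (years // doubling_period)   # doubling instead

print(f"linear: x{linear_index}")
print(f"exponential: x{exponential_index:,}")
```

Forty-six steps of addition yield a 46-fold increase; forty-six years of two-year doublings yield a factor of 2²³, over eight million.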

The iconic expression of this neo-modern time, counting succession in binary logarithms, is Moore’s Law, which determines a two-year doubling period for the density of transistors on microchips (“cramming more components onto integrated circuits”). In a short essay published in Pajamas Media, celebrating the prolongation of Moore’s Law as Intel pushes chip architecture into the third dimension, Michael S. Malone writes:

“Today, almost a half-century after it was first elucidated by legendary Fairchild and Intel co-founder Dr. Gordon Moore in an article for a trade magazine, it is increasingly apparent that Moore’s Law is the defining measure of the modern world. All other predictive tools for understanding life in the developed world since WWII — demographics, productivity tables, literacy rates, econometrics, the cycles of history, Marxist analysis, and on and on — have failed to predict the trajectory of society over the decades … except Moore’s Law.”

Whilst crystallizing – in silico – the inherent acceleration of neo-modern, linear time, Moore’s Law is intrinsically nonlinear, for at least two reasons. Firstly, and most straightforwardly, it expresses the positive feedback dynamics of technological industrialism, in which rapidly-advancing electronic machines continuously revolutionize their own manufacturing infrastructure. Better chips make better robots make better chips, in a spiraling acceleration. Secondly, Moore’s Law is at once an observation, and a program. As Wikipedia notes:

“[Moore’s original] paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue ‘for at least ten years’. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. … Although Moore’s law was initially made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard, it can be viewed as a self-fulfilling prophecy.”

Malone comments:

“… semiconductor companies around the world, big and small, and not least because of their respect for Gordon Moore, set out to uphold the Law — and they have done so ever since, despite seemingly impossible technical and scientific obstacles. Gordon Moore not only discovered Moore’s Law, he made it real. As his successor at Intel, Paul Otellini, once told me, ‘I’m not going to be the guy whose legacy is that Moore’s Law died on his watch.'”

If Technological Singularity is the ‘rapture of the nerds’, Gordon Moore is their Moses. Electro-industrial capitalism is told to go forth and multiply, and to do so with a quite precisely time-specified binary exponent. In its adherence to the Law, the integrated circuit industry is uniquely chosen (and a light unto the peoples). As Malone concludes:

“Today, every segment of society either embraces Moore’s Law or is racing to get there. That’s because they know that if only they can get aboard that rocket — that is, if they can add a digital component to their business — they too can accelerate away from the competition. That’s why none of the inventions we Baby Boomers as kids expected to enjoy as adults — atomic cars! personal helicopters! ray guns! — have come true; and also why we have even more powerful tools and toys — instead. Whatever can be made digital, if not in the whole, but in part — marketing, communications, entertainment, genetic engineering, robotics, warfare, manufacturing, service, finance, sports — it will, because going digital means jumping onto Moore’s Law. Miss that train and, as a business, an institution, or a cultural phenomenon, you die.”


Perfect Storm

Weather forecasts for winter 2012 are getting wilder all the time

Even before receiving the Hollywood treatment, the year 2012 was shaping up to be a uniquely potent ‘harmonic convergence’ of end-times enthusiasm. Initially condensed out of the Mayan calendar, the 2012 countdown was soon fizzed into a heady cocktail by speculative interpretations of the Yijing, Aquarian ‘New Age’ paganism, Ufology, and mushroom mysticism. Once critical mass was achieved, 2012 became a gathering point for free-floating Jewish, Christian, and Islamic eschatological expectations (coming or return of the Messiah, advent of the Antichrist, Armageddon, Rapture, emergence of the Twelfth Imam from occultation, and others). Just about anything cosmically imaginable is now firmly expected – by somebody – to arrive in late December, 2012.

Secular eschatology also has its dogs in the fight. From reciprocally insulated enclaves of the Internet, apocalyptic strains of Marxism (and libertarianism) joyfully anticipated the imminent collapse of the global economy, fully confident that its downfall would usher in a post-capitalist social order (or untrammeled free-market societies). The boldest proponents of impending Technological Singularity prepared to welcome superhuman artificial intelligence (when Skynet would already be five years overdue). Radical environmentalists, neo-Malthusians, ‘Peak Oil’ resource-crunchers, and Clash of Civilizations theorists also contributed substantially to the atmosphere of impending crisis. Irrespective of Anthropogenic Global Warming, everything was heating up fast.

This climate proved highly receptive to the prophetic ideas of William Strauss and Neil Howe, where it found a fresh and evocative self-description. Beginning with their book Generations (1991), Strauss & Howe sought to explain the rhythm of history through the pattern of generations, as they succeeded each other in four-phase cycles. Their cyclic unit or ‘saeculum’ lasts 80-100 years and consists of generational ‘seasons’ or ‘turnings’, each characterized by a distinctive archetype. The Fourth Turning, starting early in the new millennium, is ‘winter’ and ‘crisis’. They remark: “Today’s older Americans recognize this as the mood of the Great Depression and World War II, but a similar mood has been present in all the other great gates of our history, from the Civil War and Revolution back into colonial and English history.”

Jim Quinn’s discussion of the Fourth Turning at Zero Hedge anticipates the winter storms: “Based upon a review of the foreseeable issues confronting our society it is clear to me that a worse financial implosion will strike before the 2012 presidential election. It may be triggered by a debt ceiling confrontation, the ending of QE2, a panic out of the USD, hyperinflation, a surge in oil prices, or some combination of these possibilities. The ensuing collapse of the stock and bond markets will remove the last vestiges of trust in the existing financial system and the government bureaucrats who have taken taxpayer dollars and funneled them to these Wall Street oligarchs.”

More ominously still, Quinn concludes: “History has taught us that Fourth Turnings end in all out war. The outcome of wars is always in doubt. …It may be 150 years since Walt Whitman foresaw the imminent march of armies, visions of unborn deeds, and a sweeping away of the old order, but history has brought us right back to where we started. Immense challenges and threats await our nation. Will we face them with the courage and fortitude of our forefathers? Or will we shrink from our responsibility to future unborn generations? The drumbeat of history grows louder. Our rendezvous with destiny beckons.”

Stormy enough yet? If not, there’s the harsh weather of Kondratiev winter rolling in too.

Nikolai Kondratiev’s ‘long waves’ fluctuate at roughly twice the frequency of Strauss & Howe saecula (lasting roughly 40-60 years from ‘spring’ to ‘winter’). Originally discovered through empirical investigation of price movements, Kondratiev waves have stimulated a remarkable range of economic-historical theories. Joseph Schumpeter interpreted the cycle as a process of techno-economic innovation, in which capital was creatively revolutionized and destroyed through depreciation, whilst Hyman Minsky attributed it to a rhythm of financial speculation (in which stability fostered over-confidence, excess, and crisis with cyclic regularity).

The discovery of the ‘long wave’ seemed to coincide with its disappearance – at the hands of macroeconomic management (Keynesian counter-cyclical policy). Unsurprisingly, the crisis of Keynesianism under present conditions of ‘debt saturation’ has re-animated long wave discussion. At his Kondratiev-inspired Tipping Points blog, Gordon T. Long forecasts a savage winter, marked by rapid progression from financial through economic to political crisis, culminating in a (US dollar) ‘currency collapse’ in 2012.

Wrap up warmly.


Scaly Creatures

Cities are accelerators and there are solid numbers to demonstrate it

Among the most memorable features of Shanghai’s 2010 World Expo was the quintet of ‘Theme Pavilions’ designed to facilitate exploration of the city in general (in keeping with the urban-oriented theme of the event: ‘Better City, Better Life’). Whilst many international participants succumbed to facile populism in their national pavilions, these Theme Pavilions maintained an impressively high-minded tone.

Most remarkable of all for philosophical penetration was the Urban Being Pavilion, with its exhibition devoted to the question: what kind of thing is a city? Infrastructural networks received especially focused scrutiny. Pipes, cables, conduits, and transport arteries compose intuitively identifiable systems – higher-level wholes – that strongly indicate the existence of an individualized, complex being. The conclusion was starkly inescapable: a city is more than just an aggregated mass. It is a singular, coherent entity, deserving of its proper – even personal – name, and not unreasonably conceived as a composite ‘life-form’ (if not exactly an ‘organism’).

Such intuitions, however plausible, do not suffice in themselves to establish the city as a rigorously defined scientific object. “[D]espite much historical evidence that cities are the principal engines of innovation and economic growth, a quantitative, predictive theory for understanding their dynamics and organization and estimating their future trajectory and stability remains elusive,” remark Luís M. A. Bettencourt, José Lobo, Dirk Helbing, Christian Kühnert, and Geoffrey B. West, in their prelude to a 2007 paper that has done more than any other to remedy the deficit: ‘Growth, innovation, scaling, and the pace of life in cities’.

In this paper, the authors identify mathematical patterns that are at once distinctive to the urban phenomenon and generally applicable to it. They thus isolate the object of an emerging urban science, and outline its initial features, claiming that: “the social organization and dynamics relating urbanization to economic development and knowledge creation, among other social activities, are very general and appear as nontrivial quantitative regularities common to all cities, across urban systems.”

Noting that cities have often been analogized to biological systems, the paper extracts the principle supporting the comparison. “Remarkably, almost all physiological characteristics of biological organisms scale with body mass … as a power law whose exponent is typically a multiple of 1/4 (which generalizes to 1/(d +1) in d-dimensions).” These relatively stable scaling relations allow biological features, such as metabolic rates, life spans, and maturation periods, to be anticipated with a high level of confidence given body mass alone. Furthermore, they conform to an elegant series of theoretical expectations that draw upon nothing beyond the abstract organizational constraints of n-dimensional space:
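The arithmetic of quarter-power scaling is worth making concrete. The sketch below assumes the canonical case quoted above – metabolic rate scaling with body mass to the 3/4 power (Kleiber’s law) – and the function name is illustrative, not drawn from the paper:

```python
# Quarter-power scaling: a physiological trait Y varies with body
# mass M as Y = Y0 * M**b, where b is typically a multiple of 1/4.
# Metabolic rate takes b = 3/4, so per-gram metabolism *falls* as
# organisms grow larger: in biology, bigger means slower.

def scale_factor(mass_ratio, exponent):
    """Factor by which a trait changes when body mass is
    multiplied by mass_ratio, given Y = Y0 * M**exponent."""
    return mass_ratio ** exponent

# A 16-fold increase in body mass...
whole_body = scale_factor(16, 3 / 4)  # total metabolic rate
per_gram = whole_body / 16            # metabolic rate per unit mass

print(whole_body)  # 8.0 -> only an 8-fold rise in total metabolism
print(per_gram)    # 0.5 -> each gram burns energy half as fast
```

The per-gram deceleration is the biological baseline against which the urban anomaly, discussed below, stands out.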

“Highly complex, self-sustaining structures, whether cells, organisms, or cities, require close integration of enormous numbers of constituent units that need efficient servicing. To accomplish this integration, life at all scales is sustained by optimized, space-filling, hierarchical branching networks, which grow with the size of the organism as uniquely specified approximately self-similar structures. Because these networks, e.g., the vascular systems of animals and plants, determine the rates at which energy is delivered to functional terminal units (cells), they set the pace of physiological processes as scaling functions of the size of the organism. Thus, the self-similar nature of resource distribution networks, common to all organisms, provides the basis for a quantitative, predictive theory of biological structure and dynamics, despite much external variation in appearance and form.”

If cities are in certain respects meta- or super-organisms, however, they are also the inverse. Metabolically, cities are anti-organisms. As biological systems scale up, they slow down, at a mathematically predictable rate. Cities, in contrast, accelerate as they grow. Something approximating to the fundamental law of urban reality is thus exposed: larger is faster.

The paper quantifies its findings, based on a substantial base of city data (with US cities over-represented), by specifying a ‘scaling exponent’ (‘β’, beta) that defines the regular correlation between urban scale and the factor under consideration.

A beta of one corresponds to linear correlation (of a variable to city size). For instance, housing supply, which remains constantly proportional to population across all urban scales, is found – unsurprisingly – to have β = 1.00.

A beta of less than one indicates consistent economies of scale. Such economies are found systematically among urban resource networks, exemplified by gasoline stations (β = 0.77), gasoline sales (β = 0.79), length of electrical cables (β = 0.87), and road surface (β = 0.83). The sub-linear correlation of resource costs to urban scale makes city life increasingly efficient as metropolitan intensity soars.

A beta of greater than one indicates increasing returns to scale. Factors exhibiting this pattern include inventiveness (e.g. ‘new patents’ β = 1.27, ‘inventors’ β = 1.25), wealth creation (e.g. ‘GDP’ β = 1.15, ‘wages’ β = 1.12), but also disease (‘new AIDS cases’ β = 1.23) and serious crimes (β = 1.16). Urban growth is accompanied by a super-linear rise in opportunity for social interaction, whether productive, infectious, or malicious. More is not only better, it’s much better (and, in some respects, worse).
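The three regimes can be read off directly from the power law. A minimal sketch, using the exponents quoted above; the doubling scenario and function name are illustrative assumptions, not taken from the paper:

```python
# Urban scaling: a quantity Y relates to city population N as
# Y = Y0 * N**beta. Dividing by N gives the per-capita value,
# which scales as N**(beta - 1): flat when beta = 1, shrinking
# when beta < 1 (economies of scale), growing when beta > 1
# (increasing returns).

def per_capita_change(pop_ratio, beta):
    """Multiplier on the per-capita value of a quantity when
    city population is multiplied by pop_ratio."""
    return pop_ratio ** (beta - 1)

# Exponents as reported for the corresponding factors above.
betas = {
    'road surface': 0.83,  # sub-linear: infrastructure economy
    'housing':      1.00,  # linear: constant per capita
    'wages':        1.12,  # super-linear: increasing returns
    'new patents':  1.27,  # super-linear: accelerating invention
}

for name, beta in betas.items():
    factor = per_capita_change(2, beta)
    print(f"{name}: x{factor:.3f} per capita when population doubles")
```

Doubling a city’s population thus trims per-capita road surface by roughly a tenth, while lifting per-capita wages and patenting; the per-capita acceleration with size is exactly the inverse of the biological pattern.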

“Our analysis suggests uniquely human social dynamics that transcend biology and redefine metaphors of urban ‘metabolism’. Open-ended wealth and knowledge creation require the pace of life to increase with organization size and for individuals and institutions to adapt at a continually accelerating rate to avoid stagnation or potential crises. These conclusions very likely generalize to other social organizations, such as corporations and businesses, potentially explaining why continuous growth necessitates an accelerating treadmill of dynamical cycles of innovation.”

Bigger city, faster life.