Peak People

Could we be facing the ultimate resource crunch?

Over at Zero Hedge, Sean Corrigan unleashes a fizzing polemic against the (M. King Hubbert) ‘Peak Oil’ school of resource doomsters (enjoy the article if you’re laissez-faire inclined, or the comments if you’re not).

Of particular relevance to density advocates is Corrigan’s “exercise in contextualization” (a kind of de-stressed Stand on Zanzibar) designed to provide an image of the planet’s ‘demographic burden’:

For example, just as an exercise in contextualisation, consider the following:-

The population of Hong Kong: 7 million. Its surface area: 1,100 km2

The population of the World: nigh on 7 billion, i.e., HK x 1000

1000 x area of HK = 1,100,000 km2 = roughly the area of Bolivia

Approximate area of the Earth’s landmass = 150 million km2

Approximate total surface area = 520 million km2

So, were we to build one vast city of the same population density as Hong Kong to cover the entirety of [Bolivia], this would accommodate all of humanity, and take up just 0.7% of the planet’s land area and 0.2% of the Earth’s surface.

Anybody eagerly anticipating hypercities, arcologies, and other prospective experiments in large-scale social packing is likely to find this calculation rather disconcerting, if only because – taken as a whole – Hong Kong actually isn’t that dense. For sure, the downtown ‘synapse’ connecting Hong Kong Island with Kowloon is impressively intense, but most of the Hong Kong SAR (Special Administrative Region) is green, rugged, and basically deserted. Its mean density of 6,364 / km2 doesn’t get anywhere close to that of the top 100 cities (Manila’s 43,000 / km2 is almost seven times greater). Corrigan isn’t envisaging a megalopolis, but a Bolivia-scale suburb.
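The arithmetic is easy to re-run. Here is a minimal sketch (in Python) using only the figures quoted above – nothing beyond them is assumed:

```python
# Back-of-the-envelope check, using only the figures quoted in the post.
WORLD_POPULATION = 7e9        # people
LAND_AREA = 150e6             # km^2, approximate Earth landmass (as quoted)
DENSITIES = {
    "Hong Kong SAR (mean)": 6_364,    # people / km^2
    "Manila (city proper)": 43_000,   # people / km^2
}

for place, density in DENSITIES.items():
    area = WORLD_POPULATION / density     # km^2 needed to house everyone
    share = 100 * area / LAND_AREA        # share of the planet's land
    print(f"{place}: {area:,.0f} km^2 ({share:.2f}% of land area)")

# Hong Kong SAR (mean): ~1,100,000 km^2 (0.73% of land area)
# Manila (city proper): ~163,000 km^2 (0.11% of land area)
```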

Whether densitarians are more or less likely than average to worry about Peak Oil or related issues might be an interesting question (the New Urbanists tend to be quite greenish). If they really want to see cities scale the heights of social possibility, however, they need to start worrying about population shortage. With the human population projected to level off at around 10 billion, there might never be enough people to make cities into the ultra-dense monsters that futuristic imagination has long hungered for.

Bryan Caplan is sounding the alarm. At least we have teeming Malthusian robot hordes to look forward to.

[Tomb]

Statistical Mentality

Things are very probably weirder than they seem

As the natural sciences have developed to encompass increasingly complex systems, scientific rationality has become ever more statistical, or probabilistic. The deterministic classical mechanics of the Enlightenment was revolutionized by the near-equilibrium statistical mechanics of late 19th-century atomists, by quantum mechanics in the early 20th century, and by the far-from-equilibrium complexity theorists of the later 20th century. Mathematical neo-Darwinism, information theory, and quantitative social sciences compounded the trend. Forces, objects, and natural types were progressively dissolved into statistical distributions: heterogeneous clouds, entropy deviations, wave functions, gene frequencies, noise-signal ratios and redundancies, dissipative structures, and complex systems at the edge of chaos.

By the final decades of the 20th century, an unbounded probabilism was expanding into hitherto unimagined territories, testing deeply unfamiliar and counter-intuitive arguments in statistical metaphysics, or statistical ontology. It no longer sufficed for realism to attend to multiplicities, because reality was itself subject to multiplication.

In his declaration cogito ergo sum, Descartes concluded (perhaps optimistically) that the existence of the self could be safely inferred from the fact of thinking. The statistical ontologists inverted this formula, asking: given my existence (which is to say, an existence that seems like this to me), what kind of reality is probable? Which reality is this likely to be?

Carnegie Mellon roboticist Hans Moravec, in his 1988 book Mind Children, seems to have initiated the genre. Extrapolating Moore’s Law into the not-too-distant future, he anticipated computational capacities that exceeded those of all biological brains by many orders of magnitude. Since each human brain runs its own more-or-less competent simulation of the world in order to function, it seemed natural to expect the coming technospheric intelligences to do the same, but with vastly greater scope, resolution, and variety. The mass replication of robot brains, each billions or trillions of times more powerful than those of its human progenitors, would provide a substrate for innumerable, immense, and minutely detailed historical simulations, within which human intelligences could be reconstructed to an effectively-perfect level of fidelity.

This vision feeds into a burgeoning literature on non-biological mental substrates, consciousness uploading, mind clones, whole-brain emulations (‘ems’), and Matrix-style artificial realities. Since the realities we presently know are already simulated (let us momentarily assume) on biological signal-processing systems with highly-finite quantitative specifications, there is no reason to confidently anticipate that an ‘artificial’ reality simulation would be in any way distinguishable.

Is ‘this’ history or its simulation? More precisely: is ‘this’ a contemporary biological (brain-based) simulation, or a reconstructed, artificial memory, run on a technological substrate ‘in the future’? That is a question without classical solution, Moravec argues. It can only be approached, rigorously, with statistics, and since the number of fine-grained simulated histories (unknown but probably vast) overwhelmingly exceeds the number of actual or original histories (for the sake of this argument, one), the probabilistic calculus points unswervingly towards a definite conclusion: we can be near-certain that we are inhabitants of a simulation run by artificial (or post-biological) intelligences at some point in ‘our future’. At least – since many alternatives present themselves – we can be extremely confident, on grounds of statistical ontology, that our existence is non-original (if not historical reconstruction, it might be a game or fiction).
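The probabilistic step can be made completely explicit. The sketch below (Python) is illustrative only: the counts of simulated histories are arbitrary placeholders, since the actual number is unknown, and the single ‘original’ history follows the argument’s own stipulation:

```python
# Indifference reasoning as described above: given N simulated histories and
# one original, how likely is it that this experience is the original one?

def p_simulated(n_simulated: int, n_original: int = 1) -> float:
    """Probability of inhabiting a simulation, assuming indifference across
    all observers whose experience is 'like this'."""
    return n_simulated / (n_simulated + n_original)

for n in (0, 1, 1_000, 10**9):   # placeholder counts, for illustration only
    print(f"{n:>13,} simulated histories -> P(simulated) = {p_simulated(n):.9f}")

# As soon as simulations are numerous at all, P(simulated) crowds out
# virtually the entire probability space.
```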

Nick Bostrom formalizes the simulation argument in his article ‘The Simulation Argument: Why the Probability that You are Living in the Matrix is Quite High’ (found here):

Now we get to the core of the simulation argument. This does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions:

(1) The chances that a species at our current level of development can avoid going extinct before becoming technologically mature is negligibly small
(2) Almost no technologically mature civilisations are interested in running computer simulations of minds like ours
(3) You are almost certainly in a simulation.

Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one is true (it does not tell us which).

If obstacles to the existence of high-level simulations (1 and 2) are removed, then statistical reasoning takes over, following the exact track laid down by Moravec. We are “almost certainly” inhabiting a “computer simulation that was created by some advanced civilization” because these saturate to near-exhaustion the probability space for realities ‘like this’. If such simulations exist, original lives would be as unlikely as winning lottery tickets, at best.

Bostrom concludes with an intriguing and influential twist:

If we are in a simulation, is it possible that we could know that for certain? If the simulators don’t want us to find out, we probably never will. But if they choose to reveal themselves, they could certainly do so. Maybe a window informing you of the fact would pop up in front of you, or maybe they would “upload” you into their world. Another event that would let us conclude with a very high degree of confidence that we are in a simulation is if we ever reach the point where we are about to switch on our own simulations. If we start running simulations, that would be very strong evidence against (1) and (2). That would leave us with only (3).

If we create fine-grained reality simulations, we demonstrate – to a high level of statistical confidence – that we already inhabit one, and that the history leading up to this moment of creation was fake. Paul Almond, an enthusiastic statistical ontologist, draws out the radical implication – reverse causation – asking: can you retroactively put yourself in a computer simulation?

Such statistical ontology, or Bayesian existentialism, is not restricted to the simulation argument. It increasingly subsumes discussions of the Anthropic Principle, of the Many Worlds Interpretation of Quantum Mechanics, and exotic modes of prediction from the Doomsday Argument to Quantum Suicide (and Immortality).

Whatever is really happening, we probably have to chance it.

[Tomb]

“2035. Probably earlier.”

There’s fast, and then there’s … something more

Eliezer Yudkowsky now categorizes his article ‘Staring into the Singularity’ as ‘obsolete’. Yet it remains among the most brilliant philosophical essays ever written. Rarely, if ever, has so much of value been said about the absolutely unthinkable (or, more specifically, the absolutely unthinkable for us).

For instance, Yudkowsky scarcely pauses at the phenomenon of exponential growth, despite the fact that this already overtaxes all comfortable intuition and ensures revolutionary changes of such magnitude that speculation falters. He is adamant that exponentiation (even Kurzweil’s ‘double exponentiation’) only reaches the starting point of computational acceleration, and that propulsion into Singularity is not exponential, but hyperbolic.

Each time the speed of thought doubles, time-schedules halve. When technology, including the design of intelligences, succumbs to such dynamics, it becomes recursive. The rate of self-improvement collapses with smoothly increasing rapidity towards instantaneity: a true, mathematically exact, or punctual Singularity. What lies beyond is not merely difficult to imagine, it is absolutely inconceivable. Attempting to picture or describe it is a ridiculous futility. Science fiction dies.

“A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

“That is the ‘Transcended’ version of the doubling sequence. Let’s call the ‘Transcend’ of a sequence {a_0, a_1, a_2, …} the function where the interval between a_n and a_(n+1) is inversely proportional to a_n. So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it were continuous, would be the hyperbolic function y = 2/(2 – x). When x = 2, then (2 – x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.”
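The quoted sequence is easy to tabulate. A minimal sketch (Python), using the time units of the continuous version, in which the singularity falls at t = 2:

```python
# The 'Transcended' doubling sequence: each doubling takes time inversely
# proportional to the current value, so elapsed time converges on a finite
# limit even as the value itself explodes.

value, elapsed = 1.0, 0.0
for step in range(1, 11):
    elapsed += 1.0 / value   # interval between a_n and a_(n+1) is 1 / a_n
    value *= 2.0             # the doubling itself
    print(f"step {step:2d}: value = {value:6.0f}, elapsed = {elapsed:.6f}")

# elapsed runs 1, 1.5, 1.75, ... toward 2, matching y = 2/(2 - x): the value
# diverges to infinity as x approaches 2. Finite time, unbounded speed.
```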

There could scarcely be a more precise, plausible, or consequential formula: Doubling periods halve. On the slide into Singularity – I. J. Good’s ‘intelligence explosion’ – exponentiation is compounded by a hyperbolic trend. The arithmetic of such a process is quite simple, but its historical implications are strictly incomprehensible.

“I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from ‘impossible’ to ‘obvious’. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…”

Since the argument takes human thought to its shattering point, it is natural for some to be repulsed by it. Yet its basics are almost impregnable to logical objection. Intelligence is a function of the brain. The brain has been ‘designed’ by natural processes (posing no discernible special difficulties). Thus, intelligence is obviously an ultimately tractable engineering problem. Nature has already ‘engineered it’ whilst employing design methods of such stupefying inefficiency that only brute, obstinate force, combined of course with complete ruthlessness, has moved things forwards. Yet the tripling of cortical mass within the lineage of the higher primates has only taken a few million years, and – for most of this period – a modest experimental population (in the low millions or less).

The contemporary technological problem, in contrast to the preliminary biological one, is vastly easier. It draws upon a wider range of materials and techniques, an installed intelligence and knowledge base, superior information media, more highly-dynamized feedback systems, and a self-amplifying resource network. Unsurprisingly it is advancing at incomparably greater speed.

“If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90’s, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough – just one major insight – in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.”

[Tomb]

Moore and More

Doubling down on Moore’s Law is the futurist main current

Cycles cannot be dismissed from futuristic speculation (they always come back), but they no longer define it. Since the beginning of the electronic era, their contribution to the shape of the future has been progressively marginalized.

The model of linear and irreversible historical time, originally inherited from Occidental religious traditions, was spliced together with ideas of continuous growth and improvement during the industrial revolution. During the second half of the 20th century, the dynamics of electronics manufacture consolidated a further – and fundamental – upgrade, based upon the expectation of continuously accelerating change.

The elementary arithmetic of counting along the natural number line provides an intuitively comfortable model for the progression of time, due to its conformity with clocks, calendars, and the simple idea of succession. Yet the dominant historical forces of the modern world promote a significantly different model of change, one that tends to shift addition upwards, into an exponent. Demographics, capital accumulation, and technological performance indices do not increase through unitary steps, but through rates of return, doublings, and take-offs. Time explodes, exponentially.

The iconic expression of this neo-modern time, counting succession in binary logarithms, is Moore’s Law, which determines a two-year doubling period for the density of transistors on microchips (“cramming more components onto integrated circuits”). In a short essay published in Pajamas Media, celebrating the prolongation of Moore’s Law as Intel pushes chip architecture into the third dimension, Michael S. Malone writes:

“Today, almost a half-century after it was first elucidated by legendary Fairchild and Intel co-founder Dr. Gordon Moore in an article for a trade magazine, it is increasingly apparent that Moore’s Law is the defining measure of the modern world. All other predictive tools for understanding life in the developed world since WWII — demographics, productivity tables, literacy rates, econometrics, the cycles of history, Marxist analysis, and on and on — have failed to predict the trajectory of society over the decades … except Moore’s Law.”

Whilst crystallizing – in silico – the inherent acceleration of neo-modern, linear time, Moore’s Law is intrinsically nonlinear, for at least two reasons. Firstly, and most straightforwardly, it expresses the positive feedback dynamics of technological industrialism, in which rapidly-advancing electronic machines continuously revolutionize their own manufacturing infrastructure. Better chips make better robots make better chips, in a spiraling acceleration. Secondly, Moore’s Law is at once an observation and a program. As Wikipedia notes:

“[Moore’s original] paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue ‘for at least ten years’. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. … Although Moore’s law was initially made in the form of an observation and forecast, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard, it can be viewed as a self-fulfilling prophecy.”
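The compounding implied by a two-year doubling period is easy to understate. A minimal sketch (Python), assuming nothing beyond the doubling period itself (the 46-year span simply matches Malone’s ‘almost a half-century’):

```python
# Growth factor after a given number of years of two-year doublings.
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Multiplicative improvement after `years` of Moore's-Law doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (2, 10, 20, 46):
    print(f"{years:2d} years -> x{growth_factor(years):,.0f}")

#  2 years -> x2
# 10 years -> x32
# 20 years -> x1,024
# 46 years -> x8,388,608
```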

Malone comments:

“… semiconductor companies around the world, big and small, and not least because of their respect for Gordon Moore, set out to uphold the Law — and they have done so ever since, despite seemingly impossible technical and scientific obstacles. Gordon Moore not only discovered Moore’s Law, he made it real. As his successor at Intel, Paul Otellini, once told me, ‘I’m not going to be the guy whose legacy is that Moore’s Law died on his watch.'”

If Technological Singularity is the ‘rapture of the nerds’, Gordon Moore is their Moses. Electro-industrial capitalism is told to go forth and multiply, and to do so with a quite precisely time-specified binary exponent. In its adherence to the Law, the integrated circuit industry is uniquely chosen (and a light unto the peoples). As Malone concludes:

“Today, every segment of society either embraces Moore’s Law or is racing to get there. That’s because they know that if only they can get aboard that rocket — that is, if they can add a digital component to their business — they too can accelerate away from the competition. That’s why none of the inventions we Baby Boomers as kids expected to enjoy as adults — atomic cars! personal helicopters! ray guns! — have come true; and also why we have even more powerful tools and toys —instead. Whatever can be made digital, if not in the whole, but in part — marketing, communications, entertainment, genetic engineering, robotics, warfare, manufacturing, service, finance, sports — it will, because going digital means jumping onto Moore’s Law. Miss that train and, as a business, an institution, or a cultural phenomenon, you die.”

[Tomb]