Our collective failure

Extinction, a fate normally reserved for non-human animals, may concern us after all. It is time to take our responsibility for the future of life seriously.

"The dinosaurs became extinct because they didn't have a space program. And if we become extinct because we don't have a space program, it'll serve us right!" This sobering quote by the American science-fiction writer Larry Niven captures the motivation behind Elon Musk's decision to build SpaceX and colonize Mars. The reality that we are only marginally better prepared for an asteroid impact than the dinosaurs were is somewhat unsettling, but the simple fact that we have survived the first two million years of our existence without being terminated by natural existential risks implies that the estimated probability of such an event happening in the next 100 years is pretty low.

Conversely, man-made existential risks such as a supervirus or nuclear weapons, which have existed for merely a few decades, have already brought us closer to the brink of extinction than is commonly known. For example, if it weren't for a few brave individuals such as Vasili Arkhipov (1962) or Stanislav Petrov (1983), civilization might have been wiped out by a nuclear winter, and you and I would never have been born.

The promise and peril of technology

Technological progress has improved our lives in a myriad of ways and significantly prolonged our lifespan. However, it also provides us with the tools of our own extinction. In theory, it takes only a single individual out of seven billion to infect power plants with a computer virus and trigger a global system collapse, or to design, 3D-print and disseminate a physical killer virus and play a real-life version of "Plague Inc.".

Even scarier than that is the "grey goo" scenario described by nanotechnology pioneer Eric Drexler in 1986. Tiny self-replicating robots, a type of von Neumann machine, could spread like an aggressive form of planetary cancer and turn all suitable matter into "grey goo". Unfortunately, competitive success in an environment and the value something creates in terms of consciousness are not the same thing.

However, the real story of technology isn't that it increases the leverage of the hairless ape; it's that it takes over the world. Current artificial intelligence is narrowly confined to specific domains: a chess computer plays chess, an attack drone kills people, so far, so good. Nonetheless, the recent and projected advances in machine learning and neuroscience leave little doubt that Artificial General Intelligence will emerge. It may take a few decades or only 5 to 10 years, as Elon Musk has projected, but unless we manage to kill ourselves before then, it will come, and the implications of changing the main actor on the world stage can hardly be overstated.

Intelligence Explosion

"Will computers ever be as smart as humans? Yes, but only briefly," says Vernor Vinge, the author of the 1993 essay "The Coming Technological Singularity". The reason for that goes back to the concept of an intelligence explosion, introduced by the British mathematician I. J. Good, a colleague of Alan Turing. Through recursive self-improvement, a human-level AI could develop into a superintelligence without any human interference, and it may do so pretty fast. It's like compound interest, with the minor difference that in silico communication is about 10 million times faster than our neurons, reducing a subjective life year to a few seconds.
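To see why a subjective life year shrinks to a few seconds, here is a minimal back-of-the-envelope sketch; the 10-million-fold speedup is simply the figure quoted above, not a measured constant:

```python
# Back-of-the-envelope: how much wall-clock time a subjective year takes
# for a mind running ~10 million times faster than biological neurons.
SPEEDUP = 10_000_000                 # in-silico vs. neural signaling speed, as quoted above
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds

wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in about {wall_clock_seconds:.1f} seconds")
# -> roughly 3 seconds of real time per subjective year
```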

Artificial General Intelligence is our final invention. The future doesn't need us after that, and jobs aren't the main concern here, in the same way that the most interesting question for non-human animals is not what role they play in the human economy, but whether or not they survive the sixth mass extinction. As Eliezer Yudkowsky puts it: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." On the other hand, a friendly superintelligence could also solve all of our problems, including death, and it might use some of the negentropy in the reachable universe to let you live the life of your dreams.

The price tag of extinction

Reducing existential risk and ensuring a good outcome of the intelligence explosion is probably the issue with the highest expected marginal positive impact, measured in quality-adjusted human or human-equivalent life years (QALYs).

To understand the opportunity costs of premature extinction, or of setting up the initial conditions for an outcome in which a philosophical-zombie AI consumes all the resources, we need to assess the potential of Earth-originating life. Space is expanding and the speed of interaction (c) is fundamentally limited. Unless an Alcubierre drive, which bends space itself, should actually work, this implies that the number of physically reachable stars is finite and declining. The volume encompassing this set of stars is called the Hubble sphere, and it currently has a radius of about 14 billion light years. Oxford philosopher Nick Bostrom has estimated that a society able to travel at 50% of light speed could reach about 6×10^18 stars, whose planets under modest assumptions provide for 10^37 QALYs, or 10^35 unimpaired human lives of 100 years.

However, a technologically mature society might also take materials from planets and asteroids and create its own O'Neill colonies in the habitable Goldilocks zone of a star. Bostrom estimates that this increases the life potential to the equivalent of about 10^43 human lives. A little more daringly, he also calculates the energy output of stars that could be captured by a Dyson sphere surrounding them and compares it with the computational power needed to simulate a human life year on computronium, an arrangement of matter that computes as efficiently as theoretically possible. His conclusion: the size of our cosmic endowment is conservatively estimated at the equivalent of 10^58 human lives. That's roughly 10^48 people for every person alive during the intelligence explosion! What do we say to those people? Sorry, but I wasted the bulk of my mental energy during my intellectual prime projecting my evolutionary in-group/out-group thinking onto non-issues? Your species needs you!
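For readers who want to retrace the figures in the last two paragraphs, here is a minimal sketch of the arithmetic; the population figure of 10^10 is my own rough assumption for the number of people alive at the time of the intelligence explosion:

```python
import math

# Back-of-the-envelope check of the cosmic-endowment figures quoted above.
QALY_BUDGET = 1e37          # Bostrom's planetary estimate (quality-adjusted life years)
COMPUTRONIUM_LIVES = 1e58   # conservative computronium estimate quoted above
PEOPLE_ALIVE = 1e10         # assumption: ~10 billion people alive at the intelligence explosion

lives_planetary = QALY_BUDGET / 100                   # 100-year lives -> 1e35
lives_per_person = COMPUTRONIUM_LIVES / PEOPLE_ALIVE  # -> 1e48

print(f"Planetary scenario: about 10^{math.log10(lives_planetary):.0f} unimpaired human lives")
print(f"Potential future lives per person alive today: about 10^{math.log10(lives_per_person):.0f}")
```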

Illustration: Kevin Kohler

