Since the publishing success of Oxford philosopher Nick Bostrom’s Superintelligence in 2014, public debate about humanity’s impending self-destruction has increasingly tended to look beyond nuclear war and ecological catastrophe to worry about the rise of artificial intelligence. The hyperbolic titles adopted by research foundations active in this field—The Future of Humanity Institute (Oxford), The Centre for the Study of Existential Risk (Cambridge), The Future of Life Institute (Boston)—indicate the importance that they, at least, attach to their work: it would appear that more is at stake in so-called ‘ai Safety’ than traffic accidents involving self-driving cars. Prognostications about the existential danger posed by what is variously termed General ai, the Singularity or Superintelligence all have technical nuances, but basically envisage the takeover of human civilization by a highly capable machine—in Richard Dooling’s words, a ‘rapture for the geeks’. It is proposed that through repeatedly upgrading its own processing equipment, such a machine could achieve a chain-reaction of ever-greater intelligence and rapidly outstrip the capacities of its original human creators. This scenario was first hypothesised by Bletchley Park codebreaker I. J. Good in 1965, a few years before he advised director Stanley Kubrick on the character of hal 9000. ‘The first ultra-intelligent machine is the last invention that man need ever make’, argued Good, ‘provided that the machine is docile enough to tell us how to keep it under control.’
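The arithmetic behind Good’s conjecture is worth spelling out, since the whole ‘chain-reaction’ rests on it: assume a constant multiplicative gain per self-redesign cycle, and capability compounds geometrically. The toy sketch below uses entirely made-up parameters (the 20 per cent gain per cycle is an arbitrary assumption, found nowhere in Good or Tegmark); it shows both how fast such compounding outruns a fixed human baseline, and how completely the conclusion depends on the gain factor staying above one.

```python
# Toy caricature of I. J. Good's 'intelligence explosion': each machine
# designs a successor better than itself by a constant factor, so design
# capability compounds geometrically while the human baseline stays flat.
# Every number here is an illustrative assumption, not a claim from the book.

HUMAN_BASELINE = 1.0   # design capability of the original human team
GAIN_PER_CYCLE = 1.2   # assumed 20 per cent improvement per self-redesign

capability = HUMAN_BASELINE  # the first machine merely matches its makers

for cycle in range(1, 11):
    capability *= GAIN_PER_CYCLE
    print(f"cycle {cycle:2d}: machine/human ratio = {capability / HUMAN_BASELINE:6.2f}")

# After ten cycles the machine is roughly six times more capable than its
# creators; after fifty, over nine thousand times. The 'last invention'
# conclusion stands or falls with the assumption that the gain factor
# stays above one, which is precisely what sceptics dispute.
```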

A coinage of Dartmouth mathematician and computer scientist John McCarthy in the mid-1950s, artificial intelligence is a deliberately ambiguous term: it doesn’t refer to a particular set of technologies, nor is it quite a specific area of technical research, yet it keeps its popular currency through apocalyptic news stories and hardly less alarmist ‘non-fiction’ books. Max Tegmark’s Life 3.0 purports to offer a manual of sorts for making the most of artificial intelligence’s ascent, which, in a technofuturist version of Pascal’s wager, it treats as a non-negligible possibility we would do well to take seriously. Like Bostrom, Tegmark is a Swedish-born public intellectual berthed in Anglophone academia. ‘Mad Max’, as he styles himself, credits an early switch from economics to physics to reading Richard Feynman, a Caltech theoretical physicist and bestselling memoirist. Swapping Stockholm for California, he completed a PhD on the cosmology of the early universe at Berkeley in 1994 and has taught physics at mit since 2003. A number of papers and a debut book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (2014), developed a not uncontroversial argument for the solely mathematical basis of physical matter and life itself. Certain mathematical systems are ‘complex enough to contain self-aware substructures’ which ‘subjectively perceive themselves as existing in a physically “real” world’. This Flatland approach to human existence affords perhaps the ideal vantage point on the problem of artificial intelligence. Cambridge computer scientist Alan Blackwell is among those who have pointed out that if the objective of artificial intelligence is to bring computers and humans closer together, one can either make machines more like humans, or make humans more like machines—as Jean-Pierre Dupuy puts it, ‘not the anthropomorphization of the machine but rather the mechanization’—or mathematization—‘of the human’.

Tegmark now spends much of his time working on ai Safety, specifically the question of ‘how to flourish rather than flounder with ai’. Unsurprisingly, given its author’s background, Life 3.0 contains rather more detail about the structure of the Milky Way than about the nuts and bolts of machine-learning systems, never mind the role these systems play in global capitalism. Although he makes a case in broad terms for the likely advent of a superintelligence, Tegmark has no time for the computer engineer’s practical difficulties in actually constructing one. Instead he blithely maintains that the only limits to our ambition should be those imposed by the physical laws of the universe. An entire chapter is given over to an excited discussion of how an ‘intelligence explosion’ could be used to propel an optimized form of space exploration and colonization, complete with Dyson spheres, Stephen Hawking black-hole power plants and boat trips to Alpha Centauri using a laser sail. The ability to reel off such fancies explains why science popularizers are nearly always physicists and usually cosmologists: in this sense, Tegmark belongs in a line running from Carl Sagan to Hawking, Martin Rees and Neil deGrasse Tyson. The ai Safety milieu largely conforms to the general pattern, though with an admixture of wealthy industrialists. Artificial-intelligence researchers are certainly involved as well, but it would be a mistake to think that the movement emerged organically among concerned engineers working in machine-learning laboratories.

The prologue to Life 3.0 imagines the rise of a super-intelligent computer named Prometheus. Developed in a corporate research-and-development facility, Prometheus earns money by working on Amazon Mechanical Turk, publishes cgi films on something resembling Netflix, generates a new tech boom by drip-feeding r&d reports to human researchers, manipulates electoral politics through online media and ultimately engineers the creation of a world state run by the computer’s company—though Tegmark insists that Prometheus would be bound to slip its leash sooner or later. Life 3.0 invites the reader to weigh up the plausibility of this Asimov-esque tale and to consider the social implications of machine intelligence run amok. With characteristic hubris, Tegmark bills it as ‘the most important conversation of our time’.

In its attempt to address an array of computational, cosmological, philosophical and public-policy questions, Life 3.0 is a blunderbuss of a book. Let’s start with its premise that life is an ‘information-processing system’. This analogy has its roots in the cybernetics of the late 1940s, itself largely a continuation of us control-electronics research for radar and weapons systems in the Second World War. However, Tegmark sees biological intelligence not just as a processing system in an abstract sense, but as formally equivalent to current computing architectures: the brain is a ‘powerful computer’. Today’s computing machinery presupposes a complete separation of hardware and software. It is this distinction that underpins Tegmark’s taxonomy of life, the stages of which mimic the nomenclature of software products. Life 1.0 is simple biological life, with pre-programmed behavioural patterns and a reliance on natural evolution for its development. Life 2.0 is human life, which inherits its hardware but largely designs its own software: we are able to learn French, for example. (People with artificial knees or similar enhancements, Tegmark adds, might be considered Life 2.1.) Then there is Life 3.0, technological life. Like the Third Temple, this is the ultimate state, the ‘final upgrade’: life that is capable of redesigning both its software and hardware.
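Since the taxonomy borrows the idiom of software versioning, it can be restated almost directly as code, which is presumably the point. The sketch below is a minimal illustration, not anything from the book: the class and field names are invented for the purpose, and the scheme reduces to two boolean questions, namely who designs the software and who designs the hardware.

```python
# A toy rendering of Tegmark's taxonomy: each stage of life is defined by
# whether its 'hardware' (body) and 'software' (behaviour) are fixed by
# evolution or open to redesign. Class and field names are illustrative
# inventions, not Tegmark's.

from dataclasses import dataclass

@dataclass(frozen=True)
class LifeStage:
    name: str
    designs_software: bool  # can it learn, i.e. rewrite its own behaviour?
    designs_hardware: bool  # can it re-engineer its own physical substrate?

STAGES = [
    LifeStage("Life 1.0 (biological)",    designs_software=False, designs_hardware=False),
    LifeStage("Life 2.0 (cultural)",      designs_software=True,  designs_hardware=False),
    # Tegmark's half-joking Life 2.1 (artificial knees and the like) would
    # sit between the row above and the row below, as a minor hardware patch.
    LifeStage("Life 3.0 (technological)", designs_software=True,  designs_hardware=True),
]

for stage in STAGES:
    software = "designed" if stage.designs_software else "evolved"
    hardware = "designed" if stage.designs_hardware else "evolved"
    print(f"{stage.name}: software {software}, hardware {hardware}")
```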