Since the publishing success of Oxford philosopher Nick Bostrom’s Superintelligence in 2014, public debate about humanity’s impending self-destruction has increasingly tended to look beyond nuclear war and ecological catastrophe to worry about the rise of artificial intelligence. The hyperbolic titles adopted by research foundations active in this field—the Future of Humanity Institute (Oxford), the Centre for the Study of Existential Risk (Cambridge), the Future of Life Institute (Boston)—indicate the importance that they, at least, attach to their work: it would appear that more is at stake in so-called ‘ai Safety’ than traffic accidents involving self-driving cars. Prognostications about the existential danger posed by what is variously termed General ai, the Singularity or Superintelligence differ in their technical nuances, but basically envisage the takeover of human civilization by a highly capable machine—in Richard Dooling’s words, a ‘rapture for the geeks’. It is proposed that through repeatedly upgrading its own processing equipment, such a machine could achieve a chain-reaction of ever-greater intelligence and rapidly outstrip the capacities of its original human creators. This scenario was first hypothesised by Bletchley Park codebreaker I. J. Good in 1965, a few years before he advised director Stanley Kubrick on the character of hal 9000. ‘The first ultra-intelligent machine is the last invention that man need ever make’, argued Good, ‘provided that the machine is docile enough to tell us how to keep it under control.’

A coinage of Dartmouth mathematician and computer scientist John McCarthy in the mid-1950s, artificial intelligence is a deliberately ambiguous term: it doesn’t refer to a particular set of technologies, nor is it quite a specific area of technical research, yet it keeps its popular currency through apocalyptic news stories and hardly less alarmist ‘non-fiction’ books. Max Tegmark’s Life 3.0 purports to offer a manual of sorts for making the most of artificial intelligence’s ascent, which in a technofuturist version of Pascal’s wager it treats as a non-negligible possibility we would do well to take seriously. Like Bostrom, Tegmark is a Swedish-born public intellectual berthed in Anglophone academia. ‘Mad Max’, as he styles himself, credits an early switch from economics to physics to reading Richard Feynman, a Caltech theoretical physicist and bestselling memoirist. Swapping Stockholm for California, he completed a PhD on the cosmology of the early universe at Berkeley in 1994 and has taught physics at mit since 2003. A number of papers and a debut book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (2014), developed a not uncontroversial argument for the solely mathematical basis of physical matter and life itself. Certain mathematical systems are ‘complex enough to contain self-aware substructures’ which ‘subjectively perceive themselves as existing in a physically “real” world’. This Flatland approach to human existence affords perhaps the ideal angle of approach to the problem of artificial intelligence. Cambridge computer scientist Alan Blackwell is among those who have pointed out that if the objective of artificial intelligence is to bring computers and humans closer together, one can either make machines more like humans, or make humans more like machines—as Jean-Pierre Dupuy puts it, ‘not the anthropomorphization of the machine but rather the mechanization’—or mathematization—‘of the human’.

Tegmark now spends much of his time working on ai Safety, specifically the question of ‘how to flourish rather than flounder with ai’. Unsurprisingly, given its author’s background, Life 3.0 contains rather more detail about the structure of the Milky Way than about the nuts and bolts of machine-learning systems, never mind the role these systems play in global capitalism. Although he makes a case in broad terms for the likely advent of a superintelligence, Tegmark has no time for the computer engineer’s practical difficulties in actually constructing one. Instead he blithely maintains that the only limits to our ambition should be those imposed by the physical laws of the universe. An entire chapter is given over to an excited discussion of how an ‘intelligence explosion’ could be used to propel an optimized form of space exploration and colonization complete with Dyson spheres, Stephen Hawking black-hole power plants and boat trips to Alpha Centauri using a laser sail. The ability to reel off such fancies explains why science popularizers are nearly always physicists and usually cosmologists: in this sense, Tegmark belongs in a line running from Carl Sagan to Hawking, Martin Rees and Neil deGrasse Tyson. The ai Safety milieu largely conforms to the general pattern, though with an admixture of wealthy industrialists. Artificial-intelligence researchers are certainly involved as well, but it would be a mistake to think that the movement emerged organically among concerned engineers working in machine-learning laboratories.

The prologue to Life 3.0 imagines the rise of a super-intelligent computer named Prometheus. Developed in a corporate research-and-development facility, Prometheus earns money by working on Amazon Mechanical Turk, publishes cgi films on something resembling Netflix, generates a new tech boom by drip-feeding r&d reports to human researchers, manipulates electoral politics through online media and ultimately engineers the creation of a world state run by the computer’s company—though Tegmark insists that Prometheus would be bound to slip its leash sooner or later. Life 3.0 invites the reader to weigh up the plausibility of this Asimov-esque tale and to consider the social implications of machine intelligence run amok. With characteristic hubris, Tegmark bills it as ‘the most important conversation of our time’.

In its attempt to address an array of computational, cosmological, philosophical and public-policy questions, Life 3.0 is a blunderbuss of a book. Let’s start with its premise that life is an ‘information-processing system’. This analogy has its roots in the cybernetics of the late 1940s, itself largely a continuation of us control-electronics research for radar and weapons systems in the Second World War. However, Tegmark sees biological intelligence not just as a processing system in an abstract sense, but as formally equivalent to current computing architectures: the brain is a ‘powerful computer’. Today’s computing machinery presupposes a complete separation of hardware and software. It is this distinction that underpins Tegmark’s taxonomy of life, the stages of which mimic the nomenclature of software products. Life 1.0 is simple biological life, with pre-programmed behavioural patterns and a reliance on natural evolution for its development. Life 2.0 is human life, which inherits its hardware but largely designs its own software: we are able to learn French, for example. (People with artificial knees or similar enhancements, Tegmark adds, might be considered Life 2.1.) Then there is Life 3.0, technological life. Like the Third Temple, this is the ultimate state, the ‘final upgrade’: life that is capable of redesigning both its software and hardware.
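The scheme translates readily into the software idiom it borrows from. A loose sketch, assuming nothing beyond the summary above (the class names and attributes are my own shorthand, not Tegmark’s):

```python
# An illustrative rendering of Tegmark's taxonomy; 'hardware' stands
# for body, 'software' for behaviour. Not from the book itself.

class Life1:
    """Life 1.0 (bacteria): hardware and software both fixed at birth;
    change arrives only through evolution across generations."""
    hardware = "biological"
    software = {"stimulus": "pre-programmed response"}

class Life2(Life1):
    """Life 2.0 (humans): hardware still inherited, but software is
    largely self-designed, i.e. learned within a lifetime."""
    def learn(self, stimulus, response):
        self.software = {**Life1.software, stimulus: response}

class Life3(Life2):
    """Life 3.0, the 'final upgrade': software and hardware alike
    become redesignable."""
    def upgrade(self, new_hardware):
        self.hardware = new_hardware   # e.g. from carbon to silicon

human = Life2()
human.learn("bonjour", "hello")        # learning French: a software edit
```

The inheritance chain is the point: each stage keeps the previous one’s capacities while unfixing something that was previously given.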

Life 3.0 attempts to soothe any qualms the reader may have, firstly about seeing natural life and machines placed on the same evolutionary ladder, and secondly as to whether machines really have the wherewithal one day to occupy the top rung. ‘The conventional wisdom among artificial-intelligence researchers is that intelligence is ultimately all about information and computation, not about flesh, blood or carbon atoms. This means there’s no fundamental reason why machines can’t one day be at least as intelligent as us.’ In the course of a brief tour of the fundamentals of computing—memory devices, logic gates and so on—it introduces two elegant results which point to the ability of machines to compute any well-defined function. One is that you can make any combinational logic circuit—that is, anything that can be described with a mathematical truth-table—out of an arrangement of nand (short for not and) logic gates. The other is Alan Turing’s famous proof that any computer competent across a certain minimum set of operations is capable—given infinite time and memory—of simulating any other computer. ‘The fact that exactly the same computation can be performed on any universal computer’, argues Tegmark, ‘means that computation is substrate-independent’ (Bostrom’s term), and thus that intelligence ‘needs no body, only an internet connection’. But from a technical standpoint, universality and disembodiment are unrelated: no ai researcher would dispute Turing’s proof, yet many would question the potential for an ethereal machine to learn about our world through Wikipedia and Twitter, rather than by touching and interfering with it.
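The first of these results is easy to check for oneself. A minimal sketch (in Python, with function names of my own choosing) composes NOT, AND, OR and XOR from NAND alone and verifies each against its truth-table:

```python
def nand(a, b):
    """The one primitive gate: 0 only when both inputs are 1."""
    return 1 - (a & b)

def not_(a):        # NOT from a single NAND
    return nand(a, a)

def and_(a, b):     # AND is just NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR from the classic four-NAND arrangement
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Check the composed gates against their truth-tables.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
```

Since any truth-table can be realized by wiring such gates together, NAND suffices for the whole of combinational logic, which is all the first result asserts.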

Current applications of artificial intelligence fall short of human, let alone super-human, intelligence. Nevertheless, Tegmark is encouraged by recent developments. In Seoul in March 2016, DeepMind, a British start-up acquired by Google, beat one of the world’s leading Go players, Lee Sedol, by four games to one in a widely televised set of encounters. This marked an important advance on ibm Deep Blue’s famous 1997 victory over chess grandmaster Garry Kasparov, chiefly because Go has a far higher branching factor, and hence a much more rapidly expanding number of possible board-states. The ibm machine is what we would now describe as gofai, or Good Old-Fashioned Artificial Intelligence. Deep Blue used a heuristic metric based on the knowledge of chess experts to evaluate the relative strengths of a large number of possible future scenarios. It didn’t learn from its opponents, nor from its own mistakes. By contrast, DeepMind’s AlphaGo programme was given no such hand-crafted metric. Instead it learned first by imitating human players—downloading recorded games from the internet—and then by playing against itself many times. Rather than being programmed to play in a certain way, it is an example of a machine-learning system, statistically modifying its behaviour in order to optimize a particular measure of success: in this case, the proportion of games won.
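The distinction can be made concrete with a toy. In the sketch below (my illustration, not anything from the book), one player follows a Deep Blue-style hand-crafted rule for the counting game of 21, in which players alternately add 1, 2 or 3 to a running total and whoever reaches 21 or beyond loses; a learning agent, by contrast, starts from nothing and merely adjusts its move statistics in self-play so as to raise the proportion of games won:

```python
import random
from collections import defaultdict

MOVES = (1, 2, 3)
TARGET = 21   # whoever reaches 21 or beyond loses

def heuristic_move(total):
    """Deep Blue in miniature: a hand-crafted rule encoding expert
    knowledge (always leave the running total on a multiple of 4)."""
    for m in MOVES:
        if (total + m) % 4 == 0:
            return m
    return random.choice(MOVES)   # no winning move exists from here

class Learner:
    """AlphaGo in miniature: no built-in rule. Move preferences are
    bare statistics, nudged after each game towards winning play."""
    def __init__(self):
        self.score = defaultdict(lambda: {m: 0.0 for m in MOVES})

    def move(self, total, explore=0.1):
        if random.random() < explore:   # occasionally try something new
            return random.choice(MOVES)
        prefs = self.score[total]
        return max(prefs, key=prefs.get)

    def update(self, played, won):
        for total, m in played:         # crude Monte Carlo credit:
            self.score[total][m] += 1 if won else -1

def self_play(agent, games=20000):
    for _ in range(games):
        total, player, played = 0, 0, ([], [])
        while total < TARGET:
            m = agent.move(total)
            played[player].append((total, m))
            total += m
            player = 1 - player
        # the previous mover pushed the total to 21 or past it, and lost
        agent.update(played[1 - player], won=False)
        agent.update(played[player], won=True)
    return agent
```

Nothing about multiples of four is written into the Learner; after enough self-play its statistics tend to approximate the very rule the heuristic encodes by fiat. That, in miniature, is the sense in which AlphaGo was ‘given no such hand-crafted metric’.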