
This review is part of our Elon Musk Experience article series.

In the middle of the last decade, if you got more than one beer in me, I would probably at some point start explaining in the sane, reasonable tones of the truly unhinged that one day machine superintelligence is going to become a real threat but that if we’re smart we can harness its power for good. You would politely try to change the subject, or perhaps casually relocate yourself to another table, or make a quip about The Terminator and try to segue into a less awkward discussion of science fiction. Your reaction would be understandable.

Ten years later, I am experiencing the hipsterish, joyless triumph of being the one who liked A.I. Safety before it was cool, dammit, and now everybody is talking about it. Well, what changed?

What changed is that Superintelligence: Paths, Dangers, Strategies (2014) was released.

You see, for the preceding decade or so, the preeminent voice in A.I. Safety was Eliezer Yudkowsky, a very smart man with no compelling credentials. His critics could always dismiss him with some form of snarky one-line bio along the lines of “high school dropout, well-known Harry Potter fanfiction writer, self-proclaimed philosopher and A.I. safety ‘Guru’.” Regardless of the merit of Yudkowsky’s actual arguments, this descriptor does not scream credibility. Predictably, the topic of A.I. Safety could be discredited by ad hominem association with its progenitor.

But now we have this book, Superintelligence, published by OXFORD PROFESSOR OF PHILOSOPHY, DIRECTOR OF THE FUTURE OF HUMANITY INSTITUTE Nick Bostrom. This is the book that influenced celebrities like Elon Musk, Bill Gates, and Stephen Hawking, leading experts in the field of A.I. like Stuart Russell, and dozens of other smart people – and even government organizations like DARPA – to start talking about, working on, and donating to A.I. Safety-related causes.

You might ask, what is Superintelligence, and what purpose does it serve? I suggest its purpose is simply to imbue the fields of A.I. Risk and A.I. Safety with credibility.

As somebody obsessively familiar with this topic, let me assure you that Nick Bostrom’s book says nothing that hasn’t been said before, and that many of the book’s most interesting sections are taken directly from Eliezer Yudkowsky’s prior works almost verbatim. Some of the strongest sections are direct quotes from Yudkowsky. Superintelligence is less about innovating than about popularizing and endorsing.

Also, despite its length, Superintelligence does not appear to be intended as an exhaustive argument for the positions it contains. The format of a typical section involves the statement of a position, a rapid outline of the arguments supporting that position, and a reference pointing to a more thorough treatment of the position elsewhere. The book provides the start of a dozen different breadcrumb trails for the interested reader to pursue at their own whim.

Bostrom also goes out of his way to lasso in any ideas which carry with them their own halo of credibility. Superintelligence hosts a lengthy section on economist Robin Hanson’s conception of a dystopian future dominated by computer-simulated human minds. This interlude is included despite the fact that Hanson’s proposed scenario contradicts the main thrust of Bostrom’s argument, namely, that the real threat is rapidly self-improving A.I., which would outstrip in capability any society of emulated human minds. Why does Bostrom feel compelled to reiterate Hanson’s ideas, despite their awkward fit in his book? Because Hanson is an established academic, a professor with his own trail of publications, his own halo of credibility.

The above paragraphs may read as a criticism of Superintelligence. They are not intended as such. Bostrom has done something useful by collating the ideas of a disreputable field into a book with all the trappings of reputability. In doing so, he has made it possible for serious people to openly discuss these ideas.

I find this process fascinating. The loony pariah is quoted by an Oxford professor and practically overnight, as if by magic, the pariah’s entire body of work goes from taboo to copacetic. The credible professor is even implicitly credited with the ideas – nobody ever actually references Yudkowsky in the press, except by way of Bostrom.

Superintelligence is dense, but understandable to the layman, and potentially highly entertaining if you enjoy thinking about the promise and peril of the future, as I do. If you are interested in A.I., you should read it. If you think the notion of smarter-than-human A.I. is either “impossible” or “very far off” or “not scary”, you should read it and have your confidence put to the test. If you think A.I. is scary, read it for the numerous solutions it proposes to the risks inherent in harnessing superintelligence. And finally, if you’re interested in the process of how a body of ideas becomes imbued with credibility, read Superintelligence for a master class in discourse-reframing.
