Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

The advance of artificial intelligence has gifted us some great headlines recently, from self-driving Teslas to deep learning machines winning at Go. In Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, this fertile zone of technological development forms the basis for a philosophical exploration of where it may all lead. What if artificial intelligence advances to a point where we, humanity, are no longer in control?

Superintelligence is a serious, intellectually disorientating treatment of ideas, imagining the inevitable future when we are able to create an AGI (an artificial general intelligence). An AGI would be capable of successfully performing any task that a human can. Such a machine would thus be capable of recursive self-improvement (on a digital time scale) perhaps rapidly leading to an explosion in its own intelligence. An exponentially self-improving superintelligence, according to Bostrom, would pose a significant threat to human survival.
Pop Matters

This is not a book of facts and discoveries. Instead, Bostrom is playing out a philosophical scenario. He is asking: what if an artificial intelligence is created that not only mimics human levels of intelligence but far surpasses them?

“Superintelligence” is not intended as a treatise of deep originality; Bostrom’s contribution is to impose the rigors of analytic philosophy on a messy corpus of ideas that emerged at the margins of academic thought.
New Yorker

Bostrom sets as his baseline the assumption that this superintelligence will occur. He then extrapolates forward based on what we know today, and on what we have seen that looks similar in the past. Most of the reviews mention that readers need to take a leap of faith with Bostrom, as he builds conjecture upon conjecture:

It may seem an esoteric, even slightly crazy, subject. And much of the book’s language is technical and abstract (readers must contend with ideas such as “goal-content integrity” and “indirect normativity”). Because nobody knows how such an AI might be built, Mr Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture. He is honest enough to confront the problem head-on, admitting at the start that “many of the points made in this book are probably wrong.”
The Economist

But this is not a doomsday book. Bostrom's intention is not to generate fear of AI. Instead, by exploring this potential darker side he wants to spur on the development of research and actionable strategies to help avoid a potentially cataclysmic outcome. Because, as he says himself in an article he wrote for Slate, we cannot take it as a given that our artificially intelligent creations will have humanity's best interests at heart; it is something we need to work towards:

we cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. We will consider later whether it might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose its designers might want it to serve. But it is no less possible—and in fact technically a lot easier—to build a superintelligence that places final value on nothing but calculating the decimal expansion of pi. This suggests that—absent a special effort—the first superintelligence may have some such random or reductionistic final goal.
Slate

This is not a popular science book, and the writing style reflects this: it is described across the reviews as "dense", "opaque", and "technical and abstract" (The Economist), and rather bluntly by The Daily Telegraph as "a damn hard read". But that same reviewer goes on to say:

That’s not a criticism, exactly. Most popular science books are lauded for their ability to render complex subjects in simple language, with little vignettes to sweeten the pill of hard fact. But this should not be thought of as a popular science book; it is a philosophical treatise, and should be read as such.
The Daily Telegraph

But this denseness should not be overstated. Reviews also point to Bostrom's ability to navigate such a complicated realm so effectively. This is a book for a certain audience. Whether that audience is limited to the "techno-futurist Silicon Valley types" suggested by the Yale Scientific review is questionable, but the book certainly gained a prominent readership among the upper echelons of the technorati, with Elon Musk and Bill Gates citing its prescience and importance.

The New Yorker ran a wonderful piece on Bostrom in November 2015, and I recommend you read it if you are on the fence about the book: it fills in many details on Bostrom himself, as well as covering the book's key arguments in some depth:

The book is its own elegant paradox: analytical in tone and often lucidly argued, yet punctuated by moments of messianic urgency. Some portions are so extravagantly speculative that it is hard to take them seriously. (“Suppose we could somehow establish that a certain future AI will have an IQ of 6,455: then what?”) But Bostrom is aware of the limits to his type of futurology. When he was a graduate student in London, thinking about how to maximize his ability to communicate, he pursued stand­­up comedy; he has a deadpan sense of humor, which can be found lightly buried among the book’s self-serious passages. “Many of the points made in this book are probably wrong,” he writes, with an endnote that leads to the line “I don’t know which ones.”
New Yorker

It enters the WhatBook reading list because artificial intelligence is developing fast, and whether or not it reaches superintelligent levels in our lifetime, the subject deserves our attention now. Plus, as one reviewer said, "There's a perverse thrill in reading a book that presages the possible extinction of the human species".

Bostrom has given a TED Talk on this topic, as well as a talk at Google, and discussed the book directly on C-Span. I also recommend a conversation he had on the EconTalk podcast.

Book details:
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Oxford University Press, 324 pages


Review Sources:
New Yorker
The Guardian
Reason
The Financial Times
Words & Dirt
The Economist
Slate
The Daily Telegraph
Piero Scaruffi
The Washington Post
MIT Technology Review
Yale Scientific
Pop Matters
Harvard Science Review


Image from http://www.flickr.com/photos/cblue98/.