The advance of artificial intelligence has gifted us some great headlines recently, from self-driving Teslas to deep learning machines winning at Go. In Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, this fertile zone of technological development forms the basis for a philosophical exploration of where it may all lead. What if artificial intelligence advances to the point where we, humanity, are no longer in control?
Superintelligence is a serious, intellectually disorientating treatment of ideas, imagining the seemingly inevitable future in which we are able to create an AGI (artificial general intelligence): a machine capable of successfully performing any intellectual task that a human can. Such a machine would thus be capable of recursive self-improvement on a digital time scale, perhaps leading rapidly to an explosion in its own intelligence. An exponentially self-improving superintelligence, Bostrom argues, would pose a significant threat to human survival.