Superintelligence | Dominik Mayer – Products, Asia, Productivity

Superintelligence  

In his article “Will Superintelligent Machines Destroy Humanity?” Ronald Bailey reviews Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies”:

Bostrom argues that it is important to figure out how to control an AI before turning it on, because it will resist attempts to change its final goals once it begins operating. In that case, we’ll get only one chance to give the AI the right values and aims. […]

An example of the first approach would be to try to confine the AI to a “box” from which it has no direct access to the outside world. Its handlers would then treat it as an oracle, posing questions to it such as how we might exceed the speed of light or cure cancer. But Bostrom thinks the AI would eventually get out of the box, noting that “Human beings are not secure systems, especially when pitched against a superintelligent schemer and persuader.”

Fascinating read. Another book for my to-do list.