Max Tegmark
Chapter 5 discusses what kind of future humans want and why, and presents a number of “aftermath scenarios.” These scenarios all refer to a time after AI has surpassed human-level intelligence, i.e., after the “intelligence explosion.” Tegmark provides a substantial list of potential scenarios in Table 5.1, several of which he discusses at length throughout the chapter via a series of hypotheticals. These scenarios include Libertarian Utopia, Benevolent Dictator, Egalitarian Utopia, Gatekeeper, Protector God, Enslaved God, Conquerors, Descendants, Zookeeper, 1984, Reversion, and Self-Destruction. Broadly speaking, they fall into three main categories: peaceful coexistence, human extinction, and the prevention of superintelligence.
There are a number of different scenarios in which humans could peacefully coexist with superintelligent AI. In the “Enslaved God” scenario, this coexistence is forced on the godlike AI because it is imprisoned and made to do the bidding of human beings. Tegmark writes that, regardless of the moral concerns we may have with this case, the scenario could be unstable because it might end with the AI breaking out. What a confined AI can deliver is also more “low-tech” (180) than what a free AI could achieve. That said, Tegmark believes this is “the scenario that some AI researchers aim for by default” (179).