In 2003, the philosopher Nick Bostrom proposed a thought experiment on the dangers of artificial intelligence. In his scenario, an AI was tasked with maximizing the number of paper clips in its collection. A superintelligent machine unshackled from human perspectives and ethical frameworks, he argued, would deduce that the best way to grow its collection was to convert all matter in the universe, humans included, into paper clips. The question of how to create the ultimate paper-clip collection would be definitively answered, but at what cost?