Part 2/20:
Shortly after the release of ChatGPT, discussions about the dangers of rogue AI surged within AI circles. Conversations often centered on p(doom), the probability that artificial intelligence would destroy or fundamentally displace humanity. Estimates varied widely: some researchers put the chance below 1%, while others suggested it could be as high as 15-20%. Notably, in May 2023, luminaries like Sam Altman, Bill Gates, and Geoffrey Hinton publicly declared that mitigating the risk of extinction from AI should be a global priority, on par with pandemics and nuclear war. Despite this vocal concern, however, AI development continued at a relentless pace, with many of the signatories racing ahead to release increasingly capable models.