Part 14/16:
Some argue that the burden of proof lies on AI developers to demonstrate that future AI systems are safe. The speaker pushes back, calling this fallacious and likening it to claiming that because you cannot prove a car won't crash tomorrow, you should stop driving. It is an unreasonable standard: "you can't prove a negative."
Building safe AI is a scientific and engineering challenge, not a moral obligation to disprove an ill-defined future threat. As with many other hazards (e.g., natural disasters, pandemics), sensible risk management means mitigation, not the impossible task of total prevention.