A system that assumes benevolent, self-correcting governance is no safer than one that assumes benevolent, aligned superintelligence.
By Roger Bate

Before Covid, I would have described myself as a technological optimist. New technologies almost always arrive amid exaggerated fears. Railways were supposed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for everything from moral decay to physical collapse. Over time, these anxieties faded, societies adapted, and living standards rose. The pattern was familiar enough that artificial intelligence seemed likely to follow it: disruptive, sometimes misused, but ultimately manageable.

Covid changed that. Across much of the world, governments and expert bodies responded to the pandemic's uncertainty with unprecedented social and biomedical interventions, justified by worst-case models and enforced with remarkable certainty. Competing hypotheses were marginalized rather than debated. Emergency measures hardened into long-term policy. When evidence shifted, admissions of error were rare, and accountability rarer still. The experience exposed a problem deeper than any single policy mistake: modern institutions appear poorly equipped to manage uncertainty without overreach.
