In early February, Matt Schumer, founder of OthersideAI and creator of the AI writing tool HyperWrite, published a 5,000-word essay titled “Something Big Is Happening.” The piece, filled with terror-inducing descriptions of AI’s ability to create new AI, explains that AI’s disruption to society will be “much bigger” than COVID-19, and it’s already happening. The piece racked up more than 60 million views in two days. It’s been the subject of commentary, reporting, and online discourse in the weeks since its publication.

Days later, Mrinank Sharma, head of Anthropic’s Safeguards Research team, resigned with a public letter warning that “the world is in peril.” He cited not just AI risks, but “a whole series of interconnected crises” and announced plans to leave the field entirely to study poetry in the UK…presumably until the world ends in an algorithm-driven apocalypse. Around the same time, OpenAI researcher Zoe Hitzig resigned over concerns about ChatGPT introducing advertising, warning about “the potential for manipulating users in ways we don’t have the tools to understand.”

Congressional hearings feature dire predictions about autonomous weapons. Headlines scream about existential threats and civilization-ending machines. The drumbeat of catastrophic language is relentless. It is also exhausting.

Here’s what no one is saying: This panic helps exactly no one. And it’s actively harming our ability to think about AI and craft sensible policy.

A World in Unexplained Peril

One thing I noticed is missing from all these dire predictions, doomsday siren resignation letters, and alarming X posts: specificity. Sharma warns “the world is in peril,” but doesn’t explain what that peril actually looks like or how it unfolds. Schumer compares AI to COVID-19 but can’t articulate the mechanism of harm beyond job displacement. His parallel to the early moments of the COVID pandemic falls apart within his own piece: Schumer explains that almost no one was talking about COVID before it upended society. Everyone is talking about AI risk. There are, right now, no fewer than three bestselling books about it. One of them carries the subtle subtitle “Why Superhuman AI Would Kill Us All.” Each of these books, like Sharma’s resignation letter, speaks only in apocalyptic abstractions, leaving us to wonder how the world will actually end at the hands of our AI overlords.

Another thing I noticed: There’s something troublingly narcissistic about this framing. These warnings position AI builders as uniquely important figures. These men and women are giants who have developed data-driven gods. They are modern-day Robert Oppenheimers, certain to have future major motion pictures made about their mastery, motives, and remorse. They have ushered in a new era of atomic-level destruction, and they alone truly understand the beast they’ve created. The framing casts them as simultaneously heroes and Cassandras, indispensable to both creating and containing this civilizational threat.

But if the danger is so profound and imminent, why can’t they explain what it actually does? Nuclear weapons vaporize cities. Pandemics spread through populations, killing exponentially. What does the AI apocalypse look like in concrete terms? And crucially: if AI is powerful enough to end civilization, what can AI companies actually do about it that governments and existing institutions cannot?

When the White House Office of Science and Technology Policy held listening sessions on AI policy last year, the challenge wasn’t identifying risks—it was separating genuine technical concerns from science fiction. That distinction matters for agencies trying to allocate limited oversight resources.

Catastrophic language forces a false binary: AI destroys everything, or AI saves everything. Perhaps both are wrong. And perhaps both prevent the nuanced, sector-specific regulation that actually works. Perhaps, after all, we have some autonomy over all this AI.

Disruption Isn’t an Apocalypse

Yes, AI is advancing rapidly. Human resources departments now employ AI tools for resume screening, hiring, and onboarding; AI is even conducting job interviews. That’s made applying for jobs miserable, as hundreds of candidates are screened out based on factors as arbitrary as zip codes, resume length, and how often they smile when talking to a bot. At the same time, it saves companies hundreds of days of work for their actual human HR staff.

So disruption is coming at scale. But disruption—even rapid, widespread disruption—isn’t existential collapse. Desktop computers didn’t eliminate federal employees in the 1980s; they changed what those employees did. The shift to digital services didn’t end government; it made government more accessible.

Ironically, Anthropic CEO Dario Amodei—Sharma’s former boss—had a more sensible take in an essay last month, pushing back against “doomerism.” While warning about genuine risks that require serious attention, Amodei explicitly rejects “thinking about AI risks in a quasi-religious way” and calls for “sober, fact-based” analysis instead of “sensationalistic social media accounts.”

The real policy question isn’t whether AI transforms work and replaces many jobs that require humans typing into computers. It will. The question is whether we manage that transformation to serve the public good or allow market forces alone to determine outcomes.

AI Can’t Police Itself

Should federal agencies build safeguards into AI procurement? Absolutely. Should those safeguards be designed by AI companies without independent oversight? Absolutely not.

AI systems have real technical challenges: unexpected behavior in edge cases, vulnerability to adversarial attacks, and embedded bias that scales. The National Institute of Standards and Technology has documented these risks extensively in its AI Risk Management Framework. These concerns are backed by serious research, not speculation.

But these are engineering and governance problems, not metaphysical inevitabilities. We solve them through rigorous testing protocols, external auditing, and clear regulatory standards—exactly what the Federal Acquisition Regulation does for other mission-critical technologies.

The argument that AI is too complex for traditional oversight is self-serving nonsense. Aviation safety regulators don’t need to build aircraft. Financial regulators don’t need to run banks. Effective oversight requires technical literacy, not industry capture.

We’ve Done This Before

After World War II, the world faced genuinely apocalyptic technology: nuclear weapons. The response wasn’t panic or paralysis. It was governance.

The Atomic Energy Act of 1946 created civilian oversight. The Nuclear Regulatory Commission, established in 1974, set safety standards. International treaties limited proliferation. None of this eliminated risk, but it managed technology that could actually end civilization.

AI—even powerful AI—is far more manageable. It has no agency. It doesn’t spontaneously develop goals. Its risks come from human misuse, inadequate testing, and market incentives misaligned with public interest. That’s fixable with the regulatory tools we already have.

What Real Regulation Looks Like

Instead of panic, federal agencies need clear authorities and resources. For starters, we need mandatory pre-deployment testing for high-risk AI systems in federal procurement, similar to security testing already required for IT systems. GSA’s AI Center of Excellence should have the authority to reject systems that fail transparency or bias audits.

Next, sector-specific oversight by agencies with domain expertise is required. The Department of Transportation should regulate autonomous vehicles. The Department of Health and Human Services should oversee medical AI. The Securities and Exchange Commission should monitor algorithmic trading. One-size-fits-all AI legislation makes as much sense as one agency regulating both aircraft and pharmaceuticals.

Third, international coordination through existing frameworks is necessary. The Organization for Economic Cooperation and Development already has AI principles. The International Organization for Standardization is developing technical standards. We don’t need new institutions; we need to fund and empower existing ones.

Finally, we need transparency requirements that allow independent researchers to audit systems without compromising trade secrets. The model exists: financial services companies protect proprietary trading algorithms while submitting to regulatory examination.

This is detailed, technical, unglamorous work. It’s also the only work that actually improves outcomes.

Why Catastrophizing Backfires

Insisting AI will inevitably end civilization does measurable harm. It undermines rational policy debate by treating uncertainty as certainty. Congress can’t write effective legislation when witnesses oscillate between “AI will cure cancer” and “AI will kill us all.”

This AI panic porn goes viral immediately. It feeds public fear rather than informed engagement, making citizens feel helpless instead of empowered to demand accountability.

It gives companies cover to resist oversight. If only AI developers can understand AI, then only AI developers can regulate AI. This is a convenient argument that concentrates power and profits in the same hands.

The answer isn’t to hyperbolize risk. It’s to address real risks with evidence-based governance.

Choose Governance, Not Panic

Look, AI carries a significant risk of reshaping the global economy and disrupting large sections of the workforce. That may well happen, but it will happen over time. This risk demands sober thought and strategy. It requires wise management through existing institutions. It does not mean doom is preordained. It does not mean traditional regulatory tools don’t work.

Federal agencies can regulate AI development without ceding oversight to industry self-regulation. They can mandate testing, enforce transparency, and coordinate internationally. They’ve done this before with technologies far more dangerous than software.

The choice facing policymakers isn’t between oblivion and utopia. It’s between thoughtful governance using proven regulatory tools and abdication to market forces and industry promises.

Choose governance. Skip the panic.

Joe Buccino is a retired U.S. Army Colonel who last served as Communications Director for U.S. Central Command. He is now the CEO of Joe Buccino Consulting.