The “Pro-Human Declaration,” backed by experts across the political spectrum, outlines a framework to keep humans in control of increasingly powerful AI systems.
As political battles over artificial intelligence intensify, a group of researchers, policymakers, and public figures has released a new proposal for how AI development should be governed.
The document, known as the Pro-Human Declaration, aims to fill what supporters describe as a dangerous vacuum in U.S. AI regulation.
Its release coincides with mounting controversy over military AI partnerships, including the Pentagon’s recent dispute with Anthropic and subsequent deal with OpenAI—events that exposed how few formal rules currently exist around powerful AI systems.
A Fork in the Road for AI
The declaration frames the future of artificial intelligence as a stark choice.
According to its authors, humanity now faces two paths:
- A “race to replace,” where AI systems gradually displace humans as workers and decision-makers.
- A future where AI expands human potential while remaining firmly under human control.
The latter path requires clear governance, the authors argue.
The document is backed by hundreds of signatories, including technologists, former government officials, and public figures across the political spectrum.
Five Principles for Responsible AI
At the heart of the declaration are five core pillars for managing advanced AI systems:
- Keep humans in charge of decision-making
- Prevent excessive concentration of AI power
- Protect human experience and autonomy
- Preserve individual liberty
- Hold AI companies legally accountable
The framework reflects growing concern that AI development is advancing faster than political institutions can regulate it.
Proposed Limits on Powerful AI Systems
The declaration goes further than many policy proposals currently debated in Washington.
Among its strongest recommendations:
- Pause development of superintelligence until scientists agree it can be built safely
- Require mandatory shutdown mechanisms (“off-switches”) in powerful AI systems
- Ban architectures capable of self-replication, autonomous self-improvement, or resisting shutdown
These proposals echo longstanding safety debates within the AI research community.
The Pentagon-Anthropic Dispute Raises the Stakes
The policy discussion gained urgency after the Pentagon labeled Anthropic a “supply-chain risk.”
The designation followed a dispute in which the company refused to grant the Defense Department unrestricted access to its AI systems.
Shortly afterward, OpenAI signed its own deal with the Pentagon, allowing its technology to be used in classified environments.
For many observers, the episode highlighted a key reality: AI governance is currently being shaped by corporate negotiations rather than public policy.
Polling Suggests Public Concern
MIT physicist and AI researcher Max Tegmark, one of the organizers of the declaration, believes public sentiment has shifted rapidly.
Recent polling suggests that 95% of Americans oppose an unregulated race to superintelligence, he said.
“This is the first conversation we have had as a country about control over AI systems,” noted Dean Ball, a senior fellow at the Foundation for American Innovation.
A Regulatory Model Inspired by Drug Safety
Tegmark compares the need for AI regulation to the way governments regulate pharmaceuticals.
Drug companies cannot release new medications until regulators determine they are safe enough for public use.
AI systems, he argues, should face similar pre-release testing requirements.
“You never have to worry that some drug company is going to release something harmful before safety testing,” Tegmark said.
Child Safety as the Political Catalyst
One area where consensus might emerge quickly is protecting children from harmful AI interactions.
The declaration calls for mandatory testing of AI systems before deployment, particularly those aimed at young users.
Testing would evaluate risks such as:
- Mental health harms
- Manipulative interactions
- Encouragement of self-harm
Tegmark argues that society already treats similar behavior as criminal when committed by humans.
“If a person manipulates a child online to harm themselves, they can go to jail,” he said.
“So why should it be different if a machine does it?”
A Rare Bipartisan Alliance
Perhaps the most striking aspect of the declaration is its political diversity.
Signatories include figures rarely aligned on policy:
- Steve Bannon, former adviser to Donald Trump
- Susan Rice, national security adviser under Barack Obama
- Mike Mullen, former chairman of the Joint Chiefs of Staff
- Progressive faith leaders and technology researchers
The common ground, Tegmark says, is simple.
“They’re all human,” he noted.
Will Washington Listen?
Despite growing urgency, AI legislation in the United States remains fragmented and slow-moving.
Without clear rules, critics warn that decisions about powerful AI systems may be left largely to corporations and defense agencies.
The Pro-Human Declaration offers a blueprint—but whether lawmakers adopt it remains uncertain.
Still, its authors argue that the stakes could not be higher. The question at the center of the debate is not merely technological but existential: who ultimately controls the future of intelligence itself?
TL;DR:
A coalition of experts has released the Pro-Human Declaration, a framework for governing AI development. The proposal calls for keeping humans in control, banning self-replicating AI systems, and requiring safety testing before deployment—amid growing concerns about military AI deals and the absence of clear U.S. regulation.