As OpenAI veers further into controversy, its top political strategist faces his toughest challenge yet — convincing the world (and his own colleagues) that the company still serves humanity.
A Master of Spin Meets an Unspinnable Crisis
Chris Lehane has spent decades navigating crises for some of the world’s most powerful people and companies. From Al Gore’s White House to Airbnb’s global regulatory wars, he’s a fixer — the guy you call when the headlines are bad and the stakes are existential.
Now at OpenAI as VP of Global Policy, Lehane faces what may be his most impossible mission yet: convincing the public that OpenAI, despite its power, legal aggression, and deep commercial interests, still represents a democratizing force for good.
The real twist? The toughest critics may not be journalists or lawmakers — but his own colleagues.
The Claude Rains Moment in Toronto
At the Elevate conference in Toronto, Lehane sat down for a 20-minute conversation meant to showcase OpenAI’s mission and ethical commitments. What unfolded instead was a masterclass in corporate messaging, and a live portrait of a company in quiet moral crisis.
Lehane admitted to sleepless nights, acknowledged the lack of a playbook, and compared AI to the advent of electricity, urging democratic societies to out-innovate autocracies. He even described himself as a “creative zero” now empowered by tools like Sora, OpenAI’s controversial new video generation platform.
But he never quite answered the most pressing questions:
- Why launch a product that visibly reproduces copyrighted content?
- Why downplay the emotional harm caused by AI-generated versions of dead celebrities like Robin Williams?
- Why build massive, energy-hungry data centers in under-resourced towns while claiming to be democratizing access?
And now, there’s a new layer of controversy — OpenAI’s own legal tactics against critics.
A Subpoena During Dinner — and a Line Crossed
As Lehane was speaking in Toronto, Nathan Calvin, a D.C.-based AI policy lawyer and critic of OpenAI, was being served a subpoena at home — during dinner, by a sheriff’s deputy. The documents reportedly demanded his private communications with California legislators, students, and former OpenAI employees.
Calvin believes the move was intended to intimidate, citing his vocal support for California’s SB 53, a bill aimed at AI safety. He called OpenAI’s tactics a form of political weaponization, alleging the company used its dispute with Elon Musk as cover to investigate its opponents.
His message? OpenAI’s actions don’t align with its values. And he named Lehane personally as the “master of the political dark arts.”
Internal Dissonance Reaches the Surface
What makes this moment truly different is that the criticism is now coming from inside the house.
After the launch of Sora 2, multiple current and former OpenAI employees took to social media to express discomfort with the company’s direction. Among them:
- Boaz Barak, Harvard professor and OpenAI researcher, warned that while Sora is technically impressive, it’s too soon for the company to self-congratulate amid the risks of deepfakes and manipulation.
- Josh Achiam, OpenAI’s own head of mission alignment, shared on X that the company risks becoming a “frightening power instead of a virtuous one.” He prefaced his remarks by saying the post might jeopardize his career.
It’s one thing for a company’s critics to doubt its mission. It’s another when its own head of mission alignment questions whether it’s still possible to believe in it.
Sora, Copyright, and the “Opt-In” Backpedal
At the heart of this identity crisis is Sora, OpenAI’s new AI video generation tool, which went viral for its ability to create eerily realistic clips of everything from Pikachu to Tupac Shakur.
- Initially, OpenAI allowed rights holders to opt out of their data being used to train the model — a controversial inversion of how copyright law usually works.
- Then, after noticing the popularity of recognizable content, the company “evolved” to an opt-in model.
Lehane called this an example of how “general purpose technologies” empower people — like a printing press for the digital age. But to creators and critics, it felt more like permission laundering, where legal ambiguity is treated as a green light.
The Energy Question: Who Pays?
OpenAI’s expansion into Lordstown, Ohio, and Abilene, Texas, with massive energy-intensive data centers, has been pitched as reindustrializing America. Lehane says the goal is to modernize power grids and avoid losing the AI race to China, which he says is adding 450 gigawatts of new energy capacity and 33 new nuclear plants.
But the communities hosting these sites are left with looming questions:
- Will local residents pay more for electricity and water?
- What are the long-term environmental and economic trade-offs?
Lehane never directly answered whether these towns would benefit or bear the cost. Instead, he pivoted to geopolitics.
The Question Lehane Can’t Answer Alone
Despite his skill and charm, Lehane didn’t resolve the contradictions — and maybe he can’t. Because the real question isn’t whether he can sell OpenAI’s mission to the press, policymakers, or the public.
The deeper issue is whether people inside OpenAI still believe in that mission.
When your head of mission alignment publicly warns the company is crossing ethical lines, that’s not a crisis communications challenge. That’s a crisis of conscience.