
The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission


Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that has ever claimed to be different.

I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also smart. The invite-only app soared to the top of the App Store as people created digital versions of themselves; OpenAI CEO Sam Altman; characters like Pikachu, Mario, and Cartman of “South Park”; and dead celebrities like Tupac Shakur.

Asked what drove OpenAI’s decision to release this latest version of Sora with those characters, Lehane gave me the standard pitch: Sora is a “general purpose technology” like electricity or the printing press, one that democratizes creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

What he danced around is that OpenAI originally “let” rights holders opt out of having their work used to train Sora, which isn’t how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it “evolved” toward an opt-in model. That’s not really iterating. That’s testing how much you can get away with. (And by the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, the American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.

Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. “It’s ‘iterative,’” I said, “but it’s also a replacement.”

For the first time, Lehane dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, in short.)

Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to the advent of electricity – saying those who got it last are still playing catch-up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically challenged places as sites for facilities with enormous appetites for water and electricity.

Asked during our sit-down whether those communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year, plus 33 nuclear facilities. If democracies want democratic AI, they’ll have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a re-industrialized America with transformed power grids.

It was inspiring. But it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is the most energy-intensive AI out there.)

Which brought me to my most uncomfortable example. Zelda Williams had spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There’s no playbook for this stuff, right?”

Lehane showed vulnerability in some moments, saying that he wakes up at 3 a.m. every night, worried about democratization, geopolitics, and infrastructure. “There are enormous responsibilities that come with this.”

Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then Friday happened.

Nathan Calvin, a lawyer who works on AI policy at the nonprofit advocacy group Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.

Calvin is accusing OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. In fact, Calvin says he fought OpenAI’s opposition to SB 53, an AI safety bill, and that when he saw the company claim it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane specifically the “master of the political dark arts.”

In Washington, that might be a compliment. At a company like OpenAI, whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

What matters far more is that even OpenAI’s own people are conflicted about what they’re becoming.

As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released to express their misgivings, including Boaz Barak, an OpenAI researcher and Harvard professor, who wrote of Sora 2 that it is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

That’s . . . something. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

It’s a crystallizing moment. You can be the best political operative in tech, a master at navigating impossible situations, and still end up working for a company whose actions increasingly conflict with its stated values – contradictions that will only intensify as OpenAI races toward artificial general intelligence.

It has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe in it.
