Helen Toner on the OpenAI coup: ‘It was about trust and accountability’
A few days before Thanksgiving last year, Helen Toner and three of her peers on the board of OpenAI — the world’s best-known artificial intelligence company — fired its chief executive Sam Altman in a surprise coup.
The reason they gave was Altman’s lack of candour in his dealings with the board, but details were minimal. In the days that followed, Toner, a director at Georgetown University’s AI think-tank, the Center for Security and Emerging Technology, found herself at the centre of a crisis that threatened to tear the $86bn company apart. She became a symbolic figure of opposition to Altman, a legendary and canny Silicon Valley operator.
The coup lasted five days, amid intense pressure from the start-up’s powerful investors, supporters and employees to reinstate Altman. One of Toner’s co-directors defected back to Altman, the management team rushed to his defence and, by the end of the long weekend, Altman was back in place as CEO. Toner was forced to resign.
The showdown was more than a clash of personalities: it sparked a global debate about the nature of corporate power, and whether today’s tech leaders can be trusted to oversee what is one of our most powerful inventions.
Seated at the back of a Sichuanese restaurant near London’s St James’s Park, Toner seems unperturbed by the chaos she helped to instigate. In a plain black T-shirt, with her short, wavy hair pulled back sensibly, revealing little emerald studs, the 32-year-old is an unlikely nemesis for Altman. Since her exit from the OpenAI stage, the Melbourne-born engineer has remained mostly tight-lipped about the ousting and how it went awry. To many, she remains an enigma.
“It’s very hard to look at what happened and conclude that self-governing is going to work at these companies,” she says, sipping jasmine tea. “Or that we can rely on self-governance structures to stand up to the pressures of the different kinds of power and incentives that are at play here.
“For the board, there was this trajectory of going from ‘everything’s very low stakes, you want to be pretty hands-off’ to ‘actually, we’re playing this critical governance function in an incredibly high-stakes — not just for the company, but for the world — situation,’” she says.
We turn to the relatively low-stakes task of choosing our meal, which prompts us to discover our mutual vegetarianism. Toner gave up meat for animal welfare reasons a few years ago, so ordering lunch becomes unexpectedly easy. We decide on the veggie sharing menu, to sample as many dishes as we can, united by our love of spicy foods.
Toner was invited to join OpenAI’s board in 2021 by her former boss Holden Karnofsky. They had worked together at the California-based non-profit GiveWell, which used the principles of effective altruism — a controversial social and philanthropic movement influential in tech circles — to conduct research and make grants. At GiveWell, Toner pursued an early interest in AI policy issues, particularly its military use and the influence of geopolitics on AI development.
Karnofsky was stepping off the company’s board and was looking for an apt replacement. Toner knew OpenAI had a convoluted and unusual governance structure, involving a non-profit shell with capped-profit subsidiaries. (The FT has a licensing agreement with OpenAI.) Its largest backer, Microsoft, did not own any conventional equity shareholding in the company. Instead, it was entitled to receive a share of profits from a specific subsidiary of OpenAI, up to a certain limit. In its charter, the company claims that its “primary fiduciary duty is to humanity” and that the non-profit’s board, which governs all OpenAI activities, should act to further its mission, rather than to maximise profit for investors.
Toner asked around — would this board have any real power to hold the company to account? — and was convinced by people close to it that it would. To her, it felt like a potentially valuable way to contribute to the development of safe and beneficial AI. “The funny part is, I think the [OpenAI] board was filtering heavily for someone who would be . . . agreeable and practical and a bridge builder, and not going to rock the boat too much,” she says.
“I was never on this board for fun or for glory. Definitely the level of spotlight that I personally was put under was not something I was expecting,” she tells me. “I think having a kid was very helpful. It’s just very, very grounding.”
Toner’s choice of restaurant, Ma La Sichuan, a buzzing spot decked out in traditional red and gold, is a throwback to her nine-month stint in Beijing in 2018, when she studied Chinese, schooled herself in Sichuanese food and worked as a research affiliate on AI and defence.
During her time there, she worked with machine-learning researchers and attended conferences on AI and the Chinese military, often one of just a handful of foreigners. “China is often used as a bit of a cudgel in DC . . . to do things in AI because [of] China. And often it’s not necessarily that closely connected with what China is actually doing, or how well they’re actually succeeding at their plans,” she says.
Menu
Ma La Sichuan
37 Monck St, London SW1P 2BL
Vegetarian sharing menu x2 £56
— Aromatic duck
— Ma po tofu
— Aubergine hot pot
— Dry-fried fine beans
— Mixed vegetable fried rice
Lychee juice £3
Jasmine tea £2
Total (inc service) £68.60
Since we’ve opted for the sharing menu, trays of steaming dishes begin to arrive in procession, preceded by wafting aromas of chilli and garlic. There are vegetarian aromatic “duck” pancakes with slim cylinders of cucumber, leeks and a hoisin sauce (an unexpected Peking dish at a Sichuanese place, Toner points out, but crisp, salty-sweet and delicious nonetheless).
This is followed by a parade of regional favourites such as ma po tofu and fish-fragrant aubergine hotpot, with a dry dish of fine green beans topped with little piles of roasted garlic and chilli slivers that melt pungently on the tongue. The aubergine has hints of miso that I savour.
“Ma is part of the Chinese word for anaesthesia or paralysis, and that’s because the Sichuan peppercorn numbs your tongue and your lips,” she explains. “I’m kinda addicted to that flavour.”
The conversation turns back to OpenAI, and Toner’s relationship with the company over the two years she sat on its board. When she first joined, there were nine members, including LinkedIn co-founder Reid Hoffman, Shivon Zilis, an executive at Elon Musk’s neurotechnology company Neuralink, and Republican congressman Will Hurd. It was a collegiate atmosphere, she says, though in 2023 those three members all stepped down, leaving just three non-execs on the board: Toner, tech entrepreneur Tasha McCauley and Adam D’Angelo, chief executive of the question-and-answer website Quora, alongside Altman and his fellow co-founders Greg Brockman and Ilya Sutskever.
“I came on as the company was going through a clear shift,” Toner says. “Certainly when I joined, it was much more comparable to being on the board of a VC-funded start-up, where you’re just there to help out [and] do what the CEO thinks is right. You don’t want to be meddling or you don’t want to be getting in the way of anything.”
The transition at the company, she says, was precipitated by the launch of ChatGPT — which Toner and the rest of the board found out about on Twitter — and of GPT-4, the company’s most advanced AI model. OpenAI went from being a research lab, where scientists worked on nascent, blue-sky research projects not designed for mass use, to a far more commercial entity with powerful underlying technology that had far-reaching impacts.
I ask Toner what she thinks of Altman, the person and the leader. “We’ve always had a friendly relationship, he’s a friendly guy,” she says. Toner still has legal duties of confidentiality to the company, and is limited in what she can reveal. But speaking on the TED AI Show podcast in May, she was vocal in claiming that Altman had misled the board “on multiple occasions” about the company’s existing safety processes. According to her, he had withheld information, wilfully misrepresented things that were happening at the company, and in some cases outright lied to the board.
She pointed to the fact that Altman hadn’t informed the board about the launch of ChatGPT, or that he owned the OpenAI Startup Fund, a venture capital fund he had raised from external limited partners and made investment decisions on — even though, says Toner, he claimed “to be an independent board member with no financial interest in the company”. Altman stepped down from the fund in April this year.
In the weeks leading up to the November firing, Altman and Toner had also clashed over a paper she had co-authored on public perceptions of various AI developments, which included some criticism of the ChatGPT launch. Altman felt that it reflected badly on the company. “If I had wanted to critique OpenAI, there would have been many more effective ways to do that,” Toner says. “It’s honestly not clear to me if it actually got to him or if he was looking for an excuse to try and get me off the board.”
Today, those are merely illustrative examples, she says, of a longer-term pattern of untrustworthy behaviour that Altman exhibited, both with the board and with his own colleagues. “What changed it was conversations with senior executives that we had in the fall of 2023,” she says. “That is where we started thinking and talking more actively about [doing] something about Sam specifically.”
Public criticisms of the board’s decision have ranged from personal attacks on Toner and her co-directors — with many describing her as a “decel”, someone who is anti-technological progress — to disapproval of how the board handled the fallout. Some noted that the board’s timing had been poor, given the concurrent share sale at OpenAI, potentially jeopardising employees’ payouts.
In March this year, an independent review of the events, conducted by an external law firm, concluded that Altman’s behaviour “did not mandate removal”. The entrepreneur rejoined the board the same month. At the time he said he was “pleased this whole thing is over”, adding: “Over these last few months it’s been disheartening to see some people with an agenda trying to tease leaks in the press to try and hurt the company and hurt the mission. They have not worked.”
In Toner’s view, the review’s conclusion sounded as though the new board had asked only whether the previous board had been compelled to fire Altman. “Which I think gets interpreted as: ‘Did he do something illegal?’ And that is not how I think the board should necessarily be evaluating his conduct,” she says.
“They’ve not disputed anywhere any of the actual claims that we’ve made about what went wrong or why we fired him . . . which was about trust and accountability and oversight.”
In a statement to the FT, Bret Taylor, chair of OpenAI’s board, said that “over 95% of employees, including senior leadership, asked for Sam’s reinstatement”. Toner can’t explain — and didn’t anticipate — the defections of senior staff, including board member Sutskever, who went from criticising to supporting Altman within days. “I learnt a lot about how different people react to pressure in different situations.”
We’re making our way through the feast with efficiency, in agreement that the tingly and fragrant ma po tofu is the star of the show. I ask Toner how life has changed for her since November, and she insists that it hasn’t. She has kept her full-time job at CSET, where she advises senior government officials on AI policy and national security, makes her own rye bread at home with her husband, a German scientist, and deals daily with the exertions of toddler-parenting.
When the OpenAI crisis turned into a long weekend of sleepless negotiations and damage control, she admits, it gave her a new appreciation for her community in DC. Since many of her colleagues work in the national security space, they have dealt with “real actual crises, where people were dying or wars were going on, so that put that into perspective”, she says. “A few sleepless nights is not that bad.”
Her biggest lesson concerned the future of AI governance. To her, the events at OpenAI raised the stakes of getting outside oversight right for the small group of companies racing to build powerful AI systems. “It could mean government regulation but could also just mean . . . industry-wide standards, public pressure, public expectations,” she says.
This isn’t just the case for OpenAI, she emphasises, but for companies including Anthropic, Google and Meta. Establishing legal transparency requirements, she believes, is crucial to stop such companies from building tools that endanger humanity.
“[The companies] are also in a tough situation, where they’re all trying to compete with each other. And so you talk to people inside these companies, and they almost beg you to intervene from the outside,” she says. “It’s not just about trusting the beneficence and judgment of specific individuals. We shouldn’t let things be set up such that a small number of people get to be the ones that get to decide what happens, no matter how good those people are.”
Toner came to AI policy by an unusual route. As a university student in Melbourne, she was introduced to effective altruism (EA). She’d been seduced by the community’s ideas of helping to improve the world in a way that required thinking with both head and heart, she says.
The EA community — and its problematic workings — were dragged into the limelight in 2022 by its most public promoter and donor, Sam Bankman-Fried, disgraced founder of cryptocurrency trading firm FTX. Toner says she knew him “a little, not well”, and had met him “once or twice”.
“I’ve been much less involved in recent years, mostly because of this groupthink, hero-worship kind of stuff. [Bankman-Fried] is a symptom of it,” she says. “The last thing I wrote [about it] was about getting disillusioned with EA, both how I experienced that and how I’d seen others experience it.”
At this point, we’re sated from the meal but can’t resist picking at the leftovers for another twinge of that numbing peppercorn flavour. A full stomach feels like the right time to ask the dystopian question about the coming wave of AI systems. “One thing [effective altruists] got really right is taking seriously the possibility we might see very advanced AI systems in our lifetimes and that might be a big deal for what happens in the world,” she says. “In 2013, 2014, when I was starting to hear these kinds of ideas, it seemed very countercultural, and now . . . certainly feels more mainstream.”
Despite this, she has faith in humanity’s ability to adapt. “I feel overall somewhat hopeful that we will have space to breathe and prepare,” she says.
Throughout our conversation, Toner has been restrained in recounting her attempt to take on one of tech’s most powerful CEOs. Much of the personal criticism and scrutiny she endured might have been avoided had she acted differently, prepared better for the fallout or, perhaps, taken more counsel. I feel compelled to ask if she ever questions herself, her actions or her methods last November.
“I mean, all the time,” she says, smiling broadly. “If you’re not questioning yourself, how are you making good decisions?”
Madhumita Murgia is the FT’s AI editor