OpenAI’s board only learned that ChatGPT had been launched after reading about it on Twitter, according to a former board member.
Helen Toner, an AI researcher with an interest in the regulation of the technology, made the allegation on the TED AI Show podcast. Toner was a board member at the time of the launch, and departed shortly after Sam Altman was fired and rehired by OpenAI late last year.
Toner was fiercely critical of Altman in the interview, and described the situation at OpenAI as “a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money.”
According to Toner, “We just couldn’t believe the things that Sam was telling us.”
The board was blindsided by ChatGPT’s surprise launch, she alleged. “We learned about ChatGPT on Twitter,” said Toner. Another example she cited was Altman’s ownership of the OpenAI Startup Fund, which she said he had not disclosed to the board.
Toner said, “On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning it was basically impossible for the board to know how well those safety processes were working or what might need to change.”
According to Toner, the mistrust reached the point where the board decided that Altman had to go, and the mayhem of last November ensued. Altman was abruptly fired and then welcomed back within days.
“The OpenAI saga,” said Toner, “shows that trying to do good and regulating yourself isn’t enough.”
Part of the friction may have stemmed from a paper Toner co-authored that appeared to criticize OpenAI’s approach to safety in comparison with that of rival Anthropic. For her part, Toner described the paper as “way overplayed in the press.” However, it is not difficult to imagine Altman being less than impressed by it.
“The problem was that after the paper came out, Sam started lying to other board members in order to push me off the board,” Toner claimed.
Anthropic has been collecting former OpenAI staffers and scooping up $4 billion from Amazon. Most recently, Jan Leike, who co-led OpenAI’s superalignment safety team before resigning, announced that he would be joining Anthropic.
The Register asked OpenAI for comment on Toner’s allegations and was directed to the closing minutes of the podcast, in which Bret Taylor, chair of the OpenAI board, said the super lab was “disappointed that Ms Toner continues to revisit these issues.”
Taylor said the prior board’s decision was not based on concerns about safety or OpenAI’s finances. He also claimed that more than 95 percent of OpenAI’s employees asked for Altman’s reinstatement and the resignation of the prior board.
“Our focus remains on moving forward and pursuing OpenAI’s mission to ensure AGI benefits all of humanity.” ®