Ronan Farrow and the question OpenAI cannot dodge

In the fall of 2023, secret memos helped turn private mistrust of Sam Altman into a public crisis, and Ronan Farrow is tied to the account that brought those details into view. The core issue is not simply whether one executive was difficult to work with. It is whether the person at the center of a company built around managing existential risk could be trusted to oversee it.
Verified fact: Ilya Sutskever, OpenAI’s chief scientist, sent the board material alleging that Altman had misrepresented facts and deceived leaders about internal safety protocols. Informed analysis: That matters because OpenAI was founded on the premise that artificial intelligence might require unusual safeguards, and the company’s leadership standard was supposed to be unusually strict.
What was in the memos that changed the board’s view?
The memos were compiled with help from like-minded colleagues and built from roughly seventy pages of Slack messages and H.R. documents, with explanatory text attached. The material also included cellphone images, apparently used to avoid detection on company devices. Sutskever sent the final memos as disappearing messages to other board members, a sign of how sensitive the material had become.
One board member who received them recalled that Sutskever was terrified. The memos, which were reviewed for the article, had not previously been disclosed in full. Their central allegation was blunt: Altman had been misleading executives and directors about internal safety protocols. One memo about Altman opened with the line that he exhibited a consistent pattern of “Lying.”
The choice of evidence mattered. This was not a single complaint or a passing disagreement. The board was being handed documentation assembled over weeks, in a setting where the company’s own internal systems were treated as too exposed to trust. In the context of a company that had made safety a founding principle, that detail is not cosmetic; it is the story.
Why did the board believe trust, not performance, was the real issue?
OpenAI was established as a nonprofit whose board was duty-bound to prioritize the safety of humanity over the company’s success or survival. That structure was designed to protect against exactly the kind of conflict that can arise when power, money, and advanced technology move together. The founders, including Altman, Sutskever, Greg Brockman, and Elon Musk, treated artificial intelligence as potentially the most powerful invention in human history, and potentially dangerous as well.
Against that backdrop, the board’s question was not whether Altman was ambitious. It was whether he was reliable enough to hold what Sutskever described as a burden of unprecedented responsibility. Sutskever said some people drawn to such roles are interested in power, politics, and saying what others want to hear. Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, read the memos as confirmation of a worry they already shared: Altman’s role gave him influence over a technology with consequences far beyond the company, and he could not be trusted with it.
Verified fact: the board had six members and was empowered to remove the C.E.O. if trust broke down. Informed analysis: That authority becomes more meaningful when the company’s founding mission is safety-first, because the board’s duty was supposed to override the usual pressures that reward growth at any cost.
What happened when the board acted?
Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board and read a brief statement saying he was no longer an employee of OpenAI. The board then released a public message saying only that Altman had been removed because he “was not consistently candid” in his communications.
The phrasing was narrow, but the implication was wide. It did not accuse him of one isolated error. It pointed to a communication pattern. That distinction matters, because the issue inside OpenAI was less about a single bad decision than about a culture of selective disclosure around the very systems the organization was created to govern.
The aftermath also showed that the board’s move did not settle the dispute. Altman returned as C.E.O., while most of his detractors left the company. That reversal did not erase the original concern; it left it hanging in the air. If the board believed safety required a break with Altman, his return raises a harder question: what, exactly, was corrected?
Who benefits from ambiguity about Sam Altman?
OpenAI’s public response has emphasized that criticisms of Altman rely on anonymous claims and selective anecdotes from people with agendas. That statement does not answer the central issue raised by the memos: whether internal safety concerns were hidden or minimized when they should have been shared with the board in full.
Verified fact: the company has also circulated language about A.I. risks, including disruption of jobs, misuse by bad actors, systems evading human control, and concentration of power and wealth. Informed analysis: That contrast is striking. A company that warns publicly about the dangers of advanced A.I. has been forced to defend its own internal governance at the same time.
The broader implication is not limited to one executive. If a safety-focused nonprofit can be pulled into a leadership crisis over candor and control, then the public has reason to ask whether the structures around powerful A.I. are strong enough to resist pressure from the people building them.
Ronan Farrow’s reporting leaves one fact impossible to ignore: the struggle over Altman was never only about personality. It was about whether the person closest to the button could be trusted with it. That is the question OpenAI still cannot answer cleanly, and it is the question at the center of Ronan Farrow’s account.