The OpenAI soap opera is back! As in an article discussing how Sam Altman may control our future & why maybe we should be afraid, be very afraid!*
Eric De Grasse of Project Counsel Media covered an article from The New Yorker titled Sam Altman May Control Our Future—Can He Be Trusted?, written by Ronan Farrow and Andrew Marantz and available here (side note: the GIF accompanying the article is terrific!).
The article discusses the results of a comprehensive investigation into Sam Altman’s leadership of OpenAI, based on internal memos, private notes, and over 100 interviews with colleagues and industry partners. The core findings suggest a consistent pattern of behavior characterized by strategic deception, the systematic prioritization of commercial expansion over safety protocols, and a relentless pursuit of personal and corporate power.
Per the article, internal records from senior leadership provide a detailed account of Altman’s management style, often described by colleagues as “unconstrained by truth.” Examples:
- Before OpenAI: Senior employees at Loopt twice asked the board to fire Altman as CEO over concerns about leadership and transparency. At Y Combinator, partners complained to Paul Graham about his behavior, and Graham privately told colleagues “Sam had been lying to us all the time.”
- The Ilya Memos: Compiled by Chief Scientist Ilya Sutskever, these 70 pages of Slack messages and HR documents outline a “consistent pattern of lying.” Sutskever concluded that Altman was not the appropriate person to “have his finger on the button” of AGI.
- The Amodei Archive: Former safety lead (and now CEO of Anthropic) Dario Amodei kept 200+ pages of private notes titled “My Experience with OpenAI.” His conclusion was definitive: “The problem with OpenAI is Sam himself.”
- Sociopathic Traits: A board member described Sam as having “two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone.” Multiple sources independently used the word “sociopathic.”
Remember when Altman was fired in November 2023 only to be reinstated a few days later? Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. One big reason for the turnaround, we now learn: the venture firm Thrive put a tender offer valuing OpenAI at $86 billion on hold, suggesting it would only close if Altman returned, effectively incentivizing employees to back him for their own financial gain. Guess what? The board was backed into a corner. Gee, you think?
After he was reinstated, Altman coordinated with Satya Nadella to select new board members (Bret Taylor and Larry Summers). The subsequent “independent” investigation into Altman by WilmerHale produced no written report, only oral briefings, and has been criticized by employees as a “hunt for clear criminality” designed to acquit him rather than an inquiry into his integrity.
Following Altman’s reinstatement, the “Superalignment” team and the “AGI Readiness” safety teams were both dissolved. Key safety advocates, including Sutskever and Jan Leike, resigned, with Leike stating that “safety culture and processes have taken a backseat to shiny products.”
The article goes on to discuss Altman’s continual efforts to raise funds for OpenAI, including actively courting Sheikh Tahnoon bin Zayed al-Nahyan (UAE) and Mohammed bin Salman (Saudi Arabia) for his “ChipCo” and “Stargate” initiatives—plans to build $5 trillion to $7 trillion worth of AI infrastructure.
OpenAI has also deleted its blanket ban on “military and warfare” from its usage policies, a change that came into focus after Defense Secretary Pete Hegseth designated Anthropic a “supply-chain risk” for refusing to take similar steps. While OpenAI and Google publicly defended Anthropic, and Altman claimed that OpenAI shared Anthropic’s ethical boundaries, he reportedly spent at least two days in negotiations with the Pentagon discussing OpenAI as a potential replacement for Anthropic, talks that ultimately led to a deal with the military.
All of this has been going on while OpenAI is reportedly preparing for an IPO at a potential $1 trillion valuation. Wow. 🤯
No comment yet from Altman (at least not that I’ve seen). Nothing on his X account and nothing on his blog, where the last post was six months ago, discussing an update on OpenAI’s work with Sora (which tells you how out of date that is).
Sam Altman may control our future. If you read Eric’s blog post (at least the beginning, where he provides his own thoughts) and the New Yorker piece, you might be thinking “ruh-roh”!
So, what do you think? Are you concerned that Sam Altman may control our future? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.
*The phrase “be afraid, be very afraid” comes from the 1986 movie The Fly with Jeff Goldblum, and the person who coined it was none other than Mel Brooks, who was an uncredited executive producer on the movie; he went uncredited because he didn’t want people to think it was a comedy. 😊
Image created using Bing Image Creator Powered by DALL-E, using the term “opera singer with soap”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

