The lights dimmed as five actors took their places around a table on a makeshift stage in a New York City art gallery turned theater for the night. Wine and water flowed through the intimate space as the house — packed with media — sat to witness the premiere of “Doomers,” Matthew Gasda’s latest play that is loosely based on Sam Altman’s ousting as CEO of OpenAI in November 2023.
The play fictionalizes events that took place after OpenAI’s co-founder and former chief scientist Ilya Sutskever informed Altman he was fired — a decision the board made over concerns that the CEO was mishandling AI safety and engaging in abusive, toxic behavior. Despite the obviously meticulous research that went into Gasda’s depiction of that night, the playwright told TechCrunch his goal wasn’t to create a documentary, but rather to use that setting as a microcosm for broader philosophical questions about AI safety and alignment.
The departures of key figures such as Sutskever and Jan Leike, co-lead of OpenAI’s now-defunct superalignment team, marked a significant shift in the company’s leadership, and other safety-focused researchers who raised concerns about AI labs have since left as well. This exodus has stoked worries about the future of AI safety and alignment.
However, OpenAI appears to be thriving despite this turnover. The company is reportedly raising a $40 billion round that would value it at $340 billion, while President Donald Trump promises to protect AI from regulation as a new arms race against China heats up and new competitors, like DeepSeek, enter the ring. In short, AI innovation is speeding up, not slowing down, just as Seth’s character wanted. The question everyone is waiting to have answered is whether that’s a good thing.
“It’s ugly to build God,” Alina, the ethicist in the play, says. “Because we’re so ugly, and it’s based on us.”