The Lamen

What’s next for OpenAI after the CEO drama

After one of the highest-profile boardroom dramas in recent history, OpenAI isn’t getting any time to rest.

Photo: Bing Image Generator

Published on Nov 28, 2023

The turmoil at OpenAI — the highest-profile company in tech right now — was thought to be a direct result of the deepening concern over the risks surrounding artificial intelligence.

A saga that captivated Silicon Valley ended in a stunning twist: Altman was reinstated as CEO, and the very board that fired him stepped down. The board's statement that Altman was not "consistently candid in his communications" has sparked a debate over how quickly AI should be deployed, and the story is still far from over.

The fears over AI taking over the world might seem like implausible nonsense, but a growing number of adults fear that the technology could make their lives worse. Researchers and regulators say that increased cyberattacks, email scams, and disinformation are just some of the threats weaponized AI could pose.

Altman himself has been vocal about some of the risks, stating that “We’ve got to be careful here” since these LLMs could be used for “large-scale disinformation” and “offensive cyberattacks.”

“The OpenAI board no longer trusted Altman with its vision to ‘build safe and beneficial AGI’ that benefits humanity”

The board, for its part, has not been consistently candid about what led to Altman's firing, which suggests there may be more than just principles at play here.

OpenAI's charter puts front and center its primary mission: to create artificial general intelligence that benefits humanity, with adequate safety precautions in place. It has pursued that mission (at least to some degree) through capped profits and a non-profit entity overseeing its operations. So what happened? The board no longer trusted Altman with its vision.

Altman was commercializing the company's AI too quickly, and the board worried that the consequences would catch up with it. Just days before Altman's firing, several staffers wrote a letter to the board about a "powerful artificial intelligence discovery" — dubbed Q* — which could "threaten humanity," reported Reuters (earlier reported by The Information).

That explanation doesn't hold up to much scrutiny, though. OpenAI co-founder and chief scientist Ilya Sutskever — one of the primary voices behind Altman's ousting — did an about-face and is sticking with the company, although he is no longer on the board. How all this affects the board's internal structure and employee morale will become clear over the coming weeks and months.

“As part of his return, Altman has agreed to an investigation of the events that led to his firing”

The organization is likely to attract even more attention from regulators, but the drama hasn't stopped the company from fast-paced deployment — something OpenAI has repeatedly been criticized for. Its short-lived interim CEO, Emmett Shear, recently called for "slowing down" the pace of AI development:

“I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down. If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”

As part of his return, Altman has agreed to an investigation by an outside firm into the events that led to his firing. Whatever the investigation finds, Altman's position at the company is likely secure — especially considering the overwhelming support he received from OpenAI employees.

It now falls to the company's "new initial board" to decide how OpenAI proceeds with governance — and whether that means catering to deep pockets or taking deliberate approaches that account for the "profound risks to society."

Catch up: A timeline of how Sam Altman became the new, new CEO of OpenAI.