OpenAI drama a reflection of AI’s pivotal moment

The philosophical battle between AI “accelerationists” and “doomers” has erupted into full view at OpenAI. The accelerationists advocate rapid advancement in AI technology, emphasizing its enormous potential benefits. The “doomers,” conversely, urge a cautious approach that weighs the risks of unbridled AI development.

Various reports suggest a conflict between CEO Sam Altman, who wanted to further monetize OpenAI’s development, and the board, which prioritized safety measures. The board was acting in accordance with their non-profit charter, while the CEO was exploring avenues to secure necessary funding for ongoing development in this highly competitive field. The board won the initial conflict, resulting in the dismissal of Altman in what appears to be a “palace coup.”

Altman is the protagonist in this story, while chief AI scientist and board member Ilya Sutskever appears to be the antagonist. An early developer of deep learning and a star former student of AI pioneer Geoffrey Hinton, Sutskever has a strong understanding of the issues in play. He allegedly was the one pushing for a change. Axios suggested that Sutskever may have persuaded board members that Altman’s accelerated approach to AI deployment was too risky, perhaps even dangerous.

In reporting by The Information, Sutskever told employees in an emergency meeting last Friday that the “board [was] doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.”


However, the reaction from industry watchers, OpenAI investors and many OpenAI employees has been to side with Altman. A “revolt” among these groups might not be too strong a word. Consequently, the board apparently went back to Altman, possibly sheepish, to negotiate the possibility of his return to the CEO post.

Altman’s vision and values

To call Altman flamboyant would be an overstatement. In fact, he often comes off as measured — a proponent of advancing AI while also warning of potential existential dangers. He famously went to Washington, D.C. last spring and warned about these risks and has often called for government regulation of “frontier” AI models. Of course, some have labeled this a disingenuous attempt at regulatory capture to freeze out smaller competitors.

It is true, too, that Altman has his hand in several side projects, including Worldcoin, a cryptocurrency project that authenticates identity via iris scans, intended to facilitate future payments of universal basic income should AI eliminate jobs.

A Fortune story describes how Altman had recently been working on “Tigris,” an initiative to create “an AI-focused chip company that could produce semiconductors that compete against those from Nvidia.” Similarly, he has been raising money for a hardware device that he’s been developing in tandem with design specialist Jony Ive.

Whatever else may be true about Altman, his capitalist and venture capitalist credentials are first-rate. These pursuits stand in sharp relief against OpenAI’s non-profit mission to build artificial general intelligence (AGI) that benefits all of humanity.

Underscoring America’s values

In America, we value the “rainmaker.” Altman’s track record before joining OpenAI, and while at the company, speaks to his rainmaking ability through his driving of innovation, securing funding and leading initiatives that push the boundaries of technology and business. In short, his philosophy and accomplishments exemplify what Americans value most. The support for Altman in the OpenAI coup is therefore not surprising.

It is also not surprising that Altman had options beyond OpenAI. Microsoft CEO Satya Nadella pledged support to him in whatever comes next. As the sun rose only three days after his firing, we now know what comes next. Instead of rejoining OpenAI, Altman and others including OpenAI cofounder and President Greg Brockman will join Microsoft to lead a new AI research team.

How many others follow him remains to be seen, but Axios reported that more than 500 OpenAI employees, out of a total workforce of more than 700, signed a letter threatening to leave and follow Altman, saying: “We will take this step imminently, unless all current board members resign.”

The other shoe drops

As of Monday morning, the board did not resign and instead appointed former Twitch CEO Emmett Shear as interim CEO. However, at least one member of the board has expressed misgivings — surprisingly, that is Sutskever.


According to CNN, Shear said the process of firing Altman was “handled very badly, which has seriously damaged our trust.”

What happens going forward with OpenAI specifically, and AI development overall, is an open question. Some of the players may have shifted where they sit, but the race in AI development is likely to continue. We can safely assume Microsoft will push forward with the same sense of urgency it has displayed since ChatGPT first appeared almost exactly a year ago, when the company pivoted its business model and began aggressively incorporating OpenAI technology into many of its products.

Meanwhile, Google DeepMind is expected to bring its “Gemini” large language model (LLM) to market, possibly before the end of the year. Expectations are that it will rival and possibly exceed OpenAI’s GPT-4.

As noted by The New York Times, OpenAI is clearly a big loser in these developments. However, the drama at OpenAI serves as a microcosm of the larger ongoing debate: How do we balance the ambitious drive for AI innovation, which promises unprecedented benefits, with the prudent need for safety and ethical considerations? There is validity to cautious voices urging thoughtful reflection on the potential downsides of unfettered AI advancement.

This balance is not just a matter of corporate policy or strategy; it’s a reflection of our societal values and the future we aspire to create. The story of Sam Altman and OpenAI is not just about a clash of personalities or corporate strategies; it’s a reflection of a pivotal moment in our technological journey.

Gary Grossman is the EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
