OpenAI Departures: What's Going On Over There?

Nothing succeeds like success, but in Silicon Valley nothing makes more of a splash than a steady trickle out the door.

The exit of Mira Murati, OpenAI's Chief Technology Officer, announced on September 25, has Silicon Valley tongues wagging that all is not well in Altmanland, especially since sources say she left because she had given up on trying to reform or slow down the company from within. Murati was joined in her departure from the high-profile company by two leading scientific minds, lead researcher Bob McGrew and researcher Barret Zoph (who helped develop ChatGPT). All three left without an immediately known next move.

The drama is both personal and philosophical, and it gets to the heart of how the age of artificial intelligence will be shaped.

It all dates back to last November, when a mix of Sam Altman's alleged management style and safety concerns about a top-secret project called Q* (later renamed Strawberry and released last month as o1) pushed some board members to try to oust the co-founder. They succeeded, but only for a few days. The 39-year-old face of the AI movement managed to regain control of his bustling company, thanks in large part to Satya Nadella's Microsoft, which owns 49% of OpenAI and didn't want Altman to go anywhere.

The board was reorganized to be more favorable to Altman, and several directors who opposed him were forced out. Even a senior executive wary of his motives, OpenAI co-founder and chief scientist Ilya Sutskever, would eventually leave. Sutskever was concerned about Altman's "accelerationism" – the push to advance artificial intelligence development at any cost. Sutskever left in May, though a person who knows him told The Hollywood Reporter he had effectively stopped being involved in the company after the failed coup in November. (Sutskever more than landed on his feet: he just raised $1 billion for a new AI safety company.)

Sutskever and another senior staffer, Jan Leike, had run a "superalignment" team tasked with predicting and heading off dangers. Leike left at the same time as Sutskever, and the team was disbanded. Like a number of other employees, Leike has since joined Anthropic, an OpenAI rival that is widely considered more safety-conscious.

Murati, McGrew and Zoph are the latest dominoes to fall. Murati, too, was concerned about safety, an industry term for the idea that new AI models can pose short-term risks, such as hidden biases, and long-term dangers, such as Skynet scenarios, and should therefore be subjected to more rigorous testing. (This is seen as especially likely with the achievement of artificial general intelligence, or AGI, the ability of a machine to solve problems as well as a human can, which some believe could arrive in as little as one to two years.)

But unlike Sutskever, after the November drama Murati decided to stay at the company in part to try to slow the accelerationist efforts of Altman and President Greg Brockman from within, according to a person familiar with OpenAI's workings who asked not to be identified because they were not authorized to talk about the situation.

It's unclear what pushed Murati over the edge, but the release of o1 last month may have contributed to her decision. The product represents a new approach that aims not only to synthesize information, as many current large language models do ("rewrite the Gettysburg Address as a Taylor Swift song"), but to reason through math and coding problems the way a human might. Those concerned with AI safety have urged further testing and guardrails before such products are made public.

The splashy product release also coincides with, and is in some ways partly a result of, OpenAI's complete transition to a for-profit company, with no nonprofit oversight and with a CEO in Altman who will hold equity like any other founder. The change, which also favors accelerationism, worried many of the outgoing executives, including Murati, the person said.

Murati said in a post on X that “this moment seems right” to step away.

The concerns have become so great that some former employees are raising the alarm in highly public settings. William Saunders, a former member of OpenAI's technical staff, testified before the Senate Judiciary Committee last month that he left the company because he foresaw global disaster if OpenAI continued on its current path.

"AGI would cause significant changes in society, including radical changes in the economy and employment. AGI could also pose the risk of catastrophic harm through systems that autonomously conduct cyberattacks or aid in the creation of new biological weapons," he told lawmakers. "No one knows how to ensure that AGI systems are safe and controlled… OpenAI will say they are getting better. I and the other employees who resigned doubt they will be ready in time." A spokesperson for OpenAI did not respond to a request for comment.

Founded as a nonprofit in 2015 — "we will collaborate freely with others across many institutions and expect to work with companies to research and deploy new technologies," its mission statement reads — OpenAI launched a for-profit subsidiary in 2019. But until now it has still been controlled by the board of the nonprofit foundation. The decision to remove nonprofit oversight gives the company more freedom — and incentive — to accelerate the development of new products, while also potentially making it more attractive to investors.

And investment is crucial: a New York Times report found that OpenAI could lose $5 billion this year. (The costs of both chips and the power needed to run them are extremely high.) On Wednesday the company announced a new funding round from backers including Microsoft and chipmaker Nvidia totaling about $6.6 billion.

OpenAI also has to cut expensive licensing deals with publishers, as lawsuits from the Times and others inhibit the company's ability to freely train its models on those publishers' content.

OpenAI's moves are giving industry observers pause. "The move to a for-profit business solidified what was already clear: most of the talk about safety was probably just talk," Gary Marcus, a veteran AI expert and author of the just-published book Taming Silicon Valley: How We Can Ensure That AI Works for Us, tells THR. "The company is interested in making money, and not in having checks and balances to ensure it is safe."

OpenAI has a history of releasing products before the industry thinks they are ready. ChatGPT itself shocked the tech industry when it was released in November 2022; rivals such as Google, which had been working on similar products, had judged that the latest LLMs were not ready for a public debut.

It remains to be seen whether OpenAI can continue to innovate at this pace, given last week's brain drain.

Perhaps to distract from the drama and reassure doubters, Altman published a rare post on his personal blog last week in which he posited that "superintelligence" – the far-reaching idea that machines could become so powerful they do everything far better than humans – might arrive as soon as the early 2030s. "Amazing triumphs – fixing the climate, establishing a space colony, and discovering all of physics – will eventually become commonplace," he wrote. Ironically, it may have been this kind of talk that prompted Sutskever and Murati to head for the door.
