The scientist who ousted Sam Altman: I regret it

Ilya Sutskever, the chief scientist behind the ouster of OpenAI's CEO, said he regrets the move and hopes the company can come back together.

"I deeply regret participating in the board's actions. I never intended to harm OpenAI. I love everything we have built together," Sutskever wrote on X on November 20, five hours after Sam Altman said he was joining Microsoft.

Soon after, Altman re-shared this message with a heart emoji.

Ilya Sutskever (right) and Sam Altman. Photo: Abigail Uzi/YnetNews

Sutskever's move surprised many who had followed OpenAI's saga over the previous three days. Some even suspected the account was fake. "Is this a post written by AI?" one user asked.

Ilya Sutskever is the co-founder and chief scientist of OpenAI, and is said to be the person behind the decision to fire CEO Sam Altman on November 17. Diverging priorities between the board of directors and the CEO are believed to be the cause.

However, according to Wired, Sutskever was also among the 490 OpenAI employees who signed an open letter calling for Altman to be reinstated.

Earlier, negotiations on November 19 to bring Sam Altman back had failed. Altman and former president Greg Brockman said they would join Microsoft, meaning the "reunion" Sutskever hoped for was unlikely.

Internal sources said conflicts arose because Altman kept accelerating the pace of product commercialization and sought to raise money for new AI projects. The remaining leadership, led by Sutskever, wanted to ensure the company stayed true to its mission as a non-profit organization: building artificial general intelligence (AGI) in a way that benefits humanity. In a notice on its website, OpenAI said Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

At OpenAI, Sutskever has played an important role in developing large language models such as GPT-2 and GPT-3, as well as the Dall-E text-to-image model. Since July, he has focused more on the potential dangers of AI, especially the possible emergence of models that surpass humans, something he believes will happen within the next 10 years.

Ilya Sutskever - the scientist behind the ouster of OpenAI CEO
Ilya Sutskever, OpenAI's chief scientist, is considered a central figure influencing the ouster of CEO Sam Altman.

On November 17 (the morning of November 18, Hanoi time), Sam Altman was fired by OpenAI in a meeting held over Google Meet, and chairman Greg Brockman was removed from the board of directors. According to internal sources, the quick decision came because Altman was steering OpenAI away from its original non-profit mission. Although he needed large financial resources to run large language models, he could not reach agreement with the board on the pace of artificial general intelligence (AGI) development, how to commercialize products, and the steps needed to reduce AI's potential harms.

The person behind the decision to fire the CEO is said to be Ilya Sutskever, co-founder and chief scientist of OpenAI.

Ilya Sutskever. Photo: AFP

Sutskever was born in 1985 in Russia and grew up in Israel from the age of 5. He attended the Open University of Israel from 2000 to 2002, then transferred to the University of Toronto in Canada, earning a bachelor's degree in mathematics in 2005, a master's degree in 2007, and a doctorate in computer science four years later.

One of Sutskever's teachers was Geoffrey Hinton, a pioneer in the field of artificial intelligence known as the "godfather of AI". Sutskever was one of Hinton's two standout students, alongside Alex Krizhevsky. In 2012 the trio created AlexNet, an artificial neural network that learns from data, with applications in photo and video recognition, content recommendation, image classification, natural language processing, and brain-computer interfaces (BCI).

That same year, Sutskever spent two months on a small project at Stanford University, then joined Professor Hinton's research company DNNResearch. In March 2013, Google, impressed by AlexNet, acquired DNNResearch and brought Sutskever on as a research scientist at Google Brain. There, together with two other well-known figures, Oriol Vinyals and Quoc Viet Le, he created the Seq2seq (sequence-to-sequence) model used in natural language processing.

With these achievements, Sutskever quickly attracted the attention of another powerful figure in artificial intelligence: Elon Musk. The American billionaire has long warned about the potential danger AI poses to humanity. Musk also repeatedly criticized Google co-founder Larry Page for not caring about AI safety, especially after Google acquired DeepMind in 2014.

Accepting Musk's offer, Sutskever left Google in 2015 to co-found OpenAI, a non-profit organization that Musk envisioned as a counterweight to Google on the AI front.

"It was one of the hardest recruiting battles I've ever experienced, but it was the key to OpenAI's success," Musk said on the Lex Fridman Podcast in early November when discussing Ilya Sutskever. The South African-born billionaire said his former colleague is not only smart but also "a good person, with good ideas and a good heart".

Sam Altman also spoke highly of Sutskever. "I remember Sam calling Ilya one of the most respected researchers in the world," Dalton Caldwell, managing director of startup incubator Y Combinator, told MIT Technology Review. "Sam saw Ilya as someone who could attract many AI talents. It would be difficult to find any candidate other than Ilya who could become OpenAI's top scientist."

At OpenAI, Sutskever has played a key role in developing large language models such as GPT-2 and GPT-3, as well as the Dall-E text-to-image model. In a rare interview with Technology Review last month, Sutskever said that, amid the community's recent excitement, ChatGPT has offered a glimpse of what the future might hold, even though the results current models return still disappoint.

In recent months, he has focused more on the potential dangers of AI, especially the possible emergence of models that surpass humans, something he believes will happen within the next 10 years. "It's obviously important that any superintelligence anyone builds does not go rogue," he said.

In July, he set up a new team at the company to control future "superintelligent" AI. At the time, he said superintelligent systems could help solve many of the world's important problems but are also very dangerous and "could lead to the disempowerment of humanity or even human extinction".

Bloomberg quoted anonymous sources as saying the ouster of OpenAI's CEO and chairman was mainly related to AI safety. Sutskever disagreed with Altman about the pace of commercializing AI products, and removing the profit-driven leaders was seen as a "necessary step to minimize potential harm to the public".

OpenAI board members, led by Sutskever, also argued fiercely over Altman's business ambitions before deciding to dismiss him. Wielding his executive power, Altman sought to raise tens of billions of dollars from Middle Eastern funds to create an AI chip startup to compete with Nvidia. He also met SoftBank chairman Masayoshi Son to persuade him to invest billions of dollars in an upcoming AI device project in cooperation with former Apple design chief Jony Ive.

"Sutskever and like-minded people on the board objected to Altman's use of OpenAI's name to raise funds. They were concerned that the new ventures might not share OpenAI's operating model," a source said.

In media appearances, Sutskever often emphasizes AI's great potential, for both good and bad. "AI is a wonderful thing. It will solve the problems we face today, like unemployment, disease, poverty," he told the Guardian in early November. "But AI will also create new problems: fake news will be a million times worse, cyber attacks will become far more extreme. We will have fully automated AI weapons."

After firing the CEO, OpenAI held an internal meeting, according to The Information. During that meeting, at least two employees asked Sutskever whether this was a "coup" or a "hostile takeover".

"You could call it that," Sutskever replied, referring to the word "coup". "I can understand why people use the term, but I disagree. We are on a mission as a non-profit organization: to ensure OpenAI builds AGI that benefits all of humanity."

