After Trying to Oust Sam Altman, OpenAI Co-founder Launches His 'Safe' Venture
Ilya Sutskever, one of OpenAI's co-founders, who left the company last month, has introduced his new AI company, Safe Superintelligence, or SSI. Announcing the venture on X (formerly Twitter), Sutskever wrote, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.” Sutskever formerly served as OpenAI’s chief scientist and co-led the company’s Superalignment team.
Who is Ilya Sutskever?
Sutskever is regarded as one of the pioneers of the AI revolution. As a student, he worked in a machine learning lab under the Godfather of AI, Geoffrey Hinton. Together they created an AI startup that was acquired by Google. Sutskever then went on to work with Google’s AI research team, per CNN.
He then helped found OpenAI and worked on its breakthrough product, ChatGPT. He co-led the Superalignment team, which focused on steering and controlling AI systems. However, things turned sour when Sutskever joined the effort to oust CEO Sam Altman last year. After the dramatic leadership shuffle that saw Altman fired and rehired and the board completely overhauled, Sutskever expressed regret over his role in the fiasco.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
— Ilya Sutskever (@ilyasut) November 20, 2023
CNN contributor Kara Swisher reported that at the time, Sutskever was concerned that Altman was pushing AI technology “too far, too fast.” However, he later had a change of heart and signed the employee letter demanding that the entire board resign. Last month, Sutskever joined the wave of departures from OpenAI and announced that he would be leaving the company to work on a project that is “personally meaningful” to him.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
The announcement came amid growing concerns about AI advancing too quickly and the dearth of regulation overseeing developers of the technology. Experts have argued that companies like OpenAI have been left free to set their own safety guidelines.
The inception of Safe Superintelligence
While the project remained a mystery for some time, Sutskever lifted the curtain on Wednesday, introducing the venture, Safe Superintelligence Inc. “This company is special in that its first product will be the safe superintelligence and it will not do anything else up until then,” Sutskever told Bloomberg in an exclusive interview.
As the name suggests, the venture’s top priority is AI safety. While Sutskever has been vague about the details, he has made it clear that the company will pursue safety through engineering breakthroughs built into the AI system itself, rather than relying on guardrails applied to the technology “on the fly,” as is common practice in the industry.
“Building Safe Superintelligence (SSI) is the most important technical problem of our time,” the company wrote in a social media post.
Superintelligence is within reach.
— SSI Inc. (@ssi) June 19, 2024
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…
Sutskever is joined by co-founders Daniel Gross and Daniel Levy, two key figures in the AI industry. Gross is an investor and Apple’s former AI lead, where he played a major role in the company’s AI and search efforts. Levy, meanwhile, worked alongside Sutskever at OpenAI, training large AI models. Safe Superintelligence currently has offices in Palo Alto, California, and Tel Aviv, Israel, per Bloomberg.
In the interview, Sutskever said that SSI will not release any products or do any other work until it has produced a safe superintelligence. He has declined to disclose the company’s financial backers.