Sam Altman wants to hire a person who can predict the dangers of AI — and the pay is great
ChatGPT has been at the forefront of the AI surge that has taken the world by storm. Following this success, its creator, OpenAI, is looking for a new employee to help mitigate the growing dangers of artificial intelligence systems. The company is ready to pay a salary of $555,000 per year for the stressful role. CEO Sam Altman took to X to share the job posting and cautioned applicants that they would be thrown into the deep end right away.
In the job posting, the company wrote that the Head of Preparedness will expand, strengthen, and guide its "Safety Systems" team, which ensures that AI systems are built responsibly, with safeguards and safety frameworks that help the programs function as intended. With the blistering pace of development, the challenges are mounting for companies like OpenAI. Whoever lands the job will be tasked with balancing safety concerns against the demands of CEO Altman. This year alone, OpenAI rolled out its Sora 2 video app, Instant Checkout for ChatGPT, new AI models, developer tools, and more advanced agent capabilities.
The head of preparedness, who will be tracking risks and developing mitigation strategies for "frontier capabilities that create new risks of severe harm," has to keep up with the company's pace. "This will be a stressful job, and you'll jump into the deep end pretty much immediately," Altman wrote in an X post describing the role. "This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges," he added.
Experts have warned that the job could be "close to impossible." The head of preparedness will at times likely need to tell OpenAI's executives to slow down or abandon projects and goals, Maura Grossman, a research professor at the University of Waterloo's School of Computer Science, told Business Insider. The new hire will be "rolling a rock up a steep hill," she added.
An analysis of annual Securities and Exchange Commission filings, published last month by financial data and analytics company AlphaSense, found that 428 companies worth at least $1 billion cited reputational harm associated with AI in their 2025 filings. In the same X post, Altman added that the role was for candidates looking to help the world figure out how to equip cybersecurity defenders with cutting-edge capabilities while ensuring that attackers can't use them for harm.
The posting for the position doesn't list a college degree or a minimum number of years of work experience as requirements. However, OpenAI says a candidate "might thrive" in the role if they have led technical teams, are comfortable making high-stakes technical judgments under pressure, can align diverse stakeholders on safety concerns, and have deep technical expertise in machine learning, AI safety, or adjacent risk domains.