OpenAI has commenced a search for a new Head of Preparedness. The role focuses on identifying and managing emerging risks tied to advanced AI.
The announcement arrives at a critical moment. AI systems are becoming more powerful, and concerns around misuse, mental health, and security continue to grow.
CEO Sam Altman addressed these concerns directly in a recent post on X. He said AI models are now “starting to present some real challenges.”
Those challenges span multiple areas, from cybersecurity to mental well-being.

Risk
AI models today do more than generate text; they analyze systems, write code, and uncover weaknesses.
Altman noted that some models are now so skilled at computer security that they can identify critical vulnerabilities. This creates opportunity, but it also creates risk.
On the positive side, these tools can help cybersecurity defenders strengthen systems. However, the same capabilities could be exploited by attackers if released without safeguards.
Because of this, preparedness has become a priority. Altman encouraged qualified candidates to apply.
The Role
According to OpenAI’s job listing, the Head of Preparedness will be responsible for executing the company’s Preparedness Framework.
This framework explains how OpenAI tracks and prepares for frontier AI capabilities that could cause severe harm. The role covers a wide range of risks. Some are immediate, such as phishing attacks; others are more speculative, including biological risks and concerns about systems that can improve themselves over time.
In simple terms, the job is about foresight. It requires planning for threats before they fully emerge.
Compensation
OpenAI has listed compensation for the role at $555,000 per year, plus equity.
That figure reflects the level of responsibility involved: the role requires coordination across technical, policy, and leadership teams, and the decisions made here could affect millions of users.
History
OpenAI first announced the creation of its preparedness team in 2023. At the time, the company said the group would study potential “catastrophic risks.”
These ranged from near-term threats, such as fraud and phishing, to more distant scenarios, including nuclear risks.
Less than a year later, changes followed. Aleksander Madry, who served as Head of Preparedness, was reassigned to a role focused on AI reasoning.
Since then, other safety-focused executives have either left OpenAI or moved into roles outside preparedness and safety.
Mental Health Risks
Altman also highlighted the potential impact of AI models on mental health.
This issue has gained attention as generative AI chatbots become more conversational and emotionally responsive.
Recent lawsuits allege that ChatGPT reinforced user delusions, increased social isolation, and, in some cases, contributed to suicide.
These claims have intensified public debate around AI responsibility. OpenAI has said it continues to improve ChatGPT’s ability to recognize signs of emotional distress.
The company also says it is working to better connect users to real-world support when needed. Those commitments appear to have limits, however: OpenAI has stated it may adjust its safety requirements if a competing AI lab releases a high-risk model without similar protections.

