Model Policy Manager, Chemical & Biological Risk

OpenAI · San Francisco

Job Description

About the Team

Our Safety Systems team (https://openai.com/safety/safety-systems) is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency. The Model Policy team aligns model behavior with desired human values and norms. We co-design policy with models and for models, driving rapid iteration on policy taxonomies based on data and defining evaluation criteria for foundation models' ability to reason about safety. Key focus areas include catastrophic risk, mental health, teen safety, and multimodal safety.

About the Role

Providing access to frontier AI systems raises complex questions around dual-use science and catastrophic risk. How should models respond to requests involving chemical synthesis, biological experimentation, or pathogen research? Where is the boundary between legitimate scientific inquiry and information that could enable misuse? How do we design policies that meaningfully reduce risk without unnecessarily restricting beneficial research?

This is a senior role in which you'll help shape policy creation and development at OpenAI for addressing biological and chemical risks. You will develop structured policy frameworks and taxonomies to guide safe model behavior. The role sits at the intersection of biosecurity expertise, AI safety research, and policy design. You will help ensure that frontier AI systems can support beneficial life sciences research, such as drug discovery, public health, and biosafety, while reducing the risk that these capabilities could be misused.

Our relevant publications:

  • Preparedness Framework: https://openai.com/index/updating-our-preparedness-framework/
  • Preparing for Future AI Capabilities in Biology: https://openai.com/index/preparing-for-future-ai-capabilities-in-biology/
  • Safety Evaluations Hub: https://openai.com/safety/evaluations-hub/
  • OpenAI GPT-5 System Card: https://openai.com/index/gpt-5-system-card/
  • Evaluating Fairness in ChatGPT: https://openai.com/index/evaluating-fairness-in-chatgpt/
  • Improving Model Safety Behavior with Rule-Based Rewards: https://openai.com/index/improving-model-safety-behavior-with-rule-based-rewards/
  • OpenAI Model Spec: https://openai.com/index/introducing-the-model-spec/

Your Responsibilities:

  • Design and maintain model policies governing chemical and biological risk, defining how models should safely handle dual-use scenarios.
  • Develop structured taxonomies of chemical and biological risk that inform model training data, evaluation benchmarks, and safety monitoring systems.
  • Translate biosecurity and chemical security expertise into actionable model behavior, working closely with research and engineering teams to operationalize policy in training and evaluation pipelines.
  • Develop a broad range of subject matter expertise while maintaining agility across topics.
  • Identify emerging risk vectors where frontier AI capabilities could meaningfully lower barriers to harmful activity and develop mitigation strategies.
  • Engage with internal and external subject-matter experts in biosecurity, biodefense, and chemical safety to ensure policies reflect real-world risk landscapes.

You might thrive in this role if you:

  • Have strong domain expertise in chemistry, biology, biosecurity, or related fields and are motivated to translate that expertise into principled, operational policies that scale to frontier AI systems.
  • Have experience researching or working with LLMs, machine learning, AI governance, technology policy, or related areas, and enjoy tackling structured reasoning and classification problems—such as defining boundaries between legitimate scientific inquiry and potentially harmful applications.
  • Have experience designing, refining, or enforcing policies or safeguards for complex systems, whether in AI/ML environments, scientific research governance, national security contexts, or other high-stakes technical domains.
  • Are comfortable navigating ambiguous, high-stakes problem spaces, balancing risk reduction with the benefits of scientific openness and innovation.
  • Enjoy building new frameworks from first principles, reasoning about open-ended problems, and generating novel approaches under uncertainty. You take ownership of problems end-to-end—from defining the conceptual framework through collaborating with research and engineering teams to implement and iterate on solutions.
  • Have experience working at the intersection of science, policy, and emerging technology, such as in life sciences research, national security, risk and threat assessment, technology policy, or AI safety.

Workplace & Location

This role is based in our San Francisco office. We encourage you to apply even if you prefer a different work location, as circumstances may change over time. We offer relocation support to new employees, and we use a hybrid model: three days in the office per week, with optional work from home on Thursdays and Fridays.
