Research Engineer, Privacy
WFA Digital Insight
Demand for AI privacy specialists is rising as more companies invest in privacy-preserving technology. OpenAI is at the forefront of this movement, prioritizing responsible data use in its mission to advance artificial general intelligence. As a Research Engineer, Privacy, you'll join a team shaping how frontier AI systems handle user data. Before applying, consider the evolving data-privacy landscape and the role you can play in shaping it.
Job Description
About the Role
The Research Engineer, Privacy role at OpenAI is a unique opportunity to work on the frontlines of safeguarding user data while preserving the usability and efficiency of AI systems. As part of the Privacy Engineering Team, you'll develop and implement cutting-edge privacy-preserving technologies, such as differential privacy and federated learning. Your work will directly shape the development of Artificial General Intelligence (AGI) that prioritizes user privacy and safety.
The Privacy Engineering Team is committed to upholding the highest standards of data privacy and security across all OpenAI products and systems. You'll collaborate with cross-functional teams to equip them with the tools they need for responsible data use. This approach is integral to OpenAI's mission of safely introducing AGI that offers widespread benefits.
In this role, you'll have the opportunity to work with a talented team of engineers and researchers who are passionate about advancing the field of AI privacy. You'll be part of a fast-paced, dynamic environment that values innovation, collaboration, and open communication.
What You Will Do
- Design and prototype privacy-preserving machine-learning algorithms, such as differential privacy and secure aggregation, that can be deployed at OpenAI scale.
- Measure and strengthen model robustness against privacy attacks, such as membership inference and model inversion, while balancing utility with provable guarantees.
- Develop internal libraries, evaluation suites, and documentation to make cutting-edge privacy techniques accessible to engineering and research teams.
- Lead deep-dive investigations into the privacy-performance trade-offs of large models, publishing insights that inform model-training and product-safety decisions.
- Define and codify privacy standards, threat models, and audit procedures that guide the entire ML lifecycle, from dataset curation to post-deployment monitoring.
- Collaborate with Security, Policy, Product, and Legal teams to translate evolving regulatory requirements into practical technical safeguards and tooling.
- Participate in the development of novel privacy-preserving techniques, such as federated learning and methods for detecting and mitigating training-data memorization.
- Work closely with the research team to stay up-to-date with the latest advancements in AI privacy and implement them in production-ready systems.
- Develop and maintain thorough documentation of your work, including design decisions, testing results, and performance metrics.
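To make the first responsibility above concrete, here is a minimal sketch of the Laplace mechanism, one of the basic building blocks of differential privacy. It is illustrative only (the function name and parameters are hypothetical, not OpenAI code): Laplace noise scaled to the query's sensitivity hides any individual's contribution to a count, while aggregates of many releases remain useful.

```python
import math
import random

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon) via inverse-CDF
    sampling: smaller epsilon means a stronger privacy guarantee and more noise.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(0)
# Any single release is masked by noise, but the average of many
# independent releases concentrates around the true value (42 here).
releases = [laplace_mechanism(42, epsilon=0.5) for _ in range(20000)]
mean_release = sum(releases) / len(releases)
```

The privacy/utility trade-off mentioned in the responsibilities shows up directly in `epsilon`: halving it doubles the noise scale, strengthening the guarantee at the cost of accuracy.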
What We Are Looking For
- Hands-on research or production experience with privacy-enhancing technologies (PETs), such as differential privacy and federated learning.
- Fluency in modern deep-learning stacks, including PyTorch and JAX, and the ability to turn cutting-edge papers into reliable, well-tested code.
- Experience with stress-testing models and explaining complex attack vectors to non-experts.
- A track record of publishing or implementing novel privacy or security work, with a strong desire to bridge the gap between academia and real-world systems.
- Ability to communicate complex ideas clearly and succinctly, both in writing and verbally.
- Experience working in a fast-paced, cross-disciplinary environment, with a strong ability to collaborate with engineers, researchers, and other stakeholders.
- Strong understanding of machine learning fundamentals, including model training, evaluation, and deployment.
- Familiarity with security and privacy principles, including threat modeling and risk assessment.
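As a sketch of the kind of privacy attack referenced above, here is a toy loss-threshold membership-inference attack. All names and the simulated loss distributions are illustrative assumptions: overfit models tend to assign lower loss to training members than to non-members, so a simple threshold on the loss can distinguish them, and the gap between true- and false-positive rates measures the leakage.

```python
import random

def loss_threshold_attack(losses, threshold):
    """Predict 'member' whenever the model's loss on an example is below threshold."""
    return [loss < threshold for loss in losses]

random.seed(1)
# Simulated losses: an overfit model gives training members lower loss.
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmember_losses = [random.gauss(0.8, 0.3) for _ in range(1000)]

member_preds = loss_threshold_attack(member_losses, threshold=0.5)
nonmember_preds = loss_threshold_attack(nonmember_losses, threshold=0.5)

true_positive_rate = sum(member_preds) / len(member_preds)
false_positive_rate = sum(nonmember_preds) / len(nonmember_preds)
# Attack advantage (TPR - FPR): 0 means no leakage, 1 means total leakage.
advantage = true_positive_rate - false_positive_rate
```

Defenses like differential privacy bound exactly this advantage, which is why stress-testing models against such attacks pairs naturally with deploying provable guarantees.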
Nice to Have
- Experience with secure multi-party computation and homomorphic encryption.
- Knowledge of regulatory requirements, such as GDPR and CCPA, and experience with compliance frameworks.
- Familiarity with cloud-based infrastructure and containerization using Docker.
- Experience with agile development methodologies and version control systems, such as Git.
- Strong understanding of human-centered design principles and experience with user research.
Benefits and Perks
- Competitive salary and equity package.
- Comprehensive health, dental, and vision insurance.
- Flexible PTO policy and paid holidays.
- Remote work stipend and home office setup support.
- Access to cutting-edge technologies and training opportunities.
- Collaborative and dynamic work environment with a team of talented engineers and researchers.
- Opportunity to work on high-impact projects that advance the field of AI privacy.
- Recognition and rewards for outstanding performance and contributions.
- Professional development and growth opportunities, including conference attendance and publication support.
How to Stand Out
- Develop a strong understanding of machine learning fundamentals, including model training, evaluation, and deployment.
- Familiarize yourself with the latest advancements in AI privacy, including differential privacy and federated learning, to demonstrate your expertise.
- Showcase your ability to communicate complex ideas clearly and succinctly, both in writing and verbally, to impress interviewers.
- Highlight your experience working in a fast-paced, cross-disciplinary environment, and your ability to collaborate with engineers, researchers, and other stakeholders.
- Be prepared to discuss your experience with stress-testing models and explaining complex attack vectors to non-experts, and provide examples from your previous work.
- Emphasize your strong understanding of security and privacy principles, including threat modeling and risk assessment, to demonstrate your commitment to responsible AI development.
- Consider creating a portfolio that showcases your work in AI privacy, including any research papers, projects, or code repositories that demonstrate your skills and expertise.
This is a remote position listed on WFA Digital, the platform for professionals who work from anywhere. Browse more remote jobs across all categories.