Research Engineer - JetBrains AI
Job Description
About the Role
As a Research Engineer at JetBrains AI, you will be part of a team that is revolutionizing the way developers work by bringing AI capabilities to all JetBrains products. Your primary focus will be on developing Large Language Models from scratch and deploying them to production environments, making them accessible to users worldwide. You will work closely with stakeholders to convert business requirements into technical specifications and ensure that the models meet the needs of end users.

The role requires a deep understanding of NLP and transformer-based approaches, as well as experience with modern deep learning frameworks such as PyTorch. You will work with a talented team of engineers who are passionate about AI and committed to delivering high-quality products. The company values independence, creativity, and a willingness to learn and adapt, making for an exciting and dynamic work environment.
JetBrains AI is part of a larger organization that has been at the forefront of developing innovative software development tools for over two decades. The company has a strong culture of innovation and collaboration, and is committed to creating a workplace that is inclusive and supportive of all employees.
What You Will Do
- Work with stakeholders to convert business requirements into technical specifications
- Design, deploy, and support Large Language Models in production environments
- Train LLMs from scratch on a large GPU cluster
- Collect and process pre-training and fine-tuning datasets
- Support and improve existing subsystems
- Collaborate with the engineering team to identify and prioritize tasks
- Develop and implement automated testing and validation procedures
- Stay up-to-date with the latest developments in the LLM field and apply this knowledge to improve the models
- Work with the product team to ensure that the models meet the needs of the end-users
- Participate in code reviews and contribute to the improvement of the overall code quality
What We Are Looking For
- Experience in design, deployment, and support of production ML systems
- A strong theoretical background in NLP and transformer-based approaches
- Proficiency with modern deep learning frameworks such as PyTorch
- Experience with distributed training of multi-billion parameter models
- Attention to detail and great communication skills
- Ability to work independently and collaboratively as part of a team
- Strong problem-solving skills and ability to think creatively
- Experience with LLM inference frameworks such as vLLM, DeepSpeed, or TensorRT
- Knowledge of MLOps tools and practices, including CI/CD for ML
- Familiarity with Kubernetes (K8s) and Kubeflow
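To give a concrete sense of the distributed-training experience listed above, here is a minimal, illustrative PyTorch sketch using DistributedDataParallel (DDP). It runs as a single CPU process with the gloo backend; the model, data, and hyperparameters are placeholders chosen for the example, not anything specified in this posting.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int) -> float:
    """Run a few dummy optimization steps under DDP and return the last loss."""
    # Rendezvous settings for a single-machine run (placeholders for the example).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Placeholder model; DDP wraps it so gradients sync across ranks.
    model = DDP(nn.Linear(16, 4))
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    loss = torch.tensor(0.0)
    for _ in range(3):
        x = torch.randn(8, 16)                 # dummy batch
        loss = model(x).pow(2).mean()          # dummy objective
        optimizer.zero_grad()
        loss.backward()                        # gradients are all-reduced here
        optimizer.step()

    dist.destroy_process_group()
    return loss.item()
```

In production, the same structure is launched with one process per GPU (e.g. via `torchrun`) and the NCCL backend; multi-billion-parameter models additionally require sharding strategies such as FSDP or DeepSpeed ZeRO rather than plain DDP.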
Nice to Have
- Experience with LLM alignment techniques such as RLHF/RLAIF
- Scientific publications in the NLP field
- Experience with other deep learning frameworks such as TensorFlow
- Knowledge of software development methodologies such as Agile
Benefits and Perks
- Opportunity to work on a cutting-edge AI project with a talented team of engineers
- Competitive salary and benefits package
- Flexible working hours and remote work options
- Access to a large GPU cluster for training and deploying models
- Professional development opportunities, including training and conference attendance
- Collaborative and dynamic work environment
- Recognition and reward for outstanding performance
- Comprehensive health insurance and wellness programs
- Generous paid time off and vacation days
How to Stand Out
- Make sure you have a strong understanding of transformer-based approaches and experience with modern deep learning frameworks such as PyTorch.
- Highlight your experience with distributed training of multi-billion parameter models and your ability to work with large datasets.
- Showcase your problem-solving skills and ability to think creatively, and provide examples of how you have applied these skills in previous roles.
- Be prepared to discuss your experience with LLM inference frameworks and MLOps tools and practices.
- If you have experience with LLM alignment techniques or scientific publications in the NLP field, be sure to highlight these in your application.
- Research the company culture and values, and be prepared to discuss how you align with these and how you can contribute to the team.
- Practice your communication skills, as the ability to communicate complex technical ideas is essential for this role.
This is a remote position listed on WFA Digital, the platform for professionals who work from anywhere.