Staff Software Engineer, AI
WFA Digital Insight
As demand for AI and machine learning experts surges, companies like Lattice are at the forefront. Professionals with experience designing and delivering AI/ML systems are in high demand, and Lattice stands out for its commitment to innovation and trust in AI technologies. Before applying, candidates should be prepared to demonstrate strong technical leadership and collaboration skills, as well as the ability to operate AI systems in cloud environments. With remote AI roles reportedly up 25%, this is an exciting time for professionals to advance their careers.
Job Description
About the Role
Lattice is seeking a highly skilled Staff Software Engineer, AI, to join its team in shaping the future of AI engineering. As a key member of the AI Engineering team, you will architect and scale the infrastructure that powers AI quality, reliability, and reuse across Lattice. This role is central to defining how intelligence works across the platform and ensuring AI systems are measured, improved, and trusted in production.
The AI Engineering team at Lattice is dedicated to pushing the boundaries of AI capabilities, and as a Staff Software Engineer, AI, you will play a pivotal role in that mission. Your expertise in AI evaluation and quality, agent architecture, and production systems will drive the technical direction for agent quality and evaluation strategy across Lattice engineering teams.
What You Will Do
- Design and scale an end-to-end AI evaluation framework that includes offline evaluations, production tracing, and human feedback loops.
- Define meaningful performance metrics, such as task completion, hallucination rate, response quality, engagement, and business impact, and build the datasets and automated scoring systems needed to prevent regressions.
- Identify and quantify the drivers of agent quality improvement and set methodological standards for evaluation across the organization.
- Architect reusable agent infrastructure using LangGraph or comparable frameworks, including multi-turn workflows, LLM DAGs, recommendation systems, and standardized topologies.
- Build and scale RAG pipelines, vector retrieval systems, and production-grade AI infrastructure with strong reliability, observability, and performance.
- Make principled build-vs-buy decisions across LLM providers, agent frameworks, and evaluation tooling, balancing capability, cost, latency, and risk.
- Engineer AI systems as reusable internal platforms that multiply product engineering velocity at Lattice.
- Own projects end-to-end, from scoping and design through execution and delivery.
- Set technical direction for agent quality and evaluation strategy across Lattice engineering teams.
- Lead rigorous discussions on AI system design and evaluation methodology.
- Raise the AI engineering bar through mentorship, code review, and clear technical communication across engineering and leadership.
What We Are Looking For
- 8+ years of professional experience writing and maintaining production-level code, with 5+ years in designing, delivering, and operating AI/ML systems in production.
- Deep production experience with LLM systems, including prompting, RAG, agent orchestration, evaluation frameworks, and fine-tuning.
- Experience building and operating agentic systems and managing their failure modes.
- Strong command of AI evaluation methodology and statistical experimentation.
- Strong system design judgment across scalability, latency, accuracy, reliability, and cost.
- Production-grade Python skills, with the ability to write clean, maintainable, and testable systems.
- Experience with LangGraph or comparable agent orchestration frameworks and LLM observability/evaluation tooling.
- Experience with vector databases, such as Pinecone, and retrieval system design.
- Familiarity with operating AI systems in AWS or comparable cloud environments, including CI/CD, monitoring, and deployment workflows.
Nice to Have
- Experience with RLHF, LoRA, or other model adaptation techniques.
- Background in traditional ML and judgment in selecting ML vs. LLM approaches.
- Experience with MLOps and observability tooling, such as MLflow or Datadog.
- Published research, talks, or open-source contributions in AI/ML.
- Experience in HR tech or other trust-sensitive domains.
Benefits and Perks
- Competitive compensation package.
- Opportunity to work with cutting-edge AI technologies and contribute to the development of innovative products.
- Collaborative and dynamic work environment with a team of experienced professionals.
- Flexible remote work arrangements, with the option to work from anywhere in British Columbia, Canada.
- Access to professional development opportunities, including training and conference participation.
- Comprehensive health and wellness benefits, including mental health support.
- Generous paid time off and holiday policy.
How to Stand Out
- Ensure your resume and online profiles highlight your experience with AI/ML systems, particularly in production environments.
- Prepare to discuss your approach to AI system design and evaluation methodology during interviews.
- Showcase your ability to communicate complex technical concepts to both technical and non-technical audiences.
- Be ready to provide examples of your experience with agent orchestration frameworks and LLM observability/evaluation tooling.
- Research Lattice's current projects and initiatives to demonstrate your interest and knowledge of the company's mission and values.
- Consider creating a portfolio or repository of your personal projects or contributions to open-source AI/ML projects to share with the hiring team.
This is a remote position listed on WFA Digital, the platform for professionals who work from anywhere.