Senior Software Engineer, AI

Lattice · Remote (Canada)
Software Development

WFA Digital Insight

As the demand for AI and machine learning specialists continues to soar, with over 45% of companies investing in AI-powered solutions, the need for skilled professionals who can develop and evaluate these systems has never been more pressing. In this remote role at Lattice, you'll have the chance to work on cutting-edge AI projects and contribute to the development of innovative evaluation methodologies. With the remote job market offering greater flexibility than ever, candidates with a strong background in software engineering, AI, and machine learning are in high demand. Before applying, consider highlighting your experience with AI evaluation frameworks, agent architecture, and technical leadership.

Job Description

## About the Role

The Senior Software Engineer, AI at Lattice is a pivotal role that combines the development of AI systems with the creation of evaluation frameworks to ensure these systems perform optimally. As part of the AI Engineering team, you will be responsible for designing and shipping robust, end-to-end AI evaluation frameworks that cover all aspects of AI performance, from offline evaluations to human-in-the-loop feedback loops. Your work will directly impact how AI products are measured, improved, and trusted at scale.

The role is highly collaborative, working closely with product and design teams to deliver exceptional user experiences. Given the remote nature of this position, strong communication and project management skills are essential. You will be part of a dynamic team that values continuous improvement, both in terms of the product and the craft of engineering itself.

Day-to-day, you will be involved in architecting and implementing reusable agent infrastructure, building and scaling RAG pipelines, and making informed decisions about the technology stack. Your experience in production AI/ML systems, especially with LLM-based systems, will be crucial in driving the technical direction of the team.

## What You Will Do

  • Design and ship a robust, end-to-end AI evaluation framework that covers offline evaluations, production tracing, and human feedback loops.
  • Define and instrument key metrics for AI performance, including agent task completion rates, hallucination rates, response quality, user engagement, and downstream business outcomes.
  • Build and maintain evaluation datasets, test harnesses, and automated scoring pipelines to detect regressions before they reach production.
  • Identify and surface factors that drive agent quality improvement, giving the team clear signals on where to focus.
  • Architect and implement reusable agent infrastructure, including multi-turn conversation workflows and recommendation services.
  • Build and scale RAG pipelines and retrieval infrastructure, focusing on vector store management and retrieval quality optimization.
  • Make strategic decisions on build vs. buy for LLM providers, agent frameworks, and evaluation tooling, considering capability, cost, latency, and vendor risk.
  • Contribute to the development of production AI systems with a strong focus on reliability, observability, and performance.
  • Own projects end to end, driving them to completion and ensuring the right resources are allocated at the right time.
  • Partner with engineering leads and managers to inform technical direction on agent quality and evaluation strategy.
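The evaluation responsibilities above can be sketched in miniature. The dataset, exact-match scorer, and regression threshold below are illustrative assumptions for the sketch, not Lattice's actual framework; real scoring pipelines layer in richer metrics (LLM-as-judge, semantic similarity) and production traces:

```python
# Minimal sketch of an automated scoring pipeline that flags regressions
# before they reach production. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    expected: str


def exact_match_score(prediction: str, expected: str) -> float:
    """Score 1.0 on a normalized exact match, else 0.0."""
    return 1.0 if prediction.strip().lower() == expected.strip().lower() else 0.0


def run_eval(agent, cases, baseline: float, threshold: float = 0.05):
    """Run the agent over an evaluation set; flag a regression when the
    average score drops more than `threshold` below the baseline."""
    scores = [exact_match_score(agent(c.question), c.expected) for c in cases]
    avg = sum(scores) / len(scores)
    return {"avg_score": avg, "regression": avg < baseline - threshold}


# Usage: a stub agent stands in for a real LLM call.
cases = [EvalCase("2+2?", "4"), EvalCase("capital of France?", "Paris")]
answers = {"2+2?": "4", "capital of France?": "Paris"}
result = run_eval(lambda q: answers[q], cases, baseline=1.0)
```

In practice the scorer and dataset would live in version control alongside the agent, so every change runs against the same harness in CI.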
## What We Are Looking For

  • 5+ years of professional software engineering experience, with a significant focus on production AI/ML systems.
  • Deep hands-on experience with LLM-based systems, including prompt engineering, RAG pipelines, agent orchestration, evaluation metrics, and model fine-tuning.
  • Proven ability to work with data and understand statistical concepts, especially in experimental settings.
  • Experience in building and operating agentic AI systems in production, including multi-step workflows and multi-agent topologies.
  • Strong command of AI evaluation, including the design of evaluation frameworks and the differentiation between meaningful and vanity metrics.
  • Production-grade Python engineering skills, with an emphasis on clean, maintainable, and testable code.
  • Experience with LangGraph or comparable agent orchestration frameworks, beyond tutorials.
  • Familiarity with LangSmith or comparable LLM observability tooling for tracing, evaluation, and debugging.
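As a toy illustration of the retrieval step in the RAG pipelines mentioned above (the documents, embeddings, and similarity ranking here are made up for the sketch; production systems use a real vector store and learned embeddings):

```python
# Hypothetical retrieval step of a RAG pipeline: rank documents by
# cosine similarity to a query embedding and return the top-k texts.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(query_vec, docs, k=2):
    """Return the top-k document texts ranked by similarity to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]


# Usage with tiny 2-d "embeddings" standing in for real model output.
docs = [
    {"text": "performance reviews", "vec": [1.0, 0.1]},
    {"text": "payroll schedules", "vec": [0.0, 1.0]},
    {"text": "goal tracking", "vec": [0.9, 0.3]},
]
top = retrieve([1.0, 0.0], docs, k=2)
```

Retrieval quality work in practice focuses less on the similarity function itself and more on chunking, embedding choice, and measuring whether the retrieved context actually improves answers.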
## Nice to Have

  • Experience with cloud computing platforms, such as AWS or GCP, and containerization using Docker.
  • Knowledge of agile development methodologies and version control systems like Git.
  • Experience with automated testing frameworks and continuous integration/continuous deployment (CI/CD) pipelines.
  • Participation in open-source projects or personal projects that demonstrate your passion for AI and software engineering.
## Benefits and Perks

  • Competitive salary and equity package.
  • Comprehensive health, dental, and vision insurance.
  • Generous PTO policy, including vacation days, sick leave, and holidays.
  • Remote work stipend to support your home office setup.
  • Professional development opportunities, including conference sponsorships and training programs.
  • Access to cutting-edge technologies and tools in the AI and software engineering domains.
  • Collaborative and dynamic work environment with a team of experienced professionals.

How to Stand Out

  • Highlight your experience with AI evaluation frameworks, agent architecture, and technical leadership in your resume and cover letter.
  • Prepare to discuss your approach to AI system development and evaluation, including how you handle challenges and complexities.
  • Showcase your projects that demonstrate your skills in AI and software engineering, even if they are personal projects or contributions to open-source initiatives.
  • Emphasize your ability to work in a remote setting, including your experience with remote collaboration tools and your strategies for staying productive.
  • Be ready to talk about your understanding of the current AI landscape, including recent advancements and challenges in the field.
  • Research the company to understand its products, mission, and values, and be prepared to discuss how your skills and experience align with these aspects.
  • Practice your coding skills, as you may be asked to complete a coding challenge or participate in a technical interview.

This is a remote position listed on WFA Digital, the platform for professionals who work from anywhere.