Offshore Pod Leads (TBD, AN, IN)
WFA Digital Insight
As demand for skilled data engineers continues to rise, with job postings up 25% over the past year, NTT DATA is looking for exceptional leaders to oversee its data platform modernization initiative. Focused on migrating to a unified Databricks Lakehouse architecture, these roles require a rare blend of technical expertise and leadership skill. Candidates should be prepared to navigate complex legacy systems and lead high-performing teams. With the global data engineering market expected to reach $77 billion by 2027, this is an exciting time to join a forward-thinking organization like NTT DATA.
Job Description
About the Role
The Offshore Pod Leads will play a crucial role in NTT DATA's data platform modernization initiative, overseeing the migration from legacy ETL ecosystems to a unified Databricks Lakehouse architecture. This is a high-impact, high-visibility program that requires experienced technical leaders who can navigate complex systems and lead skilled engineering teams. The successful candidates will be responsible for technical excellence, delivery velocity, and team development throughout the engagement.
The role involves working closely with the client to understand their requirements and develop a comprehensive migration plan. The Offshore Pod Leads will be jointly accountable for the technical design and implementation of the migration, ensuring that all migrated pipelines meet data quality, SLA, and observability requirements.
The ideal candidates will have a strong technical background, excellent leadership skills, and experience in data engineering. They will be able to communicate complex technical concepts to both technical and non-technical stakeholders and manage expectations around scope, timelines, and quality.
What You Will Do
- Own the end-to-end technical design and implementation of the migration from the respective source platform to Databricks Lakehouse
- Conduct thorough assessments of existing ETL jobs, analyzing lineage, dependencies, transformation logic, scheduling, and data quality rules (see the inventory sketch below)
- Define migration patterns, reusable frameworks, and coding standards to be adopted across the pod
- Architect scalable, cost-efficient pipelines using Databricks PySpark, Spark SQL, and Delta Live Tables (DLT) as appropriate (see the pipeline sketch below)
- Drive adoption of software engineering best practices within the pod: version control (Git), CI/CD, unit testing, and code review (see the test sketch below)
- Directly lead a pod of 4–6 Data Engineers, providing technical mentorship, task assignment, code reviews, and unblocking day-to-day impediments
- Manage sprint planning, backlog refinement, and progress tracking against migration milestones
- Serve as the primary technical point of contact for the pod's workstream with the client
- Translate complex technical concepts and migration trade-offs into clear, concise communications for both technical and non-technical stakeholders
- Participate in program-level status reviews, architecture governance meetings, and client steering committees as required
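To give candidates a concrete sense of the assessment work, here is a minimal sketch of how a pod might inventory existing AWS Glue jobs with boto3 as a starting point for lineage and dependency analysis. It assumes AWS credentials are already configured; the summary fields collected are illustrative, not a prescribed NTT DATA method.

```python
# Hypothetical assessment helper: build a basic inventory of AWS Glue jobs
# as raw material for lineage, dependency, and migration-effort analysis.
import boto3

def inventory_glue_jobs():
    glue = boto3.client("glue")
    jobs = []
    # get_jobs is paginated; walk every page to capture the full estate
    for page in glue.get_paginator("get_jobs").paginate():
        for job in page["Jobs"]:
            jobs.append({
                "name": job["Name"],
                "glue_version": job.get("GlueVersion"),
                "command": job["Command"]["Name"],  # e.g. glueetl
                "script": job["Command"].get("ScriptLocation"),
            })
    return jobs

if __name__ == "__main__":
    for j in inventory_glue_jobs():
        print(j["name"], "->", j["script"])
```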
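For the pipeline work, a minimal Delta Live Tables sketch in PySpark, showing a raw-to-validated pattern with declarative data quality expectations. The table names, columns, and landing path (raw_orders, orders_clean, order_id, amount, /mnt/landing/orders/) are hypothetical placeholders; the actual migration patterns would be defined by the pod.

```python
# Hypothetical DLT pipeline: ingest raw files with Auto Loader, then publish
# a validated table with quality expectations enforced. Runs inside a
# Databricks DLT pipeline, where `spark` and the dlt module are provided.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed from the legacy extract (illustrative)")
def raw_orders():
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader incremental ingest
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")          # illustrative path
    )

@dlt.table(comment="Validated orders with basic quality rules enforced")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_clean():
    return (
        dlt.read_stream("raw_orders")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

Expectations like these are one way DLT surfaces the data quality and observability requirements mentioned above: rows dropped by a rule are counted in the pipeline's event log rather than silently lost.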
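And for the engineering practices (Git, CI/CD, unit testing, code review), a sketch of the kind of pytest unit test that could run locally or in CI against a small SparkSession. The transformation under test, normalize_amounts, is a hypothetical shared helper; the pattern, not the function, is the point.

```python
# Hypothetical pytest sketch: exercise a shared PySpark transformation
# against a local SparkSession, so it can run in CI without a cluster.
import pytest
from pyspark.sql import SparkSession, functions as F

def normalize_amounts(df):
    # Illustrative helper: cast amount to a fixed-precision decimal
    # and drop negative values.
    return (
        df.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .filter(F.col("amount") >= 0)
    )

@pytest.fixture(scope="session")
def spark():
    return (
        SparkSession.builder.master("local[2]")
        .appName("pod-unit-tests")
        .getOrCreate()
    )

def test_normalize_amounts_drops_negatives(spark):
    df = spark.createDataFrame([(1, "10.50"), (2, "-3.00")], ["id", "amount"])
    result = normalize_amounts(df).collect()
    assert [row.id for row in result] == [1]
```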
What We Are Looking For
- At least 12 years of relevant experience in data engineering, with a strong technical background in ETL, data warehousing, and data architecture
- Proven experience in leading high-performing teams and managing complex technical projects
- Excellent technical skills in areas such as Databricks, AWS Glue, Informatica PowerCenter, and Amazon Kinesis Data Streams
- Strong understanding of data engineering principles, including data quality, data governance, and data security
- Experience with Agile development methodologies and version control systems such as Git
- Excellent communication and interpersonal skills, with the ability to communicate complex technical concepts to both technical and non-technical stakeholders
- Strong problem-solving skills, with the ability to analyze complex problems and develop creative solutions
Nice to Have
- Experience with Databricks PySpark, Spark SQL, and Delta Live Tables (DLT)
- Knowledge of cloud-based data engineering platforms, including AWS and Azure
- Experience with data governance and data quality tooling, such as data catalogs and lineage tracking
- Certification in data engineering or a related field, such as AWS Certified Data Engineer - Associate or Google Cloud Professional Data Engineer
Benefits and Perks
- Competitive salary and benefits package
- Opportunity to work with a leading global IT services company
- Collaborative and dynamic work environment
- Professional development and growth opportunities
- Flexible working hours and remote work options
- Access to cutting-edge technologies and tools
- Recognition and reward programs for outstanding performance
- Comprehensive health and wellness program
- Generous paid time off and vacation policy
How to Stand Out
- Make sure to highlight your experience with data engineering tools and technologies, such as Databricks, AWS Glue, and Informatica PowerCenter.
- Emphasize your leadership skills and experience in managing high-performing teams.
- Be prepared to discuss your approach to data governance and data quality, and how you have implemented these principles in previous roles.
- Show a willingness to learn and adapt to new technologies and tools, such as Databricks PySpark and Delta Live Tables (DLT).
- Highlight your excellent communication and interpersonal skills, and provide examples of how you have communicated complex technical concepts to non-technical stakeholders.
- Be prepared to discuss your experience with Agile development methodologies and version control systems such as Git.
- Research the company and the role, and be prepared to ask informed questions during the interview process.
This is a remote position listed on WFA Digital, the platform for professionals who work from anywhere.