Lead Data Engineer | London, UK

Capgemini

London

On-site

GBP 70,000 - 90,000

Full time

8 days ago

Job summary

A global technology consulting firm in London is looking for an experienced Lead Data Engineer to take ownership of data solutions and mentor a team. The ideal candidate will have strong expertise in the Azure Modern Data Platform, PySpark, and Python, along with robust experience in data ingestion and validation techniques. Join us to help leading organizations leverage technology for a sustainable future.

Benefits

Collaborative work environment
Career development opportunities
Diverse and inclusive culture

Qualifications

  • Proven experience with Azure Data Factory and Azure Databricks with Unity Catalog.
  • Strong proficiency in PySpark and Python for data processing.
  • Good understanding of data warehousing concepts and best practices.

Responsibilities

  • Manage the ingestion of varied and unstructured data from REST APIs.
  • Demonstrate excellent data analysis skills with strong SQL knowledge.
  • Perform thorough validation and verification using automation techniques.
  • Develop specific, targeted, and well-detailed project and stage plans.
  • Exhibit strong coordination and communication skills.

Skills

Data Ingestion
Data Analysis
Validation and Verification
Project Planning
Coordination and Communication
Mentorship

Education

Certification in relevant technologies (e.g., Azure, GCP)

Tools

Azure Data Factory
Azure Databricks
Jira
ADO
Confluence

Job description

Get The Future You Want!

Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role:

We are seeking a highly experienced Lead Data Engineer with a strong background in working with cross-functional teams, particularly Data Science, with expertise in the Azure Modern Data Platform and knowledge of GCP. The ideal candidate will take ownership of solutioning and mentoring the team, ensuring a smooth transition from Jupyter notebooks to Azure Databricks. Expertise in building ETL frameworks with PySpark, along with strong Python coding skills for data migration and transformation (on-premises or cloud), is essential. This role requires a deep understanding of both traditional and NoSQL databases, distributed data processing, and data transformation techniques.

  • Data Ingestion: Manage the ingestion of varied and unstructured data from REST APIs, as well as structured data from transactional databases. Experience in designing and developing a Medallion architecture with Databricks under Unity Catalog governance principles is required (see the illustrative sketch after this list).
  • Data Analysis and Transformation: Demonstrate excellent data analysis skills with strong SQL knowledge, along with robust reporting and data transformation capabilities, especially using PySpark throughout development.
  • Validation and Verification: Perform thorough validation and verification through automation, writing efficient unit test cases to ensure data quality.
  • Project and Stage Planning: Develop specific, targeted, and well-detailed project and stage plans.
  • Tool Proficiency: Utilize tools such as Jira, ADO, and Confluence effectively.
  • Coordination and Communication: Exhibit strong coordination and communication skills, reporting back on progress and alignment with the implementation strategy for data pipelines and production deployments.
  • Defect Management: Ensure transparency in defect management and work towards resolving them promptly, irrespective of the defect owner.
  • Mentorship: Mentor and guide the team, ensuring they can hit the ground running and effectively collaborate with the Data Science and DevOps teams.
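
By way of illustration, here is a minimal PySpark sketch of the kind of bronze-to-silver Medallion step described above. The paths, table names, columns, and quality rule are illustrative assumptions, not part of any actual Capgemini codebase:

```python
# Hypothetical Medallion-style bronze-to-silver step: land raw REST API
# payloads as bronze, then type, deduplicate, and validate them into a
# governed silver table. All names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw JSON ingested from a REST API and stored as-is.
bronze = spark.read.json("/mnt/landing/orders/*.json")

# Silver: typed, deduplicated, validated records.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") >= 0)  # simple data-quality rule
)

# With Unity Catalog, tables are addressed as catalog.schema.table.
silver.write.mode("overwrite").saveAsTable("main.sales.orders_silver")
```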

Your Profile:

  • Proven experience with Azure Data Factory and Azure Databricks with Unity Catalog.
  • Strong proficiency in PySpark and Python for data processing.
  • Strong SQL knowledge.
  • Good understanding of GCP services such as BigQuery.
  • Good understanding of Airflow scheduling (see the illustrative DAG sketch after this list).
  • Knowledge of Terraform for infrastructure management.
  • Experience with CI/CD tools and practices.
  • Understanding of data security, access controls, and compliance.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills, with leadership qualities to mentor and lead the team in providing data solutions.
  • Ability to work independently and as part of a team.
  • Experience with other cloud platforms and data engineering tools.
  • Proven experience with data warehousing concepts and best practices.
  • Certification in relevant technologies (e.g., Azure, GCP).
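
As a rough illustration of the Airflow scheduling mentioned above, a minimal DAG sketch; the DAG id, schedule, and task body are assumptions for illustration, not a prescribed implementation:

```python
# Hypothetical Airflow DAG showing daily scheduling of an ingestion job.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # one run per day (Airflow 2.4+ `schedule` argument)
    catchup=False,      # do not backfill missed runs
) as dag:
    # In a real pipeline this task would trigger an ADF pipeline or a
    # Databricks job rather than echo a message.
    run_ingest = BashOperator(
        task_id="run_ingest",
        bash_command="echo 'trigger ingestion here'",
    )
```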

About Capgemini

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 350,000 team members in more than 50 countries. With a strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Get The Future You Want | www.capgemini.com
