
Senior Data Engineer | London | Hybrid

DataBuzz

London

Hybrid

GBP 50,000 - 90,000

Full time

7 days ago

Job summary

An innovative firm is seeking a Senior Data Engineer to lead the design and maintenance of data architecture and infrastructure. The role centres on building efficient data pipelines and ETL processes, leveraging cloud platforms such as AWS, Azure, and GCP. The ideal candidate will have over six years of experience, particularly in Python and PySpark, and will work closely with data scientists and analysts to meet their data needs. Join a dynamic team and help build data solutions that drive business insights and performance.

Qualifications

  • 6+ years of experience in data engineering with a focus on Python, PySpark, and SQL.
  • Hands-on experience with ETL processes and cloud environments.

Responsibilities

  • Design and maintain data pipelines using Python, PySpark, and SQL.
  • Collaborate with data scientists to develop data solutions.

Skills

Python
PySpark
SQL
ETL processes

Tools

AWS
Azure
GCP

Job description

About the Role:

As a Senior Data Engineer, you will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should have a strong foundation in Python, PySpark, SQL, and ETL processes, with a demonstrated ability to deliver solutions in a cloud environment.

Position

Senior Data Engineer

Experience

6+ years

Location

London

Job Type

Hybrid, Permanent

Mandatory Skills:
  1. Design, build, and maintain data pipelines using Python, PySpark, and SQL.
  2. Develop and maintain ETL processes that move data from various sources to our data warehouse on AWS, Azure, or GCP.
  3. Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements.
  4. Develop and maintain data models and data dictionaries for our data warehouse.
  5. Develop and maintain documentation for our data pipelines and data warehouse.
  6. Continuously improve the performance and scalability of our data solutions.
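The pipeline work described above follows the classic extract-transform-load pattern. Purely as an illustration (the posting names PySpark and cloud warehouses but no specific schema; the table and column names below are hypothetical, and sqlite3 stands in for a real warehouse), a minimal ETL step in plain Python looks like this:

```python
import sqlite3

# Minimal ETL sketch: pull raw order records, normalise them, and load
# them into a "warehouse" table. All names (orders_raw, orders_clean,
# amount_pence) are hypothetical; a production pipeline of the kind the
# role describes would use PySpark DataFrames and a cloud warehouse
# rather than an in-memory sqlite3 database.

def extract(conn):
    # Extract: read raw rows from the source table.
    return conn.execute("SELECT id, amount_pence FROM orders_raw").fetchall()

def transform(rows):
    # Transform: convert pence to pounds and drop invalid (negative) rows.
    return [(oid, pence / 100.0) for oid, pence in rows if pence >= 0]

def load(conn, rows):
    # Load: write the cleaned rows into the warehouse table.
    conn.executemany("INSERT INTO orders_clean VALUES (?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract(conn)))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders_raw (id INTEGER, amount_pence INTEGER)")
    conn.execute("CREATE TABLE orders_clean (id INTEGER, amount_gbp REAL)")
    conn.executemany("INSERT INTO orders_raw VALUES (?, ?)",
                     [(1, 1250), (2, -5), (3, 300)])
    run_pipeline(conn)
    print(conn.execute("SELECT COUNT(*) FROM orders_clean").fetchone()[0])  # 2
```

The same extract/transform/load separation scales up directly: in PySpark the three steps become a read from source storage, DataFrame transformations, and a write to the warehouse.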
Qualifications:
  1. 6+ years of total experience.
  2. At least 4 years of hands-on experience with Python, PySpark, and SQL.