Senior Data Engineer [UAE Based]

ZipRecruiter

London

On-site

GBP 70,000 - 90,000

Full time

3 days ago

Job summary

A leading company is seeking a Senior Data Engineer to design and maintain scalable data systems. You will lead the development of data pipelines, ensure data quality, and collaborate with cross-functional teams to deliver high-performance data platforms. This role requires expertise in modern data engineering practices and cloud solutions.

Qualifications

  • 8+ years of experience in data engineering within a production environment.
  • Experience building stream processing systems using Apache Kafka.
  • Hands-on experience with the ELK stack for scalable search and logging.

Responsibilities

  • Design, implement, and maintain scalable and reliable data pipelines.
  • Develop and deploy data workflows on AWS or GCP.
  • Collaborate with data scientists to prepare and serve features for ML models.

Skills

  • Python
  • Linux shell scripting
  • SQL
  • NoSQL
  • Docker
  • Kubernetes
  • Data governance
  • Data quality tools
  • Data correlation
  • Data observability

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or Data Science

Tools

  • Apache Kafka
  • ELK stack
  • AWS
  • GCP

Job Description

Job Title: Senior Data Engineer

Location: Abu Dhabi

Job Summary:

As a Senior Data Engineer, you will be responsible for designing, developing, and maintaining advanced, scalable data systems that power critical business decisions. You will lead the development of robust data pipelines, ensure data quality and governance, and collaborate with cross-functional teams to deliver high-performance data platforms in production environments. This role requires a deep understanding of modern data engineering practices, real-time processing, and cloud solutions.

Key Responsibilities:

Data Pipeline Development & Management:

  • Design, implement, and maintain scalable and reliable data pipelines to ingest, transform, and load structured, unstructured, and real-time data feeds from diverse sources.
  • Manage data pipelines for analytics and operational use, ensuring data integrity, timeliness, and accuracy across systems.
  • Implement data quality tools and validation frameworks within transformation pipelines.

Data Processing & Optimization:

  • Build efficient, high-performance systems by leveraging techniques like data denormalization, partitioning, caching, and parallel processing.
  • Develop stream-processing applications using Apache Kafka and optimize performance for large-scale datasets.
  • Enable data enrichment and correlation across primary, secondary, and tertiary sources.

Cloud, Infrastructure, and Platform Engineering:

  • Develop and deploy data workflows on AWS or GCP, using services such as S3, Redshift, Pub/Sub, or BigQuery.
  • Containerize data processing tasks using Docker, orchestrate with Kubernetes, and ensure production-grade deployment.
  • Collaborate with platform teams to ensure scalability, resilience, and observability of data pipelines.

Database Engineering:

  • Write and optimize complex queries on relational (Redshift, PostgreSQL) and NoSQL (MongoDB) databases.
  • Work with ELK stack (Elasticsearch, Logstash, Kibana) for search, logging, and real-time analytics.
  • Support Lakehouse architectures and hybrid data storage models for unified access and processing.

Data Governance & Stewardship:

  • Implement robust data governance, access control, and stewardship policies aligned with compliance and security best practices.
  • Establish metadata management, data lineage, and auditability across pipelines and environments.

Machine Learning & Advanced Analytics Enablement:

  • Collaborate with data scientists to prepare and serve features for ML models.
  • Maintain awareness of ML pipeline integration and ensure data readiness for experimentation and deployment.

Documentation & Continuous Improvement:

  • Maintain thorough documentation including technical specifications, data flow diagrams, and operational procedures.
  • Continuously evaluate and improve the data engineering stack by adopting new technologies and automation strategies.

Required Skills & Qualifications:

  • 8+ years of experience in data engineering within a production environment.
  • Advanced knowledge of Python and Linux shell scripting for data manipulation and automation.
  • Strong expertise in SQL/NoSQL databases such as PostgreSQL and MongoDB.
  • Experience building stream processing systems using Apache Kafka.
  • Proficiency with Docker and Kubernetes in deploying containerized data workflows.
  • Good understanding of cloud services (AWS or GCP).
  • Hands-on experience with the ELK stack (Elasticsearch, Logstash, Kibana) for scalable search and logging.
  • Familiarity with AI models that support data management.
  • Experience working with Lakehouse systems, data denormalization, and data labeling practices.

Additional Qualifications:

  • Working knowledge of data quality tools, lineage tracking, and data observability solutions.
  • Experience in data correlation, enrichment from external sources, and managing data integrity at scale.
  • Understanding of data governance frameworks and enterprise compliance protocols.
  • Exposure to CI/CD pipelines for data deployments and infrastructure-as-code.

Education & Experience:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
  • Demonstrated success in designing, scaling, and operating data systems in cloud and distributed environments.
  • Proven ability to work collaboratively with cross-functional teams including product managers, data scientists, and DevOps.