Data Engineer

Virgule International Limited

Leicester

On-site

GBP 60,000 - 80,000

Full time

2 days ago

Job summary

A data solutions company in Leicester is seeking a skilled Data Engineer to design and scale ETL pipelines and data warehouses, focusing on performance and data quality. The ideal candidate has 5-9+ years of experience, advanced Python and Apache Spark skills, and expertise in cloud technologies. This role offers the opportunity to tackle complex data challenges with cutting-edge technologies.

Qualifications

  • 5-9+ years in data engineering with proven delivery of enterprise-scale solutions.
  • Advanced skills in Python, Apache Spark, and SQL optimization.
  • Deep expertise in at least one leading data warehouse.

Responsibilities

  • Design & own robust ETL/ELT pipelines using Python and Apache Spark.
  • Architect and optimize enterprise data warehouses for performance.
  • Build data models and implement governance frameworks.

Skills

Python
Apache Spark
SQL optimization
Data modeling
Streaming data technologies
Cloud proficiency (AWS, Azure, GCP)

Tools

Snowflake
BigQuery
Redshift
Azure Synapse
Kafka
Kinesis
Airflow
dbt

Job description

We are seeking an exceptional Data Engineer to design, develop, and scale data pipelines, warehouses, and streaming systems that power mission-critical analytics and AI workloads. This is a hands-on engineering role where you will work with cutting-edge technologies, tackle complex data challenges, and shape the organization's data architecture.

Core Responsibilities

  • Design & own robust ETL/ELT pipelines using Python and Apache Spark for batch and near real-time processing.
  • Architect and optimize enterprise data warehouses (Snowflake, BigQuery, Redshift, Azure Synapse) for performance and scalability.
  • Build data models and implement governance frameworks ensuring data quality, lineage, and compliance.
  • Engineer streaming data solutions using Kafka and Kinesis for real-time insights.
  • Collaborate cross-functionally to translate business needs into high-quality datasets and APIs.
  • Proactively monitor, tune, and troubleshoot pipelines for maximum reliability and cost efficiency.

Required Qualifications

  • 5-9+ years in data engineering, with proven delivery of enterprise-scale solutions.
  • Advanced skills in Python, Apache Spark, and SQL optimization.
  • Deep expertise in at least one leading data warehouse (Snowflake, BigQuery, Redshift, Azure Synapse).
  • Strong knowledge of data modeling, governance, and compliance best practices.
  • Hands-on experience with streaming data technologies (Kafka, Kinesis).
  • Cloud proficiency in AWS, Azure, or GCP.

Preferred Qualifications

  • Experience with Airflow, dbt, or similar orchestration frameworks.
  • Exposure to DevOps & CI/CD in data environments.
  • Familiarity with ML pipelines and feature stores.
