Senior Data & DevOps Engineer

Drest

United Kingdom

On-site

GBP 60,000 - 100,000

Full time

21 days ago

Job summary

A leading company is seeking a Senior Data and DevOps Engineer to enhance its data infrastructure and ensure optimal performance. This role involves developing robust data pipelines, managing AWS cloud systems, and collaborating with multiple engineering teams to meet data needs effectively. The ideal candidate has substantial experience in data engineering, AWS services, and Terraform, along with excellent communication skills.

Qualifications

  • 5+ years of experience in data engineering.
  • Solid experience with AWS cloud services.
  • Proven expertise in Terraform and infrastructure-as-code practices.

Responsibilities

  • Design and maintain robust data pipelines.
  • Manage and optimize AWS cloud infrastructure.
  • Develop infrastructure-as-code using Terraform.

Skills

AWS services
Data modeling
SQL
Python
Terraform
Data processing
Communication

Tools

Redshift
MongoDB
PostgreSQL
Docker
Kubernetes

Job description

The Senior Data and DevOps Engineer is responsible for the performance, scalability, and reliability of Drest's data infrastructure and pipelines. This role ensures the efficient ingestion, transformation, and delivery of high-volume data, while also maintaining robust, cost-effective cloud operations. As a hands-on engineer, the Senior Data and DevOps Engineer is actively involved in building pipelines, defining infrastructure as code, and supporting critical systems. Reporting to the DevOps Lead Engineer, they work closely with the data science, backend, and platform teams to enable high-quality analytics and drive technical excellence across data and infrastructure.

What you will be accountable for:

  • Design, build, and maintain robust data pipelines capable of handling tens of millions of events per day in both batch and real-time processing contexts.
  • Manage and optimise AWS cloud infrastructure, ensuring high availability, performance, cost-efficiency, and security.
  • Develop infrastructure-as-code using Terraform, supporting scalable and maintainable infrastructure deployments.
  • Build and monitor data warehouse solutions (e.g. Redshift), ensuring data is accessible, clean, and well-modelled for analytics and product teams.
  • Drive system performance and operational excellence by improving observability, uptime, and deployment processes across data and platform systems.
What the Senior Data and DevOps Engineer will be responsible for:

  • Design, implement, and maintain scalable and reliable data pipelines.
  • Build, optimise, and monitor data warehousing solutions with Redshift and other columnar data stores.
  • Develop infrastructure-as-code using Terraform to provision and manage cloud infrastructure.
  • Own and manage AWS-based systems, ensuring cost-effective, secure, and high-performance operations.
  • Support and enhance the data and platform stack to ensure uptime, observability, and recoverability of key systems.
  • Collaborate with engineering and product teams to ensure data needs are met and infrastructure bottlenecks are identified early.
  • Support analytics and reporting workflows by making data accessible, clean, and well-modelled.
  • Implement system performance improvements and improve alerting, monitoring, and release processes.
Requirements

  • 5+ years of experience in data engineering, including multiple end-to-end pipeline builds
  • Solid experience with AWS cloud services, including S3, EC2, Lambda, RDS, Redshift, Glue, and Kinesis, as well as MongoDB and PostgreSQL
  • Proven expertise in Terraform and infrastructure-as-code practices
  • Strong SQL and data modeling skills, and experience with both SQL and NoSQL data stores
  • Strong understanding of dbt (or equivalent) and Tableau
  • Hands-on experience with Python for data processing and automation tasks
  • A background working in environments with high throughput data (millions of events per hour)
  • Understanding of best practices around security, scalability, and maintainability in cloud-native systems
  • Comfort working independently in a fast-paced, highly collaborative environment
  • Great communication skills and the ability to explain complex systems clearly to both technical and non-technical stakeholders
  • Familiarity with Docker, Kubernetes, or other container orchestration platforms
  • A willingness to be available outside standard working hours when needed to support critical issues or key deliveries.
It would be a bonus if you have...

  • Experience in the gaming industry
  • Experience with event-driven applications
  • Exposure to CI/CD pipelines and automated testing for data infrastructure
  • Experience with additional data warehouses (BigQuery, Snowflake, etc.)