
Senior Data Engineer

ZipRecruiter

Malvern Hills

On-site

GBP 50,000 - 90,000

Full time

30+ days ago


Job summary

An established industry player is looking for a Senior Data Engineer with over 7 years of experience to design and optimize data solutions. This role involves building scalable ETL pipelines, implementing advanced anomaly detection systems, and architecting resilient APIs. You will work with cutting-edge technologies like AWS, Python, and TypeScript, ensuring data integrity and quality while collaborating with cross-functional teams. If you're passionate about data engineering and eager to innovate, this opportunity is perfect for you. Join a dynamic team and drive impactful data-driven solutions in a collaborative environment.

Qualifications

  • 7+ years of experience in data engineering with strong skills in Python and AWS.
  • Expertise in designing and implementing complex data pipelines and APIs.

Responsibilities

  • Design and maintain complex ETL pipelines for large-scale data processing.
  • Implement anomaly detection systems to ensure data integrity and quality.

Skills

Python
SQL
TypeScript
AWS services
Swagger/OpenAPI
REST APIs
LLM/AI
GraphQL

Tools

DynamoDB
S3
Athena
Glue ETL
Lambda
ECS
Prometheus
Grafana
OpenSearch
RDS

Job description

Senior Data Engineer - 7+ Years of Experience

We are seeking a highly experienced Senior Data Engineer with 7+ years of expertise in designing, building, and optimizing robust data solutions. The ideal candidate must possess top-tier skills in Python, AWS services, API development, and TypeScript, and have significant hands-on experience with anomaly detection systems.

The candidate should have a proven ability to work at both strategic and tactical levels, from designing data architectures to hands-on implementation.

Required Technical Skills:

  • Python
  • SQL
  • TypeScript
  • AWS services
  • Swagger/OpenAPI
  • REST APIs
  • LLM/AI
  • GraphQL

Core Programming Skills:

  • Expert proficiency in Python, with experience in building data pipelines and back-end systems.
  • Solid experience with TypeScript for developing scalable applications.
  • Advanced knowledge of SQL for querying and optimizing large datasets.

AWS Cloud Services Expertise:

  • DynamoDB, S3, Athena, Glue ETL, Lambda, ECS, Glue Data Quality, EventBridge, Redshift Machine Learning, OpenSearch, and RDS.

API and Resilience Engineering:

  • Proven expertise in designing fault-tolerant APIs using Swagger/OpenAPI, GraphQL, and RESTful standards.
  • Strong understanding of distributed systems, load balancing, and failover strategies.

Monitoring and Orchestration:

  • Hands-on experience with Prometheus and Grafana for observability and monitoring.

Key Responsibilities:

Data Pipeline Development:

  • Independently design, build, and maintain complex ETL pipelines, ensuring scalability and efficiency for large-scale data processing needs.
  • Manage pipeline complexity and orchestration, delivering high-performance data products accessible via APIs for business-critical applications.
  • Archive processed data products into data lakes (e.g., AWS S3) for analytics and machine learning use cases.
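The pipeline responsibilities above can be sketched as a minimal extract-transform-load step in Python. The `Record` shape, the per-user aggregation, and the in-memory `sink` standing in for an S3 data lake are illustrative assumptions, not details from this posting:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Record:
    user_id: str
    amount_pence: int  # integers avoid floating-point drift in money fields

def extract(raw_rows: Iterable[dict]) -> list[Record]:
    """Parse raw rows, dropping malformed entries instead of failing the batch."""
    records = []
    for row in raw_rows:
        try:
            records.append(Record(user_id=str(row["user_id"]),
                                  amount_pence=int(row["amount_pence"])))
        except (KeyError, ValueError, TypeError):
            continue  # malformed row: skip and keep processing
    return records

def transform(records: list[Record]) -> dict[str, int]:
    """Aggregate spend per user -- a stand-in for a real transformation."""
    totals: dict[str, int] = {}
    for r in records:
        totals[r.user_id] = totals.get(r.user_id, 0) + r.amount_pence
    return totals

def load(totals: dict[str, int], sink: list) -> None:
    """Append the data product to a sink (in practice, S3 or another data lake)."""
    sink.append(totals)
```

In a real Glue or Lambda pipeline the load step would write Parquet to S3, but the shape of the code (tolerant extract, pure transform, side-effecting load) carries over.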

Anomaly Detection and Data Quality:

  • Implement advanced anomaly detection systems and data validation techniques, ensuring data integrity and quality.
  • Leverage AI/ML methodologies, including large language models (LLMs), to detect and address data inconsistencies.
  • Develop and automate robust data quality and validation frameworks.
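As a rough illustration of the data-quality work described above, a basic statistical check can flag outliers in a numeric stream. This z-score rule is a deliberately simple sketch; production anomaly detection would likely layer on richer models:

```python
import statistics

def detect_anomalies(values, z_threshold=3.0):
    """Return the indices of values lying more than z_threshold
    standard deviations from the mean (a simple z-score rule)."""
    if len(values) < 2:
        return []
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # a constant series has no outliers by this rule
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]
```

A validation framework would run checks like this per column, per batch, and quarantine rows that fail rather than letting them flow downstream.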

Cloud and API Engineering:

  • Architect and manage resilient APIs using modern patterns, including microservices, RESTful design, and GraphQL.
  • Configure API gateways, circuit breakers, and fault-tolerant mechanisms for distributed systems.
  • Ensure horizontal and vertical scaling strategies for API-driven data products.
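One fault-tolerant mechanism named above, the circuit breaker, can be sketched in a few lines. This is a simplified, single-threaded illustration; production implementations add thread safety, metrics, and a fuller state machine:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive failures
    the circuit opens and calls are rejected until reset_after seconds
    have passed, at which point one trial call is allowed."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Injecting the clock makes the time-based reset testable without sleeping, a pattern worth keeping in any resilience code.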

Monitoring and Observability:

  • Implement comprehensive monitoring and observability solutions using Prometheus and Grafana to optimize system reliability.
  • Establish proactive alerting systems and ensure real-time system health visibility.
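In practice the alerting responsibility above would be expressed as a Prometheus alerting rule evaluated by the server; as a self-contained illustration of the same idea, a rolling error-rate check might look like this (the window size and threshold are arbitrary assumptions):

```python
from collections import deque

class ErrorRateAlert:
    """Track the last `window` request outcomes and fire when the
    error rate exceeds `threshold` -- the kind of condition one would
    normally encode as a Prometheus alerting expression."""

    def __init__(self, window=100, threshold=0.05):
        self.outcomes = deque(maxlen=window)  # oldest outcomes roll off
        self.threshold = threshold

    def record(self, ok: bool) -> None:
        self.outcomes.append(ok)

    def firing(self) -> bool:
        if not self.outcomes:
            return False
        errors = sum(1 for ok in self.outcomes if not ok)
        return errors / len(self.outcomes) > self.threshold
```

The rolling window mirrors how rate-based alerts smooth over transient blips instead of paging on every single failed request.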

Cross-functional Collaboration and Innovation:

  • Collaborate with stakeholders to understand business needs and translate them into scalable, data-driven solutions.
  • Continuously research and integrate emerging technologies to enhance data engineering practices.