Big Data Engineer

Cloud Bridge

Greater London

On-site

GBP 50,000 - 90,000

Full time

30 days ago

Job summary

An innovative firm is looking for a talented Big Data Engineer to join its dynamic data engineering team. In this role, you'll be responsible for designing and optimising scalable big data pipelines and architectures using modern AWS services. Collaborating closely with data scientists and analysts, you will ensure that high-volume data is efficiently processed and accessible for analytics and operational use. This position offers the chance to work on impactful projects that drive data-driven insights and machine learning initiatives. If you have a strong background in big data technologies and AWS cloud services, this opportunity could be perfect for you!

Qualifications

  • Experience with AWS services for data engineering and architecture.
  • Proficiency in building ETL pipelines and managing cloud resources.

Responsibilities

  • Design and maintain scalable data pipelines using AWS services.
  • Develop ETL workflows and ensure data governance and security.

Skills

AWS S3
AWS Redshift
AWS Glue
AWS Kinesis
AWS EMR
Hadoop
Spark
Kafka
Python
Java
Scala
CloudFormation
Terraform
AWS CDK

Tools

Apache Spark
Apache Flink
Apache Atlas

Job description

We are seeking a talented Big Data Engineer to join our data engineering team. In this role, you will be responsible for designing, developing, and optimising scalable big data pipelines and architectures. You will work closely with data scientists, data analysts, and business stakeholders to ensure that high-volume data is processed, stored, and made accessible for analytical and operational use.

Key Responsibilities:

  1. Architect and maintain scalable data pipelines using AWS services like S3, Redshift, Glue, Kinesis, and EMR.
  2. Integrate data from various sources, ensuring accessibility and consistency for analytics.
  3. Build real-time streaming solutions using Kinesis, Kafka, or Flink.
  4. Develop ETL workflows with tools like AWS Glue or Apache Spark.
  5. Ensure efficient storage, query optimisation, and cost management for large datasets.
  6. Work closely with data scientists and analysts to enable data-driven insights and machine learning.
  7. Implement monitoring and automation for data pipelines using AWS CloudWatch and CloudFormation.
  8. Ensure data governance and security, adhering to privacy and compliance standards.

Required Skills & Experience:

  1. Experience with S3, Redshift, Glue, Kinesis, EMR, Lambda.
  2. Strong background with Hadoop, Spark, Kafka.
  3. Proficiency in building ETL pipelines using AWS Glue, Apache Spark, or Python.
  4. Experience with Kinesis, Kafka, or similar technologies.
  5. Programming: Proficiency in Python, Java, or Scala.
  6. Familiarity with CloudFormation, Terraform, or AWS CDK for managing cloud resources.
  7. Skills in structuring and optimising data for performance.

Preferred Qualifications:

  1. AWS Certified Big Data – Specialty, or other AWS-related certifications.
  2. Experience with tools like Apache Atlas or AWS Glue Data Catalog.
  3. Familiarity with Apache Flink and similar streaming platforms.
  4. Experience integrating data systems with machine learning workflows.

If you are an experienced Big Data Engineer with expertise in AWS cloud services and big data technologies and are eager to work on large-scale, impactful projects, we’d love to hear from you!
