
Big Data Engineer

United Kingdom

Marlow

On-site

GBP 45,000 - 80,000

Full time

Yesterday

Job summary

An innovative company is looking for a skilled Big Data Engineer to join their dynamic data engineering team. In this role, you will design and optimize scalable big data pipelines using cutting-edge AWS services. Collaborating closely with data scientists and analysts, you will ensure high-volume data is efficiently processed and made accessible for analytics. This position offers the opportunity to work on impactful projects, employing technologies like Kinesis, Spark, and Kafka. If you are passionate about big data and eager to contribute to data-driven insights, this role is perfect for you.

Qualifications

  • Experience in building scalable data pipelines using AWS services.
  • Proficiency in programming languages like Python, Java, or Scala.

Responsibilities

  • Architect and maintain scalable data pipelines using AWS services.
  • Develop ETL workflows and ensure data governance and security.

Skills

AWS S3
AWS Redshift
AWS Glue
AWS Kinesis
AWS EMR
Hadoop
Spark
Kafka
Python
Java
Scala
CloudFormation
Terraform
AWS CDK

Tools

AWS Glue
Apache Spark
Apache Flink
Apache Atlas

Job description

We are seeking a talented Big Data Engineer to join our data engineering team. In this role, you will be responsible for designing, developing, and optimising scalable big data pipelines and architectures. You will work closely with data scientists, data analysts, and business stakeholders to ensure that high-volume data is processed, stored, and made accessible for analytical and operational use.

Key Responsibilities:

  • Architect and maintain scalable data pipelines using AWS services like S3, Redshift, Glue, Kinesis, and EMR.
  • Integrate data from various sources, ensuring accessibility and consistency for analytics.
  • Build real-time streaming solutions using Kinesis, Kafka, or Flink.
  • Develop ETL workflows with tools like AWS Glue or Apache Spark.
  • Ensure efficient storage, query optimization, and cost management for large datasets.
  • Work closely with data scientists and analysts to enable data-driven insights and machine learning.
  • Implement monitoring and automation for data pipelines using AWS CloudWatch and CloudFormation.
  • Ensure data governance and security, adhering to privacy and compliance standards.

Required Skills & Experience:

  • Experience with S3, Redshift, Glue, Kinesis, EMR, Lambda.
  • Strong background with Hadoop, Spark, Kafka.
  • Proficiency in building ETL pipelines using AWS Glue, Apache Spark, or Python.
  • Experience with Kinesis, Kafka, or similar technologies.
  • Programming: Proficiency in Python, Java, or Scala.
  • Familiarity with CloudFormation, Terraform, or AWS CDK for managing cloud resources.
  • Skills in structuring and optimizing data for performance.

Preferred Qualifications:

  • AWS Certified Big Data – Specialty, or other AWS-related certifications.
  • Experience with tools like Apache Atlas or AWS Glue Data Catalog.
  • Familiarity with Apache Flink and similar streaming platforms.
  • Experience integrating data systems with machine learning workflows.

If you are an experienced Big Data Engineer with expertise in AWS cloud services and big data technologies and are eager to work on large-scale, impactful projects, we’d love to hear from you!
