Senior Data Engineer - Apache / Spark / AWS - Manchester

JR United Kingdom

Manchester

On-site

GBP 50,000 - 80,000

Full time

2 days ago

Job summary

A growing company near Manchester seeks a Senior Data/DevOps Engineer to tackle complex data challenges using cutting-edge technologies. In this role, you will develop scalable solutions on a robust data platform built on AWS and Apache Spark. You will join a dynamic team where problem-solving and collaboration are key, and help evolve the company's data engineering practices. This is a fantastic opportunity to grow your skills in a fun and challenging environment while making a significant impact on data-driven projects.

Qualifications

  • Experience with AWS and cloud technologies is essential.
  • Strong coding practices and willingness to learn new technologies.

Responsibilities

  • Perform DevOps, backend, and cloud development on the data infrastructure.
  • Develop infrastructure automation and scheduling scripts.

Skills

Problem-solving
AWS or equivalent cloud technologies
Serverless technologies
Apache Spark (Scala or PySpark)
Infrastructure automation (AWS CloudFormation or Terraform)
Scala, Java, or C#
High-quality coding and testing practices
Agile software development practices
Interpersonal skills
Debugging business-critical systems

Tools

Git
Jenkins
CI/CD tools

Job description

Client: Mayflower Recruitment Ltd
Location: Manchester, United Kingdom
Job Category: Other
EU work permit required: Yes

Posted: 28.04.2025
Expiry Date: 12.06.2025

Job Description:

We are looking for a Senior Data/DevOps Engineer for a growing client near Manchester.

Your role will primarily be to perform DevOps, backend, and cloud development on the data infrastructure, developing innovative solutions to scale and maintain the data platform effectively. You will work on complex data problems in a challenging and fun environment, using some of the latest open-source Big Data technologies such as Apache Spark, alongside Amazon Web Services technologies including Elastic MapReduce, Athena, and Lambda, to build scalable data solutions.

  • Adhering to Company Policies and Procedures with respect to Security, Quality, and Health & Safety.
  • Writing application code and tests that conform to standards.
  • Developing infrastructure automation and scheduling scripts for reliable data processing.
  • Continuously evaluating and contributing towards using cutting-edge tools and technologies to improve the design, architecture, and performance of the data platform.
  • Supporting the production systems running the deployed data software.
  • Regularly reviewing colleagues' work and providing helpful feedback.
  • Working with stakeholders to fully understand requirements.
  • Being the subject matter expert for the data platform and supporting processes, and being able to present to others to share knowledge.
Here's what we're looking for:
  • The ability to problem-solve.
  • Knowledge of AWS or equivalent cloud technologies.
  • Knowledge of Serverless technologies, frameworks, and best practices.
  • Apache Spark (Scala or PySpark).
  • Experience using AWS CloudFormation or Terraform for infrastructure automation.
  • Knowledge of Scala or other languages such as Java or C#.
  • High-quality coding and testing practices.
  • Willingness to learn new technologies and methodologies.
  • Knowledge of agile software development practices including continuous integration, automated testing, and working with software engineering requirements and specifications.
  • Good interpersonal skills, positive attitude, willing to help other team members.
  • Experience debugging and dealing with failures on business-critical systems.
Preferred:
  • Exposure to Apache Spark, Apache Trino, or another big data processing system.
  • Knowledge of streaming data principles and best practices.
  • Understanding of database technologies and standards.
  • Experience working on large and complex datasets.
  • Exposure to Data Engineering practices used in Machine Learning training and inference.
  • Experience using Git, Jenkins, and other CI/CD tools.

Mayflower is acting as an Employment Agency in relation to this vacancy.
