
Data Engineer

Soda

London

Remote

GBP 100,000 - 125,000

Full time

Today

Job summary

An innovative consultancy is seeking a skilled Data Engineer to join their dynamic team. This role offers the opportunity to work with modern tech stacks and agile methodologies while tackling complex data challenges. As a Data Engineer, you'll be responsible for analyzing data, designing storage solutions, and optimizing data processes. The company values engineering excellence and empowers its consultants with equity, creating a collaborative and impactful work environment. If you're passionate about data and looking to make a difference, this position is perfect for you.

Qualifications

  • Proficiency in Python with unit and integration testing.
  • Experience with AWS and building data platforms on cloud services.

Responsibilities

  • Analyze and cleanse data, design data storage solutions.
  • Communicate effectively with the team to solve data problems.

Skills

Python
PySpark
Scala
AWS
SQL
NoSQL
Git
Docker
Kubernetes

Job description

Data Engineer - HIRING ASAP

Start date: ASAP
Duration: 6 Months
Location: Remote
Rate: £450 - £500 per day (outside IR35)

Summary

We are currently working with a new-generation consultancy based across the UK and EU, founded on engineering excellence and empowering people to make an impact. All their consultants have equity in the company, genuinely love what they do, and are highly skilled. They work with modern tech stacks and typically run agile Scrum on all projects.

Responsibilities
  • Work closely with the business to understand current data problems, analyze and cleanse data, design data storage solutions, and communicate effectively with your team.

Key Skills
  • Proficiency in Python at the software engineering level, including unit and integration testing.
  • Knowledge of distributed computing with PySpark or Scala, debugging SparkUI, and optimization skills.
  • Experience with AWS.
  • Strong understanding of data modeling, change data capture, and ACID-compliant table structures.
  • Good understanding of data lake and data lakehouse architectures, as well as traditional data platforms.
  • Experience with data ingestion via API calls, batch, and streaming methods.
  • Good knowledge of SQL and NoSQL databases.
  • Experience with Git version control and CI/CD pipelines.
  • Extensive experience building data platforms on cloud services, preferably AWS.
  • Experience with containerization and orchestration using Docker or Kubernetes.
  • Consulting experience, including working with multiple stakeholders and on projects involving other consultancies.