AWS Data Engineer

Tenth Revolution Group

Remote

GBP 60,000 - 80,000

Full time

Job summary

A leading tech consulting firm is seeking an experienced Data Engineer for a 14-week remote contract. The role involves designing and implementing ETL/ELT pipelines, developing data processing patterns, and integrating complex data sources. Candidates should have strong Python and SQL skills and experience with AWS services such as Glue and Step Functions. The position offers a competitive day rate and immediate interviews, with an offer possible before Christmas.

Benefits

Fully remote contract
Outside IR35
Competitive day rate
Immediate interviews

Qualifications

  • Experience designing ETL/ELT pipelines with error handling.
  • Strong skills in advanced Python and SQL.
  • Familiarity with AWS services like Glue and Step Functions.

Responsibilities

  • Design and implement robust ETL/ELT pipelines.
  • Develop data processing patterns for large datasets.
  • Integrate diverse data sources including REST APIs.

Skills

ETL/ELT pipeline design
Advanced Python
SQL
AWS Glue
Vector databases
Document parsing
Data governance

Tools

AWS
CockroachDB
PySpark

Job description

Data Engineer - 14-Week Contract (Outside IR35), Likely to Extend

Start Date: 12th January

Rate: £350 per day

Location: Remote (UK-based)

Interview: Immediate - Offer before Christmas

We are seeking an experienced Data Engineer to join a 14-week project focused on building robust data pipelines and integrating complex data sources. This is an outside IR35 engagement, offering flexibility and autonomy.

Key Responsibilities
  • Design and implement ETL/ELT pipelines with strong error handling and retry logic (see the sketch after this list).
  • Develop incremental data processing patterns for large-scale datasets.
  • Work with AWS services including Glue, Step Functions, S3, DynamoDB, Redshift, Lambda, and EventBridge.
  • Build and optimise vector database solutions and embedding generation pipelines for semantic search.
  • Implement document processing workflows (PDF parsing, OCR, metadata extraction).
  • Integrate data from REST APIs, PIM systems, and potentially SAP.
  • Ensure data quality, governance, and lineage tracking throughout the project.
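
The retry pattern named above might look like the following minimal Python sketch. It assumes a boto3-based load step writing batches to S3; the bucket name and the shape of the records are hypothetical placeholders, not details from this engagement.

    import json
    import time

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def load_with_retries(records: list[dict], key: str,
                          max_attempts: int = 4, base_delay: float = 1.0) -> None:
        """Write a batch to S3, retrying transient failures with exponential backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                s3.put_object(
                    Bucket="example-staging-bucket",  # hypothetical bucket name
                    Key=key,
                    Body=json.dumps(records).encode("utf-8"),
                )
                return
            except ClientError as err:
                code = err.response["Error"]["Code"]
                # Retry only transient, throttling-style errors; fail fast on the rest.
                if code not in {"SlowDown", "RequestTimeout", "ThrottlingException"}:
                    raise
                if attempt == max_attempts:
                    raise
                time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

In a Step Functions orchestration, much of this backoff could instead be declared in the state machine's Retry configuration rather than in application code.
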
Required Skills
  • ETL/ELT pipeline design and data validation frameworks.
  • Advanced Python (pandas, numpy, boto3) and SQL (complex queries, optimisation).
  • Experience with AWS Glue, Step Functions, and event-driven architectures.
  • Knowledge of vector databases, embeddings, and semantic search strategies.
  • Familiarity with document parsing libraries (PyPDF2, pdfplumber, Textract) and OCR tools (see the parsing sketch after this list).
  • Understanding of data governance, schema validation, and master data management.
  • Strong grasp of real-time vs batch processing trade-offs.
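
As one concrete illustration of the document-processing side, here is a minimal sketch using pdfplumber, one of the libraries named above. The file path is a hypothetical example, and the OCR fallback for scanned pages is omitted.

    import pdfplumber

    def parse_pdf(path: str) -> dict:
        """Extract page text and document-level metadata from a PDF."""
        with pdfplumber.open(path) as pdf:
            pages = [page.extract_text() or "" for page in pdf.pages]
            return {
                "metadata": pdf.metadata,      # author, creation date, etc.
                "page_count": len(pdf.pages),
                "text": "\n".join(pages),
            }

    doc = parse_pdf("specs/product-sheet.pdf")  # hypothetical input file
    print(doc["page_count"], "pages,", len(doc["text"]), "characters")
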
Beneficial Experience
  • CockroachDB deployment and management.
  • PySpark or similar for large-scale processing (sketched below).
  • SAP data structures and PIM systems.
  • E-commerce and B2B data integration patterns.
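
For the incremental-processing pattern mentioned under Key Responsibilities, a PySpark version might look like the sketch below. The source path, the columns (order_id, updated_at), and the hard-coded watermark are all hypothetical; in practice the watermark would be persisted in a control table or DynamoDB.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("incremental-load").getOrCreate()

    # High-water mark from the previous run; hard-coded here only for illustration.
    last_watermark = "2025-01-01T00:00:00"

    # Read only rows changed since the last run (hypothetical S3 path and columns).
    incremental = (
        spark.read.parquet("s3://example-bucket/orders/")
        .filter(F.col("updated_at") > F.lit(last_watermark))
    )

    # Deduplicate on the business key, keeping the latest version of each row.
    w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
    latest = (
        incremental.withColumn("rn", F.row_number().over(w))
        .filter(F.col("rn") == 1)
        .drop("rn")
    )

    latest.write.mode("append").parquet("s3://example-bucket/orders_clean/")

Window-based deduplication on the business key keeps the load idempotent if the same change record arrives twice.
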
Why Apply?
  • Fully remote contract.
  • Outside IR35.
  • Competitive day rate.
  • Immediate interviews - secure your next role before Christmas.