Junior AWS Data Engineer

Vallum Associates

City Of London

On-site

GBP 60,000 - 80,000

Part time

Today

Job summary

A leading data services provider in London is seeking an experienced AWS Data Engineer on a contract basis. In this role, you will design and maintain robust ETL pipelines while working with various AWS services such as Lambda and S3. The ideal candidate has extensive experience in Python and SQL, and will collaborate with teams to ensure data quality and availability. This position offers the chance to work on critical data engineering projects in a dynamic environment.

Qualifications

  • Extensive experience writing Python scripts.
  • Proficiency in designing and implementing scalable ETL pipelines.
  • Strong experience with relational databases and SQL.

Responsibilities

  • Design, develop, and maintain robust ETL pipelines using Python and SQL.
  • Utilize Terraform to provision and manage AWS cloud resources.
  • Collaborate with teams to ensure data availability and quality.

Skills

Python
ETL Pipelines
SQL
Terraform
Databricks
PySpark API

Job description

Job Overview

We are seeking an experienced AWS Data Engineer to join our team on a contract basis. As part of our data engineering team, you will work with a variety of AWS services to design, develop, and maintain scalable data pipelines. You’ll be responsible for creating robust ETL processes, implementing infrastructure as code, and ensuring that data is processed and delivered in a timely, reliable, and efficient manner.

Responsibilities
  • Design, develop, and maintain robust ETL pipelines using Python and SQL.
  • Work with AWS services like Lambda, S3, and RDS for cloud-based data architecture.
  • Utilize Terraform to provision and manage cloud resources in AWS.
  • Leverage Databricks and PySpark APIs to process large datasets and optimize data pipelines (see the sketch after this list).
  • Collaborate with data scientists, analysts, and other teams to ensure data availability and quality.
  • Optimize and troubleshoot data pipelines to ensure scalability and performance.
  • Ensure proper documentation and adherence to coding standards.
  • Participate in code reviews and provide mentorship to junior engineers as needed.
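
To give a concrete flavour of the pipeline work above, here is a minimal PySpark sketch. It is not code from this role: the S3 paths (s3://example-bucket/...) and column names (event_id, event_ts) are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a SparkSession named `spark` already exists; building one
    # here keeps the sketch self-contained.
    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw JSON events from S3 (hypothetical bucket and prefix).
    raw = spark.read.json("s3://example-bucket/raw/events/")

    # Transform: drop malformed rows and derive a date partition column
    # (event_id and event_ts are assumed column names, not from the posting).
    clean = (
        raw.filter(F.col("event_id").isNotNull())
           .withColumn("event_date", F.to_date(F.col("event_ts")))
    )

    # Load: write partitioned Parquet back to S3 for downstream consumers.
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/events/"
    )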
Required Skills
  • Python: Extensive experience writing Python scripts and working with data transformation libraries.
  • ETL Pipelines: Proficiency in designing and implementing scalable ETL pipelines.
  • SQL: Strong experience working with relational databases and writing efficient SQL queries (see the example after this list).
  • Terraform: Hands-on experience with Terraform for infrastructure provisioning and management.
  • Databricks: Strong working knowledge of Databricks, including using Spark (PySpark) for data processing.
  • PySpark API: Deep understanding of PySpark and its integration with data processing workflows.
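
As a generic illustration of the Python-plus-SQL skills listed above (again, nothing specific to this employer), a self-contained example using Python's standard sqlite3 module, with an index and a parameterized aggregate query:

    import sqlite3

    # In-memory database purely for demonstration; a real pipeline would
    # target RDS or another managed relational store.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
    )
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
    conn.executemany(
        "INSERT INTO orders (customer, total) VALUES (?, ?)",
        [("acme", 120.0), ("acme", 80.0), ("globex", 42.5)],
    )

    # Parameterized query: the placeholder avoids SQL injection and lets the
    # database reuse the prepared plan; the index supports the WHERE clause.
    row = conn.execute(
        "SELECT customer, SUM(total) FROM orders WHERE customer = ? GROUP BY customer",
        ("acme",),
    ).fetchone()
    print(row)  # ('acme', 200.0)
    conn.close()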
Nice to Have
  • Cloud: Experience working with AWS Lambda and other serverless services (see the handler sketch after this list).
  • AI/LLM: Exposure to Artificial Intelligence or Large Language Models (LLMs) and their application in data processing.
  • Snowflake: Familiarity with the Snowflake data warehouse for data storage and querying.
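
For the serverless item above, a minimal sketch of an S3-triggered Lambda handler in Python; the event shape is the standard S3 put-notification format, and the bucket and key names are whatever the trigger delivers, nothing specific to this role.

    import json
    import urllib.parse

    def handler(event, context):
        # Standard S3 put-notification event: iterate over delivered records.
        processed = 0
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            # Object keys arrive URL-encoded; decode before use.
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            print(f"New object: s3://{bucket}/{key}")
            processed += 1
        return {"statusCode": 200, "body": json.dumps({"processed": processed})}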