Data Engineer II, Amazon Last Mile - Routing and Planning - DE

Amazon

London

On-site

GBP 50,000 - 90,000

Full time

Yesterday
Job summary

Join a forward-thinking company as a Data Engineer II, where your expertise in data engineering and AWS technologies will drive improvements in last mile delivery. In this dynamic role, you'll design and implement data infrastructure, manage ETL processes, and collaborate with cross-functional teams to derive insights from large datasets. Your analytical skills and leadership will be key in optimizing data processes and enhancing the efficiency of delivery routes. This is an exciting opportunity to make a significant impact in a fast-paced environment, leveraging cutting-edge technology to deliver exceptional results for customers.

Qualifications

  • 3+ years of data engineering experience with a focus on AWS technologies.
  • Experience in building ETL pipelines and data modeling.

Responsibilities

  • Design and support data warehouse infrastructure using AWS big data stack.
  • Extract and analyze large volumes of structured and unstructured data.

Skills

Data Engineering
SQL
Python
AWS Technologies
ETL Development
Data Modeling
Data Warehousing
Scala

Tools

AWS Glue
Redshift
S3
EMR
QuickSight
Athena

Job description

As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that improve the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise in, and passion for, working with large data sets, building complex data processes, performance tuning, bringing together data from disparate data stores, and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements. You will provide guidance and support for other engineers on industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication, and the ability to work with cross-functional teams in a fast-paced environment are critical skills for this role.

Key job responsibilities
  1. Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
  2. Extract huge volumes of structured and unstructured data from various sources (relational, non-relational, and NoSQL databases) and message streams, and construct complex analyses.
  3. Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting.
  4. Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
  5. Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.
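To make the extract-transform-load cycle in responsibilities 2–5 concrete, here is a minimal sketch in Python using only the standard library. All names and data are hypothetical: a local dict stands in for an S3 bucket of raw JSON delivery events, and an in-memory SQLite database stands in for the warehouse (e.g. Redshift) that the unified model is loaded into.

```python
import json
import sqlite3

# Hypothetical raw source: a dict playing the role of an S3 bucket
# holding JSON objects of last-mile route events.
RAW_BUCKET = {
    "events/2024-01-01.json": json.dumps([
        {"route_id": "R1", "stops": 25, "status": "complete"},
        {"route_id": "R2", "stops": 40, "status": "partial"},
    ]),
}

def extract(bucket):
    """Pull every raw record out of each object in the 'bucket'."""
    for body in bucket.values():
        yield from json.loads(body)

def transform(records):
    """Normalize into a unified model: one row per completed route."""
    return [(r["route_id"], r["stops"])
            for r in records if r["status"] == "complete"]

def load(rows, conn):
    """Load rows into the warehouse table, creating it if needed."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS routes (route_id TEXT, stops INTEGER)")
    conn.executemany("INSERT INTO routes VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_BUCKET)), conn)
print(conn.execute("SELECT route_id, stops FROM routes").fetchall())
# prints [('R1', 25)]
```

In a production AWS pipeline the same three stages would typically map to Glue jobs reading from S3, Spark transformations on EMR, and a COPY into Redshift, with the loaded table queried via Athena or QuickSight.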
Minimum requirements
  • 3+ years of data engineering experience
  • 4+ years of SQL experience
  • Experience with data modeling, warehousing, and building ETL pipelines
  • Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
  • Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
  • Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
