Data Engineer II, CloudTune

Amazon

Toronto

On-site

CAD 100,000 - 120,000

Full time

Posted yesterday


Job summary

A leading tech company in Toronto is looking for a Data Engineer to develop and maintain the data infrastructure for automation processes. This role involves designing scalable data solutions, creating reliable ETL pipelines, and collaborating with teams to implement machine learning models. Ideal candidates will have over 3 years of data engineering experience, strong SQL and programming skills, and expertise in AWS technologies. This position offers the opportunity to impact efficiency across the organization.

Qualifications

  • 3+ years of data engineering experience.
  • Experience with data modeling, warehousing, and building ETL pipelines.
  • Programming experience in at least one programming language.
  • Proficient in data mining, management, reporting, and SQL queries.

Responsibilities

  • Design and support a scalable platform for data access.
  • Build automated and fault-tolerant ETL/ELT pipelines.
  • Collaborate with Data Scientists to define data structures.
  • Implement data structures and practices for enhanced accessibility.
  • Improve data processes for self-service support.

Skills

Data modeling
ETL pipelines
SQL
Python
Data mining

Tools

AWS Redshift
AWS Glue
AWS S3

Job description

The CloudTune team within Amazon's Intelligent Cloud Control organization is seeking an expert Data Engineer to build and operate the data foundation that powers our end-to-end automation vision. Our goal is to invent new software and data systems that remove human decision‑making from SDO’s scaling and cost controllership processes across Amazon's entire retail and digital businesses.

As a Data Engineer in CloudTune, you will be working with an extremely large, complex, and dynamic data environment that integrates capacity, traffic forecasting, and financial data. You will be responsible for designing, implementing, and operating stable, scalable, and low‑cost solutions to flow massive datasets from production systems into our analytical and machine learning platforms. This work directly enables the ML models and control systems that provision Amazon’s services optimally for availability, customer experience, and cost—creating an enormous impact on company‑wide efficiency. You should be passionate about leveraging huge data sets to drive sophisticated, algorithmic automation.

Key job responsibilities

  • Design, implement, and support a scalable platform providing high‑integrity, ad‑hoc access to petabyte‑scale capacity and financial datasets.
  • Build robust, automated, and fault‑tolerant data integration (ETL/ELT) pipelines using SQL, Python, and AWS services (e.g., Redshift, Glue, S3, EMR).
  • Interface with Data Scientists and Software Engineers to define and build the data structures needed to train and deploy advanced machine learning models for forecasting and optimization.
  • Implement data structures and modeling best practices that simplify complex, heterogeneous datasets and improve their accessibility and usability.
  • Continually improve ongoing data processes, automating or simplifying self‑service support for engineers and scientists.

About the team

CloudTune is part of Amazon's Intelligent Cloud Control organization, dedicated to inventing software systems that remove human decision‑making from SDO’s scaling and infrastructure spend planning. Our core mission is to drive end‑to‑end automation to provision Amazon's services—optimizing simultaneously for a great customer experience, high availability, and reduced cost. The solutions we build eliminate vast quantities of undifferentiated and tedious work, creating an enormous impact for tens of thousands of Amazon developers across the company. We are a high‑leverage team focused on innovative, large‑scale systems that define the future of Amazon's growth and efficiency.

Basic Qualifications
  • 3+ years of data engineering experience
  • Experience with data modeling, warehousing and building ETL pipelines
  • Experience programming with at least one software programming language
  • Experience in data mining, data management, reporting, and SQL queries

Preferred Qualifications
  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • Experience with non‑relational databases / data stores (object storage, document or key‑value stores, graph databases, column‑family databases)

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
