Data Engineer III / Senior Data Engineer | Engineering | Toronto, Canada

Heart Talent

Toronto

On-site

CAD 80,000 - 110,000

Full time

18 days ago

Job summary

An innovative firm is seeking a passionate Data Engineer to join their dynamic team in Toronto. This role involves designing and maintaining ETL processes and data pipelines, ensuring data quality and integrity while working with cutting-edge cloud technologies. The ideal candidate will have extensive experience in Python and SQL, and be skilled in data warehousing and cloud data architectures. This is an exciting opportunity to contribute to transformative projects that empower clients across various industries. If you thrive in a fast-paced environment and enjoy creative problem-solving, this position is perfect for you.

Qualifications

  • 3+ years of Python coding experience and 5+ years of SQL Server development.
  • Experience with ETL pipelines using Databricks PySpark and cloud data warehouses.

Responsibilities

  • Design, build, and maintain ETL infrastructure and data pipelines.
  • Optimize data pipelines for efficient ingestion and transformation.

Skills

Python
SQL
ETL Processes
Data Warehousing
Data Modeling
Data Integration
Cloud Technologies
Data Quality Assurance

Education

Bachelor's Degree in Computer Science or related field

Tools

Databricks
ADF (Azure Data Factory)
Synapse
Redshift
Snowflake
Airflow
AWS Lambda
AWS Glue

Job description

Overview

Haptiq is a leader in delivering digital solutions and consulting services that drive value and transform businesses. We specialize in leveraging technology to improve efficiencies and offer comprehensive solutions tailored to meet the unique needs of our clients across various industries. We bring next-generation technology to private capital markets through the Olympus suite of cloud-based solutions designed to empower private equity and credit funds, as well as the firms in which they invest.

The Opportunity

We are seeking a highly motivated, self-driven Data Engineer for our growing data team who can deliver both independently and as part of a team. In this role, you will play a crucial part in designing, building, and maintaining our ETL infrastructure and data pipelines.

Responsibilities and Duties

  • This position is for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.
  • Adhere to established coding principles and standards.
  • Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
  • Design, develop, and deploy Python scripts and ETL processes in an ADF environment to process and analyze varying volumes of data.
  • Experience with data warehousing (DWH), data integration, cloud, design, and data modeling.
  • Proficient in developing programs in Python and SQL.
  • Experience with dimensional data modeling for data warehouses.
  • Work with event-based/streaming technologies to ingest and process data.
  • Work with structured, semi-structured, and unstructured data.
  • Optimize ETL jobs for performance and scalability to handle big data workloads.
  • Monitor and troubleshoot ADF jobs, identify and resolve issues or bottlenecks.
  • Implement best practices for data management, security, and governance within the Databricks environment. Experience designing and developing Enterprise Data Warehouse solutions.
  • Proficient in writing SQL queries and programming, including stored procedures and reverse-engineering existing processes.
  • Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
  • Check in, check out, and peer-review code when merging PRs into the Git repository.
  • Knowledge of deploying packages and migrating code to staging and production environments via CI/CD pipelines.

Requirements

  • 3+ years of Python coding experience.
  • 5+ years of SQL Server-based development with large datasets.
  • 5+ years of experience developing and deploying ETL pipelines using Databricks PySpark.
  • Experience with a cloud data warehouse such as Synapse, ADF, Redshift, or Snowflake.
  • Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
  • Previous experience leading an enterprise-wide Cloud Data Platform migration with strong architectural and design skills.
  • Experience with Cloud-based data architectures, messaging, and analytics.
  • Nice to have: experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.

Why Join Us?

We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it.

Job ID 8952711074 | Posted on April 17, 2025
