25-199 - Data Engineer

Morson Canada

Oshawa

Hybrid

CAD 75,000 - 95,000

Full time


Job summary

A leading data solutions provider in Oshawa is seeking an Azure and Databricks Data Engineer. In this role, you will design and build data pipelines to support innovative applications. Collaboration with various teams is essential to ensure data integrity and quality. Required qualifications include a bachelor's degree in computer science and experience in data engineering, particularly with Azure tools. This position offers a hybrid work environment, combining remote work with in-office collaboration.

Qualifications

  • Completion of a four-year university education in computer science or related fields.
  • Experience as a Data Engineer designing and building data pipelines.
  • Fluent in creating data processing frameworks using Python and Spark.

Responsibilities

  • Design and support data-driven applications for digital experiences.
  • Build and productionize data ETL pipelines using Azure tools.
  • Collaborate with teams to optimize data delivery and maintain data quality.

Skills

Python
PySpark
Azure Data Factory
Data modeling
Data quality principles
Communication

Education

Bachelor's degree in Computer Science or related field

Tools

Azure Data Lake
Azure SQL Databases
Azure Synapse Analytics
Power BI
Spark

Job description

Resume Due Date: Tuesday, December 16th, 2025 (5:00PM EST)

Number of Vacancies: 4

Level: MP4

Duration: 11 Months

Hours of work: 35 hours

Location: CHQ (Hybrid – 3 days remote)

Job Overview
  • As an Azure and Databricks Data Engineer, you will be responsible for designing, building, and supporting the data-driven applications that enable innovative, customer-centric digital experiences.
  • Work as part of a cross‑discipline agile team whose members help each other solve problems across all business areas.
  • Build reliable, supportable, and performant data lake and data warehouse products to meet the organization’s need for data to drive reporting, analytics, applications, and innovation.
  • Employ best practices in development, security, accessibility, and design to achieve the highest quality of service for our customers.
  • Build and productionize modular and scalable data ELT/ETL pipelines and data infrastructure leveraging the wide range of data sources across the organization.
  • Build curated common data models designed by the Data Modelers, in collaboration with the Data Architect, that offer an integrated, business‑centric single source of truth for business intelligence, reporting, and downstream system use.
  • Work closely with infrastructure and cyber teams, and with Senior Data Developers, to ensure data is secure in transit and at rest.
  • Clean, prepare and optimize datasets for performance, ensuring lineage and quality controls are applied throughout the data integration cycle.
  • Support Business Intelligence Analysts in modelling data for visualization and reporting, using dimensional data modeling and aggregation optimization methods.
  • Troubleshoot issues related to ingestion, data transformation, pipeline performance, and data accuracy and integrity.
  • Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers to develop data pipelines to feed our data marketplace.
  • Assist in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re‑designing infrastructure for greater scalability, etc.
  • Work with tools in the Microsoft Stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics, Azure Databricks, Microsoft Purview, and Power BI.
  • Work within the agile SCRUM work management framework in delivery of products and services, including contributing to feature & user story backlog item development, and utilizing related Kanban/SCRUM toolsets.
  • Assist in building the data catalog and maintaining relevant metadata for datasets published for enterprise use.
  • Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark, and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats (a brief illustrative sketch follows this list).
  • Document as‑built pipelines and data products within the product description, and utilize source control to ensure a maintainable code‑base.
  • Implement orchestration of data pipeline execution designed by Senior Data Engineers to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up‑to‑date as possible, with minimal disruption to end‑customer use.
  • Create tooling in collaboration with senior data engineers and data architects to help with day‑to‑day tasks, and reduce toil via automation wherever possible.
  • Work with Continuous Integration/Continuous Delivery and DevOps pipelines to help automate infrastructure, code delivery, and product enhancement isolation, and to support proper release management and versioning.
  • Monitor the ongoing operation of in‑production solutions, assist in troubleshooting issues, and provide Tier 2 support for datasets produced by the team, on an as‑required basis.
  • Implement and manage appropriate access to data products via role‑based access control based on guidance from senior data engineers.
  • Write and perform automated unit and regression testing for data product builds, assist with user acceptance testing and system integration testing as required, and assist in design of relevant test cases based on guidance from Data Architects.
  • Participate in peer code review sessions.
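
By way of a purely hypothetical sketch (not part of the posting's requirements), the kind of PySpark pipeline work described above might look roughly like the following; the storage account, containers, and column names are invented placeholders:

  # Hypothetical sketch only: the storage account, containers, and column
  # names are placeholders, not details taken from this posting.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("curate-orders").getOrCreate()

  # Ingest a raw CSV drop from an Azure Data Lake Storage container.
  raw = (
      spark.read.option("header", True)
      .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
  )

  # Basic cleansing and typing of the kind the role describes.
  curated = (
      raw.dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .filter(F.col("order_id").isNotNull())
  )

  # Land the curated dataset as Delta for downstream reporting and BI use.
  (curated.write.mode("overwrite").format("delta")
          .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
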
Qualifications
  • Completion of a four‑year university education in computer science, computer/software engineering, or other relevant programs within data engineering, data analysis, artificial intelligence, or machine learning.
  • Experience as a Data Engineer designing and building data pipelines.
  • Fluent in creating data processing frameworks using Python, PySpark, Spark SQL, and SQL
  • Experience with Azure Data Factory, ADLS, Synapse Analytics, and Databricks
  • Experience building data pipelines for Data Lakehouses and Data Warehouses
  • Good understanding of data structures and data processing frameworks
  • Knowledge of data governance and data quality principles (a brief illustrative check follows this list)
  • Effective communication skills to translate technical details to non‑technical stakeholders
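
As a similarly hypothetical illustration of the data quality principles listed above, a simple PySpark gate might count null and duplicate keys before a dataset is published; the path and column names are placeholders, not details from the posting:

  # Hypothetical data-quality gate; the path and column names are placeholders.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("dq-gate").getOrCreate()

  df = spark.read.format("delta").load(
      "abfss://curated@examplelake.dfs.core.windows.net/orders/")

  null_keys = df.filter(F.col("order_id").isNull()).count()
  duplicates = df.count() - df.dropDuplicates(["order_id"]).count()

  # Stop the run if either check fails, so bad data never reaches reports.
  assert null_keys == 0, f"{null_keys} rows are missing order_id"
  assert duplicates == 0, f"{duplicates} duplicate order_id rows found"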