25-108 Senior Data Engineer

Morson

Pickering

Hybrid

CAD 60,000 - 80,000

Full time


Job summary

A data solutions provider is seeking a Senior Data Engineer to enhance customer-centric digital experiences. Responsibilities include building scalable data pipelines and collaborating with agile teams. The ideal candidate is skilled in Azure Data Factory, Python, and has experience in data governance. This contract position offers a hybrid work environment with a competitive hourly rate of $70-$90.

Qualifications

  • Experience in designing and building data pipelines using Azure Data Factory and Databricks.
  • Fluent in creating data processing frameworks using Python and SQL.
  • Good understanding of data structures and data processing frameworks.

Responsibilities

  • Build and support data-driven applications for customer-centric digital experiences.
  • Collaborate with cross-discipline agile teams and ensure data security.
  • Implement data ingestion and curation pipelines for business intelligence.

Skills

Azure Data Factory
Python
Databricks
SQL
PySpark
Data Warehouse
Data Lakehouses
Apache Spark
Kafka
Hadoop

Education

Four-year university education in computer science or related fields

Tools

Azure Data Lake
Azure Synapse Analytics
Azure SQL Databases

Job Description

Position : Senior Data Engineer

Resume Due Date : Wednesday, June 25, 2025 (5:00 PM EST)

Number of Vacancies : 2

Level : MP4, up to $90/hr INC

Duration : 12 Months

Hours of work : 35

Location : 889 Brock Road, Pickering (Hybrid, 4 days remote)

Job Overview

  • As a Senior Data Engineer, you will be responsible for building and supporting the data-driven applications that enable innovative, customer-centric digital experiences.
  • You will work as part of a cross-discipline agile team whose members help each other solve problems across all business areas.
  • You will build reliable, supportable, and performant data lake and data warehouse products to meet the organization's need for data to drive reporting, analytics, applications, and innovation.
  • You will employ best practices in development, security, and accessibility to achieve the highest quality of service for our customers.
  • Build and productionize modular and scalable ELT/ETL data pipelines and data infrastructure, leveraging the wide range of data sources across the organization.
  • Implement data ingestion and curation pipelines that offer an integrated, business-centric single source of truth for business intelligence, reporting, and downstream system use, in collaboration with the Data Architect.
  • Work closely with the Data Architect, infrastructure, and cyber security teams to ensure data is secure in transit and at rest.
  • Clean, prepare, and optimize datasets for performance, ensuring lineage and quality controls are applied throughout the data integration cycle.
  • Support Business Intelligence Analysts in modelling data for visualization and reporting, using dimensional data modelling and aggregation optimization methods.
  • Provide production support for issues related to ingestion, data transformation, pipeline performance, and data accuracy and integrity.
  • Collaborate with the data architect, business analysts, data scientists, data engineers, data analysts, solution architects, and data modelers to develop data pipelines that feed our data marketplace.
  • Assist in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics Services, Azure Databricks, Collibra, and Power BI.
  • Work within the agile Scrum work management framework in the delivery of products and services, including contributing to feature and user story backlog item development and utilizing related Kanban/Scrum toolsets.
  • Assist in building the data catalog and maintaining relevant metadata for datasets published for enterprise use.
  • Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark, and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats.
  • Document as-built pipelines and data products within the product description and utilize source control to ensure a maintainable codebase.
  • Implement orchestration of data pipeline execution to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up to date as possible with minimal disruption to end-customer use.
  • Create tooling to help with day-to-day tasks and reduce toil via automation wherever possible.
  • Work with Continuous Integration/Continuous Delivery and DevOps pipelines to automate infrastructure, code delivery, product enhancement isolation, and proper release management and versioning.
  • Monitor the ongoing operation of in-production solutions, assist in troubleshooting issues, and provide Tier 2 support for datasets produced by the team on an as-required basis.
  • Implement and manage appropriate access to data products via role-based access control.
  • Write and perform automated unit and regression testing for data product builds, assist with user acceptance testing and system integration testing as required, and assist in the design of relevant test cases.
  • Participate in peer code review sessions and approve non-production pull requests.

Qualifications

  • Completion of a four-year university education in computer science, computer/software engineering, or other relevant programs within data engineering, data analysis, artificial intelligence, or machine learning.
  • Experience as a Data Engineer designing and building data pipelines using Azure Data Factory and Databricks is a must.
  • Fluent in creating data processing frameworks using Python, PySpark, Spark SQL, and SQL.
  • Experience with Azure Data Factory, ADLS, Synapse Analytics, and Databricks.
  • Experience building data pipelines for data lakehouses and data warehouses.
  • Good understanding of data structures and data processing frameworks.
  • Knowledge of data governance and data quality principles.
  • Effective communication skills to translate technical details to non-technical stakeholders.
Required Experience : Senior IC

Key Skills : Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type : Contract

Experience : years

Vacancy : 1

Hourly Salary : $70 - $90
