Senior Data Developer

CPUS Engineering Staffing Solutions Inc.

Pickering

Remote

CAD 80,000 - 120,000

Full time

30+ days ago

Job summary

An established industry player is seeking a Senior Data Developer to join a dynamic remote team. In this role, you will be responsible for building and supporting data-driven applications that enhance customer-centric digital experiences. You'll work collaboratively within an agile framework to design and implement scalable data pipelines, ensuring data integrity and security. This position offers the opportunity to utilize cutting-edge tools and technologies, such as Azure Data Factory and Python, to create impactful data solutions. If you are passionate about data engineering and eager to make a difference, this role is perfect for you.

Qualifications

  • Completion of a four-year degree in computer science or relevant field.
  • Experience as a Data Engineer designing and building data pipelines.

Responsibilities

  • Build and productionize modular and scalable data ELT/ETL pipelines.
  • Collaborate with cross-discipline teams to develop data pipelines.

Skills

Python
Data Engineering
Data Pipeline Design
Data Processing Frameworks
Communication Skills
Data Governance
Agile Methodologies

Education

Four-year University Education in Computer Science

Tools

Azure Data Factory
Azure Data Lake
Azure Synapse Analytics
Azure Databricks
Power BI
Spark
SQL

Job description

We are currently requesting resumes for the following position: Senior Data Developer

Resume Due Date: Wednesday, January 22nd, 2025 (5:00 PM EST)

Job ID: 25-010

Number of Vacancies: 1

Level: Senior

Duration: 10 Months

Hours of work: 35 per week

Location: 889 Brock Rd., Pickering

Work Mode: 100% Remote

Job Overview

JOB FUNCTION

  • As a Senior Data Developer, you will be responsible for building and supporting the data-driven applications that enable innovative, customer-centric digital experiences.
  • You will work as part of a cross-discipline agile team whose members help each other solve problems across all business areas.
  • You will build reliable, supportable, and performant data lake and data warehouse products to meet the organization’s need for data to drive reporting, analytics, applications, and innovation.
  • You will employ best practices in development, security, and accessibility to achieve the highest quality of service for our customers.

JOB DUTIES

  • Build and productionize modular and scalable data ELT/ETL pipelines and data infrastructure leveraging the wide range of data sources across the organization.
  • Implement data ingestion and curation pipelines that offer an integrated, business-centric single source of truth for business intelligence, reporting, and downstream system use, in collaboration with the Data Architect.
  • Work closely with the Data Architect and the infrastructure and cyber security teams to ensure data is secure in transit and at rest.
  • Clean, prepare and optimize datasets for performance, ensuring lineage and quality controls are applied throughout the data integration cycle.
  • Support Business Intelligence Analysts in modelling data for visualization and reporting, using dimensional data modelling and aggregation optimization methods.
  • Provide production support for issues related to ingestion, data transformation and pipeline performance, data accuracy and integrity.
  • Collaborate with data architect, business analysts, data scientists, data engineers, data analysts, solution architects and data modelers to develop data pipelines to feed our data marketplace.
  • Assist in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Database, Azure Data Warehouse, Azure Synapse Analytics, Azure Databricks, Collibra, and Power BI.
  • Work within the agile Scrum work management framework in the delivery of products and services, including contributing to feature and user story backlog item development, and utilizing related Kanban/Scrum toolsets.
  • Assist in building the data catalog and maintaining relevant metadata for datasets published for enterprise use.
  • Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark, and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats (a minimal illustrative sketch follows this list).
  • Document as-built pipelines and data products within the product description, and utilize source control to ensure a maintainable codebase.
  • Implement orchestration of data pipeline execution to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up to date as possible, with minimal disruption to end-customer use.
  • Create tooling to help with day-to-day tasks, and reduce toil via automation wherever possible.
  • Work with Continuous Integration/Continuous Delivery and DevOps pipelines to automate infrastructure and code delivery, isolate product enhancements, and ensure proper release management and versioning.
  • Monitor the ongoing operation of in-production solutions, assist in troubleshooting issues, and provide Tier 2 support for datasets produced by the team, on an as-required basis.
  • Implement and manage appropriate access to data products via role-based access control.
  • Write and perform automated unit and regression testing for data product builds, assist with user acceptance testing and system integration testing as required, and assist in design of relevant test cases.
  • Participate in peer code review sessions, and approve non-production pull requests.
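
By way of illustration only, here is a minimal PySpark sketch of the ingest-curate-publish pattern these duties describe. The landing-zone path, column names, quality rule, and target table are hypothetical placeholders rather than details of this engagement, and the Delta write assumes a Databricks-style environment where Delta Lake is configured.

```python
# Minimal ELT sketch (illustrative only): ingest a raw CSV extract,
# apply a simple quality rule, stamp lineage metadata, and publish a
# curated table. All paths and names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_curation").getOrCreate()

# Ingest: read a raw landing-zone extract (hypothetical path).
raw = spark.read.option("header", True).csv("/lake/raw/orders/*.csv")

# Curate: cast types, enforce a basic quality control, and record
# ingestion time so downstream consumers can trace lineage.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") >= 0)                       # quality rule
       .withColumn("_ingested_at", F.current_timestamp())  # lineage stamp
)

# Publish: write the curated dataset for BI and downstream use
# (assumes Delta Lake is available, as on Azure Databricks).
curated.write.mode("overwrite").format("delta").saveAsTable("curated.orders")
```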

Qualifications

Education

  • Completion of a four-year University education in computer science, computer/software engineering or other relevant programs within data engineering, data analysis, artificial intelligence, or machine learning.

Experience

  • Experience as a Data Engineer designing and building data pipelines.
  • Fluent in creating data processing frameworks using Python, PySpark, Spark SQL, and SQL (a brief illustrative sketch follows this list).
  • Experience with Azure Data Factory, ADLS, Synapse Analytics, and Databricks.
  • Experience building data pipelines for data lakehouses and data warehouses.
  • Good understanding of data structures and data processing frameworks.
  • Knowledge of data governance and data quality principles.
  • Effective communication skills to translate technical details for non-technical stakeholders.
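
As a brief, hypothetical illustration of the Python/PySpark fluency and automated unit testing this role calls for, the sketch below tests a small, pure pipeline transformation against a local Spark session; the function, columns, and quality rule are invented for the example.

```python
# Illustrative unit test (pytest + local PySpark) for a small, pure
# pipeline transformation. Names and the quality rule are hypothetical.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def drop_negative_amounts(df):
    """Quality rule: remove rows whose order amount is negative."""
    return df.filter(F.col("amount") >= 0)

@pytest.fixture(scope="session")
def spark():
    # A local single-threaded session keeps the test self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_drop_negative_amounts(spark):
    df = spark.createDataFrame([(1, 10.0), (2, -5.0)], ["id", "amount"])
    kept = drop_negative_amounts(df).collect()
    assert [row.id for row in kept] == [1]
```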

Our client is dedicated to promoting employment equity and encourages applications from equity-seeking communities, including but not limited to Indigenous Peoples, racialized individuals, persons with disabilities, and women. Our client aims to build a diverse team that reflects the communities it serves, enhancing its ability to meet their needs. We are committed to an inclusive and accessible work environment and invite all qualified individuals to apply, offering accommodations during the application, interview, and onboarding process as needed. This effort supports our client’s long-term strategy for equity, diversity, and inclusion.

Similar jobs

Data Engineer - Snowflake - Senior

Lumenalta

Toronto

Remote

CAD 100,000 - 130,000

Yesterday
Be an early applicant

Data Engineer - Snowflake - Tech Lead

Lumenalta

Toronto

Remote

CAD 100,000 - 130,000

Yesterday
Be an early applicant

Network Data Engineer - Remote / Telecommute

Cynet Systems Inc

Toronto

Remote

CAD 80,000 - 100,000

2 days ago
Be an early applicant

Senior Software Developer, Data Technology (Canada)

Braintrust

Alberta

Remote

CAD 98,000 - 138,000

4 days ago
Be an early applicant

Data Engineer, AI/ML (Toronto, Hybrid / Remote)

Autodesk

Toronto

Remote

CAD 80,000 - 110,000

12 days ago

Senior Data Engineer (Database Developer)

Fundserv Inc.

Toronto

Hybrid

CAD 90,000 - 130,000

Yesterday
Be an early applicant

Microsoft Fabric Data Engineer

UNC Health Care

Morrisville

Remote

CAD 80,000 - 100,000

Today
Be an early applicant

Microsoft Fabric Data Engineer

UNC REX Healthcare

Morrisville

Remote

CAD 80,000 - 100,000

Yesterday
Be an early applicant

Software Developer, Data Sync

Mappedin

Waterloo

Remote

CAD 70,000 - 90,000

5 days ago
Be an early applicant