OPEN: Data Architect

Cpus Engineering Staffing Solutions Inc.

Pickering

Hybrid

CAD 80,000 - 120,000

Full time

Job summary

An innovative firm is seeking a Senior Data Architect to lead the design and implementation of data models and solutions on Azure. This role involves collaborating with agile teams to create scalable data pipelines and ensuring data security and integrity. The ideal candidate will have extensive experience in data modeling, data warehousing, and the Azure environment, contributing to the development of customer-centric digital experiences. Join a forward-thinking organization that values diversity and inclusion, and play a pivotal role in shaping the data landscape to meet evolving business needs.

Qualifications

  • Extensive knowledge of data modeling and pipeline design patterns.
  • 6-8 years of experience in data modeling and solution architecture.

Responsibilities

  • Lead the design and implementation of data ELT/ETL pipelines on Azure.
  • Collaborate with cross-discipline teams to develop data solutions.

Skills

Data Modeling
Azure Architecture
Data Pipeline Design
Data Warehousing
Data Security
Agile Methodologies
Data Engineering

Education

Bachelor's Degree in Computer Science
Relevant Programs in Data Engineering

Tools

Azure Data Factory
Azure Data Lake
Azure SQL Databases
Azure Synapse Analytics
Azure Databricks
Power BI

Job description

We are currently requesting resumes for the following position: Data Architect

Resume Due Date: Monday, April 14, 2025 (5:00PM EST)

Job ID: 25-053

Number of Vacancies: 1

Level: Senior

Duration: 12 Months

Hours of work: 35 hours per week

Location: 889 Brock Road, Pickering

Work Mode: Hybrid – 4 days remote

Job Overview

As a Data Architect you will be responsible for leading the Azure architecture, design, and delivery of data models and data products that enable innovative, customer-centric digital experiences. You will work as part of a cross-discipline agile team whose members help each other solve problems across all business areas. You will be a thought leader and subject matter expert on the team's data lake, data warehousing, and modeling activities, and will use your influence to ensure the team produces best-in-class data solutions built on repeatable, maintainable, and well-documented design patterns. You will employ best practices in development, security, accessibility, and design to achieve the highest quality of service for our customers.

JOB DUTIES

  • Lead the architecture and design, and oversee the implementation, of modular and scalable ELT/ETL data pipelines and data infrastructure on Azure and Databricks, leveraging the wide range of data sources across the organization.
  • Design curated common data models that offer an integrated, business-centric single source of truth for business intelligence, reporting, and downstream system use.
  • Work closely with infrastructure and cyber teams to ensure data is secure in transit and at rest.
  • Create, guide, and enforce code templates for the delivery of data pipelines and transformations for structured, semi-structured, and unstructured data sets.
  • Develop modeling guidelines that ensure model extensibility and reuse by employing industry-standard disciplines for building facts, dimensions, bridge tables, aggregates, slowly changing dimensions, and other dimensional and fact optimizations (see the first sketch following this list).
  • Establish standards for database system fields, including primary and natural key combinations that optimize join performance across a multi-domain, multi-subject-area physical model (structured zone) and semantic model (curated zone).
  • Transform data and map it to more valuable and understandable semantic-layer datasets for consumption, moving from system-centric language to business-centric language (see the second sketch following this list).
  • Collaborate with business analysts, data scientists, data engineers, data analysts and solution architects to develop data pipelines to feed our data marketplace.
  • Introduce new technologies to the environment through research and POCs, and prepare POC code designs that can be implemented and productionized by developers.
  • Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics Services, Azure Databricks, Microsoft Purview, and Power BI.
  • Work within the agile SCRUM work management framework in delivery of products and services, including contributing to feature & user story backlog item development, and utilizing related Kanban/SCRUM toolsets.
  • Document as-built architecture and designs within the product description.
  • Design data solutions that enable batch, near-real-time, event-driven, and/or streaming approaches depending on business requirements.
  • Design & advise on orchestration of data pipeline execution to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up-to-date as possible, with minimal disruption to end-customer use.
  • Ensure that designs are implemented with proper attention to data security, access management, and data cataloging requirements.
  • Approve pull requests related to production deployments.
  • Demonstrate solutions to business customers to ensure customer acceptance and solicit feedback to drive iterative improvements.
  • Assist in troubleshooting issues for datasets produced by the team (Tier 3 support), on an as-required basis.
  • Guide data modelers, business analysts, and data scientists in building models optimized for KPI delivery, actionable feedback/writeback to operational systems, and improved predictability of machine learning models and experiments.
  • Develop Bicep or Terraform templates to manage Azure infrastructure as code.
  • Perform hands-on data engineering work to build data ingestion and data transformation pipelines.
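
As a rough illustration of the slowly-changing-dimension duty above, the following minimal sketch shows a Type 2 update on Databricks using the Delta Lake Python API. All table, column, and attribute names (dim_customer, customer_id, address, and so on) are hypothetical placeholders, not taken from this posting.

```python
# SCD Type 2 upsert sketch -- table, column, and attribute names
# (dim_customer, customer_id, address, ...) are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging_customer")   # incoming changed rows
dim = DeltaTable.forName(spark, "dim_customer")

# Step 1: close off current rows whose tracked attribute changed.
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",
        set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: append the new versions as the current rows. For brevity this
# appends every staged row; a production pipeline would insert only rows
# whose tracked attributes actually changed.
new_rows = (updates
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date"))
            .withColumn("is_current", F.lit(True)))
new_rows.write.format("delta").mode("append").saveAsTable("dim_customer")
```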
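
Similarly, the semantic-layer mapping duty can be sketched as a simple re-projection of structured-zone columns under business-centric names. Again, every table and column name below is an assumed placeholder.

```python
# Semantic-layer mapping sketch -- all table and column names are
# assumed placeholders, not taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Re-project cryptic system fields under business-centric names.
curated = (spark.table("structured_zone.cust_mstr")
           .select(F.col("cust_no").alias("customer_id"),
                   F.col("acct_open_dt").alias("account_opened_date"),
                   F.col("seg_cd").alias("customer_segment")))

curated.write.format("delta").mode("overwrite").saveAsTable(
    "curated_zone.dim_customer_profile")
```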

Qualifications

EDUCATION

  • Requires extensive knowledge of designing a data model to solve a business problem; specifying a data pipeline design pattern to bring data into a data warehouse; optimizing data structures to achieve the required performance; designing low-latency and/or event-driven patterns of data processing; and creating a common data model to support current and future business needs.
  • This knowledge is considered to be normally acquired through the completion of a four-year university education in computer science, computer/software engineering, or other relevant programs in data engineering, data analysis, artificial intelligence, or machine learning.

EXPERIENCE

  • Experience guiding data lake ingestion and data modeling projects in the Azure cloud environment; experience modeling relational and in-memory models with star/snowflake schemas; and experience designing and implementing event-driven (pub/sub), near-real-time, or streaming data solutions involving structured, semi-structured, and unstructured data across various platforms (a minimal streaming sketch follows this list).
  • Over 6 and up to and including 8 years in data modeling, data warehouse design, and data solution architecture in a Big Data environment is considered necessary to gain this experience.
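
As a minimal sketch of the near-real-time/streaming pattern referenced above, Spark Structured Streaming on Databricks can land event data into a Delta table. The storage path, schema, and table names here are assumptions; a real feed might arrive via Event Hubs or Kafka instead of files.

```python
# Streaming ingestion sketch -- the storage path, schema, and table
# names are assumptions; a real feed might arrive via Event Hubs/Kafka.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Incrementally pick up JSON files landing in the raw zone...
events = (spark.readStream
          .schema(event_schema)
          .json("abfss://raw@examplelake.dfs.core.windows.net/events/"))

# ...and append them to a structured-zone Delta table, with a checkpoint
# so the stream can restart without reprocessing.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/checkpoints/events_ingest")
       .outputMode("append")
       .toTable("structured_zone.events"))
```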

Our client is dedicated to promoting employment equity and encourages applications from equity-seeking communities including but not limited to: Indigenous Peoples, racialized individuals, persons with disabilities, and women. Our client aims to build a diverse team that reflects the communities it serves, enhancing its ability to meet their needs. Our client is committed to an inclusive and accessible work environment and invites all qualified individuals to apply, offering accommodations during the application, interview, and onboarding process as needed. This effort supports our client's long-term strategy for equity, diversity, and inclusion.
