Senior Data Engineer

Cpus Engineering Staffing Solutions Inc.

Toronto

Remote

CAD 70,000 - 110,000

Full time

Yesterday


Job summary

A leading engineering staffing solutions provider is seeking a Senior Data Engineer for a remote 3-month contract. The position involves designing robust data pipelines, enhancing data infrastructure, and ensuring data solutions meet the needs of analytics and business intelligence. Ideal candidates will have 4-6 years of experience in relevant areas, particularly in big data environments.

Qualifications

  • Requires 4-6 years of experience in data modeling, data warehousing, and architecture.
  • Experience as a Data Engineer in a Big Data environment is essential.
  • Strong knowledge of data ingestion, SQL, Python, and Spark required.

Responsibilities

  • Design and productionize modular data pipelines and data infrastructure.
  • Implement data models for business intelligence and analytics.
  • Monitor ongoing operations and assist with troubleshooting of datasets.

Skills

Data modeling
Data solution architecture
Programming methodologies
Agile development
Fluency with SQL
Fluency with Python
Fluency with Spark/PySpark

Education

A 4-year university degree in computer science or a relevant program

Tools

Azure Data Factory
Azure Data Lake
Azure SQL Databases
Azure Data Warehouse
Azure Synapse Analytics Services
Azure Databricks
Airflow
DBT

Job description

We are currently requesting resumes for the following position: Senior Data Engineer

Number of Vacancies: 1

Job ID: 24-080

Level: MP5

Duration: 3 months

Hours of work: 40

Location: 700 University Ave (100% Remote)

Recruiter: Lana Newman

Job Overview

  • Design, build and productionize modular and scalable data pipelines and data infrastructure leveraging the wide range of data sources across the organization.
  • Implement curated common data models that offer an integrated, business-centric single source of truth for business intelligence, analytics, artificial intelligence, and other downstream system use.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
  • Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics Services, Azure Databricks, Microsoft Purview, and Power BI.
  • Work within an agile work management framework in delivery of products and services, including contributing to feature & user story backlog item development.
  • Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark, and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats.
  • Implement orchestration of data pipeline execution to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up to date as possible, with minimal disruption to end-customer use.
  • Create tooling to help with day-to-day tasks and reduce toil via automation wherever possible.
  • Build continuous integration/continuous delivery (CI/CD) pipelines to automate testing and deployment of infrastructure and code.
  • Monitor the ongoing operation of in-production solutions, assist in troubleshooting issues, and provide Tier 2 support for datasets produced by the team, on an as-required basis.
  • Write and perform automated unit and regression testing for data product builds, assist with user acceptance testing and system integration testing as required, and assist in design of relevant test cases.
  • Participate in code review as both a submitter and reviewer.

Qualifications

  • A 4-year university degree in computer science, computer/software engineering, or another relevant program within data engineering, data analysis, artificial intelligence, or machine learning.
  • Requires 4-6 years of experience in data modeling, data warehouse design, and data solution architecture in a Big Data environment.
  • Experience as a Data Engineer in a Big Data environment.
  • Experience with integrating structured and unstructured data across various platforms and sources.
  • Knowledge of content fragmentation, partitioning, query parallelism, and query execution plans.
  • Experience with implementing event-driven (pub/sub), near-real-time, or streaming data solutions.
  • Strong knowledge of programming methodologies (source/version control, continuous integration/continuous delivery, automated testing, quality assurance) and agile development methodologies.
  • Fluency with SQL, Python, and Spark/PySpark is required.
  • Experience with Airflow and DBT.
  • 4-6 years of experience in data ingestion, data modeling, data engineering, and software development.
