Senior Data Engineer

VLink Inc

Mississauga

On-site

CAD 80,000 - 120,000

Part time

Today

Job summary

A prominent IT consulting firm is seeking an Azure Data Engineer to design and implement data pipelines using Databricks. The role requires expertise in Azure Cloud Services and proficiency in ETL processes. The ideal candidate will work collaboratively with stakeholders to ensure optimal performance and data governance. Relevant certifications are advantageous. The position is contract-based and targets mid-senior-level professionals.

Qualifications

  • Strong expertise in Databricks and Azure Cloud Services.
  • Solid understanding of Spark and PySpark for big data processing.
  • Experience with relational databases is required.

Responsibilities

  • Build and maintain scalable ETL/ELT pipelines using Databricks.
  • Optimize Databricks workloads for cost efficiency and performance.
  • Implement data security and governance standards.

Skills

Databricks
Azure Cloud Services
Spark
PySpark
Python

Job description
Overview

We are seeking a highly skilled Azure Data Engineer with strong expertise in Databricks to join our data team. The ideal candidate will design, implement and optimize large-scale data pipelines, ensuring scalability, reliability and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.

Key Responsibilities
  • Build and maintain scalable ETL/ELT pipelines using Databricks.
  • Leverage PySpark/Spark and SQL to transform and process large datasets.
  • Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems.
  • Work closely with multiple teams to prepare data for dashboards and BI tools.
  • Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.
Performance & Optimization
  • Optimize Databricks workloads for cost efficiency and performance.
  • Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
Governance & Security
  • Implement and manage data security, access controls and governance standards using Unity Catalog.
  • Ensure compliance with organizational and regulatory data policies.
Deployment
  • Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks and configurations across environments.
  • Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.
Technical Skills
  • Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse Architecture, Table Triggers, Delta Live Tables pipelines, Databricks Runtime, etc.).
  • Proficiency in Azure Cloud Services.
  • Solid understanding of Spark and PySpark for big data processing.
  • Strong programming skills in Python.
  • Experience with relational databases.
  • Knowledge of Databricks Asset Bundles and GitLab.
Preferred Experience
  • Familiarity with Databricks Runtimes and advanced configurations.
  • Knowledge of streaming frameworks like Spark Streaming.
Certifications

Relevant certifications in Azure or Databricks are beneficial for this role.

Experience & Employment Details
  • Seniority level: Mid-Senior level
  • Employment type: Contract
  • Job function: Information Technology
  • Industries: IT Services and IT Consulting