
Senior Data Engineer (Databricks Expertise)

Tuppl

Mississauga

On-site

CAD 100,000 - 130,000

Full time

Today

Job summary

A leading data solutions provider in Mississauga is looking for a seasoned data engineer to build and maintain scalable ETL / ELT pipelines using Databricks. The role requires 10+ years of experience in data engineering and strong expertise in Spark, PySpark, and Azure Cloud Services. You will work closely with cross-functional teams to ensure data integrity and optimize performance, delivering reliable data solutions. This is a long-term contract position with competitive compensation.

Qualifications

  • 10+ years of experience in data engineering or a related field.
  • Strong expertise in building ETL / ELT pipelines using Databricks.
  • Experience in optimizing workloads for cost efficiency.

Responsibilities

  • Build and maintain scalable ETL / ELT pipelines using Databricks.
  • Leverage PySpark / Spark and SQL to process large datasets.
  • Implement and manage data security and governance standards.

Skills

Databricks
PySpark
SQL
Azure Cloud Services
Python
GitLab

Job description

Duration

Long-term contract

Experience Needed

10+ Years

Key Responsibilities
  • Build and maintain scalable ETL / ELT pipelines using Databricks.
  • Leverage PySpark / Spark and SQL to transform and process large datasets.
  • Integrate data from multiple sources including Azure Blob Storage, ADLS and other relational / non-relational systems.
  • Work closely with multiple teams to prepare data for dashboards and BI tools.
  • Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.
Performance & Optimization
  • Optimize Databricks workloads for cost efficiency and performance.
  • Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
Governance & Security
  • Implement and manage data security, access controls and governance standards using Unity Catalog.
  • Ensure compliance with organizational and regulatory data policies.
Deployment
  • Leverage Databricks Asset Bundles for deployment of Databricks jobs, notebooks and configurations across environments.
  • Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.
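Deployments of the kind described above are typically driven by a `databricks.yml` at the project root. A minimal sketch follows; the bundle name, job, and notebook path are placeholders, not details from the posting:

```yaml
# databricks.yml — hypothetical Asset Bundle layout; all names are placeholders.
bundle:
  name: etl-pipelines

resources:
  jobs:
    nightly_etl:
      name: nightly-etl
      tasks:
        - task_key: transform
          notebook_task:
            notebook_path: ./notebooks/transform.py

targets:
  dev:
    mode: development
    default: true
  prod:
    mode: production
```

With a file like this, `databricks bundle deploy -t dev` pushes the job and notebook to the dev target, which is how the same artifacts get promoted across environments under version control.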
Technical Skills
  • Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, table update triggers, Delta Live Tables pipelines, Databricks Runtime, etc.).
  • Proficiency in Azure Cloud Services.
  • Solid understanding of Spark and PySpark for big data processing.
  • Strong programming skills in Python.
  • Experience with relational databases.
  • Knowledge of Databricks Asset Bundles and GitLab.
Preferred Experience
  • Familiarity with Databricks Runtimes and advanced configurations.
  • Knowledge of streaming frameworks like Spark Streaming.