Databricks Engineer

ShyftLabs

Toronto

On-site

CAD 90,000 - 130,000

Full time

22 days ago

Job summary

ShyftLabs, a growing data product company, is hiring a Databricks Engineer to design and optimize large-scale data solutions. The ideal candidate will have strong experience with Apache Spark and cloud platforms, and will contribute to innovative data architectures that drive data-driven insights for Fortune 500 clients. The role offers a competitive salary and strong opportunities for growth in data engineering.

Benefits

Competitive salary
Strong insurance package
Extensive learning and development resources

Qualifications

  • 5+ years of hands-on experience with Databricks and Apache Spark.
  • Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
  • Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.

Responsibilities

  • Design, implement, and optimize big data pipelines in Databricks.
  • Develop scalable ETL workflows to process large datasets.
  • Collaborate with data scientists and engineers to enable advanced AI/ML workflows.

Skills

Apache Spark
SQL
Python
Data Governance
ETL frameworks
Problem-Solving

Education

Bachelor’s or Master’s degree in Computer Science

Tools

Databricks
AWS
Azure
GCP
CI/CD tools
Kubernetes
Docker
Terraform

Job description

Position Overview:

ShyftLabs is seeking a skilled Databricks Engineer to design, develop, and optimize big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions that help accelerate the growth of businesses across industries by focusing on creating value through innovation.


Job Responsibilities
  • Design, implement, and optimize big data pipelines in Databricks.
  • Develop scalable ETL workflows to process large datasets.
  • Leverage Apache Spark for distributed data processing and real-time analytics.
  • Implement data governance, security policies, and compliance standards.
  • Optimize data lakehouse architectures for performance and cost-efficiency.
  • Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows.
  • Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
  • Automate workflows using CI/CD pipelines and infrastructure-as-code practices.
  • Ensure data integrity, quality, and reliability in all pipelines.

Basic Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of hands-on experience with Databricks and Apache Spark.
  • Proficiency in SQL, Python, or Scala for data processing and analysis.
  • Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
  • Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
  • Experience with CI/CD tools and DevOps best practices.
  • Familiarity with data security, compliance, and governance best practices.
  • Strong problem-solving and analytical skills with an ability to work in a fast-paced environment.

Preferred Qualifications
  • Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
  • Hands-on experience with MLflow, Feature Store, or Databricks SQL.
  • Exposure to Kubernetes, Docker, and Terraform.
  • Experience with streaming data architectures (Kafka, Kinesis, etc.).
  • Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
  • Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.

