
Databricks Engineer with Capital Markets & Banking Experience

AIT Global, Inc.

Mississauga

Hybrid

CAD 80,000 - 100,000

Full time

2 days ago

Job summary

A leading technology firm is seeking an experienced Data Engineer in Mississauga to design and maintain scalable data solutions. The ideal candidate will have expertise in Python, PySpark, SQL, and Snowflake, with hands-on experience in Azure Data Factory. Responsibilities include developing data pipelines, ensuring data quality, and collaborating with stakeholders. This hybrid role requires excellent problem-solving skills, familiarity with cloud platforms, and a commitment to data governance and security compliance.

Qualifications

  • Experience as a Data Engineer or in similar roles.
  • Strong proficiency in Python and frameworks like PySpark.
  • Expertise in SQL with experience in Snowflake.

Responsibilities

  • Develop and maintain scalable data pipelines and ETL/ELT workflows.
  • Manage and optimize data storage in Snowflake.
  • Integrate data from multiple sources and ensure quality.

Skills

Python
PySpark
SQL
Snowflake
Azure Data Factory
Data Lakes
OpenShift

Education

Bachelor's or Master's degree in Computer Science or Data Engineering

Tools

Azure
AWS
Google Cloud Platform

Job description

Databricks Engineer with Capital Markets/Banking Experience (Azure)

Location: Mississauga, ON (Hybrid; 3 days per week in office)

Job Summary

We are seeking an experienced Data Engineer to design, develop, and maintain scalable data solutions. The ideal candidate will have a strong background in Python, PySpark, SQL, Snowflake, Azure Data Factory, Data Lakes, and OpenShift, with a passion for building reliable data pipelines and enabling data-driven decision-making.

Key Responsibilities

  • Develop, construct, test, and maintain scalable data pipelines and ETL/ELT workflows using Python, PySpark, and SQL.
  • Manage and optimize data storage solutions in Snowflake and on Data Lakes.
  • Integrate data from multiple sources and ensure data quality, integrity, and security.
  • Design and implement automated workflows using Azure Data Factory or other orchestration tools.
  • Deploy and manage containerized applications and services on OpenShift.
  • Collaborate with Data Scientists, Analysts, and Stakeholders to understand data requirements.
  • Monitor and troubleshoot data pipelines and resolve performance bottlenecks.
  • Maintain documentation of data architecture, workflows, and best practices.
  • Ensure compliance with data governance and security policies.

Qualifications & Skills

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Proven experience as a Data Engineer or in similar roles.
  • Strong proficiency in Python and related frameworks such as PySpark.
  • Expertise in SQL and experience with Snowflake Data Warehouse.
  • Hands‑on experience with data orchestration using Azure Data Factory or similar tools.
  • Knowledge of Data Lakes architecture and management.
  • Experience deploying and managing containerized applications on OpenShift or Kubernetes.
  • Familiarity with cloud platforms (Azure, AWS, Google Cloud Platform).
  • Understanding of data security, governance, and compliance best practices.
  • Strong problem‑solving skills and attention to detail.
  • Excellent communication and teamwork abilities.

Preferred Skills

  • Experience with other big data tools (e.g., Kafka, Hadoop).
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with other cloud data services and tools.