A technology solutions firm in Toronto is seeking a Data Engineering Manager to architect, implement, and optimize data solutions using Databricks. The role requires leading a technical team, managing cloud infrastructure, and ensuring data processing excellence. Ideal candidates will have extensive experience in Databricks and strong programming skills in Python or Scala. This position offers hybrid work with potential for some travel.
Hybrid: onsite 2 days/week or 8 days/month
Two-interview process; one interview will be onsite
Databricks is a must
AWS or Azure
Must be able to do code reviews of Python code
Create designs and validate code
Expected to "own" the architecture
As the Data Engineering Manager, you will be responsible for architecting, implementing, and optimizing end-to-end data solutions on Databricks while integrating with core AWS services. You will lead a technical team of data engineers, ensuring best practices in performance, security, and scalability. This role requires a deep, hands‑on understanding of Databricks internals and a track record of delivering large‑scale data platforms in a cloud environment.
Lead a team of data engineers in the architecture and maintenance of Databricks Lakehouse platform, ensuring optimal platform performance and efficient data versioning using Delta Lake.
Manage and optimize Databricks infrastructure including cluster lifecycle, cost optimization, and integration with AWS services (S3, Glue, Lambda).
Design and implement scalable ETL/ELT frameworks and data pipelines using Spark (Python/Scala), incorporating streaming capabilities where needed.
Drive technical excellence through advanced performance tuning of Spark jobs, cluster configurations, and I/O optimization for large‑scale data processing.
Implement robust security and governance frameworks using Unity Catalog, ensuring compliance with industry standards and internal policies.
Lead and mentor data engineering teams, conduct code reviews, and champion Agile development practices while serving as technical liaison across departments.
Establish and maintain comprehensive monitoring solutions for data pipeline reliability, including SLAs, KPIs, and alerting mechanisms.
Configure and manage end-to-end CI/CD workflows with version control, automated testing, and automated deployment.
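As one concrete illustration of the monitoring responsibility above, a freshness-SLA check might look like the following. This is a minimal pure-Python sketch: the pipeline names, SLA thresholds, and the shape of `last_commit_times` are all invented for illustration, and in a real Databricks deployment the commit timestamps would come from Delta Lake table history rather than a plain dictionary.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLAs per pipeline, in minutes.
SLA_MINUTES = {
    "orders_bronze": 15,
    "orders_silver": 60,
}

def breached_slas(last_commit_times, now=None, slas=SLA_MINUTES):
    """Return the pipelines whose latest commit is older than their SLA.

    last_commit_times: mapping of pipeline name -> datetime of last commit
    (timezone-aware, UTC).
    """
    now = now or datetime.now(timezone.utc)
    breaches = []
    for name, last_commit in last_commit_times.items():
        if now - last_commit > timedelta(minutes=slas[name]):
            breaches.append(name)
    return sorted(breaches)
```

In practice the returned list would feed whatever alerting mechanism the team uses (PagerDuty, Slack webhooks, etc.), and the check itself would run on a schedule alongside the pipelines it watches.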
Your Role:
Bachelor’s degree in Engineering, Computer Science, or an equivalent field.
5+ years of hands‑on experience with Databricks and Apache Spark, demonstrating expertise in building and maintaining production‑grade data pipelines.
Proven experience leading and mentoring data engineering teams in complex, fast‑paced environments.
Extensive experience with AWS cloud services (S3, EC2, Glue, EMR, Lambda, Step Functions).
Strong programming proficiency in Python (PySpark) or Scala, and advanced SQL skills for analytics and data modeling.
Demonstrated expertise in infrastructure as code using Terraform or AWS CloudFormation for cloud resource management.
Strong background in data warehousing concepts, dimensional modeling, and experience with RDBMS systems (e.g., Postgres, Redshift).
Proficiency with version control systems (Git) and CI/CD pipelines, including automated testing and deployment workflows.
Excellent communication and stakeholder management skills, with demonstrated ability to translate complex technical concepts into business terms.
Demonstrated use of AI tools in the development lifecycle.
Some travel may be required to the US.
Knowledge of the financial industry is preferred.
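To make the SQL and dimensional-modeling expectation above concrete, here is the kind of star-schema query the role involves: a fact table aggregated by a dimension attribute. This is a toy sketch using an in-memory SQLite database; the table and column names are invented for illustration, and on the actual platform the equivalent would run as Spark SQL against Delta tables.

```python
import sqlite3

# Build a toy star schema: one fact table and one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_sales (sale_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'CA'), (2, 'US');
    INSERT INTO fact_sales VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 25.0);
""")

# A typical dimensional query: join the fact to the dimension and
# aggregate the measure by a dimension attribute.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_customer d ON d.customer_id = f.customer_id
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
```

The same join-and-aggregate pattern scales up to Redshift or Databricks SQL; only the surrounding engine changes, not the modeling discipline.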