A leading tech solutions provider is seeking a Senior Azure Data Engineer in Toronto with strong Databricks and Python experience. The role involves designing and managing data pipelines, automating infrastructure, and collaborating with teams to implement data solutions. Ideal candidates have over six years of Azure data engineering experience and a Bachelor's degree in an IT-related field. The position offers a hybrid work model.
Senior Azure Data Engineer with strong Databricks and Python experience to work with one of our financial services clients - 15231
Location Address: Toronto/Hybrid, on site Tuesday/Wednesday/Thursday
Contract Duration: 08/25/2025 to 12/31/2025; ability to start ASAP, dependent on check clearance
Schedule Hours: 9am-5pm Monday-Friday; standard 37.5 hrs/week
Story Behind the Need
• Project: Migrating the data warehouse from Oracle
Typical Day in Role
• Design, develop, and manage data pipelines that extract, transform, and load data from diverse sources (see the illustrative PySpark sketch after this list).
• Automate infrastructure provisioning and deployment using tools such as Terraform.
• Collaborate with architecture, security, and risk teams to implement the latest guidelines and Azure best practices.
• Collaborate with business stakeholders to propose data solutions that align with business goals and improve decision-making processes.
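For context, here is a minimal sketch of the kind of ETL pipeline the role describes: reading raw files from ADLS, applying a transformation in PySpark, and writing curated Delta output. All paths, container names, and columns are hypothetical assumptions for illustration, not details from this posting.

```python
# Minimal PySpark ETL sketch (illustrative only; storage paths and
# column names are hypothetical assumptions, not from the posting).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV files landed in a hypothetical ADLS container.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Transform: deduplicate, normalize the date, derive a column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("total_cents", (F.col("total") * 100).cast("long"))
)

# Load: write Delta output, partitioned by date, to the curated zone.
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```

In practice a pipeline like this would typically be packaged as a Databricks notebook or job and orchestrated by Azure Data Factory, per the tooling listed in the requirements below.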
Candidate Requirements/Must Have Skills:
1) 6+ years of data engineering experience with Azure-related data technologies.
2) Solid understanding of Azure infrastructure, including subscriptions, resource groups, resources, access control with RBAC (role-based access control), integration with Azure AD, Azure security principals (user group, service principal, managed identity), network concepts (VNet, subnet, NSG rules, private endpoints), password/credential/key management, and data protection.
3) Strong hands-on knowledge of Azure Databricks, ADF, ADLS, Synapse serverless/dedicated/Spark pools, Python, PySpark, and T-SQL, along with experience writing and maintaining scripts for ETL processes and automation in Azure Data Factory and Azure Databricks.
4) High proficiency in Git/Jenkins/DevOps processes to maintain and resolve issues with data pipelines in production.
Nice-To-Have Skills:
• Knowledge of implementing Azure technologies and networking via Terraform, along with the ability to troubleshoot Azure infrastructure issues in production.
• Experience with data modelling, data marts, data lakehouse architecture, SCD (slowly changing dimensions), data mesh, and Delta Lake overall (a minimal Delta merge sketch follows this list).
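As one concrete illustration of the Delta Lake/SCD item above, here is a minimal SCD Type 1 upsert using the Delta Lake MERGE API (delta-spark package, bundled on Databricks). The table paths and the key column are hypothetical assumptions.

```python
# Minimal Delta Lake upsert (SCD Type 1) sketch; requires delta-spark,
# which is preinstalled on Databricks. Paths and column names are
# hypothetical assumptions, not from the posting.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd1-merge").getOrCreate()

# Target dimension table and incoming staged updates (hypothetical paths).
target = DeltaTable.forPath(
    spark, "abfss://curated@examplelake.dfs.core.windows.net/customers/"
)
updates = spark.read.format("delta").load(
    "abfss://staging@examplelake.dfs.core.windows.net/customer_updates/"
)

# MERGE: overwrite matching rows in place, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

An SCD Type 2 variant would instead close out the matched row (end-date it) and insert a new current version, but the MERGE structure is the same.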
Education:
• Bachelor’s degree or equivalent experience in a computer/IT or data-related field is required.
• Master’s degree is a plus.
Interview Format:
• 2 rounds max, with the technical team (2 individuals).
• If the 1st interview is held virtually, the 2nd interview will likely be booked in person.