Edmonton
On-site
CAD 100,000 - 140,000
Full-time
Job summary
A technology solutions provider in Canada is seeking a skilled Data Architect to design and implement scalable data solutions on Azure. The ideal candidate will have extensive experience in data architecture strategy, particularly with Databricks and SQL, and will play a critical role in ensuring data governance and compliance with regulatory standards. This position offers opportunities for technical leadership and collaboration within cross-functional teams.
Qualifications
- 8+ years as a Data Architect designing data architecture strategies.
- 5+ years in Azure services for secure and scalable solutions.
- 6+ years in Python and SQL for ETL/ELT workflows.
Responsibilities
- Design and implement data architecture on Azure.
- Lead development of data ingestion and transformation pipelines.
- Ensure compliance with privacy and regulatory standards.
Skills
Data architecture design
Azure services (Databricks, Synapse, Data Lake)
Python
SQL
Data governance practices
APIs and integration
Education
College diploma or Bachelor's degree in Computer Science or a related field
Tools
Azure Data Factory
Azure Databricks
Power BI
GitHub/Git
Overview
- Design and implement scalable, secure, and high-performance data architecture on Microsoft Azure, supporting both cloud-native and hybrid environments.
- Lead the development of data ingestion, transformation, and integration pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Architect and manage data lakes and structured storage solutions using Azure Data Lake Storage Gen2, ensuring efficient access and governance.
- Integrate data from diverse source systems, including ServiceNow and geospatial systems, using APIs, connectors, and custom scripts.
- Develop and maintain robust data models and semantic layers to support operational reporting, analytics, and machine learning use cases.
- Build and optimize data workflows using Python and SQL for data cleansing, enrichment, and advanced analytics within Azure Databricks.
- Design and expose secure data services and APIs using Azure API Management for downstream systems.
- Implement data governance practices, including metadata management, data classification, and lineage tracking.
- Ensure compliance with privacy and regulatory standards (e.g., FOIP, GDPR) through role-based access controls, encryption, and data masking.
- Collaborate with cross-functional teams to align data architecture with business requirements, program timelines, and modernization goals.
- Monitor and troubleshoot data pipelines and integrations, ensuring reliability, scalability, and performance across the platform.
- Provide technical leadership and mentorship to data engineers and analysts, promoting best practices in cloud data architecture and development.
Qualifications
- College diploma or Bachelor's degree in Computer Science or a related field of study.
- Hands-on experience managing Databricks workspaces, including cluster configuration, user roles, permissions, cluster policies, and applying monitoring and cost optimization for efficient, governed Spark workloads. (3+ years)
- Experience as a Data Architect in a large enterprise, designing and implementing data architecture strategies and models that align data, technology, and business goals with strategic objectives. (8+ years)
- Experience designing data solutions for analytics-ready, trusted datasets using tools like Power BI and Synapse, including semantic layers, data marts, and data products for self-service, data science, and reporting. (4+ years)
- Experience with GitHub/Git for version control, collaborative development, code management, and integration with data engineering workflows. (4+ years)
- Experience with Azure services (Storage, SQL, Synapse, networking) for scalable, secure solutions, and with authentication mechanisms (Service Principals, Managed Identities) for securing access in pipelines and integrations. (5+ years)
- Experience in Python (including PySpark) and SQL, applied to developing, orchestrating, and optimizing enterprise-grade ETL/ELT workflows in a large-scale cloud environment. (6+ years)
- Experience building scalable data pipelines with Azure Databricks, Delta Lake, Workflows, Jobs, and Notebooks, as well as cluster management. Experience extending solutions to Synapse Analytics and Microsoft Fabric is an asset. (3+ years)