Senior Data Architect - Remote / Telecommute
Cynet Systems Inc
Edmonton
Remote
CAD 100,000 - 120,000
Full-time
Job summary
A technology solutions firm in Canada is seeking a Data Architect to design and implement scalable data architecture on Microsoft Azure. The ideal candidate will have extensive experience leading the development of data ingestion pipelines, managing Databricks workspaces, and integrating data from multiple sources. Responsibilities include ensuring data governance compliance and optimizing data workflows. This role offers competitive compensation and requires a college or Bachelor's degree in Computer Science or a related field.
Qualifications
- Hands-on experience managing Databricks workspaces – 3 years.
- Experience as a Data Architect in a large enterprise – 8 years.
- Experience with Azure services and authentication mechanisms – 5 years.
- Experience building scalable data pipelines with Azure Databricks – 3 years.
Responsibilities
- Design and implement scalable data architecture on Microsoft Azure.
- Lead development of data ingestion and transformation pipelines.
- Integrate data from diverse sources using APIs.
- Implement data governance practices and ensure compliance with regulations.
Skills
Azure Data Factory
Azure Databricks
Data Modeling
Python
SQL
Data Governance
Cloud Architecture
Education
College or Bachelor's degree in Computer Science or related field
Tools
Azure Synapse Analytics
Power BI
Git/GitHub
Job Description
Responsibilities:
- Design and implement scalable, secure, and high-performance data architecture on Microsoft Azure, supporting cloud-native and hybrid environments.
- Lead development of data ingestion, transformation, and integration pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Architect and manage data lakes and structured storage solutions using Azure Data Lake Storage Gen2, ensuring efficient access and governance.
- Integrate data from diverse sources, including ServiceNow and geospatial systems, using APIs, connectors, and custom scripts.
- Develop and maintain robust data models and semantic layers to support operational reporting, analytics, and machine learning use cases.
- Build and optimize data workflows using Python and SQL for data cleansing, enrichment, and advanced analytics within Azure Databricks.
- Design and expose secure data services and APIs using Azure API Management for downstream systems.
- Implement data governance practices, including metadata management, data classification, and lineage tracking.
- Ensure compliance with privacy and regulatory standards (e.g., FOIP, GDPR) through role-based access controls, encryption, and data masking.
- Collaborate with cross-functional teams to align data architecture with business requirements, program timelines, and modernization goals.
- Monitor and troubleshoot data pipelines and integrations, ensuring reliability, scalability, and performance across the platform.
- Provide technical leadership and mentorship to data engineers and analysts, promoting best practices in cloud data architecture and development.
- Perform other duties as needed.
Equipment Requirements
- The resource must provide their own equipment.
Mandatory Training
- Complete all required training, including the Freedom of Information and Protection of Privacy (FOIP) Act and Security Awareness training.
Must Have Qualifications
- College or Bachelor’s degree in Computer Science or related field.
- Hands-on experience managing Databricks workspaces (clusters, roles, permissions, monitoring, cost optimization) – 3 years.
- Experience as a Data Architect in a large enterprise, designing and implementing data architecture strategies and models – 8 years.
- Experience designing data solutions for analytics-ready, trusted datasets using Power BI and Synapse, including semantic layers, data marts, and data products – 4 years.
- Experience with Git/GitHub for version control, collaborative development, and code management – 4 years.
- Experience with Azure services (Storage, SQL, Synapse, networking) and authentication mechanisms (Service Principals, Managed Identities) – 5 years.
- Experience with Python (including PySpark) and SQL for ETL/ELT workflow development and optimization – 6 years.
- Experience building scalable data pipelines with Azure Databricks (Delta Lake, Workflows, Jobs, Notebooks) and managing clusters – 3 years.
Nice to Have
- TOGAF certification.
- Experience using AI for code generation, data analysis, automation, and productivity enhancements – 1 year.
- Experience analyzing business requirements for data manipulation, transformation, and cleansing – 8 years.
- Strong technical knowledge of Microsoft SQL Server, including database design, optimization, and administration – 8 years.
- Experience with Talend for ETL, data quality enforcement, and cloud integration – 2 years.
- Experience in data governance, security, and metadata management within Databricks – 2 years.
- Experience building secure, scalable RESTful APIs – 3 years.
- Experience with message queueing technologies such as ActiveMQ or Service Bus – 3 years.
- Experience collaborating with cross-functional teams to create software applications and data products – 5 years.
- Experience with ServiceNow-to-Azure data management platform integrations – 1 year.