Intermediate DataOps/Cloud Data Engineer - Remote / Telecommute

Cynet Systems Inc

Toronto

Remote

CAD 90,000 - 130,000

Full time

2 days ago

Job summary

A leading company is seeking a skilled Data Engineer to design and develop efficient data pipelines using Azure services. The role involves optimizing pipeline performance, supporting analytics workloads, and ensuring data quality. Candidates must possess strong proficiency in Python and a robust understanding of data governance.

Qualifications

  • Strong proficiency in Python and Azure services.
  • Experience with ADF, Databricks, and Azure DevOps.
  • Proficient in data governance and security principles.

Responsibilities

  • Design and develop scalable, efficient data pipelines.
  • Optimize pipeline performance and implement data quality processes.
  • Collaborate with stakeholders to gather requirements.

Skills

Data pipeline development
Python
Azure services
Data warehousing
ETL concepts
DataOps principles
Performance monitoring and tuning

Experience

5+ years of experience in data engineering

Job description

Responsibilities:
  • Design and develop scalable, efficient data pipelines using Azure Data Factory and Databricks Workflows.
  • Optimize pipeline performance for scalability, throughput, and reliability with minimal latency.
  • Implement robust data quality, validation, and cleansing processes to ensure data integrity (see the sketch after this list).
  • Collaborate with stakeholders to gather business and technical requirements for data solutions.
  • Troubleshoot and resolve data ingestion, transformation, and orchestration issues.
  • Support analytics, data science, and machine learning workloads through seamless data integration.
  • Support data governance initiatives, ensuring compliance with data security, privacy, and quality standards.
  • Contribute to data migration projects, moving OLTP/OLAP workloads and very large datasets (VLDs) to cloud platforms (SaaS, PaaS, IaaS).
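
By way of a hedged illustration (not part of the employer's materials): the data quality, validation, and cleansing step described above might look like the following PySpark fragment, as run in an Azure Databricks notebook. The storage paths and the orders columns (customer_id, amount) are hypothetical.

```python
# Minimal sketch of a validate-and-quarantine step, assuming a raw `orders`
# extract in ADLS Gen2. All paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

orders = spark.read.parquet(
    "abfss://landing@<storage-account>.dfs.core.windows.net/orders/"
)

# Cleansing: trim identifiers and drop exact duplicate rows.
cleaned = orders.withColumn("customer_id", F.trim("customer_id")).dropDuplicates()

# Validation: rows violating simple integrity rules go to a quarantine table
# instead of silently disappearing.
is_valid = F.col("customer_id").isNotNull() & (F.col("amount") >= 0)
cleaned.filter(is_valid).write.mode("append").format("delta").save("/mnt/curated/orders")
cleaned.filter(~is_valid).write.mode("append").format("delta").save("/mnt/quarantine/orders")
```

Routing rejects to a quarantine table, rather than dropping them, keeps the pipeline auditable, which is what the "data integrity" wording above usually implies.
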
Required Skills:
  • 5+ years of experience in data engineering; strong proficiency in Python and familiarity with Azure services are required.
  • Expertise with Azure data services: Azure SQL Database, Azure Data Lake, Azure Storage, and Azure Databricks.
  • Experience with data pipeline development, orchestration, deployment, and automation using ADF, Databricks, and Azure DevOps/GitHub Actions (a pipeline-trigger sketch follows this list).
  • Proficiency in Python, Scala, and T-SQL.
  • Solid understanding of data warehousing and ETL concepts, including star/snowflake schemas, fact/dimension modeling, and OLAP (a modeling sketch follows this list).
  • Familiarity with DataOps principles, Agile methodologies, and continuous delivery.
  • Proficient in data provisioning automation, data flow control, and platform integration.
  • Knowledge of structured, semi-structured, and unstructured data ingestion, exchange, and transformation.
  • Experience with cloud-native data services such as DaaS (Data-as-a-Service), DBaaS (Database-as-a-Service), and DWaaS (Data Warehouse-as-a-Service), and infrastructure elements like Key Vault, VMs, and disks.
  • Experience with commercial and open-source data platforms, storage technologies (cloud and on-prem), and the movement of data across environments.
  • Experience in performance monitoring and tuning for cloud-based data solutions.
  • Experience supporting digital product development, data analysis, data security, and secure data exchange across platforms.
  • Proven experience designing enterprise-scale data architectures with high availability and security.
  • Understanding of data governance, data security, compliance, and metadata management.
  • Proficient in entity-relationship (ER) modeling and dimensional modeling.
  • Strong knowledge of normalization/denormalization techniques to support analytics-ready datasets.
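
To give a concrete, purely illustrative flavor of the ADF orchestration and automation experience requested above: a pipeline run can be triggered and monitored from Python with the azure-identity and azure-mgmt-datafactory packages. The subscription, resource group, factory, and pipeline names below are placeholders.

```python
# Sketch: trigger an Azure Data Factory pipeline run and check its status.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")

# Kick off a run of a (hypothetical) ingestion pipeline with a runtime parameter.
run = adf.pipelines.create_run(
    resource_group_name="<resource-group>",
    factory_name="<data-factory>",
    pipeline_name="ingest_orders",
    parameters={"load_date": "2024-01-01"},
)

# Poll the run status (Queued / InProgress / Succeeded / Failed).
status = adf.pipeline_runs.get("<resource-group>", "<data-factory>", run.run_id)
print(status.status)
```
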
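Similarly, the star-schema and fact/dimension modeling mentioned above might reduce, in PySpark, to deriving a dimension with surrogate keys from a flat extract and pointing a fact table at it. The sales dataset and its columns are invented for illustration.

```python
# Sketch: build a customer dimension and a sales fact from a denormalized extract.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

raw = spark.read.parquet("abfss://raw@<storage-account>.dfs.core.windows.net/sales/")

# Dimension: one row per customer, plus a surrogate key.
dim_customer = (
    raw.select("customer_id", "customer_name", "region")
       .dropDuplicates(["customer_id"])
       .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: measures keyed by the surrogate key, not the natural key.
fact_sales = (
    raw.join(dim_customer.select("customer_id", "customer_sk"), on="customer_id")
       .select("customer_sk", "order_id", "order_date", "quantity", "amount")
)
```

Denormalizing back out of this model, by joining facts to dimensions into wide analytics-ready tables, is the flip side covered by the normalization/denormalization bullet.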