Data Engineer

Airswift

Calgary

On-site

CAD 80,000 - 100,000

Full time

21 days ago

Job summary

A leading energy solutions provider is looking for a Data Engineer to develop scalable data pipelines and support analytics. This role in Calgary requires a background in data engineering, with strong skills in Python, SQL, and cloud tools. The ideal candidate will have over 5 years of experience and a commitment to building impactful data solutions that drive business value.

Qualifications

  • 5+ years of experience developing cost-optimized, scalable ETL/ELT pipelines.
  • Strong experience with Python, SQL, and PySpark.
  • Familiarity with DevOps and CI/CD processes.

Responsibilities

  • Develop and maintain standards for MLOps and machine learning initiatives.
  • Provide support for data integration, monitoring, and troubleshooting.
  • Conduct root cause analysis of reported incidents.

Skills

Python
SQL
PySpark
Data integration
ETL/ELT pipelines

Education

Post-secondary degree in Computer Science or related field

Tools

Azure
Databricks
Azure Data Factory
HVR
Knime
Magnotix

Job description

Overview

Airswift is seeking a Data Engineer to support a major energy client in Calgary. This is an exciting opportunity for someone passionate about data and eager to build modern, scalable data solutions that drive business value.

As a Data Engineer, you will collaborate with Data Analysts, Business Systems Analysts, and subject matter experts to deliver fit-for-purpose data pipelines and data products for analytics and data science. You will be responsible for developing pipelines that extract data from multiple sources, transform and validate it, and load it into cloud-based environments such as data lakes, data warehouses, or applications for further analysis and visualization.
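
To make that pattern concrete, below is a minimal PySpark sketch of an extract-transform-validate-load pipeline of the kind described. Every path, column name, and validation rule in it is hypothetical, assumed for illustration rather than drawn from the client's environment:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical source/target paths; a real pipeline would take these from
# Azure Data Factory parameters or a configuration store.
SOURCE_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
TARGET_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders/"

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in the lake by a replication tool.
raw = spark.read.json(SOURCE_PATH)

# Transform: normalize types and derive a load date for partitioning.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("load_date", F.current_date())
)

# Validate: reject rows missing a primary key or with negative amounts.
valid = orders.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
rejected = orders.subtract(valid)
if rejected.count() > 0:
    # In production this would raise an alert through the monitoring stack.
    print(f"Rejected {rejected.count()} rows failing validation")

# Load: write partitioned Parquet to the curated zone of the lake.
valid.write.mode("append").partitionBy("load_date").parquet(TARGET_PATH)
```

On Databricks the final write would more likely target a Delta table than raw Parquet, but the shape of the pipeline is the same.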

The Enterprise Data and Analytics team is focused on enabling the organization to become more data-driven, working across departments to deliver impactful data solutions.

Key Accountabilities

  • Estimate, design, and develop scalable, optimized data pipelines to support Business Intelligence and Advanced Analytics.
  • Develop and maintain standards and best practices for MLOps and machine learning initiatives, using DevOps CI/CD processes.
  • Provide hands-on support for data integration, including monitoring, configuration, troubleshooting, and user administration.
  • Optimize data objects for consumption by analytics and data science teams.
  • Follow ITIL processes to transition solutions from project to production.
  • Conduct root cause analysis and resolve incidents reported by monitoring systems or end-users.
  • Configure and administer tools such as HVR, Knime, and Magnotix for data replication and integration.
  • Establish and maintain best practices and templates for data engineering across different toolsets and use cases (e.g., ML/AI vs. BI Analytics).
Skills & Qualifications

  • Post-secondary degree in Computer Science, Software Engineering, or a related field, or equivalent experience in data engineering or integration.
  • 5+ years of experience developing cost-optimized, scalable, and configurable ETL/ELT pipelines.
  • Strong experience with Python, SQL, and PySpark.
  • Proficiency in Azure, Databricks, Azure Data Factory, and other cloud-based tools.
  • Familiarity with DevOps and CI/CD pipelines, performant data stores, and operational REST APIs.
  • Experience with HVR, Knime, Magnotix, Synapse Analytics, and data lakes, as well as Scala, is an asset.