Job Search and Career Advice Platform

Data Engineer

Novi Labs

Remote

CAD 80,000–100,000

Full-time

Job summary

A tech-driven analytics firm is seeking a Data Engineer to strengthen its data systems. The role involves building scalable data pipelines, collaborating with cross-functional teams, and ensuring data reliability. Ideal candidates have 2–5 years of experience, strong Python skills, and familiarity with tools such as Airflow and Spark. The position is remote to start, but candidates should reside in Calgary or be open to relocating there. This is an opportunity to grow technically in a collaborative environment.

Qualifications

  • 2–5 years of experience in data engineering or related role.
  • Comfortable with SQL and relational databases.
  • Exposure to modern data engineering tooling.

Responsibilities

  • Build and improve scalable data pipelines and ETL/ELT workflows.
  • Collaborate with teams to deliver high-quality data.
  • Monitor data pipelines for reliability and performance.

Skills

  • Strong Python programming
  • SQL and relational databases
  • Modern data engineering tools

Tools

  • Airflow
  • dbt
  • Spark

Job description

Data Engineer

This role is remote to start, but we expect to open an office in Calgary in the near future. For that reason, we are only considering candidates who already reside in Calgary or are open to relocating.

About Novi

Novi Labs delivers industry-leading software and data solutions that empower our clients to make smarter, business-critical decisions in the energy sector. From production planning to investment strategies, our customers rely on us for insights that drive large-scale, impactful decisions.

About the Role

We’re looking for a Data Engineer to help build and improve the data systems that power Novi’s products. In this role, you’ll work with a modern data stack, contribute to the development of scalable pipelines, and help ensure the quality and reliability of the data that drives our platform.

This role is ideal for early-career or mid-level engineers who want to learn from an experienced team, grow their technical skillset, and have meaningful ownership in a high-impact product environment. You’ll collaborate closely with engineering, product, data science, and customer-facing teams as you help evolve our data platform.

What You’ll Do

  • Build, maintain, and improve scalable data pipelines and ETL/ELT workflows.
  • Develop workflows using tools like Airflow and contribute to our orchestration framework.
  • Work with lakehouse and distributed processing technologies (e.g., Iceberg/Athena, Spark).
  • Support data reliability through strong data quality, validation, and governance practices.
  • Contribute to the design and implementation of efficient data storage and retrieval patterns across cloud environments (e.g., AWS S3, Iceberg, Snowflake, Redshift, BigQuery).
  • Collaborate with engineers, data scientists, and product teams to deliver high-quality data to downstream consumers.
  • Help monitor and troubleshoot data pipelines to ensure reliability and performance.
  • Contribute to improving engineering standards, documentation, and tooling.

You Might Be a Great Fit If You…

  • Have 2–5 years of experience in data engineering or a related software/data role.
  • Have strong Python programming skills.
  • Are comfortable writing SQL and working with relational databases.
  • Have exposure to modern data engineering tooling such as Airflow, dbt, Spark, Iceberg, or similar technologies.
  • Understand batch or streaming data processing concepts.
  • Follow software engineering best practices (version control, testing, CI/CD).
  • Communicate clearly and collaborate well with teams across different functions.
  • Take ownership of your work, ask great questions, and proactively seek opportunities to improve systems.
  • Are eager to grow into broader responsibilities—including design participation, pipeline optimization, and scaling systems.

Preferred (Not Required)

  • Experience deploying or working with cloud data tooling (AWS/Azure/GCP).
  • Experience with infrastructure-as-code (Terraform, Pulumi, CloudFormation) or containerization (Docker/Kubernetes).
  • Experience in a small-company or fast-paced startup environment.
  • Familiarity with energy-sector datasets is a bonus but not required.
Cultural Fit

You’ll thrive at Novi if you:

  • Have a positive attitude and growth mindset.
  • Enjoy working on small, highly collaborative teams.
  • Are motivated to continuously improve your technical skills.
  • Communicate clearly and influence through well-reasoned ideas, regardless of seniority.
  • Are comfortable working cross-functionally to support customer impact.
  • Celebrate team and company successes—not just your own.
  • Take ownership with strong attention to detail.
  • Are willing to context-switch when needed to help the team succeed.
  • Enjoy having visibility into the full breadth of what’s happening across the company.

Joining Novi means helping shape the future of energy analytics while growing your skills on an experienced, mission-driven team. If you’re excited to contribute to critical data infrastructure in a high-impact environment, we’d love to hear from you!
