Senior QA Automation Data Engineer (Canada)

Atreides

Remote

CAD 80,000 - 100,000

Full time

Job summary

A data solutions company is looking for a Senior QA Automation Data Engineer to ensure the reliability of its data pipelines. You will develop automated testing frameworks and work with data engineers to embed quality controls in the CI/CD process. The ideal candidate has over 5 years of experience in data engineering with strong skills in Python, PySpark, and cloud data infrastructure. This remote, Canada-based position offers a competitive salary and benefits.

Benefits

Competitive salary
Health, dental, and vision insurance
Flexible hybrid work environment
Generous vacation and parental leave

Qualifications

  • 5+ years of experience in data engineering or automation-focused QA roles.
  • Proficient in Python and PySpark, including writing testable code.
  • Familiarity with geospatial formats and data validation libraries.

Responsibilities

  • Develop automated test harnesses for validating data pipelines.
  • Implement validation suites for data schema enforcement.
  • Integrate data pipeline validation with CI/CD tooling.

Skills

Python
PySpark
Data QA Automation
Cloud Data Infrastructure
Test Automation Frameworks

Tools

Apache Iceberg
Delta Lake
Great Expectations

Job description

Job Title: Senior QA Automation Data Engineer (Remote CAN)

Company Overview:

Atreides helps organizations transform large and complex multi-modal datasets into information-rich geospatial data subscriptions that can be used across a wide spectrum of use cases. Currently, Atreides focuses on providing high-fidelity data solutions to enable customers to derive insights quickly.

We are a fast-moving, high-performance startup. We value a diverse team and believe inclusion drives better performance. We trust our team with autonomy, believing it leads to better results and job satisfaction. With a mission-driven mindset and entrepreneurial spirit, we are building something new and helping unlock the power of massive-scale data to make the world safer, stronger, and more prosperous.

Team Overview:

We are a passionate team of technologists, data scientists, and analysts with backgrounds in operational intelligence, law enforcement, large multinationals, and cybersecurity operations. We obsess about designing products that will change the way global companies, governments, and nonprofits protect themselves from external threats and global adversaries.

Position Overview:

We are seeking a QA Automation Data Engineer to ensure the correctness, performance, and reliability of our data pipelines, data lakes, and enrichment systems. In this role, you will design, implement, and maintain automated validation frameworks for our large-scale data workflows. You will work closely with data engineers, analysts, and platform engineers to embed test coverage and data quality controls directly into the CI/CD lifecycle of our ETL and geospatial data pipelines.

You should be deeply familiar with test automation in data contexts, including schema evolution validation, edge case generation, null/duplicate detection, statistical drift analysis, and pipeline integration testing. This is not a manual QA role—you will write code, define test frameworks, and help enforce reliability through automation.
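
As an illustration of the kind of check this role automates, here is a minimal, hypothetical sketch of a null/duplicate-key test written with pytest against a local PySpark session. The table and column names are invented for the example and are not taken from any actual pipeline.

  # Hypothetical example: null and duplicate-key checks on a PySpark DataFrame.
  import pytest
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  @pytest.fixture(scope="session")
  def spark():
      # Small local session so the suite runs quickly and in isolation.
      session = (
          SparkSession.builder.master("local[2]").appName("data-qa-tests").getOrCreate()
      )
      yield session
      session.stop()

  def test_no_nulls_or_duplicate_keys(spark):
      # A real suite would read the pipeline's output table; a tiny in-memory
      # frame keeps this sketch self-contained.
      df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["record_id", "value"])

      null_count = df.filter(F.col("record_id").isNull()).count()
      duplicate_count = df.count() - df.dropDuplicates(["record_id"]).count()

      assert null_count == 0, f"found {null_count} null record_id values"
      assert duplicate_count == 0, f"found {duplicate_count} duplicate record_id values"

In practice, checks of this shape would run in CI against each pipeline's output before promotion, which is the kind of integration described throughout this posting.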

Team Principles:
  • Remain curious and passionate in all aspects of our work
  • Promote clear, direct, and transparent communication
  • Embrace the 'measure twice, cut once' philosophy
  • Value and encourage diverse ideas and technologies
  • Lead with empathy in all interactions

Responsibilities:
  • Develop automated test harnesses for validating Spark pipelines, Iceberg table transformations, and Python-based data flows.
  • Implement validation suites for data schema enforcement, contract testing, and null/duplication/anomaly checks (a minimal schema-contract sketch follows this list).
  • Design test cases for validating geospatial data processing pipelines (e.g., geometry validation, bounding box edge cases).
  • Integrate data pipeline validation with CI/CD tooling.
  • Monitor and alert on data quality regressions using metric-driven validation (e.g., row count deltas, join key sparsity, referential integrity).
  • Write and maintain mock data generators and property-based test cases for data edge cases and corner conditions.
  • Contribute to team standards for testing strategy, coverage thresholds, and release readiness gates.
  • Collaborate with data engineers on pipeline observability and reproducibility strategies.
  • Participate in root cause analysis and post-mortems for failed data releases or quality incidents.
  • Document infrastructure design and data engineering processes, and maintain comprehensive documentation.
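
As referenced above, here is a minimal, hypothetical sketch of a schema-contract check. The expected schema is invented for illustration; a real suite would load the contract from a shared definition rather than hard-coding it in the test.

  # Hypothetical example: assert a pipeline output matches an agreed schema contract.
  from pyspark.sql import SparkSession
  from pyspark.sql.types import StructType, StructField, LongType, StringType, DoubleType

  EXPECTED_SCHEMA = StructType([
      StructField("record_id", LongType(), nullable=False),
      StructField("label", StringType(), nullable=True),
      StructField("confidence", DoubleType(), nullable=True),
  ])

  def test_output_schema_matches_contract():
      spark = SparkSession.builder.master("local[2]").appName("schema-contract").getOrCreate()
      try:
          # Stand-in for the pipeline's real output table.
          df = spark.createDataFrame([(1, "road", 0.97)], schema=EXPECTED_SCHEMA)
          assert df.schema == EXPECTED_SCHEMA, (
              f"schema drift: {df.schema.simpleString()} != {EXPECTED_SCHEMA.simpleString()}"
          )
      finally:
          spark.stop()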

Desired Qualifications:
  • 5+ years of experience in data engineering or data QA roles with an automation focus.
  • Strong proficiency in Python and PySpark, including writing testable, modular data code.
  • Experience with Apache Iceberg, Delta Lake, or Hudi, including schema evolution and partitioning.
  • Familiarity with data validation libraries (e.g., Great Expectations, Deequ, Soda SQL) or homegrown equivalents.
  • Understanding of geospatial formats (e.g., GeoParquet, GeoJSON, Shapefiles) and related edge cases.
  • Experience with test automation frameworks such as pytest, hypothesis, unittest, and integration with CI pipelines (a short property-based example follows this list).
  • Familiarity with cloud-native data infrastructure, especially AWS (Glue, S3, Athena, EMR).
  • Knowledge of data lineage, data contracts, and observability tools is a plus.
  • Strong communication skills and the ability to work cross-functionally with engineers and analysts.
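
As referenced above, here is a short, hypothetical property-based example runnable with pytest and hypothesis. The normalize_bbox helper is invented purely to show how geospatial edge cases, such as swapped or degenerate bounding-box corners, can be generated automatically.

  # Hypothetical example: property-based test for a bounding-box normalization helper.
  from hypothesis import given, strategies as st

  def normalize_bbox(min_x, min_y, max_x, max_y):
      # Reorder coordinates so the result is always (min_x, min_y, max_x, max_y).
      return (min(min_x, max_x), min(min_y, max_y), max(min_x, max_x), max(min_y, max_y))

  # Longitudes/latitudes over their full valid ranges, including swapped and equal corners.
  lon = st.floats(min_value=-180.0, max_value=180.0, allow_nan=False)
  lat = st.floats(min_value=-90.0, max_value=90.0, allow_nan=False)

  @given(lon, lat, lon, lat)
  def test_normalized_bbox_is_ordered(x1, y1, x2, y2):
      min_x, min_y, max_x, max_y = normalize_bbox(x1, y1, x2, y2)
      assert min_x <= max_x
      assert min_y <= max_y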

You’ll Succeed If You:
  • Enjoy catching issues before they hit production and designing coverage to prevent them.
  • Believe that data quality is a first-class concern, not an afterthought.
  • Thrive in environments where automated tests are part of the engineering pipeline, not separate from it.
  • Can bridge the gap between engineering practices and analytics/ML testing needs.
  • Have experience debugging distributed failures (e.g., skewed partitions, schema mismatches, memory pressure).

Compensation and Benefits:
  • Competitive salary
  • Comprehensive health, dental, and vision insurance plans
  • Flexible hybrid work environment
  • Additional benefits such as flexible hours, work travel opportunities, competitive vacation time, and parental leave

While meeting all of these criteria would be ideal, we understand that some candidates may meet most, but not all. If you’re passionate, curious and ready to "work smart and get things done," we’d love to hear from you.
