Data Engineer II

Numerator

Toronto

On-site

CAD 80,000 - 115,000

Full time

3 days ago

Job summary

A leading data and technology company in Toronto is seeking a Data Engineer II to enhance and scale data platforms. This role involves developing efficient data pipelines and collaborating across functions to drive innovation. Candidates should have 3+ years in data engineering, proficiency in Python and SQL, and experience with cloud solutions like AWS. A competitive compensation package and supportive culture for career growth are offered.

Benefits

Inclusive company culture
Market-competitive compensation
Career growth support
Volunteer time off
Regular hackathons

Qualifications

  • 3+ years of experience in data engineering, focused on building data pipelines.
  • Proficient in Python and SQL for data transformations.
  • Experience with cloud solutions in AWS, Azure, or GCP.

Responsibilities

  • Collaborate with teams to enhance data products for analytics.
  • Lead projects to improve data quality and integrate machine learning models.
  • Architect and develop data pipelines with validation and quality checks.

Skills

Data engineering
Python
SQL
Data modeling
ETL design
AWS
Airflow
Machine Learning

Tools

Terraform
Docker
Snowflake

Job description

Overview

Numerator is a data and technology company reinventing market research. Headquartered in Chicago, IL, Numerator has 1,600 employees worldwide. The company blends proprietary data with advanced technology to create unique insights for the market research industry, which has been slow to change. The majority of Fortune 100 companies are Numerator clients.

Numerator is looking for a Data Engineer II to help us expand and optimize our data platforms, drive decision-making, and enable innovation across our products. In this position, you will take ownership of complex initiatives to automate, enhance, maintain, and scale services in a rapidly evolving environment.

As a Data Engineer II at Numerator, you will build resilient data pipelines, architect infrastructure, and enable advanced analytics and modeling capabilities. You will partner cross-functionally with Product, Analytics, Data Science, and Engineering teams, ensuring the integrity, scalability, and performance of our data systems. This role goes beyond implementation—you will play a key part in shaping data strategies, mentoring team members, and delivering impactful, production-grade solutions.

Responsibilities
  • Collaborate with cross-functional teams to design, enhance, and scale data products that power analytics and customer-facing solutions.

  • Lead complex, end-to-end projects focused on improving data quality, integrating advanced statistical and machine learning models, and driving measurable business impact.

  • Architect and develop pipelines that enforce robust data validation, quality checks, and efficient workflows for model deployment.

  • Partner with data scientists to streamline model integration, ensuring reliability and performance in production systems.

  • Mentor junior engineers and contribute to best practices, design standards, and engineering excellence across the team.

Qualifications
  • 3+ years of experience in data engineering, including designing and maintaining data warehouses, building data pipelines, and implementing large-scale data solutions.

  • Proficiency in Python and SQL, with demonstrated expertise in building efficient, high-quality data transformations.

  • Strong background in data modeling, ETL design, and orchestration tools (especially Airflow), ensuring business goals are met with data integrity and scalability.

  • Proven experience deploying cloud-based production solutions in AWS, Azure, or GCP, with emphasis on data reliability across environments.

  • Familiarity with Machine Learning/Statistical Model Development processes and their data dependencies.

  • Strong problem-solving ability, intellectual curiosity, and attention to detail, with a focus on delivering high-impact solutions in a fast-paced, collaborative environment.

Nice to have
  • Deep experience with Amazon Web Services (EC2, RDS, ECS, S3, Lambda, etc.).

  • Expertise in Terraform, Ansible, or similar IaC tools for infrastructure automation.

  • Proficiency in Airflow, including DAG design, monitoring, and custom operator development.

  • Hands-on experience with modern data platforms such as Snowflake, Databricks, or Redshift.

  • Comfort working with containerized services (Docker, Kubernetes) in production environments.

  • Background in retail, consumer insights, or marketing data is a strong plus.

  • Ability to contribute to technical strategy and mentor more junior team members.

What we offer
  • An inclusive and collaborative company culture - we work in an open, transparent environment to get things done and adapt to changing needs as they arise

  • An opportunity to make an impact at a technology- and data-driven company that's changing the market research industry and earning rave reviews

  • Market-competitive total compensation package

  • Volunteer time off and charitable donation matching

  • Strong support for career growth, including mentorship programs, leadership training, access to conferences, and employee resource groups

  • Regular hackathons to build your own projects, plus Engineering and Data Science Lunch and Learns
