Senior Data Engineer, Catalog

Instacart

Canada

On-site

CAD 100,000 - 130,000

Full time

Yesterday

Job summary

A leading online grocery platform is seeking a Data Engineer to own critical data integration pipelines. You will work with engineers and stakeholders to deliver high-quality, scalable solutions. Ideal candidates will have extensive experience in building data pipelines and expertise in SQL and Python. Experience with cloud-based technologies is essential for this role.

Qualifications

  • 6+ years of working experience in a Data/Software Engineering role focused on building data pipelines.
  • Expertise in SQL and working knowledge of Python.
  • Past experience with data immutability and auditability.

Responsibilities

  • Own essential data integration pipelines and models.
  • Work closely with engineers and stakeholders.
  • Ship high quality, scalable, and robust solutions.

Skills

SQL
Python
Data pipeline building
ETL/ELT pipelines
Cloud-based data technologies
Data immutability
Cross-functional communication

Education

Bachelor's degree in Computer Science or equivalent

Tools

Snowflake
Databricks
Airflow
dbt (data build tool)

Job description

Overview

At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers use Instacart every year to buy their groceries online, and the Data Engineering team builds the critical data pipelines that underpin the myriad ways data is used across Instacart to support our customers and partners.

About the Role

The Catalog data engineering team plays a critical role in defining how catalog data is structured and standardized for consistent, reliable, timely, and accurate product information. This is a high impact, high visibility role owning essential data integration pipelines and models across all of Instacart’s offerings. You will enable efficient, high-quality data workflows that support Instacart’s platform and its ability to deliver exceptional shopping experiences.

About the Team

The Catalog Engineering team is responsible for ensuring that the best possible representation of products is available to users, shoppers, and advertisers. The team achieves this by ingesting data from retailers, third parties, and brands, and processing it to create accurate and comprehensive product listings. It handles various aspects of catalog management, including data acquisition, product ingestion, inventory management, quality assurance, and enrichment of product attributes. Our systems process approximately 8 billion updates each day.

About the Job

  • You will be part of a team with a large amount of ownership and autonomy.
  • Large scope for company-level impact working on catalog data.
  • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
  • You will ship high quality, scalable and robust solutions with a sense of urgency.
  • You will have the freedom to suggest and drive organization-wide initiatives.

About You

Minimum Qualifications

  • 6+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
  • Expert with SQL and knowledge of Python.
  • Experience building high quality ETL/ELT pipelines.
  • Past experience with data immutability, auditability, slowly changing dimensions or similar concepts.
  • Experience building data pipelines for accounting/billing purposes.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Adept at fluently communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
  • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high quality and pragmatic solutions.
  • Experience working with a large codebase on a cross functional team.
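The "data immutability, auditability, slowly changing dimensions" bullet above refers to a standard warehousing pattern. A plain-Python sketch of one variant, SCD Type 2, is below; all names and data here are illustrative, not from the posting, and a real pipeline would express this in SQL or a tool such as dbt:

```python
from datetime import date

# Illustrative SCD Type 2 upsert: instead of overwriting a changed
# attribute, close out the current row and append a new versioned row,
# so the dimension table keeps a full, auditable history.

def scd2_upsert(dimension, key, attrs, as_of):
    """Apply an SCD Type 2 update to an in-memory dimension table."""
    for row in dimension:
        if row["key"] == key and row["valid_to"] is None:
            if row["attrs"] == attrs:
                return dimension  # nothing changed; keep the open row
            row["valid_to"] = as_of  # close out the superseded version
            break
    dimension.append({"key": key, "attrs": attrs,
                      "valid_from": as_of, "valid_to": None})
    return dimension

products = []
scd2_upsert(products, "sku-1", {"name": "Milk", "aisle": "Dairy"},
            date(2024, 1, 1))
scd2_upsert(products, "sku-1", {"name": "Milk", "aisle": "Refrigerated"},
            date(2024, 6, 1))
# products now holds two rows for sku-1: the closed-out "Dairy" version
# and the currently open "Refrigerated" version.
```

The key property is that no row is ever mutated destructively: every historical state of the product remains queryable by date range.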

Preferred Qualifications

  • Bachelor’s degree in Computer Science, computer engineering, electrical engineering OR equivalent work experience.
  • Experience with Snowflake, dbt (data build tool), and Airflow.
  • Experience building Flink pipelines.
  • Experience with data quality monitoring/observability.
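To make the data-quality-monitoring bullet concrete, here is a minimal, hypothetical sketch of the kind of batch validation such tooling performs (required fields, value ranges, key uniqueness); the field names and rules are invented for illustration, and production systems would use a dedicated framework:

```python
# Illustrative batch validation: scan a list of catalog records and
# collect human-readable violations before the batch is published.

def check_batch(rows, required=("sku", "name", "price")):
    """Return a list of data quality violations found in `rows`."""
    violations = []
    seen = set()
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) in (None, ""):
                violations.append(f"row {i}: missing {field}")
        if row.get("price") is not None and row["price"] < 0:
            violations.append(f"row {i}: negative price")
        if row.get("sku") in seen:
            violations.append(f"row {i}: duplicate sku {row['sku']}")
        seen.add(row.get("sku"))
    return violations

batch = [
    {"sku": "a1", "name": "Milk", "price": 3.49},
    {"sku": "a1", "name": "Milk", "price": 3.49},  # duplicate key
    {"sku": "b2", "name": "", "price": -1.0},      # missing name, bad price
]
issues = check_batch(batch)
```

In an observability setup, the resulting violation counts would typically be emitted as metrics and alerted on rather than just returned.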