Senior Data Engineer

GuruLink

Montreal

On-site

CAD 85,000 - 110,000

Full time

25 days ago

Job summary

A Montreal-based startup is seeking a Senior Data Engineer to own and develop end-to-end data pipelines crucial for decision-making. The ideal candidate will possess over 5 years of experience in data engineering, proficiency in SQL and Python, and hands-on experience with modern data warehouses. This role involves working closely with data analysts and product managers to drive data initiatives and mentor junior team members.

Qualifications

  • 5+ years of experience in data engineering or a closely related role.
  • Hands-on experience building and maintaining ETL/ELT pipelines.
  • Solid understanding of data warehousing concepts.

Responsibilities

  • Design and implement efficient ETL/ELT pipelines.
  • Build key datasets and analytics layers.
  • Optimize data pipelines for performance and reliability.
  • Integrate with third-party systems and APIs.
  • Mentor and guide junior data engineers.

Skills

SQL
Python
ETL/ELT pipelines
Data warehousing
Data governance
Cloud platforms

Education

B.S. or M.S. in Computer Science, Engineering, or a related field

Tools

Snowflake
Databricks
BigQuery
Airflow
dbt

Job description

Location: Montreal, Quebec

Our client is a homegrown Montreal startup transforming the way digital audiences convert online. Their AI-driven affiliate platform empowers publishers, ticketing providers, and content creators to unlock new revenue streams while enhancing the user experience. With smarter monetization tools, their partners don't just earn more; they deliver more.

The Data team is on a mission to transform raw data into actionable insights that drive the business forward. As a Senior Data Engineer, you will be at the heart of this mission, developing and owning end-to-end data pipelines and products that empower teams across the company.

You will take on squad-level data projects – from designing robust data ingestion processes to building data models and analytics layers – ensuring that stakeholders have access to reliable, timely data for decision-making.

You will collaborate closely with data analysts, product managers, and software engineers to integrate data solutions into our products and workflows. In this role, you’ll also contribute to data governance initiatives and share best practices, while mentoring junior data engineers and fostering a culture of continuous improvement within the team.


You will:
• Design and implement efficient ETL/ELT pipelines to ingest, transform, and load data from various sources, ensuring high reliability and scalability
• Build and evolve key datasets and analytics layers across product areas, making data easily accessible and usable for analysis and decision-making
• Analyze data flows and dependencies to design data models optimized for efficient storage and retrieval
• Optimize and tune data pipelines for performance, scalability, and reliability as data volumes grow
• Integrate with third-party systems and APIs to source or distribute data, expanding our overall data capabilities
• Collaborate with product managers, software engineers, and analysts to understand data requirements and deliver effective data solutions that support product features and business decisions
• Implement data governance best practices to ensure data quality, integrity, and security across all data assets
• Maintain clear documentation for data pipelines, datasets, and data lineage to promote transparency and knowledge sharing
• Stay up to date with industry trends in data engineering and proactively recommend improvements to our data infrastructure and workflows
• Mentor and guide junior data engineers, sharing knowledge and best practices to strengthen the team’s capabilities

Must Have Skills:

• B.S. or M.S. in Computer Science, Engineering, or a related field (or equivalent experience)
• 5+ years of experience in data engineering or a closely related role
• Proficiency in SQL and Python, with proven experience working with modern data warehouses (e.g., Snowflake, Databricks, BigQuery)
• Hands-on experience building and maintaining ETL/ELT pipelines, including the use of workflow orchestration tools such as Airflow or dbt
• Experience with cloud platforms (e.g., GCP) and their data processing and storage services
• Solid understanding of data warehousing concepts, data modeling techniques, and data architecture best practices (including data quality and governance)
• Proven ability to design solutions for complex data problems and drive data projects from conception to production
