Senior Data Engineer

HRB

Toronto

Hybrid

CAD 80,000 - 120,000

Full time

19 days ago

Job summary

A leading tech firm is looking for a Data Engineer to design and maintain data pipelines and lake architecture in a startup-like environment. You will work with cutting-edge technologies, optimizing real-time data streaming and processing while collaborating with a passionate team.

Benefits

Opportunities to learn and work with the latest technologies.
Autonomy to drive projects from inception to completion.
Collaborative and innovative team environment.

Qualifications

  • 3 - 5 years of experience as an intermediate engineer.
  • Proficiency in AWS services and strong programming skills in Python and SQL.
  • Experience with Apache Spark, Delta Lake, and Terraform for infrastructure.

Responsibilities

  • Design and maintain real-time data pipelines using AWS Glue, Lambda, and Kafka.
  • Implement data lake architecture for efficient data storage and processing.
  • Optimize data pipelines for high availability and performance.

Skills

Python
SQL
AWS
Apache Spark
Terraform
Kafka
DevOps

Job description

We are seeking a highly skilled Data Engineer with experience in designing and maintaining real-time data streaming pipelines and building robust data lake infrastructure and architecture. The right candidate will be excited by the prospect of optimizing and building data architecture to support our next generation of products and data initiatives.

Key Responsibilities:

  • Design, develop, and maintain scalable, real-time and batch data pipelines using AWS Glue, Lambda, Apache Spark, and Kafka.
  • Implement data lake architecture to ensure efficient data storage, processing, and retrieval.
  • Collaborate with cross-functional teams to understand data requirements and ensure the data infrastructure supports their needs.
  • Use Terraform to manage and provision infrastructure in a reproducible and scalable manner.
  • Optimize and troubleshoot complex data pipelines, ensuring high availability and performance.
  • Explore and integrate the latest technologies to enhance our data processing capabilities.
  • Work in a fast-paced, startup-like environment where you will take ownership of key projects and contribute to our overall data strategy.
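The responsibilities above center on per-record transformation and partitioned storage. As a rough illustration only (plain Python, no AWS, Spark, or Kafka dependencies; the function name and fields are hypothetical, not from this posting), this is the kind of validate-enrich-partition step such a pipeline applies to each record before landing it in a data lake:

```python
from datetime import datetime, timezone

def transform_events(raw_events):
    """Validate and enrich raw events before writing to a lake partition.

    Mimics, in plain Python, the per-record transform a Glue/Spark
    streaming job might apply: drop malformed records, normalize the
    timestamp, and derive a date-based partition key.
    """
    clean = []
    for event in raw_events:
        # Drop records missing required fields.
        if "id" not in event or "ts" not in event:
            continue
        # Normalize the event timestamp to UTC.
        ts = datetime.fromisoformat(event["ts"]).astimezone(timezone.utc)
        clean.append({
            "id": event["id"],
            "ts": ts.isoformat(),
            # Hive-style partition key (dt=YYYY-MM-DD), as commonly
            # used for S3 data-lake layouts.
            "partition": f"dt={ts.date().isoformat()}",
        })
    return clean

events = [
    {"id": "a1", "ts": "2024-05-01T12:30:00+00:00"},
    {"id": "a2"},  # malformed: missing timestamp, should be dropped
]
print(transform_events(events))
```

In a production pipeline this logic would typically run inside a Spark job or Lambda consumer rather than a plain loop, but the shape of the work is the same.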

Technical Requirements:

  • 3 - 5 years of experience as an intermediate-level engineer, with proficiency in AWS services.
  • Strong programming skills in Python and SQL.
  • Strong experience with Apache Spark and Delta Lake for big data processing.
  • Expertise in using Terraform for Infrastructure as Code (IaC).
  • Proficiency with standard DevOps tools such as GitHub, Azure DevOps, etc.
  • Experience with Kafka and real-time data streaming pipelines, along with geospatial data processing and analysis, is nice to have.

Why Join Us?

  • Work with complex and large datasets that will challenge and expand your skill set.
  • Be part of a startup-like environment where your ideas and contributions directly impact the company's success.
  • Take ownership of projects and have the autonomy to drive them from inception to completion.
  • Grow your career in a fast-paced environment with plenty of opportunities to learn and work with the latest technologies.
  • Collaborate with a passionate and innovative team that values your input and encourages professional growth.