Data Platform Engineer [In Person, Toronto]

Terminal

Canada

On-site

CAD 80,000 - 120,000

Full time

Job summary

A leading technology startup in Toronto seeks an experienced engineer to build scalable data platforms and optimize the data that powers their API. Candidates should have 3+ years of experience in data engineering, with expertise in Java, Python, and large-scale data processing tools like Kafka and Flink. The position offers strong compensation, equity packages, and a flexible work environment in downtown Toronto.

Skills

Platform engineering
Data engineering
Java
Python
System design
Real-time data processing
Big data workflows
Problem-solving

Tools

Kafka
Flink
Spark
AWS
Iceberg
Redis

Job description

About Terminal

Terminal is Plaid for Telematics in commercial trucking. Companies building the next generation of insurance products, financial services and fleet software for trucking use our Universal API to access GPS data, speeding data and vehicle stats. We are a fast-growing, venture-backed startup supported by top investors including Y Combinator, Golden Ventures and Wayfinder Ventures. Our exceptionally talented team is based in Toronto, Canada.

For more info, check out our website: https://withterminal.com

Note: This role is only available to Toronto/GTA-based candidates

About the role

We’re looking for an engineer who thrives on building scalable data platforms and enjoys tackling complex backend challenges. This isn’t just a data engineering role: you’ll be designing and optimizing the data platform that powers Terminal’s API, managing everything from data streaming and storage to analytics features at petabyte scale.

You should be comfortable navigating both data and backend engineering, with a solid foundation in software development. You’ll work with advanced data architectures, including Iceberg, Flink, and Kafka, tackling large-scale challenges and contributing to core product development using Java and Python. If you’re excited by the opportunity to shape a high-impact platform and tackle diverse engineering problems, we’d love to hear from you.

What you will do:
  • Own projects aimed at enhancing data replication, storage, enrichment, and reporting capabilities.
  • Build and optimize efficient streaming and batch data pipelines that support our core product and API.
  • Design scalable storage solutions for handling petabytes of IoT and time-series data.
  • Develop and maintain real-time data systems to ingest growing data volumes.
  • Implement distributed tracing, data lineage and observability patterns to improve monitoring and troubleshooting.
  • Write clean, maintainable code in Java and Python for various platform components.
  • Shape architectural decisions to ensure scalability and reliability throughout the data platform.

The ideal candidate will have:
  • 3+ years of experience in platform engineering or data engineering.
  • 2+ years of experience designing and optimizing data pipelines at TB to PB scale.
  • Proficient in Java, with a focus on clean, maintainable code.
  • Strong system design skills with a focus on big data and real-time workflows.
  • Familiarity with lake-house architectures (e.g., Iceberg, Delta, Paimon).
  • Experience with real-time data processing tools like Kafka, Flink and Spark.
  • Knowledge of distributed systems and large-scale data challenges.
  • Strong problem-solving skills and a collaborative mindset.
  • Nice-to-have:
    • Experience working with orchestration / workflow engines (e.g. Step Functions, Temporal)
    • Experience with serverless and/or event-driven architectures (e.g. AWS Lambda, SQS).
    • Experience with JavaScript/TypeScript (for cross-team work)

Tech Stack
  • Languages: Java, Python
  • Framework: Spring Boot
  • Storage: AWS S3, AWS DynamoDB, Apache Iceberg, Redis
  • Streaming: AWS Kinesis, Apache Kafka, Apache Flink
  • ETL: AWS Glue, Apache Spark
  • Serverless: AWS SQS, AWS EventBridge, AWS Lambda, and Step Functions
  • Infrastructure as Code: AWS CDK
  • CI/CD: GitHub Actions

Benefits
  • Strong compensation and equity packages
  • Brand new MacBook and computer equipment
  • Top-tier health/dental benefits and a flexible healthcare spending account
  • Personal spending account for professional development, fitness and wellness
  • Four weeks paid time off + statutory holidays
  • In-person culture with an office located in downtown Toronto