
Senior Back-end Engineer, Scalable Data Systems

ALLTECH CONSULTING SVC INC

Quebec

On-site

CAD 90,000 - 120,000

Full time

30+ days ago

Job summary

A leading technology services firm in Quebec is seeking a Senior Back-end Engineer to lead the design and development of high-throughput data systems. The ideal candidate has over 5 years of experience, strong expertise in Python, and a passion for building scalable applications that handle billions of events. Join us to shape the future of our vulnerability management platform in a fast-paced environment.

Qualifications

  • 5+ years of experience building high-throughput, data-intensive applications.
  • Proven expertise in Python and relational databases.
  • Strong understanding of distributed systems, caching strategies, and microservices architecture.
  • Experience designing systems that handle billions of events and support hundreds of thousands of users.
  • Deep knowledge of data modeling, schema design, and query optimization.
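
The caching strategies called for above can be illustrated, independent of any particular store such as Redis, as a read-through cache with a TTL. This is a minimal stdlib-only sketch under assumed names (the `ReadThroughCache` class and `load_from_db` loader are hypothetical), not the team's actual implementation:

```python
import time

class ReadThroughCache:
    """Minimal read-through cache with per-entry TTL (a stand-in for Redis)."""

    def __init__(self, loader, ttl_seconds=60.0):
        self.loader = loader      # fetches from the backing store on a miss
        self.ttl = ttl_seconds
        self._store = {}          # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]       # fresh hit: skip the backing store
        value = self.loader(key)  # miss or expired: fall through to the loader
        self._store[key] = (value, now + self.ttl)
        return value

calls = []
def load_from_db(key):
    # Hypothetical loader standing in for a relational-database query.
    calls.append(key)
    return key.upper()

cache = ReadThroughCache(load_from_db, ttl_seconds=60.0)
cache.get("cve-2024-0001")
cache.get("cve-2024-0001")  # second read is served from cache; loader runs once
```

In a production system the in-process dict would be replaced by a shared cache such as Redis so that all API replicas see the same entries.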

Responsibilities

  • Lead the design and development of high-throughput, data-intensive services.
  • Architect and implement high-throughput ETL pipelines.
  • Design and build scalable RESTful APIs using FastAPI, SQLModel, and Redis.
  • Optimize API performance to meet strict SLAs.
  • Collaborate with DevOps to deploy and scale services.

Skills

Python
Distributed systems
Microservices architecture
ETL pipelines
Data modeling
Caching strategies

Tools

Docker
Kubernetes
OpenShift

Job description
Join our Vulnerability Management Platform team as a Senior Back-end Engineer, Scalable Data Systems, where you'll lead the design and development of high-throughput, data-intensive services that power critical security insights. You'll own greenfield features from architecture to deployment, mentor teammates, and help shape the future of our platform.
We're looking for a proactive engineer who thrives in fast-paced environments, takes full ownership of their work, and is passionate about building scalable systems that process billions of events daily.
What You’ll Do:
– Architect and implement high-throughput ETL pipelines to onboard new datasets and enrich vulnerability context.
– Design and build scalable, maintainable RESTful APIs using FastAPI, SQLModel, and Redis to expose new data points to the UI.
– Optimize API performance to meet strict SLAs (e.g., sub-second response times).
– Automate repetitive tasks to reduce engineering toil and improve operational efficiency.
– Collaborate with DevOps to deploy and scale services in OpenShift/Kubernetes environments.
– Monitor and analyze API usage, latency, and error rates to ensure reliability and performance.
– Define integration patterns and data flows between system components.
– Conduct design and code reviews, and mentor junior developers on best practices for data pipeline and API development.
– Establish and uphold technical standards and architectural guidelines.
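
The ETL and enrichment work described above can be sketched as a minimal extract-transform-load pass in plain Python. The record fields and severity thresholds here are hypothetical illustrations, not the platform's actual schema:

```python
# Minimal ETL sketch: enrich raw vulnerability records with a severity label.
raw_events = [
    {"cve": "CVE-2024-0001", "cvss": 9.8},
    {"cve": "CVE-2024-0002", "cvss": 4.3},
]

def extract(events):
    # In a real pipeline this would read from a source system or message queue.
    yield from events

def transform(record):
    # Enrich each record with a coarse severity bucket derived from its CVSS score.
    severity = "critical" if record["cvss"] >= 9.0 else "moderate"
    return {**record, "severity": severity}

def load(records, sink):
    # In a real pipeline this would write to a database or data warehouse.
    sink.extend(records)

sink = []
load((transform(r) for r in extract(raw_events)), sink)
```

Streaming the records through generators, as above, keeps memory flat even when the input grows to billions of events, which is why high-throughput pipelines favor this shape over loading whole datasets at once.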
What We’re Looking For:
– 5+ years of experience building high-throughput, data-intensive applications.
– Proven expertise in Python and relational databases.
– Strong understanding of distributed systems, caching strategies, and microservices architecture.
– Experience designing systems that handle billions of events and support hundreds of thousands of users.
– Deep knowledge of data modeling, schema design, and query optimization.
– Familiarity with containerized environments (Docker, Kubernetes/OpenShift).
– Strong analytical and problem-solving skills.
– Excellent communication and documentation abilities.
– A proactive, independent mindset with a strong sense of ownership and reliability.
Bonus Points:
– Experience in vulnerability management or cybersecurity domains.
– Prior success mentoring engineers and leading architectural decisions.