Data Software Engineer

Stafflink

Vancouver

On-site

CAD 80,000–100,000

Full time

4 days ago

Job summary

A technology solutions firm in Metro Vancouver is seeking a Data Software Engineer to develop scalable applications and data pipelines for AI/ML initiatives. This hands-on role involves backend development, infrastructure support, and cross-functional collaboration. Ideal candidates have strong Python skills and experience with Google Cloud services. Prior consulting experience is preferred.

Qualifications

  • 3–6 years of experience in software development or data engineering roles.
  • Experience troubleshooting complex systems.
  • Familiarity with Git, CI/CD, and DevOps practices.

Responsibilities

  • Design and deploy scalable Python applications using GCP services.
  • Develop APIs, microservices, and backend systems for AI/ML.
  • Build and maintain data transformation logic using DBT.

Skills

Strong proficiency in Python for backend development
Hands-on experience with Google Cloud Platform
Experience with DBT
Experience with Airflow
Experience with Debezium

Job description

We’re hiring a Data Software Engineer to support the development of robust, scalable applications and data pipelines that power AI/ML initiatives. This is a hands-on role combining software engineering, data infrastructure, and ML workflow enablement in a modern cloud-native environment. The ideal candidate has strong backend development experience (Python, GCP) and is comfortable navigating complex systems across development and production environments.

Responsibilities:

Software Engineering & Backend Development

Design and deploy scalable Python applications using GCP services like Cloud Run, Kubernetes, and Compute Engine.

Develop APIs, microservices, and core backend systems to support application and AI/ML use cases.

Debug and resolve production issues across distributed systems, data pipelines, and orchestration layers.

Build and maintain robust data transformation logic using DBT.

Develop and orchestrate CDC data ingestion pipelines using Debezium and Airflow.

Infrastructure & Platform Support

Monitor production environments and contribute to platform reliability and scalability.

Implement improvements based on technical backlog priorities.

Support both analytics and ML infrastructure through architectural contributions and automation.

AI/ML Workflow Enablement

Build and support pipelines for model training, batch inference, feature generation, and performance monitoring.

Coordinate automated model refresh cycles and scoring jobs using Airflow or custom orchestration.

Ensure ML pipelines produce structured and reusable features for analytics and conversational AI agents.

Cross-Functional Collaboration

Partner with functional consultants, analysts, and AI/ML engineers to define and deliver technical solutions.

Participate in discovery and planning sessions to align technical architecture with business needs.

Contribute to solutioning sessions and client-facing discussions with clear and structured technical input.

Qualifications:

3–6 years of experience in software development or data engineering roles.

Strong proficiency in Python for backend and data application development.

Hands-on experience with Google Cloud Platform (e.g., Cloud Run, Kubernetes, Compute Engine, BigQuery, Cloud SQL, Composer).

Experience with DBT, Airflow, and Debezium (CDC).

Proven ability to troubleshoot complex systems, optimize DAG performance, and manage orchestration dependencies.

Familiarity with Git-based workflows, CI/CD, and DevOps best practices.

Exposure to enabling ML workflows (e.g., model scoring, data prep, feature pipelines).

Comfort working in fast-paced, ambiguous environments with shifting technical requirements.

Basic front-end development skills (e.g., React, Angular, or plain HTML/CSS/JS) to support lightweight UIs.

Preferred Qualifications:

Java experience for enterprise integration or backend system development.

Familiarity with ML orchestration frameworks such as Vertex AI Pipelines, Kubeflow, or MLflow.

Experience supporting LLM systems or agent frameworks (e.g., LangChain, CrewAI, LlamaIndex).

Experience with infrastructure-as-code tools (e.g., Terraform or Pulumi).

Prior consulting or client-facing delivery experience.

Familiarity with translating functional requirements into technical infrastructure for analytics and AI use cases.
