Data Engineer

Vaco Recruiter Services

Remote

CAD 80,000 - 100,000

Full time

Today

Job summary

A leading Canadian energy organization is seeking a skilled Data Engineer to build high-performance data lakes and warehouses. You will design scalable data pipelines and collaborate with cross-functional teams. The ideal candidate has a Bachelor's degree in Computer Science and experience with Azure tools, including Data Factory and Databricks. This full-time remote position offers competitive pay and opportunities for growth within the energy sector.

Benefits

Medical, dental, and vision benefits
401(k) retirement plan
Performance bonuses

Qualifications

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field (or equivalent experience).
  • Proven experience designing and building data pipelines in enterprise environments.
  • Proficiency in Python, PySpark, SparkSQL, and SQL.

Responsibilities

  • Build reliable and high-performance data lakes and warehouses.
  • Design and deploy scalable ELT/ETL pipelines.
  • Collaborate with analysts and engineers to support the enterprise data marketplace.

Skills

Python
Spark
SQL
Data Engineering
Communication

Education

Bachelor’s degree in Computer Science or related field

Tools

Azure Data Factory
Azure Data Lake
Databricks
Synapse Analytics

Job description

About the Company

Our client is a leading Canadian energy organization with a strong presence across Ontario. They operate critical infrastructure that supports millions of customers and drives economic growth.

Why Work Here

Our client is undergoing an ongoing transformation, investing in cutting‑edge technologies and modernizing operations to meet the evolving needs of the energy sector. By joining, you will be part of an organization that values collaboration, professional development, and making a meaningful impact on the future of clean energy.

About the Opportunity
  • Data Architecture & Development: Build reliable, supportable, and high‑performance data lake and data warehouse products to meet organizational needs for analytics, reporting, and innovation.
  • Pipeline Engineering: Design and productionize modular, scalable ELT/ETL pipelines and data infrastructure, leveraging a wide range of data sources.
  • Data Modeling: Collaborate with Data Modelers and Architects to build curated, business‑centric data models that serve as a single source of truth for reporting and downstream applications.
  • Security & Compliance: Partner with infrastructure and cybersecurity teams to ensure data security in transit and at rest.
  • Data Preparation: Clean, prepare, and optimize datasets for performance, applying lineage and quality controls throughout the integration cycle.
  • Analytics Support: Assist Business Intelligence Analysts with dimensional modeling and aggregation optimization for visualization and reporting.
  • Issue Resolution: Troubleshoot ingestion, transformation, pipeline performance, and data integrity issues.
  • Collaboration: Work closely with analysts, scientists, engineers, and architects to develop pipelines that feed enterprise data marketplaces.
  • Process Improvement: Identify and implement automation, scalability enhancements, and infrastructure optimizations.
  • Technology Stack: Utilize Microsoft tools including Azure Data Factory, Azure Data Lake, Azure SQL Databases, Synapse Analytics, Databricks, Microsoft Purview, and Power BI.
  • Agile Delivery: Contribute within SCRUM and Kanban frameworks, supporting backlog development and iterative delivery.
  • Metadata Management: Assist in building and maintaining enterprise data catalogs and metadata repositories.
  • Programming & Optimization: Develop performant pipelines and models using Python, Spark, and SQL, consuming diverse data formats (XML, CSV, JSON, REST APIs).
  • Documentation & Source Control: Maintain clear documentation of pipelines and products, ensuring codebase sustainability through version control.
  • Orchestration: Implement pipeline execution orchestration to meet latency expectations and manage dependencies.
  • Automation Tools: Collaborate on tooling to reduce manual effort and enhance efficiency.
  • DevOps Integration: Support CI/CD pipelines for infrastructure automation, code delivery, and release management.
  • Operations Support: Monitor production solutions, troubleshoot issues, and provide Tier 2 support for datasets.
  • Access Control: Implement role‑based access to data products in alignment with governance standards.
  • Testing & Quality Assurance: Write and perform automated unit and regression tests, assist with user acceptance and integration testing, and contribute to test case design.
  • Peer Collaboration: Participate in peer code reviews to ensure quality and maintainability.

About You

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field (or equivalent experience).
  • Proven experience designing and building data pipelines in enterprise environments.
  • Proficiency in Python, PySpark, SparkSQL, and SQL.
  • Hands‑on experience with Azure Data Factory, Azure Data Lake Storage (ADLS), Synapse Analytics, and Databricks.
  • Strong background in building pipelines for Data Lakehouses and Data Warehouses.
  • Solid understanding of data structures, frameworks, and processing methodologies.
  • Knowledge of data governance and data quality principles.
  • Effective communication skills, with the ability to translate technical concepts for non‑technical stakeholders.
Pay Range

$87.58/hr

How to Apply

Click the “Apply Now” button and follow the instructions to submit your resume. Please note that we accept documents in MS Word or Rich Text format only. When referencing this job, quote #466865.

You must currently reside within the Greater Toronto Area and be permitted to work in Canada to be considered for this opportunity. A recruiter will be in touch with you if your profile meets our client’s requirements for this role.

Determining compensation for this role (and others) at Vaco/Highspring depends on a wide array of factors, including but not limited to the individual’s skill sets, experience and training, licensure and certifications, office location and other geographic considerations, as well as other business and organizational needs. As required by local law in jurisdictions that mandate salary range disclosure, the salary range for this role is noted in this job posting. The individual may also be eligible for discretionary bonuses and can participate in medical, dental, and vision benefits as well as the company’s 401(k) retirement plan.

Additional disclaimer: Unless otherwise noted in the job description, the position Vaco/Highspring is filling is occupied. Please note, however, that Vaco/Highspring is regularly asked to provide talent to other organizations. By submitting to this position, you agree to be included in our talent pool for future hiring for similarly qualified positions.

Submissions to this position are subject to the use of AI to perform preliminary candidate screenings, focused on ensuring that the minimum job requirements noted in the position are satisfied. Candidates who pass this initial phase will be further assessed by Vaco/Highspring recruiters and hiring managers. Vaco/Highspring does not have knowledge of the tools used by its clients in making final hiring decisions and cannot opine on their use of AI products.
