
Senior Data Engineer

Compunnel Inc.

Toronto

On-site

CAD 80,000 - 100,000

Part time

2 days ago

Job summary

A leading technology firm in Toronto is seeking an experienced Data Engineer to modernize an enterprise Oracle Data Warehouse into an Azure-based Data Lake. The role involves re-engineering PL/SQL scripts into PySpark transformations, managing data pipelines with Azure Data Factory, and collaborating with cross-functional teams. Ideal candidates will have a strong background in financial services and data modeling.

Qualifications

  • Experience in modernizing Oracle Data Warehouse into Azure Data Lake.
  • Strong background in PL/SQL and PySpark for data transformations.
  • Comfortable working with data pipelines and cross-functional teams.

Responsibilities

  • Re-engineer approximately 700 PL/SQL scripts into efficient PySpark transformations.
  • Build and manage data pipelines using Azure Data Factory.
  • Leverage Databricks notebooks for data processing.

Skills

  • Hands-on experience with Azure Data Factory.
  • Familiarity with Databricks and exposure to SQL.
  • Understanding of Terraform CI/CD deployment flows.
  • Background in Oracle Data Warehousing.
  • Experience with data modeling and resolving ingestion performance issues.
  • Prior work in finance or sensitive-data environments.
  • Familiarity with Access or legacy systems for troubleshooting.

Job description


Are you an experienced Data Engineer ready to take on a large-scale data platform migration? Join us in modernizing an enterprise Oracle Data Warehouse into a cutting-edge Azure-based Data Lake.

We're looking for a hands-on engineer who can dive into legacy PL/SQL and help reimagine it in PySpark within Databricks, using Azure Data Factory to power our data pipelines.

  1. Re-engineer approximately 700 PL/SQL scripts into efficient PySpark transformations.
  2. Build and manage data pipelines using Azure Data Factory.
  3. Leverage Databricks notebooks for data processing.
  4. Collaborate with cross-functional teams to ensure smooth platform and data migration.
  5. Navigate Terraform deployment pipelines (understanding only, no coding required).

Seniority level: Mid-Senior level

Employment type: Contract

Job function: Information Technology

Industries: Financial Services, Banking, and Investment Banking

