Senior Data Engineer

LanceSoft, Inc.

Montreal

Hybrid

CAD 80,000 - 100,000

Full time

Job summary

A tech solutions firm in Montreal is seeking a Sr. Data Engineer to develop ETL processes using Python and Databricks. The ideal candidate has mid-senior-level experience in Python programming and cloud services, particularly in building and managing scalable data pipelines. The role emphasizes collaboration and quality assurance across the engineering lifecycle, so a solid understanding of ETL principles and data integration is essential. The position is contract-based, with onsite presence required three days a week.

Qualifications

  • Proficiency in Python programming, including experience in writing efficient and maintainable code.
  • Hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
  • Solid understanding of ETL principles, data modeling, and data integration best practices.

Responsibilities

  • Collaborate with cross-functional teams to understand data requirements and design efficient ETL processes.
  • Develop and deploy ETL jobs that extract data from various sources and transform it to meet business needs.
  • Take ownership of the end-to-end engineering lifecycle, ensuring accuracy and consistency.

Skills

Proficiency in Python programming
Experience with Databricks
Experience with Snowflake
Understanding of ETL principles
Familiarity with agile methodologies
Experience with Git

Tools

Databricks
Snowflake
Apache Airflow
Linux

Job description

Job Title: Sr. Data Engineer

Location: Montreal (Day 1 onboarding onsite; in-office presence 3x per week)

Duration: 12+ months (extendable contract)

Role Responsibilities
  • Collaborate with cross-functional teams to understand data requirements and design efficient, scalable, and reliable ETL processes using Python and Databricks.
  • Develop and deploy ETL jobs that extract data from various sources, transforming it to meet business needs.
  • Take ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
  • Create and manage data pipelines, ensuring proper error handling, monitoring, and performance optimization.
  • Work in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
  • Conduct code reviews, provide constructive feedback, and enforce coding standards to maintain high quality.
  • Develop and maintain tooling and automation scripts to streamline repetitive tasks.
  • Implement unit, integration, and other testing methodologies to ensure the reliability of ETL processes.
  • Utilize REST APIs and other integration techniques to connect various data sources.
  • Maintain documentation, including data flow diagrams, technical specifications, and process descriptions.
You have:
  • Proficiency in Python programming, including experience in writing efficient and maintainable code.
  • Hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
  • Proficiency in working with Snowflake or similar cloud-based data warehousing solutions.
  • Solid understanding of ETL principles, data modeling, data warehousing concepts, and data integration best practices.
  • Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
  • Experience with code versioning tools (e.g., Git).
  • Meticulous attention to detail and a passion for problem solving.
  • Knowledge of Linux operating systems and familiarity with REST APIs and integration techniques.
You might also have:
  • Familiarity with data visualization tools and libraries (e.g., Power BI).
  • Background in database administration or performance tuning.
  • Familiarity with data orchestration tools, such as Apache Airflow.
  • Previous exposure to big data technologies (e.g., Hadoop, Spark) for large-scale data processing.
Seniority level: Mid-Senior level

Employment type: Contract

Job function: Engineering, Information Technology, and Project Management

Industries: Financial Services, Investment Banking, and Insurance
