
Data Engineer | Level 4 (CAD) - RSR MSP

LanceSoft Inc

Montreal

On-site

CAD 85,000 - 110,000

Full time

2 days ago

Job summary

A leading technology firm in Montreal is looking for a Data Engineer with at least 7 years of experience. The role involves designing and managing scalable ETL processes using Python and DataBricks. Candidates should have proficiency in cloud services and data warehousing solutions like Snowflake. The position offers a supportive environment with opportunities for growth and collaboration in agile teams.

Benefits

Career development opportunities
Collaborative work environment
State-of-the-art office

Qualifications

  • 7+ years of experience in data engineering or relevant field.
  • Strong proficiency in Python and experience in ETL processes.
  • Hands-on experience with cloud services for data management.

Responsibilities

  • Collaborate with teams to design scalable ETL processes.
  • Develop and deploy ETL jobs for data extraction and transformation.
  • Take ownership of data engineering lifecycle ensuring accuracy.

Skills

Python programming
DataBricks
Snowflake
ETL principles
Agile methodologies
Git
Linux operating systems

Tools

Data visualization tools (e.g., Power BI)
Apache Airflow

Job description



Job Title: Data Engineer
Experience Level: Level 4 (advanced), 7-15 years
Open positions: 1
Job Level: FTC
Location: Montreal (onsite onboarding on day 1; in-office presence 3x per week)

We provide:

• A robust career development path offering numerous opportunities for growth, learning and advancement.
• A supportive, learning-oriented environment, with development carried out in fast-feedback agile delivery squads.
• Collaborative work within cross-functional squads, following agile practices and utilizing both cloud and on-premises technology to deliver innovative solutions.
• Encouragement for every developer to contribute their unique perspective - your ideas will be valued, and you’ll receive full support in their implementation!
• Participation in an international environment with various multidisciplinary squads, working alongside customers, product experts, and SREs.
• A dynamic environment driven by cutting-edge technology.
• State-of-the-art offices located in the City Centre designed to enhance collaboration.

Role Responsibilities

You will be responsible for:

• Collaborating with cross-functional teams to understand data requirements and design efficient, scalable, and reliable ETL processes using Python and DataBricks.
• Developing and deploying ETL jobs that extract data from various sources, transforming it to meet business needs.
• Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
• Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization.
• Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
• Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high quality.
• Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
• Implementing unit, integration, and other testing methodologies to ensure the reliability of the ETL processes.
• Utilizing REST APIs and other integration techniques to connect various data sources.
• Maintaining documentation, including data flow diagrams, technical specifications, and processes.

You have:

• Proficiency in Python programming, including experience in writing efficient and maintainable code.
• Hands-on experience with cloud services, especially DataBricks, for building and managing scalable data pipelines.
• Proficiency in working with Snowflake or similar cloud-based data warehousing solutions.
• Solid understanding of ETL principles, data modelling, data warehousing concepts, and data integration best practices.
• Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
• Experience with code versioning tools (e.g., Git).
• Meticulous attention to detail and a passion for problem solving.
• Knowledge of Linux operating systems.
• Familiarity with REST APIs and integration techniques.

You might also have:

• Familiarity with data visualization tools and libraries (e.g., Power BI)
• Background in database administration or performance tuning
• Familiarity with data orchestration tools, such as Apache Airflow
• Previous exposure to big data technologies (e.g., Hadoop, Spark) for large-scale data processing

EEO Employer
Minorities / Females / Disabled / Veterans / Gender Identity / Sexual Orientation