Python Developer - Data Engineer

CG Consulting Group

Toronto

Hybrid

CAD 80,000 - 120,000

Full time

30+ days ago

Job summary

Join a dynamic IT services firm as a Python Data Engineer, where you'll lead the design and implementation of robust data pipelines. In this hybrid role, you'll collaborate with data scientists and analysts to transform complex business requirements into efficient data solutions. Your expertise in Python, SQL, and cloud platforms will empower the organization with actionable insights. This is an exciting opportunity to mentor a talented team while continuously researching and implementing new technologies. If you're passionate about data engineering and eager to make a significant impact, we encourage you to apply!

Qualifications

  • 5+ years in Data Engineering with strong Python skills.
  • Experience with SQL and relational databases is essential.

Responsibilities

  • Lead the design and development of data pipelines for data ingestion.
  • Implement data quality checks and monitoring systems.

Skills

Python
SQL
Data Engineering
Communication Skills
Mentorship

Education

Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Tools

Airflow
SQLAlchemy
OpenShift
ECS
Kubernetes

Job description

Python Developer - Data Engineer

This role is with a large IT services firm, supporting a major US bank in Canada.

Candidates must be legally authorized to work in Canada.

Client location: Mississauga, ON.

Hybrid: in-office three days per week.

Permanent, full-time role; salary and benefits at market rate.


We are seeking a passionate and highly skilled Python Data Engineer to spearhead the development and maintenance of our cutting-edge data infrastructure. As a leader in our data team, you'll play a pivotal role in designing, building, and optimizing data pipelines that empower our organization with insightful and actionable data. If you thrive in a fast-paced environment, possess a strong collaborative spirit, and are eager to make a significant impact, we encourage you to apply.


Objectives of this role
  1. Design, architect, and implement robust and scalable data pipelines using Python and related technologies (experience with Airflow, PySpark, or PyFlink is a plus).
  2. Champion best practices for data engineering, code quality, testing, and deployment.
  3. Mentor and guide a team of talented data engineers, fostering a collaborative and high-performing team culture.
  4. Collaborate closely with Data Scientists, Data Analysts, and business stakeholders to translate complex business requirements into efficient data solutions.
  5. Continuously research and implement new technologies and best practices to improve the efficiency and scalability of our data platform.
  6. Take ownership of the deployment and monitoring of data pipelines and related infrastructure on cloud platforms such as OpenShift, ECS, or Kubernetes.

Responsibilities
  1. Lead the design and development of data pipelines for ingestion, transformation, and loading of data from various sources (databases, APIs, streaming platforms) into our data warehouse/lake.
  2. Write optimized and maintainable SQL queries and leverage SQLAlchemy for efficient database interaction.
  3. Implement robust data quality checks and monitoring systems to ensure data integrity and accuracy.
  4. Develop comprehensive documentation and contribute to knowledge sharing within the team.
  5. Contribute to the design and implementation of data governance policies and procedures.

Required skills and qualifications
  1. 5+ years of hands-on experience in a Data Engineering role, with a strong proficiency in Python (version 3.6+).
  2. Extensive experience working with relational databases and writing complex SQL queries.
  3. Proven expertise with SQLAlchemy or similar ORM libraries.
  4. Experience with workflow management tools like Airflow (experience with PySpark or PyFlink is a major plus).
  5. Solid understanding of data warehousing concepts and experience working with large datasets.
  6. Ability to guide and mentor junior developers, fostering a collaborative team environment.
  7. Strong communication skills, both written and verbal, with the ability to explain complex technical concepts to both technical and non-technical audiences.
  8. Experience deploying and managing applications on cloud platforms like OpenShift, ECS, or Kubernetes.

Preferred skills and qualifications
  1. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  2. Experience with data visualization tools and techniques.
  3. Familiarity with agile development methodologies.
  4. Contributions to open-source projects or active participation in the data engineering community.
