Senior Data Engineer (Hybrid)

NEARSOURCE TECHNOLOGIES

Old Toronto

Hybrid

CAD 60,000 - 80,000

Full time

30+ days ago

Job summary

Join a dynamic team in a hybrid role as a Senior Data Engineer, contributing to innovative solutions on a Fortune 500 project. You will build and scale robust data infrastructure, automate cloud services, and develop CI/CD pipelines. Your expertise in big data systems, SQL, and data processing will be key to delivering impactful insights and solutions. This is an exciting opportunity to collaborate with cross-functional teams, engage with stakeholders, and apply your skills in Spark, PySpark, and Python to drive technology advancements. Make a significant impact in a forward-thinking environment that values diversity and innovation.

Qualifications

  • 5-7 years of experience in big data systems and data processing.
  • 3 years of experience with Spark DataFrames and PySpark.
  • Strong proficiency in SQL and dimensional modeling.

Responsibilities

  • Design scalable systems to meet evolving business needs.
  • Build robust data infrastructure for efficient processing.
  • Collaborate with teams to deliver tailored data solutions.

Skills

Spark
PySpark
Python
SQL
ETL Tools (Airflow)
Business Intelligence Tools (Looker)
Data Analysis (Jupyter, EMR Notebooks)
Version Control (Git)
CI/CD (Jenkins CI)

Education

Bachelor's degree in Computer Science or related field

Tools

Airflow
Looker
Jupyter
Apache Zeppelin

Job description

Join a top Fortune 500 project in Canada as a Senior Data Engineer. Contribute to innovative solutions and technology advancements. Apply now to make an impact with a dynamic team. This hybrid role is based in Toronto, Ontario, Canada.

Responsibilities

  • Product-Driven Development: Apply a product-focused mindset to understand business needs and design scalable, adaptable systems that evolve with changing requirements.
  • Problem Solving & Technical Design: Deconstruct complex challenges, document technical solutions, and plan iterative improvements for fast, impactful results.
  • Data Infrastructure & Processing: Build and scale robust data infrastructure to handle batch and real-time processing of billions of records efficiently.
  • Automation & Cloud Infrastructure: Automate cloud infrastructure, services, and observability to enhance system efficiency and reliability.
  • CI/CD & Testing: Develop CI/CD pipelines and integrate automated testing to ensure smooth, reliable deployments.
  • Cross-Functional Collaboration: Work closely with data engineers, data scientists, product managers, and other stakeholders to understand requirements and promote best practices.
  • Growth Mindset & Insights: Identify business challenges and opportunities, using data analysis and mining to provide strategic and tactical recommendations.
  • Analytics & Reporting: Support analytics initiatives by delivering insights into product usage, campaign performance, funnel metrics, segmentation, conversion, and revenue growth.
  • Ad-Hoc Analysis & Dashboarding: Conduct ad-hoc analyses, manage long-term projects, and create reports and dashboards to reveal new insights and track key initiative progress.
  • Stakeholder Engagement: Partner with business stakeholders to understand analytical needs, define key metrics, and maintain a data-driven approach to problem-solving.
  • Cross-Team Partnership: Collaborate with cross-functional teams to gather business requirements and deliver tailored data solutions.
  • Data Storytelling & Presentation: Deliver impactful presentations that translate complex data into clear, actionable insights for diverse audiences.

Minimum Qualifications

  • Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience.
  • Industry Experience: 5-7 years of industry experience in big data systems, data processing, and SQL databases.
  • Spark & PySpark Expertise: 3 years of experience with Spark data frames, Spark SQL, and PySpark for large-scale data processing.
  • Programming Skills: 3 years of hands-on experience in writing modular, maintainable code, preferably in Python and SQL.
  • SQL & Data Modeling: Strong proficiency in SQL, dimensional modeling, and working with analytical big data warehouses like Hive and Snowflake.
  • ETL Tools: Experience with ETL workflow management tools such as Airflow.
  • Business Intelligence (BI) Tools: 2+ years of experience in building reports and dashboards using BI tools like Looker.
  • Version Control & CI/CD: Proficiency with version control and CI/CD tools like Git and Jenkins CI.
  • Data Analysis Tools: Experience working with and analyzing data using notebook solutions such as Jupyter, EMR Notebooks, and Apache Zeppelin.

APPLY NOW!

NearSource Technologies values diversity and is committed to equal opportunity. All qualified applicants will be considered regardless of their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as protected veterans.
