Data Pipeline Engineer - Hybrid

Ribbon Communications Operating Company

Ottawa

Hybrid

CAD 100,000 - 125,000

Full time


Job summary

A telecommunications firm is seeking a Data Pipeline Engineer to develop scalable data processing solutions. You will design distributed data pipelines and collaborate on machine learning workflows. Ideal candidates have a strong background in data engineering and experience with tools such as Hadoop and Apache Flink. The role is based in Ottawa on a hybrid schedule that moves to three in-office days per week starting January 2025.

Qualifications

  • Bachelor’s degree in Computer Science, Electrical Engineering, or related field.
  • Four or more years of experience in a similar role.
  • Strong programming skills in Python, Java, and SQL.

Responsibilities

  • Design and develop large-scale, distributed data processing pipelines.
  • Collaborate on ML workflows and analytics data pipelines.
  • Optimize data workflows for performance and scalability.

Skills

Distributed data processing pipelines
Data ingestion and transformation
Machine learning workflows
Programming in Python
Programming in Java
SQL proficiency

Education

Bachelor’s degree in Computer Science or related field
Master’s degree in Computer Science or related field (preferred)

Tools

Hadoop
Apache Flink
Apache Kafka
Spark
Airflow
Job description

JOB TITLE: Data Pipeline Engineer – Ribbon Analytics (Full Time)

OPPORTUNITY

Ribbon Communications is looking for a Data Pipeline Engineer to support the infrastructure, data pipelines, and analytics applications powering Ribbon Analytics, a big data security and network intelligence product. You’ll work with container platforms, design distributed data pipelines, and help scale machine learning capabilities for real-world telecommunications use cases.

Ribbon Analytics is a big data network analytics and security product that collects and processes massive amounts of data from the network, leveraging machine learning and other techniques to analyze trends and outliers and to take action to mitigate security threats, fraud, and other issues in a customer’s network.
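To make the outlier-analysis idea concrete, here is a minimal, purely illustrative Python sketch of streaming anomaly detection using a rolling z-score. The data, names, and thresholds are all invented for illustration; this is not Ribbon's implementation.

    # Hypothetical sketch: flag traffic outliers with a rolling z-score.
    import random
    import statistics
    from collections import deque

    WINDOW = 60        # number of recent samples to keep
    THRESHOLD = 3.0    # z-score above which a sample is flagged

    window = deque(maxlen=WINDOW)

    def check(sample: float) -> bool:
        """Return True if `sample` is an outlier relative to recent history."""
        is_outlier = False
        if len(window) >= 2:
            mean = statistics.fmean(window)
            stdev = statistics.stdev(window)
            is_outlier = stdev > 0 and abs(sample - mean) / stdev > THRESHOLD
        window.append(sample)
        return is_outlier

    if __name__ == "__main__":
        # Simulated per-minute call volumes with an injected spike at the end.
        stream = [random.gauss(1000, 50) for _ in range(120)] + [5000]
        for minute, volume in enumerate(stream):
            if check(volume):
                print(f"minute {minute}: possible anomaly, volume={volume:.0f}")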

We are seeking a self-driven candidate with a strong work ethic and a focus on creating scalable data pipelines and analytics solutions.

LOCATION

Ottawa, ON, Canada. Hybrid: two days a week in our Ottawa office, increasing to three days a week starting January 2025.

What you will be doing (Responsibilities):
  • Design and develop large-scale, distributed data processing pipelines using technologies such as Hadoop, Trino/Impala, Apache Flink, and Airflow (see the orchestration sketch after this list).
  • Design efficient data ingestion and transformation solutions for structured and unstructured data.
  • Participate in code reviews, design discussions, and technical decision-making.
  • Collaborate with architects, data scientists, and software developers to design and develop ML workflows and analytics data pipelines.
  • Optimize data workflows for performance, reliability, and scalability.
  • Stay current on data engineering trends and emerging technologies.
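As a hedged illustration of the orchestration work described above, the sketch below defines a two-step Airflow DAG. It assumes Airflow 2.4 or later; the DAG id, task names, and task logic are hypothetical and invented for this example.

    # Hypothetical two-step pipeline: ingest raw records, then transform them.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        print("pull raw network records from the collection layer")

    def transform():
        print("normalize and enrich records for the analytics store")

    with DAG(
        dag_id="network_records_daily",   # invented name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        ingest_task >> transform_task    # transform runs only after ingest succeeds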
What we need to see (Qualifications):
  • Bachelor’s degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field.
  • A Master’s degree in Computer Science, Electrical Engineering, or Computer Engineering is preferred.
  • Specialization in Data Science or Machine Learning is preferred.
  • Four or more years of experience in a similar role.
  • Experience with distributed data pipeline and data warehousing frameworks such as Apache Flink, Spark, Kafka, and Hadoop.
  • Strong programming skills in Python, Java, and SQL, particularly as applied to machine learning, data science, and ML frameworks.
  • Ability to quickly pick up new tools and technologies to assist in rapid prototyping.
Ways to Stand Out from the Crowd (Preferred Skills):
  • Experience with streaming systems such as Apache Kafka and Flink.
  • Strong understanding of database technologies (SQL and NoSQL).
  • Experience with AI/ML concepts or frameworks.
  • Experience with microservices architecture and frameworks such as Kubernetes, Docker, and OpenShift.
  • Experience with cloud platforms and distributed computing environments such as AWS, Google Cloud, or Azure.