Data Pipeline Engineer - Hybrid

Ribbon Communications

Ottawa

Hybrid

CAD 80,000 - 100,000

Full time

19 days ago

Job summary

A global leader in real-time communications is seeking a Data Pipeline Engineer in Ottawa, ON. This full-time position involves designing and developing large-scale data pipelines. The ideal candidate has a relevant bachelor's degree, experience with frameworks like Hadoop and Apache Flink, and strong programming skills in Python, Java, and SQL. Join the team to work on innovative projects in a collaborative environment.

Qualifications

  • 4+ years of experience in a similar role.
  • Experience with distributed data pipeline and data warehousing frameworks.
  • Ability to quickly pick up new tools and technologies.

Responsibilities

  • Design and develop large-scale, distributed data processing pipelines.
  • Collaborate with architects and data scientists on ML workflows.
  • Optimize data workflows for performance and scalability.

Skills

Data pipeline design
Data ingestion and transformation
Programming in Python
Programming in Java
Programming in SQL
Apache Kafka
Machine Learning concepts
Microservices architecture
Cloud platforms (AWS, Google Cloud, Azure)

Education

Bachelor’s degree in Computer Science or related field
Master’s degree in Computer Science or related field

Tools

Hadoop
Apache Flink
Apache Spark
Docker
Kubernetes
OpenShift

Job description

JOB TITLE: Data Pipeline Engineer – Ribbon Analytics (Full Time)

ABOUT RIBBON COMMUNICATIONS

Ribbon Communications is a global leader in real-time communications, transforming networks to secure IP and cloud-based architectures for consumers and businesses worldwide. Learn more at rbbn.com.

LOCATION

Ottawa, ON, Canada. Hybrid: two days per week in our Ottawa office, increasing to three days per week starting January 2025.

Responsibilities
  • Design and develop large-scale, distributed data processing pipelines using technologies like Hadoop, Apache Trino/Impala, Flink, and Airflow.
  • Design efficient data ingestion and transformation solutions for structured and unstructured data.
  • Participate in code reviews, design discussions, and technical decision-making.
  • Collaborate with architects, data scientists, and software developers to design and develop ML workflows and analytics data pipelines using the latest technologies.
  • Optimize data workflows for performance, reliability, and scalability.
  • Stay current on data engineering trends and emerging technologies.
Qualifications
  • Bachelor’s degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field.
  • A Master’s degree in Computer Science, Electrical Engineering, or Computer Engineering is preferred.
  • Specialization in Data Science or Machine Learning is preferred.
  • Four or more years of experience in a similar role.
  • Experience with distributed data pipeline and data warehousing frameworks such as Apache Flink, Spark, Kafka, Hadoop, etc.
  • Strong programming skills in Python, Java, and SQL as they relate to machine learning, data science, and ML frameworks.
  • Ability to quickly pick up new tools and technologies to assist in rapid prototyping.
Preferred Skills
  • Experience with streaming systems such as Apache Kafka and Flink.
  • Strong understanding of database technologies (SQL and NoSQL).
  • Experience with AI/ML concepts or frameworks.
  • Experience with microservices architecture and related platforms such as Kubernetes, Docker, and OpenShift.
  • Experience with cloud platforms and distributed computing environments for NLP tasks, such as AWS, Google Cloud, or Azure.

Please Note:

All qualified applicants will receive consideration for employment without regard to race, age, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, disability, or any other characteristic protected by applicable law.
