Lead Data Developer

OpenText

Waterloo

Hybrid

CAD 90,000 - 120,000

Full time

4 days ago

Job summary

A prominent cloud solutions provider is seeking a Lead Data Developer for a hybrid role in Waterloo. The candidate will design and implement scalable ETL processes, develop data pipelines, and ensure data quality. This position requires strong experience in data warehousing, dimensional modeling, and advanced T-SQL, along with excellent communication skills. Join a team dedicated to building a data-driven ecosystem in an inclusive workspace.

Qualifications

  • 5+ years of experience in ETL / ELT development and data warehousing.
  • Strong experience designing and implementing dimensional data models (Kimball).
  • Excellent communication skills, with a collaborative, problem-solving mindset.

Responsibilities

  • Designing and implementing scalable ETL / ELT processes to capture data from diverse systems.
  • Building and maintaining stored procedures and automation in SQL.
  • Delivering clean, reliable datasets and aggregates that feed Power BI dashboards.

Skills

ETL / ELT development
Dimensional data models (Kimball)
Advanced T-SQL
Data integration
Version control
Communication skills

Education

Bachelor's or Master's degree in Computer Science, Information Systems, or related field

Tools

Azure Data Factory
Power BI
SSRS
Git

Job description

Overview

AI-First. Future-Driven. Human-Centered.

OpenText is hiring a Lead Data Developer in Waterloo, ON for a hybrid role (In-Office Tue; WFH Mon & Fri).

The Opportunity

OpenText (OT) is a growing organization looking for a talented and experienced Lead Data Engineer to support our SMB (Small Medium Business) Secure Cloud Data Analytics and Reporting Services. You'll serve on a SCRUM team responsible for designing, developing, and enhancing data pipelines that integrate multiple data sources into our Enterprise Data Warehouse. This warehouse powers dimensional reporting, analytics, and integrations across the business using tools such as Azure Analysis Services, Power BI, and SQL Reporting Services.

This opportunity provides end-to-end experience across the data lifecycle—from data collection and ingestion, to modeling and transformation, to ensuring quality and scalability for downstream reporting and analytics. You'll also partner with engineering teams across the globe to troubleshoot issues, deliver improvements, and help shape the future of our data-driven ecosystem.

Responsibilities
  • Designing and implementing scalable ETL / ELT processes to capture data from diverse systems (MongoDB, APIs, XML / JSON, flat files, Azure Blob storage).
  • Modeling data into Kimball-style facts and dimensions that provide a solid foundation for analytics and reporting.
  • Building and maintaining stored procedures, scripts, and automation in SQL, PowerShell, and C#.
  • Delivering clean, reliable datasets and aggregates that feed Power BI dashboards, Analysis Services models, and SSRS reports.
  • Partnering with product managers, analysts, and other engineers to ensure data quality, reliability, and timely delivery.
  • Collaborating in backlog grooming, estimation, and prioritization of new features and enhancements.
  • Acting as a mentor and subject-matter expert for junior engineers, setting standards for data practices across the team.
What You Need to Succeed
  • Bachelor's or Master's degree in Computer Science, Information Systems, or related field.
  • 5+ years of experience in ETL / ELT development and data warehousing.
  • Strong experience designing and implementing dimensional data models (Kimball).
  • Advanced T-SQL skills, including stored procedure development and performance tuning.
  • Experience with data integration from structured and semi-structured sources (APIs, JSON, XML, flat files, MongoDB, blob storage).
  • Familiarity with version control and CI / CD tools (Git, Azure DevOps, Octopus Deploy).
  • Excellent communication skills, with a collaborative, problem-solving mindset.
Desirable Skills
  • Strong experience with data ingestion frameworks (e.g., SSIS, custom C# / PowerShell ETL, Azure Data Factory, or similar).
  • Hands-on with semi-structured and unstructured data (JSON, XML, APIs, MongoDB, blob storage).
  • Advanced knowledge of Kimball dimensional modeling and performance-optimized schema design.
  • Familiarity with automation and orchestration (SQL Agent, Azure Data Factory pipelines, or scheduling frameworks).
  • Experience with DevOps practices for data engineering (CI / CD pipelines, testing, monitoring).
  • Comfort with scripting / programming in C#, PowerShell, or Python for ETL and automation tasks.
  • Working knowledge of reporting platforms (Power BI, SSRS) to understand downstream needs and design data models appropriately.
OpenText and Inclusion

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.

If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
