Overview
This is a remote position. We are seeking a highly skilled and motivated AWS Senior Platform Engineer to join our growing data engineering team. This role is ideal for someone who thrives in a fast-paced environment and is passionate about building scalable, secure, and high-performance data platforms.
Location
Across Canada
Employment Type
Full Time
Status
Contract
Responsibilities
- Design, develop, and optimize data pipelines using EMR and Spark (PySpark)
- Implement and manage AWS Lake Formation for secure and governed data access
- Contribute to the development and adoption of data mesh solutions, enabling decentralized data ownership and interoperability
- Write and maintain data quality checks to ensure the accuracy, completeness, and reliability of data assets
- Build out data catalog solutions on AWS
- Support light DevOps tasks, ideally using Terraform for infrastructure as code
Qualifications
- 5+ years of experience in data engineering or platform engineering roles
- Strong hands-on experience with AWS EMR, Spark (PySpark), and Lake Formation
- Familiarity with data mesh architecture and its implementation in enterprise environments
- Proficiency in writing robust data validation and quality checks
- Experience working within structured frameworks and agile environments
- Exposure to DevOps practices, especially using Terraform to provision and manage cloud infrastructure
- 5+ years of experience in data quality assurance and testing, including developing and executing functional test cases, validating data pipelines, and coordinating deployments from development to production environments
- Experience supporting at least one enterprise or government organization with Big Data platforms and tools, such as Hadoop (HDFS, Pig, Hive, Spark), Big SQL, NoSQL, and Scala, ideally within cloud-based environments
- Completed 3+ data analysis and modeling projects, including working with structured and unstructured databases, building automated data quality pipelines, and collaborating with data engineers and architects to ensure high data integrity
- Experience developing and executing test cases for Big Data pipelines, with deployments across dev, test, and production environments
- Strong SQL skills for validation, troubleshooting, and data profiling
- Applied knowledge of Big Data platforms including Hadoop (HDFS, Hive, Pig), Spark, Big SQL, NoSQL, and Scala
- Familiarity with cloud data ingestion and integration methods
- Experience working with structured and unstructured data formats
- Understanding of data modeling, data structures, and use-case-driven design
- Experience in test automation for data validation pipelines is a strong asset
- Prior experience with Genesys Cloud testing is a plus
- Exposure to Tableau or other BI tools is beneficial
- Hybrid role: 2 days per week onsite in North Vancouver