
A leading technology firm in York Region, Canada, seeks an experienced Database Architect to lead the design and optimization of next-generation data analytics infrastructure. The ideal candidate will have expert proficiency in database kernel principles and strong programming skills in C/C++. Responsibilities include performance analysis, technology research, and guiding the development of innovative database solutions. A passion for cutting-edge technologies is essential.
The Computing Data Application Acceleration Lab aims to build a leading global data analytics platform and is organized into three specialized teams working with innovative programming technologies. This team focuses on full-stack innovation, including software-hardware co-design and optimizing data efficiency at both the storage and runtime layers. It also develops next-generation GPU architecture for gaming, cloud rendering, VR/AR, and Metaverse applications.
One of the lab's goals is to enhance algorithm performance and training efficiency across industries, fostering long-term competitiveness.
Responsibilities:
Lead the core architecture design and evolution of our next-generation data analytics infrastructure, with a focus on high-performance, scalable databases and big data analytics systems.
Spearhead in-depth performance analysis and optimization of the database kernel, resolving extreme performance bottlenecks under massive data loads to enhance system efficiency and stability.
Track and research cutting-edge technologies in databases, data analytics, and the integration of AI; lead exploratory projects, especially on innovative database solutions leveraging software-hardware co-design, such as computation and storage decoupling and CXL-based memory pooling.
Develop the database technology roadmap, guide the team in overcoming key technical challenges, and build our industry-leading technical influence.
Qualifications:
Expert proficiency in the kernel principles, storage engines, and query optimization of at least one mainstream database (e.g., PostgreSQL, MySQL, MongoDB, DB2, Oracle DB), with hands-on experience in source-code level development and/or deep optimization.
Solid programming foundation in C/C++, proficiency in Linux systems programming, and experience with software-hardware co-design and optimization.
Knowledge of modern computer architecture and familiarity with emerging technologies such as CXL, RDMA, and NVMe-oF is an asset.
Familiarity with the big data technology ecosystem (Hadoop, Spark, Flink, Lakehouse, unified batch/stream processing, etc.) is an asset.
Excellent technical vision, strong learning ability, and complex problem-solving skills. Keen insight into cutting-edge database and data analytics technologies (e.g., AI for DB, vector databases), coupled with the passion and capability to apply new technologies to real-world scenarios.