Software Engineer – Foundational Data Systems for AI – Canada

Granica Computing, Inc.

Toronto

On-site

CAD 80,000 - 120,000

Full time

Posted yesterday
Job summary

A leading AI research firm in Toronto is seeking a motivated engineer to help design and implement foundational data systems for enterprise AI. You will build autonomous data engines, contribute to scalable compute systems, and translate new algorithms into production. Ideal candidates are proficient in Java, Rust, Go, or C++ and have a foundational understanding of distributed systems. The role offers a competitive salary, meaningful equity, and a high-trust environment focused on impactful research.

Benefits

Competitive salary
Meaningful equity
Substantial bonuses for top performers
Flexible time off
Comprehensive health coverage
Support for research and publication

Job description

Granica is an AI research and systems company building the infrastructure for a new kind of intelligence: one that is structured, efficient, and deeply integrated with data.

Our systems operate at exabyte scale, processing petabytes of data each day for some of the world’s most prominent enterprises in finance, technology, and industry. These systems are already making a measurable difference in how global organizations use data to deploy AI safely and efficiently.

We believe that the next generation of enterprise AI will not come from larger models but from more efficient data systems. By advancing the frontier of how data is represented, stored, and transformed, we aim to make large-scale intelligence creation sustainable and adaptive.

Our long-term vision is Efficient Intelligence: AI that learns using fewer resources, generalizes from less data, and reasons through structure rather than scale. To get there, we are first building the Foundational Data Systems that make structured AI possible.

The Mission

AI today is limited not only by model design but by the inefficiency of the data that feeds it. At scale, each redundant byte, each poorly organized dataset, and each inefficient data path slows progress and compounds into enormous cost, latency, and energy waste.

Granica’s mission is to remove that inefficiency. We combine new research in information theory, probabilistic modeling, and distributed systems to design self-optimizing data infrastructure: systems that continuously improve how information is represented and used by AI.

This engineering team partners closely with the Granica Research group led by Prof. Andrea Montanari (Stanford), bridging advances in information theory and learning efficiency with large-scale distributed systems. Together, we share a conviction that the next leap in AI will come from breakthroughs in efficient systems, not just larger models.

What You’ll Build
  • Global Metadata Substrate. Help design and implement the metadata substrate that supports time-travel, schema evolution, and atomic consistency across massive tabular datasets (a short illustrative sketch follows this list).
  • Adaptive Engines. Build components that reorganize data autonomously, learning from access patterns and workloads to maintain efficiency with minimal manual tuning.
  • Intelligent Data Layouts. Develop and refine bit-level encodings, compression, and layout strategies to extract maximum signal per byte read.
  • Autonomous Compute Pipelines. Contribute to distributed compute systems that scale predictively and adapt to dynamic load.
  • Research to Production. Translate new algorithms in compression and representation from research into production-grade implementations.
  • Latency as Intelligence. Design and optimize data paths to minimize time between question and insight, enabling faster learning for both models and humans.
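
For context, here is a minimal, purely illustrative sketch of what time travel and column pruning look like from a user's point of view. It is not Granica code: it assumes a Spark session with the Apache Iceberg connector configured, and the table name, columns, and snapshot id are all made up.

```scala
// Illustrative only: assumes Spark with the Apache Iceberg runtime on the classpath.
// Table name, columns, and snapshot id below are hypothetical.
import org.apache.spark.sql.SparkSession

object TimeTravelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("time-travel-sketch").getOrCreate()

    // Column pruning: a columnar format (e.g. Parquet under Iceberg) lets the
    // engine read only the bytes for the columns actually requested.
    val latest = spark.read
      .format("iceberg")
      .load("db.events")
      .select("user_id", "event_ts")

    // Time travel: read the same table as it existed at an earlier snapshot,
    // which the table's metadata layer makes possible without copying data.
    val asOfSnapshot = spark.read
      .format("iceberg")
      .option("snapshot-id", 1234567890123L) // hypothetical snapshot id
      .load("db.events")
      .select("user_id", "event_ts")

    println(s"rows now: ${latest.count()}, rows at snapshot: ${asOfSnapshot.count()}")
    spark.stop()
  }
}
```

Schema evolution and atomic commits sit behind the same metadata layer: readers like the one above keep working while a table's schema or physical layout changes underneath.
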
What You Bring
  • Foundational understanding of distributed systems: partitioning, replication, and fault tolerance.
  • Experience or curiosity with columnar formats such as Parquet and low-level data encoding.
  • Familiarity with metadata-driven architectures or data query planning.
  • Exposure to or hands-on use of Spark, Flink, or similar distributed engines on cloud storage.
  • Proficiency in Java, Rust, Go, or C++; commitment to clean, reliable code.
  • Curiosity about how compression, entropy, and representation shape system efficiency and learning (a toy entropy example follows this list).
  • A builder’s mindset—eager to learn, improve, and deliver features end-to-end with growing autonomy.
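
As a toy illustration of the entropy point above (illustrative only, not Granica code): the Shannon entropy of a column's empirical value distribution, H = -Σ p·log2(p), gives a rough lower bound on the bits needed per value under ideal entropy coding, which is one lens on how representation drives storage and scan cost. The sample values below are made up.

```scala
// Toy example: estimate the Shannon entropy of a column's values as a rough
// lower bound on bits per value after ideal entropy coding. Data is made up.
object ColumnEntropySketch {
  def entropyBitsPerValue[A](values: Seq[A]): Double = {
    val n = values.size.toDouble
    values
      .groupBy(identity)
      .values
      .map(_.size / n)                          // empirical probability of each distinct value
      .map(p => -p * math.log(p) / math.log(2)) // contribution -p * log2(p)
      .sum
  }

  def main(args: Array[String]): Unit = {
    val countryColumn = Seq("US", "US", "US", "CA", "CA", "MX")
    val h = entropyBitsPerValue(countryColumn)
    // Roughly 1.46 bits per value here, versus 16 bits for a fixed-width
    // two-character encoding: skewed columns compress far below their raw size.
    println(f"approx. $h%.2f bits per value")
  }
}
```

Columnar formats exploit exactly this gap with dictionary, run-length, and entropy coding, which is part of why layout choices matter so much for "signal per byte."
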
Bonus
  • Familiarity with Iceberg, Delta Lake, or Hudi.
  • Contributions to open-source projects or research in compression, indexing, or distributed systems.
  • Interest in how data representation influences AI training dynamics and reasoning efficiency.
Why Granica
  • Fundamental Research Meets Enterprise Impact. Work at the intersection of science and engineering, turning foundational research into deployed systems serving enterprise workloads at exabyte scale.
  • AI by Design. Build the infrastructure that defines how efficiently the world can create and apply intelligence.
  • Real Ownership. Design primitives that will underpin the next decade of AI infrastructure.
  • High-Trust Environment. Deep technical work, minimal bureaucracy, shared mission.
  • Enduring Horizon. Backed by NEA, Bain Capital, and various luminaries from tech and business. We are building a generational company for decades, not quarters or a product cycle.
Compensation & Benefits
  • Competitive salary, meaningful equity, and substantial bonus for top performers
  • Flexible time off plus comprehensive health coverage for you and your family
  • Support for research, publication, and deep technical exploration

Join us to build the foundational data systems that power the future of enterprise AI. At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring.