Join a dynamic startup as a Software Engineer and help revolutionize data querying for AI applications. Work on cutting-edge distributed systems while collaborating closely with a talented team in a supportive environment. Enjoy competitive compensation, catered meals, flexibility, and opportunities for growth.
Every breakthrough AI application, from foundation models to autonomous vehicles, relies on processing massive volumes of images, video, and complex data. But today’s data platforms (like Databricks and Snowflake) are built on top of tools made for spreadsheet-like analytics, not the petabytes of multimodal data that power AI. As a result, teams waste months on brittle infrastructure instead of conducting research and building their core product.
Eventual was founded in 2022 to solve this. Our mission is to make querying any kind of data (images, video, audio, text) as intuitive as working with tables, and powerful enough to scale to production workloads. Our open-source engine, Daft, is purpose-built for real-world AI systems: coordinating with external APIs, managing GPU clusters, and handling failures that traditional engines can’t. Daft already powers critical workloads at companies like Amazon, Mobileye, Together AI, and CloudKitchens.
We’ve assembled a world-class team from Databricks, AWS, Nvidia, Pinecone, GitHub Copilot, Tesla, and more, quadrupling in size within a year. With backing from Y Combinator, Caffeinated Capital, Array.vc, and top angels including the co-founders of Databricks and Perplexity, we’re now looking to double the team. Join us: Eventual is just getting started.
Please note: we are looking for someone who is willing and able to come into our San Francisco office in the Mission District four days per week.
As a Software Engineer on the Systems team, you will build key capabilities for the Daft distributed data engine, working on the core architectural design and implementation of its components. While we are an experienced team that provides constant guidance and mentorship, we value engineers who can autonomously scope and solve difficult technical challenges. Areas you will work on include:
Planning/Query Optimizer: intelligently optimize users’ workloads with modern database techniques
Execution Engine: improve memory stability through streaming computation and more efficient data structures
Distributed Scheduler: improve Daft’s resource utilization, task scheduling, and fault tolerance
Storage: improve Daft’s integrations with modern data lake technologies such as Apache Parquet, Apache Iceberg, and Delta Lake
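To give a flavor of the user-facing surface these components power, here is a minimal sketch of a Daft query in Python. The bucket path is hypothetical, and the calls shown (daft.read_parquet, daft.col, DataFrame.where, with_column, collect) reflect our understanding of Daft’s public Python API and may differ slightly from the current release.

import daft
from daft import col

# Reading is lazy: this builds a logical plan that the query optimizer
# can rearrange (e.g., pushing filters down into the Parquet scan).
df = daft.read_parquet("s3://my-bucket/images/metadata/*.parquet")  # hypothetical path

# Filters and derived columns become expressions in that plan.
df = df.where(col("width") > 512)
df = df.with_column("aspect_ratio", col("width") / col("height"))

# collect() triggers execution: the streaming execution engine on a single
# machine, or the distributed scheduler when running on a cluster.
result = df.collect()
print(result)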
Our goal is to build the world’s best open-source distributed query engine and make it the leading framework for data engineering and analytics.
We are a young startup, so be prepared to wear many hats: tinkering with infrastructure, talking to customers, and participating heavily in the core design of our product!
We are looking for a candidate with a strong foundation in systems programming and, ideally, experience building distributed data systems or databases (e.g., Hadoop, Spark, Dask, Ray, BigQuery, PostgreSQL).
3+ years of experience working with distributed data systems (query planning, optimization, workload pipelining, scheduling, networking, fault tolerance, etc.)
Strong fundamentals in systems programming (e.g., C++, Rust, C) and Linux
Familiarity and experience with cloud technologies (e.g., AWS S3)
Most importantly, we are looking for someone who works well in small, focused teams with fast iterations and lots of autonomy. If you are passionate, intellectually curious, and excited to build the next generation of distributed data technologies, we want you on the team!
Tight-knit, in-person team (4 days a week in the office)
Competitive comp and startup equity
Catered lunches and dinners for SF employees
Commuter benefits
Team building events & poker nights
Health, vision, and dental coverage
Flexible PTO
Latest Apple equipment
401(k) plan with match!