Software Engineer - Integrations

Paris - Remote France

Sifflet, Inc.

Remote

USD 80,000 - 120,000

Full time

30+ days ago

Job summary

Join a forward-thinking company that is revolutionizing data observability! As a backend engineer, you'll design and implement integrations with various data sources, enhancing how organizations manage and visualize their data. This is an exciting opportunity to work with a small, dynamic team where your contributions will directly impact product development and architecture. You’ll gain hands-on experience with modern technologies like Java, Spring Boot, and various data tools, all while being part of a collaborative environment that values innovation and learning. If you're passionate about data and eager to make a difference, this role is for you!

Benefits

Competitive salary
Company equity
Remote-friendly
Weekly tech talks
Knowledgeable team members

Qualifications

  • 2+ years of experience in backend engineering or a similar role.
  • Knowledge of data warehouses and visualization tools is a plus.

Responsibilities

  • Design and implement integrations with various data products.
  • Scale the data ingestion engine for large customer instances.
  • Add new capabilities and support for new integration types.

Skills

Backend Engineering
Data Warehousing
Data Visualization
ETL Pipelines
Java
Spring Boot
TypeScript
Python

Education

Bachelor's Degree in Computer Science or related field

Tools

GitLab CI
Prometheus
Grafana
Sentry

Job description

We are building the world’s best data observability platform to help companies excel at data-driven decision making.
Today, half of a data team’s time is spent troubleshooting data quality issues; Sifflet is putting an end to that. Our solution allows data engineers and data consumers to visualize how data flows between their services, define data quality checks, and quickly find the root cause of any data anomaly.
Companies such as Datadog and New Relic have improved the productivity of developer teams tenfold. Our goal is to bring the same benefits to data teams. In a few years, every data-driven company will be using a data observability solution, and we want to be the best solution on the market (and of course, we have plans to go well beyond simple “data observability”).
We are backed by tier 1 investors and work with customers all across the globe. Our number of clients is growing steadily, and we need to expand our team!

About the job

Sifflet connects to many different data sources: data warehouses (Google BigQuery, Snowflake, AWS Redshift…), business intelligence and visualization solutions (Looker, Power BI, AWS QuickSight…), and transformation/ETL tools (dbt, Fivetran, Airflow…). For each of these data sources, we need to support all Sifflet features (catalog, lineage, monitoring…).
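
To give a structural feel for what “supporting all Sifflet features for each source” can look like, here is a minimal sketch of a common integration contract. It is purely illustrative - the names (DataSourceIntegration, CatalogAsset, LineageEdge, HealthReport) are our assumptions, not Sifflet’s actual API:

    import java.util.List;

    // Hypothetical contract each data-source integration could implement.
    // All names here are illustrative assumptions, not Sifflet's real code.
    interface DataSourceIntegration {
        String sourceType();                              // e.g. "bigquery", "looker", "dbt"
        List<CatalogAsset> listAssets();                  // catalog: what the source exposes
        List<LineageEdge> lineageFor(CatalogAsset asset); // lineage: how assets derive from one another
        HealthReport monitor(CatalogAsset asset);         // monitoring: freshness, volume, schema drift...
    }

    record CatalogAsset(String qualifiedName, String kind) {}
    record LineageEdge(String upstreamAsset, String downstreamAsset) {}
    record HealthReport(boolean healthy, String detail) {}

Under a contract like this, each new warehouse, BI tool, or ETL product amounts to one more implementation of the interface.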

As each integration requires deep knowledge of the API, data model, and behavior of each data source, Sifflet has a team dedicated to building these integrations. As a member of this team, you will:

  • Design and implement new integrations with data products. This often requires researching how each data source behaves, then thinking hard about how to model it within the Sifflet platform.
  • Make the necessary changes to architecture and implementation to scale our data ingestion engine - some of our customers connect Sifflet to really large instances (a simplified sketch of the batching idea follows this list).
  • Add new capabilities to existing integrations.
  • Add support for completely new integration types - which entails defining how they will be displayed and integrated within the Sifflet application.
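
To make the scaling point above concrete, here is a deliberately simplified sketch of one common pattern for very large instances: ingesting source metadata in bounded batches instead of all at once. Everything here (class and method names, the fetchPage callback) is our illustration under assumptions, not Sifflet’s actual ingestion engine:

    import java.util.List;
    import java.util.function.BiFunction;
    import java.util.function.Consumer;

    // Illustrative only: page through a large source in fixed-size batches so
    // that memory stays bounded no matter how big the customer's instance is.
    public class BatchedIngestion {

        static <T> void ingestAll(
                BiFunction<Integer, Integer, List<T>> fetchPage, // (offset, limit) -> one page
                int batchSize,
                Consumer<List<T>> persist) {
            int offset = 0;
            while (true) {
                List<T> page = fetchPage.apply(offset, batchSize);
                if (page.isEmpty()) break; // no more rows: done
                persist.accept(page);      // handle one bounded batch at a time
                offset += page.size();
            }
        }

        public static void main(String[] args) {
            List<String> fakeTables = List.of("t1", "t2", "t3", "t4", "t5");
            ingestAll(
                    (offset, limit) -> fakeTables.subList(offset, Math.min(offset + limit, fakeTables.size())),
                    2,
                    batch -> System.out.println("persisted " + batch));
            // persisted [t1, t2] / persisted [t3, t4] / persisted [t5]
        }
    }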

This is a key moment to join Sifflet: we’re still a small team (about 20 engineers) with a lot of room to grow, so you’ll have a major impact on the development of our product and its underlying architecture.

Some projects you could be working on

  • Extend our SQL parser to support field lineage for a given SQL dialect - that is, detect which columns were used to compute another column, from query logs, potentially across different database systems (a toy sketch follows this list).
  • Add catalog support for a vector database (allow users to search what kind of data is stored in a vector database, and where this data comes from).
  • Optimize the queries issued by our ingestion engine to reduce the cost incurred by customers when monitoring their datasources with Sifflet.
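
As a flavor of the field-lineage project in the list above, here is a deliberately tiny, regex-based sketch that handles one narrow statement shape. It is our illustration only - real query logs need a proper parser for each SQL dialect:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy field-lineage extractor for one narrow shape:
    //   CREATE TABLE <out> AS SELECT <expr [AS alias], ...> FROM <table>
    // It only shows the core idea: map each output column back to the
    // source columns it was computed from.
    public class FieldLineageSketch {

        private static final Pattern CTAS = Pattern.compile(
                "CREATE TABLE (\\w+) AS SELECT (.+) FROM (\\w+)",
                Pattern.CASE_INSENSITIVE);

        /** Returns output column -> source columns, e.g. total -> [price, qty]. */
        static Map<String, List<String>> fieldLineage(String sql) {
            Matcher m = CTAS.matcher(sql.trim());
            if (!m.matches()) throw new IllegalArgumentException("unsupported SQL: " + sql);

            Map<String, List<String>> lineage = new LinkedHashMap<>();
            for (String item : m.group(2).split(",")) {   // naive: breaks on fn(a, b)
                String[] parts = item.trim().split("(?i)\\s+AS\\s+");
                String expr = parts[0].trim();
                String alias = (parts.length > 1 ? parts[1] : expr).trim();
                List<String> sources = new ArrayList<>();
                for (String token : expr.split("\\W+")) { // every identifier in the expression
                    if (!token.isEmpty() && !token.matches("\\d+")) sources.add(token);
                }
                lineage.put(alias, sources);
            }
            return lineage;
        }

        public static void main(String[] args) {
            System.out.println(fieldLineage(
                    "CREATE TABLE revenue AS SELECT price * qty AS total, region FROM sales"));
            // prints: {total=[price, qty], region=[region]}
        }
    }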

Our stack

  • Applications written in (modern) Java, to tap into the huge data ecosystem offered by this language; Spring Boot 3.
  • Other teams at Sifflet use TypeScript + Vue.js (frontend) or Python. You may need to write small chunks of code in these languages too.
  • A few supporting services: GitLab CI, Prometheus/Loki/Grafana, Sentry…

While they’re not directly part of our stack, expect to gain a lot of knowledge about many products in the modern data ecosystem. The subtleties of BigQuery or Snowflake will soon be very familiar to you.

Preferred qualifications
  • More than two years of experience in a backend engineering role or equivalent. Data engineers who want to move to a backend engineering position are also welcome.
  • General knowledge of some of these topics: data warehouses, data visualization solutions, ETL pipelines… You don’t have to know everything upfront, of course; you’ll pick up what you need on the job.
  • Willingness to learn Java and Spring Boot if you don’t already know this ecosystem.
  • You value ownership of your projects from design to production and aren’t afraid of taking initiatives.

None of the people who joined Sifflet perfectly matched the described requirements for the role. If you’re interested in this position but don’t tick all the boxes above, feel free to apply anyway!

Are we the company you’re looking for?
  • We have offices in Paris, but we’re very remote-friendly - several team members are fully remote.
  • We offer a competitive salary and company equity.
  • We have experts on many topics, so there’s always someone to help. We also have weekly tech talks where everyone can discuss a cool project or technology.
  • We’re constantly exposed to the intricacies of the modern data ecosystem - you’ll become very knowledgeable about data engineering and the modern data stack, and about how data is used in enterprises.
  • We’re building a genuinely great product, and we think you’ll love the team!