Software Engineer, Data Acquisition

Mistral AI

Paris

Hybrid

EUR 60 000 - 80 000

Full-time

Posted 3 days ago

Job summary

A leading AI company in Paris seeks a Web Crawling and Data Indexing Engineer. This role involves developing web crawlers, managing data extraction, and collaborating with teams to optimize data processes. The ideal candidate will have strong skills in Python and experience with web scraping libraries. Join us to shape the future of AI and enjoy competitive salary and benefits.

Benefits

Competitive salary and equity
Health insurance
Transportation allowance
Sport allowance
Meal vouchers
Private pension plan
Generous parental leave policy
Visa sponsorship

Qualifications

  • Proficiency in Python, Java, or C++.
  • Hands‑on experience with web scraping libraries/frameworks.
  • Experience with SQL and/or NoSQL databases.

Responsibilities

  • Develop and maintain web crawlers using Python.
  • Utilize headless browsing techniques for data collection.
  • Collaborate with teams to scrape and integrate data from APIs.
  • Design and manage distributed job queues.

Skills

Python
Web scraping
Data structures
HTTP/HTTPS protocols
Algorithms

Tools

Beautiful Soup
Scrapy
Postgres
Docker
Redis

Job description

About Mistral

At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.

We democratize AI through high-performance, optimized, open-source, and cutting-edge models, products, and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.

We are a dynamic, collaborative team passionate about AI and its potential to transform society.

Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego, and team-spirited.

Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.

Role Summary

We are looking for a skilled and motivated Web Crawling and Data Indexing Engineer to join our dynamic engineering team. The ideal candidate should have a solid background in web scraping, data extraction, and indexing, with experience using advanced tools and technologies to collect and process large-scale data from diverse web sources.

What you will do
  • Develop and maintain web crawlers using Python libraries such as Beautiful Soup to extract data from target websites
  • Utilize headless browsing techniques (e.g., headless Chrome driven through the DevTools Protocol) to automate and optimize data collection processes
  • Collaborate with cross-functional teams to identify, scrape, and integrate data from APIs to support business objectives
  • Create and implement efficient parsing patterns using regular expressions, XPaths, and CSS selectors to ensure accurate data extraction
  • Design and manage distributed job queues using technologies such as Redis, Kubernetes, and Postgres to handle large-scale data processing tasks
  • Develop strategies to monitor and ensure data quality, accuracy, and integrity throughout the crawling and indexing process
  • Continuously improve and optimize existing web crawling infrastructure to maximize efficiency and adapt to new challenges

About You

Core Programming & Web Technologies
  • Proficiency in Python, Java, or C++
  • Strong understanding of HTTP/HTTPS protocols and web communication
  • Knowledge of HTML, CSS, and JavaScript for parsing and navigating web content

Data Structures & Algorithms
  • Mastery of queues, stacks, hash maps, and other data structures for efficient data handling
  • Ability to design and optimize algorithms for large-scale web crawling

Web Scraping & Data Acquisition
  • Hands‑on experience with web scraping libraries/frameworks (e.g., Scrapy, BeautifulSoup, Selenium, Playwright)
  • Understanding of how search engines work and best practices for web crawling optimization

Databases & Data Storage
  • Experience with SQL and/or NoSQL databases (e.g., PostgreSQL, MongoDB) for storing and managing crawled data
  • Familiarity with data warehousing and scalable storage solutions

Distributed Systems & Big Data
  • Knowledge of distributed systems (e.g., Hadoop, Spark) for processing large datasets

Data Analysis & Visualization
  • Proficiency in Pandas, NumPy, and Matplotlib for analyzing and visualizing scraped data

Bonus Skills (Nice‑to‑Have)
  • Experience applying Machine Learning to improve crawling efficiency or accuracy
  • Familiarity with cloud platforms (AWS, GCP) and containerization (Docker) for deployment
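The data-structure requirements above (hash maps for de-duplication, queues for the frontier) show up concretely in URL dedup. A minimal sketch, assuming one particular canonicalization scheme (lowercased host, stripped fragment, sorted query string); the `Deduper` class is a hypothetical helper invented for illustration, not part of any named library.

```python
# Hash-set based URL de-duplication with light canonicalization,
# so equivalent URLs collapse to one frontier entry.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonicalize(url):
    """Return a normalized form of `url` to use as a dedup key."""
    parts = urlsplit(url)
    # Sort query parameters so ?b=2&a=1 and ?a=1&b=2 match.
    query = urlencode(sorted(parse_qsl(parts.query)))
    path = parts.path or "/"
    # Lowercase scheme/host, drop the fragment.
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path, query, ""))

class Deduper:
    """Remembers canonical URLs; `add` returns True only for new ones."""
    def __init__(self):
        self._seen = set()

    def add(self, url):
        key = canonicalize(url)
        if key in self._seen:
            return False
        self._seen.add(key)
        return True

dedup = Deduper()
print(dedup.add("https://Example.com/page?b=2&a=1"))    # True  (first sight)
print(dedup.add("https://example.com/page?a=1&b=2#x"))  # False (same page)
```

At crawl scale the in-memory set would typically be replaced by a Bloom filter or a shared store such as Redis, but the interface stays the same.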
Hiring Process
  • Introduction call - 35 min
  • Hiring Manager Interview - 30 min
  • Live‑coding Interview - 45 min
  • System Design Interview - 45 min
  • Culture‑fit discussion - 30 min
  • Reference checks
Location & Remote

This role is primarily based at one of our European offices (Paris, France or London, UK). We will prioritize candidates who either reside in Paris or are open to relocating. We strongly believe in the value of in‑person collaboration to foster strong relationships and seamless communication within our team.

In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting — currently France, UK, Germany, Belgium, Netherlands, Spain and Italy. In that case, we ask all new hires to visit our Paris office:

  • for the first week of their onboarding (accommodation and travelling covered)
  • then at least 3 days per month
What we offer
  • Competitive salary and equity
  • Health insurance
  • Transportation allowance
  • Sport allowance
  • Meal vouchers
  • Private pension plan
  • Generous parental leave policy
  • Visa sponsorship

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
