Data Ops Engineer (x/f/m)

Doctolib

Paris

On-site

EUR 50,000 - 80,000

Full-time

Posted 7 days ago

Job Summary

Join a forward-thinking company as a DataOps Engineer, where you will play a pivotal role in transforming healthcare through innovative data solutions. Collaborate with talented data professionals to design and implement robust data infrastructures that streamline operations and enhance decision-making. This position offers an exciting opportunity to work on large-scale data systems, ensuring data quality and availability while pushing the boundaries of technology in a supportive and inclusive environment. If you are passionate about making a meaningful impact in healthcare, this role is perfect for you.

Benefits

Health Insurance
RTT Days
Parental Leave
Wellbeing Programs
Flexible Work Policies
Lunch Vouchers
Bicycle Subsidies

Qualifications

  • 2+ years as a DataOps Engineer with proven data infrastructure experience.
  • Strong proficiency in data engineering tools and technologies.

Responsibilities

  • Design and implement scalable data infrastructure for large datasets.
  • Build and maintain data pipelines for efficient data flow.

Skills

DataOps Engineering
Data Infrastructure Design
Python
Problem-Solving
Cloud Infrastructure
Communication

Tools

Redshift
BigQuery
Docker
Airflow
Terraform
AWS
Azure
GCP

Job Description

Join a team of passionate and hardworking entrepreneurs to transform healthcare!

Working in the tech team at Doctolib involves building innovative products and features to improve the daily lives of care teams and patients. We work in feature teams in an agile environment while collaborating with product, engineering, design, and business teams.

What you'll do:

We are seeking a highly skilled and motivated DataOps Engineer to join our dynamic data team.

As a DataOps Engineer, you will play a crucial role in designing, implementing, and managing our data infrastructure, ensuring seamless data flow across the organization and enabling data-driven decision-making.

You will collaborate closely with data engineers, data scientists, and other stakeholders to build and maintain robust, scalable, and efficient data systems.

  1. Data Infrastructure Design and Implementation: Design and implement a scalable and reliable data infrastructure that supports the collection, processing, storage, and analysis of large-scale datasets, while upholding security and privacy best practices.
  2. Data Pipeline Development and Maintenance: Build and maintain data pipelines that efficiently extract, transform, and load data from various sources into our data warehouse (see the sketch after this list).
  3. Automation and Orchestration: Implement automation and orchestration tools to streamline infrastructure provisioning and data workflows, reduce manual effort, and improve operational efficiency.
  4. Monitoring and Troubleshooting: Monitor data platform performance and reliability, identify and troubleshoot issues, and implement proactive solutions to ensure data quality and availability. Also track and streamline platform costs to identify optimization and savings opportunities.

  5. Collaboration: Collaborate with data engineers, data scientists, and other stakeholders to gather requirements, understand data needs, and provide technical expertise.
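
To make the pipeline responsibility concrete, here is a minimal sketch of the kind of extract-and-load job an orchestrator like Airflow (listed under Tools) runs. It is an illustration only: the DAG name, task names, and functions are hypothetical, not Doctolib's actual pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_source_data():
        # Hypothetical: pull raw records from a source system's API or database.
        ...

    def load_to_warehouse():
        # Hypothetical: load transformed rows into the warehouse
        # (e.g. Redshift or BigQuery, as named in the posting).
        ...

    with DAG(
        dag_id="daily_warehouse_load",   # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # the "schedule" argument requires Airflow 2.4+
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_source_data)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
        extract >> load  # load runs only after extract succeeds

Scheduling, retries, and dependency ordering come from the orchestrator; the pipeline logic itself stays in small, testable Python functions.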

Who you are:

If you don't meet all the requirements below but believe this opportunity matches your expectations and experience, we still encourage you to apply!

  • Experience: > 2 years as a DataOps Engineer or in a similar role, with a proven track record of building and maintaining complex data infrastructures.
  • Technical Skills: Strong proficiency in data engineering and infrastructure tools and technologies, such as data warehouse solutions (Redshift, BigQuery), Docker, and Airflow.
  • Programming Skills: Expertise in programming languages like Python.
  • Cloud Infrastructure: Familiarity with cloud infrastructure and services, preferably AWS, Azure, or GCP. Experience with infrastructure-as-code tools such as Terraform. Knowledge of security best practices for networking and IAM.
  • Problem-Solving: Excellent problem-solving skills, focused on identifying and resolving data infrastructure bottlenecks and performance issues.
  • Communication: Strong communication and collaboration skills for effective teamwork.

Bonus Qualifications:

  • Experience building APIs and an understanding of API design best practices, with frameworks such as FastAPI (see the sketch below).
  • Knowledge of Data Governance and Security principles and practices.
  • Experience with CI/CD, including pipelines for data deployments.
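
As an illustration of the API work mentioned above, here is a minimal FastAPI sketch; the endpoint, model, and lookup logic are hypothetical, not an actual Doctolib API.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PipelineStatus(BaseModel):
        name: str
        healthy: bool

    @app.get("/pipelines/{name}", response_model=PipelineStatus)
    def get_pipeline_status(name: str) -> PipelineStatus:
        # Hypothetical lookup; a real service would query the orchestrator's metadata.
        return PipelineStatus(name=name, healthy=True)

Served with, for example, uvicorn: FastAPI validates the response against the Pydantic model and generates OpenAPI documentation automatically.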

What we offer:

  • Impactful work: Make a direct impact on our business by building and optimizing critical data infrastructure.
  • Collaborative environment: Work in a supportive environment alongside talented data professionals.
  • Growth opportunities: Professional development and career advancement within a growing data team.

Additional benefits include health insurance, RTT days, parental leave, wellbeing programs, flexible work policies, lunch vouchers, subsidies for sports and creative classes, bicycle subsidies, and more.

The interview process:

  1. Case study
  2. System Design Interview
  3. Fit interview with the team
  4. Reference and criminal record check
  5. Offer!

Job details:

  • Full-time
  • Remuneration: fixed salary plus bonus based on objectives

If you want to learn more about our tech culture, check out our latest Medium blog articles!

At Doctolib, we believe in improving healthcare access for everyone. We are an equal opportunity employer, celebrating diversity and committed to inclusive hiring practices. We welcome applications regardless of gender, religion, age, sexual orientation, ethnicity, disability, or place of origin. Please inform us if you have a disability and need accommodations during the interview process.

All application data is processed in accordance with our privacy policy. For inquiries or to exercise your data rights, please contact us.
