Data Engineer

TN France

France

On-site

EUR 50,000 - 80,000

Full-time

17 days ago


Job Summary

Join a forward-thinking company as a Data Engineer, where you will design and maintain scalable data pipelines and architectures. This role is crucial in managing data workflows, ensuring accuracy, and optimizing processing. You will work closely with a talented team, leveraging cutting-edge technologies like PySpark and AWS to drive data-driven decision-making. If you're passionate about data and eager to make an impact in a dynamic environment, this opportunity is perfect for you. Embrace the chance to grow in your career while contributing to innovative projects that shape the future of data engineering.

Benefits

Attractive salary & benefits
Flexible working hours
Professional development opportunities
Health insurance
Remote work options

Qualifications

  • 2+ years of experience in data engineering or related field.
  • Proven experience developing ETL pipelines and data processing workflows.
  • Hands-on experience with big data technologies.

Responsibilities

  • Design, build, and maintain scalable data pipelines.
  • Develop and optimize ETL workflows for data sources.
  • Collaborate with data scientists and analysts on projects.

Skills

Python
ETL Development
Data Pipeline Management
Data Modeling
Big Data Technologies
Cloud Computing
Problem-Solving
Communication

Education

Bachelor's degree in Computer Science
Equivalent work experience in data engineering

Tools

PySpark
Pandas
SQL
AWS
GCP
Azure
Apache Spark
Hadoop
Kafka

Job Description

  • Work within a company with a solid track record of success
  • Attractive salary & benefits

The Job

Job Description:

We are looking for a skilled Data Engineer to join our team. The ideal candidate will have strong experience in designing, building, and maintaining scalable data pipelines and architectures. You will play a critical role in managing data workflows, ensuring data integrity, and optimizing data processing.

Responsibilities:

  • Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to process and transform large datasets.
  • ETL & Data Integration: Develop and optimize ETL (Extract, Transform, Load) workflows for structured and unstructured data sources.
  • Big Data Processing: Work with PySpark and Pandas to handle large-scale data processing tasks (a minimal sketch follows this list).
  • Database Management: Design, implement, and manage relational (SQL) and non-relational databases for data storage and retrieval.
  • Cloud Technologies: Leverage cloud platforms such as AWS, GCP, or Azure to deploy and manage data infrastructure.
  • Collaboration: Work closely with data scientists, analysts, and software engineers to support analytical and machine learning projects.
  • Data Quality & Performance Optimization: Ensure data accuracy, consistency, and security while optimizing performance.
  • Monitoring & Troubleshooting: Identify and resolve data pipeline performance bottlenecks and failures.
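
To make the pipeline responsibilities above concrete, here is a minimal PySpark ETL sketch in the extract-transform-load shape the posting describes. It is an illustration only: the S3 bucket, file names, column names, and aggregation are hypothetical placeholders, not details from this role.

```python
# Minimal ETL sketch: read raw CSV, clean and aggregate, write Parquet.
# Paths and columns (s3://example-bucket, customer_id, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: load raw order records from object storage
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

# Transform: cast types, drop rows with missing amounts, aggregate per customer
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("customer_id")
       .agg(F.sum("amount").alias("total_spent"),
            F.count("*").alias("order_count"))
)

# Load: write curated results as Parquet for downstream analytics
orders.write.mode("overwrite").parquet("s3://example-bucket/curated/customer_totals/")

spark.stop()
```

The same read-clean-aggregate-write pattern scales from a local test file to a cluster job, which is why postings like this pair PySpark with cloud object storage.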

The Profile

Required Work Experience:

  • 2+ years of experience in data engineering or a related field.
  • Proven experience developing ETL pipelines and data processing workflows.
  • Hands-on experience with PySpark, Pandas, and SQL.
  • Experience working with big data technologies such as Apache Spark, Hadoop, or Kafka (preferred).
  • Familiarity with cloud data solutions (AWS, GCP, or Azure).

Required Skills:

  • Programming: Strong proficiency in Python (PySpark, Pandas) or Scala.
  • Data Modeling & Storage: Experience with relational databases (PostgreSQL, MySQL, SQL Server) and NoSQL databases (MongoDB, Cassandra).
  • Big Data & Distributed Computing: Knowledge of Apache Spark, Hadoop, or Kafka.
  • ETL & Data Integration: Ability to develop efficient ETL processes and manage data pipelines (see the sketch after this list).
  • Cloud Computing: Experience with AWS (S3, Redshift, Glue), GCP (BigQuery), or Azure (Data Factory, Synapse).
  • Data Warehousing: Understanding of data warehousing concepts and best practices.
  • Problem-Solving: Strong analytical skills to troubleshoot and optimize data pipelines.
  • Communication: Must be proficient in spoken English to collaborate with US-based teams.
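
As a small illustration of how the Pandas, SQL, and data-quality skills above fit together, here is a sketch that pulls rows from a relational source and writes a cleaned aggregate to Parquet. The connection string, table, and column names are hypothetical assumptions for the example.

```python
# Minimal Pandas + SQL sketch: extract from PostgreSQL, clean, aggregate, write Parquet.
# The DSN, table "orders", and its columns are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost:5432/warehouse")

# Extract: pull source rows through SQLAlchemy
df = pd.read_sql("SELECT customer_id, amount, created_at FROM orders", engine)

# Transform: basic data-quality pass, then a daily revenue rollup
df = df.dropna(subset=["amount"])
df["amount"] = df["amount"].astype(float)
df["created_at"] = pd.to_datetime(df["created_at"])
daily = (
    df.groupby(df["created_at"].dt.date)["amount"]
      .sum()
      .reset_index(name="daily_revenue")
)

# Load: write the result for a downstream warehouse or data lake
daily.to_parquet("daily_revenue.parquet", index=False)
```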

Education Requirements:

  • Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field (preferred).
  • Equivalent work experience in data engineering will also be considered.

The Employer

Our client is a Wisconsin-based consulting firm dedicated to connecting top global talent with leading U.S. companies. The firm specializes in sourcing skilled professionals, conducting rigorous screening and technical assessments, and preparing candidates for opportunities in the U.S. job market.
