
Data Engineer (Senior) ID40199

AgileEngine

Castilla y León

Hybrid

EUR 50,000 - 70,000

Full-time

Today

Vacancy description

A forward-thinking software company in Castilla y León is looking for a Senior Data Engineer. You will take ownership of data infrastructure, working with AWS, Hadoop, and Apache Spark to develop and optimize data solutions. The ideal candidate has 5+ years of experience, strong skills in data engineering, and proficiency in Python or Java. This position offers competitive compensation, diverse projects, and benefits for professional growth.

Benefits

Professional growth opportunities
Competitive compensation
Exciting projects
Flextime

Qualifications

  • 5+ years of experience in data engineering, software engineering, or related roles.
  • Strong hands-on expertise with AWS services (S3, EMR, Glue, Lambda, Redshift, etc.).
  • Deep knowledge of big data ecosystems, including Hadoop and Apache Spark.
  • Proficiency in Python, Java, or Scala for data processing and automation.
  • Strong SQL skills and experience with relational and NoSQL databases.

Responsibilities

  • Design, build, and maintain large-scale data pipelines and processing systems in AWS.
  • Develop and optimize distributed data workflows using Hadoop, Spark, and related technologies.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
  • Monitor, troubleshoot, and improve performance of data systems and pipelines.

Skills

AWS services
Hadoop
Apache Spark
SQL
Python
Java
Scala
ETL/ELT processes

Education

Bachelor's or Master's degree in Computer Science, Engineering, or related field

Tools

Airflow
Docker
Kubernetes

Job description

AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

Why join us

If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

About the role

We are looking for a Senior Data Engineer to take ownership of our data infrastructure, designing and optimizing high-performance, scalable solutions. You’ll work with AWS and big data frameworks like Hadoop and Spark to drive impactful data initiatives across the company.

What you will do
  • Design, build, and maintain large-scale data pipelines and data processing systems in AWS;
  • Develop and optimize distributed data workflows using Hadoop, Spark, and related technologies;
  • Collaborate with data scientists, analysts, and product teams to deliver reliable and efficient data solutions;
  • Implement best practices for data governance, security, and compliance;
  • Monitor, troubleshoot, and improve the performance of data systems and pipelines;
  • Mentor junior engineers and contribute to building a culture of technical excellence;
  • Evaluate and recommend new tools, frameworks, and approaches for data engineering.

MUST HAVES
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field;
  • 5+ years of experience in data engineering, software engineering, or related roles;
  • Strong hands-on expertise with AWS services (S3, EMR, Glue, Lambda, Redshift, etc.);
  • Deep knowledge of big data ecosystems, including Hadoop (HDFS, Hive, MapReduce) and Apache Spark (PySpark, Spark SQL, streaming);
  • Strong SQL skills and experience with relational and NoSQL databases;
  • Proficiency in Python, Java, or Scala for data processing and automation;
  • Experience with workflow orchestration tools (Airflow, Step Functions, etc.);
  • Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts;
  • Excellent problem-solving skills and ability to work in fast-paced environments;
  • Ability to work in the German time zone;
  • Upper-Intermediate English level.

NICE TO HAVES
  • Experience with real-time data streaming platforms (Kafka, Kinesis, Flink);
  • Knowledge of containerization and orchestration (Docker, Kubernetes);
  • Familiarity with data governance, lineage, and catalog tools;
  • Previous leadership or mentoring experience.

PERKS AND BENEFITS
  • Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
  • Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
  • A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands.
  • Flextime: Tailor your schedule for an optimal work-life balance by choosing between working from home and going to the office, whatever makes you the happiest and most productive.