Senior Data Engineer (Python)

Parser Limited

Burgos

Hybrid

EUR 40,000 - 65,000

Full-time

Posted 3 days ago

Job overview

A leading company is seeking a skilled Data Engineer to maintain data streams and optimize ETL pipelines in a cloud environment. Candidates should have a strong background in SQL, Python, and cloud technologies, with responsibilities including troubleshooting pipeline issues and collaborating with various stakeholders. This role offers competitive compensation, flexible working options, and opportunities to grow in a diverse tech community.

Benefits

Competitive compensation and benefits
Flexible, remote working options
Medical insurance

Requirements

  • Minimum 4 years of experience in data engineering or related roles.
  • Strong programming skills in Python for scalable data solutions.
  • Experience with data pipeline orchestration and cloud platforms.

Responsibilities

  • Build, maintain, and optimize scalable ETL / ELT pipelines.
  • Ensure data availability, reliability, and consistency.
  • Collaborate with cross-functional teams for data alignment.

Skills

Problem-solving
Attention to detail
Proactive mindset
Collaboration
Communication

Education

Bachelor's degree in Computer Science, Data Science, or related field

Tools

SQL
NoSQL databases
Python
AWS
CI/CD pipelines
Dagster

Job description

We are seeking a highly skilled Data Engineer to focus on maintaining data streams and ETL pipelines within a cloud-based environment. The ideal candidate will have experience in building, monitoring, and optimizing data pipelines, ensuring data consistency, and proactively collaborating with upstream and downstream teams to enable seamless data flow across the organization.

In this role, you will troubleshoot and resolve pipeline issues, contribute to enhancing data architecture, implement best practices in data governance and security, and ensure scalability and performance of data solutions. You will understand the business context of data, supporting analytics and decision-making by collaborating with data scientists, analysts, and other stakeholders.

This position requires on-site presence at the client's office in London between 25% and 50% of the time each month.

Key Responsibilities:

  • Build, maintain, and optimize scalable ETL / ELT pipelines using tools such as Dagster or similar (a minimal sketch follows this list).
  • Ensure high data availability, reliability, and consistency through data validation and monitoring practices.
  • Collaborate with cross-functional teams to align data pipeline requirements with business objectives and technical feasibility.
  • Automate data workflows to improve operational efficiency and reduce manual intervention.

Data Integrity & Monitoring

  • Perform regular data consistency checks; identify and resolve anomalies or discrepancies.
  • Implement robust monitoring frameworks to detect and address pipeline failures or performance issues.
  • Work with upstream teams to optimize data ingestion strategies and handoffs.

Collaboration & Stakeholder Management

  • Partner with data scientists, analysts, and business teams to provide trusted, accurate, and well-structured data for analytics and reporting.
  • Communicate complex data concepts clearly to non-technical stakeholders.
  • Maintain documentation for knowledge sharing and continuity.

Infrastructure & Security Management

  • Support cloud-based data platforms such as AWS, ensuring cost-effective and scalable solutions.
  • Implement data governance, compliance, and security best practices.
  • Improve data processing frameworks for performance and resilience.

Continuous Improvement & Business Context Mastery

  • Understand the business implications of data to drive insights and strategic decisions.
  • Identify opportunities to enhance data models and workflows aligned with business needs.
  • Stay updated with emerging data technologies and advocate for their adoption.
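
To make the first responsibility concrete, below is a minimal sketch of a Dagster asset pipeline with a built-in consistency check. It assumes dagster (1.5+) and pandas are installed; the dataset, column names, and check logic are hypothetical illustrations, not this team's actual pipelines.

```python
# Minimal sketch: a two-step Dagster asset pipeline with a data-consistency
# check. Assumes dagster >= 1.5 and pandas; all names are hypothetical.
import pandas as pd
from dagster import AssetCheckResult, Definitions, asset, asset_check

@asset
def raw_orders() -> pd.DataFrame:
    # Extract step; in practice this might read from S3 or a database.
    return pd.DataFrame({"order_id": [1, 2, 2, 3], "amount": [10.0, 25.5, 25.5, -4.0]})

@asset
def cleaned_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Transform step: deduplicate and drop negative amounts.
    deduped = raw_orders.drop_duplicates(subset=["order_id"])
    return deduped[deduped["amount"] >= 0]

@asset_check(asset=cleaned_orders)
def order_ids_unique(cleaned_orders: pd.DataFrame) -> AssetCheckResult:
    # Consistency check: the run is flagged if any order_id repeats.
    return AssetCheckResult(passed=bool(cleaned_orders["order_id"].is_unique))

# Load with `dagster dev -f this_file.py` to materialize and monitor the assets.
defs = Definitions(assets=[raw_orders, cleaned_orders], asset_checks=[order_ids_unique])
```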
Qualifications:

Education & Experience:

Bachelor's degree in Computer Science, Data Science, or related field.

Minimum 4 years of experience in data engineering or related roles.

Technical Skills:

Proficiency with SQL (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).

Strong programming skills in Python, with experience in building scalable data solutions.

Experience with data pipeline orchestration tools such as Dagster or similar.

Familiarity with cloud platforms (AWS) and data services (S3, Redshift, Snowflake).

Understanding of data warehousing concepts and modern warehousing solutions.

Experience with CI/CD pipelines for data workflows.
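
As an illustration of CI/CD for data workflows, here is a minimal data-quality test that could run in a CI job via pytest. The schema contract and sample data are hypothetical; real tests would load fixtures from the project's own sources.

```python
# Minimal sketch: a data-quality test runnable in CI (e.g., via `pytest`).
# Assumes pytest and pandas; the schema and sample extract are hypothetical.
import pandas as pd
import pytest

EXPECTED_COLUMNS = {"order_id", "amount", "created_at"}  # hypothetical contract

@pytest.fixture
def orders() -> pd.DataFrame:
    # In a real CI job this would load a small fixture or sample extract.
    return pd.DataFrame({
        "order_id": [1, 2],
        "amount": [10.0, 5.5],
        "created_at": ["2024-01-01", "2024-01-02"],
    })

def test_schema_contract(orders: pd.DataFrame) -> None:
    # Fail the build early if upstream drops or renames a column.
    assert EXPECTED_COLUMNS.issubset(orders.columns)

def test_no_negative_amounts(orders: pd.DataFrame) -> None:
    assert (orders["amount"] >= 0).all()
```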

Soft Skills:

Problem-solving skills, attention to detail, proactive mindset.

Ability to work collaboratively in a fast-paced environment.

Excellent communication skills for translating technical concepts to non-technical stakeholders.

Nice-to-Have Qualifications:

Experience with streaming technologies like Kafka (a minimal consumer sketch follows this list).

Familiarity with Docker and ECS for containerized data workflows.

Experience with BI tools such as Tableau or Power BI.

Understanding of machine learning pipelines.
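
For the Kafka item above, a minimal consumer loop using the confluent-kafka client is sketched below; the broker address, topic, and consumer group are hypothetical placeholders.

```python
# Minimal sketch: consuming a Kafka topic with the confluent-kafka client.
# Broker, topic, and group id are hypothetical placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-etl",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)  # returns None if no message arrived in time
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Hand the raw bytes to the downstream transform step.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```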

What We'll Offer You:
  • Join an organization experiencing rapid growth and innovation.
  • Be part of a diverse community of tech experts.
  • Competitive compensation and benefits.
  • Flexible, remote working options.
  • Medical insurance.

Come and join our #ParserCommunity.
