
Data Engineer

HBX Group

Málaga

Hybrid

EUR 45.000 - 85.000

Full-time

Posted 7 days ago

Job Summary

An innovative company is seeking a Data Engineer to join their expanding Data Delivery team. This role involves optimizing data flows, designing new pipeline architectures, and collaborating with various stakeholders to enhance data insights. With a modern analytics stack that includes tools like Snowflake and AWS, you will have the opportunity to work on impactful projects that drive business efficiency. If you are enthusiastic about data and eager to learn, this position offers a dynamic environment where you can grow and contribute significantly to the company's success.

Qualifications

  • Experience with data modeling, data pipelines, testing, and documentation.
  • Strong technical skills in big data tools and relational databases.

Responsibilities

  • Create and maintain optimal data pipeline architecture.
  • Build analytics tools for actionable insights into business performance.
  • Work with stakeholders to support their data infrastructure needs.

Skills

Apache Airflow
SQL
Data Pipeline Architecture
Big Data Technologies
Analytic Skills
Project Management
Cross-Functional Team Collaboration

Tools

AWS
Tableau
DBT

Job Description

Join our expanding Data Delivery team, taking on new challenges to support strategic projects. We aim to put data insights at the heart of every commercial action in our e-commerce business. To do this we need people who are experienced in data modelling, data pipelines, testing and documentation. We want our team to drive our commercial success through the solutions we build.

The company blends a great working environment with plenty of challenges. Our HQ is in Palma de Mallorca, although the team is based in multiple locations and we practice a hybrid work model. Your internal collaborators are global, with users all over the world.

We want you to be productive, so we have a modern analytics stack that includes Snowflake for data warehousing, DBT and Airflow for data pipelines, Python for ML modelling and Tableau for data visualizations.
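To illustrate the kind of work this stack automates (not code from the company), here is a minimal, hypothetical extract-transform-load sketch in Python. It uses the standard library's sqlite3 as a stand-in for a warehouse like Snowflake; all table and column names are invented for the example:

```python
import sqlite3

# Hypothetical raw booking events, as a pipeline might land them
# from an external source before loading into the warehouse.
RAW_EVENTS = [
    {"booking_id": 1, "amount_eur": "120.50", "status": "confirmed"},
    {"booking_id": 2, "amount_eur": "80.00", "status": "cancelled"},
    {"booking_id": 3, "amount_eur": "210.25", "status": "confirmed"},
]

def run_pipeline(conn):
    """Extract raw events, transform them, and load a BI table."""
    conn.execute(
        "CREATE TABLE bi_bookings (booking_id INTEGER, amount_eur REAL, status TEXT)"
    )
    # Transform: cast amounts to floats and keep only confirmed bookings.
    rows = [
        (e["booking_id"], float(e["amount_eur"]), e["status"])
        for e in RAW_EVENTS
        if e["status"] == "confirmed"
    ]
    conn.executemany("INSERT INTO bi_bookings VALUES (?, ?, ?)", rows)
    # A downstream dashboard (e.g. in Tableau) would query the loaded table.
    return conn.execute("SELECT SUM(amount_eur) FROM bi_bookings").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(run_pipeline(conn))  # 330.75
```

In the real stack, the transform step would live in DBT models and the whole run would be scheduled by Airflow rather than called inline.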

This role covers the wide range of projects that involve data engineering. The main project needing support in the short term is finalizing the Atlas report migration, where this role will map the new data sources from Mitra to Snowflake BI tables. There are also projects linked to Salesforce data ingestion, upgrading our Airflow deployment to the latest version, developing our real-time data pipelines, and updating our architecture to improve data security and permissions.

The impact of this role is to improve business efficiency. The ability to build and maintain stable and on-time data flows means that everyone around the company is able to consume data for insights, alerts and reporting.

Job Responsibilities

As a Data Engineer you will be expected to contribute to all parts of our data architecture. This means that you might be optimizing data flows, designing a new pipeline architecture or landing data from a new external data source.

We don’t expect you to be an expert in all of these things initially, but we want you to be enthusiastic and willing to learn them all.

You should also be comfortable working with stakeholders from around the business to understand their problems and translate them into solutions that are insightful and intuitive. Within our team we utilize agile working methods and you can expect to be given responsibility from the first moment.

So, are you up for the challenge? See the main responsibilities below.

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
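The "optimal data pipeline architecture" above boils down to running tasks in dependency order, which is exactly what an orchestrator like Airflow enforces. As a hedged sketch (task names are invented, and the real system would use Airflow DAGs, not this), the core idea can be shown with the standard library's graphlib:

```python
from graphlib import TopologicalSorter

# Hypothetical task dependencies for a daily pipeline: each task maps
# to the set of tasks that must complete before it can run.
DAG = {
    "extract_bookings": set(),
    "extract_rates": set(),
    "transform_join": {"extract_bookings", "extract_rates"},
    "load_bi_tables": {"transform_join"},
    "refresh_dashboard": {"load_bi_tables"},
}

def execution_order(dag):
    """Return one valid run order that respects every dependency."""
    return list(TopologicalSorter(dag).static_order())

order = execution_order(DAG)
print(order[-1])  # refresh_dashboard
```

An orchestrator adds scheduling, retries, and parallelism on top of this ordering, but the dependency graph is the part a data engineer designs.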

Required Skills

  • Experience of Apache Airflow or a comparable orchestration tool.
  • Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • A successful history of manipulating, processing and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Strong technical skills, ideally covering big data tools, relational SQL and NoSQL databases, cloud databases, data pipeline and workflow management tools, AWS cloud services, stream-processing systems, and object-oriented / object-function scripting languages.
  • Self-directed and comfortable in a role supporting the data needs of multiple teams, systems and products.