Data Engineer

Ampstek

Salamanca

On-site

EUR 40,000 - 65,000

Full-time

Posted 2 days ago

Vacancy description

A leading company is seeking a Data Engineer in Salamanca to create and maintain data pipelines and analytics tools. The ideal candidate will have 5+ years of experience in data engineering and be proficient with SQL and big data technologies. This role involves working with various teams to optimize data delivery and ensure data integrity across platforms.

Qualifications

  • 5+ years of experience in a Data Engineer role.
  • Experience with big data tools and AWS cloud services.
  • Strong knowledge of SQL and data pipeline management tools.

Responsibilities

  • Create and maintain optimal data pipeline architecture.
  • Build analytics tools to provide actionable insights.
  • Manage and design data privacy and security solutions.

Knowledge

SQL
Big Data
Data Privacy
Agile Methodologies
Teamwork
Analytic Skills
Project Management

Education

Graduate degree in Computer Science
Graduate degree in Statistics
Graduate degree in Informatics
Graduate degree in Information Systems
Graduate degree in a quantitative field

Tools

Hadoop
Spark
Kafka
AWS EC2
AWS EMR
AWS RDS
AWS Redshift
Postgres
Cassandra
Tableau
Python
Java
C++
Scala

Job description

Create and maintain optimal data pipeline architecture.

Assemble large, complex data sets that meet functional / non-functional business requirements.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (a minimal sketch follows this list of duties).

Build analytics tools (Tableau) that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Work with data and analytics experts to strive for greater functionality in our data systems.

Work with system integration and middleware (MuleSoft, Talend, Solace, etc.).

Perform proofs of concept with customers for data integration and data loading.

Manage structured and unstructured data sets.

Manage and design data privacy, integrity, and security solutions.

Manage data with high confidentiality, integrity, and privacy.
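
For illustration, the following is a minimal PySpark sketch of the kind of extract-transform-load work described above. It assumes a Spark-on-AWS setup such as EMR, which the tool list suggests but does not mandate; the paths, view name, and column names are placeholders, not details from the vacancy.

    from pyspark.sql import SparkSession

    # Minimal ETL sketch: extract raw events, transform with Spark SQL, load to S3.
    # All paths, view and column names below are illustrative placeholders.
    spark = SparkSession.builder.appName("daily_orders_etl").getOrCreate()

    # Extract: read raw JSON events from an assumed S3 landing zone.
    raw = spark.read.json("s3a://example-landing-zone/orders/")
    raw.createOrReplaceTempView("raw_orders")

    # Transform: aggregate with SQL, the posting's primary query language.
    daily = spark.sql("""
        SELECT customer_id,
               DATE(order_ts) AS order_date,
               SUM(amount)    AS total_amount,
               COUNT(*)       AS order_count
        FROM raw_orders
        WHERE amount IS NOT NULL
        GROUP BY customer_id, DATE(order_ts)
    """)

    # Load: write a partitioned Parquet data set for downstream analytics
    # (e.g. Redshift Spectrum or Tableau extracts).
    (daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3a://example-curated-zone/daily_orders/"))

    spark.stop()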

Skills

Experience working with agile methodologies (Scrum or SAFe).

Good teamwork and communication skills (high-performing team environment).

Able to self-organize and stay focused on delivering value to the business.

Always work with integrity, passion and courage

Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and NoSQL/unstructured databases.

Familiarity with a variety of databases (Microsoft SQL is mandatory).

Familiarity with reporting tools and dashboards such as Tableau.

Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.

Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Strong analytic skills related to working with unstructured datasets.

Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.

A successful history of manipulating, processing and extracting value from large disconnected datasets.

Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores; middleware knowledge (e.g. MuleSoft, Solace) is preferred. A minimal stream-processing sketch follows this list.

Strong project management and organizational skills.

Experience supporting and working with cross-functional teams in a dynamic environment.
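
As referenced in the skills above, here is a minimal stream-processing sketch using Spark Structured Streaming with a Kafka source, both of which appear in the posting's tool list. The broker address, topic, schema, and output paths are illustrative assumptions, not requirements from the vacancy.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    # Minimal stream-processing sketch: consume a Kafka topic with Spark
    # Structured Streaming and append the parsed records to Parquet.
    # Broker address, topic, schema and paths are illustrative placeholders.
    spark = SparkSession.builder.appName("orders_stream").getOrCreate()

    schema = StructType([
        StructField("order_id", StringType()),
        StructField("customer_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Source: read the message queue (Kafka) as an unbounded streaming DataFrame.
    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load())

    # Parse the JSON payload and keep only the fields of interest.
    parsed = (stream
              .select(from_json(col("value").cast("string"), schema).alias("o"))
              .select("o.order_id", "o.customer_id", "o.amount"))

    # Sink: append to Parquet with a checkpoint so progress survives restarts.
    query = (parsed.writeStream
             .format("parquet")
             .option("path", "s3a://example-curated-zone/orders_stream/")
             .option("checkpointLocation", "s3a://example-checkpoints/orders_stream/")
             .start())

    query.awaitTermination()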

We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:

Experience with big data tools: Hadoop, Spark, Kafka, etc.

Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this list).

Experience with AWS cloud services: EC2, EMR, RDS, Redshift.

Experience with stream-processing systems: Storm, Spark Streaming, etc.

Experience with object-oriented / functional scripting languages: Python, Java, C++, Scala, etc.
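
As referenced in the workflow-management item above, here is a minimal Airflow sketch of a two-task daily pipeline. The DAG id, schedule, and task bodies are illustrative placeholders, and Airflow 2.4+ is assumed for the schedule argument.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Minimal workflow-management sketch: two placeholder tasks wired into a
    # daily Airflow DAG. The DAG id, schedule and task bodies are illustrative.

    def extract_orders(**context):
        # Placeholder: pull raw data from a source system for the run date.
        print("extracting orders for", context["ds"])

    def load_orders(**context):
        # Placeholder: load the transformed data into the warehouse.
        print("loading orders for", context["ds"])

    with DAG(
        dag_id="orders_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+ argument; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_orders", python_callable=load_orders)

        extract >> load  # run the load only after the extract succeeds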
