A leading global data solutions provider is seeking a Data Engineer in Madrid to build and maintain scalable data systems. This role involves designing data pipelines, managing cloud infrastructure, and optimising CI/CD processes. Ideal candidates should have strong Python and SQL proficiency, hands-on experience with AWS, and excellent collaboration skills. Join a dynamic team to contribute to innovative data products and services.
Role Overview
Parameta Solutions is seeking a skilled Data Engineer to join our growing global team. Based in Madrid, you will be a foundational member of our new hub, helping to shape a high-performing engineering function. In this role, you will design, build, and maintain scalable systems and infrastructure to process and analyse complex, high-volume datasets. Working closely with data scientists, analysts, product managers, and other stakeholders, you will translate business needs into robust technical solutions that power innovative data products and services.
Key Responsibilities
Design, build, and maintain performant data pipelines supporting both real-time market data streams and large-scale batch processing (Apache Airflow, Apache Kafka, Apache Flink).
Develop scalable data warehousing solutions using Snowflake and other modern platforms.
Architect and manage cloud-based infrastructure (AWS or GCP) to ensure resilience, scalability, and efficiency.
Enhance and optimise CI/CD pipelines (Jenkins, GitLab) to streamline development, testing, and deployment.
Monitor and maintain the health and performance of data applications, pipelines, and databases using observability tools (Prometheus, Grafana, CloudWatch).
Partner with stakeholders across functions to deliver reliable data services and enable analytical product development.
Participate actively in agile ceremonies (stand-ups, sprint planning, retrospectives) to align on priorities and delivery.
Experience & Competencies
Essential
Demonstrated professional experience in data engineering or a closely related role.
Proficiency in Python (including libraries such as Pandas, Dask, and PySpark) and strong SQL skills.
Experience building and scaling API-driven data platforms (e.g., FastAPI).
Hands-on experience with AWS, Snowflake, Kubernetes, and Airflow.
Knowledge of monitoring and alerting systems (Prometheus, Grafana, CloudWatch).
Strong understanding of ETL processes and event streaming (Kafka, Flink, etc.).
Comfortable with Linux and command-line operations.
Excellent communication skills, with the ability to collaborate across technical and business teams.
Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related technical field.
Desired
Exposure to financial market data or capital markets environments.
Knowledge of additional programming languages such as Java, C#, or C++.
Band & Level
Professional / 5
#PARAMETA #LI-ASO #LI-Hybrid