
A promising tech startup based in Barcelona seeks a hybrid Data & DevOps Engineer to oversee the entire data lifecycle. The role includes maintaining cloud infrastructure on AWS, building optimized data ingestion engines, and collaborating with data scientists for ML model deployments. Candidates should have strong experience with AWS Serverless technologies, Databricks, and DevOps methodologies. This position offers a competitive salary and opportunities for equity in a fast-paced environment.
We are looking for a hybrid Data & DevOps Engineer to manage our end-to-end data lifecycle. You will not only maintain the cloud infrastructure but also build and optimize the ingestion engines that feed our Databricks Lakehouse. A key part of this role involves interacting with REST APIs, managing serverless ingestion (AWS Lambda), and ensuring data quality from the moment it hits S3.
1. IaC & Security Hardening: Expand our AWS infrastructure using Terraform or CloudFormation. You will implement rigorous security measures, including VPC peering/PrivateLink for Databricks, KMS encryption at rest, and IAM least-privilege policies.
2. API Ingestion & Engineering: Build and maintain Python-based ingestion services. You will manage API authentication, handle rate limiting, and ensure efficient data partitioning in S3 (a minimal ingestion sketch follows this list).
3. CI/CD Evolution: Scale our GitHub Actions workflows to handle multi-environment deployments (Dev/Sandbox/Prod) for both cloud infrastructure (Glue and Kinesis) and Databricks DLT pipelines.
4. Spark Performance & Optimization: Monitor and tune Spark configurations (shuffling, partitioning, caching) to ensure our DLT and AutoLoader pipelines run efficiently (a tuning sketch follows this list).
5. MLOps Support: Partner with Data Scientists to automate ML model deployments, managing feature store integrations and model serving infrastructure.
6. Security & Governance: Implement SSE-KMS encryption, IAM policies, and lifecycle rules to ensure our data lake is compliant and cost-effective (an S3 hardening sketch follows this list).
7. Observability & Monitoring: Build a "single pane of glass" using CloudWatch and Datadog. You'll create dashboards that track pipeline latency, AutoLoader costs, and system health (a custom-metrics sketch follows this list).
8. Documentation & Knowledge Transfer: Produce high-quality architectural diagrams and runbooks. You aren't just building; you are mentoring the internal team to ensure long-term operational success.
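To make the ingestion work in point 2 concrete, here is a minimal sketch of an authenticated REST pull with basic rate-limit handling and date-partitioned landing in S3. The endpoint, bucket, and environment-variable names are placeholders for illustration, not our actual stack.

```python
"""Sketch of a Lambda-style ingestion job: authenticated REST pull, basic
rate-limit handling, and date-partitioned landing in S3. All names are placeholders."""
import json
import os
import time
from datetime import datetime, timezone

import boto3
import requests

API_URL = os.environ.get("SOURCE_API_URL", "https://api.example.com/v1/readings")
RAW_BUCKET = os.environ.get("RAW_BUCKET", "my-raw-landing-bucket")

s3 = boto3.client("s3")


def fetch_page(session: requests.Session, page: int) -> list[dict]:
    """Call the source API, backing off when we hit HTTP 429 rate limits."""
    for attempt in range(5):
        resp = session.get(API_URL, params={"page": page}, timeout=30)
        if resp.status_code == 429:
            # Respect Retry-After if the API provides it, else back off exponentially.
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        resp.raise_for_status()
        return resp.json().get("items", [])
    raise RuntimeError("rate limit not cleared after retries")


def handler(event, context):
    """Lambda entry point: pull one batch and land it as partitioned JSON in S3."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {os.environ['SOURCE_API_TOKEN']}"

    now = datetime.now(timezone.utc)
    # Hive-style date partitions keep AutoLoader/Glue scans cheap and incremental.
    prefix = f"raw/readings/year={now:%Y}/month={now:%m}/day={now:%d}"

    items = fetch_page(session, page=event.get("page", 1))
    key = f"{prefix}/batch_{now:%H%M%S}.json"
    s3.put_object(Bucket=RAW_BUCKET, Key=key, Body=json.dumps(items).encode("utf-8"))
    return {"written": len(items), "key": key}
```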
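For the Spark tuning work in point 4, the kind of adjustment we mean looks roughly like the following PySpark pass. Table paths, partition counts, and column names are illustrative assumptions; real values come from profiling the actual DLT and AutoLoader jobs.

```python
"""Illustrative PySpark tuning pass: shuffle partition sizing, repartitioning
before a wide join, and caching a reused dimension table. Names are placeholders."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# The default of 200 shuffle partitions is rarely right for a given volume;
# size partitions so each shuffle task handles roughly 100-200 MB.
spark.conf.set("spark.sql.shuffle.partitions", "64")
# Adaptive query execution lets Spark coalesce small shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

events = spark.read.format("delta").load("s3://my-lake/silver/charging_events")
sites = spark.read.format("delta").load("s3://my-lake/silver/sites")

# Small dimension table reused across several joins: cache it once.
sites = sites.cache()

# Repartition on the join key so the wide join shuffles evenly.
events = events.repartition(64, "site_id")

daily = (
    events.join(sites, "site_id")
    .groupBy("site_id", F.to_date("event_ts").alias("event_date"))
    .agg(F.sum("kwh").alias("kwh_delivered"))
)

daily.write.format("delta").mode("overwrite").save("s3://my-lake/gold/daily_site_kwh")
```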
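For the security and governance work in point 6, a boto3 sketch of default SSE-KMS encryption plus a lifecycle rule is shown below. In practice this configuration would more likely live in Terraform (point 1); the bucket name, KMS key alias, and retention windows are placeholders.

```python
"""Sketch of S3 hardening with boto3: default SSE-KMS encryption and a lifecycle
rule that tiers and expires raw data. All names and windows are placeholders."""
import boto3

s3 = boto3.client("s3")
BUCKET = "my-raw-landing-bucket"

# Enforce SSE-KMS as the bucket default so every object is encrypted at rest.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/data-lake-key",
                },
                "BucketKeyEnabled": True,  # reduces per-object KMS request costs
            }
        ]
    },
)

# Lifecycle rule: move aged raw files to cheaper storage classes, then expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-raw",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```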
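For the observability work in point 7, publishing a custom pipeline-latency metric to CloudWatch might look like the following; Datadog can then pick it up through its AWS integration. The namespace, metric, and dimension names are assumptions for illustration.

```python
"""Sketch of a custom CloudWatch metric for pipeline latency. Names are placeholders."""
import time

import boto3

cloudwatch = boto3.client("cloudwatch")


def report_pipeline_latency(pipeline: str, started_at: float) -> None:
    """Publish end-to-end latency (seconds) for one pipeline run."""
    latency = time.time() - started_at
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",
        MetricData=[
            {
                "MetricName": "EndToEndLatency",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": latency,
                "Unit": "Seconds",
            }
        ],
    )


# Example: wrap an ingestion run and publish how long it took.
start = time.time()
# ... trigger the ingestion / DLT update here ...
report_pipeline_latency("charging-events-autoloader", start)
```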
Must-have:
AWS Serverless: Hands-on with Lambda, EventBridge, SQS, SNS, and S3.
Databricks: Experience with Delta Live Tables (DLT), AutoLoader, and Unity Catalog.
DevOps: Proficient in GitHub Actions and Terraform.
Monitoring: Hands-on experience with Datadog and CloudWatch Logs/Metrics.
Languages: Strong Python (for Lambda/PySpark) and SQL (for data validation/modelling).
Nice-to-have:
Data Prep: Knowledge of AWS Glue (Catalog/ETL) and Kinesis Firehose is a major plus.
What we offer:
Competitive salary appropriate for an early-stage European startup.
Company equity
A central role in defining how EV charging flexibility is sold in Europe
Direct collaboration with the management team on product, data, and cloud infrastructure
High autonomy, fast decision cycles, and meaningful equity/ownership discussions for the right profile