We are seeking a skilled data engineer with experience in Spark, Kafka, and related big data technologies. The ideal candidate should be familiar with Spark code refactoring, testing, and best practices, as well as Kafka architecture patterns and scaling strategies. Knowledge of data lake, data warehouse, and Delta Lake concepts is a plus, along with awareness of current data engineering trends.
Requirements:
- Experience with Scala and Spark, or with Java and an interest in learning Scala. Functional programming experience is not mandatory; our codebase is primarily object-oriented.
- Familiarity with real-time technologies such as Kafka (Kafka Streams and KSQL) and Spark Streaming.
- Experience with Terraform and AWS is advantageous.
- Comfort with pair programming and remote collaboration.
- Willingness to learn and adapt to the various technologies used in our projects.
- Strong communication and self-management skills, along with the respect and inclusiveness that remote collaboration requires.
Our Tech Stack:
Scala, Spark, Node.js (ES6 and TypeScript), Python, React, MongoDB, MySQL, RabbitMQ, Redis, AWS services (SNS, SQS, API Gateway, Cognito, Lambda, Redshift, Aurora, DynamoDB). Mastery of every item is not required; understanding the principles behind them matters more.
What We Offer:
- Monthly co-working space allowance.
- Learning days during work hours.
- Training budget with access to SafariBooks and Coursera.
- Remote work with a flexible schedule.
- Local and national holidays, plus a day off on your birthday.
- Quarterly rewards for hitting targets, engineering meetups, team building in Córdoba, and global company events.
- Two duvet days per year.
- Allowance for therapy sessions.
- Your choice of a Mac or PC laptop.