Responsibilities
- Develop and maintain ETL pipelines and data integration services using Python and SQL
- Work with AWS services (S3, DynamoDB, Lambda, Glue) and NoSQL databases (MongoDB, DynamoDB)
- Design, optimize, and validate data flows, ensuring data quality across systems
- Collaborate with senior engineers on architecture and performance improvements
- Troubleshoot production data issues and perform root cause analyses
- Contribute to the continuous improvement of development practices and performance monitoring
Qualifications
- 2-4 years of experience as a Data Engineer or Back-end Developer
- Strong hands-on experience with Python and SQL
- Experience with AWS data services (S3, DynamoDB, Lambda, Glue)
- Familiarity with NoSQL databases and API integrations
- Basic understanding of PySpark or similar distributed frameworks
- Analytical mindset and proactive problem-solving skills
- English proficiency at Upper-Intermediate level or higher
Will Be a Plus
- Experience with Airflow or other orchestration tools
- Interest in big data performance tuning and cloud optimization
Personal Profile
- Strong communication and collaboration skills
- Self-motivated, responsible, and able to work independently
Remote Work: Yes
Employment Type: Full-time
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Experience: 2-4 years
Vacancy: 1