Roles and Responsibilities:
- Develop and optimize ETL (Extract, Transform, Load) processes to ingest and transform large volumes of data from multiple sources (see the ETL sketch after this list).
- Must have experience in the investment banking, payments, and transaction banking domains.
- Develop and deploy data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka, or similar technologies.
- Apply proficiency in programming and scripting languages (e.g., Java, Scala, Python, SQL) for data processing and analysis.
- Work with cloud platforms and services for Big Data (e.g., AWS, Azure, Google Cloud).
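For illustration only, a minimal ETL sketch in PySpark that ingests two hypothetical sources, joins and aggregates them, and loads the result as partitioned Parquet; the file paths, column names, and metrics are assumed placeholders rather than a prescribed design:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: two hypothetical sources (CSV trade records, JSON account reference data).
    trades = spark.read.csv("data/trades.csv", header=True, inferSchema=True)
    accounts = spark.read.json("data/accounts.json")

    # Transform: join on a shared key, derive a date column, and aggregate.
    daily = (
        trades.join(accounts, "account_id")
              .withColumn("trade_date", F.to_date("trade_ts"))
              .groupBy("trade_date", "region")
              .agg(F.sum("notional").alias("total_notional"),
                   F.count("*").alias("trade_count"))
    )

    # Load: write the result as Parquet, partitioned by date.
    daily.write.mode("overwrite").partitionBy("trade_date").parquet("out/daily_notional")

    spark.stop()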
Requirements:
Primary Skills:
- Design, build, and maintain systems that handle large volumes of data to enable businesses to extract insights and make data-driven decisions.
- Create scalable and efficient data pipelines, implement data models, and integrate various data sources.
- Develop and deploy data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka.
- Write efficient, optimized code in languages such as Java, Scala, and Python to manipulate and analyze data (see the optimization sketch after this list).
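As an example of the optimization patterns implied above, the sketch below declares an explicit schema for a large payments source (avoiding a costly schema-inference pass), prunes columns early, and broadcasts a small FX-rate dimension table so the join avoids a full shuffle; all paths, table names, and columns are illustrative assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    spark = SparkSession.builder.appName("optimized-transform-sketch").getOrCreate()

    # An explicit schema documents the data model and skips schema inference.
    payment_schema = StructType([
        StructField("payment_id", StringType(), False),
        StructField("customer_id", StringType(), False),
        StructField("amount", DoubleType(), True),
        StructField("currency", StringType(), True),
        StructField("created_at", TimestampType(), True),
    ])

    payments = spark.read.schema(payment_schema).parquet("data/payments")
    fx_rates = spark.read.parquet("data/fx_rates")  # small dimension table: currency, usd_rate

    # Prune columns early and broadcast the small table to avoid shuffling the large one.
    usd_totals = (
        payments.select("customer_id", "amount", "currency")
                .join(F.broadcast(fx_rates), "currency")
                .withColumn("amount_usd", F.col("amount") * F.col("usd_rate"))
                .groupBy("customer_id")
                .agg(F.sum("amount_usd").alias("total_usd"))
    )

    usd_totals.write.mode("overwrite").parquet("out/customer_usd_totals")

    spark.stop()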
Secondary Skills:
- Design, develop, and implement scalable data processing pipelines using Big Data technologies.
- Implement Kafka-based pipelines that feed real-time data into dynamic pricing models (see the streaming sketch at the end of this section).
- Conduct testing and validation of data pipelines and analytical solutions for accuracy and performance.
- Strong experience with Spring Boot and microservices architecture.
- Strong experience with distributed computing principles and Big Data ecosystem components (e.g., Hadoop, Spark, Hive, HBase).
- More than 8 years of experience in the IT industry.
- More than 5 years of relevant experience.
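As a sketch of the kind of Kafka-based pipeline described above, the snippet below consumes hypothetical trade events from one topic, applies a placeholder pricing adjustment, and publishes the result to another topic using the kafka-python client; the broker address, topic names, event fields, and pricing rule are illustrative assumptions, and a production pipeline would call the actual dynamic pricing model in place of the adjustment:

    import json
    from kafka import KafkaConsumer, KafkaProducer

    # Assumed broker and topic names; replace with the real cluster configuration.
    BROKER = "localhost:9092"
    IN_TOPIC = "trade-events"
    OUT_TOPIC = "price-updates"

    consumer = KafkaConsumer(
        IN_TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # The consumer loop runs until the process is stopped.
    for message in consumer:
        event = message.value  # e.g. {"instrument": "ABC", "mid_price": 100.0, "volume": 5}
        # Placeholder rule; a real pipeline would invoke the dynamic pricing model here.
        adjusted = event.get("mid_price", 0.0) * (1.0 + 0.0001 * event.get("volume", 0))
        producer.send(OUT_TOPIC, value={"instrument": event.get("instrument"),
                                        "price": adjusted})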