Big Data Developer

KG SOWERS GROUP PTE. LTD.

Singapore

On-site

SGD 80,000 - 120,000

Full time

25 days ago

Job summary

A leading company in Singapore is seeking a skilled Big Data Developer to build and optimize ETL processes and data applications. The ideal candidate will have extensive experience with Big Data technologies such as Hadoop and Spark, along with strong programming skills in Java, Scala, and Python. This role involves creating scalable data pipelines and applying distributed computing principles to help businesses make data-driven decisions.

Qualifications

  • More than 8 years of experience in the IT industry.
  • More than 5 years of relevant experience.

Responsibilities

  • Develop and optimize ETL processes for large volumes of data.
  • Create scalable data pipelines and integrate various data sources.
  • Conduct testing and validation of data pipelines for accuracy.

Skills

ETL processes
Big Data frameworks
Java
Scala
Python
SQL
Data processing applications
Distributed computing

Tools

Hadoop
Spark
Kafka
AWS
Azure
Google Cloud

Job description

Roles and Responsibilities:

  • Develop and optimize ETL (Extract, Transform, Load) processes to ingest and transform large volumes of data from multiple sources (a sketch follows this list).
  • Must have experience in the investment banking, payments, and transaction banking domains.
  • Develop and deploy data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka, or similar technologies.
  • Proficiency in programming and scripting languages (e.g., Java, Scala, Python, SQL) for data processing and analysis.
  • Experience with cloud platforms and services for Big Data (e.g., AWS, Azure, Google Cloud).
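As a rough illustration of the ETL work described above, a minimal Spark batch job in Scala might look like the sketch below. All paths, column names, and the trade schema are hypothetical and not taken from this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object TradeEtlJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("trade-etl")
          .getOrCreate()

        // Extract: raw trade events landed as JSON (path is hypothetical).
        val raw = spark.read.json("s3://landing-zone/trades/")

        // Transform: drop malformed rows, normalize types, derive a partition date.
        val cleaned = raw
          .filter(col("trade_id").isNotNull && col("amount") > 0)
          .withColumn("amount", col("amount").cast("decimal(18,2)"))
          .withColumn("trade_date", to_date(col("executed_at")))

        // Load: write partitioned Parquet for downstream consumers.
        cleaned.write
          .mode("overwrite")
          .partitionBy("trade_date")
          .parquet("s3://curated-zone/trades/")

        spark.stop()
      }
    }

The same extract-transform-load shape applies whether the source is landed files, a message bus, or a relational store; only the read and write connectors change.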

Requirements:

Primary Skills:

  • Design, build, and maintain systems that handle large volumes of data, enabling businesses to extract insights and make data-driven decisions.
  • Create scalable and efficient data pipelines, implement data models, and integrate diverse data sources (see the sketch after this list).
  • Develop and deploy data processing applications using Big Data frameworks such as Hadoop, Spark, and Kafka.
  • Write efficient, optimized code in languages such as Java, Scala, and Python to manipulate and analyze data.
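A hedged sketch of the source integration described above: one reference source read over JDBC, one fact source read from the data lake, joined into a simple aggregate model. The connection URL, credentials, table names, and columns are all placeholders.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object CustomerSpendModel {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("customer-spend-model")
          .getOrCreate()

        // Source 1: reference data from a relational store via JDBC
        // (URL, table, and credentials are placeholders).
        val customers = spark.read
          .format("jdbc")
          .option("url", "jdbc:postgresql://db-host:5432/core")
          .option("dbtable", "customers")
          .option("user", "etl_user")
          .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
          .load()

        // Source 2: curated transaction facts from the data lake.
        val payments = spark.read.parquet("s3://curated-zone/payments/")

        // Integrate: join the two sources and build a simple aggregate model.
        val spendByCustomer = payments
          .join(customers, Seq("customer_id"))
          .groupBy("customer_id", "segment")
          .agg(sum("amount").as("total_spend"), count("*").as("txn_count"))

        spendByCustomer.write
          .mode("overwrite")
          .parquet("s3://marts/customer_spend/")

        spark.stop()
      }
    }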

Secondary Skills:

  • Design, develop, and implement scalable data processing pipelines using Big Data technologies.
  • Implement Kafka-based pipelines that feed real-time data into dynamic pricing models (a sketch follows this list).
  • Conduct testing and validation of data pipelines and analytical solutions for accuracy and performance.
  • Strong experience with Spring Boot and microservices architecture.
  • Strong experience with distributed computing principles and Big Data ecosystem components (e.g., Hadoop, Spark, Hive, HBase).
  • More than 8 years of experience in the IT industry.
  • More than 5 years of relevant experience.
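
Finally, a minimal sketch of the kind of Kafka-based real-time pipeline mentioned above, written with Spark Structured Streaming in Scala: it reads pricing events from a Kafka topic, computes a one-minute rolling average per product, and publishes the result to a feature topic that a downstream pricing model could consume. The broker address, topic names, and event schema are assumptions.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    object PricingFeedStream {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("pricing-feed")
          .getOrCreate()

        // Event schema is illustrative.
        val schema = new StructType()
          .add("product_id", StringType)
          .add("price", DoubleType)
          .add("event_time", TimestampType)

        // Read the raw event stream from Kafka (broker and topic are placeholders).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "price-events")
          .load()
          .select(from_json(col("value").cast("string"), schema).as("e"))
          .select("e.*")

        // Aggregate a one-minute rolling average per product to feed the pricing model.
        val features = events
          .withWatermark("event_time", "2 minutes")
          .groupBy(window(col("event_time"), "1 minute"), col("product_id"))
          .agg(avg("price").as("avg_price"))

        // Publish the features back to Kafka for the pricing service to consume.
        val query = features
          .select(to_json(struct(col("product_id"), col("avg_price"))).as("value"))
          .writeStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("topic", "pricing-features")
          .option("checkpointLocation", "/tmp/checkpoints/pricing-feed")
          .outputMode("update")
          .start()

        query.awaitTermination()
      }
    }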