
Data Engineer (Digital Finance)

Monroe Consulting Group

Jakarta Pusat

On-site

IDR 488.917.000 - 814.864.000

Full time

19 days ago

Job summary

A fast-growing digital finance platform in Jakarta is seeking a skilled Data Engineer to develop scalable data infrastructure and optimize data processes for decision-making. You will collaborate with various teams to build efficient ETL pipelines and manage data integration workflows; the role requires expertise in Python, Java, or Scala, as well as experience with real-time data technologies. This is a fantastic opportunity to contribute to impactful data solutions in Southeast Asia.

Qualifications

  • 1-2 years of data engineering or backend development experience.
  • Strong coding skills in Python, Java, or Scala.
  • Experience with cloud-based data platforms.

Responsibilities

  • Develop and maintain scalable data infrastructure and pipelines.
  • Build data ingestion and transformation workflows.
  • Partner with teams to enhance data accessibility.

Skills

Python
Java
Scala
ETL pipelines
Data modeling
Data processing workflows
Git
Docker
Kubernetes
Spark

Tools

MySQL
MongoDB
Kafka
Flink
Airflow

Job description

On behalf of a fast-growing digital finance platform, we are currently seeking a skilled and motivated Data Engineer to support the development of scalable data infrastructure and analytical capabilities. Based in Jakarta, this role will be instrumental in enabling data-driven decision-making across multiple product lines and markets in Southeast Asia.

You will collaborate closely with cross-functional teams to design and implement efficient ETL pipelines, develop robust data models, and optimize data processing workflows to ensure high performance and cost-efficiency in a high-volume environment.

Key Responsibilities:

  • Develop and maintain scalable data infrastructure, databases, and pipelines to support reliable and efficient data operations.
  • Build and manage data ingestion and transformation workflows to enable seamless data integration across multiple platforms.
  • Apply best practices to ensure the stability, availability, and performance of data systems.
  • Partner with engineering, data science, and product teams to enhance data accessibility and usability across the organization.
  • Design and sustain large-scale, efficient data pipelines to process complex datasets.
  • Translate user needs into well-crafted tools and platform capabilities that address real-world data challenges.

Qualifications:

  • 1-2 years of hands-on experience in data engineering or backend development with a strong focus on data systems.
  • Solid coding skills in Python, Java, or Scala.
  • Familiarity with source control tools (e.g., Git) and build/dependency management tools such as Maven.
  • Knowledge of container tools (Docker) and orchestration frameworks (Kubernetes).
  • Practical experience with real-time and batch data technologies, such as Spark, Kafka, Flink, Flume, or Airflow.
  • Comfortable working with both relational (e.g., MySQL) and NoSQL databases (e.g., MongoDB).
  • Prior involvement with cloud-based data platforms and large-scale data solutions.
  • Strong analytical mindset, with the ability to juggle multiple tasks and projects.
  • A proactive communicator and team player who thrives in a collaborative environment.
  • Motivated to stay current with emerging technologies and continuously enhance technical capabilities.
  • Familiarity with automation and DevOps practices is a plus.