IT Data Engineer

Kalbe Consumer Health (PT Saka Farma Laboratories)

Daerah Khusus Ibukota Jakarta

On-site

IDR 8.000.000 - 12.000.000

Full time

Job summary

A leading healthcare company in Jakarta is seeking an experienced Data Engineer to manage data pipelines and support integration processes. The role involves collaborating with various teams to ensure data quality and efficient data handling. Candidates should have a bachelor's degree and at least 4 years of experience in data engineering, particularly with tools such as SQL and ETL frameworks.

Qualifications

  • Minimum 4 years of experience as a Data Engineer.
  • Proficient in SQL and data integration tools.
  • Experience with cloud-based platforms.

Responsibilities

  • Design and maintain data pipelines.
  • Collaborate with cross-functional teams.
  • Optimize ETL processes for efficiency.

Skills

ETL
SQL
Data modeling
Python
Data integration

Education

S1 (Bachelor's degree)

Tools

Apache Airflow
GCP
AWS

Job Description

Location: Jakarta, Jakarta
Salary: IDR 8.000.000 - IDR 12.000.000 per year
Company: PT Intikom Berlian Mustika
Contract: 12 months

Scope of Work – Data Engineer

  • Explore DI/DX databases (data pipelines, ETL, data integration).
  • Explore and, where needed, develop Datamarts to support DI/DX-related business needs.
  • Handle ad-hoc requests for DI/DX-related queries, data extraction, and data preparation.
  • Validate extracted data to ensure DI/DX-related quality and reliability.
  • Ensure data consistency and accessibility for DXO stakeholders in relation to DI/DX.

If you think you are a good fit, or can refer a suitable candidate, please contact us via WhatsApp:

#SoftwareDeveloper #Hiring #Career #ITJobs #Java #SQL

Job Type: Contract
Contract length: 12 months

Application Questions:

  • How old are you?
  • Do you have work experience building and managing data pipelines, ETL, and data integration?
  • Are you proficient in SQL and in tools/technologies related to Data Warehouses and Datamarts?

Education:

  • S1/Bachelor's degree (preferred)

Experience:

  • Data Engineer: 4 years (preferred)

Responsibilities – Devoteam Data Engineer

  • Work closely with data architects and other stakeholders to design scalable and robust data architectures that meet the organization's requirements.
  • Develop and maintain data pipelines, which involve the extraction of data from various sources, data transformation to ensure quality and consistency, and loading the processed data into data warehouses or other storage systems.
  • Responsible for managing data warehouses and data lakes, ensuring their performance, scalability, and security.
  • Integrate data from different sources, such as databases, APIs, and external systems, to create unified and comprehensive datasets.
  • Perform data transformations and implement Extract, Transform, Load (ETL) processes to convert raw data into formats suitable for analysis and reporting.
  • Collaborate with data scientists, analysts, and other stakeholders to establish data quality standards and implement data governance practices.
  • Optimise data processing and storage systems for performance and scalability.

Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions.

Programming Skills: Proficiency in programming languages such as Python, Java, Scala, or SQL is essential for data engineering roles. Data engineers should have experience in writing efficient and optimized code for data processing, transformation, and integration.

Database Knowledge: Strong knowledge of relational databases and experience with database management systems is crucial. Familiarity with data modeling, schema design, and query optimization is important for building efficient data storage and retrieval systems.
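
For illustration, here is a minimal sketch of the schema design and query optimization this describes, using Python's built-in sqlite3 module; the orders table, its columns, and the index are hypothetical. A composite index that matches the query's filter and sort columns lets the engine avoid a full scan and a separate sort step.

```python
# Hypothetical example: a composite index matching a filter + sort pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        order_date  TEXT NOT NULL,
        amount      REAL NOT NULL
    );
    -- Composite index chosen to serve the WHERE filter and ORDER BY below.
    CREATE INDEX idx_orders_customer_date
        ON orders (customer_id, order_date);
""")

# EXPLAIN QUERY PLAN reveals whether the optimizer actually uses the index.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT order_date, amount
    FROM orders
    WHERE customer_id = 42
    ORDER BY order_date
""").fetchall()
for row in plan:
    print(row)
```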

Big Data Technologies: Understanding and experience with big data technologies such as Apache Hadoop, Apache Spark, or Apache Kafka is highly beneficial. Knowledge of distributed computing and parallel processing frameworks is valuable for handling large-scale data processing.
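
As a rough illustration of distributed processing, the PySpark sketch below runs a parallel aggregation; the local SparkSession, file path, and column names are hypothetical stand-ins for a real cluster job.

```python
# Hypothetical example: a per-day aggregate computed in parallel with Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

# Read raw events and aggregate; Spark distributes the work across cores/nodes.
events = spark.read.csv("events.csv", header=True, inferSchema=True)
daily = (
    events.groupBy("event_date")
          .agg(F.count("*").alias("n_events"),
               F.sum("amount").alias("total_amount"))
)
daily.show()
spark.stop()
```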

ETL and Data Integration: Proficiency in ETL processes and experience with data integration tools like Apache NiFi, Talend, or Informatica is desirable. Knowledge of data transformation techniques and data quality principles is important for ensuring accurate and reliable data.
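
Rather than NiFi, Talend, or Informatica themselves, plain Python can illustrate the same extract-transform-load pattern; in the sketch below, the CSV source, the cleaning rules, and the SQLite target are all hypothetical stand-ins.

```python
# Hypothetical ETL sketch: CSV source -> simple quality rules -> SQLite target.
import csv
import sqlite3

def extract(path):
    # Extract: stream rows from a (hypothetical) source file.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: drop rows missing an id, normalize email case.
    for row in rows:
        if not row.get("customer_id"):
            continue
        yield (row["customer_id"], row.get("email", "").strip().lower())

def load(records, conn):
    # Load: write the cleaned records into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers (customer_id TEXT, email TEXT)"
    )
    conn.executemany("INSERT INTO customers VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect("warehouse.db")
load(transform(extract("customers.csv")), conn)
```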

Data Warehousing: Familiarity with data warehousing concepts and experience with popular data warehousing platforms like Amazon Redshift, Google BigQuery, or Snowflake is advantageous. Understanding dimensional modeling and experience in designing and optimizing data warehouses is beneficial.
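
As a small illustration of dimensional modeling, the sketch below creates a hypothetical star schema (two dimension tables plus a fact table); platform-specific features of Redshift, BigQuery, or Snowflake are simplified to generic SQL, sanity-checked here against SQLite.

```python
# Hypothetical star schema: dimensions describe context, the fact holds measures.
import sqlite3

schema = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT,
    region       TEXT
);

CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- e.g. 20240131
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);

-- Fact table: measures plus foreign keys into the dimensions.
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer (customer_key),
    date_key     INTEGER REFERENCES dim_date (date_key),
    quantity     INTEGER,
    revenue      REAL
);
"""

sqlite3.connect(":memory:").executescript(schema)  # verify the DDL parses
```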

Cloud Platforms: Knowledge of cloud computing platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) is increasingly important. Experience in deploying data engineering solutions in the cloud and utilizing cloud-based data services is valuable.
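
A minimal sketch of one common cloud task follows: loading an ETL output into object storage with AWS's boto3 client. The bucket, key, and file name are hypothetical, and credentials are assumed to come from the environment.

```python
# Hypothetical example: push a local extract into an S3 data-lake prefix.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="daily_extract.parquet",              # local file from an ETL step
    Bucket="example-data-lake",                    # hypothetical bucket
    Key="raw/sales/2024/daily_extract.parquet",    # hypothetical key layout
)
```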

Data Pipelines and Workflow Tools: Experience with data pipeline and workflow management tools such as Apache Airflow, Luigi, or Apache Oozie is beneficial. Understanding how to design, schedule, and monitor data workflows is essential for efficient data processing.
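
For example, a minimal Apache Airflow DAG scheduling a daily three-step workflow might look like the sketch below; the task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Hypothetical DAG: extract -> transform -> load, run once per day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow evaluates this once per day
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3       # declare the dependency chain
```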

Problem‑Solving and Analytical Skills: Data engineers should have strong problem‑solving abilities and analytical thinking to identify data‑related issues, troubleshoot problems, and optimize data processing workflows.

Communication and Collaboration: Effective communication and collaboration skills are crucial for working with cross‑functional teams. Data engineers should be able to translate technical concepts into clear and understandable terms.

Education and Experience

Bachelor's degree in Engineering required.

  • Minimum two years of related experience is highly preferred.
  • Two certifications in GCP (within 3 months after joining).

Status: Full‑Time

Duration: –

Job Description – Second Company

The Data Engineer will be responsible for the following activities:

  • Design, develop, deploy, and maintain robust ETL workflows using SSIS to support data integration from multiple sources into the data warehouse.
  • Build and maintain operational and analytical reports using SSRS, delivering insights to business users.
  • Collaborate with business analysts, data architects, and stakeholders to understand data requirements and translate them into technical specifications.
  • Optimize ETL packages for performance, scalability, and error handling.
  • Perform data profiling, validation, and reconciliation to ensure high data quality and integrity.
  • Maintain and improve existing SSIS/SSRS solutions.
  • Document ETL designs, data mappings, and workflow processes.

If you are interested or have a referral, kindly send your updated CV to:

  • Email:
  • Subject: Name - Position

Note: Only shortlisted candidates will be contacted.

Job Description – Third Company

Build and optimize robust data pipelines for extracting, transforming, and loading (ETL) data from multiple sources into a central data warehouse or data lake.

  • Integrate data from multiple heterogeneous sources, ensuring data quality, consistency, and availability.
  • Monitor the performance of data systems, identify bottlenecks, and resolve issues related to data quality or processing failures.

Job Description – Fourth Company

Company Description

PT Mandiri Sekuritas (Mandiri Sekuritas/Company) has been awarded as Indonesia's Best Investment Bank and Best Broker by the FinanceAsia Country Awards 2022. These recognitions have established the Company's strong position as the Best Investment Bank in Indonesia for 12 consecutive years and Best Broker for 8 consecutive years. Established in 2000, Mandiri Sekuritas provides customers with comprehensive and value‑added capital market financial solutions. The Company obtained its business license as a securities broker and underwriter from Bapepam-LK, demonstrating its commitment to excellence in financial services.

Role Description

This is a contract, on‑site role for a Data Engineer located in Jakarta. The Data Engineer will be responsible for designing, developing, and managing data pipelines and infrastructure. Daily tasks include data acquisition, processing, and storage solutions; implementing data workflows; optimizing performance of data‑centric systems; and ensuring data quality and consistency. Additionally, the Data Engineer will collaborate with other teams to integrate and utilize data effectively.

Qualifications

  • 3–5 years of experience building large‑scale data pipelines in cloud or hybrid environments.
  • Strong in SQL, Python, and Java; skilled with Bash scripting for automation.
  • Hands‑on expertise with GCP, Azure, relational & non‑relational databases, and Hadoop/on‑prem systems.
  • Production experience with Airflow DAGs, Spark, and Flink.
  • Experienced with CI/CD & containerization (Git, Terraform, Helm, Docker, Kubernetes).
  • Solid grasp of distributed systems (partitioning, replication, fault tolerance).
  • Familiar with financial services data, regulations, and security frameworks.
  • Excellent communicator – able to explain complex pipelines to non‑technical stakeholders.

Job Description – Cube Asia

As a Data Engineer at Cube Asia, you will use various methods to transform raw data into useful data systems. You'll strive for efficiency by aligning data systems with business goals.

To succeed in this position, you should have prior experience in large-scale public data collection from the web using open APIs and other tools, along with a good understanding of the terms, guidelines, and technical considerations governing such data collection.

Data engineer skills also include familiarity with several programming languages and a basic knowledge of machine learning methods. If you are detail‑oriented, with excellent organizational skills and experience in this field, we'd like to hear from you.

Responsibilities

  • Build and maintain scalable data pipelines to process and integrate e‑commerce data from multiple sources.
  • Design and implement a modern cloud‑based data lakehouse architecture using AWS services such as S3, Athena, Glue, Fargate, and Iceberg.
  • Explore tools and solutions for high‑performance data transformation and analysis, such as Polars, DuckDB, and PySpark (a sketch follows this list).
  • Work with data analysts to deliver accessible and well‑structured datasets for reporting and advanced analytics.
  • Collaborate with architects on designing data platforms.
  • Explore ways to enhance data quality and reliability.
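
As a rough illustration of the DuckDB/Polars exploration mentioned above, the sketch below runs the same hypothetical aggregation both ways; the Parquet file and column names are invented, and the Polars lazy API shown assumes a recent release.

```python
# Hypothetical example: top SKUs by units sold, via DuckDB SQL and via Polars.
import duckdb
import polars as pl

# DuckDB can run SQL directly over Parquet files without a separate load step.
top_skus = duckdb.sql("""
    SELECT sku, SUM(quantity) AS units
    FROM 'orders.parquet'
    GROUP BY sku
    ORDER BY units DESC
    LIMIT 10
""").pl()   # materialize the result as a Polars DataFrame

# The same aggregation expressed with Polars' lazy API.
top_skus_pl = (
    pl.scan_parquet("orders.parquet")
      .group_by("sku")
      .agg(pl.col("quantity").sum().alias("units"))
      .sort("units", descending=True)
      .limit(10)
      .collect()
)
```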

What You’ll Love About This Role

  • Build from Scratch: Be part of a team creating foundational data systems and processes, shaping the future of our platform.
  • Learn by Doing: Gain hands‑on experience working with modern tools, cloud technologies, and real‑world data challenges.
  • Work on Complex Projects: Tackle exciting and challenging problems, from integrating large‑scale e‑commerce data to optimizing data pipelines for performance and scalability.

Requirements

  • Knowledge of programming languages (e.g. Java and Python).
  • Hands‑on experience with SQL database design.
  • Previous experience as a data engineer or in a similar role.
  • Technical expertise with data models, data mining, and segmentation techniques.
  • Great numerical and analytical skills.
  • Willingness to learn and work with new tools and technologies.

Job Description – Accord Innovations

Hi #TalentReady, our client is looking for a Data Engineer (ETL) for their project.

Full WFO at Jakarta Area | 12 Month Contract (PKWT) | Banking Industry

Responsibilities:

  • Design, develop, deploy, and maintain robust ETL workflows using SSIS to support data integration from multiple sources into the data warehouse.
  • Build and maintain operational and analytical reports using SSRS, delivering insights to business users.
  • Collaborate with business analysts, data architects, and stakeholders to understand data requirements and translate them into technical specifications.
  • Optimize ETL packages for performance, scalability, and error handling.
  • Perform data profiling, validation, and reconciliation to ensure high data quality and integrity.
  • Maintain and improve existing SSIS/SSRS solutions.
  • Document ETL designs, data mappings, and workflow processes.

Job Description – Final Company

On behalf of a fast‑growing digital finance platform, we are currently seeking a skilled and motivated Data Engineer to support the development of scalable data infrastructure and analytical capabilities. Based in Jakarta, this role will be instrumental in enabling data‑driven decision‑making across multiple product lines and markets in Southeast Asia.

You will collaborate closely with cross‑functional teams to design and implement efficient ETL pipelines, develop robust data models, and optimize data processing workflows to ensure high performance and cost‑efficiency in a high‑volume environment.

Key Responsibilities:

  • Develop and maintain scalable data infrastructure, databases, and pipelines to support reliable and efficient data operations.
  • Build and manage data ingestion and transformation workflows to enable seamless data integration across multiple platforms.
  • Apply best practices to ensure the stability, availability, and performance of data systems.
  • Partner with engineering, data science, and product teams to enhance data accessibility and usability across the organization.
  • Design and sustain large‑scale, efficient data pipelines to process complex datasets.
  • Translate user needs into well‑crafted tools and platform capabilities that address real‑world data challenges.

Qualifications:

  • 1–2 years of hands‑on experience in data engineering or backend development with a strong focus on data systems.
  • Solid coding skills in Python, Java, or Scala.
  • Familiar with source control tools (e.g., Git) and build/dependency management tools like Maven.
  • Knowledge of container tools (Docker) and orchestration frameworks (Kubernetes).
  • Practical experience with real‑time and batch data technologies, such as Spark, Kafka, Flink, Flume, or Airflow.
  • Comfortable working with both relational (e.g., MySQL) and NoSQL databases (e.g., MongoDB).
  • Prior involvement with cloud‑based data platforms and large‑scale data solutions.
  • Strong analytical mindset, with the ability to juggle multiple tasks and projects.
  • A proactive communicator and team player who thrives in a collaborative environment.
  • Motivated to stay current with emerging technologies and continuously enhance technical capabilities.
  • Familiarity with automation and DevOps practices is a plus.

The Devoteam Group is committed to equal opportunities, promoting its employees on the basis of merit and actively fighting against all forms of discrimination. We believe that diversity contributes to the creativity, dynamism, and excellence of our organization. All our positions are open to people with disabilities.
