
Data Engineer Contract

PT Astra Graphia Tbk

Special Capital Region of Jakarta

On-site

IDR 200.000.000 - 300.000.000

Full time

Today

Job summary

A leading technology firm in Jakarta seeks an experienced Data Engineer to design and manage data pipelines and systems. The ideal candidate will possess strong skills in Spark, Python, and SQL, with a bachelor's degree in engineering. Responsibilities include developing ETL workflows and collaborating across teams to ensure data quality. This full-time position offers competitive compensation.

Qualifications

  • Minimum 3 years of experience as a Data Engineer.
  • Experience in writing SQL queries, functions, and procedures is mandatory.
  • Experience with data pipelines, ETL, and integration.

Responsibilities

  • Develop and maintain data pipelines for various data sources.
  • Ensure data quality, consistency, and availability.
  • Collaborate with teams to understand data requirements.

Skills

  • Proficient in Spark
  • Python development
  • Scala experience
  • ETL processes
  • Data warehousing knowledge
  • Experience with Tableau
  • Strong analytical skills

Education

Bachelor's degree in Engineering

Tools

  • SQL
  • Apache Spark
  • ETL tools
  • Google Cloud Platform

Kredit Pintar

Jakarta

IDR 9.000.000 - 12.000.000

Posted today

Job Description

Job Responsibilities

  • Participate in the construction and development of data warehouses for financial projects, design models according to business requirements, and implement ETL.
  • Provide data report support for business departments.
  • Participate in optimizing ETL to improve code efficiency and reduce costs.

Job Qualifications

  • Proficient in Spark, able to develop in Python and Scala, with good experience in performance tuning.
  • Familiar with various data warehouse modeling theories, proficient in data model design and data layer design.
  • Priority will be given to those with Tableau development experience.
  • Priority will be given to those with a background in finance and e-commerce projects.
  • Diligent, responsible, meticulous, with good team spirit, analytical skills, and communication skills.
Job Description
Devoteam is a leading consulting firm focused on digital strategy, tech platforms, and cybersecurity.

By combining creativity, tech, and data insights, we empower our customers to transform their business and unlock the future.

With 25 years of experience and 8,000 employees across Europe and the Middle East, Devoteam promotes responsible tech for people and works to create better change.

#Creative Tech for Better Change

In January 2021, Devoteam launched its new strategic plan, Infinite 2024, with the ambition to become the #1 EMEA partner of the leading cloud-based platform companies (AWS, Google Cloud, Microsoft, Salesforce, ServiceNow), further reinforced by deep expertise in digital strategy, cybersecurity, and data.

The Data Engineer will be responsible for the following activities:

  • Work closely with data architects and other stakeholders to design scalable and robust data architectures that meet the organization's requirements.
  • Develop and maintain data pipelines, which involve the extraction of data from various sources, data transformation to ensure quality and consistency, and loading the processed data into data warehouses or other storage systems.
  • Responsible for managing data warehouses and data lakes, ensuring their performance, scalability, and security.
  • Integrate data from different sources, such as databases, APIs, and external systems, to create unified and comprehensive datasets.
  • Perform data transformations and implement Extract, Transform, Load (ETL) processes to convert raw data into formats suitable for analysis and reporting.
  • Collaborate with data scientists, analysts, and other stakeholders to establish data quality standards and implement data governance practices.
  • Optimise data processing and storage systems for performance and scalability.
  • Collaborate with cross‑functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions.

Programming Skills: Proficiency in programming languages such as Python, Java, Scala, or SQL is essential for data engineering roles. Data engineers should have experience in writing efficient and optimised code for data processing, transformation, and integration.
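
To make "efficient and optimised code" concrete, here is a minimal Python sketch that streams a CSV through generators so memory use stays flat regardless of file size; the file name and the amount column are hypothetical.

    import csv
    from typing import Iterator

    def read_rows(path: str) -> Iterator[dict]:
        # Stream rows lazily so arbitrarily large files fit in memory.
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def clean(rows: Iterator[dict]) -> Iterator[dict]:
        # Drop records with a missing amount and normalise the type.
        for row in rows:
            if row.get("amount"):
                row["amount"] = float(row["amount"])
                yield row

    total = sum(r["amount"] for r in clean(read_rows("transactions.csv")))
    print(f"Total: {total:,.2f}")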

Database Knowledge: Strong knowledge of relational databases (e.g., SQL) and experience with database management systems (DBMS) is crucial. Familiarity with data modelling, schema design, and query optimisation is important for building efficient data storage and retrieval systems.
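
A self-contained sketch of query optimisation using Python's built-in sqlite3 module (table and column names are illustrative): the same query moves from a full table scan to an index search once a suitable index exists.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount REAL)")
    conn.executemany("INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
                     [(i % 100, i * 1.5) for i in range(10_000)])

    def plan(sql: str) -> None:
        # Show how SQLite intends to execute the query.
        for row in conn.execute("EXPLAIN QUERY PLAN " + sql):
            print(row)

    query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"
    plan(query)  # before: full table SCAN
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
    plan(query)  # after: SEARCH using the index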

Big Data Technologies: Understanding and experience with big data technologies such as Apache Hadoop, Apache Spark, or Apache Kafka is highly beneficial. Knowledge of distributed computing and parallel processing frameworks is valuable for handling large‑scale data processing.
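
A minimal PySpark sketch of the kind of distributed aggregation described above; it assumes the pyspark package is installed, and a local session stands in for a real cluster.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("demo").getOrCreate()

    df = spark.createDataFrame(
        [("electronics", 120.0), ("books", 15.5), ("electronics", 80.0)],
        ["category", "amount"],
    )

    # The same code scales out: on a cluster this aggregation
    # runs in parallel across partitions.
    df.groupBy("category").agg(F.sum("amount").alias("revenue")).show()
    spark.stop()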

ETL and Data Integration: Proficiency in Extract, Transform, Load (ETL) processes and experience with data integration tools like Apache NiFi, Talend, or Informatica is desirable. Knowledge of data transformation techniques and data quality principles is important for ensuring accurate and reliable data.
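
NiFi, Talend, and Informatica are configured visually, so as a tool-agnostic sketch of the same extract-transform-load pattern with a simple data-quality rule, here is a minimal Python version (file, table, and column names are hypothetical).

    import csv
    import sqlite3

    def extract(path):
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        for row in rows:
            # Data-quality rule: reject rows missing a key field.
            if row.get("email"):
                yield (row["email"].strip().lower(), row.get("country") or "unknown")

    def load(records, conn):
        conn.executemany(
            "INSERT OR IGNORE INTO users (email, country) VALUES (?, ?)", records)
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, country TEXT)")
    load(transform(extract("users.csv")), conn)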

Data Warehousing: Familiarity with data warehousing concepts and experience with popular data warehousing platforms like Amazon Redshift, Google BigQuery, or Snowflake is advantageous. Understanding dimensional modelling and experience in designing and optimising data warehouses is beneficial.
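
A minimal sketch of the dimensional modelling mentioned above: a star schema with one fact table keyed to two dimensions. Table names are illustrative and the DDL uses SQLite for portability.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Dimensions hold descriptive attributes.
        CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INT);
        CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);

        -- The fact table holds measures plus foreign keys to each dimension.
        CREATE TABLE fact_sales (
            date_key    INTEGER REFERENCES dim_date (date_key),
            product_key INTEGER REFERENCES dim_product (product_key),
            quantity    INTEGER,
            revenue     REAL
        );
    """)
    # Analytical queries join the fact table to whichever dimensions they need.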

Cloud Platforms: Knowledge of cloud computing platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) is increasingly important. Experience in deploying data engineering solutions in the cloud and utilising cloud‑based data services is valuable.
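
A brief GCP example using the google-cloud-bigquery client library; it assumes credentials are already configured, and the project, dataset, and table names are placeholders.

    from google.cloud import bigquery

    # Assumes GOOGLE_APPLICATION_CREDENTIALS (or gcloud auth) is set up;
    # the project and table below are hypothetical.
    client = bigquery.Client(project="my-project")

    query = """
        SELECT category, SUM(amount) AS revenue
        FROM `my-project.sales.transactions`
        GROUP BY category
        ORDER BY revenue DESC
    """
    for row in client.query(query).result():
        print(row["category"], row["revenue"])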

Data Pipelines and Workflow Tools: Experience with data pipeline and workflow management tools such as Apache Airflow, Luigi, or Apache Oozie is beneficial. Understanding how to design, schedule, and monitor data workflows is essential for efficient data processing.
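
A minimal Airflow 2.x sketch of designing and scheduling a workflow: two stub Python tasks wired into a daily DAG. The dag_id, schedule, and task bodies are illustrative.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from sources")

    def load():
        print("write data to the warehouse")

    with DAG(
        dag_id="daily_sales_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow >= 2.4; use schedule_interval on older versions
        catchup=False,
    ) as dag:
        # The scheduler runs extract, then load, once per day.
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task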

Problem‑Solving and Analytical Skills: Data engineers should have strong problem‑solving abilities and analytical thinking to identify data‑related issues, troubleshoot problems, and optimise data processing workflows.

Communication and Collaboration: Effective communication and collaboration skills are crucial for working with cross‑functional teams, including data scientists, analysts, and business stakeholders. Data engineers should be able to translate technical concepts into clear and understandable terms.

Education and Experience

Bachelor's degree in Engineering required.

  • Minimum of two years of related experience is highly preferred.
  • Two GCP certifications (to be obtained within 3 months of joining).

Status: Full‑Time

Duration: –

The Devoteam Group is committed to equal opportunities, promoting its employees on the basis of merit and actively fighting against all forms of discrimination. We believe that diversity contributes to the creativity, dynamism, and excellence of our organization. All our positions are open to people with disabilities.

Job Description

Minimum 3 years of experience as a Data Engineer. Experience writing SQL queries, SQL functions, and SQL procedures is mandatory. Experience developing APIs in Java is an advantage. Experience with SSRS reporting, Power BI analytics, Google BigQuery, ETL, and data pipelines is mandatory. Knowledge of and experience in data science with Python is an advantage.
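
To make the SQL requirement concrete, here is a small sketch with Python's sqlite3: a query calling a custom SQL function. SQLite has no stored procedures, so that part is omitted; the table and the 11% VAT rate are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO payments (amount) VALUES (?)",
                     [(100.0,), (250.0,), (40.0,)])

    # Register a Python callable as a SQL function named vat().
    conn.create_function("vat", 1, lambda amount: round(amount * 0.11, 2))

    for row in conn.execute(
            "SELECT id, amount, vat(amount) FROM payments WHERE amount > 50"):
        print(row)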

Job Description

Scope of Work – Data Engineer

  • Explore the DI/DX database (data pipelines, ETL, data integration).
  • Explore and, where needed, develop Datamarts to support DI/DX-related business needs.
  • Handle ad‑hoc requests for queries, data extraction, and data preparation related to DI/DX.
  • Validate extracted data to ensure its quality and reliability for DI/DX.
  • Ensure data consistency and accessibility for DXO stakeholders in relation to DI/DX.

If you are a fit, or can refer a suitable candidate, please contact us via WhatsApp:

#SoftwareDeveloper #Hiring #Career #ITJobs #Java #SQL

Job type: Contract
Contract length: 12 months

Application questions:

  • What is your current age?
  • Do you have work experience building and managing data pipelines, ETL, and data integration?
  • Are you proficient in SQL and in tools/technologies related to Data Warehouses and Datamarts?

Education:

  • Bachelor's degree (S1) (preferred)

Experience:

  • Data Engineer: 4 years (preferred)
Job Description

Requirements:

  • Hold a bachelor's degree (S1) in Information Technology, Computer Engineering, Telecommunications, or Statistics
  • Excellent written & verbal communication skills
  • Min. 3‑5 years of experience with ETL tools
  • Deep understanding of SQL, SQL optimisation, data pipelines, and job optimisation
  • Knowledge of shell scripting and VBScript is a plus
  • Good and effective communication and leadership skills
  • Proactive, self‑motivated, and eager to learn new technologies
  • Good verbal and written English communication skills
  • Good interpersonal skills to relate to people from different units of the company
  • Excellent numerical, analytical, and problem‑solving skills
  • Ability to perform under pressure and tight time constraints; able to work with limited guidance in line with a broad plan or strategy
Job Description
  • Build and optimise robust data pipelines for extracting, transforming, and loading (ETL) data from multiple sources into a central data warehouse or data lake.
  • Integrate data from multiple heterogeneous sources, ensuring data quality, consistency, and availability.
  • Monitor the performance of data systems, identify bottlenecks, and resolve issues related to data quality or processing failures.
Job Description

Company Description

PT Mandiri Sekuritas (Mandiri Sekuritas/Company) has been awarded as Indonesia's Best Investment Bank and Best Broker by the FinanceAsia Country Awards 2022. These recognitions have established the Company's strong position as the Best Investment Bank in Indonesia for 12 consecutive years and Best Broker for 8 consecutive years. Established in 2000, Mandiri Sekuritas provides customers with comprehensive and value‑added capital market financial solutions. The Company obtained its business license as a securities broker and underwriter from Bapepam‑LK, demonstrating its commitment to excellence in financial services.

Role Description

This is a contract, on‑site role for a Data Engineer located in Jakarta. The Data Engineer will be responsible for designing, developing, and managing data pipelines and infrastructure. Daily tasks include data acquisition, processing, and storage solutions; implementing data workflows; optimising performance of data‑centric systems; and ensuring data quality and consistency. Additionally, the Data Engineer will collaborate with other teams to integrate and utilise data effectively.

Qualifications

  • 3–5 years of experience building large‑scale data pipelines in cloud or hybrid environments.
  • Strong in SQL, Python, and Java; skilled with Bash scripting for automation.
  • Hands‑on expertise with GCP, Azure, relational & non‑relational databases, and Hadoop/on‑prem systems.
  • Production experience with Airflow DAGs, Spark, and Flink.
  • Experienced with CI/CD & containerisation (Git, Terraform, Helm, Docker, Kubernetes).
  • Solid grasp of distributed systems (partitioning, replication, fault tolerance).
  • Familiar with financial services data, regulations, and security frameworks.
  • Excellent communicator—able to explain complex pipelines to non‑technical stakeholders.
  • Motivated to stay current with emerging technologies and continuously enhance technical capabilities.
Job Description

As a Data Engineer at Cube Asia, you will use various methods to transform raw data into useful data systems. You'll strive for efficiency by aligning data systems with business goals.

To succeed in this position, you should have prior experience in large‑scale public data collection from the web using open APIs and other tools, as well as a good understanding of the terms, guidelines, and technical considerations governing such data collection.

Data engineer skills also include familiarity with several programming languages and a basic knowledge of machine learning methods. If you are detail‑oriented, with excellent organisational skills and experience in this field, we'd like to hear from you.

Responsibilities

  • Build and maintain scalable data pipelines to process and integrate e‑commerce data from multiple sources.
  • Design and implement a modern cloud‑based data lakehouse architecture using AWS services such as S3, Athena, Glue, Fargate, and Iceberg.
  • Explore tools and solutions for high‑performance data transformation and analysis, such as Polars, DuckDB, and PySpark (a brief sketch follows this list).
  • Work with data analysts to deliver accessible and well‑structured datasets for reporting and advanced analytics.
  • Collaborate with architects on designing data platforms.
  • Explore ways to enhance data quality and reliability.
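
As referenced in the list above, a brief DuckDB sketch of in‑place analytics over a Parquet file. The file and column names are hypothetical; with the httpfs extension the same query could target s3:// paths in a lakehouse.

    import duckdb

    con = duckdb.connect()
    rows = con.execute("""
        SELECT marketplace, COUNT(*) AS orders, SUM(gmv) AS total_gmv
        FROM 'orders.parquet'  -- hypothetical local file; s3:// works via httpfs
        GROUP BY marketplace
        ORDER BY total_gmv DESC
    """).fetchall()
    print(rows)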

What You'll Love About This Role

  • Build from Scratch: Be part of a team creating foundational data systems and processes, shaping the future of our platform.
  • Learn by Doing: Gain hands‑on experience working with modern tools, cloud technologies, and real‑world data challenges.
  • Work on Complex Projects: Tackle exciting and challenging problems, from integrating large‑scale e‑commerce data to optimising data pipelines for performance and scalability.

Requirements

  • Knowledge of programming languages (e.g. Java and Python).
  • Hands‑on experience with SQL database design.
  • Previous experience as a data engineer or in a similar role.
  • Technical expertise with data models, data mining, and segmentation techniques.
  • Great numerical and analytical skills.
  • Willingness to learn and work with new tools and technologies.
Job Description

Hi #TalentReady, our client is looking for a Data Engineer (ETL) for their project.

Full WFO at Jakarta Area | 12 Month Contract (PKWT) | Banking Industry

Responsibilities:

  • Design, develop, deploy, and maintain robust ETL workflows using SSIS to support data integration from multiple sources into the data warehouse.
  • Build and maintain operational and analytical reports using SSRS, delivering insights to business users.
  • Collaborate with business analysts, data architects, and stakeholders to understand data requirements and translate them into technical specifications.
  • Optimise ETL packages for performance, scalability, and error handling.
  • Perform data profiling, validation, and reconciliation to ensure high data quality and integrity (a brief sketch follows this list).
  • Maintain and improve existing SSIS/SSRS solutions.
  • Document ETL designs, data mappings, and workflow processes.
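
SSIS packages are built in Microsoft's designer rather than hand‑coded, so as a tool‑agnostic illustration of the reconciliation step flagged above, here is a Python sketch comparing row counts and a control total between a source and a target (connections, table, and column are hypothetical).

    import sqlite3

    def reconcile(src, tgt, table, measure):
        # Compare row counts and a control total between source and target.
        q = f"SELECT COUNT(*), COALESCE(SUM({measure}), 0) FROM {table}"
        src_count, src_total = src.execute(q).fetchone()
        tgt_count, tgt_total = tgt.execute(q).fetchone()
        assert src_count == tgt_count, f"row counts differ: {src_count} vs {tgt_count}"
        assert abs(src_total - tgt_total) < 0.01, "control totals differ"

    # Tiny demo with two in-memory databases standing in for real systems.
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for db in (src, tgt):
        db.execute("CREATE TABLE sales (amount REAL)")
        db.executemany("INSERT INTO sales VALUES (?)", [(10.0,), (20.5,)])
    reconcile(src, tgt, "sales", "amount")
    print("source and target match")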

If you are interested or have a referral, kindly send your updated CV to:

  • Email:
  • Subject: Name - Position

Notes: Only shortlisted candidates will be contacted.

Good luck

Job Description

On behalf of a fast‑growing digital finance platform, we are currently seeking a skilled and motivated Data Engineer to support the development of scalable data infrastructure and analytical capabilities. Based in Jakarta, this role will be instrumental in enabling data‑driven decision‑making across multiple product lines and markets in Southeast Asia.

You will collaborate closely with cross‑functional teams to design and implement efficient ETL pipelines, develop robust data models, and optimise data processing workflows to ensure high performance and cost‑efficiency in a high‑volume environment.

Key Responsibilities:

  • Develop and maintain scalable data infrastructure, databases, and pipelines to support reliable and efficient data operations.
  • Build and manage data ingestion and transformation workflows to enable seamless data integration across multiple platforms.
  • Apply best practices to ensure the stability, availability, and performance of data systems.
  • Partner with engineering, data science, and product teams to enhance data accessibility and usability across the organisation.
  • Design and sustain large‑scale, efficient data pipelines to process complex datasets.
  • Translate user needs into well‑crafted tools and platform capabilities that address real‑world data challenges.

Qualifications:

  • 1–2 years of hands‑on experience in data engineering or backend development with a strong focus on data systems.
  • Solid coding skills in Python, Java, or Scala.
  • Familiar with source control tools (e.g., Git) and build/dependency management tools like Maven.
  • Knowledge of container tools (Docker) and orchestration frameworks (Kubernetes).
  • Practical experience with real‑time and batch data technologies, such as Spark, Kafka, Flink, Flume, or Airflow (a brief sketch follows this list).
  • Comfortable working with both relational (e.g., MySQL) and NoSQL databases (e.g., MongoDB).
  • Prior involvement with cloud‑based data platforms and large‑scale data solutions.
  • Strong analytical mindset, with the ability to juggle multiple tasks and projects.
  • A proactive communicator and team player who thrives in a collaborative environment.
  • Motivated to stay current with emerging technologies and continuously enhance technical capabilities.
  • Familiarity with automation and DevOps practices is a plus.
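
As referenced in the list above, a minimal real‑time sketch using the kafka-python package; it assumes a broker at localhost:9092 and a 'transactions' topic, both hypothetical.

    import json
    from kafka import KafkaProducer  # kafka-python package

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    # Each payment event is published for downstream stream processors
    # (e.g. Spark or Flink jobs) to consume.
    producer.send("transactions", {"user_id": 42, "amount": 150000, "currency": "IDR"})
    producer.flush()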