
Data Engineer

AME Mineral Economics Pty Ltd

Jakarta Utara

On-site

IDR 200,000,000 - 300,000,000

Full time

Today

Job summary

A global data analytics company in Jakarta is seeking a Data Engineer to design and maintain reliable data pipelines. The successful candidate will build robust ETL/ELT processes and handle complex API integrations, ensuring data availability and reliability. Candidates should have at least three years of experience and a Bachelor’s degree in a relevant field. The role offers a modern office environment and the opportunity to work with leading-edge data technologies.

Benefits

Modern office near public transport

Qualifications

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field.
  • 3+ years of experience building data pipelines in a production environment.
  • Strong understanding of ETL/ELT architecture, API data ingestion, and data modelling principles.

Responsibilities

  • Design and implement robust ETL/ELT pipelines to extract data from APIs.
  • Build and manage scheduled workflows using tools like Apache Airflow.
  • Handle complex API integrations and ensure data integrity.

Skills

Python
SQL
ETL/ELT architecture
Apache Airflow
Data modelling principles

Education

Bachelor’s degree in Computer Science or related field

Tools

Azure
Git

Job description

About AME Group:
The AME Group is a global data analytics and research company. We offer technical and market expertise and research analytics on the materials sector, including renewables, energy transition, metals, and carbon emissions for the energy and infrastructure sectors. AME has a flat management structure and encourages wide engagement across the firm.

Our team of engineers, economists, scientists, financial experts, and programmers produces world‑class, independent research. We focus on the technical intricacies of on‑site engineering and market analysis. Governments, NGOs, fund managers, primary producers, and the financial sector use our analysis and research platforms to drive technological change, plan greenfield projects, and develop low‑carbon plant expansions.

What We Do:

Expand research capabilities in renewables, battery metals, transition commodities, and new energy sectors like hydrogen.

Analyze individual projects such as solar farms, mines, and infrastructure operations to deliver economic and planning insights.

Build industry and plant engineering models and catalog carbon emission data using technical papers, production data, financial metrics, surveys, and site visits.

Our 2025 Focus:
This year, we are prioritizing Southeast Asia, particularly Indonesia. We are establishing a Jakarta office as our regional hub and are eager to collaborate with Indonesian professionals.

About the Role

We’re looking for a Data Engineer to design, implement, and maintain reliable data pipelines that collect and process data from various public APIs into a centralized blob storage and database environment. You’ll play a key role in ensuring data is ingested efficiently, transformed cleanly, and made accessible for downstream analysis and reporting.

This role involves working with Python-based ETL pipelines, workflow orchestration (Apache Airflow or equivalent), and cloud storage solutions (e.g., Azure Blob Storage).
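The posting does not specify how raw API payloads would be organized in blob storage, but a common convention for pipelines like these is date-partitioned paths. A minimal sketch, assuming a hypothetical `blob_key` helper and an Azure-style layout (the `raw/` prefix and `year=/month=/day=` partitioning are illustrative, not AME's actual scheme):

```python
from datetime import date

def blob_key(source: str, dataset: str, run_date: date, fmt: str = "json") -> str:
    """Build a date-partitioned blob path for a raw API payload.

    Layout (an assumption, not a documented convention):
    raw/<source>/<dataset>/year=YYYY/month=MM/day=DD/<dataset>.<fmt>
    """
    return (
        f"raw/{source}/{dataset}/"
        f"year={run_date.year:04d}/month={run_date.month:02d}/day={run_date.day:02d}/"
        f"{dataset}.{fmt}"
    )

# Example: where a 2025-06-01 pull of a hypothetical "gdp" dataset might land.
key = blob_key("worldbank", "gdp", date(2025, 6, 1))
```

Partitioning by ingestion date like this keeps each run's output isolated and makes downstream incremental reads straightforward.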

Key Responsibilities
  • Design and implement robust ETL/ELT pipelines to extract data from multiple public APIs and load into blob storage and relational databases.
  • Build and manage scheduled workflows using tools such as Apache Airflow (or similar).
  • Handle complex API integrations, authentication flows (OAuth2, API keys), pagination, and rate limiting.
  • Structure data in blob storage (e.g., Azure Blob) for efficient downstream access.
  • Transform and load data into structured SQL‑based databases for analytics and access by web applications.
  • Implement monitoring, logging, and validation to ensure data integrity and pipeline reliability.
  • Work closely with data scientists and software developers to ensure data availability and usability.
  • Maintain clear documentation for data sources, transformations, and orchestration logic.
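The pagination and rate-limiting work described above can be sketched in a few lines. Nothing here is AME's actual code: `fetch_page` is a stand-in for a real HTTP client call (which would carry auth headers, API keys, retries, and so on), and the page-numbered contract is an assumption.

```python
import time
from typing import Callable, Iterator

def paginate(
    fetch_page: Callable[[int], list],
    min_interval: float = 0.0,
) -> Iterator[list]:
    """Yield pages from a page-numbered API until an empty page is returned.

    `fetch_page(page)` stands in for a real HTTP call; `min_interval`
    is a crude client-side rate limit (seconds between requests).
    """
    page = 1
    while True:
        records = fetch_page(page)
        if not records:
            break
        yield records
        page += 1
        if min_interval:
            time.sleep(min_interval)

# Usage with a fake three-page "API" in place of a network call:
pages = {1: ["a", "b"], 2: ["c"], 3: []}
rows = [r for page in paginate(pages.get) for r in page]
```

Real APIs vary (cursor tokens, `Link` headers, `Retry-After` on HTTP 429), but the loop-until-empty shape with a polite delay between calls is the common core.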
Qualifications
  • Bachelor’s degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
  • 3+ years of experience building data pipelines in a production environment.
  • Strong understanding of ETL/ELT architecture, API data ingestion, and data modelling principles.
  • Experience with Azure.
Technical Skills

Languages: Python (preferred), SQL (Microsoft T‑SQL)

Orchestration: Apache Airflow or similar

Familiarity with version control (Git) and CI/CD pipelines

Nice to Have
  • Experience integrating data from open economic and financial APIs (e.g., World Bank, IMF, OECD, UN Data, FRED)
  • Familiarity with large structured and semi‑structured datasets (JSON, CSV, Parquet) and associated performance considerations
  • Experience building incremental ingestion or change data capture (CDC) pipelines
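Incremental ingestion, the last item above, typically hinges on a watermark: persist the highest timestamp loaded so far and fetch only newer rows on the next run. A hedged sketch, with an in-memory `state` dict standing in for a real metadata table and `rows` standing in for one API response:

```python
def incremental_pull(rows: list[dict], state: dict, key: str = "updated_at") -> list[dict]:
    """Return only rows newer than the stored watermark, then advance it.

    `state` stands in for a persistent metadata store (e.g. a SQL table);
    assumes the timestamp strings sort lexicographically (ISO 8601 dates do).
    """
    watermark = state.get("watermark", "")
    fresh = [r for r in rows if r[key] > watermark]
    if fresh:
        state["watermark"] = max(r[key] for r in fresh)
    return fresh

state: dict = {}
# First run: no watermark yet, so every row is new.
batch1 = incremental_pull(
    [{"id": 1, "updated_at": "2025-01-01"}, {"id": 2, "updated_at": "2025-01-02"}],
    state,
)
# Second run: only the 2025-01-03 row is newer than the stored watermark.
batch2 = incremental_pull(
    [{"id": 2, "updated_at": "2025-01-02"}, {"id": 3, "updated_at": "2025-01-03"}],
    state,
)
```

Full CDC goes further (capturing updates and deletes from database logs), but a watermark column is the usual starting point for append-mostly API sources.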
Soft Skills
  • Strong analytical and problem‑solving mindset
  • Clear communication of technical concepts to non‑technical stakeholders
  • Attention to detail and data accuracy
  • Proactive approach to monitoring
What We Offer

Modern, central office near public transport and key amenities
