
A global data analytics company in Jakarta is seeking a Data Engineer to design and maintain reliable data pipelines. The successful candidate will build robust ETL/ELT processes and handle complex API integrations, ensuring data availability and reliability. Candidates should have at least 3 years of experience and a Bachelor’s degree in a relevant field. The role offers a modern office environment and opportunities to work with leading-edge data technologies.
About AME Group:
The AME Group is a global data analytics and research company. We provide technical and market expertise and research analytics on the materials sector, covering renewables, energy transition, metals, and carbon emissions for the energy and infrastructure sectors. AME has a flat management structure and encourages wide engagement across the firm.
Our team of engineers, economists, scientists, financial experts, and programmers produces world‑class, independent research. We focus on the technical intricacies of on‑site engineering and market analysis. Governments, NGOs, fund managers, primary producers, and the financial sector use our analysis and research platforms to drive technological change, plan greenfield projects, and develop low‑carbon plant expansions.
What We Do:
Expand research capabilities in renewables, battery metals, transition commodities, and new energy sectors like hydrogen.
Analyze individual projects such as solar farms, mines, and infrastructure operations to deliver economic and planning insights.
Build industry and plant engineering models and catalog carbon emission data using technical papers, production data, financial metrics, surveys, and site visits.
Our 2025 Focus:
This year, we are prioritizing South East Asia, particularly Indonesia. We are establishing a Jakarta office as our regional hub and are eager to collaborate with Indonesian professionals.
We’re looking for a Data Engineer to design, implement, and maintain reliable data pipelines that collect and process data from various public APIs into a centralized blob storage and database environment. You’ll play a key role in ensuring data is ingested efficiently, transformed cleanly, and made accessible for downstream analysis and reporting.
This role involves working with Python-based ETL pipelines, workflow orchestration (Apache Airflow or equivalent), and cloud storage solutions (e.g., Azure Blob Storage).
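To give a flavour of the ETL work described above, here is a minimal sketch of the extract-transform-load pattern in Python. It is illustrative only: the record fields (`id`, `commodity`, `price`), the cleaning rules, and the local output directory are assumptions, not details from this posting, and in production the load step would write to blob storage (e.g. Azure Blob Storage) and run under an orchestrator such as Airflow rather than to the local filesystem.

```python
import json
from pathlib import Path


def transform(records):
    """Normalize raw API records: drop incomplete rows, coerce types,
    and tidy string fields. Field names here are illustrative."""
    cleaned = []
    for rec in records:
        # Skip rows missing the keys we need downstream.
        if rec.get("id") is None or rec.get("price") is None:
            continue
        cleaned.append({
            "id": int(rec["id"]),
            "commodity": str(rec.get("commodity", "")).strip().lower(),
            "price_usd": float(rec["price"]),
        })
    return cleaned


def load(records, out_dir, name):
    """Persist cleaned records as JSON. A local directory stands in for
    the blob container an orchestrated pipeline would target."""
    out_path = Path(out_dir) / f"{name}.json"
    out_path.write_text(json.dumps(records, indent=2))
    return out_path
```

In an Airflow deployment, `transform` and `load` would typically become separate tasks so each stage can be retried and monitored independently.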
Languages: Python (preferred), SQL (Microsoft T‑SQL)
Orchestration: Apache Airflow or similar
Familiarity with version control (Git) and CI/CD pipelines
Modern, central office close to public transport and key amenities