Data Engineer

Ctlyst

Jawa Barat

Hybrid

IDR 200.000.000 - 300.000.000

Full time

Today

Job summary

A tech consulting firm in Indonesia is looking for a Data Engineer to design and optimize data pipelines and ETL workflows. Ideal candidates will have over a year of experience in data engineering, with strong SQL skills and proficiency in Azure Data Factory. This role requires engaging with clients to implement effective data solutions. Competitive compensation and hybrid work options are offered.

Benefits

Competitive salary
Hybrid work model
Collaborative environment

Qualifications

  • 1+ years of experience as a Data Engineer or similar.
  • Solid understanding of data warehouse design, particularly Star Schema modeling.
  • Hands-on experience with ETL tools like Azure Data Factory.

Responsibilities

  • Engage directly with customers to understand business requirements.
  • Design, build, and maintain data ingestion and ETL pipelines.
  • Implement data warehouse solutions and monitor pipeline performance.

Skills

ETL processes
Data pipelines
Data ingestion
Azure Data Factory
SQL skills
Data modeling

Education

Bachelor's degree in computer science or related field

Tools

Snowflake
SSIS
SSAS
Power BI

Posted today

Job Description

We're looking for a Data Engineer to join our consulting team and work directly with clients on the design, development, and optimization of data pipelines, ETL workflows, and data warehouses. This role is ideal for someone who enjoys solving complex data challenges while collaborating closely with stakeholders.

Key Responsibilities

  • Engage directly with customers to understand business requirements and translate them into technical data solutions.
  • Design, build, and maintain data ingestion and ETL pipelines to support analytics and reporting.
  • Implement data warehouse solutions following Star Schema best practices (a minimal sketch follows this list).
  • Develop and orchestrate workflows using Azure Data Factory and/or Microsoft Fabric.
  • Leverage SSIS (SQL Server Integration Services) for ETL and SSAS (SQL Server Analysis Services) for analytical modeling.
  • Build and manage data solutions in Snowflake.
  • Monitor, troubleshoot, and optimize pipelines for performance and reliability.
  • Provide technical guidance and best practices to clients and internal teams.
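
As context for the Star Schema item above, here is a minimal, hypothetical sketch of that modeling style: a central fact table joined to denormalized dimension tables. The tables and columns are invented for illustration, not taken from this role.

```python
# Minimal star-schema sketch (hypothetical tables), using SQLite so it runs anywhere.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);
    CREATE TABLE fact_sales (                      -- central fact table
        sale_id      INTEGER PRIMARY KEY,
        date_key     INTEGER REFERENCES dim_date(date_key),
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount       REAL                          -- additive measure
    );
""")

# Analytics queries join the fact table to whichever dimensions they need:
rows = conn.execute("""
    SELECT d.year, c.segment, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d     ON d.date_key = f.date_key
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY d.year, c.segment
""").fetchall()
```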

Required Skills & Qualifications

  • 1+ years of experience as a Data Engineer or in a similar data-focused role.
  • Strong experience with ETL processes, data pipelines, and data ingestion.
  • Solid understanding of data warehouse design, particularly Star Schema modeling.
  • Hands‑on experience with Azure Data Factory and/or Microsoft Fabric.
  • Proficiency with SSIS and SSAS.
  • Experience working with Snowflake.
  • Strong SQL skills and understanding of relational database concepts.
  • Excellent communication skills for engaging with customers and translating requirements into solutions.

Nice to Have

  • Consulting or client‑facing project experience.
  • Exposure to BI tools (Power BI, Tableau) and data governance practices.
  • Knowledge of cloud platforms beyond Azure (AWS, GCP).

Posted today

Job Description

PT Mitra Solusi Telematika (MST) is seeking a passionate Data Engineer to join our dynamic technology team in our Jakarta office. As an expert, you will play a crucial role in the design, development, and maintenance of our data infrastructure.

Your primary responsibilities will include:

  • Data Integration: Collaborate with cross‑functional teams to gather data requirements and develop ETL processes for seamless data integration.
  • Performance Optimization: Continuously monitor and enhance the performance of data warehousing solutions, ensuring scalability and efficiency.
  • Data Modeling: Design and implement data models that meet business objectives and adhere to best practices.
  • Data Governance: Implement data governance and security measures to ensure data quality and compliance with regulatory standards.
  • Documentation: Maintain clear and concise documentation of data pipelines, processes, and configurations.
  • Snowflake Expertise (if applicable): Utilize your deep knowledge of Snowflake to architect, build, and optimize data pipelines and warehouse solutions for our clients (a rough sketch follows this list).
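
As a loose illustration of the Snowflake work listed above (not MST's actual code), a staged load via the official Python connector might look like the following; the account, credentials, and object names are placeholders.

```python
# Hypothetical Snowflake load via the snowflake-connector-python package.
# Account, credentials, and table/stage names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",          # placeholder account identifier
    user="ETL_USER",
    password="...",             # in practice, pull from a secrets manager
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # COPY INTO moves files already staged (e.g., in an external stage) into a table.
    cur.execute("COPY INTO orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)")
finally:
    cur.close()
    conn.close()
```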

Qualifications:

  • Bachelor's degree in computer science, data science, software engineering, information systems, or related field.
  • Minimum of 3 years of experience as a Data Engineer.
  • Proficient in programming languages such as R, SQL, Python, and C++, as well as ETL development.
  • Knowledge of visualization tools such as Power BI, Tableau, etc.
  • Currently holds or is considering pursuing relevant certifications.
  • AWS Data Engineer/AWS Cloud Engineer certification is highly preferred for this role.
  • Strong communication skills, a proactive attitude, and great teamwork traits.
  • Willing to be placed in Jakarta on a 6-month contract with hybrid terms.

Posted today

Job Description

At Insignia, we're looking for a Mid‑Level Data Engineer who's worked hands‑on with Databricks and has solid experience across AWS, GCP, or Azure. You'll design and maintain end‑to‑end data pipelines that power analytics, machine learning, and business decision‑making — from ingestion to transformation, warehousing, and beyond.

You don't need to be a cloud expert in all three platforms — but you should have deep experience in at least one, and comfort navigating multi‑cloud environments where needed. If you've built production ETL/ELT workflows on the Lakehouse, optimized Delta tables, or integrated Databricks with orchestration tools like Airflow — this is your kind of challenge.
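
To make the Lakehouse/Delta work above concrete, here is a minimal, hypothetical bronze-to-silver step in PySpark. It assumes a Spark session configured with the delta-spark package; the paths and column names are invented for illustration.

```python
# Hypothetical bronze -> silver Delta step; assumes a Spark session configured
# with delta-spark and access to the (invented) storage paths below.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

raw = spark.read.format("delta").load("s3://example-lake/bronze/orders")

clean = (
    raw.dropDuplicates(["order_id"])                      # dedupe on the business key
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))   # partition column
       .filter(F.col("amount") > 0)                       # drop malformed rows
)

# Partitioned overwrite keeps daily reruns idempotent (one common design choice).
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-lake/silver/orders"))
```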

This is a hybrid role based in West Jakarta, blending focused collaboration with flexible execution.

What You'll Do:

Design, build, and maintain scalable data pipelines using Databricks (Lakehouse, Delta Lake, Spark)

Work across cloud platforms (AWS preferred, also GCP/Azure) — S3, BigQuery, Blob Storage, etc.

Transform raw data into structured, reliable datasets for analytics and ML teams

Optimize performance, cost, and governance across data workflows

Collaborate with analysts, MLEs, and software engineers to ensure data readiness

Implement CI/CD, monitoring, and documentation practices for data systems

Who You Are:

2–4 years of experience in data engineering, ideally within tech‑driven or digital service environments

Hands‑on experience with Databricks — including PySpark, SQL, and workflow automation

Proven track record working with at least one major cloud provider: AWS (S3, Glue, Redshift), GCP (BigQuery, Pub/Sub), or Azure (Data Lake, Synapse)

Proficient in Python, SQL, and data modeling (medallion architecture, star schema, etc.)

Experience with orchestration tools like Airflow, Prefect, or Step Functions

Bonus: Familiarity with Unity Catalog, MLflow, or real‑time streaming (Kafka, Kinesis)

Fluent in English — written and spoken

Collaborative, proactive, and passionate about building clean, maintainable data infrastructure

Why Join Us?

Because great data systems aren't just fast — they're trusted, reusable, and built to evolve. If you're ready to work on high-impact projects where your pipelines power AI and insight, let's talk.

Hybrid role — West Jakarta


Posted today

Job Description

Job Description:

  • Perform application system development related to the Realtime Engine System and Big Data.
  • Ensure that the development process is in accordance with the timeline
  • Follow / comply with applicable application system development policies and procedures

Technical Requirements:

  • Application system development (familiar with SAS, Java, .Net, SQL, SSIS/ETL, Oracle, etc.)
  • Skills in analytics tools such as R and Python are an advantage

Requirements:

  • Minimum Bachelor's degree (S1) majoring in Computer Science, preferably from a reputable university
  • Minimum 1 year of experience
  • Willing to be located in Bintaro

Posted today

Job Description

"What does it take to build data systems that scale without breaking? A Data Engineer who knows the cloud isn't just a place — it's a mindset.">

At Insignia, we're looking for a Data Engineer with proven experience in AWS and a track record of building reliable, scalable data pipelines.

You'll design and maintain ETL/ELT workflows, optimize data models, and ensure our analytics and machine learning teams have clean, accessible data. If you're comfortable in S3, Glue, Redshift, Lambda, and Step Functions, and you care about performance, governance, and simplicity — this is your chance to build infrastructure that powers real decisions.

This is a hybrid role in Kebon Jeruk, West Jakarta, where you'll work closely with data scientists, analysts, and software engineers to turn raw data into value.

What You'll Do:

Design, build, and maintain scalable data pipelines on AWS

Model and structure data for analytics, reporting, and ML-readiness

Optimize data storage, query performance, and cost-efficiency across cloud services

Collaborate with internal teams to understand needs and deliver robust solutions

Implement data quality checks, monitoring, and documentation

Automate workflows using Glue, Lambda, Step Functions, or Airflow (see the sketch after this list)

Support secure access, governance, and compliance across data systems
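
For a flavor of the workflow automation named above, here is one hypothetical shape of an Airflow DAG; the DAG id, task names, and callables are invented, with the extract/load bodies left as placeholders.

```python
# Hypothetical two-step ETL DAG in Airflow; dag_id, task names, and the
# extract/load callables are invented placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    """Placeholder: pull raw files from S3 (e.g., with boto3) into staging."""

def load_to_redshift():
    """Placeholder: COPY the staged files into a Redshift table."""

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load   # load runs only after extract succeeds
```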

Who You Are:

2–4 years of experience in data engineering, preferably in a tech‑driven or digital service environment

Strong hands‑on experience with AWS data & compute services (S3, Glue, Redshift, EC2, Lambda, etc.)

Proficient in SQL, Python, and data modeling (star schema, medallion architecture, etc.)

Experience with ETL/ELT pipelines, workflow orchestration (e.g., Airflow, Step Functions), and CI/CD for data

Bonus: Familiarity with data lakehouses, CDC, real‑time streaming (Kinesis), or MLOps integration

Fluent in English — written and spoken

Collaborative, proactive, and passionate about building systems that last

Why Join Us?

Because great data engineering isn't just about moving data — it's about making it mean something. At Insignia, you'll have the autonomy to design, the support to grow, and the impact to show for it.

We offer:

  • Performance-driven environment with real ownership
  • A collaborative, fast-moving culture with direct access to technical leads
  • Exposure to complex, high-impact projects across industries

If you're ready to build the backbone of intelligent systems, let's talk.

Posted today

Job Description

ABOUT THE ROLE

We are building a new data analytics team to modernize and deliver B2B products with high quality and efficiency. As a data engineer, you will mostly work with a modern data stack to grow and empower our teams and to monitor sales activities. A young mind with a good work ethic is highly valued.

QUALIFICATION

Currently residing in Jabodetabek

1–2 years of experience in a data engineering role

Have a degree related to computer science OR at least 2 years of experience in data engineering

Have experience with Google Sheets, Python, SQL, bash scripting, dbt, and Tableau Public (Desktop)

Good understanding of SDLC, version control, and deployment strategy

Preferably have demonstrable experience with one or more of these:

  • Snowflake
  • Prefect, Dagster, or Astronomer
  • Web scraping libraries and applications

WHAT WE OFFER

  • Work-from-office hours: 8.30 AM PM
  • 12-month contract, with the opportunity to move to a permanent role
  • Complimentary coffee
Data Science Engineer

Posted today

Job Description

Job Title:

Data Science Engineer

About The Role:

To apply data science techniques and learning algorithms to solve business problems, improve decision-making, and ensure the efficient deployment of models in production.

What Will You Do:

  • Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
  • Analyzing the ML algorithms that could be used to solve a given problem
  • Exploring and visualizing data to gain an understanding of it
  • Identifying differences in data distribution that could affect performance when deploying the model in the real world
  • Verifying data quality, and/or ensuring it via data cleaning
  • Supervising the data acquisition process if more data is needed
  • Defining the preprocessing or feature engineering to be done on a given dataset
  • Training models and tuning their hyperparameters (see the sketch after this list)
  • Analyzing the errors of the model and designing strategies to overcome them
  • Deploying models to production
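
To ground the training-and-tuning item above, here is a minimal scikit-learn sketch; the dataset and parameter grid are chosen purely for illustration and are not specific to this role.

```python
# Minimal hyperparameter-tuning sketch with scikit-learn; dataset and
# parameter grid are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 8]},
    cv=5,                    # 5-fold cross-validation
    scoring="roc_auc",
)
search.fit(X_train, y_train)

# Evaluate the tuned model on data it never saw during the search.
print(search.best_params_, search.score(X_test, y_test))
```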

What we are looking for:

  • Bachelor's degree in Computer Science, Data Science, Mathematics, or a related field.
  • 4+ years of experience in data science, machine learning, or related fields.
  • Data Science or Machine Learning certifications (e.g., Google Professional Data Engineer, Microsoft Certified: Azure Data Scientist).
  • Experience with specific data science platforms (e.g., AWS Sagemaker, Google AI Platform) is a plus.

Soft Skill Requirements:

  • Strong problem-solving and analytical skills.
  • Effective communication skills for presenting findings to stakeholders.
  • Ability to work collaboratively in a team environment.
  • Adaptability and a proactive approach to problem-solving.

Technical Skill Requirements:

  • Proficiency in data science tools and languages (Python, R, SQL).
  • Expertise in machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
  • Strong knowledge of data processing, feature engineering, and model validation techniques.
  • Experience with cloud platforms (e.g., AWS, GCP) and deployment of models to production.
Expert, Data Lake Engineer

Posted today

Job Description

We are seeking a highly skilled and experienced Expert, Data Lake Engineer to be actively involved in the design, implementation, and maintenance of our enterprise‑grade data lake infrastructure. This role is critical in enabling advanced analytics, data science, and AI/ML initiatives by ensuring robust data ingestion, storage, and processing capabilities.

Key Responsibilities:

Data Acquisition & Structuring:

  • Extract and consolidate data from diverse primary (SAP) and secondary sources. Reorganize and format data to support downstream analytics, machine learning, and AI workflows.

Data Lake Operations:

  • Oversee daily operations related to data collection, storage, and processing. Architect and maintain scalable data lake solutions using modern technologies and best practices.
  • Design and implement ETL/ELT pipelines. Ensure high performance, reliability, and scalability of data systems, including batch data and/or near real‑time data (see the sketch after this list). Monitor pipeline health and perform warehouse cleansing to maintain data integrity.
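
As one hedged illustration of the near-real-time ingestion mentioned above, a kafka-python consumer landing raw events in the lake might look like this; the topic, broker, and path names are invented.

```python
# Hypothetical near-real-time ingestion sketch with kafka-python: consume
# events and append them to the lake as JSON lines. Names are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "plant-telemetry",                       # illustrative topic name
    bootstrap_servers=["broker-1:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Land raw events for later ELT; runs until interrupted.
with open("/data/lake/raw/telemetry.jsonl", "a") as sink:
    for message in consumer:
        sink.write(json.dumps(message.value) + "\n")
```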

System Monitoring & Optimization:

  • Continuously monitor system performance, troubleshoot issues, optimize resource usage, and ensure seamless data flow across platforms.

Job Requirements

  • 7–12 years of hands‑on experience in data lake implementation and operations.
  • Proven track record in designing and deploying data lake architectures, topologies, and infrastructures.

Technical Expertise:

  • Strong proficiency in ETL/ELT pipeline development. Experience with SAP HANA is a plus.
  • Deep understanding of data lake technologies and deployment tools, including but not limited to: Cloudera, Hadoop, Spark, Kafka, Airflow, Dremio.

Soft Skills & Passion:

  • Passionate about data processing and engineering excellence.
  • Strong problem‑solving skills and ability to work collaboratively in cross‑functional teams.

Why Join Us?

  • Be part of a forward‑thinking team driving innovation in data and AI.
  • Work on impactful projects that shape business decisions and operational efficiency.
  • Enjoy a dynamic work environment with opportunities for growth and learning.

If you’re a data engineering expert ready to take on complex challenges and build scalable data infrastructure, we’d love to hear from you.


Posted today

Job Description

Responsibilities:

  • Perform regular assessments and documentation of server and storage conditions.
  • Conduct preventive and corrective maintenance for Dell systems.
  • Monitor system health, capacity, and backup performance.
  • Ensure OS, firmware, and configuration compliance.
  • Identify and report hardware anomalies or failures.
  • Maintain accurate records of maintenance and system updates.

Requirements:

  • Minimum Diploma (D3) in Information Technology, Computer Engineering, or related field.
  • 3 years of experience in server or storage support.
  • Familiar with Dell systems (PowerMax, Unity XT, or similar).
  • Knowledge of OS, firmware, and system monitoring tools.
  • Good analytical and documentation skills.
  • Willing to work in rotating (24/7) shifts.
  • Willing to work under a 1‑year project‑based contract.