Showing 166 Junior Data Engineer jobs in Jakarta
Data Science Engineer
Posted today
Job Description
About The Role:
To apply data science techniques and machine learning algorithms to solve business problems, improve decision‑making, and ensure the efficient deployment of models in production.
What Will You Do:
- Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
- Analyzing the ML algorithms that could be used to solve a given problem
- Exploring and visualizing data to gain an understanding of it
- Identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining the preprocessing or feature engineering to be done on a given dataset
- Training models and tuning their hyperparameters (a minimal tuning sketch follows this list)
- Analyzing the errors of the model and designing strategies to overcome them
- Deploying models to production
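For illustration only, a minimal sketch of the training-and-tuning step above, using scikit-learn's GridSearchCV on a synthetic dataset. The estimator, parameter grid, and scoring metric are assumptions for the example, not part of the posting:

```python
# Hypothetical sketch: tuning a gradient-boosting classifier with scikit-learn.
# The dataset, estimator, and parameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

param_grid = {
    "n_estimators": [100, 200],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid,
    cv=5,               # 5-fold cross-validation guards against one lucky split
    scoring="roc_auc",  # pick the metric that tracks the business objective
    n_jobs=-1,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out AUC:", search.score(X_test, y_test))
```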
Minimum Qualifications (what we are looking for):
- Bachelor's degree in Computer Science, Data Science, Mathematics, or a related field.
- 4+ years of experience in data science, machine learning, or related fields.
- Data Science or Machine Learning certifications (e.g., Google Professional Data Engineer, Microsoft Certified: Azure Data Scientist).
- Experience with specific data science platforms (e.g., AWS Sagemaker, Google AI Platform) is a plus.
- Willing to work in the Jakarta office
Technical Skill Requirements:
- Proficiency in data science tools and languages (Python, R, SQL).
- Expertise in machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, Scikit‑learn).
- Strong knowledge of data processing, feature engineering, and model validation techniques.
- Experience with cloud platforms (e.g., AWS, GCP) and deployment of models to production (a minimal serving sketch follows this list).
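As a sketch of what "deployment of models to production" can look like at its simplest: a scikit-learn model persisted with joblib and served behind FastAPI. The file name model.joblib, the feature layout, and the /predict route are hypothetical:

```python
# Hypothetical sketch: serving a persisted scikit-learn model with FastAPI.
# "model.joblib" and the feature list are illustrative assumptions.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # saved earlier via joblib.dump(model, ...)

class Features(BaseModel):
    values: list[float]  # one row of numeric features, in training-time order

@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn expects a 2-D array: one inner list per row
    pred = model.predict([features.values])
    return {"prediction": pred.tolist()[0]}

# Run locally (assuming uvicorn is installed): uvicorn main:app --reload
```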
Soft Skill Requirements:
- Strong problem‑solving and analytical skills.
- Effective communication skills for presenting findings to stakeholders.
- Ability to work collaboratively in a team environment.
- Adaptability and a proactive approach to problem‑solving.
Transactions are the lungs of a breathing economy, which is why our first step starts with a big dream: to pave the way toward freedom of transaction. Since 2007, DOKU has been the first electronic payment system and risk management company in Indonesia. From paying and getting paid to transferring funds, all are possible with DOKU.
For more than a decade, we have grown alongside businesses ranging from large and medium-sized companies to personal sellers across many lines of business, including transportation, tourism, insurance, retail, donations, and communities. Collaboration with partners such as local and international banks, as well as non-banking institutions, has strengthened our reputation as a trusted local electronic payment solution. Our business grows together with our partners' businesses, so we have built a payment ecosystem that supports and strengthens theirs.
The formation of three product pillars that respond to business needs across all layers of society has marked our transformation from "The Better Way to Pay" to "Think Beyond Payments". The first two pillars are Payment Gateway and Transfer Services, serving corporates, SMEs, start-ups, and local and international MSMEs; the third, Collaborative Commerce, is designed to empower communities and personal usage.
Data Science Engineer 4
Jakarta, Jakarta | IDR 500,000 - IDR 2,000,000 | Amadeus
Posted today
Job Description
Job Title
Data Science Engineer 4
Responsibilities
- Design and implement GenAI applications (both agentic and non‑agentic) focused towards solving API integration challenges encompassing all phases of development.
- Collaborate with cross‑functional teams (DevOps, QA & PDAs) and interdepartmental teams to ensure smooth delivery and operation
- Write clean, efficient, and well‑documented code adhering to best practices
- Identify and implement opportunities for performance optimization and resource efficiency
- Design and implement GenAI applications as services, both agentic and non‑agentic
- Design and implement APIs to interact with our AI application services and agents as a service (a minimal FastAPI sketch follows this list)
- Design, develop, and test microservices using Quarkus/FastAPI
- Research and evaluate new techniques and technology to build the right product
- Contribute to the creation of robust and scalable cloud‑native architecture
- Stay up‑to‑date with the latest advancements in GenAI and associated technologies
- Mentor and motivate junior developers, sharing your knowledge and expertise
- Participate in code reviews and ensure adherence to design principles and technical standards
- Identify ways to evaluate the system
- Identify ways to implement MLOps
- Identify and troubleshoot potential issues, proactively solving problems and mitigating risks
- Contribute to continuous improvement processes and knowledge sharing within the team
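For illustration, a minimal sketch of a non-agentic GenAI capability exposed as a FastAPI microservice, in the spirit of the responsibilities above. The OpenAI client, the model name gpt-4o-mini, and the /v1/complete route are assumptions for the example, not Amadeus's actual stack:

```python
# Hypothetical sketch: a GenAI capability exposed as a FastAPI microservice.
# Model name and route are illustrative; OPENAI_API_KEY is read from the environment.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # picks up OPENAI_API_KEY from the environment

class CompletionRequest(BaseModel):
    prompt: str

@app.post("/v1/complete")
def complete(req: CompletionRequest) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute the team's standard model
        messages=[{"role": "user", "content": req.prompt}],
    )
    return {"completion": response.choices[0].message.content}
```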
Qualifications
- Good understanding of GPT APIs and libraries like LangChain
- Experience in building LLM‑based systems leveraging multiple LLMs
- Good understanding of Vector Databases
- Hands‑on experience in various prompt engineering patterns
- Hands‑on experience with RAG (retrieval‑augmented generation; a minimal sketch follows this list)
- Hands‑on experience in developing Agentic applications, with good knowledge of the following frameworks: LangGraph, CrewAI, OpenAI's Swarm
- Must have experience taking a popular GenAI‑based application to production
- In‑depth knowledge of building services and microservices architecture
- Strong understanding of design principles and patterns
- Excellent communication and collaboration skills, ability to work effectively in a team
- Problem‑solving and analytical skills, with a proactive approach to overcoming challenges
- Good knowledge of at least one of the following: FastAPI, Quarkus, Spring Boot, or any other micro‑framework for building microservices
- Passion for learning and staying updated with the latest advancements in GenAI
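A minimal sketch of the RAG pattern referenced above: embed a small corpus, retrieve the closest document by cosine similarity, and ground the answer in the retrieved context. The OpenAI model names and the two-line corpus are illustrative assumptions; a production system would use a vector database rather than an in-memory array:

```python
# Hypothetical sketch of RAG: embed, retrieve by cosine similarity, answer.
# Models and corpus are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()
corpus = [
    "Refunds are processed within 5 business days.",
    "API keys rotate every 90 days.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(corpus)

def answer(question: str) -> str:
    q = embed([question])[0]
    # cosine similarity; a real system would query a vector database here
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = corpus[int(scores.argmax())]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))
```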
Bonus Points
- Experience with cloud‑native technologies (Kubernetes, Docker, etc.)
- Experience with Microsoft Azure
- Experience with LLM evals
- Experience with CI/CD pipelines and automation tools
- Leadership experience and the ability to mentor junior developers
If you are a highly motivated and skilled developer of GenAI applications (not just chatbots), with a background in Java/C++, passionate about solving business and development challenges, and interested in building distributed, cloud‑native GenAI applications (agentic or non‑agentic), we encourage you to apply.
Diversity & Inclusion
Amadeus is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to gender, race, ethnicity, sexual orientation, age, beliefs, disability or any other characteristics protected by law.
Jakarta, Jakarta | IDR 9,000,000 - IDR 12,000,000 | Kredit Pintar
Posted today
Job Description
Job Responsibilities
- Participate in the construction and development of data warehouses for financial projects, design models according to business requirements, and implement ETL.
- Provide data report support for business departments.
- Participate in optimizing ETL to improve code efficiency and reduce costs.
Job Qualifications
- Proficient in Spark, able to develop in Python and Scala, with good experience in performance tuning (a minimal PySpark ETL sketch follows this list);
- Familiar with various data warehouse modeling theories, proficient in data model design and data layer design;
- Priority will be given to those with Tableau development experience;
- Priority will be given to those with a background in finance and e‑commerce projects;
- Diligent, responsible, meticulous, with good team spirit, analytical skills, and communication skills.
- Spark;
- Python;
- Scala;
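For illustration, a minimal PySpark ETL step in the spirit of the responsibilities above: extract raw transactions, transform them into a daily aggregate, and load the result into a warehouse layer. The S3 paths and column names are invented for the example:

```python
# Hypothetical sketch: a small PySpark ETL job. Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_txn_etl").getOrCreate()

raw = spark.read.parquet("s3://raw/transactions/")          # extract
daily = (raw
         .withColumn("txn_date", F.to_date("created_at"))   # transform
         .groupBy("txn_date", "merchant_id")
         .agg(F.sum("amount").alias("gross_amount"),
              F.count("*").alias("txn_count")))
(daily.write                                                # load
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("s3://warehouse/daily_txn/"))
```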
Posted today
Job Description
Devoteam is a leading consulting firm focused on digital strategy, tech platforms, and cybersecurity. By combining creativity, tech, and data insights, we empower our customers to transform their business and unlock the future. With 25 years of experience and 8,000 employees across Europe and the Middle East, Devoteam promotes responsible tech for people and works to create better change.
#Creative Tech for Better Change
In January 2021, Devoteam launched its new strategic plan, Infinite 2024, with the ambition to become the #1 EMEA partner of the leading cloud-based platform companies (AWS, Google Cloud, Microsoft, Salesforce, ServiceNow), further reinforced by deep expertise in digital strategy, cybersecurity, and data.
The Data Engineer will be responsible for the following activities:
- Work closely with data architects and other stakeholders to design scalable and robust data architectures that meet the organization's requirements
- Develop and maintain data pipelines, which involve the extraction of data from various sources, data transformation to ensure quality and consistency, and loading the processed data into data warehouses or other storage systems
- Responsible for managing data warehouses and data lakes, ensuring their performance, scalability, and security
- Integrate data from different sources, such as databases, APIs, and external systems, to create unified and comprehensive datasets.
- Perform data transformations and implement Extract, Transform, Load (ETL) processes to convert raw data into formats suitable for analysis and reporting
- Collaborate with data scientists, analysts, and other stakeholders to establish data quality standards and implement data governance practices
- Optimise data processing and storage systems for performance and scalability
- Collaborate with cross‑functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions
Programming Skills: Proficiency in programming languages such as Python, Java, Scala, or SQL is essential for data engineering roles. Data engineers should have experience in writing efficient and optimized code for data processing, transformation, and integration.
Database Knowledge: Strong knowledge of relational databases and SQL, together with experience with database management systems (DBMS), is crucial. Familiarity with data modeling, schema design, and query optimization is important for building efficient data storage and retrieval systems.
Big Data Technologies: Understanding and experience with big data technologies such as Apache Hadoop, Apache Spark, or Apache Kafka is highly beneficial. Knowledge of distributed computing and parallel processing frameworks is valuable for handling large‑scale data processing.
ETL and Data Integration: Proficiency in Extract, Transform, Load (ETL) processes and experience with data integration tools like Apache NiFi, Talend, or Informatica is desirable. Knowledge of data transformation techniques and data quality principles is important for ensuring accurate and reliable data.
Data Warehousing: Familiarity with data warehousing concepts and experience with popular data warehousing platforms like Amazon Redshift, Google BigQuery, or Snowflake is advantageous. Understanding dimensional modeling and experience in designing and optimizing data warehouses is beneficial.
Cloud Platforms: Knowledge of cloud computing platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) is increasingly important. Experience in deploying data engineering solutions in the cloud and utilizing cloud-based data services is valuable.
Data Pipelines and Workflow Tools: Experience with data pipeline and workflow management tools such as Apache Airflow, Luigi, or Apache Oozie is beneficial. Understanding how to design, schedule, and monitor data workflows is essential for efficient data processing.
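A minimal sketch of an Airflow DAG as mentioned above, wiring three stub tasks into an extract-transform-load sequence. The dag_id, schedule, and task bodies are illustrative assumptions:

```python
# Hypothetical sketch: a minimal Airflow DAG for extract -> transform -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load  # dependencies define the run order
```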
Problem‑Solving and Analytical Skills: Data engineers should have strong problem‑solving abilities and analytical thinking to identify data‑related issues, troubleshoot problems, and optimize data processing workflows.
Communication and Collaboration: Effective communication and collaboration skills are crucial for working with cross‑functional teams, including data scientists, analysts, and business stakeholders. Data engineers should be able to translate technical concepts into clear and understandable terms.
Education and Experience
- Bachelor's degree in Engineering required.
- Minimum of two years of related experience is highly preferred.
- Two certifications in GCP (within 3 months after joining).
Status: Full‑Time
Duration: –
The Devoteam Group is committed to equal opportunities, promoting its employees on the basis of merit and actively fighting against all forms of discrimination. We believe that diversity contributes to the creativity, dynamism, and excellence of our organization. All our positions are open to people with disabilities.
Posted today
Job Description
a. Minimum experience: 3 years as a Data Engineer
b. Experience with SQL queries, SQL functions, and SQL procedures is mandatory
c. Experience in Java API development is an advantage
d. Experience with SSRS reporting, Power BI analytics, Google BigQuery, ETL, and data pipelines is mandatory
e. Knowledge of and experience in data science with Python is an advantage
Jakarta, Jakarta | IDR 8,000,000 - IDR 12,000,000 | PT Intikom Berlian Mustika
Posted today
Job Description
Scope of Work – Data Engineer
- Explore the DI/DX database (data pipelines, ETL, data integration).
- Explore and, where needed, develop Datamarts to support DI/DX-related business needs.
- Handle ad‑hoc requests for queries, data extraction, and data preparation related to DI/DX.
- Validate extracted data to ensure DI/DX-related quality and reliability (a minimal validation sketch follows this list).
- Ensure data consistency and accessibility for DXO stakeholders in relation to DI/DX.
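As a sketch of the validation step above, the following assumes the ad-hoc extract lands as a CSV with customer_id and amount columns (both hypothetical) and fails fast when a basic quality check does not hold:

```python
# Hypothetical sketch: lightweight quality checks on an extracted dataset.
# The file name and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("extract.csv")  # assumption: the extract lands as CSV

checks = {
    "no duplicate keys": df["customer_id"].is_unique,
    "no missing amounts": df["amount"].notna().all(),
    "amounts non-negative": (df["amount"] >= 0).all(),
}
failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Extract failed validation: {failed}")
print(f"{len(df)} rows passed all checks")
```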
If you think you are a good fit or can refer a suitable candidate, please contact us via WhatsApp:
#SoftwareDeveloper #Hiring #Career #ITJobs #Java #SQL
Job Type: Contract
Contract length: 12 months
Application Questions:
- How old are you currently?
- Do you have work experience building and managing data pipelines, ETL, and data integration?
- Are you proficient in SQL and in tools/technologies related to Data Warehouses and Datamarts?
Education:
Experience:
- Data Engineer: 4 years (preferred)
Posted today
Job Description
Requirements:
• Hold a bachelor's degree (S-1) in Information Technology, Computer Engineering, Telecommunications, or Statistics
• Excellent written & verbal communication skills
• Min. 3-5 years of experience with ETL tools
• Deep understanding of SQL, SQL optimization, data pipelines, and job optimization
• Knowledge of shell scripting and VBScript is a plus
• Good and effective communication and leadership skills
• Proactive, self‑motivated, and eager to learn new technologies
• Good verbal and written English communication skills
• Good interpersonal skills to be able to relate with people or personnel from different units of the company
• Excellent numerical, analytical and problem‑solving skills
• Ability to perform under pressure and tight time constraints; able to work under limited guidance in line with a broad plan or strategy.
Jakarta, Jakarta | IDR 8,000,000 - IDR 12,000,000 | PT Astra Graphia Information Technology (AGIT)
Posted today
Job Description
- Build and optimize robust data pipelines for extracting, transforming, and loading (ETL) data from multiple sources into a central data warehouse or data lake.
- Integrate data from multiple heterogeneous sources, ensuring data quality, consistency, and availability.
- Monitor the performance of data systems, identify bottlenecks, and resolve issues related to data quality or processing failures (a minimal timing sketch follows this list).
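For illustration, a minimal sketch of surfacing pipeline bottlenecks by timing each stage and logging a warning when one runs long. The stage names and the 60-second threshold are assumptions:

```python
# Hypothetical sketch: timing pipeline stages to surface bottlenecks.
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def timed(stage: str, slow_after_s: float = 60.0):
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            level = logging.WARNING if elapsed > slow_after_s else logging.INFO
            log.log(level, "%s finished in %.1fs", stage, elapsed)
            return result
        return wrapper
    return deco

@timed("extract")
def extract():
    time.sleep(0.1)  # stand-in for reading from a source system

extract()
```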