We’re looking for a Data Engineer to join our consulting team and work directly with clients on the design, development, and optimization of data pipelines, ETL workflows, and data warehouses. This role is ideal for someone who enjoys solving complex data challenges while collaborating closely with stakeholders.
Key Responsibilities
- Engage directly with customers to understand business requirements and translate them into technical data solutions.
- Design, build, and maintain data ingestion and ETL pipelines to support analytics and reporting.
- Implement data warehouse solutions following Star Schema best practices (see the sketch after this list).
- Develop and orchestrate workflows using Azure Data Factory and/or Microsoft Fabric.
- Leverage SSIS (SQL Server Integration Services) for ETL and SSAS (SQL Server Analysis Services) for analytical modeling.
- Build and manage data solutions in Snowflake.
- Monitor, troubleshoot, and optimize pipelines for performance and reliability.
- Provide technical guidance and best practices to clients and internal teams.
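For candidates newer to dimensional modeling, here is a minimal sketch of the Star Schema pattern named in the responsibilities above, using Python's built-in sqlite3. All table and column names (dim_customer, dim_date, fact_sales) are hypothetical and not tied to any client schema.

```python
import sqlite3

# Minimal star-schema sketch: one fact table referencing two dimension
# tables. All names (dim_customer, dim_date, fact_sales) are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT,
    region        TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240131
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer (customer_key),
    date_key     INTEGER REFERENCES dim_date (date_key),
    quantity     INTEGER,
    amount       REAL
);
""")

# Typical analytical query: join the fact table to its dimensions.
rows = conn.execute("""
SELECT d.year, c.region, SUM(f.amount) AS revenue
FROM fact_sales f
JOIN dim_customer c ON c.customer_key = f.customer_key
JOIN dim_date     d ON d.date_key     = f.date_key
GROUP BY d.year, c.region
""").fetchall()
conn.close()
```

The key property of the pattern is that the fact table holds only measures and foreign keys, while descriptive attributes live in the dimensions.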
Required Skills & Qualifications
- 1+ years of experience as a Data Engineer or in a similar data-focused role.
- Strong experience with ETL processes, data pipelines, and data ingestion.
- Solid understanding of data warehouse design, particularly Star Schema modeling.
- Hands‑on experience with Azure Data Factory and/or Microsoft Fabric.
- Proficiency with SSIS and SSAS.
- Experience working with Snowflake.
- Strong SQL skills and understanding of relational database concepts.
- Excellent communication skills for engaging with customers and translating requirements into solutions.
Nice to Have
- Consulting or client‑facing project experience.
- Exposure to BI tools (Power BI, Tableau) and data governance practices.
- Knowledge of cloud platforms beyond Azure (AWS, GCP).
PT Mitra Solusi Telematika (MST) – Data Engineer
PT Mitra Solusi Telematika (MST) is seeking a passionate Data Engineer to join our dynamic technology team in our Jakarta office. In this role, you will play a crucial part in the design, development, and maintenance of our data infrastructure.
Responsibilities
- Data Integration: Collaborate with cross‑functional teams to gather data requirements and develop ETL processes for seamless data integration.
- Performance Optimization: Continuously monitor and enhance the performance of data warehousing solutions, ensuring scalability and efficiency.
- Data Modeling: Design and implement data models that meet business objectives and adhere to best practices.
- Data Governance: Implement data governance and security measures to ensure data quality and compliance with regulatory standards.
- Documentation: Maintain clear and concise documentation of data pipelines, processes, and configurations.
- Snowflake Expertise (if applicable): Utilize your deep knowledge of Snowflake to architect, build, and optimize data pipelines and warehouse solutions for our clients.
Qualifications
- Bachelor’s degree in computer science, data science, software engineering, information systems, or related field.
- 3+ years of experience as a Data Engineer.
- Proficient in programming languages such as R, SQL, Python, and C++, as well as ETL development.
- Knowledge of visualization tools such as Power BI and Tableau.
- Currently holds, or is considering pursuing, relevant certifications.
- AWS Data Engineer/AWS Cloud Engineer certification is highly preferred for this role.
- Strong communication skills, a proactive attitude, and strong teamwork.
- Willing to be placed in Jakarta on a 6-month contract with hybrid work arrangements.
Insignia – Mid-Level Data Engineer
At Insignia, we’re looking for a Mid‑Level Data Engineer who’s worked hands‑on with Databricks and has solid experience across AWS, GCP, or Azure. You’ll design and maintain end‑to‑end data pipelines that power analytics, machine learning, and business decision‑making — from ingestion to transformation, warehousing, and beyond.
You don’t need to be a cloud expert in all three platforms — but you should have deep experience in at least one, and comfort navigating multi‑cloud environments where needed. If you’ve built production ETL/ELT workflows on the Lakehouse, optimized Delta tables, or integrated Databricks with orchestration tools like Airflow — this is your kind of challenge.
This is a hybrid role based in West Jakarta, blending focused collaboration with flexible execution.
What You’ll Do
- Design, build, and maintain scalable data pipelines using Databricks (Lakehouse, Delta Lake, Spark); see the sketch after this list.
- Work across cloud platforms (AWS preferred; also GCP/Azure): S3, BigQuery, Blob Storage, etc.
- Transform raw data into structured, reliable datasets for analytics and ML teams.
- Optimize performance, cost, and governance across data workflows.
- Collaborate with analysts, MLEs, and software engineers to ensure data readiness.
- Implement CI/CD, monitoring, and documentation practices for data systems.
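As a hedged illustration of the pipeline work above, here is a minimal bronze-to-silver Delta Lake step in PySpark. It assumes a cluster where delta-spark is configured (built in on Databricks); the paths and column names (orders, order_id, order_ts, amount) are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes delta-spark is available on the cluster (built in on Databricks).
spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze layer: raw ingested events. Path and schema are hypothetical.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Silver layer: deduplicated, typed, and filtered for analytics/ML use.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")
```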
Who You Are
- 2–4 years of experience in data engineering, ideally within tech-driven or digital service environments.
- Hands-on experience with Databricks, including PySpark, SQL, and workflow automation.
- Proven track record working with at least one major cloud provider: AWS (S3, Glue, Redshift), GCP (BigQuery, Pub/Sub), or Azure (Data Lake, Synapse).
- Proficient in Python, SQL, and data modeling (medallion architecture, star schema, etc.).
- Experience with orchestration tools like Airflow, Prefect, or Step Functions.
- Bonus: familiarity with Unity Catalog, MLflow, or real-time streaming (Kafka, Kinesis).
- Fluent in English, written and spoken.
- Collaborative, proactive, and passionate about building clean, maintainable data infrastructure.
Why Join Us?
Because great data systems aren’t just fast — they’re trusted, reusable, and built to evolve. If you’re ready to work on high‑impact projects where your pipelines power AI and insight, let’s talk.
Hybrid role – West Jakarta
Data Engineer – Realtime Engine System
- Develop application systems related to the Realtime Engine System and Big Data.
- Ensure the development process follows the agreed timeline.
- Comply with applicable application system development policies and procedures.
Technical Requirements
- Application system development (familiarity with SAS, Java, .NET, SQL, SSIS/ETL, Oracle, etc.).
- Skills in analytics tools such as R or Python are an advantage.
Requirements
- Minimum Bachelor's degree (S1) in Computer Science, preferably from a reputable university.
- Minimum 1 year of experience.
- Willing to be located in Bintaro.
Data Engineer – Scalable Data Infrastructure
To design, build, and maintain scalable and reliable data infrastructure that enables efficient data collection, storage, and processing across the organization. The Data Engineer plays a crucial role in developing data pipelines, ensuring data quality and integrity, and supporting analytics and machine learning initiatives by making clean, well‑structured data accessible to stakeholders and data teams.
Responsibilities
- Integrate and perform comprehensive testing during solution deployment to ensure system reliability and performance.
- Design the technical architecture of proposed solutions and collaborate with other IT teams for development and implementation.
- Proactively monitor and troubleshoot operational data processes to ensure smooth and timely execution.
- Identify and implement solutions to improve efficiency, prevent issues, and enhance data processing and quality.
- Research and propose new technologies or methods for more reliable and scalable data processing.
- Prepare clear and detailed documentation for data architecture, schemas, procedures, and process workflows.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a relevant discipline from a top university.
- Minimum 1 year of working experience as a Data Engineer or ETL Developer.
- Strong analytical skills & high sense of logical thinking.
- Experience developing data warehouse schemas with OWB, ODI, or other enterprise ETL technologies.
- Expertise and experience with a cloud solution, preferably GCP (see the sketch after this list).
- Able to work individually as well as in a team.
- Willing to work onsite and full-time in Bintaro, Tangerang Selatan.
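To make the GCP preference above concrete, here is a small, hedged sketch using the google-cloud-bigquery client. It assumes the library is installed and credentials are configured; the project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

# Assumes google-cloud-bigquery is installed and GCP credentials are
# configured. Project/dataset/table names below are hypothetical.
client = bigquery.Client()

sql = """
    SELECT DATE(created_at) AS day, COUNT(*) AS orders
    FROM `my_project.analytics.orders`
    GROUP BY day
    ORDER BY day
"""
for row in client.query(sql).result():
    print(row.day, row.orders)
```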
About the Role – Data Analytics Team
We are building a new data analytics team to modernize and deliver B2B products with high quality and efficiency. As a data engineer, you will work primarily with a modern data stack to grow and empower our teams and to monitor sales activities. A young mind with a good work ethic is highly valued.
Qualifications
- Currently residing in Jabodetabek.
- 1–2 years of experience in a data engineering role.
- A degree in a field related to computer science, OR at least 2 years of experience in data engineering.
- Experience with Google Sheets, Python, SQL, bash scripting, dbt, and Tableau Public (Desktop).
- Good understanding of SDLC, version control and deployment strategy.
- Preferably, demonstrable experience with one or more of these: Snowflake, Prefect, Dagster, or Astronomer, plus web-scraping libraries and applications (see the sketch after this list).
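For candidates unfamiliar with the orchestration tools named above, here is a minimal Prefect 2.x flow sketch; the task bodies and flow name are hypothetical stand-ins for real extract/transform/load logic (which in practice might call dbt or write to Snowflake).

```python
from prefect import flow, task

# Hypothetical ETL steps; real tasks might run dbt models, scrape a
# site, or load into Snowflake.
@task
def extract() -> list[dict]:
    return [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -3.0}]

@task
def transform(rows: list[dict]) -> list[dict]:
    # Drop invalid rows, e.g. negative amounts.
    return [r for r in rows if r["amount"] >= 0]

@task
def load(rows: list[dict]) -> None:
    print(f"loaded {len(rows)} rows")

@flow
def daily_sales_pipeline() -> None:
    load(transform(extract()))

if __name__ == "__main__":
    daily_sales_pipeline()
```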
What We Offer
- Work‑from‑office hours: 8.30 AM – 4.30 PM.
- 12-month contract, with the opportunity to move into a permanent role.
- Complimentary coffee.
Data Science Engineer
About The Role: To apply data science techniques and machine learning algorithms to solve business problems, improve decision‑making, and ensure the efficient deployment of models in production.
What Will You Do
- Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress.
- Analyzing the ML algorithms that could be used to solve a given problem.
- Exploring and visualizing data to gain an understanding of it.
- Identifying differences in data distribution that could affect performance when deploying the model in the real world.
- Verifying data quality, and/or ensuring it via data cleaning.
- Supervising the data acquisition process if more data is needed.
- Defining the preprocessing or feature engineering to be done on a given dataset.
- Training models and tuning their hyperparameters (see the sketch after this list).
- Analyzing the errors of the model and designing strategies to overcome them.
- Deploying models to production.
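As a compact, hedged illustration of the model training and hyperparameter tuning duties above, here is a scikit-learn sketch on synthetic data; the estimator, parameter grid, and metric are illustrative choices, not a prescribed stack.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Small illustrative grid; cross-validation guards against overfitting
# the tuning choices to a single split.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [5, None]},
    cv=3,
    scoring="accuracy",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```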
What we are looking for
- Bachelor’s degree in Computer Science, Data Science, Mathematics, or a related field.
- 4+ years of experience in data science, machine learning, or related fields.
- Data Science or Machine Learning certifications (e.g., Google Professional Data Engineer, Microsoft Certified: Azure Data Scientist).
- Experience with specific data science platforms (e.g., AWS SageMaker, Google AI Platform) is a plus.
Soft Skill Requirements
- Strong problem‑solving and analytical skills.
- Effective communication skills for presenting findings to stakeholders.
- Ability to work collaboratively in a team environment.
- Adaptability and a proactive approach to problem‑solving.
Technical Skill Requirements
- Proficiency in data science tools and languages (Python, R, SQL).
- Expertise in machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, Scikit‑learn).
- Strong knowledge of data processing, feature engineering, and model validation techniques.
- Experience with cloud platforms (e.g., AWS, GCP) and deployment of models to production.
Expert Data Lake Engineer
We are seeking a highly skilled and experienced Data Lake Engineer to be actively involved in the design, implementation, and maintenance of our enterprise-grade data lake infrastructure. This role is critical in enabling advanced analytics, data science, and AI/ML initiatives by ensuring robust data ingestion, storage, and processing capabilities.
Key Responsibilities
- Data Acquisition & Structuring: Extract and consolidate data from diverse primary (SAP) and secondary sources; reorganize and format data to support downstream analytics, machine learning, and AI workflows.
- Data Lake Operations: Oversee daily operations related to data collection, storage, and processing. Architect and maintain scalable data lake solutions using modern technologies and best practices. Design and implement ETL/ELT pipelines (see the sketch after this list). Ensure high performance, reliability, and scalability of data systems, including batch and/or near real-time data. Monitor pipeline health and perform warehouse cleansing to maintain data integrity.
- System Monitoring & Optimization: Continuously monitor system performance and stability. Troubleshoot issues, optimize resource usage, and ensure seamless data flow across platforms.
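As a minimal, hedged sketch of the ETL/ELT orchestration described above, here is an Airflow 2.x DAG with two hypothetical tasks; the DAG id, schedule, and task callables are assumptions for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real extract/load logic.
def extract():
    print("pulling data from source systems")

def load():
    print("writing curated data to the lake")

# `schedule` requires Airflow 2.4+; older versions use `schedule_interval`.
with DAG(
    dag_id="daily_lake_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```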
Job Requirements
- 7–12 years of hands‑on experience in data lake implementation and operations.
- Proven track record in designing and deploying data lake architectures, topologies, and infrastructures.
Technical Expertise
- Strong proficiency in ETL/ELT pipeline development. Experience with SAP HANA is a plus.
- Deep understanding of data lake technologies and deployment tools, including but not limited to: Cloudera, Hadoop, Spark, Kafka, Airflow, Dremio.
Soft Skills & Passion
- Passionate about data processing and engineering excellence.
- Strong problem-solving skills and ability to work collaboratively in cross-functional teams.
Why Join Us?
- Be part of a forward‑thinking team driving innovation in data and AI.
- Work on impactful projects that shape business decisions and operational efficiency.
- Enjoy a dynamic work environment with opportunities for growth and learning.
If you're a data engineering expert ready to take on complex challenges and build scalable data infrastructure, we'd love to hear from you.
Server & Storage Support Engineer
Responsibilities
- Perform regular assessments and documentation of server and storage conditions.
- Conduct preventive and corrective maintenance for Dell systems.
- Monitor system health, capacity, and backup performance.
- Ensure OS, firmware, and configuration compliance.
- Identify and report hardware anomalies or failures.
- Maintain accurate records of maintenance and system updates.
Requirements
- Minimum Diploma (D3) in Information Technology, Computer Engineering, or related field.
- 3 years of experience in server or storage support.
- Familiar with Dell systems (PowerMax, Unity XT, or similar).
- Knowledge of OS, firmware, and system monitoring tools.
- Good analytical and documentation skills.
- Willing to work in rotating (24/7) shifts.
- Willing to work under a 1‑year project‑based contract.