Senior Delivery Microsoft – Azure Data & AI

PT Sigma Cipta Caraka (Telkomsigma)

Tangerang

On-site

IDR 300.000.000 - 400.000.000

Full time

2 days ago

Job description
Senior Delivery Microsoft Azure Data AI (Tangerang Selatan)

Perform application system development related to the Realtime Engine System and Big Data, ensuring that the development process follows the schedule and complies with applicable application system development policies and procedures.

  • Application system development experience (familiarity with SAS, Java, .NET, SQL, SSIS/ETL, Oracle, etc.).
  • Skill with analytical tools such as R or Python (an advantage).
  • Minimum of a Bachelor's degree (S1) in Computer Science from a reputable university.
  • Minimum of 1 year of experience.
  • Willing to be located in Bintaro.
Data Engineer – Design, Build, and Maintain Data Infrastructure

To design, build, and maintain scalable and reliable data infrastructure that enables efficient data collection, storage, and processing across the organization. The Data Engineer plays a crucial role in developing data pipelines, ensuring data quality and integrity, and supporting analytics and machine learning initiatives.

  • Integrate and perform comprehensive testing during solution deployment to ensure system reliability and performance.
  • Design the technical architecture of proposed solutions and collaborate with other IT teams for development and implementation.
  • Proactively monitor and troubleshoot operational data processes to ensure smooth and timely execution.
  • Identify and implement solutions to improve efficiency, prevent issues, and enhance data processing and quality.
  • Research and propose new technologies or methods for more reliable and scalable data processing.
  • Prepare clear and detailed documentation for data architecture, schemas, procedures, and process workflows.
  • Bachelor's degree in Computer Science, Information Technology, or a related discipline from a top university.
  • Minimum of 1 year of working experience as a Data Engineer or ETL Developer.
  • Strong analytical skills and a high degree of logical thinking.
  • Experience developing data warehouse schemas with OWB (Oracle Warehouse Builder), ODI (Oracle Data Integrator), or other enterprise ETL technologies.
  • Expertise and experience with a cloud platform, preferably GCP; a minimal ingestion sketch follows this list.
  • Able to work individually as well as in a team.
  • Willing to work on-site and full-time in Bintaro, Tangerang Selatan.
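As a rough illustration of the GCP-flavored batch ingestion this role describes, here is a minimal load sketch using the google-cloud-bigquery client library. The bucket, project, dataset, and table names are hypothetical; a real pipeline would add retries, schema management, and monitoring.

```python
# Minimal batch-ingestion sketch for a GCP-leaning stack.
# Bucket, project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,              # skip the header row
    autodetect=True,                  # infer the schema from the file
    write_disposition="WRITE_APPEND",
)

# Load a daily extract from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/extracts/orders_2024-01-01.csv",  # hypothetical path
    "example_project.staging.orders",                      # hypothetical table
    job_config=job_config,
)
load_job.result()  # block until the load job finishes

table = client.get_table("example_project.staging.orders")
print(f"Loaded {table.num_rows} rows into staging.orders")
```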
Data Engineer – Consulting Team

We’re looking for a Data Engineer to join our consulting team and work directly with clients on the design, development, and optimization of data pipelines, ETL workflows, and data warehouses.

  • Engage directly with customers to understand business requirements and translate them into technical data solutions.
  • Design, build, and maintain data ingestion and ETL pipelines to support analytics and reporting.
  • Implement data warehouse solutions following Star Schema best practices (see the star-schema sketch after this list).
  • Develop and orchestrate workflows using Azure Data Factory and/or Microsoft Fabric.
  • Leverage SSIS (SQL Server Integration Services) for ETL and SSAS (SQL Server Analysis Services) for analytical modeling.
  • Build and manage data solutions in Snowflake.
  • Monitor, troubleshoot, and optimize pipelines for performance and reliability.
  • Provide technical guidance and best practices to clients and internal teams.
  • 1+ years of experience as a Data Engineer or in a similar data-focused role.
  • Strong experience with ETL processes, data pipelines, and data ingestion.
  • Solid understanding of data warehouse design, particularly Star Schema modeling.
  • Hands‑on experience with Azure Data Factory and/or Microsoft Fabric.
  • Proficiency with SSIS and SSAS.
  • Experience working with Snowflake.
  • Strong SQL skills and understanding of relational database concepts.
  • Excellent communication skills for engaging with customers and translating requirements into solutions.
  • Consulting or client‑facing project experience.
  • Exposure to BI tools (Power BI, Tableau) and data governance practices.
  • Knowledge of cloud platforms beyond Azure (AWS, GCP).
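To make the Star Schema bullet concrete, here is a hedged sketch of dimensional-model DDL executed through the Snowflake Python connector, since this role pairs warehouse design with Snowflake. All credentials, table names, and columns are placeholders, not a prescribed model.

```python
# Hypothetical star-schema DDL run through the Snowflake Python connector.
# Account, warehouse, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder credentials
    user="example_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MART",
)

ddl_statements = [
    # One dimension table per business entity...
    """CREATE TABLE IF NOT EXISTS dim_customer (
           customer_key  INTEGER IDENTITY PRIMARY KEY,
           customer_id   VARCHAR NOT NULL,
           customer_name VARCHAR,
           region        VARCHAR
       )""",
    # ...and a central fact table keyed to the dimensions.
    """CREATE TABLE IF NOT EXISTS fact_sales (
           sale_key     INTEGER IDENTITY PRIMARY KEY,
           customer_key INTEGER REFERENCES dim_customer (customer_key),
           sale_date    DATE,
           quantity     INTEGER,
           amount       NUMBER(12, 2)
       )""",
]

cur = conn.cursor()
try:
    for ddl in ddl_statements:
        cur.execute(ddl)
finally:
    cur.close()
    conn.close()
```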
Data Engineer – PT Mitra Solusi Telematika (MST)
  • Data Integration: Collaborate with cross‑functional teams to gather data requirements and develop ETL processes for seamless data integration (a toy ETL sketch follows this list).
  • Performance Optimization: Continuously monitor and enhance the performance of data warehousing solutions, ensuring scalability and efficiency.
  • Data Modeling: Design and implement data models that meet business objectives and adhere to best practices.
  • Data Governance: Implement data governance and security measures to ensure data quality and compliance with regulatory standards.
  • Documentation: Maintain clear and concise documentation of data pipelines, processes, and configurations.
  • Snowflake expertise (if applicable): Use your knowledge of Snowflake to architect, build, and optimize data pipelines and warehouse solutions for our clients.
  • Bachelor's degree in computer science, data science, software engineering, information systems, or a related field.
  • Minimum of 3 years of experience as a Data Engineer.
  • Proficient in R, SQL, Python, and C++, as well as ETL development.
  • Knowledge of visualization tools such as Power BI, Tableau, etc.
  • Holds, or is open to pursuing, relevant certifications.
  • AWS Data Engineer/AWS Cloud Engineer certification is highly preferred for this role.
  • Strong communication skills, a proactive attitude, and strong teamwork.
  • Willing to be placed in Jakarta for a 6‑month contract with hybrid terms.
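As a small illustration of the ETL work described above, here is a toy extract-transform-load step in pandas. The file paths and column names are invented for illustration; production pipelines would layer in validation, logging, and scheduling.

```python
# Toy extract-transform-load step in pandas.
# File paths and column names are invented for illustration.
import pandas as pd

def run_daily_etl(source_csv: str, target_parquet: str) -> None:
    # Extract: read the raw export.
    raw = pd.read_csv(source_csv, parse_dates=["order_date"])

    # Transform: drop obviously bad rows and derive a reporting column.
    cleaned = raw.dropna(subset=["order_id", "amount"])
    cleaned = cleaned[cleaned["amount"] > 0]
    cleaned["order_month"] = cleaned["order_date"].dt.to_period("M").astype(str)

    # Load: write a columnar file the BI layer (Power BI, Tableau) can read.
    cleaned.to_parquet(target_parquet, index=False)

run_daily_etl("exports/orders.csv", "warehouse/orders.parquet")
```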
Data Engineer – Insignia (West Jakarta)

We’re looking for a Mid‑Level Data Engineer who’s worked hands‑on with Databricks and has solid experience across AWS, GCP, or Azure.

  • Design, build, and maintain scalable data pipelines using Databricks (Lakehouse, Delta Lake, Spark).
  • Work across cloud platforms (AWS preferred, also GCP/Azure) – S3, BigQuery, Blob Storage, etc.
  • Transform raw data into structured, reliable datasets for analytics and ML teams.
  • Optimize performance, cost, and governance across data workflows.
  • Collaborate with analysts, MLEs, and software engineers to ensure data readiness.
  • Implement CI/CD, monitoring, and documentation practices for data systems.
  • 2–4 years of experience in data engineering, ideally within tech‑driven or digital service environments.
  • Hands‑on experience with Databricks – including PySpark, SQL, and workflow automation.
  • Proven track record working with at least one major cloud provider: AWS (S3, Glue, Redshift), GCP (BigQuery, Pub/Sub), or Azure (Data Lake, Synapse).
  • Proficient in Python, SQL, and data modeling (medallion architecture, star schema, etc.); a minimal medallion sketch follows this list.
  • Experience with orchestration tools like Airflow, Prefect, or Step Functions.
  • Bonus: Familiarity with Unity Catalog, MLflow, or real‑time streaming (Kafka, Kinesis).
  • Fluent in English – written and spoken.
  • Collaborative, proactive, and passionate about building clean, maintainable data infrastructure.
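Here is a minimal sketch of the bronze-to-silver medallion hop this role alludes to, written in PySpark with Delta Lake. It assumes a Databricks-style environment where Delta is configured; the lake paths, event columns, and dedup key are hypothetical.

```python
# Sketch of a bronze-to-silver medallion hop with PySpark and Delta Lake.
# Lake paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session is provided; getOrCreate() also works
# locally when the delta-spark package is configured.
spark = SparkSession.builder.getOrCreate()

# Bronze: raw events landed as-is from the source system.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Silver: deduplicated, typed, quality-filtered records.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/lake/silver/events")
)
```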
Data Engineer – Accord Innovations (Depok)
  • Proven experience as a Data Engineer or similar role with a strong track record of designing and implementing data solutions.
  • Proficiency in programming languages such as Python (primary), Java, or Scala.
  • Hands‑on experience with stream processing and CDC pipelines, particularly using Apache Kafka (a minimal consumer sketch follows this list).
  • Experience with analytical databases and real‑time data warehouses, especially RisingWave and ClickHouse.
  • Solid understanding of database concepts (SQL & NoSQL), including query optimization for high‑performance analytics.
  • Familiarity with data ingestion pipelines, ETL/ELT processes, and system integrations.
  • Strong knowledge of data modeling, partitioning, and schema evolution in streaming/analytical environments.
  • Experience with cloud platforms (AWS, GCP, or Azure) and services such as S3, BigQuery, Redshift, EMR, or EC2.
  • Familiarity with observability and monitoring tools (e.g., Grafana, Prometheus, Kibana) for data pipeline monitoring.
  • Experience in performance tuning for highly concurrent workloads.
  • Basic understanding of data governance, security, and compliance practices.
  • Contract-based role.
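As a minimal sketch of the CDC-style stream processing listed above, here is a consumer built with the kafka-python package. The topic name and the Debezium-like payload shape are assumptions; a real pipeline would upsert each change into the analytical store rather than print it.

```python
# Minimal CDC-style consumer sketch using the kafka-python package.
# Topic name and Debezium-like payload shape are assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example.inventory.orders",          # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    group_id="orders-cdc-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    change = message.value
    op = change.get("op")    # 'c' = create, 'u' = update, 'd' = delete
    row = change.get("after") or change.get("before")
    # A real pipeline would upsert `row` into the analytical store
    # (e.g. ClickHouse or RisingWave, per the requirements above).
    print(op, row)
```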
Data Engineer – Insignia (Hybrid, West Jakarta)
  • Design, build, and maintain scalable data pipelines on AWS.
  • Model and structure data for analytics, reporting, and ML‑readiness.
  • Optimize data storage, query performance, and cost‑efficiency across cloud services.
  • Collaborate with internal teams to understand data needs and deliver robust solutions.
  • Implement data quality checks, monitoring, and documentation.
  • Automate workflows using Glue, Lambda, Step Functions, or Airflow.
  • Support secure access, governance, and compliance across data systems.
  • 2–4 years of experience in data engineering, preferably in a tech‑driven or digital service environment.
  • Strong hands‑on experience with AWS data & compute services (S3, Glue, Redshift, EC2, Lambda, etc.).
  • Proficient in SQL, Python, and data modeling (star schema, medallion architecture, etc.).
  • Experience with ETL/ELT pipelines, workflow orchestration (e.g., Airflow, Step Functions), and CI/CD for data; a skeletal Airflow DAG follows this list.
  • Bonus: Familiarity with data lakehouses, CDC, real‑time streaming (Kinesis), or MLOps integration.
  • Fluent in English – written and spoken.
  • Collaborative, proactive, and passionate about building systems that last.
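To illustrate the orchestration style this role mentions, here is a skeletal Airflow DAG. The DAG id, schedule, and task bodies are hypothetical placeholders; the point is the extract >> transform >> load dependency chain.

```python
# Skeletal Airflow DAG illustrating ETL orchestration.
# DAG id, schedule, and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull the daily extract from S3")  # placeholder

def transform():
    print("clean and model the data")        # placeholder

def load():
    print("load into Redshift")              # placeholder

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps strictly in sequence.
    t_extract >> t_transform >> t_load
```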
Performance‑Driven Environment – Insignia
  • Performance‑driven environment with real ownership.
  • A collaborative, fast‑moving culture with direct access to technical leads.
  • Exposure to complex, high‑impact projects across industries.
  • If you’re ready to build the backbone of intelligent systems, let’s talk.