Senior Apache Kafka Data Engineer

Accenture Southeast Asia

Kota Semarang

On-site

IDR 200.000.000 - 300.000.000

Full time

9 days ago

Job summary

A leading consulting firm is seeking a Data Engineer to manage and optimize data flow within large-scale applications. The ideal candidate will have a Bachelor's degree in Computer Science or a related field, with at least 2 years of relevant experience, proficiency in Python and SQL, and experience with Azure technologies like Databricks. Strong communication skills in English and a team-oriented mindset are essential for success in this role.

Qualifications

  • Minimum 2 years’ experience in Data Engineering (new graduates welcome).
  • Good communication skills in English.
  • Team player with analytical and problem-solving skills.

Responsibilities

  • Support data requirements including reports and dashboards.
  • Develop and optimize end-to-end (E2E) data pipelines for large-scale applications.
  • Analyze and perform data ingestion in batch and real-time.

Skills

Python
SQL
Spark
Cloud Architecture
Data Visualization

Education

Bachelor’s degree in Computer Science or related fields

Tools

Azure
Databricks
Power BI
Tableau

Job description

As a Data Engineer, you will:

  • Work across workstreams to support data requirements, including reports and dashboards.
  • Analyze and perform data profiling to understand data patterns and discrepancies, following Data Quality and Data Management processes.
  • Follow best practices to design and develop the end-to-end (E2E) data pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications.
  • Develop data pipeline automation using the Azure technology stack, Databricks, and Data Factory.
  • Understand business requirements and translate them into technical requirements that system analysts and other technical team members can drive into project design and delivery.
  • Analyze source data and perform data ingestion in both batch and real-time patterns via various methods, for example file transfer, API, and data streaming using Kafka and Spark Streaming.
  • Analyze and understand data processing and standardization requirements, and develop ETL jobs using Spark to transform data.
  • Understand data, report, and dashboard requirements, and develop data exports, data APIs, or data visualizations using Power BI, Tableau, or other visualization tools.

We are looking for experience and qualifications in the following:

  • Bachelor’s degree in Computer Science, Computer Engineering, IT, or related fields.
  • Minimum 2 years’ experience in Data Engineering (new graduates are also welcome for some of our job openings).
  • Data Engineering skills: Python, SQL, Spark, Cloud Architecture, Data & Solution Architecture, API, Databricks, Azure.
  • Data Visualization skills: Power BI (or other visualization tools), DAX programming, API, data modeling, SQL, storytelling, and wireframe design.
  • Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming.
  • Basic knowledge of Data Lake/Data Warehousing/Big Data tools, Apache Spark, RDBMS and NoSQL, Knowledge Graph.
  • Experience working in a client-facing/consulting environment is a plus.
  • Team player with analytical and problem-solving skills.
  • Good communication skills in English.
