
Data Engineer (Data Warehouse)

PT Trinusa Travelindo

Jakarta Utara

On-site

IDR 200.000.000 - 300.000.000

Full time

2 days ago


Job summary

A tech startup in Jakarta seeks a data warehouse engineer to build and maintain data platforms. The ideal candidate will have 1-5 years of experience, be proficient in Python and advanced SQL, and understand data warehousing concepts. Responsibilities include creating data pipelines and ensuring data quality. The company offers a dynamic work environment and opportunities for professional growth.

Qualifications

  • 1 to 5 years of experience in data warehousing.
  • Strong proficiency in Python and advanced SQL.
  • Familiarity with data warehouse environments.

Responsibilities

  • Build and maintain data warehouse platforms.
  • Design and develop data pipelines.
  • Ensure data governance and quality.

Skills

Fluent in Python
Advanced SQL
Data warehousing concepts
Creative problem solving
Team collaboration

Tools

Google BigQuery
AWS Redshift
Snowflake
Docker
Kubernetes

Job description

Overview

It's fun to work in a company where people truly BELIEVE in what they're doing!

Traveloka is a tech startup based in Jakarta. We aim to revolutionize the Indonesian travel marketplace and make it more accessible to travelers across the country. We are committed to building a dynamic workplace where people truly enjoy their work and feel that they can really have an impact.

Responsibilities
  • The data warehouse engineering team plays an important role in the data team: we build, govern, and maintain the data warehouse platforms and environments that serve as the foundation and source for every data product created by data analysts and scientists.
  • Define data model conventions and governance
  • Design, develop and maintain data pipelines (external data source ingestion jobs, ETL/ELT jobs, etc.)
  • Design, develop and maintain the data pipeline framework (a combination of open-source and internal software used to build and govern data pipelines)
  • Create and manage data pipeline infrastructure
  • Continuously seek ways to make existing data processing more cost- and time-efficient
  • Ensure good data governance and quality by building monitoring systems that track data quality in the data warehouse (a minimal sketch of such a check follows this list).
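
As an illustration of the monitoring work described in the last bullet above, here is a minimal Python sketch of a data-quality check against a warehouse table. The table name (analytics.fct_bookings), its columns, the thresholds, and the use of the google-cloud-bigquery client are illustrative assumptions, not details from this posting.

# Illustrative data-quality check against a warehouse table.
# Assumptions (not from the job description): a BigQuery table
# `analytics.fct_bookings` with columns `booking_id` and `loaded_at`,
# queried via the google-cloud-bigquery client library.
from google.cloud import bigquery

NULL_RATE_THRESHOLD = 0.01   # max tolerated fraction of NULL booking_ids
MAX_STALENESS_HOURS = 24     # newest row must be at most one day old

def check_table_quality(client, table="analytics.fct_bookings"):
    """Return a list of human-readable quality violations (empty = healthy)."""
    sql = f"""
        SELECT
          SAFE_DIVIDE(COUNTIF(booking_id IS NULL), COUNT(*)) AS null_rate,
          TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), MAX(loaded_at), HOUR) AS staleness_hours
        FROM `{table}`
    """
    row = next(iter(client.query(sql).result()))
    violations = []
    if (row.null_rate or 0.0) > NULL_RATE_THRESHOLD:
        violations.append(f"{table}: booking_id null rate {row.null_rate:.2%} over threshold")
    if row.staleness_hours is None or row.staleness_hours > MAX_STALENESS_HOURS:
        violations.append(f"{table}: newest data is missing or {row.staleness_hours}h old")
    return violations

if __name__ == "__main__":
    for violation in check_table_quality(bigquery.Client()):
        print("ALERT:", violation)

In practice a check like this would run on a schedule after each warehouse load and feed an alerting channel rather than printing to stdout.
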
Requirements
  • 1 to 5 years of experience (we are opening the role across multiple levels).
  • Fluent in Python and advanced SQL
  • Preferably familiar with data warehouse environments (e.g., Google BigQuery, AWS Redshift, Snowflake)
  • Preferably familiar with data transformation or processing frameworks (e.g., dbt, Dataform, Spark, Hive)
  • Preferably familiar with data processing technologies (e.g., Google Dataflow, Google Dataproc)
  • Preferably familiar with orchestration tools (e.g., Airflow, Argo, Azkaban); see the pipeline sketch after this list
  • Understand data warehousing concepts (e.g., Kimball, Inmon, Data Vault), with experience in data modeling and in measuring and improving data quality
  • Preferably understand basic containerization and microservice concepts (e.g., Docker, Kubernetes)
  • Knowledge of machine learning, building robust APIs, and web development is an advantage
  • Able to build and maintain good relationships with stakeholders
  • Able to translate business requirements to data warehouse modeling specifications
  • Able to demonstrate creative problem-solving skills
  • A team player who loves to collaborate with others and can work independently when needed
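
To give a concrete flavor of the orchestration tools mentioned above (Airflow in particular), below is a minimal sketch of a daily ingest-then-transform pipeline expressed as an Airflow DAG. The DAG id, schedule, and task callables are hypothetical placeholders, not part of the role's actual stack.

# Illustrative Airflow DAG for a daily ingest-then-transform pipeline.
# The dag_id, schedule, and task callables are hypothetical and are not
# taken from the job description.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_external_source():
    """Placeholder: pull raw data from an external source into staging."""

def run_elt_transform():
    """Placeholder: transform staged data into warehouse models."""

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_external_source)
    transform = PythonOperator(task_id="transform", python_callable=run_elt_transform)
    ingest >> transform  # transform runs only after ingestion succeeds

The same two-step dependency could be expressed in Argo or Azkaban; the point is declaring task ordering so the transform never runs against partially ingested data.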