Data Engineer – Big Data & ETL

Borr Drilling

Singapore

On-site

SGD 80,000 - 120,000

Full time

Job summary

A leading drilling company seeks a skilled Data Engineer to design and maintain ETL pipelines for large-scale data processing. The ideal candidate has strong expertise in Big Data technologies and cloud services and will collaborate with data science, analytics, and DevOps teams on data initiatives.

Qualifications

  • Experience with schema design and performance tuning.
  • Exposure to ML model deployment is a plus.
  • Strong analytical and problem-solving skills.

Responsibilities

  • Design, develop, and maintain scalable ETL pipelines.
  • Collaborate with data scientists and analysts on data initiatives.
  • Troubleshoot and resolve data quality issues.

Skills

Big Data
ETL frameworks
Apache Spark
Hadoop
AWS
Python
Scala
Java
Shell scripting

Education

Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field

Tools

Apache Spark
Hadoop
AWS
Kafka
Talend

Job description

We are seeking a skilled Data Engineer with strong expertise in Big Data and ETL frameworks, hands-on experience with Apache Spark and the Hadoop ecosystem, and proficiency in cloud-based data services (AWS). The ideal candidate will have a solid programming background and experience building robust data pipelines for large-scale data processing.

Key Responsibilities:
  • Design, develop, and maintain scalable ETL pipelines for processing structured and unstructured data (see the illustrative sketch after this list).
  • Work extensively with Apache Spark, Hadoop, Hive, Sqoop, Kafka, Talend, and other big data technologies.
  • Develop batch and streaming data processing workflows to support analytical and machine learning models.
  • Build and optimize data lake and data warehouse solutions using AWS services such as S3, Glue, EMR, Redshift, Athena, and Lambda.
  • Write high-quality, testable code in Python, Scala, Java, and Shell scripting.
  • Collaborate with data scientists, analysts, and DevOps teams to support data initiatives and ensure optimal data delivery architecture.
  • Troubleshoot and resolve data quality issues, performance bottlenecks, and ETL failures.
  • Implement data governance and best practices for data security, privacy, and compliance.
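
For a rough sense of the pipeline work this role involves, here is a minimal PySpark sketch of a batch ETL job (extract raw JSON from S3, transform, load partitioned Parquet back to S3). The bucket names, field names, and job name below are hypothetical illustrations, not details from the posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical job name; any descriptive name works.
    spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

    # Extract: read raw JSON events from a hypothetical S3 landing zone.
    raw = spark.read.json("s3a://example-landing/orders/2024-01-01/")

    # Transform: drop malformed rows, normalize types, derive a partition column.
    clean = (
        raw.dropna(subset=["order_id", "amount"])
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("created_at"))
    )

    # Load: write partitioned Parquet to a hypothetical data-lake bucket,
    # where it can be queried with Athena or loaded into Redshift.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3a://example-lake/orders/"
    )

    spark.stop()

A production version of such a job would typically run on EMR or AWS Glue and be scheduled and monitored rather than run ad hoc.
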
Technical Skills:
  • Big Data & ETL Tools: Apache Spark, Hadoop, Hive, Sqoop, Kafka, Talend
  • Programming Languages: Python, Scala, Java, SQL, Shell Scripting
  • Cloud Platform: AWS (S3, EMR, Glue, Redshift, Athena, Lambda)
  • Data Modeling & Warehousing: Experience with schema design, star/snowflake schema, performance tuning
  • Machine Learning Platforms: Exposure to ML model deployment and integration (experience with Databricks or a similar platform is a plus)
Preferred Qualifications:
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Experience working in an agile environment.
  • Strong analytical and problem-solving skills.