
Data Engineer (Hadoop, Spark) – Contract

1MTECH PTE. LTD.

Singapore

On-site

SGD 60,000 – 90,000

Full time

Job summary

A technology company based in Singapore is seeking a Data Engineer to design and maintain data pipelines, ensuring data quality and security throughout the data lifecycle. The role requires a Bachelor's degree and 3+ years of relevant experience. Proficiency in SQL and familiarity with cloud platforms are essential. Competitive compensation offered.

Job description

Key Responsibilities


  • Design, develop, and maintain robust data pipelines and ETL processes to support analytics and reporting needs.

  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.

  • Implement data integration solutions across structured and unstructured data sources.

  • Ensure data quality, integrity, and security across all stages of the data lifecycle.

  • Optimize data workflows for performance and scalability in cloud and on-premises environments.

  • Support data migration and transformation initiatives for client projects.

  • Monitor and troubleshoot data pipeline issues and provide timely resolutions.


Required Qualifications


  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field.

  • 3+ years of experience in data engineering or related roles.

  • Proficiency in SQL and in either Python or Scala.

  • Experience with data pipeline tools such as Apache Spark, Kafka, Airflow, or similar.

  • Familiarity with cloud platforms (AWS, Azure, or GCP).

  • Strong understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).

  • Knowledge of data governance, security, and compliance standards.
