Data Engineer (Hadoop, Spark) – Contract
1MTECH PTE. LTD.
Singapore
On-site
SGD 70,000–90,000
Full-time
Job summary
1MTECH PTE. LTD., a tech company in Singapore, is seeking a Data Engineer to design, develop, and maintain data pipelines and ETL processes. The ideal candidate holds a Bachelor's degree in Computer Science or a related field and has 3+ years of experience in data engineering. Proficiency in SQL and in Python or Scala is required, along with familiarity with tools such as Apache Spark and cloud platforms. This role is critical to ensuring data quality and supporting analytics initiatives.
Skills
- SQL
- Python
- Data integration
- Data quality
- ETL processes
Education
- Bachelor’s degree in Computer Science or related field
Tools
- Apache Spark
- Kafka
- Airflow
- AWS
- Azure
Key Responsibilities
- Design, develop, and maintain robust data pipelines and ETL processes to support analytics and reporting needs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
- Implement data integration solutions across structured and unstructured data sources.
- Ensure data quality, integrity, and security across all stages of the data lifecycle.
- Optimize data workflows for performance and scalability in cloud and on-premises environments.
- Support data migration and transformation initiatives for client projects.
- Monitor and troubleshoot data pipeline issues and provide timely resolutions.
Required Qualifications
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field.
- 3+ years of experience in data engineering or related roles.
- Proficiency in SQL and in either Python or Scala.
- Experience with data pipeline tools such as Apache Spark, Kafka, or Airflow.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Strong understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of data governance, security, and compliance standards.