Singapore
On-site
SGD 80,000 - 130,000
Full time
Job summary
A leading company in the drilling sector is offering a Data Engineer position in Singapore. The ideal candidate will have more than 10 years of experience in data engineering, with skills in SQL, Python, and cloud technologies. Data engineering certifications are a plus. The role involves developing data pipelines and integrating varied data sources to meet business needs.
Qualifications
- At least 10 years of experience in data engineering.
- Expertise in designing and managing complex data architectures.
Responsibilities
- Develop and maintain scalable data pipelines.
- Manage data storage solutions and ensure data integrity.
- Integrate data from multiple sources and collaborate with teams.
Skills
SQL
Python
Java
Scala
ETL
Data Warehousing
Hadoop
Spark
Kafka
Education
Bachelor’s or Master’s degree in Computer Science
Key Responsibilities
- Data Pipeline Development:
Design, build, and maintain scalable and reliable data pipelines to support business needs.
Automate data extraction, transformation, and loading (ETL/ELT) processes (a minimal ETL sketch follows this list).
- Database and Storage Management:
Develop and optimize data storage solutions, including relational and NoSQL databases.
Ensure data integrity, security, and accessibility across platforms.
- Data Integration:
Integrate data from multiple sources, including APIs, databases, and third-party systems.
Collaborate with data analysts, scientists, and stakeholders to understand data requirements.
- Performance Optimization:
Monitor and improve the performance of data pipelines and systems.
Address data quality issues and implement robust data validation processes.
- Cloud and Big Data Technologies:
Utilize cloud platforms (e.g., AWS, Azure, GCP) for scalable data processing.
Implement and manage big data technologies like Hadoop, Spark, or Kafka.
- Collaboration & Documentation:
Work closely with cross-functional teams to align data infrastructure with business goals.
Document data workflows, schemas, and processes for seamless knowledge sharing.
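To make the pipeline, integration, and validation duties above concrete, here is a minimal, illustrative ETL sketch in Python. The API endpoint, table schema, and validation rule are hypothetical placeholders, not details of the role's actual stack.

```python
# Minimal ETL sketch (illustrative only): extract records from a REST API,
# validate and transform them, and load them into a relational store.
# The endpoint, schema, and validation rule below are hypothetical.
import sqlite3

import requests

API_URL = "https://api.example.com/v1/readings"  # hypothetical source


def extract(url: str) -> list[dict]:
    """Pull raw records from the source API."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


def transform(records: list[dict]) -> list[tuple]:
    """Validate and reshape records; drop rows that fail basic checks."""
    rows = []
    for rec in records:
        # Basic data-quality gate: require an id and a numeric value.
        if rec.get("id") is None or not isinstance(rec.get("value"), (int, float)):
            continue
        rows.append((rec["id"], float(rec["value"])))
    return rows


def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Idempotently upsert the cleaned rows into the target table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS readings (id TEXT PRIMARY KEY, value REAL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO readings (id, value) VALUES (?, ?)", rows
        )


if __name__ == "__main__":
    load(transform(extract(API_URL)))
```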
Key Requirements
- Education:
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- Experience:
At least 10 years of experience in data engineering or a related field.
Proven expertise in designing and managing complex data architectures.
- Technical Skills:
Proficiency in SQL, Python, and Java/Scala for data manipulation and pipeline development.
Experience with ETL tools (e.g., Apache NiFi, Talend, Informatica).
Strong understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
Familiarity with big data frameworks like Hadoop, Spark, or Kafka (a brief Spark sketch follows this list).
Hands-on experience with cloud platforms such as AWS (Glue, Redshift), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
- Certifications:
Relevant certifications in cloud data engineering (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) are a plus.
- Soft Skills:
Strong problem-solving and analytical thinking abilities.
Excellent communication and collaboration skills.
Ability to work independently and in a team-oriented environment.
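For a flavour of the Spark and cloud-platform work implied by the technical skills above, here is a brief, illustrative PySpark sketch. The input path, column names, and output layout are assumptions made for the example, not details from the posting.

```python
# Minimal PySpark sketch (illustrative only): a typical warehouse-style
# daily rollup. The paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-rollup").getOrCreate()

# Read raw events (hypothetical path and schema).
events = spark.read.parquet("s3://example-bucket/events/")

# Aggregate per user per day.
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("user_id", "day")
    .agg(F.count("*").alias("event_count"))
)

# Write back partitioned by day for downstream consumers.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/rollups/daily/"
)

spark.stop()
```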