
A technology solutions company in Singapore is seeking a skilled data engineer proficient in Python, with hands-on experience in Apache Spark and strong SQL skills. The ideal candidate will have expertise in ETL development, familiarity with workflow orchestration tools, and knowledge of data warehousing principles. Security practices such as IAM and data encryption are also important. This role focuses on optimizing data processing workflows and ensuring efficient data management practices.
• Programming Languages: Proficiency in Python for data processing and automation, plus Unix shell scripting.
• Big Data Frameworks: Hands-on experience with Apache Spark for distributed data processing and analytics.
• Database & Querying: Strong knowledge of SQL for relational databases and Hive for querying large datasets in Hadoop ecosystems.
• ETL Development: Expertise in designing and implementing ETL pipelines for data ingestion, transformation, and loading.
• Workflow Orchestration: Familiarity with Control-M or similar scheduling tools for batch job automation and monitoring.
• Data Warehousing: Understanding of data modeling and optimization techniques for large-scale data storage and retrieval.
• Performance Tuning: Ability to optimize queries and jobs for efficiency and scalability.
• Version Control & CI/CD: Experience with Git and deployment pipelines for data engineering workflows.
• BI/Analytics Integration: Familiarity with how downstream tools (Power BI/Tableau) consume curated datasets.
• Security: Knowledge of IAM, secrets management, encryption at rest and in transit, and PII handling.
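To illustrate the kind of work the ETL and SQL bullets above describe, here is a minimal sketch of an extract-transform-load pipeline in plain Python. It uses the standard-library `sqlite3` module as a stand-in for a production warehouse, and all table and column names are illustrative assumptions, not details from the posting; in this role the same pattern would typically run on Apache Spark against Hive tables.

```python
import sqlite3

# Minimal ETL sketch: extract raw records, transform (validate and
# cast), then load into a SQL table. Names are illustrative only.

def extract():
    # In practice this step might read from HDFS, S3, or a Hive table.
    return [
        {"id": 1, "amount": "120.50", "country": "SG"},
        {"id": 2, "amount": "bad", "country": "SG"},  # malformed row
        {"id": 3, "amount": "75.00", "country": "MY"},
    ]

def transform(rows):
    # Cast amounts to float; drop rows that fail validation.
    clean = []
    for row in rows:
        try:
            clean.append((row["id"], float(row["amount"]), row["country"]))
        except ValueError:
            continue  # a real pipeline would route rejects to an error table
    return clean

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(transform(extract()), conn)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    run_pipeline(conn)
    total = conn.execute(
        "SELECT SUM(amount) FROM sales WHERE country = 'SG'"
    ).fetchone()[0]
    print(total)  # the malformed row is dropped, leaving one SG record
```

A scheduler such as Control-M would invoke a job like this on a batch cadence and monitor its exit status, which is why the posting pairs ETL development with workflow orchestration.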