AWS PySpark Developer

ENFACTUM PTE. LTD.

Singapore

On-site

SGD 60,000 - 80,000

Full time

Job summary

A leading technology firm in Singapore is seeking a skilled Data Engineer to design and maintain data pipelines using PySpark in a distributed environment. You will implement scalable AWS solutions, optimize workflows, and ensure data security. The ideal candidate is proficient in AWS, Python, and SQL, with expertise in data processing and automation. This position offers an opportunity to collaborate with diverse teams and contribute to impactful data solutions.

Skills

AWS
Python
PySpark
SQL

Tools

Databricks

Job description

Key Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark on distributed systems (a sketch follows this list).
  • Implement scalable solutions on AWS cloud services (e.g., S3, EMR, Lambda).
  • Optimize data workflows for performance and reliability.
  • Collaborate with data engineers, analysts, and business stakeholders to deliver high-quality solutions.
  • Ensure data security and compliance with organizational standards.
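
As a rough illustration of the first responsibility, here is a minimal PySpark ETL sketch: read raw events from S3, aggregate them, and write Parquet back. The bucket paths, column names, and the transform are assumptions, since the posting does not name specific datasets or business logic.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Extract: raw JSON events landed in S3 (bucket and prefix are hypothetical)
raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

# Transform: drop rows without a user id, then count events per user per day
cleaned = raw.filter(F.col("user_id").isNotNull())
daily_counts = (
    cleaned
    .groupBy("user_id", F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet back to S3 for downstream consumers
(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/daily_event_counts/"))

spark.stop()
```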

Required Skills

  • AWS: Hands‑on experience with core AWS services for data engineering.
  • Python: Strong programming skills for data processing and automation.
  • PySpark: Expertise in distributed data processing and Spark framework.
  • SQL: Proficiency in writing complex queries and optimizing performance.
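
In a PySpark context, the SQL requirement above usually means analytical queries executed through Spark SQL. A hedged sketch of a windowed ranking query with a broadcast-join hint, assuming hypothetical orders and customers tables already registered with the session:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("regional-top-orders").getOrCreate()

# Rank orders within each customer region and keep the top 10 per region.
# The BROADCAST hint ships the small customers table to every executor
# instead of shuffling both sides of the join.
top_orders = spark.sql("""
    SELECT /*+ BROADCAST(c) */
           c.region,
           o.order_id,
           o.amount,
           RANK() OVER (PARTITION BY c.region ORDER BY o.amount DESC) AS rnk
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    WHERE o.order_date >= DATE '2024-01-01'
""").where("rnk <= 10")

top_orders.explain()  # confirm the physical plan uses a broadcast hash join
```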

Good to Have

  • Databricks: Experience with the Databricks platform for big data analytics (see the sketch after this list).
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with data lake and data warehouse concepts.
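
For the Databricks item above, a minimal sketch of a Delta Lake MERGE upsert, a common pattern on that platform. The table path, join keys, and source data are hypothetical, and it assumes a runtime where the Delta format and the DeltaTable API are available (Databricks, or open-source delta-spark):

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# New daily aggregates to merge in (source path is hypothetical)
updates = spark.read.parquet("s3://example-curated-bucket/daily_event_counts/")

# Target Delta table (path is hypothetical)
target = DeltaTable.forPath(spark, "/mnt/lake/daily_event_counts")

# MERGE: update rows that already exist for the key, insert the rest
(target.alias("t")
    .merge(
        updates.alias("u"),
        "t.user_id = u.user_id AND t.event_date = u.event_date")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```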