AWS PySpark Developer

ENFACTUM PTE. LTD.

Singapore

On-site

SGD 70,000 - 100,000

Full time

Posted yesterday

Job summary

A data engineering company in Singapore is looking for a skilled professional to design and maintain data pipelines using PySpark on AWS. The role involves optimizing workflows for performance, collaborating with various stakeholders, and ensuring data security. Strong experience in AWS, Python, PySpark, and SQL is essential, with knowledge of Databricks considered a plus. This position offers the opportunity to work with cutting-edge technologies in a dynamic environment.

Job description

Key Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark on distributed systems (a minimal sketch follows this list).
  • Implement scalable solutions on AWS cloud services (e.g., S3, EMR, Lambda).
  • Optimize data workflows for performance and reliability.
  • Collaborate with data engineers, analysts, and business stakeholders to deliver high-quality solutions.
  • Ensure data security and compliance with organizational standards.
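
For illustration, a minimal sketch of the kind of PySpark ETL job this role describes: read raw JSON from S3, aggregate it, and write partitioned Parquet back. All bucket names, paths, and column names here are hypothetical placeholders, not details from the posting.

    # Minimal PySpark ETL sketch; every path and column name is hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw JSON events from S3 (hypothetical bucket/prefix)
    raw = spark.read.json("s3://example-bucket/raw/events/")

    # Transform: drop records without a user_id, then count events per user per day
    daily = (
        raw.filter(F.col("user_id").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("user_id", "event_date")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write partitioned Parquet back to S3 (hypothetical bucket/prefix)
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/daily_events/"
    )

    spark.stop()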

Required Skills

  • AWS: Hands-on experience with core AWS services for data engineering.
  • Python: Strong programming skills for data processing and automation.
  • PySpark: Expertise in distributed data processing and the Spark framework.
  • SQL: Proficiency in writing complex queries and optimizing performance.

Good to Have

  • Databricks: Experience with the Databricks platform for big data analytics.
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with data lake and data warehouse concepts.