
AWS PySpark Developer

Trades Workforce Solutions

Singapore · On-site · Full time
SGD 60,000 - 80,000

Job summary

A consulting firm is seeking an AWS PySpark Developer based in Singapore. In this 12-month contract role, you will design, develop, and maintain data pipelines using PySpark and AWS services such as S3, EMR, and Lambda. Ideal candidates have strong Python programming skills, expertise in distributed data processing with PySpark, and proficiency in SQL. The position offers competitive compensation and the possibility of contract extension.

Job description
AWS PySpark Developer
What’s on Offer:
  • Industry: Consulting
  • Location: Singapore
  • 12-month contract role (with the possibility of extension)
  • Competitive Compensation
Job Description:
  • Design, develop, and maintain data pipelines and ETL processes using PySpark on distributed systems (a minimal sketch follows this list).
  • Implement scalable solutions on AWS cloud services (e.g., S3, EMR, Lambda).
  • Optimize data workflows for performance and reliability.
  • Collaborate with data engineers, analysts, and business stakeholders to deliver high-quality solutions.
  • Ensure data security and compliance with organizational standards.
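
To make the day-to-day work concrete, here is a minimal sketch of the kind of S3-to-S3 ETL pipeline the bullets above describe: read raw events from S3, aggregate them with PySpark, and write partitioned Parquet back to S3. All bucket paths, table names, and column names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw order events landed in S3 (hypothetical bucket and prefix).
orders = spark.read.csv(
    "s3a://example-raw-bucket/orders/",
    header=True,
    inferSchema=True,
)

# Transform: basic cleansing plus a per-customer, per-day aggregate.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_total"))
)

# Load: write partitioned Parquet for downstream consumers.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-curated-bucket/orders_daily/"
)

spark.stop()
```

On EMR, a job like this would typically be submitted via spark-submit as a cluster step; a Lambda function would more often trigger that step than run Spark itself.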
Job Requirements:
  • AWS: Hands-on experience with core AWS services for data engineering.
  • Python: Strong programming skills for data processing and automation.
  • PySpark: Expertise in distributed data processing and Spark framework.
  • SQL: Proficiency in writing complex queries and optimizing performance (see the example below).
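
As one hedged illustration of the SQL requirement, the self-contained sketch below keeps only the latest revision of each order using a window function, a common deduplication pattern in pipelines like those described above. The orders_raw view and its columns are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-dedupe-example").getOrCreate()

# Hypothetical raw data: several revisions of the same order_id.
spark.createDataFrame(
    [
        (1, "alice", 10.0, "2024-01-01 09:00:00"),
        (1, "alice", 12.5, "2024-01-02 09:00:00"),  # later revision wins
        (2, "bob", 7.0, "2024-01-01 10:00:00"),
    ],
    ["order_id", "customer_id", "amount", "updated_at"],
).createOrReplaceTempView("orders_raw")

# Keep only the newest row per order_id. On a real partitioned table,
# pushing filters into the inner query helps partition pruning.
latest = spark.sql("""
    SELECT order_id, customer_id, amount, updated_at
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id
                   ORDER BY updated_at DESC
               ) AS rn
        FROM orders_raw
    ) ranked
    WHERE rn = 1
""")

latest.show()
```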
Good to Have:
  • Databricks: Experience with Databricks platform for big data analytics.
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with data lake and data warehouse concepts.