Data Engineer (PySpark) / Cloudera Data Platform Expert

GSSTech Group

Dubai

On-site

AED 120,000 - 200,000

Full time

Today

Job summary

A prominent technology firm in Dubai is seeking a skilled Data Engineer (PySpark) to develop and maintain scalable data pipelines using the Cloudera Data Platform. The role demands expertise in data ingestion, transformation, and advanced processing techniques. Ideal candidates hold a Bachelor's or Master's degree in Computer Science or a related field and have at least 3 years of data engineering experience with a focus on PySpark. The position emphasizes close collaboration with cross-functional teams and a strong commitment to data quality.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Responsibilities

  • Design, develop, and maintain highly scalable ETL pipelines using PySpark.
  • Implement and manage data ingestion processes from various sources.
  • Use PySpark to cleanse and transform large datasets.
  • Conduct performance tuning of PySpark code and Cloudera components.
  • Implement data quality checks throughout the pipeline.
  • Automate data workflows using Apache Oozie or Airflow.
  • Monitor pipeline performance and troubleshoot issues.

Skills

Advanced proficiency in PySpark
Experience with Cloudera Data Platform
Knowledge of data warehousing concepts
Familiarity with Hadoop and Kafka
Experience with orchestration frameworks
Strong scripting skills in Linux

Education and Experience

Bachelor’s or Master’s degree in Computer Science
3+ years of experience as a Data Engineer

Tools

Cloudera Manager
Hive
Impala
HDFS
HBase
Apache Oozie
Airflow

Job description

Job Title: Data Engineer (PySpark)

About the Role

We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques.

The ideal candidate has hands‑on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers, analysts, and business stakeholders to build solutions that deliver actionable business insights.

Responsibilities
  • Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy (an illustrative sketch follows this list).
  • Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  • Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
  • Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
  • Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
  • Automate data workflows using Apache Oozie, Airflow, or similar orchestration frameworks within the Cloudera ecosystem.
  • Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  • Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data‑driven initiatives.
  • Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
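
To make the PySpark and CDP scope of these responsibilities concrete, here is a minimal, illustrative ETL sketch: ingest a raw file, cleanse and transform it, apply a simple data quality gate, and write the result to a Hive table. The application name, input path, column names, target table, and the 95% retention threshold are hypothetical placeholders rather than details from this posting; the sketch assumes a CDP cluster where Spark can write Hive-managed tables.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative sketch only: names, paths, and thresholds are hypothetical.
spark = (
    SparkSession.builder
    .appName("orders_etl_sketch")      # hypothetical job name
    .enableHiveSupport()               # assumes Hive is available on the CDP cluster
    .getOrCreate()
)

# Ingest: read raw data landed in the data lake (placeholder path).
raw = spark.read.option("header", True).csv("/data/landing/orders/")

# Cleanse and transform: normalise types, derive a partition column,
# drop duplicates and records that fail basic validity rules.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
)

# Data quality check: fail the run if too many rows were rejected.
raw_count, clean_count = raw.count(), orders.count()
if raw_count > 0 and clean_count / raw_count < 0.95:
    raise ValueError(f"Quality gate failed: kept {clean_count} of {raw_count} rows")

# Load: write a partitioned Parquet table queryable from Hive or Impala.
(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("analytics.orders_clean")  # hypothetical target table
)

In practice a job like this would typically be parameterised and submitted with spark-submit, then scheduled through Oozie or Airflow as noted in the automation responsibility above.
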
Qualifications
Education and Experience
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
  • PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
  • Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
  • Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
  • Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
  • Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks (an illustrative Airflow sketch follows this list).
  • Scripting and Automation: Strong scripting skills in Linux.
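
As one concrete illustration of the orchestration and scheduling skills above, the following is a minimal Airflow 2.x DAG sketch that submits a PySpark job with spark-submit on a daily schedule. The DAG id, schedule, and script path are hypothetical placeholders; an equivalent Oozie workflow would be defined as an XML workflow instead.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative sketch only: the DAG id, schedule, and script path are placeholders.
with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_orders_etl",
        # Assumes the PySpark script from the earlier sketch is deployed on the cluster.
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster /jobs/orders_etl.py"
        ),
    )

Depending on the team's standard tooling, the provider-package SparkSubmitOperator or an Oozie Spark action could be used in place of the shell command shown here.
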
Soft Skills
  • Strong analytical and problem‑solving skills.
  • Excellent verbal and written communication abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Attention to detail and commitment to data quality.