PySpark - Data Architect

Virtusa

Dubai

On-site

AED 120,000 - 180,000

Full time

30+ days ago

Job summary

An established industry player is seeking a skilled Data Engineer with expertise in PySpark and the Cloudera Data Platform. In this dynamic role, you will design and optimize ETL pipelines, ensuring data integrity and performance while collaborating with cross-functional teams to meet data requirements. Your contributions will help drive data-driven initiatives and enhance analytical capabilities within the organization. If you have a passion for big data technologies and a commitment to quality, this is an exciting opportunity to advance your career in a thriving environment.

Qualifications

  • 3+ years of experience as a Data Engineer focused on PySpark and CDP.
  • Strong proficiency in data pipeline development and ETL processes.

Responsibilities

  • Design and maintain scalable ETL pipelines using PySpark on Cloudera.
  • Implement data ingestion and transformation processes for analytics.

Skills

PySpark
Data Engineering
Analytical Skills
Problem-Solving
Communication Skills
Attention to Detail

Education

Bachelor's degree in Computer Science
Master's degree in Data Engineering

Tools

Cloudera Data Platform
Apache Oozie
Apache Airflow
Hadoop
Kafka
SQL (Hive, Impala)

Job description

PySpark JD:

Responsibilities
  1. Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy (a minimal end-to-end sketch follows this list).
  2. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  3. Data Transformation and Processing: Use PySpark to process, cleanse and transform large datasets into meaningful formats that support analytical needs and business requirements.
  4. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
  5. Data Quality and Validation: Implement data quality checks, monitoring and validation routines to ensure data accuracy and reliability throughout the pipeline.
  6. Automation and Orchestration: Automate data workflows using Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
  7. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  8. Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
  9. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
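
For illustration, the following is a minimal sketch of the kind of PySpark ETL job described in responsibilities 1-5, assuming a Hive-backed CDP cluster; the paths, table and column names, and the quality threshold are hypothetical placeholders, not part of this posting.

# Minimal PySpark ETL sketch: ingest, cleanse, validate, and write a dataset.
# All paths, table names, columns, and thresholds below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl_sketch")        # hypothetical job name
    .enableHiveSupport()                 # assumes a Hive metastore, as is typical on CDP
    .getOrCreate()
)

# Ingestion: read raw CSV files from a landing zone (hypothetical HDFS path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("hdfs:///data/landing/orders/")
)

# Transformation: cleanse and standardize columns for analytics.
orders = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Data quality check: fail fast if too many rows were dropped as invalid.
total, valid = raw.count(), orders.count()
if total > 0 and valid / total < 0.95:   # hypothetical 95% threshold
    raise ValueError(f"Data quality check failed: only {valid}/{total} rows valid")

# Load: write partitioned Parquet into a Hive table for Hive/Impala consumers.
(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("analytics.orders")     # hypothetical target table
)

spark.stop()

In practice a job like this would be submitted via spark-submit on YARN and scheduled by an orchestrator, as sketched after the qualifications list below.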
Qualifications
  1. Education and Experience:
     • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
     • 3 years of experience as a Data Engineer with a strong focus on PySpark and the Cloudera Data Platform.
  2. Technical Skills:
     • PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
     • Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
     • Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (Hive, Impala).
     • Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
     • Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks (see the orchestration sketch after this list).
     • Scripting and Automation: Strong scripting skills in Linux.
  3. Soft Skills:
     • Strong analytical and problem-solving skills.
     • Excellent verbal and written communication abilities.
     • Ability to work independently and collaboratively in a team environment.
     • Attention to detail and commitment to data quality.
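
As a companion to the ETL sketch above, here is a minimal Apache Airflow sketch of how such a job might be scheduled; it assumes Airflow 2.4+ and the built-in BashOperator, and the DAG id, schedule, and script path are hypothetical, not taken from this posting.

# Minimal Airflow DAG sketch: schedule the (hypothetical) PySpark job daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="orders_etl_daily",           # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_orders_etl",
        # spark-submit against YARN, as is typical on a Cloudera cluster;
        # the script path is a placeholder.
        bash_command="spark-submit --master yarn /opt/jobs/orders_etl.py",
    )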