PySpark - Data Architect

Virtusa
Dubai
AED 120,000 - 180,000
Job description

PySpark JD:

Responsibilities

  1. Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
  2. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  3. Data Transformation and Processing: Use PySpark to process, cleanse and transform large datasets into meaningful formats that support analytical needs and business requirements.
  4. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
  5. Data Quality and Validation: Implement data quality checks, monitoring and validation routines to ensure data accuracy and reliability throughout the pipeline.
  6. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
  7. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  8. Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
  9. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Qualifications

  1. Education and Experience:
     1. Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
     2. 3 years of experience as a Data Engineer with a strong focus on PySpark and the Cloudera Data Platform.
  2. Technical Skills:
     1. PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
     2. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
     3. Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, and experience with SQL-based tools (Hive, Impala).
     4. Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
     5. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
     6. Scripting and Automation: Strong scripting skills in Linux.
  3. Soft Skills:
     1. Strong analytical and problem-solving skills.
     2. Excellent verbal and written communication abilities.
     3. Ability to work independently and collaboratively in a team environment.
     4. Attention to detail and commitment to data quality.