Data Pipeline Development: Design, develop and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform (CDP), ensuring data integrity and accuracy (an end-to-end sketch covering ingestion and transformation follows this list).
Data Ingestion: Implement and manage data ingestion processes from a variety of sources (relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes (see the tuning sketch after this list).
Data Quality and Validation: Implement data quality checks, monitoring and validation routines to ensure data accuracy and reliability throughout the pipeline (see the validation sketch after this list).
Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Apache Airflow within the Cloudera ecosystem (see the DAG sketch after this list).
Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
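To make the responsibilities above concrete, here is a minimal end-to-end PySpark sketch: ingest from a relational source over JDBC, cleanse and transform, then persist to a warehouse table on CDP. The connection URL, credentials, and table and column names are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal ETL sketch: ingest -> transform -> load on CDP. All names below
# (database, tables, columns) are illustrative assumptions.
spark = (
    SparkSession.builder
    .appName("orders-etl")
    .enableHiveSupport()  # assumes the cluster exposes a Hive metastore
    .getOrCreate()
)

# Ingest: read a source table from a relational database over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")  # placeholder URL
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Transform: cleanse and reshape into an analytics-friendly format.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("order_ts").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Load: persist as a partitioned table in the data lake / warehouse.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("analytics.orders_clean")
)
```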
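For the performance-tuning responsibility, a sketch of common PySpark optimizations: right-sizing shuffle parallelism, broadcasting a small dimension table, caching a reused DataFrame, and coalescing output files. The table names, partition counts, and output path are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("tuning-demo").enableHiveSupport().getOrCreate()
)

# Right-size shuffle parallelism; the default of 200 partitions is often
# too many for modest data volumes. 64 is an illustrative choice.
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.table("analytics.orders_clean")   # large fact table (hypothetical)
dims = spark.table("analytics.dim_customers")   # small dimension (hypothetical)

# Broadcast the small dimension so the join avoids shuffling the fact table.
enriched = facts.join(F.broadcast(dims), "customer_id")

# Cache only when a DataFrame is reused across multiple actions.
enriched.cache()
enriched.count()  # the first action materializes the cache

# Coalesce before writing to avoid producing many small HDFS files.
enriched.coalesce(16).write.mode("overwrite").parquet("/data/curated/orders_enriched")
```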
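Data quality validation can be as simple as counting rule violations and failing the run when any are found, so the orchestrator can halt downstream tasks and alert. The rules and table name below are illustrative.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()
df = spark.table("analytics.orders_clean")  # hypothetical curated table

failures = {
    # Rule 1: the primary key must be unique.
    "duplicate_order_ids": df.groupBy("order_id").count()
                             .filter(F.col("count") > 1).count(),
    # Rule 2: critical columns must not contain nulls.
    "null_amounts": df.filter(F.col("amount").isNull()).count(),
    # Rule 3: values must fall inside an expected range.
    "negative_amounts": df.filter(F.col("amount") < 0).count(),
}

bad = {name: n for name, n in failures.items() if n > 0}
if bad:
    # Failing loudly lets the scheduler mark the run as failed and alert.
    raise ValueError(f"Data quality checks failed: {bad}")
```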
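Finally, an orchestration sketch, assuming Apache Airflow 2.x: a daily DAG chaining ingest, transform, and validate steps, each submitted with spark-submit. The DAG id, schedule, and script paths are placeholders; an equivalent workflow could be expressed in Oozie.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: ingest -> transform -> validate, each step a
# spark-submit job on YARN. Paths and schedule are illustrative.
with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="spark-submit --master yarn /opt/etl/ingest_orders.py",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit --master yarn /opt/etl/transform_orders.py",
    )
    validate = BashOperator(
        task_id="validate",
        bash_command="spark-submit --master yarn /opt/etl/dq_checks.py",
    )

    # Dependencies mirror the pipeline order.
    ingest >> transform >> validate
```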
Qualifications
Education and Experience: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
3 years of experience as a Data Engineer with a strong focus on PySpark and the Cloudera Data Platform.
Technical Skills:
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong shell scripting skills in Linux environments.
Soft Skills:
Strong analytical and problem-solving skills.
Excellent verbal and written communication abilities.
Ability to work independently and collaboratively in a team environment.
Attention to detail and commitment to data quality.