A leading company in the data engineering field is seeking an experienced Data Engineer proficient in Python, PySpark, and SQL. The ideal candidate will possess strong problem-solving abilities and a solid understanding of AWS and data warehousing concepts. Join a collaborative team and contribute to building data applications and solutions that drive business insights.
Mandatory skills:
Very Strong Proficiency:
Python: Extensive experience in Python for data manipulation, scripting, and building data applications.
PySpark: Deep expertise in developing and optimizing large-scale data transformations using PySpark.
SQL: Advanced SQL skills, including complex query writing, performance tuning, and database design.
AWS: Hands-on experience designing, deploying, and managing data solutions on AWS services (e.g., S3, EMR, Glue, Lambda).
Solid understanding of data warehousing concepts, ETL/ELT principles, and data pipeline best practices.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a collaborative team.
Desired skills:
Airflow: Experience with Airflow for orchestrating and managing data workflows.
Snowflake: Familiarity with Snowflake for cloud data warehousing and analytical processing.
Bitbucket (or Git): Proficient in using version control systems for collaborative development.
Domain:
Data Engineering
Mode of Interview: Teams or face-to-face