Join a forward-thinking company where your skills in SQL, Python, and cloud technologies will shine. This role offers the opportunity to support a mission-critical environment while working with cutting-edge tools like AWS and Teradata. You’ll be part of a dynamic team, solving complex problems and ensuring the smooth operation of large-scale distributed systems. Your expertise in big data technologies and agile practices will be invaluable as you contribute to maintaining data ingestion pipelines and troubleshooting production issues. If you thrive in a collaborative environment and are eager to tackle challenges, this position is perfect for you.
Full Job Description
Ability to support a mission-critical 24/7/365 environment.
Demonstrated problem-solving skills and analytical ability.
Ability to work effectively both independently and in a team environment.
Experience using a ticketing or incident management system such as ServiceNow.
Knowledge of enterprise-level relational databases (Teradata, SQL Server, PostgreSQL, etc.).
Proficiency in SQL analysis, development, and troubleshooting.
Proficiency in shell scripting and solid experience with Python.
Strong working knowledge of PySpark.
Experience with or knowledge of core cloud concepts (networking, security, IAM, etc.).
Hands-on experience with AWS, Airflow, Glue, RDS, and Redshift.
Experience with large-scale distributed software systems.
Experience with big data platform technologies, including extensive knowledge of data integration, enterprise data warehouses, data lakes, and analytical ecosystems.
Experience with the Teradata database and tools, with a broad understanding of Teradata’s product line.
Good understanding of Agile principles and experience applying Agile practices in both development and support.
Experience maintaining data ingestion pipelines and working with ETL tools.
Experience with BTEQ scripts and Teradata support is an added advantage.
Understanding of big data technologies such as Hadoop, Spark, and Hive.
Ability to analyze logs for errors and exceptions, and to drill errors down to their root cause (environment issues, code issues, etc.).
Good knowledge of Linux and strong debugging skills.
Strong verbal and written communication skills are mandatory.
Excellent analytical and problem-solving skills are mandatory.
Solid troubleshooting abilities and the ability to work with a team to resolve large-scale production issues.