Impetus Technologies is hiring a Sr Technical Support Engineer (AWS/SQL/Spark) in Indore, Madhya Pradesh

Impetus Technologies

Indore

On-site

INR 6,75,000 - 9,00,000

Full time

30+ days ago

Job summary

Join a forward-thinking company where your skills in SQL, Python, and cloud technologies will shine. This role offers the opportunity to support a mission-critical environment while working with cutting-edge tools like AWS and Teradata. You’ll be part of a dynamic team, solving complex problems and ensuring the smooth operation of large-scale distributed systems. Your expertise in big data technologies and agile practices will be invaluable as you contribute to maintaining data ingestion pipelines and troubleshooting production issues. If you thrive in a collaborative environment and are eager to tackle challenges, this position is perfect for you.

Qualifications

  • Proficiency in SQL analysis, development, and troubleshooting.
  • Experience with AWS, Airflow, Glue, RDS, and Redshift.
  • Strong verbal and written communication skills are mandatory.

Responsibilities

  • Support a mission-critical 24/7/365 environment.
  • Analyze logs for errors and troubleshoot production issues.
  • Maintain data ingestion pipelines and ETL tools.

Skills

SQL
Python
PySpark
AWS
Shell scripting
Data integration
Agile principles
Big Data technologies
Linux
Teradata

Tools

ServiceNow
Airflow
Glue
RDS
Redshift

Job description

  • Ability to support a mission-critical 24/7/365 environment.
  • Demonstrated problem-solving skills and analytical ability.
  • Ability to work effectively both independently and in a team environment.
  • Experience using a ticketing or incident management system such as ServiceNow.
  • Knowledge of enterprise-level relational databases (Teradata, SQL, Postgres, etc.).
  • Proficiency in SQL analysis, development, and troubleshooting.
  • Proficiency in shell scripting and solid experience in Python.
  • Well versed in PySpark (a brief sketch follows this list).
  • Experience with or knowledge of core cloud concepts (networking, security, IAM, etc.).
  • Hands-on experience with AWS, Airflow, Glue, RDS, and Redshift.
  • Experience with large-scale distributed software systems.
  • Experience with Big Data platform technologies, with extensive knowledge of data integration, enterprise data warehouses, data lakes, and the analytical ecosystem.
  • Experience with the Teradata database and tools, with a broad understanding of Teradata's products.
  • Good understanding of Agile principles and experience with Agile practices in development and support.
  • Experience maintaining data ingestion pipelines and ETL tools (see the Airflow sketch below).
  • Experience with BTEQ scripts and Teradata support is an added advantage.
  • Understanding of big data technologies such as Hadoop, Spark, and Hive.
  • Ability to analyze logs for errors and exceptions, and to drill down into whether an error stems from an environment issue, a code issue, etc. (see the log-triage sketch below).
  • Good knowledge of Linux and strong debugging skills.
  • Strong verbal and written communication skills are mandatory.
  • Excellent analytical and problem-solving skills are mandatory.
  • Solid troubleshooting abilities and the ability to work with a team to resolve large production issues.
