Job Description
We are looking for a data enthusiast with extensive experience in the analysis, design, and development of data warehousing solutions, and in building Extraction, Transformation, and Loading (ETL) strategies using Ab Initio.
Key Responsibilities
- Develop, maintain, and optimize data products for enterprise consumption using Ab Initio and SQL-based tools.
- Perform end-to-end ETL development, including extraction, transformation, and loading of data from multiple sources.
- Build and maintain scalable data warehousing solutions in Teradata, Snowflake, and DB2.
- Develop pipelines and frameworks in Ab Initio GDE, Conduct>It, EME, Continuous Flows, and related tools.
- Apply advanced data analysis, data modeling, and data design principles.
- Ensure all data products align with the company’s architecture standards, guidelines, and security requirements.
- Collaborate with data scientists, analysts, architects, and business teams to deliver clean, validated datasets.
- Automate processes and improve operational efficiency using scripting and scheduling tools.
- Troubleshoot data issues and ensure robust data quality, lineage, and metadata management.
- Work collaboratively in Agile teams, using JIRA, Rally, and the company’s ITIL toolsets, such as ServiceNow.
- Participate in innovation, technical forums, and thought leadership initiatives within the data engineering domain.
- Ensure adherence to governance frameworks, risk controls, and compliance requirements.
Experience Required
- Extensive experience in Data Engineering, Data Warehousing, and ETL Development.
- Advanced, hands-on experience in:
  - Ab Initio suite (GDE, EME, Conduct>It, Continuous Flows, Web Services)
  - SQL for Teradata and Snowflake
  - UNIX scripting
- Experience delivering enterprise‑scale data products for analytics.
- Experience designing data processing strategies in a multi‑platform environment.
- Proven ability to debug, tune performance, and maintain high-quality data solutions.