Data Engineer III

Zinnia

Pune City

On-site

INR 12,00,000 - 18,00,000

Full time

4 days ago

Job summary

A leading technology platform in Pune is seeking an experienced Data Engineer to design and optimize data pipelines that support analytics and decision-making. The ideal candidate will have strong skills in big data technologies such as Spark and SQL and will collaborate with various stakeholders to deliver high-quality data solutions. The role offers a competitive salary and benefits within an organization focused on fostering an inclusive workplace.

Qualifications

  • 5+ years of experience as a Data Engineer working on large-scale data systems.
  • Proven track record of delivering production-ready data pipelines in big data environments.
  • Strong analytical thinking, problem-solving, and communication skills.

Responsibilities

  • Design, develop, and maintain scalable big data pipelines using Spark.
  • Build and manage data workflows and orchestration using Airflow.
  • Develop complex SQL queries for data transformation, validation, and reporting.

Skills

Big Data stack
Programming skills
SQL
Spark tuning
Data workflow orchestration

Education

Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field

Tools

Spark
Airflow
Cloud technologies

Job description

Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products, enabling more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value – and that we do. Zinnia has over $180 billion in assets under administration and serves 100+ carrier clients, 2,500 distributors and partners, and over 2 million policyholders.

WHO YOU ARE:

We are looking for an experienced Data Engineer to join our data engineering team and help design, build, and optimize robust data pipelines and platforms that power our analytics, products, and decision-making. This role demands a strong foundation in big data technologies, excellent programming and SQL skills, and hands-on experience in tuning and optimization for large-scale data processing. You will work closely with data scientists, analysts, product managers, and other engineers to build scalable, efficient, and reliable data solutions.

WHAT YOU’LL DO:

  • Design, develop, and maintain scalable big data pipelines using Spark (Scala or PySpark), Hive, and HDFS.
  • Build and manage data workflows and orchestration using Airflow.
  • Write efficient, production-grade code in any major programming language (such as Python, Java, or Scala) to transform and process data.
  • Develop complex SQL queries for data transformation, validation, and reporting, ensuring high performance and optimization.
  • Tune and optimize Spark jobs and SQL queries to improve performance and resource utilization.
  • Work on cloud platforms (AWS, Azure, GCP) to deploy and manage data infrastructure, preferably AWS with EMR.
  • Collaborate with data stakeholders to understand requirements and deliver reliable, high-quality data solutions.
  • Maintain data quality, governance, and monitoring, ensuring pipelines are robust, observable, and recoverable.
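The SQL responsibilities above – a single query that both transforms (aggregates) and validates data – can be sketched in miniature. This is an illustrative example only: the table, columns, and rows are hypothetical, and it uses SQLite in place of a Spark SQL engine purely so the idea is self-contained.

```python
import sqlite3

# In-memory database with a hypothetical policy_payments table
# (schema and data are illustrative, not an actual data model).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE policy_payments (
    policy_id TEXT,
    paid_on   TEXT,
    amount    REAL
);
INSERT INTO policy_payments VALUES
    ('P-1', '2024-01-05', 100.0),
    ('P-1', '2024-02-05', 100.0),
    ('P-2', '2024-01-20', 250.0),
    ('P-2', '2024-02-20', -50.0);  -- negative amount: a validation failure
""")

# Transformation: per-policy totals. Validation: count rows with a
# negative amount so downstream reporting can quarantine those policies.
rows = conn.execute("""
    SELECT policy_id,
           SUM(amount)     AS total_paid,
           SUM(amount < 0) AS bad_rows
    FROM policy_payments
    GROUP BY policy_id
    ORDER BY policy_id
""").fetchall()

for policy_id, total_paid, bad_rows in rows:
    status = "OK" if bad_rows == 0 else "NEEDS REVIEW"
    print(policy_id, total_paid, status)
```

In a real pipeline the same pattern – aggregate and flag in one pass – would run as a Spark SQL query over partitioned data rather than an in-memory table.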

WHAT YOU’LL NEED:

  • Strong experience with the Big Data stack: HDFS, Hive, Spark (Scala or PySpark)
  • Excellent programming skills in any major language (Python, Java, Scala, etc.)
  • Expert in SQL, with the ability to write and optimize complex queries
  • Hands-on experience with Spark tuning and optimization (both compute and SQL layer)
  • Experience with Airflow for data workflow orchestration
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • 5+ years of experience as a Data Engineer working on large-scale data systems
  • Proven track record of delivering production-ready data pipelines in big data environments
  • Strong analytical thinking, problem-solving, and communication skills
  • Exposure to cloud technologies (AWS, Azure, or GCP)
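The Airflow requirement above comes down to expressing a pipeline as a DAG of dependent tasks. As a minimal stand-in, here is that idea in plain Python's standard-library `graphlib` – the task names are hypothetical, and this is not the Airflow API itself:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline steps; in Airflow each would be an operator/task
# in a DAG, with the same dependency edges declared between them.
deps = {
    "extract":   set(),
    "transform": {"extract"},
    "validate":  {"transform"},
    "load":      {"transform"},
    "report":    {"validate", "load"},
}

# A valid execution order that respects every dependency.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Airflow's scheduler does this ordering (plus retries, scheduling, and monitoring) for you; the point here is only that a workflow is a dependency graph, not a linear script.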

Preferred / Nice-to-Have Skills

  • AWS ecosystem, especially EMR (Elastic MapReduce)
  • Familiarity with Lakehouse formats (Hudi, Delta, Iceberg)
  • Experience with dbt (data build tool) for analytics engineering
  • Experience with Kafka (streaming ingestion)
  • Familiarity with monitoring tools like Prometheus and Grafana

WHAT’S IN IT FOR YOU?

At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
