
Senior Software Engineer (Scala)

Zendesk

Maharashtra

On-site

INR 8,00,000 - 12,00,000

Full time

Posted today

Job summary

A leading software solutions provider is seeking a Senior Software Engineer (Scala) in Maharashtra, India, to design and implement data pipelines using distributed data technologies. The ideal candidate has strong programming skills in Python, Java, and Scala, along with experience in big data tools such as Spark and Hadoop. The role also emphasizes collaboration on scalable data processing solutions and the use of data visualization tools such as Tableau and Power BI.

Qualifications

  • Proficiency in programming languages such as Python, Java, Scala, and C.
  • Experience with big data technologies like Hadoop, Spark, and data modeling.
  • Strong understanding of cloud platforms like Google Cloud and AWS.

Responsibilities

  • Design and implement data pipelines using distributed data technologies.
  • Collaborate on building scalable data processing solutions.
  • Utilize tools for data visualization and analytics.

Skills

Python
Java
Scala
C
PySpark
Spark SQL
Hadoop ecosystem
Tableau
Power BI
SQL

Tools

Git
Jenkins
Docker
OpenShift
JIRA

Job description

Responsibilities
  • Apply data engineering and software engineering concepts to design and implement data pipelines and systems using distributed data technologies.
  • Collaborate on building scalable data processing solutions using Spark, Airflow, Kafka, NiFi, SQL, and cloud-based architectures (a minimal Spark sketch appears after this list).
  • Develop and maintain data models, ETL/ELT processes, and data warehousing solutions; utilize tools such as BigQuery, Snowflake, Hive, Impala, Looker, Tableau, and Power BI for data visualization and analytics.
  • Implement and work with REST APIs, Jupyter Notebook workflows, and version control (Git, Jenkins, TeamCity) within CI/CD pipelines.
  • Leverage GenAI-related tooling and concepts (vector embeddings, RAG, LLMs, LangChain, LlamaIndex, OpenAI, Claude, Mistral) to design AI-enabled data applications and agent orchestration libraries.
  • Engage in agile practices (Scrum/Agile) and adhere to professional software engineering standards, security, and risk management principles.
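
For illustration only (not part of the posting): a minimal Scala sketch of the kind of batch pipeline the first two responsibilities describe. It assumes a Spark session, a hypothetical Parquet source of raw events, and a date-partitioned Parquet sink; all paths and column names are invented for the example.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object DailyEventAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-aggregation")
      .getOrCreate()

    // Read raw events landed by an upstream ingestion job (e.g. Kafka/NiFi).
    // The bucket path is a placeholder for this sketch.
    val rawEvents = spark.read.parquet("gs://example-bucket/raw_events/")

    // Basic cleansing plus a daily aggregate per user, standing in for the
    // real business transformation.
    val dailyCounts = rawEvents
      .filter(F.col("event_type").isNotNull)
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .groupBy("event_date", "user_id")
      .agg(F.count("*").as("event_count"))

    // Write back as date-partitioned Parquet for downstream BI tools
    // (Tableau, Power BI, Looker).
    dailyCounts.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("gs://example-bucket/curated/daily_event_counts/")

    spark.stop()
  }
}
```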
Qualifications / Skills
  • Programming languages and frameworks: Python, Java, Scala, C; PySpark; Spark Core, Spark SQL, Spark Streaming; JDBC.
  • Big data and data warehousing: Hadoop ecosystem, HDFS, YARN, Hive, Spark, Spark SQL, Spark Streaming, data modeling, OLAP concepts (a streaming sketch appears after this list).
  • Databases and storage: RDBMS (Oracle, MSSQL), NoSQL, SQL, data management best practices.
  • Cloud and distributed systems: Google Cloud Platform, AWS, Cloud Composer, OpenShift, data tooling (dbt, ETL/ELT frameworks).
  • Data visualization and BI: Tableau, Power BI, Looker.
  • Tools and platforms: Git, Jenkins, Artifactory, JIRA, OpenShift; CI/CD pipelines; Data governance and SDLC.
  • AI/ML concepts: GenAI applications, prompt engineering, vector embeddings, retrieval-augmented generation (RAG), OpenAI, Claude, Mistral, LangChain, LlamaIndex; experience with production-grade AI systems and agent orchestration.
  • Other skills: Data structures and algorithm design, architectural specifications, SDLC tools, security, risk management, analytical thinking, communication (verbal and written).
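
Again purely illustrative: a sketch of the Spark Structured Streaming plus Kafka combination named above, assuming the spark-sql-kafka connector is on the classpath. The broker address, topic name, and checkpoint path are placeholders, and the console sink stands in for a real target such as Parquet, BigQuery, or Snowflake.

```scala
import org.apache.spark.sql.SparkSession

object KafkaEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-event-stream")
      .getOrCreate()

    // Subscribe to a hypothetical "events" topic on a local broker.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast the value to a string
    // before any further parsing (e.g. from_json with a known schema).
    val messages = stream.selectExpr("CAST(value AS STRING) AS json")

    // Console sink for demonstration; the checkpoint path is a placeholder.
    val query = messages.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/kafka-event-stream")
      .start()

    query.awaitTermination()
  }
}
```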