Big Data Developer

Best Job Tool

India

On-site

USD 15,000 - 25,000

Full time

7 days ago

Job summary

A leading company in data solutions seeks an experienced Big Data Engineer. In this role, you will design and optimize ETL/ELT data pipelines, work with large datasets, and mentor junior team members. The ideal candidate will have over 10 years of experience in Hadoop and Spark, strong problem-solving skills, and the ability to collaborate within cross-functional teams.

Qualifications

  • 10+ years in big data engineering with Hadoop and Spark expertise.
  • Strong experience in designing ETL/ELT pipelines.
  • Proficiency in Spark (SQL and DataFrame) for large datasets.

Responsibilities

  • Design, develop, and test ETL/ELT data pipelines using MapReduce and Spark.
  • Optimize job performance and analyze data models for efficiency.
  • Conduct code reviews and mentor junior engineers.

Skills

ETL/ELT
Python
PySpark
CI/CD
Hive
Unix Shell Scripting
SQL
Problem-Solving
Analytical Skills
Collaboration

Job description

  • ETL/ELT (professional experience with Teradata, Ab Initio)
  • Python or PySpark
  • CI/CD, Hive

Responsibilities:

  • Design, develop, and test robust ETL/ELT data pipelines using MapReduce and Spark.
  • Process large datasets in multiple file formats such as CSV, JSON, Parquet, and Avro.
  • Perform metadata configuration and optimize job performance.
  • Analyze and recommend changes to data models (E-R and dimensional models) for enhanced efficiency.
  • Collaborate with cross-functional teams to ensure smooth data workflows and processing.
  • Implement best practices in coding, performance tuning, and process automation.
  • Lead the team in troubleshooting complex data issues and provide guidance on best approaches.
  • Ensure high-quality delivery of data pipelines with regular performance and scalability checks.
  • Conduct code reviews and mentor junior engineers on technical skills and best practices.
  • Design and optimize processes for scalable data storage, management, and access.

Eligibility criteria:

  • 10+ years of experience in big data engineering with hands-on expertise in Hadoop and Spark.
  • Strong understanding and practical experience in designing, coding, and testing ETL/ELT pipelines.
  • Proficiency in Spark (SQL and DataFrame) for processing large datasets.
  • Experience with data models (E-R and dimensional) and their optimization.
  • Strong skills in Unix shell scripting (simple to moderate).
  • Familiarity with the Sparkflow framework (preferred).
  • Proficiency in SQL, GCP BigQuery, and Python is desirable.
  • Strong problem-solving and analytical skills.
  • Good communication and collaboration skills to work in a cross-functional team environment.
  • Ability to work independently and manage multiple tasks simultaneously.