
Senior Data Engineer IND (Remote)

RemoteStar

Cambridge

Remote

GBP 60,000 - 80,000

Full time

Job summary

A leading revenue intelligence platform is seeking a Data Engineer to design scalable data pipelines and improve internal processes. The ideal candidate has 10+ years of data engineering experience, including at least 5 years of hands-on work with Apache Spark. The role requires proficiency in coding and database management in a dynamic, fully remote setting across India.

Qualifications

  • Bachelor's degree in Engineering, Computer Science, or a relevant technical field.
  • 10+ years of recent experience in Data Engineering roles.
  • Minimum 5 years of hands-on experience with Apache Spark.
  • Proficiency in coding with Scala, Python, or Java.
  • Expertise in SQL with experience in PostgreSQL or MySQL.

Responsibilities

  • Design and build scalable data pipelines for ETL using Big Data technologies.
  • Identify and implement internal process improvements, such as automating manual tasks.
  • Collaborate with teams to support advanced data use cases.

Job description

Our client is a leading revenue intelligence platform, combining automation and human research to deliver 95% data accuracy across their published contact data. With a growing database of 5 million+ human-verified contacts and over 70 million machine-processed contacts, they offer one of the largest collections of direct-dial contacts in the industry. Their dedicated research team re-verifies contacts every 90 days, ensuring exceptional data accuracy and quality.

Location: Remote (Pan India)
Shift Timings: 2:00 PM - 11:00 PM IST
Reporting To: CEO or a Lead assigned by Management.

Responsibilities:
  • Design and build scalable data pipelines for extraction, transformation, and loading (ETL) using the latest Big Data technologies.
  • Identify and implement internal process improvements, such as automating manual tasks and optimizing data flows for better performance and scalability.
  • Partner with Product, Data, and Engineering teams to address data-related technical issues and infrastructure needs.
  • Collaborate with machine learning and analytics experts to support advanced data use cases.
Key Requirements:
  • Bachelor's degree in Engineering, Computer Science, or a relevant technical field.
  • 10+ years of recent experience in Data Engineering roles.
  • Minimum 5 years of hands-on experience with Apache Spark, with strong understanding of Spark internals.
  • Deep knowledge of Big Data concepts and distributed systems.
  • Proficiency in coding with Scala, Python, or Java, with flexibility to switch languages when required.
  • Expertise in SQL and hands-on experience with PostgreSQL, MySQL, or similar relational databases.
  • Strong cloud experience with Databricks, including Delta Lake.
  • Experience working with data formats such as Delta tables, Parquet, CSV, and JSON.
  • Comfortable working in Linux environments and with shell scripting.
  • Comfortable working in an Agile environment.
  • Machine Learning knowledge is a plus.
  • Must be capable of working independently and delivering stable, efficient, and reliable software.
  • Experience supporting and working with cross-functional teams in a dynamic environment.