Senior Data Engineer IND (Remote)

Remotestar

Cambourne

Remote

GBP 60,000 - 90,000

Full time

10 days ago

Job summary

A leading revenue intelligence platform is seeking a skilled Data Engineer to design scalable data pipelines and optimize internal processes. The ideal candidate will have over 10 years of experience and a degree in a technical field, with proficiency in Apache Spark, SQL, and data engineering practices. This remote position offers opportunities to collaborate with cross-functional teams and work on advanced data use cases in a dynamic environment.

Qualifications

  • 10+ years of recent experience in Data Engineering roles.
  • Minimum 5 years hands-on experience with Apache Spark.
  • Strong cloud experience with Databricks.

Responsibilities

  • Design and build scalable data pipelines for ETL processes.
  • Implement process improvements and optimize data flows.
  • Collaborate with teams to address data-related technical issues.

Skills

Data Engineering
Apache Spark
Scala
Python
SQL
Databricks
Big Data

Education

Bachelor’s degree in Engineering, Computer Science, or a relevant technical field

Tools

PostgreSQL
MySQL
Linux

Job description

Our Client is a leading revenue intelligence platform, combining automation and human research to deliver 95% data accuracy across their published contact data. With a growing database of 5 million+ human-verified contacts and over 70 million machine-processed contacts, they offer one of the largest collections of direct-dial contacts in the industry. Their dedicated research team re-verifies contacts every 90 days, ensuring exceptional data accuracy and quality.


Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Reporting To: CEO or a Lead assigned by Management.

Responsibilities:
  • Design and build scalable data pipelines for extraction, transformation, and loading (ETL) using the latest Big Data technologies.
  • Identify and implement internal process improvements, such as automating manual tasks and optimizing data flows for better performance and scalability.
  • Partner with Product, Data, and Engineering teams to address data-related technical issues and infrastructure needs.
  • Collaborate with machine learning and analytics experts to support advanced data use cases.
Key Requirements:
  • Bachelor’s degree in Engineering, Computer Science, or a relevant technical field.
  • 10+ years of recent experience in Data Engineering roles.
  • Minimum 5 years of hands-on experience with Apache Spark, with strong understanding of Spark internals.
  • Deep knowledge of Big Data concepts and distributed systems.
  • Proficiency in coding with Scala, Python, or Java, with flexibility to switch languages when required.
  • Expertise in SQL and hands-on experience with PostgreSQL, MySQL, or similar relational databases.
  • Strong cloud experience with Databricks, including Delta Lake.
  • Experience working with data formats like Delta Tables, Parquet, CSV, JSON.
  • Comfortable working in Linux environments and scripting.
  • Comfortable working in an Agile environment.
  • Machine Learning knowledge is a plus.
  • Must be capable of working independently and delivering stable, efficient and reliable software.
  • Experience supporting and working with cross-functional teams in a dynamic environment.


