Remote Senior Data Engineer

Varwise

Remote

PLN 120,000 - 180,000

Full time

Posted yesterday

Job summary

A prominent Adtech company is seeking a Remote Senior Data Engineer to create reliable and scalable data processing systems. You'll work with cutting-edge technologies, focusing on big data and artificial intelligence. Ideal candidates have 8+ years in software engineering, extensive experience with Scala and Python, and proficiency in cloud platforms like AWS. Join an international team with a flat structure that offers a modern work environment and the opportunity to solve challenging problems.

Benefits

Free coffee
International projects
Team events
Modern office with no dress code
In-house trainings
Snacks and beverages

Qualifications

  • 8+ years of professional software engineering experience focusing on data engineering.
  • 4+ years of experience with Scala and familiarity with Python.
  • Proficiency in SDLC, cloud platforms, and large-scale data management.

Responsibilities

  • Create and maintain scalable distributed data processing systems.
  • Become a core maintainer of the data lake.
  • Troubleshoot and fix existing applications and services.

Skills

Data engineering experience
Scala programming
Python programming
Spark proficiency
AWS experience
Experience with AI technologies

Education

Bachelor's degree in Computer Science

Tools

Spark
AWS
Databricks
TensorFlow
Airflow

Job description
Remote Senior Data Engineer @ Varwise

We are looking for Data Engineers to work remotely for an Adtech company that uses machine learning and data science to build an identity graph capable of reaching millions of users, letting brands programmatically target selected households. The work includes scaling our Big Data asset, which combines billions of transaction data points (intent, conversions, and first-party data) into an identity graph that must scale to a future cookie-less world.

This is a 100% remote position; you will work with team members based in NYC. If you like solving hard, technically challenging problems, join us and apply those skills to building real-time, concurrent, globally distributed systems, applications, and services.
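
As a purely illustrative sketch of the kind of work described above, the Spark (Scala) snippet below folds raw transaction events into per-household identity records. All paths, dataset names, and columns (household_id, first_party_id, event_type) are hypothetical placeholders, not the company's actual schema or pipeline.

    // Illustrative only: folding transaction events into per-household
    // identity records with Spark. Names and paths are hypothetical.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object IdentityGraphSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("identity-graph-sketch").getOrCreate()

        // Billions of raw events: intent signals, conversions, first-party data.
        val events = spark.read.parquet("s3://example-bucket/transactions/")

        // Collapse events into one row per household with simple aggregates.
        val identities = events
          .groupBy("household_id")
          .agg(
            collect_set("first_party_id").as("linked_ids"),
            count(when(col("event_type") === "conversion", true)).as("conversions"),
            max("event_ts").as("last_seen")
          )

        identities.write.mode("overwrite").parquet("s3://example-bucket/identity-graph/")
        spark.stop()
      }
    }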

Responsibilities
  • Work on creating and maintaining reliable and scalable distributed data processing systems.
  • Become a core maintainer of the data lake, building searchable data sets for broader business use (see the sketch after this list).
  • Scale, troubleshoot, and fix existing applications and services.
  • Own a complex set of services and applications, ensuring that data pipelines run 24/7.
  • Lead technical discussions that drive improvements in tools, processes, and projects.
  • Work on scaling our identity graph to deliver impactful advertising campaigns.
  • Handle data sets exceeding billions of records.
  • Work on AWS-based infrastructure and an MLOps platform, using both traditional ML and LLM/generative AI based applications.
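
As referenced in the data lake responsibility above, here is a minimal, hypothetical sketch of publishing a curated, searchable data set to the lake with Spark in Scala. The table, path, and column names are invented for illustration and assume a Hive-compatible metastore is available.

    // Illustrative only: publishing a curated, queryable data set to the lake.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object CuratedDataSetJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("curated-dataset-job")
          .enableHiveSupport()   // assumed: a metastore so the table is discoverable
          .getOrCreate()

        val raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

        // Keep only well-formed records and a stable, documented schema.
        val curated = raw
          .filter(col("household_id").isNotNull)
          .select("household_id", "event_type", "campaign_id", "event_ts")
          .withColumn("event_date", to_date(col("event_ts")))

        // Partition by date so downstream consumers can prune efficiently.
        curated.write
          .mode("overwrite")
          .partitionBy("event_date")
          .format("parquet")
          .saveAsTable("analytics.transactions_curated")

        spark.stop()
      }
    }
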
Requirements & Skills
  • 8+ years of professional software engineering experience, with a focus on data engineering in big data environments.
  • 4+ years of experience developing and delivering production-grade Scala-based systems, plus familiarity with Python and at least one other high-level programming language (e.g., Java, C++, C#).
  • Proficiency in all aspects of SDLC, from concept to running production systems.
  • Proficiency using Spark (PySpark) or TensorFlow.
  • Proven experience building and optimizing large‑scale data pipelines using Databricks and Spark.
  • Experience participating in ETL and ML pipeline projects based on Airflow, Kubeflow, MLeap, SageMaker, or similar.
  • Hands‑on experience developing and deploying data solutions in a major cloud platform (AWS, GCP, or Azure).
  • Experience working with AI, LLMs, Agents, and/or generative AI technologies, both in product applications and for development productivity.
  • Large-scale database experience with both SQL and NoSQL stores such as PostgreSQL, Cassandra, Neo4j, Neptune, or similar.
  • Experience with large-scale data management formats and frameworks such as Parquet, ORC, Databricks/Delta Lake, Iceberg, or Hudi (see the sketch after this list).
  • Bachelor’s degree in Computer Science or related discipline.
  • Additional tools: Spark, AWS, Linux, NoSQL, SQL, Kafka, Scala, Neo4j, Databricks, PySpark, LLM, GCP, Kinesis, Airflow, Jenkins, Python, TensorFlow, Parquet, Delta Lake.
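
As referenced in the data formats item above, the following is a minimal sketch of an incremental upsert into a Delta Lake table using the Delta Scala API. It assumes the delta-spark library is on the classpath; all paths and column names are hypothetical.

    // Illustrative only: merging a daily batch of identity updates into a
    // Delta Lake table keyed on household_id. Names and paths are hypothetical.
    import io.delta.tables.DeltaTable
    import org.apache.spark.sql.SparkSession

    object DeltaUpsertSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

        // Today's batch of new or changed identity records.
        val updates = spark.read.parquet("s3://example-bucket/identity-updates/today/")

        DeltaTable.forPath(spark, "s3://example-bucket/identity-graph-delta/")
          .as("target")
          .merge(updates.as("source"), "target.household_id = source.household_id")
          .whenMatched.updateAll()
          .whenNotMatched.insertAll()
          .execute()

        spark.stop()
      }
    }
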
Benefits & Culture
  • Small teams, international projects, team events.
  • 100% remote, international team, flat structure.
  • Free coffee, bike parking, playroom, snacks, beverages.
  • Modern office, no dress code, in‑house trainings.