
Senior Distributed Data Platform Engineer

Relativity

Białystok

Remote

PLN 181,000 - 271,000

Full time

Today

Job summary

A leading data technology firm in Poland seeks a Senior Data Platform Engineer to build scalable, cloud-native data platforms using Apache Spark and Delta Lake. The role emphasizes robust data pipeline design and advanced analytics. Ideal candidates have strong programming skills and experience with data governance and analytics workflows. A competitive salary of PLN 181,000 to 271,000 is offered.

Job description

Posting Type

Remote

Job Overview

We are building a specialized team focused on enabling advanced analytics and reporting capabilities across our internal data ecosystem. This team will design and maintain data platforms that integrate modern lakehouse technologies, distributed compute frameworks, and cloud‑native services to support diverse analytical use cases and enterprise‑scale insights.

As a Senior Data Platform Engineer, you will combine strong software engineering principles with deep data expertise to build robust, cloud‑native platforms. You will work on systems that leverage Apache Spark, Delta Lake, and Iceberg to process large‑scale datasets efficiently, while enabling internal users to build reporting and analytics through curated data models, optimized query performance, and reliable data pipelines. This role emphasizes cloud‑native architecture, data warehousing integration, and governance best practices to deliver secure, reliable, and future‑ready solutions.

Relativity’s scale and breadth provide significant opportunities for rich data exploration and insights. Our data infrastructure ensures that vast datasets remain accessible, secure, and compliant, while enabling innovation across the organization. We are making substantial investments in data lake technology and distributed systems to support future growth and advanced analytics.

Job Description and Requirements

Your Role in Action
  • Design and implement scalable data pipelines and distributed systems using Spark and Python to process and transform large‑scale datasets for analytics and reporting.
  • Apply software engineering best practices, including clean code, modular design, CI/CD, automated testing, and code reviews.
  • Develop and maintain lakehouse capabilities with Delta Lake and Iceberg, ensuring data reliability, versioning, and performance optimization.
  • Enable analytics workflows by integrating dbt for SQL transformations running on Spark.
  • Collaborate with internal teams to provide curated datasets and self‑service capabilities for reporting and advanced analytics.
  • Integrate and optimize data‑warehousing solutions such as Databricks and Snowflake for scalable storage and query performance.
  • Build platforms that allow secure and compliant access to diverse data sources for analytical use cases.
  • Implement observability and governance frameworks, including data lineage, quality checks, and compliance controls.
  • Drive performance tuning and cost optimization across Spark jobs and cloud‑native environments.
  • Champion best practices in CI/CD, automated testing, and infrastructure‑as‑code for data engineering workflows.

Core Requirements
  • Strong programming skills in Python and SQL.
  • Solid understanding of software engineering principles, CI/CD, and automated testing.
  • Hands‑on experience with Apache Spark for distributed data processing.
  • Expertise in Delta Lake and/or Apache Iceberg for lakehouse architecture.
  • Experience with dbt for data modeling and transformation workflows.
  • Familiarity with Databricks and Snowflake for data warehousing and analytics.
  • Understanding of data governance, lineage, and compliance in multi‑tenant environments.
  • Familiarity with Kubernetes, Docker, and infrastructure‑as‑code tools.
  • Understanding of performance tuning, scalability strategies, and cost optimization for large‑scale systems.

Nice to Have
  • Exposure to event‑driven architectures and advanced analytics platforms.
  • Experience enabling self‑service analytics for internal stakeholders.
  • Experience in any of the following languages: Java, Scala, Rust.

Relativity is a diverse workplace with different skills and life experiences—and we love and celebrate those differences. We believe that employees are happiest when they're empowered to be their full, authentic selves, regardless how you identify.

Benefit Highlights
  • Comprehensive health, dental, and vision plans
  • Parental leave for primary and secondary caregivers
  • Flexible work arrangements
  • Two week‑long company breaks per year
  • Unlimited time off
  • Long‑term incentive program
  • Training investment program

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other legally protected basis, in accordance with applicable law.

Relativity is committed to competitive, fair, and equitable compensation practices.

This position is eligible for total compensation which includes a competitive base salary, an annual performance bonus, and long‑term incentives.

The expected salary range for this role is between 181,000 and 271,000 PLN.

The final offered salary will be based on several factors, including but not limited to the candidate's depth of experience, skill set, qualifications, and internal pay equity. Hiring at the top end of the range is not typical, in order to leave room for meaningful salary growth in the position.

Suggested Skills
  • Automation
  • Data Analysis
  • Database Management
  • Network Architecture
  • Performance Optimization
  • Problem Solving
  • Project Management
  • Software Development
  • System Design
  • Technical Leadership