Databricks Engineer

Codest Ltd. Company No. 12590542, VAT number: GB363431020

Remote

27,000 - 34,000 PLN (B2B)

Full time

Job summary

A leading tech software company in Poland is seeking an experienced Databricks Engineer to design and manage scalable data solutions for a banking app. Candidates should have over 10 years of data engineering experience, proficiency in Python, PySpark, and SQL, along with a Databricks certification. The role offers a B2B contract with a competitive salary of 27,000-34,000 PLN and the flexibility of 100% remote work.

Benefits

27,000-34,000 PLN salary on a B2B contract
100% remote work
300 PLN benefit platform credit
Integration events and education opportunities

Qualifications

  • 10+ years of experience in data engineering with production-grade data pipelines.
  • Proficiency in Python, PySpark, and SQL.
  • Extensive hands-on experience with AWS services.

Responsibilities

  • Design and manage scalable data solutions and pipelines.
  • Build data quality frameworks with automated testing.
  • Implement advanced Delta Lake features.

Skills

Python
PySpark
SQL
Data engineering
Data governance
English communication

Education

Databricks Certified Professional Data Engineer

Tools

AWS (Glue, Lambda, Redshift, S3)
Docker
Kubernetes
Job description
🌍 Hello World!

We are The Codest, an international tech software company with tech hubs in Poland delivering global IT solutions and projects. Our core values lie in a “Customers and People First” approach that prioritises the needs of our customers and fosters a collaborative environment for our employees, enabling us to deliver exceptional products and services.

Our expertise centers on web development, cloud engineering, DevOps and quality. After many years of developing our own product, Yieldbird, which was honored as a laureate of the prestigious Deloitte Top25 awards, we arrived at our mission: to help tech companies build impactful products and scale their IT teams by boosting IT delivery performance. Through our extensive experience with product development challenges, we have become experts in building digital products and scaling IT teams.

But our journey does not end here - we want to continue our growth. If you’re goal-driven and looking for new opportunities, join our team! What awaits you is an enriching and collaborative environment that fosters your growth at every step.

We are currently looking for:

DATABRICKS ENGINEER

Here, you will have an opportunity to contribute to a banking app for one of the leading financial groups in Japan. The platform includes banking modules and data management features and is customer‑facing as well. We are seeking an experienced Databricks Engineer to design, build, and manage scalable data solutions and pipelines using Databricks. You’ll work closely with cross‑functional teams to ensure data is reliable, accessible, and efficient to power analytics and business intelligence initiatives.

📈 Your Responsibilities:
  • Architect medallion architecture (Bronze, Silver, Gold) lakehouses with optimized performance patterns
  • Build strong data quality frameworks with automated testing and monitoring
  • Implement advanced Delta Lake features such as time travel, vacuum operations, and Z‑ordering (see the first sketch after this list)
  • Develop and maintain complex ETL/ELT pipelines processing large‑scale datasets daily
  • Design and implement CI/CD workflows for data pipelines using Databricks Asset Bundles or equivalent tools
  • Create real‑time and batch data processing solutions with Structured Streaming and Delta Live Tables (see the streaming sketch after this list)
  • Optimize Spark jobs for cost efficiency and performance, leveraging cluster auto‑scaling and resource management
  • Develop custom integrations with Databricks APIs and external systems
  • Design scalable data architectures using Unity Catalog, Delta Lake, and Apache Spark
  • Establish data mesh architectures with governance and lineage tracking
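
To make the Delta Lake items above concrete, here is a minimal PySpark sketch of a Bronze → Silver medallion step with Z‑ordering, time travel, and a vacuum pass. All paths, table locations, and column names (event_id, event_ts, user_id) are illustrative assumptions, not details of the actual project.

```python
# A minimal sketch with illustrative paths and columns; on a Databricks
# cluster the SparkSession and Delta Lake support are already configured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw JSON as-is, preserving the source payload.
bronze = spark.read.json("s3://example-bucket/raw_events/")  # placeholder path
bronze.write.format("delta").mode("append").save("/delta/bronze/events")

# Silver: deduplicate and type the data for downstream consumers.
silver = (
    spark.read.format("delta").load("/delta/bronze/events")
    .dropDuplicates(["event_id"])                        # assumed key column
    .withColumn("event_ts", F.to_timestamp("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("/delta/silver/events")

# Z-order a frequently filtered column to reduce scan time on reads.
spark.sql("OPTIMIZE delta.`/delta/silver/events` ZORDER BY (user_id)")

# Time travel: read an earlier version of the table, e.g. for audits.
previous = (
    spark.read.format("delta")
    .option("versionAsOf", 0)
    .load("/delta/silver/events")
)

# Vacuum files that fell out of the 7-day default retention window.
spark.sql("VACUUM delta.`/delta/silver/events` RETAIN 168 HOURS")
```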
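
Likewise, a hedged sketch of the Structured Streaming side: ingesting a Kafka topic into a Delta table with checkpointing. The broker address, topic name, and schema are placeholder assumptions; Delta Live Tables would express a similar transformation as a managed pipeline.

```python
# A minimal Structured Streaming sketch; broker, topic, and schema are
# placeholder assumptions, not details of the actual project.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    # Kafka delivers raw bytes; parse the JSON value against the schema.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append into Delta with a checkpoint so the stream can recover after a
# restart without duplicating records.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/delta/_checkpoints/events")
    .outputMode("append")
    .start("/delta/bronze/events_stream")
)
```
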
🔑 Key Requirements:
  • 10+ years of experience in data engineering, with a strong track record of designing and deploying production‑grade data pipelines and ETL/ELT workflows
  • Databricks Certified Professional Data Engineer (or equivalent certification)
  • Proficiency in Python, PySpark, and SQL
  • Extensive hands‑on experience with AWS services such as Glue, Lambda, Redshift, and S3
  • Solid background in data governance and management tools, including platforms like Unity Catalog and AWS SageMaker Unified Studio
  • Proven experience in data migration initiatives and modernizing legacy systems
  • Communicative English language skills (the project runs in an international team)
➕ Nice to have:
  • Excellent communication and teamwork skills
  • Strong analytical thinking and problem‑solving ability
  • Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)
📜 Our Promise (what you can expect from us):
  • 27,000-34,000 PLN on a B2B contract
  • 100% remote work (but we have offices in Krakow and Warsaw and we’re happy to meet there from time to time 😉)
  • 300 PLN to use on our benefits platform, Worksmile - gift cards, medical services, sports, etc.
  • Our B2B contract contains provisions that allow you to obtain IP BOX support
  • Integration events, education opportunities and much more…
  • A unique opportunity to take your career to the next level - we’re looking for people who want to create an impact. You have ideas, we want to hear them!
📌 Recruitment process:
  • 30-minute online screening call with our recruiter
  • 45-minute to 1-hour technical call with one of our engineers
  • 1-hour call with the team leader
  • Offer

Questions, insights? Feel free to reach out to our recruiting team:

ewa.szczodrak@thecodest.co

In the meantime, feel free to visit our website where you can find key facts about us.
