Senior Data Engineer

Experis ManpowerGroup Sp. z o.o.

Remote

PLN 120,000–180,000

Full time

Job summary

A technology services company is seeking a Senior Data Engineer to design and maintain advanced data pipelines. This fully remote position requires strong experience with AWS services and proficiency in Python or Java. The ideal candidate has 5–8 years of relevant experience and strong communication skills. The role involves collaboration with analytics and ML teams, with an emphasis on data transformation and data quality.

Benefits

Medicover healthcare package
Multisport card
Access to an e-learning platform
Group life insurance

Qualifications

  • 5–8 years of relevant experience.
  • Strong communication skills and ability to document technical processes.
  • Knowledge of DSCS and DPTM is a plus.

Responsibilities

  • Design, develop, and maintain ETL/ELT pipelines for data lakes and data warehouses.
  • Define data transformation rules and build data models.
  • Implement data quality checks and maintain data catalogues.

Skills

AWS services (S3, IAM, Redshift, SageMaker, Glue)
Python
SQL (preferably Redshift)
Java
Spark / PySpark
Docker
Git
Jenkins
CloudFormation
Terraform

Tools

Databricks
Dataiku

Job description

Start Date: ASAP / within 1 month / flexible
Work Model: Fully remote, B2B contract via Experis (150–170 PLN/h + VAT)

We are looking for an experienced Senior Data Engineer to join our team and take ownership of designing, developing, and maintaining advanced data pipelines. You will collaborate closely with Product Analysts, Data Scientists, and Machine Learning Engineers to deliver clean, structured, and business-ready data.

Key Responsibilities:

  • Design, develop, and maintain ETL/ELT pipelines to extract data from various sources and load it into data lakes and data warehouses.
  • Define data transformation rules and build data models.
  • Collaborate with analytics and ML teams to identify and transform data for better usability.
  • Implement data quality checks, maintain data catalogues, and use orchestration, logging, and monitoring tools.
  • Apply test-driven development methodology when creating pipelines (see the sketch after this list).
  • Document processes in line with SDLC standards.
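
To make the list above concrete, here is a minimal, hypothetical sketch of the kind of pipeline step this role involves: a small PySpark transformation with a data quality metric, plus a unit test in the TDD style the posting mentions. All names, columns, and values (clean_orders, order_id, amount) are illustrative assumptions, not details taken from the job description.

    # Hypothetical sketch of a testable PySpark transform with a data
    # quality check. Column names and sample data are illustrative only.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F


    def clean_orders(df: DataFrame) -> DataFrame:
        """Drop rows without an order_id and cast amount to double."""
        return (
            df.filter(F.col("order_id").isNotNull())
              .withColumn("amount", F.col("amount").cast("double"))
        )


    def null_rate(df: DataFrame, column: str) -> float:
        """Data quality metric: fraction of rows where `column` is null."""
        total = df.count()
        if total == 0:
            return 0.0
        return df.filter(F.col(column).isNull()).count() / total


    def test_clean_orders_removes_null_ids():
        # TDD-style unit test: written first, then the transform is
        # implemented until it passes. Runs on a local Spark session.
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        raw = spark.createDataFrame(
            [("o1", "10.5"), (None, "3.0"), ("o2", None)],
            "order_id string, amount string",
        )
        cleaned = clean_orders(raw)
        assert cleaned.count() == 2
        assert null_rate(cleaned, "order_id") == 0.0
        spark.stop()

In practice a test like this would run against a local Spark session in CI (for example via Jenkins, which the posting lists), while the production job is orchestrated and monitored with the tooling named above.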

Requirements:

  • 5–8 years of relevant experience.
  • Knowledge of DSCS and DPTM is a plus.
  • Strong experience with AWS services: S3, IAM, Redshift, SageMaker, Glue, Lambda, Step Functions, CloudWatch.
  • Hands-on experience with platforms like Databricks and Dataiku.
  • Proficiency in Python or Java and SQL (preferably Redshift), plus hands-on use of Jenkins, CloudFormation, Terraform, Git, and Docker.
  • 2–3 years of experience with Spark / PySpark.
  • Strong communication skills and ability to document technical processes.

Our Offer:

  • Medicover healthcare package
  • Multisport card
  • Access to an e-learning platform
  • Group life insurance