
Data Engineer

Appsilon

Remote

PLN 120,000 - 180,000

Full time

Today


Job summary

A leading data solutions company is looking for a Data Engineer to design and maintain scalable data pipelines. The ideal candidate should have strong experience in backend Python development, data engineering expertise, and a solid understanding of SQL. The role entails collaborating with Data Scientists and ensuring high data quality across various environments. This position offers fully remote work options and competitive compensation. Candidates should also be familiar with cloud platforms and modern data processing tools.

Job description
Why do we need you?

At Appsilon, we empower global organizations to make smarter decisions with data. Our solutions help Fortune 500 companies discover new drugs, save lives, optimize operations, and unlock millions in value. To do this, we rely on robust, scalable, beautifully engineered data systems.

We're looking for a Data Engineer who can elevate how our clients collect, process, and leverage massive datasets — someone who loves building modern data pipelines and wants their work to power meaningful, real-world impact.

Your responsibilities:
  • Design, build, and maintain scalable data pipelines across diverse environments.
  • Integrate data from multiple internal and external sources into data warehouses or data lakes.
  • Collaborate closely with Data Scientists, ML Engineers, and Developers to ensure data quality, structure, and availability.
  • Monitor and improve data integrity, performance, and reliability.
  • Build and optimize database schemas, data models, and documentation.
  • Implement data governance, security best practices, and compliance standards.
We’re looking for somebody with:
Backend Python Development
  • Strong experience building scalable backend systems in Python.
  • Comfortable with modern language features (type hints, decorators, generators).
  • Able to design clean, maintainable APIs using FastAPI, Django REST Framework, or Flask.
  • Good understanding of performance optimization and Python internals.
  • A collaborative mindset — you enjoy working closely with cross‑functional teams.
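As a rough illustration of the level expected here, a minimal sketch combining type hints, a decorator, and a generator (all function and field names are hypothetical, not part of our codebase):

```python
import time
from collections.abc import Iterator
from functools import wraps
from typing import Callable, TypeVar

T = TypeVar("T")

def timed(func: Callable[..., T]) -> Callable[..., T]:
    """Decorator: record how long each call takes on the wrapper itself."""
    @wraps(func)
    def wrapper(*args, **kwargs) -> T:
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

def read_records(rows: list[dict]) -> Iterator[dict]:
    """Generator: yield cleaned records one at a time, in constant memory."""
    for row in rows:
        if row.get("id") is not None:
            yield {**row, "id": int(row["id"])}

@timed
def load(rows: list[dict]) -> list[dict]:
    return list(read_records(rows))

print(load([{"id": "1"}, {"id": None}, {"id": "2"}]))
# → [{'id': 1}, {'id': 2}]
```

The same patterns carry over directly to FastAPI, Django REST Framework, or Flask services, where typed models and dependency decorators are everyday tools.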
Data Engineering
  • Hands‑on experience designing and operating ETL/ELT pipelines.
  • Solid SQL skills and ability to model, optimize, and maintain database structures.
  • Experience integrating data from multiple sources (databases, APIs, streaming).
  • Familiarity with large‑scale data processing tools or distributed systems.
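To make the ELT expectation concrete, here is a minimal sketch using only the standard library's sqlite3 module (table and column names are illustrative, not from any real project): raw rows are landed first, then transformed with SQL inside the database.

```python
import sqlite3

# Extract/load step: land raw rows as-is into a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, 10.0), (1, 5.5), (2, 7.0)],
)

# Transform step: aggregate into a reporting table with plain SQL.
conn.execute(
    """
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    GROUP BY user_id
    """
)
print(conn.execute("SELECT * FROM user_totals ORDER BY user_id").fetchall())
# → [(1, 15.5), (2, 7.0)]
```

In production the staging table would be fed from databases, APIs, or streams, and the transform would live in an orchestrated pipeline, but the load-then-transform shape is the same.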
Nice to have:
  • Experience with cloud platforms (AWS/Azure/GCP).
  • Knowledge of R.
  • Experience with Docker, Kubernetes, and CI/CD tools (GitHub Actions, GitLab CI).
  • Understanding of data governance, metadata management, and security.
  • Experience in life sciences, biotech, genomics, or enterprise data environments.
  • Prior remote work experience with international teams.

Life science skills:

  • Molecular Biology & Bioinformatics: Leverages molecular biology and bioinformatics to analyze data and communicate biological insights.
  • Clinical Trials - Data Tools & Flow: Builds and analyzes clinical trial data pipelines, ensuring auditability and delivering insights through collaboration and visualization tools.
  • CDISC & Clinical Data Standards: Applies and designs clinical data structures using CDISC standards, ensuring compliance and supporting best practices across teams.
  • Nextflow: Develops scalable, reproducible bioinformatics pipelines with Nextflow across local, HPC, and cloud environments.
What we offer:
  • Fully remote work from anywhere in Europe or LATAM.
  • Competitive B2B compensation with clear salary ranges.
  • Modern equipment (MacBook / ThinkPad + Linux environment).
  • Work on high‑impact, cutting‑edge projects in biotech, pharma, research, and enterprise analytics.
  • Budget for professional development (certifications, courses, conferences).
  • Opportunity to collaborate with industry experts on innovative data products.
  • A supportive, ambitious, and friendly team that cares about excellence.
What you can expect during the process:
  • Intro call with our Talent Team.
  • Technical task.
  • Technical and culture‑fit interview with the Engineering Team.
  • Final decision + offer.

Appsilon is committed to being a diverse and inclusive workplace. We encourage applicants of different backgrounds, cultures, genders, experiences, abilities, and perspectives to apply. All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, sexual orientation, gender, gender identity, age, physical disability, or length of time spent unemployed.
