A leading technology company is seeking an experienced Azure Databricks Engineer for remote work. You will be responsible for data migrations, developing ETL processes, and optimizing data platforms. The ideal candidate has over 8 years of experience in data engineering, strong skills in Databricks and Apache Spark, and excellent Python capabilities. The role offers competitive benefits, including private medical care and a Multisport card.
Work format: full-time, 100% remote
Start: ASAP
Hi!
We are looking for Azure Databricks Engineers for our US-based client. The work involves areas such as migration, data collection, and optimization of solutions based on Databricks. The client has a continuous demand for specialists: the projects are mostly short-term (with a high probability of extension), and thanks to this constant demand the client can usually offer a new project right after the previous one ends.
Currently, specialists are needed for 3 projects:
two projects focused on migrations to Databricks (a marketing platform and a fundraising platform)
one project focused on building a medical data analytics platform and embedding it into the Databricks environment
For the client, it is crucial to have strong experience with Azure and/or AWS, as well as solid knowledge of Databricks and Apache Spark. The client mainly works with US-based companies; in most cases, only a small time-zone overlap is required (e.g., 10:00–18:00 CET), though we are open to candidates preferring different working hours.
General responsibilities:
Planning tasks and selecting appropriate tools
Integrating databases in near real-time
Designing and developing ETL processes (a minimal sketch follows this list)
Conducting migrations of databases, platforms, and ML models
Optimizing and automating platforms
Cooperating closely with data engineers, data scientists, and architects
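For a rough sense of what this kind of Databricks/Spark ETL work looks like in practice, here is a minimal, hypothetical PySpark sketch; the paths, table name, and columns are illustrative placeholders rather than details of any of the client's projects:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already available as `spark`;
# getOrCreate() keeps this sketch runnable in other environments too.
spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw JSON events from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/events/")

# Transform: drop malformed rows and build a simple daily aggregate.
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("events"))
)

# Load: persist the result as a Delta table for downstream consumers.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_events")
```

Writing the cleaned result as a Delta table is the usual Databricks pattern for handing data on to analytics and ML workloads.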
Requirements:
Solid experience working as a data engineer or in a related role (8+ years)
Strong knowledge (min. 2-3 years of experience) of the Databricks platform and Apache Spark (migrations, ETL processes, integrations)
Strong Python skills
Experience with data migrations
Experience working in Microsoft Azure (e.g., Data Factory, Synapse, Logic Apps, Data Lake) and/or AWS (e.g., Redshift, Athena, Glue); a short configuration example follows the nice-to-have list
Strong interpersonal and teamwork skills
Initiative and the ability to work independently
English at a level enabling fluent team communication
Nice to have:
Experience in designing and optimizing data workflows using DBT, SSIS, TimeXtender, or similar (ETL/ELT)
Experience with any big data or NoSQL platforms (Redshift, Hadoop, EMR, Google Data, etc.)
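As an illustration of the Azure side of the stack mentioned in the requirements, the sketch below shows one common way to point Spark at Azure Data Lake Storage Gen2 from a Databricks notebook; the storage account, container, secret scope, and path are hypothetical placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical storage account and container names.
account = "examplestorageacct"
container = "raw"

# Read the account key from a Databricks secret scope.
# `dbutils` is only available inside Databricks notebooks and jobs.
key = dbutils.secrets.get(scope="example-scope", key="storage-account-key")
spark.conf.set(f"fs.azure.account.key.{account}.dfs.core.windows.net", key)

# Read Parquet files straight from ADLS Gen2 over the abfss:// protocol.
df = spark.read.parquet(f"abfss://{container}@{account}.dfs.core.windows.net/sales/")
df.show(5)
```

In production, service-principal or managed-identity authentication is generally preferred over account keys; the key-based variant is simply the shortest to show.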
How we work and what we offer:
We value open communication throughout the recruitment process and after hiring — clarity about the process and employment terms is important to us
We keep recruitment simple and human — our processes are as straightforward and candidate-friendly as possible
We follow a "remote first" approach, so remote work is our standard, and business travel is kept to a minimum
We offer private medical care (Medicover) and a Multisport card for contractors