Senior Data / ML Engineer

B2B.NET S.A.

Gdynia

On-site

PLN 120,000 - 160,000

Full time

9 days ago

Job summary

A leading tech firm in Gdynia is seeking an experienced Senior Data/ML Engineer to develop and optimize machine learning solutions within an AWS environment. Responsibilities include managing the ML lifecycle, implementing MLOps practices, and building data pipelines. Candidates should have 5+ years of experience with Python and Apache Spark, along with strong AWS knowledge. The role emphasizes close collaboration with Data Scientists and IT teams to deliver production-grade solutions.

Qualifications

  • 5+ years of hands-on experience with Python and Apache Spark.
  • Strong experience with AWS services like S3, Glue, and SageMaker.
  • Experience with ML model training and deployment using TensorFlow or PyTorch.

Responsibilities

  • Own the full lifecycle of ML models from development to monitoring.
  • Implement MLOps principles for ML workloads.
  • Collaborate with Data Scientists and IT teams on ML solutions.

Skills

  • Python
  • Apache Spark
  • AWS services
  • SQL
  • Git

Job description

We are looking for an experienced Senior Data/ML Engineer to drive the development, deployment, and optimization of large-scale Machine Learning and Big Data solutions. You will work end-to-end across the ML lifecycle, build distributed data pipelines, and shape our MLOps best practices within a modern AWS environment.

Responsibilities:
  • Own the full lifecycle of ML models – from development and deployment to monitoring and continuous improvement.
  • Implement MLOps principles, including CI/CD, automation, testing, and observability for ML workloads.
  • Build and maintain data ingestion, processing, and transformation pipelines (batch & streaming) using Python and Apache Spark (an illustrative PySpark sketch follows this list).
  • Design and optimize distributed, highly parallel Big Data pipelines processing massive datasets in near real-time.
  • Use Spark to enrich and prepare corporate data for search, analytics, and advanced ML use cases.
  • Collaborate closely with Data Scientists, DevOps Engineers, and IT teams to deliver production‑grade ML solutions.
  • Work with analysts and business stakeholders to develop and refine analytical models.
  • Enhance and extend the organization’s MLOps frameworks and libraries, ensuring scalability across multiple ML use cases.
  • Explore and evaluate cloud‑native AI/ML solutions on AWS.
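
As an illustration of the pipeline work listed above, the sketch below shows a minimal PySpark batch job that reads raw JSON from S3, cleans and enriches it, and writes partitioned Parquet back out. It is not part of the posting: the bucket paths, column names, and deduplication key are assumptions.

  # Minimal PySpark batch pipeline sketch. All paths, columns, and the
  # deduplication key are hypothetical placeholders.
  from pyspark.sql import SparkSession, functions as F

  spark = (
      SparkSession.builder
      .appName("corporate-data-enrichment")
      .getOrCreate()
  )

  # Ingest: raw events landed by an upstream process (assumed location).
  raw = spark.read.json("s3://example-raw-bucket/events/")

  # Transform: deduplicate, parse timestamps, derive a partition column.
  enriched = (
      raw
      .dropDuplicates(["event_id"])
      .withColumn("event_ts", F.to_timestamp("event_ts"))
      .withColumn("event_date", F.to_date("event_ts"))
      .filter(F.col("event_ts").isNotNull())
  )

  # Load: write partitioned Parquet for analytics and ML feature pipelines.
  (
      enriched.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-curated-bucket/events/")
  )

  spark.stop()
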
Requirements:
  • 5+ years of hands‑on experience with Python and Apache Spark.
  • Strong experience with AWS services, especially S3, Glue, SageMaker, Lambda, Step Functions / Airflow / MWAA.
  • Practical knowledge of AWS automation using AWS CLI, boto3, IAM roles (a boto3 sketch follows this list).
  • Solid understanding of algorithms, data structures, statistics, and linear algebra.
  • Experience with training and deploying ML models using TensorFlow or PyTorch.
  • Understanding of distributed systems and Big Data technologies (Hadoop, Hive, or equivalents).
  • Proficiency in SQL (Spark SQL / Hive SQL) and experience building production‑grade data pipelines.
  • Strong Git skills (Bitbucket, branching workflows, code review).
  • Experience working in Agile/SAFe environments.
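
For context on the AWS automation requirement above, the snippet below shows one common pattern: using boto3 to start an AWS Glue job run and poll it until it reaches a terminal state. It is a sketch, not a prescribed approach; the job name and region are placeholders, and credentials are assumed to come from the environment or an attached IAM role.

  # Minimal boto3 automation sketch: start a Glue job run and wait for it
  # to finish. The job name and region are hypothetical placeholders;
  # credentials are resolved from the environment or an attached IAM role.
  import time

  import boto3

  glue = boto3.client("glue", region_name="eu-central-1")

  run = glue.start_job_run(JobName="example-enrichment-job")
  run_id = run["JobRunId"]

  # Poll until the run leaves the STARTING/RUNNING states.
  while True:
      status = glue.get_job_run(JobName="example-enrichment-job", RunId=run_id)
      state = status["JobRun"]["JobRunState"]
      if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
          break
      time.sleep(30)

  print(f"Glue job run {run_id} finished with state {state}")
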