SENIOR DATA OPERATIONS ENGINEER

TN Germany

Braunschweig

On-site

EUR 55,000 - 85,000

Full-time

Posted 5 days ago

Summary

A leading company in the fashion industry seeks a Senior Data Operations Engineer to enhance business processes using statistical modeling and machine learning. The role involves collaboration with data teams to ensure data infrastructure reliability while offering an attractive salary, permanent contract, and career growth opportunities.

Benefits

30 days of vacation
Attractive salary
30% staff discount
Canteens with diverse food selection
Team events

Qualifications

  • Five or more years as a DevOps engineer or similar position.
  • Extensive experience with Hadoop clusters and data storage like HDFS.
  • Capability to write efficient, well-tested code.

Responsibilities

  • Maintain and improve data science pipelines on Cloudera cluster.
  • Monitor and manage Impala databases.
  • Mentor team members and conduct insightful code reviews.

Skills

Software Development
Communication
Distributed Systems
Data Modeling
Python
Bash

Education

Master's degree in computer science

Tools

Hadoop
Kubernetes

Job Description

Client: New Yorker
Location: Braunschweig
Job Category: Other
EU work permit required: Yes
Job Reference: f0db14a570c5
Posted: 26.05.2025
Expiry Date: 10.07.2025

SENIOR DATA OPERATIONS ENGINEER*

THIS IS US

The Data Science department uses statistical modeling, machine learning and optimization techniques to continuously improve business processes at NEW YORKER. We work in agile project teams on a variety of use cases, including order optimization, pricing, distribution and logistics. We are a very ambitious and international team and are proud of our high standards.

THAT'S THE JOB

  • Work closely with the data engineering and data platform teams to maintain, operate and improve the infrastructure and pipelines that run critical data science projects on our on-premise Cloudera cluster
  • Monitor and operate (including on-call duty) key data science pipelines (that rely on Spark)
  • Monitor and analyze runtime behavior, including runtimes, resource usage and distributed communication error rates across all data science pipelines
  • Maintain operating systems on our Cloudera cluster
  • Monitor and maintain our cluster and promptly react to errors and outages
  • Install and maintain the software and development environments required by the data science team on our Cloudera cluster
  • Support the network and infrastructure team in optimizing network boundaries between various systems used by data science
  • Liaise with our suppliers and network and datacenter services team to ensure hardware issues are fixed promptly
  • Initiate, align, implement and enforce a logging and monitoring strategy to make data science project runtime characteristics transparent
  • Define, align and enforce proper HDFS usage guidelines, including maintaining HDFS via appropriate retention periods and monitoring average HDFS block size count and over-partitioning in pipelines (a minimal illustrative sketch follows this list)
  • Monitor and manage Impala databases and tables
  • Mentor team members, accept mentoring from team members, continuously align your work with your users, exchange regular feedback with your peers through insightful code reviews and grow together as a team
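
For illustration only, and not part of the posting itself: one possible way to watch the average HDFS block size mentioned in the guidelines bullet above is to parse the summary printed by hdfs fsck. This is a minimal sketch; the path and the 128 MB reference in the comments are assumptions, not requirements of the role.

  # Illustrative sketch (assumption): estimate the average HDFS block size under
  # a path by parsing the summary output of "hdfs fsck". The path used below is
  # a hypothetical placeholder.
  import re
  import subprocess

  def average_block_size_bytes(path: str) -> float:
      # "hdfs fsck <path>" prints a summary that includes the total size in bytes
      # and the number of validated blocks.
      summary = subprocess.run(
          ["hdfs", "fsck", path],
          capture_output=True, text=True, check=True,
      ).stdout
      total_bytes = int(re.search(r"Total size:\s+(\d+)", summary).group(1))
      total_blocks = int(re.search(r"Total blocks \(validated\):\s+(\d+)", summary).group(1))
      return total_bytes / total_blocks if total_blocks else 0.0

  if __name__ == "__main__":
      # A low average (well below the common 128 MB default block size) can hint
      # at over-partitioning or a small-files problem in a pipeline's output.
      avg = average_block_size_bytes("/data/science")  # hypothetical path
      print(f"Average block size: {avg / 2**20:.1f} MiB")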

THAT'S WHAT WE NEED

  • A Master's degree in computer science or equivalent
  • The willingness and ability to work in a team, with strong communication skills and a hands-on mentality
  • The ability to think and work in a well-organized, structured, independent and self-driven manner while aligning proactively with your team
  • A strong software development background and strong programming skills
  • Five or more years of working experience as a DevOps engineer or in a similar position
  • Extensive experience working with Hadoop clusters and with data storage (e.g. HDFS)
  • Extensive knowledge in information systems theory (e.g. relational databases, distributed systems) and working with data (e.g. modeling)
  • The ability to write efficient, well-tested code with a keen eye on scalability and maintainability in Python and bash
  • Experience with software development practices and tools (e.g. Kubernetes, pull request reviews, CI/CD, agile as in the Agile Manifesto, not Scrum)
  • Fluent spoken English is a must-have; good spoken and written German is a plus

THAT SPEAKS FOR US

  • A secure, permanent job in an internationally operating company in a future-proof industry that is characterized by growth
  • Participation in shaping the digital transformation in an innovative company
  • Permanent employment contract with 30 days of vacation, an attractive salary and further training opportunities/career model
  • A structured onboarding and mentoring program in a great team with international colleagues, a “Duz-Kultur” (everyone on a first-name basis) and open doors
  • Numerous benefits, such as a 30% staff discount at our NEW YORKER stores, company and team events, and canteens with a diverse food selection