Data Scientist

Ruaa Data Management Services

Dubai

On-site

AED 120,000 - 180,000

Full time

30+ days ago

Job summary

An innovative firm is seeking a skilled data scientist to design and develop cutting-edge data collection frameworks and scalable data pipelines. You will leverage advanced technologies such as Spark and Splunk to enhance data quality and performance insights for global clients. This role offers the opportunity to mentor junior engineers while working closely with infrastructure teams to optimize data storage solutions. If you are passionate about big data technologies and automation, this is an opportunity to make a significant impact in the field of data management.

Qualifications

  • 7+ years of experience in data science with a strong focus on data processing frameworks.
  • Bachelor's or Master's degree in computer science or IT is required.

Responsibilities

  • Design and develop data collection frameworks and scalable data pipelines.
  • Build CI/CD pipelines and monitor data quality for improved performance.

Skills

Python
Scala
Java
Data Processing
API Development
Coaching

Education

Bachelor's degree in Computer Science
Master's degree in IT

Tools

Spark
Splunk
Docker
Kubernetes
Kafka
Oracle
Azure

Job description

Bachelor's in Computer Applications (Computers)

Nationality: Any Nationality

Vacancy: 1 Vacancy

Your challenges as a data scientist

  • You will be designing, developing, testing, and documenting the data collection framework.
  • The data collection consists of (complex) data pipelines from (IoT) sensors and low-/high-level control components to our Data Science platform.
  • You will build a monitoring solution for the data pipeline that enables data quality improvement.
  • You will develop scalable data pipelines to transform and aggregate data for business use, following software engineering best practices.
  • For these data pipelines, you will make use of the best available data processing frameworks, such as Spark and Splunk.
  • You will develop our data services for customer sites into a product, using (test and deployment) automation, componentization, templates, and standardization to reduce project delivery times for our customers.
  • The product provides insights into the performance of our material handling systems at customers all around the globe.
  • You will design and build a CI/CD pipeline, including (integration) test automation for data pipelines.
  • In this process, you will strive for an ever-increasing degree of automation.
  • You will work with an infrastructure engineer to extend storage capabilities and types of data collection (e.g. streaming).
  • You have experience in developing APIs.
  • You will coach and train junior data engineers in state-of-the-art big data technologies.

What do we expect from you?

  • Bachelor's or Master's degree in computer science, IT, or equivalent with at least 7 years of relevant work experience.
  • Programming in Python/Scala/Java.
  • Experience with scalable data processing frameworks (e.g., Spark).
  • Familiarity with event processing tools like Splunk or the ELK stack.
  • Experience in deploying services as containers (e.g., Docker and Kubernetes).
  • Knowledge of streaming and/or batch storage (e.g., Kafka, Oracle).
  • Experience working with cloud services (preferably with Azure).