Data Engineer (Real Time) (Remote)

Remotestar

Cambourne

Hybrid

GBP 40,000 - 65,000

Full time

30+ days ago


Job summary

A leading company in the iGaming industry seeks a Data Engineer to contribute to real-time data processing applications. The role involves collaboration with technical teams and the development of innovative features within a hybrid working model. Ideal candidates have strong Scala skills and experience with distributed computing frameworks, including Spark and Kafka.

Qualifications

  • Strong knowledge of Scala and relevant frameworks.
  • Experience with containerization tools like Docker.
  • Familiarity with data analytics databases.

Responsibilities

  • Develop real-time data processing applications using Spark and Kafka.
  • Collaborate with Data DevOps and manage changes effectively.
  • Troubleshoot incidents and document processes.

Skills

Scala
Distributed computing frameworks
Kafka
Problem-solving
Agile methodology
Docker
Kubernetes
Data monitoring tools
Git version control

Tools

Spark
AWS
Prometheus
Grafana
Elasticsearch
Hadoop

Job description

DATA ENGINEER (Real Time)

About the client:

At RemoteStar, we are hiring on behalf of a client: a world-class iGaming operator offering a range of online gaming products across multiple markets through proprietary gaming sites and partner brands.

Their iGaming platform supports over 25 online brands and is used by hundreds of thousands of users worldwide. The company embraces a Hybrid work-from-home model, with the flexibility of working three days in the office and two days from home.

About the Data Engineer role:

You will contribute to designing and developing Real-Time Data Processing applications to meet business needs. This environment offers an excellent opportunity for technical data professionals to build a consolidated Data Platform with innovative features while working with a talented and fun team.

Responsibilities include:

  • Developing and maintaining Real-Time Data Processing applications using frameworks such as Spark Streaming, Spark Structured Streaming, Kafka Streams, and Kafka Connect.
  • Manipulating streaming data, including ingestion, transformation, and aggregation.
  • Researching and developing new technologies and techniques to enhance applications.
  • Collaborating with Data DevOps, Data Streams teams, and other disciplines.
  • Working in an Agile environment following SDLC processes.
  • Managing change and release processes.
  • Troubleshooting and incident management with an investigative mindset.
  • Owning projects and tasks, and working effectively within a team.
  • Documenting processes and sharing knowledge with the team.
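The ingestion, transformation, and aggregation stages listed above can be sketched in plain Scala. This is a conceptual illustration only: in the role itself these stages would run on Spark Structured Streaming or Kafka Streams, and every name below (`Bet`, `totalsByPlayer`) is illustrative rather than taken from the client's codebase.

```scala
// Conceptual sketch of a streaming pipeline's three stages
// (ingestion -> transformation -> aggregation) in plain Scala.
// All names are illustrative; no Spark/Kafka dependency is used.

case class Bet(player: String, market: String, stake: Double)

object StreamSketch {
  // Aggregation step: total stake per player, analogous to a
  // keyed aggregation over a Kafka topic or a Spark stream.
  def totalsByPlayer(events: List[Bet]): Map[String, Double] =
    events
      .filter(_.stake > 0)                      // transformation: drop invalid records
      .groupBy(_.player)                        // key by player
      .view.mapValues(_.map(_.stake).sum).toMap // sum stakes per key

  def main(args: Array[String]): Unit = {
    // Ingestion: a fixed batch standing in for an unbounded stream.
    val events = List(
      Bet("alice", "football", 10.0),
      Bet("bob",   "tennis",    5.0),
      Bet("alice", "football", 20.0)
    )
    println(totalsByPlayer(events)) // prints totals per player
  }
}
```

In a real Spark Structured Streaming job the same shape appears as `groupBy("player").agg(sum("stake"))` over a streaming DataFrame, with the engine maintaining the running totals across micro-batches.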

Preferred skills:

  • Strong knowledge of Scala.
  • Familiarity with distributed computing frameworks such as Spark, Kafka Streams (KStreams), and Kafka.
  • Experience with Kafka and streaming frameworks.
  • Understanding of monolithic vs. microservice architectures.
  • Familiarity with Apache ecosystem including Hadoop modules (HDFS, YARN, HBase, Hive, Spark) and Apache NiFi.
  • Experience with containerization and orchestration tools like Docker and Kubernetes.
  • Knowledge of time-series or analytics databases such as Elasticsearch.
  • Experience with AWS services like S3, EC2, EMR, Redshift.
  • Familiarity with data monitoring and visualization tools such as Prometheus and Grafana.
  • Experience with version control tools like Git.
  • Understanding of Data Warehouse and ETL concepts; familiarity with Snowflake is a plus.
  • Strong analytical and problem-solving skills.
  • Good learning mindset and ability to prioritize tasks effectively.