
Senior Data Engineer

BHFT

United Arab Emirates

Remote

AED 120,000 - 200,000

Full time

Today


Job summary

A modern international technology company is seeking an expert to architect batch and stream pipelines for structured and unstructured market data. The ideal candidate has 7 years of experience building production-grade data systems and expert-level Python. This is a full-time, remote position offering excellent opportunities for growth and employee benefits including health insurance and professional training.

Benefits

Health insurance
Sports activities
Professional training

Qualifications

  • 7 years building production-grade data systems.
  • Familiarity with market data formats and providers.
  • Expert-level Python; Go and C are nice to have.
  • Hands-on with Airflow and Kafka.
  • Strong SQL proficiency including aggregations and optimization.
  • Experience designing high-throughput APIs.
  • Strong Linux fundamentals and cloud storage knowledge.
  • Proven track record of mentoring and engineering excellence.

Responsibilities

  • Architect batch/stream pipelines for market data.
  • Implement S3 data storage for analytics.
  • Develop internal libraries for data management.
  • Embed monitoring, testing, and incident management.
  • Collaborate with Data Science and DevOps.

Skills

Production-grade data systems
Market data formats
Expert-level Python
Airflow orchestration
Strong SQL proficiency
High-throughput API design
Linux fundamentals
Mentoring and code reviews

Tools

Kafka
Docker
AWS S3

Job description

Key Responsibilities
  • Ingestion & Pipelines: Architect batch/stream pipelines (Airflow, Kafka, dbt) for diverse structured and unstructured market data; provide reusable SDKs in Python and Go for internal data producers (a DAG sketch follows this list).
  • Storage & Modeling: Implement and tune S3 column-oriented and time-series data storage for petabyte-scale analytics; own partitioning, compression, TTL, versioning, and cost optimisation (see the partitioning sketch after this list).
  • Tooling & Libraries: Develop internal libraries for schema management, data contracts, validation, and lineage; contribute to shared libraries and services used by internal data consumers for research, backtesting, and real-time trading.
  • Reliability & Observability: Embed monitoring, alerting, SLAs/SLOs, and CI/CD; champion automated testing, data-quality dashboards, and incident runbooks.
  • Collaboration: Partner with Data Science, Quant Research, Backend, and DevOps to translate requirements into platform capabilities and evangelise best practices.
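
For illustration, here is a minimal sketch of the kind of batch-ingestion DAG this role would own. It uses the stock Airflow API (2.4+); the DAG id, task callables, and storage layout are hypothetical placeholders, not BHFT code.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_market_data(ds, **context):
    # Hypothetical: pull one trading day of raw quotes from a vendor API
    # and stage them locally; `ds` is Airflow's logical date string.
    ...


def load_to_s3(ds, **context):
    # Hypothetical: publish the staged files to partitioned object storage,
    # e.g. s3://market-data/quotes/date=<ds>/.
    ...


with DAG(
    dag_id="market_data_daily",      # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # one run per trading day
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_market_data)
    load = PythonOperator(task_id="load", python_callable=load_to_s3)
    extract >> load                  # load runs only after extraction succeeds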
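
Similarly, the partitioning work mentioned under Storage & Modeling can be sketched with pyarrow; the dataset path, columns, and compression choice below are assumptions for illustration only.

import pyarrow as pa
import pyarrow.parquet as pq

# A toy one-day slice of tick data; real input would come from the pipeline above.
table = pa.table({
    "date": ["2024-01-02", "2024-01-02", "2024-01-02"],
    "symbol": ["EURUSD", "EURUSD", "USDJPY"],
    "price": [1.0945, 1.0947, 141.23],
})

pq.write_to_dataset(
    table,
    root_path="market-data/quotes",     # in production, an s3:// URI
    partition_cols=["date", "symbol"],  # yields date=.../symbol=.../ directories
    compression="zstd",                 # one of the size/cost levers the role owns
)
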
Qualifications
  • 7 years building production-grade data systems.
  • Familiarity with market data formats (e.g. MDP, ITCH, FIX, proprietary exchange APIs) and market data providers.
  • Expert-level Python (Go and C nice to have).
  • Hands-on with modern orchestration (Airflow) and event streams (Kafka).
  • Strong SQL proficiency: aggregations, joins, subqueries, window functions (first/last, candle, histogram), indexes, query planning, and optimization (see the candle query after this list).
  • Designing high-throughput APIs (REST/gRPC) and data access libraries.
  • Strong Linux fundamentals, containers (Docker), and cloud object storage (AWS S3 / GCS).
  • Proven track record of mentoring, code reviews, and driving engineering excellence.
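
As a concrete example of the window-function work above, the query below folds a tick table into one-minute OHLC candles. The table and column names (ticks, ts, price) are assumptions, and the syntax follows standard SQL window functions as supported by e.g. PostgreSQL.

# Sketch: 1-minute OHLC candles via window functions over a hypothetical tick table.
CANDLE_SQL = """
SELECT DISTINCT
    date_trunc('minute', ts)   AS minute,
    first_value(price) OVER w  AS open,
    max(price)         OVER w  AS high,
    min(price)         OVER w  AS low,
    last_value(price)  OVER w  AS close
FROM ticks
WINDOW w AS (
    PARTITION BY date_trunc('minute', ts)
    ORDER BY ts
    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
)
ORDER BY minute;
"""
# e.g. rows = connection.execute(CANDLE_SQL) with any DB-API connection.
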
Additional Information
  • Working in a modern international technology company without bureaucracy, legacy systems, or technical debt.
  • Excellent opportunities for professional growth and self-realization.
  • We work remotely from anywhere in the world with a flexible schedule.
  • We offer compensation for health insurance, sports activities, and professional training.

Remote Work: Yes

Employment Type: Full-time
