Job Search and Career Advice Platform


AWS Lakehouse Platform Engineer

DeepLight AI

United Arab Emirates

On-site

AED 367,000 - 515,000

Full time

Today


Job summary

A specialized AI consultancy is looking for a Lakehouse Platform Engineer to serve as the lead architect of its data systems. The role involves ensuring the scalability and reliability of the Lakehouse architecture, managing AWS Glue jobs, and optimizing for high-performance analytics. The ideal candidate will have extensive experience in data platform engineering, ideally in financial services, and strong problem-solving abilities. The position offers a competitive salary and opportunities for career growth within a rapidly growing AI company.

Benefits

Competitive salary and performance bonuses
Comprehensive health insurance
Professional development and certification support
Flexible working arrangements
Career advancement opportunities

Qualifications

  • 8+ years of experience in data platform engineering or related roles.
  • Experience managing Lakehouse platforms on AWS.
  • Strong problem-solving and troubleshooting skills.

Responsibilities

  • Manage and optimize AWS Glue jobs and S3 configurations.
  • Design and implement Disaster Recovery scenarios.
  • Deploy self-service tooling for Data Factory squads.

Skills

Data platform engineering
AWS services (S3, Glue, CloudTrail, Athena)
Disaster recovery planning
Infrastructure automation using Terraform
Collaboration and communication

Tools

Apache Iceberg
OpenMetadata
Soda Core
Kafka (MSK)
OpenSearch
Jira

Job description

About DeepLight AI

DeepLight AI is a specialist AI and data consultancy with extensive experience implementing intelligent enterprise systems across multiple industries, with particular depth in financial services and banking. Our team combines deep expertise in data science, statistical modeling, AI/ML technologies, workflow automation, and systems integration with a practical understanding of complex business operations.

We are seeking a Lakehouse Platform Engineer to serve as the lead architect and custodian of our enterprise data backbone. In this pivotal consultancy role, you will be responsible for the health, scalability, and evolution of our entire technology stack, ensuring that our Lakehouse architecture is not only reliable but optimized for high-performance AI and analytics. You will own the lifecycle of AWS Glue jobs, manage the intricacies of Apache Iceberg table registries, and operationalize industry-leading tools like OpenMetadata for governance and Soda Core for data quality. From designing robust Disaster Recovery (DR) scenarios to automating infrastructure via Terraform, your work will provide the foundation upon which our Data Factory squads build the future of intelligent enterprise systems.
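To make the Iceberg side of this role concrete: routine table maintenance on an AWS Lakehouse is often driven through Athena's Iceberg DDL (OPTIMIZE and VACUUM). The sketch below builds such statements as plain strings; the database and table names are illustrative only, not taken from this posting, and a real deployment would submit them from a scheduled Glue job or Lambda rather than print them.

```python
# Hypothetical sketch: building Athena (engine v3) maintenance statements for
# Iceberg tables. Database and table names here are illustrative assumptions.

def optimize_statement(database: str, table: str) -> str:
    """Compact small data files by rewriting them with bin-packing."""
    return f"OPTIMIZE {database}.{table} REWRITE DATA USING BIN_PACK"

def vacuum_statement(database: str, table: str) -> str:
    """Expire old snapshots and remove orphan files per table properties."""
    return f"VACUUM {database}.{table}"

def maintenance_plan(database: str, tables: list[str]) -> list[str]:
    """One OPTIMIZE followed by one VACUUM per table."""
    plan: list[str] = []
    for table in tables:
        plan.append(optimize_statement(database, table))
        plan.append(vacuum_statement(database, table))
    return plan

if __name__ == "__main__":
    # In practice these statements would be submitted via the Athena API
    # (boto3 start_query_execution) on a schedule, not printed.
    for stmt in maintenance_plan("lakehouse", ["trades", "positions"]):
        print(stmt)
```

Snapshot retention itself is controlled by Iceberg table properties, so VACUUM only removes what the table's retention settings already allow.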

As a consultant within our specialist firm, your technical prowess is matched by your ability to drive adoption. You will act as a bridge between platform engineering and business-critical operations, “selling” the value of self‑service tooling and automated lineage to senior stakeholders.


Your Responsibilities
  • Platform Management & Optimization
    • Storage: S3 bucket configurations, lifecycle policies, storage optimization.
    • Compute: AWS Glue job management, optimization, DPU allocation.
    • Catalog: AWS Glue Data Catalog and Iceberg table registry.
    • Semantic Layer: Deploy and integrate with our semantic layer.
    • Governance: Deploy, configure, and upgrade OpenMetadata.
    • Quality: Maintain Soda Core infrastructure and integration.
    • Apply Iceberg best practices for table optimization and maintenance.
    • Implement automated table maintenance processes.
  • Self‑Service & Automation
    • Deploy self‑service tooling for Data Factory squads.
    • Implement automated lineage capture from Glue jobs.
    • Configure audit logging (CloudTrail, S3 access logs).
  • Disaster Recovery & Reliability
    • Design and implement Disaster Recovery (DR) scenarios including failover, backup management, and runbooks.
    • Execute annual DR tests for the entire data landscape.
    • Ensure all critical platform components are monitored, with alerting and active follow‑up.
  • Migration & Decommissioning
    • Execute the decommissioning roadmap.
  • Collaboration
    • Work closely with platform engineers and architects to ensure alignment on optimization and tooling.
    • Partner with operational teams for monitoring and alerting.
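As one concrete illustration of the storage bullet above: S3 lifecycle policies tier aging objects into cheaper storage classes. The sketch below builds such a configuration in the shape boto3's `put_bucket_lifecycle_configuration` expects; the prefix and day thresholds are assumptions, and in a real platform this would live in Terraform, as the skills list suggests, rather than Python.

```python
# Illustrative S3 lifecycle rule builder; prefix names and retention
# thresholds are hypothetical, not taken from this posting.

def lifecycle_configuration(prefix: str,
                            ia_after_days: int = 90,
                            glacier_after_days: int = 365) -> dict:
    """Build a lifecycle rule that tiers objects under `prefix` to
    STANDARD_IA and then GLACIER as they age, and cleans up abandoned
    multipart uploads."""
    return {
        "Rules": [
            {
                "ID": f"tier-{prefix.strip('/').replace('/', '-')}",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": ia_after_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"},
                ],
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    }

if __name__ == "__main__":
    import json
    # The resulting dict is what an s3 client would receive as
    # LifecycleConfiguration in put_bucket_lifecycle_configuration.
    print(json.dumps(lifecycle_configuration("raw/landing/"), indent=2))
```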

Experience & Qualifications
  • 8+ years of experience in data platform engineering or related roles, ideally within financial services or banking.
  • Experience managing Lakehouse platforms on AWS.
  • Proficiency with AWS services: S3, Glue, CloudTrail, Athena.
  • Experience with Apache Iceberg, OpenMetadata, and Soda Core.
  • Knowledge of disaster recovery planning and execution.
  • Infrastructure automation using Terraform and Git.
  • Experience with Kafka (MSK) and OpenSearch.
  • Ability to identify ways to automate repetitive tasks.
  • Strong problem‑solving and troubleshooting skills.
  • Experience working cross‑functionally and managing complex platform operations.
  • Ability to work in a fast‑paced environment and deliver against aggressive migration targets.
  • Collaboration and communication skills.
  • Experience with Jira and agile ways of working.

Benefits & Growth Opportunities
  • Competitive salary and performance bonuses.
  • Comprehensive health insurance.
  • Professional development and certification support.
  • Opportunity to work on cutting‑edge AI projects.
  • Flexible working arrangements.
  • Career advancement opportunities in a rapidly growing AI company.

This position offers a unique opportunity to shape the future of AI implementation while working with a talented team of professionals at the forefront of technological innovation. The successful candidate will play a crucial role in driving our company's success in delivering transformative AI solutions to our clients.

Diversity & Inclusion

At DeepLight AI, we recognise that diversity drives innovation. We are committed to fostering an inclusive environment where individuals with different thinking styles can thrive and contribute their unique strengths to our specialized AI and data solutions. Our goal is to ensure our application and interview process is accessible, predictable, and fair for all candidates. If you require any adjustments to the application process, or reasonable adjustments should you progress to the interview stage, please let us know. This information will be kept strictly confidential and will not impact hiring decisions.
