Data Warehouse Architect

Avm Consulting Inc

Aracaju

On-site

BRL 529,000 - 689,000

Full-time

30+ days ago

Job summary

A leading consulting firm is seeking a hands-on Data Warehouse Architect to design and implement a new data warehouse instance. Responsibilities include architecting scalable data pipelines and integrating AWS-native services. The ideal candidate will have extensive experience with Databricks and data warehousing principles, and will be able to work remotely within Latin America. The role offers the chance to work on innovative projects in the gaming industry.

Qualifications

  • End-to-end setup of Databricks workspaces and Unity Catalog.
  • Strong grasp of modern dimensional modeling.
  • Familiarity with CI / CD for data engineering.

Responsibilities

  • Design and deploy a new Databricks Lakehouse instance.
  • Implement robust data ingestion pipelines using Spark.
  • Collaborate with product teams and data scientists.

Skills

  • Databricks / Lakehouse Architecture
  • Advanced PySpark / SQL skills
  • AWS Native Integration
  • Data Warehousing & Modeling
  • Automation & DevOps
  • Monitoring Tools

Job description

Databricks Data Warehouse Architect

About the role:

One of the largest companies in the world in the gaming industry is seeking a hands-on Data Architect with data warehouse engineering expertise in Databricks (DBX) and AWS-native data services to spearhead the design and implementation of a new data warehouse instance for a major product line. The role involves building from the ground up: architecting scalable pipelines, optimizing lakehouse performance, and integrating seamlessly with diverse real-time and batch data sources across AWS. The ideal candidate is passionate about data architecture, thrives in fast-moving environments, and has a proven track record of setting up high-performance lakehouse platforms on Databricks, with a strong foundation in data warehousing principles.

Key Responsibilities
  • Design and deploy a new Databricks Lakehouse instance tailored to product-level data needs.
  • Architect and implement robust data ingestion pipelines using Spark (PySpark / Scala) and Delta Lake (see the ingestion sketch after this list).
  • Integrate AWS-native services (S3, Glue, Athena, Redshift, Lambda) with Databricks for optimized performance and scalability.
  • Define data models, optimize query performance, and establish warehouse governance best practices.
  • Collaborate cross-functionally with product teams, data scientists, and DevOps to streamline data workflows.
  • Maintain CI/CD for data pipelines (preferably with DBX) using GitOps and Infrastructure-as-Code.
  • Monitor data jobs and resolve performance bottlenecks or failures across environments.
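
To make the ingestion responsibility concrete, here is a minimal PySpark sketch of a batch pipeline that lands raw S3 events in a partitioned Delta table. The bucket path, column names, and table name are hypothetical placeholders for illustration, not details taken from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical locations for illustration only.
    RAW_PATH = "s3://example-game-events/raw/player_events/"
    TARGET_TABLE = "analytics.player_events"

    spark = SparkSession.builder.appName("player-events-ingestion").getOrCreate()

    # Read raw JSON events landed in S3 by an upstream producer.
    raw = spark.read.json(RAW_PATH)

    # Light normalization: parse the event timestamp and derive a date
    # column so the Delta table can be partitioned for pruning.
    events = (
        raw.withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Append into a Delta table; Delta enforces the table schema on
    # write, so malformed batches fail fast instead of corrupting data.
    (
        events.write.format("delta")
        .mode("append")
        .partitionBy("event_date")
        .saveAsTable(TARGET_TABLE)
    )

On Databricks a job like this would typically run on a schedule, and the same write path works for streaming sources via Structured Streaming.
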
Required Skills & Experience
  • Databricks / Lakehouse Architecture: End-to-end setup of Databricks workspaces and Unity Catalog.
  • Delta Lake Expertise: internals, file compaction, and schema enforcement (see the maintenance sketch after this list).
  • Advanced PySpark / SQL skills for ETL and transformations.
  • AWS Native Integration: Deep experience with AWS Glue, S3, Redshift Spectrum, Lambda, and Athena, plus IAM and VPC configuration knowledge.
  • Data Warehousing & Modeling: Strong grasp of modern dimensional modeling (star / snowflake schemas) and lakehouse design patterns for mixed workloads.
  • Automation & DevOps: Familiarity with CI / CD for data engineering using tools like DBX, Terraform, GitHub Actions, or Azure DevOps.
  • Monitoring Tools: Proficiency with tools like CloudWatch, Datadog, or New Relic for data pipelines.
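
As a concrete companion to the Delta Lake item above, here is a short sketch of routine table maintenance (file compaction) and of schema enforcement on write, using the open-source delta-spark Python API. The table and column names are hypothetical.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

    # Hypothetical table name for illustration.
    table = DeltaTable.forName(spark, "analytics.player_events")

    # File compaction: bin-pack many small files into fewer large ones
    # to reduce per-file overhead for downstream scans.
    table.optimize().executeCompaction()

    # Drop data files no longer referenced by the transaction log and
    # older than the retention window (default 7 days).
    table.vacuum()

    # Schema enforcement: Delta rejects appends whose schema does not
    # match the table unless schema evolution is explicitly enabled.
    bad = spark.createDataFrame([(1, "oops")], ["player_id", "surprise_col"])
    try:
        bad.write.format("delta").mode("append").saveAsTable("analytics.player_events")
    except Exception as exc:
        print(f"Rejected by schema enforcement: {exc}")
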
Bonus / Nice to Have
  • Experience supporting gaming or real-time analytics workloads.
  • Familiarity with Airflow, Kafka, or EventBridge (see the orchestration sketch after this list).
  • Exposure to data privacy and compliance practices (GDPR, CCPA).
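
For the orchestration tools above, here is a minimal Airflow sketch that submits a nightly Databricks notebook run. It assumes the apache-airflow-providers-databricks package and a configured databricks_default connection; the DAG id, notebook path, and cluster spec are hypothetical.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.databricks.operators.databricks import (
        DatabricksSubmitRunOperator,
    )

    with DAG(
        dag_id="nightly_player_events",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="0 2 * * *",            # nightly at 02:00
        catchup=False,
    ) as dag:
        ingest = DatabricksSubmitRunOperator(
            task_id="ingest_player_events",
            databricks_conn_id="databricks_default",
            json={
                "run_name": "ingest-player-events",
                "notebook_task": {
                    "notebook_path": "/Repos/data/ingest_player_events"
                },
                # Ephemeral job cluster; values are placeholders.
                "new_cluster": {
                    "spark_version": "14.3.x-scala2.12",
                    "node_type_id": "i3.xlarge",
                    "num_workers": 2,
                },
            },
        )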

Location: Latin America (LATAM) region - Remote; USA - Remote
Length: 1+ year
Client: Gaming giant
