Data Platform Architect (Remote)

Velozient

Remote

BRL 100,000 - 130,000

Full-time

Posted 13 days ago

Job Summary

A tech company in Brazil is seeking an experienced Data Platform Architect to design and implement a secure, scalable data architecture. The ideal candidate will have over 8 years of experience in data engineering and proven success with AWS services. Responsibilities include architecting multi-tenant platforms and ensuring data quality and security. Competitive benefits include 15 days PTO and a dynamic remote work environment.

Benefits

15 days Paid Time Off
3 sick days
Floating holiday

Qualifications

  • 8+ years in data engineering or data architecture.
  • Proven success in building large-scale data pipelines.
  • Expertise in MPP databases and query optimization.

Responsibilities

  • Architect a multi-tenant data platform.
  • Design end-to-end architecture, including data lakes and warehouses.
  • Establish data governance frameworks and security protocols.

Skills

Data engineering
Cloud-native data platforms
AWS services (Glue, Lambda)
SQL
Python
Data modeling

Education

Bachelor's or Master's in Computer Science, Data Engineering, or related field

Tools

AWS Redshift
AWS Glue
Airflow

Job Description

Overview

This is a full-time, remote Data Platform Architect role requiring 8+ years of experience. The architect will design and build secure, scalable API and service integrations that connect SaaS products, data warehouses, and embedded analytics ecosystems, and will define, implement, and evolve the end-to-end data architecture powering platform analytics, reporting, and data automation.

Responsibilities
  • Architect a multi‑tenant data platform ingesting 500+ tenant schemas with automated onboarding/offboarding workflows.
  • Design end‑to‑end architecture including data lakes, warehouses, transformation pipelines, and semantic layers.
  • Define data zone strategy (Raw, Staging, Serving), partitioning, and schema evolution patterns.
  • Establish standards for data contracts, metadata cataloging, and lineage tracking across all tenants.
  • Drive selection and configuration of core technologies (Redshift, Glue, S3, Lake Formation, dbt, Lambda, Airflow).
  • Collaborate with engineering and DevOps to apply infrastructure‑as‑code (IaC) to all data assets.
  • Design metadata‑driven ingestion and transformation pipelines supporting incremental updates, schema drift, and tenant isolation.
  • Ensure pipelines are idempotent, self‑healing, and fully integrated into CI/CD processes.
  • Partner with Data Pipeline Engineers to deliver automated data onboarding, testing, and quality validation.
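The idempotency requirement above can be illustrated with a minimal sketch: an ingestion step that upserts each batch by primary key, so replaying the same batch leaves the target unchanged. The table name, key column, and in-memory "warehouse" here are hypothetical stand-ins; a real implementation would target Redshift or Glue via a staging-table MERGE.

```python
# Illustrative sketch only: an idempotent, metadata-driven ingestion step.
# The in-memory "warehouse" stands in for a real MPP target such as Redshift.
from typing import Any


def upsert_batch(
    warehouse: dict[str, dict[Any, dict]],
    table: str,
    key_column: str,
    records: list[dict],
) -> None:
    """Apply a batch keyed by `key_column`; replaying the same batch is a no-op."""
    target = warehouse.setdefault(table, {})
    for record in records:
        target[record[key_column]] = record  # last write wins per key


warehouse: dict[str, dict[Any, dict]] = {}
batch = [{"id": 1, "status": "active"}, {"id": 2, "status": "closed"}]

upsert_batch(warehouse, "tenant_42.bookings", "id", batch)
upsert_batch(warehouse, "tenant_42.bookings", "id", batch)  # safe replay

print(len(warehouse["tenant_42.bookings"]))  # → 2
```

Because the write is keyed rather than appended, a failed-and-retried pipeline run cannot duplicate rows, which is the property CI/CD-driven replays depend on.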
Performance & Cost Optimization
  • Define workload management, clustering, and distribution strategies in Redshift or equivalent MPP systems.
  • Implement data lifecycle management policies to optimize storage and compute costs.
  • Continuously evaluate query performance, caching, and concurrency scaling options.
Governance & Security
  • Implement data governance frameworks to ensure privacy, PII compliance, and secure tenant isolation.
  • Partner with InfoSec to align encryption (KMS multi‑region keys), object lock, and access control policies with SOC/PCI/GDPR standards.
  • Establish monitoring, alerting, and anomaly detection systems for data quality and operational stability.
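One small building block of the monitoring and anomaly-detection responsibility above is a data-freshness check. The SLA value and timestamps below are hypothetical; a production system would read load timestamps from pipeline metadata and route alerts through a tool such as CloudWatch or Datadog.

```python
# Illustrative sketch of a data-freshness check against a per-dataset SLA.
from datetime import datetime, timedelta, timezone


def is_stale(last_loaded_at: datetime, max_lag: timedelta, now: datetime) -> bool:
    """True if the dataset's last successful load breaches its freshness SLA."""
    return now - last_loaded_at > max_lag


now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
sla = timedelta(hours=6)

fresh = datetime(2024, 1, 2, 9, 0, tzinfo=timezone.utc)   # loaded 3 h ago
stale = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)  # loaded 24 h ago

print(is_stale(fresh, sla, now))  # → False
print(is_stale(stale, sla, now))  # → True
```

Passing `now` explicitly keeps the check deterministic and testable, a pattern that carries over when the same rule is evaluated inside a scheduled Airflow sensor.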
Collaboration & Mentorship
  • Partner closely with Product, Engineering, and Analytics teams to align architecture with business and product goals.
  • Mentor engineers on modern data platform practices (dbt, ELT, IaC, observability).
  • Act as a technical authority and evangelist for data platform best practices within the organization.
Required Experience
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or related field; or equivalent professional experience.
  • 8+ years in data engineering or data architecture, including at least 3 years architecting cloud‑native data platforms.
  • Proven success in building and maintaining large‑scale data pipelines using AWS Glue, Lambda, Step Functions, or Airflow.
  • Expertise in Redshift, Snowflake, or equivalent MPP databases, including workload management and query optimization.
  • Proficiency with S3‑based data lakes, Glue Data Catalog, and Lake Formation.
  • Deep understanding of data modeling (Kimball/ELT) and multi‑tenant system design.
  • Experience defining and enforcing data governance frameworks (PII, access control, lineage).
  • Proficiency with infrastructure‑as‑code (Terraform/CloudFormation) and CI/CD for data assets.
  • Strong SQL and Python skills; dbt experience preferred.
  • Familiarity with embedded BI/analytics tools (QuickSight, GoodData) and semantic layer concepts.
Desired Experience
  • Experience with schema‑per‑tenant architectures and automated tenant provisioning.
  • Familiarity with Auth0/SSO integration for analytics tools (e.g., QuickSight embedding).
  • Experience implementing observability platforms (Datadog, CloudWatch, Monte Carlo, Soda Core).
  • Exposure to data mesh or domain‑oriented design principles.
  • Prior experience in hospitality/property‑management SaaS product ecosystems.
Benefits

15 days Paid Time Off (PTO), 1 floating day, 3 sick days, and designated national holidays. Start ASAP.

About Velozient

Velozient is a privately held nearshore software development company that provides outsourced development resources to North American companies, with a focus on world-class remote talent and dynamic team environments.
