

Data Architect with AWS & Snowflake experience

Ampstek

Remote

BRL 160,000 - 200,000

Full-time

2 days ago

Job Summary

A leading tech company is seeking a remote Data Architect with extensive data engineering experience, specifically with AWS and Snowflake. The ideal candidate will provide technical leadership, optimize data pipelines, and collaborate across teams to ensure data integrity and drive innovative data strategies. Candidates should have over 8 years of experience and a strong programming background in Python. This contract role offers flexibility and the opportunity to work in a dynamic environment.

Qualifications

  • 8+ years of data engineering experience leading teams.
  • Strong programming skills with a focus on Python.
  • Experience with Snowflake for data management.
  • Proficient in Apache Airflow for orchestration.
  • Knowledge of Kafka for data ingestion.
  • Ability to architect scalable data solutions.

Responsibilities

  • Provide technical leadership to data engineering teams.
  • Architect and optimize data pipelines for large datasets.
  • Design workflows using Apache Airflow.
  • Lead implementation of Snowflake and data lakes.
  • Build systems for real-time and batch processing.

Skills

  • Data engineering leadership
  • Python programming
  • Snowflake expertise
  • Apache Airflow
  • Kafka expertise
  • Data solution architecture
  • Apache Iceberg
  • Docker and Kubernetes
  • CI/CD knowledge
  • Agile project management

Job Description

Title: Data Architect with AWS & Snowflake experience

Location: 100% Remote

Job Type: Contract

Responsibilities

  • Technical Leadership: Provide technical direction and mentorship to a team of data engineers, ensuring best practices in coding, architecture, and data operations.
  • End-to-End Ownership: Architect, implement, and optimize end-to-end data pipelines that process and transform large-scale datasets efficiently and reliably.
  • Orchestration and Automation: Design scalable workflows using orchestration tools such as Apache Airflow, ensuring high availability and fault tolerance.
  • Data Warehouse and Lake Optimization: Lead the implementation and optimization of Snowflake and data lake technologies like Apache Iceberg for storage, query performance, and scalability.
  • Real-Time and Batch Processing: Build robust systems leveraging Kafka, SQS, or similar messaging technologies for real-time and batch data processing.
  • Cross-Functional Collaboration: Work closely with Data Science, Product, and Engineering teams to define data requirements and deliver actionable insights.
  • Data Governance and Security: Establish and enforce data governance frameworks, ensuring compliance with regulatory standards and maintaining data integrity.
  • Scalability and Performance: Develop strategies to optimize performance for systems processing terabytes of data daily while ensuring scalability.
  • Team Building: Foster a collaborative team environment, driving skill development, career growth, and continuous learning within the team.
  • Innovation and Continuous Improvement: Stay ahead of industry trends to evaluate and incorporate new tools, technologies, and methodologies into the organization.

Qualifications

Required Skills:

  • 8+ years of experience in data engineering with a proven track record of leading data projects or teams.
  • Strong programming skills in Python, with expertise in building and optimizing ETL pipelines.
  • Extensive experience with Snowflake or equivalent data warehouses for designing schemas, optimizing queries, and managing large datasets.
  • Expertise in orchestration tools like Apache Airflow, with experience in building and managing complex workflows.
  • Deep understanding of messaging queues such as Kafka, AWS SQS, or similar technologies for real-time data ingestion and processing.
  • Demonstrated ability to architect and implement scalable data solutions handling terabytes of data.
  • Hands-on experience with Apache Iceberg for managing and optimizing data lakes.
  • Proficiency in containerization and orchestration tools like Docker and Kubernetes for deploying and managing distributed systems.
  • Strong understanding of CI/CD pipelines, including version control, deployment strategies, and automated testing.
  • Proven experience working in an Agile development environment and managing cross-functional team interactions.
  • Strong background in data modeling, data governance, and ensuring compliance with data security standards.
  • Experience working with cloud platforms like AWS, Azure, or GCP.

Preferred Skills:

  • Proficiency in stream processing frameworks such as Apache Flink for real-time analytics.
  • Familiarity with programming languages like Scala or Java for additional engineering tasks.
  • Exposure to integrating data pipelines with machine learning workflows.
  • Strong analytical skills to evaluate new technologies and tools for scalability and performance.

Leadership Skills:

  • Proven ability to lead and mentor data engineering teams, promoting collaboration and a culture of excellence.
  • Exceptional communication and interpersonal skills to articulate complex technical concepts to stakeholders.
  • Strategic thinking to align data engineering efforts with business goals and objectives.
