
Data Architect with Snowflake, Python, and Advanced hands-on SQL

Ampstek

Remote

BRL 626.000 - 784.000

Part-time

3 days ago

Job summary

A leading data solutions company is seeking an experienced Data Architect for a 100% remote position. The ideal candidate will have over 8 years of experience in data engineering and strong programming skills in Python, alongside expertise in Snowflake for designing and optimizing data solutions. The role involves leading technical teams, architecting data pipelines, and ensuring compliance with data governance standards. Competitive salary and flexible working hours are included.

Skills

Data engineering experience
Programming skills in Python
Experience with Snowflake
Expertise in orchestration tools
Understanding of messaging queues
Ability to architect scalable solutions
Hands-on experience with Apache Iceberg
Proficiency in Docker and Kubernetes
Understanding of CI/CD pipelines
Experience in an Agile environment
Strong background in data modeling
Experience with cloud platforms

Job description

Data Architect with Snowflake, Python, and Advanced hands-on SQL – 100% Remote

Job Type: Contract

Qualifications
Required Skills
  • 8+ years of experience in data engineering with a proven track record of leading data projects or teams.
  • Strong programming skills in Python, with expertise in building and optimizing ETL pipelines.
  • Extensive experience with Snowflake or equivalent data warehouses for designing schemas, optimizing queries, and managing large datasets.
  • Expertise in orchestration tools like Apache Airflow, with experience in building and managing complex workflows.
  • Deep understanding of messaging queues such as AWS SQS or similar technologies for real-time data ingestion and processing.
  • Demonstrated ability to architect and implement scalable data solutions handling terabytes of data.
  • Hands-on experience with Apache Iceberg for managing and optimizing data lakes.
  • Proficiency in containerization and orchestration tools like Docker and Kubernetes for deploying and managing distributed systems.
  • Strong understanding of CI/CD pipelines, including version control, deployment strategies, and automated testing.
  • Proven experience working in an Agile development environment and managing cross-functional team interactions.
  • Strong background in data modeling, data governance, and ensuring compliance with data security standards.
  • Experience working with cloud platforms like AWS, Azure, or GCP.
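To illustrate the kind of ETL pipeline work these requirements describe, here is a minimal sketch using only the Python standard library. All field names and data are hypothetical; a production pipeline would run each step as an Airflow task and load into Snowflake via its connector rather than an in-memory list.

```python
# Illustrative extract/transform/load sketch -- all names are hypothetical.
import json
from datetime import datetime, timezone

def extract(raw_lines):
    """Parse raw JSON event lines, skipping malformed records."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # in production: route to a dead-letter queue
    return records

def transform(records):
    """Normalise field names, cast types, and stamp the load time."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {"user_id": r["userId"], "amount": float(r["amount"]), "loaded_at": now}
        for r in records
        if "userId" in r and "amount" in r
    ]

def load(rows, target):
    """Append transformed rows to the target store (a list stands in for a warehouse table)."""
    target.extend(rows)
    return len(rows)

warehouse = []
raw = ['{"userId": 1, "amount": "9.50"}', 'not-json', '{"userId": 2, "amount": "3"}']
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 -- the malformed line is dropped during extraction
```

The separation into three small, testable functions mirrors how such steps would map onto individual orchestrated tasks.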
Preferred Skills
  • Proficiency in stream processing frameworks such as Apache Flink for real-time analytics.
  • Familiarity with programming languages like Scala or Java for additional engineering tasks.
  • Exposure to integrating data pipelines with machine learning workflows.
  • Strong analytical skills to evaluate new technologies and tools for scalability and performance.
Leadership Skills
  • Proven ability to lead and mentor data engineering teams, promoting collaboration and a culture of excellence.
  • Exceptional communication and interpersonal skills to articulate complex technical concepts to stakeholders.
Responsibilities
  • Technical Leadership: Provide technical direction and mentorship to a team of data engineers, ensuring best practices in coding, architecture, and data operations.
  • End-to-End Ownership: Architect, implement, and optimize end-to-end data pipelines that process and transform large-scale datasets efficiently and reliably.
  • Orchestration and Automation: Design scalable workflows using orchestration tools such as Apache Airflow, ensuring high availability and fault tolerance.
  • Data Warehouse and Lake Optimization: Lead the implementation and optimization of Snowflake and data lake technologies like Apache Iceberg for storage, query performance, and scalability.
  • Real-Time and Batch Processing: Build robust systems leveraging Kafka, SQS, or similar messaging technologies for real-time and batch data processing.
  • Cross-Functional Collaboration: Work closely with Data Science, Product, and Engineering teams to define data requirements and deliver actionable insights.
  • Data Governance and Security: Establish and enforce data governance frameworks, ensuring compliance with regulatory standards and maintaining data integrity.
  • Scalability and Performance: Develop strategies to optimize performance for systems processing terabytes of data daily while ensuring scalability.
  • Team Building: Foster a collaborative team environment, driving skill development, career growth, and continuous learning within the team.
  • Innovation and Continuous Improvement: Stay ahead of industry trends to evaluate and incorporate new tools, technologies, and methodologies into the organization.

Strategic thinking to align data engineering efforts with business goals and objectives.
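The real-time ingestion responsibilities above follow a producer/consumer pattern, sketched below with the standard library's `queue.Queue` standing in for Kafka or SQS. All event and handler names are hypothetical; this is an illustration of the pattern, not the production stack.

```python
# Producer/consumer sketch of real-time event ingestion.
# queue.Queue stands in for a Kafka topic or SQS queue.
import queue
import threading

events = queue.Queue()
processed = []
SENTINEL = None  # signals the consumer to stop

def producer(n):
    """Publish n events, then a sentinel to shut the consumer down."""
    for i in range(n):
        events.put({"event_id": i, "payload": i * i})
    events.put(SENTINEL)

def consumer():
    """Drain the queue, processing each event as it arrives."""
    while True:
        msg = events.get()
        if msg is SENTINEL:
            break
        processed.append(msg["payload"])

t = threading.Thread(target=consumer)
t.start()
producer(5)
t.join()
print(sum(processed))  # 0 + 1 + 4 + 9 + 16 = 30
```

Decoupling producers from consumers through a queue is what lets such systems absorb bursts and scale each side independently.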

Thanks

Aatmesh

aatmesh.singh@ampstek.com
