Big Data Consultant

buscojobs Brasil

Região Geográfica Intermediária de Boa Vista

On-site

BRL 60,000 - 100,000

Full-time

2 days ago

Job Summary

An innovative company is seeking a skilled Big Data Engineer to join its dynamic data team. In this role, you will design and optimize scalable data pipelines, enabling data-driven decision-making across the organization. Collaborating with data scientists and analysts, you'll ensure a reliable and efficient data infrastructure. This position offers the opportunity to work with cutting-edge technologies and be part of a data-driven culture in a fast-paced environment. If you're passionate about big data and eager to make an impact, this is the perfect opportunity for you.

Benefits

Modern data stack
Collaborative environment
Innovative culture

Qualifications

  • 3+ years of experience in data engineering or software engineering roles focused on big data.
  • Strong programming skills in Python, Scala, or Java.
  • Familiarity with cloud data services like AWS, GCP, or Azure.

Responsibilities

  • Design and maintain scalable data pipelines for batch and real-time processing.
  • Collaborate with data scientists to ensure data is accessible, accurate, and clean.
  • Implement data quality checks and monitoring systems.

Skills

Python
Scala
Java
Apache Spark
Hadoop
ETL/ELT processes
SQL
NoSQL databases
Apache Airflow
Data modeling

Education

Bachelor's degree in Computer Science
Master’s degree in Engineering

Tools

Apache Kafka
Apache NiFi
Spark
Flink
Airflow
Hadoop
Terraform
Docker
Kubernetes

Job Description

Employment Type: [Full-Time / Contract]

About the Role:

We are looking for a highly skilled and experienced Big Data Engineer to join our growing data team. As a Big Data Engineer, you will be responsible for designing, developing, and optimizing scalable data pipelines and architectures that enable data-driven decision-making across the organization. You'll work closely with data scientists, analysts, and software engineers to ensure reliable, efficient, and secure data infrastructure.

Key Responsibilities:

  • Design, develop, and maintain robust and scalable data pipelines for batch and real-time processing.
  • Build and optimize data architectures to support advanced analytics and machine learning workloads.
  • Ingest data from various structured and unstructured sources using tools like Apache Kafka, Apache NiFi, or custom connectors.
  • Develop ETL/ELT processes using tools such as Spark, Hive, Flink, Airflow, or dbt.
  • Work with big data technologies such as Hadoop, Spark, HDFS, Hive, Presto, etc.
  • Implement data quality checks, validation processes, and monitoring systems.
  • Collaborate with data scientists and analysts to ensure data is accessible, accurate, and clean.
  • Manage and optimize data storage solutions including cloud-based data lakes (AWS S3, Azure Data Lake, Google Cloud Storage).
  • Implement and ensure compliance with data governance, privacy, and security best practices.
  • Evaluate and integrate new data tools and technologies to enhance platform capabilities.
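To make the "data quality checks and validation processes" bullet concrete, here is a minimal sketch in plain Python of the kind of batch validation such a role involves. This is illustrative only — a production pipeline at this company would more likely use Spark, Great Expectations, or similar; the function name, fields, and sample records below are all made up.

```python
def check_batch(records, required_fields, non_null_fields):
    """Validate a batch of records; return (valid, rejected, report).

    Hypothetical example of a data quality gate: records missing a
    required field or carrying a null in a non-null field are rejected
    with per-record error messages, and a summary report is produced
    for monitoring.
    """
    valid, rejected = [], []
    for rec in records:
        errors = []
        for field in required_fields:
            if field not in rec:
                errors.append(f"missing field: {field}")
        for field in non_null_fields:
            if rec.get(field) is None:
                errors.append(f"null value: {field}")
        if errors:
            rejected.append((rec, errors))
        else:
            valid.append(rec)
    report = {
        "total": len(records),
        "valid": len(valid),
        "rejected": len(rejected),
    }
    return valid, rejected, report


# Illustrative batch: one clean record, one null event, one missing field.
batch = [
    {"id": 1, "event": "click", "ts": "2024-01-01T00:00:00Z"},
    {"id": 2, "event": None, "ts": "2024-01-01T00:00:01Z"},
    {"id": 3, "ts": "2024-01-01T00:00:02Z"},
]
valid, rejected, report = check_batch(batch, ["id", "event", "ts"], ["event"])
```

In practice the rejected records would be routed to a dead-letter location and the report fed to a monitoring system, rather than kept in memory.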

Required Skills and Qualifications:

  • Bachelor's or Master’s degree in Computer Science, Engineering, Information Systems, or related field.
  • 3+ years of experience in data engineering or software engineering roles with a focus on big data.
  • Strong programming skills in Python, Scala, or Java.
  • Proficiency with big data processing frameworks such as Apache Spark, Hadoop, or Flink.
  • Experience with SQL and NoSQL databases (e.g., PostgreSQL, Cassandra, MongoDB, HBase).
  • Hands-on experience with data pipeline orchestration tools like Apache Airflow, Luigi, or similar.
  • Familiarity with cloud data services (AWS, GCP, or Azure), particularly services like EMR, Databricks, BigQuery, Glue, etc.
  • Solid understanding of data modeling, data warehousing, and performance optimization.
  • Experience with CI/CD for data pipelines and infrastructure-as-code tools like Terraform or CloudFormation is a plus.

Preferred Qualifications:

  • Experience working in agile development environments.
  • Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
  • Knowledge of data privacy and regulatory compliance standards (e.g., GDPR, HIPAA).
  • Experience with real-time data processing and streaming technologies (e.g., Kafka Streams, Spark Streaming).
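The core idea behind the streaming technologies listed above (Kafka Streams, Spark Streaming) is windowed aggregation over event time. A minimal sketch in plain Python of a tumbling (fixed, non-overlapping) window count — purely illustrative; the function name, window size, and sample events are assumptions, not any framework's API:

```python
from collections import defaultdict


def tumbling_window_counts(events, window_seconds):
    """Count occurrences of each key per fixed, non-overlapping window.

    events: iterable of (timestamp_seconds, key) pairs.
    Each event falls into exactly one window, identified by its start
    time: (ts // window_seconds) * window_seconds.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in sorted(windows.items())}


# Illustrative click-stream: timestamps 0, 3, 7, 12 with 5-second windows.
events = [(0, "click"), (3, "view"), (7, "click"), (12, "click")]
counts = tumbling_window_counts(events, 5)
```

Real streaming engines add what this sketch omits: incremental state, watermarks for late data, and fault-tolerant checkpointing.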

Why Join Us:

  • Work with a modern data stack and cutting-edge technologies.
  • Be part of a data-driven culture in a fast-paced, innovative environment.
  • Collaborate with talented professionals from diverse backgrounds.