Google Cloud Data Engineer Certification

Ferrara
EUR 45,000 - 75,000
3 days ago
Job Description

Responsibilities

A Data Architect is an IT expert who enables data-driven decision making by collecting, transforming, and publishing data. At NTT Data, a Data Architect should be able to design, build, operationalize, secure, and monitor data processing systems with a focus on security, compliance, scalability, efficiency, reliability, fidelity, flexibility, and portability. The main mission of a Data Architect is to turn raw data into information, creating insights and business value.

  • Build large-scale batch and real-time data pipelines using data processing frameworks on the GCP cloud platform.
  • Apply an analytical, data-driven approach to understand rapidly changing business needs.
  • Collaborate with the team to evaluate business needs and priorities, liaise with key business partners, and address team requirements related to data systems and management.
  • Participate in project planning by identifying milestones, deliverables, and resource requirements; track activities and task execution.

Required Skills

  • Bachelor’s degree in Computer Science, Computer Engineering, or a relevant field.
  • 5 to 10 years of experience in a data engineering role.
  • Proficiency as a software engineer using Scala, Java, or Python.
  • Advanced SQL skills, preferably with BigQuery.
  • Good knowledge of Google Managed Services such as Cloud Storage, BigQuery, Dataflow, Dataproc, and Data Fusion.
  • Experience with workflow management systems.
  • Strong understanding of GCP architecture for batch and streaming data processing.
  • Solid knowledge of data technologies and data modeling.
  • Experience in building modern, cloud-native data pipelines and operations following an ELT philosophy.
  • Experience with data migration and data warehousing.
  • Ability to organize, normalize, and store complex data effectively, supporting both ETL processes and end-user needs.
  • Passion for designing ingestion and transformation processes for data from multiple sources to create cohesive data assets.
  • Good understanding of developer tooling and CI/CD practices.
  • Excellent communication skills with empathy for end users and internal customers.

Nice-to-have:

  • Experience with Big Data ecosystems such as Hadoop, Hive, HDFS, HBase.
  • Experience with Agile methodologies and DevOps principles.
