Google Cloud Data Engineer Certification

L'Aquila
EUR 50,000 - 80,000
Job Description

Responsibilities

A Data Architect is an IT expert responsible for enabling data-driven decision making by collecting, transforming, and publishing data. At NTT Data, a Data Architect should be capable of designing, building, operationalizing, securing, and monitoring data processing systems with a focus on security, compliance, scalability, efficiency, reliability, fidelity, flexibility, and portability. The primary goal of a Data Architect is to convert raw data into information, generating insights and business value.

  • Build large-scale batch and real-time data pipelines using data processing frameworks on the GCP cloud platform (a minimal pipeline sketch follows this list).
  • Apply analytical, data-driven approaches to gain a deep understanding of rapidly changing business needs.
  • Collaborate with teams to evaluate business needs and priorities, liaise with key business partners, and address team requirements related to data systems and management.
  • Participate in project planning by identifying milestones, deliverables, and resource needs; track activities and task execution.
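As a rough illustration of the first responsibility above, the following is a minimal sketch of a batch pipeline written with Apache Beam, one representative data processing framework that runs on GCP via Dataflow. The project ID, bucket, file pattern, and BigQuery table are hypothetical placeholders, not values from this posting.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Pipeline options for running on Dataflow; use "DirectRunner" to test locally.
    # All identifiers below are illustrative assumptions.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="example-project",
        region="europe-west1",
        temp_location="gs://example-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read raw events" >> beam.io.ReadFromText("gs://example-bucket/raw/events-*.json")
            | "Parse JSON" >> beam.Map(json.loads)
            | "Keep valid rows" >> beam.Filter(lambda row: "user_id" in row)
            | "Write to BigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

The same pipeline shape extends to streaming by swapping the text source for a Pub/Sub source and enabling streaming mode; the transform and sink stages stay largely unchanged.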

Required Skills

  • Bachelor’s degree in Computer Science, Computer Engineering, or a related field.
  • 5-10 years of experience in a data engineering role.
  • Proficiency as a software engineer using Scala, Java, or Python.
  • Advanced SQL skills, preferably with BigQuery.
  • Good knowledge of Google Managed Services such as Cloud Storage, BigQuery, Dataflow, Dataproc, and Data Fusion.
  • Experience with workflow management tools.
  • Solid understanding of GCP architecture for batch and streaming data processing.
  • Strong knowledge of data technologies and data modeling.
  • Expertise in building modern, cloud-native data pipelines and operations following an ELT approach (see the BigQuery sketch after this list).
  • Experience with data migration and data warehousing.
  • Ability to organize, normalize, and store complex data to support ETL processes and end-user needs.
  • Passion for designing ingestion and transformation processes for data from multiple sources to create cohesive data assets.
  • Good understanding of developer tools, CI/CD pipelines, etc.
  • Excellent communication skills, with empathy towards end users and internal customers.
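To make the BigQuery and ELT items above concrete, here is a minimal sketch using the google-cloud-bigquery client: raw files are first loaded into a staging table as-is (extract and load), then transformed with SQL inside the warehouse (transform). The project, dataset, table, and bucket names are illustrative assumptions only.

```python
from google.cloud import bigquery

# Hypothetical project ID; credentials are taken from the environment.
client = bigquery.Client(project="example-project")

# E + L: load raw newline-delimited JSON from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders-*.json",
    "example-project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # wait for the load to finish

# T: transform inside the warehouse with SQL, materializing a curated table.
transform_sql = """
CREATE OR REPLACE TABLE `example-project.analytics.daily_revenue` AS
SELECT
  DATE(order_ts)          AS order_date,
  country,
  SUM(amount_eur)         AS revenue_eur,
  COUNT(DISTINCT user_id) AS buyers
FROM `example-project.staging.orders_raw`
GROUP BY order_date, country
"""
client.query(transform_sql).result()
```

In practice, steps like these are typically scheduled and chained with a workflow management tool (for example Cloud Composer/Airflow), which is the kind of orchestration experience the requirements above refer to.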

Nice-to-have:

  • Experience with Big Data ecosystems such as Hadoop, Hive, HDFS, HBase.
  • Experience with Agile methodologies and DevOps principles.
