
Data Engineer

Chevron

Ciudad Autónoma de Buenos Aires

On-site

ARS 87.105.000 - 116.141.000

Full-time

Posted today

Vacancy description

A leading global energy company in Buenos Aires is seeking a Data Engineer to design and build integration jobs for data in the Microsoft Cloud environment. The role involves developing ETL processes, managing workflows, and collaborating with teams to meet strategic business objectives. Successful candidates will have at least 3 years of experience in data engineering and strong skills in Python, Spark, and Azure technologies.

Qualifications

  • Minimum 3 years of experience in data engineering/pipelines.
  • Importing data via APIs and Azure tools.
  • Experience with Azure Databricks.
  • Configuring data flows into analytics tools.

Responsibilities

  • Understand business use of data and stakeholder requirements.
  • Collaborate with teams to provide data management direction.
  • Consult on data integration patterns and data quality.
  • Maintain knowledge of key data types and definitions.

Skills

Data analysis/modeling
Data cleaning
Python
Spark
SQL
Data Pipeline Development
Data transformation
Communication with stakeholders

Tools

Azure Data Factory
Databricks
Job description

Total Number of Openings

1

Chevron’s Business Support Center (BASSC), located in Buenos Aires, is accepting applications for the position of Data Engineer. Successful candidates will join the IT Organization, which is part of a multifunction service center with a workforce of more than 1,800 employees who deliver business services and solutions to the corporation across the globe.

The Data Engineer will be responsible for designing, setting up, and building integration jobs to move and store data from existing Systems of Record (SoR) into and through the Chevron Microsoft Cloud environment (Azure Data Lake, Azure SQL, Azure Data Warehouse), and into other sources.

This includes ETL development from on-premises databases, data transformation using Databricks (Python, Spark, Scala, SQL), and orchestration of workflows via Azure Data Factory.
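As a sketch of the transformation step described above, here in plain Python rather than PySpark so it runs standalone (on Databricks the same logic would typically be expressed against a Spark DataFrame); the field names and cleaning rules are illustrative assumptions, not details from the posting:

```python
from datetime import datetime

def clean_record(raw: dict):
    """Drop malformed rows and normalize types, as an ETL transform might."""
    try:
        return {
            "well_id": str(raw["well_id"]).strip().upper(),
            "reading": float(raw["reading"]),
            "ts": datetime.fromisoformat(raw["ts"]),
        }
    except (KeyError, ValueError, TypeError):
        # A real pipeline would route rejects to a quarantine table
        # rather than silently dropping them.
        return None

def transform(rows):
    """Keep only the records that clean successfully."""
    cleaned = (clean_record(r) for r in rows)
    return [r for r in cleaned if r is not None]

rows = [
    {"well_id": " a-101 ", "reading": "42.5", "ts": "2024-05-01T00:00:00"},
    {"well_id": "B-202", "reading": "not-a-number", "ts": "2024-05-01T01:00:00"},
]
print(transform(rows))
```

The same keep-or-quarantine pattern maps directly onto a Spark `filter`/`withColumn` chain once the data volume requires it.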

The candidate will follow Chevron’s standard integration patterns, tools, and languages, and will also create schemas and build Operational Controls (OC) for teams to manage SoRs once created. Responsibilities include ensuring alignment with vendors and third parties, managing code versioning via Azure DevOps, and optimizing data flow processes.

Responsibilities for this position may include but are not limited to:

  • Understand the business use of data and stakeholder requirements to support strategic business objectives.
  • Collaborate with delivery teams to provide data management direction and support for initiatives and product development.
  • Contribute to the design of common information models.
  • Consult on the appropriate data integration patterns, data modeling and data quality.
  • Maintain and share knowledge of requirements, key data types and data definitions, data stores, and data creation process.

Required qualifications

  • Minimum 3 years of experience with data analysis/modeling, data acquisition/ingestion, data cleaning, and data engineering/pipeline.
  • Importing data via APIs, ODBC, Azure Data Factory, or Azure Databricks from various systems of record like Azure Blob, SQL DBs, NoSQL DBs, Data Lake, Data Warehouse, etc.
  • Experience with Azure Databricks.
  • Data transformation using Python, Spark, Scala, and SQL in Databricks.
  • Data Pipeline Development.
  • Configuring data flows and data pipelines into various data analytics tools like Power BI, Azure Analytics Service, or other data science tools.
  • Troubleshooting and supporting data issues within different solution products.
  • Communicate clearly and professionally with technical and non-technical stakeholders.
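One of the duties listed above is importing data via APIs. A minimal sketch of paginated ingestion follows; `fetch_page` is a stub standing in for a real HTTP call (e.g. via `urllib` or an SDK), and the page size and payload shape are assumptions for illustration:

```python
def fetch_page(page: int, page_size: int = 2):
    """Stub API: pretends the source system of record holds five records."""
    data = [{"id": i} for i in range(5)]
    start = page * page_size
    return data[start:start + page_size]

def ingest_all(page_size: int = 2):
    """Pull pages until the API returns an empty batch."""
    records, page = [], 0
    while batch := fetch_page(page, page_size):
        records.extend(batch)
        page += 1
    return records

print(ingest_all())
```

In practice this loop would land each batch into staging storage (e.g. Azure Blob or the Data Lake) before transformation, rather than accumulating records in memory.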

Preferred Qualifications

  • Knowledge of Ansible.
  • Experience in building data models for structured and unstructured data.
  • Experience with transforming data with various ETL tools and/or scripting.
  • Familiarity with Agile methodologies.
  • Demonstrates accountability and ownership of deliverables.
  • Proven expertise in cloud-based data architecture (Azure), large-scale data processing, and integration of diverse data sources.
  • Familiarity with operational workflows in oil & gas or industrial domains is a plus.
  • CI/CD pipeline experience is a plus.

Relocation Options:

Relocation could be considered.

International Considerations:

Expatriate assignments will not be considered.

Chevron participates in E-Verify in certain locations as required by law.
