Senior Data Engineer

mindIT HR Agency

Buenos Aires

On-site

ARS 47,858,000 - 79,765,000

Full-time

30+ days ago

Vacancy description

An international agency specializing in recruiting IT experts is seeking a Senior Data Engineer to join its team. The ideal candidate will have solid experience in data engineering, particularly in building pipelines and optimizing data systems. You will work on a major Snowflake migration project and collaborate with various teams to ensure efficient data delivery.

Benefits

10 days of paid leave per year
International certifications and training
Access to coworking spaces
English classes to improve your language skills

Requirements

  • Minimum of 7 years of experience in data engineering.
  • Experience with data integration tools such as Talend or Azure Data Factory.
  • Knowledge of cloud platforms such as AWS, Azure, or GCP.

Responsibilities

  • Build and optimize data pipelines and a modern data architecture.
  • Collaborate with stakeholders to define needs and establish solutions.
  • Create data infrastructure systems, including data access and collection.

Skills

SQL
Python
Data Integration
Data Governance
Data Modeling

Education

Bachelor’s degree in Software Engineering, Computer Science, or related field

Tools

Talend
Snowflake
Kafka
Azure Data Factory

Job description

Our client is looking for a Senior Data Engineer to join their Data Science & Management team. The role focuses on expanding and optimizing data architecture and pipelines, as well as improving data flow and collection across cross-functional teams.

The ideal candidate has strong experience building data pipelines and data models from scratch, and enjoys optimizing data systems. This person will collaborate with product owners, analysts, and data scientists to ensure efficient and consistent data delivery.

The main project involves migrating approximately 1,000 pipelines to Snowflake, so experience with Active Batch, Talend, Snowflake, Docker, and GitHub is required. The candidate should be excited about designing and maintaining modern data architectures in an agile environment.

Responsibilities:

  • Apply theoretical and domain/technology-specific knowledge, typically gained through formal education or relevant professional expertise (e.g., engineering, software design, systems architecture), to achieve results.
  • Guide and influence others where required, while keeping the role's primary focus on the application of technical and domain expertise.
  • Have wide-ranging experience and use professional concepts and company objectives to resolve complex problems.
  • Engage with business stakeholders to establish clear needs and connect them to solutions, including setting up prototypes and involving multiple parties in design sessions.
  • Exercise judgment in selecting methods and evaluation criteria to obtain results.
  • Create data collection, extraction, and transformation frameworks for structured and unstructured data.
  • Develop and maintain data infrastructure systems (e.g., data warehouses), including data access points.
  • Prepare and manipulate data using a variety of data integration and pipeline tools including, but not limited to, Talend, SQL, Snowflake, Kafka, and Azure Data Factory.
  • Create efficient load processes, including logging, exception handling, support notification, and visibility operations.
  • Organize data into formats and structures that optimize reuse and efficient delivery to business units, analytics teams, and system applications.
  • Integrate data across data lakes, data warehouses, and systems applications to ensure consistent information delivery across the enterprise.
  • Ensure efficient architecture and systems design.
  • Build and evolve the data service layer and engage the team to assemble components for a best-in-class customer offering.
  • Assess overall data architecture and integrations, and drive ongoing improvements to the solution offering.
  • Lead the architecture, design, and implementation of complex data architecture and integration solutions, applying best practices throughout the full development lifecycle, including coding standards, code reviews, source control, build processes, testing, and operations.
  • Collaborate with data governance and strategy teams to ensure data lineage is clearly defined and constructed to highlight data reuse and simplicity.
  • Assess new opportunities to simplify data operations using new tools, technologies, file storage methods, and processes. Use team context and experience to evaluate these opportunities and bring them forward for review and implementation.
Requirements:

  • Bachelor’s degree in Software Engineering, Computer Science, or a related field required; Master’s degree considered an asset. Equivalent work experience in a technology or business environment will also be considered.
  • Minimum of 7 years of experience in data engineering, with a focus on structured work processes.
  • Minimum of 7 years of experience developing integration solutions using data integration tools such as Talend or Azure Data Factory.
  • Minimum of 3 years of experience working with real-time data streaming tools such as Kafka or equivalent.
  • Experience with cloud platforms, including Azure, AWS, or GCP.
  • Experience with data warehousing platforms such as Snowflake or Databricks.
  • High proficiency in SQL, Python, and other programming languages.
  • Strong expertise in data management, data governance, data design, and database architecture. Proven ability to manipulate, process, and extract value from large, disconnected datasets.
  • Proficiency in data modeling and data architecture; experience with WhereScape RED/3D is an asset.
  • Experience with big data tools such as Hadoop, Spark, and Kafka.
  • Experience with both relational and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools such as Azkaban, Luigi, or Airflow.
  • Experience with AWS services including EC2, EMR, RDS, and Redshift.
  • Experience with stream-processing systems such as Storm or Spark Streaming.
  • Proficiency in object-oriented or functional scripting languages such as Python, Java, C++, or Scala.
  • Strong expertise in data modeling, data integration, data orchestration, and supporting methodologies.
  • Proven experience leading large-scale projects or significant project components, with strong communication skills to engage both technical and non-technical stakeholders.
  • Proficiency in multiple programming languages with the ability to design and engineer moderately complex enterprise solutions.
  • Working knowledge of message queuing, stream processing, and highly scalable big data storage systems.
  • Strong project management and organizational skills.
Benefits:

  • Salary in USD.
  • 10 paid time off (PTO) days per year.
  • International certifications and trainings of your choice.
  • Access to coworking spaces.
  • English classes to enhance your language skills.
Details

Our client is a global company headquartered in the United States, operating as a shared services hub that supports clients across the U.S., Canada, and Europe. They specialize in sourcing and hiring IT professionals from throughout Latin America, connecting them with established, reliable clients that integrate them as extensions of their teams.
