Data Engineer I

Kroll

Estado de México

On-site

MXN 448,000 - 628,000

Full-time

Posted 13 days ago

Vacancy Description

A global risk and financial advisory leader is seeking an experienced Data Engineer I in Estado de México, Mexico. You will design and manage data pipelines, ensuring data quality and performance. Your expertise in SQL and cloud platforms such as AWS or Azure will be crucial in optimizing our data architecture. This role involves a commitment to improving processes and ensuring compliance with data security standards. Join us to contribute to impactful solutions and elevate your career.

Qualifications

  • Proficient in SQL for complex data retrieval and manipulation.
  • Experience with Microsoft SQL Server and PostgreSQL on AWS or Azure.
  • Strong understanding of ETL processes and data cleaning techniques.

Responsibilities

  • Design and build data pipelines ensuring smooth data flow.
  • Manage data warehouses to optimize queries and maintain quality.
  • Develop and optimize ETL processes for structured data.

Skills

SQL (T-SQL, PL/pgSQL, Spark-SQL)
Data Pipeline Design
ETL Processes
Data Quality Assurance
Database Management Systems
Cloud Platforms (AWS, Azure)
Programming (Java, C#.NET, Python)
Data Warehousing Concepts

Education

Master’s degree in Technology, Data Science, Mathematics, or Statistics

Tools

Apache NiFi
Informatica
Airflow
AWS
Azure
SQL Server
PostgreSQL

Job Description

We are looking for an experienced Data Engineer I who enjoys working in a fast-paced environment. You will get involved in every layer of our data stack, looking for ways to improve performance, reliability, and quality. You’ll help establish standards and communities of practice for our international team of data professionals and developers. Your prior experience working in diverse enterprise architectures will help modernize existing systems and build new solutions. You will be an integral part of our PCM engineering team, ensuring that client deliverables meet timing and quality expectations.

Day-to-Day Responsibilities:
  • Data Pipeline Construction: Design and build data pipelines to ensure smooth data flow from multiple sources to data warehouses or lakes. This involves extracting, transforming, and loading (ETL) data to make it accessible for analysis (a sketch of such a pipeline follows this list).
  • Data Warehousing: Manage data warehouses by modeling data for efficient queries, ensuring performance, and maintaining data quality. Common tools and platforms include Synapse, Snowflake, Redshift, BigQuery.
  • Data Integration and ETL Processes: Develop and optimize ETL processes to transform raw data into a structured format that analysts and data scientists can use. Tools like Apache NiFi, Informatica, and Airflow are often used in these processes.
  • Data Quality Assurance: Implement data cleaning and validation processes to ensure data accuracy and consistency. This includes monitoring data pipelines for failures and addressing data-related issues promptly.
  • Data Security: Implement security measures to protect sensitive data and ensure compliance with data privacy regulations. This involves setting up permissions and managing data access controls.
  • Scalability and Performance Optimization: Design systems that can handle large volumes of data and ensure that the infrastructure can scale as the organization grows.
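
For illustration only, here is a minimal sketch of such an ETL pipeline expressed as an Apache Airflow DAG (Airflow is one of the orchestration tools named in this posting). The DAG name, schedule, task logic, and sample data are hypothetical placeholders, not requirements of the role:

    # Hypothetical daily ETL pipeline; all names and data are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Pull raw rows from a source system (stubbed out here).
        return [{"id": 1, "amount": "42.50"}]

    def transform(ti):
        # Clean and type-cast the rows produced by the extract task.
        rows = ti.xcom_pull(task_ids="extract")
        return [{**row, "amount": float(row["amount"])} for row in rows]

    def load(ti):
        # Write the transformed rows to the warehouse (stubbed out here).
        rows = ti.xcom_pull(task_ids="transform")
        print(f"loading {len(rows)} rows")

    with DAG(
        dag_id="sales_etl",              # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # "schedule_interval" on Airflow < 2.4
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> transform_task >> load_task
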
Essential Traits:
  • Database Management and Technologies:
  • Proficient in SQL (T-SQL, PL/pgSQL, Spark-SQL) for complex data retrieval, manipulation, and transformation.
  • Experienced in query optimization techniques to improve performance and efficiency.
  • Strong understanding of relational database management systems (RDBMS) concepts and principles.
  • Demonstrated experience with both on-premises and cloud-based deployments of Microsoft SQL Server and PostgreSQL on platforms like AWS or Azure.
  • Familiarity with database administration tasks such as user and security management, backup and recovery, and performance monitoring.
  • Experience in designing and implementing data pipelines for data movement and transformation.
  • Understanding of the ETL (Extract, Transform, Load) process and its components (extraction, transformation, loading).
  • Familiarity with data pipeline orchestration tools and platforms (e.g., Airflow, Ascend, Apache Spark, DBT).
  • Exposure to data quality best practices and methodologies for ensuring clean and accurate data.
  • Experience with Java and/or C#.NET and/or Python, and their methods for interacting with persistence layers (a short Python sketch follows this list).
  • Working knowledge of cloud platforms like AWS or Azure for infrastructure provisioning and management.
  • Understanding of data warehousing concepts and technologies.
  • Familiarity with version control systems like Git.
  • Master’s degree in Technology, Data Science, Mathematics, Statistics, or a related field, with a minimum of 3 years of experience in the above.
  • Certifications in Cloud and Data Science.
  • Ability to manage confidential, sensitive information.
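
For illustration only, a minimal sketch of Python interacting with a persistence layer, here PostgreSQL through the psycopg2 driver (both technologies are named above). The connection settings, table, and columns are hypothetical:

    # Hypothetical upsert into a PostgreSQL table; DSN and schema are placeholders.
    import psycopg2

    DSN = "host=localhost dbname=analytics user=etl_user"  # placeholder settings

    def upsert_customer(customer_id: int, email: str) -> None:
        # Parameterized SQL avoids injection; ON CONFLICT makes reloads idempotent.
        sql = (
            "INSERT INTO customers (id, email) VALUES (%s, %s) "
            "ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email"
        )
        # The connection context manager commits on success and rolls back on error.
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute(sql, (customer_id, email))

    if __name__ == "__main__":
        upsert_customer(1, "ana@example.com")
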
About Kroll

Join the global leader in risk and financial advisory solutions—Kroll. With a nearly century-long legacy, we blend trusted expertise with cutting-edge technology to navigate and redefine industry complexities. As part of One Team, One Kroll, you'll contribute to a collaborative and empowering environment, propelling your career to new heights. Ready to build, protect, restore and maximize our clients’ value? Your journey begins with Kroll.

In order to be considered for a position, you must formally apply via careers.kroll.com.

Kroll is committed to equal opportunity and diversity, and recruits people based on merit.
