
Data Engineer (Hybrid/Guadalajara or Tijuana)

Insulet

Tijuana

On-site

MXN 600,000 - 900,000

Full-time

Posted 9 days ago

Job description

A leading medical device company in Tijuana is seeking a Data Architect to design and maintain data infrastructures. Responsibilities include managing data governance and collaborating with various teams to optimize data integrity and usability. The ideal candidate holds a relevant degree and has extensive experience with SQL, cloud technologies, and data quality assurance. This role presents an exciting opportunity to contribute to innovative healthcare solutions.

Education

  • Bachelor's degree in Mathematics, Computer Science, or related field required.
  • Master’s degree or BS with relevant experience preferred.
  • Experience working with data technologies and data quality assurance.

Responsibilities

  • Design and implement Insulet’s data lake and warehouse.
  • Work with cross-functional teams to identify data sources.
  • Perform data quality checks and cleanup.

Skills

SQL
Relational databases
Cloud data tools
Python
Data quality assurance

Education

Bachelor's degree in a STEM field
Master's degree in a STEM field (preferred)

Tools

Azure SQL
Google BigQuery
MongoDB
Azure Data Factory

Job description

Overview

Insulet started in 2000 with an idea and a mission to enable our customers to enjoy simplicity, freedom and healthier lives through the use of our Omnipod product platform. In the last two decades we have improved the lives of hundreds of thousands of patients by using innovative technology that is wearable, waterproof, and lifestyle-accommodating.

We are looking for highly motivated, performance-driven individuals to be a part of our expanding team. We do this by hiring amazing people, guided by shared values, who exceed customer expectations. Our continued success depends on it!

Responsibilities
  • Design, implement, and maintain Insulet’s data lake, warehouse, and overall data architecture
  • Work with IT, analytics, and cross-functional teams to identify data sources, determine data collection methods, and design aggregation mechanisms
  • Perform data quality checks and data cleanup
  • Interface with business stakeholders in cross-functional teams, including manufacturing, quality assurance, and post-market surveillance, in order to understand their various applications and data sets
  • Develop data preprocessing tools as needed
  • Maintain and understand the various business intelligence tools used to visualize and report the team’s analytics results to the company
Education and Experience
  • Bachelor’s degree in Mathematics, Computer Science, Electrical and Computer Engineering, or a closely related STEM field is required
  • Master’s degree in Mathematics, Computer Science, Electrical and Computer Engineering, or a closely related STEM field; or a BS with 2-3 years’ experience working with data technologies, is preferred
  • Experience in data quality assurance, control and lineage for large datasets in relational/non-relational databases
  • Experience managing robust ETL/ELT pipelines for large real-world datasets that may include messy data, unpredictable schema changes, and/or incorrect data types
  • Experience with both batch data processing and streaming data
  • Experience in implementing and maintaining Business Intelligence tools linked to an external data warehouse or relational/non-relational databases is required
  • Experience in medical device, healthcare, or manufacturing industries is desirable
  • HIPAA experience a plus
Skills/Competencies
  • Demonstrated knowledge in SQL and relational databases is required
  • Knowledge in non-relational databases (MongoDB) is a plus
  • Demonstrated knowledge of managing large data sets in the cloud (Azure SQL, Google BigQuery, etc.) is required
  • Knowledge of ETL and workflow tools (Azure Data Factory, AWS Glue, etc.) is a plus
  • Demonstrated knowledge of building, maintaining, and scaling cloud architectures (Azure, AWS, etc.), specifically cloud data tools that leverage Spark, is required
  • Demonstrated coding abilities in Python, Java, C, or scripting languages
  • Demonstrated familiarity with different data types as inputs (e.g., CSV, XML, JSON)
  • Demonstrated knowledge of database and dataset validation best practices
  • Demonstrated knowledge of software engineering principles and practices
  • Ability to communicate effectively and document objectives and procedures