Job Search and Career Advice Platform

Jobs in Leon, Mexico

Lead Data Engineer

FuseMachines

Ciudad de México
Remote
MXN 900,000 - 1,200,000
30+ days ago

Compliance Associate

Bitso

Mexico
Remote
MXN 552,000 - 921,000
30+ days ago

Fullstack Developer (Senior) - Python (Flask) + React - Remote, Latin America

Bluelight Consulting

Mexico
Remote
USD 60,000 - 80,000
30+ days ago

Organic Growth Lead (Remote Mexico) - Future Opening

Directive

Tijuana
Remote
MXN 200,000 - 400,000
30+ days ago

Content Strategy Manager (Remote Mexico) - Future Opening

Directive

Tijuana
Remote
MXN 939,000 - 1,315,000
30+ days ago

Customer Service Representative (JOB ID:VR800)

Inside Out

Mexico
Remote
MXN 200,000 - 400,000
30+ days ago

Territory Account Manager- Michigan

CommScope

Mexico
Remote
USD 112,000 - 170,000
30+ days ago

Territory Account Manager

CommScope

Mexico
Remote
USD 160,000 - 260,000
30+ days ago

Senior Application Developer - Adobe Commerce

BENTLEY SYSTEMS, INC.

Texcoco de Mora
Remote
USD 80,000 - 100,000
30+ days ago

System Software Engineer - Ubuntu Networking

Canonical

Aguascalientes
Remote
MXN 1,832,000 - 2,749,000
30+ days ago

Paid Social Sr. Strategist

Power Digital

Mexico
Remote
USD 60,000 - 80,000
30+ days ago

Commissioned Sales Representative - LATAM Region (Remote)

Financecolombia

Chihuahua
Remote
MXN 200,000 - 400,000
30+ days ago

CAD Designer

Nextracker LLC, USA

Mexico
Remote
MXN 375,000 - 564,000
30+ days ago

Sales Representative Virtual Assistant (Digital Marketing Agency) (JOB ID:ERIJFC)

Inside Out

Mexico
Remote
MXN 200,000 - 400,000
30+ days ago

Campaign Manager

25Eight LLC

Ciudad de México
Remote
MXN 50,000 - 70,000
30+ days ago

Junior Paralegal

HomeLight

Mexico
Remote
MXN 469,000 - 658,000
30+ days ago

Digital Media Buyer - Remote, Latin America

Bluelight

Aguascalientes
Remote
MXN 200,000 - 400,000
30+ days ago

Sr Manager, Business Intelligence

G-P

Mexico
Remote
MXN 1,502,000 - 2,254,000
30+ days ago

Unpaid Intern, Email Campaign Management, for a US Company

GAOTek Inc

Veracruz
Remote
MXN 50,000 - 200,000
30+ days ago

Unpaid Intern, Email Campaign Management, for a US Company

GAOTek Inc

Jalisco
Remote
MXN 50,000 - 200,000
30+ days ago

Unpaid Intern, Email Campaign Management, for a US Company

GAOTek Inc

Guanajuato
Remote
MXN 50,000 - 200,000
30+ days ago

Technical Account Manager

Endavant

Ciudad de México
Remote
MXN 400,000 - 600,000
30+ days ago

Workday Integrations Managing Consultant (Mexico)

Topbloc, Inc.

Mexico
Remote
MXN 200,000 - 400,000
30+ days ago

Sales Development Representative

Jeeves

Mexico
Remote
USD 60,000 - 80,000
30+ days ago

SU - HR Manager

Financecolombia

Mexico
Remote
MXN 200,000 - 400,000
30+ days ago

Lead Data Engineer
FuseMachines
Ciudad de México
Remote
MXN 900,000 - 1,200,000
Full time
30+ days ago

Job summary

A leading AI services provider is looking for a skilled Sr. Data Engineer/Technical Lead for a remote full-time role. This position involves designing and optimizing data architectures, implementing data solutions, and mentoring junior engineers. The ideal candidate should have over 5 years of experience in data engineering, strong skills in Python and SQL, and familiarity with AWS and GCP. Join us to leverage data for insights and strategic goals.

Benefits

Flexible working hours
Professional development opportunities

Qualifications

  • 5+ years of data engineering experience in AWS and GCP.
  • Strong programming skills in Python and SQL.
  • Experience in building and optimizing data pipelines and architectures.

Responsibilities

  • Design and implement scalable data architectures.
  • Mentor junior data engineers.
  • Collaborate with cross-functional teams on data solutions.

Skills

Python
SQL
Data Integration
Data Modeling
Data Warehousing

Education

Bachelor's degree in Computer Science, Information Systems, Engineering or related field

Tools

AWS
GCP
Redshift
Apache Airflow
Kafka

Job description

About Fusemachines

Fusemachines is a leading AI strategy, talent, and education services provider. Founded by Sameer Maskey Ph.D., Adjunct Associate Professor at Columbia University, Fusemachines has a core mission of democratizing AI. With a presence in four countries (Nepal, the United States, Canada, and the Dominican Republic) and more than 450 employees, Fusemachines seeks to bring its global expertise in AI to transform companies around the world.

About the role

This is a remote full-time position responsible for designing, building, testing, optimizing, and maintaining the infrastructure and code required for data integration, storage, processing, pipelines, and analytics (BI, visualization, and advanced analytics) from ingestion to consumption. The role involves implementing data flow controls and ensuring high data quality and accessibility for analytics and business intelligence purposes. It requires a strong foundation in programming and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies.

We're looking for someone who can ramp up quickly, contribute right away, and lead the work in Data & Analytics, helping with everything from backlog definition to architecture decisions, and technically leading the rest of the team with minimal oversight.

We are looking for a skilled Sr. Data Engineer/Technical Lead with a strong background in Python, SQL, PySpark, Redshift, and AWS cloud-based large-scale data solutions, and a passion for data quality, performance, and cost optimization. The ideal candidate will work in an Agile environment and should also have GCP experience in order to contribute to the migration from AWS to GCP.

This role is perfect for someone passionate about leading a team and about leveraging data to drive insights, improve decision-making, and support the strategic goals of the organization through innovative data engineering solutions.

Qualification / Skill Set Requirement:

  • Must have a full-time Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
  • 5+ years of real-world data engineering development experience in AWS and GCP (certifications preferred). Strong expertise in Python, SQL, PySpark, and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modeling, management, lakes, warehousing, processing/transformation, integration, cleansing, validation, and analytics.
  • Senior person who can understand requirements and design end-to-end solutions with minimal oversight.
  • Strong programming skills in one or more languages such as Python or Scala, and proficiency in writing efficient, optimized code for data integration, storage, processing, and manipulation.
  • Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD systems (GitHub Actions, AWS CodeBuild, or similar), and binary repository managers (AWS CodeArtifact or similar).
  • Good understanding of Data Modeling and Database Design Principles, with the ability to design and implement efficient database schemas that meet the requirements of the data architecture and support data solutions.
  • Strong SQL skills and experience working with complex data sets, Enterprise Data Warehouses, and writing advanced SQL queries. Proficient with relational databases (RDS, MySQL, Postgres, or similar) and NoSQL databases (Cassandra, MongoDB, Neo4j, etc.).
  • Skilled in Data Integration from different sources such as APIs, databases, flat files, event streaming.
  • Strong experience implementing efficient batch and real-time ELT/ETL data pipelines in AWS using open-source solutions, with the ability to develop custom integration solutions as needed. This includes data integration from different sources such as APIs (PoS integrations are a plus), ERPs (Oracle and Allegra are a plus), databases, flat files, Apache Parquet, and event streaming, as well as cleansing, transformation, and validation of the data.
  • Strong experience with scalable and distributed data technologies such as Spark/PySpark, DBT, and Kafka, in order to handle large volumes of data.
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc. is a plus.
  • Strong experience in designing and implementing Data Warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse.
  • Strong experience in Orchestration using Apache Airflow.
  • Expert in Cloud Computing in AWS, including deep knowledge of a variety of AWS services such as Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, and CloudWatch.
  • Good understanding of Data Quality and Governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent.
  • Good understanding of BI solutions including Looker and LookML (Looker Modeling Language).
  • Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps) including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC – Terraform), configuration management, automated testing, performance tuning and cost management and optimization.
  • Good problem-solving skills: able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues.
  • Strong leadership skills, with a willingness to lead, generate ideas, and be assertive.
  • Strong project management and organizational skills.
  • Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analysts, data scientists, developers, and operations teams. The ability to convey complex technical concepts and insights to non-technical stakeholders effectively is essential.
  • Ability to document processes, procedures, and deployment configurations.
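
The pipeline and data-quality requirements above can be illustrated with a minimal ETL sketch. This is plain Python using the standard library's sqlite3 as a stand-in for a warehouse such as Redshift; the records, table, and field names are hypothetical and not taken from the posting:

```python
import sqlite3

# Simulated source records, e.g. pulled from an API (hypothetical fields).
raw_orders = [
    {"order_id": 1, "amount": "125.50", "country": "mx"},
    {"order_id": 2, "amount": "80.00", "country": "MX"},
    {"order_id": 3, "amount": None, "country": "us"},  # invalid: missing amount
]

def transform(records):
    """Cleanse and validate: drop rows with missing amounts, normalize fields."""
    clean = []
    for r in records:
        if r["amount"] is None:
            continue  # a real pipeline would route this row to a dead-letter store
        clean.append((r["order_id"], float(r["amount"]), r["country"].upper()))
    return clean

def load(rows, conn):
    """Load into the warehouse table (sqlite3 stands in for Redshift here)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_orders), conn)

# Data-quality check: no NULL amounts should survive the pipeline.
bad = conn.execute("SELECT COUNT(*) FROM orders WHERE amount IS NULL").fetchone()[0]
assert bad == 0
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # (2, 205.5)
```

A production pipeline would run steps like these as orchestrated tasks (e.g. in Airflow) and persist rejected rows for inspection rather than silently dropping them.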

Responsibilities:

  • Design, implement, deploy, test, and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance.
  • Ensure the scalability, reliability, quality, and performance of data systems.
  • Mentor and guide junior/mid-level data engineers.
  • Collaborate with Product, Engineering, Data Scientists, and Analysts to understand data requirements and develop data solutions, including reusable components.
  • Evaluate and implement new technologies and tools to improve data integration, data processing, and analysis.
  • Design architecture, observability, and testing strategies, and build reliable infrastructure and data pipelines.
  • Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning.
  • Swiftly address and resolve complex data engineering issues and incidents, and resolve bottlenecks in SQL queries and database operations.
  • Conduct discovery on existing data infrastructure and proposed architecture.
  • Evaluate and implement cutting-edge technologies and methodologies, and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems.
  • Evaluate, design, and implement data governance solutions: cataloging, lineage, quality, and data governance frameworks suitable for a modern analytics solution, following industry-standard best practices and patterns.
  • Define and document data engineering architectures, processes, and data flows.
  • Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive).
  • Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities.
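
The schema-design and performance-tuning responsibilities above can be sketched with a small, hypothetical example: adding an index on a filtered column and checking the query plan to confirm the full table scan is gone. sqlite3 again stands in for a production database, and the table and index names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = ?"

# Without an index, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[3]

# Schema tuning: index the column the query filters on.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[3]

assert "SCAN" in plan_before            # full table scan before the index
assert "idx_events_user" in plan_after  # the new index is used after tuning
```

The exact plan wording varies by SQLite version, so the check matches substrings rather than full plan strings; the same workflow (inspect plan, add index, re-inspect) applies to Postgres or Redshift with their own EXPLAIN output.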
Equal Opportunity Employer: all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, genetic information, disability, protected veteran status, or any other legally protected group status.

* The salary benchmark is based on the target salaries of market leaders in their relevant sectors. It is intended to serve as a guide to help Premium Members assess open positions and to help in salary negotiations. The salary benchmark is not provided directly by the company; actual salaries could be significantly higher or lower.

© JobLeads 2007 - 2025 | All rights reserved