Data Services Developer

Driscoll Strawberry Associates, Inc.

Guadalajara

On-site

MXN 400,000 - 600,000

Full-time

Posted 15 days ago

Job Summary

A leading agriculture company in Guadalajara is seeking a Data Services Developer to turn raw data into actionable insights. The position involves building data pipelines using SQL and AWS services and collaborating with teams across IT and the business. Applicants should have a Bachelor's degree in Information Technology and experience with data engineering tools and languages such as Python and SQL.

Qualifications

  • 2+ years of experience in implementing data engineering pipelines using Python and Spark.
  • Strong hands-on experience with AWS data engineering tools.
  • Proficient in writing advanced SQL and Python scripts for data processing.

Responsibilities

  • Create and manage data pipelines and analytics solutions using SQL and AWS services.
  • Document data pipeline requirements into functional specifications.
  • Design, develop, and maintain KPIs and dashboards to drive business decisions.

Skills

Data engineering pipelines
AWS services
SQL
Python
Data transformation

Education

Bachelor's degree in Information Technology or Business Analytics

Tools

AWS Glue
Amazon Redshift
Amazon Athena
AWS Lambda

Job Description

About the Opportunity
Driscoll’s is seeking a Data Services Developer with experience turning raw data into business insights and building the data pipelines and data marts that serve Tableau dashboards impacting key decision-making in the organization. The Data Services Developer will analyze, interpret, and organize enormous amounts of data from various source systems, including Oracle ERP and other boundary applications. In this position, the Data Engineer will demonstrate an in-depth understanding of AWS services such as Glue, S3, Redshift, Athena, Lambda, and Step Functions, and will apply industry standards and trends to stabilize the data pipeline workstream. They will work with Sales, Finance, Supply Chain, and other functions, along with IT teams, to build the data pipelines.
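For illustration only, here is a minimal sketch of the kind of Glue PySpark job this role describes: read raw extracts from S3, clean them with Spark, and load the result into Redshift. The bucket paths, column names, target table, and connection name are hypothetical placeholders, not details from this posting.

  import sys
  from awsglue.context import GlueContext
  from awsglue.dynamicframe import DynamicFrame
  from awsglue.job import Job
  from awsglue.utils import getResolvedOptions
  from pyspark.context import SparkContext

  args = getResolvedOptions(sys.argv, ["JOB_NAME"])
  glue_context = GlueContext(SparkContext.getOrCreate())
  job = Job(glue_context)
  job.init(args["JOB_NAME"], args)

  # Read raw CSV extracts from S3 (hypothetical bucket and path).
  raw = glue_context.create_dynamic_frame.from_options(
      connection_type="s3",
      connection_options={"paths": ["s3://example-raw-bucket/sales/"]},
      format="csv",
      format_options={"withHeader": True},
  )

  # Basic cleanup with Spark: drop duplicates, normalize a column name.
  df = raw.toDF().dropDuplicates().withColumnRenamed("ORDER_DT", "order_date")
  cleaned = DynamicFrame.fromDF(df, glue_context, "cleaned")

  # Load into Redshift via a preconfigured Glue connection (hypothetical name).
  glue_context.write_dynamic_frame.from_jdbc_conf(
      frame=cleaned,
      catalog_connection="redshift-dw",
      connection_options={"dbtable": "analytics.sales_fact", "database": "dw"},
      redshift_tmp_dir="s3://example-temp-bucket/redshift/",
  )
  job.commit()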
Responsibilities
  • Collaborate with functional leaders, business stakeholders, IT, and project teams to create and manage data pipelines and analytics solutions using SQL, Python, and AWS services.
  • Document detailed data pipeline requirements as functional specifications for all data pipelines, to be used during the progressive build cycles of the implementation.
  • Work closely with IT team leads, senior business leaders, and users to determine data pipeline requirements.
  • Effectively translate complex business requirements into technical requirements and assist in drafting functional design documents.
  • Manage technical delivery resources supporting data pipeline development, testing, and deployment activities in an onshore/offshore model.
  • Work with the Security and Compliance team to define and build the structure of roles/privileges and to control data pipeline access.
  • Design, develop, and maintain ongoing KPIs, metrics, data pipelines, analyses, dashboards, etc., to drive key business decisions (see the query sketch after this list).
  • Monitor, respond to, and resolve tickets and issues submitted by users, including root-cause analysis (RCA) on critical tickets/incidents.
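As a concrete example of the KPI work named above, here is a minimal sketch of launching a monthly-revenue query against Athena with boto3. The database, table, columns, and S3 output location are hypothetical examples, not specifics from this posting.

  import boto3

  athena = boto3.client("athena")

  # Hypothetical monthly-revenue KPI over a sales fact table.
  KPI_SQL = """
      SELECT region,
             date_trunc('month', order_date) AS month,
             sum(net_amount) AS monthly_revenue
      FROM analytics.sales_fact
      GROUP BY 1, 2
      ORDER BY 2 DESC, 3 DESC
  """

  response = athena.start_query_execution(
      QueryString=KPI_SQL,
      QueryExecutionContext={"Database": "analytics"},
      ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
  )
  print("Started query:", response["QueryExecutionId"])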
Candidate Profile
  • Bachelor's degree in Information Technology, Business Analytics, or a related field.
  • 2+ years of experience implementing data engineering pipelines using Python and Spark.
  • Strong hands-on experience with AWS data engineering tools such as Glue, Redshift, Athena, and Lambda.
  • Proficient in writing advanced SQL and Python scripts for data transformation, extraction, and processing.
  • Skilled in performance tuning, data quality validation, and pipeline troubleshooting across distributed systems.
  • Must be self-motivated and able to work independently in a fast-paced, agile team environment.
  • Excellent verbal and written communication skills, strong attention to detail, and the ability to manage multiple priorities and meet deadlines.
  • Solid understanding of data modeling, data flow, and architecture best practices in AWS.
  • Experience working with financial or supply chain datasets and transforming data for analytics and reporting.
  • Familiarity with AWS-based data governance, lineage tracking, and access control mechanisms.
  • Strong coding fundamentals and experience developing modular, reusable, and scalable pipelines.
