
A leading technology consulting firm in São Paulo is looking for a skilled Data Engineer to design and implement solutions for big data processing. The ideal candidate will have strong experience with Azure services, particularly Azure Data Factory and Synapse, to support business operations for a major US retail client. This position requires hands-on expertise in building ETL/ELT pipelines and data modeling for efficient data management. Knowledge of Power BI and SQL is also essential.
Project Description: The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
Solutions are delivered by several Product Teams focused on different domains: Customer, Loyalty, Search and Browse, Data Integration, and Cart.
Current overriding priorities are onboarding new brands, re-architecture, database migrations, and the migration of microservices to a unified cloud-native solution without any disruption to the business.
We are looking for a Data Engineer who will be responsible for designing a solution for a large retail company. The main focus is supporting the processing of large data volumes and integrating the solution into the current architecture.
Strong, recent hands-on expertise with Azure Data Factory and Synapse is a must (3+ years).
Strong expertise in designing and implementing data models, including conceptual, logical, and physical data models, to support efficient data storage and retrieval.
Hands-on experience with Power BI, including data modeling, report and dashboard development, and building interactive, business-ready visualizations based on enterprise data sources.
Strong knowledge of Microsoft Azure, including Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, and PySpark, for building scalable and reliable data solutions.
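For illustration, a minimal PySpark sketch of the kind of Azure data transformation this role involves; the ADLS paths and column names are hypothetical, not part of the project:

```python
# Minimal PySpark sketch: read raw CSVs from ADLS, clean, write curated Parquet.
# The abfss:// paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("retail-etl").getOrCreate()

# Read raw order data from Azure Data Lake Storage.
orders = spark.read.csv(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/*.csv",
    header=True,
    inferSchema=True,
)

# Standardize types, deduplicate, and derive a load date for partitioning.
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("load_date", F.current_date())
    .dropDuplicates(["order_id"])
)

# Write to the curated zone as partitioned Parquet.
cleaned.write.mode("overwrite").partitionBy("load_date").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```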
Extensive experience with building robust and scalable ETL/ELT pipelines to extract, transform, and load data from various sources into data lakes or data warehouses.
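For illustration, a compact extract-transform-load sketch in plain Python (stdlib only), with SQLite standing in for a warehouse; file, table, and column names are assumptions:

```python
# Self-contained ETL sketch: CSV in, cleaned rows out, loaded into SQLite.
# "orders.csv" and the fact_orders schema are illustrative assumptions.
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    for row in rows:
        # Normalize types and filter out cancelled orders.
        if row["status"].lower() != "cancelled":
            yield (row["order_id"], row["customer_id"], float(row["amount"]))

def load(records, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_orders "
        "(order_id TEXT PRIMARY KEY, customer_id TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)", records
    )
    conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(transform(extract("orders.csv")), conn)
```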
Ability to integrate data from disparate sources, including databases, APIs, and external data providers, using appropriate techniques such as API integration or message queuing.
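For illustration, a sketch of pulling records from a paginated REST API with the requests library; the endpoint, parameters, and response shape are all assumptions:

```python
# Paginated API pull: request pages until an empty batch comes back.
# The URL and page/per_page parameters are hypothetical.
import requests

def fetch_all(base_url, page_size=100):
    page = 1
    while True:
        resp = requests.get(
            base_url,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        yield from batch
        page += 1

# Example usage against a hypothetical provider endpoint:
# for record in fetch_all("https://api.example.com/v1/customers"):
#     process(record)
```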
Proficiency in designing and implementing data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault).
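For illustration, a minimal star-schema sketch: one fact table with foreign keys into two dimensions, created in SQLite for brevity. Table and column names are hypothetical; a real design would target Synapse or a Lakehouse:

```python
# Star schema in miniature: fact_sales references dim_customer and dim_date.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT UNIQUE,
    segment      TEXT
);
CREATE TABLE IF NOT EXISTS dim_date (
    date_key  INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240131
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    quantity     INTEGER,
    amount       REAL
);
"""

with sqlite3.connect("warehouse.db") as conn:
    conn.executescript(DDL)
```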
Proficiency in SQL to perform complex queries, data transformations, and performance tuning on cloud-based data stores.
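For illustration, a complex-query sketch using a window function to rank each customer's orders by amount, run against the hypothetical fact table above (window functions need SQLite 3.25+):

```python
# Window-function query: rank sales per customer, newest date range only.
import sqlite3

QUERY = """
SELECT customer_key,
       sale_id,
       amount,
       RANK() OVER (PARTITION BY customer_key ORDER BY amount DESC) AS rnk
FROM fact_sales
WHERE date_key >= 20240101
"""

with sqlite3.connect("warehouse.db") as conn:
    for row in conn.execute(QUERY):
        print(row)
```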
Experience integrating metadata and governance processes into cloud-based data platforms
Certification in Azure, Databricks, or other relevant technologies is an added advantage
Experience with Azure MI, Azure Database for PostgreSQL, Azure Cosmos DB, Azure Analysis Services, and Informix.
Experience with Python and Python-based ETL tools.
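For illustration, a small sketch with pandas, a common building block of Python-based ETL; the file names are assumptions, and to_parquet requires pyarrow or fastparquet to be installed:

```python
# pandas mini-pipeline: read, clean, and write columnar output.
import pandas as pd

df = pd.read_csv("orders.csv", parse_dates=["order_ts"])
df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")
df["amount"] = df["amount"].astype(float)
df.to_parquet("orders_clean.parquet", index=False)
```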
Experience with shell scripting in Bash, Unix, or Windows shells is preferred.
English: B2 Upper Intermediate