
A leading IT solutions provider is seeking an Azure DevOps / Data Platform Engineer in Brazil. The role involves architecting and maintaining CI/CD pipelines for data lake components and implementing Infrastructure-as-Code with Terraform. Candidates should have experience optimizing workload performance and ensuring compliance with governance and data protection mandates. Strong skills in Azure DevOps, scripting, and monitoring stacks are essential. The position offers the opportunity to work in a fast-paced environment supporting US EST hours.
Azure DevOps pipelines, Git repos, artifact management
Terraform, IaC governance patterns
Azure Data Lake Storage Gen2, hierarchical namespace, ACL models
Azure Data Factory, Databricks, Synapse, Spark
Azure Functions, Key Vault, networking (VNets, private endpoints, firewalls)
Monitoring stacks: Log Analytics, Application Insights, Azure Monitor
Scripting: PowerShell, Python, Bash
Security controls: RBAC, managed identities, secrets management, encryption
CI/CD patterns, release strategy design, automated testing frameworks
Jira, Confluence, ServiceNow
Support: US EST (Mon-Fri 9:00am-5:00pm) (Best Fit)
Architect and maintain CI/CD pipelines for Data Lake components: Data Factory, Databricks, Functions, Synapse, Spark workloads, and storage configurations (a pipeline-trigger sketch appears after this list).
Implement Infrastructure-as-Code with Terraform for provisioning storage accounts, networking, compute, identity, and security layers.
Enforce branching discipline, artifact integrity, automated testing, and controlled release gates.
Automate environment provisioning, ACL management, key rotation, lifecycle policies, and cluster configuration (ACL and key-rotation sketches appear after this list).
Integrate DevOps processes with enterprise security: RBAC, managed identities, Key Vault, private networking, encryption controls.
Build observability: logging, metrics, alerting, and dashboards for pipelines and platform components (a log-query sketch appears after this list).
Maintain backup, restoration, and disaster-recovery patterns and test them regularly for reliability.
Eliminate configuration drift through standardized templates and environment baselines (a drift-detection sketch appears after this list).
Maintain and optimize agents, service connections, and deployment runtimes.
Perform incident response and root-cause analysis, and document systemic fixes.
Deliver reusable automation modules for data engineering teams.
Optimize workload performance and cost within the Data Lake environment.
Ensure compliance with governance, audit requirements, and data protection mandates.
Drive continuous reduction of manual operational work through automation.
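The sketches below illustrate, in Python, the kind of automation these responsibilities describe. They are minimal examples under stated assumptions, not definitive implementations, and every concrete name in them (organization, project, pipeline ID, accounts, containers, workspace, vault, secret names) is a hypothetical placeholder. First, triggering an Azure DevOps pipeline run through the Pipelines REST API (Runs endpoint), assuming a personal access token with build execute scope is available in the AZDO_PAT environment variable:

```python
# Minimal sketch: queue an Azure DevOps pipeline run via the REST API.
# ORG, PROJECT, and PIPELINE_ID are hypothetical placeholders.
import base64
import os

import requests

ORG = "my-org"             # hypothetical organization
PROJECT = "data-platform"  # hypothetical project
PIPELINE_ID = 42           # hypothetical pipeline ID

def trigger_pipeline(branch: str = "refs/heads/main") -> dict:
    """Queue a run of the pipeline against the given branch."""
    pat = os.environ["AZDO_PAT"]  # PAT with build (read & execute) scope
    token = base64.b64encode(f":{pat}".encode()).decode()
    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
        f"{PIPELINE_ID}/runs?api-version=7.1"
    )
    body = {"resources": {"repositories": {"self": {"refName": branch}}}}
    resp = requests.post(
        url,
        json=body,
        headers={"Authorization": f"Basic {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # run metadata: id, state, links

if __name__ == "__main__":
    run = trigger_pipeline()
    print(run["id"], run["state"])
```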
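Next, recursive ACL management on ADLS Gen2 with the azure-storage-file-datalake SDK; the account URL, container, path, and AAD group object ID are hypothetical:

```python
# Minimal sketch: apply a POSIX ACL entry across an ADLS Gen2 directory tree.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

ACCOUNT_URL = "https://mydatalake.dfs.core.windows.net"   # hypothetical account
GROUP_OBJECT_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical AAD group

def grant_group_read(container: str, path: str) -> None:
    """Recursively grant read+execute on a directory to an AAD group."""
    service = DataLakeServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
    directory = service.get_file_system_client(container).get_directory_client(path)
    # The "default:" entry makes new children inherit the same permission.
    acl = (
        f"group:{GROUP_OBJECT_ID}:r-x,"
        f"default:group:{GROUP_OBJECT_ID}:r-x"
    )
    result = directory.update_access_control_recursive(acl=acl)
    print(f"updated {result.counters.directories_successful} directories, "
          f"{result.counters.files_successful} files")

if __name__ == "__main__":
    grant_group_read("raw", "landing/sales")
```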
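A key-rotation sketch, assuming azure-mgmt-storage and azure-keyvault-secrets are installed and the subscription ID is set in the environment; resource group, account, vault URL, and secret name are hypothetical:

```python
# Minimal sketch: regenerate a storage account key and publish it to Key Vault.
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.mgmt.storage import StorageManagementClient

RG = "rg-datalake"                            # hypothetical resource group
ACCOUNT = "mydatalake"                        # hypothetical storage account
VAULT_URL = "https://my-kv.vault.azure.net"   # hypothetical vault

def rotate_storage_key() -> None:
    """Regenerate key1 on the storage account and store it as a secret."""
    cred = DefaultAzureCredential()
    storage = StorageManagementClient(cred, os.environ["AZURE_SUBSCRIPTION_ID"])
    keys = storage.storage_accounts.regenerate_key(RG, ACCOUNT, {"key_name": "key1"})
    new_key = next(k.value for k in keys.keys if k.key_name == "key1")
    # Consumers read the key from Key Vault, so rotation is transparent to them.
    SecretClient(VAULT_URL, cred).set_secret("datalake-key1", new_key)

if __name__ == "__main__":
    rotate_storage_key()
```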
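For observability, querying a Log Analytics workspace with azure-monitor-query; the workspace ID is a placeholder, and the ADFPipelineRun table only exists if Data Factory diagnostic logs are routed to the workspace in resource-specific mode:

```python
# Minimal sketch: report recent Data Factory pipeline failures from Log Analytics.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical workspace

QUERY = """
ADFPipelineRun
| where Status == 'Failed'
| summarize failures = count() by PipelineName
| order by failures desc
"""

def report_failures() -> None:
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))

if __name__ == "__main__":
    report_failures()
```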
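Finally, a drift-detection sketch that inspects Terraform's machine-readable plan output; it assumes terraform is on PATH and the working directory has already been initialized with `terraform init`:

```python
# Minimal sketch: flag configuration drift by inspecting a Terraform plan.
import json
import subprocess

def detect_drift(workdir: str) -> list[str]:
    """Return addresses of resources whose real state differs from code."""
    subprocess.run(
        ["terraform", "plan", "-out=tfplan", "-input=false"],
        cwd=workdir, check=True,
    )
    show = subprocess.run(
        ["terraform", "show", "-json", "tfplan"],
        cwd=workdir, check=True, capture_output=True, text=True,
    )
    plan = json.loads(show.stdout)
    # Anything other than a no-op or read action means state has diverged.
    return [
        change["address"]
        for change in plan.get("resource_changes", [])
        if change["change"]["actions"] not in (["no-op"], ["read"])
    ]

if __name__ == "__main__":
    drifted = detect_drift(".")
    print("drifted resources:", drifted or "none")
```

Wiring a script like this into a scheduled pipeline turns drift detection into a routine release gate rather than an ad-hoc audit.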