
A technology company is seeking an Azure DevOps / Data Platform Engineer to architect and maintain CI/CD pipelines, implement Infrastructure-as-Code with Terraform, and ensure compliance with security protocols. The ideal candidate has strong skills in Azure DevOps and Terraform and can manage Data Lake components efficiently. This role requires exceptional problem-solving abilities and a focus on automation to improve operational efficiency. Remote working is available in the Brazil region.
Azure DevOps pipelines, Git repos, artifact management
Terraform, IaC governance patterns
Azure Data Lake Storage Gen2, hierarchical namespace, ACL models
Azure Data Factory, Databricks, Synapse, Spark
Azure Functions, Key Vault, networking (VNets, private endpoints, firewalls)
Monitoring stacks: Log Analytics, Application Insights, Azure Monitor
Scripting: PowerShell, Python, Bash
Security controls: RBAC, managed identities, secrets management, encryption
CI/CD patterns, release strategy design, automated testing frameworks
Jira, Confluence, ServiceNow
Support: US EST (Mon-Fri 9:00am-5:00pm) (Best Fit)
Architect and maintain CI/CD pipelines for Data Lake components: Data Factory, Databricks, Functions, Synapse, Spark workloads, storage configurations.
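For context, a minimal Azure DevOps pipeline for this kind of deployment might look like the sketch below. The service connection name (data-platform-svc), environment names, and stage layout are illustrative assumptions, not details from the posting.

```yaml
# azure-pipelines.yml -- illustrative sketch only; service connection and
# environment names are assumptions, not from the posting.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Validate
    jobs:
      - job: Lint
        steps:
          - script: terraform fmt -check -recursive
            displayName: Check Terraform formatting

  - stage: DeployDev
    dependsOn: Validate
    jobs:
      - deployment: Dev
        environment: dev   # approval-based release gates attach to this environment
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  inputs:
                    azureSubscription: data-platform-svc   # hypothetical service connection
                    scriptType: bash
                    scriptLocation: inlineScript
                    inlineScript: |
                      terraform init
                      terraform apply -auto-approve
```

Environments in Azure DevOps are one common place to hang the "controlled release gates" the posting mentions, since approvals and checks can be configured on them outside the YAML.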
Implement Infrastructure-as-Code with Terraform for provisioning storage accounts, networking, compute, identity, and security layers.
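A hedged sketch of what Terraform provisioning for the storage layer could look like; the resource group, names, and region are hypothetical placeholders, not from the posting.

```hcl
# Illustrative Terraform sketch -- names, region, and replication choice
# are assumptions, not from the posting.
resource "azurerm_resource_group" "data" {
  name     = "rg-datalake-dev"
  location = "brazilsouth"
}

resource "azurerm_storage_account" "lake" {
  name                     = "stdatalakedev"
  resource_group_name      = azurerm_resource_group.data.name
  location                 = azurerm_resource_group.data.location
  account_tier             = "Standard"
  account_replication_type = "ZRS"
  is_hns_enabled           = true # hierarchical namespace turns this into ADLS Gen2
}
```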
Enforce branching discipline, artifact integrity, automated testing, and controlled release gates.
Automate environment provisioning, ACL management, key rotation, lifecycle policies, and cluster configuration.
Integrate DevOps processes with enterprise security: RBAC, managed identities, Key Vault, private networking, encryption controls.
Build observability: logging, metrics, alerting, dashboards for pipelines and platform components.
Maintain backup, restoration, disaster-recovery patterns and test them for reliability.
Eliminate configuration drift through standardized templates and environment baselines.
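One way to surface configuration drift is to diff deployed settings against a standardized baseline. The helper below is a minimal, hypothetical sketch of that idea, not tooling named in the posting; the setting names in the example are invented.

```python
def detect_drift(baseline: dict, actual: dict) -> dict:
    """Return settings whose deployed value differs from the baseline.

    Keys present in the baseline but missing from the live configuration
    are reported with an actual value of None.
    """
    drift = {}
    for key, expected in baseline.items():
        current = actual.get(key)
        if current != expected:
            drift[key] = {"expected": expected, "actual": current}
    return drift

# Example: a storage baseline vs. what is actually deployed (invented values).
baseline = {"tls_version": "TLS1_2", "public_access": False, "hns_enabled": True}
actual = {"tls_version": "TLS1_2", "public_access": True}

print(detect_drift(baseline, actual))
# -> {'public_access': {'expected': False, 'actual': True},
#     'hns_enabled': {'expected': True, 'actual': None}}
```

In practice the "actual" side would come from an API or `terraform plan` output, but the comparison logic stays the same.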
Maintain and optimize agents, service connections, and deployment runtimes.
Perform incident response and root-cause analysis, document systemic fixes.
Deliver reusable automation modules for data engineering teams.
Optimize workload performance and cost within the Data Lake environment.
Ensure compliance with governance, audit requirements, and data protection mandates.
Drive continuous reduction of manual operational work through automation.