A technology company in Brazil is looking for a Data Engineer to manage and optimize data pipelines utilizing tools such as Apache Spark and Azure Data Factory. The successful candidate will have strong skills in SQL, ETL processes, and Python, along with experience in data governance and scalable architecture. The position offers remote work options and requires early morning shifts.
We are seeking a Data Engineer to optimize and manage data pipelines, ensuring efficient processing of large volumes of data using tools like Apache Spark, Databricks, and Azure Data Factory. The role involves data ingestion, quality assurance, security, and collaboration with cross-disciplinary teams to implement and maintain scalable data solutions. Responsibilities also include documentation, support, and continuous improvement of data infrastructure.
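To illustrate the kind of pipeline work described above, here is a minimal sketch of an ingestion-and-quality step in PySpark (assuming a Spark/Databricks environment; the paths, column names, and app name are hypothetical, not taken from this posting):

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical example: ingest raw order events, apply a basic quality check,
    # and publish a curated, partitioned dataset for downstream consumers.
    spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

    raw = spark.read.json("/mnt/raw/orders/")            # raw landing zone (hypothetical path)
    curated = (
        raw.dropDuplicates(["order_id"])                  # de-duplicate on the business key
           .filter(F.col("order_total").isNotNull())      # simple quality gate
           .withColumn("ingested_at", F.current_timestamp())
    )
    curated.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders/")

In practice, a job like this would typically be scheduled by an orchestrator such as Azure Data Factory and monitored as part of the pipeline's quality assurance.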
Languages and Tools:
Preferred experience in cloud environments using Azure Data Factory, Fivetran, Apache Spark, and Databricks.
Data Orchestration:
Containerization and Orchestration:
Data Governance and Architecture:
System Monitoring:
Note: If you do not meet all requirements, we encourage continuous learning and development at Compass UOL.
Early morning work shift, on-call schedule, and remote work options.