Azure Data Specialist at Bimeda
The Azure Data Specialist is responsible for developing and managing our cloud data infrastructure in Microsoft Azure, centered on Azure Data Factory. This includes designing and optimizing data pipelines, integrating information from diverse sources, and ensuring data quality, security, and availability for analysis and reporting. The role also ensures the efficiency, security, and scalability of corporate data solutions, supporting BI, data science, and data modernization initiatives.
Key responsibilities include:
- Design and implement scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and related Azure services.
- Create and maintain data models, ETL/ELT processes, and integrations between different data sources.
- Ensure data governance, security, and compliance in accordance with corporate and regulatory policies.
- Collaborate with engineering, BI, and data science teams, and business stakeholders to translate requirements into technical solutions.
- Monitor, diagnose, and improve the performance of data environments and pipelines in Azure.
- Automate data ingestion, transformation, and delivery processes.
- Support the implementation of data lakes, data warehouses, and modern data architecture.
- Stay up to date with best practices and innovations in the Azure data ecosystem and services.
- Data Integration: Consolidate structured and unstructured data into analysis-ready formats.
- Pipeline Development: Create and optimize ingestion, transformation, and movement pipelines, focusing on incremental loads.
- Data Platform Management: Implement, monitor, and adjust solutions using Azure Data Lake Storage Gen2, Azure Synapse Analytics, and Azure Databricks.
Knowledge, Skills, and Abilities Required for the Role
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- Hands-on experience with Azure data tools such as Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage Gen2.
- Proficiency in data manipulation and transformation languages: SQL, Python, PySpark, or Scala.
- Knowledge of data modeling (dimensional, relational, and domain-oriented).
- Experience with code versioning (e.g., Git) and data DevOps practices.
- Experience with Power BI (data modeling, integration, performance optimization).
- Knowledge of Scala, especially for advanced Spark/Databricks workloads.
- Background in data governance and data quality initiatives.
Behavioral Skills
- Analytical and solution-oriented thinking
- Proactivity and autonomy
- Good interpersonal communication
- Ability to work in a multidisciplinary team
Seniority level: Entry level
Employment type: Full-time
Job function: Information Technology