Responsibilities
- Design and build scalable ETL/ELT pipelines using Azure Data Factory and SQL Server Integration Services (SSIS).
- Develop and optimize workflows for extracting, transforming, and loading data from diverse on‑premises and cloud sources.
- Implement data integration solutions that ensure performance, reliability, and cost‑efficiency across Azure environments.
- Design and develop SQL Server databases, including schema design, stored procedures, functions, and complex queries.
- Implement data quality checks and validation rules, and troubleshoot data ingestion issues and pipeline failures.
- Deploy data solutions leveraging Azure services such as Azure SQL Database, Azure Data Lake, and Azure Synapse Analytics.
- Develop and maintain Power BI dashboards, semantic models, and analytics layers for business insights.
- Implement data security, access control, and compliance practices within Azure environments.
- Document data architectures, pipelines, and operational procedures to ensure long‑term maintainability.
- Collaborate with analysts, developers, and business stakeholders to support reporting, analytics, and integration needs.
Required technical and professional expertise
- Minimum 3 years of experience in data engineering, data integration, or database development.
- Strong proficiency with Azure Data Factory for building and orchestrating data pipelines.
- Solid experience developing and maintaining SSIS packages for ETL processes.
- Advanced SQL skills, including stored procedures, functions, performance tuning, indexing, and partitioning.
- Experience designing and developing SQL Server or Azure SQL Database solutions.
- Hands‑on experience with Power BI for dashboards, data models, and report development.
- Working knowledge of Azure cloud services, including Azure Data Lake, Azure SQL, Synapse, and cloud security fundamentals.
- Understanding of data warehousing, dimensional modeling, and ETL best practices.
- Familiarity with structured and semi‑structured data formats (JSON, XML, CSV, Parquet).
- Strong analytical skills, attention to detail, and commitment to delivering high‑quality data solutions.
- Ability to work independently, manage priorities, and communicate effectively with both technical and non‑technical stakeholders.
Preferred technical and professional experience
- Microsoft certifications such as Azure Data Engineer Associate or Azure Fundamentals.
- Experience with Azure Synapse Analytics, Azure Databricks, or large‑scale data lake architectures.
- Knowledge of Python or PowerShell for automation and data processing scripts.
- Familiarity with version control tools such as Git or Azure DevOps.
- Experience with Agile/Scrum environments.
- Understanding of data governance, data security frameworks, and compliance requirements.
- Exposure to real‑time or streaming solutions on Azure.
- Knowledge of DAX for advanced Power BI modeling and calculations.
Other relevant job details
IBM wants you to bring your whole self to work and for you this might mean the ability to work flexibly. If you are interested in a flexible working pattern, please talk to our recruitment team to find out if this is possible in the current working environment.
Job ID: 81172
City / Township / Village: Bari, Napoli, Milano
Country: Italy
Work arrangement: Hybrid
Area of work: Data & Analytics
Employment type: Regular
Position type: Professional
Some travel may be required based on business demand
Shift: General (daytime)
IBM is proud to be an equal‑opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, neurodivergence, age, or other characteristics protected by the applicable law. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.