At Principal33 we strive to make happiness at work a reality. It's not just about the money; it's also about the work environment and appreciation. It's about creating the best team setup you can imagine and getting involved in the things you're passionate about. And you can be a part of it, because it's fun to get things done!
We want our employees to innovate, and we give them the freedom to do what they are truly passionate about. Based on this conviction, Principal33 aligns its strategy with its vision: to become a leading IT service company and to promote a better work-life balance. With over 200 employees from different countries, we are actively shaping the future of work.
About the Job
We are seeking an experienced Data Engineer with Azure DevOps expertise in a Microsoft Fabric environment. The successful candidate will be self-motivated and capable of working independently.
Responsibilities
- Collaborate with cross-functional teams to support our data reporting platform and troubleshoot issues.
- Diagnose and resolve application issues.
- Conduct thorough testing, debugging, and documentation of any changes.
- Collaborate with development teams to escalate and prioritize critical issues.
- Assist in the deployment and configuration of application updates.
- Provide technical support and training to other team members when necessary.
- Maintain accurate records of support requests and resolutions.
Requirements
Skill set:
- ETL / Data Warehouse
- Big data
- Source code management
- Design and architecture
- Requirements Analysis
- Developing and maintaining DevOps pipelines
- Hands-on experience with Jira and Confluence
- Release management in Jira
- Data Governance
Tools and Technologies:
- ETL Architecture – Delta, Lambda, Data Mesh, Hub & Spoke
- Data Modelling – Relational (3NF, Data Vault), Dimensional (Star / Snowflake / Galaxy schema)
- Programming – Python, Java
- Cloud – Microsoft Azure and basics of GCP
- Unit / integration / lint testing – pytest with coverage.py; flake8 / PEP 8 for linting
Technical:
- Perform requirements analysis and take part in design discussions to derive an optimal design for the data warehouse.
- Create / review data mapping documents
- Develop the codebase using Python and Spark, or another recommended ETL / big data tool such as Databricks, to meet the design and business requirements and deliver the software for the data warehouse.
- Write complex SQL queries to debug and troubleshoot, and use them in code where applicable to analyze and process data.
- Develop lint tests to ensure the code meets development standards.
- Develop PowerShell utilities where required to automate tasks in Microsoft Azure.
- Develop DevOps pipelines with Azure YAML pipelines and Jenkins to implement CI/CD.
- Act as a data architect, providing solutions for complex data problems.
- Design and plan the optimal way to implement a modern data platform using the Medallion architecture.
- Prepare the data architecture blueprint.
- Support QA and end-to-end testing, fixing bugs and ensuring the code released to higher tiers has as few bugs as possible.
- Plan and execute releases through Octopus Deploy.
- Support deployments and releases on UAT and production, resolving any technical issues or difficulties during deployment.
What we offer:
Way of working: mostly remote, plus one trip to Dublin per quarter.
Please only apply if you have a valid European work permit.
- Private medical insurance (applicable for candidates in Spain)
- Flexible compensation
- Day off on your birthday
- Annual salary review based on performance
- Gifts for special occasions
- International and multicultural environment
- A free week of accommodation each year at our corporate apartment near Valencia, Spain (subject to availability)
Events
Summer party!
Self-Development
Continuous training: we will help you improve your technical skills, evolve in the tech community, and develop as a professional.
We are an active part of the tech community. You may have the opportunity to attend and participate in local and international tech events.