A leading software solutions provider in Auckland is looking for an experienced Data Engineer to design and manage scalable data lakes using MS Fabric. The role involves developing and maintaining data pipelines, ensuring data quality, and working closely with cross-functional teams. The ideal candidate will have strong programming skills in Python and SQL, along with hands-on experience in cloud platforms like Azure. Join this dynamic team and leverage your expertise to drive exceptional outcomes.
Join the dynamic team at Provoke, where we're not just about meeting expectations but exceeding them. We're looking for innovative professionals who are passionate about driving exceptional outcomes for our customers. At Provoke, you'll be empowered to challenge the status quo and encouraged to think differently, leveraging a growth mindset to deliver tangible results. Here, professional growth isn't just a concept; it's a reality, fueled by continuous learning and a supportive, forward-thinking environment. If you're ready to make a significant impact and grow alongside a team of like-minded individuals, Provoke is your destination.
We build bespoke software using modern technologies and are on a mission to help our clients flourish with smart solutions that meet their business needs.
We are committed to building high-performing teams and offer tangible rewards to ensure that effort is recognized. We are dedicated to providing an enriching environment and learning opportunities for our employees to grow and fast-track their careers. We are focused on diversity in race, gender, orientation, and experience.
As a Data Engineer, you will be responsible for designing, developing, and managing scalable data pipelines that feed into our data lake using MS Fabric. You will work closely with our Lead Data Engineer, analysts, and other stakeholders to ensure that the data infrastructure supports both current and future needs. Your role will involve integrating various data sources, ensuring data quality, and building a robust, scalable data lake architecture.