Talentvis is looking for a Senior Data Engineer experienced in Microsoft Fabric for a 3-month contract position with our esteemed global IT consulting client.
YOUR ROLE:
As a Senior Data Engineer, you'll be responsible for acquiring, storing, governing, and processing large volumes of structured and unstructured data. You'll bring strategic insight into big data technologies and help define enterprise-scale data foundations such as data lakes. A strong command of Microsoft Fabric is mandatory, as you'll be leveraging it extensively to build scalable and resilient data infrastructure. You'll work closely with cross-functional teams, including Data Intelligence, Research, UX, Digital Tech, and Agile, to deliver intelligent, high-impact solutions for our clients.
KEY RESPONSIBILITIES:
- Design, develop, and maintain robust, scalable data pipelines to ingest, transform, and process structured and unstructured data from various sources.
- Build and manage data lakes and warehouses with optimized storage and retrieval mechanisms. Create data models tailored to business and analytics needs.
- Apply hands-on Microsoft Fabric expertise (mandatory) to design and deploy efficient, scalable, and secure data architectures in the cloud.
- Leverage Azure and other cloud platforms to build high-performing, cost-effective, and reliable data systems.
- Establish data governance frameworks, validation processes, and quality checks to ensure data integrity, accuracy, and compliance.
- Monitor and continuously improve the performance and efficiency of data infrastructure and pipelines.
- Collaborate with data scientists, analysts, and other stakeholders to deliver data solutions aligned with business goals.
- Build the data presentation layer and create visualizations using Power BI, Tableau, or similar tools.
- Stay updated with the latest trends and tools in big data and data engineering to evaluate and adopt innovative technologies.
- Support ongoing development and optimization of the organization’s data infrastructure.
ABOUT YOU:
- Bachelor’s, Master’s, or Ph.D. in Computer Science, Information Management, or a related discipline, with 6+ years of relevant experience.
- Deep understanding of the big data ecosystem and distributed computing principles.
- Proven hands-on experience with Microsoft Fabric is mandatory.
- Experience with big data tools such as Hadoop and Spark, and with distributions like Cloudera, Hortonworks, or MapR.
- Skilled in creating ETL and batch processing pipelines across multiple data sources.
- Proficient in NoSQL databases such as MongoDB, Cassandra, Neo4j, or Elasticsearch.
- Familiar with query engines like Hive, Spark SQL, or Impala.
- Strong experience with Power BI for dashboards and visualizations.
- Experienced in, or open to learning, real-time streaming platforms such as Kafka, Amazon Kinesis, Flume, or Spark Streaming.
- Interested in or experienced with DevOps/DataOps practices (e.g., Infrastructure as Code, pipeline automation).
- Understands data science workflows and model development at a conceptual level.
- Technologically curious and self-motivated, with a passion for continuous learning.