Data Engineer (Azure)

Flintex Consulting Pte Ltd

Singapore

On-site

SGD 60,000 - 100,000

Full time

30+ days ago

Job summary

An innovative firm is seeking a skilled Data Engineer proficient in Azure and Python to design and develop robust data solutions. This role involves creating PySpark scripts, managing data pipelines, and developing insightful reports in Power BI. You will integrate diverse data sources and ensure effective data governance. The position is ideal for a proactive individual who thrives in a collaborative environment and is eager to tackle complex data challenges. Join a forward-thinking team where your contributions will directly impact data-driven decision-making and organizational success.

Benefits

13th Month Salary

Qualifications

  • 3-5 years of experience in Azure data engineering and Python.
  • Experience in Visualization design with Power BI and SQL.

Responsibilities

  • Design and develop PySpark scripts and data pipelines.
  • Create reports and dashboards in Power BI with access control.

Skills

Azure Data Engineering
Python
PySpark
Data Warehouse
Power BI
SQL
ETL Processing
Data Visualization
Problem Solving
Communication Skills

Education

Bachelor’s Degree in Computer Science or Engineering

Tools

Azure Synapse
Azure DevOps
Power BI
AWS
Hadoop
HANA

Job description

Benefits: 13th Month Salary

Data Engineer (Azure) – Synapse, PySpark, Python, Data Warehouse, Power BI, Azure DevOps

Skills & Experience

  • Bachelor’s Degree in Computer Science or Engineering with 3-5 years of experience in Azure data engineering, Python, PySpark, or big data development.

  • Sound knowledge of Azure Synapse Analytics for pipelines, orchestration, and setup.

  • 1-2 years of experience in visualization design and development with Power BI, including knowledge of row-level security and access control.

  • Sound experience with SQL, data warehouses, data marts, and data ingestion using PySpark and Python.

  • Expertise in developing and maintaining ETL pipelines on cloud platforms such as AWS and Azure (Azure Synapse or Data Factory preferred).

  • Team player with good interpersonal, communication, and problem-solving skills.

Job Scope

  • Design, review, and development of PySpark scripts; testing and troubleshooting of data pipelines and orchestration.

  • Designing and developing reports and dashboards in Power BI, including setting up access control with row-level security and writing DAX queries.

  • Establishing connections to source data systems, including internal systems (e.g., SAP, Historians, Data Lake) as well as external systems (e.g., Web APIs).

  • Managing the collected data in appropriate storage/database solutions (e.g., file systems, SQL servers, big data platforms such as Hadoop or HANA) as required by the specific project.

  • Design and development of data marts and the relevant data pipelines using PySpark, including data copy activities for batch ingestion.

  • Deployment of pipeline artifacts from one environment to another using Azure DevOps.

  • Performing data integration using database table joins or other mechanisms, as required by the project's analysis needs (a brief PySpark sketch of this kind of pipeline work follows this list).
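
Below is a minimal, illustrative PySpark sketch of the kind of work described in this scope: batch ingestion of source extracts, integration via a table join, and writing a simple data mart table. The storage paths, table names, and column names are hypothetical placeholders, not details taken from this posting.

```python
# Minimal sketch (illustrative only): ingest two raw extracts, join them,
# aggregate, and persist a simple data mart table for downstream reporting.
# All paths and column names below are assumed placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_mart_batch").getOrCreate()

# Batch ingestion: read raw extracts landed in the data lake (paths are assumptions).
orders = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sap/orders/")
customers = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sap/customers/")

# Data integration via a table join.
orders_enriched = orders.join(customers, on="customer_id", how="left")

# A simple aggregate forming the data mart layer.
daily_sales = (
    orders_enriched
    .groupBy("order_date", "customer_region")
    .agg(
        F.sum("net_amount").alias("total_net_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Persist the curated table for Power BI reporting (output path is an assumption).
(daily_sales.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/marts/daily_sales/"))
```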

Good to Have

  • Experience building a data catalog with Purview, enabling effective metadata management, lineage tracking, and data discovery.

  • Candidates should demonstrate the ability to leverage Purview to ensure data governance, compliance, and efficient data exploration within Azure environments.

Others

  • Able to work independently on assignments according to agreed schedules, with minimal supervision.

  • Own assignments and take initiative to resolve issues hindering completion. Proactively reach out for help/guidance whenever required.
