Data Engineer

FLINTEX CONSULTING PTE. LTD.

Singapore

On-site

SGD 70,000 - 90,000

Full time

Job summary

A tech consulting firm in Singapore is seeking a Data Engineer (Azure) to design and develop data pipelines and analytics solutions using PySpark and Azure. The role involves creating reports in Power BI, managing data across various storage solutions, and deploying pipeline artifacts with Azure DevOps. Candidates should hold a Bachelor's degree in Computer Science or Engineering and bring strong Azure analytics and Python skills, along with collaborative problem-solving abilities. Working hours are 8:30am to 6pm, Monday to Friday, with no hybrid option.

Qualifications

  • Experience in Azure data engineering, Python, PySpark, or Big Data development.
  • Knowledge of Azure Synapse Analytics for pipelines and orchestration.
  • Experience in SQL, data warehouses, data marts, and data ingestion.

Responsibilities

  • Design and develop PySpark scripts and data pipelines.
  • Create reports and dashboards in Power BI.
  • Manage data in storage solutions such as SQL servers and Big Data platforms.
  • Deploy pipeline artifacts using Azure DevOps.

Skills

Azure Data Engineering
Python
PySpark
SQL
Power BI
DevOps

Education

Bachelor’s Degree in Computer Science or Engineering

Tools

Azure Synapse
Azure Data Factory
Hadoop
HANA

Job description

Data Engineer (Azure) – Synapse, PySpark, Python, Data Warehouse, Azure Data Explorer, Azure DevOps

Job Scope
  • Design, review, and develop PySpark scripts; test and troubleshoot data pipelines and orchestration.
  • Design and develop reports and dashboards in Power BI, including access control with row-level security and DAX queries.
  • Establish connections to source data systems such as on-premises databases, IoT devices, and APIs.
  • Manage the collected data in appropriate storage/database solutions (e.g. file systems, SQL servers, or Big Data platforms such as Hadoop and HANA) as required by the specific project.
  • Design and develop data pipelines using PySpark and Copy Data activities for batch ingestion.
  • Perform data integration (e.g. via database table joins or other mechanisms) at the level required by the project's analysis requirements.
  • Deploy pipeline artifacts from one environment to another using Azure DevOps.
Skills & Experience
  • Bachelor’s Degree in Computer Science or Engineering, with experience in Azure data engineering, Python, PySpark, or Big Data development.
  • Sound knowledge of Azure Synapse Analytics for pipelines, orchestration, and setup.
  • Visualization design and development with Power BI, including row-level security and access control.
  • Sound experience in SQL, data warehouses, data marts, and data ingestion with PySpark and Python.
  • Expertise in developing and maintaining ETL pipelines on cloud platforms such as AWS or Azure (Azure Synapse or Data Factory preferred).
  • Team player with good interpersonal, communication, and problem-solving skills.
  • DevOps expertise preferred.
Working hours

8:30am to 6pm (Monday to Friday), on-site, with no hybrid option.
