
Data Engineer - Consultant (PySpark, ADF, SQL), 6-month contract, 2 to 4 years experience required 1[...]

Virtua Advanced Solution

Dubai

Remote

AED 120,000 - 200,000

Full time

Today

Job summary

A technology firm in Dubai is seeking a Data Engineer to design and maintain data pipelines while ensuring data quality and compliance with governance policies. Candidates should have strong SQL and programming skills, and experience with PySpark and Azure Data Factory. The role offers a budget of 10k to 12k AED and allows remote work.

Qualifications

  • Minimum 2 years of experience required.
  • Strong proficiency in SQL and one programming language for data manipulation.
  • Hands-on experience with data warehousing platforms and ETL tools.

Responsibilities

  • Design, develop, and maintain data pipelines.
  • Implement data governance frameworks and ensure compliance.
  • Mentor junior team members and drive best practices.

Skills

SQL
Python
PySpark
Azure Data Factory
Data Governance
Data Warehousing
ETL Processes

Tools

Snowflake
Redshift
BigQuery
Informatica
Talend
Azure Synapse
Azure Databricks

Job description

It's a 6-month contract role, extendable further at the client's discretion.

Minimum 2 years of experience is required. Budget: 10k to 12k AED, plus visa, medical insurance, and work permit. Please let me know if you would be interested in the role or have any friends looking for a job.

What You'll Do:
  • Design, develop, and maintain data pipelines using PySpark and Azure Data Factory (ADF) for the ingestion, transformation, and loading of data into the data warehouse.
  • Implement data governance frameworks and ensure data quality, security, and compliance with industry standards and regulations.
  • Develop complex SQL queries and manage relational databases to ensure data accuracy and performance.
  • Establish and maintain data lineage tracking within the data fabric to ensure transparency and traceability of data flows.
  • Implement ETL processes to ensure the integrity and quality of data.
  • Optimize data pipelines for performance, scalability, and reliability.
  • Develop data transformation processes and algorithms to standardize, cleanse, and enrich data for analysis. Apply data quality checks and validation rules to ensure the accuracy and reliability of data.
  • Mentor junior team members, review code, and drive best practices in data engineering methodologies.
  • Collaborate with cross-functional teams, including data scientists, business analysts, and software engineers, to understand data requirements and deliver solutions that meet business objectives. Work closely with stakeholders to prioritize and execute data initiatives.
  • Maintain comprehensive documentation of data infrastructure designs, ETL processes, and data lineage. Ensure compliance with data governance policies, security standards, and regulatory requirements.
Qualifications: What You'll Bring:
  • Strong proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting.
  • Strong experience with PySpark, ADF, Databricks, and SQL.
  • Experience with MS Fabric is preferred.
  • Proficiency in data warehousing concepts and methodologies.
  • Strong knowledge of Azure Synapse and Azure Databricks.
  • Hands-on experience with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Informatica, Talend, Apache Spark).
  • Deep understanding of data modeling principles, data integration techniques, and data governance best practices.
  • Experience with Power BI or other data visualization tools to develop dashboards and reports is preferred.
Remote Work:

Yes

Employment Type:

Full-time
