Data Engineer - Consultant (PySpark, ADF, SQL) - 6-month contract, 2 to 4 years of experience required

Virtua Advanced Solution

United Arab Emirates

Remote

AED 120,000 - 200,000

Full time

30+ days ago

Job summary

A leading technology solution provider in the United Arab Emirates is seeking a Data Engineer to design and maintain robust data pipelines. Candidates should have strong experience with SQL, PySpark, and Azure Data Factory, along with a solid understanding of data warehousing principles. The role is full-time, offering remote work options and a salary range of 10k to 12k AED. The ideal candidate will also mentor junior team members and collaborate with cross-functional teams to meet business objectives.

Benefits

Visa sponsorship
Medical insurance
Work permit assistance

Qualifications

  • Minimum 2 years of experience in data engineering.
  • Proficiency in SQL and at least one programming language for data manipulation.
  • Experience with ETL processes and data warehousing concepts.

Responsibilities

  • Design and maintain data pipelines for data ingestion and transformation.
  • Implement data governance frameworks and ensure data quality.
  • Collaborate with cross-functional teams to deliver data solutions.

Skills

SQL
Python
PySpark
Azure Data Factory
Data modeling
Data governance

Tools

Azure Synapse
Databricks
Snowflake
Informatica
Power BI

Job description

It's a 6-month contract role, extendable further at the client's discretion.

Minimum 2 years of experience is required. Budget: 10k to 12k AED, with visa, medical insurance, and work permit included. Please let me know if you would be interested in the role or have any friends looking for a job.

What You'll Do:

Design, develop, and maintain data pipelines using PySpark and Azure Data Factory (ADF) for the ingestion, transformation, and loading of data into the data warehouse.

Implement data governance frameworks and ensure data quality, security, and compliance with industry standards and regulations.

Develop complex SQL queries and manage relational databases to ensure data accuracy and performance.

Establish and maintain data lineage tracking within the data fabric to ensure transparency and traceability of data flows.

Implement ETL processes to ensure the integrity and quality of data.

Optimize data pipelines for performance, scalability, and reliability.

Develop data transformation processes and algorithms to standardize, cleanse, and enrich data for analysis. Apply data quality checks and validation rules to ensure the accuracy and reliability of data.

Mentor junior team members, review code, and drive best practices in data engineering methodologies.

Collaborate with cross-functional teams, including data scientists, business analysts, and software engineers, to understand data requirements and deliver solutions that meet business objectives. Work closely with stakeholders to prioritize and execute data initiatives.

Maintain comprehensive documentation of data infrastructure designs, ETL processes, and data lineage. Ensure compliance with data governance policies, security standards, and regulatory requirements.


Qualifications:

What You'll Bring:

Strong proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting.

Strong experience with PySpark, ADF, Databricks, and SQL.

Preferable experience with MS Fabric.

Proficiency in data warehousing concepts and methodologies.

Strong knowledge of Azure Synapse and Azure Databricks.

Hands-on experience with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Informatica, Talend, Apache Spark).

Deep understanding of data modeling principles, data integration techniques, and data governance best practices.

Preferable experience with Power BI or other data visualization tools to develop dashboards and reports.


Remote Work:

Yes


Employment Type:

Full-time
