Data Engineer - Consultant (PySpark, ADF, SQL)

Client of Virtua Advanced Solutions

Dubai

On-site

AED 120,000 - 200,000

Full time

5 days ago

Job summary

An innovative firm is seeking a skilled Data Engineer to design and maintain robust data pipelines using technologies such as PySpark and Azure Data Factory. The role applies data warehousing concepts and involves collaborating with cross-functional teams to ensure data quality and compliance. With a focus on optimizing data processes and mentoring junior team members, you'll play a crucial role in shaping the organization's data landscape. If you are passionate about data and eager to make a significant impact, this is the opportunity for you.

Benefits

Visa
Medical Insurance
Work Permit

Qualifications

  • 2+ years of experience in data engineering with strong SQL and Python skills.
  • Proficiency in building data pipelines using PySpark and Azure Data Factory.

Responsibilities

  • Design and maintain data pipelines for data ingestion and transformation.
  • Collaborate with teams to meet data requirements and ensure compliance.

Skills

SQL
Python
PySpark
Azure Data Factory
Data Warehousing
Data Governance
ETL Processes
Data Modeling

Education

Bachelor's in Computer Applications (Computers)

Tools

Azure Databricks
Snowflake
Redshift
BigQuery
Informatica
Talend
Apache Spark
Power BI

Vacancy

1 Vacancy

Job Description

Role Duration: 6-month contract, extendable at the client's discretion.

Experience & Compensation: Minimum 2+ years of experience. Budget: 10k to 12k AED + Visa + Medical Insurance + Work Permit.

Responsibilities:
  1. Design, develop, and maintain data pipelines for ingestion, transformation, and loading into the data warehouse.
  2. Utilize PySpark and Azure Data Factory (ADF) to build data pipelines (see the sketch after this list).
  3. Implement data governance frameworks, ensuring data quality, security, and compliance.
  4. Develop complex SQL queries and manage relational databases for data accuracy and performance.
  5. Establish data lineage tracking to ensure transparency and traceability.
  6. Implement ETL processes to maintain data integrity and quality.
  7. Optimize pipelines for performance, scalability, and reliability.
  8. Create data transformation algorithms for data standardization, cleansing, and enrichment, including quality checks.
  9. Mentor junior team members, review code, and promote best practices.
  10. Collaborate with data scientists, analysts, and engineers to meet data requirements and objectives.
  11. Maintain documentation for data infrastructure, ETL processes, and data lineage.
  12. Ensure compliance with data governance policies, security standards, and regulations.
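
To make the pipeline work above concrete, here is a minimal PySpark sketch of an ingest-cleanse-load job in the spirit of items 1, 2, 6, and 8. It is illustrative only: the paths, column names, and quality threshold are assumptions, not details from this role, and in practice such a job would typically run as an ADF or Databricks activity.

    # Illustrative sketch only: paths, columns, and the 10% threshold are
    # hypothetical assumptions, not requirements of this role.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

    # Ingest raw files from a (hypothetical) landing zone.
    raw = spark.read.option("header", True).csv("/landing/orders/*.csv")

    # Standardize, cleanse, and enrich (item 8).
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("country", F.upper(F.trim(F.col("country"))))
           .filter(F.col("order_id").isNotNull())
           .dropDuplicates(["order_id"])
    )

    # Simple quality gate to protect data integrity (item 6).
    if clean.count() < 0.9 * raw.count():
        raise ValueError("Over 10% of rows failed cleansing; aborting load")

    # Load into the curated zone feeding the warehouse (item 1).
    clean.write.mode("overwrite").partitionBy("country").parquet("/curated/orders")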

Desired Candidate Profile

What You’ll Bring:
  • Strong proficiency in SQL and Python for data manipulation and scripting (see the query sketch after this list).
  • Experience with PySpark, ADF, Databricks, and SQL.
  • Experience with MS Fabric is preferred.
  • Knowledge of data warehousing concepts and methodologies.
  • Familiarity with Azure Synapse and Azure Databricks.
  • Hands-on experience with data warehouse platforms like Snowflake, Redshift, or BigQuery, and ETL tools such as Informatica, Talend, or Apache Spark.
  • Understanding of data modeling, integration, and governance best practices.
  • Experience with Power BI or similar dashboarding and reporting tools is a plus.
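
As a hedged illustration of the SQL and Python proficiency asked for above, the sketch below runs a window-function query over the hypothetical `clean` DataFrame from the previous sketch; the view name, columns, and ranking logic are assumptions for demonstration only.

    # Illustrative only: reuses the hypothetical `clean` DataFrame and
    # `spark` session from the previous sketch.
    clean.createOrReplaceTempView("orders")

    # A typical "complex SQL" pattern: rank orders per country with a
    # window function and keep the top five in each country.
    top_orders = spark.sql("""
        SELECT order_id, country, amount
        FROM (
            SELECT order_id, country, amount,
                   ROW_NUMBER() OVER (PARTITION BY country ORDER BY amount DESC) AS rn
            FROM orders
        ) ranked
        WHERE rn <= 5
    """)
    top_orders.show()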
