
Data Engineer - Consultant (PySpark, ADF, SQL), 6-month contract, 2 to 4 years of experience required

Virtua Advanced Solution

Dubai

Remote

AED 120,000 - 200,000

Full time

Yesterday


Job summary

An innovative firm is seeking a skilled Data Engineer for a 6-month contract role focused on designing and maintaining robust data pipelines. The position involves working with technologies such as PySpark and Azure Data Factory while ensuring data quality and compliance with industry standards. The ideal candidate has a strong background in SQL and experience with data warehousing platforms. Join a forward-thinking team where your contributions will directly shape data-driven decision-making and business objectives, in a collaborative environment well suited to anyone passionate about data engineering.

Benefits

Visa
Medical Insurance
Work Permit

Qualifications

  • 2+ years of experience in data engineering or related field.
  • Strong proficiency in SQL and Python for data manipulation.

Responsibilities

  • Design and maintain data pipelines for data ingestion and transformation.
  • Collaborate with teams to understand data requirements and deliver solutions.

Skills

SQL
Python
PySpark
Azure Data Factory (ADF)
Data Governance
Data Warehousing
Data Modeling
ETL Processes
Data Quality Checks
Collaboration

Tools

Azure Synapse
Azure Databricks
Snowflake
Redshift
BigQuery
Informatica
Talend
Apache Spark
Power BI

Job description

This is a 6-month contract role, extendable at the client's discretion.

A minimum of 2 years of experience is required. The budget ranges from AED 10k to 12k. The role includes a Visa, Medical Insurance, and a Work Permit. Please let me know if you are interested in the role or have friends looking for a job.

What You'll Do:
  1. Design, develop, and maintain data pipelines using PySpark and Azure Data Factory (ADF) for the ingestion, transformation, and loading of data into the data warehouse.
  2. Implement data governance frameworks and ensure data quality, security, and compliance with industry standards and regulations.
  3. Develop complex SQL queries and manage relational databases to ensure data accuracy and performance.
  4. Establish and maintain data lineage tracking within the data fabric to ensure transparency and traceability of data flows.
  5. Implement ETL processes to ensure data integrity and quality.
  6. Optimize data pipelines for performance, scalability, and reliability.
  7. Develop data transformation processes and algorithms to standardize, cleanse, and enrich data for analysis. Apply data quality checks and validation rules to ensure data accuracy and reliability.
  8. Mentor junior team members, review code, and drive best practices in data engineering methodologies.
  9. Collaborate with cross-functional teams, including data scientists, business analysts, and software engineers, to understand data requirements and deliver solutions that meet business objectives. Work closely with stakeholders to prioritize and execute data initiatives.
  10. Maintain comprehensive documentation of data infrastructure designs, ETL processes, and data lineage. Ensure compliance with data governance policies, security standards, and regulatory requirements.
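To give a concrete flavor of the data quality checks and validation rules mentioned above, here is a minimal, stdlib-only Python sketch. The rule names, fields, and sample rows are hypothetical and chosen purely for illustration; in practice these checks would typically run inside a PySpark or ADF pipeline.

```python
# Illustrative row-level data quality checks. All names and
# thresholds here are made up for the example, not from the posting.

def check_not_null(row, field):
    """Validation rule: the field must be present and non-empty."""
    return row.get(field) not in (None, "")

def check_positive(row, field):
    """Validation rule: the field must parse as a positive number."""
    try:
        return float(row[field]) > 0
    except (KeyError, TypeError, ValueError):
        return False

def run_quality_checks(rows, rules):
    """Split rows into (valid, rejected) according to the rules.

    Each entry is a (row, failed_rule_names) pair so rejected
    records can be quarantined with a reason."""
    valid, rejected = [], []
    for row in rows:
        failures = [name for name, rule in rules.items() if not rule(row)]
        (rejected if failures else valid).append((row, failures))
    return valid, rejected

rows = [
    {"id": "1", "amount": "99.5"},
    {"id": "", "amount": "10"},   # fails the not-null rule
    {"id": "3", "amount": "-4"},  # fails the positive-amount rule
]
rules = {
    "id_not_null": lambda r: check_not_null(r, "id"),
    "amount_positive": lambda r: check_positive(r, "amount"),
}
valid, rejected = run_quality_checks(rows, rules)
print(len(valid), len(rejected))  # prints: 1 2
```

The same pattern (named rules, quarantine with a failure reason) scales naturally to a PySpark DataFrame by expressing each rule as a column filter.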
What You'll Bring:
  • Strong proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting.
  • Strong experience with PySpark, ADF, Databricks, and SQL.
  • Experience with MS Fabric is preferred.
  • Proficiency in data warehousing concepts and methodologies.
  • Strong knowledge of Azure Synapse and Azure Databricks.
  • Hands-on experience with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Informatica, Talend, Apache Spark).
  • Deep understanding of data modeling principles, data integration techniques, and data governance best practices.
  • Experience with Power BI or other data visualization tools for developing dashboards and reports is preferred.
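As a small illustration of the SQL-for-data-manipulation proficiency listed above, the sketch below uses Python's built-in sqlite3 module. The table, columns, and threshold are invented for the example; a production pipeline would run equivalent queries against a warehouse such as Snowflake, Redshift, or BigQuery.

```python
# Hypothetical example: aggregate customer spend with GROUP BY / HAVING,
# using an in-memory SQLite database so the snippet is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 120.0), ("bob", 80.0), ("alice", 40.0)],
)

# Total spend per customer, keeping only customers over a threshold.
rows = conn.execute(
    """
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING total > 100
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # prints: [('alice', 160.0)]
conn.close()
```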
Additional Details:
  • Remote Work: Yes
  • Employment Type: Full-time
  • Experience: 2+ years
  • Vacancy: 1