Data Engineer

Tech Mahindra

Council of the City of Sydney

On-site

AUD 100,000 - 130,000

Full time


Job summary

A leading tech firm in Sydney is seeking a skilled Data Engineer to design and maintain data pipelines. Candidates should have 3 to 5 years of experience with Apache PySpark, SQL, and data migration projects. The role involves collaborating on data quality assurance and using Azure Databricks. If you are passionate about data engineering and eager to work on innovative projects, apply now!

Qualifications

  • 3 to 5 years of experience in data engineering with a focus on Apache PySpark and SQL.
  • Proficiency in Azure Databricks, with experience in Teradata or Cloudera.
  • Hands-on experience with Data Migration projects.

Responsibilities

  • Design and develop scalable data pipelines using Apache PySpark and SQL.
  • Collaborate with teams to support data migration projects and ensure data quality.
  • Utilize Azure Databricks and Teradata/Cloudera for processing and analytics.

Skills

Apache PySpark
SQL
Azure Databricks
Data migration
Data systems optimization
DevOps practices

Education

Bachelor's degree in Computer Science

Tools

Teradata
Cloudera
Python

Job description

Job Summary

Job Title: Data Engineer
Location: TechM AUS Sydney
Years of Experience: 3-5 Years

We are seeking a skilled Data Engineer with 3 to 5 years of experience to join our dynamic software development team. The ideal candidate will have a strong background in data engineering, particularly with Apache PySpark, SQL, and data migration processes.

Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache PySpark and SQL.
  • Collaborate with cross-functional teams to support data migration projects and ensure data quality.
  • Utilize Azure Databricks and Teradata/Cloudera for data processing and analytics.
  • Implement DevOps practices to automate data workflows and improve deployment processes.
  • Monitor and optimize data systems for performance and reliability.
  • Document data engineering processes and maintain clear communication with stakeholders.
  • Work independently and as part of a team to meet project deadlines.

Mandatory Skills:

  • 3 to 5 years of experience in data engineering with a focus on Apache PySpark and SQL.
  • Proficiency in Azure Databricks and experience with Teradata or Cloudera.
  • Hands-on experience with Data Migration projects.
  • Strong understanding of large systems architecture and data flow.
  • Excellent oral and written communication skills.
  • Self-motivated with the ability to work under tight deadlines.

Preferred Skills:

  • Familiarity with DevOps practices and tools.
  • Experience with Python for data manipulation and analysis.
  • Knowledge of data warehousing concepts and best practices.
  • Experience in working with cloud-based data solutions.

Qualifications:

Bachelor's degree in Computer Science, Information Technology, or a related field. Relevant certifications in data engineering or cloud technologies are a plus.

If you are passionate about data engineering and eager to contribute to innovative projects, we encourage you to apply and join our team in Sydney!
