Talend Big Data Developer

ARISTON SERVICES PTE. LTD.

Singapore

On-site

SGD 80,000 - 100,000

Full time

9 days ago

Job summary

A leading company is seeking an ETL Developer skilled in Talend, Python, and Spark. Key responsibilities include designing ETL processes, maintaining data integrity, and collaborating with the team on data transformation logic. Ideal candidates will have a strong background in data warehousing and the ability to improve data quality standards.

Qualifications

  • Experience with Talend, Python, and Spark.
  • Knowledge of databases and Hadoop (Hive, Impala, HDFS).
  • Good understanding of data-warehousing and data-modeling techniques.

Responsibilities

  • Create and maintain ETL jobs using Talend.
  • Design and implement data extraction and transformation processes.
  • Collaborate with team members to ensure data integrity.

Skills

Talend
Python
Spark
Database
Hadoop
Data-Warehousing
Data-Modeling
Visualization tools

Job description

Responsibilities

  • Use the Talend ETL toolset to create new ETL jobs and maintain existing ones.
  • Design and implement ETL processes for extracting and transforming data from diverse sources such as Cloudera, PostgreSQL, and SQL Server databases.
  • Design and develop the required database tables, along with the necessary constraints, as per requirements.
  • Collaborate with team members to understand source system structures and the data retrieval methods, techniques, and tools used within the organization.
  • Support the development of data transformation logic using ETL tools or scripting languages such as SQL and Python.
  • Clean, validate, and transform data to conform to the target schema and quality standards.
  • Work with the team to execute data quality improvement plans.
  • Participate in troubleshooting activities to maintain data integrity and process efficiency.

Required skills

  • Experience with Talend, Python, and Spark.
  • Good knowledge of and working experience with databases and Hadoop (Hive, Impala, HDFS).
  • Understanding of data-warehousing and data-modeling techniques.
  • Knowledge of industry-standard visualization and analytics tools.
  • Good interpersonal skills and a positive attitude.
