Staff Software Engineer

Visa

Singapore

Hybrid

SGD 80,000 - 120,000

Full time

6 days ago

Job summary

A global payment technology company is seeking an experienced Data Engineer to design and manage data ingestion pipelines, utilizing technologies such as Kafka, Docker, and Kubernetes. The role requires a passion for collaboration and continuous learning within a hybrid work model. Ideal candidates will have 7+ years of experience and a strong background in big data technologies, Python, and data integrity practices.

Qualifications

  • 7+ years of relevant work experience with a Bachelor's Degree.
  • Experience with Cloud-based Data Engineering solutions.
  • Strong problem-solving and communication skills.

Responsibilities

  • Design and manage data ingestion pipelines using various technologies.
  • Develop applications in Docker and Kubernetes.
  • Ensure data integrity and security throughout the data lifecycle.

Skills

Python
Docker
Kubernetes
Kafka
Apache Spark
Apache Hadoop
OpenSearch
Cloud Applications
Generative AI

Education

Bachelor's degree in Computer Science or related field
Master's degree or higher

Tools

Kafka
Docker
Kubernetes
Apache Spark
Apache Hadoop
OpenSearch
AWS
Azure

Job description

As a member of our team at Visa, you will have the opportunity to design code and systems that impact 40% of the world's population. You will play a crucial role in influencing Visa's internal standards for scalability, security, and reusability. Collaboration across functions is key as you create design artifacts and develop best-in-class software solutions for various Visa technical offerings. Your contributions will actively enhance product quality, service technology, and new business flows within diverse agile squads. You will also have the chance to make a global or local impact through mentorship and continuous learning opportunities.

Your responsibilities will include designing, building, and managing data ingestion pipelines using a range of technical stacks, including OpenSearch, Kafka, and Druid with open-source packages. You will also develop applications in Docker and Kubernetes and create Python-based applications for automation and task management. Ensuring data integrity, quality, and security throughout the data lifecycle on an open-source platform will be a crucial aspect of your role. Additionally, you will collaborate with cross-functional teams to understand data requirements and deliver effective solutions.

This position offers a hybrid work model, allowing you to alternate between remote and office work. Hybrid employees are expected to work from the office 2-3 set days a week, determined by leadership/site requirements, with a general guideline of being in the office 50% or more of the time based on business needs.

Basic qualifications include 7+ years of relevant work experience with a Bachelor's degree; or at least 5 years of work experience with an advanced degree (e.g., Master's, MBA, JD, MD); or 2 years of work experience with a PhD. Preferred qualifications include a Bachelor's degree in Computer Science, Engineering, or a related field with a minimum of 7+ years of experience. Strong problem-solving skills, attention to detail, and excellent communication and collaboration skills are highly valued. Experience with cloud-based data engineering solutions and big data technologies such as Hadoop, Kafka, and Spark, as well as knowledge of Generative AI or Generative AI tools, is preferred.

In terms of technology expertise, you should have 5+ years of experience in Python or Java programming, 3+ years of experience in OpenSearch/Elasticsearch, 5+ years of experience in Apache Kafka, Apache Spark, and Apache Hadoop, 5+ years of experience in Docker and Kubernetes, and 3+ years of experience in cloud applications.

We are looking for individuals with energy and experience, a growth mindset, curiosity, and a passion for technology. You should be comfortable challenging the status quo, pushing boundaries, and thinking beyond traditional solutions. Experience building and deploying modern services and web applications with quality and scalability is essential. A constant drive to learn new technologies such as Angular, React, Kubernetes, and Docker, as well as experience collaborating with Product, Test, DevOps, and Agile/Scrum teams, are highly valued traits.
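To give a flavor of the ingestion work described above, here is a minimal, hypothetical Python sketch of one pipeline stage: validating and normalizing raw records before they would be indexed into a search store such as OpenSearch. All field names and helper functions here are illustrative assumptions, not part of Visa's actual stack.

```python
import json

# Required fields are an illustrative assumption; a real pipeline would
# derive these from a schema registry or index mapping.
REQUIRED_FIELDS = {"id", "timestamp", "payload"}


def validate(record: dict) -> bool:
    """Accept a record only if every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in REQUIRED_FIELDS)


def normalize(record: dict) -> dict:
    """Lowercase keys and serialize the payload to stable JSON text."""
    out = {k.lower(): v for k, v in record.items()}
    out["payload"] = json.dumps(out["payload"], sort_keys=True)
    return out


def ingest(records):
    """Yield normalized records, silently dropping any that fail validation."""
    for record in records:
        if validate(record):
            yield normalize(record)


if __name__ == "__main__":
    raw = [
        {"id": 1, "timestamp": "2024-01-01T00:00:00Z", "payload": {"amt": 10}},
        {"id": 2, "timestamp": "", "payload": {"amt": 5}},  # dropped: empty timestamp
    ]
    for rec in ingest(raw):
        print(rec["id"], rec["payload"])
```

In practice the input would arrive from a Kafka consumer and the output would go to a bulk indexing call, but the validate/normalize split shown here is the part that guards data integrity and quality throughout the lifecycle.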

Kafka, Docker, Kubernetes, Hadoop, Spark, OpenSearch, Druid, Cloud-based Data Engineering, Generative AI

Web Development, Java, C, Python, Angular, Spring, Cloud Applications, Git, Agile Methodologies, Azure, AWS, GCP, OOP, Content Management, Troubleshooting, AI, NodeJS, REST services, Microservice Architecture, Security Scanning Tools, SSDLC, Java/J2EE
