FPT ASIA PACIFIC PTE. LTD.
Singapore
On-site
SGD 60,000 - 90,000
Full time
Job summary
A leading company in the tech industry is seeking a Data Engineer specializing in data pipeline deployment. The role includes responsibilities for optimizing data workflows using AWS, SQL, and various orchestration tools. The ideal candidate will possess strong technical skills with extensive experience in a dynamic, Agile environment.
Skills
SQL
Data pipeline orchestration
Python
CI/CD
Docker
AWS
Big data tools
Flask
Tools
Airflow
Jenkins
Spark
Hadoop
Kafka
Responsibilities
- Ensure that data pipelines are deployed successfully into the production environment
- Deploy applications to the AWS cloud, leveraging the full spectrum of AWS services
- Automate data pipeline testing and deployment using CI/CD
Requirements
- 5 years of relevant work experience in deploying data pipelines in a production environment
- Experience working in a multi-disciplinary team of machine learning engineers, data engineers, software engineers, product managers and subject domain experts
- High proficiency in SQL and relational databases
- High proficiency in at least 1 data pipeline orchestration tool (e.g. Airflow, Dagster)
- High proficiency in Python and related data libraries (e.g. Pandas)
- Experience with Docker
- Experience with CI/CD tools like Jenkins
- Experience working in an Agile environment
- Experience with AWS cloud services like RDS, EKS, EMR, Redshift
- Experience with Snowflake
- Experience with Flask deployment of micro-services, preferably FastAPI
- Experience with big data tools: Spark, Hadoop, Kafka, etc.
- Experience with SQLAlchemy and Alembic libraries is a plus