
Mid-Level and Senior Data Engineer (Web Scraping)

NucleusX B.V.

Denpasar

On-site

IDR 846.166.000 - 1.353.867.000

Full time

Yesterday


Job summary

A data-driven tech company in Bali seeks a Medior/Senior Data Engineer. You will lead the development of scalable web scrapers, manage cloud infrastructure, and mentor junior engineers. Proficiency in Python, SQL, and frameworks such as Scrapy and Selenium is essential. The role offers a collaborative environment and relocation reimbursement for candidates moving to Bali.

Benefits

One-time reimbursement for flights
Relocation expenses

Qualifications

  • Hands-on experience with web scraping using Python and SQL.
  • Deep knowledge of frameworks like Scrapy, Playwright, and Selenium.
  • Experience with Docker and Kubernetes for containerization.

Responsibilities

  • Lead the development and maintenance of scalable web scrapers.
  • Design and build reusable scraper libraries.
  • Optimize cloud infrastructure for large-scale scraping.

Skills

Python
SQL
Scrapy
Playwright
Selenium
Docker
Kubernetes

Tools

AWS
GCP
Azure
Terraform
CloudFormation
Job description

About the Role

We are looking for a Medior/Senior Data Engineer to join our team in Bali. In this role, you will play a key part in building and scaling production-grade web scrapers and supporting the Lead Data Engineer in technical decisions, project planning, and team coordination. You will also mentor junior and medior engineers while ensuring best practices are followed.

Responsibilities

  • Lead the development and maintenance of scalable, production-grade web scrapers.
  • Design, build, and maintain reusable scraper libraries and frameworks.
  • Define coding standards, workflows, and processes with the Lead Engineer.
  • Implement robust monitoring, testing, and alerting systems to ensure reliability.
  • Manage and optimize cloud infrastructure (AWS/GCP/Azure) for large-scale scraping workloads.
  • Optimize compute, storage, and proxy usage for performance and cost efficiency.
  • Mentor junior and medior engineers and review their code.
  • Collaborate with stakeholders to prioritize backlog items and integrate scraper outputs into pipelines.
  • Act as second-in-command to the Lead Data Engineer for technical and project support.
Requirements

Must-Have Skills

  • Strong proficiency in Python and SQL, with hands-on web scraping experience.
  • Deep knowledge of frameworks such as Scrapy, Playwright, and Selenium, plus anti-bot strategies.
  • Experience designing reusable libraries, coding standards, and modular scraper frameworks.
  • Proficiency with cloud infrastructure (AWS/GCP/Azure), including compute, storage, and security basics.
  • Experience with Docker & Kubernetes for containerization and orchestration.
  • Familiarity with CI/CD pipelines, Git workflows, and IaC tools (Terraform/CloudFormation).
  • Strong debugging, performance tuning, and large-scale scraper optimization skills.
Nice-to-Have Skills

  • Workflow orchestration (Airflow, Prefect, Dagster).
  • Observability/monitoring tools (Prometheus, Grafana, CloudWatch, ELK).
  • Cost optimization in cloud and proxy-heavy workloads.
  • Knowledge of data governance, RBAC, and compliance.
Why You’ll Love Working Here

Be part of a data-driven company that values curiosity, precision, and impact.

Collaborate with talented data engineers and analysts to solve real business challenges.

Based in beautiful Bali, working directly on-site with a passionate and talented team.

If you are relocating from other parts of Indonesia, we offer a one-time reimbursement for flights and relocation expenses.
