
Python Scraping Developer

OnHires

Brasília

Remote

BRL 120,000 - 160,000

Full-time

Today

Job summary

A remote-first SaaS company is seeking a Python Developer to work on large-scale web scraping projects. This role involves developing and maintaining scraping systems using Python, ensuring data accuracy and consistency. The ideal candidate should have proven experience in web scraping, strong knowledge of HTML parsing, and be capable of implementing asynchronous systems. A supportive environment with flexible hours and a strong learning culture is offered.

Benefits

Competitive compensation
Flexible working hours
Freedom to choose tools
Regular team meetups
Strong learning culture

Qualifications

  • Proven hands-on experience in web scraping with Python.
  • Solid understanding of HTML parsing and async programming.
  • Experience with web scraping frameworks like Scrapy or Selenium.
  • Knowledge of REST APIs and proxy management.
  • Familiarity with SQL and NoSQL databases for data processing.
  • Experience with Docker, Linux, and version control (Git).
  • Self-driven, detail-oriented, capable of project ownership.

Responsibilities

  • Develop, test, and deploy web scraping scripts using Python.
  • Design and maintain asynchronous scraping systems.
  • Implement anti-blocking and proxy rotation strategies.
  • Monitor, debug, and improve scraper reliability.
  • Manage data ingestion pipelines and REST API integrations.
  • Collaborate with engineers to enhance tooling and monitoring.
  • Support DevOps tasks related to Docker and CI/CD.

Skills

Web scraping and data extraction
HTML parsing
Browser automation
Asynchronous programming
Scrapy
Playwright
Selenium
REST APIs
HTTP protocols
Proxy management
SQL
NoSQL
Docker
Linux
Git
Fluent in English
Job description

Highlights
  • Remote-first role open to candidates from Brazil / South America, Turkey, and Northern Africa

  • Work on a data-driven SaaS platform focused on large-scale web data collection and automation

  • Full ownership of scraping projects – from design to deployment and maintenance

  • Fully remote, flexible working hours within a small, international team

About the company

Our client is a Berlin-based, remote-first SaaS company developing data-driven products for international clients.

They combine cutting-edge technology with a culture of freedom, ownership, and collaboration.

You’ll join a small but highly skilled team that values initiative, precision, and technical curiosity.

The company offers an environment where developers have real influence over architecture and tools, while working on challenging large-scale scraping and data automation projects.

Role Overview

We’re looking for a Python Developer focused on web scraping — someone who enjoys tackling complex data extraction challenges, building scalable crawlers, and keeping large scraping systems running reliably in production.

You’ll be responsible for developing, maintaining, and improving high-volume scraping pipelines, ensuring the data we collect is accurate, consistent, and delivered on time.
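To give a flavour of the day-to-day work, HTML parsing — one of the core skills this role calls for — can be sketched with Python's standard library alone. This is a minimal illustration, not how the team necessarily works: production pipelines would more likely use BeautifulSoup or Scrapy selectors, and the `job-title` class name here is a made-up example.

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text of every <h2 class="job-title"> element."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs parsed from the tag.
        if tag == "h2" and ("class", "job-title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

# Hypothetical snippet of scraped markup.
html = '<div><h2 class="job-title">Python Scraping Developer</h2></div>'
parser = TitleExtractor()
parser.feed(html)
# parser.titles now holds the extracted heading text.
```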

Responsibilities
  • Develop, test, and deploy web scraping scripts and crawlers using Python (Scrapy, Playwright, Selenium, Requests, BeautifulSoup, etc.)

  • Design and maintain asynchronous scraping systems capable of handling large-scale data extraction

  • Implement and optimize anti-blocking / proxy rotation strategies

  • Monitor, debug, and continuously improve scraper reliability and performance

  • Manage and automate data ingestion pipelines and integrations with REST APIs

  • Collaborate with other engineers to enhance tooling, logging, and monitoring for scraping systems

  • Support DevOps-related tasks (Docker, CI/CD, Linux environments)
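The asynchronous-scraping and proxy-rotation responsibilities above can be sketched with `asyncio` from the standard library. This is a simplified illustration under stated assumptions, not the company's actual stack: `fetch` is a stand-in for a real HTTP call (e.g. aiohttp with a `proxy=` argument), and the proxy URLs are hypothetical placeholders.

```python
import asyncio
import itertools

# Hypothetical proxy pool; in practice these would come from a proxy provider.
PROXIES = ["http://proxy-a:8080", "http://proxy-b:8080", "http://proxy-c:8080"]

async def fetch(url: str, proxy: str) -> str:
    """Placeholder for a real HTTP request routed through `proxy`."""
    await asyncio.sleep(0)  # yield to the event loop, simulating I/O
    return f"<html>{url} via {proxy}</html>"

async def scrape_all(urls, max_concurrency: int = 5):
    proxy_cycle = itertools.cycle(PROXIES)    # round-robin proxy rotation
    sem = asyncio.Semaphore(max_concurrency)  # cap in-flight requests

    async def worker(url):
        async with sem:
            return await fetch(url, next(proxy_cycle))

    # gather() preserves input order, so results line up with urls.
    return await asyncio.gather(*(worker(u) for u in urls))

pages = asyncio.run(scrape_all([f"https://example.com/{i}" for i in range(3)]))
```

Real scrapers would add retries, per-proxy health tracking, and back-off on block signals on top of this skeleton.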

Requirements
  • Proven hands-on experience in web scraping and data extraction with Python

  • Solid understanding of HTML parsing, browser automation, and async programming

  • Experience with web scraping frameworks (Scrapy, Playwright, Selenium, or similar)

  • Knowledge of REST APIs, HTTP protocols, and proxy management

  • Familiarity with SQL and NoSQL databases for storing and processing collected data

  • Experience with Docker, Linux, and version control (Git)

  • Fluent in English (written and spoken)

  • Self-driven, detail-oriented, and capable of taking ownership of projects

Nice to have:

  • Experience with asyncio, Celery, or distributed task management

  • Familiarity with cloud services (AWS, GCP, or similar)

  • Understanding of data quality validation and pipeline monitoring tools

What’s in it for you
  • Competitive compensation

  • Fully remote role within the listed regions

  • Flexible working hours and collaborative culture

  • Freedom to choose tools and influence technical decisions

  • Regular team meetups (online and on-site)

  • Supportive environment with a strong learning culture
