Senior Python Developer
Join a fast-paced engineering team focused on building scalable backend systems, automating complex workflows, and powering data-driven decision-making. In this role, you will design backend services, develop robust web-scraping solutions, and build data pipelines that support internal operations and data science initiatives.
What You’ll Do
Process Automation & Backend Development
- Design and implement automated systems that improve internal processes and operational efficiency.
- Build scalable backend services that integrate seamlessly with existing infrastructure (a minimal sketch of such a service follows this list).
- Ensure services meet standards for reliability, performance, and maintainability.
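To give candidates a concrete feel for this work, here is a purely illustrative sketch of a small FastAPI service (FastAPI appears under Technical Expertise below). The endpoint paths and the Job model are invented for this posting, not part of any real system.

```python
# Illustrative only: a minimal service of the kind this role builds.
# The /jobs endpoints and the Job model are hypothetical, not a real spec.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="automation-api")

class Job(BaseModel):
    name: str
    payload: dict

JOBS: dict[int, Job] = {}  # in-memory store; a real service would use PostgreSQL

@app.post("/jobs/{job_id}")
def create_job(job_id: int, job: Job) -> dict:
    # Reject duplicates so caller retries stay safe.
    if job_id in JOBS:
        raise HTTPException(status_code=409, detail="job already exists")
    JOBS[job_id] = job
    return {"id": job_id, "status": "queued"}

@app.get("/healthz")
def health() -> dict:
    # Liveness endpoint, handy once the service runs under Kubernetes.
    return {"status": "ok"}
```

Assuming the file is named app.py, it runs locally with `uvicorn app:app --reload`.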
Web Scraping & Data Extraction
- Develop and maintain scrapers for a variety of external sources.
- Handle dynamic content, authentication, rate limits, and anti-bot challenges.
- Implement robust error handling, logging, and retry logic (see the sketch after this list).
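The sketch below illustrates the retry-with-backoff and logging pattern referred to above. It is illustrative only: the retryable status-code set and backoff schedule are example choices, and a production scraper would add session management, proxy rotation, and richer error classification.

```python
# Illustrative only: retry/backoff with logging for transient failures.
import logging
import time

import requests

log = logging.getLogger("scraper")

RETRYABLE = {429, 500, 502, 503, 504}  # rate limits + transient server errors

def fetch(url: str, max_attempts: int = 5, base_delay: float = 1.0) -> str:
    """GET a page, retrying with exponential backoff on retryable statuses."""
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200:
            return resp.text
        if resp.status_code in RETRYABLE and attempt < max_attempts:
            delay = base_delay * 2 ** (attempt - 1)
            log.warning("attempt %d: HTTP %d from %s, retrying in %.1fs",
                        attempt, resp.status_code, url, delay)
            time.sleep(delay)
            continue
        resp.raise_for_status()  # non-retryable (or final) failure: surface it
    raise RuntimeError(f"exhausted {max_attempts} attempts for {url}")
```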
Data Manipulation & Processing
- Clean, transform, and process large structured and unstructured datasets (a short example follows this list).
- Build and maintain ETL/ELT pipelines that deliver high-quality data to downstream systems.
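As a small, hedged example of the cleaning and transformation work above, the snippet below normalizes a hypothetical orders extract with pandas; the column names and rules are invented for illustration.

```python
# Illustrative only: a tiny cleaning step; column names are hypothetical.
import pandas as pd

def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Normalize a raw orders extract before loading it downstream."""
    df = raw.copy()
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["order_date", "amount"])  # drop unparseable rows
    df = df.drop_duplicates(subset=["order_id"])     # dedupe on the business key
    return df.sort_values("order_date").reset_index(drop=True)

raw = pd.DataFrame({
    "order_id": [1, 1, 2],
    "order_date": ["2024-01-05", "2024-01-05", "not-a-date"],
    "amount": ["19.99", "19.99", "5.00"],
})
print(clean_orders(raw))  # one clean row: the duplicate and the bad date are removed
```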
Monitoring & Observability
- Implement monitoring for system performance, data quality, and operational metrics (see the sketch after this list).
- Build dashboards and alerts to ensure reliability and data integrity.
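The sketch below shows one simple form such monitoring can take: a data-quality check whose log lines could back a dashboard or alert. The metric name and the 5% threshold are hypothetical; in practice the signal would feed a platform like Datadog (see What You'll Bring).

```python
# Illustrative only: a data-quality check whose log output could drive an alert.
import logging

import pandas as pd

log = logging.getLogger("pipeline.metrics")

def check_null_rate(df: pd.DataFrame, column: str, threshold: float = 0.05) -> float:
    """Log the null rate for a column; flag it if the threshold is breached."""
    rate = float(df[column].isna().mean())
    log.info("data_quality.null_rate column=%s value=%.4f", column, rate)
    if rate > threshold:
        # In production, this error (or an equivalent Datadog metric) would alert.
        log.error("null rate %.1f%% on %r exceeds %.1f%% threshold",
                  rate * 100, column, threshold * 100)
    return rate
```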
Data Science Collaboration
- Provide the infrastructure, pipelines, and tools needed for data science experiments and model deployment.
- Partner with data scientists to deliver datasets and backend services that accelerate analytics work.
Code Quality & Engineering Best Practices
- Use test-driven development and maintain strong unit/integration test coverage.
- Perform code reviews and promote engineering standards and best practices.
- Follow modern Python packaging and dependency‑management practices.
CI/CD & Infrastructure
- Build and maintain CI/CD pipelines for automated testing and deployment.
- Collaborate with DevOps on containerization and orchestration efforts.
Lifecycle Ownership & Continuous Improvement
- Own the full lifecycle of backend services from development to deployment and ongoing improvement.
- Identify opportunities to reduce technical debt and enhance system resilience.
What You’ll Bring
Technical Expertise
- Advanced Python proficiency, including experience with modern backend frameworks (e.g., FastAPI).
- Strong understanding of HTTP, RESTful APIs, and core web technologies.
- Experience working in Linux environments.
Web Scraping & Automation
- Practical experience with scraping libraries and tools (e.g., BeautifulSoup, Scrapy, Selenium, Playwright).
- Ability to manage JavaScript‑rendered content, sessions, and complex authentication flows.
Data Manipulation
- Strong experience with Python data libraries (pandas, polars, NumPy).
- Solid SQL skills and familiarity with common data formats (JSON, CSV, XML, HTML).
Monitoring & Observability
- Experience with observability platforms such as Datadog.
- Ability to define, track, and monitor key metrics, logs, and alerts.
Databases & Storage
- Hands‑on experience with relational databases (e.g., PostgreSQL).
- Familiarity with ETL/ELT concepts and data‑warehouse fundamentals.
CI/CD & DevOps Collaboration
- Experience with automated build/test/deploy pipelines.
- Familiarity with Docker and Kubernetes.
Data Science Support
- Understanding of data science workflows and the infrastructure needed for experimentation and deployment.
- Experience building tools and services for ML and analytics use cases.
Minimum Qualifications
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience.
- 5+ years of professional Python backend development experience.
- 3+ years of experience with web scraping and multi‑source data extraction.
- Proven experience building scalable backend systems and automated processes.
- Experience with monitoring/observability tools.
- Strong SQL experience and familiarity with relational databases.
- Proficiency with Linux, Git, and command‑line tools.
Preferred Qualifications
- Experience with asynchronous/concurrent processing (e.g., asyncio, Celery); a short sketch appears at the end of this posting.
- Exposure to logistics, transportation, or supply‑chain concepts.
- Experience with microservices or distributed systems.
- Familiarity with data engineering tools (dbt, Airflow, Prefect).
- Experience building APIs that integrate with machine‑learning pipelines.
- Open‑source contributions related to scraping, data engineering, or automation.
- Experience using LLM‑powered development tools in everyday workflows.
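Finally, for candidates curious what "asynchronous/concurrent processing" looks like in this context, here is a minimal asyncio sketch. It is self-contained and runnable: fetch() merely simulates I/O rather than hitting real endpoints.

```python
# Illustrative only: concurrent fan-out with asyncio; fetch() simulates I/O.
import asyncio

async def fetch(source: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a real network call
    return f"data from {source}"

async def main() -> None:
    sources = ["feed-a", "feed-b", "feed-c"]
    # gather() runs the coroutines concurrently and preserves input order.
    results = await asyncio.gather(*(fetch(s) for s in sources))
    for line in results:
        print(line)

if __name__ == "__main__":
    asyncio.run(main())
```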