
Deck is building the data infrastructure for the internet. We make scattered, login‑protected data instantly accessible through clean APIs and integrations—empowering businesses to act fast and smart, with no friction.
We’re a team of builders from top‑tier tech companies who believe one thing: great ideas need great data. If you thrive in early‑stage chaos, own your work like a founder, and think in frameworks—not features—Deck might be your next home.
We’re looking for a Platform Engineer to design, build, and maintain the self‑service infrastructure for our large‑scale web scraping and data services. You will own the core platform that enables data engineers to efficiently deploy, operate, and observe their scraping pipelines, reducing friction and maximizing development velocity.
Infrastructure Automation: Build the Internal Developer Platform (IDP), leveraging Infrastructure‑as‑Code (IaC) and CI/CD to automate the provisioning, deployment, and configuration of the scraping environment.
Platform Development: Design and implement solutions using Docker and Kubernetes to containerize and orchestrate large‑scale, high‑density Python web scraping workloads.
Optimization: Continuously improve the efficiency and reliability of scraping processes by optimizing resource utilization, cost, and throughput.
Observability: Integrate and manage the full observability stack (OpenTelemetry for metrics, logs, and traces) to provide self‑service insights into platform health, performance, and data accuracy.
Resilience & Integrity: Strengthen the overall resilience and data integrity of the system by implementing robust guardrails and standardized practices.
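As a concrete illustration of the "robust guardrails" mentioned above, here is a minimal, stdlib-only Python sketch of a retry guardrail for a scraping task: bounded retries with exponential backoff plus jitter, so a fleet of workers does not hammer a target in lockstep. The function and parameter names are hypothetical, not Deck's actual code.

```python
import random
import time

def scrape_with_retries(fetch, url, max_attempts=4, base_delay=0.5):
    """Call fetch(url), retrying transient network failures.

    Uses exponential backoff with jitter; after max_attempts the
    last exception is re-raised so failures surface instead of
    silently corrupting data downstream.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch(url)
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise  # guardrail: bounded retries, then fail loudly
            # backoff doubles each attempt; jitter desynchronizes workers
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In a real platform this logic would typically live in a shared client library or sidecar, so every pipeline gets the same behavior without each data engineer reimplementing it.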
Strong experience designing and implementing highly available, scalable infrastructure for large‑scale data systems.
Expertise with containerization and orchestration, specifically Docker and Kubernetes, in a production environment.
Proficiency in setting up and managing observability tools such as Prometheus, Grafana, and OpenTelemetry.
Problem‑solving mindset for complex web scraping challenges and continuous platform improvement.
A focus on Developer Experience (DevEx), treating the scraping infrastructure as an internal product.
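To make the observability requirement concrete, here is a stdlib-only sketch of structured JSON logging, the machine-readable output that collectors (OpenTelemetry pipelines, log aggregators) ingest. The logger name and the `url`/`status`/`duration_ms` fields are illustrative assumptions, not a prescribed schema.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""

    def format(self, record):
        payload = {
            "ts": round(record.created, 3),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        }
        # Attach scrape metadata passed via logging's `extra=` mechanism.
        for key in ("url", "status", "duration_ms"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

logger = logging.getLogger("scraper")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("page scraped", extra={"url": "https://example.com", "status": 200})
```

Structured logs like these let the same record drive dashboards, alerts, and ad-hoc queries without regex parsing, which is what makes self-service insight possible.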
Competitive pay for the right skills
Proven leadership with a track record of big results
Significant ownership and autonomy in how you operate
A team of exceptional peers tackling complex, high‑leverage problems
Momentum: recent fundraise by top‑tier investors, massive whitespace, and accelerating traction
Deck isn’t just a product—it’s a mission. We believe that businesses should spend more time thinking and building, and less time wrestling with data plumbing. If that vision excites you, and you’re energized by early‑stage speed, ambiguity, and intensity—let’s talk.
Before applying, take a look at our Constitution. If you don’t dislike it, there’s a good chance you’ll love working here.