Senior Data Acquisition Engineer

People Data Labs

United States

Remote

USD 160,000 - 200,000

Full time

Job summary

A data solutions company is looking for a senior Data Engineer to enhance its web crawling technologies and data products. The role requires substantial experience in backend development and a solid grasp of data quality standards. The position offers a high level of autonomy and involves designing and implementing scalable data systems, making it well suited to those who thrive in a fast-paced environment. A competitive compensation package is offered.

Benefits

Stock
Unlimited paid time off
Health, fitness, and office stipends
Remote work flexibility

Qualifications

  • 7+ years of industry experience with strategic technical problem solving.
  • Strong software architecture fundamentals for backend applications.
  • Experience building crawlers.

Responsibilities

  • Use and develop web crawling technologies for data capture.
  • Improve web crawling infrastructure.
  • Design new data products with captured data.

Skills

Problem solving
Backend application development
Object-oriented design
Web crawling
Linux system administration
Data quality evaluation

Education

Degree in a quantitative discipline

Tools

Apache Spark
SQL
Kafka
AWS
Databricks

Job description

About Us

People Data Labs (PDL) is a provider of people and company data. We do the heavy lifting of data collection and standardization so our customers can focus on building and scaling innovative, compliant data solutions. Our sole focus is on building the best data available by integrating thousands of compliantly sourced datasets into a single, developer-friendly source of truth. Leading companies across the world use PDL’s workforce data to enrich recruiting platforms, power AI models, create custom audiences, and more.

We are looking for individuals who can balance extreme ownership with a “one-team, one-dream” mindset. Our Data Engineering & Acquisition Team ensures our customers have standardized, high-quality data to build upon.

You will be crucial in accelerating our efforts to build standalone data products that enable data teams and independent developers to create innovative solutions at massive scale. In this role, you will work with a team to continuously improve our existing datasets and to pursue new ones. If you are looking to be part of a team discovering the next frontier of data-as-a-service (DaaS), with a high level of autonomy and the opportunity for direct contributions, this might be the role for you. We like our engineers to be thoughtful, quirky, and willing to fearlessly try new things. Failure is embraced at PDL as long as we continue to learn and grow from it.

What You Get to Do
  • Use and develop web crawling technologies to capture and catalog data on the internet
  • Support and improve our web crawling infrastructure
  • Structure, define, and model captured data, providing semantic data definitions and automating data quality monitoring for the data we crawl
  • Develop new techniques to increase speed, efficiency, scalability, and reliability of web crawls
  • Use big data processing platforms to build data pipelines, publish data, and ensure the reliable availability of the data we crawl
  • Work with our data product and engineering teams to design and implement new data products with captured data, and to enhance existing products
The Technical Chops You’ll Need
  • 7+ years of industry experience, with clear examples of strategic technical problem solving and implementation
  • Strong software architecture and development fundamentals for backend applications
  • Solid programming experience: strong grasp of object-oriented design and experience building applications using asynchronous programming paradigms
  • Experience building crawlers
  • Proficient with Linux/Unix command-line utilities, Linux system administration, architecture, and resource management
  • Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness)
People Thrive Here Who Can
  • Thrive in a fast-paced environment and work independently
  • Work effectively in a remote setting and proactively manage blockers
  • Communicate clearly in writing, both on Slack/chat and in documents
  • Write data design docs (pipeline design, dataflow, schema design)
  • Scope and break down projects, and communicate progress and blockers effectively with your manager, team, and stakeholders
Some Nice To Haves
  • Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering
  • Experience in network architecture and debugging network traffic
  • Experience with Apache Spark
  • Experience with SQL, including advanced queries
  • Experience with streaming data platforms (e.g., Kafka)
  • Experience with cloud services (AWS preferred; GCP, Azure)
  • Experience working with Databricks (including Delta Live Tables, data lakehouse patterns)
  • Knowledge of modern data design and storage patterns (incremental updating, partitioning, backfills)
  • Experience with data warehousing (e.g., Databricks, Snowflake, Redshift, BigQuery)
  • Understanding of modern data storage formats and tools (Parquet, ORC, Avro, Delta Lake)
Benefits
  • Stock
  • Unlimited paid time off
  • Health, fitness, and office stipends
  • The permanent ability to work wherever and however you want

Comp: $160K - $200K

People Data Labs is an equal opportunity employer. We do not discriminate on the basis of race, color, religion, sex, gender identity, sexual orientation, national origin, age, disability, veteran status, or any other legally protected status.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable laws and regulations.
