Senior Data Engineer
Goldbelly
United States
Remote
USD 140,000 - 190,000
Full time

Job summary

A leading food technology company is seeking a Data Engineer to enhance data pipelines and infrastructure. You will collaborate with various teams to design scalable data systems, ensuring data quality and governance. The role requires 3+ years of experience, expert-level SQL, and familiarity with AWS. Salary range is $140,000 - $190,000 plus equity and benefits.

Benefits

Equity (incentive stock options)
Comprehensive benefits package

Qualifications

  • 3+ years of experience in data engineering with large-scale data systems.
  • Expert-level SQL; proficiency in Python preferred.
  • Familiarity with BI platforms and data visualization tools.

Responsibilities

  • Collaborate with engineers and analysts to optimize data pipelines.
  • Design and maintain scalable data systems for analytics.
  • Ensure data quality, governance, and security best practices.

Skills

SQL
Python
Dimensional and normalized data modeling
Cloud-based data infrastructure (AWS, Snowflake)
Data ingestion and ETL pipelines (Fivetran)
Event-driven architectures (Confluent Kafka)
Version control with Git

Tools

dbt Cloud
Metabase
Sigma Computing

Job description

At Goldbelly, we believe food brings people together. We connect people with their greatest culinary desires within and beyond local communities. We empower food makers of all sizes and deliver their passion to food-lovers around the country.

As a Data Engineer, you will enhance how millions of customers connect with both novel and nostalgic food experiences on our platform. By partnering with business leaders and leveraging state-of-the-art data engineering and analytics resources, you will play a key role in transforming our data infrastructure and pipelines.

Responsibilities
  • Collaborate closely with full stack engineers, machine learning engineers, data analysts, and product managers to design and optimize data pipelines and ETL processes that drive business decisions.
  • Design, develop, and maintain robust, scalable data systems to ensure seamless data integration and high availability.
  • Improve our data infrastructure by optimizing ingestion, storage, and retrieval processes to support analytics, reporting, and machine learning applications.
  • Help define and ensure data quality, governance, and security best practices are followed across all data workflows.
  • Build and maintain data models that support business insights and operational reporting.
  • Apply software engineering best practices, including CI/CD, testing, and version control, to data engineering workflows to ensure reliability and maintainability.
Qualifications
  • 3+ years of experience in data engineering working on large-scale data systems.
  • Expert-level SQL required; proficiency in Python preferred.
  • Skilled in dimensional and normalized data modeling and transformation using tools like dbt Cloud.
  • Strong understanding of cloud-based data infrastructure (AWS and Snowflake preferred).
  • Experience with BI platforms like Metabase and Sigma Computing for data visualization and dashboards.
  • Experience with data ingestion and ETL pipelines like Fivetran.
  • Familiarity with event-driven architectures and real-time data streaming using tools like Confluent Kafka.
  • Proficient in version control with Git and collaborative development on GitHub or GitLab.

Salary range: $140,000 - $190,000 base (dependent on experience level and interview performance) + equity (incentive stock options, vested over 4 years) + benefits.
