
Principal Data Engineer - Remote US

ZipRecruiter · Columbus (OH) · Remote

USD 90,000 - 150,000 · Full time

Posted 2 days ago


Job summary

An innovative company is seeking a Principal Data Engineer. The role involves designing and maintaining robust ETL pipelines using technologies such as Python and AWS Glue, collaborating with cross-functional teams to ensure efficient data acquisition and integration, and applying methodologies for data matching and aggregation. With a focus on data governance and security, the position offers the opportunity to work with large data sets and contribute to the data ecosystem of a rapidly growing organization. Self-starters with a passion for data engineering are encouraged to apply.

Qualifications

  • Bachelor's degree in Computer Science or related field required.
  • 7+ years of experience focusing on ETL processes and data integration.

Responsibilities

  • Design and maintain scalable ETL pipelines for data acquisition.
  • Collaborate with teams to develop data integration strategies.
  • Implement data transformation logic using Python.

Skills

Python
AWS Glue
Spark
SQL
Data Governance
Data Architecture
Data Modeling
Problem-Solving
Communication Skills
Self-Motivation

Education and Experience

Bachelor's degree in Computer Science
7+ years of experience as a Data Engineer

Tools

AWS Glue
PySpark

Job description

The Opportunity:

At Seamless.AI, we're seeking a highly skilled and experienced Principal Data Engineer with expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies. The ideal candidate will have a proven track record in data acquisition and transformation, as well as experience working with large data sets and applying methodologies for data matching and aggregation. Strong organizational skills and the ability to work independently as a self-starter are essential for this role.

Responsibilities:

  1. Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources into our data ecosystem.
  2. Collaborate with cross-functional teams to understand data requirements and develop efficient data acquisition and integration strategies.
  3. Implement data transformation logic using Python and other relevant programming languages and frameworks.
  4. Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs.
  5. Optimize and tune ETL processes for improved performance and scalability, particularly with large data sets.
  6. Apply methodologies and techniques for data matching, deduplication, and aggregation to ensure data accuracy and quality.
  7. Implement and maintain data governance practices to ensure compliance, data security, and privacy.
  8. Collaborate with the data engineering team to explore and adopt new technologies and tools that enhance the efficiency and effectiveness of data processing.

Skillset:

  • Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark).
  • Hands-on experience with AWS Glue or similar ETL tools and technologies.
  • Solid understanding of data modeling, data warehousing, and data architecture principles.
  • Expertise in working with large data sets, data lakes, and distributed computing frameworks.
  • Experience developing and training machine learning models.
  • Strong proficiency in SQL.
  • Familiarity with data matching, deduplication, and aggregation methodologies.
  • Experience with data governance, data security, and privacy practices.
  • Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues.
  • Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
  • Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously.

Education and Requirements:

  • Bachelor's degree in Computer Science, Information Systems, or related fields, or equivalent work experience.
  • 7+ years of experience as a Data Engineer, focusing on ETL processes and data integration.
  • Professional experience with Spark and AWS pipeline development is required.

Since 2015, Seamless.AI has helped sales teams maximize revenue with the world's first real-time B2B search engine. As one of Ohio's fastest-growing companies, we've earned top industry accolades, including G2's 2025 Best Software Products (#1 Highest Satisfaction Product), Purpose Jobs' 2024 Best Places to Work, and LinkedIn's Top 50 Tech Startups (2020, 2022, 2023). We are committed to a diverse, inclusive workplace and do not discriminate based on race, gender, age, disability, or other protected statuses. Visa sponsorship is not available; applicants must be authorized to work in the U.S.


Similar jobs

  • Lead Data Scientist · Humana · Columbus · Remote · USD 142,000 - 196,000 · 2 days ago
  • Lead Data Scientist · Humana Inc · Montana · Remote · USD 142,000 - 196,000 · Today
  • Principal Data Engineer · Bentley iTwin Ventures · Exton · Remote · USD 90,000 - 150,000 · 5 days ago
  • Data Engineer Expert-Level · Globant · Remote · USD 90,000 - 150,000 · 6 days ago
  • Lead Data Engineer - Remote US Work From Home - USA · Convera Holdings, LLC. · Snowflake · Remote · USD 90,000 - 150,000 · 25 days ago
  • Principal Data Engineer - Remote US · Tbwa Chiat/Day Inc · Remote · USD 90,000 - 150,000 · 30+ days ago
  • Principal Data Engineer - Remote US · Seamless.AI · Remote · USD 100,000 - 125,000 · 30+ days ago
  • Lead Data Scientist I, Lifetime Value · Ohiox · Remote · USD 140,000 - 175,000 · 21 days ago
  • Lead Data Scientist I, Lifetime Value · Root · Remote · USD 140,000 - 175,000 · 21 days ago