Principal Data Engineer - Remote US

Seamless.AI

United States

Remote

USD 100,000 - 125,000

Full time

30+ days ago

Job summary

Seamless.AI is seeking a Principal Data Engineer to lead the design and maintenance of scalable ETL pipelines. The role demands strong proficiency in Python and AWS Glue, along with extensive experience in data acquisition and transformation methodologies. You'll be at the forefront of optimizing ETL processes, ensuring data accuracy, and collaborating with cross-functional teams to drive data integration strategies. If you're a self-starter with a passion for data and a knack for problem-solving, this role offers the chance to make a significant impact.

Qualifications

  • 7+ years as a Data Engineer focusing on ETL processes and data integration.
  • Bachelor's degree in Computer Science or a related field, or equivalent work experience.

Responsibilities

  • Design and maintain scalable ETL pipelines for data acquisition and transformation.
  • Collaborate with teams to develop efficient data integration strategies.

Skills

Python
AWS Glue
Spark
SQL
Data Modeling
Data Governance
Data Security
Machine Learning
Data Deduplication
Analytical Skills

Education

Bachelor's degree in Computer Science, or equivalent years of work experience

Tools

AWS Glue
Spark
Python Libraries (pandas, NumPy, PySpark)

Job description

At Seamless.AI, we’re seeking a highly skilled and experienced Principal Data Engineer with expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies. The ideal candidate will have a proven track record in data acquisition and transformation, as well as experience working with large data sets and applying methodologies for data matching and aggregation. Strong organizational skills and the ability to work independently as a self-starter are essential for this role.

Responsibilities:
  • Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources into our data ecosystem.
  • Collaborate with cross-functional teams to understand data requirements and develop efficient data acquisition and integration strategies.
  • Implement data transformation logic using Python and other relevant programming languages and frameworks.
  • Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs.
  • Optimize and tune ETL processes for improved performance and scalability, particularly with large data sets.
  • Apply methodologies and techniques for data matching, deduplication, and aggregation to ensure data accuracy and quality (a brief illustration follows this list).
  • Implement and maintain data governance practices to ensure compliance, data security, and privacy.
  • Collaborate with the data engineering team to explore and adopt new technologies and tools that enhance the efficiency and effectiveness of data processing.
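As a rough illustration of the deduplication and aggregation work above, the sketch below collapses duplicate records on a normalized email key and aggregates counts per company in PySpark. It is a minimal sketch only; the S3 paths and the contacts, email, updated_at, and company_id names are hypothetical, not taken from this posting.

# Minimal PySpark sketch: deduplicate records on a normalized key,
# then aggregate for a downstream load. All paths and column names
# are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dedupe-sketch").getOrCreate()

# Hypothetical source: raw contact records landed in S3.
contacts = spark.read.parquet("s3://example-bucket/raw/contacts/")

# Normalize the matching key so trivially different values collide.
normalized = contacts.withColumn("email_key", F.lower(F.trim(F.col("email"))))

# Keep only the most recently updated record per key.
w = Window.partitionBy("email_key").orderBy(F.col("updated_at").desc())
deduped = (
    normalized
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Aggregate: contact counts per company, written for downstream consumers.
by_company = deduped.groupBy("company_id").agg(F.count("*").alias("contact_count"))
by_company.write.mode("overwrite").parquet("s3://example-bucket/curated/contact_counts/")

In an AWS Glue job, the same logic would typically run against tables registered in the Glue Data Catalog, with the session obtained from a GlueContext rather than built directly.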
Skillset:
  • Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark).
  • Hands-on experience with AWS Glue or similar ETL tools and technologies.
  • Solid understanding of data modeling, data warehousing, and data architecture principles.
  • Expertise in working with large data sets, data lakes, and distributed computing frameworks.
  • Experience developing and training machine learning models.
  • Strong proficiency in SQL.
  • Familiarity with data matching, deduplication, and aggregation methodologies.
  • Experience with data governance, data security, and privacy practices.
  • Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues.
  • Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
  • Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously.
Education and Requirements:
  • Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent years of work experience.
  • 7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration.
  • Professional experience with Spark and AWS pipeline development required.

Similar jobs

Principal Data Engineer - Remote US

ZipRecruiter

Columbus

Remote

USD 120,000 - 180,000

Full time

Yesterday

Lead Data Engineer – Big Data

S&P Global

Dayton

On-site

USD 118,000 - 238,000

Full time

Yesterday

Head Data Engineer

Netrascale

Town of Texas

Remote

USD 100,000 - 150,000

Part time

30+ days ago

Lead Data Scientist

Upgrade, Inc.

On-site

USD 120,000 - 180,000

Full time

13 days ago

Principal Data Engineer US - Remote

Apam 91

Remote

USD 100,000 - 125,000

Full time

30+ days ago

Lead Data Scientist (Remote)

Lensa

Great Falls Crossing

Remote

USD 113,000 - 225,000

Full time

30+ days ago

Principal Machine Learning Engineer - Large Scale Embedding - (Remote - US)

Jobgether

Remote

USD 120,000 - 180,000

Full time

30+ days ago

Principal Data Engineer - Remote US

JOB HR

Remote

USD 90,000 - 150,000

Full time

30+ days ago

Lead Machine Learning Engineer

Novaprime

New York

Remote

USD 90,000 - 160,000

Full time

30+ days ago