Principal Data Engineer - Remote US

Seamless.AI

United States

Remote

USD 100,000 - 125,000

Full time

30+ days ago

Job summary

An established industry player is seeking a Principal Data Engineer to lead the design and maintenance of scalable ETL pipelines. The role demands strong proficiency in Python and AWS Glue, along with extensive experience in data acquisition and transformation methodologies. You'll optimize ETL processes, ensure data accuracy, and collaborate with cross-functional teams to drive data integration strategy. If you're a self-starter with a passion for data and a knack for problem-solving, this is an opportunity to make a significant impact in a dynamic environment.

Qualifications

  • 7+ years as a Data Engineer focusing on ETL processes and data integration.
  • Bachelor's degree in Computer Science or a related field, or equivalent work experience.

Responsibilities

  • Design and maintain scalable ETL pipelines for data acquisition and transformation.
  • Collaborate with teams to develop efficient data integration strategies.

Skills

Python
AWS Glue
Spark
SQL
Data Modeling
Data Governance
Data Security
Machine Learning
Data Deduplication
Analytical Skills

Education

Bachelor's degree in Computer Science (or equivalent years of work experience)

Tools

AWS Glue
Spark
Python Libraries (pandas, NumPy, PySpark)

Job description

At Seamless.AI, we’re seeking a highly skilled and experienced Principal Data Engineer with expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies. The ideal candidate will have a proven track record in data acquisition and transformation, as well as experience working with large data sets and applying methodologies for data matching and aggregation. Strong organizational skills and the ability to work independently as a self-starter are essential for this role.

Responsibilities:
  • Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources into our data ecosystem.
  • Collaborate with cross-functional teams to understand data requirements and develop efficient data acquisition and integration strategies.
  • Implement data transformation logic using Python and other relevant programming languages and frameworks.
  • Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs (a minimal sketch of such a job follows this list).
  • Optimize and tune ETL processes for improved performance and scalability, particularly with large data sets.
  • Apply methodologies and techniques for data matching, deduplication, and aggregation to ensure data accuracy and quality.
  • Implement and maintain data governance practices to ensure compliance, data security, and privacy.
  • Collaborate with the data engineering team to explore and adopt new technologies and tools that enhance the efficiency and effectiveness of data processing.
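
To make the AWS Glue responsibility above concrete, here is a minimal sketch of the kind of Glue ETL job (PySpark) the role describes. It is illustrative only: the catalog database, table, field names, and S3 bucket below are hypothetical placeholders, not actual Seamless.AI resources.

```python
# Minimal AWS Glue ETL job sketch (PySpark). All names below are
# hypothetical placeholders chosen for illustration.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="contacts_db",       # hypothetical catalog database
    table_name="raw_contacts",    # hypothetical source table
)

# Transform: keep and rename only the fields downstream consumers need.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("email_address", "string", "email", "string"),
        ("company_name", "string", "company", "string"),
    ],
)

# Load: write the result to S3 in a columnar format.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/clean/contacts/"},
    format="parquet",
)
job.commit()
```
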
Skillset:
  • Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark).
  • Hands-on experience with AWS Glue or similar ETL tools and technologies.
  • Solid understanding of data modeling, data warehousing, and data architecture principles.
  • Expertise in working with large data sets, data lakes, and distributed computing frameworks.
  • Experience developing and training machine learning models.
  • Strong proficiency in SQL.
  • Familiarity with data matching, deduplication, and aggregation methodologies (illustrated in the sketch after this list).
  • Experience with data governance, data security, and privacy practices.
  • Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues.
  • Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
  • Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously.
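
As a small illustration of the matching, deduplication, and aggregation skills listed above, here is a toy pandas sketch. The column names, the key-normalization rule, and the keep-the-highest-score survivorship policy are assumptions made for the example, not Seamless.AI's actual methodology.

```python
# Toy dedup/aggregation sketch in pandas. Column names, the normalization
# rule, and the keep-highest-score policy are assumptions for illustration.
import pandas as pd

records = pd.DataFrame({
    "email":   ["a@x.com", " A@X.COM", "b@y.com"],
    "company": ["Acme", "Acme", "Beta"],
    "score":   [0.7, 0.9, 0.5],
})

# Matching: normalize the key so near-duplicate records collide.
records["email_key"] = records["email"].str.strip().str.lower()

# Deduplication: keep the highest-scoring record per normalized key.
deduped = (
    records.sort_values("score", ascending=False)
           .drop_duplicates(subset="email_key", keep="first")
)

# Aggregation: roll up the surviving records per company.
summary = deduped.groupby("company", as_index=False).agg(
    records_kept=("email_key", "size"),
    avg_score=("score", "mean"),
)
print(summary)
```
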
Education and Requirements:
  • Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent years of work experience.
  • 7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration.
  • Professional experience with Spark and AWS pipeline development required.

Similar jobs

Principal Data Engineer - Remote US

ZipRecruiter

Columbus

Remote

USD 90,000 - 150,000

2 days ago

Principal Data Engineer

Bentley iTwin Ventures

Exton

Remote

USD 90,000 - 150,000

5 days ago

Data Engineer Expert-Level

Globant

Remote

USD 90,000 - 150,000

6 days ago

Principal Data Engineer US - Remote

Apam 91

Remote

USD 100,000 - 125,000

30+ days ago

Lead Data Engineer - Remote US Work From Home - USA

Convera Holdings, LLC.

Snowflake

Remote

USD 90,000 - 150,000

25 days ago

Principal Machine Learning Engineer - Large Scale Embedding - (Remote - US)

Jobgether

Remote

USD 120,000 - 180,000

30+ days ago

Principal Data Engineer - Remote US

Tbwa Chiat/Day Inc

Remote

USD 90,000 - 150,000

30+ days ago

Lead Data Engineers – Azure/ Databricks/ Snowflake

Exusia, Inc.

Snowflake

Remote

USD 80,000 - 130,000

30+ days ago

Principal Data Engineer

Bentley Systems

Exton

Remote

USD 90,000 - 150,000

30+ days ago