
Principal Data Engineer - Remote US

Tbwa Chiat/Day Inc

United States

Remote

USD 90,000 - 150,000

Full time

30+ days ago


Job summary

An innovative firm is on the lookout for a Principal Data Engineer to spearhead the design and development of robust ETL pipelines. This position offers the chance to work with cutting-edge technologies like AWS Glue and Spark, while collaborating with cross-functional teams to enhance data acquisition and integration strategies. If you are passionate about data engineering and thrive in a dynamic environment, this is your opportunity to make a significant impact on data-driven projects. Join a team that values creativity and innovation, and help shape the future of data processing in an exciting and supportive atmosphere.

Qualifications

  • Bachelor's degree in Computer Science or a related field is required.
  • 7+ years of experience focused on ETL processes and data integration.

Responsibilities

  • Design and maintain scalable ETL pipelines for data acquisition.
  • Collaborate with teams to develop data integration strategies.
  • Implement data transformation logic using Python and AWS Glue.

Skills

Python
AWS Glue
Spark
SQL
Data Governance
Data Architecture
Data Modeling
Machine Learning
Analytical Skills
Communication Skills

Education

Bachelor's degree in Computer Science or a related field
7+ years of experience as a Data Engineer

Tools

AWS Glue
Spark
ETL Tools

Job description

At Seamless.AI, we’re seeking a highly skilled and experienced Principal Data Engineer with expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies. The ideal candidate will have a proven track record in data acquisition and transformation, as well as experience working with large data sets and applying methodologies for data matching and aggregation. Strong organizational skills and the ability to work independently as a self-starter are essential for this role.
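To make the day-to-day work concrete, here is a minimal sketch of the kind of Python/Spark ETL step described above. It is illustrative only: the S3 paths, column names, and job name are hypothetical assumptions, not details of Seamless.AI's actual pipelines or Glue configuration.

    # Minimal ETL sketch (hypothetical paths and columns) using plain PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("contact-etl-sketch").getOrCreate()

    # Extract: read raw JSON records from a hypothetical landing zone.
    raw = spark.read.json("s3://example-landing-zone/contacts/")

    # Transform: normalize email addresses and drop rows without one.
    cleaned = (
        raw.withColumn("email", F.lower(F.trim(F.col("email"))))
           .filter(F.col("email").isNotNull())
    )

    # Load: write the curated data as partitioned Parquet.
    cleaned.write.mode("overwrite").partitionBy("ingest_date").parquet(
        "s3://example-curated-zone/contacts/"
    )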

Responsibilities:
  • Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources into our data ecosystem.
  • Collaborate with cross-functional teams to understand data requirements and develop efficient data acquisition and integration strategies.
  • Implement data transformation logic using Python and other relevant programming languages and frameworks.
  • Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs.
  • Optimize and tune ETL processes for improved performance and scalability, particularly with large data sets.
  • Apply methodologies and techniques for data matching, deduplication, and aggregation to ensure data accuracy and quality (a brief sketch follows this list).
  • Implement and maintain data governance practices to ensure compliance, data security, and privacy.
  • Collaborate with the data engineering team to explore and adopt new technologies and tools that enhance the efficiency and effectiveness of data processing.
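The deduplication and aggregation sketch referenced in the list above, again in PySpark. The dataframe, key column, and timestamp column are assumptions for illustration, not the team's actual matching logic.

    # Hypothetical deduplication/aggregation step: keep the latest row per email,
    # then summarize distinct contacts per company domain.
    from pyspark.sql import SparkSession, Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dedup-sketch").getOrCreate()
    records = spark.read.parquet("s3://example-curated-zone/contacts/")

    # Deduplicate: rank rows per email by recency and keep only the newest.
    latest_first = Window.partitionBy("email").orderBy(F.col("updated_at").desc())
    deduped = (
        records.withColumn("rn", F.row_number().over(latest_first))
               .filter(F.col("rn") == 1)
               .drop("rn")
    )

    # Aggregate: distinct contacts per company domain, e.g. for a data-quality report.
    summary = deduped.groupBy("company_domain").agg(
        F.countDistinct("email").alias("contact_count")
    )
    summary.show()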
Skillset:
  • Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark).
  • Hands-on experience with AWS Glue or similar ETL tools and technologies.
  • Solid understanding of data modeling, data warehousing, and data architecture principles.
  • Expertise in working with large data sets, data lakes, and distributed computing frameworks.
  • Experience developing and training machine learning models.
  • Strong proficiency in SQL.
  • Familiarity with data matching, deduplication, and aggregation methodologies.
  • Experience with data governance, data security, and privacy practices.
  • Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues.
  • Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
  • Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously.
Education and Requirements:
  • Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent years of work experience.
  • 7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration.
  • Professional experience with Spark and AWS pipeline development required.
