Principal Data Engineer 1745

MeridianLink

United States

On-site

USD 152,000 - 200,000

Full time

21 days ago

Job summary

An established industry player seeks a Principal Data Engineer to lead the development of data products and processing pipelines. In this role, you will design and implement ETL processes, ensuring data integrity while leveraging advanced technologies such as AI and machine learning. You will also collaborate with cross-functional teams to translate business requirements into reliable data workflows. This innovative firm values work-life balance and fosters a culture of growth, offering flexible work arrangements and comprehensive benefits. Join a team where your contributions will shape the future of data engineering and analytics.

Benefits

Stock options or equity-based awards
Medical, dental, vision, life, and disability insurance
Flexible paid time off
Paid holidays
401(k) plan with company match
Remote work

Qualifications

  • 6-8 years of experience in data engineering with a focus on financial systems.
  • Ability to assess unusual circumstances and use analytical techniques.

Responsibilities

  • Design and maintain ETL pipelines for data processing from various sources.
  • Lead complex SQL query writing to support analytics needs.

Skills

Python
SQL
Apache Spark
Data Analysis
Problem-Solving

Education

Bachelor's degree in Computer Science
Master's degree in Computer Science

Tools

Databricks
Snowflake
Redshift
BigQuery
Airflow

Job description

Position Summary:

The Principal Data Engineer, part of the Data Engineering subfamily in the Development family, is responsible for developing and maintaining MeridianLink's data products. The Data Engineering subfamily designs, builds, implements, and maintains data processes throughout the organization. The Principal Data Engineer will design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources, developing robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data more quickly and easily. This professional level 4 role also writes complex SQL queries to support analytics needs.

Expected Duties:

  1. Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources.
  2. Lead the writing of complex SQL queries to support analytics needs.
  3. Develop technical tools and programming that leverage artificial intelligence, machine learning, and big-data techniques to cleanse, organize, and transform data and to maintain, defend, and update data structures and integrity on an automated basis.
  4. Evaluate and recommend tools and technologies for data infrastructure and processing. Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements into technical specifications and coded data pipelines.
  5. Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, Databricks, Spark, Delta, and APIs. Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses.
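To make duty 1 concrete, a minimal extract-transform-load step might look like the sketch below. This is illustrative only, not MeridianLink's actual stack: it uses Python's built-in sqlite3 in place of a production warehouse, and all table and column names (`raw_loans`, `amount_cents`, `loans`) are hypothetical.

```python
# Illustrative ETL sketch: extract raw records, transform them into an
# analysis-friendly shape, and load them into a target table.
# sqlite3 stands in for a real warehouse; all names are hypothetical.
import sqlite3

def run_etl(conn):
    # Extract: pull raw records from a hypothetical source table.
    rows = conn.execute("SELECT id, amount_cents FROM raw_loans").fetchall()
    # Transform: convert cents to dollars and drop invalid rows.
    cleaned = [(i, c / 100.0) for i, c in rows if c is not None and c >= 0]
    # Load: write the cleaned rows into the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS loans (id INTEGER, amount_usd REAL)")
    conn.executemany("INSERT INTO loans VALUES (?, ?)", cleaned)
    return len(cleaned)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_loans (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_loans VALUES (?, ?)",
                 [(1, 150000), (2, None), (3, 25000)])
print(run_etl(conn))  # → 2 (the two valid rows are loaded)
```

In practice the same extract/transform/load shape is expressed with Spark or Databricks jobs orchestrated by a tool such as Airflow, as the posting's tools list suggests.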

Qualifications: Knowledge, Skills, and Abilities

  1. Ability to assess unusual circumstances and use sophisticated analytical and problem-solving techniques to identify the cause.
  2. Ability to enhance relationships and networks with senior internal/external partners who are not familiar with the subject matter, often requiring persuasion.
  3. Architect and scale our modern data platform to support real-time and batch processing for financial forecasting, risk analytics, and customer insights.
  4. Enforce high standards for data governance, quality, lineage, and compliance.
  5. Partner with stakeholders across engineering, finance, sales, and compliance to translate business requirements into reliable data models and workflows.
  6. Evaluate emerging technologies and lead POCs that shape the future of our data stack.
  7. Champion a culture of security, automation, and continuous delivery in all data workflows.

Technical Qualifications:

  1. Deep expertise in Python, SQL, and distributed processing and warehousing platforms such as Apache Spark, Databricks, Snowflake, Redshift, and BigQuery.
  2. Proven experience with cloud-based data platforms (preferably AWS or Azure).
  3. Hands-on experience with data orchestration tools (e.g., Airflow, dbt) and data warehouses (e.g., Databricks, Snowflake, Redshift, BigQuery).
  4. Strong understanding of data security, privacy, and compliance within a financial services context.
  5. Experience working with structured and semi-structured data (e.g., Delta, JSON, Parquet, Avro) at scale.
  6. Familiarity with modeling datasets in Salesforce, Netsuite, and Anaplan to solve business use cases is required.
  7. Previous experience democratizing data at scale for the enterprise is a strong plus.
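Qualification 5 above concerns semi-structured data at scale. As a small illustration of what that work involves, the sketch below flattens nested JSON records into tabular rows using only Python's standard library; the record shape and field names (`id`, `meta`, `region`, `score`) are hypothetical.

```python
# Illustrative sketch: flattening semi-structured JSON into tabular rows.
# Field names are hypothetical; at scale this is typically done with
# Spark or a warehouse's semi-structured column types.
import json

raw = ('[{"id": 1, "meta": {"region": "US", "score": 0.9}},'
       ' {"id": 2, "meta": {"region": "EU"}}]')

def flatten(record):
    # Promote nested fields to top level; missing keys become None.
    meta = record.get("meta", {})
    return {"id": record["id"],
            "region": meta.get("region"),
            "score": meta.get("score")}

rows = [flatten(r) for r in json.loads(raw)]
print(rows)
```

Formats such as Parquet and Avro named in the qualification serve the same purpose at scale: they impose a queryable schema on data that arrives in nested, semi-structured form.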

Educational Qualifications and Work Experience:

  1. Bachelor's or master's degree in computer science, engineering, or a related field.
  2. 6-8 years of experience in data engineering, with a strong focus on financial systems on SaaS platforms.

MeridianLink has a wonderful culture where people value the work they do and appreciate each other for their contributions. We develop our employees so they can grow professionally by preferring to promote from within. We have an open-door policy with direct access to executives; we want to hear your ideas and what you think. Our company believes that to be productive in the long term, we must have a genuine work-life balance. We understand that employees have families and full lives outside of the office. To that end, we honor their personal commitments.

MeridianLink is an Equal Opportunity Employer. We do not discriminate based on race, religion, color, sex, age, national origin, disability, or any other characteristic protected by applicable law.

MeridianLink runs a comprehensive background check, credit check, and drug test as part of our offer process.

Salary range of $152,200 - $200,000. [It is not typical for offers to be made at or near the top of the range.] The actual salary will be determined based on experience and other job-related factors permitted by law, including geographical location.

MeridianLink offers:

  • Stock options or other equity-based awards
  • Insurance coverage (medical, dental, vision, life, and disability)
  • Flexible paid time off
  • Paid holidays
  • 401(k) plan with company match
  • Remote work

All compensation and benefits are subject to the terms and conditions of the underlying plans or programs, as applicable and as may be amended, terminated, or superseded from time to time.

#LI-REMOTE
