
Data Engineer

MeridianLink

United States

On-site

USD 80,000 - 120,000

Full time

Job summary

A technology company is seeking a skilled Data Engineer to enhance its data pipeline architecture and support critical data initiatives. The ideal candidate has 2-4 years of data engineering experience, strong skills in Python, Spark, and SQL, and will be responsible for developing large-scale data solutions. This position offers a dynamic work environment and opportunities for collaboration across teams.

Qualifications

  • 2-4 years of professional data engineering and data warehousing experience.
  • Strong implementation experience in Python, Parquet, and Spark.
  • Ability to write/debug complex SQL queries.

Responsibilities

  • Design and operate large-scale data pipelines.
  • Improve and automate internal processes.
  • Integrate data sources to meet business requirements.

Skills

Python
Spark
SQL
ETL/ELT processes
Data warehousing

Tools

Azure Databricks
Delta Lake
Databricks Data Warehouse
BI visualization tools (Sisense)
CI/CD tools (GitLab, Jenkins)

Job description

We are looking for an accomplished Data Engineer to join our fast-growing Analytics team. This role will be responsible for expanding and improving our data and data pipeline architecture, as well as optimizing data flow and master data management (MDM) for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.

The Data Engineer will support database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data pipelines to support our next generation of products and data initiatives.

Responsibilities
  • Design, develop, and operate large-scale data pipelines to support internal and external consumers (a representative sketch follows this list)
  • Improve and automate internal processes
  • Integrate data sources to meet business requirements
  • Write robust, maintainable, well-documented code
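
Illustrative example (not part of the original posting): a minimal PySpark sketch of the kind of ingest-transform-load pipeline the responsibilities above describe. All paths, table names, and columns are hypothetical assumptions, not this team's actual schema.

    # Minimal PySpark sketch of an ingest-transform-load step.
    # Paths, table names, and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

    # Ingest: read raw Parquet files from an assumed landing path.
    raw = spark.read.parquet("/mnt/landing/orders/")

    # Transform: deduplicate, derive a date column, drop bad rows.
    clean = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Load: append to a partitioned Delta table for downstream consumers.
    (clean.write.format("delta")
          .mode("append")
          .partitionBy("order_date")
          .saveAsTable("analytics.orders"))
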
Qualifications
  • 2-4 years of professional data engineering and data warehousing experience
  • Strong implementation experience in Python, Parquet, Spark, Azure Databricks, Delta Lake, Databricks Data Warehouse, Databricks Workflows, Delta Sharing, and Unity Catalog
  • SQL development knowledge: stored procedures, triggers, jobs, indexes, partitioning, pruning, etc. (partitioning and pruning are illustrated in the sketch after this list)
  • Ability to write and debug complex SQL queries
  • Knowledge of ETL/ELT and data warehousing techniques and best practices
  • Experience building, maintaining, and scaling ETL/ELT processes and infrastructure
  • Implementation experience with a variety of data modeling techniques
  • Implementation experience with a BI visualization tool (Sisense is a plus)
  • Experience with CI/CD tools (GitLab and Jenkins preferred)
  • Experience with cloud infrastructure (Azure strongly preferred)
  • Experience working in a fast-paced product environment, with a focus on getting the job done with minimal tech debt
  • Experience with UI development frameworks such as JavaScript, Django, and React is a plus
  • Ability to work with a variety of ingestion patterns, such as APIs and SQL Server
  • Knowledge of Master Data Management
  • Prior financial industry experience is a plus
  • Ability to navigate ambiguity and pivot with ease as business priorities change
  • Strong communication, negotiation, and estimation skills
  • Team player who collaborates well with others
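
Illustrative example (not part of the original posting): a short sketch of the partitioning and partition-pruning items above, using Spark SQL over Delta Lake. Table and column names are hypothetical assumptions.

    # Sketch: partitioning and partition pruning in Spark SQL on Delta Lake.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql_examples").getOrCreate()

    # A Delta table physically laid out by event_date partitions.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS analytics.events (
            event_id   BIGINT,
            user_id    BIGINT,
            event_date DATE,
            payload    STRING
        )
        USING DELTA
        PARTITIONED BY (event_date)
    """)

    # Filtering on the partition column lets the engine prune
    # partitions and skip unneeded files entirely.
    recent = spark.sql("""
        SELECT user_id, COUNT(*) AS events
        FROM analytics.events
        WHERE event_date >= date_sub(current_date(), 7)
        GROUP BY user_id
    """)
    recent.show()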