Data Engineer (ADF, Orchestration, Data Flows, Snowflake) | REMOTE

Skill Mine

Minneapolis (MN)

Remote

USD 90,000 - 120,000

Full time

Job summary

A leading company is seeking a Data Engineer with extensive experience in Azure Data Factory and Snowflake. This role involves developing ETL pipelines, optimizing data processes, and ensuring data quality. The ideal candidate has a strong background in data warehousing, preferably with healthcare experience. The position is fully remote, offering the flexibility to work from anywhere in the United States.

Qualifications

  • 5+ years of data engineering experience with a focus on data warehousing.
  • 2+ years of experience creating pipelines in Azure Data Factory (ADF).
  • 3+ years of experience creating stored procedures with SQL.

Responsibilities

  • Develop and manage effective working relationships with other departments.
  • Assist in the overall architecture of the ETL Design.
  • Investigate and mine data to identify potential issues within ETL pipelines.

Skills

Data Analysis
Problem Solving
Analytical Skills

Education

Bachelor's or Advanced Degree in Information Technology

Tools

Azure Data Factory
Snowflake
Informatica PowerCenter
SQL Server
Oracle PL/SQL

Job description

  • Ideal candidates come from an SSIS/SQL background and have transitioned to Azure Data Factory for data integration, with Snowflake as both source and target endpoint
  • Azure has many components, such as Fabric, Databricks, ADF, and PySpark
  • The client needs candidates who can use ADF for both orchestration and integration
  • Many candidates claim ADF experience but, for orchestration, really just use ADF to trigger PySpark jobs
  • Candidates must also be advanced at building the integration through data flow logic (see the sketch after this list)
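
To make the orchestration-versus-integration distinction concrete, below is a minimal sketch of the orchestration half: triggering and monitoring an ADF pipeline run from Python with the azure-mgmt-datafactory SDK. The subscription, resource group, factory, and pipeline names are hypothetical placeholders, and the integration logic is assumed to live in the pipeline's Mapping Data Flow activities, not in this script.

```python
# Minimal sketch: trigger and monitor an ADF pipeline run from Python.
# All names below are hypothetical placeholders; the transformation
# (integration) logic is assumed to live in the pipeline's Mapping Data
# Flow activities, so this script only handles orchestration.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # hypothetical
RESOURCE_GROUP = "rg-data-platform"    # hypothetical
FACTORY_NAME = "adf-edw"               # hypothetical
PIPELINE_NAME = "pl_load_claims"       # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline run (orchestration).
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2025-05-12"},
)

# Poll until the run leaves its in-flight states.
status = "Queued"
while status in ("Queued", "InProgress"):
    time.sleep(30)
    status = client.pipeline_runs.get(
        RESOURCE_GROUP, FACTORY_NAME, run.run_id
    ).status

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```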

100% Telecommute

Work Hours: 9am-5pm CST

Project:

  • As a member of the Data Management team, the Data Engineer supports the Alabama EDS by developing and maintaining workflows, identifying and resolving data quality issues, and optimizing processes to improve performance.
  • The Data Engineer will also support intrastate agencies by monitoring automated data extracts and working directly with state partners to create new extracts based on business specifications.

Responsibilities:

  • Develop and manage effective working relationships with other departments, groups, and personnel with whom work must be coordinated or interfaced
  • Communicate efficiently with the ETL architect, understanding the requirements and business processes, in order to transform the data in a way that is geared toward the needs of end users
  • Assist in the overall architecture of the ETL Design, and proactively provide inputs in designing, implementing, and automating the ETL flows
  • Investigate and mine data to identify potential issues within ETL pipelines, notify end-users and propose adequate solutions
  • Develop ETL pipelines and data flows into and out of the data warehouse using a combination of Azure Data Factory and Snowflake toolsets
  • Design idempotent ETL processes so that interrupted, incomplete, or failed processes can be rerun without errors, using ADF data flows and pipelines (a MERGE-based sketch follows this list)
  • Work in Snowflake virtual warehouses as needed and automate data pipelines using Snowpipe for repetitive ETL loads
  • Capture changes in data dimensions and maintain versions of them using Snowflake Streams, scheduled with Tasks (a Snowpipe/Stream/Task sketch follows this list)
  • Optimize every step of data movement, not only at the source and in transit but also at rest in the database, for accelerated responses
  • Build a highly efficient orchestrator that can schedule jobs, execute workflows, perform data quality checks, and coordinate dependencies among tasks (a data quality check sketch also follows this list)
  • Test ETL system code, data design, pipelines, and data flows; perform root cause analysis, resolve production issues, and run routine tests on databases, data flows, and pipelines
  • Document implementations and test cases, and build the deployment documents needed for CI/CD
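
A minimal sketch of the idempotent load step referenced above, assuming Snowflake SQL executed from Python via snowflake-connector-python; the connection parameters, table, and column names are hypothetical. Because the MERGE is keyed on an identifier rather than blindly appending, rerunning an interrupted load updates existing rows instead of duplicating them.

```python
# Minimal sketch: an idempotent load step against Snowflake. Keying the
# MERGE on member_id means a rerun updates rows instead of duplicating
# them. Connection parameters and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",  # hypothetical
    warehouse="ETL_WH", database="EDW", schema="STAGING",
)

MERGE_SQL = """
MERGE INTO edw.dim.member AS tgt
USING staging.member_updates AS src
    ON tgt.member_id = src.member_id
WHEN MATCHED THEN UPDATE SET
    full_name = src.full_name,
    updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (member_id, full_name, updated_at)
    VALUES (src.member_id, src.full_name, src.updated_at)
"""

with conn.cursor() as cur:
    cur.execute(MERGE_SQL)
conn.close()
```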
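
Likewise, a sketch of the Snowpipe/Stream/Task pattern referenced above, again executed from Python; the stage, table, and schedule values are illustrative assumptions, not the client's actual objects. Snowpipe auto-loads files as they land in the stage, the Stream records row-level changes, and the Task periodically merges those changes into the dimension.

```python
# Minimal sketch: Snowpipe for continuous ingestion, plus a Stream + Task
# pair that captures changes and applies them on a schedule. All object
# names and the cron schedule are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",  # hypothetical
    warehouse="ETL_WH", database="EDW", schema="STAGING",
)

statements = [
    # Snowpipe: auto-load files as they land in an external stage.
    """CREATE PIPE IF NOT EXISTS staging.member_pipe AUTO_INGEST = TRUE AS
       COPY INTO staging.member_raw
       FROM @staging.member_stage
       FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)""",
    # Stream: track row-level changes on the raw table.
    """CREATE STREAM IF NOT EXISTS staging.member_raw_stream
       ON TABLE staging.member_raw""",
    # Task: hourly, merge captured changes into the dimension
    # (delete handling via METADATA$ACTION is omitted for brevity).
    """CREATE TASK IF NOT EXISTS staging.apply_member_changes
       WAREHOUSE = ETL_WH
       SCHEDULE = 'USING CRON 0 * * * * America/Chicago'
       WHEN SYSTEM$STREAM_HAS_DATA('staging.member_raw_stream')
       AS
       MERGE INTO edw.dim.member AS tgt
       USING staging.member_raw_stream AS src
           ON tgt.member_id = src.member_id
       WHEN MATCHED THEN UPDATE SET full_name = src.full_name
       WHEN NOT MATCHED THEN INSERT (member_id, full_name)
           VALUES (src.member_id, src.full_name)""",
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK staging.apply_member_changes RESUME",
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```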
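
Finally, a sketch of the kind of data quality gate an orchestrator step might run after a load, as referenced above; the table name and rules are illustrative assumptions. Raising an exception fails the workflow step so downstream tasks do not run against bad data.

```python
# Minimal sketch: a data quality gate an orchestrator step might run after
# a load; the table and rules are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",  # hypothetical
    warehouse="ETL_WH", database="EDW", schema="DIM",
)

CHECK_SQL = """
SELECT COUNT(*)                             AS row_count,
       COUNT_IF(member_id IS NULL)          AS null_keys,
       COUNT(*) - COUNT(DISTINCT member_id) AS duplicate_keys
FROM edw.dim.member
"""

with conn.cursor() as cur:
    row_count, null_keys, duplicate_keys = cur.execute(CHECK_SQL).fetchone()
conn.close()

# Fail the workflow step if any rule is violated.
if row_count == 0 or null_keys > 0 or duplicate_keys > 0:
    raise RuntimeError(
        f"DQ check failed: rows={row_count}, "
        f"null_keys={null_keys}, dupes={duplicate_keys}"
    )
```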

Ideal Background: Data Engineer with healthcare (Medicaid) and Microsoft Azure-based experience, including Snowflake and Azure Data Factory

TOP REQUIREMENTS:

  • 5+ years of data engineering experience with a focus on data warehousing
  • 2+ years of experience creating pipelines in Azure Data Factory (ADF)
  • 3+ years of experience creating stored procedures with Oracle PL/SQL, SQL Server T-SQL, or Snowflake SQL

Required:

  • 5+ years of data engineering experience with a focus on data warehousing
  • 2+ years of experience creating pipelines in Azure Data Factory (ADF)
  • 5+ years developing ETL using Informatica PowerCenter, SSIS, Azure Data Factory, or similar tools
  • 5+ years of experience with relational databases, such as Oracle, Snowflake, SQL Server, etc.
  • 3+ years of experience creating stored procedures with Oracle PL/SQL, SQL Server T-SQL, or Snowflake SQL
  • 2+ years of experience with GitHub, SVN, or similar source control systems
  • 2+ years of experience processing structured and unstructured data
  • Experience with HL7 and FHIR standards, and with processing files in these formats
  • 3+ years analyzing project requirements and developing detailed specifications for ETL requirements
  • Excellent problem-solving and analytical skills, with the ability to troubleshoot and optimize data pipelines
  • Ability to adapt to evolving technologies and changing business requirements
  • Bachelor's or advanced degree in a related field such as Information Technology/Computer Science, Mathematics/Statistics, Analytics, or Business

Preferred:

  • 2+ years of batch or PowerShell scripting
  • 2+ years of experience with Python scripting
  • 3+ years of data modeling experience in a data warehouse environment
  • Experience or familiarity with Informatica Intelligent Cloud Services (specifically Data Integration)
  • Experience designing and building APIs in Snowflake and ADF (e.g., REST, RPC)
  • Experience with State Medicaid/Medicare/Healthcare applications
  • Azure certifications related to data engineering or data analytics

Required Skills: Data Analysis

This is a high-priority, proactive requisition.

Drug Screen: No

PSS Technology Inc, established in 2012, is a U.S.-based company specializing in Information Technology and Services. The company is dedicated to delivering value to clients by providing professional treatment and high-quality services. Their commitment to client satisfaction is evident in their efficient, effective, and cost-effective solutions.

The company offers a range of services, including:

  • Digital marketing
  • Web design and development
  • Business Process Outsourcing (BPO)
  • Staffing and recruitment
  • Talent acquisition

PSS Technology Inc operates from its headquarters in Cupertino, California.

The company's management team boasts over 15 years of combined experience, ensuring a deep understanding of client needs and a focus on delivering tailored solutions.

For more information about PSS Technology Inc and their services, you can visit their official website at www.psstechnologyinc.com.

Job Code: JPC - 2313
Job Start Date: 2025-06-02
City: Minneapolis
Primary Skills: Data Analysis
Posted Date: 2025-05-12 15:08:31
Job End Date: 2026-12-31
Number of Positions: 1
