United Network for Organ Sharing
Richmond (VA)
Remote
USD 90,000 - 130,000
Full time
Job summary
An established industry player is on the lookout for a Senior Data Engineer to spearhead innovative data solutions in a cutting-edge Azure environment. This role involves collaborating with cross-functional teams to ensure data accuracy and accessibility while building scalable data pipelines. You will lead modernization efforts, mentor junior engineers, and drive strategic initiatives that enhance data quality and performance. If you are passionate about data engineering and eager to make a significant impact in a dynamic setting, this opportunity is perfect for you.
Qualifications
- 8+ years of experience in data engineering, including 2+ years with Azure.
- Strong understanding of cloud architecture and data warehousing.
Responsibilities
- Lead design and implementation of data solutions in Azure cloud.
- Architect secure, scalable data pipelines and ETL processes.
Skills
Data Engineering
Azure Cloud Solutions
Python (pandas, PySpark)
SQL Server
ETL/ELT Processes
Data Pipeline Architecture
Data Quality Management
REST APIs
Cloud Architecture
Big Data Technologies
Education
4-Year Degree in Computer Science
Tools
Azure Data Factory
Azure Functions
Azure Data Lake Storage
Databricks
Spark
Azure Synapse
Job Details
Job Location | Richmond, VA
Remote Type | Fully Remote
Position Type | Full Time
Education Level | 4 Year Degree or Equivalent Experience
Description
Position Description
We are seeking a Senior Data Engineer to lead the design and implementation of end-to-end data solutions within a modern Azure-based cloud environment. This hands-on, technical role will partner closely with data scientists, analysts, and business stakeholders to ensure that data is accurate, accessible, and optimized for analytics, reporting, and operational needs. You will play a key role in building scalable data pipelines and integrations while championing best practices, mentoring junior engineers, and leading strategic data initiatives across teams.
Key Responsibilities
- Architect and implement secure, scalable data pipelines using Azure Data Factory, Azure Functions, and Azure Data Lake Storage
- Design and build reliable ETL/ELT processes using Python (pandas, PySpark) and SQL, integrating data from diverse sources including REST APIs and various file formats (CSV, JSON, XML, Parquet); a brief illustrative sketch follows this list
- Collaborate cross-functionally to gather requirements and translate business needs into technical deliverables
- Lead modernization and optimization efforts, improving pipeline performance, maintainability, and scalability
- Detect and resolve data quality issues; implement automated audits and monitoring processes
- Act as a technical leader on schema design, performance tuning, and Azure data architecture
- Mentor and support junior engineers across data engineering, analytics, and BI teams
- Participate in and lead code reviews, promoting clean, well-documented, and testable code
- Stay current on trends in data engineering and cloud technologies, identifying opportunities to innovate
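For illustration, a minimal PySpark sketch of the kind of pipeline described in the list above: it reads raw CSV files, applies a simple quality gate, and lands curated Parquet in the data lake. The storage paths and column names (order_id, order_total) are hypothetical assumptions, and in practice such a job would typically be orchestrated through Azure Data Factory.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ADLS paths; real account/container names would come from configuration.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/*.csv"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders"

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Ingest raw CSV files from the landing zone.
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv(RAW_PATH)
)

# Light transformation and a basic quality gate (column names are assumptions).
curated = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("order_total").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Land curated data as partitioned Parquet in the lake.
(
    curated.write
    .mode("overwrite")
    .partitionBy("load_date")
    .parquet(CURATED_PATH)
)
```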
Minimum Requirements
- 8+ years of experience in data engineering, including 2+ years with Azure cloud-native solutions
Critical Skills
- Expertise in MS SQL Server, Python (pandas, PySpark), Azure Data Factory, Azure Functions, and Azure Data Lake Storage.
- Experience working with a variety of file formats (e.g., CSV, JSON, XML, Parquet).
- Familiarity with REST APIs for data extraction and integration (a brief extraction sketch follows this list).
- Proven experience designing and implementing data solutions.
- Strong understanding of cloud architecture, data warehousing and modern data stack components.
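As a brief illustration of the REST API extraction noted in the list above: the endpoint, response envelope, and output path below are hypothetical, and a production version would also handle pagination, authentication, and retries.

```python
import pandas as pd
import requests

# Hypothetical endpoint and response shape, for illustration only.
API_URL = "https://api.example.com/v1/shipments"

response = requests.get(API_URL, params={"page_size": 500}, timeout=30)
response.raise_for_status()

# Assumes the API wraps records in a 'results' list; flatten nested JSON into columns.
records = response.json().get("results", [])
shipments = pd.json_normalize(records)

if shipments.empty:
    raise ValueError("API returned no records")

# Parquet output; in practice this would be written to Azure Data Lake Storage.
shipments.to_parquet("shipments.parquet", index=False)
```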
Additional Skills & Qualifications
- Demonstrated ability to perform root cause analysis on data and processing issues
- Strong problem-solving skills with the ability to explain technical concepts to non-technical audiences
- A successful history of manipulating, processing and extracting value from large disparate datasets
- Experience with Big Data technologies such as Databricks, Spark, or Azure Synapse
- Knowledge of CI/CD workflows, version control, and agile development practices
- Familiarity with data governance, privacy, and compliance frameworks
- Experience with data warehousing, analytics tools, and BI platforms
Education
- 4-year degree in computer science, engineering or other related IT field of study, or equivalent professional work experience
Physical Requirements
- General office demands
- Prolonged periods of sitting at a desk and working on a computer.
- Frequent reaching, handling, and fine manipulation for using office equipment, filing, and managing paperwork.
- Manual dexterity sufficient to operate a keyboard, mouse, and other office tools.
- Occasional standing, walking, and bending.
- Ability to occasionally lift 10 to 20 pounds.
- Vision abilities required include close vision for computer work and reading documents.
- Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.