Senior Data Engineers - Remote

MillenniumSoft Inc

San Diego (CA)

Remote

USD 120,000 - 180,000

Full time

30+ days ago

Job summary

Join a forward-thinking company as a Senior Data Engineer, where you'll lead a dynamic team in designing and maintaining high-performance data systems. This remote role offers the chance to work with cutting-edge technologies, including AWS and modern data processing tools, to drive the organization's data strategy. You'll be responsible for overseeing complex system architectures and mentoring team members while fostering a culture of innovation. If you're passionate about data engineering and eager to make a significant impact in a collaborative environment, this opportunity is perfect for you.

Qualifications

  • 10+ years in data solutions with strong leadership in data engineering.
  • Hands-on experience with AWS and modern data pipeline technologies.

Responsibilities

  • Lead a team in designing and maintaining high-performance data systems.
  • Architect and build scalable data pipelines for processing large data volumes.

Skills

Data Engineering Leadership
Software Engineering Principles
AWS Data Pipeline Implementation
PySpark Programming
SQL Proficiency
Cloud Technologies
Data Security and Governance

Education

Bachelor's in Computer Science
Master's in Computer Science

Tools

AWS
Spark
Snowflake
Databricks
Airflow
ETL Tools

Job description

Position: Senior Data Engineers - Remote

Location: San Diego, CA

Duration: 6 Months

Total Hours/week: 40.00

1st Shift

Client: Medical Devices Company

Job Category: Professional

Level of Experience: Senior Level

Employment Type: Contract on W2 (Need US Citizens or GC Holders or GC EAD or OPT or EAD or CPT)

Job Description:

Temp to perm

  • As the Senior Data Engineer, you will lead a team of data engineers in designing, building, and maintaining high-performance software systems that manage the analytical data pipelines fueling the organization's data strategy, applying software engineering best practices.
  • Beyond technical expertise, you will also serve as a change leader, guiding teams through adopting new tools, technologies, and workflows to improve data management and processing.
  • This position requires extensive hands-on experience in data system design and coding, as well as in developing modern data pipelines (AWS Step Functions, Prefect, Airflow, Luigi, Python, Spark, SQL) and associated code in AWS.
  • You will work closely with stakeholders across the business to understand their data needs, ensure scalability, and foster a culture of innovation and learning within the data engineering team and beyond.

Key Responsibilities:

  • Be responsible for the overall architecture of a specific module within a product (e.g., data ingestion, near-real-time data processing), performing design and assisting with implementation, taking system characteristics into account to achieve optimal performance, reliability, and maintainability.
  • Provide technical guidance to team members, ensuring they are working towards the product's architectural goals.
  • Create and manage RFCs (Requests for Comments), ADRs (Architecture Decision Records), design notes, and technical documentation for your module, following the architecture governance processes.
  • Lead a team of data engineers, providing mentorship, setting priorities, and ensuring alignment with business goals.
  • Architect, design, and build scalable data pipelines for processing large volumes of structured and unstructured data from various sources.
  • Collaborate with software engineers, architects, and product teams to design and implement systems that enable real-time and batch data processing at scale.
  • Be the go-to person for PySpark-based solutions, ensuring optimal performance and reliability for distributed data processing.
  • Ensure that data engineering systems adhere to the best data security, privacy, and governance practices in line with industry standards.
  • Perform code reviews for the product, ensuring adherence to company coding standards and best practices.
  • Develop and implement monitoring and alerting systems to ensure timely detection and resolution of data pipeline failures and performance bottlenecks.
  • Act as a champion for new technologies, helping ease transitions and addressing concerns or resistance from team members.

Ideal Candidate:

  • Experience leading a data engineering team with a strong focus on software engineering principles such as KISS, DRY, and YAGNI.
  • Candidate MUST have experience in owning large, complex system architecture and hands-on experience designing and implementing data pipelines across large-scale systems.
  • Experience implementing and optimizing data pipelines with AWS is a must.
  • Production delivery experience with cloud-based PaaS big-data technologies (EMR, Snowflake, Databricks, etc.)
  • Experienced in multiple cloud PaaS persistence technologies, with in-depth knowledge of cloud-based ETL offerings and orchestration technologies (AWS Step Functions, Airflow, etc.)
  • Experienced in stream-based and batch processing, applying modern technologies.
  • Working experience with distributed file systems (S3, HDFS, ADLS), table formats (Hudi, Iceberg), and various open file formats (JSON, Parquet, CSV, etc.)
  • Strong programming experience in PySpark, SQL, Python, etc.
  • Database design skills including normalization/de-normalization and data warehouse design.
  • Knowledge and understanding of relevant legal and regulatory requirements, such as SOX, PCI, HIPAA, Data Protection.
  • Experience in the healthcare industry is a plus.
  • A collaborative and informative mentality is a must!

Toolset:

  • AWS, preferably AWS Certified Data Engineer and AWS Certified Solutions Architect.
  • Proficiency in at least one programming language: C#, Go, JavaScript, or ReactJS.
  • Spark / Python / SQL.
  • Snowflake / Databricks / Synapse / MS SQL Server.
  • ETL / Orchestration Tools (Step Functions, dbt, etc.)
  • ML / Notebooks.

Education and Experience Required:

  • Bachelor's or Master's in Computer Science, Information Systems, or an engineering field, or equivalent relevant experience.
  • 10+ years of related experience in developing data solutions and data movement.

This role can be REMOTE.

