Staff Software ML Engineer - R12079 Remote - India

Oportun Inc.

Mississippi

Remote

USD 90,000 - 150,000

Full time

30+ days ago

Job summary

Join a purpose-driven fintech company that champions financial inclusion. As a Sr. Staff ML Engineer, you'll leverage your extensive experience in platform engineering and data workflows to create innovative solutions that empower underserved communities. You'll design self-serve platforms that enable real-time machine learning deployment, ensuring robust data pipelines and seamless integration. This role offers a unique opportunity to contribute to meaningful change while working in a collaborative and inclusive environment. If you're passionate about technology and making a difference, this is the role for you!

Qualifications

  • 10-15 years in platform engineering or data engineering roles.
  • Hands-on experience with real-time ML deployment and data workflows.

Responsibilities

  • Design and build self-serve platforms for real-time ML deployment.
  • Develop microservices-based solutions using Kubernetes and Docker.

Skills

Python
Data Engineering
Platform Engineering
Microservices
Machine Learning Deployment
Agile Methodologies
CI/CD

Tools

AWS SageMaker
Docker
Kubernetes
FastAPI
Jenkins
GitHub Actions
New Relic
Databricks
PySpark
PostgreSQL

Job description

Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

WORKING AT OPORTUN

Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Company Overview:

At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Sr. Staff ML Engineer to play a critical role in driving positive change.

Position Overview

We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment with advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in designing and implementing data platforms that make ML pipelines self-serve, with seamless deployments supporting real-time feature computation and prediction. The expectation is to build platforms comparable to Uber's Michelangelo or Netflix's Metaflow.

Responsibilities

  • Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows.
  • Develop microservices-based solutions using Kubernetes and Docker for scalability, fault tolerance, and efficiency.
  • Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines.

Real-Time ML Deployment

  • Architect and implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks.
  • Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic.
  • Build and optimize ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas.
  • Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment.
  • Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB.
  • Implement data lake and data warehouse solutions to support advanced analytics and ML workflows.
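As a toy illustration of the ETL/feature-store bullets above, a transformation step in Pandas (the column names and aggregation are invented for the example; at production scale this logic would typically live in PySpark on Databricks):

```python
import pandas as pd

def build_member_features(raw: pd.DataFrame) -> pd.DataFrame:
    """Toy ETL step: drop incomplete rows, derive a flag, aggregate per member."""
    clean = raw.dropna(subset=["member_id", "amount"])
    clean = clean.assign(is_large=clean["amount"] > 1_000)
    return (
        clean.groupby("member_id", as_index=False)
             .agg(total_amount=("amount", "sum"),
                  n_large_txns=("is_large", "sum"))
    )
```

A feature store would then version and serve the output of steps like this so the same features are available at training time and at real-time inference.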

CI/CD and Automation

  • Design and implement robust CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing.
  • Automate data validation and monitoring processes to ensure high-quality and consistent data workflows.
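The data-validation bullet might look like the following in practice: a minimal pure-Python gate that a CI job could run before promoting a batch to the next pipeline stage (the field names and rules are assumptions for the sketch):

```python
def validate_batch(rows, required=("member_id", "amount")):
    """Split a batch into valid rows and (index, reason) errors.

    Intended as a cheap pre-flight check in a CI/CD pipeline: fail the
    job if any errors come back, so bad data never reaches the next stage.
    """
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required if row.get(f) is None]
        if missing:
            errors.append((i, f"missing: {missing}"))
        elif row["amount"] <= 0:
            errors.append((i, "non-positive amount"))
        else:
            valid.append(row)
    return valid, errors
```

Wired into a Jenkins or GitHub Actions job, a non-empty error list would fail the build and surface the offending row indices in the job log.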

Documentation and Collaboration

  • Create and maintain detailed technical documentation, including high-level and low-level architecture designs.
  • Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals.
  • Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira.

Required Qualifications

Experience

  • 10-15 years of experience in platform engineering, backend engineering, DevOps, or data engineering roles.
  • 5 years of experience as an architect building platforms that scale.
  • Hands-on experience with real-time ML model deployment and data engineering workflows.

Technical Skills

  • Strong expertise in core Python and experience with Pandas, PySpark, and FastAPI.
  • Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker.
  • Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3.
  • Proven experience building and optimizing distributed data pipelines using Databricks and PySpark.
  • Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL.
  • Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks.
  • Hands-on experience with observability tools like New Relic for monitoring and troubleshooting.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate.
