Lead Data Scientist - Safety Alignment

Humana Inc

Saint Paul (MN)

Remote

USD 120,000 - 160,000

Full time

Yesterday

Job summary

A healthcare company is seeking a Lead Data Scientist to enhance the safety and alignment of its AI systems. You will design safety architectures and lead initiatives in AI ethics and governance. Candidates should hold a Master's Degree and bring extensive experience with AI systems, along with technical skills including SQL and Python. The role is remote within the US and offers the chance to make a significant impact on healthcare.

Qualifications

  • 4+ years of experience in research/ML engineering or applied research.
  • 2+ years of experience leading development of AI/ML systems.
  • Deep expertise in AI alignment or multi-agent systems.
  • Demonstrated ability to lead research-to-production initiatives.

Responsibilities

  • Design and implement safety architectures for AI systems.
  • Lead alignment techniques and develop monitoring strategies.
  • Partner with various teams for scaling and deployment.
  • Publish research on AI safety and alignment.

Skills

  • Proficiency in SQL
  • Python
  • Data analysis/data mining tools
  • Machine learning frameworks (e.g., PyTorch, JAX)
  • Large-scale ML systems
  • Deploying or auditing LLM-based agents
  • Large-scale ETL

Education

  • Master's Degree
  • Ph.D. in Computer Science or related field (preferred)

Job description

Become a part of our caring community and help us put health first

The Enterprise AI organization at Humana is a pioneering force, driving AI innovation across our Insurance and CenterWell business segments. By collaborating with world-leading experts, we are at the forefront of delivering cutting-edge AI technologies that improve care quality and experience for millions of consumers. We are actively seeking top talent to develop robust and reusable AI modules and pipelines, ensuring adherence to best practices in accountable AI for effective risk management and measurement. Join us in shaping the future of healthcare through AI excellence.

We are seeking a Lead Data Scientist to drive the safety, alignment, and ethical development of Agentic AI systems. You will lead initiatives to ensure our intelligent agents behave reliably, safely, and in accordance with human values across dynamic, multi-agent, and high-stakes environments. This is a cross-functional role bridging technical safety research, systems engineering, governance, and product implementation.

Key Responsibilities
  • Design and implement safety architectures for Agentic AI systems, including guardrails, reward modeling, and self-monitoring capabilities
  • Lead and collaborate on alignment techniques such as inverse reinforcement learning, preference learning, interpretability tools, and human-in-the-loop evaluation
  • Develop continuous monitoring strategies for agent behavior in both simulated and real-world environments
  • Partner with product, legal, Responsible AI, governance, and deployment teams to ensure responsible scaling and deployment
  • Contribute to and publish novel research on alignment of LLM-based agents, multi-agent cooperation/conflict, or value learning
  • Proactively identify and mitigate failure modes, e.g., goal misgeneralization, deceptive behavior, unintended instrumental actions
  • Set safety milestones for autonomous capabilities as part of deployment readiness reviews
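To illustrate the "guardrails" concept named in the first responsibility, here is a minimal, hypothetical sketch: a pattern-based output filter that screens an agent's response before it reaches a user. The pattern names and policy rules are illustrative only; a production safety architecture would involve far more (learned classifiers, human-in-the-loop review, audit logging) and nothing here reflects Humana's actual implementation.

```python
import re

# Illustrative policy patterns an output guardrail might screen for.
BLOCKED_PATTERNS = [
    re.compile(r"\bssn\b", re.IGNORECASE),                       # potential PHI leakage
    re.compile(r"ignore previous instructions", re.IGNORECASE),  # prompt-injection echo
]

def passes_guardrail(agent_output: str) -> bool:
    """Return True when the agent output matches no blocked pattern."""
    return not any(p.search(agent_output) for p in BLOCKED_PATTERNS)

print(passes_guardrail("Here is a summary of your care plan."))  # True
print(passes_guardrail("The member's SSN is on file."))          # False
```

In practice such rule-based checks are only the outermost layer; they are cheap and auditable, which is why they are typically paired with, rather than replaced by, model-based monitoring.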

Technical Skills
  • Proficiency in SQL, Python, and data analysis/data mining tools
  • Experience with machine learning frameworks such as PyTorch or JAX, and with agent frameworks and patterns such as ReAct, LangChain, LangGraph, or AutoGen
  • Experience with high-performance, large-scale ML systems
  • Experience with deploying or auditing LLM-based agents or multi-agent AI systems
  • Experience with large-scale ETL

Use your skills to make an impact

Required Qualifications
  • Master's Degree and 4+ years of experience in research/ML engineering or in an applied research scientist role, preferably with a focus on developing production-ready AI solutions
  • 2+ years of experience leading development of AI/ML systems
  • Deep expertise in AI alignment, multi-agent systems, or reinforcement learning
  • Demonstrated ability to lead research-to-production initiatives or technical governance frameworks
  • Strong publication or contribution record in AI safety, interpretability, or algorithmic ethics

Preferred Qualifications
  • Ph.D. in Computer Science, Data Science, Machine Learning, or a related field
  • Contributions to open-source AI safety tools or benchmarks
  • Understanding of value-sensitive design, constitutional AI, or multi-agent alignment
  • Experience in regulated domains such as healthcare, finance, or defense

Location/Work Style

Remote US

Equal Opportunity Employer

It is the policy of Humana not to discriminate against any employee or applicant for employment because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, genetic information, disability or protected veteran status. It is also the policy of Humana to take affirmative action, in compliance with Section 503 of the Rehabilitation Act and VEVRAA, to employ and to advance in employment individuals with disability or protected veteran status, and to base all employment decisions only on valid job requirements.
