Protect AI is shaping, defining, and innovating a new category within cybersecurity around the risk and security of AI/ML. Our ML Security Platform enables customers to see, know, and manage security risks to defend against unique AI security threats and embrace MLSecOps for a safer AI-powered world. Its capabilities include AI supply chain security, an auditable Bill of Materials for AI/ML, model scanning, signing, and attestation, and LLM security.
Join our team to help us solve this critical need of protecting AI!
Role
At Protect AI, we're creating the most comprehensive AI security platform in the world. From safeguarding the AI supply chain to scanning ML models and securing Large Language Models (LLMs), we use advanced deep learning to protect against the latest threats. We are looking for a talented Senior Applied Researcher in NLP to help us reach our ambitious goals.
This is a unique opportunity to be at the forefront of the AI Security domain, influencing both our cutting-edge initiatives and the broader field with your innovative research and developments. You'll help build resilient AI technologies that offer robust protection against emerging threats, safeguarding global organizations.
As part of our team, you'll collaborate closely with our product engineers, architects, and CTO. You will also play a crucial role in improving our open-source models, helping organizations secure their AI applications.
Responsibilities:
- Conduct in-depth research, analyze AI systems, and develop novel methodologies and techniques to proactively detect and mitigate security risks, including adversarial attacks, data poisoning, model evasion, harmful behavior, and others.
- Develop robust classification models and frameworks using state-of-the-art deep learning techniques for various applications focusing on security and integrity.
- Evaluate and improve the performance of various AI models, including NLP, generative, and classification types, aiming for greater accuracy, efficiency, and scalability.
- Contribute to the open-source community by sharing models and algorithms, especially through initiatives like LLM Guard.
- Collaborate with cross-functional teams and effectively communicate technical findings and insights to stakeholders.
- Stay abreast of AI security and safety research advancements, attend conferences, and actively contribute to the security community through publications and presentations.
Qualifications:
- Significant practical experience in building and deploying machine learning, deep learning, and neural networks from ideation to production in academia or industry settings.
- Advanced knowledge in Deep Learning as applied to Natural Language Processing (NLP) tasks such as text classification, feature extraction, sentiment analysis, topic modeling, and named entity recognition.
- Demonstrated ability to transform cutting-edge research into viable prototypes, with experience applying novel NLP models to real-world problems.
- Strong Python programming skills and familiarity with deep learning frameworks like PyTorch or TensorFlow, including experience with fine-tuning LLMs or other transformer-based models like BERT.
- Excellent problem-solving skills, analytical thinking, and meticulous attention to detail, with a passion for working in a dynamic and fast-paced environment as part of a distributed team.
- Experience in fast-paced agile environments, with the ability to manage uncertainty and ambiguity.
- Effective communication skills with the ability to collaborate well in a team-oriented environment.
Preferred Qualifications:
- Experience with large datasets and processing frameworks (e.g., Azure Data Lake, HDFS, Hadoop, Spark) or public cloud infrastructures (Azure, AWS, Google Cloud) for NLP model tasks.
- Experience in cybersecurity or Trustworthy AI, such as toxicity detection or algorithms for adversarial attacks and defenses.
- Proven track record of conducting research, demonstrated through publications in top-tier conferences or journals.
- Contribution to open-source software projects.
What We Offer:
- An exciting collaborative work environment in a fast-growing startup.
- Competitive salary and benefits package.
- Excellent medical, dental, and vision insurance.
- Opportunities for professional growth and development, including attending and presenting at technical talks, meetups, and conferences.
- A culture that values innovation, accountability, and teamwork.
- Opportunities to contribute to our open-source projects with thousands of GitHub stars and millions of HuggingFace downloads.
- Work with a team of talented peers from AWS, Microsoft, and Oracle Cloud.
- Work with top-tier tools, modern tech stack, and high-quality collaboration tools.
- No bureaucracy or legacy systems: you are empowered to innovate and excel.
- Weekly office lunches and delivery credits for food services.
Additional Details:
Protect AI is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Required Experience: Senior IC
Key Skills
Machine Learning, Deep Learning, NLP, Python, PyTorch, TensorFlow, LLMs, AI Security, Research & Development
Employment Type: Full-Time
Department / Functional Area: Software Engineering
Vacancy: 1