
AI/ML Security Engineer

BlackFluoAI

France

On-site

EUR 70,000 - 90,000

Full-time

Posted 17 days ago


Job Summary

A cybersecurity firm in France is seeking an innovative AI/ML Security Engineer to enhance cybersecurity through artificial intelligence. The role focuses on developing AI models for threat detection while securing AI systems against adversarial attacks. Candidates should have over 6 years of experience in cybersecurity and machine learning, strong knowledge of security principles, and hands-on ML expertise. Advanced degrees and security certifications are preferred. This position offers opportunities to work collaboratively across various teams.

Qualifications

  • 6+ years experience in cybersecurity and/or applied machine learning.
  • Hands-on experience with ML frameworks and data pipelines.
  • Understanding of adversarial ML concepts.

Responsibilities

  • Develop AI/ML models for threat detection and incident response.
  • Integrate ML-driven detection with SIEM and XDR platforms.
  • Work with teams to embed security into AI development lifecycle.

Knowledge

Cybersecurity principles
Threat modeling
Security architecture
Programming in Python
Machine Learning frameworks
Data analysis libraries

Education

Advanced degree in Computer Science or related field

Tools

TensorFlow
PyTorch
Scikit-learn

Job Description

AI/ML Security Engineer

Leveraging AI to enhance cybersecurity while protecting machine learning systems from adversarial threats.

Position Overview

We are seeking an innovative AI/ML Security Engineer who combines deep knowledge of cybersecurity with experience in artificial intelligence and machine learning. This cross‑disciplinary role focuses on two key areas: using AI/ML techniques to detect and respond to threats more effectively, and ensuring the security of AI models against adversarial attacks such as data poisoning, model inversion, and evasion techniques. You will collaborate across security operations, data science, and DevSecOps teams to operationalize intelligent threat detection and protect critical AI systems from emerging threats.

Key Responsibilities

AI for Cybersecurity

  • Develop and operationalize AI/ML models for anomaly detection, threat intelligence analysis, and automated incident response
  • Analyze large volumes of log and telemetry data to detect patterns indicative of cyber threats
  • Integrate ML‑driven detection with SIEM, SOAR, and XDR platforms to improve automation and speed of response

Securing AI/ML Systems

  • Implement controls to protect AI pipelines from manipulation, including input validation, data sanitization, and model monitoring
  • Identify and mitigate threats such as data poisoning, model inversion, adversarial examples, and membership inference
  • Evaluate ML models for robustness, fairness, and explainability from a security perspective
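The evasion-style adversarial examples mentioned above can be illustrated with a minimal NumPy sketch of the fast gradient sign method (FGSM) idea against a toy logistic model; the weights, input, and epsilon are invented for illustration, and real attacks target far larger models:

```python
# Illustrative sketch: an FGSM-style evasion perturbation on a toy
# logistic classifier. All parameters here are invented for the example.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])   # fixed, known model weights (white-box setting)
b = 0.0

x = np.array([1.0, 0.5])    # a benign input, scored as the positive class
p_clean = sigmoid(w @ x + b)

# Step each feature against the positive class, along the sign of the
# score's gradient with respect to the input (which is just w here)
eps = 0.8
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)

# The small perturbation flips the score across the 0.5 decision boundary
print(round(p_clean, 3), round(p_adv, 3))
```

Defenses in scope for this role, such as input validation, perturbation detection, and robustness evaluation, aim to make exactly this kind of decision flip harder to achieve.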

Governance & Collaboration

  • Work with data scientists and MLOps teams to embed security into the AI development lifecycle
  • Document AI system threat models and design risk mitigation strategies
  • Stay updated on AI security research, adversarial ML techniques, and emerging regulatory considerations

Required Qualifications

  • 6+ years experience in cybersecurity and/or applied machine learning
  • Strong knowledge of cybersecurity principles, threat modeling, and security architecture
  • Hands‑on experience with ML frameworks (e.g., TensorFlow, PyTorch, Scikit‑learn) and data pipelines
  • Understanding of adversarial ML concepts and experience applying mitigation strategies
  • Programming proficiency in Python, and experience with data analysis libraries

Preferred Qualifications

  • Advanced degree in Computer Science, Cybersecurity, Machine Learning, or related field
  • Security certifications (e.g., CISSP, OSCP, or ML‑specific credentials)
  • Experience with ML model deployment in secure environments (e.g., MLOps, containerization)
  • Familiarity with AI governance, ethics, and security standards (e.g., NIST AI RMF, ISO/IEC 23894)
  • Contributions to AI security research, open‑source tools, or community knowledge