
Offensive Security - Applied Machine Learning Engineer

Apple Inc.

Paris

Hybrid

EUR 80,000 - 100,000

Full-time

10 days ago

Job Summary

A leading tech company in Paris seeks an Applied Machine Learning Engineer to develop innovative security tools using ML and generative technologies. Responsibilities include identifying vulnerabilities in large datasets and collaborating with security researchers. Ideal candidates have a strong ML background, expertise in ML frameworks, and are skilled in programming. This position offers remote or in-office work options.

Qualifications

  • Proven foundation in ML with emphasis on generative technologies.
  • Experience in training and fine-tuning language models over various datasets.
  • Expertise in ML frameworks like PyTorch or JAX.

Responsibilities

  • Create advanced ML tools that automate security assessments.
  • Sift through system and application logs for potential vulnerabilities.
  • Apply language models to identify weaknesses and generate test suites.

Skills

Machine Learning
Generative technologies
Language models
Problem-solving
Analytical skills

Tools

PyTorch
JAX
C
C++
Python
Swift
Objective-C

Job Description
Offensive Security - Applied Machine Learning Engineer

Paris, Ile-de-France, France · Software and Services

Apple's Security Engineering & Architecture organization is responsible for the security of all Apple products. Passionate about safeguarding our users, we believe that the best defense requires a phenomenal offense. When it comes to securing more than a billion devices running the world's most sophisticated operating systems, that means finding vulnerabilities first. Can you make a difference on this scale? Join our extraordinary team of security researchers and help protect all Apple users.

Description

Join our team and help redefine the landscape of vulnerability research on Apple platforms by harnessing cutting-edge language models and generative technologies. In this highly innovative domain, you will create advanced ML tools that automate security assessments, making a massive impact across a global user base. A key part of your work will involve sifting through enormous volumes of system and application logs, hunting for that “needle in the haystack”: subtle anomalies and indicators of potential vulnerabilities hidden within petabytes of data. By applying large language models (LLMs) to complex codebases, you’ll identify weaknesses and generate sophisticated test suites to probe them. You’ll collaborate with dedicated vulnerability research teams and ML experts. Experience with reinforcement learning is a significant plus. Join us to push the boundaries of ML-driven security and fortify the digital experience for millions of users worldwide. Remote or in-office roles are considered, including Paris, Cupertino, and other locations.

Minimum Qualifications

  • Proven foundation in ML with special emphasis on generative technologies and language models.
  • Experience in training and fine-tuning language models over structured, weakly structured, or code-based datasets.
  • Expertise in ML frameworks like PyTorch or JAX.
  • Experience with tool development using programming languages such as C, C++, Python, Swift, or Objective-C.
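To give a flavor of the fine-tuning work named in the qualifications, here is a minimal, illustrative sketch in PyTorch: training a tiny character-level causal language model on a toy code-based dataset. This is not Apple's tooling; the model, data, and all names are invented for illustration, and real work would use far larger models and datasets.

```python
# Illustrative sketch only: fine-tune a tiny character-level causal LM
# on a toy "code-based dataset" with PyTorch. All data/names are invented.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy code snippets standing in for a code-based training corpus.
snippets = [
    "int main() { return 0; }",
    "void free_buf(char *p) { free(p); }",
    "size_t n = strlen(src); memcpy(dst, src, n);",
]
text = "\n".join(snippets)
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text], dtype=torch.long)

class TinyLM(nn.Module):
    """Minimal causal LM: embedding -> GRU -> next-token logits."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

model = TinyLM(len(vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Next-token prediction: shift the sequence by one position.
x = data[:-1].unsqueeze(0)  # inputs, shape (1, N-1)
y = data[1:].unsqueeze(0)   # targets, shape (1, N-1)

losses = []
for step in range(50):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In practice this pattern (tokenize, shift for next-token targets, minimize cross-entropy) is the same loop used when fine-tuning large pretrained models on security-relevant code.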
Preferred Qualifications

  • Creative and effective problem-solving and analytical skills.
  • Ability to comprehend, interpret, and apply groundbreaking research in products.
  • Familiarity with prompt engineering, reinforcement learning, and chain-of-thought approaches.