
Senior AI/ML Engineer – Generative AI & Autonomous Agents (10034, 10035)

Extreme Networks

Toronto (OH)

Remote

USD 120,000 - 160,000

Full time

Today

Job summary

A leading networking solutions firm is looking for a Senior AI/ML Engineer to advance its AI Competence Center. The role involves designing AI systems that enhance networking decisions using Generative AI and multi-agent systems. Candidates should hold a Master's or PhD and have five years of experience in ML/AI engineering, including two years with LLM systems. This position offers the opportunity to work on innovative AI-native systems in a collaborative environment.

Qualifications

  • 5 years of experience in ML/AI engineering, including 2 years with LLM systems.
  • Strong programming skills in Python for cloud-native microservices.
  • Experience with data preprocessing and ETL workflows.

Responsibilities

  • Design and implement business logic for AI agents.
  • Develop LLM-driven agents using prompt engineering.
  • Collaborate with AI engineers on multi-agent workflows.

Skills

ML fundamentals
Python
Transformers and LLM systems
Collaboration and communication

Education

Master’s or PhD in a relevant field

Tools

LLM frameworks (LangChain, AutoGen)
Vector databases (FAISS, Weaviate)
Docker
AWS
Azure

Job description

Overview

At Extreme Networks, we create effortless networking experiences that empower people and organizations to advance. As part of our growing AI Competence Center, we are seeking a Senior AI/ML Engineer with expertise in Generative AI, multi-agent systems, and LLM-based application development. In this role, you will help build the next generation of AI-native systems that combine traditional machine learning, generative models, and autonomous agents. Your work will power intelligent, real-time decisions for network design, optimization, security, and support.

Responsibilities
  • Design and implement the business logic and modeling that governs agent behavior, including decision-making workflows, tool usage, and interaction policies.
  • Develop and refine LLM-driven agents using prompt engineering, retrieval-augmented generation (RAG), fine-tuning, or function calling.
  • Understand and model the domain knowledge behind each agent: engage with network engineers, learn the operational context, and encode this understanding into effective agent behavior.
  • Apply traditional ML modeling techniques (classification, regression, clustering, anomaly detection) to enrich agent capabilities.
  • Contribute to the data engineering pipeline that feeds agents, including data extraction, transformation, and semantic chunking.
  • Build modular, reusable AI components and integrate them with backend APIs, vector stores, and network telemetry pipelines.
  • Collaborate with other AI engineers to create multi-agent workflows, including planning, refinement, execution, and escalation steps.
  • Translate GenAI prototypes into production-grade, scalable, and testable services in collaboration with platform and engineering teams.
  • Work with frontend developers to design agent experiences and contribute to UX interactions with human-in-the-loop feedback.
  • Stay up to date on trends in LLM architectures, agent frameworks, evaluation strategies, and GenAI standards.
Qualifications
  • Master’s or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
  • 5 years of experience in ML/AI engineering, including 2 years working with transformer models or LLM systems.
  • Strong knowledge of ML fundamentals, with hands-on experience building and deploying traditional ML models.
  • Solid programming skills in Python, with experience integrating AI modules into cloud-native microservices.
  • Experience with LLM frameworks (e.g., LangChain, AutoGen, Semantic Kernel, Haystack) and vector databases (e.g., FAISS, Weaviate, Pinecone).
  • Familiarity with prompt engineering techniques for system design, memory management, instruction tuning, and tool-use chaining.
  • Strong understanding of RAG architectures, including semantic chunking, metadata design, and hybrid retrieval.
  • Hands-on experience with data preprocessing, ETL workflows, and embedding generation.
  • Proven ability to work with cloud platforms like AWS or Azure for model deployment, data storage, and orchestration.
  • Excellent collaboration and communication skills, including cross-functional work with product managers, network engineers, and backend teams.
Nice to Have
  • Experience with LLMOps tools, open-source agent frameworks, or orchestration libraries.
  • Familiarity with Docker, Docker Compose, and container-based development environments.
  • Background in enterprise networking, SD-WAN, or network observability tools.
  • Contributions to open-source AI or GenAI libraries.