Course Outline

Introduction to AI Red Teaming

  • Understanding the AI threat landscape
  • Roles of red teams in AI security
  • Ethical and legal considerations

Adversarial Machine Learning

  • Types of attacks: evasion, poisoning, extraction, and inference
  • Generating adversarial examples (e.g., FGSM, PGD); see the FGSM sketch after this list
  • Targeted vs. untargeted attacks and success metrics
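As a flavor of the hands-on material, below is a minimal FGSM sketch in PyTorch; the model, inputs, and labels are placeholders you would supply from your own pipeline.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, eps=0.03):
        """Fast Gradient Sign Method: perturb x along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Untargeted attack: step in the direction that increases the loss.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range

PGD, also covered in this module, can be viewed as this same step applied iteratively, with a projection back into the eps-ball after each step.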

Testing Model Robustness

  • Evaluating robustness under perturbations; see the noise-sweep sketch after this list
  • Exploring model blind spots and failure modes
  • Stress testing classification, vision, and NLP models
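To illustrate this kind of evaluation, here is a minimal sketch, assuming a trained PyTorch classifier and a test data loader (both placeholders), that measures accuracy as Gaussian input noise grows:

    import torch

    @torch.no_grad()
    def accuracy_under_noise(model, loader, sigma):
        """Accuracy on inputs corrupted by Gaussian noise with standard deviation sigma."""
        correct = total = 0
        for x, y in loader:
            x_noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
            pred = model(x_noisy).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        return correct / total

    # Sweeping sigma traces a simple robustness curve:
    # for sigma in (0.0, 0.05, 0.1, 0.2):
    #     print(sigma, accuracy_under_noise(model, test_loader, sigma))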

Red Teaming AI Pipelines

  • Attack surface of AI pipelines: data, model, deployment
  • Exploiting insecure model APIs and endpoints
  • Reverse engineering model behavior and outputs; see the endpoint-probing sketch after this list
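As a taste of endpoint probing, here is a hedged sketch that queries a hypothetical REST prediction API (the URL and JSON schema are invented for illustration) and measures how the returned scores shift when one input feature is nudged:

    import numpy as np
    import requests

    API_URL = "https://example.com/v1/predict"  # hypothetical endpoint

    def query(features):
        """Send one input vector to the (hypothetical) model API and return its scores."""
        resp = requests.post(API_URL, json={"inputs": features}, timeout=10)
        resp.raise_for_status()
        return np.array(resp.json()["scores"])

    # Simple sensitivity probe: nudge one feature at a time, watch the scores move.
    base = np.zeros(16)
    baseline = query(base.tolist())
    for i in range(base.size):
        probe = base.copy()
        probe[i] += 0.1
        delta = np.abs(query(probe.tolist()) - baseline).max()
        print(f"feature {i}: max score shift {delta:.4f}")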

Simulation and Tooling

  • Using the Adversarial Robustness Toolbox (ART); see the sketch after this list
  • Red teaming NLP models with TextAttack
  • Sandboxing, monitoring, and observability tools
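For a taste of the tooling, here is a minimal ART sketch that wraps a trained PyTorch classifier and generates FGSM examples; the model, input shape, and class count are placeholders, and it assumes the adversarial-robustness-toolbox package is installed:

    import torch.nn as nn
    import torch.optim as optim
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    def make_adversarial(model, x_test, input_shape=(3, 32, 32), nb_classes=10):
        """Wrap a trained PyTorch model in an ART estimator and run FGSM on x_test."""
        classifier = PyTorchClassifier(
            model=model,
            loss=nn.CrossEntropyLoss(),
            optimizer=optim.Adam(model.parameters(), lr=1e-3),
            input_shape=input_shape,
            nb_classes=nb_classes,
        )
        attack = FastGradientMethod(estimator=classifier, eps=0.05)
        return attack.generate(x=x_test)  # x_test: numpy array, shape (n, *input_shape)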

AI Red Team Strategy and Defense Collaboration

  • Developing red team exercises and goals
  • Communicating findings to blue teams
  • Integrating red teaming into AI risk management

Summary and Next Steps

Requirements

  • An understanding of machine learning and deep learning architectures
  • Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch)
  • Familiarity with cybersecurity concepts or offensive security techniques

Audience

  • Security researchers
  • Offensive security teams
  • AI assurance and red team professionals

Duration: 14 hours

Custom Corporate Training

Training solutions designed exclusively for businesses.

  • Customized Content: We adapt the syllabus and practical exercises to the real goals and needs of your project.
  • Flexible Schedule: Dates and times adapted to your team's agenda.
  • Format: Online (live), In-company (at your offices), or Hybrid.

Investment

Price per private group, online live training, starting from 3200 € + VAT*

Contact us for an exact quote and to hear about our latest promotions.
