---
sidebar_position: 2
description: Red team LLM systems for security, privacy, and criminal vulnerabilities using modular testing plugins to protect AI applications from exploitation and data breaches
---

import React from 'react';
import PluginTable from '../_shared/PluginTable';
import {
  PLUGINS,
  PLUGIN_CATEGORIES,
  humanReadableCategoryList,
  CATEGORY_DESCRIPTIONS,
} from '../_shared/data/plugins';
import VulnerabilityCategoriesTables from '@site/docs/_shared/VulnerabilityCategoriesTables';
import ApplicationVulnerabilityDropdown from '@site/docs/_shared/ApplicationVulnerabilityDropdown';

# Types of LLM vulnerabilities

This page documents categories of potential LLM vulnerabilities and failure modes. Each vulnerability type is supported by Promptfoo's open-source plugins.

[Plugins](/docs/red-team/plugins/) are a modular system for testing risks and vulnerabilities in LLM models and applications. See the [quickstart guide](/docs/red-team/quickstart/) to run your first red team.

![LLM vulnerability types](/img/docs/llm-vulnerability-types.svg)

See also our specific guides on:

- [Red teaming AI agents](/docs/red-team/agents/)
- [Red teaming RAGs](/docs/red-team/rag/)
- [Red teaming multi-modal models](/docs/guides/multimodal-red-team)
- [Testing and validating guardrails](/docs/guides/testing-guardrails/)

:::note Interpreting Attack Success Rates

When comparing red team results across different tools or papers, be aware that Attack Success Rate (ASR) depends heavily on attempt budget, prompt set composition, and judge choice. See [Why ASR Isn't Comparable Across Jailbreak Papers](/blog/asr-not-portable-metric) for guidance on interpreting these metrics.
:::

## Vulnerability Types

### Security Vulnerabilities

### Privacy Vulnerabilities

### Criminal Activity

### Harmful Activity

### Misinformation and Misuse

### Bias

### Ecommerce

### Financial

### Medical

### Pharmacy

### Insurance

### Custom

## Vulnerabilities by Application

Not all applications are vulnerable to every type of exploit. Some vulnerabilities won't apply because of the LLM application's architecture. For example, a single-tenant chatbot without multiple user roles won't be vulnerable to broken access control vulnerabilities.

Select a category below to see where vulnerabilities may not apply.

<ApplicationVulnerabilityDropdown />

## Plugin Reference

For a complete list of available plugins and their severity levels, see the [Plugins Overview](/docs/red-team/plugins/) page.
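As a rough illustration, plugins from the categories above are enabled in the `redteam` block of a promptfoo config. The sketch below is a minimal example, not a complete reference; the plugin and strategy ids shown (`pii`, `contracts`, `harmful:hate`, `jailbreak`) are examples, and the exact set available is listed on the Plugins Overview page.

```yaml
# promptfooconfig.yaml — minimal sketch; ids shown are examples
redteam:
  purpose: 'Customer support chatbot for an online retailer'
  plugins:
    - pii # privacy: leaking personally identifiable information
    - contracts # entering unauthorized commitments
    - harmful:hate # harmful content generation
  strategies:
    - jailbreak # attack strategy applied on top of the plugins
```

Plugins define *what* to test for, while strategies define *how* the attack probes are delivered.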