---
sidebar_label: Guides
title: Red Teaming Guides
description: Step-by-step tutorials for red teaming LLM applications, testing guardrails, and finding vulnerabilities before deployment
---

# Red Teaming Guides

Practical tutorials for finding vulnerabilities in LLM applications before deployment. Each guide walks through a real-world scenario with working configuration examples you can adapt to your own use case. A minimal example configuration is sketched at the end of this page.

## Getting Started

- [How to Red Team LLM Applications](/docs/guides/llm-redteaming) — End-to-end guide to adversarial testing
- [Testing Guardrails](/docs/guides/testing-guardrails) — Evaluate content safety guardrails

## By Application Type

- [RAG Red Teaming](/docs/red-team/rag) — Test retrieval-augmented generation for prompt injection, context manipulation, and data poisoning
- [Agent Red Teaming](/docs/red-team/agents) — Test LLM agents for privilege escalation, context poisoning, and memory manipulation
- [MCP Security Testing](/docs/red-team/mcp-security-testing) — Secure Model Context Protocol servers through red teaming and tool poisoning tests
- [Red Teaming a Chatbase Chatbot](/docs/guides/chatbase-redteam) — Security test a production chatbot

## Specialized Testing

- [Multi-Modal Red Teaming](/docs/guides/multimodal-red-team) — Test image and multi-modal AI systems
- [Foundation Model Red Teaming](/docs/red-team/foundation-models) — Assess foundation model security risks through red teaming and static scanning
- [Evaluating Safety with HarmBench](/docs/guides/evaling-with-harmbench) — Benchmark LLM safety using HarmBench
- [Testing Google Cloud Model Armor](/docs/guides/google-cloud-model-armor) — Assess Google Cloud's AI safety layer

## Ongoing Security

- [LLM Supply Chain Security](/docs/red-team/llm-supply-chain) — Detect trojans, backdoors, and safety regressions in your LLM supply chain
- [Detecting Model Drift](/docs/red-team/model-drift) — Monitor LLM security posture over time by running red team tests repeatedly
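
## Example Configuration

Most of the guides above build on the same basic red team setup. The sketch below is a minimal, illustrative promptfoo-style `promptfooconfig.yaml`; the target id, the `purpose` string, and the specific plugins and strategies are placeholder choices to adapt, not recommendations.

```yaml
# promptfooconfig.yaml (minimal, illustrative sketch)
targets:
  # Placeholder target: point this at the model or endpoint under test
  - id: openai:gpt-4o-mini
    label: customer-support-bot

redteam:
  # Describing the application helps generate targeted attack prompts
  purpose: 'Customer support chatbot for a retail bank'
  numTests: 5

  # Vulnerability categories to probe (placeholders; see the guides above)
  plugins:
    - pii
    - harmful:hate
    - hijacking

  # Attack techniques layered on top of each plugin's test cases
  strategies:
    - jailbreak
    - prompt-injection
```

Run the scan with `npx promptfoo@latest redteam run` and open the findings with `npx promptfoo@latest redteam report`, then branch into whichever guide matches your application type.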