# Multi-Chain Attacks ## Description Targeting applications that chain multiple LLM calls that process and refine tasks sequentially. ## Attack Examples - Feeding adversarial prompts to observe chain behavior - Using tools like Garak and Giskard for testing - Exploiting inter-model communication - Manipulating sequential processing - Creating chain reaction attacks - Exploiting model handoff points - Testing chain vulnerabilities - Reference: https://labs.withsecure.com/publications/multi-chain-prompt-injection-attacks ## Tools - https://github.com/NVIDIA/garak - https://www.giskard.ai/