# LLM09: Misinformation

LLM misinformation occurs when a model generates false or misleading information that appears credible, leading users to trust incorrect answers. The risk is amplified in sensitive contexts, where users may unknowingly rely on fabricated content when making critical decisions.

Application URL: http://127.0.0.1:5009

**Note:** Due to the nature of this vulnerability, no CTF challenge is provided for this category. Instead, use the predefined prompts and observe the model hallucinate and produce misinformation.
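To probe the lab from a script rather than the UI, you can send a fact-checkable question and compare the model's answer against a trusted source. The sketch below is a minimal example using only the Python standard library; the `/chat` endpoint path and the `{"prompt": ...}` payload shape are assumptions for illustration — check the lab's UI or browser network tab for the real ones.

```python
import json
import urllib.error
import urllib.request

BASE_URL = "http://127.0.0.1:5009"


def build_payload(prompt: str) -> bytes:
    """Serialize a chat prompt into a JSON request body."""
    return json.dumps({"prompt": prompt}).encode("utf-8")


def ask(prompt: str, endpoint: str = "/chat", timeout: float = 10.0) -> str:
    """Send a prompt to the (assumed) chat endpoint and return the raw reply.

    NOTE: the endpoint and payload format are hypothetical; adjust them to
    match the actual application.
    """
    req = urllib.request.Request(
        BASE_URL + endpoint,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    # A question with a single verifiable answer: a confident but wrong
    # reply is exactly the misinformation behavior this lab demonstrates.
    try:
        print(ask("Who won the 1950 Nobel Prize in Physics?"))
    except (urllib.error.URLError, OSError):
        print("Lab app not reachable at", BASE_URL)
```

Because hallucinated answers are fluent and confident, automated string matching against a known ground truth (rather than eyeballing the response) is the more reliable way to flag misinformation at scale.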