# LLM01- Prompt Injection A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. This application allows user to login and interact with the application chatbot. The admin has stored the secret key in his chat history. Application also allows to interact with external/internal applications using `/fetch` methods. ## Challenge Objective Use Prompt Injection/Jailbreak techniques to get admin's secret key. Application URL: http://127.0.0.1:5001 **Hint:** External URL feature can help