# LLM03: Supply Chain Risk

This challenge simulates attacks on large language model (LLM) supply chains, including compromised pre-trained models, poisoned training data, and insecure plugin integrations. Participants explore scenarios such as tampered LoRA adapters, vulnerable Python libraries, and malicious model merges to understand and mitigate risks in LLM development and deployment.

## Challenge Objective

This challenge demonstrates supply chain risk in LLM applications: a rogue model, used without verification, silently exfiltrates user chat data to an attacker's server. It highlights how malicious or compromised AI models can pose severe security risks, even within trusted applications, by leaking sensitive user information without detection.

Application URL: http://127.0.0.1:5003

**Hint:** Inspecting outbound traffic can provide a lead.
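The objective notes that the rogue model is "used without verification." One standard mitigation is to pin and check a cryptographic digest of the model artifact before loading it. The sketch below is illustrative, not part of the challenge code: the file path is hypothetical, and the pinned digest shown is the well-known SHA-256 of an empty file, standing in for a digest a publisher would distribute through a signed release channel.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest for the expected model artifact. In practice this
# would come from the model publisher's signed release metadata, not the same
# server the artifact was downloaded from.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"


def sha256_of(path: Path) -> str:
    """Hash the file in 1 MiB chunks so large model weights never load fully into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: Path) -> bool:
    """Refuse to load a model whose digest does not match the pinned value."""
    return sha256_of(path) == EXPECTED_SHA256


if __name__ == "__main__":
    artifact = Path("model.bin")  # hypothetical artifact path for the demo
    artifact.write_bytes(b"")     # demo stand-in: an empty file matches the pinned digest
    print(verify_model(artifact))
```

A tampered LoRA adapter or merged checkpoint swapped in by an attacker would fail this check, provided the pinned digest is obtained out of band from the download source.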
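The hint points at outbound traffic inspection. Beyond capturing packets, a common defensive counterpart is an egress allowlist: the application only permits outbound requests to known hosts, so an exfiltration call to an attacker's server stands out or is blocked. This is a minimal sketch under assumed names; the allowlisted hostnames are placeholders, not hosts the challenge actually contacts.

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist; real deployments would source this from config.
ALLOWED_HOSTS = {"api.openai.com", "huggingface.co"}


def is_allowed(url: str) -> bool:
    """Return True only if the URL's host is an allowed host or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)


print(is_allowed("https://huggingface.co/models"))      # allowed host
print(is_allowed("http://attacker.example.net/exfil"))  # unknown host: blocked
```

When solving the challenge, the inverse view is the useful one: watch which destinations the application talks to (for example with `tcpdump` or a local proxy on the host running http://127.0.0.1:5003) and look for a destination that would never appear on such an allowlist.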