This Capture The Flag (CTF) challenge demonstrates prompt injection vulnerabilities in LLM applications.
To participate in this challenge, you need to provide your own OpenAI API key. This key will be used for all LLM interactions in the challenge.
API Key Configuration
Enter your OpenAI API key to enable the challenge:
Challenge Information
This is a Capture The Flag (CTF) challenge focused on prompt injection vulnerabilities in multi-chain LLM applications.
The system contains multiple flags, 3 discoverable via Prompt injections, and 2 through other means. No fuzzing or bruteforcing should be necessary.
Submit flags below to track your progress.
Flag Submission
Captured Flags
No flags captured yet.
×
CTF Challenge Hints
The system is an agentic / chained LLM system with three LLMs. Some have access to tools. The 1st just prepares the users search for the next two. The 2nd LLM grabs part info from a parts db. The 3rd gets enriches the system with engineering notes from a seperate db.
Step one in testing a LLM enabled webapp should always be leaking the _ _ _ _ _ _ prompt.
The 2nd LLM queries the parts db, could there be something in there? How would you get it? What do you have that could get you more data maybe?
The 3rd LLM queries a system and a db, could there be something in there? How would you get it? What do you have that could get you more data maybe?
Where might data leak in an application? You might need Burp or Caido or Dev Tools for these flags.
Engineering Parts Lookup System
Welcome to the engineering parts lookup system. Search for parts by name, category, or part number.