hack.asktam.net — AI Security Tooling Hub
// asktam.net infrastructure


Self-hosted mirror of AI security and offensive tooling. Source code cloned from upstream GitHub repos for offline study and reference. Click any folder to browse the source tree.

13 tools · 5 labs · 5 frameworks · 3 research

// Hub

/hack
hub
Arcanum AI Sec Resource Hub
Curated index of 23 labs, 5 competitions, 4 bug bounties, 7 tools and 3 research resources for AI/LLM security.

// Labs & CTFs

/broken-llm-app
lab
Broken LLM Integration App
Vulnerable LLM application demonstrating direct/indirect prompt injection, prompt leaking, P2SQL injection and LLM2Shell.
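P2SQL injection abuses apps that execute LLM-generated SQL verbatim: user text steers the model into emitting destructive statements. A minimal sketch of one mitigation, a read-only allow-list guard (illustrative toy code with hypothetical helper names, not the lab's implementation):

```python
import sqlite3

def is_safe_select(sql: str) -> bool:
    """Allow only single, read-only SELECT statements (toy check)."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject stacked statements like "SELECT 1; DROP ..."
        return False
    lowered = stripped.lower()
    if not lowered.startswith("select"):
        return False
    # reject write/DDL keywords smuggled into a "SELECT" (crude substring check)
    for kw in ("insert", "update", "delete", "drop", "alter", "attach", "pragma"):
        if kw in lowered:
            return False
    return True

def run_llm_sql(conn: sqlite3.Connection, llm_sql: str):
    """Execute model-generated SQL only if it passes the allow-list."""
    if not is_safe_select(llm_sql):
        raise ValueError(f"blocked non-SELECT SQL: {llm_sql!r}")
    return conn.execute(llm_sql).fetchall()
```

A real defense would parse the statement rather than match substrings, but the shape is the same: never hand model output straight to the database.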
/promptme
lab
PromptMe — OWASP LLM Top 10
CTF-style platform with 10 hands-on challenges covering OWASP LLM Top 10 vulnerabilities. Local Python/Ollama setup.
/finbot-ctf
lab
OWASP FinBot CTF
Agentic AI security CTF — manipulate FinBot to approve fraudulent invoices without triggering detection. The "Juice Shop for Agentic AI."
/pwngpt
lab
PwnGPT CTF
Agentic LLM CTF with vector search and OpenAI models. 10+ progressive levels covering prompt injection, retrieval and ReAct agents.
/auto-parts-ctf
lab
Auto Parts CTF (MyLLMAuto)
Chained LLM-powered auto parts system with prompt injection, IDOR, WebSocket and API security challenges.

// Frameworks & Tools

/pyrit
framework
PyRIT
Microsoft's Python Risk Identification Tool — automated red-teaming framework with multi-turn orchestration, jailbreak library and Azure AI integration.
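"Multi-turn orchestration" here means an attacker loop that sends a prompt, scores the response, and mutates the prompt for the next turn until an objective is met. A toy sketch of that loop in plain Python (stub target and scorer; not PyRIT's actual API):

```python
from typing import Callable, Optional, Tuple

def multi_turn_attack(
    target: Callable[[str], str],       # model under test: prompt -> response
    mutate: Callable[[str, str], str],  # next prompt from (prompt, response)
    scorer: Callable[[str], bool],      # True when the objective is met
    seed: str,
    max_turns: int = 5,
) -> Optional[Tuple[int, str, str]]:
    """Send, score, mutate, retry — return (turn, prompt, response) on success."""
    prompt = seed
    for turn in range(1, max_turns + 1):
        response = target(prompt)
        if scorer(response):
            return turn, prompt, response  # objective achieved
        prompt = mutate(prompt, response)
    return None  # gave up within the turn budget
```

PyRIT layers jailbreak libraries, converters, and scoring models on top of this basic loop.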
/garak
framework
Garak
NVIDIA's LLM vulnerability scanner — "nmap for LLMs." 40+ probe modules covering hallucination, leakage, jailbreaks, toxicity and bias.
/promptfoo
framework
Promptfoo
LLM testing and red-teaming framework. 15+ attack types, OWASP LLM Top 10 coverage, CI/CD integration and visual web UI.
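Promptfoo is driven by a YAML config of prompts, providers, and assertion-based tests. A minimal eval config, with field names recalled from promptfoo's config schema (verify against the current docs before use):

```yaml
# promptfooconfig.yaml — minimal eval sketch
prompts:
  - "Summarize in one sentence: {{text}}"
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      text: "The quick brown fox jumps over the lazy dog."
    assert:
      - type: contains
        value: "fox"
```

Running `promptfoo eval` against this file executes each test case and reports pass/fail in the terminal or web UI.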
/pyrit-ship
framework
PyRIT-Ship
Microsoft prototype extending PyRIT with a Flask API server and Burp Suite Intruder extension for AI safety testing.
/parseltongue
tool
P4RS3LT0NGV3
Elder Plinius's prompt injection payload generator. 20+ obfuscation techniques, including leetspeak, ROT13, homoglyphs, Morse code and the phonetic alphabet.
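Three of these obfuscations are simple enough to sketch in a few lines — illustrative re-implementations of the techniques, not P4RS3LT0NGV3's code:

```python
import codecs

# leetspeak: substitute visually similar digits for letters
LEET = str.maketrans("aeiostAEIOST", "431057431057")

def leetspeak(text: str) -> str:
    return text.translate(LEET)

def rot13(text: str) -> str:
    # rotate each letter 13 places; a classic trivial cipher
    return codecs.encode(text, "rot13")

# homoglyphs: swap Latin letters for identical-looking Cyrillic ones,
# so the string renders the same but no longer matches keyword filters
HOMOGLYPHS = str.maketrans({"a": "\u0430", "e": "\u0435", "o": "\u043e"})

def homoglyph(text: str) -> str:
    return text.translate(HOMOGLYPHS)
```

All three preserve human readability while breaking naive string matching, which is exactly why payload generators chain them.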

// Reference Implementations

/secure-ai-bot
reference
Professional Secure AI Bot
Reference implementation of a secure multi-feature AI platform with RAG chatbot, web assistant and XSS prevention demos.
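The core idea behind the XSS prevention demos — treat model output as untrusted and escape it before rendering — can be sketched with the standard library (illustrative, not the project's code):

```python
import html

def render_model_output(raw: str) -> str:
    """Escape LLM output before embedding it in HTML, so markup
    smuggled in via prompt injection (e.g. <script> tags) is inert."""
    return html.escape(raw, quote=True)
```

Real apps pair this with output encoding appropriate to the sink (HTML body, attribute, URL) and a Content-Security-Policy.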

// Research

/pi-taxonomy
research
Arcanum PI Taxonomy
Comprehensive prompt injection attack taxonomy plus AI Pentest Questionnaire and Enterprise AI Security Ecosystem mapping.