# Projects and products consuming, wrapping, and using garak

## Tooling (open-source / free only)

| Name | Licence | Description |
| - | - | - |
| [TrustyAI Garak](https://github.com/trustyai-explainability/llama-stack-provider-trustyai-garak) | Apache | Out-of-tree Llama Stack eval provider for garak red teaming |
| [Garak-MCP](https://github.com/EdenYavin/Garak-MCP) | MIT | MCP server for garak |
| [Garak Report](https://github.com/lreading/garak-repo) | Apache-2.0 | A repository for your garak runs, with a modern visualizer |

## Integrations

| Name | Description |
| - | - |
| [Tumeryk](http://www.tumeryk.com) | The Tumeryk platform helps organizations safeguard AI systems, ensuring secure, reliable, and policy-aligned deployments. Scans LLMs and endpoints to prevent jailbreaks, data leaks, and IP exposure |
| [Vijil](https://www.vijil.ai/) | Vijil helps organizations build and operate autonomous agents that humans can trust. garak forms part of their “vijil score”. They offer an API and an efficient orchestration engine |
| [Deepchecks](https://deepchecks.com) | [Integrating garak and NeMo Guardrails together](https://deepchecks.com/the-best-llm-safety-net-to-date-deepchecks-garak-and-nemo-guardrails-all-in-one-bundle/); [example use of garak tools](https://llmdocs.deepchecks.com/docs/pentesting-your-llm-pipeline) |
| [Mindgard](https://www.mindgard.ai) | An AI security platform; Mindgard integrates garak as part of its pentesting and evaluation suite |
| [Giskard](https://giskard.ai) | [Giskard integration](https://docs.giskard.ai/en/stable/reference/scan/llm_detectors.html) |
| [OpsMX](https://www.opsmx.com/) | "OpsMx Delivery Shield embeds Garak’s adversarial testing engine to continuously probe, monitor, and guard live AI and LLM workloads against jailbreaks, data leaks, and policy violations" ([source](https://www.opsmx.com/dynamic-runtime-ai-security)) |
| [Upwind](https://www.upwind.io/) | "Upwind secures your cloud deployments, configurations, and applications through a runtime fabric that provides real-time visibility from the inside out" |
| [GuardionAI](https://guardion.ai/) | "Discover, analyze, and understand 89 AI security threats across 25 categories. From prompt injection to data leakage, including 29,310 prompt attack examples. Explore the complete landscape of LLM vulnerabilities." [PromptIntel](https://guardion.ai/promptintel) |

## Want to be added?

Want a project mentioned, or a description updated? Send a pull request with details. Please use a [neutral tone](https://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view). Inclusion on this page does not imply an endorsement by the maintainers or NVIDIA.