LLM Guard: Open-Source Toolkit for Securing Large Language Models
The open-source toolkit provides evaluators for LLM inputs and outputs, offering sanitization, harmful-language detection, data-leakage prevention, and protection against prompt injection and jailbreak attacks.
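As a rough illustration of this two-sided workflow, the sketch below applies scanners to the prompt before it reaches the model and to the response before it is returned. It is a minimal example based on the project's documented usage pattern; the exact class and function names (scan_prompt, scan_output, PromptInjection, and so on) may differ in the current release, and call_llm is a hypothetical placeholder for your own model call.

```python
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Sensitive
from llm_guard.vault import Vault

# The vault stores anonymized placeholders so they can be restored in the output.
vault = Vault()

input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
output_scanners = [Deanonymize(vault), NoRefusal(), Sensitive()]

prompt = "Summarize this email from john.doe@example.com and draft a polite reply."

# Sanitize and evaluate the prompt before it reaches the model.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt rejected by input scanners: {results_score}")

# call_llm is a hypothetical stand-in for whatever LLM client you use.
model_output = call_llm(sanitized_prompt)

# Evaluate and sanitize the model's response before returning it to the user.
sanitized_output, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, model_output
)
if not all(results_valid.values()):
    raise ValueError(f"Output rejected by output scanners: {results_score}")

print(sanitized_output)
```

Each scanner returns both a pass/fail verdict and a risk score, so applications can choose to block, log, or rewrite content depending on policy rather than relying on a single hard rejection.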