LLM Guard: Open-Source Toolkit for Securing Large Language Models

Sep 21, 2023

LLM Guard, an open-source toolkit, provides evaluators for the inputs and outputs of large language models, offering prompt and response sanitization, detection of harmful language, prevention of data leakage, and protection against prompt injection and jailbreak attacks.
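The toolkit is built around chains of input scanners (applied to the user's prompt before it reaches the model) and output scanners (applied to the model's response before it reaches the user). The sketch below illustrates that pattern using the scan_prompt and scan_output helpers and scanner names from the project's repository; the example prompt, the placeholder model response, and the exact scanner selection are assumptions for illustration, so consult the current documentation for the precise API.

```python
# Minimal sketch of an LLM Guard scanning pipeline (names assumed from the
# project's README; verify against the installed version of llm-guard).
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # holds anonymized entities so they can be restored later

# Scanners run on the prompt before it is sent to the LLM
input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
# Scanners run on the model's response before it is shown to the user
output_scanners = [Deanonymize(vault), Sensitive()]

# Hypothetical user prompt containing personal data
prompt = "Summarize the contract signed by Jane Doe (jane.doe@example.com)."
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt rejected by scanners: {results_score}")

# Placeholder for the response returned by the LLM for sanitized_prompt
model_output = "The contract was signed by [REDACTED_PERSON_1]."
sanitized_output, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, model_output
)
if not all(results_valid.values()):
    raise ValueError(f"Output rejected by scanners: {results_score}")

print(sanitized_output)
```

Each scanner returns a validity flag and a risk score, so a rejected prompt or response can be blocked, logged, or routed for review rather than passed through unchanged.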
