LLM Guard, a toolkit designed to enhance the security of Large Language Models (LLMs), is now available for free on GitHub. It offers sanitization, harmful language detection, data leakage prevention, and protection against prompt injection and jailbreak attacks. The toolkit aims to simplify the secure adoption of LLMs for companies, addressing security risks and control concerns. Future updates include better documentation, GPU inference support, and a security API.