A high-severity security flaw in Meta’s Llama large language model (LLM) framework, tracked as CVE-2024-50050, could allow attackers to execute arbitrary code on the inference server. The vulnerability, which Snyk rated 9.3 (critical), stems from the deserialization of untrusted data via Python’s pickle format in the Llama Stack component. Meta addressed the issue in version 0.0.41 by replacing pickle with JSON for serialization, closing the remote code execution vector. Separately, a high-severity flaw was disclosed in OpenAI’s ChatGPT crawler that could reportedly be abused to mount distributed denial-of-service (DDoS) attacks against arbitrary websites through its handling of HTTP POST requests containing lists of URLs. Together, these incidents underscore the ongoing security challenges facing AI frameworks and the need for robust safeguards to prevent exploitation.
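To see why swapping pickle for JSON closes the hole, consider this minimal sketch (a hypothetical illustration, not Meta’s actual code): pickle invokes `__reduce__` during deserialization, so attacker-supplied bytes can smuggle in arbitrary callables, whereas JSON only reconstructs plain data types.

```python
import json
import pickle

class Evil:
    # pickle calls __reduce__ when deserializing, so a crafted payload
    # can make the server execute an arbitrary callable (here os.system).
    def __reduce__(self):
        import os
        return (os.system, ("echo attacker-controlled command",))

payload = pickle.dumps(Evil())
# pickle.loads(payload) would run the embedded os.system call on the
# server — this is the remote code execution primitive behind the flaw.

# JSON, by contrast, can only yield dicts, lists, strings, numbers,
# booleans, and None, so the same attack surface does not exist.
safe = json.loads('{"task": "inference", "prompt": "hello"}')
print(safe["task"])
```

This is why "deserialization of untrusted data" with pickle is treated as a code-execution bug rather than a mere data-validation issue, and why a format change alone was sufficient as the fix.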
Read more at The Hacker News…