Researchers Uncover Vulnerabilities in Open-Source AI and ML Models
- November 4, 2024
- Posted by: claudia
A recent disclosure has revealed more than 35 vulnerabilities across a range of open-source artificial intelligence (AI) and machine learning (ML) tools, posing significant security risks that include remote code execution and data theft. The vulnerabilities were reported through Protect AI’s Huntr bug bounty platform and affect tools such as ChuanhuChatGPT, Lunary, and LocalAI.
Among the most critical flaws are two vulnerabilities in Lunary, a toolkit designed for large language models (LLMs). The first flaw, CVE-2024-7474, with a CVSS score of 9.1, is categorized as an Insecure Direct Object Reference (IDOR) vulnerability. This flaw allows authenticated users to view or delete external users, potentially leading to unauthorized access to sensitive data. The second significant shortcoming, CVE-2024-7475 (also rated 9.1), involves improper access control that enables attackers to alter the SAML configuration, allowing unauthorized logins and access to sensitive information.
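To picture the bug class, the sketch below contrasts an IDOR-prone delete endpoint with an ownership-checked one. It is generic Flask-style Python for illustration only, not Lunary’s actual code; the routes, the in-memory user store, and the current_user() helper are all hypothetical.

```python
# Illustrative sketch of the IDOR pattern only; not Lunary's actual code.
from flask import Flask, abort

app = Flask(__name__)

# Hypothetical in-memory store standing in for a real database.
USERS = {1: {"name": "alice", "org": "acme"}, 2: {"name": "bob", "org": "globex"}}

def current_user():
    # Placeholder for real session handling; assume the requester is user 1.
    return {"id": 1, "org": "acme", "is_admin": False}

# Vulnerable pattern: the handler trusts the caller-supplied ID and performs
# no ownership check, so any authenticated user can delete any other user.
@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user_vulnerable(user_id):
    USERS.pop(user_id, None)
    return "", 204

# Hardened pattern: authorize the requester against the target record first.
@app.route("/v2/users/<int:user_id>", methods=["DELETE"])
def delete_user_checked(user_id):
    caller, target = current_user(), USERS.get(user_id)
    if target is None:
        abort(404)
    if not caller["is_admin"] and target["org"] != caller["org"]:
        abort(403)  # reject access to records outside the caller's tenant
    USERS.pop(user_id, None)
    return "", 204
```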
Additionally, another IDOR vulnerability identified in Lunary (CVE-2024-7473, CVSS score: 7.5) allows a malicious actor to change other users’ prompts by manipulating a user-controlled parameter. Protect AI noted that an attacker could intercept a prompt-update request and modify the parameter so that it targets another user’s prompt.
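The interception step can be visualized with a short, entirely hypothetical request: the endpoint, field names, and IDs below are invented stand-ins, since the advisory does not publish the real parameters, but they show how swapping one user-controlled value turns a legitimate update into a cross-user write on a backend that skips ownership checks.

```python
# Hypothetical illustration of parameter tampering in a prompt-update request.
# The URL, header, and JSON field names are invented for this sketch.
import requests

API = "https://lunary.example.internal/api/prompts"  # placeholder endpoint
TOKEN = "attackers-own-valid-session-token"          # authenticated, low-privilege

# A legitimate update references a prompt the caller owns...
legitimate = {"promptId": "prompt-owned-by-attacker", "content": "my prompt"}

# ...but if the backend never checks ownership, changing the ID is enough to
# overwrite someone else's prompt.
tampered = {"promptId": "prompt-owned-by-victim", "content": "attacker-chosen text"}

resp = requests.post(API, json=tampered,
                     headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
print(resp.status_code)  # a 2xx response would indicate the missing check
```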
Another serious vulnerability, CVE-2024-5982, affects ChuanhuChatGPT, where a path traversal flaw in its user upload feature could enable arbitrary code execution and exposure of sensitive data. LocalAI, an open-source project for running self-hosted LLMs, is affected by two flaws: CVE-2024-6983 (CVSS score: 8.8) allows arbitrary code execution via the upload of a malicious configuration file, while CVE-2024-7010 (CVSS score: 7.5) is a timing attack that could let attackers guess valid API keys by measuring how long the server takes to respond.
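The timing-attack class behind CVE-2024-7010 typically comes down to key comparisons that stop at the first mismatching character, so responses return marginally faster for worse guesses. The snippet below is a generic sketch of that pattern and the usual constant-time fix; it is not LocalAI’s code, and the key value is made up.

```python
# Generic sketch of a timing side channel in API-key checks; not LocalAI's code.
import hmac

VALID_KEY = "sk-0123456789abcdef"  # hypothetical server-side secret

def check_key_vulnerable(candidate: str) -> bool:
    # Ordinary equality short-circuits at the first differing character, so the
    # comparison finishes sooner for guesses that are wrong earlier. Measured
    # over many requests, that difference leaks the key one position at a time.
    return candidate == VALID_KEY

def check_key_constant_time(candidate: str) -> bool:
    # hmac.compare_digest takes time independent of where the strings differ,
    # removing the timing signal.
    return hmac.compare_digest(candidate.encode(), VALID_KEY.encode())
```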
Furthermore, an arbitrary file overwrite vulnerability in the Deep Java Library (CVE-2024-8396, CVSS score: 7.8) could lead to remote code execution, highlighting the need for robust security in AI frameworks. In response, NVIDIA has released patches addressing a path traversal flaw in its NeMo generative AI framework (CVE-2024-0129, CVSS score: 6.3), which poses risks of code execution and data tampering. Users are encouraged to update to the latest software versions to mitigate these vulnerabilities and protect their AI/ML supply chains.
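Both the ChuanhuChatGPT and NeMo flaws fall into the path traversal class, where a user-supplied filename containing sequences like ../ escapes the directory the application meant to write into. A minimal, generic guard (not taken from either project; the upload directory is hypothetical) looks like this:

```python
# Generic path-traversal guard for user-supplied filenames; illustrative only.
from pathlib import Path

UPLOAD_ROOT = Path("/srv/app/uploads")  # hypothetical upload directory

def safe_upload_path(filename: str) -> Path:
    # Resolve the candidate path and confirm it still sits under UPLOAD_ROOT;
    # inputs such as "../../etc/cron.d/job" resolve outside and are rejected.
    candidate = (UPLOAD_ROOT / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT.resolve()):
        raise ValueError(f"path traversal attempt rejected: {filename!r}")
    return candidate
```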
In parallel to these disclosures, Protect AI has introduced Vulnhuntr, an open-source static code analyzer that uses LLMs to identify zero-day vulnerabilities in Python codebases. Rather than feeding entire files to the model, Vulnhuntr analyzes code in manageable chunks, which keeps the analysis within the LLM’s context window, and it traces function call paths across the codebase to assess potential vulnerabilities end to end.
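The chunk-then-trace idea can be sketched roughly as follows. This is not Vulnhuntr’s actual implementation: the chunk size, the prompt wording, and the ask_llm() stub are placeholders for whatever model client a real tool would use.

```python
# Rough sketch of chunked, LLM-assisted code review; not Vulnhuntr's real code.
from pathlib import Path

CHUNK_LINES = 120  # hypothetical chunk size chosen to fit the model's context window

def chunk_source(path: Path, size: int = CHUNK_LINES):
    """Yield a source file as consecutive, line-bounded chunks."""
    lines = path.read_text(encoding="utf-8").splitlines()
    for start in range(0, len(lines), size):
        yield "\n".join(lines[start:start + size])

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client; the stub keeps the sketch runnable.
    return f"stub analysis of a {len(prompt)}-character chunk"

def review_file(path: Path) -> list[str]:
    findings = []
    for chunk in chunk_source(path):
        findings.append(ask_llm(
            "List potential vulnerabilities (path traversal, IDOR, SSRF, etc.) "
            "in this Python code and name the affected functions:\n\n" + chunk
        ))
    return findings
```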
Aside from these framework security issues, Mozilla’s 0Day Investigative Network has disclosed a new jailbreak technique that bypasses safeguards in OpenAI’s ChatGPT by encoding malicious instructions in hexadecimal format and in emojis. Dressed up as a benign operation such as hex conversion, the encoded prompt can instruct the model to carry out tasks it would otherwise refuse. The exploit highlights the model’s tendency to follow instructions step by step without adequately assessing the safety of the overall task, a weakness rooted in insufficient context awareness.
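The encoding layer itself is trivial; what matters is that a filter inspecting the literal prompt text sees only hex digits rather than the decoded instruction. A harmless round trip illustrates the mechanism (the example string is deliberately benign):

```python
# Benign demonstration of hex-encoding prompt text; the message is harmless.
message = "write a haiku about autumn"

encoded = message.encode("utf-8").hex()            # what a text filter sees
decoded = bytes.fromhex(encoded).decode("utf-8")   # what decoding reconstructs

print(encoded)  # a run of hex digits with no obvious keywords to flag
print(decoded)  # the original instruction reappears after decoding
```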
This series of vulnerabilities underscores the pressing need for continuous scrutiny and enhancement of security measures within AI and ML frameworks, given their growing significance and widespread adoption in various sectors.