Critical Flaws in Ollama AI Framework Could Enable DoS, Model Theft, and Poisoning
- November 4, 2024
- Posted by: claudia
Recent cybersecurity research has uncovered six vulnerabilities in the Ollama artificial intelligence (AI) framework that malicious actors can leverage for a range of harmful activities, including denial-of-service (DoS) attacks, model poisoning, and model theft. Oligo Security researcher Avi Lumelsky noted that, collectively, these vulnerabilities allow an attacker to carry out a wide range of malicious actions with a single HTTP request, underscoring their potential severity.
Ollama is an open-source application facilitating the local deployment and operation of large language models (LLMs) on Windows, Linux, and macOS platforms. Its GitHub repository has garnered considerable attention, being forked over 7,600 times, indicating widespread adoption and usage within the developer community.
The disclosed vulnerabilities are detailed as follows:
- CVE-2024-39719 (CVSS score: 7.5) – This vulnerability allows attackers to exploit the /api/create endpoint to check for file existence on the server, patched in version 0.1.47.
- CVE-2024-39720 (CVSS score: 8.2) – This out-of-bounds read vulnerability can crash the application via the /api/create endpoint, leading to a DoS condition, corrected in version 0.1.46.
- CVE-2024-39721 (CVSS score: 7.5) – This flaw causes resource exhaustion that results in DoS when leveraging the /api/create endpoint repeatedly with "/dev/random" as an input, fixed in version 0.1.34.
- CVE-2024-39722 (CVSS score: 7.5) – A path traversal vulnerability in the /api/push endpoint that exposes the files present on the server and the directory structure in which Ollama is deployed, also patched in version 0.1.46.
- Other unpatched vulnerabilities include a potential model poisoning risk via the /api/pull endpoint from untrusted sources and a threat of model theft through the /api/push endpoint to untrusted targets.
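Taken together, the patch versions above imply a simple audit rule: any deployment older than 0.1.47 is still exposed to at least one of the four patched flaws. A minimal sketch of such a version check (the helper names are our own illustration, not part of Ollama):

```python
# Map each patched CVE to the first Ollama release that fixes it,
# per the advisory above. (Helper names are illustrative, not Ollama APIs.)
PATCHED_IN = {
    "CVE-2024-39719": (0, 1, 47),  # file-existence probe via /api/create
    "CVE-2024-39720": (0, 1, 46),  # out-of-bounds read DoS via /api/create
    "CVE-2024-39721": (0, 1, 34),  # resource exhaustion via /api/create
    "CVE-2024-39722": (0, 1, 46),  # path traversal via /api/push
}

def parse_version(v: str) -> tuple:
    """Turn a version string like '0.1.45' into a comparable tuple."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def unpatched_cves(version: str) -> list:
    """Return the advisory CVEs that this Ollama version lacks fixes for."""
    current = parse_version(version)
    return sorted(cve for cve, fixed in PATCHED_IN.items() if current < fixed)
```

For example, a server reporting version 0.1.46 would still be flagged for CVE-2024-39719, while 0.1.47 or later clears all four patched issues; the two unpatched risks below are not version-dependent and must be handled by deployment controls.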
For the two unaddressed vulnerabilities, Ollama maintainers advise users to restrict the exposure of endpoints to the internet using proxies or web application firewalls, stressing the importance of not assuming that all endpoints can be safely exposed. Lumelsky points out that the default setup of Ollama makes all endpoints accessible without proper segregation or documentation.
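The maintainers' guidance amounts to deny-by-default filtering of which API paths reach Ollama at all, typically enforced in a reverse proxy or WAF. A hedged sketch of the underlying logic (the endpoint lists and function are our own illustration, not an Ollama or proxy API):

```python
# Endpoints the advisory associates with abuse when internet-facing.
# (Illustrative deny-list; adapt to what your deployment actually serves.)
SENSITIVE_ENDPOINTS = {"/api/create", "/api/push", "/api/pull"}

def should_forward(path, allowed=frozenset({"/api/generate", "/api/chat"})):
    """Deny-by-default: forward a request to Ollama only if its path is
    explicitly allowed and is not one of the abuse-prone endpoints."""
    normalized = "/" + path.strip("/").lower()
    return normalized in allowed and normalized not in SENSITIVE_ENDPOINTS
```

In practice the same effect is achieved declaratively, e.g. with proxy location rules that only route the inference endpoints and return an error for everything else, which is what "don't assume all endpoints can be safely exposed" looks like operationally.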
Oligo identified 9,831 unique internet-facing Ollama instances, hosted predominantly in China, the U.S., Germany, South Korea, and other countries. Alarmingly, roughly one in four of these servers is vulnerable to the newly identified flaws.
This advisory follows a significant discovery disclosed by cloud security firm Wiz in late June 2024: a severe flaw (CVE-2024-37032) enabling remote code execution against Ollama. Lumelsky emphasizes the danger of exposing Ollama to the internet without authorization, comparing it to exposing the Docker socket, since the service permits file uploads and offers endpoints through which attackers can pull or manipulate models.
These findings underscore critical cybersecurity risks associated with current deployments of the Ollama framework, highlighting the necessity for rigorous security measures to protect against exploitation. Users of the framework are urged to follow best practices for securing their deployments to mitigate these vulnerabilities effectively.