
AI Hallucinations Could Be a Cybersecurity Risk

Large language model AIs are imperfect and sometimes generate false information. These instances, called hallucinations, can pose a cyber threat to businesses and individual AI enthusiasts.


Fortunately, you can increase defenses against AI hallucinations with heightened awareness and healthy second guessing.


Why Does AI Hallucinate?

There is no consensus on why AI models hallucinate, but there are several likely explanations.

AI is trained on massive data sets, which often contain flaws such as gaps in coverage, uneven emphasis across topics, or harmful biases. Training on incomplete or inadequate data could be the root of hallucinations, even if later iterations of the data set received curation from data scientists.

Over time, data scientists can make information more accurate and add knowledge to fill gaps, reducing the potential for hallucinations. Mislabeled data and errors in the programming code are other possible causes. Fixing these issues is essential because AI models advance based on machine learning algorithms.

These algorithms use data to make determinations. Building on them, the AI's neural network makes new decisions based on what it has learned, loosely imitating the way human minds reason. These networks contain transformers, which analyze the relationships between distant data points. When transformers go awry, hallucinations may occur.
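For the curious, here is a minimal sketch in Python of the scaled dot-product attention step at the core of a transformer. It is illustrative only; real models use learned weight matrices and many attention heads, and the names here are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q, k, v):
    """Scaled dot-product attention: each token's output is a weighted
    blend of every token's value vector, which is how the model relates
    distant data points to one another."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)      # pairwise relevance scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(self_attention(x, x, x).shape)     # (4, 8)
```

If flawed training data skews the learned weights that produce these relevance scores, the model can confidently relate things that don't belong together, which is one way hallucinations emerge.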

How AI Hallucinations Provide Opportunities for Hackers


Unfortunately, it's not common knowledge that AI hallucinates, and AI sounds confident even when it's completely wrong. This makes users more complacent and trusting of AI, and threat actors rely on that behavior to get victims to download or trigger their attacks.

For example, an AI model might hallucinate a fake code library and recommend that users download that library. It’s likely that the model will continue recommending this same hallucinated library to many users who ask a similar question. If hackers discover this hallucination, they can create a real version of the imagined library—but filled with dangerous code and malware. Now, when AI continues to recommend the code library, unwitting users will download the hackers’ code.

Delivering harmful code and programs by taking advantage of AI hallucinations is an unsurprising next step for threat actors. Hackers aren't necessarily creating countless novel cyber threats; they're merely looking for new ways to deliver them without suspicion. AI hallucinations prey on the same human naïveté that clicking on email links depends on (which is why you should use link-checking tools to verify URLs).

Hackers might take it to the next level, too. If you're looking for coding help and download the fake, malicious code, the threat actor could make the code genuinely functional, with a harmful program running in the background. Just because it works the way you anticipate doesn't mean it isn't dangerous.

A lack of education, combined with online autopilot behavior, may encourage you to download AI-generated recommendations without a second thought. Every sector is under cultural pressure to adopt AI in its business practices, and countless organizations and industries far removed from tech are experimenting with AI tools with little experience and even sparser cybersecurity, simply to stay competitive.

How to Stay Safe From Weaponized AI Hallucinations


Progress is on the horizon. Creating malware with generative AI was easy before companies adjusted data sets and terms and conditions to prevent unethical outputs. Knowing the societal, technical, and personal weaknesses you may have against dangerous AI hallucinations, what are some ways to stay safe?

Anyone in the industry can work on refining neural network technology and library verification, and there must be checks and balances before responses reach end users. While that is a necessary industry advancement, you also have a role to play in protecting yourself and others from generative AI threats.

Average users can practice spotting AI hallucinations with these strategies:

  • Spotting spelling and grammar errors.
  • Noticing when the response doesn't align with the context of the query.
  • Recognizing when AI-generated images don't match how human eyes would see the concept.

Always be careful when downloading content from the internet, even when it's recommended by AI. If AI recommends downloading code, don't do so blindly; check reviews to ensure the code is legitimate and see if you can find information about its creator.
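If the recommendation happens to be a Python package, one lightweight sanity check is to look the name up on PyPI's public JSON API before installing anything. The sketch below is a starting point, not proof of legitimacy; the package name is hypothetical, and a missing entry, a brand-new upload, or an anonymous maintainer are all reasons to pause:

```python
import json
import urllib.error
import urllib.request

def inspect_pypi_package(name):
    """Fetch basic metadata for a package from PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        # A 404 means the package doesn't exist -- possibly a hallucinated name
        print(f"'{name}' was not found on PyPI. It may be hallucinated.")
        return

    info = data["info"]
    print(f"Package:  {info['name']}")
    print(f"Author:   {info.get('author') or 'unknown'}")
    print(f"Homepage: {info.get('home_page') or 'none listed'}")
    print(f"Releases: {len(data.get('releases', {}))}")

# Example: check a package name an AI assistant suggested (hypothetical name)
inspect_pypi_package("some-ai-suggested-package")
```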

The best defense against AI hallucination-based attacks is education. Sharing your experiences and reading about how others prompted malicious hallucinations, whether by accident or through intentional testing, is invaluable for navigating AI in the future.

Enhancing AI Cybersecurity

You must be careful what you ask for when talking to an AI. Limit the potential for dangerous outcomes by being as specific as possible and questioning anything that appears on the screen. Test code in safe environments and fact-check other seemingly trustworthy information. Additionally, collaborating with others, discussing your experiences, and simplifying jargon about AI hallucinations and cybersecurity threats can help everyone become more vigilant and resilient against hackers.
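One practical way to test code in a safe environment is to run it in a throwaway container with networking disabled. This is a minimal sketch assuming Docker is installed; the image tag and file path are placeholders:

```python
import subprocess

def run_untrusted_script(path):
    """Run a Python script inside a disposable, network-isolated container.
    --rm deletes the container afterwards, --network none blocks all
    network access, and the read-only mount exposes only the script."""
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",
        "-v", f"{path}:/sandbox/script.py:ro",
        "python:3.12-slim",
        "python", "/sandbox/script.py",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    print(result.stdout)
    print(result.stderr)

# Example: test AI-recommended code before trusting it on your machine
run_untrusted_script("/tmp/script.py")
```

Containers aren't a bulletproof sandbox, but they raise the bar considerably compared with running unknown code directly on your machine.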
