How to Block OpenAI’s Crawlers From Scraping Your Website

While users love ChatGPT for the sheer amount of information that it currently holds, the same can’t be said about website owners.

OpenAI’s ChatGPT uses crawlers to scrape websites, but if you’re a website owner, and you don’t want OpenAI’s crawler to access your website, here are a few things that you can do to prevent it.

How Does OpenAI Crawling Work?

A web crawler (also known as a spider or a search engine bot) is an automated program that scans the internet for information. It then compiles that information in a way that's easy for search engines to access.

Web crawlers index every page of every relevant URL, usually focusing on websites that are more relevant to your search queries. For example, let’s assume you’re googling a particular Windows error. The web crawler within your search engine will scan all the URLs from websites that it deems more authoritative on the topic of Windows errors.

OpenAI’s web crawler is called GPTBot, and according to OpenAI’s documentation, giving GPTBot access to your website can help make the AI model safer and more accurate, and it can even help expand the model’s capabilities.

How to Prevent OpenAI From Crawling Your Website

Like most other web crawlers, GPTBot can be blocked from accessing your website by modifying the website’s robots.txt file (part of the robots exclusion protocol). This .txt file is hosted on the website’s server, and it tells web crawlers and other automated programs how to behave on your website.

Here’s a short list of what the robots.txt file can do:

  • It can completely block GPTBot from accessing the website.
  • It can block GPTBot from accessing only certain pages of the website.
  • It can tell GPTBot which links it can follow, and which it cannot.

Here’s how to control what GPTBot can do on your website:

Completely Block GPTBot From Accessing Your Website

  1. Locate (or create) your site’s robots.txt file, and then open it with any text editing tool.
  2. Add the following rules for GPTBot to your site’s robots.txt:

User-agent: GPTBot
Disallow: /
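You can sanity-check rules like the ones above with Python’s standard-library `urllib.robotparser`, which interprets robots.txt the same way well-behaved crawlers do. This is just a local sketch; `example.com` is a placeholder URL, and real crawlers fetch the live file from your server.

```python
import urllib.robotparser

# The same rules as in the example above, parsed locally.
rules = """\
User-agent: GPTBot
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# GPTBot is blocked from every path; other user agents are unaffected.
print(parser.can_fetch("GPTBot", "https://example.com/any-page"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/any-page")) # True
```

Note that robots.txt is advisory: it only works for crawlers that choose to honor it, which OpenAI says GPTBot does.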

Block Only Certain Pages From Being Accessed by GPTBot

  1. Locate (or create) your site’s robots.txt file, and then open it with your preferred text editing tool.
  2. Add the following rules for GPTBot to your site’s robots.txt:

User-agent: GPTBot
Allow: /directory-1/
Disallow: /directory-2/
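The per-directory rules behave the same way and can be checked with the same standard-library parser. Again, a local sketch with placeholder paths and a placeholder `example.com` domain:

```python
import urllib.robotparser

# The directory-level rules from the example above.
rules = """\
User-agent: GPTBot
Allow: /directory-1/
Disallow: /directory-2/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Pages under /directory-1/ stay accessible; /directory-2/ is blocked.
print(parser.can_fetch("GPTBot", "https://example.com/directory-1/page.html"))  # True
print(parser.can_fetch("GPTBot", "https://example.com/directory-2/page.html"))  # False
```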

However, keep in mind that changing the robots.txt file is not a retroactive solution: any information that GPTBot has already gathered from your website cannot be removed.

OpenAI Allows Website Owners to Opt-Out From Crawling

Ever since crawlers have been used to train AI models, website owners have been looking for ways to keep their data private.

Some fear that AI models are effectively stealing their work, and some even attribute declining website traffic to the fact that users can now get their information from chatbots without ever visiting the source websites.

All in all, whether you want to completely block AI chatbots from scanning your websites is completely your choice.
