Sevenstar Websolutions

Google Robots.txt Announcement

Google’s Official Announcement – Robots.txt Noindex Support Has Been Retired

Google has officially announced that Googlebot will no longer obey the noindex directive in robots.txt. Publishers relying on the robots.txt noindex method have until September 1, 2019 to remove it and switch to an alternative.

Robots.txt Noindex directive declared Unofficial

Google has retired the robots.txt noindex directive because it was never an official, documented rule.

Though Google supported the directive in the past, it will no longer do so. Take note of this update and adjust your sites accordingly.
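For reference, the retired rule typically looked like the following in a robots.txt file (an illustrative example with a hypothetical path; this syntax was never part of the documented robots.txt standard and is ignored after September 1, 2019):

```
User-agent: Googlebot
Noindex: /private-page.html
```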

Google mostly followed the Noindex directive

An article published by Stone Temple found that Google mostly obeyed the robots.txt noindex directive.

Its conclusion:

    “Ultimately, the NoIndex directive in Robots.txt is pretty effective. It worked in 11 out of 12 cases we tested. It might work for your site, and because of how it’s implemented it gives you a path to prevent crawling of a page AND also have it removed from the index.

    That’s pretty useful in concept. However, our tests didn’t show 100 percent success, so it does not always work.”

Also have a look at Google’s official tweet:

    “Today we’re saying goodbye to undocumented and unsupported rules in robots.txt

    If you were relying on these rules, learn about your options in our blog post.”

The relevant part of the announcement:

“In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019.”

How can indexing be controlled?

Alongside this announcement, Google’s blog post suggests four ways to control indexing:

  • Use noindex in robots meta tags
  • Return 404 or 410 HTTP status codes for pages that should drop out of the index
  • Use password protection to keep pages out of the index
  • Remove URLs with the Remove URL tool in Search Console
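As a sketch of the first alternative, a page can opt out of indexing with a robots meta tag in its HTML head (this is the documented, supported replacement for the robots.txt noindex rule):

```html
<!-- Placed inside the page's <head>: tells crawlers not to index this page -->
<meta name="robots" content="noindex">
```

The same directive can also be sent as an HTTP response header, which works for non-HTML resources such as PDFs:

```
X-Robots-Tag: noindex
```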
