Wait for Google to verify the URL, then click the "Request indexing" button. This process is best done when publishing a new post or page: you are effectively telling Google, "We've added a new page to our site. Please take a look." However, requesting indexing does not solve the underlying problems that prevent Google from indexing existing pages. If you have such an issue, work through the checklist below to determine the cause and resolve it. Some of these tactics may be familiar, but here is a quick introduction to each:

1. Remove crawl blocks in the robots.txt file
2. Remove invalid noindex tags
3. Include pages in the sitemap
4. Remove invalid canonical tags
5. Ensure pages are not orphaned
6. Fix nofollow internal links
7. Add strong internal links
8. Make sure your page is valuable and unique
9. Delete low-quality pages ("crawl budget" optimization)
10. Build high-quality backlinks

1) Remove the crawl block in the robots.txt file

Is Google not indexing your entire website? The cause may be a crawl block in your robots.txt file. To check, visit yourdomain.com/robots.txt and look for either of these two rules:

User-agent: Googlebot
Disallow: /

User-agent: *
Disallow: /

Both of these tell Googlebot that it is not allowed to crawl any page on your site. To fix the problem, remove them. That's all.

robots.txt settings can also prevent Google from indexing specific web pages. To check, enter the URL into Google Search Console's URL Inspection tool, then open the "Coverage" section to see the details. If you see the error "Crawl allowed? No: blocked by robots.txt", the page in question is blocked by robots.txt. In that case, double-check your robots.txt file to see whether the page or a related section is blocked by a "Disallow" rule, and remove that rule.

2) Remove invalid noindex tags

Google won't index pages you've asked it not to index. This is useful when you want to keep certain web pages private. There are two ways to do this.

Method 1: Meta tag

Pages with either of these meta tags in the <head> section will not be indexed by Google:

<meta name="robots" content="noindex">
<meta name="googlebot" content="noindex">

This is a meta robots tag that tells search engines whether or not they may index the page. Note: the important part is the "noindex" value. If it is present, the page is set to noindex.
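The two checks described above (a robots.txt crawl block and a noindex meta tag) can both be sketched with Python's standard library. This is a minimal illustration, not a production auditing tool: the robots.txt rules and the HTML snippet below are hypothetical examples reproducing the blocking patterns from this article.

```python
from urllib.robotparser import RobotFileParser
from html.parser import HTMLParser

# --- Check 1: is Googlebot blocked by robots.txt? ---
# Hypothetical robots.txt content containing the blocking rule from the article.
rp = RobotFileParser()
rp.parse([
    "User-agent: Googlebot",
    "Disallow: /",
])
blocked = not rp.can_fetch("Googlebot", "https://example.com/some-page")
print("Blocked by robots.txt:", blocked)  # True: Googlebot may not crawl any page

# --- Check 2: does the page carry a noindex meta tag? ---
class NoindexFinder(HTMLParser):
    """Scan HTML for <meta name="robots"|"googlebot" content="...noindex...">."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = {k: (v or "") for k, v in attrs}
        if d.get("name", "").lower() in ("robots", "googlebot") \
                and "noindex" in d.get("content", "").lower():
            self.noindex = True

# Hypothetical page source with the meta tag from Method 1.
finder = NoindexFinder()
finder.feed('<html><head><meta name="robots" content="noindex"></head></html>')
print("Page is set to noindex:", finder.noindex)  # True
```

In practice you would fetch the live robots.txt with `RobotFileParser.set_url(...)` plus `read()`, and feed the downloaded page HTML to the parser; the hardcoded strings here simply keep the sketch self-contained.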