The “Discovered - currently not indexed” error indicates that Google knows about these URLs, but hasn’t crawled them yet — and therefore hasn’t indexed them either.
For most small websites, this URL state is natural and this issue will automatically resolve after Google’s crawled the URLs. To illustrate, this is where the URLs are in Google’s indexing process:
Here’s what this error looks like in Google Search Console’s Index Coverage report:
If you’re encountering this issue on larger websites (10,000+ pages), this may be caused by:
- Overloaded server: Google had trouble crawling your site because it appeared to be overloaded. Check with your hosting provider whether this was the case.
- Content overload: Your website contains much more content than Google is willing to crawl at the moment — they think it’s not worth their time. Examples of content that fit this bill: filtered product category pages, auto-generated content, and user-generated content. You can fix this by pruning content, by making the content more unique if you want Google to crawl and index it, or — if you find Google is discovering content it shouldn’t — by removing links to it and updating your robots.txt file to prevent Google from accessing these URLs.
- Poor internal link structure: Google isn’t finding enough ways into the content that’s still due to be crawled. This can be fixed by improving the internal link structure.
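As a rough sketch of the robots.txt approach mentioned above — the paths and parameter names here are hypothetical, so adjust them to match the filtered and auto-generated sections of your own site:

```txt
User-agent: *
# Block filtered product category pages (hypothetical "filter" URL parameter)
Disallow: /*?filter=
# Block an auto-generated section (hypothetical path)
Disallow: /auto-generated/
```

Keep in mind that robots.txt only prevents crawling, not indexing: URLs that are already indexed can remain in Google’s index, so combine this with removing internal links to those URLs.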
Issues 1 and 2 are classic examples of crawl budget problems, which are an area of concern especially for larger websites.