The ultimate guide to finding and solving Duplicate Content

Duplicate Content in short

Duplicate content refers to very similar, or the exact same, content being on multiple pages. Keep this in mind:

  • Duplicate content adds little to no value for your visitors and confuses search engines.
  • Avoid having duplicate content, as it may harm your SEO performance.
  • Duplicate content can be caused by technical mishaps and manually copied content.
  • There are effective ways to prevent both cases of duplicate content from becoming an issue, which we’ll discuss in this article.

What is duplicate content?

Taken narrowly, duplicate content refers to very similar, or the exact same, content being on multiple pages within your own website or on other websites.

Taken broadly, duplicate content is content that adds little to no value for your visitors. Therefore, pages with little to no body content are also considered to be duplicate content.

You should avoid having duplicate content, as it confuses search engines and may harm your SEO performance.

Search engine robots get confused by duplicate content.

Why is duplicate content bad for SEO?

Duplicate content is bad for two reasons:

  1. When there are several versions of content available, it’s hard for search engines to determine which version to index, and subsequently show in their search results. This lowers performance for all versions of the content, since they’re competing against each other.
  2. Search engines will have trouble consolidating link metrics (authority, relevancy and trust) for the content, especially when other websites link to more than one version of that content.

Is there a duplicate content penalty?

Having duplicate content can hurt your SEO performance, but it won’t get you a penalty from Google as long as you didn’t intentionally copy someone else’s website. If you’re an honest website owner with some technical challenges, and you’re not trying to trick Google, you don’t have to worry about a penalty.

If you’ve copied large amounts of other people’s content, then you’re walking a fine line. This is what Google says about it:

“Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don’t follow the advice listed above, we do a good job of choosing a version of the content to show in our search results.”

Common causes of duplicate content

Duplicate content is often due to an incorrectly set up web server or website. These occurrences are technical in nature and will likely never result in a Google penalty. They can seriously harm your rankings though, so it’s important to make it a priority to fix them.

But besides technical causes, there are also human-driven causes: content that’s purposely copied and published elsewhere. As we’ve said, these can result in penalties when there’s malicious intent behind them.

Duplicate content due to technical reasons

Non-www vs www and HTTP vs HTTPS
Say you’re using the www subdomain and HTTPS. Then your preferred way of serving your content is via https://www.example.com. This is your canonical domain.

If your web server is badly configured, your content may also be accessible through:

  • http://example.com
  • http://www.example.com
  • https://example.com

Duplicate content due to different canonical domains.

Choose a preferred way of serving your content, and implement 301 redirects for non-preferred ways that lead to the preferred version: https://www.example.com.
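
If your site runs on Apache with mod_rewrite enabled, a redirect along these lines in your .htaccess would take care of this (example.com is a placeholder for your own domain):

  # Send every non-HTTPS or non-www request to https://www.example.com with a 301
  RewriteEngine On
  RewriteCond %{HTTPS} off [OR]
  RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
  RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

On nginx or other web servers the syntax differs, but the idea is the same: pick one canonical scheme and host, and 301 redirect everything else to it.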

URL structure: casing and trailing slashes
URLs are case-sensitive, meaning that https://example.com/url-a/ and https://example.com/url-A/ are seen as different URLs. When you’re creating links, it’s easy to make a typo, causing both versions of the URL to get indexed.

A forward slash (/) at the end of a URL is called a trailing slash. Often, URLs are accessible through both variants: https://example.com/url-a and https://example.com/url-a/.

Duplicate content due to inconsistencies in URL casing and trailing slash usage.

Choose a preferred structure for your URLs, and for non-preferred URL versions, implement a 301 redirect to the preferred URL version.
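
Assuming Apache again, a sketch of both fixes could look like this (the lowercase redirect needs a RewriteMap, which is only allowed in the main server configuration, not in .htaccess):

  # In .htaccess: append a missing trailing slash (real files such as images are skipped)
  RewriteEngine On
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteRule ^(.*[^/])$ /$1/ [R=301,L]

  # In the server configuration: redirect URLs containing uppercase characters
  # to their lowercase version
  RewriteMap lc int:tolower
  RewriteCond %{REQUEST_URI} [A-Z]
  RewriteRule (.*) ${lc:$1} [R=301,L]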

Index pages (index.html, index.php)
Without your knowledge, your homepage may be accessible via multiple URLs because your web server is misconfigured. Besides https://www.example.com, your homepage may also be accessible through:

  • https://www.example.com/index.html
  • https://www.example.com/index.asp
  • https://www.example.com/index.aspx
  • https://www.example.com/index.php

Choose a preferred way to serve your homepage, and implement 301 redirects from non-preferred versions to the preferred version.
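
On Apache, for example, a rule like this in .htaccess would send the index file variants back to the homepage:

  # Redirect requests for /index.html, /index.php, etc. to the root URL
  RewriteEngine On
  RewriteCond %{THE_REQUEST} \s/index\.(html|php|asp|aspx)[\s?] [NC]
  RewriteRule ^index\.(html|php|asp|aspx)$ https://www.example.com/ [R=301,L]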

Parameters for filtering
Websites often use parameters in URLs so they can offer filtering functionality. Take this URL for example:

https://www.example.com/toys/cars?colour=black

This page would show all the black toy cars.

While this is fine for visitors, it may cause major issues for search engines. Filter options often generate a virtually infinite number of combinations when there is more than one filter available, all the more so because the parameters can be rearranged as well.

For example, these two URLs would show the exact same content:

  • https://www.example.com/toys/cars?colour=black&size=large
  • https://www.example.com/toys/cars?size=large&colour=black

Duplicate content due to different order of URL parameters.

Implement canonical URLs that point each filtered page to its main, unfiltered page. This prevents duplicate content and consolidates the authority of the filtered pages.
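
For the toy car example above, every filtered variant would carry a canonical link element pointing at the unfiltered category page, roughly like this:

  <!-- In the <head> of https://www.example.com/toys/cars?colour=black and every other filtered variant -->
  <link rel="canonical" href="https://www.example.com/toys/cars" />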

Taxonomies
A taxonomy is a grouping mechanism used to classify content. Taxonomies are often used in Content Management Systems to support categories and tags.

Let’s say you have a blog post that is in three categories. The blog post may be accessible through all three:

  • https://www.example.com/category-a/topic/
  • https://www.example.com/category-b/topic/
  • https://www.example.com/category-c/topic/
Duplicate content due to pages being in multiple categories.

Be sure to choose one of these categories as the primary one, and have the other versions point to it through a canonical URL.

Dedicated pages for images
Some Content Management Systems create a separate page for each image. This page often just shows the image on an otherwise empty page. Since this page has no other content, it’s very similar to all the other image pages and thus amounts to duplicate content.

So on these dedicated image pages, implement a canonical URL pointing to the page in which the image was used.

Comment pages
If you have comments enabled on your website, you may be automatically paginating them after a certain number of comments. The paginated comment pages show the original content; only the comments at the bottom are different.

For example, the article URL that shows comments 1-20 could be https://www.example.com/category/topic/, with https://www.example.com/category/topic/comments-2/ for comments 21-40, and https://www.example.com/category/topic/comments-3/ for comments 41-60.

Use the pagination link relationships to signal that these are a series of paginated pages.
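
On the second comment page from the example above, the pagination link elements would look something like this:

  <!-- In the <head> of https://www.example.com/category/topic/comments-2/ -->
  <link rel="prev" href="https://www.example.com/category/topic/" />
  <link rel="next" href="https://www.example.com/category/topic/comments-3/" />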

Localization
When it comes to localization, duplicate content issues can arise when you’re using the exact same content to target people in different regions who speak the same language. For example: when you have a dedicated website for the Canadian market and also one for the US market—both in English—chances are there’s a lot of duplication in the content.

Use the hreflang link relationships to signal that the localized pages are meant for a different audience.
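
Assuming the US site lives on example.com and the Canadian site on example.ca (both placeholder domains), each homepage would include hreflang annotations like these:

  <!-- Served on both the US and the Canadian homepage -->
  <link rel="alternate" hreflang="en-us" href="https://www.example.com/" />
  <link rel="alternate" hreflang="en-ca" href="https://www.example.ca/" />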

Indexable search result pages
Many websites allow searching within the website. The pages on which the search results are displayed are all very similar, and in most cases don’t provide any value to search engines. That’s why you don’t want them to be indexable for search engines.

Prevent search engines from indexing the search result pages by using the meta robots noindex attribute. In general, it’s also a best practice not to link to your search result pages.
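
The noindex directive is just a meta tag in the head of the page, for example:

  <!-- In the <head> of every internal search result page -->
  <meta name="robots" content="noindex" />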

Indexable staging/testing environment
Using staging environments to roll out and test new features is a best practice too. However, these environments are often incorrectly left accessible and indexable for search engines.

Duplicate content due to multiple environments being publicly available.

Use HTTP authentication to prevent access to staging/testing environments. An additional benefit is that you’re preventing the wrong people from accessing them too.
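
On Apache, for instance, basic HTTP authentication for a staging environment takes just a few lines of .htaccess (the path to the .htpasswd file is a placeholder):

  # Require a username and password for every request to this environment
  AuthType Basic
  AuthName "Staging environment"
  AuthUserFile /path/to/.htpasswd
  Require valid-user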

Avoid publishing work-in-progress content
A new page that’s still a work in progress often contains little content and therefore provides little to no value, so don’t publish it right away.

Save unfinished pages as drafts. If you do need to publish pages with limited content, prevent search engines from indexing them: use the meta robots noindex attribute.

Parameters used for tracking
Parameters are commonly used for tracking purposes too. For instance, when URLs are shared on Twitter, the source is added to the URL. This is another cause of duplicate content. Take for example this URL that was tweeted using Buffer:

https://www.contentkingapp.com/academy/ecommerce-link-building/?utm_content=buffer825f4&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

It’s a best practice to implement self-referencing canonical URLs on pages. If you’ve already done that, the issue solves itself: all URLs with these tracking parameters are canonicalized by default to the version without the parameters.
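
For the tweeted URL above, the page would always serve the same canonical link element, regardless of which parameters are appended:

  <!-- In the <head> of the page, with or without ?utm_... parameters -->
  <link rel="canonical" href="https://www.contentkingapp.com/academy/ecommerce-link-building/" />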

Session IDs
Sessions may store visitor information for web analytics. If each URL a visitor requests gets a session ID appended, this creates a lot of duplicate content, because the content at these URLs is exactly the same.

For example, when you click through to a localized version of our website, we add a Google Analytics parameter to the URL, resulting in something like https://www.contentking.nl/?_ga=2.41368868.703611965.1506241071-1067501800.1494424269. It shows the homepage with the exact same content, just on a different URL.

Once again, it’s a best practice to implement self-referencing canonical URLs on pages. If you’ve already done that, the issue solves itself: all URLs with these session parameters are canonicalized by default to the version without the parameters.

Print-friendly version
When pages have a print-friendly version at a separate URL, there are essentially two versions of the same content. Imagine this: https://www.example.com/some-page/ and https://www.example.com/print/some-page/.

Implement a canonical URL leading from the print-friendly version to the normal version of the page.

Duplicate content caused by copied content

Landing pages for paid search
Paid search requires dedicated landing pages that target specific keywords. The landing pages are often copies of original pages, which are then adjusted to target these specific keywords. Since these pages are very similar, they produce duplicate content if they are indexed by search engines.

Duplicate content due to minor differences between landing pages.

Prevent search engines from indexing the landing pages by implementing the meta robots noindex attribute. In general, it’s a best practice to neither link to your landing pages nor include them in your XML sitemap.

Other parties copying your content
Duplicate content can also originate from others copying your content and publishing it elsewhere. This is a problem in particular when your website has low domain authority and the party copying your content has higher domain authority. They may then be perceived as the original author and rank above you.

Make sure that other websites credit you by both implementing a canonical URL leading to your page and linking to your page. If they’re not willing to do so, you can send a DMCA request to Google and/or take legal action.

Finding duplicate content

Finding duplicate content within your own website

Using ContentKing, you can easily find duplicate content by checking whether your pages have a unique page title, meta description, and H1 heading. You can do this by going to the Issues section and opening the “Meta information” and “Content Headings” cards. See if there are any open issues regarding:

  • “Page title is not unique”
  • “Meta description is not unique”
  • “H1 heading is not unique”

Finding duplicate content outside your own website

If you’ve got a small website, you can try searching in Google for phrases between quotes. For instance, if I want to see if there are any other versions of this article, I may search for “Using ContentKing, you can easily find duplicate content by checking whether your pages have a unique page title, meta description, and H1 heading.”

Alternatively, for larger websites you can use a service such as Copyscape. Copyscape crawls the web looking for multiple occurrences of the same or nearly the same content.

Frequently asked questions about duplicate content

  1. Can I get a penalty for having duplicate content?
  2. Will fixing duplicate content issues increase my rankings?
  3. How much duplicate content is acceptable?

1. Can I get a penalty for having duplicate content?

If you didn’t intentionally copy someone’s website, then it’s very unlikely for you to get a duplicate content penalty. If you did copy large amounts of other people’s content, then you’re walking a fine line. This is what Google says about it:

“Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don’t follow the advice listed above, we do a good job of choosing a version of the content to show in our search results.”

2. Will fixing duplicate content issues increase my rankings?

Yes, because by fixing the duplicate content issues you’re telling search engines what pages they should really be crawling, indexing, and ranking.

You’ll also be preventing search engines from spending their crawl budget for your website on irrelevant duplicate pages. They can focus on the unique content on your website that you want to rank for.

3. How much duplicate content is acceptable?

There’s no one good answer to this question. However:

If you want to rank with a page, it needs to be valuable to your visitors and have unique content.

Learn more about Duplicate Content

If you want to keep reading about Duplicate Content, we recommend checking out these resources:
