Some sites seem to do really well in search engines just by ripping off other sites’ content and republishing it, possibly with an additional image or a changed headline, with or without visible attribution, and with or without a correct rel=canonical.
Other sites that do this even minimally seem to suffer for it, whether through an algorithmic factor like Panda, a manual action, or simply an inability to rank well.
Does Google penalize sites that have a lot of duplicate content?
Magic SEO Ball says: Reply hazy, try again.
One of the differences between professional SEOs and Google search quality engineers is that SEOs tend to think specifically and speak broadly, while Googlers tend to think broadly and speak specifically.
When an SEO – especially one who isn’t very good and doesn’t know enough to use precise language – says that his site was “penalized” or “punished” by Google, there are a few things he might mean:
1. I did something to harm my site (e.g., server errors, slow page speed, blocking crawlers, indexation issues, canonicalization issues), which caused me to lose rankings, which caused me to lose traffic.
2. My competitors improved their sites, which caused them to gain rankings, which caused my site to decline by the same amount, which caused me to lose traffic.
3. Google changed its organic ranking algorithm to favor something that I’m not doing, or not doing well, which caused me to lose rankings, which caused me to lose traffic.
4. Google did something else entirely with search results pages, like knowledge graph or answer cards or seven-result SERPs or rich snippets or …, which didn’t cause me to lose rankings at all, but which did cause me to lose traffic.
5. Google released an algorithmic ranking factor (e.g., Panda, Penguin) that apparently suppressed my site in search results, even though nobody at Google will ever be able to confirm this for me, which caused me to lose rankings, which caused me to lose traffic.
6. I got caught doing something that violated Google’s webmaster guidelines, or I got caught not preventing someone from using my site to do something that violated them, and Google put a manual action on my site, which caused me to lose rankings, which caused me to lose traffic.
When Google says “penalty,” however, they are talking about only one possible thing: a manual action (#6).
For instance: There is No Duplicate Content Penalty in Google. Here we have a representative of Google answering the question in the narrowest possible way so as to be able to say that there is not a penalty for duplicate content. That is technically true as long as you define “penalty” very narrowly, but it doesn’t come close to answering the question.
Calling duplicate content a “filter” instead of a penalty is helpful for the five percent of SEOs who understand the difference – and there is a big and meaningful difference – between the two, and is likely to be received as completely obfuscatory nonsense by the ninety-five percent of SEOs who just want an answer to this question: Is duplicate content bad?
So we will answer their question. Duplicate content is bad, for several reasons.
There actually is a duplicate content penalty
A later article on the same site linked above states very clearly that, in some specific cases, there can be a domain-wide duplicate content penalty: when a site overwhelmingly uses other sites’ content without offering much unique material of any value. This is an actual penalty – a manual action, in Google’s words – that requires first fixing the problem and then submitting a reconsideration request to resolve.
The aforementioned duplicate content filter
As we all have seen, sometimes duplicate content is relegated to some index-below-the-index that isn’t even visible to searchers unless they click a link to view all the results.
Duplicate content opens the door to unnecessary canonicalization issues
There are a lot of ways to handle duplicate content on a site or among sites. One popular and recommended way is using rel=canonical to signal that a certain version is the preferred one and that it should receive the link equity of the others. The canonical tag does work most of the time, but it is only a workaround for a problem that’s fundamentally avoidable.
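For clarity, here is what that signal looks like in practice; the page URLs below are hypothetical, used only for illustration.

```html
<!-- In the <head> of the duplicate page, e.g. https://example.com/widgets?sort=price -->
<!-- This tells search engines that the clean URL below is the preferred version -->
<link rel="canonical" href="https://example.com/widgets">
```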
There are also a great many cases where the canonical doesn’t work as intended. For example, if the second domain has a vastly higher domain authority than the first, or if Google crawled the duplicate before it saw the original, or if the duplicate gets far more links and shares than the original, the rel=canonical pointing from the second to the first may be ignored.
rel=canonical also does not pass as much link equity as a 301 redirect, which means losing PageRank whenever it is used. It also needs to be engineered, tested, and maintained, which can be a challenge for huge sites because it isn’t visible to users. Faulty implementations of rel=canonical, while now rare, are scary enough (imagine being the webmaster of the site that lost 98% of its traffic because every page suddenly had a canonical pointing to the home page) that one needs to act with caution.
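To make that risk concrete, here is a hypothetical sketch of the faulty implementation just described, again with made-up URLs:

```html
<!-- Faulty: this same tag was accidentally shipped on every page of the site, -->
<!-- including deep pages like https://example.com/widgets/blue-widget -->
<link rel="canonical" href="https://example.com/">
<!-- Search engines may then treat every page as a duplicate of the home page -->
<!-- and drop the rest of the site from the index -->
```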
Duplicate content should be avoided
Don’t avoid it at all costs, because there are some scenarios where it’s perfectly useful. But wherever possible, try to engineer elegant solutions that allow your site to avoid duplicate content.