What Are the Benefits of Duplicate Content?

13 replies
So it's been said that there is no penalty for duplicate content. But what are the benefits?

If there are multiple copies of the same article on the internet...

~ do all the copies get credit for links in the body?

~ is the copy that was posted first ranked highest or the one with the most backlinks to it?

~ how many copies are allowed before Google stops showing more copies of the same article?

~ is the article devalued in the rankings a little the more it is copied?
#benefits #content #duplicate
  • Matt Bard
    The definition of duplicate content has been twisted and applied to several unrelated areas of SEO over the years, and now people think there is no penalty at all, but that too is wrong.

    Duplicate content was originally about the number of keywords stuffed onto a web page. Then it was expanded to include making a duplicate of a high-ranking page elsewhere on your own site.

    There have always been numerous sites on every search engine's front page containing the same information as one another. That has never been the problem.

    There may not be an official "penalty" for having duplicate content on your site, but everyone here should know that the more "keyword stuffing" you do, the less likely your site is to rank highly.

    Matt
  • Matt Bard
    Sorry, forgot to answer your questions.

    1. Yes

    2. Most backlinks

    3. No limit

    4. No

    Matt
  • JohnMcCabe
    If you go beyond any SEO implications, "duplicate content" (i.e., syndication) can bring you a lot more exposure than putting the same piece of content in a single location.

    As examples I could cite Dear Abby, Dave Barry, George Will, David Broder, etc. All syndicate their columns to multiple publishers and gain wide exposure (and the rewards that come with the attention).
    • Holland
      Hi

      I totally agree; "duplicate content" as a penalty does not really exist, so no such penalty is applied.

      But what does happen is that the site best optimized for the article's main keyword will rank highest, ahead of all the others, especially if the site is also well optimized for its niche. Its articles (duplicate or unique) will get the best rank from Google...

      The others will not rank as well, or will fall into the group of results Google treats as unimportant.

      You may call it the 'duplicate content' penalty!


      Yours sincerely
      Angelina
  • Matt Bard
    Here is a re-post of a post I made which goes into more detail about this subject.

    The history of duplicate content.

    Back in the early days of SEO, up until around 1997/98 (when Google arrived), you could have a page about making money online where the title was "How To Make Money Online", the heading was the same, the first paragraph started off with it too, and so on.

    One day, someone figured out that you could have an entire page with nothing but "make money online, make money online..." repeated, just like meta keywords.

    That didn't look very good to the human eye, so the trick was to have an article for the human reader, and under the table that held your article would be a very long list of your keywords repeated over and over again, but this time the keyword text would be the same color as the background.

    Invisible to the naked eye, and the little search bots didn't have a clue, so you could do this and be number one at anything.

    Then the invisible keyword text wars began. It was horrible. Every page you went to had a table with an article and about three pages of invisible text.

    The images all had long "alt" tags that were nothing more than strings of keywords. It all started looking so bad that "doorway pages" were created to hold enough keywords to put War and Peace to shame, with a redirect to the "real" page.
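
    As a concrete illustration (my own sketch, not any engine's actual code), here is roughly how that era's hidden text could be flagged, assuming the keywords were hidden with a simple inline style. The class name, sample page, and white-background assumption are all mine:

```python
from html.parser import HTMLParser

class InvisibleTextDetector(HTMLParser):
    """Toy check: collect text whose inline color matches the background."""

    def __init__(self, background="#ffffff"):
        super().__init__()
        self.background = background.lower()
        self.hidden_chunks = []   # text a human reader would never see
        self._hiding = False

    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").lower().replace(" ", "")
        # Naive test: inline text color equals the page background color.
        if "color:" + self.background in style:
            self._hiding = True

    def handle_endtag(self, tag):
        self._hiding = False

    def handle_data(self, data):
        if self._hiding and data.strip():
            self.hidden_chunks.append(data.strip())

page = ('<p>A short article for the human reader.</p>'
        '<p style="color: #ffffff">make money online make money online</p>')
detector = InvisibleTextDetector()
detector.feed(page)
print(detector.hidden_chunks)  # ['make money online make money online']
```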

    One of the first "smart" engines, Inktomi (which powered HotBot), realized that the only way to deal with this problem was to build in some way of recognizing unnecessary duplicate content.

    They started coming up with counting programs and formulas for determining meaningful content.

    The first thing they did was to drop your site if you had an unrealistic number of the same words over and over. Then they went to work on invisible text.
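
    The "counting" part can be sketched in a few lines. This is purely illustrative, not Inktomi's actual formula; the 15% cutoff is an arbitrary number picked for the example:

```python
import re
from collections import Counter

def keyword_density(text):
    """Share of the total word count taken by each distinct word."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {word: count / total for word, count in Counter(words).items()}

def looks_stuffed(text, threshold=0.15):
    """Flag a page where any single word exceeds the threshold share."""
    return any(share > threshold for share in keyword_density(text).values())

article = "Practical tips for starting your first small online business today."
stuffed = "make money online " * 50 + article
print(looks_stuffed(article))  # False -- no word dominates
print(looks_stuffed(stuffed))  # True -- 'make' alone is over 30% of the words
```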

    Meanwhile, across town, a couple of fellas took this idea and ran with it and Google was born.

    Matt
  • LilBlackDress
    Interesting Matt,

    So are you saying there could be 20 copies of the same article on Google? Wouldn't some of them be excluded at some point for being similar content?

    Sometimes when you search, Google will not show all results. Could this be the case with duplicate articles?
    • halfpoint
      Originally Posted by LilBlackDress View Post

      [...] Sometimes when you search, Google will not show all results. Could this be the case with duplicate articles?
      Try searching for one of your favourite songs' lyrics.

      For example, here is a Google search: Michael Jackson - Billie Jean Lyrics

      A large portion of the top 100 results are the exact same content.
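
      You can measure that overlap yourself. Here is a rough sketch of a standard shingling/Jaccard comparison for near-duplicate text (an illustration of the general technique, not Google's actual de-duplication code; the sample strings are made-up short snippets):

```python
def shingles(text, k=5):
    """Every run of k consecutive words (a 'shingle') in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b):
    """Jaccard overlap of two texts' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

copy_a = "she was more like a beauty queen from a movie scene"
copy_b = "she was more like a beauty queen from a movie scene"
rewrite = "a beauty queen who stepped out of a movie scene"
print(similarity(copy_a, copy_b))   # 1.0 -- identical copies
print(similarity(copy_a, rewrite))  # 0.0 -- shared words, no shared 5-word runs
```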
  • Matt Bard
    Here is just a quick little test.

    Dave Barry is a syndicated columnist, and here is one of his article titles.

    Type this into Google

    Feeling Sick? Blame Your Computer

    The exact same article comes up over and over again.

    The sites that carry the article rank according to each overall site's ability to satisfy the search terms, not according to how many times the article is listed in Google's results.

    If Google had a way to determine which individual copy of the article was the most relevant, it would most certainly rank the original newspaper article that Dave wrote for his own paper highest. But as you can see, this is not the case.

    However, if the paper that Dave worked for had repeated his original article over and over again, they would be "penalized" (given marks against them in the ranking process) for having "duplicate content" (keyword stuffing) on their site.

    Obviously, Google does not count the number of times a keyword is found throughout the web. It is concerned with the number of times it is found on individual pages and sites.

    Matt


    • LilBlackDress
      Originally Posted by Matt M View Post

      [...] The sites that carry the article rank according to each overall site's ability to satisfy the search terms, not according to how many times the article is listed in Google's results. [...]
      So when you are doing a Google search and it excludes certain information as being similar, what is it excluding?

      And you are saying, then, that if someone takes an article from a directory and uses it on their site, they can rank higher than the original author if they have enough backlinks and authority?
      • Matt Bard
        Originally Posted by LilBlackDress View Post

        So when you are doing a Google search and it excludes certain information as being similar, what is it excluding?
        It is excluding similar pages from the same site. Rather than list all of a site's similar pages, it lists only the most relevant one from that site.

        And you are saying, then, that if someone takes an article from a directory and uses it on their site, they can rank higher than the original author if they have enough backlinks and authority?
        Yes.

        I have had my own original articles on someone's site rank higher than mine.

        I have also used someone's article and have had it rank higher than theirs.

        Look at EzineArticles as an example. You can post an article on your own site and have the copy at EzineArticles rank higher than the one on your site.


        Matt
  • TheRichJerksNet
    ~ do all the copies get credit for links in the body?
    Simple answer: yes. This is what syndication is; many, many websites do this, and it is not limited to articles. You have blog posts, press releases, news releases, etc.

    ~ is the copy that was posted first ranked highest or the one with the most backlinks to it?
    No. Being posted first does not mean it will rank highest; the highest rank depends mainly on backlinks. If no backlinks exist, then in "some" cases it comes down to the copy on the site with the most authority. There is no exact answer for this (see the toy sketch after these answers).

    ~ how many copies are allowed before Google stops showing more copies of the same article?
    Simple answer: there is no limit. Some articles, press releases, and news releases can take up two or three pages of Google results.

    ~ is the article devalued in the rankings a little the more it is copied?
    No. Again, this is what syndication is.
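
    As a toy illustration of that ordering (my own sketch with made-up sites and numbers, not Google's actual ranking function, which blends many more signals):

```python
# Hypothetical copies of one syndicated article on different sites.
copies = [
    {"site": "example-blog.com",  "backlinks": 3,  "authority": 20},
    {"site": "big-newspaper.com", "backlinks": 0,  "authority": 90},
    {"site": "article-dir.com",   "backlinks": 45, "authority": 60},
]

# Backlinks dominate; site authority only breaks ties.
ranked = sorted(copies,
                key=lambda c: (c["backlinks"], c["authority"]),
                reverse=True)
for copy in ranked:
    print(copy["site"])
# article-dir.com, example-blog.com, big-newspaper.com
```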

    James
  • LilBlackDress
    "It is excluding similar pages from the same site. Rather than list all of the pages from a site that are similar, it is the most relevant from that site."

    Not sure what you mean. Do you mean it is not excluding the sites that have the duplicate content, but is excluding similar pages within each individual site?
    I thought Google did not include all the sites that have the article... though I know it does include a lot of them.
  • Matt Bard
    Originally Posted by LilBlackDress View Post

    I thought it did not include all the sites that have the article....though I know it does include a lot of them.
    There can be several sites that have a copy of the article but do not get listed on the first few pages because of other "issues" with their ranking.

    Let's say that I have a site that is dedicated to dogs. The only thing that my content covers is dogs. Then one day I place an article about auto insurance on my dog site hoping to get it ranked for auto insurance.

    My page with the auto insurance article not only gets "judged" by Google for relevancy, but my entire site is matched against other sites for the term (keywords) auto insurance.

    Anyone that has more overall site relevancy pertaining to auto insurance will get that added boost in ranking for auto insurance.

    It stands to reason that the auto insurance site has more authority on that subject than my dog site.

    So between the two sites with the same article, the site with the higher authority gets the higher rank for the article.

    As to your question about sites being excluded...it should be "pages" that are excluded from being listed due to redundancy.

    Back to my "dog" site. If I have several pages within my dog site that would be relevant to the search then Google takes my best pages and "excludes" the others just for that search.

    This allows other sites that have the same information to be listed on the page too. It adds to the variety that needs to be there to satisfy searchers.

    Google cannot possibly know exactly what each and every person searching for a term is looking for. Therefore, if they add more results from different sites, they have a better chance of giving the searcher what they are looking for.
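
    A rough sketch of that "show the best, exclude the redundant" behavior (purely illustrative: the sites, pages, and relevance scores are invented, and the real engine may keep more than one page per site):

```python
from itertools import groupby

# Hypothetical search results: (site, page, relevance score).
results = [
    ("dogsite.com", "/breeds",       0.91),
    ("dogsite.com", "/breeds-guide", 0.89),  # redundant with /breeds
    ("dogsite.com", "/training",     0.85),
    ("vetsite.com", "/dog-breeds",   0.88),
]

# Keep only the top page per site; the rest are omitted, not penalized.
results.sort(key=lambda r: (r[0], -r[2]))
shown = [next(pages) for _, pages in groupby(results, key=lambda r: r[0])]
for site, page, score in sorted(shown, key=lambda r: -r[2]):
    print(site + page, score)
# dogsite.com/breeds 0.91
# vetsite.com/dog-breeds 0.88
```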

    It has nothing to do with being penalized or getting a "mark against" your site when some pages are excluded.

    Matt
