Does Google Like Duplicate Content? Yes!!

32 replies • SEO
Many think Google will drop you down the page rankings if you have duplicate content. But you can see duplicate content on the first page of Google all the time.

First, let's define duplicate content: having the same content on different pages of the same site. This is the key.

But the same content can appear on different sites without issue. The best way to understand this is to see it in action.

Google a popular review site. Then copy a paragraph from the review.

Paste the paragraph into the search bar with quotation marks at the beginning and end. This tells Google to return only pages containing these exact words.

Google will search a maximum of the first 32 words in the search bar.

Then you will see the different sites that contain these exact words. This was an eye-opener for me; I always thought Google didn't like duplicate content, until I understood what duplicate content actually was.
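The whole test can be sketched in a few lines of code. A minimal Python sketch - the example paragraph is invented, and the 32-word cap is simply the limit described above, not anything Google documents today:

```python
from urllib.parse import quote_plus

def exact_match_query(paragraph: str, max_words: int = 32) -> str:
    """Build a quoted, exact-phrase Google search URL from a paragraph.

    The post claims Google only considers the first 32 words typed into
    the search bar, so anything beyond that is trimmed before quoting.
    """
    phrase = " ".join(paragraph.split()[:max_words])
    # Surrounding quotation marks tell Google to match these exact words only.
    return "https://www.google.com/search?q=" + quote_plus(f'"{phrase}"')

url = exact_match_query("This toaster exceeded my expectations in every single way")
```

Opening the resulting URL (or pasting the quoted phrase itself into the search bar) shows every indexed page carrying that exact wording.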
#content #duplicate #duplicate content #duplicate content myth #google #google sea #google search rankings
  • mbamonaverma
    Your content must be unique and informative; otherwise Google will penalize your website, and that is harmful to your rankings.
  • Corey Geer
    That's it... I need a drink.

    It's too early for this.
    Signature

    Skype: Coreygeer319

    • TanM
      You could have duplicate content, but Google will not rank your site higher.
    • DavidLowes
      Around 60-70% of the content on the internet is duplicated. That doesn't mean Google 'likes' it or 'doesn't like' it.

      If you are the author of content that gets shared on hundreds of other websites, but the content originated from an indexed page on your website, then to keep it simple - Google generally likes you. Vice versa: if you are one of the many who post that same content on your website, then Google doesn't like you so much, but that's not to say they hate you.

      Everyone states that the key is to create 'original' and 'unique' content, bla bla bla. But that's not necessarily the case. The key is to create a piece of content that contributes to a subject and is valuable enough for others to take note of and want to share, whether it contains snippets from others or not.

      Don't get bogged down in 'duplicate content'; just worry about writing something that is interesting.
  • DireStraits
    Originally Posted by steve875

    Many think Google will drop you down the page rankings if you have duplicate content. But you can see duplicate content on the first page of Google all the time.

    First, let's define duplicate content: having the same content on different pages of the same site. This is the key.

    But the same content can appear on different sites without issue. The best way to understand this is to see it in action.

    [...]

    This was an eye-opener for me; I always thought Google didn't like duplicate content, until I understood what duplicate content actually was.
    Kerr-ching! This is absolutely right!

    Might I say just how very refreshing it is to see people waking up to the truth. For so long this myth seemed destined to be perpetuated by the misinformed masses, much to the pleasure of the scaremongers who stood to benefit from it.

    Matt Cutts has been quite clear for many years in making the distinction between content duplicated for insidious SEO purposes - e.g. with the use of "doorway pages" - and that legitimately spread across the web, adding value to other websites' captive audiences.

    Granted, the search engines try to avoid showing too many of the same results for any one query, and they - Google particularly - have filters in place to prevent this. But filtration is not the same thing as penalisation - a point which unscrupulous product vendors have long been at pains to cover up, their perverse reasoning implying some form of loss when in fact none occurs.

    I would add, though, that it's not strictly necessary to use phrase-matching queries (i.e. quoted searches) to find examples of sites ranking with the same content. Quite often you will see the same stuff across numerous prestigious outlets when running an ordinary search for news events, for example.

    A number of folk here - myself among them - find it useful to refer to this as "syndication" instead of duplication. It is a familiar, standard term throughout the media and publishing industries and, so far as online publishing goes, is useful in clearing up misunderstandings and dispelling the myths that have long blighted intelligent discussion of the matter by lumping two very different practices under the same name.
  • yukon
    Originally Posted by steve875

    Many think Google will drop you down the page rankings if you have duplicate content. But you can see duplicate content on the first page of Google all the time.

    First, let's define duplicate content: having the same content on different pages of the same site. This is the key.

    But the same content can appear on different sites without issue. The best way to understand this is to see it in action.

    Google a popular review site. Then copy a paragraph from the review.

    Paste the paragraph into the search bar with quotation marks at the beginning and end. This tells Google to return only pages containing these exact words.

    Google will search a maximum of the first 32 words in the search bar.

    Then you will see the different sites that contain these exact words. This was an eye-opener for me; I always thought Google didn't like duplicate content, until I understood what duplicate content actually was.
    I think you're a bit confused.

    Searching a text phrase returns pages with the same text; it doesn't return ranked pages with the same text. Those are two completely different things. If the page isn't ranking, it means its SEO is weak. So, just because you can scrape a page doesn't mean you'll ever rank a page.

    Shouting that Google loves duplicate content is hype.
    Signature
    Hi
    • nik0
      Bla bla, now go rank a site that is fully scraped from Amazon and let me know how you're doing.
      • paulgl
        Originally Posted by nik0

        Bla bla, now go rank a site that is fully scraped from Amazon and let me know how you're doing.
        Amazon ranks their fully "scraped" stuff, so?... and duplicates it across multiple websites. If you scrape it from yourself - news releases, press releases, manufacturer literature, publisher notes, etc. - I guess you're safe.

        Wikipedia is the ultimate scrape site, scraping junk from human beings across the globe.

        Google may not love all duplicate content, but they have never hated it. Anyone who has been a member of the WF for any decent amount of time would never say Google hates duplicate content. They sure love a lot of sites with it.

        Paul
        Signature

        If you were disappointed in your results today, lower your standards tomorrow.

        • nik0
          Originally Posted by paulgl

          Amazon ranks their fully "scraped" stuff, so?... and duplicates it across multiple websites. If you scrape it from yourself - news releases, press releases, manufacturer literature, publisher notes, etc. - I guess you're safe.

          Wikipedia is the ultimate scrape site, scraping junk from human beings across the globe.

          Google may not love all duplicate content, but they have never hated it. Anyone who has been a member of the WF for any decent amount of time would never say Google hates duplicate content. They sure love a lot of sites with it.

          Paul
          What a load of nonsense that response is.

          You ever hear people talking about how successful they are with auto blogging these days? I don't.

          Amazon has a little more on their pages than just auto-scraped stuff from the vendors. Think of the tons of user reviews, for example; that alone is plenty to make a page unique. Besides that, the vendors also put a little work into listing their products in detailed ways, with feature lists and all other kinds of stuff.

          Wikipedia also has tons of user-generated content that you can only find there.

          But it's not very fair to compare brand-new sites to sites with huge authority; it has always been the case that they get away with much more than you or me.

          It's the same reason Yelp has outranked almost everything for the past few months.

          I can tell you right now that when a relatively young site hosts a lot of duplicate content, it has a negative influence on the overall performance of the site. I know because I had about 100 press releases from PRWeb hosted on my site (releases that I submitted to PRWeb myself, btw), and my rankings danced like crazy for over a year and never went stable.

          After removing them, my site is performing a LOT better and the huge ups & downs in rankings are as good as gone.
  • DireStraits
    To my eyes it doesn't appear that anybody is talking about ranking a site consisting entirely of syndicated content, much less "scraped" content - scraping being a seedy practice which very often infringes copyright by making illicit use of unlicensed works.

    But that's a whole different ballgame, and one which Google has been equally explicit in denouncing on the basis that it provides no real value to visitors who might as well be pointed straight to the source.

    From the perspective of both content producer and publisher (or licensor and licensee), syndicated content has the most to give when used as a supplement alongside original works.

    That's how it was intended and how it is done in the news media. News outlets use content sourced from AP, Reuters, and other agencies to fill in the gaps, bringing readers coverage above and beyond what their own staff can provide.
    • Kevin Maguire
      Originally Posted by DireStraits

      To my eyes it doesn't appear that anybody is talking about ranking a site consisting entirely of syndicated content, much less "scraped" content - scraping being a seedy practice which very often infringes copyright by making illicit use of unlicensed works.

      But that's a whole different ballgame, and one which Google has been equally explicit in denouncing on the basis that it provides no real value to visitors who might as well be pointed straight to the source.

      From the perspective of both content producer and publisher (or licensor and licensee), syndicated content has the most to give when used as a supplement alongside original works.

      That's how it was intended and how it is done in the news media. News outlets use content sourced from AP, Reuters, and other agencies to fill in the gaps, bringing readers coverage above and beyond what their own staff can provide.
      Yes, groundbreaking stuff, really...

      I'm pretty sure by now that Google has a good grasp of who the world's leading news websites are, how they work, and how to treat their content accordingly. And you won't find 10 copies of the same story ranking in the SERPs. (Show me.)

      "Google has omitted 799 pages" because they're copies of this one, so we don't count those.

      There's a big difference to Google between CNN syndicating a story from Reuters and bestreviewsoftoasterovens.com syndicating a toaster review from Amazon.

      What you're describing simply does not apply to the average webmaster.
      • DireStraits
        Originally Posted by Kevin Maguire

        Yes, groundbreaking stuff, really...

        I'm pretty sure by now that Google has a good grasp of who the world's leading news websites are, how they work, and how to treat their content accordingly. And you won't find 10 copies of the same story ranking in the SERPs. (Show me.)

        "Google has omitted 799 pages" because they're copies of this one, so we don't count those.

        There's a big difference to Google between CNN syndicating a story from Reuters and bestreviewsoftoasterovens.com syndicating a toaster review from Amazon.

        What you're describing simply does not apply to the average webmaster.
        First, what you're describing isn't legitimate syndication. It's copyright infringement. Theft, pure and simple - and you're right, no-one should do that. If they did, SEO penalties should be the least of their concerns.

        Like I say, I'm not denying that the search engines filter out most duplicated results. There's obviously no point in showing them. People want diversity. But at the same time, filtration is not penalisation. For a penalty to apply, sites that publish syndicated content legitimately would have to have their other original (i.e. non-syndicated) content de-ranked as a direct consequence. There is no material loss involved; and, usually, no measurable SEO gain.

        Such penalties just aren't borne out in reality by the experience of anyone who's ever tried legitimate syndication. Never has been. Besides the armchair theorists and unscrupulous purveyors of products related to the production or promotion of unique content, the only people who speak to the contrary tend to be those who believe that walking across cracks in the pavement means Barney the bear is going to eat their kids.

        Matt Cutts echoes all this. I'm pretty sure he doesn't slap those videos on YouTube for the exclusive benefit of the likes of CNN and Reuters.
        • Kevin Maguire
          Originally Posted by DireStraits

          First, what you're describing isn't legitimate syndication. It's copyright infringement. Theft, pure and simple - and you're right, no-one should do that. If they did, SEO penalties should be the least of their concerns.

          Like I say, I'm not denying that the search engines filter out most duplicated results. There's obviously no point in showing them. People want diversity. But at the same time, filtration is not penalisation. For a penalty to apply, sites that publish syndicated content legitimately would have to have their other original (i.e. non-syndicated) content de-ranked as a direct consequence. There is no material loss involved; and, usually, no measurable SEO gain.

          Such penalties just aren't borne out in reality by the experience of anyone who's ever tried legitimate syndication. Never has been. Besides the armchair theorists and unscrupulous purveyors of products related to the production or promotion of unique content, the only people who speak to the contrary tend to be those who believe that walking across cracks in the pavement means Barney the bear is going to eat their kids.

          Matt Cutts echoes all this. I'm pretty sure he doesn't slap those videos on YouTube for the exclusive benefit of the likes of CNN and Reuters.
          No, what I'm describing is one big news company breaking a story, then another big news company also running the same story. I'm pretty confident that both of these companies have their own internal arrangements surrounding copyright, and I'm pretty confident that any infringement case would not be presented to Google to handle.

          "Filtration is not penalisation" is a fairly weak attempt at wordsmithing your way out of it. Try ranking your way out of filtration; its effects are pretty much the same as penalisation, and might as well be de-indexation. You're not ranking for anything when you're in supplemental search, and never will be so long as the "original content" remains live.

          So is it a filter, a penalty, de-indexing, or a supplemental search result? Well, in this case it might as well be any and all of them. The end result is exactly the same: no rankings and not displayed in search results.

          By the way, algo penalties work on a "page level".

          Pauly keeps harping on about Amazon, Wiki... and he knows quite well that he is not talking to the owners of Amazon or Wiki. He also knows that, NO, the world of Google search is not a level playing field; there are sites that can pretty much do whatever they like and get away with it. Unfortunately, we do not own and operate any of those sites. We don't get to play by their Google guidelines handbook.
          • DireStraits
            Originally Posted by Kevin Maguire

            No, what I'm describing is one big news company breaking a story, then another big news company also running the same story. I'm pretty confident that both of these companies have their own internal arrangements surrounding copyright, and I'm pretty confident that any infringement case would not be presented to Google to handle.

            "Filtration is not penalisation" is a fairly weak attempt at wordsmithing your way out of it. Try ranking your way out of filtration; its effects are pretty much the same as penalisation, and might as well be de-indexation. You're not ranking for anything when you're in supplemental search, and never will be so long as the "original content" remains live.

            So is it a filter, a penalty, de-indexing, or a supplemental search result? Well, in this case it might as well be any and all of them. The end result is exactly the same: no rankings and not displayed in search results.

            By the way, algo penalties work on a "page level".

            Pauly keeps harping on about Amazon, Wiki... and he knows quite well that he is not talking to the owners of Amazon or Wiki. He also knows that, NO, the world of Google search is not a level playing field; there are sites that can pretty much do whatever they like and get away with it. Unfortunately, we do not own and operate any of those sites. We don't get to play by their Google guidelines handbook.
            Round and round we go. Where will it stop...

            I present a bit of syndicated content to my visitors so they don't have to leave my website. They're thankful for it. I'm not planning to have it ranked; I don't give a damn if it ranks; and it probably won't rank. It's there to be read by those who subscribe to my RSS feed or frequent my site. For my existing audience, in other words - not to attract a new crowd.

            All the content that's "mine" - that I've written, that I own, and that isn't found anywhere else - continues to rank exactly where it did. The syndicated content perhaps never ranks at all, but then again I never expected it to, because that's not why I published it.

            I'm sorry, but I'm having a difficult time comprehending how any of this constitutes penalisation. I've lost nothing because I've staked nothing, and I was never going to write my own version of the syndicated articles anyway. It's win-win for me and win-win for the content owner, who stands to capitalise on the extra traffic I send his way through the links back to his site. I've gained something in the interest and approval of my audience, giving them one more reason to continue visiting my site over my competitors'. Everything else remains unchanged, unaffected, undiminished.

            Where does the penalisation come in? Why would it? Where have my other pages lost rankings? Why would they?

            Matt Cutts is not speaking to the big sites, whom you seem to believe operate within special parameters. (I wasn't aware poor practice magically became good in the hands of certain folks.) If he was, he'd speak to them privately instead of putting his videos on YouTube for the perusal of every Tom, Dick or Harry, whereby they would effectively foster precisely the practices he, his department and Google generally were trying to combat.

            If you still believe that syndicated content used to supplement one's own confers real tangible losses, then I fear that your perception of risk/reward and win/lose might be diametrically opposed to that of most people, in which case nothing I or anyone else can say will change your mind.

            But the one thing I can assure you of is that many of us have fearlessly syndicated for years. In my case, both as a content creator and as a publisher of other people's. My sites have never been de-indexed, my other pages have never suffered, and - not that they're my overriding concern anyway - my rankings do progressively improve while my audiences continue to return and thank me for a job well done.

            Perhaps it's all just an incredibly vivid dream.
            • Kevin Maguire
              Originally Posted by DireStraits

              Round and round we go. Where will it stop...

              I present a bit of syndicated content to my visitors so they don't have to leave my website. They're thankful for it. I'm not planning to have it ranked; I don't give a damn if it ranks; and it probably won't rank. It's there to be read by those who subscribe to my RSS feed or frequent my site. For my existing audience, in other words - not to attract a new crowd.

              All the content that's "mine" - that I've written, that I own, and that isn't found anywhere else - continues to rank exactly where it did. The syndicated content perhaps never ranks at all, but then again I never expected it to, because that's not why I published it.

              I'm sorry, but I'm having a difficult time comprehending how any of this constitutes penalisation. I've lost nothing because I've staked nothing, and I was never going to write my own version of the syndicated articles anyway. It's win-win for me and win-win for the content owner, who stands to capitalise on the extra traffic I send his way through the links back to his site. I've gained something in the interest and approval of my audience, giving them one more reason to continue visiting my site over my competitors'. Everything else remains unchanged, unaffected, undiminished.

              Where does the penalisation come in? Why would it? Where have my other pages lost rankings? Why would they?

              Matt Cutts is not speaking to the big sites, whom you seem to believe operate within special parameters. (I wasn't aware poor practice magically became good in the hands of certain folks.) If he was, he'd speak to them privately instead of putting his videos on YouTube for the perusal of every Tom, Dick or Harry, whereby they would effectively foster precisely the practices he, his department and Google generally were trying to combat.

              If you still believe that syndicated content used to supplement one's own confers real tangible losses, then I fear that your perception of risk/reward and win/lose might be diametrically opposed to that of most people, in which case nothing I or anyone else can say will change your mind.

              But the one thing I can assure you of is that many of us have fearlessly syndicated for years. In my case, both as a content creator and as a publisher of other people's. My sites have never been de-indexed, my other pages have never suffered, and - not that they're my overriding concern anyway - my rankings do progressively improve while my audiences continue to return and thank me for a job well done.

              Perhaps it's all just an incredibly vivid dream.
              So you agree with me then...that's good. I knew you'd see sense in the end.

              Your readers might appreciate you syndicating content, but the Google algo does not. It just drops it into supplemental search, never to be seen or found in their results.

              Do your site visitors - the ones who found your site through means other than your supplemental search listings - like your duplicate content? Yes.

              Does Google like duplicate content? No

              Glad we could clear that up.
      • blingo
        I agree with that. For instance, suppose you procure an article for a client on the Forbes blog. It's incredibly likely (if not almost certain) that the content will be duplicated across a number of mediums (and that's putting it lightly). For anybody to conclude that an article placement like that would be detrimental to rankings would be borderline asinine.

        Let's take it down a notch and suppose a legal client is placed on a leading legal blog. There too, it's incredibly likely that the content will be copied and spun by a number of online mediums. And again, it's completely beyond anybody's control.

        I know it sounds counter-intuitive (and cheesy, IMHO), but place your content where you really feel it makes the most imprint in the online world. Forget about SEO for a second (yeah, I know that sounds cheesy and beaten to death; trust me, I don't purport to be a holier-than-thou white-hat angel). The reason I say "forget about SEO" is that it frees you from the constraints of algo-to-algo thinking. If you look at it from the "online footprint" perspective, more often than not the algo will reward you, as you're aligning your interests with Big G.

        If I've learned anything about SEO, it's that it's all about building relationships your competitors can't have. It could even be that you gain those relationships via money (if it's done discreetly enough, it can still work), or just by writing kick-ass content that you pay someone $75 to write and the blog editor can't get enough of you.

        But back to "original content" - ask yourself: what would be effective from the "online footprint/influence" perspective? That's the question!
    • yukon
      Originally Posted by DireStraits

      To my eyes it doesn't appear that anybody is talking about ranking a site consisting entirely of syndicated content, much less "scraped" content - scraping being a seedy practice which very often infringes copyright by making illicit use of unlicensed works.
      This is the SEO forum, OP is talking about Google & duplicate content.
  • abruzzi
    IMO, this is silly. Yes, you can find other sites with the same content, but not all those sites are ranking for the same keyword(s).

    Google has become increasingly better at identifying the original content producer and assigning credit to that person. Good luck trying to consistently outrank the originator of the content. It might happen here and there, but I wouldn't try to make a living that way.
    • fastreplies
      Originally Posted by abruzzi

      Good luck trying to consistently outrank the originator of the content. It might happen here and there, but I wouldn't try to make a living that way.
      There is a very good reason why.
      G's cache keeps the date when the crawler first stumbles on something original.
      Everything after that is "somebody trying to impress his girlfriend".



      fastreplies
      • yukon
        Originally Posted by fastreplies

        There is a very good reason why.
        G's cache keeps the date when the crawler first stumbles on something original.
        Everything after that is "somebody trying to impress his girlfriend".



        fastreplies
        Trust me, that cache date doesn't matter. Other people's pages with better SEO will blow right by you in the SERPs.

        This forum thread could be scraped a week from now and outranked (page/title) if someone wanted to waste their time making it happen.
  • K Mec
    Originally Posted by steve875

    Many think Google will drop you down the page rankings if you have duplicate content. But you can see duplicate content on the first page of Google all the time.

    First, let's define duplicate content: having the same content on different pages of the same site. This is the key.

    But the same content can appear on different sites without issue. The best way to understand this is to see it in action.

    Google a popular review site. Then copy a paragraph from the review.

    Paste the paragraph into the search bar with quotation marks at the beginning and end. This tells Google to return only pages containing these exact words.

    Google will search a maximum of the first 32 words in the search bar.

    Then you will see the different sites that contain these exact words. This was an eye-opener for me; I always thought Google didn't like duplicate content, until I understood what duplicate content actually was.
    Google is not deindexing your site for duplicate content, but it's also not ranking you first with duplicate content. :p
  • DireStraits
    I know it's the SEO forum, but parts of my point have a relevance to SEO too.

    Irrational fears are a blight on a person's sovereignty and artificially limit the opportunities available to us. This particular myth has the potential to put people at a disadvantage when pitted against more enlightened competition.

    I've never seen anyone dispute that most duplicate results end up filtered out of the SERPs for most queries, most of the time. Anyone can see that with their own eyes, and it's logical that Google would want to do it. I must stress this point, lest we go round and round in circles.
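    A toy sketch of that filtering, with invented site names (real engines use far subtler near-duplicate detection than an exact hash, but the mechanics of "omit later copies, touch nothing else" look like this):

```python
import hashlib

def filter_duplicates(results):
    """Return URLs with later copies of identical content omitted.

    Only the duplicated *entries* disappear from this one result list;
    nothing is recorded against the hosting sites, so their other pages
    are unaffected - filtration, not penalisation.
    """
    seen, kept = set(), []
    for url, text in results:
        # Fingerprint the normalised text; exact copies collide.
        digest = hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(url)
    return kept

serp = [
    ("original-site.com/review", "A great four-slot toaster."),
    ("syndicator.com/review",    "A great four-slot toaster."),
    ("other-site.com/article",   "A completely different write-up."),
]
```

    Running `filter_duplicates(serp)` keeps the first copy of the duplicated review and the unrelated article; the syndicator's copy is simply omitted from this list, with no mark against the rest of its site.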

    Nevertheless, there are exceptions. It would be a woeful travesty of truth to suggest that no syndicated article will ever outrank an original source or see the light of day in the SERPs. Syndicated content will sometimes draw in additional search traffic, usually for some pretty long-tail keywords of varying relevance. We know pages rank for a multiplicity of phrases under normal conditions, and what these are is determined by a site's authority weighting in relation to particular topical themes. Certainly not an eventuality to stake one's family silver on or anything, but there we go.

    Moving on, anyhow...

    The real contention surrounding the matter of "duplicate content" has always been about whether there are negative consequences above and beyond the likelihood that a duplicated (syndicated) article won't rank. Whether syndication confers the likelihood of a real and tangible SEO loss, in other words, as opposed to just "no significant gain".

    Some people may see "no gain" as a punishment within itself. I consider myself a reasonable chap, but it seems like a perverse twist of logic, this belief that you can lose something you never had.

    I won't even get started on the reality, borne out by my own experience, that syndication can have real SEO benefits for the publisher as well as the syndicator. If it's all the same to you good folks, given how difficult it is to get people to concede to even the simplest of challenges to such deeply entrenched, utterly puzzling beliefs on this topic, I think I'd rather keep these benefits to myself.

    Suffice to say, syndication is not the poisoned chalice some folks in SEO circles are led, and sometimes lead others, to assume.
    {{ DiscussionBoard.errors[8819747].message }}
  • Profile picture of the author yukon
    Banned
    OP is talking about scraping review sites, not syndication. Even if syndication were the case, it doesn't matter: the page with better SEO is what will rank #1 in the Google SERPs. A page buried in the SERPs might as well not exist if a person cares about Google traffic.

    Originally Posted by steve875 View Post

    Google a popular review site. Then copy a paragraph from the review.
    Signature
    Hi
    {{ DiscussionBoard.errors[8819754].message }}
  • Profile picture of the author The SEO
    I ran this experiment before with my Blogger blog. I made a Blogger blog with 6 to 8 duplicate articles, changing only the first 2 to 3 lines of each paragraph. Once it was complete, I started manual, quality link building for it. Within the first two months it sat in 3rd position on Google; after Hummingbird it dropped to page 3 for my keyword, and after a later Google update it jumped to PR 1, but the rankings still haven't recovered.
    {{ DiscussionBoard.errors[8820184].message }}
  • Profile picture of the author IMdeaming
    Even after the OP explained it, you guys are STILL confusing duplicate content with syndicated or stolen content. Oh, and you're conflating penalization with filtration.
    Signature
    Something stinks...
    {{ DiscussionBoard.errors[8820506].message }}
  • Profile picture of the author Dead Body
    Those are just errors the search engine hasn't been able to solve to date. They keep updating their algorithms to kick the spam out.
    Signature
    {{ DiscussionBoard.errors[8820538].message }}
  • Profile picture of the author bigcat1967
    That's it... I need a drink.

    It's too early for this.
    I agree. I thought this argument had been dealt with and put to bed.
    Signature

    How to Lower Your Water Bill (changeyourbudget.com/save-money-on-your-water-bill/)

    {{ DiscussionBoard.errors[8820540].message }}
  • Profile picture of the author holder21
    My friend recently ranked duplicate content above the original, and Google has replaced the original with my friend's duplicate. I am now confused about how Google decides which one is the original copy.
    {{ DiscussionBoard.errors[8821971].message }}
    • Profile picture of the author fastreplies
      Originally Posted by holder21 View Post

      My friend recently ranked duplicate content above the original, and Google has replaced the original with my friend's duplicate. I am now confused about how Google decides which one is the original copy.
      Are you saying that 'duplicate content' is a Myth and that G. is full of sh*t?

      Damn, that freaking G's mouthpiece Matt C. has been lying to us for years now.



      fastreplies
      {{ DiscussionBoard.errors[8823039].message }}
  • Profile picture of the author olicelea
    As holder21 mentioned, there are some cases where G ranks the duplicate content better than the original, especially when the "duplicator" has a higher PR than the original; however, this appears to be a flaw in their algorithm rather than a deliberate choice.

    IMO, after Panda, Penguin, etc., a duplicate-content URL might rank well for a while, but it will eventually drop in the SERPs or even be removed.

    In order to keep the original clearly identified, G recommends using rel="canonical" (and rel="nofollow" where appropriate) when it comes to duplicates; this will keep you away from a lot of trouble.
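    As a minimal sketch of that recommendation (the domains and paths below are placeholders, not real sites), a duplicate or syndicated page can point search engines back to the original with a canonical link in its <head>, and links to copies you don't want to endorse can carry rel="nofollow":

    ```html
    <!-- On the duplicate/syndicated page: declare which URL is the original.
         example.com and the paths are hypothetical. -->
    <head>
      <link rel="canonical" href="https://www.example.com/original-review/">
    </head>

    <!-- When linking out to a duplicate copy, withhold endorsement: -->
    <a href="https://www.example.com/duplicate-copy/" rel="nofollow">duplicate copy</a>
    ```

    Worth noting: Google treats rel="canonical" as a hint rather than a binding directive, so it can still pick a different URL as the original if other signals disagree.
    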
    {{ DiscussionBoard.errors[8822521].message }}

Trending Topics