Duplicate Content Question

Hey!

Here are two identical articles published on different high-traffic websites:

1. How Our Brains Stop Us From Achieving Our Goals (and How to Fight Back)
2. How Our Brains Stop Us Achieving Our Goals and How to Fight Back - The Buffer Blog

Both articles are indexed by Google.

LifeHacker reposted the article under the author's name as a guest post after the author asked them to. Now, bear in mind that the author (Gregory Ciotti) is actually cited by many SEO experts as somebody who knows his SEO, and the people at LifeHacker are no slouches either, so I feel it's safe to assume that both of these articles are safe in the eyes of Google.

With that being said, how is this not duplicate content?
#search engine optimization #content #duplicate #question
  • It will. Just wait and you'll see.
  • It is the same content, but what makes it different is that Lifehacker mentioned the real source of the content (Buffer) at the end of the article. That makes it content syndication instead of duplicate content.
  • Usually that's not a problem. Google is happy to index both, but near or exact duplicates may drop in rankings or even get filtered from the results for being too similar. Nobody searches the supplemental index.

    However, both these articles also rank for something like "brain achieving goals".

    Indeed, both sites are well known. I suspect that their backlink profiles help, and possibly also the fact that Lifehacker links to the other article (syndication). The articles were published in 2012, so it's not likely that they're going anywhere.
  • You need to study a lot of things before passing judgement.
  • We can all speculate, but let's talk about the things we do know, mkay? I'd love to read what SEO experts of the forum have to say.

    @shailender and @nettiapina, that's exactly my point - the articles are from 2012, and they're doing fine. So maybe all this paranoia about duplicate content is a little overblown?

    Again, bear in mind that the author of the article is actually pretty well known in SEO circles, and I don't see why LifeHacker would risk any penalties just to host somebody else's guest post, even if it brings additional traffic. These people know what they're doing, and they don't seem bothered one bit about publishing "duplicate" content.

    So another question arises in response to @shailender and @nettiapina: you're saying that as long as we point to the original article, it's not duplicate content anymore because it's now syndication? Does this mean I can have multiple sites/pages/whatnot with the same content, all pointing to one original article? Can you see where I'm going with this?

    Also, does anybody know for a FACT that this is how Google's algorithm works? Because by the sound of it, Google's bots would have to crawl the "duplicate" (read: published later than the original) page and find a link back to the original. If the link exists somewhere on the page - boom, it's not duplicate; if it doesn't, then it is. Pretty black & white, ey?

    ^ Now I'm not saying this is not how it is (because I don't know), but can anybody actually confirm that this is how Google bots work? Because it sounds somewhat dumb to me or rather too simplistic.
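    Nobody outside Google can confirm it, but the black-and-white rule being hypothesized above can at least be written down. This is a sketch of the *poster's hypothesis only*, not Google's actual algorithm; the URLs and the `is_syndication` helper are made up for illustration:

```python
# Hypothetical sketch of the "link back = syndication" rule discussed above.
# This is NOT how Google is known to work; it just makes the hypothesis concrete.

def is_syndication(outbound_links, original_url):
    """Treat a later-published copy as syndication only if it links to the original."""
    return original_url in outbound_links

# Made-up crawl data: outbound links found on the reposted copy.
links_on_copy = [
    "http://blog.bufferapp.com/brains-goals",   # attribution link at the end
    "http://lifehacker.com/tag/productivity",
]

print(is_syndication(links_on_copy, "http://blog.bufferapp.com/brains-goals"))  # True
print(is_syndication([], "http://blog.bufferapp.com/brains-goals"))             # False
```

    Even written out like this, the rule looks trivially gameable, which is part of why it seems too simplistic to be the whole story.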

    I personally don't see how Google can have it both ways, unless they review websites manually. They either accept duplicate content or penalize it. Yes, it would be great if they can distinguish between good duplicate content and bad, but can they?

    The LifeHacker case shows that it's possible, but I'm curious to know how exactly this works...
    • No, I'm not. I wasn't too clear about it, but I tried to say that both sites are fairly strong, even if Lifehacker is far more well known than Buffer. A simple link probably won't do much when it comes to Google burying or not burying an article.

      I was hitting long-tail keywords that probably have fairly low competition. Yukon was using an even longer keyword. It'd make sense to me for Google to bury one of these articles if there were something else that seemed relevant.

      Even Matt Cutts doesn't give out that kind of information. As always, all of this is mere speculation.
  • Duplicate content is when you have one article on two pages of the same domain or subdomain.

    Everything else is syndication.

    Let's say you have an article, "Bodybuilding secrets", on:

    1) www.yourdomaindotcom/
    2) www.yourdomaindotcom/yourblog/
    3) www.subdomain.yourdomaindotcom/

    That's duplicate content. The same article on:

    1) www.facebookdotcom/
    2) www.diggdotcom/

    is syndication.

    Geena
  • I wouldn't get tied up in the jargon; all that matters is how Google ranks them. These articles are both safe unless they're being spammed all over the place to low-PR sites.
  • It will not be considered duplicate content if a rel=canonical tag is applied to the copies. Otherwise it will be considered duplicate.
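    For anyone unfamiliar with the tag: a syndicated copy declares the original with `<link rel="canonical" href="...">` in its `<head>`, and search engines are asked to credit that URL instead. A minimal sketch using Python's standard `html.parser` to pull the canonical URL out of a page's source (the page and URL below are made up for illustration):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            d = dict(attrs)
            if d.get("rel", "").lower() == "canonical":
                self.canonical = d.get("href")

# Hypothetical source of a syndicated copy, pointing back to the original article.
html = """
<html><head>
  <title>How Our Brains Stop Us Achieving Our Goals</title>
  <link rel="canonical" href="http://blog.bufferapp.com/brains-goals" />
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # the URL search engines are asked to treat as the original
```

    Note that rel=canonical is a hint, not a directive: search engines may honor or ignore it, so it doesn't guarantee the "otherwise it's duplicate" outcome either way.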
  •
    Duplicate content on different domains is all about authority, strongest domain/page wins, it's that simple.

    Notice the dates on both pages for the links in the OP:
    • blog.bufferapp.com (July 11th, 2012)
    • lifehacker.com (July 25th, 2012)

    Bufferapp posted the article before Lifehacker, but Lifehacker ranks #1 for the exact page title without quotes (both pages use the same page title). Lifehacker has more authority. Bufferapp doesn't even show up in Google SERPs for its own page title, the same title for which Lifehacker ranks #1.

    Bufferapp does show up (#1) when you search for the Lifehacker-related page/title. Bufferapp is pretty much buried in Google's supplemental SERPs, at least for that one page title.

    You also have to take into account that the page title includes the word "How", which is what Lifehacker's entire site is all about (how to do things...). That puts Lifehacker at an advantage, the same way it would put an authority site like eHow at an advantage.

    Look at Lifehacker's internal authority for the keyword "how" (how to do things...).

    Now combine all that internal page authority with the external backlinks Lifehacker has pointing at those thousands of relevant indexed pages (how to do things...). That's some strong authority going on there.

    I didn't bother checking any short-tail keywords, especially considering Bufferapp doesn't even show up for its own page title without double quotes. It's possible their page ranks for keywords that have traffic, but I don't have time to look.

    As always, it doesn't matter who owns the original content when it comes to SEO and ranking pages. I'm not suggesting you take content you don't own; I'm just pointing out that it doesn't matter where or when the original content first existed.

    This is why I laugh when people suggest that others should post content on domains they don't own (ex: EZA). They're clueless if they think EZA (for example) won't outrank a new/weak domain carrying its own content. EZA banks on that free content by drawing in the SERP traffic the original author could have been getting if they hadn't submitted their content to an authority domain and instead ranked their own page.
