Has This Little Known Panda Theory Killed YOUR Site's Rankings?

by brettb
33 replies
  • SEO
OK, I'm gonna toss a theory into the ring...

Panda is largely about DUPLICATE CONTENT.

I don't think Panda looks for on-page issues, site speed or other factors. Why?

Because I have 6 near-identical WordPress sites, and two have been completely unaffected by Panda.

First of all, my theory isn't my own, it's this guy's:

Google's Panda Penalty: No Recovery After 1 Year

Read this carefully before you rip my post to shreds!

Executive summary:

This guy believes that Google trusts certain sites and knows that they will NEVER have duplicate content. Sites like the IRS, NY Times, BBC etc.

So rest assured, if you ever copy content from these sites, then you'll have a Panda penalty for dupe content.

I believe that successive iterations of Panda have allowed Google to "trust" more sites. Maybe there's a trust hierarchy. In fact, they already have one - the PageRank of a site's home page.

So I think what they're doing is flagging content as duplicate if it appears on a site with a higher rank than yours - and this is the important bit - even if you're the original content creator.

Yikes!

Google's Folly

Look, Google has a big problem. Until recently (2011?) they don't appear to have recorded where a piece of content first originated. Now they are recording it.

The problem is that I have content dating from 2002 or earlier. And it appears Google doesn't have a clue as to where it first popped up, because they don't appear to have ever recorded that fact.

Matt Cutts is very aware of the problem. How do I know? Because in one of his videos he responded to a guy asking what would happen if Site A published a document and Site B copied it, then got indexed before Site A [so Site A's content effectively appears to have been published first by Site B].

Well, Matt didn't really have an answer for this, which is why I suppose they're trying rel=author, using sitemaps and rapidly crawling the most trusted sites. I can't find the video again, but I think he suggests using the web spam feedback link at the bottom of the search page, or filing a DMCA notice to take down the copy.

My Evidence of a Panda Duplicate Content Penalty

Panda is resource intensive, so it only updates once a month (or less often).

This gives me clues that they're doing something big, like checking for duplicate content. Maybe they're checking for direct copies, as well as things that are similar (like spun/rewritten content). Maybe they also check for duplicate images, although I believe Panda only works on text.
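As an aside, detecting "things that are similar" is usually done with shingling - comparing sets of overlapping word n-grams between documents. Here's a minimal sketch of that general technique (nobody outside Google knows what Panda actually computes, so this is purely illustrative):

```python
def shingles(text: str, k: int = 3) -> set:
    """Break a document into overlapping word k-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two documents' shingle sets - a standard way
    to spot near-duplicates, including lightly spun/rewritten copies."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

original = "how to fix a leaking tap in five easy steps"
spun = "how to repair a leaking tap in five easy steps"
print(jaccard(original, original))  # identical text scores 1.0
print(jaccard(original, spun))      # one swapped word still scores ~0.45
```

An exact copy scores 1.0, and the "spun" version still scores far above two unrelated documents - which is why swapping a few words doesn't hide a copy from this kind of check.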

I think that successive iterations of Panda haven't changed the algorithm much, but they've simply scaled it up to more sites (maybe they're trusting a lot more sites).

Anyway, here's my hard evidence that I've been smashed by duplicate content.

First my old sites (1999 and 2002 vintage). They've been hit hard, especially in the last Panda iteration.

Why?

Because my content is all over the web!

There are two reasons for this. Firstly, my software site used PAD files to distribute my software details, so I have pretty much identical content on loads of download sites. I also copied some of my own content from sites I used to write on. So I'm partly to blame for my own downfall.

Secondly, there's been a heck of a lot of content stealing, particularly from my 1999 blog. In fact I found some Indian programmer had actually copied one of my entire articles and posted it as his own work. And some travel blogger on blogspot had copied my entire article about visiting Japan. There are also many scraper sites that republish snippets of my sites and yours, maybe hoping that they'll rank for my keyword + your keyword.

Now this gives me clues that Panda ain't about quality. Because my stuff about Japan is unique and rare, simply because not many people have visited that country.

I've maybe also been sunk because the duplicate content is on sites with a higher PageRank than mine (Blogspot, eHow clones, software download sites). As a result, I don't even rank for my own product name. Sheesh, Google have clearly screwed up here, because even DuckDuckGo can figure this one out!

Second, my new sites, which I started last year.

Some have been hit, some haven't. Why?

I think there are two reasons. I'll admit some of my content is junk. However, Google are a poor judge of quality because my site about red widgets got hit hard, but blue widgets escaped. Now I am a guru on red widgets (I have owned several), but I know a lot less about blue widgets (which I've never owned). Where I slipped up on the red widgets site is that I republished some of my banned HubPages on my own sites. Why waste content?

Big mistake!

This is a classic Panda spam signal - a small site copying content from a big site!

Finally, onto other people's sites.

Many other well respected sites have been hit by Panda. I believe that duplicate content is the issue. That Tim the Builder site did have thin content. But he also had a lot of content that was ripped off by eHow writers and legions of other sites. Maybe they didn't copy him word for word, but there are really only so many ways you can write how to fix a leaking tap. And once again, I believe that eHow would have a better trust rank than Tim's site.
Of all the copied content Warriors are likely to write, tutorials and how to's are most at risk.

Cookery sites have seen a lot of Panda smackdowns - again, a recipe is very easy to rip off; indeed, cookery book writers have been doing it to each other for hundreds of years!

Are YOU Affected?

Are you affected? Just search for the first paragraph of some of your prime copyable pages (tutorials, really popular content) in double quotes in Google.

If you find identical content, you could have a problem.

If you find identical content outranking your own site, you've definitely got a problem!
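If you want to script this check, all it takes is turning a page's opening paragraph into a quoted ("exact match") Google search URL. A minimal sketch (the sample paragraph is made up, and Google may loosen very long quoted queries, so keep the snippet short):

```python
from urllib.parse import quote_plus

def exact_match_search_url(paragraph: str, max_words: int = 25) -> str:
    """Build a Google search URL that looks for an exact, quoted match
    of the first few words of a page's opening paragraph."""
    snippet = " ".join(paragraph.split()[:max_words])  # trim long paragraphs
    return "https://www.google.com/search?q=" + quote_plus(f'"{snippet}"')

# Paste in the opening paragraph of one of your prime copyable pages:
url = exact_match_search_url("Fixing a leaking tap is a job most people can do themselves.")
print(url)
```

Open the resulting URL in a browser; if other domains show up with the same text (or outrank you), you've found your duplicates.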


How to Fix Panda

OK, so if you've copied content from a larger site, then you're dead in the water. Remove the copied content and hope for the best!

Beyond this...

First, make sure you've registered your site in WEBMASTER TOOLS and have made a SITE MAP.

Second, add rel=author tags to your content. I don't think Google uses this for duplicate content checking, but one day they will.

Third, if your site is older than 2009 and you've never used a sitemap, then maybe search for the first paragraph of some of your content (or use Copyscape). Then take down the duplicates, either via DMCA requests or by emailing the hosting company listed in the WHOIS (this is apparently more effective than contacting the site's webmaster). If this theory holds water, then your priority should be to go after duplicate content on high PageRank sites - don't worry about five-page EMDs or crappy autoblogs.

Fourth, if your site is big and not vaguely MFA or anything, then file a reconsideration request: give Google evidence that you're the original owner of the content, and show who has stolen it.

Fifth, write FEWER pages, and make them LONGER. This will allow you and Google to more easily check for duplicates.

Sixth, make content that CAN'T be copied so easily, like YouTube Videos or Facebook social pages or forums. It's surely no coincidence that sites using these haven't been so badly hit by Panda.

Seventh, write NEW content to mitigate the penalty of having duplicate content.

Holes in My Theory

I've never been able to recover from Panda. But up till now I've really been addressing thin content, and not duplicate content.

I also don't know how long the content copying penalty lasts, and whether it still applies if the original document disappears (like my banned HubPages).

Now I'm not suggesting Panda only looks at duplicate content. However, I think it is by far the biggest factor, especially if you've only been doing white hat stuff, done little link building, and can honestly say your content is great.

Anyway, I'd be interested in hearing if anyone else thinks they have been sunk by their own content appearing on other sites. Hopefully this theory will give you something to focus on.

My Action Plan

Look, I'm screwed! I want my software site back in Google because I have a real product and it's exactly the site that should rank well. Yet my content is so widespread that it would take years to DMCA it all down.

So I'm going to scrap my software site, and rebuild it with 10% of the pages it once had. I'll also add Facebook content and some videos.

I think I can save my blog; what I'll do is delete the worst of the duplicate content, plus issue DMCA notices for the copied content.

I'll post updates if I make progress in getting my rank back, so watch this space!
  • Profile picture of the author boxoun
    I think part of your theory is sound. Panda is looking for unique content, not just Copyscape-free content, and there's a difference. Adding a personal story can be a difference maker.

    No real proof just speculation based on my sites.

    Of course this is where others will jump in with contradicting info lol.
  • Profile picture of the author PerformanceMan
    What is the PageRank of your home page?
    Signature
    Free Special Report on Mindset - Level Up with Positive Thinking
  • Profile picture of the author jfambrini
    Your theory rings true to my experience. My sites have been around since 2002 and have been copied by almost all of our competitors. We are a car exporter from Asia and since most of our competitors don't speak English that well, they thought it was okay to rip us off. Since there were so many cheaters and burden of proof was on us we did not bother but especially because we were on page 1 on all our searches. Now post-Panda, post-Penguin and post-EMD our sites are being hit we are stuck. One of our copiers is just a made for Adsense site, he is not even a car exporter, and he is now out-ranking us.
  • Profile picture of the author brettb
    Thanks guys.

    You know what, I've got some more meat on the bone of this theory.

    My HubPages got chucked out of Google. You know why? Because an Indian site has stolen ALL of my Hubs! I've just filed a copyright violation notice to see if that will help.

    What's disturbing is that their site ranks for an exact match of some of my text, but my own Hubs are only shown in the supplemental results!
    Signature
    FindABlog: Find blogs to comment on, guest posting opportunities and more
  • Profile picture of the author PerformanceMan
    What you're describing is a symptom of low PageRank, and not necessarily 'the Panda.'
  • Profile picture of the author brettb
    Yeah that's a good explanation. I don't know if the fake site damaged my Hubs, or whether I'm just seeing it because my Hubs have a low page rank. It sucks that the fake site had AdSense on, well hopefully not for much longer although the AdSense team seem slow to respond to complaints.

    I have also seen my HubPages content on Scribd and other places, and I imagine that's where my Hub traffic is going instead.

    As I said in my original post, my really old web content has duplicates all over the place, so Google won't have a clue which was the original version. That's gotta be hurting my site. And putting duplicate content on my own sites does seem to damage them severely - so from now on I'm not republishing stuff anywhere else.

    And for boxoun - yeah, the scrapers are stealing my entire articles, complete with my anecdotes, and sometimes even my photos.
    • Profile picture of the author PerformanceMan
      Originally Posted by brettb View Post

      Yeah that's a good explanation. I don't know if the fake site damaged my Hubs, or whether I'm just seeing it because my Hubs have a low page rank. It sucks that the fake site had AdSense on, well hopefully not for much longer although the AdSense team seem slow to respond to complaints.

      I have also seen my HubPages content on Scribd and other places, and I imagine that's where my Hub traffic is going instead.

      As I said in my original post, my really old web content has duplicates all over the place, so Google won't have a clue which was the original version. That's gotta be hurting my site. And putting duplicate content on my own sites does seem to damage them severely - so from now on I'm not republishing stuff anywhere else.

      And for boxoun - yeah, the scrapers are stealing my entire articles, complete with my anecdotes, and sometimes even my photos.
      Let me ask you one other question: if you search for an exact snippet of text from one of your articles, where is your page ranked?

      Take one of the pages you're sure has been copied a lot and search for a few sentences in "quotes." Then notate how many pages come up and where your page is ranked.

      Keep in mind, also that there's a big difference between duplicate content and syndicated content.

      Duplicate content: multiple copies appear on your website. Syndicated content: identical copies of content appear on diverse websites.

      What you're describing is 'unauthorized syndicated content' and not 'duplicate content.'
      • Profile picture of the author paulgl
        Why is Panda about duplicate content?

        Google has never said nor hinted that they hate duplicate content. In fact, they readily index it, show it, and even promote it.

        Site speed has never been an issue, except for people who freak out at every statement Google makes. Too bad they keep misquoting them.

        This is too funny:
        This guy believes that Google trusts certain sites and knows that they will NEVER have duplicate content. Sites like the IRS, NY Times, BBC etc.

        So rest assured, if you ever copy content from these sites, then you'll have a Panda penalty for dupe content.
        Whoever wrote that knows absolutely nothing about duplicate content, or how the real internet works. NY Times? ROTFLMAO! Ever hear of a little ol' company called "Reuters"? Everybody copies from them....

        BBC? Right now one of their big stories is about Sandy... scraping info from the National Hurricane Center. The front page is a story about Obama, complete with a Getty image!

        Every big site, and I mean EVERY BIG SITE, from Google to eBay, from Amazon to GasBuddy, from the NY Times to the LA Times, from Wikipedia to Yahoo, uses duplicate content!

        Sure, Google sure hates duplicate content....

        Paul
        Signature

        If you were disappointed in your results today, lower your standards tomorrow.

  • Profile picture of the author PerformanceMan
    The issue is syndicated content. The site with more authority and higher PR is getting the pages ranked first.

    Sadly, the scrapers have more authority and PR than you do. It's really that simple.
    • Profile picture of the author dburk
      Hi brettb,

      First, the theory that you reference is totally wrong in that it has absolutely nothing to do with Panda. You, and the blogger that you referenced, are pointing out the well known duplicate content filter, it is not a penalty, there is no duplicate content penalty. The duplicate content filter has been in place for many years prior to the Panda update.

      Second, as paulgl pointed out, some of the websites you cited as sites "that Google trusts... and knows that they will NEVER have duplicate content" are known to use massive amounts of syndicated (duplicate) content.

      Finally, from the earliest days, Google has been very clear about what the Panda update was about. In case you missed it, here are some general as well as specific things that Panda targets: Official Google Webmaster Central Blog: More guidance on building high-quality sites
  • Profile picture of the author sleeperz
    Not many people have visited Japan? Ha, good one!

    Your theory makes sense , though. You could be right about many parts of it.
  • Profile picture of the author Stan
    It's why content curation was all the hype not too long ago, wasn't it?
  • Profile picture of the author rahmanpaidar
    The most common problem here in this forum is that you guys think "if I only do that, I will be rewarded, and if I do this, I will surely get penalized."

    Let me stress this: Google won't penalize or reward your site over one mistake, or just for using unique content. Most times your site is penalized not because of a single mistake, but because of repeated mistakes that you keep eagerly making and gladly sharing with your friends.
    • Profile picture of the author Billwf
      To paulgl: the Panda penalty is only applied to websites that are not Panda-trusted sites, so Panda-trusted sites can copy or syndicate all of the content that they want, without penalty. Read the article.

      To dburk: Read the article. It doesn't say that the Panda algorithm penalizes duplicate content. It says that websites with content that matches Panda-trusted sources will be penalized.
      • Profile picture of the author dburk
        Originally Posted by Billwf View Post

        To paulgl: the Panda penalty is only applied to websites that are not Panda-trusted sites, so Panda-trusted sites can copy or syndicate all of the content that they want, without penalty. Read the article.

        To dburk: Read the article. It doesn't say that the Panda algorithm penalizes duplicate content. It says that websites with content that matches Panda-trusted sources will be penalized.
        Hi Billwf,

        Sorry, but that is total rubbish.

        The one has nothing to do with the other. Duplicate content filters predate Panda by many years. The Panda Update targeted poor quality content and any website can be affected by poor quality content.
        • Profile picture of the author Billwf
          Originally Posted by dburk View Post

          Hi Billwf,

          Sorry, but that is total rubbish.

          The one has nothing to do with the other. Duplicate content filters predate Panda by many years. The Panda Update targeted poor quality content and any website can be affected by poor quality content.
          Yes, and the article points that out. The duplicate content isn't a penalty. If there is duplicate content, the affected pages will simply suffer lower rankings, but it will not affect the other pages on the site. But if you have matching content from Panda-trusted sources, then not only will the matching pages suffer, but every page on the site. Again, this is clearly stated in the article.
          • Profile picture of the author dburk
            Originally Posted by Billwf View Post

            Yes, and the article points that out. The duplicate content isn't a penalty. If there is duplicate content, the affected pages will simply suffer lower rankings, but it will not affect the other pages on the site. But if you have matching content from Panda-trusted sources, then not only will the matching pages suffer, but every page on the site. Again, this is clearly stated in the article.

            Hi Billwf,

            What I'm trying to say is that the author is conflating two unrelated things. Trust has been part of the algorithm for many years prior to Panda, the notion that a website is "Panda-Trusted" is the part that I see as rubbish. Not that there are not Trusted sites, just that it has nothing to do with Panda. When Panda hit it affected many "trusted" sites. Likewise, the Duplicate content filter pre-dates the Panda update by many years as well.

            The author himself seems to confirm that his theory is completely invalid before he proceeds to explain in detail that invalidated theory:
            I deleted the entire folder after I determined that it was the only cause of the Panda penalty. ... Nonetheless, more than 1 year later, I have not recovered.
            Which just leaves me scratching my head, wondering why I read the entire article, since nothing in it clarified why he clings to his obviously invalid theory. How can I get back the 7 minutes it took to read this rambling failure of logic?

            The bottom line is that the Panda update had a wide-ranging effect on many websites, including websites that had no on-page content with Panda issues but suffered because they relied on inbound PageRank from other websites that were hit by Panda. Just because your own website may have no poor quality content, you can still be affected indirectly by backlinks that lost PR due to Panda. People seem to forget that and come up with these baseless theories.
            • Profile picture of the author Billwf
              No, the author did not invalidate what he said. He is arguing that the Panda penalty is a timed penalty, meaning that even if you correct the problem that caused the penalty, you will continue to be penalized for a set amount of time, in the same way that the manual penalties are assessed.
              Furthermore, although trust has always been a factor that Google has considered, the Panda algorithm specifically uses the trusted sites' content to see if other websites have matching content. The major objective of the Panda penalty is to remove content that has presumably been copied from Panda-trusted sources. Since Google had no way of identifying the originators of the content, it simply assumed that the Panda-trusted sources were the authors of their content, which is not always true. It also means that other websites cannot use public domain materials, since much of that material is found on Panda-trusted websites. The Panda algorithm may even penalize sites that merely quote other sites, even though that is considered fair use and is legally permissible.
  • Profile picture of the author brettb
    Thanks Bill - yeah the article is worth a slow and careful read.

    I'll tell you straight up that my business site MUST have been penalised for duplicate content because 99% of the content on there is all over the place. I'm like the author of that post - some of my content was originally published on an ac.uk domain, and that's obviously a Panda trusted source.

    And I have pretty much conclusively proven that I can hurt my own sites by reposting my own content on them from other sites.

    I thought I was safe by waiting to unpublish my HubPages then putting them on my own sites 2-3 months later. Well what I've realised now is that because somebody else copied them, there are other versions of them still floating around the web!

    Anyway, some good news - both my DMCA requests were successful - Scribd have a template you can cut n paste to email to offenders.

    Copied content is obviously gonna hurt your traffic levels as people go to the copied versions and not your own site.
    • Profile picture of the author Billwf
      Well, that's why Google came up with the author attribute. If Google can detect the author, then it will start penalizing other websites that copy other people's content, I would think. But before they came up with the author attribute, they couldn't know who the originators of the content were, so they came up with the Panda assumption: that websites of governments, primary news organizations, Google Books, and educational institutions are the originators of their content, so the Panda penalty is applied to any other websites that have matching content with the Panda-trusted sources. And as the article states, the Panda algorithm isn't applied to Panda-trusted sources, so they can do what they want. I am sure that Scribd is not a Panda-trusted source, since that site has many copies of material from all over the web.
  • Profile picture of the author nik0
    I don't know man, sites like the BBC and NY Times have tons of duplicate content because they all report the same news to the whole wide world. So you have a point about higher authority sites, but I think you chose the wrong type of sites to illustrate it.

    Think about it: with news, there are only so many ways you can word the same story.
  • Profile picture of the author hadtic
    Google has had the ability to penalize duplicate content for a long time, and it is only a very small part of Panda. Try looking a little deeper at the 6 sites and I am sure you will find the reasons. Site speed, content, readability, content freshness and duplicate images are just some of the factors Panda takes into account. I would check the sites that have not been penalized against these factors, plus look for other things such as disclosure and disclaimer pages.
  • Profile picture of the author jlew24
    I just don't understand this... How do lyrics, jokes and quotes websites get away with passing Panda? They all have the same content - how do they manage to get away with it?
    • Profile picture of the author liswilliams
      Most of my hubs have been stolen and spread around the web. I don't bother with them anymore.
  • Profile picture of the author brettb
    Most folk are missing the point here. Go back and read the article. I mean, pore over it and *really* read it.

    The theory is that Google TRUSTS certain sites and they can't be affected by Panda. Sure, the BBC/Daily Mail/NY Times have loads of dupe content. There's only so many ways you can spin a story. I used to work for a news company, spinning stories from the original source.

    My theory is that Google have expanded their list of trusted sites, which is probably why my sites have got smashed for duplicate content. In actual fact, I wrote all my content, but over time I published it on other, larger, sites (some with highly trusted .ac.uk suffixes).
  • Profile picture of the author LiftMyRank
    I think it's a filter or something like this: if a domain has < 10,000 pages indexed and duplicate content > 90% of all content, then apply the Panda penalty...

    This would rule out all the big authority sites and penalize the smaller, spammy scraper sites... well, coming from a programming background, that's one way I would do it if I were Google...
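    That rule of thumb is easy enough to write down as code - the thresholds are purely hypothetical, sketched from the guess above, not anything Google has published:

```python
def panda_flag(pages_indexed: int, duplicate_ratio: float) -> bool:
    """Hypothetical filter from the guess above: penalize small domains
    that are mostly duplicate content. Thresholds are pure speculation."""
    return pages_indexed < 10_000 and duplicate_ratio > 0.90

# A small scraper site built almost entirely from copied snippets is flagged,
# while a big authority site with the same duplicate ratio sails through.
print(panda_flag(pages_indexed=250, duplicate_ratio=0.95))        # True
print(panda_flag(pages_indexed=2_000_000, duplicate_ratio=0.95))  # False
```

    Notice how a size cutoff like this would reproduce the pattern people complain about: big sites syndicating freely while small sites get hit.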
  • Profile picture of the author brettb
    I think really there's just a trust hierarchy - if you copy content from a higher PR site then you'll be penalised.

    Anyway, I've been successful with DMCA - I got one entire domain taken offline. It will be interesting to see if this helps me. I've also culled a fair amount of duplicate content from my own sites.

    Before I publish or republish anything I'm now also checking to make sure that my content isn't published elsewhere.

    As far as why some of my sites got penalised, well, that's really hard to tell, but the duplicate content is the only common factor - although two sites lacked a logo, so I put one on.
  • Profile picture of the author brettb
    Bump!

    OK, I'm redesigning one of my sites and making sure there's no duplicate content.

    Anyway, I have conclusive evidence of a duplicate content problem, although it's hard to prove if there's a penalty. Well I'm currently rebuilding my site so I'll soon know.

    Anyway, this search shows my site (winnershtriangle) being outranked by a Blogspot article. The dates show that the blogspot article is older! How has this happened? I guess I didn't have a sitemap, so Google found the copied content first.

    How do you know the article is a copy? Because it's copied word for word from my site, and even mentions my own software products!

    Sheesh. The DMCA notice has been filed, but I'm getting a very good idea as to why my site's been Panda'd.

    Hmm, if I find a few more of these then I'll be able to file a lawsuit against Google for being anti-competitive
  • Profile picture of the author kammy19
    It's wholly about duplicate content...
  • Profile picture of the author brettb
    Well I'm glad I've found hard evidence for my claims in the original post in this thread.

    Matt Cutts clearly states that Google uses dates to determine if content is copied, and in my case I've found a site copying my content and getting it indexed even before my original article got indexed. Yeah, in his YouTube video (which I wish I could find) he had no answer other than using DMCA to get the copy taken down.

    I've got loads of duplicate content on the site in question, so hopefully once I've removed it all then my rankings should bounce back.

    It won't explain why my other sites got so badly hit by Panda. I have a theory though that too much supplemental content is another penalty - this would catch out the article spinners and the content farms with 1000 pages about **** berries.

    One other thing I've noticed about Panda is that it's a sitewide penalty, but I guess you guys all knew this already. Before and after Panda the top 10 pages on my site are the same, it's just that they get far less traffic.

    Why does this matter to me? Because this site was making me $1000 a month in 2008, so it's worth saving!
  • Profile picture of the author Austin80ss
    Why not use the principle of "syndicated content" from trust domains?
  • Profile picture of the author brettb
    With new content, I guess I'm covered in the sitemaps era, or maybe not. Originally I thought that it was highly unlikely that my content could be copied within a day of me posting it, but my example shows that was a rubbish theory.

    Some of my content I published first on other highly trusted sites. Remember that Google claimed for years there was no duplicate content penalty. But now there is.

    Anyway, it will be interesting to see the results of this grand experiment.

    As to crappy Blogspot outranking my own site with my own content - a few more of these and I'd be able to launch an anti-trust lawsuit. Especially if the Blogs had AdSense on. I'll leave it to the big boys to fight it out though.
  • Profile picture of the author brettb
    OK, here's more meat for my theory about duplicate content killing my site.

    My friend's very popular site (2500+ daily visitors) has also been hit by Panda. Guess what? Once again he's an authority in his niche, and he's also reposted articles to many high PR sites, then reposted his own content on his site.

    What a mess!
