AI vs Human Synonyms: Black, white or grey?

by WebVyz
4 replies
  • SEO
Yukon's SERP keyword research tool points to an interesting black-grey-white hat dilemma:

When Google does synonym matching, it is obviously limited by the current state-of-the-art AI techniques for automated thesaurus construction. Webmasters who apply human intelligence to content generation sometimes compensate for deficiencies in that AI. One way is by generating "synonymous content", using spintax or other techniques to create variations of an article in order to capture relevant searches. This can also be abused by article spinners to create a lot of junk content. However, a primary legitimate reason for generating a large number of combinations of synonymous articles is to detect otherwise undetected high-traffic, long-tail keywords and phrases.
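
To make the spintax idea concrete, here is roughly what the expansion looks like (a bare-bones PHP sketch; the template is only an example, and real spinners add nesting, weighting and uniqueness checks):

```php
<?php
// Bare-bones spintax expander: each {option1|option2|...} group is
// replaced with one randomly chosen option. Nested braces, weighting,
// and uniqueness checks are left out for clarity.
function spin($text) {
    return preg_replace_callback('/\{([^{}]+)\}/', function ($match) {
        $options = explode('|', $match[1]);
        return $options[array_rand($options)];
    }, $text);
}

$template = 'Our {cheap|affordable|budget-friendly} {hotel|inn|guest house} '
          . 'is {close to|near|a short walk from} the beach.';

// Each call emits one "synonymous" variation of the same sentence.
echo spin($template), PHP_EOL;
```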

Since there is no way of notifying Google that an article appears merely for the purpose of detecting these long-tail keywords and phrases -- that it is intentionally duplicate content attempting to provide synonym matches that Google's automated thesaurus has inadequately cataloged -- how can one do such long-tail keyword research by content generation without running the risk of being Google-slapped as an article spinner?
  • IMmarcel
    WebVyz, that is a great point. The answer in my book is to make sure the content is at least 30-50% different by using different words and phrases that mean the same thing, and to make sure that what is produced would still get a good grade from an English teacher.

    In my SERP work, I have always used a thesaurus and then implemented only the appropriate words via SpinTax. It lets me go after all those long-tail keywords a potential customer *might actually use* all at once with common content (but I also spin enough of the words so that it isn't seen as duplicate content).

    Automatic spinners have never been able to produce content of sufficient quality for me to feel I was staying "white hat", which is necessary not only to survive but to thrive as the Pandas and Penguins (and who knows what is next) go rampaging through Google's indexing of my sites.

    I've found that my manual work just keeps doing better and better!
  • yukon
    I've dropped a few hints about this topic here in the SEO forum. The reason I've only dropped hints is to keep the spamfest from starting up. Most people would be blown away if they only knew what was possible with dynamic content (notice my WF title: Dynamic SEO).

    Here's the cool thing that Google does...

    Google doesn't forget; it's that simple.

    When you do this, in this exact order:
    1. Create a web page
    2. Rank the web page
    3. Edit the same web page with new content
    4. Verify that the new content has been found in Google's SERPs (also edit the page title)

    Google will then rank that one page in two separate SERP positions (for two relevant keywords), based on the old content & the new, updated content.

    Now it gets interesting...

    Now follow the same steps above, only change the page content once per week. The single page will usually snowball in the SERPs as the weeks go by, ranking for each round of relevant text edited onto the single page while keeping the old SERP positions intact.

    But wait, there's more...

    Now multiply the same strategy by 100 pages, and you have 100 pages snowballing, saturating an entire keyword theme in the SERPs.

    Keep in mind this isn't theory; it's fact, and I do this every single day. Also, there are no percentages here, but the thing you have to be careful with is your existing high-traffic pages. IMO, leave existing high-traffic pages alone; there's just too much room for error. What I'm doing here is fishing for traffic (editing pages), then building out pages/categories to target the newfound traffic/keywords, which end up being pretty much static pages (maybe a tweak here & there over time).

    I don't worry about Google slaps doing this; Google couldn't care less what the text is as long as the text always stays on the same subject. If my page is about the movie "Star Wars", it needs to be focused on the same movie (Star Wars) when Google returns to find the edited page. It's that simple.

    As a mild example, look at Wikipedia: a lot of top-ranking Wikipedia pages get edited a few times per week, and Wikipedia retains the old SERP position & picks up a new SERP position for relevant text edits done to the same Wiki page.

    The key in all of this is relevancy & sticking to the same subject during each page edit.
    • PathofLeastResistance
      100 pages? Do you do this manually or do you have a script of sorts? I read "dynamic" and think variables and loops.

      It's like you are saying don't spin 1 article 100 times and spam it out; rather, update the same article 100 times, ranking each update of course.
      • yukon
        Actually, I use PHP that pulls keywords from a list depending on the week's date. You could still do this manually on a small scale for testing; that's what I did when I first started testing.

        You could simply swap out a page title & an image caption (plain text) to test new keyword phrases; that's about as simple as I think it could get & still test for new/relevant keywords.
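
        A stripped-down sketch of that idea (the keyword list, file name & markup here are only placeholders for the example, not a real page):

        ```php
        <?php
        // Pick this week's phrase from a keyword list based on the ISO week
        // number, then drop it into the page title & an image caption.
        $keywords = [
            'vintage star wars toys',
            'star wars action figures',
            'original star wars collectibles',
            'star wars memorabilia for sale',
        ];

        $week    = (int) date('W');                     // ISO-8601 week of the year
        $keyword = $keywords[$week % count($keywords)]; // rotates once per week
        ?>
        <!DOCTYPE html>
        <html>
        <head>
          <title><?= htmlspecialchars(ucwords($keyword)) ?> | Example Store</title>
        </head>
        <body>
          <img src="toys.jpg" alt="<?= htmlspecialchars($keyword) ?>">
          <p><?= htmlspecialchars(ucwords($keyword)) ?></p>
        </body>
        </html>
        ```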

        Also, I don't do articles, so I'm not really spinning large amounts of text; I'm simply swapping out keywords. I don't do this on proven traffic keywords/pages; I leave those pages alone.
