Will Scrapebox get my site de-indexed?

24 replies • SEO
I purchased Scrapebox and have been auto-commenting on blogs. If I continue this work, will Google de-index my site? I am worried, please help. Also, how can I hide from Google while using Scrapebox? :confused:
#deindexed #scrapebox #site
  • Profile picture of the author BudgetSEO
    Sooner or later they will be de-indexed mate.
    There is no escape from the long arm of G.
    Remember, you only party in the streets until the police arrest you, and the police here is G.

    Here is what one of my clients told me before starting business with us:

    Since I took up blackhat methods and started comment spamming, nothing but bad has happened.

    * I had 1 site get deindexed.
    * My IP is banned from just about everything (forums, Akismet, etc.) because these filthy proxies bleed your true IP through (yes, I always use proxies).
    * My real accounts got banned from some forums
    * I've had a lot of hate mail and return hate blog comments for comment spamming
    * My IP is banned from squidoo
    * Many websites have lost SERP positions
    * I fear that Google will kill all spammed sites


    Before I began blackhat automation techniques, I had awesomely ranked sites. Sure, it took a while to get them ranked (usually more than 6 months) but the ranks lasted without much dancing.

    With that said, I give up spam methods and especially Scrapebox.
    Signature
    Let me secure your WordPress website for the price of a small pizza
    Weather Balloons Election Supplies
    If you need the "cheapest" quote, don't waste your time contacting me.
    • Profile picture of the author dburk
      Originally Posted by BudgetSEO View Post

      Sooner or later they will be de-indexed mate.
      There is no escape from the long arm of G.
      Remember, you only party in the streets until the police arrest you, and the police here is G.

      Here is what one of my clients told me before starting business with us:
      Wow, that's great news.

      If what you say is true then all I need to do to outrank my toughest competitors is to buy Scrapebox and target their websites. My competitors will have "no escape from the long arm of G". When all of my competitors are de-indexed my web page will be ranked #1 by default. Who knew that search engine domination was so easy. :rolleyes:
      • Profile picture of the author Mike Anthony
        Originally Posted by dburk View Post

        Wow, that's great news.

        If what you say is true then all I need to do to outrank my toughest competitors is to buy Scrapebox and target their websites. My competitors will have "no escape from the long arm of G". When all of my competitors are de-indexed my web page will be ranked #1 by default. Who knew that search engine domination was so easy. :rolleyes:
        That argument is as old as the hills, and wrong, because it sidesteps the fact that Google does act on sites reported to it outside the algorithm. So no, it won't hurt you automatically, but it can result in a penalty if the site is manually reviewed, provided that's all you have. But practically, no, you can't bring down your competitor's site, because (as everyone who brings up that rationale forgets) your competitor is already ranking (or they wouldn't be competing), so they have links of their own, not just the ones you place.
        • Profile picture of the author dburk
          Originally Posted by Mike Anthony View Post

          That argument is as old as the hills, and wrong, because it sidesteps the fact that Google does act on sites reported to it outside the algorithm. So no, it won't hurt you automatically, but it can result in a penalty if the site is manually reviewed, provided that's all you have. But practically, no, you can't bring down your competitor's site, because (as everyone who brings up that rationale forgets) your competitor is already ranking (or they wouldn't be competing), so they have links of their own, not just the ones you place.
          So run Scrapebox and then report their offense to Google, right?

          Just kidding. Still, I have yet to see a site de-indexed based on backlinks.
  • Profile picture of the author boosters
    My domain is 4 years old, so will I still get caught, or does this only happen to new domains?
    • Profile picture of the author Tashi Mortier
      Just don't overdo it, and make sure to use proxies. Better to start out slow and then create more backlinks over time, so that it looks more natural to Google.

      Seeing results from SEO might take some time. By the way you ask your questions, I can tell that you are pretty new to the subject. You should know that Scrapebox isn't the only thing you can do to increase your site's ranking. I'm sorry, you haven't found the one and only magic button.

      My recommendation is that you take some time and browse around in this forum to get more knowledge about the subject.
      Signature

      Want to read my personal blog? Tashi Mortier

  • Profile picture of the author ann1986
    It's all about how you use the tools. You must know how to use them effectively to get away with it.

    Edit:
    Scrapebox can also be used in a white-hat way. Example: find a high-PR blog related to your niche and comment manually. Leave a relevant comment.
    • Profile picture of the author dburk
      Originally Posted by ann1986 View Post

      It's all about how you use the tools. You must know how to use them effectively to get away with it.

      Edit:
      Scrapebox can also be used in a white-hat way. Example: find a high-PR blog related to your niche and comment manually. Leave a relevant comment.
      Really? :confused:

      Virtually everything useful Scrapebox does violates the TOS of one or more websites. How could that ever be considered White Hat?
      • Profile picture of the author Mike Anthony
        Originally Posted by dburk View Post

        Really? :confused:

        Virtually everything useful Scrapebox does violates the TOS of one or more websites. How could that ever be considered White Hat?
        Scrapebox can be used just for its namesake: scraping. So yes, you can use it for white hat. I use scrapers for that. I am not looking for sites that allow me to spam, but rather for opportunities with sites that allow and encourage link building (guest blogging, directory listings, syndication opportunities, etc.).

        Unfortunately, the gist of what you say is right for most people on this forum. It seems to be sold for the primary purpose of comment spam.
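The white-hat use Mike describes, using a scraper only to find sites that invite link building, boils down to searching with narrow "footprint" phrases. A minimal sketch of that idea in Python; the footprint phrases and the `build_queries` helper are illustrative assumptions, not part of any real tool's API:

```python
# Common search "footprints" that indicate a site invites contributions.
# These phrases are examples, not an authoritative list.
WHITE_HAT_FOOTPRINTS = [
    '"write for us"',             # guest-blogging invitations
    '"guest post guidelines"',
    '"submit your site"',         # directory listings
    '"syndicate this article"',   # syndication opportunities
]

def build_queries(niche: str) -> list[str]:
    """Combine a niche keyword with each footprint to produce narrow
    search queries that return a few hundred relevant results instead
    of tens of thousands of generic pages."""
    return [f'"{niche}" {fp}' for fp in WHITE_HAT_FOOTPRINTS]

if __name__ == "__main__":
    for query in build_queries("weather balloons"):
        print(query)
```

The point of the footprint approach is precision: the tighter the query, the fewer pages you need to fetch, and the less the activity resembles mass scraping.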
        • Profile picture of the author dburk
          Originally Posted by Mike Anthony View Post

          Scrapebox can be used just for its namesake: scraping. So yes, you can use it for white hat. I use scrapers for that. I am not looking for sites that allow me to spam, but rather for opportunities with sites that allow and encourage link building (guest blogging, directory listings, syndication opportunities, etc.).

          Unfortunately, the gist of what you say is right for most people on this forum. It seems to be sold for the primary purpose of comment spam.
          Hi Mike,

          So, you are saying you are able to use the scraper without making automated queries to search engines, right? In that particular use, I would agree. But correct me if I'm wrong, the particular use in your example does in fact use automated queries to search engines in violation of their TOS, right?
          • Profile picture of the author Mike Anthony
            Originally Posted by dburk View Post

            Hi Mike,

            So, you are saying you are able to use the scraper without making automated queries to search engines, right? In that particular use, I would agree. But correct me if I'm wrong, the particular use in your example does in fact use automated queries to search engines in violation of their TOS, right?
            No, dburk. Obviously you are using the search engine, but to qualify that as black hat is silly. It is still used for white-hat purposes, and you are therefore still wrong that the only thing it can be used for is black-hat SEO. When I write to a blog owner I have found and do an article for his site, that is not black hat.

            Now, the matter of Google's TOS is another matter entirely, but black hat has a definition and you can't expand it as you wish.
            • Profile picture of the author dburk
              Originally Posted by Mike Anthony View Post

              "When I write a blog owner I have found and do an article for his site that is not black hat."
              I agree, there is nothing wrong with that, but that is something you are doing outside the scope of what Scrapebox does. The part that Scrapebox does is the activity that violates the TOS of search engines.

              Originally Posted by Mike Anthony View Post

              Now, the matter of Google's TOS is another matter entirely, but black hat has a definition and you can't expand it as you wish.
              I guess, if you parse it that way, I don't. I wonder if Google sees it that way?
              • Profile picture of the author Mike Anthony
                Originally Posted by dburk View Post

                I agree, there is nothing wrong with that, but that is something you are doing outside the scope of what Scrapebox does.
                Nope. It hooks me up with the site to ask for the link. It's therefore part of a process that is white hat. You are merely extending black hat to cover scraping, and that's not the definition. That's what I was responding to.

                But on the SEPARATE ISSUE of Google, there are some notable points. You assume:

                1) Google has either ethical or legal grounds for a TOS violation.

                Not seeing it. Google is the ULTIMATE site scraper. It hits my sites daily and never asks for my permission or reads my TOS, and it does that to billions of sites billions of times a day, making billions of dollars scraping the entire web. And yes, I have pages I would prefer not to have indexed, and Google forces me to use THEIR rules so that they do not violate my wishes, and they use up my bandwidth doing it too.

                2) That any forward-facing company can legally tell people how to view their site.

                People can download my entire site for offline viewing. It only becomes an issue if they use up too many resources (more than they would viewing it online manually). That's part of what goes into making a site public. Now, in a true TOS violation you require the user to AGREE to certain conditions BEFORE using certain resources. No one agrees to anything by doing a search and is never required anywhere to agree to anything. Google doesn't even provide notice of any such agreement on their home page, and only us geeks know that they actually have one.

                3) Google doesn't already allow access to their data to the public.

                Google is well aware of, and allows, all kinds of SEO and PPC tools that give access to its data in various forms. They even PROVIDE TOOLS for you to do so, and the tools are free. I can even set my searches to 100 results and automatically take screenshots of each page - again, the realities of making anything truly public.

                All I am doing is enhancing my searches by using a scraper, and the way I do it there is no mass hitting of Google. I'm not looking to find tens of thousands of "powered by vBulletin" links. My strategy with a scraper is to be as detailed and specific as possible to get exactly the sites I want (hopefully with on-page PR, which, as you know, rules out most of what people search for with Scrapebox).

                All of this is no great defense of Scrapebox, because as scrapers go it's entry-level and limited. I conceded earlier that what it is really marketed for is spam and more spam, but it's not correct to say it can only be used for that. I've grown past that level of data mining, though. I want software with intelligence that follows links, not even primarily on Google, and evaluates them based on what I am looking for. Scrapebox is a toy as far as that is concerned.
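The "ethical scraping" norms both posters gesture at have a concrete mechanism: robots.txt, the same opt-out Google honors on publishers' sites. A minimal sketch of checking it before fetching, using only Python's standard library; the `allowed` helper and the example robots.txt content are illustrative assumptions:

```python
import urllib.robotparser

def allowed(robots_txt: str, user_agent: str, path: str) -> bool:
    """Parse robots.txt text and report whether user_agent may fetch path."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, path)

# A made-up robots.txt for illustration: everything is crawlable
# except /private/, and crawlers are asked to wait 10s between hits.
EXAMPLE_ROBOTS = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

if __name__ == "__main__":
    print(allowed(EXAMPLE_ROBOTS, "MyScraper", "/blog/post-1"))   # True
    print(allowed(EXAMPLE_ROBOTS, "MyScraper", "/private/page"))  # False
```

A scraper that checks this file and honors the crawl delay is doing what Googlebot itself does; one that ignores it and hammers a site through rotating proxies is on the other side of the line the thread is arguing about.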
                • Profile picture of the author dburk
                  Originally Posted by Mike Anthony View Post

                  Nope. It hooks me up with the site to ask for the link. It's therefore part of a process that is white hat. You are merely extending black hat to cover scraping, and that's not the definition. That's what I was responding to.

                  But on the SEPARATE ISSUE of Google, there are some notable points. You assume:

                  1) Google has either ethical or legal grounds for a TOS violation.

                  Not seeing it. Google is the ULTIMATE site scraper. It hits my sites daily and never asks for my permission or reads my TOS, and it does that to billions of sites billions of times a day, making billions of dollars scraping the entire web. And yes, I have pages I would prefer not to have indexed, and Google forces me to use THEIR rules so that they do not violate my wishes, and they use up my bandwidth doing it too.

                  2) That any forward-facing company can legally tell people how to view their site.

                  People can download my entire site for offline viewing. It only becomes an issue if they use up too many resources (more than they would viewing it online manually). That's part of what goes into making a site public. Now, in a true TOS violation you require the user to AGREE to certain conditions BEFORE using certain resources. No one agrees to anything by doing a search and is never required anywhere to agree to anything. Google doesn't even provide notice of any such agreement on their home page, and only us geeks know that they actually have one.

                  3) Google doesn't already allow access to their data to the public.

                  Google is well aware of, and allows, all kinds of SEO and PPC tools that give access to its data in various forms. They even PROVIDE TOOLS for you to do so, and the tools are free. I can even set my searches to 100 results and automatically take screenshots of each page - again, the realities of making anything truly public.

                  All I am doing is enhancing my searches by using a scraper, and the way I do it there is no mass hitting of Google. I'm not looking to find tens of thousands of "powered by vBulletin" links. My strategy with a scraper is to be as detailed and specific as possible to get exactly the sites I want (hopefully with on-page PR, which, as you know, rules out most of what people search for with Scrapebox).

                  All of this is no great defense of Scrapebox, because as scrapers go it's entry-level and limited. I conceded earlier that what it is really marketed for is spam and more spam, but it's not correct to say it can only be used for that. I've grown past that level of data mining, though. I want software with intelligence that follows links, not even primarily on Google, and evaluates them based on what I am looking for. Scrapebox is a toy as far as that is concerned.
                  Hi Mike,

                  I'm sorry I didn't make my position clear. I have absolutely no problem with scraping; I do not consider scraping black hat at all. I only meant to say that violating the TOS of a website is not exactly white hat.

                  It seems Google has no problem with scraping; they build websites based on scraped content, as well as ranking websites with scraped content in their search index. It's not the act of scraping that violates their terms; it is the excessive automated queries that have the potential to shut down the service when abused.

                  Google also follows industry standards for ethical scraping practices, they only scrape and publish small snippets and always provide attribution to the source.

                  Furthermore, Google seems to have no problem with scraping from their websites as long as you are not abusing their service with excessive automated queries and provide proper attribution.

                  It's the excessive automated queries to services that expressly bar such activity that are the common abusive technique, and they make most of what SB does cross what I consider a very bright ethical line. Otherwise, why would you need so many proxies, and why are those proxies so consistently blocked after use by SB?

                  You seemed to be rationalizing this abuse by saying that you follow white-hat SEO methods with data you acquired through arguably unethical practices. I'm not saying everything you do is black hat, I'm just saying that much of what SB does crosses some sort of ethical boundary.

                  To use an analogy, it's a little like saying I use the money from bank robberies to feed the poor, therefore when I rob a bank it is completely ethical.

                  Finally, let me say that I do believe there are some things you can use SB for that are completely ethical; it's just that nearly all of its most useful tools cross ethical lines and are therefore not considered white hat, in my opinion.
                  • Profile picture of the author Mike Anthony
                    Originally Posted by dburk View Post


                    You seemed to be rationalizing this abuse by saying that you follow white-hat SEO methods with data you acquired through arguably unethical practices.
                    Argue with yourself, because you seem to be saying you don't know how to read. I don't use proxies with my scrapers; I don't have to. As I said point blank, I use them to pinpoint very select results. To me, the search terms should be so well targeted that I get back a few hundred results, which I could get manually using the normal search function; scrapers are just an easy way to save and record the results. So, according to your own guidelines on scraping (should you care to actually read your own post and then what I said), I have nothing to defend or rationalize.

                    Scrapers, including Scrapebox, CAN be used ethically and with no hint of black hat. Meanwhile, you are still trying to redefine what black hat is (and failing miserably).

                    In case you still don't understand: read your whole long-winded last response and realize it pretty much sums up all the ways that I do and don't use scrapers. No proxies, no posting, no slamming Google mercilessly.

                    We are only in this conversation because you made a blanket statement that using them is all black hat, which is totally incorrect. Do most people abuse this software? For the third time, yes. Does the programmer include features made for spam? Yes. Does everyone using it have to use it unethically, or use those features? NO. It can be used, and I do use scrapers, strictly for white-hat purposes. Truth be told, for the second time, I go beyond what Scrapebox can do and scrape far more from individual sites to find opportunities than I ever do from Google.

                    And incidentally, you are incorrect: Google's crawler goes through a great deal of page content that it does not display. The snippets are just for summary purposes. Crawlers can devour multiple whole pages multiple times a week. If they don't, it's not because Google is showing restraint; it's because they don't view the site as worthwhile enough for deep indexing.

                    Please leave the arguments about what is unethical until you understand the various ways people can use and not use a piece of software, and certainly leave the proclamations about who is defending unethical practices until you learn how to read.

                    There are a lot of people who could raise serious ethical concerns about those who cybersquat domain names and use highly dubious "domain appraisals" to ratchet up the price as well.
          • Profile picture of the author debra
            Originally Posted by dburk View Post

            Hi Mike,

            So, you are saying you are able to use the scraper without making automated queries to search engines, right? In that particular use, I would agree. But correct me if I'm wrong, the particular use in your example does in fact use automated queries to search engines in violation of their TOS, right?
            Are you saying that "automated queries" are black-hat techniques and totally against TOS of most all sites?

            If that were the case, then Google Labs and thier many API's, would then appear to condon and promote the black-hat technique itself. Along with most all other sites.
            • Profile picture of the author dburk
              Originally Posted by debra View Post

              Are you saying that "automated queries" are black-hat techniques and totally against TOS of most all sites?
              Nope. I am saying that "automated queries" which violate TOS are not white-hat techniques.

              Originally Posted by debra View Post

              If that were the case, then Google Labs and their many APIs would appear to condone and promote the black-hat technique itself, along with most other sites.
              Since that is not exactly what I said, this is not the same case.

              You are referring to applications using licensed API keys. Since these are licensed applications, they are usually not in violation of TOS, and therefore not black hat.
  • Profile picture of the author simonbuzz
    Banned
    Build backlinks naturally...you will be fine...
    • Profile picture of the author Melvolio
      Originally Posted by simonbuzz View Post

      Build backlinks naturally...you will be fine...
      Could you clarify this please?

      How does one build links naturally?
  • Profile picture of the author HowToMakeAWebsite
    Yeah, there's no reason why you can't use Scrapebox to manually build links. Use it wisely, people, and it will work.
  • Profile picture of the author User-Name
    A Google search for scrapebox:
    About 2,480,000 results (0.15 seconds)
    Google doesn't care.
    • Profile picture of the author paulgl
      Originally Posted by User-Name View Post

      A Google search for scrapebox:
      About 2,480,000 results (0.15 seconds)
      Google doesn't care.
      That has nothing to do with anything. Do a search for black hat and
      see how many results you get.

      Paul
      Signature

      If you were disappointed in your results today, lower your standards tomorrow.

  • Profile picture of the author JSProjects
    Originally Posted by boosters View Post

    I purchased Scrapebox and have been auto-commenting on blogs. If I continue this work, will Google de-index my site? I am worried, please help. Also, how can I hide from Google while using Scrapebox? :confused:
    No, not as long as you're smart. Just don't blast thousands of sites one day and zero the next. Instead, spread it out over the course of a week or more. Additionally, it's important to incorporate other link-building methods into your SEO efforts.
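The "spread it out" advice above is just an even drip schedule. A minimal sketch of the arithmetic; the `drip_schedule` helper and its numbers are illustrative, not any tool's real feature:

```python
def drip_schedule(total_links: int, days: int) -> list[int]:
    """Split total_links across days as evenly as possible, so no single
    day spikes, e.g. 1000 links over 7 days ->
    [143, 143, 143, 143, 143, 143, 142]."""
    base, remainder = divmod(total_links, days)
    # The first `remainder` days carry one extra link each.
    return [base + (1 if day < remainder else 0) for day in range(days)]

if __name__ == "__main__":
    print(drip_schedule(1000, 7))
```

The invariant worth checking is that the total is preserved and no two days differ by more than one link, which is what keeps the velocity looking steady rather than bursty.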
  • Profile picture of the author BenoitT
    Originally Posted by boosters View Post

    I purchased scrapebox and doing auto commenting on blog. If i use continue this work will google de-indexed by site. I am worried, please help and also how should i hide from google through scrapebox. ?:confused:
    If you use it to spam comments all day, you will be banned, of course. However, it is very useful in other ways. I personally use it to generate a list of high-PR pages to comment on, and I outsource the commenting.
    Signature

    Benoit Tremblay
