ScrapeBox Experts - Chime In

3 replies
  • SEO
Hello All,

Just bought ScrapeBox today and hope it lives up to its reputation. I bought it for a very specific purpose: to find high-PageRank targets with low OBL (outbound links).

I have the following questions:

1. Can ScrapeBox be used to run multiple Google commands at the same time?

I have approximately 3,000 "inurl:" Google commands. The list looks something like this:

inurl:siteA.com/*
inurl:siteB.com/*
inurl:siteC.com/*
inurl:siteD.com/*
and so on...

So can I feed all of these queries into the "keywords" box in the "Harvester" area (after selecting "custom footprint"), or can I only feed them in one at a time, repeating the entire process manually for all 3,000 queries?

2. As far as harvesting URLs with these 3,000 queries is concerned, is it better to use private proxies or free public proxies? Say each query harvests 500 URLs; the total would then be around 1.5 million harvested URLs. Can the entire process be completed with free public proxies, or do I need private proxies? If I need private proxies, how many would you recommend for harvesting 1.5 million URLs?

3. I also need to check the PageRank of these 1.5 million URLs, so the question of proxies comes up again. Which should I use: free public or paid private?

It would be really great if I could just push a button and have the entire URL-harvesting process complete without any intervention or supervision on my side. The same goes for checking PageRank, and I'm willing to make a small investment toward this end.

Please answer these questions in detail, keeping in mind that I'm a ScrapeBox newbie.

Thanks!
  • JSProjects
    Originally Posted by howudoin
    1. Very easily. Load up Notepad and enter the following:

    inurl:%kw%

    Save it, and use the "merge" (M) function above the keyword window. Choose the text file you just created and it will add the inurl: command before all of the keywords in your keyword harvester window.
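
    If you ever want to replicate that merge step outside ScrapeBox, here's a rough Python sketch of the same idea. The file names are just placeholders:

    # Rough sketch of what the merge step does: drop each keyword into
    # the %kw% token of the footprint. File names are placeholders.
    footprint = "inurl:%kw%"

    with open("keywords.txt") as f:
        keywords = [line.strip() for line in f if line.strip()]

    queries = [footprint.replace("%kw%", kw) for kw in keywords]

    with open("queries.txt", "w") as f:
        f.write("\n".join(queries) + "\n")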

    2. Public. Private proxies will burn out REALLY fast when used with the harvester. You'll need to find a source or method for getting a large number of public proxies. I wouldn't recommend the proxies supplied by the ScrapeBox proxy harvester; they're too unreliable.

    3. Public, and LOTS of them. 1.5 million URLs is probably at least a two-day job, especially if you're thorough like I am and run the PR checks multiple times. (You'll find that plenty of "PR N/A" results actually have PR if you run them through a PR check a second or even third time.)

    Split your list into 50-100k chunks and run a PR check 2-3 times on each list. Copy the PR N/A results, remove them, then "Paste / Add" them to the existing list. This lets you re-check them without needing to re-check ALL of the URLs, just the PR N/A ones.
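
    In case the workflow is easier to follow as code, here's a rough Python sketch of the chunk-and-recheck logic. ScrapeBox does all of this internally; the page_rank() helper below is a hypothetical stand-in for whatever PR checker you actually use:

    def chunks(items, size):
        # Yield the URL list in 50-100k slices, per the advice above.
        for i in range(0, len(items), size):
            yield items[i:i + size]

    def page_rank(url):
        # Hypothetical placeholder: wire in your real PR lookup here.
        # It should return an int, or None for a "PR N/A" result.
        return None

    def check_with_recheck(urls, passes=3):
        results = {}
        pending = list(urls)
        for _ in range(passes):
            still_na = []
            for url in pending:
                pr = page_rank(url)
                if pr is None:
                    still_na.append(url)    # only these get re-checked
                else:
                    results[url] = pr
            pending = still_na              # the list shrinks each pass
            if not pending:
                break
        return results, pending             # pending = still N/A at the end

    with open("harvested.txt") as f:
        all_urls = [line.strip() for line in f if line.strip()]

    for batch in chunks(all_urls, 50000):   # 50-100k per batch
        found, still_na = check_with_recheck(batch)

    The point is that each pass only re-checks the shrinking N/A list, so the second and third passes cost very little compared to the first.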
    • howudoin
      Originally Posted by JSProjects
      Thanks for your reply, mate... really appreciate it. Guess I need a few clarifications now:

      1. I'm not using a single keyword, so I'm not sure your inurl:%kw% method is such a good idea here. All of my keywords are different; each one is a distinct Google command, such as:

      inurl:siteA.com/*
      inurl:siteB.com/*
      inurl:siteC.com/*
      inurl:siteD.com/*
      and so on...


      So can I simply paste these 3,000 "keywords" (Google commands) into the keyword area and let it run by itself?

      2. Regarding proxies, can you suggest any good public proxy sources? If you don't want to disclose them here, please send me a PM, since I really need these.

      Thanks
      Bhupinder
      • JSProjects
        Originally Posted by howudoin
        Yes, you can import 3,000 keywords, lines of text, whatever, and use the merge function. Alternatively, you can use a tool like Add Prefix and/or Suffix into Each Text Line.
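
        If you'd rather script that prefix/suffix step yourself, here's a quick Python sketch of the same idea, again with placeholder file names:

        # Quick sketch of a prefix/suffix tool: wrap each non-empty line
        # of an input file. File names are placeholders.
        prefix, suffix = "inurl:", "/*"

        with open("domains.txt") as src, open("queries.txt", "w") as dst:
            for line in src:
                line = line.strip()
                if line:
                    dst.write(prefix + line + suffix + "\n")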

        For public proxies, just do some Googling until you find a source. There are plenty of proxy sites that post new proxies on a daily basis; you just need to find one. (I use a few different ones.)
