Need help with robots.txt file inclusion

by benthomas642 4 replies
Hi all, I just created a robots.txt file for my website
and uploaded it to the root directory on the web server.
These are the rules I used in the robots.txt file:
User-agent: *
Disallow:
Disallow: /emails/

So I added the rules above and uploaded the file to the web server. But Google Search Console is still showing me the old rules, not the new ones I uploaded. Can anyone tell me why Search Console doesn't show the updated file? Does Google Search Console just take some time to refresh the data, or are the rules above wrong?

Tell me why, or suggest what I should do.
Thanks a lot.
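Before waiting on Search Console, the rules can be sanity-checked locally with Python's standard-library urllib.robotparser. This is just a sketch: example.com stands in for the real domain, and the empty `Disallow:` line is omitted here because Python's order-based parser treats it as an allow-all rule that would shadow the `/emails/` rule below it.

```python
from urllib import robotparser

# Local sanity check of the rules from the post. example.com is a
# placeholder for the real domain; in practice the live file would be
# loaded with rp.set_url("https://example.com/robots.txt"); rp.read().
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /emails/",
])

print(rp.can_fetch("*", "https://example.com/"))               # allowed
print(rp.can_fetch("*", "https://example.com/emails/a.html"))  # blocked
```

If the live file fetched from the server still shows the old rules, the problem is caching or an upload to the wrong directory, not the syntax.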
#search engine optimization #file #inclusion #robotstxt
  • Profile picture of the author TriState Technology
    Banned
    [DELETED]
    • Profile picture of the author benthomas642
Thanks, I already used the same approach, and it's working.
  • Profile picture of the author GoClickOn
The simplest robots.txt file uses two keywords, User-agent and Disallow. A user-agent is a search engine robot; most user-agents are listed in the Web Robots Database. Disallow is a command that tells the user-agent not to access a particular URL. Conversely, to give Google access to a URL that sits in a child directory of a disallowed parent directory, you can use a third keyword, Allow.
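A quick way to see Allow carving a child path out of a disallowed parent, again with Python's standard-library urllib.robotparser. The paths are made up for illustration; note that this parser applies rules in order, so the more specific Allow line is placed first.

```python
from urllib import robotparser

# Hypothetical site layout: /private/ is blocked for all robots,
# but one child page inside it is explicitly allowed.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /private/report.html",
    "Disallow: /private/",
])

# Googlebot has no dedicated group here, so it falls back to the * group.
print(rp.can_fetch("Googlebot", "https://example.com/private/report.html"))  # allowed
print(rp.can_fetch("Googlebot", "https://example.com/private/secret.html"))  # blocked
```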
  • Profile picture of the author paulgl
How can Google crawl your emails? Just asking....

    99% of the time, the best robots.txt is blank.

    Paul
    Signature

    If you were disappointed in your results today, lower your standards tomorrow.

    • Profile picture of the author benthomas642
I don't think a blank file is a good idea. Whether you use a blank file or no file at all, it means search engines will crawl all of your site's links, including those that shouldn't be reachable from a search engine.
When you type site:www.domainname.com into Google, you can see that all your links get crawled, including ones that don't need to appear in a search engine at all.