About Search Engine Verification Crawler


The purpose of this crawl is to collect web pages for search engine verification.

Agent Name


Crawler Node IPs

Our crawler operates only from the IP addresses listed above.

Seed URL List


How to Block Our Access

There are two ways to block our crawler:

  1. Set up robots.txt
    If you place a "robots.txt" file in the top-level directory of your
    domain, you can block our crawler.

    To block our crawling, put a "robots.txt" file containing the
    following lines in the top-level directory of your domain:

    File Name : robots.txt
    User-Agent: SearchEngineVerificationCrawler
    Disallow: /

    Please allow a few days (1-2 days) for our system to pick up the change.

  2. Email us to block our crawling
    Please email us and we will update our configuration after receiving
    your message. Include your server's host name or IP address in the email.
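If you want to confirm that your robots.txt rules would deny our crawler's user agent, one way is to test them locally with Python's standard-library robots.txt parser. This is just an illustrative sketch; the sample path is a placeholder, not anything our crawler actually requests.

```python
# Sketch: check robots.txt rules against the crawler's user agent using
# Python's standard-library parser (urllib.robotparser).
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Parse the blocking rules directly instead of fetching them, for illustration:
rp.parse([
    "User-Agent: SearchEngineVerificationCrawler",
    "Disallow: /",
])

# The crawler's user agent is denied everywhere; other agents are unaffected:
print(rp.can_fetch("SearchEngineVerificationCrawler", "/any/page.html"))  # False
print(rp.can_fetch("SomeOtherBot", "/any/page.html"))  # True
```

In production you would call `rp.set_url("https://your-domain/robots.txt")` and `rp.read()` to fetch the live file instead of parsing inline lines.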


Takuya Funahashi (Yamana Laboratory, Information Science and Technology, Graduate School of Fundamental Science and Engineering, Waseda University)


If you have any questions or problems,
please send an email to "srvc@yama.info.waseda.ac.jp".
If possible, please include the following items:
  • Your Name
  • Your E-mail Address
  • Subject
  • Your Homepage URL (if you have one)