About Search Engine Verification Crawler
Purpose
The purpose of this crawling is to collect web pages for verifying search engines.
Crawler Node IPs
Our crawler operates only from the IP addresses listed above.
Seed URL List
Download
How to Block our access
There are two ways to block our crawling.
- Set up robots.txt : Placing a "robots.txt" file in the top directory of your domain blocks our crawling access. If you want to block our crawler, put a file named "robots.txt" containing the following text in the top directory of your domain.
  File Name : robots.txt
- Mail us to block our crawling : Please send us an e-mail asking us to stop crawling. We will modify our configuration after receiving your mail; updating our system takes a few days (1-2 days). Please include your server's host name or IP address in your mail.
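As an illustration, a robots.txt file that blocks crawling would look like the sketch below. The page does not state the crawler's actual User-agent string, so this hypothetical example uses the wildcard "*", which blocks every compliant crawler, not only ours; if you know the specific User-agent name, substitute it for the wildcard.

```
# Hypothetical example: "*" matches all compliant crawlers,
# since the crawler's User-agent string is not given on this page.
User-agent: *
Disallow: /
```

Save this file as "robots.txt" in the top directory of your domain (e.g. http://www.example.com/robots.txt) so that crawlers can find it.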
Administrator
Takuya Funahashi (Yamana Laboratory, Information Science and Technology, Graduate School of Fundamental Science and Engineering, Waseda University)
Contact
If you have any questions or problems, please send a mail to "email@example.com".
If you don't mind, please include the following items:
- Your Name
- Your E-mail Address
- (if you have) Your Homepage URL