About Search Engine Verification Crawler
Purpose
The purpose of this crawling is to collect web pages for verifying search engines.
Agent Name
SearchEngineVerificationCrawler
Crawler Node IPs
133.9.84.80
133.9.84.81
133.9.84.82
133.9.84.83
133.9.84.84
133.9.84.85
133.9.84.86
133.9.84.87
133.9.84.88
133.9.84.89
133.9.84.90
133.9.84.91
133.9.84.92
133.9.84.93
133.9.84.94
133.9.84.95
133.9.84.96
133.9.84.97
133.9.84.98
133.9.84.99
133.9.84.100
Our crawler operates only from the IP addresses listed above.
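If you want to confirm whether a request really came from our crawler, one way is to check the client IP against the list above. The following is a minimal sketch in Python (the function name and usage are illustrative, not part of our software):

```python
import ipaddress

# Node IPs from the list above: 133.9.84.80 through 133.9.84.100.
CRAWLER_IPS = {f"133.9.84.{host}" for host in range(80, 101)}

def is_verification_crawler(client_ip: str) -> bool:
    """Return True if the request comes from one of the crawler's node IPs."""
    try:
        # Reject malformed addresses instead of raising.
        addr = str(ipaddress.ip_address(client_ip))
    except ValueError:
        return False
    return addr in CRAWLER_IPS

print(is_verification_crawler("133.9.84.85"))   # True
print(is_verification_crawler("133.9.84.120"))  # False
```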
Seed URL List
Download
How to Block our access
There are two ways to block crawling.
- Set up robots.txt
  If you put a "robots.txt" file in the top directory of your domain, you can block our crawling access.
- Mail us
  Please mail us to stop our crawling.
When you want to block our crawling, please put a "robots.txt" file containing the following text in the top directory of your domain.
File Name : robots.txt
User-Agent: SearchEngineVerificationCrawler
Disallow: /
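You can check that this rule behaves as intended before deploying it. The following sketch uses Python's standard `urllib.robotparser` to parse the file contents shown above (the "SomeOtherBot" agent name is just an illustrative placeholder):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt contents shown above.
ROBOTS_TXT = """\
User-Agent: SearchEngineVerificationCrawler
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Our crawler is denied everywhere on the site...
print(parser.can_fetch("SearchEngineVerificationCrawler", "/any/page.html"))  # False
# ...while agents not named by the rule are unaffected.
print(parser.can_fetch("SomeOtherBot", "/any/page.html"))  # True
```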
We need a few days (1-2 days) to update our system.
We will modify our configuration after receiving your mail.
Please include your server's host name or IP address in your mail.
Administrator
Takuya Funahashi (Yamana Laboratory, Information Science and Technology, Graduate School of Fundamental Science and Engineering, Waseda University)
Contact
If you have any questions or problems, please send a mail to "srvc@yama.info.waseda.ac.jp".
If you don't mind, please include the following items:
- Your Name
- Your E-mail Address
- Subject
- Your Homepage URL (if you have one)