The robots.txt file is located in the root directory of a domain and defines whether and how a site's pages may be visited by crawlers.
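For illustration, a minimal robots.txt (reachable at, say, https://example.com/robots.txt — a hypothetical domain) might look like this:

```
User-agent: *
Disallow: /private/
```

This tells all crawlers (the `*` wildcard) not to visit anything under /private/, while leaving the rest of the site open.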
X-Robots-Tag: noindex and X-Robots-Tag: nofollow
These directives achieve the same effect as the meta robots noindex and nofollow tags; they are just implemented differently. They are sent in the HTTP response header rather than in the page's HTML, which means they can also be set via the .htaccess file (or the server configuration) to cover content dynamically. For example, a rule can be configured that applies noindex to all files ending in .doc.
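As a sketch of the .doc example above, assuming an Apache server with mod_headers enabled, an .htaccess fragment could look like this:

```
# Send "X-Robots-Tag: noindex" for every file ending in .doc
<FilesMatch "\.doc$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```

Because the header is added by the server, the rule covers non-HTML files such as Word documents, which cannot carry a meta robots tag themselves.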
Read here how to regulate access with robots.txt: Google Search Central - Introduction to robots.txt
Blocking content from the SearchmetricsBot
If you don’t want the SearchmetricsBot to crawl your website or parts of it, please use the robots.txt file. Add a rule set addressed to “User-agent: SearchmetricsBot” in your robots.txt, together with the appropriate Disallow directives.
Example:
If you don’t want the SearchmetricsBot to crawl an area of your site (e.g. ‘/hidden.html’), then enter the following rule into your robots.txt:
User-agent: SearchmetricsBot
Disallow: /hidden.html
Please note that the SearchmetricsBot will then no longer be able to crawl this page or collect data for it!
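You can verify a rule like the one above before deploying it. As a sketch, Python's standard-library `urllib.robotparser` can evaluate a robots.txt against a given user agent (the URLs below use the hypothetical domain example.com):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rule from the example above
rules = """User-agent: SearchmetricsBot
Disallow: /hidden.html
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# SearchmetricsBot is blocked from /hidden.html ...
print(parser.can_fetch("SearchmetricsBot", "https://example.com/hidden.html"))  # False

# ... but may still crawl the rest of the site
print(parser.can_fetch("SearchmetricsBot", "https://example.com/other.html"))   # True

# Other crawlers are unaffected by this rule set
print(parser.can_fetch("OtherBot", "https://example.com/hidden.html"))          # True
```

This confirms that the rule only affects the named bot and only the listed path.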