There are several techniques to solve this, each with its own advantages and disadvantages. From your description, using robots.txt is enough: it is a file that tells search engines that you do not want that content to be indexed by them.
The presence of this file does not guarantee anything, but the best-known search engines respect it. If you need guarantees, you will have to use a real protection mechanism, requiring at least basic authentication before anyone can access the content.
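As a minimal sketch, a robots.txt placed at the root of the site that asks every crawler to stay away from the whole site would look like this (adjust the rule to your case):

User-agent: *
Disallow: /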
It is also possible to use the meta tag shown in the other answer, but that takes more work: you have to put it on every page, and if one day you need to change this across the website, you have to change every file. With robots.txt you can say, in one central place, which pages are affected. That is much better.
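For example, to block only specific areas while leaving the rest of the site indexable, robots.txt can list just those paths (the paths below are hypothetical):

User-agent: *
Disallow: /admin/
Disallow: /drafts/
Disallow: /private-page.html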
In some cases the meta tag may be better, or even the only way to do it, but that is rarely the case; treat it as a secondary solution. If you do use it, it is better to do it like this:
<meta name="robots" content="noindex, nofollow">
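To be clear about placement, the tag goes inside the <head> of each page you want to hide; a minimal hypothetical page would look like this:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Page that should not be indexed</title>
    <!-- tells crawlers not to index this page nor follow its links -->
    <meta name="robots" content="noindex, nofollow">
</head>
<body>
    <p>Content hidden from search engines.</p>
</body>
</html>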
It is also possible to control this on the HTTP server, although the cases where that is the most interesting option are rare. You can send this element in the response header:
X-Robots-Tag: noindex
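On Apache, for example, the header can be added with mod_headers in the server configuration or in an .htaccess file; a sketch, assuming mod_headers is available:

# requires mod_headers to be enabled
<IfModule mod_headers.c>
    # ask crawlers not to index anything served from here nor follow its links
    Header set X-Robots-Tag "noindex, nofollow"
</IfModule>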
Wouldn’t it be better to use a local Apache server? That way only you can access the site, which gives you more security.
– regmoraes
I saw that you seem a little lost about accepting an answer: you can only accept one of them. Is this really the one you want to accept? You can vote on all of them, though. See the [tour].
– Maniero