According to this answer on Stack Overflow, it is possible to use robots.txt to prevent search engines from indexing pages that the webmaster does not want indexed.
How is that possible? Through a custom domain.
However, if the purpose of robots.txt is, for example, to delimit a private area of the site, what is the point of trying to "hide" any content with robots.txt if everything can still be viewed freely on the GitHub platform?
For the content to be effectively hidden, would it then be necessary to pay for GitHub and use the private repositories feature?
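For reference, a robots.txt that tries to keep a section out of search results looks like the sketch below (the `/private/` path is illustrative, not from the question). Note that robots.txt is purely advisory: compliant crawlers skip the listed paths, but the files themselves remain publicly accessible.

```
# Ask all crawlers not to crawl a hypothetical /private/ area.
# This does NOT hide the files; anyone with the URL can still open them.
User-agent: *
Disallow: /private/
```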
Jonathas, I recommend reading the tag descriptions before adding them to your question; this question is unrelated to "git", and the "search" tag is about algorithms.
– Guilherme Nascimento
Indeed, I apologize for what happened; I'll pay more attention, thank you for the warning. Can you think of any other tag that fits the question? I try to add as many tags as possible to gain visibility, so I end up making these mistakes.
– Jonathas B. C.
The bots will still index the "sources" via github.com; they just won't index the GitHub Pages site. So if you're using GitHub Pages for your own website, you'll be blocking some bots there. For github.com itself, what gets indexed is things like the files and "sources" of the repository. I'm not sure how to explain it well, but that's the basic difference.
– Guilherme Nascimento