How does semantics/indexing work with AngularJS?


10

I've always wondered about this: AngularJS is a framework that is in constant use.

But I have a question about how it works with crawlers (Googlebot, for example).

Do they actually run the JavaScript and interpret the code to get the information and display a site developed on this platform?

The thing is, with Angular the HTML theoretically has no information "yet"; first the controllers and so on need to run.
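To make the concern concrete, here is a minimal sketch (hypothetical module, controller, and field names) of what a crawler that does not execute JavaScript would see, namely the literal `{{...}}` placeholders:

```html
<!-- Hypothetical page: the {{ }} bindings carry no content until the controller runs -->
<div ng-app="myApp" ng-controller="ProductController">
  <h1>{{product.name}}</h1>
  <p>{{product.description}}</p>
</div>

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.8.2/angular.min.js"></script>
<script>
  // Only after AngularJS bootstraps and this controller executes does the page gain text
  angular.module('myApp', [])
    .controller('ProductController', function ($scope) {
      $scope.product = {
        name: 'Example product',
        description: 'Filled in by JavaScript; a crawler that does not run JS sees only the raw bindings.'
      };
    });
</script>
```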

The question is: How does semantics/indexing work with Angular?

3 answers

9


According to this post, Google's crawler renders pages that contain JavaScript and navigates through the listed states.

Interesting parts of the post (free translation):

[...] We decided to try interpreting pages by running JavaScript. It is hard to do this at scale, but we decided it is worth it. [...] In recent months, our indexing system has been rendering a large number of web pages the way an ordinary user would see them with JavaScript enabled.

If resources such as JavaScript or CSS in separate files are blocked (with robots.txt, say) so that Googlebot cannot retrieve them, our indexing system will not be able to see your site the way an ordinary user does.

We recommend allowing Googlebot to retrieve your JavaScript and CSS so that your content can be better indexed.

Recommendations for Ajax/JS can be found at this link.

If you want to serve content from Angular applications to crawlers that do not support this kind of functionality, you need to pre-render the content. Services such as Prerender.io exist for exactly this purpose; one possible setup is sketched below.
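As an example, here is a minimal sketch of wiring Prerender.io into a Node/Express server through the prerender-node middleware; the token, port, and static folder are placeholders:

```js
// Hypothetical Express server fronting an AngularJS app.
// prerender-node detects known crawler user agents and serves a
// pre-rendered snapshot instead of the empty client-side template.
var express = require('express');

var app = express();

app.use(require('prerender-node').set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));
app.use(express.static('public')); // the AngularJS application lives here

app.listen(3000);
```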

  • 2

    Cool, so it interprets my code to get results.

  • 1

    Exactly, @hiagosouza. I couldn't find similar documentation for Bing, however, so assume the MS service can't interpret dynamic content.

  • Thanks, I was afraid of building something that Google would later be unable to understand.

  • It wasn’t very clear to me. Will Crawler render the entire page first and then start reading the tag targets? Another question that has arisen is that social media crawlers behave in the same way ?

0

Crawlers (Googlebot, for example) read the page as plain text: first they validate the meta tags, then the comments, and then they strip out all the code and read the remaining text without it. The reason is to increase processing speed and to reduce errors from fields that are hidden or nodes that are removed during execution. Crawlers do not run any kind of technology (browser); they just read the file. Angular is still JavaScript like any other, so its elements are ignored. Only items relevant to SEO (optimization) are taken into account in your indexing.
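On that reading, only markup already present in the static HTML would matter. A minimal sketch of the kind of SEO-relevant tags such a text-only reader could still pick up (hypothetical values):

```html
<head>
  <title>Example product page</title>
  <meta name="description" content="A short, human-readable summary of the page.">
  <meta name="robots" content="index, follow">
</head>
```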

Part of my explanation can be found in this Google article: Understanding Web Pages Better.

To better understand this plain-text view, request the page in question with curl or Lynx, which are technologies of the kind commonly used by crawlers; see the sketch below.
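For example, with a hypothetical URL:

```sh
# Raw HTML exactly as served, before any JavaScript runs
curl -s https://example.com/

# The same page rendered as plain text
lynx -dump https://example.com/
```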

For better indexing it is recommended to create a robots.txt file and an XML sitemap.
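A minimal sketch of a robots.txt that allows crawling (including JS/CSS) and advertises the sitemap; the domain is a placeholder:

```
# Allow everything and point crawlers at the XML sitemap
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml
```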

  • Googlebot's image rendering exists to show users the items it can see; it is not used for indexing and/or for executing content (browser). As the Google site itself says, it is only a view for the user: https://support.google.com/webmasters/answer/6066467?hl=en

-1

  • I did not give the -1, but I understand the reason - the OP is asking about SEO semantics, not language.
