Loading external "pages" via AJAX. Will Google index them?

Some people say to use #!, others say to use the History API, and others say that Google already runs JavaScript.

I’m working on a project whose pages are loaded via AJAX into a main container.

Every link is an anchor, but what comes after the hash is really a path. My JavaScript listens for hashchange events, extracts the path by stripping the hash, and loads the content into the main container.

Here is the script; it’s very simple:

    (function ($) {

        function hashNavigate() {

            var url = window.location.href;
            // everything after the "#" is actually the path to load
            var hash = url.substring(url.indexOf('#') + 1);

            if (/#/.test(url)) { // if there is a hash (i.e. it is not the home page)

                if (!$("#mainRow").html().length) { // if mainRow is empty, just load the content

                    $("#mainRow").load(hash);
                } else {

                    $("#mainRow").html("").load(hash); // otherwise, clear the content and then load the new one
                }
            }
        }

        hashNavigate(); // run once on page load

        $(window).on("hashchange", hashNavigate); // run again on every hash change

    })(jQuery);

Will my website be indexed this way?

Note: the external documents loaded into the container do not have a complete HTML structure, only what should be injected inside it plus a script tag.

  • Have a read about the _escaped_fragment_ specification: https://developers.google.com/webmasters/ajax-crawling/docs/specification

1 answer

Even if Google does eventually execute JavaScript code, don’t count on that execution being perfect and reproducing exactly what you expect.

In fact, Google’s current documentation for its search engine clearly tells you to prepare content that relies on AJAX so that it can also be read without JS. This is good both for users who do not have JS enabled in the browser and for the crawler’s indexing. If they say this, are you going to rely on some extra, undocumented behavior they may happen to offer?

Their recommendation (and I’d say you shouldn’t follow it just because of them) is to build pages that don’t depend on JS and then add whatever is needed on top of that just to improve the usability of the page. This is known as Progressive Enhancement.

What you have to do is have accessible content that does not depend on JS, so that even the links to pages that will be assembled with AJAX are also available in the traditional form.
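To give an idea of what that looks like in practice, here is a minimal sketch of the progressive-enhancement approach combined with the History API. The markup, the /about.html and /contact.html URLs and the #mainRow selector are just placeholders mirroring the question; the point is that every link is a real URL the server can answer on its own, and JavaScript only intercepts it when available:

    <!-- the links are real URLs that the server can render by itself -->
    <nav>
        <a href="/about.html">About</a>
        <a href="/contact.html">Contact</a>
    </nav>
    <div id="mainRow"><!-- server-rendered content also lives here --></div>

    <script>
    (function ($) {

        // only enhance when the History API is available;
        // otherwise the links keep working as normal navigation
        if (!window.history || !window.history.pushState) { return; }

        function loadInto(url) {
            // fetch the target page and keep only its main container
            $("#mainRow").load(url + " #mainRow > *");
        }

        $(document).on("click", "nav a", function (e) {
            e.preventDefault();                  // cancel the full page load
            var url = this.href;
            history.pushState(null, "", url);    // update the address bar
            loadInto(url);                       // load the same URL via AJAX
        });

        $(window).on("popstate", function () {
            loadInto(location.href);             // handle the back/forward buttons
        });

    })(jQuery);
    </script>

With a setup like this, a crawler (or a user without JS) simply follows the normal links and gets complete pages, while users with JS get the AJAX behavior layered on top.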

Beyond that, there is only one way to know for sure whether your specific case is going to work: do it and see whether it gets indexed the way you expect.

Maybe someone with more knowledge about this than me will come along and assure you it will be indexed; I wouldn’t risk saying that. I may be mistaken, and I may not have fully understood the question (it doesn’t have all the details about how this code is used), but I think the code is too complex (in what it does, not in how it’s written) for Google to execute it correctly.

See also Google’s guide to AJAX crawling.
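For completeness, the #! scheme mentioned in the question works roughly like this: the crawler rewrites hashbang URLs into a special query-string form, and your server is expected to answer that form with an HTML snapshot of what the JavaScript would have produced. The page name below is just an example:

    http://www.example.com/index.html#!about
    is requested by Googlebot as:
    http://www.example.com/index.html?_escaped_fragment_=about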
