Does React affect SEO?

Viewed 2,034 times

9

Since the HTML is generated via JavaScript and there is no useful HTML until the page has loaded, can SEO be affected if I build a 100% client-rendered web application in React, a so-called single-page application (SPA)?

Also, when we fetch data from an external API and build the HTML from it, will the search engine wait for those API calls to finish?
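To make the scenario in the question concrete, this is roughly what a crawler receives on the first request to a client-rendered SPA (a minimal sketch; the file names and titles are illustrative):

```javascript
// Initial HTML shell served by a typical SPA: no content, only a mount
// point and a script tag. A crawler that does not execute JavaScript
// sees an essentially empty page.
const spaShell = `<!DOCTYPE html>
<html>
  <head><title>My Store</title></head>
  <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>`;

// The visible content (e.g. a product list) only exists after the bundle
// runs in the browser and React mounts the application into #root.
const hasContentBeforeJs = spaShell.includes('<li>');
console.log(hasContentBeforeJs); // false: nothing indexable yet
```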

  • 1

Good read: https://goomore.com/blog/seo-vs-react-crawlers-mais-intelligentsia/

  • 3

If the answers, links, and opinions out there were correct, SSR (Server-Side Rendering) would never have appeared. Despite the word-of-mouth advertising (text to text, blog to blog, YT to YT) praising it, what goes unsaid is that it was created to solve a problem the devs themselves created: first create the problem, then create something extra to solve it, and tie the two together. People don't understand the basics of HTTP, start working with the Web, and create bad things and patches (like SSR). Yes, React, Angular, and company can affect SEO if you don't know the minimum (and most don't).

EVERYTHING can affect SEO, because machine intelligence works in the opposite direction from SEOs who simply want ranking. Look at how much Google says in its policies about page ranking, precisely based on response times from the first load of the site. On that point React is not the best option; however, current engines are able to receive all the content and wait for the DOM response to read the page, and can use dynamically generated content, as long as it is accessible from the browser.

4 answers

1

Although Google states that it can already crawl single-page application (SPA) sites, we have no reports on whether the indexing speed in the SERP is the same as when we deliver the HTML with the content already available.

What is the disadvantage of an SPA compared to SSR?

The downside of the SPA is that it initially delivers an empty HTML document, which forces a crawler to run JavaScript before seeing any content. SSR comes out ahead because all the important content is available in the first HTML response.
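The difference can be sketched with a hand-rolled server render (plain Node, no framework, with a made-up product list; real SSR setups like Next.js do this for you): with SSR, the first HTML response already contains the content a crawler needs.

```javascript
// Minimal server-side rendering sketch: the server builds the final
// HTML from data *before* responding, so the very first response is
// already indexable, with no JavaScript execution required.
function renderProductPage(products) {
  const items = products.map(p => `<li>${p.name}</li>`).join('');
  return `<!DOCTYPE html>
<html>
  <head><title>Products</title></head>
  <body>
    <ul>${items}</ul>
  </body>
</html>`;
}

const html = renderProductPage([{ name: 'Keyboard' }, { name: 'Mouse' }]);
console.log(html.includes('<li>Keyboard</li>')); // true: content in the first HTML
```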

Experiments?

A blog created an experiment using Next.js for SSR to generate a version of the same site with pre-rendered content.

The questions the experiment set out to answer were:

  • Will both sites be indexed?
  • Will all sub-pages be indexed?
  • Which approach will perform best?

According to the tested (open-source) project, the test took place in March 2020.

Keep in mind that some time has passed since the test took place, so the result may be different nowadays; it is quite possible that Google and other search engines already handle this better in their analysis.

  • 3

SSR, the solution created by the same people who created the problems. Because in this "modern" world what matters is complicating things to seem better, to appear more efficient, to cost more, and the expensive part isn't the dev's salary but the server costs, which increase significantly. Even though the old ways were very well done and worked even better (of course, that depended only on knowledge), what counts today is being "modern". The problem is unprepared people who wanted (and want) to mix desktop with Web entering the field. Today rockets are required for municipal trips.

  • 1

Just for the record, this is not a criticism of the answer or of you personally. These are just some things that I think should be pointed out. Most of these concepts, ideas, libs, and frameworks are exaggerations that are born, and people adopt them thinking they have found the holy grail and propagate it all as absolute truth.

  • 3

I feel obliged to point out that the term SSR, as it is used today, is a sign of patching and of the immaturity of the current "web generation". Every static page is already "rendered on the server" by nature, as is every PHP page, ASP page, or page in any language that runs on the server. The problem was born with an excess of JS where it wasn't needed, with no vision of the future; then, when things went wrong, a patch was applied. (I swapped ASP for PHP in 1999 and never had any indexing problem, with zero effort. The trick was simply not to chase fashions.) The real solution is not to use complexity where you don't need it. PS: this is not a criticism of the answer!

-1

A post was published explaining this, which says that if you don't prevent Googlebot from crawling your JavaScript or CSS files, it will be able to render and understand your web pages the way modern browsers do.

So if you make these files accessible to Googlebot, your application will not be affected.

But remember that it is essential to follow all SEO good practices, such as the meta tags of each page, a sitemap, and the like...

-3

An application rendered entirely in React can considerably affect SEO; for that there is Next.js (Server-Side Rendering), a React framework that renders all the HTML and CSS on the back end.

-7

The basic thing about HTTP is that it is a protocol for text exchange, with links as nodes. HTTP works as a request-response protocol in the client-server computing model; on that premise, yes, using something fully rendered on the client hurts the first guideline, which is the response to the request. The HTTP/1.0 version was developed between 1992 and 1996 to meet the need to download more than just text. With this version, the protocol started to transfer messages of MIME type (Multipurpose Internet Mail Extensions) and new request methods were implemented. Later, in the version of the protocol described in RFC 2616, a set of additional features was developed, such as the use of persistent connections, the use of proxy servers, and new request methods, among others. HTTP is also said to be used as a generic protocol for communication between user agents and proxies/gateways with other protocols such as SMTP, NNTP, FTP, Gopher, and WAIS, allowing access to resources available in various applications.

An HTTP session is a sequence of request-response network transactions. An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular server port (typically port 80; see the list of TCP and UDP ports). An HTTP server listening on that port waits for a client request message. On receiving the request, the server returns a status line, such as "HTTP/1.1 200 OK", and a message of its own. The body of this message is usually the requested resource, although an error message or other information can also be returned.

https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol

For those who need more details on the protocol and the direction it is taking, this Wikipedia link can clarify a lot about HTTP communication and its advantages.

  • 6

That answer has nothing to do with the question itself.
