What is the real need for SSR (Server Side Rendering)?


It's well known that web application development has taken a rather "different" course for a while now. Frameworks such as AngularJS, Vue, and React have become quite popular; what they have in common is that they promote reactivity, componentization, and the like.

In general, when we use one of these, the application is rendered entirely by JavaScript, with the elements generated dynamically.

I personally see advantages in some of this, and I use Vue to build some application frontends.

But something has worried me lately: I constantly see articles and courses saying that it is necessary to use SSR (Server-Side Rendering), which consists of a Node.js server doing all the rendering work for these applications on the server side.

In this space, I know of Nuxt.js for Vue and Next.js for React, which render the built application on the server.
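The core idea behind tools like Nuxt.js and Next.js can be sketched without either framework. The following is a minimal, framework-agnostic illustration in plain Node.js (the `productList` and `renderPage` functions and the `window.__STATE__` key are invented for this sketch, not part of any real API): the same render function produces HTML as a string on the server, and the state is embedded so client-side code could later "hydrate" the page without refetching.

```javascript
// A "component" here is just a function from state to an HTML string.
// Frameworks let the exact same component code run on server and client.
function productList(products) {
  const items = products
    .map((p) => `<li>${p.name} - $${p.price}</li>`)
    .join("");
  return `<ul id="products">${items}</ul>`;
}

// Server side: render the full HTML up front, so crawlers and slow
// clients receive real content instead of an empty <div id="app">.
function renderPage(products) {
  return [
    "<!doctype html>",
    "<html><body>",
    `<div id="app">${productList(products)}</div>`,
    // State is embedded so the client can "hydrate" without refetching.
    `<script>window.__STATE__ = ${JSON.stringify(products)}</script>`,
    "</body></html>",
  ].join("\n");
}

const html = renderPage([{ name: "Book", price: 10 }]);
console.log(html.includes("<li>Book - $10</li>")); // true: content is in the response
```

In a real framework the render function is your component tree and hydration is handled for you; the point here is only that the very first response already contains the content.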

In most cases, I see people saying that SSR is necessary because of a site's SEO.

That leaves me with some questions:

  • What are the actual needs that SSR serves?
  • Do client-rendered applications have, or can they have, SEO problems?
  • What disadvantages do I have when building a JavaScript application without SSR?
  • If a framework/library has problems being rendered client-side, wouldn't it be better to go back to backend-focused programming that already did this, such as PHP, Ruby, or Python? Isn't using these SSR-dependent applications a step backwards?
  • Regarding SEO specifically, this should interest you, and it explains why SSR can directly affect SEO: there is no guarantee that crawlers will wait for, and re-read, the DOM after you manipulate it with client-side scripts. So it is advisable to deliver everything already rendered, since it is not certain that bots will correctly index content that was manipulated after the response was delivered, even though bots are able to run scripts: https://answall.com/questions/406120/alterar-as-meta-tags-no-carregamento-afeta-no-rankeamento/406128#406128

  • Rendering on the server also helps with UX: if you have a lot of component-rendering logic (I don't know how it is with Vue.js, but this matters a lot in GWT), the screen will stutter and navigation will suffer. It also overloads the browser's processing and memory (something I particularly care about in GWT).

  • @Jeffersonquesado but in these cases I wonder: wouldn't it be better to keep using good old ASP.NET MVC with Razor as the template engine, or Laravel's Blade? It looks like, after all, the people who developed those libraries are regressing.

  • Basically, Vue's SSR exists to fix what the "trendy" crowd in general had not understood until then. HTML has always been server-side by nature, and even with the advent of Ajax there were people who were smart from the start and built server-side pages enhanced with JS, without losing the server side. Unfortunately, part of the JS crowd did not look at the bigger picture and steamrolled ahead with frameworks. For example, there are many initiatives, such as pjax.js from fellow user @Guilhermenascimento, that are already the real SSR by nature, without needing a patch.

  • SSR: a solution created to solve problems that the same people created.

  • You're at point A; you make your way to point B because you think it's prettier there. You notice that B is not exactly what you expected and you want to go back to A. You have two options: 1) go back the same way to A, or 2) open a new path and stop at C, thinking you are at A. SSR is the second option.

  • @Wallacemaxters, considering that it comes from the JS community, they like to reinvent the wheel and think they are riding the crest of the wave =P. GitLab works with something like SSR, but in a more template-oriented approach: the frontend machine queries the API machine after translating the browser request, and then, on top of that reply, it builds the HTML to be sent.


1 answer



TL;DR

CSR:

  • impairs SEO
  • harms the UX
  • is inefficient

in certain scenarios.

Detailing

An answer of mine here was disputed where I said that SEO suffers if you use JS heavily. According to the people who contested it, "SSR serves no purpose, it is an unnecessary fantasy"; they did not say that in so many words, but what they said amounts to it.

In reality this technique is needed because SEO is impaired: it is complicated for a crawler to simulate every situation a user might trigger when you abuse JavaScript to assemble content on the client, so certain content may simply never be indexed, contrary to the popular belief that has spread around to sell certain technologies. And in many cases there will be a performance penalty when the client is very "sophisticated", and poor performance itself hurts SEO.

There are some very specific situations in which a crawler can detect dynamic behavior and render it correctly, but it cannot understand them all, especially those initiated by the user.
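To make the indexing concern concrete, here is a small hypothetical comparison. The `indexableText` helper below is a naive stand-in for what a crawler that does not execute scripts can extract from a response; it is not a real crawler implementation, and the two response strings are invented examples:

```javascript
// What a non-script-executing crawler "sees": only the initial HTML
// payload. Content injected later by JavaScript is invisible to it.
const csrResponse = `
  <div id="app"></div>
  <script src="/bundle.js"></script>`;

const ssrResponse = `
  <div id="app"><h1>Product page</h1> <p>Full description.</p></div>`;

// Naive extraction of indexable text: drop scripts, then strip tags.
function indexableText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/g, "")
    .replace(/<[^>]+>/g, "")
    .trim();
}

console.log(indexableText(csrResponse)); // "" (nothing to index)
console.log(indexableText(ssrResponse)); // contains "Product page"
```

Modern crawlers can execute some JavaScript, but content that only appears after user interaction, or after slow client-side requests, still risks never reaching the index.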

In addition, there are situations where the client may not be able to do everything needed to assemble the page, harming the user experience: the user may not see the content properly, may see nothing at all, or may run into difficulties.

Recently there was an answer here on SOpt (already deleted) stating that AJAX should always be used, without giving reasons, only opinion, and the person insisted after being challenged. People do things without really understanding what actually happens and end up "teaching" other people wrongly, and anyone without the knowledge assumes it to be true. AJAX may not be available, or may make loading slower, contrary to popular belief. A site with a large volume of traffic once adopted AJAX and its performance got much worse; they abandoned it after I showed that AJAX was not the magic they imagined. It was yet another case of falling for the fairy tale.

Poor performance, sometimes not even the client's fault but a bad connection, can hurt the user experience. Ideally, the rendering should even produce static content. On top of that, you gain efficiency in server costs.

Obviously, if you have a sophisticated client that renders in a complex way, and you know it is good to render on the server as well to avoid the problems above, you do not want to write different code for each side; it is then interesting to have a tool that runs the same client code on the server. But I cannot speak to the quality of those tools.

Abuse of the web, and SPAs

One thing I notice is that people almost always have either a web application that doesn't need SEO, or a website that could be largely or entirely static, yet they use heavyweight frameworks that consume a lot of processing to produce the same result. The genuinely dynamic parts of a website should only be facilitators to reach pages that are already rendered. But people follow rules without understanding the motivations.

I find the last question opinion-based, or at least scenario-dependent. I think you shouldn't render on the client if you don't need to. Most people take this to the extreme because it's fashionable, and that is wrong by definition.

Generally speaking, I don't see problems with client-side rendering in web applications, because abusing the web for applications is already a mistake in itself, so one more mistake doesn't make that much difference. An application (including a PWA) does not need SEO, and it is expected that the user's environment can run it properly; if it can't, it is not meant to work anyway. You can state requirements for what the application needs in order to run.

If you are making websites, it is best to produce as much static content as possible, even by having a CMS generate it on the server; or, if it is dynamic, keep it basic. For applications, it depends.

Note that I'm not saying that some improvement using client rendering can't help; the problem is the abuse. People learn something, then talk a little more about it, the conversation evolves, and they adopt new techniques, which can be useful to some extent in certain scenarios, but then they decide to adopt them for everything and lose sight of the real motivation. Remember Progressive Enhancement.

Keep it simple; when that is not enough, add justified functionality with real motivation, considering all its disadvantages, and address those disadvantages right away. I dislike it when people adopt something and then go looking for a solution to the problems it caused, which is a "mantra" I repeat a lot. Much of what has been invented in the last decades of IT is a fix for a problem that did not exist before someone adopted the wrong thing. Desktop applications had one or two problems; web applications now have several. Wouldn't it have been better to fix the known problems of the desktop than to redo everything on a platform that never stops showing problems?
