What are the advantages and disadvantages of routes being handled by the server (PHP, ASP.NET, etc.) versus by the client (AngularJS, Ember.js, Backbone...)?
In terms of performance, which is better? And as for security, mobile, usability?
Routes are behavioral triggers. When you access http://seusite.com/contato/, the idea is to display a contacts page. The router, in this case, is responsible for "explaining" to the application that a user is trying to access /contato/ and that something must then be done; in this hypothetical case, the goal is to display the page contato.html, for example.
In the case of JavaScript, regardless of the platform, the idea is that specific behaviors are invoked for each page the user accesses. In our example (/contato/), we know there is a form for the user to enter their data. This form, in turn, has validation which, in the case of Backbone.js, can be solved with Backbone.Validation.
Given that, we might ask: would it be valid to invoke Backbone.Validation on every page, when only /contato/ has a form? No. That is conceptually bad. Routing does exactly this: it appoints a specific entity to be responsible for deciding what will or will not happen on the requested page, as the sketch below shows.
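A minimal sketch of that idea, assuming Backbone is loaded and using hypothetical HomeView and ContatoView views (only ContatoView would wire up Backbone.Validation for its form):

```javascript
// Hypothetical views: only the /contato/ route instantiates the
// validation-aware view.
var AppRouter = Backbone.Router.extend({
  routes: {
    '': 'home',
    'contato': 'contato'
  },
  home: function () {
    new HomeView({ el: '#app' }).render();
  },
  contato: function () {
    // Only this route brings in the view that owns the form validation
    new ContatoView({ el: '#app' }).render();
  }
});

new AppRouter();
Backbone.history.start({ pushState: true });
```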
To reinforce the point, two principles apply here: KISS and DRY.
What are these "entities"?
The entities are the "controllers", in the case of Angular.js or Marionette.js; in Backbone.js, the "views" do that job.
And the competition between client and server?
In fact, the competition is relative. You may have client routes and server routes at the same time, if you want: on the client you deal with what the route will do in terms of JavaScript; on the server you deal with what will be delivered to the client when that route is requested.
What does that mean? If you have a SPA (single-page application), client routes are sufficient; if you have multiple pages, you will (probably) work with routing on both sides, client and server, because indicating which JavaScript should run will not be enough. A minimal sketch of that division follows.
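A sketch of the server half of a SPA, assuming Node with Express (any server stack works the same way): every page route returns the same shell, and the client router decides what to render from the URL.

```javascript
var express = require('express');
var app = express();

// Static assets (the client-side bundle) keep their own routes
app.use('/assets', express.static('public/assets'));

// Every other GET falls through to the shell page;
// from here on, the client router takes over
app.get('*', function (req, res) {
  res.sendFile(__dirname + '/public/index.html');
});

app.listen(3000);
```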
Regarding performance, which is better? And as for security, mobile, usability?
The ideal is not to compare the performance of client routes against server routes: the proposals are different, and you have to choose the one that best satisfies your need.
You can control all the routing on the server and not use the routing offered by the client: this, at first, will give you a more productive bootstrap, especially if your knowledge of front-end frameworks is not deep. On the other hand, it will generate a greater number of requests, and the scalability of your application may be compromised in the future by the loss of control and maintainability that this single-sided routing will bring.
As an example of the two routings being employed in parallel, look at GitHub:
Did you notice that a GET request was made, then another of type POST, and nothing was reloaded? Exactly: Backbone, for example, ships a set of pieces that makes this mechanism possible, from Backbone.Router all the way down to window.history.
Now I'll ask you something else: access the folder /Docs/ directly, issuing a synchronous GET request.
Waiting for you to do it...
So, you were taken to exactly the same place as before, when you first accessed the repository and, through a click, ended up in the folder /Docs/. What happened is that the server routing simply fulfilled its role, a role that a client router could never assume. A sketch of the client half of this mechanism follows.
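This is a simplified sketch of the client half, assuming a hypothetical #content container and a hypothetical data-internal marker on links (GitHub's real code differs); the server half is plain routing, answering the direct GET to /Docs/ with a full page:

```javascript
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-internal]');
  if (!link) return;
  event.preventDefault();

  fetch(link.href)                                  // asynchronous GET
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.querySelector('#content').innerHTML = html;
      history.pushState({}, '', link.href);         // URL changes, no reload
    });
});
```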
To work with pushState, do I need a router on the client?
"Router" is nomenclature, not something solid. The logic of routers in general is: observe changes in the URL; if there is a change, do something. Note that the "if there is a change" part belongs to client routers; server routing has no way to perceive changes in real time (as I said, they are different things!). The sketch below makes that loop explicit.
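A framework-free sketch of that loop, with a hypothetical route table:

```javascript
var routes = {
  '/': function () { console.log('home'); },
  '/contato': function () { console.log('contact form setup'); }
};

// "If there is a change, do something"
function dispatch() {
  var handler = routes[location.pathname];
  if (handler) handler();
}

function navigate(path) {
  history.pushState({}, '', path); // pushState fires no event...
  dispatch();                      // ...so we dispatch by hand
}

window.addEventListener('popstate', dispatch); // back/forward buttons
dispatch();                                    // resolve the initial URL
```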
To illustrate a router better, see the following diagram:
This diagram represents the most common routing model. At first, everything is only names, which means you may have a structure exactly like this and never have realized it. Obviously, there may be other pieces between the links of this chain (models, helpers, etc.), but the flow of requests moves along this stream.
What about security?
Speaking of security, the premise is trivial: you effectively protect your application on the server. The client exists to handle information that people need to see and interact with, and should never perform logical processes of any significance.
At most, the client should serve to show error messages; the criteria behind those messages must be the server's responsibility. This rule should be unbreakable.
Since security is never enough, you can combine the useful with the pleasant: validate on both sides. Guarantee as much as possible in the browser and, in case something slips through or is tampered with, make sure the server is bulletproof. Action and reaction: client-side validations spend the user's resources, not the server's. Never forget this. A sketch of this two-sided validation follows.
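A minimal sketch of validating on both sides, with a hypothetical #contato-form and email field; the server side (commented, assuming Node/Express) repeats the same rule because the client can always be bypassed:

```javascript
function isValidEmail(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Client: fast feedback, paid for with the user's resources
document.querySelector('#contato-form').addEventListener('submit', function (e) {
  if (!isValidEmail(this.querySelector('[name=email]').value)) {
    e.preventDefault(); // block submission and show the error locally
  }
});

// Server (sketch): assume the client was bypassed and validate again
// app.post('/contato', function (req, res) {
//   if (!isValidEmail(req.body.email)) {
//     return res.status(422).send('invalid email');
//   }
//   // ...persist safely
// });
```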
Okay, and where do routes fit into security? Hey, calm down! Their responsibility is to direct people to places. Sanitizing whoever passes through the doors is someone else's responsibility.
... Mobile?
The rules for mobile (web) applications are the same, always remembering that requests should be kept to a minimum to ensure a more satisfactory browsing speed.
If you mean native apps: that is a subject for another topic.
... Usability?
Routing is not visual. This part of the question does not match the context.
Great answer, it is much clearer now. For the record, I was studying routes made with Backbone and also with AngularJS. Thanks!
There was a time when Twitter worked as follows:
The user accessed a Twitter page
A very heavy JavaScript application was downloaded and compiled
Once compiled, the application resolved the route, which worked through a hash (#)
Only then was the content of the page downloaded via AJAX and rendered by the browser
This made the site too heavy for anyone who just wanted to see a single tweet, so they decided to change how it works. Today the routes no longer use the hash: when you access the site, the route is first recognized by the server, and the page content is also rendered on the server, making the first load faster.
However, once the page has loaded, the JavaScript application "takes over" your browser: the site's internal links are no longer loaded the traditional way but are intercepted and downloaded via AJAX, and the routes change no longer via hash but through the History API. (The site simply runs without client routes in browsers that do not support the History API.) The sketch below contrasts the two styles.
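A sketch contrasting the two styles, with a hypothetical render function standing in for the AJAX content loading:

```javascript
// Old Twitter style: state lives after the '#', invisible to the server
window.addEventListener('hashchange', function () {
  render(location.hash.slice(1) || '/'); // e.g. "#/jack" -> "/jack"
});

// Current style: real URLs via the History API, so the server can
// render the very same path on a first, full load
function go(path) {
  if (window.history && history.pushState) {
    history.pushState({}, '', path);
    render(path);
  } else {
    location.href = path; // no History API: fall back to a normal request
  }
}

function render(path) {
  // fetch + inject the content for `path` (omitted)
}
```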
This story was told on the official Twitter development blog.
You may also be interested in reading this question on Quora.
For some years now, the web framework Ruby on Rails has included by default, in new applications, a JavaScript library called Turbolinks.
This library aims to bring the performance benefits of a single-page application to applications that do not use a SPA framework. It works more or less like this:
The first time the user opens the site, the server recognizes the route and renders the page normally
After the page loads, Turbolinks "takes over" the browser, intercepting the site's internal links
Internal links are downloaded via AJAX, the routes are changed using the History API, and the page's <body> is replaced by the <body> of the HTML downloaded via AJAX
The JavaScript and CSS files in the <head> are not downloaded and compiled again, which yields better performance. The improvement is especially perceptible during development, when you work with many JS and CSS files that have not yet been concatenated into one. A rough sketch of this mechanism follows.
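A rough sketch of what happens under the hood (not Turbolinks' actual code): fetch the next page, swap only the <body>, and keep the already-compiled <head> assets.

```javascript
function visit(url) {
  fetch(url)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, 'text/html');
      document.body.replaceWith(doc.body); // <head> JS/CSS stay untouched
      history.pushState({}, '', url);      // URL changes without a reload
    });
}
```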
But some people do not like this approach because they consider it a kludge ("gambiarra" in Brazilian slang); they think this kind of optimization should be the browser's responsibility.
So this is the approach that has been gaining strength lately. Besides Twitter, GitHub, Facebook and many other web giants have adopted it; it is a middle ground between leaving the routes entirely to the server or entirely to the client.
If performance is not critical in your application, leaving everything to the server is surely the best way, as it gives you less work and is easier to maintain. If it is critical, I would study the approach above. The truth is that every case is different; it all depends on your requirements, knowledge, etc.
I don’t get it. What does Turbolinks have to do with routing? The pushState?
@Guilherme Exactly. I quoted it just to show that this approach exists. The routing actually stays on the server, but you gain performance.
I found it interesting; I had forgotten about Turbolinks, haha. But I think a SPA still has better usability, no?
@Rod SPAs are complete frameworks. Turbolinks is more like a true kludge. =P
Haha, the Rails folks will kill you if they read this!
Let's go: in terms of performance, the processing of the routes themselves is indifferent; what matters is the processing that each route will trigger.
If you choose to process routes on the client but still have to request the data from the server, it is no use: you end up with route processing on both sides.
For security, always opt for the server; the client can be tampered with.
For mobile, the server is certainly better: there are limitations across many versions and models of devices (processing power, cache, etc.), so it is easier to control things on the server and have it return the data.
What do you define as a client-side route? – gpupo