If there are only a few objects it does not matter much, but if their number is large, putting the function in the prototype has the advantage that only one copy of it exists, instead of several (functions are first-class objects in JavaScript). The representation below illustrates this:
{x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c} {x,y,z,a,b,c}
Vs.:
{a,b,c}
   ^
   |
   +--------+--------+--------+--------+--------+
   |        |        |        |        |        |
{x,y,z}  {x,y,z}  {x,y,z}  {x,y,z}  {x,y,z}  {x,y,z}
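To make the comparison concrete, here is a minimal sketch of the two approaches (ClasseA, ClasseB and soma are just illustrative names, not taken from the question):

// Per-instance: each object gets its own copy of the function
function ClasseA(x, y, z) {
    this.x = x; this.y = y; this.z = z;
    this.soma = function() { return this.x + this.y + this.z; };
}

// Prototype: all instances share a single function object
function ClasseB(x, y, z) {
    this.x = x; this.y = y; this.z = z;
}
ClasseB.prototype.soma = function() { return this.x + this.y + this.z; };

var a1 = new ClasseA(1, 2, 3), a2 = new ClasseA(4, 5, 6);
var b1 = new ClasseB(1, 2, 3), b2 = new ClasseB(4, 5, 6);
console.log(a1.soma === a2.soma); // false - two distinct function objects
console.log(b1.soma === b2.soma); // true  - one shared function object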
That does not necessarily guarantee that performance in time will be worse; it only means that, by copying the function into each instance, more memory will be spent. Often these two factors are just one tradeoff (i.e. space is increased to reduce time, or vice versa). But in this particular case, I believe the prototype solution will do better on both counts, because:
- If an object occupies more memory, fewer objects fit on a cache page, so the number of cache misses is larger;
- If the function that is called several times is in the prototype, and the prototype is in the cache, access to it is as fast as it could be (there is some overhead, but it should be negligible).
Again, this is just my own interpretation; the only way to know for sure is by testing. This example on jsperf, for instance, gave results consistent with my interpretation (in Chrome, at least).
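If you want to test it yourself outside jsperf, a rough sketch of a micro-benchmark would be something like the one below (Proto, PorInstancia and dobra are illustrative names; results vary a lot between engines and are easily distorted by JIT optimizations, so take it only as an illustration):

// Method on the prototype: one shared function object
function Proto(x) { this.x = x; }
Proto.prototype.dobra = function() { return this.x * 2; };

// Method assigned per instance: one function object per object
function PorInstancia(x) {
    this.x = x;
    this.dobra = function() { return this.x * 2; };
}

var N = 1000000;

console.time('criacao prototype');
var ps = [];
for (var i = 0; i < N; i++) ps.push(new Proto(i));
console.timeEnd('criacao prototype');

console.time('criacao por instancia');
var qs = [];
for (var i = 0; i < N; i++) qs.push(new PorInstancia(i));
console.timeEnd('criacao por instancia');

console.time('chamadas prototype');
var s1 = 0;
for (var i = 0; i < N; i++) s1 += ps[i].dobra();
console.timeEnd('chamadas prototype');

console.time('chamadas por instancia');
var s2 = 0;
for (var i = 0; i < N; i++) s2 += qs[i].dobra();
console.timeEnd('chamadas por instancia');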
P.S.: Depending on how it is done, there may be a single function object and only several references to it. Example:
function foo() { ... }

function MinhaClasse(...) {
    ...
    this.foo = foo;
}
In this case there is still some memory spent on the reference itself, but the impact is not so great. On the other hand, if the function is defined inside the constructor, especially if it captures variables of the enclosing function (see closures), then the space requirement becomes even greater (since there is in fact an extra function object for each instance):
function MinhaClasse(...) {
    ...
    this.foo = function() { ... };
}
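Putting the two variants of the P.S. side by side makes the difference visible (foo1, ClasseRef and ClasseClosure are just illustrative names, a sketch of the idea rather than the original code):

// Variant 1: a single external function, each instance stores only a reference
function foo1() { return this.valor * 2; }
function ClasseRef(valor) {
    this.valor = valor;
    this.foo = foo1;          // just a reference; no new function object
}

// Variant 2: a closure defined inside the constructor, one function object per instance
function ClasseClosure(valor) {
    this.foo = function() {   // new function object every time the constructor runs,
        return valor * 2;     // because it captures the local variable "valor"
    };
}

var r1 = new ClasseRef(10), r2 = new ClasseRef(20);
var c1 = new ClasseClosure(10), c2 = new ClasseClosure(20);
console.log(r1.foo === r2.foo); // true  - same function object shared by reference
console.log(c1.foo === c2.foo); // false - each instance carries its own closure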