First of all, please let me quote one of my favorite sentences from Mr D.
I have been writing JavaScript for 8 years now, and I have never once found need to use an uber function. The super idea is fairly important in the classical pattern, but it appears to be unnecessary in the prototypal and functional patterns. I now see my early attempts to support the classical model in JavaScript as a mistake.
Douglas Crockford, on Classical Inheritance in JavaScript
Override
In classical OOP, override means that a subclass can declare a method already inherited from its superclass, making that method the only one directly callable on instances of that subclass.
<?php
class A {
function itsAme() {
$this->me = 'WebReflection';
}
}
class B extends A {
function itsAme() {
// B instances can access only
// this method, which could access
// internally to the parent one
parent::itsAme();
echo $this->me;
}
}
$a = new A;
$a->itsAme(); // nothing happens
$b = new B;
$b->itsAme(); // WebReflection
?>
Override In JavaScript
Since there is no native extends, we could say that in JavaScript an override is anything able to shadow an inherited property or method.
var o = {}; // new Object;
o.toString(); // [object Object]
o.toString = function () {
return "override: " +
// call the parent method
Object.prototype.toString.call(this)
;
};
Moreover, while in classical OOP an override is usually specific to methods and nothing else, in JavaScript we could even decide that at some point a method is not a method anymore:
// valid assignment
o.toString = "[object Object]";
// NOTE: toString is invoked
// every time we try to convert
// an object implicitly
// the above code will break if we
// try to alert(o)
// it will work if we alert(o.toString)
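To make the breakage concrete, here is a minimal sketch; it uses String(o) instead of alert(o) so it runs outside a browser, but the conversion path is the same:

```javascript
var o = {};
o.toString = "[object Object]"; // a string now, not a function

var error = null;
try {
  // implicit conversion looks for a callable toString;
  // ours is a string, and the valueOf fallback returns
  // the object itself, so ToPrimitive gives up
  String(o);
} catch (e) {
  error = e;
}
console.log(error instanceof TypeError); // true
```

Accessing o.toString directly still works, of course, since no conversion is involved.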
This introduction shows how free we are to change the rules, and reminds us that if we would like to emulate classical inheritance patterns, it is a good idea to analyze deeply whether it makes sense to change a method into a variable, or vice versa. Most of the time this is not what we want, so let's analyze methods only.
Why We Need Override
Especially in a non-compiled language such as JavaScript, and generally speaking as good practice for any developer, we would like to reuse code we have already written. If a super method provides basic configuration and we would like to add more, does it make sense to rewrite all of the super method's logic and procedures, plus the extra part we need?
Extends ABC
A must-know in JavaScript is how to inherit a constructor.prototype via another constructor, considering there is no native way to extend constructors. In JavaScript, (almost) everything is an object that inherits from other objects.
To create an inheritance chain, all we need is a simple function like this:
var chain = (function () {
// recycled empty callback
// used to avoid constructors execution
// while extending
function __proto__() {}
// chain function
return function ($prototype) {
// associate the object/prototype
// to the __proto__.prototype
__proto__.prototype = $prototype;
// and create a chain
return new __proto__;
};
}());
function A (){}
function B (){}
// create the chain
B.prototype = chain(A.prototype);
new B instanceof A; // true
new B instanceof B; // still true
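As a side note, ES5 standardized exactly this trick as Object.create; ignoring its optional property-descriptors argument, the two are interchangeable:

```javascript
function A() {}
function B() {}
// same effect as B.prototype = chain(A.prototype)
B.prototype = Object.create(A.prototype);

console.log(new B instanceof A); // true
console.log(new B instanceof B); // true
```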
With this in mind, we can already start to create basic subclasses.
Hard Coded Override
One of the simplest, most robust, fast, efficient, explicit, and cleanest patterns to implement overrides is the hard coded one.
// base class
function A() {
console.log("A");
}
// enrich native prototype
A.prototype.a = function () {
console.log("A#a");
};
A.prototype.b = function () {
console.log("A#b");
};
A.prototype.c = function () {
console.log("A#c");
};
// subclass, first level
function B() {
// use super constructor
A.call(this);
console.log("B");
}
// create the chain
B.prototype = chain(A.prototype);
// enrich the prototype
B.prototype.a = function () {
// override without recycling
console.log("B#a");
};
// more complex override
B.prototype.b = function () {
// requires two super methods
A.prototype.a.call(this);
A.prototype.b.call(this);
console.log("B#b");
};
// subclass, second level
function C() {
// call the super constructor
// which will automatically call
// its super as well
B.call(this);
console.log("C");
}
// chain the subclass
C.prototype = chain(B.prototype);
// enrich the prototype
// every override will
// recycle the super method
// we don't care what's up there
// we just recycle code and logic
C.prototype.a = function () {
B.prototype.a.call(this);
console.log("C#a");
};
C.prototype.b = function () {
B.prototype.b.call(this);
console.log("C#b");
};
C.prototype.c = function () {
B.prototype.c.call(this);
console.log("C#c");
};
The above example is a simple test case to better understand how the chain works.
To test all the methods, all we need to do is create an instance of the last class and invoke its a, b, and c methods.
var test = new C;
console.log('-------------------------');
test.a();
console.log('-------------------------');
test.b();
console.log('-------------------------');
test.c();
That's it: if we use Firebug or any other browser console, we should read this result:
A
B
C
-------------------------
B#a
C#a
-------------------------
A#a
A#b
B#b
C#b
-------------------------
A#c
C#c
The first block shows how the B constructor automatically invokes the A one, so the order in the console will be "A", as the first code executed against the current instance of C, then "B", and finally "C".
If A, B, or C define properties during initialization, this inside those methods will always point to the instance of C: the "test" variable, indeed.
The other logs follow the logic and order inside the other methods.
Please note that while B.prototype.a does not invoke the super method, B.prototype.c does not exist at all, so B.prototype.c, the one invoked by C.prototype.c, will be directly the inherited method: A.prototype.c.
More happens in the B.prototype.b method, where there are two different invocations: A.prototype.a first, and A.prototype.b after.
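That fall-through is easy to verify; a self-contained check, with the chain helper repeated so the snippet runs on its own:

```javascript
function chain($prototype) {
  function __proto__() {}
  __proto__.prototype = $prototype;
  return new __proto__;
}

function A() {}
A.prototype.c = function () { return "A#c"; };

function B() {}
B.prototype = chain(A.prototype);

// B never defines its own "c" ...
console.log(B.prototype.hasOwnProperty("c")); // false
// ... so the lookup falls through to the inherited method
console.log(B.prototype.c === A.prototype.c); // true
```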
Pros
With this pattern, it is truly difficult to miss the method that caused trouble, if any. Since this pattern is explicit, all we read is exactly what is going on. Methods are shareable via mixins, if necessary, and performance is close to the best possible for each instance method call.
Cons
Byte-wise, this pattern could waste a lot of bandwidth. Minifiers won't be able to shorten the way we access the method, e.g. A.prototype.a, plus we have to write a lot of code, and we are not even close to the classical super/parent pattern.
Another side effect is surely the fact that many developers don't fully understand the difference between call and apply or, even worse, the concept of an injected context, so this as a first argument could cause confusion.
Finally, the constant lookup for the constructor, plus its prototype access, plus its method access, suggests that performance could be somewhat improved.
Hard Coded Closure
Designed specifically to address that last con, we could think about something able to speed up each execution. In this case, the test is against the B.prototype.b method:
B.prototype.b = (function () {
// closure to cache a, and b, parent access
var $parent_a = A.prototype.a;
var $parent_b = A.prototype.b;
// the method that will be available
// as "b" for each instance
return function () {
// cool, it works!
$parent_a.call(this);
$parent_b.call(this);
console.log("B#b");
};
}());
We still have every other con here, and we had to write the boring A.prototype.methodName part twice, in two consecutive lines.
Considering that property access, at least at the first level, is extremely fast in JavaScript, we can try to preserve performance as much as possible while reducing file size and typing:
B.prototype.b = (function () {
// cache just the parent
var $parent = A.prototype;
return function () {
// and use it!
$parent.a.call(this);
$parent.b.call(this);
console.log("B#b");
};
}());
The natural next step is an outer closure that persists for the whole prototype, so that every method can access the $parent at any time:
var B = (function () {
// cache once for the whole prototype
var $parent = A.prototype;
function B() {
$parent.constructor.call(this);
console.log("B");
}
// create the chain
B.prototype = chain($parent);
// enrich in this closure the prototype
B.prototype.a = function () {
console.log("B#a");
};
B.prototype.b = function () {
$parent.a.call(this);
$parent.b.call(this);
console.log("B#b");
};
return B;
}());
OK, now we have removed almost every single con from this pattern ... but somebody could still argue that call and apply may confuse junior developers.
Libraries And Frameworks Patterns
I believe we all agree that frameworks are good when they make it possible to do more complex stuff with less, cleaner code and, sometimes, better logic. As an example, the base class A could be defined simply like this:
var A = new Class({
a: function () {
console.log("A#a");
},
b: function () {
console.log("A#b");
},
c: function () {
console.log("A#c");
}
});
// please note to avoid a massive post
// I have intentionally skipped the
// constructor part for these cases
Isn't that beautiful? I think it is! Most of the frameworks or libraries we know implement a similar approach to define an emulated Class and its prototype. I have written a more complete Class example at the end of this post, but please don't rush there; be patient, thanks.
A Common Mistake
When we think about parent/super, we think about a way to access the "inherited stuff", not something attached to the instance and completely different from the classical OOP meaning, the one we are theoretically trying to emulate. I have already provided a basic example of what I mean with the first piece of code, the one that shows how things are in PHP, and not only there (Java, C#, and many others).
The parent keyword should be an access point to the whole inherited stack, able to bring the current this reference there.
Unfortunately, 90% of the frameworks out there got it wrong, as I partially described in one of my previous posts, entitled: The JavaScript _super Bullshit. Feel free to skip that read, since I both made a mistake regarding MooTools, which is still affected by the problem I am going to talk about tho, and probably did not explain the limitations properly.
In any case, this post is about override patterns, so let's see what we could find somewhere in the cloud ;)
Bound $parent
As a first point, I have chosen the name $parent to avoid polluting method scopes with a variable that could be confused with the original window.parent, while I have not used super since it is both a reserved keyword and unfamiliar to PHP developers. This pattern's aim is to make things as simple as possible: all we have to do is invoke $parent(), when and if necessary, directly from the current method.
To be able to test the Class emulator and this pattern, we need to define them. Please note the provided Class is incomplete and not suitable for any production environment, since it has been created simply to support this post and its tests.
function Class(definition) {
// the returned function
function Class() {}
// we would like to extend via
// the extend property, if present
if (definition.extend) {
// attach the prototype
Class.prototype = definition.extend.prototype;
// chain it, recycling the Class itself
Class.prototype = new Class;
// enrich the prototype with other definition properties
for (var key in definition) {
// but only if functions, since
// we would like to add the magic
if (typeof definition[key] === "function") {
Class.prototype[key] = callViaParent(
Class.prototype[key],
definition[key]
);
}
}
} else {
// nothing to extend
// just enrich the prototype
for (var key in definition) {
Class.prototype[key] = definition[key];
}
}
// be sure the constructor is this one
Class.prototype.constructor = Class;
// return the class
return Class;
}
// let's imagine new Class SHOULD BE an instanceof Class
// or remove the new when you create one
Class.prototype = Function.prototype;
// magic callback to add magic, yeah!
function callViaParent(parent, method) {
// create runtime the wrapping method
return function () {
// create runtime the bounded $parent
// note that ONLY $ is in the local scope
// $parent will be defined in the GLOBAL scope
var $ = $parent = function () {
return parent.apply($this, arguments);
};
// trapped reference for the $parent call
var $this = this;
// invoke the current method
// $parent will be the one defined
// few lines before
var result = method.apply(this, arguments);
// since the method could have another $parent call
// we want to be sure that after its execution
// the global $parent will be again the above one
$parent = $;
// return the result
return result;
};
}
We've got all the magic we need to define our classes, ready?
var A = new Class({
a: function () {
console.log("A#a");
},
b: function () {
console.log("A#b");
},
c: function () {
console.log("A#c");
}
});
var B = new Class({
extend: A,
a: function () {
console.log("B#a");
},
b: function () {
// oooops, we have only one
// entry point for the parent!
$parent();
console.log("B#b");
}
});
var C = new Class({
extend: B,
a: function () {
$parent();
console.log("C#a");
},
b: function () {
$parent();
console.log("C#b");
},
c: function () {
$parent();
console.log("C#c");
}
});
The B.prototype.b method cannot emulate what we tested before via the Hard Coded Pattern. The $parent variable can obviously "host" only one method to call, A.prototype.b and nothing else.
Here we can already spot the first limitation of the magic we would like to bring into our daily code. Let's test it in any case:
var test = new C;
console.log('-------------------------');
test.a();
console.log('-------------------------');
test.b();
console.log('-------------------------');
test.c();
The result?
B#a
C#a
-------------------------
A#b
B#b
C#b
-------------------------
A#c
C#c
Cool: at least what we expected is exactly what happened!
Pros
This pattern is closer to the classical one and simpler to understand. The global reference is something we could ignore if we consider how many characters we saved during class definitions.
Cons
Everything is wrong. The parent is a function, not a reference, and the real parent is bound on each method call. This is a performance killer, constant global namespace pollution, and quite illogical, even if pretty. Finally, we have only one method to call and zero parent access, since every method shares a runtime-created global $parent variable, and nothing can be recycled because the this reference could be different on every method invocation.
Slightly Better Alternatives
At least to avoid polluting the global scope with a runtime-changed $parent, some frameworks implement a different strategy: pass the $parent as the first argument to each method that extends another one. Strategies to speed this up vary, but one of the most efficient could be:
function callViaParent(parent, method) {
// created once, and passed along every time
function $parent($this, args) {
return parent.apply($this, args || []);
}
return function () {
// put $parent as first argument
Array.prototype.unshift.call(arguments, $parent);
// invoke the method
return method.apply(this, arguments);
};
}
This could produce something like:
var B = new Class({
extend: A,
a: function ($parent) {
console.log("B#a");
},
b: function ($parent) {
$parent(this);
console.log("B#b");
}
});
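Here is a self-contained sanity check of this strategy, with the helper inlined and illustrative method names; note how the method forwards the instance explicitly via $parent(this):

```javascript
function callViaParent(parent, method) {
  // created once, handed to the method on every call
  function $parent($this, args) {
    return parent.apply($this, args || []);
  }
  return function () {
    // expose $parent as the first argument
    Array.prototype.unshift.call(arguments, $parent);
    return method.apply(this, arguments);
  };
}

var log = [];
var parentB = function () { log.push("A#b"); };
var childB = callViaParent(parentB, function ($parent) {
  $parent(this); // invoke the super method
  log.push("B#b");
});

childB.call({});
console.log(log.join(",")); // "A#b,B#b"
```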
Runtime Parent Method
What is the only object that travels around with the this reference? The current instance, isn't it? So what better place to attach, at runtime, the right parent for the right method? This pattern seems to be the most widely adopted one; we still have inconsistencies with the classical OOP parent concept, but somehow it becomes more natural to read or write:
// A is the same we have already
var B = new Class({
extend: A,
a: function () {
console.log("B#a");
},
b: function () {
// here comes the magic
this.$parent();
console.log("B#b");
}
});
var C = new Class({
extend: B,
a: function () {
this.$parent();
console.log("C#a");
},
b: function () {
this.$parent();
console.log("C#b");
},
c: function () {
this.$parent();
console.log("C#c");
}
});
With a runtime attached/switched/twisted property we have at least solved the binding problem. To obtain the above behavior, we only need to change the "magic" callViaParent callback.
function callViaParent(parent, method) {
// cached parent
function $parent() {
return parent.apply(this, arguments);
}
return function () {
// runtime switch
this.$parent = $parent;
// result
var result = method.apply(this, arguments);
// put the $parent as it was
// so if reused later, it's the correct one
this.$parent = $parent;
// return the result
return result;
};
}
Et voilà: if we test the C instance and its methods again, we still obtain the expected result via our new, elegant, "semantic" way to call a parent:
B#a
C#a
-------------------------
A#b
B#b
C#b
-------------------------
A#c
C#c
Pros
The execution speed is surely better than a constantly bound reference. We don't need to adopt other strategies, and somehow this feels like the most natural way to go, at least in JS (still nonsense for Java and classical OOP guys).
Cons
Again, class prototype creation is slower due to all the wrapping we need for each method that extends another one. The meaning of parent is still different from classical OOP: we have a single access point to the inherited stack and nothing else.
This simply means that, one more time, we cannot emulate the initial behavior we were trying to simplify ... can we define this as more powerful? Surely cleaner, tho.
Runtime Parent
The single access point to the parent stack is truly annoying, imho. This is why we could use analogous strategies to obtain full access. To obtain this, we again need to change everything, starting from the "magic" method:
function callViaParent(parent, method) {
// still a wrap to trap arguments and reuse them
return function () {
// runtime parent
this.$parent = parent;
// result
var result = method.apply(this, arguments);
// parent back as it was before
this.$parent = parent;
// the result
return result;
};
}
We have already improved performance using a simple attachment, but this time parent will not be the method: it will be the SuperClass.prototype:
function Class(definition) {
function Class() {}
if (definition.extend) {
Class.prototype = definition.extend.prototype;
Class.prototype = new Class;
for (var key in definition) {
if (typeof definition[key] === "function") {
Class.prototype[key] = callViaParent(
// we pass the prototype, not the method
definition.extend.prototype,
definition[key]
);
}
}
} else {
for (var key in definition) {
Class.prototype[key] = definition[key];
}
}
Class.prototype.constructor = Class;
return Class;
}
With the above changes, our test code will look like this:
// A is still the same
var B = new Class({
extend: A,
a: function () {
console.log("B#a");
},
b: function () {
// HOORRAYYYY, Full Parent Access!!!
this.$parent.a.call(this);
this.$parent.b.call(this);
console.log("B#b");
}
});
var C = new Class({
extend: B,
a: function () {
this.$parent.a.call(this);
console.log("C#a");
},
b: function () {
this.$parent.b.call(this);
console.log("C#b");
},
c: function () {
this.$parent.c.call(this);
console.log("C#c");
}
});
We are back to normality; if we test the above code, we'll obtain this result:
B#a
C#a
-------------------------
A#a
A#b
B#b
C#b
-------------------------
A#c
C#c
The multiple parent access in method "b" is finally back, which means we can now emulate the original code.
We have introduced a regression, tho! For performance reasons, and to avoid crazy steps in the middle of a simple method invocation, call and apply are back in the field!
Pros
Finally we have control over the parent and, even if attached, we are closer to classical OOP. Performance is reasonably good: just one assignment, as before, but with more control.
Cons
We inevitably reintroduce call and apply, hoping our users/developers got the difference and understand them. We are still parsing methods and wrapping them, which means overhead for each Class creation and each extended method invocation.
Lazy Module Pattern
On and on with these patterns; somebody may have already spotted that we are back to the initial one:
// last described pattern:
this.$parent.b.call(this);
// hard coded closure
$parent.b.call(this);
The question now is how heavy it could be to place a bloody $parent variable inside the prototype scope, still using the new Class approach.
Wait a second ... if we simply try to merge the module pattern with our Class provider, how bad can things be?
function Class(extend, definition) {
function Class() {}
// if we have more than an argument
if (definition != null) {
// it means that extend is the parent
Class.prototype = extend.prototype;
Class.prototype = new Class;
// while definition could be a function
if (typeof definition === "function") {
// and in this case we call it once
// and never again
definition = definition(
// sending the $parent prototype
extend.prototype
);
}
} else {
// otherwise extend is the prototype
// but it could have its own closure
// so it could be a function
// let's execute it
definition = typeof extend === "function" ? extend() : extend;
}
// enrich the prototype
for (var key in definition) {
Class.prototype[key] = definition[key];
}
// be sure about the constructor
Class.prototype.constructor = Class;
// and return the "Class"
return Class;
}
We have lost the "magic" method, so there is less code to maintain: good! The Class itself seems sleeker than before and easier to maintain: good!
What should our classes look like now?
// A is still the same
// we specify the super class as first argument
// only if necessary
var B = new Class(A, function ($parent) {
// this closure will be executed once
// and never again
// it will receive as argument and
// automatically, the super prototype
return {
a: function () {
console.log("B#a");
},
b: function () {
// Yeah Baby!
$parent.a.call(this);
$parent.b.call(this);
console.log("B#b");
}
};
});
// same is for this class
var C = new Class(B, function ($parent) {
// we could even use this space
// to define private methods, real ones
// those showed in the Overload Patterns
// sounds pretty cool to me
return {
a: function () {
$parent.a.call(this);
console.log("C#a");
},
b: function () {
$parent.b.call(this);
console.log("C#b");
},
c: function () {
$parent.c.call(this);
console.log("C#c");
}
};
});
Did we reach our aim? If we don't care about call and apply, we surely did!
With this refactored Class we are now able to create everything we need, without worrying about inline function calls.
If the prototype is an object, we can simply use the classic way:
var A = new Class({
a: function () {
console.log("A#a");
},
b: function () {
console.log("A#b");
},
c: function () {
console.log("A#c");
}
});
While if we need a closure, so as not to share anything outside the prototype, we can still have one!
var D = new Class(function () {
function _doStuff() {
this._stuff = "applied";
}
return {
applyStuff: function () {
_doStuff.call(this);
}
};
});
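For what it's worth, the same privacy does not even require the Class helper; a minimal standalone sketch of what the closure buys us:

```javascript
var D = (function () {
  function D() {}
  // truly private: unreachable from outside this closure
  function _doStuff() {
    this._stuff = "applied";
  }
  D.prototype.applyStuff = function () {
    _doStuff.call(this);
  };
  return D;
}());

var d = new D;
d.applyStuff();
console.log(d._stuff);          // "applied"
console.log(typeof d._doStuff); // "undefined"
```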
In a few words, we are now able to perform these operations:
new Class(prototype);
new Class(callback);
new Class(parent, prototype);
new Class(parent, callbackWithParent);
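As a smoke test, here is a self-contained sketch exercising all four signatures; the Class function from above is repeated (slightly compacted) so the snippet runs on its own, and the class names are illustrative:

```javascript
function Class(extend, definition) {
  function Class() {}
  if (definition != null) {
    // extend is the parent
    Class.prototype = extend.prototype;
    Class.prototype = new Class;
    if (typeof definition === "function") {
      // executed once, receiving the super prototype
      definition = definition(extend.prototype);
    }
  } else {
    definition = typeof extend === "function" ? extend() : extend;
  }
  for (var key in definition) {
    Class.prototype[key] = definition[key];
  }
  Class.prototype.constructor = Class;
  return Class;
}

// 1. new Class(prototype)
var A = new Class({ a: function () { return "A#a"; } });

// 2. new Class(callback)
var D = new Class(function () {
  return { d: function () { return "D#d"; } };
});

// 3. new Class(parent, prototype)
var B = new Class(A, { b: function () { return "B#b"; } });

// 4. new Class(parent, callbackWithParent)
var C = new Class(B, function ($parent) {
  return {
    a: function () { return $parent.a.call(this) + "+C#a"; }
  };
});

var c = new C;
console.log(c instanceof A); // true
console.log(c.a());          // "A#a+C#a"
console.log(c.b());          // "B#b"
console.log(new D().d());    // "D#d"
```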
I think I'm gonna change my base Class implementation with this stuff pretty soon :D
Pros
Performance-wise, this pattern is the fastest in the list. No runtime assignments, no wrappers, no lookup for the super: simply a local scope variable to access whenever and only if we need it, from public, "protected", eventually privileged, and private methods, the ones we can easily code in the function body. The only microscopic bottleneck compared to the native Hard Coded way is the Class function itself and nothing else, but classes are something we define once and never again during a live session; it is execution speed we care about!
Cons
call or apply ... but "dooode, please learn a bit more about JS, call and apply are essential for your work!".
Inline Override
This last pattern is all about the "do what you need when you need it" approach. In a few words, there are several cases where we need to change, maybe temporarily, one single method. This is the way to proceed:
// before ...
var c = new C();
// somewhere else ...
c.doStuff = (function (doStuff) {
return function () {
// some other operation
// back as it was before
this.doStuff = doStuff;
};
}(c.doStuff));
// before ...
var d = new D();
// somewhere else ...
d.doStuff = (function (doStuff) {
return function () {
// some operation with the overridden method
doStuff.call(this);
// something else to do
return this._stuff;
};
}(d.doStuff));
There are several possible combinations, but the concept is the same: we override a method inline because, for a single instance/object, there is only one method that does not suit our requirements properly.
Of course, the methods to override could be more than one, but if we are shadowing 4 methods or more, we should consider inheriting that constructor's prototype and simply creating different instances from the subclass.
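That alternative can be sketched like this, with hypothetical names: instead of shadowing several methods on a single instance, we derive once and create instances of the subclass (Object.create is used here for brevity in place of the chain helper):

```javascript
function C() {}
C.prototype.a = function () { return "C#a"; };
C.prototype.b = function () { return "C#b"; };

// too many per-instance overrides? subclass instead
function CustomC() {
  C.call(this);
}
CustomC.prototype = Object.create(C.prototype);
CustomC.prototype.constructor = CustomC;
CustomC.prototype.a = function () {
  // recycle the inherited logic, then specialize
  return C.prototype.a.call(this) + "+custom";
};

var c = new CustomC;
console.log(c.a());          // "C#a+custom"
console.log(c.b());          // "C#b"
console.log(c instanceof C); // true
```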
As Summary
Emulating override in JavaScript is not an easy topic to treat. As cited at the beginning, we'll never be able to obtain what we expect from classical OOP, since JavaScript is prototype-based. Somehow we can at least try to understand what kind of similarity with classical OOP we would like to reach, and which pattern is more suitable for our purpose.
This is what I have tried to explain in this post, in order to complement the other one about overloads.
Just last week I was complaining that there are no articles on good JavaScript patterns and all JS developers need to discover them by themselves, while developers of other languages have a good base to rely upon. Good start Andrea! We need more articles like this one! :)
Hi Andrea, some months ago I sent you my OO Emulation. You seem to be very busy, but I am still curious whether there is any bottleneck in it. Here is the implementation again:
var Class = (function(slice, noop){
function Class( _class ) {
function create() {
var args = slice.call(arguments, 0);
args.unshift(null);
var instance = instantiate( _class, args);
typeof instance.init == 'function' && instance.init();
return instance;
};
create.extend = function extend( subclass ) {
subclass.superclass = _class;
return Class(subclass);
};
return create;
};
function instantiate( _class, args ) {
var inst;
if ( _class.superclass ) {
// create an instance of super class
// don't use arguments.callee here, it's expensive
noop.prototype = instantiate(_class.superclass, args);
// clone the super object
inst = new noop;
};
args[0] = noop.prototype;
if (!inst) inst = {};
_class.apply(inst, args);
return inst;
};
return Class;
})(Array.prototype.slice, function(){});
A class using this will look like:
// class is a constructor, first parameter is parent/super instance
var Human = Class(function(_super){
var name;
this.init = function(){
};
this.setName = function( _name ) {
name = _name;
return this;
}
});
var Programmer = Human.extend(function(_super ){
this.setName = function(_name) {
// full access to super instance
return _super.setName.apply(this, arguments);
}
});
var programmerInstance = Programmer();
Nice! Using the closure in the definition is a great touch. I might have to use that one!
Hi Oleg, sorry, I keep forgetting it.
ReplyDeleteI have read your code and it seems to be ok except:
1. this may confuse since it is referenced to the class prototype during initialization and instances inside methods and/or private functions
2. this is not optimized by compilers/minifier so that code will result into a surely bigger one, compared with the Lazy Module Pattern I have described before (which you somehow emulated so godd stuff in any case) ;)
Really nice article.
ReplyDeleteWe're all seem to stuck at "Runtime parent method".. but it looks there's neater way. Great work!
@Andrea:
1. Yes, I am doing it because I don't want to go through the super instance in a loop and copy all properties manually. Using this.method, the child class automatically overwrites parent properties. And YES, I LOVE class-shared private methods and variables; at least this privacy we can have in JavaScript :).
2. I think the difference in size is so small, that it can be ignored :)
3. There is one more nice feature I forgot to describe:
var Human = Class(function(_super, param1, param2){
var name;
this.init = function(){
};
this.setName = function( _name ) {
name = _name;
return this;
}
});
var humanInstance = Human('param1', 'param2');
These are shared instance arguments.
I don't pass the arguments to the init method, as is usually done; I pass them to the closure. This way I don't have to write the whole stuff if I want to share all arguments across all methods.
I am trying to make my classes more testable using, e.g., dependency injection, so sometimes I have, e.g., 7 arguments. Normally you would declare 7 additional variables to share the arguments.
Thanks,
Oleg
Oleg, you did not seem to read the Lazy Module Pattern, which provides exactly what you have there.
About the non-optimized this, it won't make a difference in small projects; it definitely will in larger ones.
The nicer feature is useless: you don't need extra arguments, since you have a closure in the Lazy Module Pattern and you can put everything there (via var declarations).
The class will be created inline, so whatever you can pass as an argument could be cached inside the closure (a bit cleaner, since you don't have to scroll to the bottom to know whether there were more arguments there).
In a few words, there's no whole stuff to write and no benefit at all in using your ambiguous "this" reference against a lazy module pattern. This is my opinion :)
@Andrea: "The nicer feature is useless, you don't need extra arguments since you have a closure in Lazy Module Pattern and you can put everything there (via var declaration)."
Well, this difference is my main point - you have to declare the whole stuff, accessing variables outside of the closure. I want the possibility to pass the whole stuff as arguments at instantiation.
The second thing that you could improve is lazy class creation. You could call the definition closure at the moment of instance creation, so if somebody wants to load all classes in one request, without using all of them immediately, you could save some performance at that moment.
Although I prefer to load only the stuff I want to use immediately, so the second thing is not for me.
Andrea,
There's also a different kind of method overriding that comes up frequently, where we don't want to create a new subclass; we just want to "extend" a method in the current class:
// Our class
library.Person = library.Class(
{
doIt: function() { /* Whatever */ }
});
// In some other (perhaps optional) module
(function() {
var $doIt = library.Person.prototype.doIt;
library.Person.prototype.doIt = function()
{
// Do something new
// Call hidden method
$doIt.apply(this, arguments);
};
})();
I wonder if there is a more elegant solution to this problem?
Oleg, you did not even read the method.
There is nothing evaluated "after"; the closure is executed ONCE, during class definition.
You have no advantage with your method, which is exactly the same, with a confusing this around.
Test it, and you'll notice no differences.
Since you have to write the argument names anyway, you can simply write
var argname = whatever_you_passed_with_your_way;
got it? :)
@khs4473 I have described exactly something like that in the JavaScript Overload Patterns post ;)
Andrea, "closure is executed ONCE" - this is exactly what I mean. You don't need to execute the closure before the instance is created for the first time; after that you can cache the created class and use it for all subsequent instance creations. It is really a very small improvement ...
By the way, why are you using "new Class" every time you create a class? :) I thought you don't like "new". Your Class function doesn't even make use of the object you are creating with "new Class"
Well, it's not really overloading though. I'm not trying to execute different functions based on different inputs. Instead, I'm trying to extend, not the whole class, but just a particular method in the class.
Like this:
MyClass.extend("myMethod", function(base)
{
return function()
{
// Do something extra
base.call(this); // Calls previous "myMethod"
};
});
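For completeness, the `MyClass.extend` call above is shown only as usage; a minimal sketch of how such a helper could be implemented follows. The `addExtend` name and the `factory(base)` signature are assumptions for illustration, not an API from the post.

```javascript
// Hypothetical helper: wraps an existing prototype method,
// handing the previous implementation to a factory as "base".
function addExtend(Ctor) {
  Ctor.extend = function (name, factory) {
    var base = this.prototype[name];
    this.prototype[name] = factory(base);
    return this;
  };
  return Ctor;
}

// Usage, mirroring the comment above:
function Person() { this.log = []; }
Person.prototype.doIt = function () { this.log.push("base"); };
addExtend(Person);

Person.extend("doIt", function (base) {
  return function () {
    this.log.push("extra"); // do something extra
    base.call(this);        // call the previous "doIt"
  };
});

var extended = new Person();
extended.doIt(); // extended.log is ["extra", "base"]
```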
Oleg, again, the closure is executed ONCE to define the class, exactly as your code executes the closure ONCE to define the class.
I may not have got your point, but I keep telling you your version isn't different from a Lazy Module Pattern: in your case the class prototype is created via "new closure" with a confusing "this", while the Lazy Module Pattern creates the prototype via "closure()", passing the parent, which is shared in the Class prototype.
Since you are creating an instance via "new closure()", plus you are accessing the "magic" arguments, class creation will be faster via the Lazy Module Pattern, while performance when accessing the parent or private variables/methods will be the same.
"new Class" in my case makes sense since Classes will be "instances of Class" thanks to the operation shown first:
Class.prototype = Function.prototype
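That trick can be verified in a few lines. This is a stripped-down sketch: the empty `Class` body here is mine, not the real implementation being discussed.

```javascript
// Sketch only: Class returns a constructor function, and since
// Class.prototype is Function.prototype, every function (hence
// every returned constructor) passes the instanceof check.
function Class(definition) {
  function Ctor() {}
  // a real implementation would copy "definition" onto Ctor.prototype
  return Ctor;
}
Class.prototype = Function.prototype;

var A = new Class({});
// A is a plain function, yet "A instanceof Class" holds, because
// instanceof only looks for Class.prototype in A's prototype chain
```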
@khs4473 you are right, but that technique is similar to a Hard Coded Closure with a single method, the B.prototype.b shown in this post.
I am not sure I need to re-explain that we can put a single method in a closure, since I have shown this in both Overload and Override
@khs4473 actually I have to edit a couple of things in this post, so I will add your suggestion as "Inline Override" (this evening tho).
Cheers
Super article ;) But do you believe it's worth having $parent instead of CurrentClass.itsSuperProto... ?
Since there is no such thing as CurrentClass, just because this.constructor will be the same in the parent call unless you switch that property at runtime, I do believe my last example is the best one for performance, simplicity, and logic.
I am moving again these days, but I'll soon write a post with my latest Class version (and I will edit this article as promised before).
Stay tuned ;)
Andrea, I didn't write anything about this.constructor, which is really bad, of course. You should read CurrentClass like this:
var Employee = function() {};
Employee.prototype.getName = function() {
Employee._superClass...
}
Do you think that $parent is so much better than plain Employee._superClass? I think it's a micro-optimization and almost useless syntactic sugar.
That's just the explicit way, as shown in the Hard Coded examples. It's fine to me, but the property lookup could cost if widely used. Nothing that bad though; it has been my favorite way for ages.
$parent is sugar in the closure, and since there could be a function in my latest example, why should we not use its arguments as well to send something useful? :)
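The two styles under discussion can be put side by side. A hedged sketch follows: the `Person`/`Employee` names come from the comment above, everything else is illustrative (and `Object.create` is used here for brevity; an intermediate constructor would do the same job).

```javascript
function Person() {}
Person.prototype.getName = function () { return "Person"; };

function Employee() {}
Employee.prototype = Object.create(Person.prototype);

// explicit style: a hard-coded reference, looked up on each call
Employee._superClass = Person.prototype;

// "$parent as sugar": the parent prototype is trapped once in a
// closure, so no Employee._superClass lookup happens at call time
Employee.prototype.getName = (function ($parent) {
  return function () {
    return "Employee:" + $parent.getName.call(this);
  };
}(Person.prototype));

var emp = new Employee();
emp.getName(); // "Employee:Person"
```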
This is kinda related/kinda not, but...
Sometimes you want a function to wrap "new":
var MyClass = new Class(...);
function create_MyClass(a, b, c) { return new MyClass(a, b, c); };
What do you think about this implementation for automatically generating such a function?
function $create(fn)
{
for (var i = 0, list = []; i < fn.length; ++i)
list.push("$" + i);
list = list.join(",");
return new Function(list, "return new this(" + list + ");");
}
And in the class generator function you'd attach like so:
ctor.create = $create(ctor);
Do you see anything wrong with using new Function(...) in this way?
Happy move, BTW!
Err - that last one wasn't correct, because I generally want to be able to move the create function around:
var $ = ElementSet.create;
and using "this" messes that up. I think this might work, though:
function $create(fn)
{
for (var i = 0, list = []; i < fn.length; ++i)
list.push("$" + i);
var args = list.join(","),
body = "return function(" + args + "){return new f(" + args + ");};";
return (new Function("f", body))(fn);
}
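A quick sanity check of the corrected `$create` above (the helper is repeated so the snippet is self-contained; `Point` is an illustrative constructor). One caveat worth noting: arguments beyond `fn.length` are silently dropped, because the generated signature is fixed at creation time.

```javascript
function $create(fn) {
  for (var i = 0, list = []; i < fn.length; ++i)
    list.push("$" + i);
  var args = list.join(","),
      body = "return function(" + args + "){return new f(" + args + ");};";
  return (new Function("f", body))(fn);
}

function Point(x, y) { this.x = x; this.y = y; }

var createPoint = $create(Point); // callers never write "new"
var pt = createPoint(1, 2);
// pt instanceof Point; pt.x === 1, pt.y === 2
```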
nice post. thanks.
The biggest shortcoming of your last variant with the closure definition is that you can't have per-instance private variables. All private variables inside the definition closure will be shared between all instances, so changing such a variable through one instance will change its value for all the others. So everybody who uses it should be aware of that and not save any instance state there.
Oleg, good point, but it's exactly the same if you set an object, array, or any other non-primitive in the prototype: every instance will share it. I would not suggest avoiding it; I would simply suggest using it carefully, when needed ;-)
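Oleg's caveat is easy to demonstrate. A minimal sketch (the `Counter` name is mine) showing that a closure variable behaves exactly like a mutable object placed on the prototype:

```javascript
function Counter() {}
Counter.prototype = (function () {
  var count = 0;                 // shared by ALL instances
  return {
    constructor: Counter,
    inc: function () { return ++count; }
  };
}());

var c1 = new Counter(),
    c2 = new Counter();
c1.inc(); // 1
c2.inc(); // 2, because c2 sees c1's increment: no per-instance state
```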
ReplyDeleteHey Andrea,
As usual, I learned some new techniques when reading your articles! Keep them coming!
Oh, and who cares about emulating OOP or if junior programmers understand the concepts. :)
I've discovered and/or devised so many interesting functional design patterns over the past few weeks -- and have put them to good use in projects.
The OOP guys sometimes ask me to draw the design pattern in UML, but there is no way AFAIK to draw some of these patterns. There is no analog in OOP.
/*****/ Javascript patterns > OOP patterns /*****/
Regarding junior programmers, I just plan to provide reference code and point them at your blog! :)
Thanks again.
-- John
Hi, I want to customize the way I use my IE. I want to run a script that clicks some buttons every 15 minutes and checks whether there is any change in the display.