First of all, please let me quote one of my favorite sentences from Mr D.
I have been writing JavaScript for 8 years now, and I have never once found need to use an uber function. The super idea is fairly important in the classical pattern, but it appears to be unnecessary in the prototypal and functional patterns. I now see my early attempts to support the classical model in JavaScript as a mistake.
Douglas Crockford, on Classical Inheritance in JavaScript
Override
In classical OOP, override means that a subclass can declare a method already inherited from its superclass, making that method the only one directly callable on each instance of that subclass.
<?php
class A {
function itsAme() {
$this->me = 'WebReflection';
}
}
class B extends A {
function itsAme() {
// B instances can access only
// this method, which could access
// internally to the parent one
parent::itsAme();
echo $this->me;
}
}
$a = new A;
$a->itsAme(); // nothing happens
$b = new B;
$b->itsAme(); // WebReflection
?>
Override In JavaScript
Since there is no native extends, we could say that in JavaScript an override is anything able to shadow an inherited property or method.
var o = {}; // new Object;
o.toString(); // [object Object]
o.toString = function () {
return "override: " +
// call the parent method
Object.prototype.toString.call(this)
;
};
Moreover, while in classical OOP override usually applies to methods, and nothing else, in JavaScript we can even decide that at some point a method is not a method anymore:
// valid assignment
o.toString = "[object Object]";
// NOTE: toString is invoked
// every time we try to convert
// an object implicitly
// the code above will break if we
// try alert(o)
// but will work with alert(o.toString)
This introduction shows how free we are to change the rules, and reminds us that if we would like to emulate classical inheritance patterns, it's a good idea to deeply analyze whether it makes sense to change a method into a variable, or vice versa. Most of the time this is not what we want, so let's analyze just methods.
Why We Need Override
Especially in a non-compiled language such as JavaScript, and generally speaking as good practice for any developer, we would like to reuse code we have already written. If a super method provides basic configuration and we would like to add more, does it make sense to rewrite all the super method logic and procedures, plus the other parts we need?
Extends ABC
A must-know in JavaScript is how to inherit a constructor.prototype via another constructor, considering there is no native way to extend constructors. In JavaScript, (almost) everything is an object that inherits from other objects.
To create an inheritance chain, all we need is a simple function like this:
var chain = (function () {
// recycled empty callback
// used to avoid constructors execution
// while extending
function __proto__() {}
// chain function
return function ($prototype) {
// associate the object/prototype
// to the __proto__.prototype
__proto__.prototype = $prototype;
// and create a chain
return new __proto__;
};
}());
function A (){}
function B (){}
// create the chain
B.prototype = chain(A.prototype);
new B instanceof A; // true
new B instanceof B; // still true
With this in mind, we can already start to create basic subclasses.
Hard Coded Override
One of the simplest, most robust, fast, efficient, explicit, and clean patterns to implement overrides is the hard coded one.
// base class
function A() {
console.log("A");
}
// enrich native prototype
A.prototype.a = function () {
console.log("A#a");
};
A.prototype.b = function () {
console.log("A#b");
};
A.prototype.c = function () {
console.log("A#c");
};
// subclass, first level
function B() {
// use super constructor
A.call(this);
console.log("B");
}
// create the chain
B.prototype = chain(A.prototype);
// enrich the prototype
B.prototype.a = function () {
// override without recycling
console.log("B#a");
};
// more complex override
B.prototype.b = function () {
// requires two super methods
A.prototype.a.call(this);
A.prototype.b.call(this);
console.log("B#b");
};
// subclass, second level
function C() {
// call the super constructor
// which will automatically call
// its super as well
B.call(this);
console.log("C");
}
// chain the subclass
C.prototype = chain(B.prototype);
// enrich the prototype
// every override will
// recycle the super method
// we don't care what's up there
// we just recycle code and logic
C.prototype.a = function () {
B.prototype.a.call(this);
console.log("C#a");
};
C.prototype.b = function () {
B.prototype.b.call(this);
console.log("C#b");
};
C.prototype.c = function () {
B.prototype.c.call(this);
console.log("C#c");
};
The above example is a simple test case to better understand how the chain works.
To test all methods, all we need is to create an instance of the last class, and invoke the a, b, and c methods.
var test = new C;
console.log('-------------------------');
test.a();
console.log('-------------------------');
test.b();
console.log('-------------------------');
test.c();
That's it: if we use Firebug or any other browser console, we should read this result:
A
B
C
-------------------------
B#a
C#a
-------------------------
A#a
A#b
B#b
C#b
-------------------------
A#c
C#c
The first block shows how the B constructor automatically invokes the A one, so the order in the console will be "A", as the first code executed against the current instance of C, then "B", and finally "C".
If A, B, or C define properties during initialization, this inside those methods will always point to the instance of C, that is the "test" variable.
The other logs follow the logic and order inside the other methods.
Please note that while B.prototype.a does not invoke the super method, B.prototype.c does not exist at all, so that B.prototype.c, the one invoked by C.prototype.c, will be directly the inherited method: A.prototype.c.
More happens in the B.prototype.b method, where there are two different invocations: A.prototype.a first, and A.prototype.b after.
Pros
With this pattern, it's truly difficult to miss the method that caused trouble, if any. Since this pattern is explicit, what we read is exactly what is going on. Methods are shareable via mixins, if necessary, and performance is nearly the best possible for each instance method call.
Cons
In terms of bytes, this pattern could waste a lot of bandwidth. Compilers and minifiers won't be able to optimize or reduce much the way we access the method, e.g. A.prototype.a, plus we have to write a lot of code and we are not even close to the classical super/parent pattern.
Another side effect is surely the fact that many developers don't know or perfectly understand the difference between call and apply, or worse, the concept of injected context, so that this as first argument could cause confusion.
Finally, the constant lookup for the constructor, plus its prototype access, plus its method access, suggests that performance could somehow be improved.
Hard Coded Closure
Specifically designed to address the last con, we could think about something able to speed up each execution. In this case, the test is against the B.prototype.b method:
B.prototype.b = (function () {
// closure to cache a, and b, parent access
var $parent_a = A.prototype.a;
var $parent_b = A.prototype.b;
// the method that will be available
// as "b" for each instance
return function () {
// cool, it works!
$parent_a.call(this);
$parent_b.call(this);
console.log("B#b");
};
}());
We still have every other con here, and we had to write the boring A.prototype.methodName part twice, in just two consecutive lines.
Considering that property access, at least at the first level, is extremely fast in JavaScript, we could try to maintain performance as much as possible while reducing file size and time spent writing code:
B.prototype.b = (function () {
// cache just the parent
var $parent = A.prototype;
return function () {
// and use it!
$parent.a.call(this);
$parent.b.call(this);
console.log("B#b");
};
}());
The next natural step is an outer closure that persists for the whole prototype, so that every method can access the $parent at any time:
var B = (function () {
// cache once for the whole prototype
var $parent = A.prototype;
function B() {
$parent.constructor.call(this);
console.log("B");
}
// create the chain
B.prototype = chain($parent);
// enrich in this closure the prototype
B.prototype.a = function () {
console.log("B#a");
};
B.prototype.b = function () {
$parent.a.call(this);
$parent.b.call(this);
console.log("B#b");
};
return B;
}());
OK, now we have removed almost every single con from this pattern ... but somebody could still argue that call and apply may confuse junior developers.
Libraries And Frameworks Patterns
I do believe we all agree that frameworks are good when it is possible to do more complex stuff in less and cleaner code and, sometimes, with better logic.
As an example, the base class A could be simply defined like this:
var A = new Class({
a: function () {
console.log("A#a");
},
b: function () {
console.log("A#b");
},
c: function () {
console.log("A#c");
}
});
// please note to avoid a massive post
// I have intentionally skipped the
// constructor part for these cases
Isn't that beautiful? I think it is! Most of the frameworks or libraries we know implement a similar approach to define an emulated Class and its prototype. I have written a more complete Class example at the end of this post, but please don't rush there; be patient, thanks.
A Common Mistake
When we think about parent/super, we think about a way to access the "inherited stuff", not something attached to the instance and completely different from the classical OOP meaning, the one we are theoretically trying to emulate.
I have already provided a basic example of what I mean with the first piece of code, the one that shows how things are in PHP, and not only PHP (Java, C#, and many others).
The parent keyword should be an access point to the whole inherited stack, able to bring the current this reference there.
Unfortunately, 90% of the frameworks out there got it wrong, as I have partially described in one of my previous posts, entitled The JavaScript _super Bullshit. Feel free to skip that read, since I both made a mistake against MooTools, which is still affected by the problem I am going to talk about, and probably did not explain the limitations properly.
In any case, this post is about override patterns, so let's see what we could find somewhere in the cloud ;)
Bound $parent
As a first point, I have chosen the name $parent to avoid polluting method scopes with a variable that could be confused with the original window.parent, while I have not used super since it is both a reserved keyword and unfamiliar to PHP developers.
This pattern's aim is to make things as simple as possible: all we have to do is invoke, when and if necessary, the $parent(), directly from the current method.
To be able to test the Class emulator and this pattern, we need to define them. Please note the provided Class is incomplete and not suitable for any production environment, since it has been created simply to support this post and its tests.
function Class(definition) {
// the returned function
function Class() {}
// we would like to extend via
// the extend property, if present
if (definition.extend) {
// attach the prototype
Class.prototype = definition.extend.prototype;
// chain it, recycling the Class itself
Class.prototype = new Class;
// enrich the prototype with other definition properties
for (var key in definition) {
// but only if functions, since
// we would like to add the magic
if (typeof definition[key] === "function") {
Class.prototype[key] = callViaParent(
Class.prototype[key],
definition[key]
);
}
}
} else {
// nothing to extend
// just enrich the prototype
for (var key in definition) {
Class.prototype[key] = definition[key];
}
}
// be sure the constructor is this one
Class.prototype.constructor = Class;
// return the class
return Class;
}
// let's imagine new Class SHOULD BE an instanceof Class
// or remove the new when you create one
Class.prototype = Function.prototype;
// magic callback to add magic, yeah!
function callViaParent(parent, method) {
// create runtime the wrapping method
return function () {
// create runtime the bounded $parent
// note that ONLY $ is in the local scope
// $parent will be defined in the GLOBAL scope
var $ = $parent = function () {
return parent.apply($this, arguments);
};
// trapped reference for the $parent call
var $this = this;
// invoke the current method
// $parent will be the one defined
// few lines before
var result = method.apply(this, arguments);
// since the method could have another $parent call
// we want to be sure that after its execution
// the global $parent will be again the above one
$parent = $;
// return the result
return result;
};
}
We've got all the magic we need to define our classes, ready?
var A = new Class({
a: function () {
console.log("A#a");
},
b: function () {
console.log("A#b");
},
c: function () {
console.log("A#c");
}
});
var B = new Class({
extend: A,
a: function () {
console.log("B#a");
},
b: function () {
// oooops, we have only one
// entry point for the parent!
$parent();
console.log("B#b");
}
});
var C = new Class({
extend: B,
a: function () {
$parent();
console.log("C#a");
},
b: function () {
$parent();
console.log("C#b");
},
c: function () {
$parent();
console.log("C#c");
}
});
The B.prototype.b method cannot emulate what we tested before via the Hard Coded pattern. The $parent variable can obviously "host" only one method to call, A.prototype.b and nothing else.
Here we can already spot the first limitation of the magic we would like to bring into our daily code. Let's test it in any case:
var test = new C;
console.log('-------------------------');
test.a();
console.log('-------------------------');
test.b();
console.log('-------------------------');
test.c();
The result?
B#a
C#a
-------------------------
A#b
B#b
C#b
-------------------------
A#c
C#c
Cool: at least what we expected is exactly what happened!
Pros
This pattern is closer to the classical one and simpler to understand. The global reference is something we could ignore if we think how many characters we saved during class definitions.
Cons
Everything is wrong. The parent is a function, not a reference, and the real parent is bound on each method call. This is a performance killer, a constant global namespace pollution, and quite illogical, even if pretty.
Finally, we have only one method to call, and zero parent access, since each method shares a runtime-created global $parent variable, and it can never recycle any of them since the this reference could be different for every method invocation.
Slightly Better Alternatives
At least to avoid polluting the global scope with a runtime-changed $parent, some frameworks implement a different strategy: send the $parent as the first argument to each method that extends another one. Strategies to speed up this process differ, but one of the most efficient could be:
function callViaParent(parent, method) {
// create once, and sends every time
function $parent($this, args) {
return parent.apply($this, args);
}
return function () {
// put $parent as first argument
Array.prototype.unshift.call(arguments, $parent);
// invoke the method
return method.apply(this, arguments);
};
}
This could produce something like:
var B = new Class({
extend: A,
a: function ($parent) {
console.log("B#a");
},
b: function ($parent) {
$parent(this);
console.log("B#b");
}
});
Runtime Parent Method
What is the only object that travels around with this references? The current instance, isn't it? So what a wonderful place to attach at runtime the right parent for the right method!
This pattern seems to be the most adopted one; we still have inconsistencies against the classical OOP parent concept, but somehow it becomes more natural to read or write:
// A is the same we have already
var B = new Class({
extend: A,
a: function () {
console.log("B#a");
},
b: function () {
// here comes the magic
this.$parent();
console.log("B#b");
}
});
var C = new Class({
extend: B,
a: function () {
this.$parent();
console.log("C#a");
},
b: function () {
this.$parent();
console.log("C#b");
},
c: function () {
this.$parent();
console.log("C#c");
}
});
With a runtime attached/switched/twisted property at least we have solved the binding problem. In order to obtain above behavior, we should change just the "magic" callViaParent callback.
function callViaParent(parent, method) {
// cached parent
function $parent() {
return parent.apply(this, arguments);
}
return function () {
// runtime switch
this.$parent = $parent;
// result
var result = method.apply(this, arguments);
// restore this level's $parent, since
// the invoked (wrapped) method may
// have replaced it
this.$parent = $parent;
// return the result
return result;
};
}
Et voilà, if we test the C instance and its methods again, we still obtain the expected result via our new elegant, "semantic" way to call a parent:
B#a
C#a
-------------------------
A#b
B#b
C#b
-------------------------
A#c
C#c
Pros
The execution speed is surely better than a constantly bound reference. We don't need to adopt other strategies, and somehow this feels like the more natural way to go, at least in JS (still nonsense for Java and classical OOP guys).
Cons
Again, class prototype creation is slower due to all the wrapping we need for each method that extends another one. The meaning of parent is still different from classical OOP: we have a single access point to the inherited stack and nothing else.
This simply means that one more time we cannot emulate the initial behavior we were trying to simplify ... can we define this as more powerful? Surely cleaner, though.
Runtime Parent
The single access point to the parent stack is truly annoying, imho. This is why we could use analogous strategies to obtain full access.
To obtain this, we need again to change everything, starting from the "magic" method:
function callViaParent(parent, method) {
// still a wrap to trap arguments and reuse them
return function () {
// runtime parent
this.$parent = parent;
// result
var result = method.apply(this, arguments);
// restore the parent, since a nested
// (wrapped) call may have replaced it
this.$parent = parent;
// the result
return result;
};
}
We have already improved performance using a simple attachment, but this time parent will not be the method, but the SuperClass.prototype:
function Class(definition) {
function Class() {}
if (definition.extend) {
Class.prototype = definition.extend.prototype;
Class.prototype = new Class;
for (var key in definition) {
if (typeof definition[key] === "function") {
Class.prototype[key] = callViaParent(
// we pass the prototype, not the method
definition.extend.prototype,
definition[key]
);
}
}
} else {
for (var key in definition) {
Class.prototype[key] = definition[key];
}
}
Class.prototype.constructor = Class;
return Class;
}
With the above changes, our test code will look like this:
// A is still the same
var B = new Class({
extend: A,
a: function () {
console.log("B#a");
},
b: function () {
// HOORRAYYYY, Full Parent Access!!!
this.$parent.a.call(this);
this.$parent.b.call(this);
console.log("B#b");
}
});
var C = new Class({
extend: B,
a: function () {
this.$parent.a.call(this);
console.log("C#a");
},
b: function () {
this.$parent.b.call(this);
console.log("C#b");
},
c: function () {
this.$parent.c.call(this);
console.log("C#c");
}
});
We are back to normality; if we test the above code we'll obtain this result:
B#a
C#a
-------------------------
A#a
A#b
B#b
C#b
-------------------------
A#c
C#c
The multiple parent access in method "b" is finally back, which means that now we can emulate the original code.
We have introduced a regression, though! For performance reasons, and to avoid crazy steps in the middle of a simple method invocation, call and apply are back in the field!
Pros
Finally we have control over the parent, and even if attached, we can be closer to classical OOP. Performance is reasonably good: just one assignment as before, but more control.
Cons
We inevitably reintroduce call and apply, hoping our users/developers got the difference and understood them. We are still parsing methods and wrapping them, which means overhead for each Class creation and each extended method invocation.
Lazy Module Pattern
On and on with these patterns; somebody could have already spotted that we are back to the initial one:
// last described pattern:
this.$parent.b.call(this);
// hard coded closure
$parent.b.call(this);
The point now is to understand how heavy it could be to place a $parent variable inside the prototype scope, still using the new Class approach.
Wait a second ... if we simply try to merge the module pattern with our Class provider, how bad can things be?
function Class(extend, definition) {
function Class() {}
// if we have more than an argument
if (definition != null) {
// it means that extend is the parent
Class.prototype = extend.prototype;
Class.prototype = new Class;
// while definition could be a function
if (typeof definition === "function") {
// and in this case we call it once
// and never again
definition = definition(
// sending the $parent prototype
extend.prototype
);
}
} else {
// otherwise extend is the prototype
// but it could have its own closure
// so it could be a function
// let's execute it
definition = typeof extend === "function" ? extend() : extend;
}
// enrich the prototype
for (var key in definition) {
Class.prototype[key] = definition[key];
}
// be sure about the constructor
Class.prototype.constructor = Class;
// and return the "Class"
return Class;
}
We have lost the "magic" method, less code to maintain ... good! The Class itself seems sleeker than before, and easier to maintain: good!
How should our classes look like now?
// A is still the same
// we specify the super class as first argument
// only if necessary
var B = new Class(A, function ($parent) {
// this closure will be executed once
// and never again
// it will receive as argument and
// automatically, the super prototype
return {
a: function () {
console.log("B#a");
},
b: function () {
// Yeah Baby!
$parent.a.call(this);
$parent.b.call(this);
console.log("B#b");
}
};
});
// same is for this class
var C = new Class(B, function ($parent) {
// we could even use this space
// to define private methods, real ones
// those shown in the Overload Patterns
// sounds pretty cool to me
return {
a: function () {
$parent.a.call(this);
console.log("C#a");
},
b: function () {
$parent.b.call(this);
console.log("C#b");
},
c: function () {
$parent.c.call(this);
console.log("C#c");
}
};
});
Did we reach our aim? If we don't care about call and apply, surely we did!
With this refactored Class we are now able to create, without worrying about inline function calls, everything we need.
If the prototype is an object, we can simply use the classic way:
var A = new Class({
a: function () {
console.log("A#a");
},
b: function () {
console.log("A#b");
},
c: function () {
console.log("A#c");
}
});
While if we need a closure to avoid sharing anything outside the prototype, we can still do it!
var D = new Class(function () {
function _doStuff() {
this._stuff = "applied";
}
return {
applyStuff: function () {
_doStuff.call(this);
}
};
});
In a few words, we are now able to perform these operations:
new Class(prototype);
new Class(callback);
new Class(parent, prototype);
new Class(parent, callbackWithParent);
I think I'm gonna change my base Class implementation with this stuff pretty soon :D
Pros
Performance-wise, this pattern is the fastest one in the list. No runtime assignments, no wrappers, no lookup for the super: simply a local scope variable to access whenever, and only if, we need it, from public, "protected", eventually privileged, and private methods, those we can easily code in the function body. The only microscopic bottleneck compared to the native Hard Coded way is the Class itself and nothing else, but classes are something we define once and never again during a live session; we care about execution speed!
Cons
call or apply ... but "dooode, please learn a bit more about JS, call and apply are essentials for your work!".
Inline Override
This last pattern is all about the "do what you need when you need it" approach. In a few words, there are several cases where we need to change, maybe temporarily, one single method.
This is the way to proceed:
// before ...
var c = new C();
// somewhere else ...
c.doStuff = (function (doStuff) {
return function () {
// some other operation
// back as it was before
this.doStuff = doStuff;
};
}(c.doStuff));
// before ...
var d = new D();
// somewhere else ...
d.doStuff = (function (doStuff) {
return function () {
// some operation with the overridden method
doStuff.call(this);
// something else to do
return this._stuff;
};
}(d.doStuff));
There are several possible combinations, but the concept is the same: we override a method inline because, for a single instance/object, there is only one method that does not suit our requirements properly.
Of course there could be more than one method to override, but if we are shadowing 4 methods or more, we should rather inherit that constructor's prototype and simply create different instances from the subclass.
As Summary
Override emulation in JavaScript is not an easy topic to treat. As cited at the beginning, we'll never be able to obtain exactly what we expect from classical OOP, since JavaScript is prototype-based.
Somehow we can at least try to understand what kind of similarity with classical OOP we would like to reach, and which pattern is more suitable for our purpose.
This is what I have tried to explain in this post, in order to complete the other one about overloads.