My JavaScript book is out! Don't miss the opportunity to upgrade your beginner or average dev skills.
Showing posts with label JavaScript. Show all posts

Wednesday, September 05, 2012

A Meaningful Client Side Alternative To node require()

TL;DR

Here you have the solution to all your problems ... no? So take a break, and read through :)

Current Status

Using a generic "module loader" for JavaScript files is becoming a common technique, and full of wins, not only on the node.js server side. There are different client-side alternatives, not entirely based on the module concept, and apparently only one concrete proposal, called AMD, able to work on both server and client following a build/parsing procedure.
However, as somebody has pointed out already ...

AMD is Not the Answer

That post doesn't even cover everything that's wrong with AMD, and I am getting there, but it's surely a good starting point.
Tobie's article is another resource that somehow inspired what I am going to propose here, so before dismissing this or that technique, I hope you'll find time to understand the whole problem ... found it? Let's go on then :)

The Beauty Of node.js require Function

I don't think I should even spend many words here ...

// module: compress

// in the global scope, it's just a module
// so no pollution to the global/window object
// no need to put everything in yet another closure
var fs = require("fs");

// just as example
this.compress = function (file, fn) {
  fs
    .createReadStream(file)
    .pipe(
      // check this out!
      require("zlib").createGzip()
    )
    .pipe(
      fs.createWriteStream(file + ".gz")
    ).on("close", function () {
      var content = fs.readFileSync(file + ".gz");
      fs.unlinkSync(file + ".gz");
      fn(content);
    })
  ;
};

So basically, with a synchronous require(module) call we can decide whenever, and wherever, we want that a module should be included in our logic. This means we don't need to think in advance about all the dependencies our current module needs, and it doesn't force us to split our asynchronous logic into many nested functions.
Look at the example above ... in the node world, require("fs") is performed in basically every single module ... the FileSystem is that important, hell yeah!
We don't care much about requiring it at the very beginning of the silly module above 'cause surely that operation will cost nothing! Each module is cached indeed, so there's actually zero impact if we use a require hundreds of times inline ... really, it's that fast, although the very first call might cost a bit.
This is why require("zlib") is loaded only once, and only when it's needed ... so that memory, disk, CPU, and everything else can live in peace before the very first call of that exported method, while after that, the call will again cost nothing.

Is a mandatory nested function call through AMD logic as fast as the example above? I tell you, NO, but this is not the real problem, is it?

AMD Does Not Truly Scale

... as simple as that. If you need a build process behind something that could fail in any case, I would say you are doing it wrong. I am talking about the current era, where an internet connection may or may not be there ... I am talking about mobile, and I'll give you a very basic example.
I have created my freaking cool website or webapp that loads everything asynchronously, and I am on the road with my supa-dupa smartphone.
When "suddenly, le wild spot without internet coverage appears"!
Now I press that lovely button in the lovely application that asynchronously loads the lovely next part of my action and guess what happens ... are you there? Nothing!
That instant without my mobile network will screw up my flow, break my action, make me wait "forever" and, in the most common cases, prevent me from using the app properly.
Of course, if that action required server-side interaction, it wouldn't have worked in any case ... but how come I don't even see a notification? How come pieces of my application don't show up telling me that there is actually a problem, rather than letting me wait without any feedback, 'cause probably even the notification logic was created as an AMD module?

And what if the connection is simply bad? The whole application or website is responding decently, but that one part, oh gosh how much I was planning to use that app part, is taking ages to load! How would you feel in front of such a program?

AMD Resolves Lazy/Circular Load Synchronously

That's correct ... gotcha! Have you ever read what you should do in order to resolve modules that load asynchronously other modules inside themselves?

// Inside b.js:
define(["require", "a"],
  function (require, a) {
    // "a" in this case will be null if a also asked for b,
    // a circular dependency.
    return function (title) {
      // me: Excuse me ... WUT?
      return require("a").doSomething();
    }
  }
);

So here's the thing: if "by accident" you have a circular dependency, you should use require() as node.js does ... oh well ...

On Circular Dependencies / Cycles

This topic is one of the biggest paradoxes of programming ... we try to decouple everything, especially when it comes to writing modules, so that not a single part of the app should be aware of the surrounding environment ... isn't it? Then someone said that circular dependencies are bad ... but how come?
Here's an example, a truly stupid one:
Hi, I am a human, the human module, and I am completely self-sufficient, but I need to go to a place that would take too long on my own ... so I need the module car.

Hi, I am a car, the car module, and I am fabulous on my own, but I need the module human to be able to go somewhere.

A partner, better than a car, could also explain the fact that we actually think in circular references all the time ... am I wrong?
The AMD take here is that "we should change the require logic when this happens and we should be aware of it in advance in order to solve it" ... yeah, nice, so now in AMD we have two ways to require and return what we export ... and once again, in my humble opinion, this does not scale ... at all!

Double Effort, Double Code!

With AMD we don't only need to change our AMD code/style/logic when things are not known in advance, as shown before ... we also need to write module code that is not compatible with node.js, resulting in the kind of redundant code that I believe nobody truly wants to write more than once in a programmer's life.
Take an excellent library such as lodash, and check what it has to do in order to be compatible with AMD too ...

// expose Lo-Dash
// some AMD build optimizers, like r.js, check for specific condition patterns like the following:
if (typeof define == 'function' && typeof define.amd == 'object' && define.amd) {
  // Expose Lo-Dash to the global object even when an AMD loader is present in
  // case Lo-Dash was injected by a third-party script and not intended to be
  // loaded as a module. The global assignment can be reverted in the Lo-Dash
  // module via its `noConflict()` method.
  window._ = lodash;

  // define as an anonymous module so, through path mapping, it can be
  // referenced as the "underscore" module
  define(function() {
    return lodash;
  });
}
// check for `exports` after `define` in case a build optimizer adds an `exports` object
else if (freeExports) {
  // in Node.js or RingoJS v0.8.0+
  if (typeof module == 'object' && module && module.exports == freeExports) {
    (module.exports = lodash)._ = lodash;
  }
  // in Narwhal or RingoJS v0.7.0-
  else {
    freeExports._ = lodash;
  }
}
else {
  // in a browser or Rhino
  window._ = lodash;
}

Now ... ask yourself: is this what you want to write for every single module you're gonna create that might work on both server and client side?

Combined AMD Modules

Here's another possibility, super cool ... we can combine all modules together into a single file: YEAH! But if this is what you do for your project, don't you wonder what the point is of all those "asynchronous ready" callbacks if they will be executed synchronously in production? Was that different syntax truly needed? And what about JS engine parsing time? Is processing the whole project at once, in a single file, a good thing for both desktop and mobile?
Why are you developing asynchronously with all those nested callbacks if you ship a synchronous build? Is the code size affected? Does any of this make sense?

AMD, The Good Part... Maybe

OK, there must be some part of this logic that conquered many developers out there ... and I must admit AMD "solved with nonchalance" the fact that JavaScript, on the client side, has always had problems with global scope pollution.
The fact that AMD forces us to write the module inside a function that already receives the other needed modules as arguments is a win ... but wait a second, was it really so hard before that nobody, until now, wrote a single bloody closure to avoid global scope pollution?
I think AMD is a side effect, with all possible noble and good purposes, of a general misunderstanding of how JS code is shared across libraries.
Let's remember we never even thought about modules until we started clashing with each other's polluting namespaces, global variables, Object.prototype, and any sort of crap, each of us thinking we were the only script ever running in a web page ... isn't it?
So kudos for AMD, at least there is a function, but where the heck is the "use strict" directive that should be suggested for every single bloody AMD module in every example you can find in the documentation? Where is the global pollution problem solved, if developers are not educated or warned about the problem itself?

node.js require ain't gold neither

When network, roundtrips, and latency come into the game, the node.js require() solution does not fit, scale, or work either.
If you understand how the Loading from node_modules Folders logic works, and you have an extra look at the All Together diagram, you will realize that all those checks, performed over an HTTP connection, would never make sense on the client side.
Are we stuck then? Or is there some tiny little brick in the middle that is not used, common, public, or discussed yet?

A node require() Like, For Client Side

Eventually, here is where I was heading from the beginning: my require_client proposal ... gosh, it took long!
Rewind ... how about having the best of all the techniques described "here and there" in order to:
  • avoid big files parsing, everything is parsed once and on demand
  • provide an easy to use builder for production ready single file code
  • use one single syntax/model/style to define modules for both node or client side
  • solve cycles exactly as node does
  • forget 10 lines of checks to export a bloody module
  • organize namespaces in folders
  • obtain excellent performance
  • make a single file distributable
... and finally, compare the results against all the other techniques?
Here is the AMD loader, versus the inline and DOM script injection loader, versus the dev version of my proposal, and finally the production/compiled version of my proposal ... how about that? You can use any console, profiler, or dev tool you want to compare results; it's about 150Kb of scripts in total, most of them loaded ASAP, and one loaded on the "load" event.
You can measure jquery, underscore, backbonejs, and the ironically-added-last head.js script there, within their loading/parsing/ready time.

Reading Results

If you think that nowadays the DOMContentLoaded event is all you need to start up your web page/app faster, you are probably wrong. The DOMContentLoaded event means almost nothing for real UX: a user that has a ready DOM but can't use the page/app, because modules and logic are not loaded yet, or can't see it, because CSS and images have not been resolved yet, is simply a "user waiting" rather than a user interacting, and nothing else.
Accordingly, if you consider the code flow and the time at which everything is ready to be used, the compiled require() method is the best option you have: it's freaking fast!

"use strict" included

The best part I could think of is the "use strict"; directive automatically prepended, by default, to every module that is going to be parsed.
This is a huge advantage for client-side code because, while we are still able to create as many vars as we want in the module scope, the engine parser and compiler will instantly raise an error, with line and column, wherever we forgot a variable declaration. All the other safety measures are in place and working too but ... you know, maybe you don't want this?
That's why the require_client compiler makes the strict configuration property easy to spot, configure, and change ... as long as you know why you are doing that, everything is fine.
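To illustrate what the prepended directive buys you, here's a minimal, stand-alone example of the same Function-based trick; the forgotten `var` is invented for the demo:

```javascript
// prepend "use strict" to a module body before compiling it via Function,
// the same trick the compiler applies to every module source
var moduleSource = "leaked = 42;"; // a forgotten `var` declaration
var strictError = null;
try {
  // with the directive, assigning an undeclared variable throws
  Function("'use strict';" + moduleSource)();
} catch (e) {
  strictError = e; // ReferenceError: leaked is not defined
}
// without the directive, the same source would silently create
// a `leaked` global, polluting every other script in the page
```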

How Does It Work

The compiler includes a function, 360 bytes once minzipped, that when not optimized simply works through XHR.
This function could be the very only global function you need, since every module is evaluated sandboxed and with all the node.js module behaviors.
You can export a function itself, you can export a module, you can require inside a module; you can do 90% of what you could do in a node.js environment.
You don't need to take care of global variable definitions: those won't affect other modules.
What you should do is remember that this is the client, so the path you choose in the development version is the root, as any node_modules folder would be.
If you clone the repository, you can test via copy and paste the resulting build/require.js in whatever browser console, using paths such as require_client ~/folder_you_put_staff/require_client/js or require_client ~/folder_you_put_staff/require_client/cycles.
require("a") in the first case and require("main") in the second.
In order to obtain a similar portable function you should create a folder with all your scripts and point require_client at that folder, so that a script with all the inclusions will be created.
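In case it helps to picture the dev mode, here's a hedged sketch of what a synchronous XHR-based loader could look like; it is browser-only, and the names (`requireDev`, `basePath`, `modules`) are invented for illustration, not the actual require_client source:

```javascript
// dev-mode sketch: fetch the module source synchronously via XHR,
// then evaluate it sandboxed with node-style module/exports/require
// (browser only: XMLHttpRequest is not available in plain node.js)
var basePath = "js/"; // hypothetical root folder, like node_modules
var modules = {};

function requireDev(name) {
  // cache hit: evaluated once, never again
  if (modules.hasOwnProperty(name)) return modules[name].exports;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", basePath + name + ".js", false); // synchronous on purpose
  xhr.send(null);
  var module = (modules[name] = { exports: {} });
  // Function compilation keeps module-local vars out of the global scope
  Function("module", "exports", "require", xhr.responseText)(
    module, module.exports, requireDev
  );
  return module.exports;
}
```

The production build replaces the XHR fetch with an inlined map of module sources, which is why the compiled version needs no network at all.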

A Basic Example

So, here's what the require_client script is able to produce.
Let's imagine this is our project folder structure:

project/
  css/
  js/
  require.dev.js
  index.html

The index.html file can simply have a single script in its header that includes require.dev.js and bootstraps the main module through require("main");, as an example.
So, let's imagine we have the modules a, b, and main inside the js folder, OK?
require_client project/js project/require.js
This call will produce the require.js file, such as:

/*! (C) Andrea Giammarchi */
var require=function(c,d,e){function l(n,m){return m?n:g[n]}function b(o){var m=a[o]={},n={id:o,parent:c,filename:l(o,h),web:h};n[k]=m;d("global","module",k,(e.strict?"'use strict';":"")+l(o)).call(m,c,n,m);j.call(m=n[k],i)||(m[i]=h);return m}function f(m){return j.call(a,m)?a[m]:a[m]=b(m)}var k="exports",i="loaded",h=!0,a={},j=a.hasOwnProperty,g={
"a": "console.log(\"a starting\");exports.done=false;var b=require(\"b\");console.log(\"in a, b.done = \"+b.done);exports.done=true;console.log(\"a done\");",
"main": "console.log(\"main starting\");var a=require(\"a\");var b=require(\"b\");console.log(\"in main, a.done=\"+a.done+\", b.done=\"+b.done);",
"b": "console.log(\"b starting\");exports.done=false;var a=require(\"a\");console.log(\"in b, a.done = \"+a.done);exports.done=true;console.log(\"b done\");"
};f.config=e;f.main=c;return f}(this,Function,{
strict:true
});
Now the index.html could simply include require.js rather than the dev version ;)
The program above is the same one shown in the node.js API documentation about cycles. If you copy and paste this code into any console and then write require("main"); you'll see the expected result.
In summary, require_client is able to minify and inline all your scripts, creating module names based on the files and folders hierarchy.
All modules will be evaluated with a global object already available, as well as module and exports, the latter used as the module context.
The simple object-based cache system ensures these modules are evaluated once and never again.

What's YAGNI

A few things that could change are left out on purpose. As an example, module.parent is always the global object since, in fact, it's in the global scope, through Function compilation, that the module is parsed the very first time. I'm not sure we need a complicated mechanism to chain calls, and such a mechanism is error prone too.
If you have 2 scripts requiring the same module: first come, first served. The second one should not affect, at runtime, the already parsed module by suddenly changing its module.parent property ... you know what I mean?
Path resolution is a no-go: rather than trying to fix all possible messes with paths and OSes, put your files in a single JS folder and consider that one your virtual node_modules folder for clients.
If you have folder links inside the JS folder that's OK, but if you have recursive links you are screwed. Please keep it simple while I think about how to avoid such a problem within the require_client logic, thanks.

What Is Not There Yet

If your project is more than a megabyte minzipped, you might want to be able to split the code into different chunks, so that the second, last, injected require won't disturb the first one. This is not in place yet, since this free-time project was born for a small/medium web app I am working on that will be out pretty soon ... an app that surely does not need such an amount of code, as many other web apps should NOT ... but you know, I've been talking about scaling so much that a note about the fact that this solution won't scale so nicely with massive projects is a must-say.

Update: just landed a version that does not cause conflicts with require itself. The first defined require will be the one used everywhere, so it's now possible to name a project and include it in the main page ... parallel projects are now available ;)

If you would like to reuse node modules that work on the client side too, you need to copy them inside the path folder.
The configuration object is the one you can find at the end of the require.js file ... there are two defaults there, but you can always change them via require.config.path = "different";, and you can set require.config.strict = false; so that modules will be evaluated without the "use strict"; directive.
Anything else? Dunno ... you might come up with some hint, question, suggestion. And thanks for reading :), I know it was a long one!

Last Thoughts

If AMD and RequireJS need a compiler to make everything already available somehow, think how pointless that optimization becomes once you can have all dependencies already available without needing to write JS code in a different way, regardless of whether it's for node.js or normal client-side web code.
There are NOT really many excuses left to keep polluting the global scope with variables; we have so many alternatives today that keeping it up would be as evil as the worst technique you could embrace in the JS world.

Thursday, October 20, 2011

My BerlinJS Slides

It was a great event today at @co_up at the @berlinjs meet-up, and here are my slides about wru which, according to today's meeting, means where are you, straight out of SMS syntax.

Enjoy ;)

Saturday, April 09, 2011

ES5 NOW! ... or better, @falsyvalues

Update: more than one person asked me for more details, so here I am: the workshop will be on Thursday the 19th of May, on Track 3, from 9am to 5pm. Registrations open at 8am and, to be sure everything I have written is correct, please double check the schedule.

This time is not about my uncle, this time is about my workshop in Warsaw, during Falsy Values Event, and this is its description:

Massive rumours behind buzzwords such as HTML5 and ES5, the latest updated specification of the JavaScript programming language, have surely increased confusion about where JavaScript is today, and how this language will look in the future.

Unfortunately, we all know that many users are still trapped behind really outdated browsers and their relative JS engines.

This could lead us to be stuck with old coding patterns and styles, but here I am to show the most recent performance-oriented techniques that could make the transition to this new specification less painful and more efficient

  • Size matters: code size oriented techniques and advantages of a proper build process

  • Why Array extras, Object creation, and other new ES5 entries are not scary

  • Mobile and performance oriented applications: DO and DONTs

  • JS Harmony purpose and the future of JavaScript


The "It's Scripting" Logic

Too many times we convince ourselves that, since it's a scripting language, performance is not important. Unfortunately, or fortunately, I have already said we don't have a choice when it comes to the "web browser environment".
It's not that if we need speed, we change programming language; this is not an option for us ... we want to be fast, as fast as possible!
Everybody knows already that, even in JavaScript, a proper algorithm can be faster than a bad one written in C or ASM.
This rule can be readapted more generically: a better pattern is faster than a worse one.


ES5 Oriented Patterns

Especially on mobile and tablets, and recently on desktops as well, the latest version of JavaScript can bring many advantages in terms of performance, but that is not all: ES5 brings different and new approaches that we'd better consider now, rather than waiting for all browsers to "be there", and we'll see different examples of graceful degradation during this workshop.


Performances Speaking

Not all of us are lucky enough to use JavaScript on the server side only.
Even in that case, we will most likely deal with HTTP connections and we'll have to serve some content, possibly with JavaScript as well if it's not a RESTful-only service.
Performance on the web has many faces: from download size to lazy loading advantages, up to browser-specific builds and the best way to serve them. All these topics will be discussed during the workshop but hey ... if I have to be honest, there are many others there that could stimulate your interest ... I am just saying :P

See you in Warsaw ;)

Tuesday, March 29, 2011

Rewind: getters & setters for all IE with cross browser VBClass!

spoiler: if once again everybody knew this except me, you guys should do something to be better indexed in Google ... while if this is totally new and cool, well, you are welcome :)



Sometimes I am stubborn, so stubborn that, even though it was me writing this post, and this one after, I have never given up on IE < 9 getters and setters ... "there must be a way", I have thought during the last few days, and ... yes, eventually I have found the way!

Test Driven Developed Solution

I have decided on the desired behaviour, I have implemented all the use cases, and I have successfully validated them against all the browsers I could come up with, with the exception of a single test, the first one, which fails in those browsers unable to freeze an object ... well, these browsers are disappearing thanks to their suggested, or automatic, updates, so everything is fine.
Here is the unit test that should cover 100% of the used JavaScript and/or the used VBScript where available (only IE < 9, so please don't freak out yet!)

The Horrendous VBScript for IE

This wannabe web programming language has been there, and been hated, for ages ... but actually, if we learn all its gotchas we may end up thinking it is not that bad!
The main problem is to learn this "whatever it is" language and trust me: you don't wanna do that in this HTML5 era ... do you?!

Not Only Get & Let, There Is A "Set" As Well!

Where the good old dojo experimental Observable stopped, I didn't.
I kept investigating what the heck was going on behind the scenes, forcing my fabulous IE9 to digest such a Jurassic web programming language as VBScript.
What I discovered yesterday evening is that when we set a property on a VBScript "unknown" object, the Let definition is invoked only with primitives, where primitives are all those JavaScript variables whose typeof is not equal to "object", with the exception of the null value, or "function".
Once I discovered this, the door to a proper implementation able to behave the same in all browsers opened easily, and this is why I have introduced a new Class definition: the VBClass.

What The Hell Is VBClass

VBClass is a global function able to create global factories, and these are the rules for a VBClass definition object:

  1. the definition of a VBClass IS static, which means that once we have defined the structure of our instances, we cannot arbitrarily add, delete, or change the number of properties

  2. since VBScript does not accept properties that start with an underscore, the convention to define a property as "protected" must be different, for example using the underscore at the end, as Closure Library does for whatever reason

  3. if a definition property is a function, it will be considered an immutable method of each instance created via this factory

  4. if a definition property has a value, its reference can be changed at any time and with any kind of object, function included; in the latter case it will not be possible to attach methods at runtime, but it will surely be possible to invoke the function property via call or apply, specifying the current object



A Full Specs VBClass Example


VBClass("FullSpecs", {
  constructor: {
    value: function (arg) {
      // calls the method
      this.method(arg);
    }
  },
  method: {
    value: function (arg) {
      // invokes the setter
      this.getSet = arg;
    }
  },
  getSet: {
    get: function () {
      // returns the "protected"
      return this.getSet_;
    },
    set: function (value) {
      // assigns the "protected"
      this.getSet_ = value;
    }
  },
  getSet_: {
    // default "protected" value
    value: null
  }
});

var
  genericObject = {},
  test = new FullSpecs(genericObject)
;

test.getSet === genericObject; // true!


Pros And Cons Of VBClass

These are a few pros of the VBClass limits/behaviour described above:

  1. we have to think more about what we really need in our class, forgetting the completely dynamic JavaScript behaviour

  2. we are forced to follow a better convention for what we would like to call "protected properties"

  3. we can trust our defined methods, and we have to stick with them. This is a good/common approach if we consider a runtime constructor.prototype change a bad practice

  4. properties are properties, so there is nothing ambiguous about what is a method and what is a property: only a method can be invoked directly through the instance/variable/object, everything else is a property and there is no magic context injection there, not even for set functions


However, there are a few cons to consider about this technique: a unified behaviour through hosted VBScript objects means slower performance!
This means that VBClass-created classes cannot be used for everything, and there must be a valid reason to choose them over normal JavaScript functions and their prototypal nature.
A good reason could be, as an example, the creation of those public objects that would like to implement the coolness of a robust, cross-platform getters and setters implementation ... but be careful! If these objects are created hundreds of times during our application life-cycle, the performance impact could be massive, especially in older IE browsers.
Fortunately, with mobile browsers, IE 6/7 for Windows Phone 7 apart, which I have not tested yet, the ideal scenario should fall back to the Object.create implementation, the one used in the main file, the only one needed, hopefully, as soon as users update their browsers.

How To Use VBClass

Grab the source code or the minified version from my Google Code VBClass Project. Once all the VBClass files are in the same folder, you can simply include VBClass.loader.js at the top of your page and the rest of the magic is done.

<script
  type="text/javascript"
  src="http://vbclass.googlecode.com/svn/trunk/min/VBClass.loader.js"
></script>

Please copy VBClass locally to avoid round-trips and obtain better performance (also, the trunk version on Google Code is not gzipped).

Have fun with VBClass ;)

Sunday, December 26, 2010

100% Client Side Image Resizing

... I know, I have said "Happy Holidays" already, but yesterday, after an (annoying) picture upload to Facebook, I had an idea ... why on earth should I need a Java plugin to perform image resizes on Facebook? And why on earth, if I don't have such a plugin, do I have to wait for a possibly extremely long upload, up to 10x slower for high quality images, stressing Facebook servers, for such a "simple" operation as an image resize/resample?

The FileReader Interface

In the W3C File API, I guess part of the HTML5 buzzword, we can find all we need to perform the operation we want entirely on the client side. The interface is called FileReader, and it provides the functionality to read chosen files from an input node with type file, and that's it: we can even disconnect from the network and keep resizing and saving images without problems.

The Canvas Trick

Still in the HTML5 buzzword world, the canvas element and its 2d context drawImage method are the key to performing a resample/resize operation. It's not about changing a DOM Image node size and showing it; it's about creating a totally fresh new image with exactly the desired pixel size.
Once this is done, it is possible to send the base64 encoded image via Ajax, or to simply save the created image, or reuse it, or resize it again ...
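As a rough sketch of the trick, under the assumption of a browser environment (the `resizeImage` helper name is invented here, not the demo's actual API):

```javascript
// browser-only sketch: draw the source image scaled onto a fresh
// canvas, then export the result as a base64 encoded image
function resizeImage(img, width, height, callback) {
  var canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  // drawImage with a target size performs the actual resample
  canvas.getContext("2d").drawImage(img, 0, 0, width, height);
  // data URL ready to display, save, or POST via Ajax
  callback(canvas.toDataURL("image/png"));
}
```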

The Demo Page

This is the demo page I'm gonna show you soon; the code is hopefully self-explanatory:

<!doctype html>
<html>
<head>
<title>JavaScript Image Resample :: WebReflection</title>
</head>
<body>
<input id="width" type="text" value="320" />
<input id="height" type="text" />
<input id="file" type="file" />
<br /><span id="message"></span><br />
<div id="img"></div>
</body>
<script src="resample.js"></script>
<script>
(function (global, $width, $height, $file, $message, $img) {

// (C) WebReflection Mit Style License

// simple FileReader detection
if (!global.FileReader)
// no way to do what we are trying to do ...
return $message.innerHTML = "FileReader API not supported"
;

// async callback, received the
// base 64 encoded resampled image
function resampled(data) {
$message.innerHTML = "done";
($img.lastChild || $img.appendChild(new Image)
).src = data;
}

// async callback, fired when the image
// file has been loaded
function load(e) {
$message.innerHTML = "resampling ...";
// see resample.js
Resample(
this.result,
this._width || null,
this._height || null,
resampled
);

}

// async callback, fired if the operation
// is aborted ( for whatever reason )
function abort(e) {
$message.innerHTML = "operation aborted";
}

// async callback, fired
// if an error occur (i.e. security)
function error(e) {
$message.innerHTML = "Error: " + (this.result || e);
}

// listener for the input@file onchange
$file.addEventListener("change", function change() {
var
// retrieve the width in pixel
width = parseInt($width.value, 10),
// retrieve the height in pixels
height = parseInt($height.value, 10),
// temporary variable, different purposes
file
;
// no width and height specified
// or both are NaN
if (!width && !height) {
// reset the input simply swapping it
$file.parentNode.replaceChild(
file = $file.cloneNode(false),
$file
);
// remove the listener to avoid leaks, if any
$file.removeEventListener("change", change, false);
// reassign the $file DOM pointer
// with the new input text and
// add the change listener
($file = file).addEventListener("change", change, false);
// notify user there was something wrong
$message.innerHTML = "please specify width or height";
} else if(
// there is a files property
// and this has a length greater than 0
($file.files || []).length &&
// the first file in this list
// has an image type, hopefully
// compatible with canvas and drawImage
// not strictly filtered in this example
/^image\//.test((file = $file.files[0]).type)
) {
// reading action notification
$message.innerHTML = "reading ...";
// create a new object
file = new FileReader;
// assign directly events
// as example, Chrome does not
// inherit EventTarget yet
// so addEventListener won't
// work as expected
file.onload = load;
file.onabort = abort;
file.onerror = error;
// cheap and easy place to store
// desired width and/or height
file._width = width;
file._height = height;
// time to read as base 64 encoded
// data the selected image
file.readAsDataURL($file.files[0]);
// it will notify onload when finished
// An onprogress listener could be added
// as well, not in this demo tho (I am lazy)
} else if (file) {
// if file variable has been created
// during precedent checks, there is a file
// but the type is not the expected one
// wrong file type notification
$message.innerHTML = "please choose an image";
} else {
// no file selected ... or no files at all
// there is really nothing to do here ...
$message.innerHTML = "nothing to do";
}
}, false);
}(
// the global object
this,
// all required fields ...
document.getElementById("width"),
document.getElementById("height"),
document.getElementById("file"),
document.getElementById("message"),
document.getElementById("img")
));
</script>
</html>


The resample.js File



var Resample = (function (canvas) {

// (C) WebReflection Mit Style License

// Resample function, accepts an image
// as url, base64 string, or Image/HTMLImgElement
// optional width or height, and a callback
// to invoke on operation complete
function Resample(img, width, height, onresample) {
var
// check the image type
load = typeof img == "string",
// Image pointer
i = load || img
;
// if string, a new Image is needed
if (load) {
i = new Image;
// with propers callbacks
i.onload = onload;
i.onerror = onerror;
}
// easy/cheap way to store info
i._onresample = onresample;
i._width = width;
i._height = height;
// if string, we trust the onload event
// otherwise we call onload directly
// with the image as callback context
load ? (i.src = img) : onload.call(img);
}

// just in case something goes wrong
function onerror() {
throw new Error("not found: " + this.src);
}

// called when the Image is ready
function onload() {
var
// minifier friendly
img = this,
// the desired width, if any
width = img._width,
// the desired height, if any
height = img._height,
// the callback
onresample = img._onresample
;
// if width and height are both specified
// the resample uses these pixels
// if width is specified but not the height
// the resample respects proportions
// according to the original size
// same is if there is a height, but no width
width == null && (width = round(img.width * height / img.height));
height == null && (height = round(img.height * width / img.width));
// remove (hopefully) stored info
delete img._onresample;
delete img._width;
delete img._height;
// when we reassign a canvas size
// this clears automatically
// the size should be exactly the same
// of the final image
// so that toDataURL ctx method
// will return the whole canvas as png
// without empty spaces or lines
canvas.width = width;
canvas.height = height;
// drawImage has different overloads
// in this case we need the following one ...
context.drawImage(
// original image
img,
// starting x point
0,
// starting y point
0,
// image width
img.width,
// image height
img.height,
// destination x point
0,
// destination y point
0,
// destination width
width,
// destination height
height
);
// retrieve the canvas content as
// base64 encoded PNG image
// and pass the result to the callback
onresample(canvas.toDataURL("image/png"));
}

var
// point one, use every time ...
context = canvas.getContext("2d"),
// local scope shortcut
round = Math.round
;

return Resample;

}(
// lucky us we don't even need to append
// and render anything on the screen
// let's keep this DOM node in RAM
// for all resizes we want
this.document.createElement("canvas"))
);


The Resample Demo In Action

First input for the width, second input for the height; if only one of the two is defined, the resize maintains the aspect ratio.
You can even disconnect your machine from the network, since absolutely nothing is stored or saved on my website; everything simply runs on your machine.
Compatibility? Minefield and the latest Chrome work pretty well. I don't have my MacMini with me right now, but I will eventually test the WebKit nightly later.
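The aspect-ratio rule that onload() applies can be sketched in isolation. Note that fitSize is a hypothetical helper invented for this sketch, not part of resample.js; it only makes the proportional math explicit.

```javascript
// sketch of the aspect-ratio rule used by Resample:
// when only one dimension is provided, derive the other
// from the original image proportions
// (fitSize is a made-up name, not part of resample.js)
function fitSize(origWidth, origHeight, width, height) {
  var round = Math.round;
  width == null && (width = round(origWidth * height / origHeight));
  height == null && (height = round(origHeight * width / origWidth));
  return {width: width, height: height};
}

// an 800x600 image constrained to width 400 keeps its 4:3 ratio
fitSize(800, 600, 400, null); // {width: 400, height: 300}
```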
Happy end of 2010

Sunday, December 06, 2009

mjst - Micro JavaScript Template Engine

This is the content I've written so far in the official mjst repository. I'd love to know your thoughts about this little experiment, cheers!

What

mjst is a lightweight (about 1Kb minified and gzipped) and performance focused template engine with standards support. There are dozens of JavaScript template engine solutions out there, but as far as I know this is the only one based on validation for both client and server side programming languages.

How

There is one single function to call:

// via text (e.g. Ajax or innerHTML from a template node)
var transformed = mjst('<?js var hello = "mjst!"; ?><js-hello/>');

// classic template node example
<script id="mytpl" type="text/html">
  <?js
    var hello = "mjst!";
  ?>
  <js-hello/>
</script>
myEl.innerHTML = mjst(document.getElementById("mytpl"));

// via XML (e.g. Ajax or a created document)
// this is a configuration object as second argument example as well
myEl.innerHTML = mjst(
  new DOMParser().parseFromString(
    '<root><js-hello/><br class="${myBrClass}" /></root>',
    "text/xml"
  ), {
    hello: "Hi there!",
    myBrClass: "clear-both"
  }
);

// Ajax example with an XML template as response
/*
<root>
  <ul>
    <?js
      for(var i = 0; i < collection.length; ++i) {
    ?>
      <li class="li-${i}">Item No: <js-i/> Name: <?js print(collection[i]); ?></li>
    <?js
      }
    ?>
  </ul>
</root>
*/


// transformation ...
myEl.innerHTML = mjst(xhr.responseXML, {collection:["a", "b", "c"]});

It is possible to write whatever we want inside an mjst template block via the print(1, 2, 3, N, func(), whatever) function, the only one injected into the execution scope.
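The contract of such an injected print can be sketched as an output buffer that collects every argument and is joined once the block has run. This is only an illustration of the idea; createPrint and flush are made-up names, not mjst's actual internals.

```javascript
// illustrative sketch of an injected print() helper:
// every argument is appended to a private output buffer
// that the engine joins after the template block has run
// (createPrint/flush are made-up names, not mjst's source)
function createPrint() {
  var out = [];
  function print() {
    // accept any number of arguments, like print(1, 2, 3, N)
    out.push.apply(out, arguments);
  }
  print.flush = function () {
    return out.join("");
  };
  return print;
}

var print = createPrint();
print("Item No: ", 1, " Name: ", "a");
print.flush(); // "Item No: 1 Name: a"
```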

JavaScript Blocks VS Attributes

To make mjst templates XML compatible I decided to remove JavaScript blocks inside attributes. It is still possible to access JavaScript variables via ${myVar.propName} as shown, for example, in the br node class.

PHP And JavaScript Template Example

<?php // mjst example
function mjst($name, $value){
    echo '<?js var ', $name, '=', json_encode($value), ' ?>';
}
header('Content-Type: text/xml');
$a = array(
    'some database result' => array(1, 2, 3),
    'some other value' => 'Hi There'
);
?>
<?xml version="1.0"?>
<root>
    <?php mjst('result', $a); ?>
    <?js
    for(var k in result)
        print(result[k], "<br />")
    ;
    ?>
</root>

<script type="text/javascript" src="mjst.js"></script>
<script type="text/javascript">
onload = function(){
    with(this.XMLHttpRequest ?
        new XMLHttpRequest :
        new ActiveXObject("Microsoft.XMLHTTP")
    ){
        open("get", "mjst.php", true);
        onreadystatechange = function(){
            if(readyState == 4)
                document.body.innerHTML = mjst(responseXML);
        };
        send(null);
    }
};
</script>

Why Standards

Apparently every other JavaScript template engine implements a manual, char by char, or RegExp based parser over a syntax that the PHP world basically deprecated ages ago: the classic <%=stuff%> (<?=stuff?> in PHP). The first reason to avoid this kind of syntax is ambiguity: which programming language should consider that block of code? PHP, ASP, JavaScript ... who else? Moreover, to make things as clear and simple as possible, avoiding ambiguity, the PHP community decided to solve the classic <?xml?> problem, caused by the short code block, via an explicit <?php ?>, which is the suggested default one. This is also standards safe, in the sense that a nodeType 7 is defined as a PROCESSING_INSTRUCTION_NODE.

In few words, mjst is compatible with both valid XML and XHTML, which means that its templates can be created via the standard DOM API, being sure the template will be valid as well. At the same time, using official engines to validate and parse template blocks, mjst could be considered more reliable than other solutions, more robust, and at the same time fast, thanks to core functionality. The good part, at least for me, is that by delegating template parsing operations to the browser core, it is more probable that an error is inside a non valid template block rather than inside this tiny library source code: a welcome side effect of standards and core validation.

Why Fast

Every transformation could create one or two pre-compiled functions, reused every time we would like to transform an already parsed template block. The reason mjst creates up to two functions is that we can arbitrarily send a second argument as a configuration object. Since this procedure requires the usage of the with statement, avoiding outer scope name conflicts during the execution, and since we do not necessarily need to send this second configuration object, the runtime compiled function will consider both cases, only when encountered, making the with-free execution 3 to 10 times faster. In other words, if we always transform the same template block without the second argument, the function containing the with statement will never be created, and vice versa.
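The lazily built pair of compiled functions described above can be modeled like this. It is only a sketch of the idea (a with-free fast path plus a with-based one, each created on first use), not mjst's real compiler, and all names are made up.

```javascript
// model of the "up to two compiled functions" idea:
// the with-free variant is built only when no data object
// is passed, the with-based one only when it is; each is
// then cached and reused for later transformations
// (compileBlock is a made-up name, not mjst's source)
function compileBlock(body) {
  var plain, scoped;
  return function transform(data) {
    var out = [];
    function print() { out.push.apply(out, arguments); }
    if (data == null) {
      // fast path: no `with` statement at all
      (plain || (plain = Function("print", body)))(print);
    } else {
      // slower path: `with` shields the block from outer names
      (scoped || (scoped = Function("print", "data",
        "with(data){" + body + "}")))(print, data);
    }
    return out.join("");
  };
}

var t = compileBlock('print("Hello ", who, "!")');
t({who: "mjst"}); // "Hello mjst!"
```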

Why Robust

The XML core parser and transformer is another reason to consider mjst both fast and reliable. As an example, probably the most famous JavaScript micro templating, from John Resig, is surely lightweight, but it could suffer some problems if the template block is not perfect. For instance, try to use a string like s = "<%" for whatever reason, and you'll be able to break John's micro suggestion; but this is not only about John's code. Another well known product is the EJS library, adopted in many different environments and definitively a good piece of code. EJS offers something more than mjst, but it is still based on manual parsing over a syntax that XML does not like that much. This means that we could have a hard time generating EJS templates via the server or directly in our page, while mjst delegates template validity to the markup itself.
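The fragility mentioned above is easy to reproduce with a deliberately naive splitter. This is a strawman written for illustration, not John's or EJS's actual code:

```javascript
// strawman of a RegExp based <% %> parser: it splits on the
// delimiters with no notion of JavaScript string literals,
// so a "<%" inside a string is treated as a new code block
function naiveChunks(template) {
  return template.split(/<%|%>/);
}

// a correct parser would see 3 chunks: text, code, text ...
naiveChunks('a<% var s = "<%"; %>b');
// ... but the "<%" inside the string yields 4 broken chunks
```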

<?xml version="1.0" ?> 
<script id="mytpl" type="text/html">
  <!-- mjst accepts valid XML (as XHTML as well) or HTML5 -->
  <?js
    var hello = "mjst is here!";
  ?>
  <js-hello />
</script>

Why ... Generally Speaking

I am a PHP developer, before being a JavaScript one, and I have always complained about the open/close everywhere PHP style. This technique does not split presentation and business logic layers and it is generally "slower than a single echo". I started to appreciate the XML and XSL-T couple years ago, but XSL-T is sometimes just a pain in the ass, and via server it is difficult to cache XSL-T instances. As an example, the New York Times had to create their own core library to speed up transformations over common XSL files, allowing fast serialization. Brilliant, but this ad hoc library still does not scale to the whole World Wide Web scenario: different hosts, different PHP versions.

At the end of the day, what could happen if we delegate the transformation to the client side via core JavaScript features? Nothing bad. Actually, for almost a year I have been transforming complex layouts into HTML in the fastest way ever, even in IE6, and only thanks to XSL-T. But, as I have said, the XSL syntax may not sound familiar at all, especially for programmers who would like much more power while they generate a layout. This is basically the reason PHP had such great success over these years, so why not try to emulate XSL, bringing JavaScript there directly rather than XSL syntax, and in a PHP familiar style?

This is all this experiment is about. It delegates layout rendering to the client side, the best place imho to do this. It avoids useless server side stress, it can be embedded in the page thanks to the fake script node with an unknown type and, as a side effect, it could speed up client/server interactions, re-using template blocks whenever we need and requiring only data, which nowadays is just all we need to make Web users happy.
So, as a pure final evil plan, we could create a view able to produce mjst templates directly via CouchDB, resolving some greedy map/reduce operation in the db and moving partial logic to the client via simple row data sets, or generating a clear layout directly via Server Side JavaScript, so that all we need is a language rather than 3 different technologies ... does it sound insane? ;-)

Thursday, October 15, 2009

DOM Node Proxy

This is just a quick post from home sweet home.
A common DOM related problem is to create an association between a node and a generic object. The dirtiest, most memory-leak prone, and most obtrusive way to perform this task is this one:

document.body.obj = {
prop:"value",
otherProp:function(){}
};

Above snippet is a bad practice for different reasons.
  1. obtrusive, it's assuming that no other library will use the "obj" property name to perform an analogous task
  2. dirty, if we associate a primitive value, Internet Explorer will expose it in the node string representation
  3. memory leaks, if the object points to something "live", another node, or an HTMLCollection, the generic node will never be collected by the garbage collector


Alternatives

Especially to avoid the last problem, memory consumption, it's a good practice to store an index rather than an object. To make things less obtrusive and get rid of conflicts, we usually create a "unique id".

// the array with all objects
var stack = [];

// the unobtrusive property name
var expando = "prefix" + new Date().getTime();

// the object to relate
var o = {};

stack.push(o);

// the relation via index (last object)
document.body[expando] = stack.length - 1;

As I have already linked and explained, this technique is still dirty because Internet Explorer will show the unique id via outerHTML or generic node representation.

Strategies

jQuery, and many others, create an association for each manipulated DOM node. This could consume RAM without a reason, since there could be a lot of nodes with an associated object that will never be used.
The next version of jQuery, right now in alpha stage, understood this point and changed the object association logic. I have not read how yet, but I would like to write about something I've been using for a while, a sort of proxy object created for DOM node and object relations.

DOM Node Proxy

var proxy = (function(){
// another (C) WebReflection silly idea
var expando = "@".concat(+new Date, Math.random()),
stack = []
;
return function proxy(){
return stack[this[expando]] || stack[
this[expando] = new Number(stack.push({}) - 1)
];
};
})();

Above snippet uses almost all strategies I know to avoid obtrusive property, dirty layout, and direct object assignment (index strategy).
To better understand what exactly is above function I have commented each part of it:

var proxy = (function(){

// another (C) WebReflection silly idea

// one function to associate as proxy
// Being assignment a direct one
// standard browsers won't modify
// attributes while being proxy
// an object (not primitive value)
// IE won't expose it in node string
// representation (e.g. outerHTML)
function proxy(){

// a proxy call costs only once:
// the first time it's called
// Other calls will return the object
// This is to avoid objects association
// even if these are not necessary
return stack[this[expando]] || stack[

// the index is the last one in the
// private stack Array. To avoid
// leaks we don't associate directly
// an object but simply an integer.
// If we directly associate
// a primitive value, IE will expose
// it in the dom string representation
// (e.g. outerHTML)
// To avoid this we can just assign
// a Number instance, rather than
// a primitive "number"
this[expando] = new Number(

// push returns the new length
// we need last inserted object
// index to relate the object
stack.push({}) - 1
)
];
};

var
// private unique expando with
// an invalid char as prefix
// in order to make attr name
// easy to recognize in a possible
// IE attributes loop
expando = "@".concat(
+new Date,
Math.random()
),

// list of associated object
stack = []
;

// ready to go!
return proxy;

})();

Is it clear enough? This is a simple usage example (please read the NOTEs):

onload = function(){

// associate a proxy
// NOTE: this is still obtrusive
// the property name should be a unique id
// or it should have library prefix
// otherwise we could have conflicts
document.body.proxy = proxy;
// aka: node[expando] = proxy;

// retrieve the proxy object
var p = document.body.proxy();

// test proxy: true
alert(p === document.body.proxy());

// test clean body string representation
alert(document.documentElement.innerHTML);

// find proxy created property
for(var k in document.body){
if(k.charAt(0) === "@")
alert([k, document.body[k]])
;
}
};


As Summary

This is more a proof of concept, but I hope the showed code will help us replicate the behavior. The main missing part is the internal stack management: how can I clean the stack index when I don't need the node anymore? All we need is an extra in-proxy-scope function or a specific associated instance rather than a raw object.
In few words, there are no best strategies for this second problem; it just depends on what we need.
From a logical point of view, if we give indirect access to that stack, exposing its length or via functions able to modify it, stack safety could be compromised. What I could suggest is something like:
var proxy = (function(){
// (C) WebReflection - Mit Style License
function proxy(){
return stack[this[expando]] || stack[
this[expando] = new Number(stack.push(new $proxy) - 1)
];
};
function $proxy(){
this._index = stack.length;
};
$proxy.prototype.destroy = function destroy(){
delete stack[this._index];
};
var expando = "@".concat(+new Date, Math.random()),
stack = []
;
return proxy;
})();
where the stack is manipulated indirectly while nothing is publicly exposed.
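A usage sketch for the destroy-capable proxy above, repeating its definition so the snippet is self-contained; a plain object stands in for a DOM node so it runs outside the browser as well.

```javascript
// destroy-capable proxy, as suggested above
var proxy = (function(){
  function proxy(){
    return stack[this[expando]] || stack[
      this[expando] = new Number(stack.push(new $proxy) - 1)
    ];
  }
  function $proxy(){
    this._index = stack.length;
  }
  $proxy.prototype.destroy = function destroy(){
    delete stack[this._index];
  };
  var expando = "@".concat(+new Date, Math.random()),
      stack = [];
  return proxy;
})();

var node = {};          // a plain object pretending to be a DOM node
node.proxy = proxy;     // obtrusive name, demo only
var p = node.proxy();
p.data = "related info";
node.proxy() === p;     // true: same instance on every call
p.destroy();            // the stack slot is freed indirectly
```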

Monday, October 12, 2009

Named function expressions demystified III

Update For those interested about Internet Explorer scope resolution, I summarized everything in 5 slides.


This is hopefully the end of the Named function expressions demystified trilogy, where here you can find episode I, and episode II.
Juriy knows I am hard to convince, but apparently he is not better than me at all ...

Inglorious Correction

Sure, it's better than nothing, but after I have spent dunno how many tweets plus 2 posts, all I have obtained is a small correction in the whole article (and you have to scroll a bit to find it):

Generally, we can emulate function statements behavior from the previous example with this standards-compliant (and unfortunately, more verbose) code:

var foo;
if (true) {
foo = function foo(){ return 1; };
}
else {
foo = function foo() { return 2; };
};

// call the function, easy?
foo();


Above snippet is the best solution in the entire article but, probably to avoid my name in the article credits (and it does not matter, since I have already said it's not about the copyright), and surely to avoid personal ego conflicts, the suggested one is a surrogate of the above snippet, quite embarrassing from a developer point of view, isn't it?

var f; // create an alias, WHY!!!!!!!!!
if (true) {
f = function foo(){ return 1; };
}
else {
f = function foo() { return 2; };
};

// create a reference in order to remove a reference, WHY!!!!!!!!!!!
var foo = null;

// call the function via another alias, WHY!!!!!!!!!!!
f();

Above snippet is just a surrogate, because the first one simply creates an alias which will refer to the proper function. Let's simply be developers, avoiding obtuseness, OK?

Re Solution Pros

  1. it's standard, no excuses at all!
  2. variables on top: no way we can forget to nullify the function, and we don't need to create a reference whose aim is to remove a reference, because the reference is already assigned; no memory problems at all (possibly less, since there is nothing referencing a null value)
  3. semantic, we can easily refer to the function, since the whole point is to solve the missing arguments.callee plus the IE inconsistency ...
  4. standard again, because it perfectly emulates ECMAScript 3rd Edition behavior even in Internet Explorer
  5. the day we will need to nullify the function will be the day we meant it, and not a surpassed convention


Why My Re Solution Is More Standard

This is the expected behavior in all browsers, except Internet Explorer and Opera, via Internet Explorer emulation:

var f; // let's use the suggested alias
// for demonstration purpose
if (true) {
f = function foo(){
// this is LOGICAL
// but in IE it will be false
// because foo will be the
// the other one, even if that
// else will never be executed!
alert(foo == arguments.callee)
};
}
else {
// IE will declare this function
// in any case since there is NO DIFFERENCE
// between expression and declaration
f = function foo() { return 2; };
};

f();

With my Re Solution the behavior is the expected one, alert(foo == arguments.callee) will be true in every browser ... do you still have doubts?

Re Solution Cons

  1. unfortunately, more verbose
This must be a joke ... more verbose? First of all, we are dealing with developers that don't care at all about verbosity. Kangax, as everybody else in the credits, has always preferred verbosity, since a minifier or compressor can take care of it, recreating (and I love the irony of this part) exactly the suggested case.

Re Solution Pros II

  1. being the function name declared on top of the function, as I have said a well known good practice, every IDE will automatically suggest that name as soon as we start to type it: does verbosity matter?
  2. As we all know, gzip and deflate compress repeated words more efficiently; as a result, the Re Solution is even smaller if we don't munge it: does verbosity matter?


Re Solution Is More Logical And Smaller

Here is a simple test everybody can do. Two pages, same code, except the first one is the Re Solution, 99 bytes, while the second one is Juriy's suggestion, 111 bytes.

<?php ob_start('ob_gzhandler'); ?>
var foo;
if (true) {
foo = function foo(){ return 1; };
}
else {
foo = function foo() { return 2; };
};


<?php ob_start('ob_gzhandler'); ?>
var f;
if (true) {
f = function foo(){ return 1; };
}
else {
f = function foo() { return 2; };
};
var foo = null;

Do we have a single valid reason to use Juriy's suggestion over mine? I would honestly feel like an idiot preferring the second one: few bytes and more logic (foo is the function foo) against spread variable declarations (on top and in the middle, with foo = null) via aliases rather than the function name inside the function itself, where a debugger will show the name but we have to remember the alias, plus the possibility that we forget to nullify the reference, consuming more memory ... I mean, this is not the first of April, is it?

And That's Not All Folks

We are developers, not monkeys. I have always disliked generic affirmations a la "eval is evil": eval is the most used function since the Ajax epoch, obviously included in json2.js itself, since it is natural and logical to use it when necessary. Juriy corrected his article to underline how unprofessional I am, suggesting a last option for IE behavior ... well, probably he has never thought about a ternary assignment, has he?
"// Never do this!" is what I can read at some point, but we should think carefully before such statements.
Here is an example where the function will be a named one, but there are no differences in this case for IE; indeed the last option is the IE one.

// somewhere in a closure ...
// (otherwise add will be public in IE,
// but I hope we went further than this
// at this point and after 3 posts
// plus an entire article ...)
var event = {
add:document.addEventListener?
function add(){
alert([add, document.addEventListener]);
}:
function add(){
alert([add, document.attachEvent]);
}
,
del:document.removeEventListener?
function del(){
alert([del, document.removeEventListener]);
}:
function del(){
alert([del, document.detachEvent]);
}
};

In one shot we have created a unique add reference, a named function, plus the right one for the event object. Now guess what happens if we invert the order, putting the IE version at the end ... add will be the last option ... the one with addEventListener, got the point?

As Summary

I am pretty sure I'll be criticized again, and only for the last part of this post, where obviously things work but somebody will argue about future IE9 sci-fi behavior or stuff like that ... well, that day my library will be deprecated in any case and, if needed, it's easy to implement over my Re Solution:

var event = {
add:(function(add){document.addEventListener?
(add = function add(){
alert([add, document.addEventListener]);
}):
(add = function add(){
alert([add, document.attachEvent]);
}); return add
})()
};

Now let's see if the most interesting analysis about function expressions and declarations will keep ending up with the wrong suggestion rather than mine ... kinda curious, still hopeful though.

Saturday, October 10, 2009

Named function expressions demystified

Update If after this reading things are still the same, please read the part 2 of this post, thanks.



This is a re-post, and a few considerations, about the good Juriy article, which I suggest to every JavaScript developer with a deeper knowledge than just an API (jQuery or others).

Github For Everything!

My first consideration is about github, something I've never used that much since via Google Code I feel pretty comfortable with subversion. I find truly interesting the way Juriy is tracking his documentation, I've never thought about an article, as my old JavaScript Prototypal Inheritance could be, in a code repository as kangax did: good stuff!

My Alternative Solution

There are a few extra considerations to make over Juriy's explanation, plus minor inconsistencies. The first thing is that Internet Explorer basically manages function expressions and function declarations in the same way; there is no such VS in the middle.
The fact that we assign the function to a whatever-named variable does not make any difference:

alert(F); // (function F(f){/*IE Function*/return f})

F((function F(f){/*Function Declaration*/return f}));
F((function F(f){/*IE Function*/return f}));

var f = (function F(){return F}); // <== which F?
function F(){/*IE Function*/}; // this one!

alert(f()); // function F(){/*IE Function*/};
We can play with the above IE traps for ages, but the point is simple: the last defined function with that name will simply take the place of the other one, if any, in the same scope, at the end of its resolution and before code execution.

What IE does is basically a top-down parsing over functions, without caring at all about code execution or the normal and expected syntax execution flow.
This simply means that return whatever, var whatever = function whatever(){} ... does not change anything for our loved IE: what we see or what we expect is not what will be, otherwise IE would have been a perfect browser.
To better understand what I am talking about, this example should speak for me:

var f = 123;
alert(F); // last function
(function(){
alert(F); // (function F(){alert(f);return arguments.callee})
var f = (function F(){alert(f);return arguments.callee})();
// undefined
})();
function F(){};

After these two snippets it is quite logical to understand this behavior:
  1. scope resolution
  2. function resolution, plus optional inline assignment, if any, before the next function resolution
  3. code interpretation/execution over scope resolution

According to this Richard Cornford post, in JScript every function is performed sending the execution context, a behavior somehow similar to the injected context via eval in Firefox a few months ago.
The latter has been considered a security hole ... the same kind of hole we need to deal with in IE on a daily basis, every time we would simply like to choose a name for a bloody function.

My Solution

Juriy perfectly knows my point of view about this problem; he knows it so well that his addEvent solution example is created ad hoc to make mine inefficient (and I'll tell you later why).
It does not matter, as long as I can use the same example, avoiding IE4 support ...

var addEvent = (function(){
var docEl = document.documentElement;
if (docEl.addEventListener) {
/* return */ function addEvent(element, eventName, callback) {
element.addEventListener(eventName, callback, false);
};
} else {
// IE has to be the last option
// other addEvent will be "lost"
// cause this will be the only named reference
// in this scope ...
function addEvent(element, eventName, callback) {
element.attachEvent('on' + eventName, callback);
}
};
return addEvent;
})();

The key is simple, and it is based on exactly the same considerations and suggestions Juriy gives in the Alternative solution paragraph, except my suggestion uses an explicit last-option callback, without requiring a var this or that, plus var this or that = null, in any place and for this purpose.

Indeed, for the same reason Juriy did not perform this task at the end of his latest suggestion and before the returned function:

if (typeof docEl.addEventListener != 'undefined') {
attachEvent = addEventAsProperty = null;
return addEventListener;
}
else if (typeof docEl.attachEvent != 'undefined') {
addEventListener = addEventAsProperty = null;
return attachEvent;
}
addEventListener = attachEvent = null;
return addEventAsProperty;


totally boring and absolutely an anti pattern on a daily basis and in real case scenarios; with my suggestion there is absolutely no need to remember to nullify variables used only to refer to inline assignments and, moreover, only as an Internet Explorer workaround!!! That's too much. I mean, we have to change our code, change JS logic, to support such a drama JS engine, and we even need to write more and nullify everything? No way!

But It Could Be Inefficient

The only real side effect of my suggestion, surprisingly working in Safari 2 as well and solving its problems with names, is that if we have to deal with two different versions of IE in the same scope, we cannot use the "last definition" trick, 'cause one out of 2, 3, or 234567 versions of the most problematic browser since the Web epoch (still Internet Explorer, JavaScript speaking) will mess up like a charm ... to be honest, the IE4 days, the ones without try/catch and much more support, are far away from 2009, and I've never had to deal with such a problem, but in this case there is nothing better, so far, than Juriy's proposal.

Update
In my addEvent example Opera, thanks to its duality, will behave IE-like. This is not a problem, since addEvent will work in any case, but we can return early if we would like to force Opera to use addEventListener (avoiding Safari 2 then). To solve this problem, when necessary, we can use the best from both proposals.

Re Solution


var addEvent = (function(){
var docEl = document.documentElement,
addEvent;
if (docEl.addEventListener) {
addEvent = function addEvent(element, eventName, callback) {
element.addEventListener(eventName, callback, false);
}
}
else {
addEvent = function addEvent(element, eventName, callback) {
element.attachEvent('on' + eventName, callback);
}
}
return addEvent;
})();
Quite semantic, isn't it? ;)
The principle is still the same one suggested by Juriy, except there is nothing to nullify, just a reference to return; and being based on variable assignment, we can have 2345 IE versions in the if/else without problems at all.

As Summary

In Internet Explorer there is no difference between function expression and function declaration, this is the whole point of this post, plus the updated suggestion which makes things a bit more logical, from a developer point of view - addEvent is a well defined reference, and that's what we need.
The var F = null; in Juriy suggestion is useless for IE. References comes after, in the interpretation flow, while references come before in the execution one. var F = null; will nullify a reference, it won't mark anything for the IE garbage collector.
Juriy's article is in any case a must read as soon as we understand JavaScript scope and lambdas, and especially if we would like to support multiple browsers.
Hopefully these gotchas will disappear with ES5 and the next "dreamed" IE9 with its totally new fabulous V8 engine (OK, OK ... let me dream, please ...)
The day IE disappears from the browser panorama I'll be the happiest and drunkest web developer in the area, but until that day the panorama is still weird. At least it is well explained, and with all the possible solutions, we have choices!

Sunday, July 05, 2009

YUI Compressor Meets The Batch Scripting

Mainly for work, but also to investigate some good old DOS features, I created a batch file that makes YUI Compressor more portable and easy for every Windows based client/server developer.
I wrote the story and described each file, plus I added a workable Zip with everything you need to understand what I created and how to use it, starting with a double click on a projects/vice-versa.bat file, whose aim is to create a single YUI-compressed file from a list of files.
Here is the link; the rest is up to you (comments, questions, suggestions, etc.)

Wednesday, April 29, 2009

Drip under control via another IE memory leak workaround ...

Apparently, using this strategy I can obtain a flat line in the Drip monitor:

// function used to remove a node, every attached
// attribute, and every nested node via the same procedure
var destroy = (function (destroy) {
    // WebReflection IE leaks attempt!
    function $destroy(node) {
        while (node.lastChild)
            destroy(node.lastChild);
        if (node.parentNode)
            node.parentNode.removeChild(node);
    }
    return destroy = destroy.clearAttributes ?
        function (node) {
            if (node.clearAttributes) // Hedger suggestion
                node.clearAttributes();
            $destroy(node);
        } :
        $destroy
    ;
})(document.createElement("script"));


// used to remove everything
function destroyAll() {
    destroy(document.documentElement);
}

// used to avoid leaks when the page is refreshed
// or the url is changed
if (this.attachEvent)
    attachEvent("onunload", destroyAll);


Reasonable performance and, apparently, a reliable solution, tested via this code:

attachEvent("onload", function () {
    detachEvent("onload", arguments.callee);
    var a = [];
    for (var i = 0; i < 1000; i++) {
        a.push(document.body.appendChild(document.createElement("div")));
        var node = a[i];
        node.innerText = i;
        node.obj = a[i];
        node.attachEvent("onmouseover", function () {
            node.obj.other1 = node;
        });
        node.onclick = function () {
            this.obj.other2 = node;
        };
    }
});

The procedure is based on assumptions I made in this post about div[expando] and div.removeAttribute(expando)
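To see the bottom-up removal in isolation, here is a self-contained sketch with minimal stand-in nodes; FakeNode is an illustrative stub, not a DOM API, and the recursion is the same as $destroy above minus the IE-only clearAttributes branch:

```javascript
// minimal stand-in for a DOM node, enough to exercise the recursion
function FakeNode(name) {
    this.name = name;
    this.childNodes = [];
    this.parentNode = null;
}
FakeNode.prototype = {
    constructor: FakeNode,
    appendChild: function (child) {
        child.parentNode = this;
        this.childNodes.push(child);
        return child;
    },
    removeChild: function (child) {
        this.childNodes.splice(this.childNodes.indexOf(child), 1);
        child.parentNode = null;
        return child;
    },
    get lastChild() {
        return this.childNodes[this.childNodes.length - 1] || null;
    }
};

// same bottom-up removal as $destroy: leaves first, then the node itself
function destroy(node) {
    while (node.lastChild)
        destroy(node.lastChild);
    if (node.parentNode)
        node.parentNode.removeChild(node);
}

var root = new FakeNode("root");
var child = root.appendChild(new FakeNode("child"));
child.appendChild(new FakeNode("grandchild"));
destroy(child);
// root is now empty: every descendant was detached leaves-first
```

Detaching leaves before their parents is the point: no removed subtree ever keeps a reference chain alive back into the document.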

Tuesday, November 18, 2008

Ext JS - How to hack the JsonReader

I have a new job ( hooooray?! ) and I suggested the Ext JS framework as the web UI, to focus more on Ajax and XML + XSLT data interactions rather than on problems with CSS, event delegation, etc ... and I guess I am doing well, so well that here I am with a simple tiny trick to hack an Ext.data.JsonReader instance, especially its root and totalProperty params:

// directly from Ext JS 2.2 API site
// http://extjs.com/deploy/dev/docs/

new Ext.data.JsonReader({
    totalProperty: "results", // The property which contains the total dataset size (optional)
    root: "rows",             // The property which contains an Array of row objects
    id: "id"                  // The property within each row object that provides an ID for the record (optional)
})

Especially for the paginator toolbar, the JsonReader is a must to surf through a big amount of data without stressing either the server or the client side too much.

One nice feature, or one clever way to make the root node customizable, is the usage of code evaluated via a new Function call.

If the root property contains a dot, that property is retrieved via nested object properties.


...
root:"items[0].myList",
...

Thanks to this feature, it is possible to pre-parse and pre-generate the list that will be assigned as the root Array, the one used inside the Grid, the DataView, or whatever Component is managing your interactions.

The trick to pass the returned object to an arbitrary function is this:

...
root:"toString.length||callback(obj)",
...

The callback is supposed to be a valid function in the global scope that returns a filtered list of objects compatible with the column model or the data manager we chose.

The trick is based on their regexp, which simply checks for a dot or a square bracket "[" in the passed string.
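To make the mechanism concrete, here is a hypothetical sketch; the name callback and the response shape (items, active) are assumptions for illustration, not part of Ext JS. The reader roughly compiles the root string into a function body, so a falsy expression before the || hands the whole response object to our global function:

```javascript
// hypothetical global callback: filters the raw response into the
// Array that the JsonReader will treat as its root
function callback(obj) {
    var rows = [], i;
    for (i = 0; i < obj.items.length; i++) {
        if (obj.items[i].active)
            rows.push(obj.items[i]);
    }
    return rows;
}

// roughly what the reader builds out of root:"toString.length||callback(obj)"
// (obj.toString.length is 0, hence falsy, so callback(obj) wins)
var getRoot = new Function("obj", "return obj.toString.length||callback(obj)");

// callback({items: [{id: 1, active: true}, {id: 2, active: false}]})
// keeps only the active row
```

Any always-falsy prefix would do; toString.length just happens to contain the dot their regexp looks for, which is what switches the reader into evaluated-expression mode.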

That's it, let me discover better tricky stuff in the source and I'll post them :D

Tuesday, August 05, 2008

Image Protector, a nice waste of time

I've just read about another pointless attempt to prevent the classic Save Image As from a website.

Guys, we are in 2008, and I hope everybody knows what the simple Print Screen button can do when we are watching whatever is on our screen.

Nonetheless, every few months somebody "creates" the ultimate version of an Image Protector, usually based on JavaScript, and therefore pointless from the start, because JS can easily be disabled.

Moreover, this time the trickless trick even requires an excellent library such as MooTools.

Well, since I hate disinformation about techniques that pretend to make data truly safe, I can tell you that you need 159 characters to remove the protection, javascript: protocol included.

The funny thing is that, by basing the same anti-protector on MooTools - since this library is required by the amazing protector anyway - you need even fewer characters to do the same, using a $$("img") call instead of document.getElementsByTagName.

Being sure that this technique will probably be adopted by people who do not know a single thing about the web or security, and who do not know how to implement a basic watermark, I suppose somebody will implement the same trickless trick, so I prefer to show you the extended link that, if saved in your bookmarks, will be able to remove the protection whenever you need, in one click.

This is the code:

javascript:(function(b,r,l){l=b.length;while(0<l--)r.test(b[l].src)&&b[l].parentNode.removeChild(b[l--]);})(document.getElementsByTagName("img"),/blank\.gif$/);


And this is the link:
noMooreProtection

Save it as a bookmark (or drag it there), try the example page, one click on the "bookmarked magic guru crack", and sweet dreams: you can still grab images from those sites that do not have a clue about safe content :D

Friday, April 11, 2008

Io programming language List for JavaScript

Io is a small, prototype-based programming language. The ideas in Io are mostly inspired by Smalltalk (all values are objects, all messages are dynamic), Self (prototype-based), NewtonScript (differential inheritance), Act1 (actors and futures for concurrency), LISP (code is a runtime inspectable/modifiable tree) and Lua (small, embeddable).

This programming language is really interesting, from the syntax through the entire guide.

One of its primitive types is called List, and this is a summary of that type:
A List is an array of references and supports all the standard array manipulation and enumeration methods.

It seems that List is all we need when we think about an Array of elements ... so why couldn't we have something similar in JavaScript?

// Io programming language List example
// followed by my JavaScript List implementation
a := List clone
a = List.clone()

a := list(33, "a")
a = list(33, "a")

a append("b")
a.append("b")
==> list(33, "a", "b")

a size
a.size
==> 3

a at(1)
a.at(1)
==> "a"

a atPut(2, "foo")
a.atPut(2, "foo")
==> list(33, "a", "foo", "b")

a atPut(6, "Fred")
a.atPut(6, "Fred")
==> Exception: index out of bounds

a remove("foo")
a.remove("foo")
==> list(33, "a", "b")

a atPut(2, "foo")
a.atPut(2, "foo")
==> list(33, "a", "foo", "56")

a := list(65, 21, 122)
a = list(65, 21, 122);

a foreach(i, v, write(i, ":", v, ", "))
a.foreach(function(i, v){alert(i + ":" + v + ", ")})
==> 0:65, 1:21, 2:122,

a foreach(v, v println)
a.foreach(function(v){document.writeln(v)})
==> 65
21
122

numbers := list(1, 2, 3, 4, 5, 6)
numbers = list(1, 2, 3, 4, 5, 6)

numbers select(x, x isOdd)
numbers.select(function isOdd(x){return !!(x%2)})
==> list(1, 3, 5)

numbers select(i, x, x isOdd)
numbers.select(function isOdd(i, x){return !!(x%2)})
==> list(1, 3, 5)

numbers map(x, x*2)
numbers.map(function(x){return x*2})
==> list(2, 4, 6, 8, 10, 12)

numbers map(i, x, x+i)
numbers.map(function(i, x){return x+i})
==> list(1, 3, 5, 7, 9, 11)

The map and select methods return new lists. To do the same operations in-place, you can use selectInPlace() and mapInPlace() methods.

and my implementation has mapInPlace and selectInPlace as well :)

Am I forgetting something? ... of course, the source!
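For completeness, here is a minimal sketch of how such a List could look; this is a reduced rewrite covering only a handful of methods (append, at, atPut, remove, plus a size kept in sync), not the original source linked above:

```javascript
// minimal List sketch: an Array wrapper with chainable, Io-like methods
function List(items) {
    this.items = items;
    this.size = items.length;
}

List.prototype = {
    constructor: List,
    append: function (value) {
        this.items.push(value);
        this.size = this.items.length;
        return this; // every mutator returns the list, Io style
    },
    at: function (i) {
        return this.items[i];
    },
    atPut: function (i, value) {
        // mirrors the Io examples above: insert, and guard the bounds
        if (i > this.items.length)
            throw new Error("index out of bounds");
        this.items.splice(i, 0, value);
        this.size = this.items.length;
        return this;
    },
    remove: function (value) {
        var i = this.items.indexOf(value);
        if (-1 < i)
            this.items.splice(i, 1);
        this.size = this.items.length;
        return this;
    }
};

// the list() factory mirrors Io's lowercase constructor
function list() {
    return new List([].slice.call(arguments));
}
```

Because every mutator returns this, expressions like list(1, 2, 3).append(4).remove(2).size read exactly like their Io counterparts.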

P.S. because of the nature of List, you can do stuff like this:

list(1,2,3).append(4).remove(2).size;
// 3

and so on ;)