My JavaScript book is out! Don't miss the opportunity to upgrade your beginner or average dev skills.

Tuesday, December 22, 2009

JS And CSS3 Snow FX? Happy Holidays!

I was expecting some sort of snow experiment after one of my posts on Ajaxian, but apparently nobody tried ... and there I am :)

Check out my JavaScript and CSS3 Snow Effect and have the best holidays! I am flying tomorrow to meet my family and friends for a few days in Italy!

Friday, December 18, 2009

Leaving London for Berlin - Nokia Gate5

This is just a quick post. First of all, my apologies for my recent absence: you cannot even imagine how busy I have been these days ...
Everything is a rush, but if "the rush" is for a good cause it's just a pleasure!
I am organizing my relocation, which means I need to close contracts here, flat included, bills, and sort out every kind of paperwork needed to leave the UK in basically 2 weeks, all while I am working. I still have no idea where I will sleep or where I will move, but it does not matter: I can't wait to start this new role at Nokia!

Why Nokia

To be honest, I had different choices, but in my opinion Nokia has been:

Perfectly Organized

When you deal with big companies you deal with teams, and in this case I had the opportunity to meet extremely interesting teams and people, while somebody else is not even able to organize their own recruitment: they put online "We Are Looking For ..." while they already had "all they were looking for" at that moment at the conference, but for another role that does not even match my CV ... FAIL! Big or not, if you have a database it should not matter whether somebody has already been interviewed ... should it?

Mobile Device Development

It is already the present in the USA and it's definitely going to be the future everywhere.
I cannot take IE and IE6 anymore, so I am moving into mobile, where browsers are already better than IE8 and things must be perfect: the CPU is not powerful, the RAM is never enough, and even battery life is crucial. Can you imagine any other JavaScript sector where every single comma and "var" matters?

Modest and Nice

Even though it's a sector leader, Nokia never gave me the impression of feeling somehow superior (or that kind of impression such as: they are doing you a favor by interviewing you ... you know what I mean?)
Trust me when I say that another company "could" ask you during an interview something like: who do you think is the best company in the world?
First of all, I do believe that if a company asks this kind of question it has serious problems ... ask me, whoever you are, which is the best IT company and I will reply in 0.01 milliseconds and as an honest person: Google FFS!!!

Unfortunately, the interview was not with the Big G (trust me, I've tried, but they keep ignoring me in London, Victoria ... and moreover, I don't think they actually develop mobile related stuff there ... so you know what? ;) )


When you are perfectly organized, you are able to do everything instantly and for me this means that a company cares about you and does not have much time to waste with internal bureaucracy: things simply work as expected!

Merry Christmas

This goes to all WebReflection readers, to all the Londoners I have met during these 2 years, and to Nokia as well for the kind guide they sent me to show me Berlin's beauty in English, because there is only one thing I am a bit worried about: I don't know German at all! It's going to be fun :D

P.S. I am looking for a senior JavaScript developer to replace me at the amazing company I am currently working with - interesting, challenging stuff - an agile environment - and a new project to manage. Leave me a message with your email and a brief description and I swear you are going to like this place at London Bridge: come on, guys!

Friday, December 11, 2009


As I tweeted yesterday, I have discovered one of the weirdest JavaScript behaviors ever, thanks to the document.write method.

The Problem We All Know ...

document.write has been used for ages since it is basically the only way we have to create a blocking script during a page download.
The advantage is that a script included via document.write can write something else at that point of the page, again via document.write.
Classic examples are Google AdSense or similar services, where we would like to put in place an iframe with banners or an external service.
The best known side effect of the document.write method is that if the document has already been loaded (DOMContentLoaded or the onload event), it can "destroy" our web page, erasing everything and showing nothing except what we wrote via this method. This means that every time we include an external resource in our page, we should be sure this resource will be included synchronously, otherwise we could break the whole web site, showing "blank scripts" and nothing else.

... And The WTF We Should Be Aware Of!

Bear in mind: a blocking script does NOT necessarily mean a synchronous one.
There are truly few de facto standards about document.write behavior, and the only one that seems to be consistent across different browsers is the script without an external source:

<!DOCTYPE html>
<script>
var sequence = [1];
document.write('<script>sequence.push(2);<'+'/script>');
sequence.push(3);
alert(document.getElementsByTagName("script")[1].text); // sequence.push(2);
alert(sequence); // 1,2,3
</script>

Everything seems obvious and normal, right? It's NOT!
We need to take a step back about the document.write method ... what does it exactly do?
For example, this is not valid markup, since a script tag cannot contain another script tag:

<script>
var a = 1;
<script>
var b = 2;
</script>
var c = 3;
</script>

If we assume that document.write writes the code inside the currently executed script tag, we are wrong.
This method is basically able to splice the "download flow", inserting everything we write immediately after the current script node.
In other words, with document.write we can perform the very task able to crash Internet Explorer: appending a node after the last one rather than inserting it before the current one. This is the reason you should never appendChild a script node, for example, while you can always insertBefore, especially these days when everybody is talking about non-blocking scripts.
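That insertBefore advice can be sketched as follows. This is a hedged, generic snippet, not code from this post; `loadScriptBefore` is an illustrative name, and the `doc` parameter simply stands for `document` so the pattern is easy to test in isolation.

```javascript
// Insert the new script BEFORE an existing script node instead of
// appending it after the last one (the operation that can crash IE
// during parsing). There is always at least one script node in the
// document: the one currently running.
function loadScriptBefore(doc, src) {
    var script = doc.createElement("script");
    script.src = src;
    var first = doc.getElementsByTagName("script")[0];
    first.parentNode.insertBefore(script, first);
    return script;
}
```

In a browser you would simply call `loadScriptBefore(document, "file.js")`.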
Let's go back to the first example, OK?
If we analyze that simple HTML page we can easily spot that there is only ONE script node in the markup. The hard coded alert is there to show that the browser has already inserted and evaluated a script node that comes after it. Virtually speaking, this is how the page looks when we use document.write:

<!DOCTYPE html>
<script>
var sequence = [1];
// the written script is put after this node and executed,
// then the browser comes back to this execution flow ...
sequence.push(3);
alert(sequence); // 1,2,3
</script>
<script>sequence.push(2);</script>

If we strip out the write operation, nobody in this world would expect a 1,2,3 result when sequence is alerted.
This is already an inconsistent behavior, since we are breaking the execution flow, adding nodes sequentially after the current one but executing them inline.
What we could have expected is a sort of "break this script node, write the new one here, then open a new one and go on with the original script".
Well, this makes sense in a global context flow, but if we were inside a nested closure we could not split the current node, or the code would break and the context would be spread around ... (metaphorically speaking)

... guess what? What I have said so far is the least interesting part ...

External Source Double WTF!

Against every logic, we can still assume that a script written via document.write is both blocking and synchronous, but things change "a tiny bit" if we write a script node with an src attribute.

<!DOCTYPE html>
<script>
var sequence = [1];
document.write('<script src="test2.js"><'+'/script>');
// how cute, the blocking script is already there
// but it is not blocking the execution flow!
alert(document.getElementsByTagName("script")[1].src); // test2.js
// indeed!!!
sequence.push(3);
alert(sequence); // 1,3,2
</script>

What just happened? First of all, the test2.js file is nothing different from:

sequence.push(2);

but the result could sound a bit weird, couldn't it?
As I commented in the snippet, the script is still a blocking one, in the sense that its content will be available after the current script node, but never before!

var sequence = [];
// global id
var id = 1;
// test3.js contains the code: sequence.push(id);
document.write('<script src="test3.js"><'+'/script>');
// global id re-defined
var id = 2;
document.write('<script src="test3.js"><'+'/script>');
alert(sequence); // 2,2

An external script involves network traffic, so every browser vendor decided that a scope cannot be "paused" that long, and every browser I have tested confirms the behavior: the current scope has already finished by the time the injected script is parsed and evaluated.
The script execution is something performed in the future, not in the current script scope. To put order into this chaos we need to split the script execution flow:

<!DOCTYPE html>
<script>
var sequence = [];
var id = 1;
document.write('<script src="test3.js"><'+'/script>');
</script>
<!-- the precedent write will put the blocking node here -->
<script>
var id = 2;
document.write('<script src="test3.js"><'+'/script>');
</script>
<!-- the precedent write will put the blocking node here -->
<script>
alert(sequence); // 1,2 in every browser
</script>

So Why Bother?

Imagine we are doing everything possible to follow best practices to speed up our page: avoid network calls, pre-optimize, pre-compile, pre-gzip/deflate a single JavaScript file with everything we want and need for that page ... well, if for some reason there is a document.write in the middle, we will never know what the hell will happen!

var myServiceKey = "I_FEEL_SAFE";
document.write('<'+'script src=""><'+'/script>');
// the written script will consider the global myServiceKey

// ... and 2 thousand lines after, inside a third party library,
// and gosh knows which kind of nested closure is doing it ...
myServiceKey = "HE_FELT_SAFE";
document.write('<'+'script src=""><'+'/script>');

Good stuff: the service will never know my real key ... isn't it cool?

The Long Ride To The Solution

I don't want to create illusions, so I'll start this paragraph with a fact:
I have not been able to find a reasonable solution!

If you're still interested, follow me into my "evil monkey patch procedure" ...
First of all, is it a matter of nodes? So, if I create an external node to mess up the execution flow, something should happen, right?

<!DOCTYPE html>
<script>
function elsewhere(text){
    // quick way to inject script nodes
    // executed in the global context
    var script = document.createElement("script");
    script.text = "(" + text + ")()";
    document.documentElement.appendChild(script);
}
var sequence = [];
var id;
elsewhere(function(){
    id = 1;
});
document.write('<script src="test3.js"><'+'/script>');
elsewhere(function(){
    id = 2;
});
document.write('<script src="test3.js"><'+'/script>');
alert(sequence); // 1,2 in Firefox
                 // 2,2 in IE, Chrome, Safari
                 // undefined,undefined in Opera
</script>

I basically opened "Hell's Doors" with this hack but, trust me, I felt like the coolest JavaScript hacker ever when I first tried it in Firefox, the only browser that gave me the expected messed-up result. NO WAY: IE, Chrome, and Safari still respect the order and, since the injected globally evaluated node comes before the one appended in the middle of the browser parsing flow, nothing changes, which is even kind of expected (I was trying to find out how buggy our browsers could be).
Apparently, Firefox demonstrates an absolute non-sense: the blocking, non-synchronous script is still appended, but somehow Firefox considers the JavaScript snapshot created via another node as privileged ... I am sure this is a hole in the parser security model, but I don't want to investigate more (especially for this topic).
What about Opera 10? Apparently it simply executes the blocking script before the injected one, so somehow it has a proper queue. Maybe Opera's behavior is the most logical ever: a queue of scripts, FIFO style. Excellent!

The Evil Plan + A Partial Solution

Still under the assumption that everything happens because of the opened/closed script tag:

<!DOCTYPE html>
<script>
var sequence = [];
var id;
document.write('<script>id = 1;<'+'/script><script src="test3.js"><'+'/script>');
document.write('<script>id = 2;<'+'/script><script src="test3.js"><'+'/script>');
alert(sequence); // 2,2 in IE - 1,2 Others
</script>

This time the queue is almost respected by every browser, but remember the difference between a script without an external source and one with it?
Well, in Internet Explorer the script without an external source is extracted from the write context and parsed/evaluated inline, while scripts with an external source are still considered "something that will be, but that cannot be right now".
There is a perverse logic in this behavior, and somehow it makes sense from a programming point of view. Not because I agree with what happens in IE, but simply because if we have "attached" a behavior, we expect it to work just like that ... not everybody is able to think smart, and somehow I can see your faces: nobody is surprised that IE failed here as well!

The Partial Solution

So IE is the last one, but at least it is consistent with its crap. This means that we need to put order into the execution flow.
How? By creating two scripts, both with an external source:

<!DOCTYPE html>
<script src="params.js"></script>
<script>
var sequence = [];
document.write('<script src="test6.js?id=1"><'+'/script><script src="test3.js"><'+'/script>');
document.write('<script src="test6.js?id=2"><'+'/script><script src="test3.js"><'+'/script>');
alert(sequence); // 1,2
</script>

In a few words, we need a sort of clever JavaScript file able to "understand itself" and, at the same time, able to mutate global scope variables at runtime.
This is the commented content of the test6.js file:

(function () {
    // a loop is necessary since Opera and IE will
    // already have every node in the DOM
    // (document.write writes everything in one go,
    // so everything is already there: id 2,2 indeed if we don't check nodes)
    // Firefox and Chrome will consider one script at a time
    for (var
        filter = {id: function (i) { return parseInt(i, 10); }},
        script = document.getElementsByTagName("script"),
        i = 0, length = script.length,
        tmp; i < length; ++i
    ) {
        tmp = params(script[i].src, filter);
        if (tmp.id) {
            // let's mark the script somehow so that the next loop
            // won't consider this one
            if (!script[i].touched) {
                script[i].touched = true;
                id = tmp.id;
                break;
            }
        }
    }
})();

The params function is something I wrote a few minutes ago, able to transform a URL with a query string into an object with key/value pairs:

function params(url, filter) {
    // (C) WebReflection params snippet - Mit Style
    // create a key/value(s) object
    // with the url query string and optional filters
    var query = url.indexOf("?"),
        result = {},
        key, match, re, value;
    if (query < 0)
        return result;
    if (!filter)
        filter = {};
    re = /([^=]*?)=(.*?)(?:&|#|$)/g;
    url = url.slice(query + 1);
    while (match = re.exec(url)) {
        key = match[1];
        value = typeof filter[key] === "function" ? filter[key](match[2]) : match[2];
        if (result[key]) {
            if (typeof result[key] === "string")
                result[key] = [result[key], value];
            else
                result[key].push(value);
        } else
            result[key] = value;
    }
    return result;
}

The above snippet is the "re-usable for other purposes" content of the params function.
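As a quick sanity check, this is how such a parser behaves on a sample query string. The snippet restates the function above in condensed form so it runs standalone; the URL and the filter are made up for the demo.

```javascript
// condensed restatement of the params function above, demo only
function params(url, filter) {
    var query = url.indexOf("?"), result = {}, re, match, key, value;
    if (query < 0) return result;
    filter = filter || {};
    re = /([^=]*?)=(.*?)(?:&|#|$)/g;
    url = url.slice(query + 1);
    while (match = re.exec(url)) {
        key = match[1];
        value = typeof filter[key] === "function" ? filter[key](match[2]) : match[2];
        if (result[key]) {
            // a repeated key collects its values into an array
            if (typeof result[key] === "string") result[key] = [result[key], value];
            else result[key].push(value);
        } else
            result[key] = value;
    }
    return result;
}

// filters coerce values, duplicated keys become arrays
var info = params("test6.js?id=2&tag=a&tag=b", {
    id: function (i) { return parseInt(i, 10); }
});
// info.id === 2, info.tag is ["a", "b"]
```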
The main side effect of this approach is that, rather than speeding up the page download, we ask the client browser to download an extra file.
Sure, it can be static, it can be handled via 304, but it is still an extra call to our server.
The good part is that this file could be reused in many situations; the bad one is that it is potentially a file we cannot trust, since other scripts could use it for other purposes ... this is why it is a partial solution, and why I have tried another experiment:

<!DOCTYPE html>
<script>
function $write(src, exec) {
    var code = "Function('('+unescape('" + escape("" + exec) + "')+')()')()";
    document.write(
        '<script src="javascript:void(0)"' +
        ' onerror="' + code + '"><' +
        '/script><script src="' + src + '"><' +
        '/script>'
    );
}
var sequence = [];
var id;
$write("test3.js?1", function(){
    id = 1;
});
$write("test3.js?2", function(){
    id = 2;
});
alert(sequence); // 1,2 Firefox, Chrome
</script>

The problem is that IE won't work, since there is no such onerror handler on script nodes; plus, the behavior is not consistent. If we reload the page a couple of times, mainly when other files have been cached, we can randomly get 1,2 or 2,2 - at least in Opera ... so, experiment failed.
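The escape/unescape round trip at the heart of $write can be demonstrated in isolation: a function is serialized to a string, escaped so it could survive inside an HTML attribute, then rebuilt and invoked through Function. This is a hedged sketch; `trickRan` is a made-up marker name for the demo.

```javascript
// serialize a function, escape it for attribute transport,
// then rebuild and invoke it exactly as the onerror attribute would
var exec = function () { globalThis.trickRan = true; };
var attr = "Function('('+unescape('" + escape("" + exec) + "')+')()')()";
// simulating the browser evaluating the onerror attribute:
eval(attr);
// globalThis.trickRan is now true
```

The escape() call matters because the raw function source contains quotes, spaces, and brackets that would otherwise break the surrounding attribute string.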


document.write is dangerous! I could not expect all this crap from a single "innocent" function, but what I am sure about is that I have never used it unless strictly necessary (never, imho), and I will try not to use this method anymore since it causes too many problems. I cannot even imagine why on earth, and how, Google, like many others, has been able to handle all their services faking the script injection via blocking writes, potentially obtaining wrong info and nothing else. Be careful, oops, sorry, I meant ... forget document.write!

Thursday, December 10, 2009

TaskSpeed - The Real Cheat

An inline "make" test that no library should "outperform" - this is a reply to this post, which has a few good points (we all know about browsers VS SunSpider and other tests), but it falls down when it criticizes the TaskSpeed aim, which is completely different.

"make": function(){
    for(var
        el = document.createElement("div"),
        outperform_this = [],
        i = 0, length = 250; i < length; ++i
    )
        outperform_this[i] = '<ul class="fromcode" id="setid' + i + '"><li>one</li><li>two</li><li>three</li></ul>';
    el.innerHTML = outperform_this.join("");
    for(var i = 0, fragment = document.createDocumentFragment(), childNodes = el.childNodes; i < length; ++i)
        fragment.appendChild(childNodes[0]);
    document.body.appendChild(fragment);
    return document.querySelectorAll("ul.fromcode").length;
    // or, without querySelectorAll:
    for(var childNodes = document.getElementsByClassName("fromcode"), length = childNodes.length, i = 0, l = 250; i < length; ++i){
        if(childNodes[i].nodeName !== "UL") --l;
    }
    return l;
    // or, with getElementsByTagName only:
    for(var childNodes = document.getElementsByTagName("ul"), i = 0, l = 0, length = childNodes.length; i < length; ++i){
        if(childNodes[i].className === "fromcode")
            ++l;
    }
    return l;
}
Enjoy cheats, and feel cool!

Wednesday, December 09, 2009

Two Step Backward: Read Cookies!

OK, I am the first one with non-updated examples in devpro, but today, just out of curiosity, I decided to have a look into every cookie-related piece of library I could find.

Apparently, to retrieve cookies we have loops, splits, multiple RegExps, and any other sort of alchemy ... but are those functions really all we need just to read cookies?

var Cookie = {
    get:function (name) {
        var match = document.cookie.match(
            new RegExp("(?:^|;\\s*)" + name + "=([^;]*)")
        );
        return match && match[1];
    },
    getAll:function () {
        var re = new RegExp("([^=]+?)=([^;]*?)(?:;\\s*|$)", "g"),
            cookie = "" + document.cookie,
            result = {},
            match;
        while(match = re.exec(cookie))
            result[match[1]] = match[2];
        return result;
    }
};

I think sometimes we overload methods without a valid reason, but maybe I am missing something ... the only option I could eventually spot is encoding/decoding but, as far as I know, that is a browser problem, isn't it?
If we need that as well ...

var Cookie = {
    get:function (name, encoded) {
        var match = document.cookie.match(
            new RegExp("(?:^|;\\s*)" + (encoded ? encodeURIComponent(name) : name) + "=([^;]*)")
        );
        return match && decodeURIComponent(match[1]);
    },
    getAll:function (encoded) {
        var re = new RegExp("([^=]+?)=([^;]*?)(?:;\\s*|$)", "g"),
            cookie = "" + document.cookie,
            result = {},
            match;
        while(match = re.exec(cookie))
            result[encoded ? decodeURIComponent(match[1]) : match[1]] =
                encoded ? decodeURIComponent(match[2]) : match[2];
        return result;
    }
};
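Since document.cookie is just a string, the getAll regular expression can be exercised against a plain cookie-shaped string. This is a standalone rework for demonstration, not part of the original snippet; the cookie names and values are invented.

```javascript
// the getAll regexp from above, applied to a plain string
// shaped like document.cookie ("name=value; name=value; ...")
function parseCookieString(cookie) {
    var re = new RegExp("([^=]+?)=([^;]*?)(?:;\\s*|$)", "g"),
        result = {},
        match;
    while (match = re.exec(cookie))
        result[match[1]] = match[2];
    return result;
}

var all = parseCookieString("user=daniele; lang=it; theme=dark");
// all.user === "daniele", all.lang === "it", all.theme === "dark"
```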

Tuesday, December 08, 2009

Bing UK - Round 2

I've been contacted for the second time by Jordi Ribas but, this time, for a one-to-one chat rather than just a Round Table.

First of all, even if I keep wondering why on earth they would like to know my opinion, I do appreciate the fact that Microsoft teams are getting closer to developers as well (the problem is: do they know I am a web developer and not a web surfer?)

One Hour Chat

Nothing more, and quite frenetic: I had a lot of questions, while Jordi had to show me the improvements since the first disaster, when they introduced Bing UK by showing us the USA version ...

Different Teams

Apparently Team System is not as powerful as we think. Microsoft teams are not sharing code or improvements in real time. There is a delay, which could be a few days up to a few months, between the USA and other versions, and every team seems to be in charge of its own country's search algorithm ...
In my opinion this is a massive waste of resources for whatever IT company. The team knowledge is partially sandboxed - which is what we expect from whatever Microsoft product - but even strategies are different.
For example, Bing USA shows useful info about whatever city, plus touristic stuff and much more. Try to look for Chicago in Bing USA and the first info will be both useful and well organized: nice.
After 30 minutes of "look how much we take care about UK results" we tried London, and not a single weather, map, or useful piece of info was shown. Even cooler, the first sponsored link is called: The Fastest Way To France ... I mean, Dude, if you search for London, of course it's because you wanna escape!

Visualize Mode - Aka Silverlight

Apparently the Bing team put a lot of effort into the visualize section ... what, you cannot use it?
Well, on my PC I have Flash, Silverlight, Java ...

So what? I do NOT.
Jordi is a nice guy, but he lost 300 points out of 10 when he replied something like:

JavaScript? ... ah, I am a big fan, one team a long time ago did this stuff and they used ECMAScript ...

Say what? In a few words, I had to show a truly silly example, explaining that if I've been able to do that kind of FX with 1.23 KB of JavaScript, there is absolutely nothing, I repeat: nothing, in that visualize page that is impossible to replicate via JavaScript, rather than requiring a plug-in that is Microsoft specific.
He was surprised:
Did you really do this? ... oh yes, your name is there ... really, is it only JavaScript?

(... I mean, guys, if you don't even know who you are inviting please don't ...)

Anyway, after this he talked about some standard stuff and I almost flipped out: so, Silverlight is considered a standard inside Microsoft, while JavaScript is considered "the old ECMAScript somebody tried ages ago" ... now my eyes are open ... they don't even know they are using JavaScript in the whole site, and partially even in the Silverlight section (the images fade in/out FX) ... uhm ...

Why Silverlight

The only reasonable answers I've had about Silverlight are:
  1. we could not obtain the same performances
  2. it's Microsoft stuff
In a few words, Microsoft spent years and money to create a plugin able to bring what whatever TraceMonkey, V8, or WebKit based browser can devour with closed eyes. The day Microsoft realizes that the reason it is losing market share is a historical software cancer called Internet Explorer, it will be too late; surely after Google beats everybody with free wireless access and free operating systems, virus free and created to be online 100% of the time.

We Are Morons

I have tried to surf Bing via my Android ... slow, too slow, plus I need to move the page around just to perform a search. What I asked was:
Is there any chance you will put a checkbox in the preferences panel able to remove all that layout, since the reason I use a search engine is to search stuff rather than check, for example, info about a zebra's mouth?

The answer was something like:
You see? When we visited a school, the teacher was asking kids about the info shown over the home page images!

There we are: the "we are all morons" Microsoft style is another reason developers do not always appreciate Microsoft decisions.
In a few words, Bing has been created for kids. OK, nice try, good purpose, I am sure we all wanna know more about the zebra, but can it be possible that me, a 31 year old man who was only trying to search something, has to wait for 175 Kb before being able to perform a search via suggestions and other more useful stuff? Can I decide to be interested in the zebra's mouth only when I search for it? Maybe I am asking for too much consideration ... so why doesn't Microsoft provide a teacher for all of us, interviewing us about whatever Bing shows on the home page?

Mobile Device Version

Since an average of 175 Kb plus a layout created for massive images "let me suspect" that Bing did not think at all about all the current and new smart phones (too futuristic, isn't it?), I asked if there was any plan for a mobile version, since that layout was clearly too much for my Android.
He showed me a weird phone I have no idea about, plus an address:
Well, that address is just Bing but, remember the "we don't share code and knowledge" strategy? The USA did a step forward, providing this smart, fast, simple, and efficient interface in only 4 Kb ... I already love this version!!! I would have liked to spend more time on this super fast and efficient Bing, but I was called back to something else ... the "for kids and broadband only" version ... how cute!

Partially Color Blind

Both the first time and this time I had to underline that it's difficult and confusing for me to have all these bloody random images under the main features on the top left of the page.
This is the main reason I asked if it was possible to remove these pointless images from the preferences panel, since I am not interested, but all I received as an answer was something like:
We have a blind developer who complains more about tag positions

So, since I am extremely lucky compared with this person, I can be ignored ... my problem does not exist, right?
FAIL: let me remove those bloody thingies under what I am trying to read!
Microsoft has always been close to assistive technologies, so why can't this Bing be "simply simpler" and closer to me as well?


For the good part, I'll let you discover what the UK team has done so far. Bing UK definitely improved from the first beta, which was a graphic wrapper around Live, but it's still not as clever as the USA version, and the adopted strategies/choices are arguable.
Silverlight: get out of my view! Google's success is speed and simplicity, and Bing's best feature requires a third party plugin that could not be there? FAIL
Images: nice, cool, a good hint for people who have nothing to do and a lot of time to play looking for squares over those images ... but maybe somebody else is trying to work with his/her PC, and all this stuff becomes just annoying. I mean, even random images ...
Mobile: if there is a mobile version, I expect a massive GO MOBILE button ... where is it, exactly? Not there, plus no automatic redirect and, best of all, it is a completely unknown alternative. Microsoft, what exactly are you doing there? We are basically in 2010, did you realize this?

Sunday, December 06, 2009

mjst - Micro JavaScript Template Engine

This is the content I've written so far in the official mjst repository. I'd love to know your thoughts about this little experiment, cheers!


mjst is a lightweight (about 1Kb minified and gzipped) and performance focused template engine with standards support. There are dozens of JavaScript Template Engine solutions out there but, as far as I know, this is the only one based on validation for both client and server side programming languages.


There is one single function to call:

// via text (e.g. Ajax or innerHTML from a template node)
var transformed = mjst('<?js var hello = "mjst!"; ?><js-hello/>');

// classic template node example
<script id="mytpl" type="text/html">
<?js var hello = "mjst!"; ?><js-hello/>
</script>
myEl.innerHTML = mjst(document.getElementById("mytpl"));

// via XML (e.g. Ajax or a created document)
// this is a configuration object as second argument example as well
myEl.innerHTML = mjst(
    new DOMParser().parseFromString(
        '<root><js-hello/><br class="${myBrClass}" /></root>',
        "text/xml"
    ), {
        hello:"Hi there!",
        myBrClass:"my-br-class"
    }
);

// Ajax example with an XML template as response
<ul>
    <?js for(var i = 0; i < collection.length; ++i) { ?>
    <li class="li-${i}">Item No: <js-i/> Name: <?js print(collection[i]); ?></li>
    <?js } ?>
</ul>

// transformation ...
myEl.innerHTML = mjst(xhr.responseXML, {collection:["a", "b", "c"]});

It is possible to write whatever we want inside an mjst template block via the print(1, 2, 3, N, func(), whatever) function, the only one injected into the execution scope.

JavaScript Blocks VS Attributes

To make mjst templates XML compatible I decided to disallow JavaScript blocks inside attributes. It is still possible to access JavaScript variables via ${myVar.propName}, as shown, for example, in the br node class.

PHP And JavaScript Template Example

<?php // mjst example
function mjst($name, $value){
    echo '<?js var ', $name, '=', json_encode($value), ' ?>';
}
header('Content-Type: text/xml');
$a = array(
    'some database result' => array(1, 2, 3),
    'some other value' => 'Hi There'
);
?>
<?xml version="1.0"?>
<root>
<?php mjst('result', $a); ?>
<?js
for(var k in result)
    print(result[k], "<br />")
?>
</root>

<script type="text/javascript" src="mjst.js"></script>
<script type="text/javascript">
onload = function(){
    with(this.XMLHttpRequest ?
        new XMLHttpRequest :
        new ActiveXObject("Microsoft.MSXML")
    ){
        open("get", "mjst.php", true);
        onreadystatechange = function(){
            if(readyState == 4)
                document.body.innerHTML = mjst(responseXML);
        };
        send(null);
    }
};
</script>

Why Standards

Apparently every other JavaScript template engine implements a manual, char by char, or RegExp based parser, over a syntax that the PHP world basically deprecated ages ago: the classic <%=stuff%> (<?=stuff?> in PHP). The first reason to avoid this kind of syntax is ambiguity: which program language should consider that block of code? PHP, ASP, JavaScript ... who else? Moreover, to make things as clear and simple as possible, avoiding ambiguity, the PHP community decided to solve the classic <?xml?> problem, caused by the short code block, via an explicit <?php ?>, which is the suggested, default one. This is also standards safe, in the sense that a nodeType 7 is defined as a PROCESSING_INSTRUCTION_NODE.

In a few words, mjst is compatible with both valid XML and XHTML, which means that its templates can be created via the standard DOM API, being sure the template will be valid as well. At the same time, by using official engines to validate and parse template blocks, mjst could be considered more reliable than other solutions, more robust, and at the same time fast, thanks to core functionality. The good part, at least for me, is that by delegating template parsing operations to the browser core, it is more probable that an error is inside a non-valid template block rather than inside this tiny library's source code: a welcome side effect of standards and core validation.

Why Fast

Every transformation could create one or two pre-compiled functions, reused every time we would like to transform an already parsed template block. The reason mjst creates up to two functions is that we can arbitrarily send a second argument as a configuration object. Since this procedure requires the usage of the with statement, to avoid outer scope name conflicts during the execution, and since we do not necessarily need to send this second configuration object, the runtime compiled function will consider both cases, each only when encountered, making the with-free execution 3 to 10 times faster. In other words, if we always transform the same template block without the second argument, the function containing the with statement will never be created, and vice versa.
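The lazy double compilation idea can be sketched like this. This is a hedged illustration of the strategy described above, not mjst's actual source; `compile` and the template string are made up for the demo.

```javascript
// Hedged sketch: the `with`-based variant is only built the first
// time a data object is actually passed; the with-free variant is
// only built when the template is used without data.
function compile(body) {
    var plain, withData;
    return function transform(print, data) {
        if (data) {
            if (!withData)
                withData = Function("print", "data", "with(data){" + body + "}");
            return withData(print, data);
        }
        if (!plain)
            plain = Function("print", body);
        return plain(print);
    };
}

// usage: `print` collects the produced output
var out = [];
var tpl = compile('print("Hello, " + (typeof who === "undefined" ? "world" : who));');
tpl(function (s) { out.push(s); });                 // with-free, faster path
tpl(function (s) { out.push(s); }, {who: "mjst"});  // `with` variant kicks in
// out is ["Hello, world", "Hello, mjst"]
```

Note that the with statement lives only inside the Function-constructed string, so the surrounding code stays valid even in strict mode.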

Why Robust

The XML core parser and transformer is also another reason to consider mjst both fast and reliable. As an example, probably the most famous JavaScript micro templating, from John Resig, is surely lightweight, but it can suffer some problems if the template block is not perfect. For instance, try to use a string like s = "<%" for whatever reason, and you'll be able to break John's micro suggestion; but this is not only about John's code. Another well known product is the EJS library, adopted in many different environments and definitely a good piece of code. EJS offers something more than mjst, but it is still based on manual parsing over a syntax that XML does not like that much. This means that we could have a hard time generating EJS templates via the server or directly in our page, while mjst delegates template validity to the markup itself.
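To see why a stray "<%" is a problem for that family of engines, consider a deliberately naive marker scan. This is an illustrative toy, not John Resig's actual implementation, but any split or RegExp based parser faces the same ambiguity.

```javascript
// toy scanner: counts "<%" opening markers the way a simple
// split-based parser would see them (illustrative only)
function countOpenMarkers(tpl) {
    return tpl.split("<%").length - 1;
}

countOpenMarkers('<% var s = "ok"; %>');  // 1, as intended
countOpenMarkers('<% var s = "<%"; %>');  // 2: the "<%" inside the string
                                          // literal is picked up as a
                                          // marker too, derailing the parse
```

A markup-validated approach never hits this case, because the "<%" inside the string is just character data to the XML parser.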

<?xml version="1.0" ?>
<script id="mytpl" type="text/html">
<!-- mjst accepts valid XML (XHTML as well) or HTML5 -->
<?js var hello = "mjst is here!"; ?>
<js-hello />
</script>

Why ... Generally Speaking

I am a PHP developer, before being a JavaScript one, and I have always complained about the open/close everywhere PHP style. This technique does not split presentation and business logic layers and it is generally "slower than a single echo". I started to appreciate the XML plus XSL-T couple years ago, but XSL-T is sometimes just a pain in the ass and, via server, it is difficult to cache XSL-T instances. As an example, the New York Times had to create their own core library to speed up transformations over common XSL files, allowing fast serialization. Brilliant, but this ad hoc library still does not scale to the whole World Wide Web scenario: different hosts, different PHP versions.

At the end of the day, what could happen if we delegate the transformation to the client side via core JavaScript features? Nothing bad; actually, for almost a year I have been transforming complex layouts into HTML in the fastest way ever, even in IE6, and only thanks to XSL-T. But, as I have said, the XSL syntax may not sound familiar at all, specially for programmers who would like much more power while they generate a layout. This is basically the reason PHP had such great success over the years, so why not try to emulate XSL bringing JavaScript there directly, rather than XSL syntax, and in a PHP familiar style?

This is what this experiment is all about. It delegates layout rendering to the client side, the best place imho to do it. It avoids useless server side stress, it can be embedded in the page thanks to the fake script node with an unknown type and, as a side effect, it can speed up client server interactions, reusing template blocks whenever we need and requiring only data, which nowadays is just all we need to make Web users happy.
So, as a pure final evil plan, we could create a view able to produce mjst templates directly via CouchDB, resolving some greedy map/reduce operation in the db, moving partial logic to the client via simple row data sets, or generating clear layouts directly via Server Side JavaScript, so that all we need is a language rather than 3 different technologies ... does it sound insane? ;-)

Friday, December 04, 2009

with: Some Good Example

OK, I said my last post would be the last one about the with statement ... well, before I close the argument, I would like to show a couple of common good examples.

Solve References Problem

This is what somebody defined
a tiny masterpiece

// assuming myObject is defined somewhere else
with({o: myObject}){
    o.var1 = "whatever";
    // etc etc ...
}

The above example annihilates every blame about the "not sure if that is the var I meant to use" argument, since there is an explicit reference, as there is, as an example, in the Python programming language:

# Python
with open("x.txt") as f:
    data = f.read()

// JavaScript equivalent
with({f: /* as */ open("x.txt")})
    data = f.read();

Memory Safe Operations

Another recycled example before I show more interesting stuff ... apparently the whole problem is about writing silly code inside with, as if everybody had to define variables or assign stuff inside this statement. Hilariously, I have basically never assigned anything inside a with in 8 years of ECMAScript programming ... does that tell you anything?
A clever usage could be one able to beat whatever compressor and make code less redundant:

with(document.documentElement)insertBefore(
    document.createElement("script"),
    firstChild
).text = "alert(1)"

Try to do more in fewer characters, considering I am not creating a single reference at all, nor creating a useless closure just to justify reference creation ... done? Now use whatever compiler/compressor/minifier you want and try to obtain less than 99 bytes, respecting the same cross browser, clean, memory safe and leak free nature ...

Discover Named Arguments

A too common approach in JavaScript is to create an inline executed closure to perform some task, being sure local variables won't disturb the outer or global scope.

(function(obj, collection, callback){
    // a common argument normalizer
    collection = collection || [];
    // a loop, 'cause we don't want to perform it
    // in the global scope
    for(var i = 0, length = collection.length, tmp; i < length; ++i){
        // callback returns an object only under certain conditions
        tmp = callback.call(obj, collection[i]);
        if(tmp)
            collection[i] = tmp;
    }
// invoked with whatever obj, collection and callback we have around
})(obj, collection, callback);

Guys, the above example is daily basis JavaScript programming:
  1. create a closure to avoid outer scope properties declaration

  2. normalize, if necessary, unexpected/undefined/empty arguments

  3. define internal closure variables with or without a value

  4. perform some operation

  5. exit from the executed closure (implicit)
Now, consider that we all love to call methods or functions via configuration objects:


A configuration object is semantic, it allows us to perform some task inline, like a ternary operator assignment (and I bet, since somebody does not understand the ternary operator, the ES5 team will decide it's evil ...) and it makes callback invocations more friendly ... but what we have always envied Python for is the ability to use named arguments, defining defaults for each value if necessary.
In JavaScript we can send named arguments via a configuration object, but we cannot define argument defaults ... yet in one shot we could emulate everything, defining both defaults and, since we need to define them in any case, local variables ... don't you follow me?

// zero closure named arguments example
with({
    // inline defaults, if necessary
    collection: collection || [],
    // local variables
    length: collection ? collection.length : 0,
    i: 0,
    // local undefined variable
    tmp: null
}){
    for(; i < length; ++i){
        tmp = callback.call(obj, collection[i]);
        if(tmp)
            collection[i] = tmp;
    }
}

Same behavior, zero ambiguity: if we access another variable name it is because we need that variable from the outer scope. If we create a variable we are doing it wrong, because inside a closure we would have used var, while with this approach we simply need to define, rather than a var, a property with a null value.
Moreover, if tomorrow we need an extra argument via the with statement, all we need to do is define it in whatever position we like, without worrying about argument order (let's call that bad design), since there is no order, except the one we prefer, in object properties.

Global Clear Ajax Call

This is the last example for this post: just ajax, when we need it, wherever we are, without the classic reference and without the missing "this" feature in the onreadystatechange function:

with(this.XMLHttpRequest ?
    new XMLHttpRequest :
    new ActiveXObject("Microsoft.XMLHTTP")
){
    open("get", "?ajax=true", true);
    onreadystatechange = function(){
        if(readyState === 4){
            // do stuff with the responseText/XML
        }
    };
    send(null);
}

Global Paradox

The funniest part ever about this with statement is that basically every JavaScript script has an implicit global object scope.
In few words, every time a script is executed, it is basically the same as a massive with(window) block:

// same as window.alert(123);
alert(123);

// same as window.onload = function(){};
onload = function(){};

// same as new window.XMLHttpRequest
var xhr = new XMLHttpRequest;

// same as window.String.fromCharCode
String.fromCharCode(64);
Of course, being the global object the last possible outer scope, if we define a variable, directly or via var, in that scope, we attach the property to the window one.
In few words, the with is the ABC of JavaScript but again, some clever guy decided that since somebody cannot understand closures, then extended closures, the basic principle of the programming language itself, have to be removed ... right?

Think about it: if ES5 removes even the with statement, making it unavailable under the "use strict" future proof declaration, JavaScript won't be the language that built all its success around its closure and prototype based nature anymore.


Wednesday, December 02, 2009

with: The World's Most Misunderstood Statement

Update: if after this post and its comments you are not convinced yet, have a look at the with: Some Good Example one.

Every reference to this post is purely casual ... no, it's NOT!
It's time to talk about the fuss around the innocent with statement, and why somebody decided it's bad and that it should not be part of the next JavaScript.

Extends the scope chain for a statement

Extended chains are probably what we love most about JavaScript libraries ...


but for some sadistic reason we decided that we don't like native extended chains in our code ...
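The parallel can be shown in a few lines of plain JavaScript: a library-style fluent chain and a with block both let a bare name resolve against an object (a sketch; counter is invented, and Function keeps with out of strict code):

```javascript
var counter = {
  n: 0,
  inc: function () { this.n++; return this; }
};

// the extended chain everybody loves in libraries
counter.inc().inc().inc();

// the native "extended chain": inside with(o) the bare identifier
// inc resolves on counter, and even binds this to it
new Function("o", "with(o){ inc(); inc(); }")(counter);

counter.n; // 5
```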


Bad Examples And Bad Usages

Every time I say the with statement has nothing bad, somebody perpetually points out this bloody post ... OK, it's from Yahoo! ... so what?
Honestly, I could post thousands of bad examples and point my finger at this or that library, function, method, evaluation, etc etc ... it's like the "eval is evil" myth ... so how come one of the first to talk about evil uses eval to speed up code interpretation?
Easy: there are cases and cases!

The Power Of with(){}

Let me give you the most basic example ever, something that happened on Twitter a few minutes ago, OK?
I wanna use the smallest amount of characters to obtain a script injection ... right?

with(document.documentElement)insertBefore(
    document.createElement("script"),
    firstChild
).text = "alert(1)"

Elegant, memory aware (no variable declarations, no need to create a closure), compact, efficient, cross-browser ... where are the "JavaScripters" who used with to assign variables here?

with has been used since the first ECMAScript 3rd Edition implementation and nobody has ever had a single problem. Moreover, if we contest that it could cause a wrong variable assignment, what about a required closure plus a required var assigned to a possible node, causing possible leaks plus name conflicts?

We remove a statement but we don't remove those developers that constantly forget to define local variables so they can destroy the global namespace?
for(i = 0; ... ) // FIRED FFS!!!

I would go for the second species before the first one!

In any case, I'd love to see who is able to produce code smaller than the preceding example, taking care of removing any trace of the operation from the current scope.

Pointless Points

Somebody could say: "dude, you are not sure that insertBefore is the document.documentElement one" ... The classic piece of antagonist code would be, in 99.9% of cases, this one:

// avoid global scope pollution
(function(html){
    // feeling cool and clever
    // just to avoid "var"
    html.insertBefore(
        document.createElement("script"),
        html.firstChild
    ).text = "alert(1)";
})(document.documentElement);

Wait a second ... in the "compress everything to improve performances" Web era, we need to create a closure and a scoped variable over a greater number of characters? ... REALLY? And what about the "not being sure" point: does anybody check if document.documentElement, always present, has an insertBefore method? NO, because sometimes we simply don't have to!


If we are willing to remove a historically valid statement such as with because people could assign stuff there without knowing where the stuff is assigned, we should remove every function able to evaluate code from a scripting language such as JavaScript.
It's like blaming features because some silly junior developer could cause disasters:


... we cannot be serious here, can we? So, remove the textarea HTML element because somebody could use it to write malicious code directly evaluated when the content changes ... is this the ES5 philosophy? I do hope not but apparently, at least for the with statement, it is.
Maybe we should consider the fact that if we have no control over our objects and scopes, we can remove every evil thingy but the bad code will still be there ... won't it? ... but it's a language fault, right?

Real Reasons To Remove with()

Compilers, mungers, minifiers, however we call them, these analyzers have a hard life because of the with statement.
The reason is that these parsers do not consider a with statement as an in-line scope and cannot predict, at that point, the accessible outer scope, unless they are able to virtually replicate every possible scenario in that scope.
Furthermore, since some compressor could optimize property access, bytewise speaking, a with statement makes that optimization redundant:

var o = {doStuff: function(){}};

// sometimes enough to decide
// doStuff could be optimized via
// var d = o.doStuff; d(); d();
o.doStuff();
o.doStuff();

with(o){
    // and here it cannot
    // or it requires extra effort ...
    doStuff();
    doStuff();
}

If we use the string shortcut inside the statement we will either assign the variable or raise an error (if the string is, for some reason, hard coded).
Another example is the property access optimization typical of ordered namespaces, as in YUI, Dojo, Closure, or other libraries:

MyLib.namespace.stuff.doSomething();
MyLib.namespace.stuff.doSomething();

// enough in this scope to optimize access via
var s = MyLib.namespace.stuff,
    d = "doSomething";
s[d]();
s[d]();

Can you spot the difference? And what if in the middle of the closure there is a with?
These are just a few examples where a statement like with could become uncomfortable ... but why has nobody else ever had this problem?

Implicit this

If we compare a closure with the classic Object Oriented Programming world, we can observe that most languages use an implicit reference inside methods.

// e.g. ActionScript style
class MyClass {
    var i:int = 0;
    function plusPlus():MyClass {
        ++i; // implicit this.i
        return this;
    }
}
That "i" references an instance variable and does not require the "this" prefix. So how come an implicit scope is permitted in many other languages, Java and C# included, but we don't want an in-line implicit scope in JavaScript, which is basically what the with statement is?

Python way

There is a basic rule in the Python local/global variable choice: explicit is better than implicit ... and I could not agree more:

class MyOne():
    value = 0
    def add(self, i):
        # requires to be explicit
        self.value = self.value + i
        # otherwise it will generate an error
        # since value, as local, has not been defined
        # value = value + i

value = 5

o = MyOne()
o.add(value)

To avoid the error we would need to declare value as a global one:

def add(self, i):
    # explicitly refer to the module level variable
    global value
    value = value + 1

In JavaScript we have the opposite scenario: what has been explicitly declared as local, arguments included, is local; everything else is resolved through outer scopes or, if nothing is there, the global one. It could sound messy but this is what JavaScript and closures are all about, so if we don't like it, why are we using JavaScript? Just use GWT or whatever hybrid way to end up with translated closures ... no? I see ... we think and we feel we are truly programming in Java, as an example, rather than JavaScript, don't we? ... so, how come this unicorn became the most used programming language?
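JavaScript's side of that comparison fits in a few lines (a sketch):

```javascript
// only var (and function arguments) make a name local;
// everything else is resolved walking the outer scopes
var value = "outer";

function read() {
  var local = "local";        // explicitly local
  return value + "/" + local; // value comes from the outer scope
}

read(); // "outer/local"
```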

Anyway, we are not here to decide who made the best choice, we are here to discuss why the with statement is still present in other programming languages, right?
The answer is simple: to make things simpler!
This Fredrik Lundh post is the first one I found searching for an example ... so, does with make sense?

If We Need It, Create A Reference

Imagine we have a global open function able to return a File instance ... now imagine we justify Python because there is an implicit "as" to reference that pointer ...

// Python with style simulation
var data = open("x.txt").read();

// if we need a temporary variable
// but we would like to be sure about
// variables in the scope
with({f: open("x.txt"), data: null}){
    data = f.read();
    // do other stuff
}

But what is wrong if we are sure that we are dealing with a variable and its method?

// I don't want to care
// about variables declaration
// neither about leaks,
// scope, names, etc ...
with(open("x.txt"))
    if(read(1) === "+")
        doStuff(); // whatever

In few words, there are really thousands of bad examples online and rarely a good one, but the blamed one is always the with statement rather than the people unable to understand it.
Another example, still from Twitter, a few minutes ago ...

var obj = {};
obj.var1 = "objvar1";
var var1 = "globalvar1";
var var2 = "globalvar2";

// so we chose the object to use ...
with(obj) {
    // .. and we have no idea what kind of object it is?
    print(var1); // the object one, EASY!

    // if we think obj should have a var2
    // property but it does not
    // we are doing wrong in any case
    // and we don't know our instances
    print(var2); // scoped var, EASY

    // if we know what we are dealing with
    // we have already resolved this problem
    // knowing var1 and var2 values
    var1 = "bar1";
    var2 = "bar2";

    // this is quite obvious ... isn't it?
    // we have assigned these references one line ago
    print(var1); // 'bar1'
    print(var2); // 'bar2'
}

// is there anything unexpected here?
print(var1); // 'globalvar1'
print(var2); // 'bar2'
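The walk-through above can be executed as-is: here print collects into an array, and Function keeps the with block out of strict code (a verification sketch, not new behavior):

```javascript
var log = [];
new Function("print", [
  "var obj = {var1: 'objvar1'};",
  "var var1 = 'globalvar1';",
  "var var2 = 'globalvar2';",
  "with(obj){",
  "  print(var1); // the object one",
  "  print(var2); // the outer one, obj has no var2",
  "  var1 = 'bar1'; // assigns obj.var1",
  "  var2 = 'bar2'; // assigns the outer var2",
  "  print(var1);",
  "  print(var2);",
  "}",
  "print(var1); // outer var1, untouched",
  "print(var2); // outer var2, reassigned inside the with"
].join("\n"))(function (s) { log.push(s); });

log.join(); // "objvar1,globalvar2,bar1,bar2,globalvar1,bar2"
```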

As Summary

At the end of the day, the developers scared by with simply do not use it, while developers able to use it in a proper way should say good-bye to it because somebody else decided the with statement should not be part of ES5: WTF ... I mean, am I the only one able to spot the beauty of the with statement here?
We could find solutions for arguments.callee, but the with statement ... guys, please be serious about that, and ask every other programming language to remove it too, while somebody out there still finds even the good old goto useful.
Remove features, for what? Performances? Who cares if it is rarely used; I do care if it is not possible to use it anymore.

Even More Evil

with(document.documentElement)insertBefore(
    document.createElement("script"),
    firstChild
).text = "alert(1)"

Sunday, November 29, 2009

XML To HTML Snippet

This John Resig post about nodeName demonstrates once again how untrustworthy edge cases are in JavaScript.
I must agree 100% with @jdalton: frameworks or selector libraries should not be concerned about these cases.

First of all, there is no universal solution, so whatever effort able to slow down libraries won't be perfect anyway, then why bother?

Secondly, I cannot even understand why on earth somebody would need to adopt XML nodes that way.
Agreed that importNode or adoptNode should not be that buggy but, at the same time, I have always used XSL(T) to inject XML into HTML and I have never had problems.

Different Worlds

In an XML document a tag is just a tag. It does not matter which name we choose or which JavaScript event we attached: XML is simply a data protocol, or transporter, and nothing else.
A link, a div, a head, the html node itself, do not mean anything different in XML so again: why do we need to import that way?
In Internet Explorer we have the xml property which is, for HTML represented inside XML docs, the fastest/simplest way to move a node and what it contains inside an element, via innerHTML.
Moreover, namespaces are a problem: we cannot easily represent them in an HTML document so, in any case, we need to be sure about the represented data.

A Better Import

Rather than asking every selector library to handle these edge cases, we could simply adopt a better import strategy.
This XML to HTML transformer is already a valid alternative, but we can write something a bit better, or more suitable, for common cases.
As an example, a truly common case in this kind of transformation is a CDATA section inside a script node.
With CDATA we can put almost whatever we want inside the node, but 99.9% of the time what we need is JavaScript code, rather than a comment.
Another thing to consider is that if we need to import or adopt an XML node, 99.9% of the time we need to import its full content and not an empty node (otherwise we should ask ourselves why we are representing data like that, no?).
I bet the deep variable will basically be true by default so, here is my alternative proposal, which should be a bit faster, avoiding an implicit boolean cast for each attribute or node, and considering what I have said a few lines ago:

function XML2HTML(xml){
    // WebReflection Suggestion - MIT Style License
    for(var
        nodeName = xml.nodeName.toUpperCase(),
        html = document.createElement(nodeName),
        attributes = xml.attributes || [],
        i = 0, length = attributes.length,
        tmp;
        i < length; ++i
    )
        html.setAttribute((tmp = attributes[i]).name, tmp.value);
    for(var
        childNodes = xml.childNodes,
        i = 0, length = childNodes.length;
        i < length; ++i
    )
        switch((tmp = childNodes[i]).nodeType){
            case 1:
                // element: import it recursively
                html.appendChild(XML2HTML(tmp));
                break;
            case 3:
            case 4:
            case 8:
                // text, CDATA section, comment
                // assuming .text works in every browser
                // when the node is a script
                nodeName === "SCRIPT" ?
                    html.text = tmp.nodeValue :
                    html.appendChild(document.createTextNode(tmp.nodeValue));
                break;
        }
    return html;
}

I tried to post the above snippet in John's post as well but, for some reason, it is still not there (maybe waiting to be approved).
We can test the above snippet via this piece of code:

var data = '<section data="123"><script><![CDATA[\nalert(123)\n]]></script><option>value</option></section>';
var xml;
if(this.DOMParser){
    xml = new DOMParser().parseFromString(data, "text/xml");
}else{
    xml = new ActiveXObject("Microsoft.XMLDOM");
    xml.loadXML(data);
}
var section = xml.getElementsByTagName("section")[0];
onload = function(){
    document.body.appendChild(XML2HTML(section));
};
We can use latest snippet to test the other function as well and as soon as I can I will try to compare solutions to provide some benchmark.

Monday, November 23, 2009

On element.dataset And data-* Attribute

Why on earth? I mean, why should we put a data-whatever attribute into our layout? Where is the good old MVC? How can you consider data-* more semantic? More semantic than what? Why would we want to kill truly semantic pages and graceful enhancement? Why do we need JavaScript redundant info inside node attributes while we cannot stand a script tag in the middle of the page? Why, in the performances matter era, would we let users download stuff they will probably never use?

What Am I Talking About

We can find the "magic" data attribute description in the W3C Semantics, structure, and APIs for HTML documents page. These days a certain page is going around even too much ... finally somebody realized that this data-thingy is nothing different from what we could have always used since ages: XML.

Why We Are Doing Wrong

If we are enthusiastic about a custom attribute able to carry whatever information, we should ask ourselves why on earth we are not simply using XML plus XSL to transform nodes with custom attributes. Each node can be easily and quickly transformed, on the client side as well and in a cross browser way, via one or more cached XSLT able to create at runtime whatever we need.
We need to sort the music file list via a data-length attribute? No problem at all: we can reorder via DOM every node we need and put the XML fragment, transformed via XSLT, into a single HTML node in the page. Moreover, we can use modular XSL to transform branches or specific cases ... but that is too clean and professional, isn't it?

Data Used Via Selectors

Let's say we have 3 songs with the same length, taking the example from the W3C page, OK?

<li data-length="2m11s">Beyond The Sea</li>
<li data-length="2m11s">Beside The Sea</li>
<li data-length="2m11s">Be The Sea</li>

First of all, that example is quite hilarious: to make a sort as fast as possible, that kind of representation is not ideal, is it?
It does not matter; the point is that as soon as this stuff is implemented, all jQuery users will start to think it so semantic that everything will become a:


Seriously ... if we want to make the web the worst semantic place ever, just put this kind of weapon in medium skilled developers' hands, and in a few months we'll scream "HTML5 Failed!".
Who will care anymore about the appropriate element, when everything can be easily validated by the W3C even for totally dirty layouts, useless for non JS users?

The Namespace War Has Started Already

I can imagine lots of new devs starting to use data as if it were their own attribute, ignoring the conflict problems we have always had with the global namespace. Move this problem into the DOM, and the Cactus Jam is ready to eat.
On the other hand, how good a practice will be a DOM full of nodes with multiple data attributes?


... I mean ... seriously, is this the HTML5 we are talking about? An empty element there, just to make the anti semantic magic happen?

Dozens Of Different Ways

First of all, querySelectorAll.
The document method we were all waiting for is finally 90% here and, rather than using semantic and logical selectors to retrieve what we need when we need it, we prefer to make the DOM that dirty? Are we truly such monkeys that we cannot spot a list of songs and order them by their length, which will 99.9% of the time be part of the song info and accordingly present in the DOM as context?
Where are classes? Where are external resources, totally ignored by those users whose aim is simply to surf quickly, or to crawl info? data-whatever?

Why We Don't Need data-*

First of all: classes. If we have a list of songs, the class songlist on the single outer UL or OL node is everything we need to retrieve that list and order it by title, duration, or anything else present in that list, since in whatever grid we have ever used we order by columns, where we know the column content.
How can a user even decide to order by length, if this length is not displayed?
How can he order by something not even shown?
It's like me, in a shop, asking to order a list of sound systems by palm trees ... I think they'd look at me like a mad person, and they would be right, don't you agree?
So, the semantic part of this attribute does not exist. The same example shown in the W3C draft is ridiculous, for the reasons I have already given. If the song length info is already in the DOM and properly shown, we don't need redundant info for every user; we just need a good sort function for that list of songs and nothing else.

$.fn.sort = function(){
    // proof of concept, not even tested
    // written directly here in the blogger textarea
    // don't try at home but please try to understand the logic
    var li = this.get();
    li.sort(function(a, b){
        return $(a).find(".length").text() <
               $(b).find(".length").text() ?
            1 : -1
        ;
    });
    return $(li).each(function(i, li){
        li.parentNode.appendChild(li);
    });
};

$(".songs li").sort();

Is the above proof of concept that different from a $(".songs @data-length").sort()? It's even shorter!

Map The DOM If Necessary

If we are struggling that much with this "so much missed data-*", we can still use the class to map common info ... how?

<ol class="songs">
    <li class="map-0">Beyond The Sea</li>
    <li class="map-0">Beside The Sea</li>
    <li class="map-1">Be The Sea</li>
</ol>

If we need to attach properties to a DOM node we can still use the class attribute: semantic, classifying the node as a mapped one; not redundant, letting us provide the same kind of info for different nodes more than once; and lightweight, avoiding the data description provided by the mapped object.
In few words, and always via JavaScript, we can attach a file with this content:

var map = [{
    // info shared by every "map-0" node
    // (values invented for the example)
    length: "2m11s"
}, {
    // info for "map-1" nodes
    length: "3m24s"
}];

For each node, when necessary, we could simply retrieve the associated map in this way:

function nodeInfo(node){
    return map[
        (/(?:^|\s)map-(\d+)(?:\s|$)/.exec(node.className) ||
        [0, -1])[1]
    ];
}

And that's it: each mapped node will return an object, or undefined, when the nodeInfo function is called. Again, all this stuff is a proof of concept, but can we agree that data-* is just the wrong response to a JavaScript problem that should be solved anywhere except in the HTML?
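The lookup itself can be exercised without a DOM; a sketch with invented map contents (real code would pass the node and read node.className):

```javascript
var map = [
  {length: "2m11s"},
  {length: "3m24s"}
];

function nodeInfo(className) {
  // extract the N from a "map-N" class, if any
  var match = /(?:^|\s)map-(\d+)(?:\s|$)/.exec(className);
  return match ? map[+match[1]] : undefined;
}

nodeInfo("songs map-1"); // {length: "3m24s"}
nodeInfo("songs");       // undefined, not a mapped node
```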

Tuesday, November 17, 2009

195 Chars To Help Lazy Loading

Update: removed the named function expression.

Update: I wrote the event letters in the wrong place, please use the latest script.

We have talked many times about performances, and not only here.
One common technique to speed up a generic online page is lazy loading.
By lazy loading I mean:
  • runtime dependency resolution via script injection, preferably including all the dependencies we need in one shot rather than script after script
  • Google comments style, where code is evaluated when required but the size is still there
  • namespaced strings, an example I have explained via code in Ajaxian
  • whatever else, because sometimes we need this or that library only in certain conditions
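The "namespaced strings" flavor from the list above boils down to something like this (a minimal sketch; module name and content are invented):

```javascript
// code ships as a string and is evaluated only on first request
var registry = {
  "my.lazy.module": "exports.answer = 42;"
};
var cache = {};

function lazyRequire(name) {
  if (!(name in cache)) {
    var exports = {};
    // pay the evaluation cost only once, on first access
    new Function("exports", registry[name])(exports);
    cache[name] = exports;
  }
  return cache[name];
}

lazyRequire("my.lazy.module").answer; // 42
```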

What Is The Problem

If we would like to widely adopt this lazy load pattern, we should be aware of a "little detail": Firefox < 3.6 does not support document.readyState, a historically valid Internet Explorer feature adopted in HTML5 and present in basically every other browser.
We can find the readyState description and the related bug in the Mozilla domain.
Reading the suggestions, I found quite pointless the long diatribe about:

Should it be "loading" or "interactive" before it switches to "complete"?

IMHO, who cares, as long as we can rely on the final state: complete

Affected Libraries

I have no idea, but for sure jQuery! The event.js file shows at line 857 this check for the bindReady event:

// Catch cases where $(document).ready() is called after the
// browser event has already occurred.
if ( document.readyState === "complete" ) {
    return jQuery.ready();
}

The code goes on with common DOMContentLoaded checks and emulations.
The problem is that with such a library, where basically everything starts with a ready event:

// the ready event we are theoretically
// sure will be fired even if the
// code has been loaded after
$(function(){
    // the lazily loaded plug-in/extension code
});

every Firefox < 3.6 user will miss that code, plug-in, extension, whatever.
I have already informed the jQuery dev ML about this problem, but obviously they already know. John Resig told me that there is no guarantee the ready event will be fired if the page has already been loaded.
Fair enough, I can perfectly understand John's point, which is: all jQuery supported browsers may not support document.readyState.
AFAIK, even if this is a good reason to avoid some obtrusive code, we would all expect consistency from a framework, so if something works even in IE I can't even think about Firefox problems.

The Solution

This missing FF feature could affect different libraries, not only jQuery.
We, as developers, could help every library author by adding 195 uncompressed bytes, even less once deflated, as the first stand alone piece of code in our page:

// WebReflection Solution
(function(s, o){
    // NOTE: IE will never consider false s[o]==null
    s[o] == null && s.addEventListener && (
        s.addEventListener(
            "DOMContentLoaded",
            function(){ s[o] = "complete"; },
            false
        ),
        s[o] = "loading"
    );
})(document, "readyState");
Since Firefox is usually updated automatically, all we need to do, once we are sure people are surfing with version 3.6 or greater, is simply remove the above little piece of code.

Explained Solution

// verify that document.readyState is undefined
// verify that document.addEventListener is there
// these two conditions are basically telling us
// we are using Firefox < 3.6
if(document.readyState == null && document.addEventListener){
// on DOMContentLoaded event, supported since ages
document.addEventListener("DOMContentLoaded", function DOMContentLoaded(){
// remove the listener itself
document.removeEventListener("DOMContentLoaded", DOMContentLoaded, false);
// assign readyState as complete
document.readyState = "complete";
}, false);
// set readyState = loading or interactive
// it does not really matter for this purpose
document.readyState = "loading";
}


Being this a browser problem and not directly a library one, it probably does not make sense to ship this fix just for the few months until the next FF release. At the same time we can guarantee for Firefox users, and only if the library does not sniff the browser via this read only property, that lazily loaded stuff a la $(rockNroll) will work in our favorite browser too; or we could simply rely, within our code, on a readyState "complete" check to decide what to do (this is the real reason I investigated this problem, but that is another story, coming soon).

Sunday, November 15, 2009

Why Samsung N510

An unusual topic for this blog, but this toy turned out to be a wonderful surprise and maybe somebody would like to know how come a developer decides to buy a netbook rather than the latest ultra speedy tech on the go ...

Decide What You Need

It was about 5 years or more I bought another elegant toy: the VAIO vgns2hp, with an extra 512Mb of RAM in order to reach the mythical Gig of RAM which did not come for free. I spent a fortune by that time but it was for a good reason: in 2009 I was still able to perform every task I had to perform ... except gaming.
Please note I am talking about a daily basis "back home from work" computer and not necessary about development environments, where compilation time as example has to bee fast or we gonna spend our life compiling and nothing else ...
So, being a performances maniac I always thought dated hardware is the best proof that we are doing well. Specially Web speaking, it does not matter if we have the 16 core 128 bit monster, 'cause people do not change hardware so frequently as we think. As example, if we put IE6 over an Quad Core Opteron I bet it will be faster than every browser I have used so far with my "humiliating hardware".
The funny part of web development is testing (I know ... I know ...) and we cannot recognize performances problems, bad practices, missed optimizations, if we test with a powerful platform. Ideally we should use iPhone, Android, or Palm to test our web applications! A bit extreme, I agree, but one thing is sure: if that little device has been able to render and execute the page code in a reasonable time, I don't need a degree to understand that everybody else at home or office will be able to experience good overall performances.
Does anybody remember the option: simulate 33.6Kbps connection?
In the Flash world what we call Ajax has always been there (since version 5 and loadVariables, switched to LoadVars in version MX ... plus XMLSockets).
That option's aim was to better understand the bandwidth situation of the time. The IDE was able to simulate low bandwidth connections, giving us the real feeling (or the real frustration). Nowadays I do not know of tools able to simulate older hardware configurations, nor of virtual machines able to run browsers in slow mode ... but it does not matter, I don't need them, 'cause my hardware is a good compromise between the past and the present ... the future? Well, it's far away and unpredictable!
In summary, if the VAIO were still there, I would have used it for another N months because I never felt it was not good enough to do what I do on the go or at home: experiments, projects, surfing, developing, music listening, some movies ... all good ... but!

Decide What You Miss

Across 4 new hardware generations I have been missing different things, such as:
  • long battery life: mine died a year ago, but even before that I could not be productive on the go due to a maximum battery life of an hour and a half
  • LED based display: I have always been envious of these new amazing brilliant shiny displays people were showing off ... I need to read and write 80% of the time, and my eyes got tired even if the VAIO had a truly good LCD display
  • recently, WebGL and O3D support: that 32Mb mobile Radeon has always been a pain in the ass with 3D stuff, and neither O3D nor WebGL could work properly
  • 720p and 1080p HD support: a connection with the present and the future that few mobile devices can handle properly
  • money: come on, Christmas is coming, and I was not planning at all to spend 1000 or more pounds on myself, since I could have done well with my VAIO for another 6 months or more ...

Be Happy With Your Choice

Too many times I have bought something without being absolutely sure about the product ... especially in an emergency (without a keyboard, a monitor, and a flat connection I feel like Gollum without his ring) it can happen that you buy something knowing something better is coming soon.
Let's think about a kid waiting for his first console ever for his birthday, knowing the PS3 has been out for two months, and the rich uncle arrives with a PS2 as a present ... that feeling! Even when I chose the VAIO I knew Intel was planning to release new kickass mobile dual core processors, but at that time I absolutely needed a laptop and I could not wait another 2 months ... but hey, this is technology: the second you pay for the latest gadget ever, somebody has already tweeted that the same company released a new one, and for half the price!
In a few words, if we decide to buy tech, especially IT related tech, we pay 1000% more than the value of the tech if it's new, and just the right price for the old one ... but if we spend a reasonable amount of money on something that will match 100% of our expectations, without being greedy about having the latest state of the art technology, we will appreciate more what we have, and this has been the case so far: I am happy!

Be Updated

I used to be a Quake 3 Arena killer and, during my career, a hardware and software technician ... in those days there was nothing I did not know about the latest technologies. I was talking about SLI and multi core as the computation solution thanks to my Voodoo 2 card, and that was in 1998 or even before; I was a hardware maniac, overclocker, BIOS hacker, etc etc ... but now I am a completely different person ... I mean, I don't care at all about hardware, I have switched my IT interests, but I still like to have a quick view of the tech panorama. We don't need that much time. Sites such as Tom's Hardware have always been there, and it is worth having a quick look, especially if we are thinking of buying some new stuff (and it is since the O3D problems that I started to think ... maybe I should change this VAIO ...).
Being updated is also the key to the previous point: be happy! If we buy something without knowing that something is old, we'll feel like idiots the second we meet our geek friend who spent less money on the newer model ... isn't it?

Define Your Product

This operation is not that difficult to perform. All we need to do is define a price range; the rest will come easily 'cause in the same range we often have similar products, so we don't need to decide between a dishwasher and a lighter, we need to decide between similar products, considering different variables.
I have used one of the first Samsung successes in the netbook market, the NC10, a truly exemplary piece of art at its debut time. I was stunned by its compactness and speed ... it was performing better than my VAIO and it was half the size with 8 times more battery life ... are you kidding me? That was the day I decided my next "portable friend" would be a netbook. I don't get why people would like to spend more to use 1/16 of their hardware, and I don't use heavy software at all ...
That NC10 was amazing, but a few months later ASUS came out with another piece of art: the Eee PC Seashell 1101HA, a masterpiece of design and features.
I was going straight out to buy the latter toy but something stopped me: the Tottenham Court Road escalator had been completely covered with new Samsung X-Series advertisements ... an artistic way to do ads, and I was trapped by that ad like an idiot. The models were cool and tiny, and I thought those were new netbooks from Samsung ... and I was wrong. As soon as I found a Samsung shop I went in to ask about these new netbooks and they corrected me, saying: Sir, these are powerful laptops for these prices ... OK, reasonable: a powerful laptop for 499 pounds, elegant and rich in features, instantly became part of my new range of products.
But it is only thanks to that ad that I could spot the N510 ... and that shiny nVidia logo: ion.
All I knew was that nVidia and Intel had a few arguments about this ion solution ... and you know what I did? The simplest thing to do: I said thanks, I went outside the shop, and I went back to the chapter: Be Updated.
Surfing with my Android I discovered that nVidia has finally arrived in netbooks and laptops and that this ion promises 10 times faster performance and a full HD stream decoder. The price is, obviously, that if we use ion 100% of the time the battery life will be shorter, but this N510 has a maximum declared battery life of 7 hours! On average, the battery life should be about 3 or 4 hours even if I am watching a movie or surfing the latest porting of Compiz to WebGL (AFAIK it does not exist yet).
The price: less than the latest new entry, the X-Series laptop, and a bit more than the Seashell, but with an Atom N280 CPU, slightly faster than the Z version present in the Seashell, and the newest nVidia technology inside, theoretically able to run new Operating Systems as well (Windows 7, Kubuntu or Ubuntu) ... Deal!


Christmas is close, and I thought some guidelines about shopping could be nice. The IT sector is extremely complex; we can rarely find the best product because every product seems to miss something. The Seashell had a slower CPU for no reason and no hardware acceleration, but it has the best battery life and a stunning design, plus it's ASUS, a company historically famous to me for its speed prone motherboards and always in the performance field. The X-Series was new and cool but it missed the Seashell's dimensions and battery life, and it does not come with nVidia ion. Intel graphic chips are truly good, but no netbook with one has the power ion has.
The Samsung N510 has everything it needs to be truly used as a daily basis on the go device. The keyboard is good, the hardware is excellent, the price reasonable (I have spent 390 pounds) and it is future proof thanks to this pioneering match between the Atom N280 and nVidia ion. Any side effects? The OS is XP and not Seven, as shown on the website, and I spent a long time removing unnecessary crapware, but now it boots fast and it works like a charm. I think Seven could run without problems after adding 1 Gb of RAM, and there are already specific drivers and software on the product page. I would vote this Samsung N510 5 stars out of 5 'cause I am convinced this is the best netbook you can find on the market right now: demanding nothing but giving all you expect from a netbook and even more, good stuff!