
Wednesday, June 12, 2013

On Harmony JavaScript Generators

Developers get easily excited when something so used, acclaimed, and desired in another land comes to their own ... or the one they think they own ...
This is the case of ECMAScript 6 Harmony generators, something that at this time you need to activate through the --harmony flag in node, or by going to about:flags in the Google Chrome Canary url bar and enabling the experimental harmony features.
Once you've done that, you'll be able, still through Chrome Canary at this time, to test the examples, benchmarks, and other things in this post ... ready? So here is the first great news:

Generators Are Slower

At least two times slower than forEach and about 10 times slower than regular loops.
This is the result shown in this jsperf benchmark, which you can run too.
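To get a feel for the gap, here is a rough micro-benchmark sketch you can run in node today (names such as values and bench are made up for this example; loop counts and timings are illustrative only and vary wildly per engine and version):

```javascript
// a rough sketch of the kind of thing jsperf measures;
// timings are illustrative, not authoritative
function* values(arr) {
  for (var i = 0; i < arr.length; i++) yield arr[i];
}

function bench(label, fn) {
  var t = Date.now();
  for (var i = 0; i < 1e3; i++) fn();
  console.log(label + ': ' + (Date.now() - t) + 'ms');
}

var arr = [];
for (var i = 0; i < 1e3; i++) arr.push(i);

bench('generator', function () {
  var sum = 0;
  for (var it = values(arr), s = it.next(); !s.done; s = it.next()) sum += s.value;
});
bench('forEach', function () {
  var sum = 0;
  arr.forEach(function (v) { sum += v; });
});
bench('for loop', function () {
  var sum = 0;
  for (var j = 0; j < arr.length; j++) sum += arr[j];
});
```

Note this sketch uses today's finalized next()/done iterator protocol rather than the harmony-era send/throw API the post describes.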
Of course generators are slower: there's a massive process behind each generator, such as:
{
  code: 'the generator function body',
  context: 'the bound/trapped context',
  scope: 'the whole scope used by the generator',
  handler: 'to perform iterations',
  state: 'the current generator private state',
  // inherited from GeneratorConstructorPrototype
  send: 'the method to address values',
  throw: 'the method to throw Errors',
  next: 'the method to keep looping'
}
// plus every 'next' or 'send' call
// will return a fresh new object
{
  value: 'the current value at this time',
  done: 'the boolean value that helps with iterations'
  // when done is true, an extra call
  // to .next() will throw an error!
}
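As a quick sketch of that returned object, here is how value and done behave with the finalized iterator protocol (note one difference from the early harmony API described above: per the final ES6 specification, an extra next() after done keeps returning { value: undefined, done: true } instead of throwing):

```javascript
function* twoSteps() {
  yield 'first';
  yield 'second';
}

var g = twoSteps();
console.log(g.next()); // { value: 'first', done: false }
console.log(g.next()); // { value: 'second', done: false }
console.log(g.next()); // { value: undefined, done: true }
// calling next() again after done keeps returning
// { value: undefined, done: true } in modern engines
```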

Not Only Slower ...

The fact that every interaction with a single generator creates N objects means that the garbage collector will work more than necessary, and that RAM will be easily saturated whenever your server does not have a big amount of it ... and cheap hosts are still the most common choice. Besides, if the program/language is greedy, why should you spend more on hosting hardware? You should not, as easy as that.

... But Feel Free To Use Them

If you believe generators can help anything in your logic, infrastructure, or system, and you don't need the best performance for that situation, go with generators. These have been used in Mozilla internals for a while, since Firefox was at version 3 or even lower, can you believe it?
So, if these worked before becoming part of a standard, and before hardware was as good as it is today, there must be use cases where generators are a better choice ... right?

JavaScript Never Needed To Sleep !!!

Unfortunately, apart from some academic Fibonacci exercise or, even worse, some sleep(delay) example, there isn't much more you'll find about how cool generators are in JS ... simply because JavaScript never really needed them: being an event-handler-oriented programming language, its events always worked even better than generators did for other languages, since events can be triggered at any point, not just in a synchronous "top-to-bottom" flow.
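To see why generators map so naturally to those sleep(delay)-style examples, here is a minimal sketch (task is an illustrative name) of how yield pauses a function until somebody resumes it; a sleep helper merely wires that resumption to setTimeout:

```javascript
function* task() {
  // each yield pauses the function right here
  // until someone calls next() again
  var a = yield 'pause 1';
  var b = yield 'pause 2';
  return a + b;
}

var it = task();
console.log(it.next());  // { value: 'pause 1', done: false } - ran up to the first yield
console.log(it.next(1)); // { value: 'pause 2', done: false } - 1 becomes the first yield's value
console.log(it.next(2)); // { value: 3, done: true }
```

A sleep(delay) helper simply schedules that next() call via setTimeout instead of invoking it synchronously.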

Coming From Mars

One common problem in JS is that every newcomer would like to find what is missing the most from her/his own old programming language ...
  • PHP developers never complained about missing types; they'll rarely get how prototypal inheritance works there, though
  • Java developers complain about missing types ... they'll try to use JS flexibility to make it as similar to Java as possible, understanding inheritance slightly better than PHP devs and abusing closures by all means to make it as super() compatible as possible, 'cause super() inside a ChildClass constructor freaks them out
  • C# developers think they have all the best there ... forgetting C# is not statically compilable and that it is derived from ECMAScript 4th Edition, almost 2 editions before the current JavaScript specification ^_^
  • C++ developers will propose new optimized Virtual Machines every day and will most likely never use JS ... still, they will decide how JS developers should use JS regardless
  • Python and Ruby developers will just laugh about all the JS shenanigans, thinking their favorite language has none of them, or worse
Well, here's the thing ... generators and the yield keyword are really old concepts, coming from languages that were not created to work asynchronously as JS does, including all those mentioned in the list above.
That's why I believe the aim of generators is being misunderstood by the JS community ... and once again, feel free to use them as much as you want, but please keep reading too, thanks!

Queuing The Delay

If you start waiting for events after other events in a generator way:
var file1 = yield readingFile('one'),
    file2 = yield readingFile('two'),
    combined = file1.value + file2.value;
Here's the bad news: that won't work magically as you expect!
// a magic function with many yields ...
function* gottaCatchEmAll(fileN) {
  for (var i = 0; i < arguments.length; i++) {
    yield arguments[i];
  }
}

// a magic expected behavior that won't work
// as many might expect ...
var content = gottaCatchEmAll('one', 'two');
Until we call next(), eventually storing the returned value if no error has been thrown and the done property is false, no parallel file loading will be performed by any means!
That's correct: what node.js elegantly solved with what JS was already offering is screwed again by this new approach, which won't block and won't execute at the same time either.
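A minimal sketch of why that is (lazyReader is an illustrative stand-in for file reading): calling a generator function executes none of its body, and each yield is reached only when next() asks for it, so two sequential yields can never overlap their work:

```javascript
var log = [];

function* lazyReader() {
  log.push('body started');
  yield 'one';
  log.push('between yields');
  yield 'two';
}

var iter = lazyReader();
// nothing has executed yet: the body only runs on demand
console.log(log.length); // 0

iter.next();
// only the code up to the first yield has run;
// the second "read" hasn't even been reached
console.log(log); // ['body started']
```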

Still Room For New Users

The controversial part about generators is that they might be useful to synchronize sequential, inevitably delayed, or dependent executions while still not blocking other handlers ... well, here are a couple of thoughts:
  1. try to make a generator behave as you expect ... seriously!
  2. try to learn how to use a queue instead
Not kidding, the second option is much easier than expected; plus, it's a Promise-like approach, compatible with every environment, and it fits in a tweet.
function Queue(a,b){setTimeout(a.next=function(c){
return(b=a.shift())?!!b(a,c)||!0:!1},0);return a}
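For illustration only, here is a simplified, synchronous variant of such a queue (SimpleQueue and the step names are made up for this sketch; the tweet-sized one starts itself via setTimeout instead of requiring a manual next() call):

```javascript
// simplified, synchronous variant of a callbacks queue:
// each step receives the queue itself plus optional arguments
function SimpleQueue(steps) {
  steps.next = function (args) {
    var f = steps.shift();
    return f ? (f(steps, args), true) : false;
  };
  return steps;
}

var result = [];
var q = SimpleQueue([
  function first(queue) {
    result.push('first');
    // pass a value along to the next callback
    queue.next('hello');
  },
  function second(queue, args) {
    result.push('second got ' + args);
  }
]);

q.next(); // kick it off
console.log(result); // ['first', 'second got hello']
```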

How Does That Work?

I've tried to explain that in detail in my working with queues blog post and, at the same time, I have written a slightly improved queue so that arguments can be passed between callbacks.
var fs = require('fs');
var q = Queue([
  function onRead(queue, args) {
    if (args) {
      // if there was an error ...
      if (args[0]) {
        // attach it to the queue object
        queue.error = args[0];
      } else {
        // add result to the content
        queue.content.push('' + args[1]);
      }
    } else {
      // first time execution
      queue.content = [];
    }
    // if there's anything to read
    if (queue.files.length) {
      // add "priority queue" to itself
      // so that once done ...
      queue.unshift(onRead);
      // ... reducing the number of files to read
      // ... onRead will be re-executed
      fs.readFile(queue.files.shift(), function () {
        queue.next(arguments);
      });
    } else {
      // simply fire the end of this thing
      queue.next();
    }
  },
  function theEnd(queue) {
    // if there was an error ...
    if (queue.error) {
      // throw it or do whatever!
      throw queue.error;
    }
    // otherwise simply show results
    console.log(queue.content.join('\n'));
  }
]);

// files to load
q.files = ['one', 'two'];

OH Come On What Is That

If you think dealing with generators is easier, and that the real effort behind the yield keyword is less verbose than the abstract example above over a single use case, I am here waiting for your link. Show me the ease, the cross version/platform compatibility, and the performance of your generator based solution (and I am not talking about your latest MacBook Air, but about Raspberry Pi like hardware, which is suitable for, and already used as, a web server). I am willing to reconsider my point of view and change some module in order to switch, even if not needed, to this new approach.
Right now I see this new entry as completely overrated: able to bring fragmentation between node.js and the Web, and unable to concretely simplify or solve parallel asynchronous operations as elegantly as events already do through emitters.
Thanks for your effort reading 'till the end.
Some comments from outside this blog:
  • Alex Russell on performance, and my reply, which is: bound functions are still slow, and I am not expecting generators to be faster than bound functions at any time in the near future

Wednesday, June 05, 2013

ArchLinux Might Not Play Cool


This rant, referred to that time, is still valid. However, the latest ArchLinuxARM package has been updated after the big change and everything works again as expected. Thanks Arch Linux for updating all packages, appreciated!
I'm actually avoiding a title such as WTF ArchLinux so as not to leave a mark on this awesome community ... I mean, the best of the best: an always updated Linux distro for many devices and architectures, blazing fast on boot time, and freaking lightweight, so you can put in any extra you want and nothing more ... how can anyone ask for more ...

Kids Play Too Much There

No, really ... this is not about blaming anyone specifically but, seriously, a change able to brick every device ain't cool guys, not at all.

Cannot Trust Pacman Anymore

The main tool designed to update your system is now at odds with basically every single tutorial about installing Arch Linux I could find on the net. pacman is doomed by an update, philosophically superfluous, able to make the update manager itself a joke.
That's correct: now you need to update all the tutorials out there that say that
to update your system, you should simply pacman -Syu and working magic happens
because every single piece of the internet related to pacman, or to how ArchLinux is updated, won't work anymore.

Cubieboard Is Just One

Every single article that made you happy about installing ArchLinux on this board, or any other, will piss you off if the source files you downloaded are from before the 4th of July 2013: the moment you say hooray, it worked, and keep following instructions telling you that the best thing ever to do after a successful ArchLinux installation is to pacman -Syu, that very first command will nicely fuck up all your effort.

Focusing On Something Else

I am a developer and I love tools. The OS is just one of them to me, the one that makes me work and experiment with things I do for working or things I do for myself.
Linux has been there since forever, and every related OS uses some rootfs structure that worked without problems until now; I don't want to find out that everything using that structure won't work anymore because of some decision that is not practically friendly to any user in the community, since it's not compatible with the single package manager the community is using ... I mean ... seriously ... WTF!!!

Just A Haughty Community ?

The worst part is that I was going to humbly open a post in your forum; then I realized that to do so I have to register and answer this question:
What is the output of "date -u +%V$(uname)|sha256sum|sed 's/\W//g'"?
I've never seen anything dumber than that ... first of all, if that's meant to stop a robot, a robot most likely runs on Linux and is able to take the question wrapped in quotes and produce the result automatically; secondly, the moment I want to register to the forum is probably because one of your community "hackers" made such a dumb mistake that I cannot even test my ArchLinux machine anymore ... so, whoever thought that was a good way to welcome forum members has a very regular IQ, if not lower than that.

Focus On Something Else

The moment a community with a product as vibrant and excellent as ArchLinux starts wasting everybody's time behind philosophical changes that in fact break everything is the moment the community needs to breathe for 5 minutes and think about what the hell is going on, and about what's really needed to make the community better, instead of pissing everybody off with a decision that, no matter how OK and right it was, should NOT have broken all the users who trusted the package manager to simply work as explained and emphasized everywhere on the WEB.
Apologies for the rant, but ... bloody hell, I've got all my boards screwed because of this "little change" in the whole OS.

Even Dumber

The page that is telling us how to update things suggests:
# pacman -Syu --ignore filesystem,bash
# pacman -S bash
# pacman -Su
... too bad the current latest pacman cannot be installed without the updated bash too ... congratulations!

Saturday, June 01, 2013

The node.js Relative Path Case

Right now, it sucks because ___ (as @izs told me to start with): I could not find a simple way to resolve a path from a module that exported a function into another one.
Spoiler: the reason I am trying to resolve paths is to load fresh modules at runtime in polpetta. However, this might be a bad practice.
@WebReflection Lest I commit malpractice, I must tell you this is a terrible idea. Now warned, rock on with your bad self. Hack away :)
Still, my point is that it might be handy to be able to resolve paths relative to the invoker, regardless of the why, even if you should always ask yourself that ;)
This case is quite easy to misunderstand, so I'll skip extra explanations now and put down some code.


The aim of this example file is to log, ASAP, two different path resolutions: the one from the path module, passing through process.cwd(), and the one from the relative.js file itself.
Object.defineProperties(this, {
  parent: {
    get: function () {
      // used later on
      return module.parent;
    }
  },
  resolve: {
    value: function (path) {
      // it will resolve from this file
      // not from the invoker
      return require.resolve(path);
    }
  }
});

// path module resolves relatively
// from the current process.cwd()
console.log(require('path').resolve('relative.js'));

// require resolves relatively from
// the current file
console.log(require.resolve('./relative.js'));
Running this from the node terminal will most likely show something like:
Neither the logs nor the resolutions are actually OK if we would like to resolve relatively from that file's path.
If we would like to use that module's resolve() method, we cannot trust the current path.
// will throw an error

// will pass
If we install that module through npm as global, or even locally in some parent folder, gosh knows where we should start the relative path resolution accordingly with the module itself, you know what I mean?

Being Relative To The Invoker Path

In order to be able to resolve relatively from the invoker, we need to know at least where the invoker is.
Thankfully, this is easy, but be aware of the caching problem:
// relative.js
var path = require('path'),
    relativeDir = path.dirname(
      module.parent.filename
    );
this.resolve = function (module) {
  return require.resolve(
    path.join(relativeDir, module)
  );
};
At this point we can invoke the method as expected, without errors, from the process folder.
// will log the right path
Good, we are able to resolve module names. The problem is ... this works only for the very first module that required relative.js: due to module caching, module.parent will be one, and only one, for every other module too.

A Hacky Solution

In order to avoid the caching problem within the required module itself, I came up with this trick at the end of the file:
// .. same content described above ...

// remove the module itself from the cache
delete require.cache[__filename];
In this way, every single module that will require('./some-path/relative.js') will have a fresh new version of that module, so that module.parent, and its filename, will always be the right one: how cool is that?
I am able to resolve, from any outer module, its path the same way require.resolve(path) would do inside that module, which is exactly what's needed and the goal of require-updated: any module can use paths as if these were resolved from the file itself, in order to require some other file, relative, absolute, or globally installed.
Still I believe there should be a better way to do this ... what do you say?