I believe we all agree that Google Closure Compiler is one of the most advanced tools we have for JavaScript development:
- fast parsing with pretty printed or minified output, plus warnings about possible problems or errors
- type check analysis before it was cool, for example via
java -jar compiler.jar --jscomp_warning=checkTypes --compilation_level=SIMPLE_OPTIMIZATIONS ~/code/file.js
- creation of source maps for an easier debugging experience with the real, minified code (see the sketch right after this list)
- best raw compression ratio, dead code elimination, and more with
@compilation_level ADVANCED_OPTIMIZATIONS
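As a quick reference for the source map point above, here is a minimal sketch of how a map can be requested from the command line; the flag spellings match the compiler options I know of, but double check them against your compiler version:

java -jar compiler.jar \
  --compilation_level=SIMPLE_OPTIMIZATIONS \
  --js_output_file=file.min.js \
  --create_source_map=file.min.js.map \
  ~/code/file.js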
I also believe the latter point is a trap for developers and projects: it is unable to scale and capable of causing many headaches and shenanigans.
If you want to know why, here is a list of the top 5 facts I always talk about when some developer comes to me claiming that the solution to all problems is advanced optimizations.
1. Incompatible With ES5
Every current version of a browser is compatible with ES5 features, but the tool that is supposed to help us is able to change the program, creating security problems or misbehaving logic in our code.
No "use strict";
This is the very first problem. It is not so much about the with() statement or eval(), but rather about the possibility of polluting the global scope by mistake, so that any namespace could be destroyed by accident.
function setName(name) {
  "use strict";
  this.name = name;
}
var obj = {setName: setName};
obj.setName("name");
The above code will produce the warning
dangerous use of the global this object at line 7 character 0: this.name = name;
which is correct only because
- it does not understand how the function is used
- the "use strict"; directive has been removed
If you think that's OK, you have no idea how many libraries out there might use the window.name property, the one that never changes in a tab's lifecycle; and name could have been any runtime or predefined property used by your library or by any dependency your library has. ... is this not enough? Let's see more then :)
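To make the hazard concrete, here is a minimal sketch, reusing the setName example above, of what can happen once the directive has been stripped and the function is invoked without a receiver (a deliberately contrived call, but any callback-style invocation would behave the same):

// sloppy mode: with "use strict" stripped away, a detached
// call binds `this` to the global object
function setName(name) {
  this.name = name;
}
var detached = setName;
detached("oops");
// window.name, the property that survives the whole tab
// lifecycle, has now been silently overwritten
alert(window.name); // "oops"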
Update on "use strict";
As @slicknet pointed out, there is the possibility to support the "use strict" directive via the --language=ECMASCRIPT5_STRICT command line argument. This flag comes in any case with extra related bugs.
However, I didn't know that flag, and this is cool, but "use strict"; in the JavaScript world is already a directive, so that flag should not be necessary at all. We are in an era where ES3-only browsers are a minority, and tools should be updated accordingly, IMHO.
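For completeness, a sketch of how that might look on the command line; note that the compiler releases I am aware of spell the flag --language_in, so treat the exact spelling as something to verify against your version:

java -jar compiler.jar \
  --language_in=ECMASCRIPT5_STRICT \
  --compilation_level=ADVANCED_OPTIMIZATIONS \
  ~/code/file.js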
This is another reason modules are not friendly with Closure Compiler Advanced: portability and security are still under attack. Please read on!
Object.defineProperty(shenanigans)
Since strings are the only thing that cannot be changed, guess what the output is here:
var program = Object.defineProperty(
  {},
  "theUntouchablePropertyName",
  {
    writable: true,
    configurable: true,
    enumerable: true,
    value: 123
  }
);
alert(program.theUntouchablePropertyName);
You can test almost all of these snippets via the Closure Compiler Service to double check, if you want. Anyway, here is what that will become:
var a=Object.defineProperty({},
"theUntouchablePropertyName",
// **STATIC WHITE LIST**
{writable:!0,configurable:!0,enumerable:!0,value:123});
alert(a.a); // <== You see this ?
If you are thinking "sure, you have to access that property as program["theUntouchablePropertyName"]", bear with me 'cause I am getting there ...
The problems we have with the above snippet are several. The very first one is that with massive usage of inline ES5 descriptors there is no final size gain, because of the static list of properties that Closure Compiler won't rename. All DOM classes and properties cannot be touched, together with event properties and basically everything defined by the W3C, plus all globals and non-standard features such as window.opera, which is preserved, and I have no idea what else; but the elephant in the room is that prefixed properties are not supported either!
function doAmazingStuff(obj) {
  if (obj.type === "custom:event") {
    obj.target.onwebkitTransitionEnd();
  }
}
doAmazingStuff({});
The above snippet will result in this one:
var a={};"custom:event"===a.type&&a.target.a();
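The only defensive option I can think of, anticipating the quoting convention discussed below, is to quote the prefixed member so the compiler leaves it alone; a sketch of the trade-off, not a recommendation:

function doAmazingStuff(obj) {
  if (obj.type === "custom:event") {
    // quoted access is preserved verbatim by ADVANCED mode,
    // at the cost of a second, inconsistent style of access
    obj.target["onwebkitTransitionEnd"]();
  }
}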
Why This Is A Problem
You might say that, as with everything else, the property should be accessed dynamically. But how come properties such as writable and enumerable are maintained, while anything that could come from ES6 or a more recent proposal might not be supported and is thus compromised? What if some browser has some special case that Closure Compiler is not aware of, in a reality where every browser might have some special case in its core?
Are we really willing to write inconsistent-looking code such as
var descriptor = {
  configurable: true,
  "inheritable": false
};
to avoid problems? Uh wait ... I know where you come from ...
2. Requires A Different Syntax
When I tweeted about another-js, somebody complained about the fact that developers will get really bored of writing obj.get("key") and obj.set("key", value) all the time ... and yes, I might agree that it is much more typing for large applications. While that project is about the ability to observe everything that is going on, it is not required at all to follow that convention: when we know an object should not be observable, we can access properties directly in order to get or set values, as we have always done.
In the Closure Compiler Advanced world we are forced to follow a convention of accessing properties dynamically rather than statically.
// JavaScript
obj.newProp = obj.oldProp + 123;
// Closure Compiler Advanced (CCA)
obj["newProp"] = obj["oldProp"] + 123;
Holy crap, I have already spent at least twice the time writing the second line ... and this is because I am planning to keep this old function working:
function doMoarWithDescriptors(d) {
  for (var key in d) {
    if (key === "newProp") {
      // would never happen after CCA
      // if not changing the code
    }
  }
}
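To spell out why that branch dies, here is a minimal before/after sketch: ADVANCED mode renames the property but cannot touch the string literal, so the comparison silently stops matching (the renamed identifiers are illustrative, the compiler picks its own):

// before compilation
var obj = {};
obj.newProp = 123;
for (var key in obj) {
  key === "newProp"; // true
}

// after ADVANCED compilation (illustrative renaming)
var a = {};
a.a = 123;
for (var b in a) {
  b === "newProp"; // false: the key is now "a"
}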
So, not only do we need to type more, with less help from autocomplete, since strings are strings and not every IDE supports dynamic access, but every time we want to be sure that a key is the one we expect, in a state machine or a more common switch(), we need to do refactoring, with all the consequences we know refactoring has in big as well as in smaller projects!
... and the best part is ...
3. More Things To Be Worried About
The funniest nonsense about Closure Compiler Advanced is that while everyone is trying to change JavaScript, because they believe it is not good as it is and it has too many quirks in the specification, Closure Compiler Advanced adds even more syntax problems to be worried about ... isn't this lovely insane?
Danger Indeed!
The unique id used to describe problems caused by Closure Compiler Advanced is #danger, because disasters might happen to everything if you don't follow those rules ... but wait a minute ... shouldn't dev tools help, rather than create more problems to be aware of? Ask yourself ... you are probably still struggling to understand JavaScript coercion, but you had to learn that obj["prop"] is a better way to access a property ... you know what I mean, right?
Even worse, something described in that solution list is the most common use case in modern JavaScript, and guess what ...
4. It Does Not Scale With RequireJS/AMD Modules
True story bro, and the reasons are many!
Requires Extra Steps In The Loop
This is not a big deal, but you must be able to create code that can be put together and then split apart again. While this might sound easy, the ability to eliminate dead code and understand the code in such a way that closures, namespaces, and about everything else could be inlined might be a problem: how do you know, for sure, which piece of code can be split out and still work as it did before? There is no way you can run unit tests to prove that part is still working, unless you compile all tests together and are able to split them apart too. How much extra effort, and how many potential trouble-making variables in the middle, for a flow that should be as linear and easy as possible?
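To make the module problem concrete, consider a minimal AMD-style pair of files (hypothetical names, mine): if each file goes through ADVANCED mode separately, the renamed property on the exported object no longer matches what the consumer expects.

// moduleA.js - compiled on its own, ADVANCED mode
// may rename `answer` to something like `a`
define(function () {
  return {answer: 42};
});

// main.js - compiled separately, still looking for `answer`
require(["moduleA"], function (moduleA) {
  alert(moduleA.answer); // undefined once moduleA was renamed
});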
Broken Source Map
That's correct: it is not only about splitting the code back apart when you want to preserve a lazy module loading approach; the source map, the coolest feature ever, won't match a thing anymore, because it will be generated for the whole block of all modules and not for each single one, so rows and columns will be screwed ^_^
Worse Performance
In order to export a module and make it compatible with Closure Compiler Advanced after the build, which could be the case if you don't want to put the module source online, or you upload to a CDN only its already minified version, either you create the exports.js version of the file, which means you are assuming every user bases the build process on Closure Compiler, or you need to manually fix the exports so that methods and properties won't be changed.
// original module for CCA
module["exports"] = {
  "dontTouchThis": genericValue
};
If you are the one using Closure Compiler to create your own script, and you use Advanced Optimizations, the above code will result in
module.exports={dontTouchThis:genericValue};
Cool, uh? The only problem is that if you pass that code through the Closure Compiler again you are screwed, because the result will be
module.a={b:genericValue};
Accordingly, you have to change back the property that should never be touched, resulting in a minified version that is
module["exports"]={"dontTouchThis":genericValue};
Not every browser optimizes that kind of property access at runtime the way Chrome does: you can see mobile browsers especially, IE, and others going 3X or more faster with direct property access rather than square brackets plus a string. I know, this could be optimized by browsers but ... until now it is not, and I don't really want to write two kinds of property access all the time.
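If you want to verify that claim on your own engines, a naive micro-benchmark along these lines is enough to show the gap where it exists; the numbers and property names here are illustrative only:

var obj = {dontTouchThis: 1}, sum = 0, i, t;

// direct, static property access
t = Date.now();
for (i = 0; i < 1e7; i++) sum += obj.dontTouchThis;
alert("direct: " + (Date.now() - t) + "ms");

// dynamic, string based access
t = Date.now();
for (i = 0; i < 1e7; i++) sum += obj["dontTouchThis"];
alert("brackets: " + (Date.now() - t) + "ms");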
5. A Compromised Security
An automation is an automation, and the current status of software AI is far from perfect or superior, period. This means that sentences like Crockford's one about JSLint, regardless of the excellent interview, should be taken metaphorically and nothing more:
JSLint is smarter than I am at JavaScript and it is probably smarter than you, too. So as painful as it is: Follow its advice. It is trying to help you.
If you think the same about Closure Compiler in Advanced mode, here is the easy answer: this tool has one task to do, and that task is not to understand the code the way developers do.
For example, if you want to avoid having your whole namespace removed from the global scope by mistake, and you don't have ES5 possibilities, as broken as those are in the Closure Compiler world, you will certainly declare your namespace as:
// if already defined in the global scope,
// do not redefine it!
// handy for libraries required in different
// other libraries
var myNamespace = myNamespace || {
  utility: {
    // the magic
  }
};
As you know, var-defined variables cannot be deleted.
delete myNamespace;
delete window.myNamespace;
delete this.myNamespace;
// it doesn't matter
typeof myNamespace; // "object" !!!
The point of this pattern is that there is no way someone can delete that namespace by accident ... how could that even happen? Here is an example:
// let's pretend this is a global utility
window["clearProperties"] = function () {
  var self = this;
  for (var key in self) {
    delete self[key];
  }
};
This will compile into the next snippet, currently without warnings:
window.clearProperties=function(){for(var a in this)delete this[a]};
Right ... I have to tell you something ... first of all:
That Is A Bug
Being software, Closure Compiler can have bugs like any other software you are creating or you know of. Since, as I have said, it is not really smarter than us, Closure Compiler created in this case a piece of code that, if you pass it again to the Closure Compiler itself, will throw errors all over the place:
JSC_USED_GLOBAL_THIS: dangerous use of the global this object at line 1 character 59
window.clearProperties=function(){for(var a in this)delete this[a]};
In case you are wondering what the initial bug is: using the next piece of code, without var self = this; upfront, causes warnings.
window["clearProperties"] = function () {
// GCC not that smart here
// it does not understand addressed `this`
// warnings only in this case
for(var key in this) {
delete this[key];
}
};
So, let me recap:
Closure Compiler generates code you should not write ... got it?
That's how much advice from any automatic code transformation is worth ...
That Namespace Can Be Deleted
So here is my main point: you don't want to make the namespace deletable, but if you write just the very first snippet, with a variable declaration that nothing else references, dead code elimination kicks in and the output will most likely be this one:
Your code compiled down to 0 bytes!
And no warnings at all ... great ... so how about another boring syntax change to learn, such as:
var myNamespace =
  window.myNamespace =
  myNamespace || {
    utility: {
      // the magic
    }
  };
No way, that crap compiles to
var b=window.a=b||{b:{}};
and with one warning, which means we have to double check the warning to be sure it is safe; plus we have double global namespace pollution with both a and b ... awesome ... now ask yourself:
should I waste my time investigating about Closure Compiler Advanced problems, rather than focus on software development?
Closure Advanced Is Just Bad
I don't know why the advanced optimization option became so popular, but I have personally never used it and never suggested it. The amount of extra crap caused by this obtrusive directive, which you don't even want to think about, is massive!
Myths About Size
One reason I have been thinking about is that somebody said this option is able to make miracles within the code. Apart from the fact that I have never considered a broken environment a worthy miracle, if the miracle is the elimination of dead code, so that even if you want to hook in at runtime you are screwed ... well, enjoy! However, that code is 90% of the time there for a reason, and if you have something like 30% of code that should not be there once minified, ask yourself why you are using Closure Compiler rather than cleaning up your bloody messed up and pointless code: profit!
The Size That Should Matter
Let's be realistic ... if you have 1MB of JavaScript, the problem is not in those few bytes you could save by minifying properties. You are also probably not considering that these two snippets are equivalent:
// how you write it for CCA
window["namespace"] = {
  "dontChangeSinceExported": {}
};

// how it is in JS
var namespace = {
  dontChangeSinceExported: {}
};
The funny part is that after minification the second, normal, natural, fully-JS approach wins!
// after CCA
window.namespace={dontChangeSinceExported:{}};
// after SIMPLE_OPTIMIZATION
var namespace={dontChangeSinceExported:{}};
3 bytes less and a safer namespace ... hell yeah!!!
In Summary
Closure Compiler is really the best tool I know for many reasons, including the ability to create warnings if you don't respect the JavaDoc-like annotations, but the ADVANCED_OPTIMIZATIONS flag has so many problems that I wonder if it is really worth it for you to adopt it.
Simply minified code, produced by a minifier able to generate source maps, understand type checks, and keep your logic, as a developer, always available, is a full win that, together with gzip compression, will never be the **real** bottleneck.
Dead code elimination is a good hint for a tool, not for a minifier able to screw up logic and code. I'd love to see dead code elimination detached from the minification logic, or performed upfront with the ability to understand what really is dead code and drop it, rather than letting an automation decide it was OK to do.
My 2 cents
A Tiny Update
One thing I forgot to mention is that, AFAIK, Closure Compiler Advanced supports JavaDoc-like documentation by default, but it is not capable of even telling us, inside the generated HTML, whether a property should be accessed as an exported one, via obj["property"], or as a mungeable one, as obj.property could be.