Comments on Web Reflection: On My Vicious JavaScript Code

Andrea Giammarchi — 2009-02-23 10:58
@kangax, guys, I have modified the source since I will probably add more and more stuff, and I agree with you that a proposal should be as clear as possible.
As a summary, I have updated the extend proposal (http://www.devpro.it/code/195.html), hoping it will be clearer for everybody.

Andrea Giammarchi — 2009-02-22 20:35
kangaz => kangax
works => words
:)

Andrea Giammarchi — 2009-02-22 20:33
ywg, you demonstrated that with a few reductions you already have .1% of the final size.
Now try to imagine that for an entire library (for myself and nobody else) all my practices could push this reduction up to 10% or more, which for 100 KB could mean 30 KB instead of 40.
I already said my style is probably too maniacal, but as long as practical results do not demonstrate I am wrong, I will stay with my point (what I mean is that I prefer the kangaz point rather than a test which demonstrates my way still produces a smaller output, even if it is not that much smaller, and especially when removing short works such as div, script, IE, and others).

I am not planning to rewrite the Prototype library, but if you want to change every variable name, or every function which uses up to 3 different arguments, with my practices, global prototypes included and everything else, I am sure the margin can only be bigger.

Finally, I think my code is readable enough for short functions, but I got your points (all of you).

Cheers.

Anonymous — 2009-02-22 20:09
'give me a practical case to study'

Take prototype.js, and do the following replacements:

IE (56 occurrences)
Opera (20 occurrences)
WebKit (11 occurrences)
Gecko (4 occurrences)
MobileSafari (2 occurrences)
replace by ==> Browser (39 + 92 occurrences)

apply (12 occurrences)
call (47 occurrences)
replace by ==> function (572 + 59 occurrences)

div (29 occurrences)
script (37 occurrences)
style (127 occurrences)
form (108 occurrences)
replace by ==> for (234 + 301 occurrences)

That's a big dictionary reduction, and the file size stays approximately the same. Despite that dictionary reduction, the compression gain is near nothing:

original prototype = 134057 bytes
original prototype (gzip) = 30498 bytes
compression rate = 22.7%

reduced prototype = 134137 bytes
reduced prototype (gzip) = 30374 bytes
compression rate = 22.6%

We have reduced the dictionary, but we did not get any significant gain. Dictionary reduction has no impact because the Huffman tree is built on the output of the LZ77 algorithm, not on the original stream.

I don't see how I can give you a better use case.

Andrea Giammarchi — 2009-02-22 19:26
ywg, what you call noise are different character combinations ... so, again, give me a practical case to study and we can go on talking

Anonymous — 2009-02-22 18:47
Spaces were just for the sake of the example; replace them with any set of 5~7 words (with regular repartition) if you prefer.

Andrea Giammarchi — 2009-02-22 18:27
spaces are minifier stuff, I wrote it :)
The rest is superfluous extreme optimization, which is probably not worth it in a team but could let me produce best-ratio scripts. This is simply my point.

Anonymous — 2009-02-22 18:08
I'm very suspicious when posting on blogs... Sadly this kind of "incident" has happened to me many times.

As it was not your intention, please accept my apologies.

Going back to the topic:

Of course dictionary size is important, but considering the fact that your code passes through the LZ algorithm first, its impact is greatly reduced.

Using \w and replacing every word is a biased example; it is no different from your first demonstration, where you used perfect redundancy.

With your code style you only reduce the dictionary by a small number of words... Something more relevant is to replace whitespace.

Try a replace \s -> s on prototype.js and gzip both afterwards:

original prototype.js = 30498 bytes
dictionary-reduced prototype.js = 30371 bytes

A benefit of 127 bytes... Considering that even on the crappiest network you'll have an MTU of at least 1300 bytes, you don't get any benefit.

IMHO the legibility loss is not worth it: 'Early optimization is the root of all evil'.

Andrea Giammarchi — 2009-02-22 17:26
Going back on topic, if it is possible ... take a library bigger than 15 KB, replace \w+ with "same" and compress it.
Then compare the result with the original library compressed.
The more occurrences you have, the smaller the size will be via common compressors.
This is always true for the packer algo, as an example, while it could be untrue in a theoretical scenario we both cannot reproduce.

Andrea Giammarchi — 2009-02-22 17:19
P.S. and I am not offended by false thoughts like the one you wrote before. Regards

Andrea Giammarchi — 2009-02-22 17:14
ehr, I did not notice the post, it was in the list before the last kangax one. I am not that kind of person; unless there is offending stuff, I am here to write, to listen to you, and to learn from other developers. Regards

Anonymous — 2009-02-22 16:12
@Andrea/WebReflection 'So I am sorry, but I am still asking you to demonstrate that for Huffman based compressors, dictionary length is not important'

I'm very disappointed; I took the time to write you an argued reply showing how biased and irrelevant your demonstration is. You chose not to publish it, and instead write this comment pretending I didn't provide any argument.

This is very childish... continue to truncate people's posts so you can look like you're right on your own blog.

PS: as you will probably not publish this comment either, I'll also post it on Ajaxian.

Andrea Giammarchi — 2009-02-22 15:57
P.S. the "it does not matter" means that I simply exposed my point of view without pretending you like or approve it - aka: I wrote my reasons, I am open to read yours.

Andrea Giammarchi — 2009-02-22 15:46
@kangax, all this post is based on the differences between me coding for myself, showing a little function, and team collaboration.
In a team, of course, rules are different, and I have worked in teams enough to say that my vicious code is not that good for a team.
On the other hand, tell me how often you need to use a native constructor in an entire library ... it usually happens a couple of times, and mainly to use one of its prototype methods. As I wrote in the post, if you have a variable which inherits from that constructor without overrides, there is no reason at all to look for the native one when that variable's method will be in scope and shorter to write (and, since it is in scope, faster to execute as well).
We all use closures to avoid conflicts with the external environment, so why do you think this way is so bad? It is like setting a variable "$" to define your library and saying: no, I cannot use it since there are other "$" libraries outside. I cannot spot this big difference, and I think that if we are in a closure and we understand the meaning of a closure, the limits are only those we choose and nothing else.
Finally, of course I care about readers and I would like everybody to understand my code, but that's why I am here: if there is something not clear, I can explain without problems.
So far, in a couple of years of posts, rarely has somebody told me: what is that?

Anonymous — 2009-02-22 15:33
I'm disappointed to hear these arguments of yours. The minification benefit is nothing compared to the unexpected pesky bugs that you, or another developer (more likely), will run into. I'm surprised that you, having quite some experience writing/maintaining code - as I understand it - don't see this.

When you edit some part of the app and you need to use some of the native objects, do you really want to scan the entire scope for variables named after those native objects? More often than not, the scope of a function that's being edited is not just a few-line block. It's silly to waste time on things like that.

You don't care if a reader likes it? It's part of your craziness?

Come on, that's just so arrogant of you. Do you not care about collaboration either?

Andrea Giammarchi — 2009-02-22 15:30
So I am sorry, but I am still asking you to demonstrate that for Huffman-based compressors, dictionary length is not important :)

Anonymous — 2009-02-22 15:18
@Andrea

I post anonymously because I don't have any Blogger and/or OpenID account, not to hide myself. You may see me on Ajaxian posting under the name 'Ywg'.

Don't take it as an offense, but I still think your size argument is wrong.
The example is not relevant; it has neither the characteristics of real code nor a sufficient length to produce a reliable result.

In the first case you produce a perfectly redundant string:

'TestMe.TestMe.TestMe.TestMe.TestMe...'

which, after the first compression step, will result in something like:

'TestMe.TestMe.TestMe...' [lz77 buffer][back_pointer][back_pointer]

where each [back_pointer] contains several occurrences of 'TestMe.', and so there are very few back-pointers.

In the second example you produce a string with very little redundancy and a lot of noise:

'test01.test02.test03.test04.test05.test06.test07.test08.test09.test10.test11...' [lz77 buffer][back_pointer]32[back_pointer]33[back_pointer]03[back_pointer]34...

Each [back_pointer] can only cover one occurrence of '.test'; this results in a lot of back-pointers each pointing to only five bytes. Considering that a Lempel-Ziv pointer is usually made of 2 bytes, that's a lot of overhead.

In fact it may be even more broken: your input string is so small that I'm not even sure you fill up the LZ buffer and get any compression at all before building the Huffman tree.

Try with a real-world code sample of at least 15 KB.

PS: Sorry, I didn't understand your last question (my English is a bit limited...).

Anonymous — 2009-02-22 15:12
I only posted the first comment, though at the time I could only post as anonymous; this 2nd time I can actually give a name and URL...

Andrea Giammarchi — 2009-02-22 14:35
@Anonymous, demonstration in the update at the end of the post.
I am still willing to learn something from you, but a link to specs I have read I don't know how many times is probably not enough, is it? Regards

Andrea Giammarchi — 2009-02-22 14:24
P.S. why on earth do you write comments without a name? Don't you think that in this way your critique/point automatically becomes less reliable?

Andrea Giammarchi — 2009-02-22 14:23
Anonymous, I simply tested to confirm my point and ... yeah, a reduced dictionary makes my code's final size smaller with every common technique (packer, gzip, deflate).

Just try to gzip the simple toString example I showed, or try to demonstrate that a file with 80 "testMe", once compressed, is bigger than a file with 80 "test01", "test02", ..., "test80"

Anonymous — 2009-02-22 13:46
Your whole point about code weight is based on a wrong belief:

Gzip/deflate runs a Lempel-Ziv 77 algorithm *before* building the Huffman tree. Therefore, having duplicate strings or 'reducing' your dictionary has virtually no impact on the compression result.

http://www.gzip.org/algorithm.txt

Anonymous — 2009-02-22 03:22
Great post, it had me thinking. I love to read heretical, well-founded arguments about established conventions; it makes you question everything, and that's a good thing.

By the way, there is another con to this coding style: some syntax highlighters will make a mess of the variable coloring (it also happens on this blog, if you notice). And another pro: you get a much deeper understanding of the language when coding like this (although this is a rather temporary pro; after you get used to it, it no longer applies).
I also agree with Anonymous: when code is posted publicly, it has to be cleaned up first, in order not to confuse the newbies.

Andrea Giammarchi — 2009-02-21 21:51
Fortunately it does not happen every time :D

But yes, I agree with you, though I consider my last proposal simple to understand ;)

Anonymous — 2009-02-21 19:37
I use some code-style tricks myself as well, but when posting code to the public, it's best to clean up your code so that even apprentice JavaScripters can understand it, or better yet, to avoid confusing programmers coming from another language.

For lack of a better reason, it prevents other JavaScripters from commenting on your code style each and every time. =P