
Tuesday, May 05, 2015

Stop fooling around with deferred CSS

I was happily testing my latest little contact form page for my own fresh new company when I couldn't resist running it through this Google PageSpeed tool.
There's basically only one point missing in there, which is the following:
Eliminate render-blocking JavaScript and CSS in above-the-fold content
Your page has 1 blocking CSS resources. This causes a delay in rendering your page.

Deferred main CSS is an Anti-Pattern

This practice didn't start with this tool: everyone decided to write some sort of JavaScript-based solution, making JS mandatory even for those who block JS but would still like to see the page as it was meant to be ... apparently, nobody ever questioned this practice against real-world scenarios, so here's a quick recap:
  • it's a very bad User eXperience when a page you can already surf keeps moving under your eyes/fingers because deferred CSS changed every-bloody-box on it ... style changing everything under our eyes is simply disorienting, period!
  • it cannot compete with native applications, which might take a little longer to load the first screen, but FFS you're going to see them the way their designers meant, and not the uncontrolled way some random browser decided to show the naked page
  • it's against W3C standards, where the following is clearly stated:

link should be used where metadata content is expected

The definition of Metadata content is content that sets up the presentation or behavior of the rest of the content.
The list of elements that should be placed inside the head is dead simple: base, link, meta, noscript, script, style, template, title.
Please note that script is indeed also an element that should be in the head, especially to inform the browser ASAP that some JS-related logic might be involved.
Here again we shouldn't fool ourselves with break-prone patterns like async when, for example, the page cannot possibly work without JS; but while script gained the right to support defer and async, link is still "the dangerous tag to avoid for CSS in the head", which is simply ridiculous if you ask me.
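
To make this concrete, here is a minimal sketch of the kind of head I am talking about (the file names are placeholders):

    <head>
      <meta charset="utf-8">
      <title>Contact</title>
      <!-- CSS where metadata content is expected: the head -->
      <link rel="stylesheet" href="style.css">
      <!-- declared ASAP, executed after parsing, without blocking rendering -->
      <script defer src="app.js"></script>
    </head>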

And yet not a reliable analysis

Guess how big the CSS I am talking about is, after minification and served as compressed content? 1.3KB ... and this is how the graph looks when you reach the page:

As a Summary

For a page that triggers DOMContentLoaded in less than 400ms with all stylesheets in, I would expect something more than "you should defer your CSS through JS or put it inline" ... no, I have 4 pages based on the same CSS, why would I even think about wasting time to do something worse?
And please, despite my disappointed tone, don't get me wrong: I love tools like this, because there's always something to learn and improve (as an example, I completely forgot about pngquant, so thanks a lot for pointing that out!).

My take on this little story is: don't ever take anything for correct, right, or absolutely good practice without first understanding the problem and why the eventual solution might, or might not, work, as it turned out in my case.
Thank you for reading, and apologies for not linking the page: showing you a contact form is not the point, the rest is ;-)

19 comments:

  1. This is a misunderstanding of the rule. No one's suggesting you should render without CSS, and no one's suggesting elements should move around the page during load.

    The idea is you inline the CSS needed for first render, then lazyload the rest of the CSS using loadCSS or similar, and of course you should hide / position elements so you don't get a flash of unstyled content, or content that moves as it loads.
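
    As a rough sketch of that pattern (file names made up, and assuming the loadCSS function is already inlined on the page):

      <style>
        /* only the critical, above-the-fold rules */
        body { margin: 0; font: 100%/1.4 sans-serif; }
      </style>
      <script>
        // lazyload the rest of the styles without blocking first render
        loadCSS("full.css");
      </script>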

    I did this with SVGOMG, Smashing Mag do it too. It's a great performance boost.

    You don't need to put a link element in the body for this, but regardless, it's a silly spec rule that you can't; it's already supported in every browser. Browsers also support style elements in body, even if they import external resources. I really like how Firefox & IE handle link elements in this case, and I think Chrome should do the same; it makes staged loading of CSS much easier.

  2. > no one's suggesting elements should move around the page during load.

    that happens if you load asynchronously ... and I am not talking about hidden elements, just margin, padding, table layouts, grid ... etc.

    No, I don't want to make JS mandatory for my layout; I don't want to lazy load via JS.

    There is a gap that needs to be filled eventually in HTML land; I don't want a JS library, because I'd need to lazy load that too.

    I have DOMContentLoaded triggered in less than 500ms, and a tool that tells me I am not doing things the right way. Other tools didn't complain, because they took timing and size into consideration.

    I don't want anyone to see my body "naked" so I won't put that in the link.

    I can create faster, instantly better-looking, more user-friendly web pages without following machine advice, and I am happy with it ... without needing a single line of JS.

  3. * I won't put that link in the body

  4. to rephrase my POV:

    1. this tool should explain more: you think "nobody said that", but it does state things in that way

    2. this tool should consider overall size and DOMContentLoaded timing: not every web page is as big as Smashing Mag

    3. this problem should be solved in HTML land, proposing both defer (which is better than async) and eventually async for links too (see the sketch after this list)

    4. this tool should not tell people to place links down the body, where they do not belong, nor suggest creating pages that will not render as expected in JS-blocked browsers


    Can you please help with any of these points? Thank You!
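
    To be clear on point 3, this is the kind of markup I'd like to exist, purely hypothetical since no browser supports defer or async on link:

      <!-- hypothetical: keep parsing, apply the stylesheet in order once it arrives -->
      <link rel="stylesheet" href="style.css" defer>
      <!-- hypothetical: fetch and apply whenever ready -->
      <link rel="stylesheet" href="extra.css" async>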

  5. I see your point, and I've thought about that once.

    To "bullet-proof" this situation, one would have to make a VERY smart and optimised "starting CSS" with less than 1 kb, with all necessary paddings and colours, inlined and, if possible, included by server-side include routines to be on every page of the website (if it is not a single-page app), especially crafted not to break the layout once the "full" CSS is loaded.
    Either that or a "loading" screen, which is, except in very particular occasions, a nasty thing (using cache and service workers it can be bearable, though, as in this case it is meant to appear just once).
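
    As a sketch, that server-side include idea could look like this (Apache SSI syntax; the path is made up):

      <style>
        /* the server injects the tiny "starting CSS" into every page */
        <!--#include virtual="/css/critical.css" -->
      </style>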

  6. 1. The tool discourages render-blocking external CSS. There are big performance wins to be had here.

    2. DOMContentLoaded is a poor metric compared to first render.

    3. I agree, as you see by the ticket I raised.

    4. Does it recommend that? I believe it recommends loading the CSS with JS. If it asked you to appendChild the link to body, *shrug*, add it to the head instead. In the meantime, I'm not too concerned about JS-blocked browsers; they're like CSS-blocked browsers, and browsers that block the letter "e": if they want to disable important browser features, that's their problem. Of course, though, I'm looking for an HTML solution, as you can see by the ticket.
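
    Something like this, as a bare-bones sketch (the stylesheet name is made up):

      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = 'site.css';
      // append to the head rather than the body
      document.head.appendChild(link);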

  7. And of course you can use noscript as a fallback to load styles; this is exactly what smashingmag does. Smashingmag doesn't just do this because it's "big": SVGOMG is tiny and still gets a benefit.
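
    For completeness, the fallback is as simple as this (file name made up):

      <noscript>
        <link rel="stylesheet" href="full.css">
      </noscript>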

  8. Anyway, all of those hacky rules will be useless pretty soon with the rise of HTTP/2.
    No more extra round trips for CSS, so no more (or much shorter) blocking.

  9. > I believe it recommends loading the CSS with JS

    which is IMO an epic separation of concerns failure in the Web ecosystem.

    I will not use JS to load CSS and yet you won't find many sites faster than this little one.

    Agreed, it's good to know Chrome might block there, but penalizing a page that renders fully CSS-styled (not just DOMContentLoaded) in under 1 second (actually, again, under 500ms in my case) feels like the tool lacks some common sense.

  10. I wonder why nobody already mentioned it - if your CSS is 1.3 kb small - then just inline it in the head of your site. No JS needed.

    What you are complaining about is the result of a poor implementation of the whole "not render blocking" thing. If you do it correctly, every style that is needed to render the above-the-fold content is inlined in the head. Everything else is loaded via loadCSS - the FOUC should not be visible, because it happens "beyond the fold" - and as a fallback for loadCSS the link tags are in the head inside a noscript tag.

    On slow connections FOUC can still happen - but: it's not a bug, it's a feature - I'd rather see some unstyled text (that I can already start reading) on my mobile phone than wait 5 minutes till the CSS finishes downloading on an EDGE connection.

  11. > if your CSS is 1.3 kb small - then just inline it in the head of your site

    and this is the preferred solution for whom exactly?

    I have 4 pages that depend on the same CSS; why exactly would I quadruplicate the amount of data a user should download for my CSS?

    ... I agree ... like I wrote, I wonder why nobody tested this in the real world, considering the real benefits of cached CSS vs a 2 ms difference when the CSS is small anyway.

  12. also ...

    > instead of waiting 5 minutes


    you didn't get a single word written in this post; why are you even complaining about my complaint when you mention a 5-minute wait?

    Which part of less than 500 ms is not clear?

  13. You're talking about a cable connection with 20ms - I am talking about mobile connections (soon to be > 50% of your traffic) with 100 - 1000 ms - so this 1.3kb file can make a difference of seconds or even minutes (if you lose your connection while the page is loading - that's what I meant with the 5 min).

    Caching should not be a big issue if you can optimize everything to the point that the above-the-fold content is delivered within the first 14 kb (roughly what fits into TCP's initial congestion window on the first round trip).

    It might not be worth it in every case - like when the site and the JS/CSS assets are already very small. And it might be that this is even better than having a 5 MB site that must be highly optimized with tricks like those - but in many cases it is highly worth it (like the already mentioned smashingmag).

  14. I like the fact that from time to time somebody comes here thinking this blog hasn't been telling these stories for about 10 years ... so, GTFO with your 5 minutes and please come back to reality; here are your results.

    The blocking part is ONLY the icon, regardless of the fact that I've specified sizes.

    These browsers are dumb enough to download every size anyway ... but that's another story.

    first service

    and again, ignore the "full" part, because it considers favicons, which are not in the rendering path.


    Second tool

    ... and I can test again that even on an old 14K modem it wasn't more than a 3-second wait.

    Can you please just be reasonable and drop this 5-minute FUD? You sound like just a fanboy otherwise, without any common sense, thank you.

  15. and just to extra clarify: if you have a throttled connection you are going to have troubles regardless ... so please, let's try to keep it real here, thanks.

  16. Oh sorry, it wasn't my intention to be mean or anything.

    Your page is 60 kb - I guess it's very hard to reach anything more than 3 sec with modern technology. (Although it's very possible with mobile connections, as I mentioned - if I lose my connection [e.g. in a tunnel] while requesting the 1.3kb file, it doesn't matter at all how big that file is - I just don't get anything to see till I have a connection again.) But yes - that may be a very rare edge case.

    Like I said before - it's great if you can deliver a very lightweight page that serves the purpose. And yes, in those cases the Google PageSpeed recommendations may sound a little silly (after all, it's just a "machine"). But let's be real: 60 kb is far from the average webpage size - and with pages that weigh 1 MB and more, it's very easy to reach loading times of multiple minutes if you do not follow best practices. (And those "best practices" may vary from device to device / connection type to connection type / page size to page size, and need to adapt accordingly.)

    And now I GTFO - I'm sorry if I was offensive - English is not my first language and I may not have the best "feeling" for communicating in a polite way.

  17. I didn't want to be that mean either, but honestly: you've seen for yourself a page that works even under lynx, that looks good in browsers with CSS but without JS, that is mobile friendly, etc. etc. ... I've created one of the simplest "it just fucking works" contact pages, and there's some tool telling me 60KB is too much ... but if you look closer, 60KB isn't the rendering path: 60KB is the total, including "favicons" that dumb browsers download even when you specify a proper size so they don't need them all. I might even get rid of the favicon, because it's completely useless madness in terms of bandwidth ... so, modern tools complain about a bloody legit CSS link that is under 1.5KB minzipped, and don't complain about the dumb browser behavior of downloading 40KB of pointless favicon images per device that they'll never use. That was kinda my point about how screwed these tools, and the developers trusting them, are these days.

  18. I think everyone is missing the point. The rule explicitly says "above-the-fold content", and if you read further in Google's own documentation, they do NOT recommend deferring ALL of your styles.

    They're saying "ABOVE THE FOLD CONTENT" meaning a site might inline ~0.5k of styles to immediately render stuff as it should be seen, then defer everything that is not required for that initial render. This means:

    1) nothing is changing or jumping around the page, because that first render is 100% covered.

    2) there is no FOUC, because that first render is 100% covered.

    Is it still overoptimization? Absolutely YES, for all but the heaviest of sites, those with 200k+ worth of CSS that delays the initial render. In those cases, it is very beneficial to extract a tiny fraction of the CSS and inline it, then load the 199k afterward.

    Maybe Pagespeed needs different rulesets like Yslow has - big site, small site, etc. That way these rules that are overkill for personal blogs don't cause such misunderstandings.

  19. point is ... no, inlining CSS is not a best practice, because it doesn't allow caching, even if it's just one KB. Repeat that for 200 pages and there goes the optimization.

    This tool should analyze real-world performance, in terms of KB, instead of raw semantics ... the CSS was essential in this case, and placed above the content. We should also be capable of recognizing "robot failures" and improving algorithms when we realize they are weak.

