What Is UA Sniffing
UserAgent sniffing means that a generic piece of software relies on a generic string representation of the underlying system. The User Agent is basically considered a unique identifier of "the current software or hardware that is running the app".

In the native applications world the UA could simply be the platform name: if it's "Darwin" it means we are on a Mac platform, while if it's "Win32" or any other "/^Win.*$/" environment, the app reacts, compiles, and executes as if it were on a Windows machine ... and so on with Linux and its distributions.
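To make that native pattern concrete, here is a minimal sketch, assuming a Node.js environment, where process.platform exposes the underlying OS name:

// a minimal sketch, assuming Node.js, where process.platform
// reflects the OS the runtime was built for
switch (process.platform) {
  case "darwin":
    console.log("behaving as if on a Mac");
    break;
  case "win32":
    console.log("behaving as if on Windows");
    break;
  default:
    console.log("linux and friends: " + process.platform);
}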
The "Native" Software Behavior
If you have a hybrid solution, for example one of those setups not allowed anymore and called Hackintosh not long ago, your machine most likely has Windows Starter capable hardware but it's running compiled Objective-C software. How reliable do you think this machine is? Every piece of software will consider it a Mac hardware capable machine.

Should these applications optimize for non Mac hardware? I don't think so ... I mean, that machine was not classified in the first place as a Mac capable machine, it was the user/hacker who decided to do something "nasty", so if something does not work ... who really cares?
Do you really want to provide support for that "random machine in the system"?
I still don't think so ... also, if you know that performance and the provided hardware have reached certain levels in that environment, do you even want to waste time optimizing things for a Netbook user?
I think the reality is that you just create software for the target, or targets, you want to support and nothing else, don't you? ... but of course unexpected newcomers are, hopefully, welcome ...
The Old Web Behavior
UA sniffing has historically been a bad practice on the world wide web. At the very beginning there was only one major, supported browser, Internet Explorer, and it had something like 80% or more of market share. All developers and browser vendors targeted it, and users with a different browser were most likely redirected to a page saying something like: "Your browser is not supported. Please come back with IE!"

Even worse, this was happening at the server side level ... "why that?" Because websites were created, and tested, entirely in Internet Explorer, the unique target for any sort of online business.
Was that a good choice? Today we can say it wasn't, but back then it made sense at the business level.
How many apps do we know that work only on Windows or only on Mac? Many of them, and we are talking about only two platforms.
At least at that point we had a server side degradation into a non-service, completely useless for non-targeted browsers, but ... hey, that was their business, and if they wanted to use ActiveXObject because many things were not possible in other browsers, how can we blame these companies? "Everywhere or nothing"? A nice utopia that won't bring you far in the real world ... nothing, I repeat, nothing works 100% as expected everywhere.
The dream is to reach that point, but stories like Java, .NET vs Mono, Python itself, and of course JavaScript should ring a little bell in every developer's mind ... we can still get close though, at least on the Web side!
The Modern Web Behavior
Recently things have changed quite a lot on the web side and only a few companies still redirect via server side User Agent sniffing. We now have something called runtime feature detection, which is supposed to test browser capabilities at runtime and understand, still at runtime, whether the browser should be redirected or not to a hopefully meaningful fallback or degraded service.

Feature Detection Is Good Because
Well, especially because browser fragmentation is massive, FD can tell us what we need from the current browser, without penalizing anybody in advance.

The potential redirection, or message, happens only if necessary, informing users that their browser is not capable of the features required to grant a decent experience in the current online application/service.
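As a minimal sketch of that "redirect only if necessary" idea, assuming window.JSON and document.querySelector are the features this hypothetical application requires, and that a /fallback.html degraded page exists:

// redirect only when a required feature is actually missing
if (!window.JSON || !document.querySelector) {
  // hypothetical degraded service for incapable browsers
  location.href = "/fallback.html";
}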
FD is also widely suggested for future compatibility with new browsers we may not be able to test, or recognize, with any sort of list present in our server side logic, which is not directly able to understand whether the current browser can run the application/service or not.
Of course, being automatically compatible with newer browsers is both business value, as in "there before we know it", and simplified maintenance of the application/logic itself: if it was working accordingly with certain features, it's most likely going to keep working with newer or improved versions of those features.
In summary, runtime feature detection can be extremely valuable for our business ... but
Feature Detection Is Bad Because
I'm not sure I have to tell you that the first browser with JavaScript support disabled will fail all detections even if theoretically capable ... but let's ignore these cases for now, right?

Well, it's kinda right, 'cause we may have already detected browsers with JS disabled on the server side thanks to request headers or the specific agent ... should I mention the Lynx browser? Try to detect that one via JavaScript ...
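A minimal sketch of that server side shortcut, assuming a Node.js style request object and a hypothetical isLynx helper; text browsers such as Lynx announce themselves at the very start of the User-Agent header:

// JS-less text browsers are trivial to spot before any script runs
function isLynx(request) {
  return /^Lynx\//.test(request.headers["user-agent"] || "");
}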
Back to "real world cases", all techniques used today for runtime features detections are kinda weak ... or better, extremely weak!
Let me give you an example:
Not because I want to disappoint you, but you are potentially going to be wrong in both of the following cases ... why is that?
// the "shimmable"
if (!("forEach" in []) || !Array.prototype.forEach) {
// you wish this gonna fix everything, uh? ...
Array.prototype.forEach = function () { ... };
}
// the unshimmable
if (!document.createElement("canvas").getContext("2d")) {
// no canvas support ... you wish to know here ...
}
Even if Array.prototype.forEach is exposed, and this is the only Array extra you need, things may still go wrong. For example, the first shim will never be executed when "forEach" in [] is true, even if that shim would have solved our problem because the native implementation is broken.
A bug I filed a few days ago demonstrates that we cannot really trust the simple fact that a method is there: we should write a whole test suite for a single method in order to be sure everything will work as expected, OR we should write unit, acceptance, integration, and functional tests to be sure that a bloody browser works as expected in our application.
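To give an idea of the cost, here is a sketch of a sanity check for that single method, via a hypothetical forEachWorks helper verifying only the handful of behaviors an application might rely on:

// existence is not enough: verify the behavior we actually need
function forEachWorks() {
  var result = [];
  try {
    [1, 2].forEach(function (value, i) {
      result.push(value + i);
    });
  } catch (o_O) {
    return false;
  }
  return result.length === 2 && result[0] === 1 && result[1] === 3;
}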
The same is valid for the classic canvas capability ... once we have it, do we really test that every method works as expected? And if we need only a single method out of the canvas API, how can we verify that this method is there and works as expected without involving, for that single test, parts of the API that may not work, even though we don't care since we need only the first one?
I am talking about drawImage, for example, in old Symbian browsers, where canvas is exposed but drawImage does not visually draw anything on the element ... nice, isn't it?
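Catching that case at runtime means actually drawing and reading pixels back, something like this sketch with a hypothetical drawImageWorks helper, assuming img is an already loaded, fully opaque, same domain image:

// verify drawImage really draws: paint onto a 1x1 canvas
// and check that the pixel is not still transparent black
function drawImageWorks(img) {
  var canvas = document.createElement("canvas"),
      context = canvas.getContext && canvas.getContext("2d");
  if (!context) return false;
  canvas.width = canvas.height = 1;
  context.drawImage(img, 0, 0);
  return context.getImageData(0, 0, 1, 1).data[3] !== 0;
}

Now multiply that kind of test by every single method you rely on ...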
You Cannot Detect Everything Runtime
... or better, if you do, most likely every user will have to wait a few minutes before the whole test suite goes green, especially in mobile browsers where each of these tests takes ages, burning battery life, CPU clocks, RAM, and everything else, before the page can even be visualized, since we would like to redirect users before they can see that the experience is already broken, wouldn't we?

IT Is Not Black Or White
... you think so? I think IT is more about "what's the most convenient solution for this problem", assuming there is, generally speaking, no single best solution to a specific problem, since every problem can be solved differently, and in a better way, according to the surrounding environment.

So how do we brainstorm all those edge cases that obviously cannot be solved at runtime in a meaningful, reliable way?
I want to provide the same experience to as many users as possible, but thanks to my tests I have already found that users X, Y, and Z cannot possibly be compatible with the application/service I am trying to offer.
If I detect at runtime everything I need for my app, assuming that is even possible, every browser I already know has no problems will be penalized because of the non-updated, low market share, problematic alternatives.
If I sniff the User Agent against a list of browsers I already know I cannot possibly support, due to missing and unshimmable features, how much faster will every other browser I am interested in be on startup?
Best Solution Now
If you ask me, today, and especially on the mobile side, we have 3 categories of browsers:

- those almost there
- those not there yet
- those will never be there
In a business logic you don't even want to waste time on the third category ... "money for nothing", as Mark Knopfler would say.
You also do not want to penalize the most interesting browser categories with a massive amount, size and computation speaking, of feature detections ... I mean, when we know certain browsers are crap and a minority, server side User Agent sniffing is the most suitable solution ever, providing any sort of possible fallback, or just an info page if there is no budget for that fallback.
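A minimal server side sketch of that idea, assuming Node.js, a hypothetical blacklist regular expression, and a hypothetical serveApplication function:

// browsers known to lack unshimmable features go to the fallback;
// everybody else reaches the app without paying for runtime tests
var unsupported = /Symbian|NetFront/; // hypothetical blacklist
function handler(request, response) {
  var ua = request.headers["user-agent"] || "";
  if (unsupported.test(ua)) {
    response.writeHead(302, {Location: "/fallback.html"});
    response.end();
  } else {
    serveApplication(request, response); // hypothetical
  }
}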
But what about the second category?
Well, it depends ... if the second category has a decent market share you may try to support it and let it pass all your tests, but at what price?
If the whole application has to be different for that single browser, and it holds less than 10% of the global market share, reflected in 1% of your users, do you really want to spend all possible effort to make it work?
I would say it makes sense only if this browser has few, shimmable, problems ... otherwise the best place to handle this browser is directly the server side, don't you think?
About the first category ... well, it's still about guessing, hoping, praying that things go as expected, but at least for these browsers we can run all our tests against them and be sure that things are at least similar.
I am not talking about pixel perfection, which is bad as well in most Web related cases; I am talking about providing a decent experience in your Web application/software/page that strongly relies on JavaScript and cannot possibly work without it.
As Summary
A few things must be reconsidered in the current Web era. Kangax already explained that things today are different regarding native prototype pollution, especially via Object.defineProperty and the non enumerable flag, but for years we have all been convinced that extending those prototypes was absolutely something to avoid.

Well, while I agree with Juriy on the latter topic, I am still a problem solver who does not exclude any possibility, including User Agent sniffing, when it comes to solving a real world problem, rather than having fantasies about ideals that unfortunately do not reflect the reality of our daily web development role.
Just think about it ;)