Yesterday I read a blog post which was wholly negative about the hashbang (#!) URL scheme.
However, the post was inaccurate about the drawbacks of hashbangs, and I think its author failed to recognize that they are sometimes necessary. So what, someone is wrong on the internet; that wouldn't normally bother me, but when I saw the arguments being propagated, I felt it was necessary to post corrections.
First, in rebuttal to the post I read:
Caching, by proxy server or otherwise, is not broken by using AJAX or even a hashbang. The JS/XML fragments served for page content are as eligible for caching as any other content. On the contrary, AJAX often improves cacheability: instead of serving one entire webpage whose content is rendered uncacheable by small pockets of dynamic content, JS can compose the HTML from separate JS/XML fragments, each with its own expiry headers.
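As a sketch of that composition idea (the fragment names and the `composePage` helper here are invented for illustration, not taken from any real site):

```javascript
// Each fragment below would be served as its own resource with its own
// Cache-Control/Expires headers -- e.g. the navigation cached for a day,
// a stock ticker for ten seconds -- so proxies can cache each one
// independently instead of discarding the whole page.
function composePage(fragments) {
  // Stitch independently cached HTML fragments into one document body.
  return fragments.map(f => `<div id="${f.id}">${f.html}</div>`).join("\n");
}

// In the browser this would typically be driven by fetch(), roughly:
//   const html = await fetch("/fragments/nav").then(r => r.text());
// with each response carrying its own expiry headers set by the server.

const page = composePage([
  { id: "nav",    html: "<ul>...</ul>" },      // long-lived, highly cacheable
  { id: "ticker", html: "<span>+1.2%</span>" } // short-lived, still cacheable
]);
```

The point is that the cache granularity moves from the whole page down to the fragment, which is usually a net win.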
It is quite true that only crawlers which comply with RFC 2396 will recognize the content. But if you need what hashbangs provide (see below), you are no worse off than if you had just used a plain hash. And if you anticipate that crawler support will see broad adoption (which I do - how many search engines can afford to ignore Facebook and Twitter?), then the concern is diminished.
I don’t see any negative implications for Microformats other than those that were already present in the adoption of AJAX.
The Facebook “Like” widget and other such services already have access (via JS) to the full URL of the page, and this includes the fragment. Such services simply need to preserve the fragment in the case that it starts with a hashbang.
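A minimal sketch of what "preserve the fragment" means for such a widget (the `canonicalShareUrl` helper is hypothetical, not any real widget's API):

```javascript
// Sharing widgets conventionally strip the fragment when recording a page
// URL, because a plain anchor (#comments) still points at the same content.
// A hashbang fragment identifies *different* content, so it must survive.
function canonicalShareUrl(href) {
  const url = new URL(href);
  // Keep the fragment only when it is a hashbang; for plain same-page
  // anchors the base URL is the right thing to share.
  if (!url.hash.startsWith("#!")) url.hash = "";
  return url.toString();
}

canonicalShareUrl("https://example.com/profile#!/photos/42");
// → "https://example.com/profile#!/photos/42"  (hashbang preserved)
canonicalShareUrl("https://example.com/article#comments");
// → "https://example.com/article"  (plain anchor dropped)
```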
Finally, the author dismisses adoption of hashbangs with:
Engineers will mutter something about preserving state within an Ajax application. And frankly, that’s a ridiculous reason for breaking URLs like that.
which is a blasé dismissal of one of the engineer’s core responsibilities: to preserve application state in a way that is consistent with the user’s expectations. It boils down to this:
If you cannot serve your application’s state from real, server-rendered URLs, but you still want that state to be bookmarkable and crawlable, then you must use hashbangs. Occasionally, for reasons of usability/performance, the first condition isn’t an option - and who wouldn’t want the other two?
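To make concrete what preserving state in a hashbang looks like, here is a minimal sketch; it assumes the state is a flat object of strings, and the `#!/view=…` routing scheme is invented for illustration:

```javascript
// Serialize application state into a hashbang fragment, and back again,
// so a bookmarked or pasted URL restores the same view.
function stateToHashbang(state) {
  const params = new URLSearchParams(state);
  return "#!/" + params.toString();
}

function hashbangToState(hash) {
  if (!hash.startsWith("#!/")) return {}; // not our scheme: nothing to restore
  return Object.fromEntries(new URLSearchParams(hash.slice(3)));
}

// In the browser, you would write location.hash on navigation and read it
// back on page load:
//   location.hash = stateToHashbang({ view: "photos", id: "42" });
//   const restored = hashbangToState(location.hash);
```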
I’m certainly not advocating that hashbangs are a good thing, or even desirable: I’m a huge fan of progressive enhancement (and her sister, graceful degradation). But engineers make judgments based on the priorities of their website/web-app; though these decisions should be informed by the web’s history, they shouldn’t be bound by it, or we’d never have anything new on the web.
One parting comment, which I’ve been intending to make for a very long time…
I’ve been doing lots of Android development, which features a lovely system of Intents which act as entry points into the components of Android applications. These objects capture the information that an application needs to restore a user’s activity within the application; precisely the sort of information that web app developers are attempting to shoehorn into the URL.
So it’s a terrible missed opportunity that Android doesn’t provide a way of externalizing Intents (as URLs do on the web), or alternatively a way for applications to expose their present state as a URL. It would open up new possibilities - most obviously the ability to bookmark, and potentially share, one’s location within an application. Think how much more simply Google Analytics would have mapped onto Android had this been considered.
It’s ironic that web developers are trying to fabricate Intents from URLs, while Android developers are stuck doing the opposite.