Am I misreading, or is the thing you link to a proposal for what user-agents should do, rather than a technique that works now? Ah, I see you are the author of what you linked to, so maybe I am misreading?
(Also, are there security concerns with first making the request and only finding out afterwards whether you were allowed to make it? It seems that for POST requests (or GET requests on badly implemented apps), the request alone can be dangerous, even if the browser refuses to share the response with the script after seeing the response headers.)
Yes, you generally need a fallback when using CORS. When I can get away with it, I prefer running a proxy on my origin domain as that fallback (rather than JSONP), since it lets me use all the headers I like.
The article doesn't mention that many modern browsers don't respect Access-Control-Max-Age for caching the OPTIONS requests, at least not for more than a few minutes, which is another knock against CORS, since it causes twice the number of requests compared to JSONP.
Also, the article doesn't mention that, once you've moved all your header data into the body, you can make simple requests that don't require preflight OPTIONS requests at all [0] - which eliminates the 2x-requests problem.
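To sketch what "simple request" means here: per the Fetch specification, a request skips the preflight when its method is GET, HEAD, or POST, it carries only safelisted headers, and its Content-Type (if any) is one of three form-era values. The function below is a simplified illustration of those rules, not browser code (the spec also restricts safelisted header *values*, which this sketch ignores):

```python
# Simplified sketch of the CORS "simple request" rules from the Fetch spec:
# requests classified as simple skip the preflight OPTIONS round trip.

SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language"}

def needs_preflight(method, headers):
    """Return True if the browser must send an OPTIONS preflight first."""
    if method.upper() not in SIMPLE_METHODS:
        return True
    for name, value in headers.items():
        name = name.lower()
        if name == "content-type":
            # Parameters such as "; charset=utf-8" are stripped before checking.
            if value.split(";")[0].strip().lower() not in SIMPLE_CONTENT_TYPES:
                return True
        elif name not in SAFELISTED_HEADERS:
            # Any other custom header (Authorization, X-Api-Key, ...) forces one.
            return True
    return False

# A text/plain POST with the payload in the body stays "simple":
print(needs_preflight("POST", {"Content-Type": "text/plain"}))        # False
# A JSON POST does not:
print(needs_preflight("POST", {"Content-Type": "application/json"}))  # True
```

This is why moving your metadata out of custom headers and into a plain-text body, as the comment above describes, avoids the extra round trip.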
I am the author of the post. You're right, I should have mentioned the OPTIONS related problems.
The first problem we have run into with OPTIONS requests is that the preflight cache only works per full URL (including URL parameters). So you should prefer a POST API endpoint with parameters in the body over a GET API endpoint with parameters in the URL.
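A hypothetical sketch of why that is: the preflight cache is keyed on the full request URL (among other things), and the request body is not part of the key. The function below is illustrative only; real browsers key the cache on more fields than this:

```python
# Hypothetical illustration of a browser's preflight cache key: the full
# URL, including the query string, is part of the key, so every distinct
# query string costs a fresh OPTIONS request. The request body is not.

def preflight_cache_key(origin, method, url):
    return (origin, method, url)

# GET with parameters in the URL: two different keys, two preflights.
k1 = preflight_cache_key("https://app.example", "GET",
                         "https://api.example/search?q=foo")
k2 = preflight_cache_key("https://app.example", "GET",
                         "https://api.example/search?q=bar")
print(k1 != k2)  # True

# POST with parameters in the body: one key, one cached preflight.
k3 = preflight_cache_key("https://app.example", "POST",
                         "https://api.example/search")
k4 = preflight_cache_key("https://app.example", "POST",
                         "https://api.example/search")
print(k3 == k4)  # True
```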
We have tested caching of OPTIONS requests extensively and did not find any issue on recent browsers: they respect the cache if you specify the Access-Control-Max-Age header (they do not respect the Cache-Control header).
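For concreteness, a minimal sketch of the preflight response headers in question (the allowed methods and headers are placeholders; note also that browsers cap Access-Control-Max-Age internally, e.g. Chromium at around two hours):

```python
# Sketch of the headers a server might attach to its OPTIONS (preflight)
# response to opt in to preflight caching. Access-Control-Max-Age governs
# the preflight cache; Cache-Control is ignored for this purpose.

def preflight_response_headers(origin):
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
        # Cache this preflight result for one hour:
        "Access-Control-Max-Age": "3600",
    }

print(preflight_response_headers("https://app.example")["Access-Control-Max-Age"])  # 3600
```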
At RunOrg, we've encountered this a few times with our CORS-only API.
It is against our philosophy to leave users behind only because they are locked in by outdated infrastructure. We still want to support them.
It is against our philosophy to bend the purity of our API to accommodate wrinkles in how outdated infrastructure declines to support standards. There will be no JSONP alternative to CORS in RunOrg.
The solution we propose each and every time is to mount a proxy to our API on the same domain as the site it is used on. Users on modern infrastructure reach us directly at api.runorg.com, users on CORS-hostile infrastructure reach us through the proxy and still get their data (albeit with decreased performance). It's a fairly simple technical solution that leaves our API clean and supports non-CORS modes of access.
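Such a same-origin proxy can be as small as a single reverse-proxy rule. The fragment below is one hypothetical way to do it with nginx (the paths are illustrative, not RunOrg's actual configuration):

```nginx
# Hypothetical same-origin proxy: the customer's site forwards /api/
# to the CORS-only API, so browsers behind CORS-hostile middleboxes
# never issue a cross-origin request at all.
location /api/ {
    proxy_pass https://api.runorg.com/;
    proxy_set_header Host api.runorg.com;
    # Optionally forward the original client address to the API:
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The browser then talks only to the site's own origin, and no CORS negotiation ever happens on the wire.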
A different position would be to simply let these misconfigured and badly written networking programs fail - and let them fail hard.
Yes, it will be painful in the short term and users will notice outages, but in the long run it will result in these issues being fixed. If we continue a policy of reversion to bad workarounds (which is really what JSONP is), we not only add complexity and the potential for obscure exploits to our own software, we ensure that nobody at Cisco et al will ever feel the pressure to fix these bugs.
I agree that JSONP is a bad workaround, but doing nothing is not an option. The problem is that we have large-audience websites as customers; even with a patch from Cisco, you cannot force everyone to update their firewall, especially since there is no automatic software upgrade on these VPNs.
It depends on the position you're in. If you're a scrappy B2B startup looking to gain traction, this is indeed not a battle you want to fight. But things look vastly different if you're Facebook or Techcrunch. Then the story suddenly becomes "there is a bug in our router that causes TC not to load, this needs to be fixed ASAP" instead of "screw this app, it doesn't seem to work".
True. I was mostly responding to the tone of the original comment. By all means encourage powerful sites to deprecate support for poor implementations as a way of encouraging progress.
By firewall, he would have to be talking about a client side firewall running on the machine making the request. Something that sees the request before it actually goes out on the wire.
And this is why JSONP is almost always a bad idea (for sensitive data): http://homakov.blogspot.com/2013/02/are-you-sure-you-use-jso...
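To sketch the mechanics behind that link: a JSONP endpoint returns executable JavaScript, and `<script>` tags are exempt from the same-origin policy, so *any* page a logged-in visitor opens can load the endpoint and receive that visitor's data through the callback. A minimal illustration of what such an endpoint emits (the callback name and payload are hypothetical):

```python
import json

# What a JSONP endpoint returns: not inert data, but a JavaScript call.
# Any third-party page can load this with a <script> tag and name its own
# callback - which is why JSONP is dangerous for sensitive data.

def jsonp(callback, payload):
    return "%s(%s);" % (callback, json.dumps(payload))

# What an attacker's page would receive for a logged-in visitor:
print(jsonp("steal", {"email": "victim@example.com"}))
# -> steal({"email": "victim@example.com"});
```

CORS avoids this because the browser withholds the response body unless the server explicitly allows the requesting origin.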