About 6 months ago I rewrote the Let's Crate (https://letscrate.com) backend to work exclusively with Amazon S3 Direct POST uploads. Getting upload progress to work with that was a royal PITA, but in the end I got it working. If you're interested in how, perhaps that's a good subject for a far more lengthy post on how to write extremely convoluted JavaScript. I gave myself a pat on the back (no Flash, yay!) and vowed to never do anything like that again.
As requested: Basically, the gist is that you accept the upload via a local JS file that acts as a conduit. You then turn the dropped/selected file object into a blob object and transfer that blob to a JS file that lives on S3 (using postMessage and a hidden iframe). That JS file on S3 is what actually performs the upload and tracks the upload progress. On progress events, I send back postMessage payloads to the local JS file to show updates to the user.
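To make that concrete, here's a rough sketch of the relay, assuming a small uploader page hosted on the S3 origin. All the names (`uploader.html`, `selectedFile`, `updateProgressBar`) and the message format are my own inventions, not the actual Let's Crate code:

```javascript
// Sketch of the pre-CORS postMessage relay: the local page hands the file
// (a Blob) to a hidden iframe served from the S3 origin; a script on that
// origin performs the real XHR upload and posts progress back.

// Shared helper: serialize a progress event into a postMessage payload.
function progressMessage(loaded, total) {
  return JSON.stringify({ type: 'progress', percent: Math.round((loaded / total) * 100) });
}

if (typeof window !== 'undefined') {
  // --- Local page side ---
  const frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = 'https://bucket.s3.amazonaws.com/uploader.html'; // hypothetical uploader page
  document.body.appendChild(frame);

  frame.onload = () => {
    // Transfer the dropped/selected File object to the S3 origin via
    // structured clone. 'selectedFile' comes from a drop/input handler.
    frame.contentWindow.postMessage({ type: 'upload', blob: selectedFile }, '*');
  };

  // Receive progress updates relayed back from the S3-hosted script.
  window.addEventListener('message', (e) => {
    const msg = JSON.parse(e.data);
    if (msg.type === 'progress') updateProgressBar(msg.percent); // hypothetical UI hook
  });
}
```

The iframe'd script on S3 would listen for the `upload` message, perform the XHR POST (same-origin from its point of view), and call `parent.postMessage(progressMessage(e.loaded, e.total), '*')` from its `xhr.upload.onprogress` handler.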
After weighing the pros and cons of proxying versus this, I settled on the postMessage strategy -- but still no fun at all, and I'm so glad that CORS is finally an option.
I'd love it if you could write that blog post. The technique sounds like it could be used for a lot of things, including communication in the other direction, i.e. downloading images meant to be drawn on a canvas from servers that don't support CORS.
I'll soon be implementing image upload support where the images live on S3. It would be fantastic to be able to have the user's browser directly upload to S3 rather than go through the web server.
If you were to open source your code I'd love to learn from it.
Finally, I won't have to proxy s3 requests through my own nginxes.
I've pleaded for this feature in the AWS forum, through their commercial support (which I bought just to bug them about this), and to Werner Vogels directly.
Thank you so much man, you saved me and my team a bunch of development next week. ABSOLUTELY PERFECT TIMING!! We used the iframe trick, but it sucked. This is MUCH, MUCH, MUCH better, thank you!!!!
Funny, I had an email from the S3 team asking if I was in need of redirection support yesterday. I said I'd rather have CORS and SSL. So, SSL support next week, Jeff? :)
As excited as I am about this finally happening, I was so pissed about having to deal with this issue over and over (e.g. JS files describing WebGL models) that I was on the verge of starting a service to provide the layer of redirection with CORS support, à la what Heroku does for EC2. I was actually getting a bit psyched for it, because I was convinced Amazon didn't care about ever implementing this.
At least I won't launch something only to have Amazon eat my lunch now that they've finally come around to providing this much-needed feature.
Could somebody explain CORS to me? How is making the server you're contacting specify it wants to receive requests, in the response header, secure? The request has already been made!
Not really, no. If all it takes to exploit the remote service is to make the request (i.e., you don't need to be able to read the response data to exploit it), you can easily force a request by means other than XHR; an image tag is probably the most straightforward.
Also, strictly speaking, this class of attack is Cross-Site Request Forgery (CSRF), not Cross-Site Scripting (XSS).
The request isn't actually made (at least not your request). The browser sends an OPTIONS request to get the CORS policy and then blocks your request if it's not allowed.
Edit: My above comment is slightly incorrect. If the request is "simple" it will be made and then you'll be blocked if it doesn't fit the CORS policy. If the request is not deemed "simple" (according to some rules you can look up in the spec) then the OPTIONS flow occurs.
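To spell out those rules: a request is "simple" when its method is GET, HEAD, or POST and its headers stick to a short safelist (with Content-Type limited to three form-style values). Here's a sketch of the classification as a pure function; the helper name is mine:

```javascript
// Sketch: decide whether a cross-origin request is "simple" (sent directly,
// with the response checked against the CORS policy afterwards) or whether
// the browser must send an OPTIONS preflight first. Rules per the CORS spec.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_HEADERS = new Set(['accept', 'accept-language', 'content-language', 'content-type']);
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
]);

function needsPreflight(method, headers) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    if (!SIMPLE_HEADERS.has(lower)) return true;
    if (lower === 'content-type' && !SIMPLE_CONTENT_TYPES.has(value)) return true;
  }
  return false;
}

console.log(needsPreflight('GET', {}));                                      // false: simple
console.log(needsPreflight('PUT', {}));                                      // true: non-simple method
console.log(needsPreflight('POST', { 'Content-Type': 'application/json' })); // true: non-simple content type
```

This is also why the "the request has already been made" worry above doesn't add much attack surface: anything a simple request can do, an image tag or form submission could already do; CORS only gates *reading* the response and the non-simple requests.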
Can anybody show me a working example of an HTML/JavaScript page which uploads to S3 directly? I've never worked with HTML5 upload before. Where can I get more information on how this works? Is it a normal form submission with additional fields for authentication? I really have no clue and would be very happy to be pointed in the right direction.
Great timing. I recently began working on a project where I ran into the problem of cross-domain fonts in Firefox while trying to serve static assets from S3 through CloudFront. I had to resort to using my own nginx proxy through CloudFront for fonts and add an additional request to the page. Finally, problem solved!
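For anyone hitting the same font issue: the fix is now a CORS rule on the bucket itself. Something like the following (the origin is a placeholder for your site):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

S3 then answers font requests with the `Access-Control-Allow-Origin` header Firefox insists on, and the nginx proxy hop goes away.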
Pretty good strategy, if you ask me. Amazon has to have engineers working seven days a week anyway; if they push new stuff on a Friday afternoon, it gets a couple of days of low usage before all their customers get back to work on Monday and try to implement it.
Sure, critical systems have round-the-clock coverage, but pushing big changes before the weekend is still not optimal. If there is an emergency, you'd rather have most of your workforce available, awake, and at work.
Convoluted, but it works. :)