I am the author of the library. Just to be clear about my motivations: I wrote this library for Chrome extensions, as a simple way to export/import large amounts of user data.
Can you tell us why you don't rely on built-in compression? I would guess both local storage and HTTP POST automatically compress files... is this not the case?
My need is to extract data (archived web pages) from a WebSQL database (or the HTML5 filesystem) so the user can export it to a file. Otherwise, he couldn't export the extension data. Everything is done client-side, so I cannot use HTTP POST.
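For anyone curious, creating an archive on the client with zip.js looks roughly like this (just a sketch against its callback API; archivedHtml, onError and the final download step are placeholders, and error handling is omitted):

    // Build a zip in memory and get it back as a Blob.
    zip.createWriter(new zip.BlobWriter("application/zip"), function (writer) {
      // archivedHtml stands in for page content read from WebSQL / the filesystem
      writer.add("page.html", new zip.TextReader(archivedHtml), function () {
        writer.close(function (blob) {
          // the Blob can now be written to the HTML5 filesystem or offered as a download
        });
      });
    }, onError);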
I used it a week ago for a simple project (a kinda secure file system, with PHP and MongoDB for the backend). I would encrypt a file on my server (AES) and then send the encrypted data to the user's browser. Then I could decrypt the data with JavaScript and use JSZip to make the user's browser download the decrypted files.
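For reference, the last step looks roughly like this with JSZip (a sketch against the promise-based generateAsync API, which may differ from the JSZip version I actually used; saveAs comes from FileSaver.js and decryptedBytes is a placeholder Uint8Array):

    // Pack the decrypted data into a zip and hand it to the user as a download.
    var archive = new JSZip();
    archive.file("document.pdf", decryptedBytes);
    archive.generateAsync({ type: "blob" }).then(function (blob) {
      saveAs(blob, "export.zip"); // FileSaver.js helper that triggers the browser download
    });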
Because it was an assignment for my 'Information & Communication Security' class and I had to do it without SSL/TLS! And it was a fun little project - I learnt how to use jQuery and worked with PHP, MongoDB and JavaScript. I usually code in Java and ObjC, and it was a fun break...
Before writing zip.js, I tried this library. It was really great, but it does not scale at all: the created zip file is not "streamed", so it can use a lot of RAM when creating large files.
Thanks for the report. The zip file seems to be valid: I can open it without any issues with 7-Zip (on Win7). The "read zip demo" [1] is also able to open it and extract the compressed PDF. What zip software do you use?
Anyone know of a JavaScript implementation of Snappy? There's a binding for Node, but not a pure JavaScript implementation, which would be useful in the browser considering IndexedDB and WebSockets support binary data.
No, but I've been using an LZMA library [1] to compress data sent over binary WebSockets.
LZMA is nice for streaming data to many clients (i.e. compress once, decompress many times), because although compression time/CPU is high (LZMA > BZIP2 > GZIP), the compression ratio is very good (LZMA > BZIP2 > GZIP) and the decompression cost is not too bad (BZIP2 > LZMA > GZIP) [2].
[2] Disclaimer: I should check, but haven't yet, whether these inequalities hold for this JS LZMA implementation relative to the available GZIP and BZIP2 ones.
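To make the setup concrete, the receiving end looks roughly like this; I'm assuming an LZMA-JS-style decompress(byteArray, callback) entry point, which may not match the exact port linked in [1]:

    var ws = new WebSocket("wss://example.com/feed"); // placeholder endpoint
    ws.binaryType = "arraybuffer";

    // The server compresses each message once; every client decompresses its own copy.
    ws.onmessage = function (event) {
      var bytes = Array.prototype.slice.call(new Uint8Array(event.data));
      LZMA.decompress(bytes, function (result, error) {
        if (error) return console.error(error);
        handleMessage(JSON.parse(result)); // hypothetical handler for the decoded payload
      });
    };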
Thank you, I had spotted BZIP2 and LZMA. I'm looking for a compression/decompression CPU cost ordering of X < GZIP < BZIP2 < LZMA, where the compression ratio is of secondary concern.
While I haven't used this in production, I can say that for zipping up stuff that was already generated client-side, it is usually quicker than sending the data to the server to be zipped and getting it back. While this seems like a just-because-it's-cool sort of hack, it actually has awesome implications. Zipped file creation from offline apps? Yes please.
Edit: The numbers are only for the C implementation so they don't add much to the discussion. Sorry for posting too quickly!
I don't know about this library in particular, but there are some numbers available on http://liblzg.bitsnbites.eu/ for a similar library.
(The same author has also made a library for self-extracting JavaScript code: http://crunchme.bitsnbites.eu/. Pretty cool, but probably not that useful outside of the demo scene.)