Hacker News
Encrypted libraries leak lots of information in Seafile (github.com/haiwen)
79 points by networked on Feb 6, 2016 | 11 comments



Are people actually using Seafile? I found it interesting back then and participated in this discussion, but dismissed it because the author didn't really seem to take it seriously. His statement:

"I don't quite understand why using a single IV for the whole library is vulnerable to known-plaintext attacks."

That might be an acceptable response if you're trying to build a secure system and don't yet fully understand your tools (few people do), but it should be followed by an eager question about how to improve. Instead he follows up with:

"I know it's better to use different IV and key for each file/block. But that would greatly increase complexity."

As if that's an excuse. Besides, solutions had been suggested, and it's not that complex. Finally he just states that the security improvements are not scheduled.
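The concern is concrete: with CBC and one fixed IV per library, any two files (or file versions) that share a plaintext prefix produce identical ciphertext prefixes, so an observer learns equality relationships without ever touching the key. A minimal sketch using the third-party Python `cryptography` package (the key and "files" here are made up for illustration, not Seafile's actual scheme):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # one key for the whole library
iv = os.urandom(16)    # a single IV reused for every file -- the flaw

def encrypt(plaintext: bytes) -> bytes:
    # AES-256-CBC with the shared IV; plaintexts here are block-aligned,
    # so no padding is needed for the demo
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

# Two "files" that share their first 16-byte block
file_a = b"SALARY REPORT 16" + b"A" * 16
file_b = b"SALARY REPORT 16" + b"B" * 16

ct_a, ct_b = encrypt(file_a), encrypt(file_b)
# With a reused IV, the shared plaintext block encrypts to the same
# ciphertext block, leaking that the files start identically
print(ct_a[:16] == ct_b[:16])  # True
```

A fresh random IV per file (or per block) makes the first ciphertext blocks differ even for identical plaintexts, which is exactly the fix that was suggested in the thread.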


I do, but I don't use the encrypted libraries. I found it way faster than OwnCloud and stayed with it afterwards. No idea whether that has changed since.

But I agree that they don't take security as seriously as they should. I reported an issue with a deterministically named, world-readable cache directory (https://forum.seafile-server.org/t/security-security-issue-a...) and suggested that they move it inside the Seafile data directory. That would allow running multiple installations beside each other and also prevent races where a different user creates the directory before Seafile does. The suggestion was dismissed with “/tmp is standard, so we will not change this”.

I solved this issue on my box using systemd's PrivateTmp feature.
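For anyone wanting the same mitigation: PrivateTmp gives the service its own namespaced /tmp, so the predictable cache path is neither readable by other users nor pre-creatable by them to win the race. A sketch of a drop-in (the unit name and path are examples; adjust to your setup):

```ini
# /etc/systemd/system/seafile.service.d/private-tmp.conf  (example path)
[Service]
# Mount a private /tmp and /var/tmp visible only to this service;
# other users cannot read the cache directory or create it first.
PrivateTmp=true
```

Run `systemctl daemon-reload` and restart the service for the drop-in to take effect.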


I actually use Seafile for some clients (since OwnCloud doesn't do client-side encryption at all), after Wuala shut down last year. I was disappointed by the security and features too (https://forum.seafile-server.org/t/encryption-the-pro-added-...); you cannot securely share subtrees/directories like in Wuala (with their Cryptree).

Is there a better FOSS alternative?

The Wuala service did have a lot of deadlocks, on either the server or the client side, and customer support was not handled securely: please send over the log files (including file names) or the client storage block, etc.


In Peergos we are using exactly the Cryptree data structure from Wuala for our encrypted filesystem on top of IPFS. https://github.com/ianopolous/Peergos


Seafile offers things that other projects don't (most notably delta sync, a long-standing point of contention on the OwnCloud repository). And with respect to encryption, I store everything synced inside Veracrypt volumes (one of the Truecrypt successors), so I'm not too worried about Seafile's lack of encryption features.


Seafile is set up in a few minutes, so even without encryption it is a great self-hosted alternative to the usual suspects.


Note that Seafile seems to still be using a very old and EOL'd version of Django that has known security issues (currently v1.5.12, I believe).

https://github.com/haiwen/seafile/issues/1502


GEEZ. At first I wondered, "is that really the way to disclose a vulnerability?", and then I saw that the date was 2013, so it doesn't really matter at this point.


It started in 2013, but the issue has persisted ever since, so I think it still matters.

The thing I don't get is why I can't just AES-encrypt my file and upload it somewhere for secure backup; there is no way to break it unless you give out your AES key.


So you transmitted AES(original) to the server; now what do you do when you want to update your file?

Transmitting AES(updated) to the server is bandwidth-inefficient and storage-inefficient.

Transmitting DIFF(AES(original), AES(updated)) gives you no filesize benefit.

You can do AES(DIFF(original, updated)), but that requires your local client to have the original file, or enough of it indexed to produce the diff. It also means that restoring the latest file requires replaying a giant chain of increments, so you'll probably want to periodically reupload the full file, which is again bandwidth- and storage-inefficient.

You can transmit the encryption key to the server and have it roll up diffs. But that's not a good idea if you don't really trust your server (got some cheap cloud storage?).

The solution to this problem is to use a deduplication algorithm like content-defined chunking, as seen in attic/restic/obnam/tarsnap.
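The idea behind content-defined chunking: cut boundaries are derived from a rolling hash of the content itself, so an insertion only disturbs the chunks around it and everything after resynchronizes to the same boundaries, letting unchanged chunks deduplicate even when each chunk is encrypted separately. A toy sketch with a gear-style rolling hash (all constants and sizes are illustrative, not what attic/restic/obnam/tarsnap actually use):

```python
import hashlib
import random

random.seed(0)
GEAR = [random.getrandbits(32) for _ in range(256)]  # per-byte hash table
AVG_MASK = (1 << 6) - 1  # ~64-byte average chunks (tiny, for the demo)
MIN_SIZE = 16            # never cut immediately after a boundary

def chunk(data: bytes):
    """Split data at content-defined boundaries using a gear rolling hash."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFF
        if i - start + 1 >= MIN_SIZE and (h & AVG_MASK) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def chunk_ids(data: bytes):
    # The server only needs to store each (encrypted) chunk once
    return {hashlib.sha256(c).hexdigest() for c in chunk(data)}

original = bytes(random.randrange(256) for _ in range(4000))
updated = original[:100] + b"some inserted bytes" + original[100:]

shared = chunk_ids(original) & chunk_ids(updated)
# Only the chunks near the insertion change; boundaries after it
# resynchronize, so an update costs roughly the changed chunks,
# not a full re-upload of the file.
print(len(shared), "of", len(chunk_ids(original)), "chunks unchanged")
```

How to key the per-chunk encryption (convergent vs. random keys, and what that leaks) is where the design tradeoffs start, which is roughly what the Seafile thread is arguing about.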


Learned something new. Thanks!



