
> There has not ever been a reason for memory to be correlated with storage capacity nor any reason to believe that such a correlation ought to exist.

However, specific implementations can indeed have memory requirements that scale with storage capacity. For example, if an implementation keeps a bitmap of free space in memory, then more storage = a larger bitmap = more memory required.
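
Back-of-the-envelope, purely to illustrate the scaling (ZFS itself tracks free space with space maps, not a flat bitmap): at 1 bit per 4 KiB block, the bitmap alone grows linearly with capacity. A quick Python sketch:

    # Illustrative only: RAM for an in-core free-space bitmap at
    # 1 bit per 4 KiB block. ZFS does not actually work this way.
    def bitmap_bytes(storage_bytes, block_size=4096):
        blocks = storage_bytes // block_size
        return blocks // 8              # 8 blocks tracked per byte

    TIB = 1 << 40
    for capacity in (2 * TIB, 50 * TIB, 100 * TIB):
        mib = bitmap_bytes(capacity) / (1 << 20)
        print(f"{capacity // TIB:>4} TiB of storage -> ~{mib:.0f} MiB of bitmap")
    # 2 TiB -> ~64 MiB, 50 TiB -> ~1600 MiB, 100 TiB -> ~3200 MiB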

There have been several attempts in ZFS to reduce memory overhead. I'm pretty sure that if you took a decade-old version of ZFS you'd struggle to run it on a system with 512MB of RAM.



My first ZFS box (running OpenSolaris 2008.11) ran on 512MB, sometimes including a Gnome 2 desktop. It wasn't fast - I didn't need it to be - but it absolutely did work.


Interesting. I started with ZFS around 2009, and I recall several people struggling to use it on a system with less than 2GB, even after messing with tunables. That was on FreeBSD though, so maybe it was implementation-specific.


I've been using ZFS on Solaris 10 on a 4-socket Pentium Pro box (200 MHz per socket) with 256 MB of RAM since July 2006. It wasn't a speed demon, but it ran okay for years as our central storage server until we upgraded to faster hardware.


Your stories inspired me, so I downloaded the FreeBSD 8.0 image, released in 2009, and fired it up in a VM with 256MB of memory. I then created a 2TB dynamically-allocated disk in the VM and used it to create a single-vdev pool.

ZFS does complain that the minimum recommended memory is 512MB and that I can expect unstable behavior. However, basic file I/O seems to work: I copied some multi-GB files around and such without issues.

So it seems the bare minimum was lower than I recalled, at least on a plain system.

A proper test would involve heavily fragmenting the pool, preferably with more vdevs. But it's something.
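
If anyone wants to repeat it without a VM, here's roughly the shape of the experiment as a Python sketch, using a sparse backing file in place of the virtual disk (zpool accepts file vdevs, for testing only). The path, size and pool name are just placeholders; it needs root and the ZFS userland tools installed:

    # Sketch: sparse "2 TiB" backing file + a single-vdev, non-redundant pool.
    import subprocess

    BACKING = "/var/tmp/zfs-test.img"   # placeholder path (must be absolute)
    POOL = "testpool"                   # placeholder pool name

    with open(BACKING, "wb") as f:
        f.truncate(2 * (1 << 40))       # sparse: no data blocks actually written

    subprocess.run(["zpool", "create", POOL, BACKING], check=True)
    subprocess.run(["zpool", "status", POOL], check=True)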


What's supposed to happen with ZFS is that if the rest of the system needs more memory, ZFS should back off and release some of the memory it's using. It can do this because most of its memory use is just caching, and while there is no doubt a limit to how far you can take this, I've never heard anyone question the effectiveness of this on the OpenSolaris/Illumos implementation.

I don't follow FreeBSD closely, but if memory serves there was a concern, particularly in the early days of the ZFS port, that their implementation couldn't be counted on to release RAM fast enough if the system suddenly came under significant memory pressure. Hence the advice was to always run with more-than-sufficient RAM to minimize the likelihood of getting into low-memory situations. I think this is a significant part of why FreeNAS considers 8GB to be the minimum supported configuration.

So it seems to me that this isn't really about ZFS's RAM requirement; rather, it's about hedging against the volatility of the RAM requirements of other software on the same box, in case ZFS can't back off fast enough.

These days I run ZFS on Linux, and I remember about 4 years back spinning up some bulk data processing job that was configured to use 14GB of RAM, on a 16GB box, and watching ZFS's ARC RAM use drop in a single second from 5.5GB to 0.5GB. So I'm satisfied that for my purposes, on ZoL in recent times, this isn't an issue I need to worry about.
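
If you want to watch that yourself, ZFS on Linux exposes the ARC counters in /proc/spl/kstat/zfs/arcstats; a quick sketch that samples the current and target ARC size once a second (field names as I've seen them on recent ZoL versions):

    # Watch the ARC shrink under memory pressure on ZFS on Linux.
    import time

    def arc_stats():
        stats = {}
        with open("/proc/spl/kstat/zfs/arcstats") as f:
            for line in f:
                parts = line.split()
                if len(parts) == 3 and parts[1].isdigit():  # data rows: name type value
                    stats[parts[0]] = int(parts[2])
        return stats

    while True:
        s = arc_stats()
        print(f"ARC size {s['size'] / 2**30:5.2f} GiB, target {s['c'] / 2**30:5.2f} GiB")
        time.sleep(1)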


The bare minimum is 128 MB. A fully working yet minimal Solaris 10 system runs in less than 64 MB (~57 MB).

ZFS releases ARC memory as the memory pressure from applications running on the system increases; it's been that way since day 1.


At present, 512MB of RAM is notable for how ridiculously tiny it is, while 2TB is still an acceptable amount of storage. Without resorting to decades-obsolete software, can you pin down exactly how much storage it would take to render that tiny amount of RAM unusable, and then how much storage it would take to render a machine with 4GB of RAM likewise unusable, so that we may demonstrate memory usage scaling with storage?


My point was merely that your blanket statement doesn't really hold water, since any actual memory requirements by a filesystem would be implementation specific.

I will agree that ZFS should handle large pools once you clear the ~fixed minimum memory requirement.


We've been using an old CORAID box running 24 drives with 50TB usable (100TB raw) on 16GB of RAM under FreeNAS 9.x for years without noticeable problems :) I've tried to upgrade to 32GB a couple of times, but for whatever reason the board won't accept more than 16GB, even though according to Intel's docs the RAM should be compatible. We have up to 12 PCs connected at gigabit and never see any noticeable lag, even while resilvering, though I'm sure it would be faster if more RAM were available.



