Interesting. I started with ZFS around 2009, and I recall several people struggling to use it on systems with less than 2GB, even after messing with tunables. That was on FreeBSD, though, so maybe it was implementation-specific.
I've been using ZFS on Solaris 10 on a 4-socket Pentium Pro box (200 MHz per socket) with 256 MB of RAM since July 2006. It wasn't a speed demon, but it ran okay for years as our central storage server until we upgraded to faster hardware.
Your stories inspired me, so I downloaded a FreeBSD 8.0 image (released in 2009) and fired it up in a VM with 256MB of memory. I then attached a 2TB dynamically-allocated disk to the VM and used it to create a single-vdev pool.
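For anyone who wants to reproduce this, the guest-side setup is only a couple of commands. The device name here is an assumption (the virtual disk showed up as ad1 for me; check dmesg on your system):

    # the 2TB virtual disk appeared as ad1 in the guest
    zpool create tank ad1
    # sanity check: pool health and reported capacity
    zpool status tank
    zpool list tank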
ZFS does complain that the minimum recommended memory is 512MB and that I can expect unstable behavior. However, basic file I/O seems to work: I copied some multi-GB files around and such, without issues.
So it seems the bare minimum was lower than I recalled, at least on a plain system.
A proper test would involve heavily fragmenting the pool, preferably with more vdevs. But it's something.
What's supposed to happen with ZFS is that if the rest of the system needs more memory, ZFS should back off and release some of the memory it's using. It can do this because most of its memory use is just caching (the ARC), and while there's no doubt a limit to how far you can take this, I never heard anyone question the effectiveness of this on the OpenSolaris/Illumos implementation.
I don't follow FreeBSD closely, but if memory serves there was a concern, particularly in the early days of the ZFS port, that their implementation couldn't be counted on to release RAM fast enough if the system suddenly came under significant memory pressure. Hence the advice was to always run with more-than-sufficient RAM, to minimize the likelihood of getting into low-memory situations in the first place. I think this is a significant part of why FreeNAS considers 8GB to be the minimum supported configuration.
So it seems to me that this isn't really about ZFS's RAM requirement; rather, it's about hedging against the volatility of the RAM requirements of other software on the same box, in case ZFS can't back off fast enough.
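If you'd rather hedge explicitly than just buy more RAM, you can also put a hard cap on the ARC yourself. Roughly like this, with the caveat that the 512M value below is just an illustration, not a recommendation:

    # FreeBSD: cap the ARC via a loader tunable in /boot/loader.conf
    vfs.zfs.arc_max="512M"

    # Linux (ZoL): same idea via a module parameter, value in bytes
    # (512MB = 536870912); goes in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=536870912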
These days I run ZFS on Linux, and I remember, about 4 years back, spinning up a bulk data-processing job configured to use 14GB of RAM on a 16GB box, and watching ZFS's ARC usage drop from 5.5GB to 0.5GB in a single second. So I'm satisfied that for my purposes, on ZoL in recent times, this isn't an issue I need to worry about.
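For anyone who wants to watch this happen on ZoL, the ARC's current size is exposed in /proc/spl/kstat/zfs/arcstats (the bundled arcstat tool gives a nicer rolling view); a crude loop like this is all it takes:

    # print the ARC size in bytes once per second
    while true; do
        awk '$1 == "size" { print $3 }' /proc/spl/kstat/zfs/arcstats
        sleep 1
    done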