L2ARC, as I understand it, is only for reads, whereas the ZIL is the write-ahead log. Ideally, ZFS would "combine" the two into an MRU "write-through cache": data is written to the SSD first and then asynchronously written to the disk afterwards (the ZIL already does this), but when the data is read back, it's read back from the SSD.
This makes sense: if you were building a service that served bytes off of disk with an in-memory LRU cache, you wouldn't have two separate pools of memory, one for writes and one for reads.
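Roughly what I mean, as a toy sketch (the class and method names are made up for illustration, and this glosses over durability entirely): one pool of fast storage where writes land first and get flushed to the slow tier in the background, and where reads are served if the data is still resident.

    import collections
    import queue
    import threading


    class DictBackingStore:
        """Stands in for the slow tier (the pool's spinning disks)."""

        def __init__(self):
            self.data = {}

        def write(self, key, value):
            self.data[key] = value

        def read(self, key):
            return self.data[key]


    class WriteThroughCache:
        """One pool of fast storage serving both writes and reads."""

        def __init__(self, backing, capacity=1024):
            self.backing = backing
            self.capacity = capacity
            self.cache = collections.OrderedDict()   # key -> value, recency order
            self.flush_queue = queue.Queue()
            threading.Thread(target=self._flusher, daemon=True).start()

        def write(self, key, value):
            self._remember(key, value)                # lands in the fast tier first
            self.flush_queue.put((key, value))        # flushed to disk asynchronously

        def read(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)           # hit: serve from the fast tier
                return self.cache[key]
            value = self.backing.read(key)            # miss: go to the slow tier
            self._remember(key, value)
            return value

        def _remember(self, key, value):
            self.cache[key] = value
            self.cache.move_to_end(key)
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)        # evict the least recently used

        def _flusher(self):
            while True:
                key, value = self.flush_queue.get()
                self.backing.write(key, value)

The obvious catch is that a write acknowledged this way isn't actually safe until the fast tier itself is persistent, but that's the general shape of what I'd expect.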
Did the ZIL and L2ARC concepts come up before SSDs were widely available? The ZIL in particular seems very much optimized for crazy enterprise 15k rpm spinning rust. Memory and SSD access characteristics are so different from spinning disks that I don't know why ZFS separates the ZIL and L2ARC.
Because the ZIL is a journal log, not a cache. It is intended to increase data security without sacrificing too much performance. Many people also confuse ZIL and SLOG devices though...
By default, the ZIL is written to the same disks where the data will be stored, but an external device (aka a SLOG) can be added. From that point on, your synchronous write IOPS are limited by that SLOG device, so normally you add a more expensive, fast disk as the SLOG device to increase your write IOPS.
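For what it's worth, attaching the devices looks roughly like this (pool name and device paths are just placeholders):

    # Dedicated SLOG for the ZIL (often mirrored, since losing it at the
    # wrong moment can cost you the most recent synchronous writes)
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

    # Separate L2ARC read cache device
    zpool add tank cache /dev/nvme2n1

Note that only synchronous writes go through the SLOG; asynchronous writes are batched in RAM and written straight to the pool.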
> Did the ZIL and L2ARC concepts come up before SSD was widely available?
Yes. After they realized that their initial claims about not needing such things were bullshit (which some of us had told them at the time), but before SSDs became common.