Other posters have discussed the first part of your post.
But the second...
> That's what most programs actually do most of the time because files are essentially a stream abstraction. Programs that jump around a file would map it into memory, then the CPU and the kernel would do their best to cache the hot regions, even if the access to these regions is temporally or spatially distant.
Just an FYI: for this access pattern you generally want mmap (Linux / POSIX) or file-backed section objects (Windows). Streams don't really have a performance benefit beyond backwards compatibility, and maybe clarity in some cases.
mmap and the Windows equivalent let the kernel share the physical pages backing the file (the page cache) across processes. So if the user opens a second or third instance of your program, more of the file will already be "hot" in RAM.
Since mmap and section objects only consume virtual address space (48-bit on current Intel / AMD x64 parts, i.e. up to ~256 TB, roughly half of it available to user space), you are at virtually no risk of running out of it.