A reduction in price isn't enough on its own; software's memory usage also has to grow more slowly than the price falls, and stay slower for a long time.
Even then, it's rough. Consider that the IBM 704 from 1954 (the first machine to run Lisp) had a memory bandwidth of around 375 kB/s. That means in theory, writing at full bandwidth, it could touch up to roughly 12 TB over a year without ever reclaiming memory, although real-world usage would surely be much less.
So, just the most important ones, then. In a sense, processes are just a form of arena allocation. They don't eliminate the need for memory management; they are memory management (among many other things).
Even if memory is cheap, it still draws power, and that power becomes heat that has to be dissipated. In many high-end server designs, power and heat -- not cost -- are the limiting factor.