It's a good idea IMO, but the two aren't alternatives, i.e. they're not mutually exclusive; you can do both. See Knuth's “A Flame About 64-bit Pointers” from 2008 (https://cs.stanford.edu/~knuth/news08.html, starting with “It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM”). Work on making this possible in Linux started as the x32 ABI in 2011 (see Wikipedia https://en.wikipedia.org/w/index.php?title=X32_ABI&oldid=887... or this LWN article: https://lwn.net/Articles/456731/); unfortunately it looks like there's discussion about removing it (Dec 2018 thread starting here: https://lkml.org/lkml/fancy/2018/12/10/1145, though apparently I can't figure out how to navigate the LKML tree in constant space).
I've found myself using 32-bit "array indexes" to halve my pointer sizes in some code I'm writing (for fun). In linked data structures (linked lists, trees, etc.), a huge fraction of the data ends up being pointers.
Consider a typical binary tree with a 64-bit integer as its value. Each node (value, left child, right child) takes 24 bytes with 64-bit pointers, but only 16 bytes with 32-bit pointers (or array indexes); see the sketch below.
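A minimal sketch in C of the two layouts (struct and field names are mine, assuming the usual LP64 model where pointers are 8 bytes):

    #include <stdint.h>
    #include <stdio.h>

    struct node_ptr {            /* 64-bit pointers */
        int64_t value;           /* 8 bytes */
        struct node_ptr *left;   /* 8 bytes */
        struct node_ptr *right;  /* 8 bytes */
    };                           /* sizeof == 24 */

    struct node_idx {            /* 32-bit array indexes */
        int64_t value;           /* 8 bytes */
        uint32_t left;           /* 4 bytes, index into a node pool */
        uint32_t right;          /* 4 bytes */
    };                           /* sizeof == 16 */

    int main(void) {
        printf("%zu %zu\n", sizeof(struct node_ptr), sizeof(struct node_idx));
        return 0;  /* prints "24 16" on LP64 */
    }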
Now, a 32-bit pointer can address at most about 4 billion (2^32) nodes across the whole tree, but 4 billion is more than enough for many programs.
EDIT: Consider that 4 billion nodes of 16 bytes each (where each node is the value + left child + right child struct discussed earlier) will take up 2^32 × 16 bytes = 64 GB of RAM.
EDIT2: And half the time my brain short-circuits and I end up recreating some terrible form of segmented memory, before having to slap myself for thinking up such a horrible idea. In any case, a surprising amount of memory is used up by pointers in almost all the code I write. If you care about fitting as much data as possible into L1 cache (32–64 kB on typical modern machines), you will absolutely want to minimize the size of your data.
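For the curious, here's roughly how the index version looks in practice: a minimal sketch (all names are mine), where nodes live in one flat array and 32-bit indexes replace pointers, with UINT32_MAX as the "null" sentinel:

    #include <stdint.h>
    #include <stdlib.h>

    #define NIL UINT32_MAX  /* index sentinel playing the role of NULL */

    struct node {
        int64_t  value;
        uint32_t left, right;  /* indexes into pool[], or NIL */
    };

    static struct node *pool;   /* one big allocation */
    static uint32_t pool_used;  /* bump allocator: next free slot */

    /* Grab a leaf out of the pool; returns its index.
       (No capacity check, to keep the demo short.) */
    static uint32_t node_new(int64_t value) {
        uint32_t i = pool_used++;
        pool[i].value = value;
        pool[i].left = pool[i].right = NIL;
        return i;
    }

    /* Unbalanced BST insert, by index instead of by pointer. */
    static uint32_t insert(uint32_t root, int64_t value) {
        if (root == NIL)
            return node_new(value);
        if (value < pool[root].value)
            pool[root].left = insert(pool[root].left, value);
        else
            pool[root].right = insert(pool[root].right, value);
        return root;
    }

    int main(void) {
        pool = malloc(1000 * sizeof *pool);  /* demo capacity */
        uint32_t root = NIL;
        root = insert(root, 42);
        root = insert(root, 7);
        root = insert(root, 99);
        free(pool);
        return 0;
    }

A nice side effect of this layout: since every node lives in one flat array, the whole tree can be freed, copied, or serialized in a single operation, and the indexes stay valid after a relocation, which raw pointers wouldn't.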