
I've found myself using 32-bit array indexes to halve my pointer sizes in some code I'm writing (for fun). In linked data structures (linked lists, trees, etc.), a huge fraction of the memory ends up being pointers.

Consider a typical binary tree with a 64-bit integer as its value. Each node takes 24 bytes (value, left child, right child) with 64-bit pointers, but only 16 bytes with 32-bit pointers (or array indexes).
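
For concreteness, a minimal sketch of the two layouts in C (the names are mine, and the sizes assume a typical LP64 platform):

    #include <stdint.h>

    /* 64-bit pointers: 8 + 8 + 8 = 24 bytes per node on LP64. */
    struct node_ptr {
        uint64_t value;
        struct node_ptr *left;
        struct node_ptr *right;
    };

    /* 32-bit indexes into a node array: 8 + 4 + 4 = 16 bytes,
       and uint64_t's 8-byte alignment adds no tail padding here. */
    struct node_idx {
        uint64_t value;
        uint32_t left;   /* index into a node pool; UINT32_MAX as "null" */
        uint32_t right;
    };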

Now, a 32-bit index can address at most 4 billion nodes across the whole tree. But 4 billion is more than enough for many programs.

EDIT: Consider that 4 billion nodes of 16 bytes each (where each node is the value + left child + right child struct discussed above) will take up 64GB of RAM.

EDIT2: And half the time, my brain short-circuits and I end up recreating some terrible form of segmented memory before having to slap myself for thinking up such a horrible idea. In any case, a surprising amount of memory is spent on pointers in almost all the code I write. If you care about fitting as much data as possible into L1 cache (typically 32-64kB on modern machines), you will absolutely want to minimize your per-node footprint. A pool sketch follows.
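
The way to avoid the segmented-memory trap is to let the indexes point into one flat pool. A rough sketch of what I mean (pool_alloc and NIL are names I made up; growth and free-list logic omitted):

    #include <stdint.h>
    #include <stdlib.h>

    #define NIL UINT32_MAX   /* sentinel index standing in for NULL */

    struct node_idx {
        uint64_t value;
        uint32_t left, right;    /* 32-bit "pointers" into the pool */
    };

    struct pool {
        struct node_idx *nodes;  /* one contiguous allocation holds all nodes */
        uint32_t used, cap;
    };

    /* Bump-allocate the next slot; hand back an index, not an address. */
    static uint32_t pool_alloc(struct pool *p, uint64_t value)
    {
        if (p->used == p->cap)
            abort();             /* sketch only: no growth logic */
        uint32_t i = p->used++;
        p->nodes[i].value = value;
        p->nodes[i].left = p->nodes[i].right = NIL;
        return i;
    }

Traversal then reads p->nodes[i].left instead of chasing a raw pointer, and the whole tree lives in one cache-friendly allocation.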



