The problem is not being unpaid. The problem is a lack of verification and control, which itself results from a lack of help and participation. You won't fix these problems by throwing money at them.
What I don't understand is why computers and operating systems have not adopted TAI, which is monotonic, as their time reference. Time handling is a complete mess.
I would have tried parallelism just out of curiosity: split and spread the computation over multiple cores. With n cores you could get close to a factor-n speedup, minus the cost of spreading the data and combining the results. That's an easy optimization right out of the box with Go (no assembly required).
It depends on the probability distribution of future requests. If the least recently used data has the same probability of being requested as everything else, then random eviction will do as well as LRU.
When the least recently used data has a lower probability of being requested, LRU will outperform random eviction.
The peer review system is definitely biased. It works for many articles, but there are outliers, for instance really original theories or experimental data. Publication may be blocked to avoid putting the journal's reputation at risk, or because the reviewers have personal reasons not to support the article (it would overshadow their own pet theory, invalidate their research project, they didn't understand it, etc.).
The problem with publishing elsewhere is that many people who are unable to objectively evaluate the validity of a theory judge its value by the reputation of the journal. The other journal will also most probably have less visibility, so there is a higher chance that the theory will not be picked up in reviews.
There are many assumptions in the above comment that I can't agree with. But my experience is with physics, not computer science.
The effectiveness really depends on the request pattern. In some use cases, LRU is the most efficient. When the pattern is purely random, it obviously won't be the best, since every value has the same probability of being requested.
Regarding simplicity, there is a simpler algorithm that doesn't need the next and prev pointers: we simply replace the non-flagged value at the hand. This is most probably what is used in CPUs.
A close-up of the same zone in both images would make them visible. I could hardly see the artefacts at first because my attention was caught by the highly contrasted parts of the images.
Go is awesome and I hope it will continue to progress in that direction. Thank you, Russ Cox.