
Humor: I wonder whether the upvotes are cheers that he finally stepped down or a respectful salute. I give my respectful salute.

Go is awesome, and I hope it will continue to progress in that direction. Thank you, Russ Cox.


While I agree that we're not there yet, I'm one of the people who assume that it is feasible to beat humans intellectually.


I believe computers have been doing that for decades. I'm not sure whether simplifying these outputs into a TL;DR text is feasible.


The problem is not being unpaid. The problem is a lack of verification and control, which itself results from a lack of help and participation. You won't fix these problems by throwing money at them.


And will people do verification and control without pay?


And human-generated data may not?


What I don't understand is why computers and operating systems have not adopted TAI, which is monotonic, as their time reference. Time is a complete mess.
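
For what it's worth, Linux already exposes a TAI clock. A minimal sketch in Go (using golang.org/x/sys/unix is my choice of binding; note the kernel's TAI-UTC offset stays at zero until an NTP daemon such as chronyd sets it):

    // Minimal sketch: read CLOCK_TAI on Linux via golang.org/x/sys/unix.
    // The kernel's TAI-UTC offset defaults to 0 until a daemon such as
    // chronyd or ntpd sets it, so the value is only meaningful on a
    // properly configured host.
    package main

    import (
        "fmt"
        "time"

        "golang.org/x/sys/unix"
    )

    func main() {
        var ts unix.Timespec
        if err := unix.ClockGettime(unix.CLOCK_TAI, &ts); err != nil {
            panic(err)
        }
        sec, nsec := ts.Unix()
        fmt.Println("TAI:", time.Unix(sec, nsec))
        fmt.Println("UTC:", time.Now().UTC())
    }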


I would have tried parallelism just out of curiosity: split and spread the computation over multiple cores. With n cores you could get close to a factor-of-n speedup, minus the cost of spreading the data and combining the results. That's an easy optimization right out of the box with Go (no assembly required).
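
For illustration, a minimal sketch of that split-and-combine pattern (the summing loop is a hypothetical stand-in for whatever the real per-element computation is):

    // Fan out a slice over NumCPU goroutines, then combine the
    // per-chunk results. Each goroutine writes only its own slot
    // of partial, so no locking is needed.
    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func parallelSum(data []int) int {
        n := runtime.NumCPU()
        chunk := (len(data) + n - 1) / n
        partial := make([]int, n)
        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            lo := i * chunk
            if lo >= len(data) {
                break
            }
            hi := lo + chunk
            if hi > len(data) {
                hi = len(data)
            }
            wg.Add(1)
            go func(i, lo, hi int) {
                defer wg.Done()
                for _, v := range data[lo:hi] {
                    partial[i] += v
                }
            }(i, lo, hi)
        }
        wg.Wait()
        total := 0
        for _, p := range partial {
            total += p
        }
        return total
    }

    func main() {
        data := make([]int, 1_000_000)
        for i := range data {
            data[i] = i
        }
        fmt.Println(parallelSum(data))
    }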


Oh, we do also heavily parallelize. This blog post is just focusing on the single-core perf.


It depends on the distribution of the probabilities of being used. If the least recently used data has the same probability of being requested as the others, then random picking will do as well as LRU.

When the least recently used data does have a lower probability of being requested, then LRU will outperform random picking.

There is no silver bullet algorithm.
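
As a rough illustration of that point (a hypothetical workload, not a benchmark), a small Go simulation comparing the hit counts of LRU and random eviction under a skewed (Zipf) and a uniform access pattern:

    // Compare LRU vs. random eviction under skewed and uniform access.
    package main

    import (
        "container/list"
        "fmt"
        "math/rand"
    )

    const (
        capacityN = 100
        universe  = 1000
        requests  = 200000
    )

    // lruHits replays keys through a classic LRU cache and counts hits.
    func lruHits(keys []uint64) int {
        order := list.New()                   // front = most recently used
        pos := make(map[uint64]*list.Element) // key -> node
        hits := 0
        for _, k := range keys {
            if e, ok := pos[k]; ok {
                hits++
                order.MoveToFront(e)
                continue
            }
            if order.Len() == capacityN {
                oldest := order.Back()
                delete(pos, oldest.Value.(uint64))
                order.Remove(oldest)
            }
            pos[k] = order.PushFront(k)
        }
        return hits
    }

    // randomHits replays keys through a cache that evicts a random entry.
    func randomHits(keys []uint64, rng *rand.Rand) int {
        cache := make(map[uint64]bool)
        slots := make([]uint64, 0, capacityN)
        hits := 0
        for _, k := range keys {
            if cache[k] {
                hits++
                continue
            }
            if len(slots) == capacityN {
                i := rng.Intn(len(slots))
                delete(cache, slots[i])
                slots[i] = k
            } else {
                slots = append(slots, k)
            }
            cache[k] = true
        }
        return hits
    }

    func main() {
        rng := rand.New(rand.NewSource(1))
        zipf := rand.NewZipf(rng, 1.2, 1, universe-1)

        skewed := make([]uint64, requests)
        uniform := make([]uint64, requests)
        for i := range skewed {
            skewed[i] = zipf.Uint64()
            uniform[i] = uint64(rng.Intn(universe))
        }
        fmt.Printf("skewed : LRU %d hits, random %d hits\n",
            lruHits(skewed), randomHits(skewed, rng))
        fmt.Printf("uniform: LRU %d hits, random %d hits\n",
            lruHits(uniform), randomHits(uniform, rng))
    }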


The peer review system is definitely biased. It's OK for many articles, but there are outliers, for instance really original theories or experimental data. The publication may be stopped to avoid putting the journal's reputation at risk, or because the reviewers have personal reasons not to support the article (it would overshadow their own pet theory, invalidate their research project, they didn't understand it, etc.).

The problem with publishing elsewhere is that many people who are unable to objectively evaluate the validity of a theory judge its value by the reputation of the journal. Also, the other journal will most probably have less visibility, so there is a higher chance that the theory will not be reported in reviews.

There are many assumptions in the above comment that I can't agree with. But my experience is with physics, not computer science.


The effectiveness really depends on the request pattern. In some use cases, LRU is the most efficient. When the pattern is purely random, LRU obviously won't be the best, as every value has the same probability of being selected.

Regarding simplicity, there is a simpler algorithm that doesn't need the next and prev pointers: simply replace the first non-flagged value at the hand. This is most probably what is used in CPUs.
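
That hand-based scheme is essentially the classic CLOCK / second-chance policy. A minimal sketch of how it could look in Go (the structure is my assumption, not taken from the article):

    // CLOCK / second-chance eviction: entries sit in a fixed array, a
    // visited flag is set on access, and the hand sweeps forward clearing
    // flags until it finds an unflagged slot to replace. No next/prev
    // pointers needed.
    package main

    import "fmt"

    type entry struct {
        key     int
        visited bool
    }

    type clockCache struct {
        slots []entry
        index map[int]int // key -> slot
        hand  int
    }

    func newClockCache(capacity int) *clockCache {
        return &clockCache{
            slots: make([]entry, 0, capacity),
            index: make(map[int]int),
        }
    }

    func (c *clockCache) Access(key int) (hit bool) {
        if i, ok := c.index[key]; ok {
            c.slots[i].visited = true
            return true
        }
        if len(c.slots) < cap(c.slots) { // still filling up
            c.index[key] = len(c.slots)
            c.slots = append(c.slots, entry{key: key})
            return false
        }
        // Sweep: clear flags until a non-flagged victim is found.
        for c.slots[c.hand].visited {
            c.slots[c.hand].visited = false
            c.hand = (c.hand + 1) % len(c.slots)
        }
        delete(c.index, c.slots[c.hand].key)
        c.slots[c.hand] = entry{key: key}
        c.index[key] = c.hand
        c.hand = (c.hand + 1) % len(c.slots)
        return false
    }

    func main() {
        c := newClockCache(3)
        for _, k := range []int{1, 2, 3, 1, 4, 1, 5} {
            fmt.Println(k, "hit:", c.Access(k))
        }
    }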


A close-up of the same zone in each image would make them visible. I could hardly see the artefacts in the first place, as my attention was caught by the highly contrasted parts of the images.

