Hacker News

It also adds another piece of infrastructure that needs to be maintained and can go down. Not necessarily the best option for everyone.



While the caching would be more effective with one large centralized instance, I think the intended use case is one cache per server. In that case it's not really extra infrastructure.


Actually, you can do it either way since it's backed by Redis. You can set all servers to connect to the same Redis instance, or run them all individually.
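A minimal sketch of the two topologies being described, with plain dicts standing in for Redis instances (all names are illustrative, not from the project):

```python
# Shared topology: every app server points at one central cache,
# so a value cached by one server is immediately visible to the others.
shared_cache = {}
servers_shared = [shared_cache, shared_cache, shared_cache]

# Per-server topology: each server keeps its own cache. Entries are
# duplicated per server and hit rates are lower, but there is no extra
# shared infrastructure to operate (or to go down).
servers_local = [{}, {}, {}]

servers_shared[0]["user:42"] = "cached-response"
assert "user:42" in servers_shared[1]      # visible to every server

servers_local[0]["user:42"] = "cached-response"
assert "user:42" not in servers_local[1]   # isolated per server
```

With the shared topology the consistency question largely disappears, since there is only one copy of each key; the trade-off is that the central cache becomes a dependency for all servers.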


Do you use Redis clustering in this case (otherwise how would the cache stay consistent)?


What if Redis goes down?


"What if X goes down?" is my new favorite straw man on HN.

Regardless, it's just a caching layer, and requests are passed through to the API in that case.
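The pass-through behavior described above can be sketched as follows. This is a hypothetical stand-in, not the project's actual code: `FlakyCache`, `CacheDown`, and `fetch_from_api` are invented names used to simulate a cache backend that may be unreachable.

```python
class CacheDown(Exception):
    """Raised when the cache backend is unreachable."""

class FlakyCache:
    """Toy in-memory cache that can simulate an outage via `up=False`."""
    def __init__(self, up=True):
        self.up = up
        self.store = {}

    def get(self, key):
        if not self.up:
            raise CacheDown()
        return self.store.get(key)

    def set(self, key, value):
        if not self.up:
            raise CacheDown()
        self.store[key] = value

def fetch_from_api(key):
    # Stand-in for the upstream API call.
    return f"api-result-for-{key}"

def cached_fetch(cache, key):
    # Try the cache first; if it is down, fall through to the API
    # instead of failing the request.
    try:
        hit = cache.get(key)
        if hit is not None:
            return hit
    except CacheDown:
        return fetch_from_api(key)
    value = cache_value = fetch_from_api(key)
    try:
        cache.set(key, cache_value)
    except CacheDown:
        pass  # Cache write failed; the response is still served.
    return value
```

So a Redis outage degrades the system to uncached API calls (higher latency, more upstream load) rather than taking it down outright.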


There's always a single point of failure.



