
I never understood this meme.

We use caching a lot; anything that gets cached can only be written by one service each. The writing services emit cache invalidation messages via SNS, which cache consumers must listen to via SQS to clear or update their caches.
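
A minimal sketch of that pattern, assuming boto3, standard (non-raw) SNS-to-SQS delivery, and hypothetical topic/queue names; the real message schema is whatever your services agree on:

    import json
    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    # Hypothetical ARN/URL, for illustration only.
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cache-invalidation"
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-cache-invalidation"

    def publish_invalidation(key):
        """Writing service: announce that a cached key is now stale."""
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"key": key}))

    def drain_invalidations(local_cache):
        """Cache consumer: drop any keys named in pending invalidation messages."""
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
        )
        for msg in resp.get("Messages", []):
            envelope = json.loads(msg["Body"])             # SNS wraps the payload
            key = json.loads(envelope["Message"])["key"]
            local_cache.pop(key, None)                     # drop the stale entry
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])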

Alternatively, we cache stuff with just a TTL when immediate cache invalidation is not important.
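
A tiny sketch of that approach (illustrative only): each entry carries an expiry time and simply reads as a miss once it passes.

    import time

    class TTLCache:
        """Minimal TTL cache: entries expire on read, no invalidation messages needed."""

        def __init__(self, ttl_seconds):
            self.ttl = ttl_seconds
            self.store = {}  # key -> (value, expires_at)

        def get(self, key):
            hit = self.store.get(key)
            if hit is None:
                return None
            value, expires_at = hit
            if time.monotonic() >= expires_at:
                del self.store[key]  # expired: treat as a miss
                return None
            return value

        def set(self, key, value):
            self.store[key] = (value, time.monotonic() + self.ttl)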

Where’s the struggle?






> Where’s the struggle?

If there are no real consequences for reading stale data, and your writes are infrequent enough, then indeed you're lucky and have a relatively simple problem.


You don’t support read-your-own-writes, and your cached data might be stale for an arbitrarily long time. These relaxed consistency constraints make caching a lot easier. If that’s acceptable for your use cases, then you’re in a great place! If not… well, at scale you often need to find a way to make it acceptable anyway.

Does SQS guarantee delivery to all clients? If it does then that’s doing a lot of heavy lifting for you.

If it doesn’t guarantee delivery, then I believe you will at some point have a client that reads a cached value thinking it’s still valid because the invalidation message got lost in the network.


Eventually. The problem is that, until that message is eventually delivered, clients keep assuming the cached value is still the same, when it’s not.

> Where’s the struggle?

> anything that gets cached can only be written by one service each

How do you guarantee it's only written by one service each? Sounds like locking across network boundaries, which is not easy.

> The writing services emit cache invalidation messages via SNS that cache users must listen to via SQS

SNS and SQS are both nontrivial services (at least you don't have to build or maintain them, I suppose) that require training to use effectively and to avoid their possible footguns.

I think you're underestimating the complexity in your own solution, and you're probably lucky that some of the harder problems have already been solved for you.


> I never understood this meme.

If you don't understand how and why and when eventual consistency is a problem, you will never understand why cache invalidation is hard.

By the sound of your example, you only handle scenarios where naive approaches to cache invalidation serve your needs, and you don't even have to deal with problems caused by spikes to origin servers. That's perfectly fine.

Others do. They understand the meme. You can too if you invest a few minutes reading up on the topic.


Here's one: everybody invalidating and refreshing their cache at the same time can cause a thundering herd problem.
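
One common mitigation (not something the parent comment prescribes, just a sketch with made-up numbers) is to add jitter to the TTL so a fleet of caches doesn't expire and refresh in lockstep:

    import random
    import time

    BASE_TTL = 300          # illustrative: 5 minutes
    JITTER_FRACTION = 0.1   # spread expiries over +/- 10%

    def jittered_expiry():
        """Stagger expiry times so the whole fleet doesn't hit the origin at once."""
        jitter = BASE_TTL * JITTER_FRACTION
        return time.monotonic() + BASE_TTL + random.uniform(-jitter, jitter)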

That's because relying on a TTL simplifies the concept of caching and makes invalidation trivial, but also inflexible.

It's used in DNS, which was already given as an example here. There is no way to be sure clients see an updated value before the TTL expires. As a result, you have to use very conservative TTLs, which is very inefficient.


You can’t be sure even after the TTL, to be fair.

I've never really understood it either. In my experience, in order for a cache to be a possible solution to a given problem at all, you must either:

1. Be content with/resilient to the possibility of stale data.

2. Gatekeep all reads and writes (for some subset of the key space) through a single thread (sketched below).

That's basically it.
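
A minimal sketch of option 2, assuming one worker thread that exclusively owns the data and serializes every read and write through a queue (names are illustrative):

    import queue
    import threading

    class SingleWriterCache:
        """All reads and writes funnel through one thread, so callers never race on a key."""

        def __init__(self):
            self.requests = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            store = {}  # owned exclusively by this thread
            while True:
                op, key, value, reply = self.requests.get()
                if op == "get":
                    reply.put(store.get(key))
                else:  # "set"
                    store[key] = value
                    reply.put(None)

        def get(self, key):
            reply = queue.Queue(maxsize=1)
            self.requests.put(("get", key, None, reply))
            return reply.get()

        def set(self, key, value):
            reply = queue.Queue(maxsize=1)
            self.requests.put(("set", key, value, reply))
            reply.get()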



