We use caching a lot. Anything that gets cached can only be written by one service each. The writing services emit cache invalidation messages via SNS that cache users must listen to via SQS, to clear/update their cache.
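For concreteness, here's a minimal sketch of that event-driven shape using boto3. The topic ARN, queue URL, and `save_to_database` are hypothetical placeholders, not the commenter's actual setup:

```python
import json
import boto3

# Hypothetical names -- the real values would come from configuration.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cache-invalidation"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-service-cache"

sns = boto3.client("sns")
sqs = boto3.client("sqs")

def save_to_database(key: str, value: dict) -> None:
    ...  # placeholder for the actual write path

def write_record(key: str, value: dict) -> None:
    """The single owning service: persist first, then announce the change."""
    save_to_database(key, value)
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"invalidate": key}))

def drain_invalidations(local_cache: dict) -> None:
    """A cache user: poll its SQS subscription and drop invalidated entries."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20,
                               MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])         # SNS->SQS wraps the payload
        payload = json.loads(envelope["Message"])  # (assumes raw delivery off)
        local_cache.pop(payload["invalidate"], None)
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```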
Alternatively we cache stuff with just a TTL, when immediate cache invalidation is not important.

Where's the struggle?
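The TTL variant, as a toy in-process version (the lazy eviction here is purely illustrative):

```python
import time

class TTLCache:
    """Toy TTL cache: entries simply expire; no invalidation traffic at all."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry deadline)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.store[key]  # lazily evict on read
            return None          # caller must refetch from the source of truth
        return value
```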
If there are no real consequences for reading stale data, and your writes are infrequent enough, then indeed you're lucky and have a relatively simple problem.
You don’t support read-your-own-writes, and your cached data might be stale for arbitrarily long. These relaxed consistency constraints make caching a lot easier. If that’s acceptable to your use cases then you’re in a great place! If not… well, at scale you often need to find a way for it to be acceptable anyway.
Does SQS guarantee delivery to all clients? If it does then that’s doing a lot of heavy lifting for you.
If it doesn’t guarantee delivery, then I believe you will at some point have a client that reads a cached value thinking it’s still valid because the invalidation message got lost in the network.
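One common mitigation (sketched below as an illustration, not something the parent comment claims to do) is to combine both of their approaches: event-driven invalidation for freshness, plus a conservative max age as a backstop, so a lost message causes bounded rather than unbounded staleness:

```python
import time

class BackstopCache:
    """Invalidation messages give fast eviction; the max age bounds how long
    an entry can survive if its invalidation message is lost in transit."""

    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self.store = {}  # key -> (value, time it was cached)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic())

    def invalidate(self, key):
        # Called from the SQS consumer when an invalidation message arrives.
        self.store.pop(key, None)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, cached_at = entry
        if time.monotonic() - cached_at > self.max_age:
            del self.store[key]  # backstop: entry outlived its trust window
            return None
        return value
```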
> anything that gets cached can only be written by one service each
How do you guarantee it's only written by one service each? Sounds like locking across network boundaries, which is not easy.
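For illustration, "locking across network boundaries" often ends up looking like a lease, e.g. Redis's SET with NX and an expiry. This sketch shows the idea, though the parent's guarantee may just as well be organizational (one service simply owns each dataset):

```python
import redis

r = redis.Redis()  # hypothetical shared coordination store

def try_acquire_lease(resource: str, owner: str, ttl_seconds: int = 30) -> bool:
    # SET ... NX EX: succeeds only if nobody else currently holds the lease;
    # the expiry keeps a crashed owner from blocking everyone forever.
    return bool(r.set(f"lease:{resource}", owner, nx=True, ex=ttl_seconds))

def release_lease(resource: str, owner: str) -> None:
    # Check-then-delete races with expiry; real implementations do this
    # atomically in a Lua script. Kept simple for the sketch.
    if r.get(f"lease:{resource}") == owner.encode():
        r.delete(f"lease:{resource}")
```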
> The writing services emit cache invalidation messages via SNS that cache users must listen to via SQS
SNS and SQS are both nontrivial services (at least you don't have to build or maintain them, I suppose) that require training to use effectively and to avoid their footguns.
I think you're underestimating the complexity in your own solution, and you're probably lucky that some of the harder problems have already been solved for you.
If you don't understand how and why and when eventual consistency is a problem, you will never understand why cache invalidation is hard.
By the sound of your example, you only handle scenarios where naive approaches to cache invalidation serve your needs, and you don't even have to deal with problems caused by spikes to origin servers. That's perfectly fine.
Others do. They understand the meme. You can too if you invest a few minutes reading up on the topic.
That's because relying on a TTL simplifies the concept of caching: it makes invalidation trivial, but also inflexible.
It's used in DNS, which was already given as an example here. There is no way to be sure clients see an updated value before the end of the TTL: a resolver that cached a record one second before you changed it will keep serving the old value until its TTL runs out. As a result, you have to use very conservative TTLs, which is very inefficient.