Not quite the same, but at Fandom (Wikia), every wiki has its own DB (over 300,000 wikis), and they're clustered across a bunch of servers (usually balanced by traffic). It works well, but we don't ever really need to query across databases. There's a bunch of logic around instance/DB selection, but that's about as complex as it gets.
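In spirit, that selection logic boils down to a lookup from wiki to (cluster, database). Here's a minimal sketch of the idea in Python; all names and values are hypothetical, the real logic is PHP and lives in the repo linked below:

```python
# Hypothetical sketch of per-wiki DB routing: a central directory maps
# each wiki to the cluster (server) hosting its database.
from dataclasses import dataclass

@dataclass(frozen=True)
class WikiLocation:
    cluster: str   # e.g. "db-cluster-3"
    database: str  # e.g. "wiki_starwars"

# In practice this mapping would live in a central directory store,
# not an in-memory dict. Entries here are invented examples.
WIKI_DIRECTORY: dict[str, WikiLocation] = {
    "starwars": WikiLocation(cluster="db-cluster-3", database="wiki_starwars"),
    "minecraft": WikiLocation(cluster="db-cluster-7", database="wiki_minecraft"),
}

def connection_params_for(wiki: str) -> WikiLocation:
    """Resolve a wiki (e.g. from its subdomain) to its cluster + database."""
    try:
        return WIKI_DIRECTORY[wiki]
    except KeyError:
        raise LookupError(f"unknown wiki: {wiki}") from None
```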
Interesting architecture. From a design point of view, I like the idea of full isolation. From an infrastructure point of view, I'm a little scared. I'd assume it's actually not that bad and that there's a good way to manage the individual DBs and scale them individually.
Really interested if you can share any details.
Edit: I know each wiki is on a subdomain. Does each wiki also have its own server?
There are _many_ databases on each server; last I checked there were around 8 servers (or: "clusters"), and traffic is somewhat evenly distributed across them. There are reasonable capacity limits, and when servers get full we spin up a new one and start accepting new wikis there. I'm not in Ops, and they do a lot of work behind the scenes to make this all run smoothly, but from an eng perspective we rarely have issues with this at scale.
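The placement side is conceptually just "pick a cluster with headroom, or provision a new one". A rough sketch, with an invented capacity number and helper names (this isn't our actual tooling):

```python
# Hypothetical placement logic for new wikis: fill clusters up to a
# capacity limit, and spin up a new cluster when everything is full.
MAX_WIKIS_PER_CLUSTER = 50_000  # assumed limit, purely illustrative

def pick_cluster(cluster_loads: dict[str, int]) -> str:
    """Return the least-loaded cluster with headroom, or provision a new one."""
    open_clusters = {
        name: load
        for name, load in cluster_loads.items()
        if load < MAX_WIKIS_PER_CLUSTER
    }
    if not open_clusters:
        return provision_new_cluster()
    return min(open_clusters, key=open_clusters.get)

def provision_new_cluster() -> str:
    # Ops-side step (stand up a new DB server/cluster), elided here.
    raise NotImplementedError
```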
Some of this was open source before we unified all of our wiki products; the old codebase at https://github.com/Wikia/app has a lot of the selection / DB logic.
It doesn't change often; when it does, we have large automated rollout plans. We've done mass changes enough times that there are good procedures around large DB migrations.
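For mass schema changes, the general shape is an automated loop over every wiki DB, grouped per cluster so a bad change can be halted before it rolls out everywhere. Again just a sketch with invented helper names, not the real migration tooling:

```python
# Hypothetical mass-migration driver: apply one schema change to every
# wiki database, cluster by cluster.
import logging
from collections import defaultdict

def run_migration(directory: dict[str, tuple[str, str]], ddl: str) -> None:
    """directory maps wiki -> (cluster, database); ddl is one schema change."""
    by_cluster: dict[str, list[str]] = defaultdict(list)
    for cluster, database in directory.values():
        by_cluster[cluster].append(database)

    # Migrate one cluster at a time; a failure stops the rollout early.
    for cluster, databases in sorted(by_cluster.items()):
        logging.info("migrating %d databases on %s", len(databases), cluster)
        for database in databases:
            apply_ddl(cluster, database, ddl)

def apply_ddl(cluster: str, database: str, ddl: str) -> None:
    # Placeholder: connect to the cluster, select the database, execute ddl.
    raise NotImplementedError
```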