Wait. Why? This sounds hard if you're used to the giant DBs of old, but you can probably run many, many instances of smaller databases without much trouble.

Would still be some maintenance, don't get me wrong. But far from impossible.



Imagine the database schema migrations...


Having worked at shops that used this architecture, it's really not that bad. Can you write the code to do one schema migration? Great, now you can do 1000. The app server boots, runs the schema migrations, drops privs and launches the app. Now you've turned your scaling problem from "how to have a db large enough to hold all our customer data" into "how to have a db large enough to hold our biggest customer's data." Much easier.
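
To make that concrete, here's a rough sketch of that boot-time loop. It's Python + SQLite purely to keep it self-contained; the migration list, table names and paths are invented for illustration, not anyone's actual setup:

    import sqlite3
    from pathlib import Path

    # Hypothetical migrations: (version, DDL). Real shops usually keep these
    # as numbered files; the statements here are made up for illustration.
    MIGRATIONS = [
        (1, "CREATE TABLE tickets (id INTEGER PRIMARY KEY, title TEXT NOT NULL)"),
        (2, "ALTER TABLE tickets ADD COLUMN closed_at TEXT"),
    ]

    def migrate(db_path):
        # isolation_level=None -> autocommit; transactions are opened explicitly
        # below, and SQLite DDL is transactional, so each migration is all-or-nothing.
        conn = sqlite3.connect(db_path, isolation_level=None)
        try:
            conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version INTEGER PRIMARY KEY)")
            applied = {v for (v,) in conn.execute("SELECT version FROM schema_migrations")}
            for version, ddl in MIGRATIONS:
                if version in applied:
                    continue  # already applied: rerunning on every boot is a no-op
                conn.execute("BEGIN")
                try:
                    conn.execute(ddl)
                    conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
                    conn.execute("COMMIT")
                except Exception:
                    conn.execute("ROLLBACK")
                    raise
        finally:
            conn.close()

    # One small database per tenant: migrate them all, then drop privs and serve.
    for db_file in sorted(Path("tenants").glob("*.db")):
        migrate(db_file)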


You can write the code to do 1000 schema migrations, but the problem is when you've migrated 40% of them and then hit an issue. What do?
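
A sketch of what that failure mode looks like in a fleet runner, and one way (an assumption, not anything claimed in this thread) to keep it recoverable: attempt every tenant, record which ones failed, fix the bad migration, and re-run. Tenants that already succeeded are skipped because the per-tenant migrate() sketched above is idempotent:

    import json
    import logging

    # Hypothetical fleet runner: migrate() is the per-tenant function sketched
    # above. Failures are collected rather than aborting the whole run, so the
    # operator can fix the migration and simply run the loop again.
    def migrate_fleet(tenant_dbs, migrate):
        failures = {}
        for db in tenant_dbs:
            try:
                migrate(db)
            except Exception as exc:
                logging.exception("migration failed for %s", db)
                failures[str(db)] = repr(exc)
        with open("migration_failures.json", "w") as fh:
            json.dump(failures, fh, indent=2)
        return failures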


One of the many reasons to put good constraints on fields and use referential integrity! If you don't let the database enforce data validity you are gonna get fucked at some point!

source: every single place I've worked at that pooh-poohs referential integrity has a database that is full of bullshit that "the application code" never cleaned up

Always use referential integrity. The people who are against it are almost always against it for superstitious reasons (e.g. "it makes things slow" or "only one codebase calls it, so the code can enforce the integrity"). All it takes is exactly one bug in the application code to corrupt the whole damn thing. And that bug will happen over the lifetime of the product, regardless of how "good" or "awesome" the programmers think they are...

... I'll get off my soapbox now!
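
To make the soapbox concrete, a tiny sketch of what "let the database enforce it" buys you (SQLite here only because it's easy to run; the tables are invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    conn.execute("""
        CREATE TABLE tickets (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(id),
            title TEXT NOT NULL CHECK (length(title) > 0)
        )""")
    conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
    conn.execute("INSERT INTO tickets (customer_id, title) VALUES (1, 'legit row')")
    try:
        # the "one bug in the application code": an orphan pointing at customer 42
        conn.execute("INSERT INTO tickets (customer_id, title) VALUES (42, 'orphan')")
    except sqlite3.IntegrityError as exc:
        print("rejected by the database:", exc)  # FOREIGN KEY constraint failed

The buggy write dies at the database instead of quietly becoming the bullshit that "the application code" never cleans up.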


That's one thing, yes. What if there's a transient network error, or the DB runs out of memory, and now you have some data in an old state and some in a new one?

You're lecturing about table design. I'm talking about more general transactionality in the face of any error.


the good news is by the time you get to the 100th client, you'll likely have run into all possible bugs and the remaining 6900 will be pretty smooth.


You'll quickly run into limits on how many TCP connections you can hold open, unless you also want to run separate app servers for each customer, which will cost a lot of $$$.
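
One partial mitigation on the app side (just a sketch; server-side limits and external poolers like PgBouncer are a separate topic) is to open tenant connections lazily and cap how many stay open, instead of holding one socket per tenant:

    from collections import OrderedDict

    # Sketch of a lazy, capped per-tenant connection cache. connect_fn is
    # whatever your driver provides (e.g. psycopg2.connect); the cap and the
    # LRU eviction policy here are invented for illustration.
    class TenantConnections:
        def __init__(self, connect_fn, max_open=100):
            self.connect_fn = connect_fn
            self.max_open = max_open
            self._open = OrderedDict()

        def get(self, tenant_dsn):
            if tenant_dsn in self._open:
                self._open.move_to_end(tenant_dsn)   # mark as recently used
                return self._open[tenant_dsn]
            if len(self._open) >= self.max_open:
                _, oldest = self._open.popitem(last=False)
                oldest.close()                       # drop the coldest connection
            conn = self.connect_fn(tenant_dsn)
            self._open[tenant_dsn] = conn
            return conn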

Oh, and just forget about allowing your customers to share their data with each other, which most enterprises want in one way or another.


Wait. What? None of the enterprise customers want to share data with each other. And definitely not on a DB level. That should happen in the business logic.


Lots of companies have consultants, and want to be able to share their consulting-related tickets with their consultants. And the consultants want one system they can log into and see the tickets from all of the companies that are hiring them.


It would be a nightmarish scenario if you have thousands of customers. And completely unnecessary. You can create multiple databases and/or schemas in a single instance.

Don't do any of the above unless you understand the implications.
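
For the schemas-in-one-instance flavor, a rough sketch (Postgres and psycopg2 assumed; the DSN, schema naming and table are invented). The important detail is quoting tenant names with sql.Identifier rather than string-formatting them into the SQL:

    import psycopg2
    from psycopg2 import sql

    def create_tenant_schema(dsn, tenant):
        schema = f"tenant_{tenant}"
        conn = psycopg2.connect(dsn)
        try:
            with conn, conn.cursor() as cur:  # the with-block commits on success
                cur.execute(sql.SQL("CREATE SCHEMA IF NOT EXISTS {}").format(sql.Identifier(schema)))
                cur.execute(
                    sql.SQL("CREATE TABLE IF NOT EXISTS {}.tickets "
                            "(id bigserial PRIMARY KEY, title text NOT NULL)")
                    .format(sql.Identifier(schema))
                )
        finally:
            conn.close()

    # Per request, point the session at the right tenant before querying:
    #   cur.execute(sql.SQL("SET search_path TO {}").format(sql.Identifier("tenant_acme")))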



