For internet-user-facing full-text search I would always prefer a separate tool over a SQL database, because:
- the full-text tool can and should hold only 'active' data
- as it holds only active data, the data size is usually much, much smaller
- as the data size is smaller, it fits better in RAM
- as the data size is smaller, it can probably run on weaker HW than the full ACID DB
- as the indexed data are mostly read-only, the VM where it runs can be cloned relatively easily (I've never seen a corruption so far)
- as FTS tools are usually schema-less, there is no outage during schema changes (compared to making changes in an ACID DB)
- as the indexed data are mostly read-only, they can be easily backed up
- as the backups are smaller, restoring a backup can be very fast
- and there is no such thing as a database-upgrade outage: you just spin up a new version, feed it the new data, and then switch the backends
- functionality and extensibility
There are probably more reasons, but if you don't need to do a full-text search over the whole database (and you usually don't), then it's IMHO better to use a separate tool that doesn't come with all the ACID constraints.
Probably the only downside is that you need to format the data for the FTS tool and index it, but if you want to run a serious full-text search, you will have to take almost the same steps in the database anyway.
On a 15-year-old side project, I use SOLR for full-text search, serving 20-30k requests per day on a cheap VM, with PostgreSQL as the primary data source.
The PostgreSQL instance has had several longer outages: during major upgrades, because of disk corruption, failed schema migrations, 'problems' between the chair and the keyboard, etc.
During those outages the full-text search always worked. It didn't have the most recent data, but most users probably never noticed.
> the full-text tool can and should hold only 'active' data
very possible with postgres, too. instead of augmenting your primary table to support search, you would have a secondary/ephemeral table serving search duties
> as the data size is smaller, it fits better in RAM
likewise, a standalone table for search helps here, containing only the relevant fields and attributes. this can be further optimized by using partial indexes.
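for illustration, a minimal sketch of that standalone search table (hypothetical names; the generated column needs PostgreSQL 12+):

    CREATE TABLE search_docs (
        id     bigint PRIMARY KEY,
        active boolean NOT NULL DEFAULT true,
        title  text,
        body   text,
        -- tsvector kept up to date automatically by the database
        tsv    tsvector GENERATED ALWAYS AS
                   (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED
    );

    -- partial GIN index: only 'active' rows are indexed at all
    CREATE INDEX search_docs_tsv_idx ON search_docs USING gin (tsv) WHERE active;

    -- querying; the WHERE active clause lets the planner use the partial index
    SELECT id, ts_rank(tsv, q) AS rank
    FROM search_docs, websearch_to_tsquery('english', 'cheap vm') AS q
    WHERE active AND tsv @@ q
    ORDER BY rank DESC
    LIMIT 20;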
> as FTS tools are usually schema-less, there is no outage during schema changes (compared to making changes in an ACID DB)
postgresql can be used in this manner by using json/jsonb fields. instead of defining every field, just define one field and drop whatever you want in it.
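a hedged sketch of that approach (PostgreSQL 11+, where jsonb_to_tsvector is available): one jsonb column, every string value inside it searchable, no per-field DDL:

    CREATE TABLE search_blobs (
        id      bigint PRIMARY KEY,
        payload jsonb NOT NULL
    );

    -- index every string value in the document, without naming fields
    CREATE INDEX search_blobs_fts_idx ON search_blobs
        USING gin (jsonb_to_tsvector('english', payload, '["string"]'));

    -- new 'fields' are just new JSON keys; the query does not change
    SELECT id
    FROM search_blobs
    WHERE jsonb_to_tsvector('english', payload, '["string"]')
          @@ websearch_to_tsquery('english', 'some words');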
> as the indexed data are mostly read-only, they can be easily backed up
same for postgres. the search table can be exported very easily as parquet, csv, etc.
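e.g. from psql, assuming the hypothetical search_docs table from above (CSV is built in; parquet would need extra tooling on top):

    \copy search_docs TO 'search_docs.csv' WITH (FORMAT csv, HEADER)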
> as the backups are smaller, restoring a backup can be very fast
tbh regardless of underlying mechanism, if your search index is based on upstream data it is likely easier to just rebuild it versus restoring a backup of throwaway data.
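e.g., assuming a hypothetical upstream articles table, a rebuild is just one transaction (the generated tsv column repopulates itself):

    BEGIN;
    TRUNCATE search_docs;
    INSERT INTO search_docs (id, active, title, body)
    SELECT id, true, title, body
    FROM articles
    WHERE status = 'active';
    COMMIT;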
> The PostgreSQL instance has had several longer outages: during major upgrades, because of disk corruption, failed schema migrations, 'problems' between the chair and the keyboard, etc.
to be fair, these same issues can happen with elasticsearch or any other tool.
PostgreSQL has to handle writes, reports, etc., so I doubt it will cache as efficiently as a full-text engine; you'll need full or partial replicas to distribute the load.
And yes, I agree, almost all of this can be done with separate search table(s), but those table(s) will still live in a 'crowded house', so again replicas will probably be necessary at some point.
And using replicas brings a new set of problems and costs ;-)
One client used MySQL for full-text search. It was a single beefy RDS server, costing well over $1k per month, and the costs kept rising. It was replaced with a single ~$100 EC2 machine running Meilisearch.
While I agree somewhat with the post above, the answer isn't really so black and white; it depends on your context, i.e. scale, app complexity, search needs, data size, etc.
> the full-text tool can and should hold only 'active' data
The same can be said about your DB. You can create separate tables or partitions to hold only active data. I assume materialized views would also work here (but I've never used them for FTS). You can even create a separate Postgres instance and use it only for FTS data.
The reason to do that might be to avoid coupling your business logic to another ORM/DSL and having your team learn another query language and its gotchas.
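A rough sketch of the materialized-view variant mentioned above (hypothetical articles source table); REFRESH ... CONCURRENTLY needs a unique index, but doesn't block readers:

    CREATE MATERIALIZED VIEW active_search AS
    SELECT id,
           title,
           to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, '')) AS tsv
    FROM articles
    WHERE status = 'active';

    CREATE UNIQUE INDEX ON active_search (id);   -- required by REFRESH ... CONCURRENTLY
    CREATE INDEX ON active_search USING gin (tsv);

    -- periodic refresh without blocking readers
    REFRESH MATERIALIZED VIEW CONCURRENTLY active_search;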
> as the data size is smaller, it fits better in RAM
> as the indexed data are mostly read-only, the VM where it runs can be cloned relatively easily
> as the indexed data are mostly read-only, they can be easily backed up
> as the backups are smaller, restoring a backup can be very fast
Once the PG tables are separate and the relevant indexes exist, I assume PG can also keep most of that data in memory.
There isn't anything stopping you from using a different instance of PG for FTS if needed.
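If you want to be explicit about RAM, the pg_prewarm contrib module can pull a table and its index into the buffer cache; a sketch reusing the hypothetical names from above:

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('search_docs');          -- load the table into shared buffers
    SELECT pg_prewarm('search_docs_tsv_idx');  -- and its GIN index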
> as FTS tools are usually schema-less, there is no outage during schema changes
True. But in practice, ES for example does have a schema (mappings, field types, indexes), and will have you re-index your rows/data, in some cases rebuilding your index entirely to be safe. There are field types, and your querying will depend on the field types you choose. I remember even SOLR did, because I had to figure out geospatial field types to do those queries, but I haven't used it in a decade so can't say how things stand now.
While the OP's point stands, in a sufficiently complex FTS project you'll need all of those features, and you'll have to deal with the following on search-oriented DBs:
- Schema migrations or some async jobs to re-index data. In fact it was worse than Postgres, because at least in an RDBMS migrations are well understood. In ES, devs would change field types and expect everything to work, without realizing that only the new data was getting the new mapping. So we sometimes had to re-index entire indexes to get around this, for each schema change.
- At scale you'll have to tap into the WAL via CDC/Debezium to ensure the data in your search index is up to date and no rows were missed. Which means dealing with robust queues/pub-sub.
- A whole other ORM or DSL for Elasticsearch. If you don't use one, your queries will soon become a mish-mash of string concats or f-strings, which is even worse for maintainability.
- Unless your search server is directly serving browser traffic, you'll add additional latency traversing hops. In some cases Meilisearch or Typesense might work here.
I usually recommend that engineers (starting out on a new search product feature) begin with FTS on Postgres and jump to another search DB as and when needed. FTS support has improved greatly in Python frameworks like Django. I've made the other choice of jumping too soon to a separate search DB and come to regret it, because it required me to either build abstractions on top or use a DSL SDK, then ensure the data in both is "synced" up, maintain observability/telemetry on this new DB, and so on. The time/effort investment was not linear, and the ROI wasn't in the same range for the use case I was working on.
I actually got more mileage out of search by just dumping small CSV datasets into S3, downloading them in the browser, and doing FTS client-side via JS libs. This basically got me zero-latency search, albeit for small enough per-user datasets.
Yes, it always depends on the application and the purpose.
But once you have to deal with a real FTS load, as you say, you have to use separate instances and replication, materialized views, etc., and you find yourself almost halfway to implementing an ETL pipeline, and, because of the replicas, with a more complicated setup than having an FTS tool. And then somebody finds out what vector search is and asks you if there is a PG extension for it (yes, there is).
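(The extension in question is presumably pgvector; a toy sketch with made-up names and a toy dimension, where the HNSW index type needs pgvector 0.5+:)

    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE doc_embeddings (
        id        bigint PRIMARY KEY,
        embedding vector(3)   -- toy dimension; real embeddings are much wider
    );

    CREATE INDEX ON doc_embeddings USING hnsw (embedding vector_l2_ops);

    -- nearest neighbours by L2 distance
    SELECT id FROM doc_embeddings ORDER BY embedding <-> '[1,2,3]' LIMIT 10;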
So IMHO with FTS in the database, you'll probably have to deal with almost the same problems as with an external FTS (materialized views, triggers, reindexing, replication, migrations), but without all its features, and with the constraints of an ACID database (locks, transactions, writes)...
Btw, I have SOLR sitting right behind OpenResty, so no extra hops. With a database there would be one more hop and a bunch of SQL queries, because it doesn't speak HTTP (although I'm sure there is a PG extension for that ;-)