Hacker News

You can still persist them; you just have to wire it up yourself. Using workers is a best practice for any CPU-bound task, so that's not a drawback by itself in my mind.

It's good for single-page applications. Many datasets are relatively small -- pushing them to the client is reasonable. In exchange, you get zero-latency querying and can build very responsive UIs in SQL, instead of writing REST or GraphQL APIs.

Taken to an extreme, it permits publishing datasets that can be queried with no ongoing server-side expenses.
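The zero-latency idea above reduces to: fetch the dataset once from static hosting, load it into a client-side SQLite database, and drive every UI interaction with local SQL. A rough Python approximation (in the browser the same pattern would run against a WASM build of SQLite; the table and data here are invented for illustration):

```python
import sqlite3

# Pretend this blob was fetched once from static file hosting.
DATASET = [
    {"state": "CA", "population": 39_500_000},
    {"state": "TX", "population": 29_100_000},
    {"state": "VT", "population": 643_000},
]

# Load the whole dataset into an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (state TEXT, population INTEGER)")
conn.executemany(
    "INSERT INTO states VALUES (:state, :population)", DATASET
)

# Every UI interaction is now just a local SQL query -- no REST
# endpoint, no network round trip, no ongoing server cost.
big = conn.execute(
    "SELECT state FROM states WHERE population > 10000000 ORDER BY state"
).fetchall()
print(big)  # [('CA',), ('TX',)]
```

The trade-off is the one-time download cost of the dataset, which is why this works best when the data is small enough to ship whole.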

A wild example: Datasette is a Python service that makes SQLite databases queryable via the web. It turns out that since you can compile Python and SQLite to WASM, you can run Datasette entirely in the user's browser [1]. The startup time is brutal, because it literally performs the `pip install` in the browser, but a purpose-built SPA wouldn't have this problem.

[1]: https://lite.datasette.io/?url=https%3A%2F%2Fcongress-legisl...




Thanks for the explanation! The author of Datasette himself answered, which is the kind of thing I really like about HN =)



