Hacker News

dang. The only thing I've found so far is a `mysqldump --no-data` followed by a restore from the dump. Not fast at all. Maybe you could pre-provision a few thousand DBs this way in parallel and write a service to hand out DB handles...
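The pre-provisioning idea above could be sketched roughly as follows. This is a hypothetical sketch, assuming a template database named `template_db`, a pool size of 1000, and passwordless local access; none of those specifics are from the original comment:

```shell
# Dump schema only (--no-data skips all rows)
mysqldump --no-data template_db > schema.sql

# Pre-provision a pool of empty databases from the schema dump.
# seq/xargs -P fans the restores out across 8 parallel workers.
seq 1 1000 | xargs -P 8 -I {} sh -c '
  mysql -e "CREATE DATABASE pool_{}"
  mysql pool_{} < schema.sql
'
```

A separate service would then hand out `pool_N` names to test runners and recreate each database after use.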



Using tmpfs for MySQL/MariaDB's data directory helps tremendously. If you're using Docker natively on Linux, `docker run --tmpfs /var/lib/mysql ...` will do the trick. The only downside is that each container restart is slightly slower, since the database instance has to be re-initialized from scratch.
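A fuller version of that command might look like the following. The container name, image tag, and empty-password env var (supported by the official `mysql` image) are assumptions for illustration:

```shell
# Data directory lives in RAM; everything is discarded when the container stops
docker run -d --name mysql-ci \
  --tmpfs /var/lib/mysql \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=1 \
  -p 3306:3306 \
  mysql:8.0
```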

Tuning the database server settings can help a lot too. You can add overrides to the very end of your `docker run` command-line, and they get passed through as command-line arguments to the database server. For example, use --skip-performance-schema to avoid the overhead of performance_schema if you don't need it in your test/CI environment.
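Concretely, anything after the image name is forwarded to the server process as an argument. A minimal sketch (image tag and env var are assumptions):

```shell
# --skip-performance-schema is received by mysqld, not by Docker
docker run -d -e MYSQL_ALLOW_EMPTY_PASSWORD=1 \
  mysql:8.0 --skip-performance-schema
```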

For MySQL 8 in particular, I've found a few additional options tend to help: --skip-log-bin --skip-innodb-adaptive-hash-index --skip-innodb-log-writer-threads

(Those don't help on MariaDB, since it already defaults to disabling the binary log and adaptive hash index; and it doesn't have separate log writer threads.)
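Combining the tmpfs trick with the MySQL 8 flags above gives a command along these lines (again a sketch; image tag and env var are assumed, and --skip-innodb-log-writer-threads requires MySQL 8.0.22+):

```shell
docker run -d --tmpfs /var/lib/mysql \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=1 \
  mysql:8.0 \
  --skip-performance-schema \
  --skip-log-bin \
  --skip-innodb-adaptive-hash-index \
  --skip-innodb-log-writer-threads
```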

A lot of other options may be workload-specific. My product Skeema [1] can optionally use ephemeral containerized databases [2] for testing DDL and linting database objects, so its workload is very DDL-heavy, which means the settings can be tuned quite differently than for a typical DML-based workload.

[1] https://github.com/skeema/skeema/

[2] https://www.skeema.io/docs/options/#workspace




