Thanks a lot for the suggestion. We have used http://pgtune.leopard.in.ua and have appended the resulting config.
The result is that the default config is already very good for our benchmark. There is no visible difference between the old and new config when running the benchmark. We will publish an update to the blog post and show the numbers using the tuned config.
best Frank
DB Version: 10
OS: Linux
Type: Mixed type of Applications
RAM: 122 GB
Connections: 25
Storage: SSD

=>

max_connections = 25
shared_buffers = 31232MB
effective_cache_size = 93696MB
work_mem = 639631kB
maintenance_work_mem = 2GB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
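If you want to try the same settings without editing postgresql.conf by hand, here is a rough sketch (assuming superuser access via psql; the values are simply taken from the pgtune output above):

# settings that only need a reload
psql -U postgres -c "ALTER SYSTEM SET random_page_cost = 1.1;"
psql -U postgres -c "ALTER SYSTEM SET effective_cache_size = '93696MB';"
psql -U postgres -c "ALTER SYSTEM SET checkpoint_completion_target = 0.9;"
psql -U postgres -c "SELECT pg_reload_conf();"
# shared_buffers, max_connections and wal_buffers only take effect after a server restart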
Hi, it's Frank from ArangoDB. We have now included the starter in the package, so it is now possible to start a cluster with a single command. For example, to start a test cluster on a single machine, "arangodb --starter.local" is all you need to type. Starting on 3 machines requires "arangodb" on the first machine and "arangodb --dataDir=./dbX --join serverX1" on the others.
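To spell out the three-machine case, a rough sketch using the flags mentioned above (the hostname "hostA" and the data directories are just placeholders):

# on the first machine (reachable as, say, hostA)
arangodb
# on the second machine
arangodb --dataDir=./db --join hostA
# on the third machine
arangodb --dataDir=./db --join hostA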
Thanks Frank. I wonder if a web frontend is in order, akin to Couchbase's setup process. Ultimately, CLI tools are of course the king of the Ops world when maintaining clusters, but a simple web app can go a long way toward attracting new users. First impressions matter. My 2c.
We are closely monitoring the development. Since ArangoDB is not a Java database, we cannot run Java queries natively. TinkerPop 3 has added some hooks for non-Java languages, but it would still be very hard to reach the speed of AQL. I hope that TinkerPop will open up to non-Java even more.
Hi, I'm Frank from ArangoDB. You should create a number of shards that is much higher than your initial number of servers. ArangoDB can cope with multiple shards per server. This way, you can easily redistribute shards when adding new servers.
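As a rough sketch (collection name, shard count, and coordinator address are only examples, and authentication is omitted), creating a collection with more shards than servers via the HTTP API looks like this:

# 24 shards on e.g. 3 servers leaves room to rebalance onto new servers later
curl -X POST http://coordinator:8529/_db/_system/_api/collection \
     --data '{"name": "events", "numberOfShards": 24}'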
Thank you for the answer. I'm currently dealing with an inflow of about one million documents a day; it's an ever-growing collection (it grows by a few TB each year). Should I just configure it with 1000 shards, or would it perform better with fewer shards?
We are evaluating various possibilities for implementing streaming queries in an efficient and scalable way. For instance, are restrictions on general AQL necessary for such queries to scale? Stay tuned.
That is one of the reasons why I renamed it linenoise-ng. The interface is compatible with the original linenoise, but it works on Linux, Mac, and Windows and supports UTF-8.
(The original authors should have made the header comment "against the idea that a line editing lib needs to be 20,000 lines of C code, or any quantity of C++".)