FWIW, kdb+ is not as extremely performant as its reputation suggests - plenty of things could be faster, and it has enough limitations that you'd often be better off not using a DB at all (or using another DB and just pulling everything you might need into memory). There is/was a tradeoff: many of the things that would make it faster would require more code, and a cool thing about q/kdb+ is that it takes so little code that you don't run into instruction-cache (I$) issues - but I think that tradeoff doesn't make as much sense anymore in 2023.
What it's really great for is how neatly it's integrated into the q language, which is great for exploratory programming, and it's fast enough not to get in the way.
I've encountered this idea before - that k's terseness somehow improves instruction cache use. Can you explain further? It seems nonsensical, since instruction caching is about machine code, not source code. Why should it use the instruction cache better than any other JIT? Or is it interpreted, in which case "the terseness of the language improves cache use" sounds more like an admission than a boast... :-)
Thanks for the insights. Not to overdo the self-promotion, but aside from learning, the main reason I made KlongPy was to allow for optionality with the Python ecosystem: use Klong for array operations and other libraries for the standard stuff.
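To make that concrete, here's a minimal sketch of what that mix looks like. It assumes KlongPy's `KlongInterpreter` entry point and the item-assignment/callable style shown in its README; treat the exact API and the Klong one-liner as assumptions rather than a definitive reference.

```python
# Sketch: Klong for the array-oriented bit, ordinary Python for everything else.
# Assumes `pip install klongpy` and the KlongInterpreter API from the README.
import numpy as np
from klongpy import KlongInterpreter

klong = KlongInterpreter()

# Bind a NumPy array into the Klong workspace and run a Klong expression on it.
prices = np.array([101.0, 102.5, 99.8, 103.2])
klong["p"] = prices          # assumed: item assignment binds a Python value as a Klong variable
avg = klong("(+/p)%#p")      # Klong: sum of p divided by its length, i.e. the mean

# Back in plain Python for the "standard stuff".
print(f"average price: {avg:.2f}")
```

The idea being that, since KlongPy is built on NumPy, values should pass between the two sides without any serialization step in the middle.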