VSCode (or more specifically, the VSCode Go extension) can't handle a monorepo of golang microservices. My CPU/fans go crazy opening the darn thing. With the extension disabled, VSCode is fine, but it lacks all of the "necessary" features offered in the extension.
I wonder if these performance issues apply to all language extensions that rely on a language server (in VSCode or any other editor). From what I understand, since JSON [0] is used over the wire between the editor and the language server process, there's a lot of serialisation/deserialisation overhead.
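For context, the wire format itself is simple: each message is a JSON-RPC payload prefixed with an HTTP-style Content-Length header. A minimal Go sketch of the framing (writeMessage is a made-up helper name, not from any real client library):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "os"
    )

    // writeMessage frames a JSON-RPC payload the way the LSP spec requires:
    // a Content-Length header, a blank line, then the JSON body.
    func writeMessage(w io.Writer, payload any) error {
        body, err := json.Marshal(payload)
        if err != nil {
            return err
        }
        if _, err := fmt.Fprintf(w, "Content-Length: %d\r\n\r\n", len(body)); err != nil {
            return err
        }
        _, err = w.Write(body)
        return err
    }

    func main() {
        // A textDocument/definition request, as the editor would send it.
        writeMessage(os.Stdout, map[string]any{
            "jsonrpc": "2.0",
            "id":      1,
            "method":  "textDocument/definition",
            "params": map[string]any{
                "textDocument": map[string]any{"uri": "file:///project/main.go"},
                "position":     map[string]any{"line": 10, "character": 4},
            },
        })
    }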
Microsoft used to maintain a log inspector [1] which you could use to see the chatter between the server and client, and there was a _lot_ of chatter with a _lot_ of JSON.
LSP stuff is done asynchronously in VS Code (and other editors). It doesn't slow down the main editing UI at all. It just means that when you start typing and expect to see linter warnings, autocomplete suggestions, etc., they can take a bit longer to appear if the LSP server is slow to respond.
If someone's CPU/fans are spinning up when opening a big monorepo, it's probably just aggressive indexing inside the LSP server they're using. Almost certainly it's an optimization trade-off made by the LSP server author, who is balancing for the common case: people typically open smaller repos and expect search, etc. to be fast and ready immediately.
There's nothing inherent to the LSP protocol or design that causes this problem. Someone could build an LSP server designed for large monorepos by deferring indexing, etc. until absolutely needed, as sketched below (with some trade-offs, like search inside a subunit being delayed until it's first used). It's the same basic problem git faced as enormous repos slowed down over time, and all the hacks/workarounds people have bolted on try to limit how much of the monorepo state needs to be available at any moment.
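A hypothetical sketch of that deferral in Go (lazyIndexer, moduleIndex, etc. are made-up names, not taken from gopls or any real server): each module's index is built the first time something touches it, rather than at startup.

    package main

    import (
        "fmt"
        "sync"
    )

    // moduleIndex holds the symbols for one module; its once guard ensures
    // the expensive scan runs a single time, on first use.
    type moduleIndex struct {
        once    sync.Once
        symbols map[string][]string // symbol name -> definition locations
    }

    // lazyIndexer maps module paths to indexes that are built on demand
    // instead of eagerly walking the whole monorepo at startup.
    type lazyIndexer struct {
        mu      sync.Mutex
        modules map[string]*moduleIndex
    }

    // indexFor returns the index for a module, building it the first time
    // any request (go-to-definition, search, etc.) touches that module.
    func (l *lazyIndexer) indexFor(modulePath string) *moduleIndex {
        l.mu.Lock()
        m, ok := l.modules[modulePath]
        if !ok {
            m = &moduleIndex{}
            l.modules[modulePath] = m
        }
        l.mu.Unlock()

        m.once.Do(func() {
            // Stand-in for the real work: parsing files, resolving symbols.
            m.symbols = map[string][]string{}
        })
        return m
    }

    func main() {
        idx := &lazyIndexer{modules: map[string]*moduleIndex{}}
        // Only this module pays the indexing cost; the rest of the repo is untouched.
        fmt.Println(len(idx.indexFor("services/user").symbols))
    }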
> There's nothing inherent to the LSP protocol or design that causes this problem
Even though most of the indexing and intellisense work happens inside the language server itself, there'd still be significant overhead in JSON parsing and serialisation, right?
The response example for the `textDocument/definition` request [0] is a surprisingly large chunk of JSON for the information it conveys. Per the spec, the result is a Location: a document URI plus a start/end range. Something along these lines (the file path here is made up):
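    {
      "jsonrpc": "2.0",
      "id": 1,
      "result": {
        "uri": "file:///project/services/user/handler.go",
        "range": {
          "start": { "line": 41, "character": 5 },
          "end": { "line": 41, "character": 17 }
        }
      }
    }

All of that just to say "the definition is at handler.go, line 42" (LSP line/character positions are zero-based).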
One of the benefits of the JSON standard being pretty simple is that JSON is pretty efficient to parse. We're not talking Protobuf-efficient here, but I've easily parsed files containing gigabytes of JSON.
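For what it's worth, that only stays cheap if you stream the input rather than slurp it into memory. A small Go sketch (countRecords is a made-up example, and it assumes the file is a sequence of concatenated JSON objects):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "io"
        "os"
    )

    // countRecords stream-decodes a file containing a sequence of JSON
    // objects, so memory use stays flat no matter how big the file is.
    func countRecords(path string) (int, error) {
        f, err := os.Open(path)
        if err != nil {
            return 0, err
        }
        defer f.Close()

        dec := json.NewDecoder(bufio.NewReader(f))
        n := 0
        for {
            var rec map[string]any
            if err := dec.Decode(&rec); err == io.EOF {
                break
            } else if err != nil {
                return n, err
            }
            n++
        }
        return n, nil
    }

    func main() {
        n, err := countRecords("big.json")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(n, "records")
    }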
That's true, and JSON is probably the best choice as a lowest common denominator given the protocol's intent to be language agnostic. You'd want to implement a language server for language X in X itself, and most languages have mature libraries for working with JSON.
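In Go, for instance, the Location shape from the spec maps onto plain structs with encoding/json tags, and every mainstream language has an equivalent (these type definitions are mine, not pulled from gopls):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Position, Range, and Location mirror the shapes defined in the LSP
    // spec; Line and Character are zero-based.
    type Position struct {
        Line      int `json:"line"`
        Character int `json:"character"`
    }

    type Range struct {
        Start Position `json:"start"`
        End   Position `json:"end"`
    }

    type Location struct {
        URI   string `json:"uri"`
        Range Range  `json:"range"`
    }

    func main() {
        loc := Location{
            URI:   "file:///project/services/user/handler.go",
            Range: Range{Start: Position{41, 5}, End: Position{41, 17}},
        }
        out, _ := json.Marshal(loc)
        fmt.Println(string(out)) // one line of JSON, ready to go over the wire
    }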