> getting them to help me fix distributed consensus problems in my fully bespoke codebase makes them look worse than useless.
Often the complex context of such a problem is clearer in your head than anything you can write down. No wonder the LLM cannot solve it: it doesn't have the right information about the problem. But if you then suggest to it, say, that it might have to do with this or that race condition because service A does not know the end time of service Z, it can often come up with different search strategies to track that down.