I would argue the opposite: a good therapist isn't just offering back-and-forth conversation; they bring knowledge, experience, and insight about the client, built up over their interactions. A good therapist recognizes when one approach isn't working and can shift to a different one; they're also self-reflective, highly aware of how they're influencing the situation, and they try to apply that awareness intelligently. All of this requires reflective and improvisational reasoning that LLMs famously can't do.
Put another way, a good therapist is professionally trained and consciously monitoring whether or not they're misleading you. An LLM has no executive function acting as a check on its input/output cycle.
Not in the least. LLMs don't introspect. They have no sense of self. There is no secondary process in an LLM monitoring the output and checking it against anything else. That's why they hallucinate: a complete lack of self-awareness. All they can do is sound convincing, because the text they were trained on is mostly coherent.
How does an LLM look at a heptagon and confidently say it's an octagon? Because the two shapes are visually similar, and octagons are far more common in training data (and labeled as such) while heptagons are rare. What the model doesn't do is count the sides, something a kindergartner can do.
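To be concrete: the hard part isn't the counting, it's that the model never counts. Given any symbolic representation of the shape, the answer is trivial. A toy Python sketch (not how a vision model actually works), building a hypothetical regular heptagon from its vertices:

```python
import math

# Vertices of a regular heptagon on the unit circle.
heptagon = [(math.cos(2 * math.pi * k / 7), math.sin(2 * math.pi * k / 7))
            for k in range(7)]

# The side count is just the vertex count. No pattern matching involved.
print(len(heptagon))  # 7
```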
If I were working in AI, I would be focusing on exactly this problem: finding the "right-sounding" answer handles a lot of cases well enough, but it fails embarrassingly in domains where other cognitive processes exist that are guaranteed to produce correct results when applied correctly. Anyone asking ChatGPT a math question should get back a correctly calculated answer, and the way to deliver that isn't to massage the training data; it's to dispatch the prompt to a different subsystem that can parse the request and return the result a calculator would.
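A minimal sketch of that dispatch idea, with the model call stubbed out (`llm_generate` is a placeholder here, not any real API): if the prompt parses as pure arithmetic, a deterministic evaluator answers; anything else falls through to the model.

```python
import ast
import operator

# Map AST operator types to real arithmetic.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def evaluate(node):
    """Recursively evaluate a parsed arithmetic expression tree."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.operand))
    raise ValueError("not pure arithmetic")

def llm_generate(prompt: str) -> str:
    """Stand-in for an actual model call."""
    return f"(model's best guess for {prompt!r})"

def answer(prompt: str) -> str:
    """Dispatcher: try the calculator first, fall back to the model."""
    try:
        return str(evaluate(ast.parse(prompt, mode="eval")))
    except (SyntaxError, ValueError):
        return llm_generate(prompt)

print(answer("3 * (17 + 4)"))          # 63, computed rather than predicted
print(answer("why is the sky blue?"))  # falls through to the model
```

Tool-calling setups do something in this spirit today, though usually the model itself decides when to call out, which puts the routing decision back inside the thing that can't check itself.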
It's similar to using LLMs for law: they hallucinate cases and precedents that don't exist because they're not checking against Nexis, they're just sounding good. The next problem in AI is the layer of executive function that routes each input to the correct part of the system.
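What such a check might look like, sketched with a toy in-memory index standing in for a real legal database (nothing here is an actual Nexis API): the executive layer refuses to emit a draft whose citations it can't verify.

```python
# Toy index of known cases; a real system would query an actual database.
KNOWN_CASES = {
    "Marbury v. Madison",
    "Brown v. Board of Education",
}

def vet(draft: str, cited_cases: list[str]) -> str:
    """Block any draft that cites a case the index can't confirm."""
    unverified = [c for c in cited_cases if c not in KNOWN_CASES]
    if unverified:
        return f"Draft withheld: could not verify {unverified}"
    return draft

print(vet("...", ["Brown v. Board of Education"]))        # passes through
print(vet("...", ["Smith v. Imaginary Holdings, 1987"]))  # gets blocked
```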