This isn't really a problem in tool-assisted LLMs.
Use Google AI Studio with search grounding. It provides correct links and citations every time. Other companies have similar search modes, but you have to enable those settings if you want good results.
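For anyone who wants the same behavior outside the AI Studio UI, the Gemini API exposes search grounding as a tool you attach to the request. Rough sketch below, assuming the google-genai Python SDK; the model name and the exact metadata fields are best-effort from memory, so check the current docs:

```python
# Sketch: enabling Google Search grounding via the Gemini API.
# Assumes the google-genai Python SDK (pip install google-genai); the model
# name and the grounding-metadata field names here are illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",  # any grounding-capable model
    contents="Who won the most recent F1 constructors' championship? Cite sources.",
    config=types.GenerateContentConfig(
        # Attach the Google Search tool so the answer is grounded in live results
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)

print(response.text)

# Source links ride along in the grounding metadata of the first candidate
metadata = response.candidates[0].grounding_metadata
if metadata and metadata.grounding_chunks:
    for chunk in metadata.grounding_chunks:
        print(chunk.web.uri, chunk.web.title)
```

The point is that grounding is opt-in: you have to pass the search tool (or flip the equivalent toggle in the UI) or you get the ungrounded model.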
How would that ever work? The only thing you can do is keep refining high-quality data sets to train on. The hallucination rate only trends downward on the high-end models as they improve in various ways.