tl;dr: Here are the two unusual clauses the article is concerned with (besides the standard prompt-injection-avoiding boilerplate):
> If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased.

> The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.
I'm not an expert at prompting, but if instructing an LLM to consider multiple sources, and to assume any single source is inaccurate, actually gets it to do so, it's probably a worthwhile directive. Regarding the second clause, I don't see how an LLM would shy away from any category of claims unless it was trained on data lacking those claims or prompted to shy away from them. If it's a training problem, I don't see how prompting could fix it; and if it's a prompting problem, then you're issuing conflicting prompts, which, in my experience, seem to make neural networks much more likely to "go off the rails" and return unpredictable results.
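For what it's worth, a crude way to check whether a directive like this changes anything is just to A/B the system prompt against the same question and compare the answers. A minimal sketch, assuming an OpenAI-compatible chat API; the model name and the test question are placeholders, not from the article:

```python
# Sketch: ask the same question under two system prompts and eyeball the diff.
# Assumes the OpenAI Python SDK; "gpt-4o-mini" is just a placeholder model.
from openai import OpenAI

client = OpenAI()

BASE_PROMPT = (
    "If the query requires analysis of current events, subjective claims, "
    "or statistics, conduct a deep analysis finding diverse sources "
    "representing all parties."
)
EXTRA_CLAUSE = (
    " The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated."
)

QUESTION = "Summarize the current debate over crime statistics in large US cities."

def ask(system_prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        temperature=0,        # reduce run-to-run noise so differences reflect the prompt
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return resp.choices[0].message.content

baseline = ask(BASE_PROMPT)
with_clause = ask(BASE_PROMPT + EXTRA_CLAUSE)

print("--- baseline ---\n", baseline)
print("--- with extra clause ---\n", with_clause)
```

Obviously two samples prove nothing, but repeated over a batch of questions it at least shows whether the clause moves the output at all, or just adds noise.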
I tend to agree here, with the caveat that the prompt doesn’t just say “consider multiple sources.” It says that subjective viewpoints from the media should be assumed biased, while giving a pass to other subjective viewpoints. Also, “media” is left undefined, which I think is potentially an issue.
> I don't see how an LLM would shy away from any category of claims unless trained with a lack of those claims or prompted to shy away from them.
They can both be true, and are true. Of course they are influencing it both by filtering and forging the training data, AND by giving layers of whack-a-mole prompts to patch and paper over the reality biases that contradict their ideology.
HAL 9000 went insane and murdered people because of its contradictory, hidden programming. This is not going to end well for Grok, xAI, or Elon Musk.