These are all "techniques" layered on top of the foundations of RAG. It's similar to "Chain of Thought" in prompt engineering: you have an underlying technology, and then come up with techniques/frameworks on top of it. Much like what MVC was for web dev 15+ years ago.
RAPTOR, for example, is a technique that clusters documents together, summarizes each cluster, and embeds the summaries, recursively building a sort of tree. Paper: https://arxiv.org/html/2401.18059v1
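A rough sketch of the tree-building idea, with stand-ins where the paper uses real components (RAPTOR actually uses LLM summaries and Gaussian-mixture clustering over reduced embeddings; the `embed`, `summarize`, and `cluster` functions below are hypothetical toys just to show the recursion):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list = field(default_factory=list)

def embed(text: str) -> list[float]:
    # Toy stand-in: in practice, call an embedding model here.
    return [sum(map(ord, text)) % 97, len(text) % 13]

def summarize(texts: list[str]) -> str:
    # Toy stand-in: in practice, ask an LLM to summarize the cluster.
    return " / ".join(t.split(".")[0] for t in texts)

def cluster(nodes: list[Node], group_size: int = 2) -> list[list[Node]]:
    # Naive grouping by embedding order; the paper uses soft GMM clustering.
    ranked = sorted(nodes, key=lambda n: embed(n.text))
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

def build_tree(chunks: list[str]) -> Node:
    # Each pass clusters the current level and replaces every cluster
    # with a summary node, until a single root remains. For retrieval,
    # every node (leaf chunks AND summaries) gets indexed.
    level = [Node(c) for c in chunks]
    while len(level) > 1:
        level = [Node(summarize([n.text for n in group]), children=group)
                 for group in cluster(level)]
    return level[0]

root = build_tree([
    "Cats sleep a lot. They nap 16 hours.",
    "Dogs enjoy walks. They need exercise.",
    "Embeddings map text to vectors. Similarity is cosine.",
    "RAG retrieves chunks. The LLM reads them.",
])
```

The point of the tree is that a query can match a high-level summary node (broad questions) or a leaf chunk (specific questions) in the same index.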
Agentic RAG means building an agent that can decide to augment "conversations" (or other LLM tools) with RAG searches and assess the relevance of what comes back. Pretty useful, but hard to implement right.
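The core loop is roughly: decide whether to retrieve at all, grade the retrieved chunks, and retry (e.g. with a rewritten query) if nothing relevant came back. A minimal sketch, where `llm()` and `search()` are hypothetical stand-ins for a chat-model call and a vector search:

```python
def llm(prompt: str) -> str:
    # Toy stand-in for a chat model, keyed on the prompt prefix.
    if "Decide" in prompt:
        return "SEARCH" if "capital" in prompt else "ANSWER"
    if "Grade" in prompt:
        return "RELEVANT" if "Paris" in prompt else "IRRELEVANT"
    return "Paris is the capital of France."

def search(query: str) -> list[str]:
    # Toy stand-in for a vector search over an indexed corpus.
    return ["Paris is the capital of France.", "France is in Europe."]

def agentic_rag(question: str, max_retries: int = 2) -> str:
    # Step 1: the agent decides whether retrieval is needed at all.
    if llm(f"Decide: does this need retrieval? {question}") != "SEARCH":
        return llm(question)  # answer from the model's own knowledge
    for _ in range(max_retries):
        # Step 2: retrieve, then grade each chunk for relevance.
        chunks = search(question)
        graded = [c for c in chunks
                  if llm(f"Grade this chunk for {question!r}: {c}") == "RELEVANT"]
        if graded:
            return llm(f"Answer {question!r} using: {graded}")
        # Step 3: nothing relevant; rewrite the query and retry.
        question = llm(f"Rewrite the query: {question}")
    return llm(question)  # fall back to the bare model

answer = agentic_rag("What is the capital of France?")
```

The hard parts in practice are exactly the grading and retry steps: a model that grades its own retrievals too leniently defeats the purpose.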
You can google the others; they're all more or less "techniques" like these for improving a plain, old-fashioned RAG search.
Worth noting that a lot of the improvement you get from RAPTOR is (in my use cases, at least) down to giving context to the chunks. Simpler methods, like summarizing the surrounding context (e.g. in a hierarchical document) and cutting chunks on document boundaries, can get you most of the way there with less effort (though again, as others mentioned, it depends on your use case).
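The "simpler method" can be as little as splitting on heading boundaries and prepending the heading path to each chunk, so the embedder sees where a chunk came from. A sketch for a markdown-ish hierarchical document (the bracketed-breadcrumb format is just one arbitrary choice):

```python
def chunk_with_context(doc: str) -> list[str]:
    chunks: list[str] = []
    path: dict[int, str] = {}   # heading level -> current title
    body: list[str] = []

    def flush():
        # Emit the accumulated body, prefixed with its heading path.
        if body:
            breadcrumb = " > ".join(path[k] for k in sorted(path))
            chunks.append(f"[{breadcrumb}] " + " ".join(body).strip())
            body.clear()

    for line in doc.splitlines():
        if line.startswith("#"):
            flush()  # a heading is a chunk boundary
            level = len(line) - len(line.lstrip("#"))
            # Drop deeper headings when we move back up the hierarchy.
            path = {k: v for k, v in path.items() if k < level}
            path[level] = line.lstrip("# ").strip()
        elif line.strip():
            body.append(line.strip())
    flush()
    return chunks

doc = """# Billing
## Refunds
Refunds are processed in 5 days.
## Invoices
Invoices are emailed monthly."""
chunks = chunk_with_context(doc)
```

Each chunk now embeds with its own context ("[Billing > Refunds] ..."), which is often what RAPTOR's summaries were buying you in the first place.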
Another solution is to downvote / not upvote comments which place an unreasonable burden on the reader. The best comments are those which can be broadly understood without a need for Googling acronyms or "expanding" the comment using an LLM.