AGI will likely incorporate LLMs as a significant element. The issue is that we are far from having any good understanding of how and why LLMs achieve what they achieve, and unless that changes, the same will be true of anything built on them and of their future evolution. If we continue to pursue progress by caring only about results while disregarding the fact that we do not really understand the underlying mechanisms, we may very well run into the fatal scenario described by Yudkowsky.