I know. I've trained RWKV myself both ways: in parallel like a transformer and sequentially like an RNN.
Ultimately, being able to train it like a transformer probably doesn't matter much: you can instead train it like an RNN, stepping through many documents in parallel one token at a time, and in my experience that worked just as well, if not better.
Plus, this approach is more general, since it needs no custom kernels, and it also helps the model learn to handle an "infinite" context. If you train it like a transformer, its performance regresses once you evaluate it past the context window it was trained on, at least from what I've seen in my training runs.
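To make the "one token at a time across a batch of documents" idea concrete, here's a minimal sketch. It's not the actual RWKV training code: a GRUCell stands in for the recurrent block, and detaching the state after every token (the simplest form of truncated BPTT) is an assumption of this sketch, not a claim about how RWKV is trained in practice.

```python
import torch
import torch.nn as nn

class TinyRecurrentLM(nn.Module):
    """Toy recurrent language model; nn.GRUCell stands in for an RWKV-style block."""
    def __init__(self, vocab_size=256, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, token, state):
        # token: (batch,) int64 ids, state: (batch, dim) recurrent state
        state = self.cell(self.embed(token), state)
        return self.head(state), state

model = TinyRecurrentLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

batch, dim, steps = 8, 128, 512
tokens = torch.randint(0, 256, (batch, steps + 1))  # placeholder "documents"
state = torch.zeros(batch, dim)

# Step through all documents in parallel, one token per step, carrying the
# recurrent state forward instead of materializing a full attention window.
for t in range(steps):
    logits, state = model(tokens[:, t], state)
    loss = nn.functional.cross_entropy(logits, tokens[:, t + 1])
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = state.detach()  # drop the graph but keep the state for the next token
```

In a real run you'd typically backprop through a longer chunk of tokens before detaching, and reset a sequence's state when its document ends, but the batching pattern is the same: no custom kernels, and no fixed context window baked into training.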