I mean, correct validation logic is always ideal, but I'm positing a world in which software doesn't always get intentional validation logic. In particular, an intermediate server might prevent packets from flowing to the target client for any number of reasons that aren't intended as "validation". It's simply harder to hack a client through an intermediary.
Ok, but I still don't see why you can't move that intermediary to the client. Spin up a docker container and run the game server there. Ta-da! You have the same security as with a remote server.
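To make the "move the intermediary to the client" idea concrete, here's a minimal sketch of what that could look like. Everything here is hypothetical for illustration: the image name, the port, and the assumption that the publisher ships the server binary as a container image at all.

```yaml
# Hypothetical docker-compose.yml: run the game's server binary locally,
# so the client only ever talks to a loopback intermediary.
services:
  game-server:
    image: example/game-server:latest    # hypothetical image name
    ports:
      - "127.0.0.1:7777:7777/udp"        # expose only on loopback to the local client
    restart: unless-stopped
```

The local client would connect to 127.0.0.1:7777, and traffic from remote peers would have to pass through the containerised server first, putting it in the same intermediary position a remote server occupies.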
My point is that IPv6 restoring the end-to-end principle need not jeopardise the (real or perceived) security of multiplayer games.
It’s not clear to me who “you” is meant to refer to in this scenario.
If “you” refers to the user, then it's because the game isn’t architected to have a server running next to each client, if the server binary is even distributed to users at all.
If “you” refers to the game publisher, then it's because they aren’t architecting it that way to begin with; they aren’t thinking of running the server as a security feature.
Moreover, a game developer has incentives to protect its own servers; it has much less incentive to protect its end users. You might argue that end users being hacked is bad for business, but most end users wouldn’t be able to attribute a hack to a particular piece of software or infrastructure, if they even know they’ve been hacked in the first place (consider the rampant insecurity in the consumer router and IoT spaces).