When this happens, it seems like it tends to expose a fault in the program's design. In a purely asynchronous system you can obviously avoid deadlocking in interprocess communication and still end up with a system that never correctly converges.
Sure, a program that uses a channel with a large buffer as if it were an asynchronous channel contains a bad bug. The point is that if you need a truly asynchronous channel, Go doesn't provide one. There are plenty of programs that genuinely need asynchronous channel semantics, and in those programs substituting a channel with a large buffer exposes them to subtle deadlocks that may only manifest in the wild, on large data sets.
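To make that concrete, here's a minimal sketch of the failure mode I have in mind (the buffer sizes and the squaring workload are invented): the code treats buffered channels as if they were unbounded, works fine on small inputs, and deadlocks once the input outgrows the buffers.

```go
package main

import "fmt"

func main() {
	const n = 1000 // harmless when n is small, fatal once it outgrows the buffers

	jobs := make(chan int, 100)
	results := make(chan int, 100)

	// Worker: squares each job and reports the result.
	go func() {
		for j := range jobs {
			results <- j * j
		}
		close(results)
	}()

	// Send every job before reading any result, as if the channels could
	// buffer arbitrarily much. Once results fills, the worker blocks; then
	// jobs fills, and this send blocks forever. With n up to ~200 here the
	// bug never shows up.
	for i := 0; i < n; i++ {
		jobs <- i
	}
	close(jobs)

	for r := range results {
		fmt.Println(r)
	}
}
```

The runtime will at least report "fatal error: all goroutines are asleep - deadlock!" when the whole process wedges, but in a larger program with other live goroutines it just hangs quietly.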
Wouldn't the correct design for programs that occasionally need to handle huge data sets be to consciously and deliberately serialize (or at least bound the concurrency of) some parts of the code, rather than pretending the program is operating on an abstraction that can buffer arbitrary amounts of data in parallel and always converge properly?
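Something like this, say (the limit of 8 and the per-item work are just placeholders): a semaphore channel makes the bound explicit, and the producer blocks as backpressure instead of assuming unbounded buffering.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const maxInFlight = 8 // deliberate, explicit bound on concurrency
	sem := make(chan struct{}, maxInFlight)
	var wg sync.WaitGroup

	for item := 0; item < 100; item++ {
		sem <- struct{}{} // blocks once maxInFlight items are being processed
		wg.Add(1)
		go func(item int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			fmt.Println("processing", item)
		}(item)
	}
	wg.Wait()
}
```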
Yes. You can always build the right thing in a system with synchronous channels—there are many ways to build asynchronous channels out of synchronous channels, after all, even if they aren't built into the language. My point is that asynchronous channels make it easier to avoid accidentally shooting yourself in the foot, that's all.
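For example, one common sketch (the name unbounded and the API shape are mine, not anything built into the language or standard library; it needs Go 1.18+ for generics): a goroutine parks pending values in a slice, so sends never wait on the consumer, and memory use is unbounded by design.

```go
package main

import "fmt"

// unbounded returns the two ends of an asynchronous channel built from
// ordinary Go channels: sends on in never wait for the consumer, because
// a goroutine buffers pending values in a slice until they're wanted.
func unbounded[T any]() (chan<- T, <-chan T) {
	in := make(chan T)
	out := make(chan T)
	go func() {
		var pending []T
		for {
			if len(pending) == 0 {
				// Nothing buffered: just wait for input.
				v, ok := <-in
				if !ok {
					close(out)
					return
				}
				pending = append(pending, v)
				continue
			}
			select {
			case v, ok := <-in:
				if !ok {
					// Sender is done: drain the buffer, then close.
					for _, p := range pending {
						out <- p
					}
					close(out)
					return
				}
				pending = append(pending, v)
			case out <- pending[0]:
				pending = pending[1:]
			}
		}
	}()
	return in, out
}

func main() {
	in, out := unbounded[int]()
	for i := 0; i < 5; i++ {
		in <- i // never waits on the consumer of out
	}
	close(in)
	for v := range out {
		fmt.Println(v)
	}
}
```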
For what it's worth, I don't think that Go made a bad decision here (although I personally wouldn't have made the same decision, because of examples like those I gave in my other reply downthread). Certainly synchronous channels are faster. There are always tradeoffs.
If you want async behavior, you could really just keep consuming from the channel with goroutines until your hardware catches on fire. There's nothing in Go that limits this.
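For instance (the printing stands in for real per-message work): spawn a goroutine per received value, so the receive loop, and therefore the sender, never waits on processing; the number of goroutines in flight is bounded only by memory.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	ch := make(chan int)
	done := make(chan struct{})
	var wg sync.WaitGroup

	// Consumer: one goroutine per message, so the receive loop never falls
	// behind and the sender never waits on processing. Goroutines in flight
	// are bounded only by memory (the "hardware catches fire" failure mode).
	go func() {
		for v := range ch {
			wg.Add(1)
			go func(v int) {
				defer wg.Done()
				fmt.Println("handled", v)
			}(v)
		}
		wg.Wait() // every message has been handed off by the time range ends
		close(done)
	}()

	for i := 0; i < 10; i++ {
		ch <- i // returns as soon as the loop above picks the value up
	}
	close(ch)
	<-done
}
```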