That's like saying for CPUs, "Who needs to code in assembly for performance when Java and its threading model are available?" They're two very different levels of abstraction and capability. The best results out of FPGAs come from hardware designers using hardware tools that map to the lowest levels.
Sure, but I don't see the need for _an open bitstream_ to enable general purpose computing on FPGAs. The best argument I see is that tooling could get better.
Ignoring security or subversion concerns, there are still plenty of reasons to want access to the bitstream. The brilliant work below gives several use cases in its opening section along with a clever workaround:
Back to my analogy, the current situation is like being able to produce assembly or binaries but having no tools to work with them. You can't debug them, modify them, hot-load them, JIT-compile, or restructure the app... any improvement to your software that requires access to the assembly or the binary in RAM is off the table. Quite limiting, eh, especially given the benefits of JIT?
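To make the CPU side concrete, here's a minimal sketch of what "access to the binary in RAM" buys you: a toy JIT that writes a few bytes of x86-64 machine code into an executable page and calls it. This assumes Linux and the System V AMD64 calling convention; a real JIT would generate the code from some higher-level representation rather than hard-coding it.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: int f(int x) { return x + 1; }
     *   lea eax, [rdi+1]   ; 8D 47 01
     *   ret                ; C3
     * (argument in rdi, result in eax, per the System V AMD64 ABI) */
    unsigned char code[] = { 0x8D, 0x47, 0x01, 0xC3 };

    /* Allocate a writable page, copy the code in, then flip it to executable. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    /* Call the freshly generated code. */
    int (*fn)(int) = (int (*)(int))buf;
    printf("fn(41) = %d\n", fn(41));   /* prints 42 */

    munmap(buf, 4096);
    return 0;
}
```

None of this is possible unless the binary format and the machine encoding are open and the runtime lets you touch them.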
The situation is similar for FPGAs: open bitstreams would let them be manipulated at runtime in ways that improve performance, flexibility, and so on. Instead, we get closed bitstreams generated all at once by inefficient tools. That imposes artificial limits, especially in embedded or dynamic scenarios. Opening the bitstreams up opens the full power of academic and OSS innovation to those areas. Many of the best innovations in ASICs and FPGAs came out of academia, so we want the rest opened up for improvement.
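As a purely hypothetical sketch of what that enables: with a documented bitstream format you could patch, say, a block RAM's initialization contents directly in the bitstream file instead of re-running the entire synthesis and place-and-route flow. The offset and size below are made up for illustration; a real tool would parse the device's documented frame layout rather than hard-coding anything.

```c
#include <stdio.h>

/* Hypothetical layout: assume the (open) bitstream format documents that a
 * particular BRAM's init data starts at this byte offset. Illustrative only. */
#define BRAM_INIT_OFFSET 0x4200
#define BRAM_INIT_BYTES  512

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <bitstream> <new_init_data>\n", argv[0]);
        return 1;
    }

    FILE *bs = fopen(argv[1], "r+b");   /* bitstream, patched in place */
    FILE *init = fopen(argv[2], "rb");  /* new BRAM contents */
    if (!bs || !init) { perror("fopen"); return 1; }

    unsigned char buf[BRAM_INIT_BYTES];
    if (fread(buf, 1, sizeof buf, init) != sizeof buf) {
        fprintf(stderr, "init data must be %d bytes\n", BRAM_INIT_BYTES);
        return 1;
    }

    /* Overwrite only the BRAM init region; the rest of the design is
     * untouched, so no re-synthesis or re-place-and-route is needed. */
    if (fseek(bs, BRAM_INIT_OFFSET, SEEK_SET) != 0) { perror("fseek"); return 1; }
    fwrite(buf, 1, sizeof buf, bs);

    fclose(init);
    fclose(bs);
    return 0;
}
```

Trivial on an open format, impossible (or a reverse-engineering project) on a closed one.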