Because (static) FP is built on a sound and rich theory (type theory and the lambda calculus), unlike other language paradigms (e.g. OO). There are also two different research approaches, IMHO:
FPL research: Ok, we have this cool way to describe our computation. But how can we efficiently map this to the HW? In essence build a nice language and find an implementation.
Imperative PL research: Ok, we have this HW, how can we build an expressive PL on top of it? In essence build an implementation (the HW) and find a language.
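The contrast between the two schools can be sketched in a few lines of Go (function names here are my own, just for illustration): the imperative version spells out the machine steps (a mutable accumulator updated in a loop, close to how the hardware executes), while the functional version only describes *what* the result is (a fold) and leaves the evaluation strategy to the implementation.

```go
package main

import "fmt"

// Imperative school: start from the machine model.
// A mutable accumulator updated step by step in a loop.
func sumImperative(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// Functional school: start from the mathematical description.
// The sum is "fold (+) over the list"; recursion replaces mutation,
// and mapping this efficiently onto the HW is the compiler's problem.
func foldInts(xs []int, init int, f func(int, int) int) int {
	if len(xs) == 0 {
		return init
	}
	return foldInts(xs[1:], f(init, xs[0]), f)
}

func main() {
	xs := []int{1, 2, 3, 4}
	fmt.Println(sumImperative(xs)) // 10
	fmt.Println(foldInts(xs, 0, func(a, b int) int { return a + b })) // 10
}
```

Both compute the same thing; the difference is which side (the language or the hardware) is treated as fixed.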
Microprocessors evolved while chasing minicomputer features, which in turn were shaped by C/UNIX sorts of ideas, which expanded and fed back. That was a very successful co-evolution. COTS supercomputers beat specialized hardware and specialized forms of parallelism not because they were abstractly better, but because markets and economies of scale drive whatever is popular to be the best performing and cheapest over time. (Down to $30 UNIX systems on a board.)
The two language schools you describe seem to be asking whether it is time (whether we have enough capacity) to simply float a new, more mathematical world view on top of it all, or whether we should continue to play to the strengths of the commodity hardware stack.
I personally think that something like Go works for my mind, and the hardware. (I will go make coffee now, rather than "define coffee" and wait for it to appear ;-)