Data scientists arguably have too much choice. Ten data scientists will end up with fifty different tools, unable to share work, build on one another's experiments, or even remember what an experiment's results were. Those are some of the reasons why most data science projects fail; that, and integrations. Standardization has real benefits.
Of course standardization has benefits, but how do you choose? Standardization only works once the field of options has been narrowed, so an abundance of choice is itself a barrier to achieving it.
It often just comes down to project requirements.
E.g., what kind of model is required? How hard would it be to build with tool X?
For example, a big reason why a lot of computer vision research was built on Caffe (and to some extent still is, out of momentum) was its pre-existing model zoo.
A big reason why people choose TF (despite its lack of dynamic graphs) is simply the existing community.
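For anyone unfamiliar with the distinction, here's a minimal sketch of static vs. dynamic graphs, assuming TF 1.x-era APIs on one side and PyTorch on the other:

```python
import tensorflow as tf  # assumes a TF 1.x-era install, where graphs are static
import torch             # PyTorch, where graphs are built dynamically

# Static graph (TF 1.x): declare the computation first, then execute it
# inside a session. The graph is fixed before any data flows through it.
x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(x * 2.0)
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

# Dynamic graph (PyTorch): operations execute immediately, so ordinary
# Python control flow (ifs, loops) can shape the graph per example.
t = torch.tensor([[1.0, 2.0, 3.0]])
print((t * 2.0).sum().item())
```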
Requirements for both research papers and industry will continue to evolve, and each framework will have its own trade-offs.
There are trade-offs to choice. In the case of another commenter, "too much choice" means a ton of churn and a lot of friction when it comes to building models.
I think there's always a trade-off between innovation and stability that people should be thinking about here.
Granted, things like standardized model formats should help long term, but for now we're going to be dealing with a ton of API churn.
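As a concrete illustration of what a shared model format buys you, a rough sketch; note the comment doesn't name a format, so the use of ONNX here is my assumption:

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for whatever was actually trained.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
dummy_input = torch.randn(1, 8)

# ONNX used here as an example interchange format (my assumption, not the
# comment's). Exporting decouples the saved artifact from any one framework's
# API, so downstream runtimes can keep loading it even as the APIs churn.
torch.onnx.export(model, dummy_input, "model.onnx")
```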
I'm sure another thing like dynamic graphs will come along and we'll need to update the APIs again.
I suspect Keras will respond to this at some point by adding primitives for eager mode and the like.
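For what it's worth, eager-style Keras usage would presumably look something like this, sketched against how tf.keras layers behave under TF 2.x, where eager execution is the default:

```python
import tensorflow as tf  # assumes TF 2.x, where eager execution is on by default

# Layers are callable like plain functions on concrete tensors, so you can
# inspect intermediate values immediately instead of building a graph first.
dense = tf.keras.layers.Dense(4, activation="relu")
x = tf.random.normal([2, 8])
h = dense(x)        # executes right away; no placeholders or sessions
print(h.numpy())    # concrete values are available immediately
```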
I know data scientists who need more advanced models, and others who prefer the Keras API and just build off-the-shelf models.
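The off-the-shelf workflow really is only a few lines; a minimal sketch using one of the pre-trained models that ships with tf.keras (the random array stands in for a real image):

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions

# Pull down a pre-trained ImageNet classifier in one line.
model = ResNet50(weights="imagenet")

# A random array stands in for a real 224x224 RGB image here.
img = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
preds = model.predict(preprocess_input(img))
print(decode_predictions(preds, top=3)[0])
```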