It's a form of technical debt for the research culture. It's also a prisoner's dilemma: putting in the extra effort to be different just puts you at a competitive disadvantage, because the system doesn't offer any rewards for going beyond the bare minimum. (Heck, you open yourself up to liability by publishing your code -- people misuse or misunderstand it and blame their poor results on you, people expect you to support it, people find bugs and it reflects poorly on you...) The system selects for people who either don't see the issue or aren't bothered enough by it to insist on behaving differently.
It's not just a computer science problem btw -- it encompasses physics too.
Coupled with a lot of academics viewing coding as an unfortunate requirement of their craft. I can probably count on one hand the folks I've worked with who know what a linter even is, or who keep even minimally abreast of new versions of the language they work in every day.
Not even a requirement -- some of the elderly scientists I've worked with get by on spreadsheets! I did a tutorial on Git for my lab group in grad school -- there was some code under SVN at the time, but most folks didn't really use version control at all. (In one nerve-wracking instance, I had to revert some months-old changes to our instrument control code with only my memory to go on, because a bug I'd introduced was holding up someone else's experiment.)
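For what it's worth, had that code been under Git, the same fix would have been a couple of commands rather than an exercise in recall -- something like the sketch below (the file name is hypothetical):

    # find the offending months-old commit that touched the control code
    git log --oneline -- instrument_control.py
    # undo just that commit, without rewriting the rest of the history
    git revert <commit-hash>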
Most people in my field use either IDL or Python (if they're doing data analysis) or C/Fortran (for heavy-duty simulations). Some also know Matlab or Mathematica. I've been trying to convert people to Julia -- I think it'll eventually supplant Python for data analysis, and maybe even C/Fortran for simulations.