There's a lot of Fortran code in underlying math libraries that is highly optimized, including the Fortran compilers themselves (mainly due to their age and the demand to eke out performance).
I worked with an old Fortran codebase at one point, and the documentation (a scan of a typewritten document) was full of comments about switching "cards" and "decks"... it took me a moment to realize it was referring to punch cards (and I thought I was old). That also explained why the program was fragmented into several smaller sub-programs (so the card reader could handle them), something that's trivial to deal with now. Maybe they were just ahead of the SOA and microservices trend.
In academia, the pressure is on publishing and pulling in funding through grants and contracts. I've done a lot of rapid prototyping in academic research environments, and while writing clean software is always on my mind, sitting down to refactor for efficiency, or to focus on structure and long-term maintainability, often isn't a priority. It redirects cognitive load away from the high-level research goal the software needs to achieve and toward producing production-quality software.
I'm not concerned whether it takes O(2n) vs O(n) time, or O(n log n) vs O(n), if I know the target scale is small. I'm not concerned that I could cleverly avoid an extra data structure (and reduce space complexity) by doing the operation in place with some reasonably complex algorithm. Chances are I'll remove this functionality entirely tomorrow, or some student will have to figure it out later, and I don't want to implement (or explain to that student) the Boyer-Moore majority vote algorithm when a brute-force O(n^2) approach is just fine here and far easier for a passing scientist or student to adjust and maintain.
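To make that trade-off concrete, here's a rough sketch in Python (purely illustrative; the find_majority_* names are made up for this example). The brute-force version is quadratic but reads like the problem statement; Boyer-Moore is linear with constant extra space but needs a paragraph of explanation.

    # Brute force: for each candidate, count its occurrences.
    # O(n^2) time, but obvious to anyone reading it.
    def find_majority_brute_force(items):
        for candidate in items:
            if sum(1 for x in items if x == candidate) > len(items) // 2:
                return candidate
        return None

    # Boyer-Moore majority vote: O(n) time, O(1) extra space, but the
    # cancellation trick is not self-explanatory to a newcomer.
    def find_majority_boyer_moore(items):
        candidate, count = None, 0
        for x in items:
            if count == 0:
                candidate = x
            count += 1 if x == candidate else -1
        # Second pass verifies the candidate really is a majority.
        if items and sum(1 for x in items if x == candidate) > len(items) // 2:
            return candidate
        return None

At small scale both finish instantly, so the readable one wins.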
I'm aware there are a lot of problems, and maybe my abstraction hierarchies aren't the best; I could probably make something better with more time.
You have some complex high-level process you're trying to represent and translate into a program (maybe a simulation, maybe a complex model or set of models, etc.). You're not always concerned about whether there's a better way to write it, or about making extensive use of all the features of whatever language you have to work in (which you may or may not have experience with, since time is tight and you had to start from existing codebases). You simply want whatever requires the least time and cognitive load to produce results, so you can keep your eyes on the target of what you're developing.
Later on, when the prototypes work (or when you hit performance bottlenecks that block progress), then and only then do you start refactoring and looking at performance optimization--targeting the biggest bottlenecks first.
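When that time comes, a quick pass with a profiler is usually enough to see where the time actually goes. A minimal sketch using Python's standard cProfile (run_simulation here is just a stand-in for the real expensive entry point):

    import cProfile
    import pstats

    def run_simulation():
        # Placeholder for the real, expensive entry point.
        return sum(i * i for i in range(1_000_000))

    profiler = cProfile.Profile()
    profiler.enable()
    run_simulation()
    profiler.disable()

    # Sort by cumulative time so the biggest bottleneck sits at the top.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

Fix whatever dominates that report, re-run, and stop as soon as it's fast enough for the research question at hand.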
If everything works, then you can focus on overall refactoring and optimization and turning your Frankenstein into a supermodel (if you have the resources and money to do that--good luck), but you typically need a functional proof of concept to even have a chance of securing funding for that step.
If there's no money to move that effort forward and you decide "well, maybe someone can use this, let's release it," that typically has to get approval from a technology transfer office, which is always up in arms about protecting potential IP, so it ends up rotting away on some disk, never to be seen or used again.
If you're permitted to release the IP, you start wondering how the development quality will reflect on you and your group, especially to those who see it with no context for the constraints you worked under to produce that miraculous, functional Frankenstein. It's ugly as sin, but it fulfilled the goal of delivering the core research results, and it did so as quickly and cheaply as possible.