I think it's partially excusable. Most LP solvers target large-scale instances, but instances that still fit in RAM. Think single-digit millions of variables and constraints, maybe a billion nonzeros at most. PDLP is not designed for instances of this type and gets trounced by the best solvers at this game [1]: more than 15x slower (shifted geometric mean) while being 100x less accurate (1e-4 tolerances where other solvers work with 1e-6).
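For context, the "shifted geometric mean" used to aggregate solve times in such benchmarks is a geometric mean computed after adding a constant shift to every time, which damps the influence of trivially easy instances. A minimal Python sketch (the 10-second shift and the sample timings are assumptions for illustration, not actual benchmark data):

```python
import math

def shifted_geomean(times, shift=10.0):
    """Shifted geometric mean of solve times in seconds.

    Adds `shift` to each time, takes the geometric mean, then
    subtracts the shift back out. The shift (assumed 10 s here)
    keeps near-zero times on easy instances from dominating."""
    log_sum = sum(math.log(t + shift) for t in times)
    return math.exp(log_sum / len(times)) - shift

# Hypothetical timings: solver_a wins the easy instances but
# blows up on one hard instance; solver_b is steadier.
solver_a = [0.5, 1.0, 2.0, 400.0]
solver_b = [5.0, 6.0, 8.0, 60.0]

# The shift rewards the steadier solver overall.
print(shifted_geomean(solver_a) > shifted_geomean(solver_b))
```

With a shift of zero this reduces to the plain geometric mean, under which solver_a's fast easy-instance times would weigh much more heavily.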
PDLP is targeted at instances for which factorizations won't fit in memory. I think their idea for now is to give acceptable solutions for gigantic instances when other solvers crash.
Indeed those are the "big four" solver businesses in the West, and also probably the most reliably good solvers. But by the time Gurobi withdrew from the benchmarks (a few weeks ago), COPT was handily beating them in the LP benchmarks and closing in on them in the MIP benchmarks. Solver devs like to accuse each other of gaming benchmarks, but I'm not convinced anyone is outright cheating right now. Plus, all solver companies have poached quite a bit from each other since CPLEX lost all its devs, which probably equalizes the playing field. So overall, I think the Mittelmann benchmarks still provide a good rough estimate of where the SOTA is.
[1] https://plato.asu.edu/ftp/lpfeas.html