Why do we trust journals at all, instead of the community of respected scientists?
We could capture a network of trust among scientists, where individual scientists vet other scientists and articles. Think of it as ScienceRank, a PageRank where the nodes are individual scientists and the individual articles they publish, and the links are publish, review, reproduce and consistent-with events:
- scientist A published article X
- scientist B gave a positive peer review of article X
- scientist B gave a positive peer review of scientist A
- scientist B gave a negative peer review of article X
- scientist B gave a negative peer review of scientist A
- scientist C independently reproduced the experiment in article X
- scientist C failed to independently reproduce the experiment in article X
- article Y is consistent with article X
- article Y is inconsistent with article X
Trust would flow from trusted scientists. Scientists gain and lose trust via the positive and negative reviews they or their publications receive. The algorithm would be a little more complex than PageRank's, given the different treatment required for the different links.
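For concreteness, here is a minimal sketch of how such signed, typed trust propagation might look. Everything here is an assumption for illustration: the edge weights, the node names, the clamping of scores at zero, and the function name science_rank are all hypothetical, and a real ScienceRank would need a much more careful treatment of negative links than this.

```python
# Hypothetical sketch of a "ScienceRank" iteration. The event weights and the
# toy graph below are illustrative assumptions, not a real system's values.
from collections import defaultdict

# Each event type gets a signed weight (assumed values for illustration).
EDGE_WEIGHTS = {
    "published":         0.5,   # scientist -> article
    "positive_review":   1.0,   # reviewer  -> article or scientist
    "negative_review":  -1.0,
    "reproduced":        1.5,   # strongest positive signal
    "failed_reproduce": -1.5,
    "consistent_with":   0.5,   # article -> article
    "inconsistent_with": -0.5,
}

def science_rank(nodes, edges, damping=0.85, iterations=50):
    """edges: list of (source, target, event_type) tuples."""
    score = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = defaultdict(float)
    for src, _, kind in edges:
        out_weight[src] += abs(EDGE_WEIGHTS[kind])

    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, dst, kind in edges:
            if out_weight[src] == 0:
                continue
            # Trust flows in proportion to the source's current score,
            # signed by whether the event was supportive or critical.
            share = score[src] * EDGE_WEIGHTS[kind] / out_weight[src]
            new[dst] += damping * share
        # Clamp so negative reviews can drag a node down, but not below zero.
        score = {n: max(0.0, s) for n, s in new.items()}
    return score

# Toy example mirroring the link types above.
nodes = ["A", "B", "C", "X", "Y"]
edges = [
    ("A", "X", "published"),
    ("B", "X", "positive_review"),
    ("B", "A", "positive_review"),
    ("C", "X", "reproduced"),
    ("Y", "X", "consistent_with"),
]
print(science_rank(nodes, edges))
```

The interesting design question this sketch dodges is exactly the one raised below: how to treat negative links, which plain PageRank never has to handle.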
Technology could be a multiplying force in the positive direction instead.
I think negative reviews are hard to interpret, especially algorithmically. My favorite example is Arthur Kornberg's JBC papers on polymerase in 1957 [1], where the reviewers recommended rejection with, among other comments, "It is very doubtful that the authors are entitled to speak of the enzymatic synthesis of DNA". Just two years later he received the Nobel Prize in Physiology or Medicine for that work.
I'm a big fan of Thomas Kuhn's The Structure of Scientific Revolutions. Revolutions are supposed to be hard, like changes to the Constitution. But I think an open trust network can actually support revolutionary research: acceptance or rejection is not limited to the small subset of scientists who control journals. Fringe scientists can give positive peer reviews and add supporting research, allowing support to grow gradually. And if that fringe, which went against the grain early, is ultimately proven right, its members gain a lot of trust in the system for being early.
As another poster noted, journals are communities of respected scientists. When I publish something in, say, the American Journal of Epidemiology, I am publishing in the official journal of a professional society.
The other problem is that "trust among scientists", and many proposals along those lines, implicitly favor the "Old Guard", who have larger networks to supply reviews, and those networks will be less inclined to give them poor ones.
The Old Guard problem is far worse when the gatekeepers are the small subset of scientists who control the journals. An open network of peer review allows fringe, revolutionary, or controversial research to gain a foothold among some scientists, and then gain support as people who respect those vanguard scientists take a first or second look, and so on.