The Problem of Publication-Pollution Denialism (mayoclinicproceedings.org)
51 points by tchalla on April 7, 2015 | 30 comments


Seems to me these publishers are simply an obvious-in-retrospect symptom of the perverse incentive facing researchers everywhere to churn out publications as fast as possible regardless of quality, as long as the papers fulfill some superficial criteria of "scienceness". It's cargo-cult science, plain and simple [0]. Meanwhile, Moloch just laughs [1].

[0] http://neurotheory.columbia.edu/~ken/cargo_cult.html

[1] http://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Speaking as an academic, this seems overblown.

Sure, you can publish any crap in one of these fake journals. But nobody cares about those journals, or has even heard their names. For all the credibility you gain, you might as well upload it to Scribd.


"But nobody cares about those journals..."

People who are familiar with the field don't care about these journals. The problem is that poorly fact-checked popular media are more than happy to cite these journals in click-bait articles touting "A Miraculous Cure for Pancreatic Cancer", which are widely disseminated to readers who don't have the scientific background to evaluate them rigorously. They see that the research was published in The Foobar Journal of Crypto-Oncology, and assume that it's reliable.

And if no reputable journal has published a cure for pancreatic cancer yet, the junk-journal articles will be at the top of the search results.


The situation of fake/shit journals and career incentives to publish in them strikes me as similar to the issue of for-profit "universities" in the US. These are apparently popular among school teachers, who can often get union-assured raises by earning a new degree (the quality of the degree not being considered).

Nobody with much sense thinks that a degree from a for-profit "school" lends credibility, but they can nevertheless be used to further a career.


In the particular context in which this is written, I think it's likely to be accurate. Ideally, MDs should show research credentials for career advancement, but the system rarely gives them the time to do serious research.


This has been a well-known problem for a while, though the internet may be a multiplying force here, and this article doesn't do much beyond stating the obvious for anyone even peripherally involved in the sciences.

What I would like to see is actual discussions on how to address the issue. You can't stop those publications. Despite the article saying talk of free speech is preventing action, these publishers have every right to do what they are doing, predatory or not.

Doesn't it, then, fall on the science community to build an institution that can be easily recognized and trusted to aggregate the "true" (or at least vetted and peer-reviewed) work in a way approachable by the general public?


Why do we trust journals at all, instead of the community of respected scientists?

We could capture a network of trust among scientists, where individual scientists vet other scientists and articles. Think of it as ScienceRank, a PageRank where the nodes are individual scientists and the individual articles they publish, and the links are publish, review, reproduce and consistent-with events:

    - scientist A published article X
    - scientist B gave a positive peer review of article X
    - scientist B gave a positive peer review of scientist A
    - scientist B gave a negative peer review of article X
    - scientist B gave a negative peer review of scientist A
    - scientist C independently reproduced the experiment in article X
    - scientist C failed to independently reproduce the experiment in article X
    - article Y is consistent with article X
    - article Y is inconsistent with article X

Trust would flow from trusted scientists. Scientists gain and lose trust via the positive and negative reviews they or their publications receive. The algorithm would be a little more complex than PageRank's, given the different treatment required for the different links.
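
Sketched in code, a minimal version of this signed trust propagation might look like the following (Python; the event names, edge weights, damping factor, and clamp-at-zero rule are all illustrative assumptions, not a worked-out spec):

    from collections import defaultdict

    # Hypothetical signed weights for the event types listed above.
    EVENT_WEIGHTS = {
        "published": 0.5,
        "positive_review": 1.0,
        "negative_review": -1.0,
        "reproduced": 2.0,
        "failed_reproduction": -2.0,
        "consistent_with": 0.5,
        "inconsistent_with": -0.5,
    }

    def science_rank(events, iterations=50, damping=0.85):
        """Propagate trust over a signed graph of scientists and
        articles, in the spirit of PageRank."""
        edges = defaultdict(list)  # source -> [(target, signed weight)]
        nodes = set()
        for source, event, target in events:
            edges[source].append((target, EVENT_WEIGHTS[event]))
            nodes.update([source, target])

        trust = {node: 1.0 / len(nodes) for node in nodes}
        for _ in range(iterations):
            # Every node keeps a baseline of trust (the damping term).
            new_trust = {node: (1 - damping) / len(nodes) for node in nodes}
            for source, targets in edges.items():
                share = damping * trust[source] / len(targets)
                for target, weight in targets:
                    # Endorsements from trusted nodes add trust; negative
                    # reviews and failed replications subtract it.
                    new_trust[target] += share * weight
            # Clamp at zero: trust can be destroyed, not made negative.
            trust = {n: max(t, 0.0) for n, t in new_trust.items()}
        return trust

    # Example: B vouches for A's article, C fails to reproduce it.
    print(science_rank([
        ("A", "published", "article_X"),
        ("B", "positive_review", "article_X"),
        ("C", "failed_reproduction", "article_X"),
    ]))

The hard part is exactly the extra complexity mentioned above: unlike PageRank, where every link is an endorsement, a signed graph has to decide how much a negative review from a trusted scientist should cost, and that choice does a lot of work.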

Technology could be a multiplying force in the positive direction instead.


I think negative reviews are hard to interpret, especially algorithmically. My favorite example is Arthur Kornberg's JBC papers on DNA polymerase in 1957 [1], where the reviewers recommended rejection with, among other comments, "It is very doubtful that the authors are entitled to speak of the enzymatic synthesis of DNA". Just two years later he received the Nobel Prize in Medicine for that work.

[1] http://m.jbc.org/content/280/49/e46.full


I'm a big fan of Thomas Kuhn's The Structure of Scientific Revolutions. But revolutions are supposed to be hard, as are changes to the Constitution, etc. Still, I think an open trust network can actually support revolutionary research: acceptance or rejection is not limited to the small subset of scientists who control journals. Fringe scientists can give positive peer reviews and add supportive research, allowing a gradual growth of support. And if the fringe that went against the grain early is ultimately proven right, they gain a lot of trust in the system for being early.


As another poster noted, journals are communities of respected scientists. When I publish something in, say, the American Journal of Epidemiology, I am publishing in the official journal of a professional society.

The other problem is that "trust among scientists", and many proposals along those lines, implicitly favor the "Old Guard", who have larger networks to draw reviews from, and those networks will be less inclined to give them poor reviews.


The Old Guard problem is far worse when the gatekeepers are the small subset of scientists who control the journals. An open network of peer review allows fringe, revolutionary, or controversial research to gain a foothold among some scientists, and then gain support as people who respect these vanguard scientists take a first or second look, and so on.


Journals are communities of respected scientists.


I agree with that, but they are small scale, and not very democratic.


I think most western countries already have a national body that maintains a list of "valid" journals, often with a system ranking them, so your beancounters are happier when you publish in Nature than in The Hungarian Journal of Chemical Engineering. So maybe what we need is a union of these national bodies? Sort of like we have for most sports?


Sports unions are rather corrupt.



I agree it is a problem, but for those in a particular field, you learn pretty quickly which journals and conferences are reputable.


This has always been a problem, and "trusting papers written by people I know, or who have worked with people I know" has always been an adequate solution for people in the field, whatever the field is.

But with the democratization of scientific communication in the Internet Age, as well as the massive over-production of people with PhDs in the sciences relative to the number of academic jobs available, the number and accessibility of bogus publications have skyrocketed at the same time as the pressure to use them has increased enormously. Meanwhile, the amount of good science has stayed pretty much constant, so the good/total ratio is dropping precipitously.

So while insider information is still effective for us, it's much less so for others; in particular, the popular science press *and the majority of the people who read it* are interested entirely in page views on the one hand, and in being wooed by the "next amazing breakthrough!" on the other, rather than in anything to do with actual science.

There was a comic someplace (should be xkcd but doesn't seem to be) saying of the average reader of "IFLScience" and similar sites, "You don't love science, you just want to look at its ass as it walks by." This is not the worst of all possible worlds (in other places they want science to never show its face in public, much less its ass), but it does create an environment where science communication is hard, and the proliferation of bogus journals and people willing to publish in them is problematic.

It wouldn't hurt for hiring committees to have an explicit rule that anyone who has published in a problematic journal has that publication counted against them, but mostly there needs to be some better form of triage than conventional peer review (which is weak enough as it is), or we're going to drown in papers debating the number of angels that can dance on the head of a pin, and lose the one weird trick discovered by a post-doctoral mom that actually solves an interesting problem.


The comment you cite about IFLscience etc. is extremely poignant. Would you mind trying once again to dig up the source? Sounds like a comic I should read.


Found it! "Cyanide and Happiness".

http://explosm.net/comics/3557/


Sweet, thanks!


I think this is sharply correlated with what Tom Nichols termed ['The death of expertise'](http://thefederalist.com/2014/01/17/the-death-of-expertise/) and how we tend to assign equal weight to all opinions, irrespective of the voice attached to them. Hence, we end up measuring volume/loudness rather than quality.


Maybe if academics were engaged in real work that can't be faked this wouldn't be a problem.

For instance, why not only accept papers that come with cold, hard evidence for what the paper is saying? If it's a chemistry paper, there should be a video of all the lab work and the results being shown. If it's a computer science paper, have a runnable Docker image of the program. If it's a linguistics paper, have a program that includes the corpus, the searches you performed, and your statistical analysis files.
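
As a concrete (hypothetical) illustration of the linguistics case, the artifact could be as small as a single script that re-runs the corpus search and regenerates the paper's numbers; the file names, query, and statistic below are invented for the sketch:

    import csv
    import hashlib
    import re

    def run_analysis(corpus_path="corpus.txt", results_path="results.csv"):
        """Re-run the (hypothetical) corpus search and rebuild the
        paper's summary table from scratch."""
        with open(corpus_path, encoding="utf-8") as f:
            text = f.read()

        # The exact query from the paper, committed alongside the data.
        hits = re.findall(r"\bdouble negat\w+\b", text, flags=re.IGNORECASE)
        tokens = text.split()
        rate_per_10k = 10_000 * len(hits) / len(tokens)

        with open(results_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["matches", "tokens", "rate_per_10k"])
            writer.writerow([len(hits), len(tokens), f"{rate_per_10k:.2f}"])

        # Hash the corpus so a reviewer can confirm they re-ran the same data.
        print("corpus sha256:", hashlib.sha256(text.encode("utf-8")).hexdigest())
        print(f"rate per 10k tokens: {rate_per_10k:.2f}")

    if __name__ == "__main__":
        run_analysis()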

So long as academics don't have to prove they did anything, there will be fakers and posers.


Since outside of math the goal of science is not proof but plausibility, this would not be an effective approach. Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference, and as such is aimed at changing the posterior plausibility of some proposition, not "proving" anything. Proof and certainty are the Alchemist's Stone: philosophers sought after them for thousands of years the way alchemists sought after the secret of turning base metals into gold, never realizing that the fundamental problem wasn't in their methods (although their methods had problems) but in their goal, which was impossible and wrong. We should seek knowledge, not certainty.
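
(For concreteness, the Bayesian updating in question is just Bayes' rule: evidence E shifts the plausibility of a hypothesis H, and the posterior reaches certainty only in the degenerate case where the evidence is impossible under the alternative.)

    % Bayes' rule: P(H) is the prior plausibility, P(H|E) the posterior.
    % P(H|E) = 1 requires P(E|~H) = 0, so certainty is a limit, not a goal.
    P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                       {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}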

The range of means by which that can be done is huge (though considerably less than "anything goes"), and there is a definite role for work that is exploratory and speculative, up to and including stuff that is almost certainly wrong but worth publishing because a) the error is not obvious and b) publishing creates the opportunity for others to respond to it, hopefully putting the error to rest for good and all. Some of the early work on the "no cloning" theorem was motivated by publications of this type (it turns out that if you could clone a quantum state you could use entanglement to communicate faster than light, and there was a series of papers in Physics Letters in the late '80s proposing to do just that).

So trusting in the self-correcting ability of the discipline of science is fundamental to its progress, and therefore insisting on some alchemical standard of "proof" as the goal for publication would fatally cripple the scientific enterprise. For science to work we have to be tolerant of the publication of error.

But we need to keep the rate of erroneous publications down to a manageable level. Peer review and society membership were ways of doing this in the past. They have broken down today, and we are still casting about for new ways to keep the error rate well above zero, but not so high as to swamp everything else.


I think the OP is saying that you should prove that you actually did the science, not implying you have to prove that you are correct.


The attached article was discussing open, "pay to publish" journals that don't care about the contents of submitted papers. They will publish just about anything that sounds vaguely scientific as long as they get paid.

The purpose of peer reviewed journals is to disseminate scientific research that's been vetted so that readers don't have to attempt to reproduce the results to have some comfort level with the paper. That's not to say that published articles in quality journals shouldn't be approached with some degree of skepticism, but the sheer volume of science publishing precludes combing through the data of every article you read.

High quality papers will include enough information to reproduce any results referenced in the paper - video is no substitute for actually running the experiment.


If no one is going to review the paper, no one is going to examine the additional artifacts that "prove" science was being done.


Good venues do: that's a big part of peer review in any decent journal or conference. While reproducibility could be improved, it's not nearly as large a problem as what the article is talking about: publication venues that don't even try to vet incoming papers.

These are places that publish papers without even reading the title. You could submit obvious nonsense—randomly generated gibberish—and get published. You can't do that in respected venues in most fields, especially in science. But for people outside of the field, it's really hard to tell which venues are legitimate and which ones aren't, and incentives are misaligned so that quantity of publishing matters far more than anything else.


I would submit that these venues are not doing an adequate job. The Nature case and the Duke cancer case are examples of how even the most reputable journals can be scammed with fabricated data. These weren't even questionable practices with subtle problems; they were outright fabrications. The most damning comment is near the end, where 2% admit to fabrication and 30% to questionable practices. And that was just the self-reported results.


What is "real work that can't be faked"? I think it much better to define clear criteria by which a journal should accept research.

Further, I wouldn't single out academics when there are corporate-funded studies which cherry-pick results and data to support their products/services.



