Nature rejects double-blind peer review - how corrupt is academia? (sciam.com)
29 points by andreyf on Feb 18, 2008 | hide | past | favorite | 19 comments


I've reviewed a bunch of submissions to ACM Transactions on Networking, and they were all double-blind.

Usenix, on the other hand, is "single-blind" (if you want to dignify this with a name).

It makes a huge difference. Forget about gender bias. The real problem is bias against newcomers. I was reluctant to write anything negative about well-known researchers. When you're doing a bunch of these, you don't have a whole hell of a lot of time to track down cites and evaluate experimental results. So your default is going to be, people who could kick your ass in your field are getting good reviews.


The trouble with double blind is that in many disciplines the number of people publishing is quite small, and then when you get into sub disciplines you get a very small number. If you've shopped a paper around at conferences and workshops before submitting it, it can be fairly unlikely that a person qualified to review the manuscript doesn't know who wrote a paper or couldn't find out with a phone call.

So effectively you get single blind (if that; it is often pretty obvious who your referee was if you know the writing style, or if the referee keeps citing papers by the same author). However, even though double blind often doesn't work very well in practice, not doing it fails with perfect certainty.


Bear in mind that established researchers aren't going to take a "superficial No" for an answer, and maybe the reason pubs stay single-blind is that double-blind would invite havoc; Raj Jain would be getting articles turned down, and everything would bog down in appeals.

Oh, wait. No he wouldn't. ACM SIGCOMM is double-blind. Yeah, I have no idea why everything doesn't work that way.


The title of this post is excessively melodramatic.

It isn't necessarily "corrupt" to reject double-blind peer review. In my experience, most scientists who are qualified to review a paper will be able to infer the name of the researcher behind it, based solely on the content.

This is particularly true in areas where software packages or novel algorithms come into play, because people tend to name their ideas for maximum recognition. It's silly to hide the name of the researcher when the paper refers to their pet algorithm a dozen times by name.


I agree that the post head is melodramatic. But I think perhaps you missed something in the article. As it pointed out, the kind of "easy guesses" you mention are far less likely to occur in broad fields, even if they are more likely in narrower fields. Besides, so long as the default "fail" state in a double-blind review is at least _no worse than_ the current "succeed" state of single-blind, I don't see why there would be an objection.

I confess that I've done a reasonable amount of graduate work, and I work at a well-known college (staff, not faculty), and I was genuinely surprised to learn that most reviews aren't double-blind. It seems to me to make more sense as the default option.

But again, I don't disagree that the submitter is hyping the situation with that head. The situation isn't necessarily "corrupt" -- but I can understand the inclination to interpret it that way.


I may have a bit of an excessive taste for melodrama; please forgive.

In my experience, most scientists who are qualified to review a paper will be able to infer the name of the researcher behind it, based solely on the content.

Your point isn't just anecdotal, the original Nature editorial mentions a study: "Referees could identify at least one of the authors on about 40% of the papers, undermining the raison d'être for double-blinding."

http://www.nature.com/nature/journal/v451/n7179/full/451605b...

But sexism is probably a problem with female names one doesn't recognize, not females whose work one is familiar with (anecdotal, but I'm sure I could find studies on prejudice/discrimination), so double-blind review would still help.

Still, we're talking on different premises: I think it's important both for publishers to strive for fairness in publishing and for researchers to demand an unceasing effort towards general progress in publishing. The real problem is a lack of dedication to thought about the best way to organize research, which, if I understand correctly, is the biggest value added by the journals. Why is it that the closest thing to thinking about organizing research on a macro level is one of Nature's 17 blogs (is it really just as important as the Avian Flu)?

http://blogs.nature.com/

Why is there so little thought about how it is we organize our research? Is it just because old business models stand in the way? Is it because those that benefit from the status quo have undue influence?


Interesting title. You indict the entire community of scientific researchers based on a single decision made by a single journal. Is your decision making process any better than theirs?


Touché, I'm sure there are many researchers who work hard and mean well.

But this isn't to say that the system of publication is perfect. Why don't we innovate more in how we organize research on a macro level?


That's not what he said. There's a world of difference between "the publication system could be improved" and suggesting that academia is corrupt.


First, I'm sure most researchers "work hard and mean well". If not, they'd be much better rewarded in almost any other line of work. I agree that the system is not perfect. However, I would also have a hard time saying that double blind is the perfect system (or even necessarily better). There are many reasons why it would be worse than the current system.


Asking a question is an indictment? Is your comment considered and sensible?


The purpose of the question was clearly rhetorical. He was using the Nature story to imply that academia is corrupt. He wasn't curious about our impressions of the corruptness of academia. Therefore my comment is sensible.


Similarly, an interesting review of a book critical of the status-quo in academia (The Access Principle, by John Willinsky):

http://www.scottaaronson.com/writings/journal.html

It seems that journals (and textbook publishers) give the authors of intellectual works prestige and bragging rights in return for billions of dollars worth of intellectual property. The more valuable the intellectual property you give up, the more bragging rights you get. The entire industry reminds me of Tom Sawyer's whitewashing trick:

Tom said to himself that it was not such a hollow world, after all. He had discovered a great law of human action, without knowing it – namely, that in order to make a man or a boy covet a thing, it is only necessary to make the thing difficult to attain. http://www.pbs.org/marktwain/learnmore/writings_tom.html

Or is there something I'm not getting?


I'm concerned that the way we fund and review science has some big problems. I don't want to get into a 30,000 word essay here, but some sciences are starting to sound like echo-chambers where there is a narrative and research is supposed to support that narrative. Double-blind peer reviews should help with that, or at least try to help. But I remain skeptical that anything serious is going to be done until science completes politicizing itself somewhere in the next 2-3 decades.


Nice observation, and the situation is, I think, exacerbated by the often-cited "publish or perish" nature of things. I strongly suspect that this increases the tendency to adhere to the narrative, making it even more self-reinforcing.

Dangerous.


There are many ways in which academia is non-transparent, but I don't think one journal rejecting double-blind reviewing is a major one. In the areas of computing in which I work not only can I guess who's written a paper (I would estimate at least 75% of the time off the top of my head, and with a little help from Google I imagine I could get well over 90%), but I can often guess who's written a review of my papers (perhaps 10-20% of the time). Computing might be an odd example - it's all I know. But I suspect that even most "broad" subjects end up breaking down into sub-areas where the participants tend to know each other pretty well.

Double-blind seems to me to be an attempt to try and cover up the much deeper flaws of "peer review". Peer review doesn't in practice mean that a peer-reviewed article is 100% correct, and that legions of scientists have honed every sentence. It means it got past the prejudices of a few reviewers, and (unless the reviewer is unusually committed) means that it didn't contain anything that screamed "obviously wrong" on a quick read through. When I get a genuinely good review (in particular if it finds a flaw in my work), I find it immensely useful, but these are sadly in the minority. IMHO the greatest reason why new-comers to the field find it hard to get papers accepted is because their ideas have a higher chance of grating against those held dear by the established people who act as reviewers; this is such a fundamental flaw in the system that fiddling with double-blind barely even counts as fiddling at the edges.

Given all this, it greatly amuses me, when in North America, to see products advertised as being based on peer-reviewed science.


The idea of double blind review goes against the recent (almost as old as the internet actually) trend of posting everything online long before publication.


Joao Magueijo has an interesting look into this problem (specifically referring to Nature, too) in his book Faster than the Speed of Light. I read it when I was studying physics and it really intrigued me to see how difficult it is to challenge the status quo in a science that is defined by its fundamental revolutions.


Not very.



