> Many researchers are privately terrified of being falsely accused by the "data cops," one scientist said. But no one wants to criticize them because no one wants a target on their own back — including this individual, who was granted anonymity for this reason. "Batman is a vigilante," the scientist said. "So is the Joker."
> For collaborators, it's a stressful and infuriating time. When a paper is retracted, the research is erased, and all related citations are lost.
> Others who attempted to build on Gino's studies are grappling with having wasted time, money, and energy.
It feels, to me, that they're making Data Colada (the data vigilante group) the bad guys here. It's like blaming the Police for stopping a criminal because they took away the criminal's source of income... I'm kinda uneasy with this line of editing :/
Everyone, please read the whole article, not just this comment. The latter two quotes here appear badly out of context in this comment - they are about anger and frustration towards Gino, not towards Data Colada.
The first quote here is not a defense of Gino, it's a nuanced opinion of self-proclaimed vigilantes who are also human and might not always make perfect decisions. It is followed by this:
> Discussing the allegations that Gino falsified data is seen almost unanimously as fair game, especially in light of Harvard putting her on leave. People expressed more mixed feelings toward disgraced researchers such as Cuddy, who failed to replicate her findings but did not fake her data.
I have no special insight here, but this at least seems like a reasonable thing to discuss.
> It feels, to me, that they're making Data Colada (the data vigilante group) the bad guys here.
I strongly disagree. The light questioning of Data Colada was a very small part of the article, and even that section did not give me the impression that they are "bad guys".
If the findings can’t be replicated, were they worth publishing to begin with? I know there is pressure to publish, but the effect on society is significantly greater than a single researcher’s personal job security. If nothing else, papers with expensive or one-off experiments should be clearly marked as observations that should not be relied on as foundational for other work until replicated. When they’re cited, peer reviewers could make sure they are referenced as such, without requiring the peer reviewer to intimately know the details of the reference.
I'm in a different field (computer security and privacy) and I want to point out that the notion of "replication" is trickier than what you might expect. Let's say I do a simple empirical measurement study, say on network traffic on some piece of internet infrastructure day-by-day in the year 2020. I find there's a huge sustained increase starting in mid-March, sustaining through the end of the year. I submit my results for publication. How can someone replicate those results, when the measurements were subject to conditions unique to that observation period?
The best anyone can do is to replicate my statistics/analysis if I gave them my data set. That's a different notion of replication. Would you consider that sufficient?
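To make the distinction concrete, here's a minimal sketch (with invented numbers, since the real traffic data is hypothetical): given the authors' shared data set, anyone can re-run the analysis, but no one can re-collect measurements from 2020.

```python
# "Computational reproducibility": re-running the authors' analysis on
# their released data set. The daily request counts below are invented
# for illustration; the real numbers would come from the shared archive.
pre_march = [100, 104, 98, 101, 97]      # daily traffic before mid-March
post_march = [152, 160, 149, 155, 158]   # daily traffic after mid-March

def mean(xs):
    return sum(xs) / len(xs)

# A reviewer can verify this number matches the paper's claim,
# but cannot independently re-observe 2020 network conditions.
increase = mean(post_march) / mean(pre_march) - 1
print(f"sustained increase: {increase:.0%}")  # prints "sustained increase: 55%"
```

Re-running this on the shared data verifies the arithmetic, not the measurement; that's the weaker notion of replication being described.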
That narrative is insane. It is one thing to produce shoddy research. Even good senior researchers sometimes just have a paper that... doesn't work out well, with a late found flaw in the methods that can't be fully mitigated or uninteresting findings, and then they'd rather still try to publish it than shelve months of work. That's not good, but almost unavoidable in the current academic environment.
The accusations here are very different: Producing fake data is fraudulent, of course the papers need to be retracted, and people involved and in the know need to be punished. The quote from the article describes well something that seems to be a growing group of people within academia:
> It's possible "these people don't care and never cared about the science," Michael Sanders, a professor of public policy at King's College London, said. "They talk about science, and they talk about experiments, and they sort of wrap themselves in the fabric of the scientific method — just as a way of selling stuff."
Is there a market for publishing the retrospective of the process which resulted in a "late found flaw"? I would expect uninteresting data would still be published if only to prevent someone from going down that rabbit hole in the future.
The citations aren’t lost though, are they. They still exist in the paper, they just point to a retraction. You can still find the papers that cited the retracted paper.
And I suspect you’ll still get new citations since it’s not unheard of that people cite a paper based on seeing it in some other paper, rather than reading the paper directly. I don’t think people should do that, but they do.
Of course they shouldn’t be counted. The papers are retracted.
Of course, I don’t think people whose papers were retracted due to their fraud need ever concern themselves with grants again. (Kind of like how felons can never be an officer of a publicly traded company.)
I don't follow you. They track scientific publications and their citation networks. I feel like you are reading something into my post I didn't claim. My point was that the citations can be lost in the sense that they won't be counted by the bibliographic databases that administrators usually use to get citation numbers for evaluating researchers and institutions. Of course the citations still exist between the papers. But a researcher's bibliometric indicators will come out lower after a retraction, in most cases when they are evaluated for grants or career progression.
But that measurement isn’t done through Scopus or Web of Science.
It’s done by evaluating the biosketches of grant applicants in their applications. It’s not like the grant reviewer checks Scopus for the applicant’s citation count or something.
I recently reviewed some grant applications, and I reviewed each applicant’s biosketch, including their experience and publications. I didn’t count the citations for each paper. But if I did, I certainly wouldn’t count citations for any papers that were retracted.
Of course the citations should be lost (not counted).
But the paragraph is about collaborators being angry and frustrated. And they seem to be angry at those who faked the data. Which is understandable.
You would hope that this would also mean that collaborators getting their name on a paper would put a little more effort into scrutinizing the data provided by co-authors.
Maybe it is just a little too easy to work on a paper and simply accept the underlying data without asking any questions.
Guilt is a very effective way of keeping corrupt status quos going. Even if you change your ways, there is still dirt on you. The only way to break the cycle is if someone from the outside exposes the filth, or someone guilty on the inside decides to pick up his cross and accept the consequences of admission and exposure. On top of that, there are many who won't even entertain the possibility that the accused could be guilty even with evidence, because they have an emotional interest in preserving a false image of decency. Or they're terrified of the consequences of reporting the problem.
This is what happens with sex abuse. When there's enough guilt to go around in the right places, the guilty form a kind of hellish camaraderie, a Mexican standoff of pointing fingers. Once such people assume positions of power, they also have a tendency to concentrate corruption in the institution by favoring the corrupt. Doing so entrenches the standoff and eliminates those who would oppose or expose them. The so-called "lavender mafia" is a well-known example, because the media hate nothing more than the RCC, but this occurs everywhere and at greater rates.
What's important is that we distinguish between institutions and the people who temporarily occupy them. Good institutions should generally be preserved. It's the corruption that should be dealt with. Of course, that's not always an option. Sometimes, corruption destroys the institution, or perhaps the institution is beyond repair. But generally, there is a presumption in favor of cleaning house rather than abolition.
What kinds of steroids, and how much of them, can I get away with using so I'm in the perfect spot where I am ahead of my fellow doping athletes, but still under the bar of detection...
There's a pretty clear solution for that in athletics, which is sometimes implemented. They take blood samples and store them for decades. New methods can come along, analyze the stored samples, and detect doping long after the fact.
The same can be done in science; it just requires more disclosure of data.
This doesn't really help, though? My understanding is that in the sports where doping was a thing, if you wanted to be competitive at the top level, you basically had to find a way to dope. This wasn't something 'just the winners' did. Nearly everyone was doing it.
So, yeah, you can look for it with later testing, but all that will do is claw back a winner. It doesn't help anyone who wasn't doing it. That's why none of us has any clue who the best cyclist during the Armstrong era was. And there's the scary possibility that the cheating was so widespread that the recorded winners would still have been the best even if everyone had stopped cheating.
Of course you can't fix the past before they started retaining samples.
The point is to prevent people from doping moving forward because they know they can be caught.
If we had better data on cyclists from when Armstrong was racing, you could absolutely claw back the awards and give them to the best cyclist who wasn't doping.
I think that misses just how entrenched doping was. As mind blowing as it is, it is fairly safe to say that everyone that was making the leaderboards was likely cheating.
Probably the entire peloton was -- if so, can it still be considered cheating?
Of course, not everybody used EPO and not everybody who did used as much as Riis, Ullrich, Pantani, Virenque, Armstrong, Zabel, Hamburger... Blood doping existed long before EPO. Amphetamines (and strychnine!) were used long before blood doping. Induráin almost certainly used either EPO or blood doping.
EPO and blood doping can still be used (and likely still are), but no one can push it to the dangerous extremes of the '90s. That will be caught, because such hematocrit values aren't allowed anymore and they're easy to test for.
For a lot of people, the recognition and achievement are what matter most. You are basically saying "you may get a footnote as someone who should have won, once the others are caught cheating years later." Right? I can see how that is less than comforting. (Especially with no way of knowing who is actively cheating at the time.)
I can't say it is a bad idea to try, mind you. It could very well work. I just have my doubts.
The bar for being accused is so incredibly high. People are given so many possible outs and are allowed to produce the most implausible explanations. And then, even having one or two faked papers basically has no impact.
Here's an example.
Ken Rogoff and Carmen Reinhart, professors at Harvard, faked the paper that introduced austerity to the world. It had an immense impact on the economy and caused massive and needless suffering for hundreds of millions of people, and it still continues to haunt countries today. To make their paper work they had to lie three times: they excluded some countries entirely, selectively excluded data from others, and reweighted all of the data. None of this is reported in their paper! Each of these took deliberate action on their part; they cannot be mistakes. Every single "mistake" they made is carefully calculated to produce the result they want, and every one of them is needed, otherwise their paper would be junk.
What happened to them? Well, Reinhart has been promoted to the Chief Economist of the World Bank and is still a professor at Harvard. Rogoff is still a professor at Harvard too. Their faked paper literally resulted in deaths, toppled governments, and the rise of the alt right in places like Greece.
But they got a promotion for it.
The only people who are afraid are the people who are faking their data. No real scientist is getting accused of anything here.
> the paper that introduced austerity to the world
This is complete nonsense. The paper was published in 2010.
The principle of fiscal austerity for EU member states was enshrined in the Maastricht Treaty of 1992 (Article 104) and the accompanying Protocol No 20 on the excessive deficit procedure. [1]
> The reference values referred to in Article 104(2) of this Treaty are:
> — 3 % for the ratio of the planned or actual government deficit to gross domestic product at market prices;
> — 60 % for the ratio of government debt to gross domestic product at market prices.
In fact R&R cite the Maastricht Treaty as the source of the 60% magic number in their paper!
These numbers were basically pulled out of thin air by a French civil servant in 1981. [2]
That is how government policy is actually decided. Not based on an academic paper published 20–30 years after the fact.
It's fairly common for people at prestigious institutions to write papers about stuff that has already existed for years to try to snatch some points, and then to get all the credit, because they are from a prestigious institution with many press-related megaphones. The most famous example is probably von Neumann writing about ENIAC, a project he wasn't involved with at all.
These are aspirations to be applied as needed, not a prescription for austerity. What matters is the behavior of the ECB, the European Central Bank. And that's a political and economic decision made with essentially a free hand (given that it has only one legal goal).
Austerity is an economic and political choice. No mechanism in the EU forces it.
I read it as an argument for a causal chain of 2 professors => paper => Greece elects far right. A few problems with this:
1. Paper came far after the idea of cutting government spending, so causation is wrong off the bat, as OP gently pointed out.
2. The aspect of the treaty doesn't have a clear enforcement mechanism, yes. However, it was a major part of many countries actions at the time.[1]
3. "What are they going to do, kick me out?" doesn't obviate rules without a clear enforcement mechanism.
4. It's unclear why Greece electing a far right party at some point in the 2010s is enough to motivate the argument, it has a liberal technocratic government and has turned things around substantially, in fact, recent elections delivered an unexpectedly wide reelection victory.
[1] To wit, that's why the Greece situation took O(years) to resolve. And recently, it was shared that Merkel _broke down crying_ when she felt she was being bullied into enabling violations of it.
n.b. On a personal note, I found the tone you're adopting surprising and not the conduct I'm familiar with after 14 years on HN. What stood out was making personal attacks in response to innocuous comments by interlocutors, and the aggressive insistence that they don't understand -- dang has a wonderful shibboleth, "come with curiosity", and I don't see that here. I see someone berating other people for engaging at all because it feels like they're questioning you. If it's the case you are 100% wedded to the idea two Harvard professors created "austerity", it's not worth saying anything at all; it's ill-defined and there are 1000 ways to read "austerity", 999 of them may conflict with however you are internally defining it.
I have to say your final paragraph seems very off-base. Tone issues were started by diogocp with "complete nonsense" and were only slightly increased by light_hue in their response by calling that "rude".
You're the one calling out only one side for tone and in very dismissive manner. Ironically you are doing the exact thing you accuse light_hue of and should probably take your own good advice.
Just because I didn't point out every detail that was wrong with that post doesn't mean anything else is right. Even the basic facts are wrong.
> 1. Paper came far after the idea of cutting government spending, so causation is wrong off the bat, as OP gently pointed out.
This just isn't true.
The Greek crisis started in 2009, the paper came out in Jan 2010, harsh austerity measures were imposed on Greece in May 2010 by the EC/ECB and then they just kept coming for years on end. The timeline works perfectly and the EC/ECB had not imposed such measures before.
> 2. The aspect of the treaty doesn't have a clear enforcement mechanism, yes. However, it was a major part of many countries actions at the time.[1]
>
> 3. "What are they going to do, kick me out?" doesn't obviate rules without a clear enforcement mechanism.
Let's talk about 2 and 3 at the same time.
This just isn't how the EU works. And it isn't how laws work. Laws don't rely on a strict interpretation of a text. Laws are not computer programs.
When someone says something is a "rule", they mean that it is a principle that must be followed "one of a set of explicit or understood regulations or principles governing conduct within a particular activity or sphere."
The 3% is not a rule. The EC does not see it as such. The member governments who routinely go over the 3% level don't see it as such. It's misleading to call it a rule.
The Stability and Growth Pact (SGP) is the official mechanism for dealing with this. At the moment the SGP is suspended wholesale.
Never in its history has the SGP imposed sanctions on any country for violating the 3% deficit-to-GDP ratio. It would be unthinkable for it to do so. Germany and a few other countries have been trying to reform the system for a long time to give the SGP teeth and turn that 3% level into an actual rule, but without any success.
These are not rules. If you want to call them that, you're misleading yourself and any reader.
> 4. It's unclear why Greece electing a far right party at some point in the 2010s is enough to motivate the argument, it has a liberal technocratic government and has turned things around substantially, in fact, recent elections delivered an unexpectedly wide reelection victory.
I said that austerity led to the far right getting elected into parliament. Which Golden Dawn did, and far right parties continue to win seats even today at lower levels. I'm not sure what this argues? That Greece is not a fascist state today? I never said that.
> n.b. On a personal note, I found the tone you're adopting surprising and not the conduct I'm familiar with after 14 years on HN. What stood out was making personal attacks in response to innocuous comments by interlocutors, and the aggressive insistence that they don't understand -- dang has a wonderful shibboleth, "come with curiosity", and I don't see that here. I see someone berating other people for engaging at all because it feels like they're questioning you. If it's the case you are 100% wedded to the idea two Harvard professors created "austerity", it's not worth saying anything at all; it's ill-defined and there are 1000 ways to read "austerity", 999 of them may conflict with however you are internally defining it.
I never said they created austerity.
On a personal note, I find the contempt for facts and scholarship on HN quite amazing. All of this information is available on Wikipedia. From the timeline, to the fact that the EU does not consider this a binding rule, to Golden Dawn getting elected as a consequence of austerity. To the fact that I'm not "internally defining" austerity, it's an actual field of study in economics.
"come with curiosity", and I don't see that here ~> On this we agree. Curiosity would involve a quest to find out what the truth is.
Those measures were a condition of the Troika (IMF/EC/ECB) bailout. And the IMF has a long history of imposing similar conditions, dating back to at least the 1970s. The conditions were straight out of their playbook.
There was an IMF intervention in Portugal in the 1970s, and here [1] is what they had to say about conditionality. Sound familiar?
> it was accepted that the stabilization program would need to include quantitative limits on domestic credit expansion and on increases in credit to the Government, if it was to be supported by a stand-by arrangement. These limits would, it was hoped, make certain that total expenditure would be kept within the overall resource constraint. Policy changes strong enough to give reasonable assurance that the public sector deficit would be reduced had to be put in place.
> The Greek crisis started in 2009, the paper came out in Jan 2010, harsh austerity measures were imposed on Greece in May 2010 by the EC/ECB and then they just kept coming for years on end. The timeline works perfectly and the EC/ECB had not imposed such measures before.
Greece was free to not borrow money and go bankrupt instead. That would also have led to austerity but in a much more chaotic way.
They published a paper that claimed they ran analysis X. Reviewers looked at it and believed them, because we usually don't assume that people are lying. And it's hard to read over every line of code and compare it to the paper to make sure they ran the analysis they claimed.
But in reality, they did a totally different analysis. They hid data, they fudged data, they inserted weighting factors into the analysis. And they never mentioned this in the paper.
In the article, they claim this was justified.
But hiding the true nature of the analysis they did was not. That hiding, making the paper read one way while the code says another, is plain and simple scientific fraud.
If what they actually believed is that the code is correct and that it's the better analysis, then they should have just reported it in the paper. The reason they didn't is that reviewers would have looked at it and said that this is unacceptable and doesn't meet basic scientific standards. So they lied.
Completely and totally different case. You are wrong on the facts.
There is no one in the economics profession who thinks Reinhart or Rogoff deliberately made that error because they had something personal at stake in convincing policymakers that austerity policies were the right outcome. Quite unlike the cases of Ariely or Gino or various others.
They made an embarrassing error, but ultimately it was a coding error.
This is an appeal to authority, and regardless, if you pass something off as proven fact you should have to be damned sure. Being wrong and claiming later that it’s a coding error should not cut it. This should result in a total loss of credibility.
I'm right about the facts; you're misinformed about what is wrong with the paper.
They did 3 things in that paper. One of them they have called a coding error. Ok.
Two of the 3 were intentional. They intentionally omitted date ranges for some countries. And they weighted countries differently. This they admit to. And they defend.
Yet, nowhere in the manuscript does it say that in order to get these results you must apply weights to the data and you must leave out some inconvenient data points.
I can make up any argument for anything that I feel like if I start to apply weights to the data designed to make my argument work. They failed the most basic of scientific ethical standards.
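To illustrate the point with made-up numbers (not the actual data from the paper), here's a short sketch of how unreported weights can flip a headline average:

```python
# Hypothetical growth rates (percent) for three high-debt countries.
# These numbers are invented purely to illustrate the mechanism.
growth = {"A": -2.0, "B": 2.9, "C": 2.8}

# Treating every country equally, the average is clearly positive.
unweighted_mean = sum(growth.values()) / len(growth)

# With ad-hoc weights that happen to emphasize country A,
# the same data now "shows" negative growth under high debt.
weights = {"A": 0.8, "B": 0.1, "C": 0.1}
weighted_mean = sum(weights[c] * growth[c] for c in growth)

print(unweighted_mean)  # positive
print(weighted_mean)    # negative
```

If the weights aren't reported in the paper, a reader has no way to know the sign of the result depends on them. That's the ethical failure being described.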
> Despite having faked the paper that introduced austerity to the world, had an immense impact on the economy, and caused massive and needless suffering for hundreds of millions of people. [..] But they got a promotion for it.
I find it ironic that we have a reproducibility problem in many scientific disciplines, that we find that many notable scientists have been less than 100% honest in their research, that peer review fails to catch many of these errors, and still there are many people running around saying stupid bumper sticker slogans like “trust the science” and “I believe in science” and “95% of scientists agree…”
What we’ve learned is that scientists are human and they have all the human weaknesses and venality as any other group. The practice of science in the modern day (publish or perish, beg for grants, non-blind peer review, financial conflicts of interest, politically charged university atmosphere) is rife with incentives to predetermine desired outcomes rather than let the chips fall as they may. Individual scientists of course will fall all across the distribution of integrity.
I think the scientific method is one of the greatest gifts, if not the greatest, that the minds of the Renaissance gave us. BUT I refuse to be browbeaten by priests in lab coats or their acolytes. If you cite scientific papers in front of me trying to make a political case for a course of action, I will use my own judgment, informed by the paper as well as other factors like motivations and so forth.
It's not ironic. It's a reflection of the fact that "scientists" are not a homogeneous group, and we are currently living through a slow-moving revolution in which some scientists are, broadly, taking a stand against fraud and attempting to dismantle the power/incentive structures that allow it to persist. The system is imperfect, and the work is in progress.
"Trust the science" is a poorly-phrased attempt to counter the persistent conspiracy theories that various science disciplines, and the scientists that work therein, are part of this or that conspiracy to reduce your freedoms, mislead the public, etc. Or the related conspiracy theory that science broadly is bullshit and that the primary motivation of any scientific study is the continued justification of funding, rather than pursuit of knowledge.
The fact that scientists can be wrong, some studies are faked or their results are overstated, etc. has little or nothing to do with it.
> I refuse to be browbeaten by priests in lab coats or their acolytes. If you cite scientific papers in front of me trying to make a political case for a course of action, I will use my own judgment, informed by the paper as well as other factors like motivations and so forth.
And this is exactly the kind of attitude that I described above. It's the ACAB approach applied to scientists, where it makes no sense if you look at what scientists actually do and care about. A huge number of scientists are persistently and actively fighting against the "priest in lab coat" archetype.
Do you work in science? I have, and I now agree with some of the "conspiracy theories". I didn't before, but as someone who'd always been fascinated with knowledge, research, and human progress from a young age, I was thoroughly disappointed when I discovered the reality of it isn't as I had imagined in my almost religious belief in science. The reality was that I found people who worked in it like a regular job. They were not driven by a desire to further truth and expand human knowledge; rather, just like most other people, they care about their salaries, funding, promotions, intrigues/political games, etc.
The mistake is thinking there must be a conspiracy to make it true. In reality it's just human nature and group behavior. One might think some of the most critical thinkers work in science, and it can be true in some cases, but more often than not those are actually the ones that get filtered out. Ironically, you must not question too much if you're a researcher who wants to receive funding. The safest thing to do is research one of the latest hypes and have findings similar to what other groups find. Don't go against the current, that gets you in trouble.
> "Trust the science" is a poorly-phrased attempt to counter the persistent conspiracy theories that various science disciplines, and the scientists that work therein, are part of this or that conspiracy to reduce your freedoms, mislead the public, etc. Or the related conspiracy theory that science broadly is bullshit and that the primary motivation of any scientific study is the continued justification of funding, rather than pursuit of knowledge
Actually no. It's a purely anti-science, authoritarian approach to avoid any dissent when politically convenient.
Covid policies were the perfect example, and labeling/censoring instead of debating (even other scientists, like those behind the Great Barrington Declaration) shows that it's not really about science but about control.
Corruption is not a grand conspiracy but a natural thing that occurs in bureaucratic institutions, and nonetheless, accusing and exposing corruption and inconsistency was a big no in anything related to covid (lockdowns, mandates, censorship, etc). Freedoms WERE limited unconstitutionally and selectively, but people would be telling you "my fredumbs, grandmakiller, conspiracy theorist" because they couldn't really argue except by way of appeals to authority and slogans such as "trust the science".
The science wasn't singular and wasn't uniform, but you couldn't even compare "sciences". You just had to take it.
Those of us who engage in science are not afraid to expose our results and hypotheses, and we don't need censorship.
Peer review can't catch lies. It reviews your methodology and whether your findings are supported by your own data. If you outright falsify your data, peer review won't catch that. Replication catches lies.
Many studies, even those relied on for medical products, don't get replicated by independent entities.
It's safe to assume any non-replicated study is an outright fabrication at this point.
It wasn't the data that the study produced, it was the data the study consumed (odometer readings from insurance companies). Obviously, that is independently verifiable. Also, this was only discovered AFTER numerous failed replication attempts.
9x% of scientists agree has always struck me as profoundly stupid. Are these scientists in the same field? Has there been a Galilean discovery that is not widely accepted yet? Who surveyed a statistically significant portion of “scientists”? What credentials does one need to be considered in that population? Does a master of computer science have authority on biology, physics, chemistry, ecology, geology, or any other number of fields?
I've found that a good rule of thumb is to ignore anybody who is self-describing as "a scientist". If they self-describe as some specific kind of scientist then I'm more than willing to listen. If somebody says "as a biologist" or "as a geologist" or "as a whateverologist", that's fine. If they identify themselves in an even narrower way, that's even better. But if somebody says "as a scientist", just generic 'scientist', then they're trying to imply a very broad sort of expertise that almost certainly is not justified. They're basically saying "as an expert in everything..." Such people are usually safe to ignore.
Sounds like something that's usually based on context. To someone who's not a scientist "I'm a scientist" is most likely more helpful than "I'm a particle physicist".
Same as you might say "I'm an engineer" to a non-technical person vs. "I'm a Go backend engineer" on a tech meetup.
No, because the general public is comfortable with the words like 'physicist', 'chemist', 'biologist', 'geologist', etc.
If a particle physicist is opining on particle physics to the general public, they'll self describe "as a particle physicist" or at the very least "as a physicist". They'll use the most precise term that is likely to be understood by the audience. Only if that person isn't a physicist at all will 'scientist' be the most precise expertise they can claim. The general public is comfortable with the word physicist, so there's no reason for a physicist to avoid identifying as such when discussing physics with the general public.
On the other hand, a marine biologist who is opining on particle physics may self describe "as a scientist", because that's the most precise level of expertise they can claim without lying. Saying "as a biologist" when discussing particle physics wouldn't confer the desired credibility, so a less precise term is used.
Conspicuous lack of precision is the tell:
If people are discussing basketball and somebody says "as an athlete", it's safe to conclude they aren't a basketball player. They would have said "as a basketball player" if they were.
If people are discussing motherhood and somebody says "as a parent", it's fairly safe to conclude they aren't a mother. If they were a mother, they'd say "as a mother" because that more precisely relates to the topic of motherhood.
If somebody is opining on firefighting practices and says "as an emergency responder", they aren't a firefighter. They might be an EMT or a cop, but a firefighter would say "as a firefighter" when discussing firefighting.
And finally, if a Go backend engineer claims his expertise to be "programmer" to a general audience, that's fine. But if he's saying "as an engineer" or "as a scientist" then he should probably be ignored. Yes, I've known people to do this, programmers calling themselves "scientist" because they have a CS degree, opining on bullshit unrelated to anything they ever studied.
Why did you crawl through this persons post history? Why do you think his opinion should result in a ban? What does any of this have to do with what’s being discussed here?
Seems to me like maybe you should be banned. You’re the only one not adding anything to the conversation and instead trying to instigate a flame war. You’d be happier circle jerking on Reddit.
Post histories are the only way to hold people on a pseudonymous website accountable.
I think I made it very clear why their opinion should result in a ban.
Also, as a general rule, it's impolite to use anything other than "they" if you don't know someone's pronouns.
edit: any comments on the object-level issue at hand, shkkmo? Or are you going to be like the worst people on the internet and focus entirely on the meta level?
Also, I don't trust dang enough, given reports from Hector Martin and others of being ignored. So I post about it here. (As to why I don't just leave HN, well, it's too important a forum.)
I agree with you that papers shouldn't be treated like some holy text. That would go against the core of scientific method. We should inspect the core research.
That said, there are many who would use the fact that "humans are flawed" to throw out scientific research and distrust it. Scientific research will still be more reliable and dependable than the spew of others because there is rigor to it.
But the fact that this research is public and available is why frauds like Gino can be caught. This is the scientific method working. Media tends to amplify the hell out of the fantastical, fraudulent or not. But this IS the system working.
We may step in the wrong direction here and there, but we will inevitably push towards truth.
Reminder that "peer review" is part of the scientific journal system, not part of the scientific process itself. Journals have it done to serve their own interests, to save on paper/printing expenses and maintain their prestige. Peer review isn't replication (which is actually important to the scientific process). Google n-grams for "peer review" and you'll see that nobody was talking about it before the 1960s.
Yes, despite difficulty reproducing results in some fields, it still generally makes sense to trust scientific results. The scientific method is the best thing humanity has for establishing knowledge from empirical data.
Now if you'd prefer to put your faith in religion, or some demagogue, that's fine. And you're free to go eat all the horse paste you want. But overall dismissal of science as "stupid" is just a flat out ridiculous take.
> and still there are many people running around saying stupid bumper sticker slogans like “trust the science” and “I believe in science” and “95% of scientists agree”
That has everything to do with authoritarianism and how, in partisan politics and during the push for draconian COVID policies (in many different countries), any dissent or disagreement was treated as a terrible sin that HAD to be punished.
Even nowadays, on HN, you'll get people repeating the mantra, downvoting, and acting super offended that someone dares point out that just because govs/tech/pharma/media said so doesn't mean it's true, especially when there's money and politics involved, and that security or hygiene theater is nothing new.
There's a lot wrong with your argument, but let's go down the list.
1. The optimal rate of fraud is not 0%, just like the optimal rate of shoplifting in a store is not 0%, and the optimal amount of crime in a society is not zero. Science, just like the grocery store and the street, has to operate on some degree of trust, and not non-stop big-brother gestapo police state surveillance. Just like a zero-crime society, a zero-scientific-fraud society would have a lot of negative externalities that aren't accounted for by 'bring fraud down to zero' paperclip-maximization. That's why it makes sense to talk about what can be done to reduce fraud, but makes no sense to throw yourself off a roof when an instance of it actually happens. It's a statistical inevitability. Look at the blast radius, pick up the pieces, have a conversation about what, if anything, could be done better next time.
2. If even one bit of dishonesty is enough to distrust a system, you shouldn't trust anything. But nobody actually thinks or operates in that kind of insane solipsism. You wouldn't get anything done.
3. What actually happens is that people from low-integrity fields (Politics, sponsored media, lobbying, organizational PR, internal corporate politics) are trying their fucking hardest to use that argument to bring public perception of everyone else down to their level. If they are blatantly lying half the time, and someone working in science blatantly lies, say, 0.5% of the time, and is blatantly wrong 5% of the time, and is mostly-right, but subtly wrong[0] the rest of it, that solipsist from #2 would look at it and conclude 'well, there's no difference'. He couldn't be more wrong, and I would like to gamble on loaded-dice games with anyone who disagrees with me on this point.
4. You shouldn't use a single paper, or two papers, or even fifteen papers as an arbiter of expert consensus. Papers disagree. Expert consensus is reached by looking at the whole body of a field. This requires extensive understanding of a field, which you're not going to get from clicking a few links over an afternoon.
5. You shouldn't use expert consensus as a definitive arbiter of truth.
6. But you sure as hell should defer to an expert, over your own judgement, about non-trivial technical matters. I don't care how smart you are and how many papers you read last week[1], I have serious doubts that you're going to be any good at performing, say, an open-heart surgery. There's a reason we leave it to the experts, even if what they know is wrong. [2] [3]
[0] Almost all of our understanding of the world is at least subtly wrong. That's fine, though, many bits of that understanding are, while subtly wrong, useful.
[1] It's a different story if you read ten years worth of papers. And not just the ones that support your biases. And also spent five to ten years working with, and for experts in the field, making a lot of mistakes along the way. You don't get mastery of a subject by reading, you do it by doing, making mistakes, and correcting them.
[2] What experts know is almost certainly, objectively wrong. But it's often useful, and better than blind stumbling.
[3] If we treated the low-integrity fields with an iota of the scrutiny you want to apply to the lab-coated tech-priests, half the world wouldn't have a job.
Well stated. Inevitably this discussion reverts to a practical one: what happened with COVID? A few powerful people used "trust the science" to elevate their own conclusions (which, to use your #3 and #5, are subtly wrong some of the time) to forcibly silence discussion. We have ample proof that top health officials and top political officials did exactly that.
People who spend an afternoon clicking links, to use your language, are entitled to voice their opinions, however ignorant they are. Just like you're entitled to say that the Earth is 6,000 years old. Most people will ignore it, but those who are receptive to those beliefs already will listen.
Again, using your language, "non-stop big-brother gestapo police state surveillance" of wrong opinions does have "a lot of negative externalities."
Those who say "trust the science" in the typical way it seems to be meant are rarely people who understand what science is. They're merely virtue signalling fan boys and groupies trying to demonstrate their loyalty to the regime. Because that's why they say it. If the regime said "don't trust the science", they'd say that, but scientism happens to be the state religion.
I firmly believe that guys like Dawkins/Harris/Tyson have done substantial harm to a certain kind of person's perception of the world. One can quite easily see the worldview they espouse, in the broadest sense, is compatible with the worst of human tendencies and ideologies. I find people can gain utterly monstrous views about fellow human beings through the superiority they earn for simply "believing in Science."
> One can quite easily see the worldview they espouse, in the broadest sense, is compatible with the worst of human tendencies and ideologies.
Could you name one or more major orthodox worldviews or institutions that are not, in practice, compatible with amplifying and perpetuating the worst of human tendencies and ideologies?
'Orthodox' is an interesting qualifier here.. and with it I am inclined to say 'no'. But saying that is more a comment on the fraught history of humanity, rather than an agreement on the more reductive or banal point you want to be making here.
But even if I concede to maybe a more charitable reading of your question, that any conceivable worldview could be compatible with fascism or whatever, it doesn't mean that some can't be worse than others! And, IMO, this one is definitely on the more compatible side of the spectrum.
But I totally respect your opinion if you want to argue about degrees here, because either way any given ideology is substantially determined and sustained by material conditions, social relations, etc., anyway; it's always more a symptom than a problem itself.
It's a necessary qualifier, because we're not talking about some niche five-person ideology or group that hasn't actually been tested by time. We need to talk about one that I've heard of, and can plausibly evaluate!
> But saying that is more a comment on the fraught history of humanity, rather than an agreement on the more reductive or banal point you want to be making here.
No, it's a comment that most structures of thought and reasoning are not actually a guardrail that will protect you against poor behaviour, and that in itself, without specifics, it's a poor criticism to level against a type of thinking, or an idea.
You can criticize the specifics of the problems of Dawkins, but it's a cheap, parting shot to just say 'Well, Dawkins is compatible with horrible things', without elaborating further. Almost everything is compatible with horrible things! It's a low-information criticism!
> When discussing Gino with Insider, multiple people brought up the idea of "me-search" — that researchers gravitate to topics that are of personal interest to them. "We're our own therapists, in a sense," Gordon Pennycook, a behavioral-science professor, said.
I guess that might be common with students picking psychology as a major, too. Noticed it with a few acquaintances of mine.
> In a study about "contagious dishonesty" that Gino coauthored with Ariely, the researchers found that students were more likely to cheat if they saw someone they believed to also attend their university cheating.
That’s fascinating since both of them fabricated data. At least they can add themselves as the N+1 data point to their study. It’s retracted already, but still good enough for a TED talk or a book probably. /s
Wonder if we have many sincere “coming clean” stories where they explain the thought process behind it. Why did they do it, and at what point did they decide to go through with it? Did they share their secret with others? Did they feel shame? Etc.
That's one way of turning a vice into a virtue... "Right, I've been found out to have fabricated years of research, entire PhDs are now junk. How can I make some more money out of this?"
I've never seen a coming clean story. But this has been going on at every scale for decades.
The scientific method isn't broken but the institutions that surround it and the people riding the tops of it tend to be. It'd be easy to fix but you'd instantly make a million high power enemies with psychopathic tendencies. Oh well.
> I guess that might be common with students picking psychology as a major, too. Noticed it with a few acquaintances of mine.
That's no secret. Any psych professor will tell you, it's rare to find a student (and consequently, psychologist) whose interest was not sparked by a personal history of mental illness.
In a similar vein, UPenn professor Angela Duckworth of "Grit" fame was caught misrepresenting statistics to embellish her claims a few years ago: https://www.npr.org/sections/ed/2016/05/25/479172868/angela-.... Who knew psychology had such questionable science?
I mean, people don't like to say it because you get accused of being snobbish at best, or of lacking culture and being close-minded at worst, but everyone knows that "soft" sciences are mostly bullshit.
I started my education with a bachelor of maths and an engineering degree before doing a lot of economics while in business school. The field is full of interesting ideas, but methodologically it is mostly garbage, even if you stick to the more mathematically oriented part of it. And that's economics, which is supposed to be the most serious of the soft sciences.
Agreed, I have a similar background. Science is done in economics, but a lot of economics ends up being thought experiments turned into a graph, eventually taken as fact. Once you get into the science of the shapes of everyone's multidimensional personal utility curves and rational behavior, you are drifting from actual observed behavior. That's the pivot to the other side, irrational behavior, that Dan Ariely loved to push based on observations, but now we find that's full of false claims.
Based on the NPR article you linked this sounds like a normal scientific debate, though? One researcher says that data supports their theory, another says that it doesn't if you look at the data more carefully. I don't see accusations of fraud here, just different interpretation of facts, and a valid challenge to an earlier theory. This is how science is supposed to work, isn't it?
Gino, on the other hand, is accused of outright fraud. Data Colada claims she took real data and changed it to support her theory, or possibly created entirely fake data.
Harkening back to episode 73, Alexa and Yoel discuss recent evidence of fraud documented in the Data Colada blog post "Clusterfake." The post is the first in a series of four, which will collectively detail evidence of fraud in four papers co-authored by Harvard Business School Professor Francesca Gino. […] Finally, they consider what this means for a field still struggling to build a more trustworthy foundation.
They've had a bunch of trouble in medicine-adjacent fields. Nutrition and sports science has had an especially tough time, but it's been cropping up all over the field.
I do think most of the problem is rooted in the fact that it's difficult to do this type of science, especially without serious ethical breaches. There are so many confounding variables, and self-reported observational data just isn't a very good tool. To truly eliminate confounders, you basically need to lock someone up for decades and control their entire lifestyle.
It is to the merit of psychology. Other branches of social sciences don't dare to propose such a thing. They know that their replication numbers are going to be worse than psychology's.
For fuck's sake, marketing and management have better numbers than psychology. It is to their merit that they're admitting they have a problem, but it is 100% a problem of the field of study.
You misrepresent the linked article which doesn't report replication rates but forecasted replication rates from participants "recruited through a number of mailing lists, Twitter and blog posts".
They used a prediction market, which at the time the article was published was still to resolve. This is standard practice and methodologically sound. They didn't go around asking people on Twitter.
> Participants were recruited through a number of mailing lists, Twitter and blog posts. [...] For all analyses, we use survey responses; for analysis (iii), we additionally use responses to a demographic survey that included a question on academic interests; and for analysis (iv), we additionally use the prediction market data.
So it sounds like they only used prediction-market data "for analysis (iv)", which they describe as:
> [...] (iv) whether survey-based aggregated forecasts and market-based forecasts are correlated.
---
The last line of their paper:
> If our forecasts hold up, it will be interesting to investigate if specific factors (such as different methodologies and policies) can be identified that influence replication rates.
So, it doesn't sound like they're claiming that their forecasts should be assumed reliable.
I didn't say that the methodology was not standard practice or methodologically sound. I said you misrepresented the study, which you did. You said "Most social sciences have numbers that are far better than psychology's." in the context of a conversation about *replication numbers*. You did not say "in a prediction market, participants ("recruited through a number of mailing lists, Twitter and blog posts") have lower replication expectation rates for Psychology than other social sciences" -- if you had then, well, you wouldn't have misrepresented the study.
Any meta-study trying to get replication numbers would likely use a combination of methods, including prediction markets, with techniques like having a small fraction of the contracts resolve.
So no, implying this is a Twitter poll is a far worse misrepresentation than even the most uncharitable interpretation of what I said.
> Any meta-study trying to get replication numbers would likely use a combination of methods, including prediction markets, with techniques like having a small fraction of the contracts resolve.
This is nonsensical obfuscation.
Poor attempt at moving the goalposts -- for the record, I did not imply; I DIRECTLY QUOTED from the article.
It's not just psychology. All science is affected. Stuart Ritchie's Science Fictions describes cases from across many disciplines. The worst are cases of misconduct in medical research, with maybe the worst case being Paolo Macchiarini, who fooled the Karolinska Institute of Nobel fame into letting him experiment on patients with trachea replacement surgeries:
https://news.ki.se/the-macchiarini-case-timeline
For those of you interested in the accusations, here are the primary sources for them[1]. There are multiple (it is a 4-part series) and it looks quite damning.
Might be worth mentioning that while not considered "fake" data per se, Amy Cuddy's famous Power Poses research (one of the most popular TED talks) was debunked and she eventually left Harvard.
In the 6th grade I had a science project due. I had, of course, done nothing until the day before, so I made up weeks of data in the form of journal entries, from which bar and line charts were created, since I was good with computers back when having pictures off the internet was considered fancy. My aim was just a passing grade and not being found out.
Lo and behold, not only do I get put into the school's science fair, I actually end up winning second place. This is actually how I became the science kid from then on, and to live up to this name I invested more time into both math and science classes. Eventually I went from an okay student to a top student, and becoming a real scientist was something I always wished I had done.
Moral of the story? There isn't one... I just never realized just how much 'real' science I was doing back then.
It amazes me that in modern times people are not required to publish their raw data. Doing so would (a) prevent errors in stats going unnoticed (b) remove the incentive to use statistics creatively (c) make it much easier and faster to find errors (d) make fraud detection easier.
Increasingly I think we should be offering either direct funding or bounties to groups that DO actually check this stuff, attempt replication etc. The days of "a gentleman's word is his bond" are long gone (if they ever existed).
I think it’d be quite nice to celebrate process more than results. Investigating a topic and demonstrating lacks of correlation or causation is remarkably useful for future investigators. The opposite can be said if a result falsely gets promoted which causes others to waste time in attempts of reproduction or continuing a mostly failed path of analysis.
The "multi-million dollar empire" is a big part of the problem here. When scientists build a lucrative business that is based on the truth of some hypothesis they've been studying it immediately creates a conflict of interest. How can they ever be trusted to report results that would falsify the hypothesis?
Personally I'm less worried about outright-fake data than sloppiness. The article mentions Brian Wansink; AFAIK he never deliberately invented anything, but he did a whole bunch of p-hacking, and his lab was so sloppy that data got mislabeled (one study allegedly done on 8-11 year olds was actually done on pre-schoolers). He was "caught" when he published a blog post[2] giving advice to young scientists and it went viral. Clearly no misconduct intended.
Most of this garbage research takes place in domains that don't matter, where people are hardly taking the results seriously anyway (see also power posing). Typically when somebody actually cares about the truth of a result, they kick the tires and vet the result pretty thoroughly. The fraudulent LaCour and Green study from a few years ago was exposed by Broockman and Kalla, who were attempting a related study (rather than being science "vigilantes").
But not always. Clearly people were acting on Gino's bogus research. The Reinhart-Rogoff paper [0] was discussed globally, and may have actually influenced fiscal policy. They used Excel for analysis, made a click-and-drag mistake, and improperly excluded a couple of datapoints. It appears this exclusion was accidental [1]. Nevertheless, including those points changes the conclusion.
Catching this error took 3 years. It probably would've been caught faster if they had published their data alongside the paper, although apparently they actually did provide it upon request, so if people checked these things more frequently it would've been caught earlier.
[1] "A coding error in the RR working spreadsheet entirely excludes five countries, Australia, Austria, Belgium, Canada, and Denmark, from the analysis. The omitted countries are selected alphabetically and, hence, likely randomly with respect to economic relationships." http://peri.umass.edu/fileadmin/pdf/working_papers/working_p...
One of the problems is that if you run more studies, you are statistically more likely to eventually yield a statistically significant result. The tests that folks run are not designed for series of statistical tests, but for individual tests. So when you run a ton of experiments and throw away all the statistically insignificant results, you are engaging in the form of p-hacking that the discipline of psychology is currently struggling with. The overarching replication crisis.
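The inflation from repeated testing described above is easy to quantify: under a 0.05 threshold, the chance of at least one false positive across k independent studies of a true null effect is 1 − 0.95^k. A minimal sketch of that arithmetic (hypothetical simulation, standard library only):

```python
import random

ALPHA = 0.05        # per-test significance threshold
EXPERIMENTS = 20    # independent studies of a true null effect
TRIALS = 10_000     # simulated "research programs"

def null_p_value():
    # Under the null hypothesis, p-values are uniform on [0, 1].
    return random.random()

hits = 0
for _ in range(TRIALS):
    # A program "finds an effect" if any one of its experiments is significant.
    if any(null_p_value() < ALPHA for _ in range(EXPERIMENTS)):
        hits += 1

analytic = 1 - (1 - ALPHA) ** EXPERIMENTS
print(f"Analytic:  {analytic:.3f}")       # about 0.642 for 20 tests
print(f"Simulated: {hits / TRIALS:.3f}")  # close to the analytic value
```

So a lab that quietly runs twenty null experiments and reports only the significant one has roughly a two-in-three chance of having something "publishable", which is exactly the selection effect the comment describes.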
And you may be thinking that's all harmless, but it's really not. Folks running experiments are competing for jobs with other folks who might do more worthwhile research. Because psychologists are good at running the pipeline, and they have found a nearly inexhaustible source of publications (experiments run on undergrad students), they raise the bar for everyone else.
I remember just a few days ago a discussion on this site about how a hiring committee skipped over the "better" applicant with more publications and citations than the successful candidate. The reality is that there are plenty of faculty on hiring committees out there who are looking for candidates with a good publication record, because that is how they assess if someone is a good researcher. Psychologists have already succeeded in pushing out most of the less experiment-prone folks in their discipline. The pressure extends to adjacent fields.
I simply don't get this. I say the same thing every time I hear of research/scientific fraud.
I can understand why researchers want to commit fraud but why do they actually do it? Is it because they actually believe they can get away with it indefinitely, or are they gambling they can do so long enough to meet their goals—fool everyone long enough to get through their careers after which they couldn't care less about their reputations?
I find it hard to believe these smart people actually believe they can get away with it. Surely they don't and they must know their reputations will be ruined when they're found out.
As sure as day follows night, they'll eventually get caught out by others who are researching their papers and comparing their data with other researchers whether it's within weeks of publication or decades later. And they must know that AI will expose them as it can automate comparisons with similar research and their data will be found to be anomalous.
1) People definitely did get away with this, all the time. Historically, researchers had no obligation (at least that was practically enforced) to maintain and share their data and code. Peer review would check for specific methodological flaws and nothing deeper. If someone emailed you about your 1995 study in 2005, you'd say "I no longer have the code," or, more likely, simply never email them back.
2) In a highly competitive landscape where cheating is effective, the "winners" will be heavily selected for willingness to maximize the use of cheating. Even if only 1% of the population is willing to commit explicit fraud, that 1% is going to be heavily overrepresented in a world where explicit fraud gets you the top-tier publications that bring you to prominence.
3) Ariely and Gino both made millions of dollars from their fraud that they will not have to give back. It's worth emphasizing how poorly Ariely's fraud was executed -- he did the laziest possible fraud and easily converted it into money and prestige.
4) Related to the first point, it's hard to overstate how much the culture has shifted since the '08-'12 time period. The replication crisis was just kicking into gear, and only among people who were paying attention to that type of thing. Ariely didn't come up reading Andrew Gelman's blog. There's simply far more light on any paper today than there would've been 10 years ago -- statistical and methodological understanding have come a long way as cohorts of academics came up in the shadow of the replication crisis. Having established credible groups like Data Colada to centralize these analyses has also been a big deal; tenured profs can't bully Data Colada by threatening their career progression the same way they could if accused via email by a random grad student.
I agree with most of your comments but perhaps with a different emphasis, and you're right, some people do get away with falsifying data but the risks are very high when that data actually matters and is subject to scrutiny.
I amplify my comments below in my reply to droopyEyelids.
I think the missing piece is your belief that "As sure as day follows night, they'll eventually get caught"
Academic research is sort of a "Tyranny of Structurelessness"[1] system where people can't risk alienating themselves from the underlying power structures or it'll destroy their career. On top of that, there isn't a good mechanism to report fraudulent research. So when someone finds a paper that can't be replicated, the sensible thing to do is ignore it and move on. Causing a ruckus will get them labeled a troublemaker, and there isn't any guarantee it'll make a difference.
And further, there are no material incentives for exposing a bad paper. It doesn't get the researcher another citation which is their currency. It just wastes time they could have spent working towards a new paper of their own.
I essentially agree with what you say, and similarly with bweing's comment above, but despite that, committing scientific fraud is just such a risky business. If one falsifies results and the subject matter turns out to be irrelevant in the grand scheme of things, one might get away with it because it is irrelevant: no one would care anyway. If it's high profile and contentious, then one's likely to get caught out sooner or later.
Let's take two cases. The Piltdown Man fraud took place in 1912, and it wasn't until over 40 years later, in the '50s, that it was conclusively determined to be a fraud; it then took about another 60 years to determine who the fraudster was. Right, the fraudster got away with the deception in his lifetime, but he goes down in history with his reputation in tatters. Whether he considered this, or perhaps was hoping to be discovered and wanted to get himself into the history books when his name would otherwise be lost to history, is a moot point.
Now to the infamous Jan Hendrik Schön https://en.m.wikipedia.org/wiki/Sch%C3%B6n_scandal. He was caught out quickly. My background is electronics so I understand what he did and I can say his chances of getting caught were damned high and he was caught. He must have known the risk he was taking, how could he not when the fraud was intrinsically recognizable by its nature? Then he makes his situation worse by virtue of repeating the fraud many times, so he was left without a 'plausible deniability' defense on grounds of a mistake/error/mix-up etc.
He's obviously not completely stupid or he wouldn't have been in that job. I'll rephrase that, he's not so stupid that he wouldn't recognize the risks. So why did he ruin his reputation? He deliberately shot himself in the foot without good reason—unless his reason was to become notorious.
People may think they can get away with faking data but if it's in a high profile field with others hanging off their results things will go pear-shaped very quickly. Remember, we've had excellent tools like Benford's Law for years and that's damned good at detecting forged data: https://en.m.wikipedia.org/wiki/Benford%27s_law.
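Benford's law says that in many naturally occurring datasets the leading digit d appears with probability log10(1 + 1/d), so digit 1 leads about 30% of the time while digit 9 leads under 5%. A minimal sketch of the kind of first-digit screen a fraud check might run (hypothetical example data, standard library only; a real screen would use a proper chi-squared test with a significance cutoff):

```python
import math
from collections import Counter

def leading_digit(x):
    # First significant digit of a positive number.
    return int(str(abs(x)).lstrip("0.")[0])

def benford_expected(d):
    # P(leading digit = d) = log10(1 + 1/d)
    return math.log10(1 + 1 / d)

def first_digit_chi2(values):
    """Chi-squared distance between observed leading digits and Benford."""
    counts = Counter(leading_digit(v) for v in values)
    n = len(values)
    stat = 0.0
    for d in range(1, 10):
        expected = n * benford_expected(d)
        stat += (counts.get(d, 0) - expected) ** 2 / expected
    return stat

# Multiplicative-growth data (e.g. powers of 2) follows Benford closely;
# uniformly invented numbers spread leading digits evenly and do not.
natural = [2 ** k for k in range(1, 200)]
invented = list(range(100, 1000))

print(first_digit_chi2(natural))   # small: consistent with Benford
print(first_digit_chi2(invented))  # large: flagged as suspicious
```

This is of course only a red flag, not proof: plenty of legitimate data (bounded measurements, assigned IDs) isn't Benford-distributed, which is why such screens are a starting point for scrutiny rather than a verdict.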
Knowing what I know, there's no way I'd risk my reputation faking data; I'd forever be living in fear of being caught out. Moreover, I wouldn't do it for ethical reasons, as in the end I'd be not only cheating others but also myself.
You are assuming that the rest of the researchers are interested in exposing them. They may be in the same boat (institutions), paid by the same people to produce similar "research".
Still on the faculty roster. Still "Distinguished Professor of Behavioral Economics".
So why not fake data and cheat? You might get discovered, but by then you'd have given tons of TED talks and speaking engagements, and probably made millions of dollars. You get to keep that, and you get to coast along as a professor for the rest of your life.
I'm a big fan of open data but struggle to articulate its value in a quantified way, beyond my gut feeling that more transparency leads to more collaboration, which leads to more research, which leads to more knowledge.
But I think examples like this mean that we're getting closer to a norm where, if a study doesn't release all its data (raw, cleaned, and analytical tables), it will be discounted or ignored. And there's fame and money to be made in finding data irregularities.
We still need some way to audit and confirm data regularity, but that work is boring and I don’t think there’s money in it. Perhaps it could become a chore assignment for students: pick a dataset, analyze it for rigor, and post the results somewhere, so that every paper has a link to the people who checked its data and found it sound. I hope that would show 99.9% of studies have proper data, rather than us only hearing about the (hopefully) rare studies that falsify data, or mishandle it and draw inaccurate conclusions.
Honestly, most of these studies are getting caught simply by not having the expected distribution in their data. I suspect improving this analysis will more often just lead to better fakes.
Unfortunately, I think this means we need to disregard these popsci papers, especially in the social sciences, unless there's a clearly disinterested party providing the data as well.
Sociological studies should be disregarded and remain uncited until they're reproduced by an unaffiliated team that isn't ideologically aligned with the original researchers. Which isn't gonna happen for most studies since it's prohibitively expensive and most sociologists share the same biases.
The Very Bad Wizards podcast talked a little about this case in a recent episode (https://verybadwizards.com/episode/episode-263-free-yoel - main topic is also interesting, about a professor not getting hired for what seem like not-great reasons). The podcast notes include a link to the Data Colada post about the Gino fraud: http://datacolada.org/109
Holy hell that statement from The Hartford is devastating. Insurance companies are famously risk-averse, so for their legal team to sign off on that...
There's also gov and private research, and things in between. GPS, for example, is DoD, and the government has funded lots of companies that make new things, like In-Q-Tel funding Keyhole, which made the precursor to Google Earth[1], or Oracle, which gets its name from the program it was developed for.[2] This is not to disparage academia and fundamental research, but it's not the only possible way to make new stuff.
It's rather funny you seem to make the case that you can't generalize an entire population based on the actions of individuals, then go on to... generalize entire populations based on the actions of individuals.
You posted this comment 8 times, which is abusive. In addition, you've continued to use HN primarily for flamewar after we asked you to stop (https://news.ycombinator.com/item?id=36796469). You can't do these things on HN, so I've banned the account.
I hate to ban an account that has been here for 15 years, but we don't have much choice when an account is behaving like this—it's not what this site is for, and destroys what it is for.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
Why aren't these scientists jailed for fraud? These people are low-life criminals and should face consequences for fraud, wasting grant money, and wasting people's time. But they aren't poors, so they get special treatment.
Can everyone stop focusing on accused persons? The court of public opinion should not rely on accusations. Donald Trump made a lot of accusations, and did you believe all of them upfront?
Even if you do private research, you still need your work to get peer-reviewed. The pressure comes from needing to make the results sound more dramatic, rather than "meh, some effect is visible, but nothing to lose sleep over" - that will not get you the awards, the grants, and the Nature publications. Almost all of this carries over (even more so) to private research, since in addition to publications, funding is a more pressing need.
Is corporate life any better? I've tried both. And in terms of autonomy, it is a huge relief not to be bound to quarterly evaluations and KPAs and trying to belittle others' work just so that you don't get axed and so on.