Trials avoid high risk patients and underestimate drug harms (nber.org)
64 points by bikenaga 7 hours ago | hide | past | favorite | 28 comments




It's understandable that unusual patients are seen as confounding variables in any study, especially those with small numbers of patients. Though I haven't read beyond the abstract, it also makes sense that larger studies (phase 3 or 4) should not exclude such patients, but perhaps could report results in more than one way -- including only those with the primary malady as well as those with common confounding conditions.

Introducing too many secondary conditions in any trial is an invitation for the drug to fail safety and/or efficacy due to increased demands on both. And as we all know, a huge fraction of drugs fail in phase 3 already. Raising the bar further, without great care, will serve neither patients nor business.


Having been an "investigator" in a few phase 3 and 4 trials, I can confirm that all actions involving subjects must strictly follow the protocols governing conduct of the trial. It is extremely intricate and labor-intensive work, and even the smallest violations of the rules can invalidate part of, or even the entire, trial.

Most trials have long lists of excluded conditions. As you say, one reason is reducing variability among subjects so effects of the treatment can be determined.

This is especially true when the effects of a new treatment are subtle but still quite important. If subjects with serious comorbidities are included, treatment effects can be obscured by those conditions. For example, if a subject is hospitalized, was that because of the treatment, another condition, or some interaction of the condition and treatment?

Initial phase 3 studies necessarily have to strive for as "pure" a study population as possible. Later phase 3/4 studies could in principle cautiously add more severe cases and those with specific comorbidities. However there's a sharp limit to how many variations can be systematically studied due to intrinsic cost and complexity.

The reality is that the burden of sorting out use of treatments in real-world patients falls to clinicians. It's worth noting that the level of support for clinicians reporting their observations has, if anything, declined over decades. IOW valuable information is lost in the increasingly bureaucratic and compartmentalized healthcare systems that now dominate delivery of services.


This could at least be done after release, but I don't think the incentives are there, and collecting the data is incredibly difficult.

Abstract: "The FDA does not formally regulate representativeness, but if trials under-enroll vulnerable patients, the resulting evidence may understate harm from drugs. We study the relationship between trial participation and the risk of drug-induced adverse events for cancer medications using data from the Surveillance, Epidemiology, and End Results Program linked to Medicare claims. Initiating treatment with a cancer drug increases the risk of hospitalization due to serious adverse events (SAE) by 2 percentage points per month (a 250% increase). Heterogeneity in SAE treatment effects can be predicted by patient's comorbidities, frailty, and demographic characteristics. Patients at the 90th percentile of the risk distribution experience a 2.5 times greater increase in SAEs after treatment initiation compared to patients at the 10th percentile of the risk distribution yet are 4 times less likely to enroll in trials. The predicted SAE treatment effects for the drug's target population are 15% larger than the predicted SAE treatment effects for trial enrollees, corresponding to 1 additional induced SAE hospitalization for every 25 patients per year of treatment. We formalize conditions under which regulating representativeness of SAE risk will lead to more externally valid trials, and we discuss how our results could inform regulatory requirements."
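The abstract's headline numbers can be sanity-checked with simple arithmetic. A minimal sketch, assuming (my assumption, not the paper's) that the 2pp/month effect applies to trial enrollees and scales linearly over twelve months of treatment:

```python
# Back-of-envelope check of the abstract's figures.
# Assumption (mine, not the paper's): the monthly SAE effect
# scales linearly over 12 months of treatment.

trial_effect_per_month = 0.02  # +2 percentage points per month (from the abstract)
target_effect_per_month = trial_effect_per_month * 1.15  # 15% larger in target population

# The gap between target-population and trial-enrollee effects is the
# understated risk; annualize it per patient-year of treatment.
extra_per_year = (target_effect_per_month - trial_effect_per_month) * 12

patients_per_extra_sae = 1 / extra_per_year
print(f"~1 extra SAE hospitalization per {patients_per_extra_sae:.0f} patients per year")
```

This crude linearization gives roughly 1 in 28, in the same ballpark as the paper's 1-in-25 figure; the exact number presumably comes from their full model rather than this straight-line extrapolation.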

This seems like an odd criticism.

First off, it ignores the fact that if you include frail patients you'll confound the results of the trial. So there is a good reason for it.

Second, saying "the rate of SAEs is higher than the rate of treatment effect" is a bit silly considering these are cancer trials - without treatment there is a risk of death, so most people are willing to accept SAEs in order to achieve a treatment effect.

Third, saying “the sickest patients saw the highest increase in SAE” seems obvious? It’s exactly what you’d expect.


First, ignoring frail patients means your trial isn't representative of the wider population, so it shouldn't be accepted for general use - only on people who were well-represented in the trial.

Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.

Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.


> Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.

A common reason for a drug (especially a cancer drug) going to trial is because other options have already failed. For example, CAR-T therapies are commonly trialed on R/R (relapsed/refractory) cohorts.

https://www.fda.gov/regulatory-information/search-fda-guidan...

> "In subjects who have early-stage disease and available therapies, the unknown benefits of first-in-human (FIH) CAR T cells may not justify the risks associated with the therapy."


But you’re stating the obvious? It’s not like physicians don’t know trials are designed this way, and for good reasons.

Frail patients confound results. A drug may work great, but you’d never know because your frail patients die for reasons unrelated to the drug.

Second is obvious as well. Doctors know there are treatment alternatives (with the same drawback to trial design).

And I already touched on your third point. The alternative to excluding frail patients is not being able to tell if the drug does anything. In many cases that means the drug isn’t approved.

Excluding frail patients has its drawbacks, but it has benefits as well. This paper acts like the benefits don’t exist.


I've personally been excluded from several depression clinical trials for having suicidal ideations, it makes me wonder just what kind of "depression" they are testing drugs on.

The type of depression that makes the sufferer lie about not having suicidal ideations

Be strong, brother, there is hope. Antidepressants can be really hard to administer; they exclude particularly vulnerable people from trials because they need to be protected the most.

Tangentially related, but I was surprised to learn about the lax attitude towards placebos in trials. Classes of drugs have expected side effects, so it's common to use medications with similar effects as placebos. Last I heard, there is no requirement or expectation to document placebos used, and they are often not mentioned in publications.

> Classes of drugs have expected side effects, so it's common to use medications with similar effects as placebos.

This would be called an "active placebo" and would certainly be documented.

It's common to find controlled trials against an existing drug to demonstrate that the new drug performs better in some way, or at least is equivalent with some benefit like lower toxicity or side effects. In this case, using an active comparison against another drug makes sense.

You wouldn't see a placebo-controlled trial that used an active drug but called it placebo, though. Not only would that never get past the study review, it wouldn't even benefit the study operator because it would make their medication look worse.

In some cases, if the active drug produces a very noticeable effect (e.g. psychedelics) then study operators might try to introduce another compound that produces some effect so patients in both arms feel like they've taken something. Niacin was used in the past because it produces a flushing sensation, although it's not perfect. This is all clearly documented, though.


You were surprised to learn this because it’s not true.

This covers the trials not being fully representative, but largely neglects why that is the case.

The paper defines a population "at high risk of drug-induced serious adverse events", which presumably means they're also the most likely people to be harmed or killed by the drug trial itself.


Also, if they're known to be at such a high risk of adverse events, would they even be given the treatments, trial or not?

This was a plot in an early season of ER.

This problem is actually even worse than the article identifies, because broad definitions of what counts as a "risk" result in broad exclusions.

The most pernicious of these problems is that women--yes, more than half the earth's population--are considered a high-risk group because researchers fear menstrual cycles will affect test results. Until 1993 policy changes, excluding women from trials was the norm. Many trials have not been re-done to include women, and the policies don't cover animal trials, so many rat studies, for example, still do not include female rats--a practice which makes later human trials more dangerous for (human) female participants.

[1] Sort of one citation: https://www.aamc.org/news/why-we-know-so-little-about-women-... There's more than this--I wrote a paper about this in college, but I don't have access to jstor now, so I'm not sure I could find the citations any more.


See also: women.

More generally, whenever you read the percentage of patients noted as having a particular side effect from a medicine, the real percentage is much higher.

> whenever you read the percentage of patients that are noted as having a particular side effect from a medicine, the real percentage is much higher.

The patients self-report their own side effects, then the numbers go into the paper.

Are you suggesting the study operators are tampering with numbers before publishing?


> Are you suggesting the study operators are tampering with numbers before publishing?

No, but did you not read the posted article? Firstly, trials don't select participants unbiasedly. Secondly, many trials are not long enough for the side effects to manifest. Thirdly, I have enough real world experience.


Real world experience doesn't count on HN health articles. If it wasn't documented by a researcher paid via funding from his industry leaders, or a government official trying to fast track his hiring in the public sector for $800k a year, it basically didn't happen.

This is why I encourage the reporting of any and all side-effects of any treatment to the FDA. Information withheld cannot be collected.

https://www.fda.gov/safety/medwatch-fda-safety-information-a...


And this just goes to reinforcing the beliefs of those who are skeptical of medical research. "Trust the science" is all well and good in theory except when the scientists are telling you a selective, cherry-picked story.

Strange how that line of thinking always winds up in places like "vaccines are bad" or "ivermectin cures COVID".

It correctly observes that experts are not always right, and often incorrectly responds by turning to loud, persuasive quackery.

No relation (except in your winding mind).


