There's a famous study about what happens when medical professionals (e.g. cardiologists) have major conferences. The juniors and non-specialists are left running the wards for a week or so. And survival rates go up, not down.
The leading theory is that junior doctors are more likely to wait and see, while seniors intervene and trust in their skills. But those skills aren't as good as they think for the marginal patient...
The other leading theory is that major (non-emergent) operations and management changes (like starting chemotherapy) will often be postponed until the senior physicians get back.
It's only testable in the data if these decisions are being made based on factors that are adequately represented in your statistics (no unobserved confounding). Maybe there's patient A, a smoker who says they're a non-smoker but seems like a liar to the doctor, and an otherwise identical patient B who really is a non-smoker. Patient A is probably higher risk, so there's a difference between treating A during conference week and B later, versus B during conference week and A later. Even supposing the doctor wrote down that they think A might be lying and may actually be a smoker, it's going to be hard to control for those notes in a t-test. It's a shitty example, but hopefully it demonstrates the idea. It's always super hard to make the argument that there's no unobserved confounding.
Conference dates are independent of patient health, so you can just look at the numbers directly across many healthcare systems and several decades. Vastly more data generally trumps better analysis.
> Vastly more data generally trumps better analysis.
If you mean more data in the sense of more covariates, sure. If you mean more data in terms of just more observations, no way. Not when it comes to causal inference.
Think about getting a helicopter to the hospital. It's probably pretty serious, right? You're more likely to die if you get a helicopter evac than if you don't. If all you have recorded is whether each patient got to the hospital via helicopter and whether they died, you're going to see that helicopters are associated with death. If you could control for what's wrong with the person, you might see that helicopters save lives. If you don't have that data, then piling up more observations of (helicopter y/n, died y/n) will just make it look like helivac is murder. No matter what you do with that data, you can't control for what you'd need to, because what you need is an unobserved confounder. When you're trying to establish causation, you have to rule out unobserved confounders, which is tricky. There's always something where someone might say "well, what if it was X?" and the data doesn't contain that answer.
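To make that concrete, here's a minimal toy simulation (my own invented numbers, purely illustrative): severity is the unobserved confounder, the helicopter genuinely helps within each severity group, and yet the recorded (helicopter, died) pairs make it look harmful no matter how many observations you pile up.

```python
import random

random.seed(0)
N = 1_000_000  # lots of observations; the answer stays just as wrong

rows = []
for _ in range(N):
    severe = random.random() < 0.20                      # unobserved confounder
    heli = random.random() < (0.80 if severe else 0.05)  # severe cases fly more
    # The helicopter *reduces* the death rate within each severity group.
    if severe:
        p_death = 0.40 if heli else 0.50
    else:
        p_death = 0.01 if heli else 0.02
    rows.append((heli, random.random() < p_death))

def death_rate(flew):
    group = [died for h, died in rows if h == flew]
    return sum(group) / len(group)

print(f"death rate with helicopter:    {death_rate(True):.3f}")   # ~0.32
print(f"death rate without helicopter: {death_rate(False):.3f}")  # ~0.04
```

Within each group the helicopter cuts the death rate, but because severe cases dominate the helicopter group, the marginal numbers point the other way - and without recording severity, no sample size fixes that.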
In this particular case, that commenter said that conference dates are independent enough of patient health to say that, aside from the important things, everything else is equal. As in, the only difference between the treatments is who is doing the treating. I disagree with that assertion, but that's the kind of argument you need for causation: once we've dealt with/controlled for XYZ, the only difference left is the one we're interested in.
It's very difficult to demonstrate that the only difference left is what you care about when you can't even see certain variables. Someone says "what if people in group A are more likely to be left handed and it's patient handedness instead of doctor quality that's causing death here" but you didn't measure left handedness. More and more observations of group A without measuring handedness can't rule out that maybe they were lefties. So you either measure handedness, or argue that your setup will even it out (via randomization for example), or argue that handedness just doesn't matter here. And not just for handedness, but for everything imaginable.
While intuitively that reasoning seems valid, having more data makes signals clearer. If your sample size is, say, 99.8% of all doctors, that's going to be representative in ways a carefully crafted representative sample of 1,000 doctors simply isn't.
Sampling always loses out to having the entire population.
But you say missing data makes the interpretation incorrect. That's always possible, but only in ways that also fit the original data. Thus you can then perform a study that improves upon the understanding of the correlation, but it's not going to disprove the original correlation.
PS: And even here it's easy to test with more data: just compare total deaths for the month around conferences.
Wouldn't we be able to see the count of operations and management changes go down during these periods and up immediately afterwards if they were being delayed?
This isn't going to sound good, but from the doctor's point of view there may be stress or anxiety over a critical patient. Doing a procedure may be better for the doctor's well-being even if the patient dies. If it's just a small shift in the statistics, they may not feel responsible for the outcome, while not having to worry about a pending decision.
Another important part of good mental health is not to judge yourself negatively when something uncertain goes wrong. All you can do is ask yourself if you made the right choice at the time.
I don't think that's really a counterpoint to the type of "life decisions" discussed in the article. The article is really talking about an individual making a change in their own "status quo". I'd argue in your example that a bias toward intervention is actually the status quo of the experienced doctors.
Not based on any sources, but I believe that these results say more about our subjective experience of decisions (specifically in situations of action vs. inaction) rather than the actual likelihood of outcomes. In other words, I'm not convinced that statistically in a situation of action / inaction an action is always more likely to lead to a more favourable outcome, but it is more likely to be later considered in a more positive light. Some of the reasons for that would be:
* Empowered sense of agency
* Taking action requires resources which would be "wasted" if the action is later considered the wrong choice, which potentially creates rationalization to avoid that (the sunk cost fallacy)
* When you take an action to change a status quo you've probably prepared for the expected negative aspects of the change, diminishing their effect
* When you stay inactive, the potential to act is always there, nagging
In purely personal decisions the subjective experience might actually be more important than the real outcome (barring extreme circumstances), so taking an action might indeed be the right choice regardless. However, I do believe that these effects are also present in decisions that affect others (organizations, families etc.), in which the outcome is more important, and also affected others don't share the decision-maker's perspective and may subjectively experience the decision completely differently as a result.
Excellent point. There's research showing that people who make irrevocable decisions are happier than people who make the same decision but are told they can change their mind later.
Sticking with the status quo is like making a revocable decision not to make a change, whereas deciding to do something else is a less revocable decision.
The book “Transformative Experiences” by LA Paul argues that it’s impossible to rationally decide what to do when faced with a life-altering (aka epistemically transformative) decision, because you have no way of knowing the person you’ll be after the choice, and what their preferences will be (“what it’s like to be them”)
The great leap of faith -- you have to trust in the transformation and that you won't remember who you used to be anyway
Try getting a tattoo for an example of this
Edit: it's quite right that you can't decide rationally, and that's why we're emotional beings. But on the other hand, once you've done it and emerged on the other side, chances are you'll be pleased.
If you don't want to get a tattoo, try wearing professional clothing to work for a month.
Those who don't believe in magic will never find it -- Roald Dahl
The world is not as straightforward as it seems, believe me.
I have Japanese tattoos; where does that place me?
How about another example. Have a child.
Dude, that link is about a tattoo-friendly place. Anyway, nobody really cares; everyone is keeping to themselves, and even the Japanese are covered in English lettering the same way we're covered in kanji. Go figure.
Yo! But you don't know whether you would've regretted them had you gotten them. But then again some people get them for the wrong reasons, granted.
You should only get one when you can practically see it coming out from under your skin.
What LA Paul takes from this is that since you can’t predict what will happen after a transformative experience, the question you’re left with is, “Am I the kind of person who wants to find out what it will be like to have kids / get married / move to Borneo / etc?”
Sure it's possible, from evidence of previous adaptations in your life. You could be a very adaptable person, and very likely to be happy after another adaptation.
Sounds like LA Paul wrote some sort of soft self-help book, to be throwing words around like 'impossible'.
Being adaptable doesn't tell you at all whether it is a good decision, or predict whether you should make it. At the limit no decision affects you at all and all decisions are indifferent.
This [0] is another take on the same research that sums it up better: "If you’re genuinely unsure whether to quit your job or break up, then you probably should." I was quite influenced by this post when deciding to switch jobs - in the end I figured that if I couldn't make up my mind then I should go for the change. So far it's worked out, but of course you can't ever go back and know whether you would have been happier staying put. Life isn't a series of randomised trials. I think the fact that you're considering a new job (or a new partner) tells you that at least some part of you wants to make a change though.
I have lived by the quote “change is better”, and for jobs it has almost always worked out, because fortunately there are plenty of equal or better jobs in IT. But I am not sure this would work out the same way for an aeronautical engineer at Boeing, for example, given there are so few well-paying jobs in that field. Similarly, I am not sure this would work out great if you were thinking about divorce and had kids. In those situations, much more rationalization than a simple “always make the change” directive is necessary.
I'd say if you have kids and are wondering strongly about divorce, then you definitely should make a change - it just might not be to actually get divorced. Could be counseling, couples therapy, 3 day retreat for self analysis. Small changes before the biggest change.
At least for certain types and locations of tech work, and for people who are in demand, interview well, have a good network, etc., the penalty for a "grass is greener" mistake is usually fairly minor. They were unlikely to hang around too long in any case.
However, as you say, if changing jobs is going to require moving across the country to maybe one of a handful of companies suitable to your skill set, you should probably be a lot more conservative.
Even within a company you may (should) have opportunities to change though. Going into management, or changing projects. Most careers aren't a straight line, there are forks in the road. Sometimes they're presented to you and other times you have to create the opportunity for change yourself.
And if you've made a commitment to your partner, you probably should try to talk and figure some way to improve things with them first before jumping ship (unless there's some sort of serious abuse going on). Having that talk can be really difficult, though.
There are very few jobs nowadays where you've made any lasting commitment. They can lay you off at the drop of a hat for any reason whatsoever, so you shouldn't give them anything more than notice that you're leaving, out of courtesy.
Am I right in thinking that the study shows that those who took action were happier, but not that they were actually better off? Does it attempt to account for a bias towards rationalising your actions?
In reality, the alternative to taking action is usually not simply taking no action, it is collecting more information and then deciding whether to take action. Does the study get at this at all?
It does address the potential role of a number of biases, including self-deception / rationalising. To that end, it compares responses against third-party assessments (finding little evidence of self-deception being a major factor).
I agree that decisions aren't typically binary but I suppose even if you are actively researching after 6 months, for the purposes of this comparison you have remained with the status quo.
Why should this depend on convexity at all? The study's premise is marginal benefit, i.e., the estimated risk difference is either minimal or perhaps too poorly known to call either way - that's what makes it marginal. If it were known to be positive or negative, the action would be obvious. I am not defending this study, because the authors seem to rely heavily on a survey in which participants' satisfaction was measured only after 6 months. Typically, people tend to comfort themselves in the short term after making big decisions such as divorce or a job change, even though in the longer run they may have higher accumulated regret. Economists need to develop good models instead of just continuing to do surveys.
Maybe this has some applicability for personal, individual decisions. But to translate this to larger, more important things, affecting way more people?
Take great caution.
If you find yourself to be a leader, CEO, president, or whatever, facing really hard, ambiguous decisions, sometimes the right thing is not to act, yet it is very hard to resist the temptation to act. We end up disincentivizing leaders from hesitating, even when that might be the appropriate thing, because hesitation can be seen as weakness or indecision.
All too often the bias towards action (for anything but personal decisions) is a quick way to get in a lot of trouble.
The study itself looks like it's well done, and the result is statistically highly significant, but it was still just an internet study where people were self-reporting. All you end up with is suggestive evidence, not to be taken too seriously.
I agree that (like most studies) it could be more robust but what I think is more interesting is the underlying explanation (which, like any theory, may not be correct). Making complex decisions is notoriously challenging - e.g. you are comparing factors across different scales and timeframes (see the book 'Farsighted' by Steven Johnson) - and no single rule should be wholly relied upon, but I think that being aware of the potential impacts of loss aversion / status quo bias during the process can only be helpful.
> it was still just an internet study where people were self reporting
And I wonder whether it would largely have been people contemplating a decision because they were dissatisfied/unhappy with the status quo?
My immediate thought is of Nassim Taleb's via negativa - not necessarily any more well researched, but the view that under uncertainty you should do nothing, or indeed do less, to create a better outcome. That inaction is to be preferred where change and action have no clear benefit.
Interesting point. I haven't read that but maybe the appropriate approach depends on the situation / level of uncertainty. For example, in situations where we have little clarity on the options or what the outcomes will look like, it perhaps makes sense to resist any urge to take action because we can't make any sensible comparison between the change and the status quo (and 'action bias' may be more of a factor). In other situations, like leaving one job to go to another, we have a better (though definitely imperfect) idea of what the options / outcomes look like and so may be better placed to make a comparison (in which case loss aversion may be more important). This is just a theory though - I may be totally wrong!
This can really go both ways, which makes it complicated. I feel like contemporary human beings have a massive bias toward action, often to their own detriment. "Do nothing and wait" is often a good piece of advice, but I can't see any political leader, entrepreneur, or 'thought leader' advocating for that. Patience seems to have lost out to instant gratification and the desire to do something, anything.
> ...those who had opted for the choice that involved making a change (as opposed to sticking with the status quo) were more satisfied with their decision and generally happier.
If you study history, specifically military history, you'll see the "be patient" strategy employed successfully (and ignored, unsuccessfully) quite often. Fabius defeated Hannibal by essentially out-waiting him. [1] One of the Thirty-Six Stratagems from ancient Chinese warfare is "Wait at leisure while the enemy labors." [2]
So I would amend the advice to: in marginal decisions, don't be rash, but don't over-analyze. Wait for the right moment, then act.
The example cited in the article could also just be retroactive rationalization. I.e., people want to rationalize their decisions as being the right ones.
Timing a specific action is different than not taking an action. Unless you lie to yourself that you're waiting for the right time in order to not take action. ;-)
> The findings may be explained by 'loss aversion', a cognitive bias that causes potential losses to be weighed more heavily than potential gains (the ratio is somewhere around 2:1, meaning that most people will feel comfortable with a decision only when the likely gains are double the likely losses). As a result, in situations where the benefits and drawbacks of making a change appear to be evenly matched, it may be sensible to take action.
This loss aversion ratio of 2:1 is an average of some kind. How do you find your own ratio? It's the only one that matters. I'm always switching tracks and moving on to the next new (or old, but new to me) thing.
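One rough way to estimate your own ratio - this is a sketch of the standard coin-flip elicitation from the loss-aversion literature, not anything from the article: find the smallest win W at which you'd accept a 50/50 bet to win W or lose a fixed stake L. Your personal ratio is roughly W/L.

```python
def loss_aversion_ratio(loss, smallest_acceptable_win):
    """If you'd only flip a fair coin risking $100 when the upside is
    at least $220, your personal loss aversion ratio is about 2.2."""
    return smallest_acceptable_win / loss

print(loss_aversion_ratio(100, 220))  # 2.2
```

If you'd take the bet at anything better than even money, your ratio is near 1 and the 2:1 average probably overstates your loss aversion.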
A better reason: most people, toward the end of their lives, regret the things they didn't do, rarely the things they've done. Do more things. Change is good.
Happiness or wellbeing are such difficult concepts to measure and compare that I don't think you can ever arrive at a specific, generally-applicable ratio. But that doesn't mean that being aware of loss aversion (and the way it may be shaping your assessment of a decision) isn't helpful in reaching better outcomes.
It's the Monty Hall problem effect. When you take an action, you choose from a small set of options. When you stay with the status quo, you are following a Markov chain that has not been narrowed by your choices (which is worse on average).
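For reference, here's a minimal simulation of the classic Monty Hall game the parent is invoking (whether the analogy to status-quo decisions holds is a separate question): switching wins roughly 2/3 of the time, sticking roughly 1/3.

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # your initial choice
        # Host opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(f"stick:  {play(False):.3f}")   # ~0.333
print(f"switch: {play(True):.3f}")    # ~0.667
```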
Yeah, I'm not sure I agree with that. The Monty Hall problem is about changing a choice on the basis of additional information.
The point the parent is hinting at (which can apply, although I'm not prepared to argue for it as a general rule) is the following: if you are in a not-so-good situation (e.g. you're torn about changing your job, as in the example given in the OP), I can believe that making any reasonable change is better than just plodding along with the status quo.
In a similar vein--and I've run into this a bunch with project management--making a decision (any decision!) is often better than a status quo of deferring and studying for longer.
The door being revealed is time passing and coming to understand your own needs and what the world provides. The status quo is sticking to a historical choice made with less information. When you consider action today, you use fresh information, but to actually realise that advantage you need to pick one of the new choices, even though its outcome is stochastic.
Counterpoint: the way people assess decisions after the fact is no more rational than the way people assess them beforehand.
Some parents with n children wrestle with the decision of whether to have n+1. You seldom hear regret from people who choose to go ahead with having an additional kid - yet science shows that, in aggregate, people with multiple kids tend to be unhappier than people with just one. In this example, each individual claimed to be happier with choosing 'action' over 'inaction', but in aggregate the action made people unhappier.
> ... whenever you cannot decide what you should do, choose the action that represents a change, rather than continuing the status quo.
As a heuristic, this is kind of rough: what does it mean to say you "cannot decide what you should do"?
How hard have you tried?
If this research was mutated into a piece of folk wisdom, I bet it would be interpreted as advising that "change is preferable, so don't overthink it."
But, that would be the wrong conclusion to draw. It's better to make important decisions after getting as much information as possible, and using rational decision-making strategies. So, it's likely always more effective, though more work, to try to push back the boundary of when the word "cannot" becomes applicable than it is to just say "fuck it, let's do this".
Overall, and depending on context, I think this is a moot point. Inaction is as important as action and can be considered a different action in itself. What some call inaction is not necessarily a state of being inactive, but simply observing and analyzing the outcome longer, while focusing efforts elsewhere.
Maybe "being on the fence" or staying in a frame of doubt and uncertainty for too long, without developing future steps for a given plan could really be considered disruptive inaction.
Of course they are not satisfied. That's why they were considering a life-changing decision: because they were not happy with the status quo. Duh!
—"Levitt found that those who had opted for the choice that involved making a change (as opposed to sticking with the status quo) were more satisfied with their decision and generally happier."
This would seem to line up with some old wisdom I heard years ago, where some researcher interviewed elders and found that his subjects regretted chances they had not taken far more than chances they had taken, even if it didn't work out.
That idea was a big focus of mine as I tried to get over fear in dating.
Agency has to be remembered though: your decisions impact others, and change that is received is generally less well tolerated than change that has been initiated. It's real in teams where management decides on a change and the underlings can only agree or not.
Very good point - this study relates to the personal impacts but perhaps not to decisions that impact many people and run counter to what they would individually have decided.
It's whenever you have that feeling of "I just can't decide between these two options", and don't have a strong feeling either way.
Quitting a job or ending a relationship can be a marginal decision, if you're not sure you want to do it. Sometimes you're just fed up with your current employer, or there's an obviously good opportunity available - that's not a marginal decision, that's an easy decision. But sometimes there's an opportunity, but you're not sure you want to take it, but it seems intriguing, but it'd be a big risk, but you wonder if you'll regret not taking it, etc. That's a marginal decision, and the type that the article suggests you should take.
Source for the conference study mentioned at the top: https://www.newscientist.com/article/mg22530032-100-death-ra...