Von Neumann and Morgenstern showed that, as long as people order choices consistently (ordinal preferences satisfying a few axioms, such as transitivity and independence over lotteries), there is a utility function (cardinal utility) whose expected value they are maximizing.
What Kahneman and Tversky observed is that people don't even choose consistently: it depends on how the choices are presented, for instance on whether the subject frames an outcome as a loss or as a smaller-than-expected gain. No matter how you define a utility function, it will not always be maximized, so it's not a question of defining the function more broadly. You can present two games with mathematically identical sets of outcomes, and people will consistently rank the outcomes differently.
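To make that reversal concrete, here's a minimal sketch of the classic "disease problem" using the prospect-theory value function, with the commonly cited parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25) from Tversky and Kahneman's later work; probability weighting is omitted for simplicity, and the specific numbers are just the standard textbook version of the problem:

```python
ALPHA = 0.88   # diminishing sensitivity (curvature)
LAM = 2.25     # loss aversion coefficient

def v(x):
    """Prospect-theory value of an outcome x, measured from a reference point."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** ALPHA

def pv(prospect):
    """Value of a prospect given as a list of (probability, outcome) pairs."""
    return sum(p * v(x) for p, x in prospect)

# Gain frame: 200 of 600 people saved for sure, vs. a 1/3 chance all 600 are saved.
sure_gain   = [(1.0, 200)]
gamble_gain = [(1/3, 600), (2/3, 0)]

# Loss frame: the same final outcomes, framed as deaths:
# 400 die for sure, vs. a 2/3 chance all 600 die.
sure_loss   = [(1.0, -400)]
gamble_loss = [(2/3, -600), (1/3, 0)]

print(pv(sure_gain) > pv(gamble_gain))   # True: sure thing preferred under gains
print(pv(sure_loss) > pv(gamble_loss))   # False: gamble preferred under losses
```

The point is that the value function takes gains and losses relative to a frame-dependent reference point as input, not final states of the world, which is exactly why no utility function defined over final outcomes can reproduce both choices at once.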
Anyway, it's a very good and important book, and doesn't have much to do with Bayesian statistics.
[Ninja-edited since HN doesn't let me respond further below...if you can show the outcomes K & T observed are in fact consistent with a more broadly defined utility function, then you too can win a Nobel prize!]
> What Kahneman and Tversky observed is that people don't even choose consistently. It depends on how the choices are presented. For instance, whether the subject frames a choice as a loss or a smaller-than-expected gain. So, no matter how you define a utility function, it will not always be maximized.
I disagree; I think you are assuming that people take questions at face value and unfailingly trust the experimenter. Under that assumption, equivalent but differently stated problems really would be interchangeable, and you would reach that conclusion.
But when people use heuristics, those heuristics are grounded in their experience and act like a prior on the meaning of the question. Stating the same question in two different ways and getting different answers means either that there is no utility function, or that the "utility function" depends (through model uncertainty, for example) on the exact phrasing of the question.
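A toy sketch of that "prior on the meaning of the question" idea, with all numbers hypothetical: the literal payoff is identical under both phrasings, but the phrasing shifts how much weight the listener puts on the literal reading versus a suspected hidden catch, so the evaluated utilities differ.

```python
def subjective_value(p_literal, u_literal, u_suspected):
    """Mix the literal reading of the question with a suspected hidden catch,
    weighted by the listener's posterior belief in the literal reading."""
    return p_literal * u_literal + (1 - p_literal) * u_suspected

U_LITERAL = 100    # payoff if the question means exactly what it says
U_SUSPECTED = -20  # payoff if there is an unstated downside

# "Gain" phrasing: the listener mostly takes the question at face value.
eu_gain_frame = subjective_value(0.95, U_LITERAL, U_SUSPECTED)

# "Loss" phrasing: in the listener's experience, talk of losses often signals
# a hidden downside, so less posterior weight goes on the literal reading.
eu_loss_frame = subjective_value(0.70, U_LITERAL, U_SUSPECTED)

print(eu_gain_frame > eu_loss_frame)  # True: same literal gamble, different value
```

On this view the "inconsistency" lives in the inference about what the question means, not in the preferences over outcomes themselves.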
My point is that these discussions are very closely tied to the kinds of assumptions you make about how people reason, what is rational, and what inputs the utility function has. Kahneman and Tversky got around this problem, I think, by doing something eminently reasonable: postulating a clear and unambiguous definition of a utility function. But the concept of "rationality" is richer than that, so the conversation should not stop there.
> postulating a clear and unambiguous definition of a utility function. But the concept of "rationality" is richer than that
The word "rationality" may be ambiguous, as most words describing anything complex are, but the authors attempted to provide a clear model and work within its bounds. When we start discussing the ideas informally, using terms in a broader and more colloquial sense, the fault is ours if the results become muddied.
The authors posited a reasonable utility function, one which most people upon reflection would agree is logical, and demonstrated that people do not consistently act in a way that maximizes it.
We can always move the goalposts and claim that if people appear to be acting irrationally, it's because we simply don't understand their concept of rationality (or the more complex function they're maximizing). But that seems rather circular; it would be nice to hear examples of a richer concept of rationality, in the context of the authors' experiments, that might explain the seemingly inconsistent behavior.