Gilboa *et al.* here seem to me to pretty much get it wrong.

The right way to talk about what is to be called “rational” is to embed the problem of thinking in its proper context. I think thought should proceed thus:

**Do we have time and resources to gather more information before we have to make a decision?**

If the answer to this question is “yes”, we then face a second question:

**Would gathering more information increase our knowledge enough to make it worthwhile to do so before making our decision?**

Answering that question requires very un-Bayesian modes of thought. But that question must be answered. And, having answered it, we either go and gather more information or we proceed to the next question:

**Are we playing some kind of game against nature, or are we playing against another mind?**

If the answer is “we are playing against nature”, then it is appropriate to go full Bayesian. If the answer is “we are playing against another mind”, then there is yet another question:

**What is the other mind that we are playing against thinking, that makes it willing to enter into this strategic interaction with us?**

The answer will probably involve: “at least one of the two of us is wrong in our understanding of the situation”—as Warren Buffett says: “if you do not know who is the fool in the market, you are the fool in the market”. Attempting to understand the implications of this question also leads us down very un-Bayesian roads of thought. But understanding the implications of this question is essential to making a rational assessment of the situation.
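The textbook Bayesian formalization of the second question is the expected value of perfect information (EVPI): compare what you would expect to gain by learning the true state before choosing against the cost of finding out. The point above is precisely that in practice you often cannot write down the numbers this calculation needs; still, a sketch shows what is being approximated. All states, actions, probabilities, and payoffs below are invented for illustration:

```python
# A minimal EVPI sketch with hypothetical numbers: is it worth
# gathering more information before deciding?

states = ["state_A", "state_B"]
belief = {"state_A": 0.6, "state_B": 0.4}  # current (assumed) probabilities

# Payoff of each action in each state (hypothetical values).
payoff = {
    "act_1": {"state_A": 100.0, "state_B": 20.0},
    "act_2": {"state_A": 40.0, "state_B": 70.0},
}

def expected_payoff(action):
    return sum(belief[s] * payoff[action][s] for s in states)

# Best we can do deciding now, without more information.
value_now = max(expected_payoff(a) for a in payoff)

# Best we could do if we learned the true state first, then chose.
value_informed = sum(
    belief[s] * max(payoff[a][s] for a in payoff) for s in states
)

evpi = value_informed - value_now     # upper bound on what information is worth
cost_of_information = 5.0             # hypothetical cost of the study

print(evpi)
print(evpi > cost_of_information)     # if True, gathering information can pay
```

If the EVPI exceeds the cost of the study, gathering information can be worthwhile; the catch, as the text notes, is that the beliefs this calculation takes as given are exactly what is in doubt.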

My problem with Gilboa *et al.* is that their criticisms of Bayesian Savagery do not provide any gruel at all to nourish us and thus help us in our task of figuring out what to do when Bayesian Savagery breaks down, either for information-gathering or for other-mind reasons: **Itzhak Gilboa, Andrew Postlewaite, and David Schmeidler** (2009): *Is It Always Rational to Satisfy Savage’s Axioms?*: "To explain our notion of rational choice, consider the following scenario. You are a public health official who must make a decision about immunization of newborn babies. Specifically, you have a choice of including another vaccine in the standard immunization package. This vaccine will prevent deaths from virus A. But it can cause deaths with some probability. The exact probabilities of death with and without the vaccine are not known. Given the large numbers of babies involved, you are quite confident that some fatalities are to be expected whatever your decision is...

...You will have to face bereaved parents and perhaps lawsuits. Will it be rational for you to pick prior probabilities arbitrarily and make decisions based on them? We argue that the answer is negative. What would then be the rational thing to do, in the absence of additional information? Our main point is that there may not be any decision that is perfectly rational. There is a tension between the inability to justify any decision based on statistical data, scientific research and logical reasoning on the one hand, and the need to make a decision on the other. This tension is well recognized and it is typically resolved in one of two ways. The first is the reliance on default choices. If the choices that can be rationally justified result in an incomplete preference relation, a default is used to make decisions where justified choice remains silent. For example, the medical profession suggests a host of “common practices” that are considered justified in the absence of good reasons to deviate from them. The second approach is to avoid defaults and to use a complete preference relation that incorporates caution into the decision rule. For example, dealing with worst-case scenarios, which is equivalent to a maxmin approach, can be suggested as a rational decision rule in the face of extreme uncertainty...
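The contrast Gilboa *et al.* draw—an arbitrarily chosen prior versus a cautious maxmin rule—can be made concrete with a toy version of their vaccine example. The scenario set and all the death rates below are invented for illustration, not taken from the paper:

```python
# A minimal sketch of the two decision rules in the quoted passage,
# with invented numbers for the vaccine example.

# Each scenario is a pair: (deaths per 100k without the vaccine,
# deaths per 100k with the vaccine). The set of scenarios stands in
# for the unknown true probabilities.
scenarios = [
    (10.0, 1.0),   # vaccine clearly better
    (9.0, 2.0),    # vaccine clearly better
    (2.0, 12.0),   # vaccine much worse
]

actions = {"do_nothing": 0, "vaccinate": 1}

def deaths(action_idx, scenario):
    """Deaths per 100k births under the given action and scenario."""
    return scenario[action_idx]

# Bayesian Savagery: pick a prior over scenarios (here, arbitrarily
# uniform) and minimize expected deaths.
prior = [1 / len(scenarios)] * len(scenarios)
bayes_choice = min(
    actions,
    key=lambda a: sum(p * deaths(actions[a], s) for p, s in zip(prior, scenarios)),
)

# Maxmin rule: minimize worst-case deaths over all scenarios,
# committing to no prior at all.
maxmin_choice = min(
    actions,
    key=lambda a: max(deaths(actions[a], s) for s in scenarios),
)

print(bayes_choice)   # → vaccinate (expected deaths 5.0 vs 7.0)
print(maxmin_choice)  # → do_nothing (worst case 10.0 vs 12.0)
```

With these numbers the two rules disagree, which is the paper's point: nothing in the data privileges the uniform prior, so neither answer can claim to be *the* rational one.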

```
#shouldread #cognitive #statistics #decisiontheory #Bayesian
```