Posts Tagged ‘decision theory’

We Don't Need No Stinkin' Bayesian Up-Dating!

Wednesday, 1 April 2009

The Classic Monty Hall Problem

Andy is a contestant in a game. In this game, each contestant makes a choice amongst three tags. Each tag is committed to an outcome, with the commitment concealed from each contestant. Two outcomes are undesirable; one is desirable. Nothing reveals a pattern to assignments.

After Andy makes his choice, it is revealed to him that a specific tag that he did not choose is committed to an undesirable outcome. Andy is offered a chance to change his selection. Should he change?
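
For anyone who distrusts the verbal argument, the game is easy to simulate. Here is a minimal sketch in Python (the numeric tag labels, the trial count, and the assumption that a tag Andy did not choose and that carries an undesirable outcome is always revealed are all mine, made only to pin the classic protocol down):

    import random

    def play(switch, trials=100_000):
        """Fraction of games ending in the desirable outcome."""
        wins = 0
        for _ in range(trials):
            tags = [0, 1, 2]
            good = random.choice(tags)      # tag committed to the desirable outcome
            pick = random.choice(tags)      # Andy's initial choice
            # Reveal an unchosen tag committed to an undesirable outcome.
            shown = random.choice([t for t in tags if t != pick and t != good])
            if switch:
                pick = next(t for t in tags if t != pick and t != shown)
            wins += (pick == good)
        return wins / trials

    print("stay:  ", play(switch=False))    # ≈ 1/3
    print("switch:", play(switch=True))     # ≈ 2/3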

Three Contestants

Andy, Barb, and Pat are contestants in a game. In this game, each contestant makes an independent choice amongst three tags. Each tag is committed to an outcome, with the commitment concealed from each contestant. Two outcomes are undesirable; one is desirable. Nothing reveals a pattern to assignments. In the event that multiple players select the same tag, outcomes are duplicated.

After all contestants make their choices, it is revealed that Andy, Barb, and Pat have selected tags each different from those of the other two contestants. And it is revealed that Pat's tag is associated with an undesirable outcome. Andy and Barb are each offered a chance to change their selections. What should each do?

[Poll: 3-Player Monty Hall]
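
The same sort of sketch serves for the three-contestant game. The one below (again mine, with arbitrary labels) simply discards every trial that fails to match what was revealed, and then tallies how often each remaining tag carries the desirable outcome:

    import random

    def three_player(trials=200_000):
        andy_good = barb_good = kept = 0
        for _ in range(trials):
            good = random.randrange(3)                        # the desirable tag
            andy, barb, pat = (random.randrange(3) for _ in range(3))
            # Keep only trials matching the revelation: all three chose
            # different tags, and Pat's tag is undesirable.
            if len({andy, barb, pat}) != 3 or pat == good:
                continue
            kept += 1
            andy_good += (andy == good)
            barb_good += (barb == good)
        return andy_good / kept, barb_good / kept

    print(three_player())    # each conditional frequency comes out ≈ 1/2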

Marlboro Man

Wednesday, 18 March 2009

I've been taking another run at Subjective Probability: The Real Thing (2004) by Richard C. Jeffrey. I'd started reading it a while back, but got distracted. Anyway, Jeffrey was an important subjectivist — someone who argued that probability is a measure of belief, and that any degree of belief that does not violate certain rationality constraints is permitted. (As I have noted earlier, the subjectivism here is in the assignment of quantities not specifically required by objective criteria. The subjectivists believe either that reason requires that such quantities be assigned, albeït often arbitrarily, or that Ockham's Razor is not a binding constraint.) And the posthumous Subjective Probability was his final statement.


At some point, I encountered the following entry in the index:

Nozick, Robert, 119, 123

which entry was almost immediately annoying. Page 119 is in the References section, and indeed has the references for Nozick, but that's a pretty punk thing to drop in an index. Even more punk would be an index entry that refers to itself; and, indeed, page 123 is in the index, and it is on that page that one finds Nozick, Robert, 119, 123.

Well, actually, I'd forgot something about this book, which is probably an artefact of its being posthumous: Most or all of the index entries are off by ten pages, such that one ought to translate Nozick, Robert, 119, 123 to Nozick, Robert, 109, 113. And, yes, there are references to Nozick on those pages (which are part of a discussion of Newcomb's Problem and of related puzzles). It was just chance-coïncidence that ten pages later one found the listings in the references and in the index.


In decision theory, there are propositions called independence axiomata. The first such proposition to be explicitly advanced for discussion (in an article by Paul Anthony Samuelson) is the Strong Independence Axiom, the gist of which is that the value of a reälized outcome is independent of the probability that it had before it was reälized. Say that we had a lottery of possible outcomes X₁, X₂, … Xₙ, each Xᵢ having associated probability pᵢ. If we assert that the expected value of this lottery were

Σ[pᵢ · u(Xᵢ)]

where u( ) is some utility function, then (amongst other things) we've accepted an independence proposition. Otherwise, we may have to assert something such as that the expected value were

Σ[pᵢ · u(Xᵢ, pᵢ)]

to account for such things as people taking an unlikely million dollars to be somehow better than a likely million dollars.
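
The contrast is perhaps easiest to see in a toy computation. In the sketch below (payoffs, probabilities, and utility functions are all mine, chosen only for illustration), the first function embodies the independence proposition and the second does not:

    def eu_independent(lottery, u):
        """Σ[pᵢ · u(Xᵢ)]: an outcome's value does not depend on its probability."""
        return sum(p * u(x) for x, p in lottery)

    def eu_dependent(lottery, u):
        """Σ[pᵢ · u(Xᵢ, pᵢ)]: an outcome's value may depend on how likely it was."""
        return sum(p * u(x, p) for x, p in lottery)

    u_plain = lambda x: x ** 0.5                     # an ordinary concave utility
    u_prob  = lambda x, p: (x ** 0.5) * (2 - p)      # an unlikely win "feels" better

    lottery = [(1_000_000, 0.1), (0, 0.9)]           # an unlikely million dollars
    print(eu_independent(lottery, u_plain))
    print(eu_dependent(lottery, u_prob))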

Anyway, there's another proposition which to most of us doesn't look like the Strong Independence Axiom, and yet is pretty much the same thing, the Sure Thing Principle, which is associated with Leonard Jimmie Savage (an important subjectivist, whom I much admire, and with whom I markèdly disagree). Formally, it's thus:

{[(A ∧ B) pref C] ∧ [(A ∧ ¬B) pref C]} ⇒ (A pref C)

Less formally,

If the combination of A and B is preferred to C, and the combination of A without B is preferred to C, then A is just plain preferred to C, regardless of B.

Savage gives us the example of a businessman trying to decide whether to buy a piece of property with an election coming-up. He thinks-through whether he would be better off with the property if a Democrat is elected, and decides that he would prefer that he had bought the property in that case. He thinks-through whether he would be better off with the property if a Republican is elected, and decides that he would prefer that he had bought the property in that case. So he buys the property. This seems very reasonable.

But there is a famous class of counter-examples, presented by Jeffrey in the form of the case of the Marlboro Man. The hypothetical Marlboro Man is trying to decide whether to smoke. He considers that, if he should live a long life, he would wish at its end that he had enjoyed the pleasure of smoking. He considers that, if he should live a short life, he would wish at its end that he had enjoyed the pleasure of smoking. So he smokes. That doesn't seem nearly so reasonable.

There is an underlying difference between our two examples. The businessman would not normally expect his choice to affect the outcome of the election; the Marlboro Man ought to expect his choice to affect the length of his life. Jeffrey asserts that Savage only meant the Sure Thing Principle to hold in cases where the probability of B were independent of A.
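
A toy calculation (the numbers are entirely my own) shows why that restriction matters. When the act leaves the probability of B alone, the case-by-case comparison agrees with a straightforward expected-utility comparison; when the act shifts that probability, as smoking shifts the probability of a long life, the case-by-case comparison can point the wrong way:

    # Illustrative utilities: a long life is worth much more than a short one,
    # and smoking adds a small pleasure bonus either way.
    U_LONG, U_SHORT, PLEASURE = 100.0, 10.0, 5.0

    def make_eu(p_long):
        """Expected utility of an act, given how the act moves P(long life)."""
        def eu(smoke):
            bonus = PLEASURE if smoke else 0.0
            p = p_long(smoke)
            return p * (U_LONG + bonus) + (1 - p) * (U_SHORT + bonus)
        return eu

    # Businessman-style case: the act does not move the probability of B.
    eu_fixed = make_eu(lambda smoke: 0.8)
    print(eu_fixed(True) > eu_fixed(False))        # True: sure-thing reasoning is safe

    # Marlboro Man: smoking lowers the probability of a long life.
    eu_shifted = make_eu(lambda smoke: 0.5 if smoke else 0.8)
    print(eu_shifted(True) > eu_shifted(False))    # False: sure-thing reasoning misleads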

But what makes the discussion poignant is this: Jeffrey, dying of a surfeit of Pall Malls, wrote this book as his last, and passed-away from lung cancer on 9 November 2002.

Deciding on a Theory of Decision

Wednesday, 19 November 2008

Much of my time of late has been going into my paper on operationalizing a model of preference in which strict preference and indifference don't provide a total ordering.
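
For a sense of what is at stake (the example is mine, and far cruder than anything in the paper): completeness would require that any two alternatives stand either in strict preference or in indifference; drop it, and some pairs simply fail to compare, as in the toy relation below.

    # A toy preference relation over three alternatives in which strict preference
    # and indifference do not exhaust the pairs: a and b are incomparable.
    strictly_preferred = {("a", "c"), ("b", "c")}          # a ≻ c and b ≻ c
    indifferent = {("a", "a"), ("b", "b"), ("c", "c")}     # only trivial indifference

    def comparable(x, y):
        return ((x, y) in strictly_preferred or (y, x) in strictly_preferred
                or (x, y) in indifferent or (y, x) in indifferent)

    print(comparable("a", "c"), comparable("a", "b"))      # True False: not a total ordering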

Quite a while ago, I reälized very precisely what sort of system the assumptions would have to imply; I mistakenly presumed that I would relatively quickly identify sufficient assumptions (beyond those already recognized). But, at this point, I have a sufficient assemblage, each member of which is, taken by itself, at least passably acceptable. Jointly, however, there's an issue of factoring.

The paper derives its results from three sets of propositions. The first and second sets seem perfectly fine to me, and I don't expect them to provoke much dispute. The third set are more ad hoc. For the purposes of the paper they function as axiomata, but some or all of them would more ideally be derived from deeper principles (the pursuit of which, however, would be mostly a distraction from my goals).

It's amongst this last set of propositions that the factoring problem exists. One of them used to play an important rôle; right now it's doing nothing but occupying space. I'd remove it, except that I suspect that, in conjunction with the very principle that seemed to make it superfluous, it renders redundant another principle which feels even more ad hoc.

At the same time, I am now wrestling with what sort of discussion to provide after presenting the theoremata. I just don't seem to be in much of a frame-of-mind to ruminate.

Nicht Sehr Gut

Tuesday, 29 July 2008

I have been reading Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer. Gut Feelings seeks to explain — and in large part to vindicate — some of the processes of intuïtive thinking.

Years ago, I became something of a fan of Gigerenzer when I read a very able critique that he wrote of some work by Kahneman and Tversky. And there are things in Gut Feelings that make it worth reading. But there are also a number of active deficiencies in the book.

Gigerenzer leans heavily on undocumented anecdotal evidence, and an unlikely share of these anecdotes are perfectly structured to his purpose.

Gigerenzer writes of how using simple heuristics in stock-market investment has worked as well as or better than the use of more involved models, and sees this as an argument for the heuristics, but completely ignores the efficient-markets hypothesis. The efficient-markets hypothesis basically says that, almost as soon as relevant information is available, profit-seeking arbitrage causes prices to reflect that information, and then there isn't much profit left to be made, except by luck, which is to say unpredictable change. (And one can lose through such change as easily as one might win.) If this theory is correct, then one will do as well picking stocks with a dart board as by listening to an investment counselor. In the face of the efficient-markets hypothesis, the evidence that he presents might simply illustrate the futility of any sort of deliberation.
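
To see what the hypothesis would imply here, consider a deliberately crude caricature (the model and numbers are mine, not Gigerenzer's): if period-to-period returns are pure noise, then chasing last period's winner and throwing a dart have the same expected return.

    import random

    def simulate(n_stocks=20, n_periods=100, trials=500):
        """Average cumulative return per trial for a momentum heuristic and for a
        dart-board pick, when returns follow a pure random walk."""
        heuristic_total = dart_total = 0.0
        for _ in range(trials):
            returns = [[random.gauss(0, 0.01) for _ in range(n_stocks)]
                       for _ in range(n_periods)]
            for t in range(1, n_periods):
                best_last = max(range(n_stocks), key=lambda i: returns[t - 1][i])
                heuristic_total += returns[t][best_last]
                dart_total += returns[t][random.randrange(n_stocks)]
        return heuristic_total / trials, dart_total / trials

    print(simulate())    # both averages hover near zero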

Gigerenzer makes a point of noting where better decisions seem often to be made by altogether ignoring some information, and provides some good examples and explanations. But he fails to properly locate a significant part of the problem, and very much appears to mislocate it. Specifically, a simple, incorrectly-specified model may predict more accurately than a complex, incorrectly-specified model. Gigerenzer (who makes no reference to misspecification) writes

In an uncertain environment, good intuitions must ignore information

but uncertainty (as such) isn't to-the-point; the consequences of misspecification are what may justify ignoring information. It's very true that misspecification is more likely in the context of uncertainty, but one system which is intrinsically less predictable than another may none-the-less have been better specified.
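
Here is the misspecification point as I read it, in a sketch of my own (the data-generating process and both candidate models are mine, not Gigerenzer's): neither model below is correctly specified, and on most runs the simpler of the two predicts new data better, precisely because it ignores wiggles that the richer model tries to fit.

    import numpy as np

    rng = np.random.default_rng(0)

    # True process: y = exp(x) + noise.  Neither polynomial model matches it.
    x_train = rng.uniform(0, 2, 12)
    y_train = np.exp(x_train) + rng.normal(0, 2.0, x_train.size)
    x_test = rng.uniform(0, 2, 1000)
    y_test = np.exp(x_test) + rng.normal(0, 2.0, x_test.size)

    for degree in (1, 7):    # a simple wrong model versus a complex wrong model
        coeffs = np.polyfit(x_train, y_train, degree)
        mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(degree, mse)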

I am very irked by the latest chapter that I've read, Why Good Intuitions Shouldn't Be Logical. In note 2 to this chapter, one reads

Tversky and Kahneman, 1982, 98. Note that here and in the following the term logic is used to refer to the laws of first-order logic.[1]

The peculiar definition has been tucked behind a bibliographical reference. Further, the notes appear at the end of the volume (rather than as actual foot-notes), and this particular note appears well after Gigerenzer has already begun using the word logic (and its adjectival form) baldly. If Gigerenzer didn't want to monkey dance, then he could have found a better term, or kept logic (and derivative forms) in quotes. As it is, he didn't even associate the explanatory note with the chapter title.

Further, Gigerenzer again mislocates errors. Kahneman and Tversky (like many others) mistakenly thought that natural language and, or, and probable simply map to logical conjunction, logical disjunction, and something-or-another fitting the Kolmogorov axiomata; they don't. Translations that presume such simple mappings in fact result in absurdities, as when

She petted the cat and the cat bit her.

is presumed to mean the same thing as

The cat bit her and she petted the cat.

because conjunction is commutative.[2] Gigerenzer writes as if the lack of correspondence is a failure of the formal system, when it's instead a failure of translation. Greek δε should sometimes be translated and, but not always, and vice versa; likewise, ∧ shouldn't always be translated as and, nor vice versa. The fact that such translations can be in error does not exhibit an inadequacy in Greek, in English, or in the formal system.


[1]The term first-order logic refers not to a comprehensive notion of abstract principles of reasoning, but to a limited formal system. Perhaps the simplest formal system to be called a logic is propositional logic, which applies negation, conjunction, and disjunction to propositions under a set of axiomata. First-order logic adds quantifiers (for all, for some) and rules therefor to facilitate handling propositional functions. Higher-order logics extend the range of what may be treated as variable.

[2]That is to say that

[(P₁ ∧ P₂) ⇔ (P₂ ∧ P₁)] ∀(P₁, P₂)
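
The claim can be checked mechanically by enumerating the four truth assignments (a trivial sketch of mine):

    from itertools import product

    # Confirm that logical conjunction is commutative under every assignment.
    print(all((p1 and p2) == (p2 and p1)
              for p1, p2 in product([True, False], repeat=2)))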