### Strong Independence in Decision Theory

Thursday, 21 July 2016

In the course of some remarks on *Subjective Probability* by Richard C. Jeffrey, and later in defending a claim by Gary Stanley Becker, I have previously given some explanation of the model of expected-utility maximization and of axiomata of independence.

Models of expected-utility maximization are so intuïtively appealing to some people that they take one of these models to be *peculiarly rational*, and deviations from any such model thus to be *ir*rational. I note that the author of a popular 'blog seems to have done just that, yester-day.[0]

My own work shows that quantities cannot be fitted to preferences, which pulls the rug from under expected-utility maximization, but there are other problems as well. The paradox that the 'blogger explores represents a violation of the strong independence axiom. What I want to do here is first to explain again expected-utility maximization, and then to show that *the strong independence axiom violates rationality*.

A mathematical expectation is what people often mean when they say *average* — a probability-weighted sum of measures of possible outcomes. For example, when a meteorologist gives an expected rainfall or an expected temperature for to-morrow, she isn't actually telling you to *anticipate* exactly that rainfall or exactly that temperature; she's telling you that, given observed conditions to-day, the probability distribution for to-morrow has a particular mean quantity of rain or a particular mean temperature.

The actual mathematics of expectation is easiest to explain in simple cases of *gambling* (which is just whence the modern, main-stream theories of probability itself arose). For example, let's say that we have a fair coin (with a 50% chance of heads and a 50% chance of tails); and that if it comes-up heads then you get $100, while if it comes-up tails then you get $1. The expected pay-out is .5 × $100 + .5 × $1 = $50.50. Now, let's say that another coin has a 25% chance of coming-up heads and a 75% chance of coming-up tails, and you'd get $150 for heads and $10 for tails. Its expected pay-out is .25 × $150 + .75 × $10 = $45. More complicated cases arise when there are more than two possible outcomes, but the basic formula is just `prob`(`x`_{1})·`m`(`x`_{1}) + `prob`(`x`_{2})·`m`(`x`_{2}) + … + `prob`(`x`_{n})·`m`(`x`_{n}), where `x`_{i} is the `i`-th possible outcome, `prob`(`x`_{i}) is the probability of that `i`-th possible outcome, and `m`(`x`_{i}) is some measure (mass, temperature, dollar-value, or whatever) of that outcome. In our coin-flipping examples, each expectation is of form `prob`(heads)·`payout`(heads) + `prob`(tails)·`payout`(tails).
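The coin-flip arithmetic can be sketched in a few lines of Python; the `expectation` helper is just an illustrative name of mine for the probability-weighted sum:

```python
# Expected pay-out as a probability-weighted sum: sum of prob(x_i) * m(x_i).
def expectation(outcomes):
    """outcomes: list of (probability, measure) pairs."""
    return sum(p * m for p, m in outcomes)

fair_coin   = [(0.50, 100.0), (0.50, 1.0)]   # heads pays $100, tails pays $1
biased_coin = [(0.25, 150.0), (0.75, 10.0)]  # heads pays $150, tails pays $10

print(expectation(fair_coin))    # 50.5
print(expectation(biased_coin))  # 45.0
```

The second coin offers both a higher maximum and a higher minimum, yet a lower expectation — the point taken up next.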

One of the numerical examples of coin-flips offered *both* a higher *maximum* pay-out ($150 v $100) and a higher *minimum* pay-out ($10 v $1) yet a lower *expected* pay-out ($45 v $50.50). *Most* people will look at this, and decide that the *expected* pay-out should be the determining factor, though it's harder than many people reälize to *make the case*.

With monetary pay-outs, there is a temptation to use the monetary unit as the measure in computing the expectation by which we choose. But the actual *usefulness* of money isn't constant. We have various priorities; and, when possible, we take care of the things of greatest priority before we take care of things of lower priority. So, typically, if we get *more* money, it goes to things of *lower* priority than did the money that we already had. The *next* dollar isn't usually as valuable to us as any one of the dollars that we already had. Thus, a pay-out of $1 million shouldn't be a thousand times as valuable as a pay-out of $1000, especially if we keep in-mind a context in which a pay-out will be *on top of* whatever we already have in life. So, if we're making our decisions based upon some sort of mathematical expectation then, instead of computing an expected *monetary* value, we really want an expected *usefulness* value, `prob`(`x`_{1})·`u`(`x`_{1}) + `prob`(`x`_{2})·`u`(`x`_{2}) + … + `prob`(`x`_{n})·`u`(`x`_{n}) where `u`() is a function giving a measure of usefulness. This `u` is the main-stream notion of utility, though sadly it should be noted that most main-stream economists have quite lost sight of the point that utility as they imagine it is just a *special case* of usefulness.
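To illustrate — with a concave usefulness function of my own choosing, not one claimed anywhere in the literature — a `u` such as ln(1 + x) gives each additional dollar less weight than the last, and can even reverse the ranking of the two coins from the earlier example:

```python
import math

# Hypothetical concave usefulness function (an illustrative assumption):
# ln(1 + x) values the next dollar less than any dollar already had.
def u(dollars):
    return math.log(1.0 + dollars)

def expected_utility(outcomes):
    """outcomes: list of (probability, dollar pay-out) pairs."""
    return sum(p * u(x) for p, x in outcomes)

fair_coin   = [(0.50, 100.0), (0.50, 1.0)]
biased_coin = [(0.25, 150.0), (0.75, 10.0)]

# Ranked by expected dollars, the fair coin wins ($50.50 v $45);
# ranked by this expected usefulness, the biased coin wins.
print(expected_utility(fair_coin))
print(expected_utility(biased_coin))
```

The reversal shows why the choice of measure matters: nothing in the arithmetic of expectation itself tells us whether dollars or usefulness is the right thing to weight.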

A model of expected-utility maximization is one that takes each possible action `a`_{j}, associates it with a set of probabilities {`prob`(`x`_{1}|`a`_{j}),`prob`(`x`_{2}|`a`_{j}),…,`prob`(`x`_{n}|`a`_{j})} (the probabilities now explicitly noted as *conditioned* upon the choice of action) and asserts that we should chose an action `a`_{k} which gives us an expected utility `prob`(`x`_{1}|`a`_{k})·`u`(`x`_{1}) + `prob`(`x`_{2}|`a`_{k})·`u`(`x`_{2}) + … + `prob`(`x`_{n}|`a`_{k})·`u`(`x`_{n}) as high or higher than that of any other action.
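A minimal sketch of that model, with hypothetical actions, outcomes, and utility values (none of them drawn from the text), is:

```python
# Illustrative utilities u(x_i) for three possible outcomes.
utilities = {"x1": 10.0, "x2": 4.0, "x3": 0.0}

# prob(x_i | a_j): each action carries its own conditional
# probabilities over the same set of outcomes.
actions = {
    "a1": {"x1": 0.2, "x2": 0.7, "x3": 0.1},
    "a2": {"x1": 0.4, "x2": 0.1, "x3": 0.5},
}

def expected_utility(probs):
    # sum of prob(x_i | a_j) * u(x_i)
    return sum(probs[x] * utilities[x] for x in utilities)

# The model asserts we should choose an action whose expected
# utility is as high or higher than that of any other action.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here `a1` gives 0.2·10 + 0.7·4 + 0.1·0 = 4.8 against 4.4 for `a2`, so `a1` is chosen.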

If there is a non-monetary *measure* of usefulness in the case of monetary pay-outs, then there is no evident reason that there should not be such a measure in the case of *non*-monetary pay-outs. (And, likewise, if there is no such measure in the case of non-monetary pay-outs, there is no reason to suppose one in the case of monetary pay-outs, where we have seen that the monetary pay-out isn't really a proper measure.) The main-stream of economic theory runs with that; its model of decision-making is expected-utility maximization.

The model does *not* require that people have a *conscious* measure of usefulness, and certainly does not require that they have a conscious *process* for arriving at such a measure; it can be taken as a model of the *gut*. And *employment* of the model doesn't mean that the economist believes that it is literally true; economists across many schools-of-thought regard idealizations of various sorts as *approximations* sufficient for their purposes. It is only lesser economists who do so incautiously and without regard to problems of scale.

But, while expected-utility maximization may certainly be regarded as an *idealization*, it should not be mistaken for an idealization of *peculiar rationality* nor even for an idealization of rationality of just one *variety* amongst many. Expected-utility maximization is not rational even if we grant — as I would not — that there is some quantification that can be fitted to our priorities.

Expected-utility maximization entails a proposition that the relevant expectation is of potential outcomes which are taken themselves to be no better or worse for being more or less probable. That is to say that what *would be* the reälized value of an outcome is the measure of the outcome to be used in the computation of the expectation; the expectation is simply lineär in the probabilities. This feature of the model follows from what is known as the strong independence __axiom__ (underscore mine) because Paul Anthony Samuelson, having noticed it, conceptualized it as an axiom. It and propositions suggested to serve in its stead as an axiom (thus rendering it a theorem) have been challenged in various ways. I will not here survey the challenges.

However, the first problem that I saw with expected-utility maximization was with that lineärity, in-so-far as *it implies that people do not benefit from the experience of selecting amongst discernible non-trivial lotteries as such*.[1]

*Good* comes from engaging in *some* gambles *as such*, exactly because gambling more generally is unavoidable. We need *practice* to gamble properly, and *practice* to stay in cognitive shape for gambling. Even if we get that practice without seeking it, in the course of engaging in our everyday gambles, there is still value to that practice as such. A gamble may become *more* valuable as a result of the probability of the *best* outcome being made *less* probable, and less valuable as a result of the best outcome becoming more certain. The value of lotteries is *not* lineär in their probabilities!

It might be objected that this value is only associated with our cognitive limitations, which limitations it might be argued represented a sort of *ir*rationality. But we only *compound* the irrationality if we avoid remedial activity because *under other circumstance* it would not have done us good. Nor do I see that we should any more accept that a person who *needs* cognitive exercise to stay in cognitive shape is thus *out* of cognitive shape than we would say that someone who needs physical exercise to stay in physical shape is thus out of physical shape.

[0 (2016:07/22)] *Very* quickly, in a brief exchange, he saw the error, and he's corrected his entry; so I've removed the link and identification here.

[1] When I speak or write of *lotteries* or of *gambling*, I'm not confining myself to those cases for which lay-people normally use those terms, but applying them to situations in which one is confronted by a choice of actions, and various outcomes (albeït some perhaps quite impossible) may be imagined; things to which the terms *lottery* and *gamble* are more usually applied are simply special cases of this general idea. A *trivial* lottery is one that most people would especially not think to be a lottery or gamble *at all*, because the only probabilities are either 0 or 1; a *non*-trivial lottery involves outcomes with probabilities *in between* those two. Of course, in real life there are few if any perfectly trivial lotteries, but a lot of things are *close enough* that people imagine them as having no risk or uncertainty; that's why I refer to *discernible* non-trivial lotteries, which people see as involving risk or uncertainty.