Posts Tagged ‘probability’

Strong Independence in Decision Theory

Thursday, 21 July 2016

In the course of some remarks on Subjective Probability by Richard C. Jeffrey, and later in defending a claim by Gary Stanley Becker, I have previously given some explanation of the model of expected-utility maximization and of axiomata of independence.

Models of expected-utility maximization are so intuïtively appealing to some people that they take one of these models to be peculiarly rational, and deviations from any such model thus to be irrational. I note that the author of a popular 'blog seems to have done just that, yester-day.[0]

My own work shows that quantities cannot be fitted to preferences, which pulls the rug from under expected-utility maximization, but there are other problems as well. The paradox that the 'blogger explores represents a violation of the strong independence axiom. What I want to do here is first to explain again expected-utility maximization, and then to show that the strong independence axiom violates rationality.


A mathematical expectation is what people often mean when they say average — a probability-weighted sum of measures of possible outcomes. For example, when a meteorologist gives an expected rainfall or an expected temperature for to-morrow, she isn't actually telling you to anticipate exactly that rainfall or exactly that temperature; she's telling you that, given observed conditions to-day, the probability distribution for to-morrow has a particular mean quantity of rain or a particular mean temperature.

The actual mathematics of expectation is easiest to explain in simple cases of gambling (which is just whence the modern, main-stream theories of probability itself arose). For example, let's say that we have a fair coin (with a 50% chance of heads and a 50% chance of tails); and that if it comes-up heads then you get $100, while if it comes-up tails then you get $1. The expected pay-out is .5 × $100 + .5 × $1 = $50.50. Now, let's say that another coin has a 25% chance of coming-up heads and a 75% chance of coming-up tails, and you'd get $150 for heads and $10 for tails. Its expected pay-out is .25 × $150 + .75 × $10 = $45. More complicated cases arise when there are more than two possible outcomes, but the basic formula is just prob(x1)·m(x1) + prob(x2)·m(x2) + … + prob(xn)·m(xn), where xi is the i-th possible outcome, prob(xi) is the probability of that i-th possible outcome, and m(xi) is some measure (mass, temperature, dollar-value, or whatever) of that outcome. In our coin-flipping examples, each expectation is of the form prob(heads)·payout(heads) + prob(tails)·payout(tails).
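For those who'd like the arithmetic in executable form, here is a minimal sketch in Python (the function name expected_value is mine, purely illustrative):

```python
# A minimal sketch of the formula above: a probability-weighted sum
# of measures of the possible outcomes.

def expected_value(probs, measures):
    # prob(x1)*m(x1) + prob(x2)*m(x2) + ... + prob(xn)*m(xn)
    return sum(p * m for p, m in zip(probs, measures))

print(expected_value([0.5, 0.5], [100, 1]))     # 50.5  (the fair coin)
print(expected_value([0.25, 0.75], [150, 10]))  # 45.0  (the other coin)
```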

One of the numerical examples of coin-flips offered both a higher maximum pay-out ($150 v $100) and a higher minimum pay-out ($10 v $1) yet a lower expected pay-out ($45 v $50.50). Most people will look at this, and decide that the expected pay-out should be the determining factor, though it's harder than many people reälize to make the case.

With monetary pay-outs, there is a temptation to use the monetary unit as the measure in computing the expectation by which we choose. But the actual usefulness of money isn't constant. We have various priorities; and, when possible, we take care of the things of greatest priority before we take care of things of lower priority. So, typically, if we get more money, it goes to things of lower priority than did the money that we already had. The next dollar isn't usually as valuable to us as any one of the dollars that we already had. Thus, a pay-out of $1 million shouldn't be a thousand times as valuable as a pay-out of $1000, especially if we keep in-mind a context in which a pay-out will be on top of whatever we already have in life. So, if we're making our decisions based upon some sort of mathematical expectation then, instead of computing an expected monetary value, we really want an expected usefulness value, prob(x1)·u(x1) + prob(x2)·u(x2) + … + prob(xn)·u(xn), where u() is a function giving a measure of usefulness. This u is the main-stream notion of utility, though sadly it should be noted that most main-stream economists have quite lost sight of the point that utility as they imagine it is just a special case of usefulness.

A model of expected-utility maximization is one that takes each possible action aj, associates it with a set of probabilities {prob(x1|aj), prob(x2|aj), …, prob(xn|aj)} (the probabilities now explicitly noted as conditioned upon the choice of action), and asserts that we should choose an action ak which gives us an expected utility prob(x1|ak)·u(x1) + prob(x2|ak)·u(x2) + … + prob(xn|ak)·u(xn) at least as high as that of any other action.
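A minimal sketch of such a model, under assumptions of my own: each action is identified with a lottery of (probability, pay-out) pairs, and math.log stands in for the usefulness measure u — an arbitrary concave choice, to which the discussion above is not committed:

```python
import math

# A sketch of expected-utility maximization: compute the expected
# utility of each available action, and choose one that maximizes it.

def expected_utility(lottery, u):
    # prob(x1|a)*u(x1) + ... + prob(xn|a)*u(xn)
    return sum(p * u(x) for p, x in lottery)

actions = {
    "fair coin":  [(0.5, 100), (0.5, 1)],    # the coins from the
    "other coin": [(0.25, 150), (0.75, 10)], # earlier examples
}

u = math.log  # my arbitrary stand-in for a usefulness measure
for name, lottery in actions.items():
    print(name, round(expected_utility(lottery, u), 3))
print("chosen:", max(actions, key=lambda a: expected_utility(actions[a], u)))
```

With this concave u, the model selects the second coin, whose $10 floor outweighs its lower expected monetary pay-out of $45 — an illustration of how the choice of measure can reverse a ranking.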

If there is a non-monetary measure of usefulness in the case of monetary pay-outs, then there is no evident reason that there should not be such a measure in the case of non-monetary pay-outs. (And, likewise, if there is no such measure in the case of non-monetary pay-outs, there is no reason to suppose one in the case of monetary pay-outs, where we have seen that the monetary pay-out isn't really a proper measure.) The main-stream of economic theory runs with that; its model of decision-making is expected-utility maximization.

The model does not require that people have a conscious measure of usefulness, and certainly does not require that they have a conscious process for arriving at such a measure; it can be taken as a model of the gut. And employment of the model doesn't mean that the economist believes that it is literally true; economists across many schools-of-thought regard idealizations of various sorts as approximations sufficient for their purposes. It is only lesser economists who do so incautiously and without regard to problems of scale.


But, while expected-utility maximization may certainly be regarded as an idealization, it should not be mistaken for an idealization of peculiar rationality nor even for an idealization of rationality of just one variety amongst many. Expected-utility maximization is not rational even if we grant — as I would not — that there is some quantification that can be fitted to our priorities.

Expected-utility maximization entails a proposition that the relevant expectation is of potential outcomes which are taken themselves to be no better or worse for being more or less probable. That is to say that what would be the reälized value of an outcome is the measure of the outcome to be used in the computation of the expectation; the expectation is simply lineär in the probabilities. This feature of the model follows from what is known as the strong independence axiom, so called because Paul Anthony Samuelson, having noticed it, conceptualized it as an axiom. It and propositions suggested to serve in its stead as an axiom (thus rendering it a theorem) have been challenged in various ways. I will not here survey the challenges.
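For concreteness, the axiom is now usually given in terms of compound lotteries. The following is the standard textbook form — not necessarily Samuelson's own wording — with ≽ denoting weak preference amongst lotteries:

For all lotteries L, M, and N, and for every α in (0, 1]: L ≽ M if and only if αL + (1 − α)N ≽ αM + (1 − α)N.

Mixing each of two lotteries with a common third lottery must leave their ranking unchanged; it is this requirement that underwrites the lineärity in probabilities.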

However, the first problem that I saw with expected-utility maximization was with that lineärity, in-so-far as it implies that people do not benefit from the experience of selecting amongst discernible non-trivial lotteries as such.[1]

Good comes from engaging in some gambles as such, exactly because gambling more generally is unavoidable. We need practice to gamble properly, and practice to stay in cognitive shape for gambling. Even if we get that practice without seeking it, in the course of engaging in our everyday gambles, there is still value to that practice as such. A gamble may become more valuable as a result of its best outcome being made less probable, and less valuable as a result of its best outcome becoming more certain. The value of lotteries is not lineär in their probabilities!

It might be objected that this value is only associated with our cognitive limitations, which limitations, it might be argued, represent a sort of irrationality. But we only compound the irrationality if we avoid remedial activity because under other circumstances it would not have done us good. Nor do I see that we should any more accept that a person who needs cognitive exercise to stay in cognitive shape is thus out of cognitive shape than we would say that someone who needs physical exercise to stay in physical shape is thus out of physical shape.


[0 (2016:07/22)] Very quickly, in a brief exchange, he saw the error, and he's corrected his entry; so I've removed the link and identification here.

[1] When I speak or write of lotteries or of gambling, I'm not confining myself to those cases for which lay-people normally use those terms, but applying them to any situation in which one is confronted by a choice of actions, and various outcomes (albeït some perhaps quite impossible) may be imagined; the things to which the terms lottery and gamble are more usually applied are simply special cases of this general idea. A trivial lottery is one that most people would not think to be a lottery or a gamble at all, because the only probabilities are either 0 or 1; a non-trivial lottery involves outcomes with probabilities in between those two. Of course, in real life there are few if any perfectly trivial lotteries, but a lot of things are close enough that people imagine them as having no risk or uncertainty; that's why I refer to discernible non-trivial lotteries, which people see as involving risk or uncertainty.

Dying Asymptotically

Thursday, 2 July 2015

It seems as if most economists do not know how to handle death.

What I here mean is not that they don't cope well with the deaths of loved ones or with their own mortality — though I suspect that they don't. What I mean is that their models of the very long-run are over-simply conceived and poorly interpreted when it comes to life-spans.

In the typical economic model of the very long-run, agents either live forever, or they live some fixed span of time, and then die. Often, economists find that a model begins to fit the real world better if they change it from assuming that people live that fixed amount of time to assuming that people live forever, and some economists then conclude that people are irrationally assuming their own immortality.

Here's a better thought. In the now, people are quite sure that they are alive. They are less sure about the next instant, and still less sure about the instant after that. The further that they think into the future, the less their expectation of being alive … but there is no time at which most people are dead certain that their lives will have ended. (If I asked you, the reader, how it might be possible for you to be alive in a thousand years, chances are that you could come up with some scenario.)

On the assumption that personalistic probabilities may be quantified, then, imputed probabilities of being alive, graphed against time, would approach some minimum asymptotically. My presumption would be that the value thus approached would be 0 — that most people would have almost no expectation of being alive after some span of years. But it would never quite be zero.
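On that assumption of quantification, a toy functional form — my own construction, purely to exhibit the shape; the exponential decay and the 40-year half-life are arbitrary choices — might look like this:

```python
import math

# A toy imputed probability of being alive t years from now: it decays
# asymptotically toward 0, but is strictly positive at every horizon.

def prob_alive(years, half_life=40.0):
    return math.exp(-math.log(2) * years / half_life)

for t in (0, 40, 80, 200, 1000):
    print(t, prob_alive(t))  # never exactly zero, however far out
```

Any strictly positive, strictly decreasing function with limit 0 would serve the same illustrative purpose.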

While I'm sure that some models will only work on the assumption that people impute absolute certainty to being alive forever, I suspect that an awful lot of models will work simply by accepting that most people embrace neither that madness nor the madness of absolute certainty that they will be dead at some specific time. Other models may need a more detailed description of the probability function.

As I've perhaps said or implied somewhere in this 'blog, I don't think that real-life probabilities are usually quantified; I would therefore be inclined to resist adopting a model with quantified probabilities, though such toys can be very useful heuristics. The weaker notion that probabilities are an incomplete preördering would correspond to some notion weaker than an asymptotic approach, but I haven't given much thought to what it would be.

Fifth Rejection and Sixth Attempt

Sunday, 30 November 2014

My short article was rejected by one journal yester-day, and submitted to another in the wee hours of this morning. And, yes, that's just how the previous entry began.

This time, an editor at the rejecting journal informed me that an unnamed associate editor felt that the article didn't fit the purposes of the journal. I got no further critique from them than that. (It should be understood that, as many submissions are made, critiquing every one would be very time-consuming.)

With respect to my paper on indecision, I had some fear that I would run out of good journals to which I might submit it. With respect to this short article, I have a fear that I might run out of any journal to which I might submit it. It just falls in an area where the audience seems small, however important I might think these foundational issues.

Fourth Rejection and Fifth Attempt

Tuesday, 11 November 2014

My short article was rejected by one journal yester-day, and submitted to another in the wee hours of this morning.

At the journal that rejected it, the article was approved by one of the two reviewers, but felt to be unsuited to the readership of the journal by the other reviewer and by the associate editor. Additionally, the second reviewer and the associate editor suggested that it be made a more widely ranging discussion of the history of subjectivist thought, which suggestion shows some lack of appreciation that foundational issues are of more than historical interest, and that the axiomata invoked by the subjectivists are typically also invoked by logicists. (I say appreciation rather than understanding, because the reviewer briefly noted that perhaps my concern was with the logic as such.)

I made three tweaks to the article. One was to make the point that axiomata such as de Finetti's are still the subject of active discussion. Another was to deal with the fact that secondary criticism arose from the editor's and the objecting reviewer's not knowing what weak would mean in reference to an ordering relation. The third was simply to move a parenthetical remark to its own (still parenthetical) paragraph.

The journal that now has it tries to provide its first review within three months.

Third Rejection and Fourth Attempt

Friday, 29 August 2014

As expected, my brief paper was quickly rejected by the third journal to which I sent it. The rejection came mid-day on 19 July; the editor said that it didn't fit the general readership of the journal. He suggested sending it to a journal focussed on Bayesian theory, or to a specific journal of the very same association as that of the journal that he edits. I decided to try the latter.

On the one hand, I don't see my paper as of interest only to those whom I would call Bayesian. The principle in question concerns qualitative probability, whether in the development of a subjectivist theory or of a logicist theory, and issues of Bayes' Theorem only arise if one proceeds to develop a quantitative theory. On the other hand, submitting to that other journal of the same association was something that I could do relatively quickly.

I postponed an up-date here because I thought that I'd report both rejections together if indeed another came quickly. But, so far, my paper remains officially under review at that fourth journal.

The paper is so brief — and really so simple — that someone with an expertise in its area could decide upon it in minutes. But reviewing it isn't just a matter of cleverness; one must be familiar with the literature to feel assured that its point is novel. A reviewer without that familiarity would surely want to check the papers in the bibliography, and possibly to seek other work.

Additionally, a friend discovered that, if he returned papers as quickly as he could properly review them, then editors began seeking to get him to review many more papers. Quite reasonably, he slowed the pace at which he returned his reviews.

Just a Note

Thursday, 12 June 2014

Years ago, I planned to write a paper on decision-making under uncertainty when possible outcomes were completely ordered neither by desirability nor by plausibility.

On the way to writing that paper, I was impressed by Mark Machina with the need for a paper that would explain how an incompleteness of preferences would operationalize, so I wrote that article before exploring the logic of the dual incompleteness that interested me.

Returning to the previously planned paper, I did not find existing work on qualitative probability that was adequate to my purposes, so I began trying to formulate just that as a part of the paper, and found that the work was growing large and cumbersome. I have enough trouble getting my hyper-modernistic work read without delivering it in large quantities! So I began developing a paper concerned only with qualitative probability as such.

In the course of writing that spin-off paper, I noticed that a rather well-established proposition concerning the axiomata of probability contains an unnecessary restriction; and that, over the course of more than 80 years, the proposition has repeatedly been discussed without the excessiveness of the restriction being noted. Yet it's one of those points that will be taken as obvious once it has been made. I originally planned to note that dispensability in the paper on qualitative probability, but I have to be concerned about increasing clutter in that paper. Yester-day, I decided to write a note — a very brief paper — that draws attention to the needlessness of the restriction. The note didn't take very long to write; I spent more time with the process of submission than with that of writing.

So, yes, a spin-off of a spin-off; but at least it is spun-off, instead of being one more thing pending. Meanwhile, as well as there now being three papers developed or being developed prior to that originally planned, I long ago saw that the original paper ought to have at least two sequels. If I complete the whole project, what was to be one paper will have become at least six.

The note has been submitted to a journal of logic, rather than of economics; likewise, I plan to submit the paper on qualitative probability to such a journal. While economics draws upon theories of probability, work that does not itself go beyond such theories would not typically be seen as economics. The body of the note just submitted is only about a hundred words and three formulæ. On top of the usual reasons for not knowing whether a paper will be accepted, a problem in this case is exactly that the point made by the paper will seem obvious, in spite of being repeatedly overlooked.

As to the remainder of the paper on qualitative probability, I'm working to get its axiomata into a presentable state. At present, it has more of them than I'd like.

Notions of Probability

Wednesday, 26 March 2014

I've previously touched on the matter of there being markèdly differing notions all associated with the word probability. Various attempts have been made by various writers to catalogue and to coördinate these notions; this will be one of my own attempts.

[an attempt to discuss conceptions of probability]

Quantifying Evidence

Friday, 12 August 2011
The only novel thing [in the Dark Ages] concerning probability is the following remarkable text, which appears in the False Decretals, an influential mixture of old papal letters, quotations taken out of context, and outright forgeries put together somewhere in Western Europe about 850. The passage itself may be much older. "A bishop should not be condemned except with seventy-two witnesses … a cardinal priest should not be condemned except with forty-four witnesses, a cardinal deacon of the city of Rome without thirty-six witnesses, a subdeacon, acolyte, exorcist, lector, or doorkeeper except with seven witnesses."⁹ It is the world's first quantitative theory of probability. Which shows why being quantitative about probability is not necessarily a good thing.
James Franklin
The Science of Conjecture: Evidence and Probability before Pascal
Chapter 2

(Actually, there is some evidence that a quantitative theory of probability developed and then disappeared in ancient India.[10] But Franklin's essential point here is none-the-less well-taken.)


⁹ Foot-note in the original, citing Decretales Pseudo-Isidorianae, et Capitula Angilramni edited by Paul Hinschius, and recommending comparison with The Collection in Seventy-Four Titles: A Canon Law Manual of the Gregorian Reform edited by John Gilchrist.

[10] In The Story of Nala and Damayanti within the Mahābhārata, there is a character Rtuparna (aka Rituparna, and mistakenly as Rtupama and as Ritupama) who seems to have a marvelous understanding of sampling and is a master of dice-play. I learned about Rtuparna by way of Ian Hacking's outstanding The Emergence of Probability; Hacking seems to have learned of it by way of V.P. Godambe, who noted the apparent implication in A historical perspective of the recent developments in the theory of sampling from actual populations, Journal of the Indian Society of Agricultural Statistics v. 38 #1 (Apr 1976) pp 1-12.

Disappointment and Disgust

Sunday, 21 March 2010

In his Philosophical Theories of Probability, Donald Gillies proposes what he calls an intersubjective theory of probability. A better name for it would be group-strategy model of probability.

Subjectivists such as Bruno de Finetti ask the reader to consider the following sort of game:

  • Some potential event is identified.
  • Our hero must choose a real number (negative or positive) q, a betting quotient.
  • The nemesis, who is rational, must choose a stake S, which is a positive or negative sum of money or zero.
  • Our hero must, under any circumstance, pay the nemesis q·S. (If the product q·S is negative, then this amounts to the nemesis paying money to our hero.)
  • If the identified event occurs, then the nemesis must pay our hero S (which, if S is negative, then amounts to taking money from our hero). If it does not occur, then our hero gets nothing.
De Finetti argues that a rational betting quotient will capture a rational degree of personal belief, and that a probability is exactly and only a degree of personal belief.
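Under my reading of the rules as listed, the pay-offs can be put in executable form; the sketch below also exhibits the familiar coherence argument, that any betting quotient outside [0, 1] allows the nemesis to choose a stake guaranteeing our hero a loss whether or not the event occurs:

```python
# Our hero's net gain in the game described above: the hero always
# pays q*S, and receives S if the identified event occurs.

def hero_net(q, S, event_occurs):
    return (S if event_occurs else 0.0) - q * S

# Two incoherent betting quotients, and stakes the nemesis could pick:
for q, S in ((1.2, 100.0), (-0.2, -100.0)):
    print(q, S, hero_net(q, S, True), hero_net(q, S, False))
# q =  1.2:  -20.0 if the event occurs, -120.0 if not — a sure loss.
# q = -0.2: -120.0 if the event occurs,  -20.0 if not — a sure loss.
```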

Gillies asks us to consider games of the very same sort, except that the betting quotients must be chosen jointly amongst a team of players. Such betting quotients would be at least examples of what Gillies calls intersubjective probabilities. Gillies tells us that these are the probabilities of rational consensus. For example, these are ostensibly the probabilities of scientific consensus.

Opponents of subjectivists such as de Finetti have long argued that the sort of game that he proposes fails in one way or another to be formally identical to the general problem for the application of personal degrees of belief. Gillies doesn't even try to show how the game, if played by a team, is formally identical to the general problem of group commitment to propositions. He instead belabors a different point, which should already be obvious to all of his readers, that teamwork is sometimes in the interest of the individual.

Amongst other things, scientific method is about best approximation of the truth. There are some genuine, difficult questions about just what makes one approximation better than another, but an approximation isn't relevantly better for promoting such things as the social standing or the material wealth of a particular clique as such. It isn't at all clear who or what, in the formation of genuinely scientific consensus, would play a rôle that corresponds to that of the nemesis in the betting game.


Karl Popper, who proposed to explain probabilities in terms of objective propensities (rather than in terms of judgmental orderings or in terms of frequencies), asserted that

Causation is just a special case of propensity: the case of propensity equal to 1, a determining demand, or force, for realization.

Gillies joins others in taking him to task for the simple reason that probabilities can be inverted — one can talk both about the probability of A given B and that of B given A, whereäs presumably if A caused B then B cannot have caused A.

Later, for his own propensity theory, Gillies proposes to define probability to apply only to events that display a sort of independence. Thus, flips of coins might be described by probabilities, but the value of a random-walk process (where changes are independent but present value is a sum of past changes) would not itself have a probability. None-the-less, while the value of a random walk and similar processes would not themselves have probabilities, they'd still be subject to compositions of probabilities which we would previously have called probabilities.
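A toy example of the distinction — my own construction, not Gillies's:

```python
import random

# A random walk: each step is independent of every other, but the
# walk's value at any time is the sum of all past steps, and so the
# values at different times are not independent of one another.

random.seed(0)
steps = [random.choice((-1, 1)) for _ in range(10)]  # independent changes
value, values = 0, []
for step in steps:
    value += step
    values.append(value)  # present value = sum of past changes
print("steps: ", steps)
print("values:", values)
```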

In other words, Gillies has basically taken the liberty of employing a foundational notion of probability, and permitting its extension; he chooses not to call the extension probability, but that's just notation. Well, Popper had a foundational notion of propensity, which is a generalization of causality. He identified this notion with probability, and implicitly extended the notion to include inversions.


Later, Gillies offers dreadful criticism of Keynes. Keynes's judgmental theory of probability implies that every rational person with sufficient intellect and the same information set would ascribe exactly the same probability to a proposition. Gillies asserts

[…] different individuals may come to quite different conclusions even though they have the same background knowledge and expertise in the relevant area, and even though they are all quite rational. A single rational degree of belief on which all rational beings should agree seems to be a myth.

So much for the logical interpretation of probability, […].

No two human beings have or could have the same information set. (I am reminded of infuriating claims that monozygotic children raised by the same parents have both the same heredity and the same environment.) Gillies writes of the relevant area, but in the formation of judgments about uncertain matters, we may, and as I believe should, be informed by a very extensive body of knowledge. Awareness of matters that others might dismiss as irrelevant can provide support for general relationships. And I don't recall Keynes ever suggesting that there would be real-world cases of two people having the same information set and hence not disagreeing unless one of them were of inferior intellect.

After objecting that the traditional subjective theory doesn't satisfactorily cover all manner of judgmental probability, and claiming that his intersubjective notion can describe probabilities imputed by groups, Gillies takes another shot at Keynes:

When Keynes propounded his logical theory of probability, he was a member of an elite group of logically minded Cambridge intellectuals (the Apostles). In these circumstances, what he regarded as a single rational degree of belief valid for the whole of humanity may have been no more than the consensus belief of the Apostles. However admirable the Apostles, their consensus beliefs were very far from being shared by the rest of humanity. This became obvious in the 1930s when the Apostles developed a consensus belief in Soviet communism, a belief which was certainly not shared by everyone else.

Note the insinuation that Keynes thought that there were a single rational degree of belief valid for the whole of humanity, whereäs there is no indication that Keynes felt that everyone did, should, or could have the same information set. Rather than becoming obvious to him in the 1930s, it would have been evident to Keynes much earlier that many of his own beliefs and those of the other Apostles were at odds with those of most of mankind. Gillies' reference to the embrace of Marxism in the '30s by most of the Apostles simply looks like irrelevant, Red-baiting ad hominem to me. One doesn't have to like Keynes (as I don't), Marxism (as I don't), or the Apostles (as I don't) to be appalled by this passage (as I am).

A Note to the Other Five

Sunday, 14 March 2010

Probability is one elephant, not two or more formally identical or formally similar elephants.