Posts Tagged ‘probability’

Missed Article

Saturday, 21 November 2020

I found an article that, had I known of it, I would have noted in my probability paper: A Logic of Comparative Support: Qualitative Conditional Probability Relations Represented by Popper Functions by James Allen Hawthorne,
in The Oxford Handbook of Probability and Philosophy, edited by Alan Hájek and Christopher Hitchcock.

Professor Hawthorne adopts essentially unchanged most of Koopman's axiomata from The Axioms and Algebra of Intuitive Probability, but sets aside Koopman's axiom of Subdivision, noting that it may not seem as intuitively compelling as the others. In my own paper, I showed that Koopman's axiom of Subdivision was a theorem of a much simpler, more general principle in combination with an axiom that is equivalent to two of the axiomata in Koopman's later revision of his system. (The article containing that revision is not listed in Hawthorne's bibliography.) Less radically, I provided simpler alternatives to other axiomata, and included axiomata that did not apply to Koopman's purposes in his paper but did to the purposes of a general theory of decision-making.

Libertine Bayesianism

Thursday, 24 September 2020

As repeatedly noted by me and by many others, there are multiple theories about the fundamental notion of probability, including (though not restricted to) the notion of probabilities as objective, logical relationships amongst propositions and that of probabilities as degrees of belief.

Though those two notions are distinct, subscribers to each typically agree with subscribers to the other upon a great deal of the axiomatic structure of the logic of probability. Further, in practice the main-stream of the first group and that of the second group both arrive at their estimates of measures of probability by adjusting initial values through repeated application, as observations accumulate, of a principle known as Bayes' theorem. Indeed, the main-stream of one group are called objective Bayesians and the main-stream of the other are often called subjective Bayesians.[1] Where the two main-streams differ in practice is in the source of those initial values.
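That shared updating machinery can be sketched concretely. Everything in the sketch below, the two coin-bias hypotheses, the prior, and the observation sequence, is invented purely for illustration:

```python
# Discrete Bayesian updating: begin from initial values (a prior over
# hypotheses) and repeatedly apply Bayes' theorem as observations accumulate.

def bayes_update(prior, likelihoods):
    """prior: hypothesis -> probability; likelihoods: hypothesis -> P(obs | hypothesis)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a coin: fair, or biased 3:1 toward heads.
posterior = {"fair": 0.5, "biased": 0.5}       # the disputed initial values
p_heads = {"fair": 0.5, "biased": 0.75}

for observation in ["H", "H", "T", "H", "H"]:  # accumulating observations
    like = {h: (p if observation == "H" else 1 - p) for h, p in p_heads.items()}
    posterior = bayes_update(posterior, like)

print(posterior)   # the run of heads shifts belief toward "biased"
```

The two main-streams quarrel only over where the first line's initial values come from; the loop is common property.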

The objective Bayesians believe that, in the absence of information, one begins with what are called non-informative priors. This notion is evolved from the classical idea of a principle of insufficient reason, which said that one should assign equal probabilities to events or to propositions, in the absence of a reason for assigning different probabilities. (For example, begin by assuming that a die is fair.) The objective Bayesians attempt to be more shrewd than the classical theorists, but will often admit that in some cases non-informative priors cannot be found because of a lack of understanding of how to divide the possibilities (in some cases because of complexity).

The subjective Bayesians believe that one may use as a prior whatever initial degree of belief one has, measured on an interval from 0 through 1. As measures of probability are taken to be degrees of belief, any application of Bayes' theorem that results in a new value is supposed to result in a new degree of belief.

I want to suggest what I think to be a new school of thought, with a Bayesian sub-school, not-withstanding that I have no intention of joining this school.

If a set of things is completely ranked, it's possible to proxy that ranking with a quantification, such that if one thing has a higher rank than another then it is assigned a greater quantification, and that if two things have the same rank then they are assigned the same quantification. If all that we have is a ranking, with no further stipulations, then there will be infinitely many possible quantifications that will work as proxies. Often, we may want to tighten-up the rules of quantification (for example, by requiring that all quantities be in the interval from 0 through 1), and yet still it may be the case that infinitely many quantifications would work equally well as proxies.
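A small sketch of such proxying; the items, the ranks, and both quantifications below are arbitrary choices of mine:

```python
# Two different quantifications proxying the same complete ranking:
# strictly higher rank -> strictly greater number; tied rank -> equal number.

ranking = {"a": 1, "b": 2, "c": 2, "d": 3}   # rank 3 highest; b and c tie

proxy1 = {x: r / 3 for x, r in ranking.items()}       # values in [0, 1]
proxy2 = {x: r * r / 9 for x, r in ranking.items()}   # also in [0, 1]

def is_proxy(quant, ranking):
    """Does quant preserve both the strict order and the ties of ranking?"""
    items = list(ranking)
    return all(
        (ranking[x] > ranking[y]) == (quant[x] > quant[y])
        and (ranking[x] == ranking[y]) == (quant[x] == quant[y])
        for x in items for y in items
    )

print(is_proxy(proxy1, ranking), is_proxy(proxy2, ranking), proxy1 != proxy2)
```

Both quantifications serve equally well as proxies, though they differ; and infinitely many further variations on the same theme would also serve.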

Sets of measures of probability may be considered as proxies for underlying rankings of propositions or of events by probability. The principles to which most theorists agree when they consider probability rankings as such constrain the sets of possible measures, but so long as only a finite set of propositions or of events is under consideration, there are infinitely many sets of measures that will work as proxies.

A subjectivist feels free to use his or her degrees of belief so long as they fit the constraints, even though someone else may have a different set of degrees of belief that also fit the constraints. However, the argument for the admissibility of the subjectivist's own set of degrees of belief is not that it is believed; the argument is that one's own set of degrees of belief fits the constraints. Belief as such is irrelevant. It might be that one's own belief is colored by private information, but then the argument is not that one believes the private information, but that the information as such is relevant (as indeed it might be); and there would always be some other sets of measures that also conformed to the private information.

Perhaps one might as well use one's own set of degrees of belief, but one also might every bit as well use any conforming set of measures.

So what I now suggest is what I call a libertine school, which regards measures of probability as proxies for probability rankings and which accepts any set of measures that conform to what is known of the probability ranking of propositions or of events, regardless of whether these measures are thought to be the degrees of belief of anyone, and without any concern that these should become the degrees of belief of anyone; and in particular I suggest libertine Bayesianism, which accepts the analytic principles common to the objective Bayesians and to the subjective Bayesians, but which will allow any set of priors that conforms to those principles.


[1] So great a share of subjectivists subscribe to a Bayesian principle of updating that often the subjective Bayesians are simply called subjectivists as if there were no need to distinguish amongst subjectivists. And, until relatively recently, so little recognition was given to the objective Bayesians that Bayesian was often taken as synonymous with subjectivist.

Again into the Breach

Monday, 15 January 2018

As occasionally noted in publicly accessible entries to this 'blog, I have been working on a paper on qualitative probability. A day or so before Christmas, I had a draft that I was willing to promote beyond a circle of friends.

I sent links to a few researchers, some of them quite prominent in the field. One of them responded very quickly in a way that I found very encouraging; and his remarks motivated me to make some improvements in the verbal exposition.

I hoped and still hope to receive responses from others, but as of to-day have not. I'd set to-day as my dead-line to begin the process of submitting the paper to academic journals, and therefore have done so.

The process of submission is emotionally difficult for many authors, and my past experiences have been especially bad, including having a journal fail to reach a decision for more than a year-and-a-half, so that I ultimately withdrew the paper from their consideration. I even abandoned one short paper because the psychological cost of trying to get it accepted in some journal was significantly impeding my development of other work. While there is some possibility that finding acceptance for this latest paper will be less painful, I am likely to be in for a very trying time.

It is to be hoped that, none-the-less, I will be able to make some progress on the next paper in the programme of which my paper on indecision and now this paper on probability are the first two installments. In the presumably forth-coming paper, I will integrate incomplete preferences with incompletely ordered probabilities to arrive at a theory of rational decision-making more generalized and more reälistic than that of expected-utility maximization. A fourth and fifth installment are to follow that.

But the probability paper may be the most important thing that I will ever have written.

Theories of Probability — Perfectly Fair and Perfectly Awful

Tuesday, 11 April 2017

I've not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were impossibly good; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were implausibly bad.


The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word probability and its cognates from the Latin probabilitas meant plausibility.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a perfect symmetry (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be no more probable is to be equally probable, then a natural quantification arises.
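Under such presumed symmetry, quantification reduces to counting; the two-die events below are my own examples, not Fermat's or Pascal's:

```python
# With perfectly symmetrical dice, each of the 36 ordered outcomes of a
# throw of two dice is no more probable than any other; treating "no more
# probable" as "equally probable" quantifies an event as
# (favourable outcomes) / (total outcomes).
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # 36 ordered pairs

def prob(event):
    favourable = [o for o in outcomes if event(o)]
    return Fraction(len(favourable), len(outcomes))

print(prob(lambda o: sum(o) == 7))    # 6 of 36 outcomes
print(prob(lambda o: sum(o) == 12))   # 1 of 36 outcomes
```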

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn't produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure that consideration of these idealized cases had revealed the proper mathematics for use across all cases. Some were so sure of that mathematics that they inferred that it must be possible to describe the world in terms of cases that were somehow equally likely, without prior investigation positively revealing them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon experience to understand what it means for a die to be more or less balanced; likewise for other devices.)


As subjectivists have it, to say that one thing is more probable than another is to say that that first thing is more believed than is the other. (GLS Shackle proposed that the probability of something might be measured by how surprised one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if X is more probable than Y and Y more probable than Z, then X must be more probable than Z. And the Bayesian subjectivists make a particular demand for what they call coherence. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as gambling ratios without an opponent finding some combination of gambles with those ratios that would guarantee that one suffered a net loss. Such a combination is known as a Dutch book.
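A minimal sketch of a Dutch book, with invented numbers: if one's betting quotients for an event and for its complement sum to more than 1, an opponent who sells one both bets guarantees one a net loss.

```python
# The agent's betting quotient for an event E is the price, per $1 of
# prize, at which the agent will buy a bet paying $1 if E occurs.
q_A, q_notA = 0.6, 0.6   # incoherent: the quotients sum to 1.2 > 1

# The opponent sells the agent both bets.  Whatever happens, exactly one
# of the two bets pays $1, so the agent's net is the same sure loss.
nets = []
for A_occurs in (True, False):
    payoff = (1 if A_occurs else 0) + (1 if not A_occurs else 0)
    nets.append(round(payoff - (q_A + q_notA), 10))

print(nets)   # a guaranteed loss of $0.20 in either case: a Dutch book
```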

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some other means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would need a preëstablished set of quantifications or even could justify committing to such a set. (It is also not clear why, if one's beliefs correspond to measures, one may not use different measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would attempt to get one to agree to a Dutch book. (I don't believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases.


The impossible goodness imagined by the classical theorists was of a different aspect than is the implausible badness of the Bayesian subjectivists. A fair coin is not a friendly coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy's Law.

Generalizing the Principle of Additivity

Friday, 17 February 2017

One of the principles often suggested as an axiom of probability is that of additivity. The additivity here is a generalization of arithmetic additivity — which generalization, with other assumptions, will imply the arithmetic case.

The classic formulation of this principle came from Bruno de Finetti. De Finetti was a subjectivist. A typical subjectivist is amongst those who prefer to think in terms of the probability of events, rather than in terms of the probability of propositions. And subjectivists like to found their theory of probability in terms of unconditional probabilities. Using somewhat different notation from that here, the classic formulation of the principle of additivity is (X ∩ Z = ∅ = Y ∩ Z) ⇒ [(X ≽ Y) ⇔ (X ∪ Z ≽ Y ∪ Z)] in which X, Y, and Z are sets of events, and in which ≽ stands in here for my notation of an underscored arrowhead for weak supraprobability, the union of strict supraprobability with equiprobability.
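For any additive measure, the principle holds mechanically. In the sketch below, a small finite space with invented weights, ≥ on the measure stands in for weak supraprobability:

```python
# Check additivity on a small finite space: whenever X∩Z = ∅ = Y∩Z,
# we have P(X) >= P(Y) iff P(X∪Z) >= P(Y∪Z), since under disjointness
# P(X∪Z) = P(X) + P(Z) and P(Y∪Z) = P(Y) + P(Z).
from itertools import chain, combinations

space = {"a": 1, "b": 2, "c": 3, "d": 4}   # unnormalized integer weights suffice

def subsets(s):
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def P(event):
    return sum(space[e] for e in event)

for X in subsets(space):
    for Y in subsets(space):
        for Z in subsets(space):
            if not (X & Z) and not (Y & Z):
                assert (P(X) >= P(Y)) == (P(X | Z) >= P(Y | Z))
print("additivity holds on all admissible triples")
```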

One of the things that I noticed when considering this proposition is that the condition that Y ∩ Z be empty is superfluous. I tried to get a note published on that issue, but journals were not receptive. I had bigger fish to fry than that one, so I threw-up my hands and moved onward.

When it comes to probability, I'm a logicist. I see probability as primarily about relations amongst propositions (though every event corresponds to a proposition that the event happen and every proposition corresponds to the event that the proposition is true), and I see each thing about which we state a probability as a compound proposition of the form X given c in which X and c are themselves propositions (though if c is a tautology, then the proposition operationalizes as unconditional). I've long pondered what would be a proper generalized restatement of the principle of additivity. If you've looked at the set of axiomata on which I've been working, then you've seen one or more of my efforts. Last night, I clearly saw what I think to be the proper statement: To get de Finetti's principle from it, set c2 = c1 and make it a tautology, and set X2 = Z = Y2. Note that the condition of (X2 | c1) being weakly supraprobable to (Y2 | c2) is automatically met when the two are the same thing. By itself, this generalization implies my previous generalization and part of another principle that I was treating as an axiom; the remainder of that other principle can be got by applying basic properties of equiprobability and the principle that strict supraprobability and equiprobability are mutually exclusive to this generalization. The principle that is thus demoted was awkward; the axiom that was recast was acceptable as it was, but the new version is elegant.

Deal-Breakers

Saturday, 7 January 2017

Elsewhere, Pierre Lemieux asked, "In two sentences, what do you think of the Monty Hall paradox?" Unless I construct sentences loaded with conjunctions (which would seem to violate the spirit of the request), an answer in just two sentences will be unsatisfactory (though I provided one). Here in my 'blog, I'll write at greater length.


The first appearance in print of what's called the Monty Hall Problem seems to have been in a letter by Steve Selvin to The American Statistician v29 (1975) #1. The problem resembles those with which Monty Hall used to present contestants on Let's Make a Deal, though Hall has asserted that no problem quite like it were presented on that show. The most popular statement of the Monty Hall Problem came in a letter by Craig Whitaker to the Ask Marilyn column of Parade:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, Do you want to pick door No. 2? Is it to your advantage to switch your choice?

(Before we continue, take car and goat to stand, respectively, for something that you want and something that you don't want, regardless of your actual feelings about cars and about goats.)

There has been considerable controversy about the proper answer, but the text-book answer is that, indeed, one should switch choices. The argument is that, initially, one has a 1/3 probability that the chosen Door has the car, and a 2/3 probability that the car is behind one of the other two Doors. When the host opens one of the other two Doors, the probability remains that the car is behind one of the unchosen Doors, but has gone to 0 for the opened Door, which is to say that the probability is now 2/3 that the car is behind the unchosen, unopened Door.
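The text-book argument can be checked by simulation, but only after granting it an assumption that the problem statement never supplies: a rule for the host. The sketch below builds in the usual one, that the host always opens an unchosen Door concealing a goat:

```python
# Monte Carlo for the text-book Monty Hall.  The host rule assumed here
# (always open an unchosen door with a goat) is an added assumption, not
# something stated in the problem.
import random

def play(switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens an unchosen door that conceals a goat.
        opened = rng.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```

Change the host rule and the 2/3 evaporates, which is much of the point of what follows.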


My first issue with the text-book answer is with its assignment of initial, quantified probabilities. I cannot even see a basis for qualitative probabilities here; which is to say that I don't see a proper reason for thinking either that the probability of the car being behind a given Door is equal to that for any other Door or that the probability of the car being behind some one Door is greater than that of any other Door. As far as I'm concerned, there is no ordering at all.

The belief that there must be an ordering usually follows upon the even bolder presumption that there must be a quantification. Because quantification has proven to be extremely successful in a great many applications, some people make the inference that it can be successfully applied to any and every question. Others, a bit less rash, take the position that it can be applied everywhere except where it is clearly shown not to be applicable. But even the less rash dogma violates Ockham's razor. Some believe that they have a direct apprehension of such quantification. However, for most of human history, if people thought that they had such literal intuitions then they were silent about it; a quantified notion of probability did not begin to take hold until the second half of the Seventeenth Century. And appeals to the authority of one's intuition should carry little if any weight.

Various thinkers have adopted what is sometimes called the principle of indifference or the principle of insufficient reason to argue that, in the absence of any evidence to the contrary, each of n collectively exhaustive and mutually exclusive possibilities must be assigned equal likelihood. But our division of possibilities into n cases, rather than some other number of cases, is an artefact of taxonomy. Perhaps one or more of the Doors is red and the remainder blue; our first division could then be between two possibilities, so that (under the principle of indifference) one Door would have an initial probability of 1/2 and each of the other two would have a probability of 1/4.
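The arithmetic of that taxonomy artefact, using the red-and-blue division just described:

```python
# The principle of indifference yields different priors for the very same
# Door depending on how the possibilities are carved up.
from fractions import Fraction

# Partition 1: three Doors, treated alike.
priors_by_door = {d: Fraction(1, 3) for d in (1, 2, 3)}

# Partition 2: first divide by colour (red = Door 1; blue = Doors 2 and 3),
# then divide the blue half between its two Doors.
priors_by_colour = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}

print(priors_by_door[1], "vs", priors_by_colour[1])   # 1/3 vs 1/2 for Door 1
```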

Other persons will propose that we have watched the game played many times, and observed that a car has with very nearly equal frequency appeared behind each of the three Doors. But, while that information might be helpful were we to play many times, I'm not aware of any real justification for treating frequencies as decision-theoretic weights in application to isolated events. You won't be on Monty's show to-morrow.

Indeed, if a guest player truly thought that the Doors initially represented equal expectations, then that player would be unable to choose amongst them, or even to delegate the choice (as the delegation has an expectation equal to that of each Door); indifference is a strange, limiting case. However, indecision — the aforementioned lack of ordering — allows the guest player to delegate the decision. So, either the Door was picked for the guest player (rather than by the guest player), or the guest player associated the chosen Door with a greater probability than either unchosen Door. That point might seem a mere quibble, but declaring that the guest player picked the Door is part of a rhetorical structure that surreptitiously and fallaciously commits the guest player to a positive judgment of prior probability. If there is no case for such commitment, then the paradox collapses.


Well, okay now, let's just beg the question, and say not only that you were assigned Door Number 1, but that for some mysterious reason you know that there is an equal probability of the car being behind each of the Doors. The host then opens Door Number 3, and there's a goat. The problem as stated does not explain why the host opened Door Number 3. The classical statement of the problem does not tell the reader what rule is being used by the host; the presentation tells us that the host knows what's behind the doors, but says nothing about whether or how he uses that knowledge. Hypothetically, he might always open a Door with a goat, or he might use some other rule, so that there were a possibility that he would open the Door with a car, leaving the guest player to select between two concealed goats.

Nowhere in the statement of the problem are we told that you are the sole guest player. Something seems to go very wrong with the text-book answer if you are not. Imagine that there are many guest players, and that outcomes are duplicated in cases in which more than one guest player selects or is otherwise assigned the same Door. The host opens Door Number 3, and each of the guest players who were assigned that Door trudges away with a goat. As with the scenario in which only one guest player is imagined, more than one rule may govern this choice made by the host. Now, each guest player who was assigned Door Number 1 is permitted to change his or her assignment to Door Number 2, and each guest player who was assigned Door Number 2 is allowed to change his or her assignment to Door Number 1. (Some of you might recall that I proposed a scenario essentially of this sort in a 'blog entry for 1 April 2009.) Their situations appear to be symmetrical, such that if one set of guest players should switch then so should the other; yet if one Door is the better choice for one group then it seems that it ought also to be the better for the other group.

The resolution is in understanding that the text-book solution silently assumed that the host were following a particular rule of selection, and that this rule were known to the guest player, whose up-dating of probabilities thus could be informed by that knowledge. But, in order for the text-book solution to be correct, all players must be targeted in the same manner by the response of the host. When there is only one guest player, it is possible for the host to observe rules that respond to all guest players in ways that are not possible when there are multiple guest players, unless they are somehow all assigned the same Door. It isn't even possible to do this for two sets of players each assigned different Doors.


Given the typical presentation of the problem, the typical statement of ostensible solution is wrong; it doesn't solve the problem that was given, and doesn't identify the problem that was actually solved.


[No goats were harmed in the writing of this entry.]

Headway

Saturday, 7 January 2017

My paper on indecision is part of a much larger project. The next step in that project is to provide a formal theory of probability in which it is not always possible to say of outcomes either that one is more probable than another or that they are equally likely. That theory needs to be sufficient to explain the behavior of rational economic agents.

I began struggling actively with this problem before the paper on indecision was published. What I've had is an evolving set of axiomata that resembles the nest of a rat. I've thought that the set has been sufficient; but the axiomata have made over-lapping assertions, there have been rather a lot of them, and one of them has been complex to a degree that made me uncomfortable. Were I better at mathematics, then things might have been put in good order long ago. (I am more able at mathematics than is the typical economist, but I wish that I were considerably better still.) On the other hand, while there are certainly people better at mathematics than am I, no one seems to have accomplished what I seek to do. Economics is, after all, more than its mathematics.

What has most bothered me has been that complex axiom. There hasn't seemed much hope of resolving the general over-lap and of reducing the number of axiomata without first reducing that particular axiom. On 2 January, I was able to do just that, dissolving that axiom into two axiomata, each of which is acceptably simple. Granted that the number of axiomata increased by one, but now that the parts are each simple, I can begin to see how to reduce their overlap. Eliminating that overlap should either pare or vindicate the number of axiomata.

I don't know whether, upon getting results completed and a paper written around them, I would be able to get my work published in a respectable journal. I don't know whether, upon my work's getting published, it would find a significant readership. But the work is deeply important.

Nihil ex Nihilo

Tuesday, 6 December 2016

In his foundational work on probability,[1] Bernard Osgood Koopman would write something of the form α / κ for a suggested observation α in the context of a presumption κ. That's not how I proceed, but I don't actively object to his having done so, and he had a reason for it. Though Koopman well understood that real-life rarely offered a basis for completely ordering such things by likelihood, let alone associating them with quantities, he was concerned to explore the cases in which quantification were possible, and he wanted his readers to see something rather like division there. Indeed, he would call the left-hand element α a numerator, and the right-hand element κ the denominator.

He would further use 0 to represent that which were impossible. This notation is usable, but I think that he got a bit lost because of it. In his presentation of axiomata, Koopman verbally imposes a tacit assumption that no denominator were 0. This attempt at assumption disturbs me, not because I think that a denominator could be 0, but because it doesn't bear assuming. And, as Koopman believed that probability theory were essentially a generalization of logic (as do I), I think that he should have seen that the proposition didn't bear assuming. Since Koopman was a logicist, the only thing that he should associate with a denominator of 0 would be a system of assumptions that entailed a self-contradiction; anything else is more plausible than that.

In formal logic, it is normally accepted that anything can follow if one allows a self-contradiction into a system, so that any conclusion as such is uninteresting. If faced by something such as X ∨ (Y ∧ ¬Y) (ie X or both Y and not-Y), one throws away the (Y ∧ ¬Y), leaving just the X; if faced with a conclusion Y ∧ ¬Y then one throws away whatever forced that awful thing upon one.[2] Thus, the formalist approach wouldn't so much forbid a denominator of 0 as declare everything that followed from it to be uninteresting, of no worth. A formal expression that no contradiction is entailed by the presumption κ would have the form ¬(κ ⇒ [(Y ∧ ¬Y)∃Y]) but this just dissolves uselessly:
¬(¬κ ∨ [(Y ∧ ¬Y)∃Y])
¬¬κ ∧ ¬[(Y ∧ ¬Y)∃Y]
κ ∧ [¬(Y ∧ ¬Y)∀Y]
κ ∧ [(¬Y ∨ ¬¬Y)∀Y]
κ ∧ [(¬Y ∨ Y)∀Y]
κ
(because (X ⇔ [X ∧ (Y ∨ ¬Y)∀Y])∀X).
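Over a two-valued semantics the dissolution can be checked by brute force, with ∃Y and ∀Y ranging over the two truth-values:

```python
# Verify that ¬(κ ⇒ ∃Y(Y ∧ ¬Y)) is equivalent to κ itself.
def implies(a, b):
    return (not a) or b

# ∃Y(Y ∧ ¬Y) is false: no truth-value satisfies a contradiction.
exists_contradiction = any(Y and not Y for Y in (True, False))
assert exists_contradiction is False

for kappa in (True, False):
    assert (not implies(kappa, exists_contradiction)) == kappa

print("¬(κ ⇒ ∃Y(Y ∧ ¬Y)) ⇔ κ confirmed over both truth-values")
```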

In classical logic, the principle of non-contradiction is seen as the bedrock principle, not an assumption (tacit or otherwise), because no alternative can actually be assumed instead.[3] From that perspective, one should call the absence of 0-valued denominators simply a principle.


[1] Koopman, Bernard Osgood; The Axioms and Algebra of Intuitive Probability, The Annals of Mathematics, Series 2 Vol 41 (1940) #2, pp 269-292; and The Bases of Probability, Bulletin of the American Mathematical Society, Vol 46 (1940) #10, pp 763-774.

[2] Indeed, that principle of rejection is the basis of proof by contradiction, which method baffles so many people!

[3] Aristoteles, The Metaphysics, Bk 4, Ch 3, 1005b15-22.

Strong Independence in Decision Theory

Thursday, 21 July 2016

In the course of some remarks on Subjective Probability by Richard C. Jeffrey, and later in defending a claim by Gary Stanley Becker, I have previously given some explanation of the model of expected-utility maximization and of axiomata of independence.

Models of expected-utility maximization are so intuïtively appealing to some people that they take one of these models to be peculiarly rational, and deviations from any such model thus to be irrational. I note that the author of a popular 'blog seems to have done just that, yester-day.[0]

My own work shows that quantities cannot be fitted to preferences, which pulls the rug from under expected-utility maximization, but there are other problems as well. The paradox that the 'blogger explores represents a violation of the strong independence axiom. What I want to do here is first to explain again expected-utility maximization, and then to show that the strong independence axiom violates rationality.


A mathematical expectation is what people often mean when they say average — a probability-weighted sum of measures of possible outcomes. For example, when a meteorologist gives an expected rainfall or an expected temperature for to-morrow, she isn't actually telling you to anticipate exactly that rainfall or exactly that temperature; she's telling you that, given observed conditions to-day, the probability distribution for to-morrow has a particular mean quantity of rain or a particular mean temperature.

The actual mathematics of expectation is easiest to explain in simple cases of gambling (which is just whence the modern, main-stream theories of probability itself arose). For example, let's say that we have a fair coin (with a 50% chance of heads and a 50% chance of tails); and that if it comes-up heads then you get $100, while if it comes-up tails then you get $1. The expected pay-out is .5 × $100 + .5 × $1 = $50.50. Now, let's say that another coin has a 25% chance of coming-up heads and a 75% chance of coming-up tails, and you'd get $150 for heads and $10 for tails. Its expected pay-out is .25 × $150 + .75 × $10 = $45. More complicated cases arise when there are more than two possible outcomes, but the basic formula is just prob(x1) · m(x1) + prob(x2) · m(x2) + … + prob(xn) · m(xn) where xi is the i-th possible outcome, prob(xi) is the probability of that i-th possible outcome, and m(xi) is some measure (mass, temperature, dollar-value, or whatever) of that outcome. In our coin-flipping examples, each expectation is of form prob(heads) · payout(heads) + prob(tails) · payout(tails).
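The two coin examples, run through the general formula (the function name is my own):

```python
def expectation(outcomes):
    """outcomes: list of (probability, measure) pairs; returns the
    probability-weighted sum prob(x1)·m(x1) + ... + prob(xn)·m(xn)."""
    return sum(p * m for p, m in outcomes)

fair_coin = [(0.5, 100), (0.5, 1)]        # $100 on heads, $1 on tails
biased_coin = [(0.25, 150), (0.75, 10)]   # $150 on heads, $10 on tails

print(expectation(fair_coin))    # 50.5
print(expectation(biased_coin))  # 45.0
```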

One of the numerical examples of coin-flips offered both a higher maximum pay-out ($150 v $100) and a higher minimum pay-out ($10 v $1) yet a lower expected pay-out ($45 v $50.50). Most people will look at this, and decide that the expected pay-out should be the determining factor, though it's harder than many people reälize to make the case.

With monetary pay-outs, there is a temptation to use the monetary unit as the measure in computing the expectation by which we choose. But the actual usefulness of money isn't constant. We have various priorities; and, when possible, we take care of the things of greatest priority before we take care of things of lower priority. So, typically, if we get more money, it goes to things of lower priority than did the money that we already had. The next dollar isn't usually as valuable to us as any one of the dollars that we already had. Thus, a pay-out of $1 million shouldn't be a thousand times as valuable as a pay-out of $1000, especially if we keep in-mind a context in which a pay-out will be on top of whatever we already have in life. So, if we're making our decisions based upon some sort of mathematical expectation then, instead of computing an expected monetary value, we really want an expected usefulness value, prob(x1) × u(x1) + prob(x2) × u(x2) + … + prob(xn) × u(xn), where u() is a function giving a measure of usefulness. This u is the main-stream notion of utility, though sadly it should be noted that most main-stream economists have quite lost sight of the point that utility as they imagine it is just a special case of usefulness.
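A toy sketch of the point: if we take log(1 + dollars) as a stand-in for a usefulness function (my assumption purely for illustration; the post commits to no particular u), the ranking of the two coins from earlier can flip, because the logarithm damps large pay-outs:

```python
import math

def expected_value(outcomes, measure=lambda x: x):
    """prob(x1) * measure(x1) + ... + prob(xn) * measure(xn)."""
    return sum(p * measure(x) for p, x in outcomes)

# log(1 + dollars): an illustrative stand-in for diminishing usefulness.
u = lambda dollars: math.log(1.0 + dollars)

fair   = [(0.5, 100.0), (0.5, 1.0)]
biased = [(0.25, 150.0), (0.75, 10.0)]

# By expected dollars, the fair coin wins ($50.50 v $45) ...
print(expected_value(fair), expected_value(biased))
# ... but by expected u, the biased coin wins, since its worst case ($10)
# is far more useful than the fair coin's worst case ($1).
print(expected_value(fair, u), expected_value(biased, u))
```

Any concave u would make the same qualitative point; the logarithm is just a familiar choice.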

A model of expected-utility maximization is one that takes each possible action aj, associates it with a set of probabilities {prob(x1|aj), prob(x2|aj), …, prob(xn|aj)} (the probabilities now explicitly noted as conditioned upon the choice of action), and asserts that we should choose an action ak which gives us an expected utility prob(x1|ak) × u(x1) + prob(x2|ak) × u(x2) + … + prob(xn|ak) × u(xn) as high or higher than that of any other action.
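That selection rule can be sketched directly (the action names and numbers here are mine, chosen only to illustrate the rule):

```python
def best_action(actions, u):
    """actions: dict mapping an action name to its list of
    (prob-of-outcome-given-action, outcome) pairs.
    Returns an action whose expected utility is maximal."""
    def expected_utility(dist):
        return sum(p * u(x) for p, x in dist)
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "safe":  [(1.0, 40.0)],               # $40 for certain
    "risky": [(0.5, 100.0), (0.5, 1.0)],  # the fair coin from earlier
}

# With u as the identity (expected dollars), "risky" wins: 50.50 v 40.
print(best_action(actions, u=lambda x: x))
```

A sufficiently concave u would instead pick "safe", which is the whole reason the model works in utility rather than dollars.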

If there is a non-monetary measure of usefulness in the case of monetary pay-outs, then there is no evident reason that there should not be such a measure in the case of non-monetary pay-outs. (And, likewise, if there is no such measure in the case of non-monetary pay-outs, there is no reason to suppose one in the case of monetary pay-outs, where we have seen that the monetary pay-out isn't really a proper measure.) The main-stream of economic theory runs with that; its model of decision-making is expected-utility maximization.

The model does not require that people have a conscious measure of usefulness, and certainly does not require that they have a conscious process for arriving at such a measure; it can be taken as a model of the gut. And employment of the model doesn't mean that the economist believes that it is literally true; economists across many schools-of-thought regard idealizations of various sorts as approximations sufficient for their purposes. It is only lesser economists who do so incautiously and without regard to problems of scale.


But, while expected-utility maximization may certainly be regarded as an idealization, it should not be mistaken for an idealization of peculiar rationality nor even for an idealization of rationality of just one variety amongst many. Expected-utility maximization is not rational even if we grant — as I would not — that there is some quantification that can be fitted to our priorities.

Expected-utility maximization entails a proposition that the relevant expectation is of potential outcomes which are taken themselves to be no better or worse for being more or less probable. That is to say that what would be the reälized value of an outcome is the measure of the outcome to be used in the computation of the expectation; the expectation is simply lineär in the probabilities. This feature of the model follows from what is known as the strong independence axiom (underscore mine) because Paul Anthony Samuelson, having noticed it, conceptualized it as an axiom. It and propositions suggested to serve in its stead as an axiom (thus rendering it a theorem) have been challenged in various ways. I will not here survey the challenges.

However, the first problem that I saw with expected-utility maximization was with that lineärity, in-so-far as it implies that people do not benefit from the experience of selecting amongst discernible non-trivial lotteries as such.[1]

Good comes from engaging in some gambles as such, exactly because gambling more generally is unavoidable. We need practice to gamble properly, and practice to stay in cognitive shape for gambling. Even if we get that practice without seeking it, in the course of engaging in our everyday gambles, there is still value to that practice as such. A gamble may become more valuable as a result of the probability of the best outcome being made less probable, and less valuable as a result of the best outcome becoming more certain. The value of lotteries is not lineär in their probabilities!

It might be objected that this value is only associated with our cognitive limitations, which limitations, it might be argued, represent a sort of irrationality. But we only compound the irrationality if we avoid remedial activity because under other circumstances it would not have done us good. Nor do I see that we should any more accept that a person who needs cognitive exercise to stay in cognitive shape is thus out of cognitive shape than we would say that someone who needs physical exercise to stay in physical shape is thus out of physical shape.


[0 (2016:07/22)] Very quickly, in a brief exchange, he saw the error, and he's corrected his entry; so I've removed the link and identification here.

[1] When I speak or write of lotteries or of gambling, I'm not confining myself to those cases for which lay-people normally use those terms, but applying them to situations in which one is confronted by a choice of actions, and various outcomes (albeït some perhaps quite impossible) may be imagined; things to which the terms lottery and gamble are more usually applied are simply special cases of this general idea. A trivial lottery is one that most people would especially not think to be a lottery or gamble at all, because the only probabilities are either 0 or 1; a non-trivial lottery involves outcomes with probabilities in between those two. Of course, in real life there are few if any perfectly trivial lotteries, but a lot of things are close enough that people imagine them as having no risk or uncertainty; that's why I refer to discernible non-trivial lotteries, which people see as involving risk or uncertainty.

Dying Asymptotically

Thursday, 2 July 2015

It seems as if most economists do not know how to handle death.

What I here mean is not that they don't cope well with the deaths of loved ones or with their own mortality — though I suspect that they don't. What I mean is that their models of the very long-run are over-simply conceived and poorly interpreted when it comes to life-spans.

In the typical economic model of the very long-run, agents either live forever, or they live some fixed span of time, and then die. Often, economists find that a model begins to fit the real world better if they change it from assuming that people live that fixed amount of time to assuming that people live forever, and some economists then conclude that people are irrationally assuming their own immortality.

Here's a better thought. In the now, people are quite sure that they are alive. They are less sure about the next instant, and still less sure about the instant after that. The further that they think into the future, the less their expectation of being alive … but there is no time at which most people are dead certain that their lives will have ended. (If I asked you, the reader, how it might be possible for you to be alive in a thousand years, chances are that you could come up with some scenario.)

On the assumption that personalistic probabilities may be quantified, then, imputed probabilities of being alive, graphed against time, would approach some minimum asymptotically. My presumption would be that the value thus approached would be 0 — that most people would have almost no expectation of being alive after some span of years. But it would never quite be zero.
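One simple shape for such an imputed survival curve — purely an illustrative toy of mine, not anything the post commits to — is an exponential decay, which is strictly positive at every horizon yet approaches 0 asymptotically:

```python
import math

def imputed_alive_probability(years_ahead, scale=40.0):
    """A toy subjective survival curve: certainty of being alive now,
    decreasing expectation at longer horizons, never exactly zero."""
    return math.exp(-years_ahead / scale)

for t in (0, 50, 100, 1000):
    # Even at 1000 years the imputed probability is tiny but nonzero.
    print(t, imputed_alive_probability(t))
```

The particular functional form and the scale of 40 years are arbitrary assumptions; the only features doing work are monotone decrease and a strictly positive asymptote-approaching tail.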

While I'm sure that some models will only work on the assumption that people impute absolute certainty to being alive forever, I suspect that an awful lot of models will work simply by accepting that most people embrace neither that madness nor the madness of absolute certainty that they will be dead at some specific time. Other models may need a more detailed description of the probability function.

As I've perhaps said or implied somewhere in this 'blog, I don't think that real-life probabilities are usually quantified; I would therefore be inclined to resist adopting a model with quantified probabilities, though such toys can be very useful heuristics. The weaker notion that probabilities are an incomplete preördering would correspond to some weaker notion than an asymptotic approach, but I haven't given much thought to what it would be.