Posts Tagged ‘decision theory’

Again into the Breach

Monday, 15 January 2018

As occasionally noted in publicly accessible entries to this 'blog, I have been working on a paper on qualitative probability. A day or so before Christmas, I had a draft that I was willing to promote beyond a circle of friends.

I sent links to a few researchers, some of them quite prominent in the field. One of them responded very quickly in a way that I found very encouraging; and his remarks motivated me to make some improvements in the verbal exposition.

I hoped and still hope to receive responses from others, but as of to-day have not. I'd set to-day as my dead-line to begin the process of submitting the paper to academic journals, and therefore have done so.

The process of submission is emotionally difficult for many authors, and my past experiences have been especially bad, including having a journal fail to reach a decision for more than a year-and-a-half, so that I ultimately withdrew the paper from their consideration. I even abandoned one short paper because the psychological cost of trying to get it accepted in some journal was significantly impeding my development of other work. While there is some possibility that finding acceptance for this latest paper will be less painful, I am likely to be in for a very trying time.

It is to be hoped that, none-the-less, I will be able to make some progress on the next paper in the programme of which my paper on indecision and now this paper on probability are the first two installments. In the presumably forth-coming paper, I will integrate incomplete preferences with incompletely ordered probabilities to arrive at a theory of rational decision-making more generalized and more reälistic than that of expected-utility maximization. A fourth and fifth installment are to follow that.

But the probability paper may be the most important thing that I will ever have written.

Deal-Breakers

Saturday, 7 January 2017

Elsewhere, Pierre Lemieux asked In two sentences, what do you think of the Monty Hall paradox? Unless I construct sentences loaded with conjunctions (which would seem to violate the spirit of the request), an answer in just two sentences will be unsatisfactory (though I provided one). Here in my 'blog, I'll write at greater length.


The first appearance in print of what's called the Monty Hall Problem seems to have been in a letter by Steve Selvin to The American Statistician v29 (1975) #1. The problem resembles those with which Monty Hall used to present contestants on Let's Make a Deal, though Hall has asserted that no problem quite like it were presented on that show. The most popular statement of the Monty Hall Problem came in a letter by Craig Whitaker to the Ask Marilyn column of Parade:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, Do you want to pick door No. 2? Is it to your advantage to switch your choice?

(Before we continue, take car and goat to stand, respectively, for something that you want and something that you don't want, regardless of your actual feelings about cars and about goats.)

There has been considerable controversy about the proper answer, but the text-book answer is that, indeed, one should switch choices. The argument is that, initially, one has a 1/3 probability that the chosen Door has the car, and a 2/3 probability that the car is behind one of the other two Doors. When the host opens one of the other two Doors, the probability remains that the car is behind one of the unchosen Doors, but has gone to 0 for the opened Door, which is to say that the probability is now 2/3 that the car is behind the unchosen, unopened Door.
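
Under the premises that the text-book answer silently assumes (uniform priors, and a host who always knowingly opens a goat Door), a quick simulation reproduces its figures — a sketch of that answer, not an endorsement of its premises:

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Simulate the text-book Monty Hall game; return the win frequency."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # car placed uniformly at random
        pick = random.randrange(3)   # guest's initial door
        # Host knowingly opens a goat door other than the guest's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# Under these assumptions, switching wins about 2/3 of the time,
# staying about 1/3.
```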


My first issue with the text-book answer is with its assignment of initial, quantified probabilities. I cannot even see a basis for qualitative probabilities here; which is to say that I don't see a proper reason for thinking either that the probability of the car being behind a given Door is equal to that for any other Door or that the probability of the car being behind some one Door is greater than that of any other Door. As far as I'm concerned, there is no ordering at all.

The belief that there must be an ordering usually follows upon the even bolder presumption that there must be a quantification. Because quantification has proven to be extremely successful in a great many applications, some people make the inference that it can be successfully applied to any and every question. Others, a bit less rash, take the position that it can be applied everywhere except where it is clearly shown not to be applicable. But even the less rash dogma violates Ockham's razor. Some believe that they have a direct apprehension of such quantification. However, for most of human history, if people thought that they had such literal intuitions then they were silent about it; a quantified notion of probability did not begin to take hold until the second half of the Seventeenth Century. And appeals to the authority of one's intuition should carry little if any weight.

Various thinkers have adopted what is sometimes called the principle of indifference or the principle of insufficient reason to argue that, in the absence of any evidence to the contrary, each of n collectively exhaustive and mutually exclusive possibilities must be assigned equal likelihood. But our division of possibilities into n cases, rather than some other number of cases, is an artefact of taxonomy. Perhaps one or more of the Doors is red and the remainder blue; our first division could then be between two possibilities, so that (under the principle of indifference) one Door would have an initial probability of 1/2 and each of the other two would have a probability of 1/4.
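
The taxonomy-dependence just described is easy to exhibit; the red/blue colouring is the hypothetical one from the paragraph above:

```python
# The principle of indifference yields different priors under different
# taxonomies of the same three Doors.
by_door   = {"Door 1": 1/3, "Door 2": 1/3, "Door 3": 1/3}

# First divide by colour (red v blue), then divide "blue" between its Doors:
by_colour = {"Door 1 (red)": 1/2, "Door 2 (blue)": 1/4, "Door 3 (blue)": 1/4}

# Both are legitimate "indifferent" assignments, yet they disagree.
assert abs(sum(by_door.values()) - 1) < 1e-9
assert abs(sum(by_colour.values()) - 1) < 1e-9
assert by_door["Door 1"] != by_colour["Door 1 (red)"]
```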

Other persons will propose that we have watched the game played many times, and observed that a car has with very nearly equal frequency appeared behind each of the three Doors. But, while that information might be helpful were we to play many times, I'm not aware of any real justification for treating frequencies as decision-theoretic weights in application to isolated events. You won't be on Monty's show to-morrow.

Indeed, if a guest player truly thought that the Doors initially represented equal expectations, then that player would be unable to choose amongst them, or even to delegate the choice (as the delegation has an expectation equal to that of each Door); indifference is a strange, limiting case. However, indecision — the aforementioned lack of ordering — allows the guest player to delegate the decision. So, either the Door was picked for the guest player (rather than by the guest player), or the guest player associated the chosen Door with a greater probability than either unchosen Door. That point might seem a mere quibble, but declaring that the guest player picked the Door is part of a rhetorical structure that surreptitiously and fallaciously commits the guest player to a positive judgment of prior probability. If there is no case for such commitment, then the paradox collapses.


Well, okay now, let's just beg the question, and say not only that you were assigned Door Number 1, but that for some mysterious reason you know that there is an equal probability of the car being behind each of the Doors. The host then opens Door Number 3, and there's a goat. The problem as stated does not explain why the host opened Door Number 3. The classical statement of the problem does not tell the reader what rule is being used by the host; the presentation tells us that the host knows what's behind the doors, but says nothing about whether or how he uses that knowledge. Hypothetically, he might always open a Door with a goat, or he might use some other rule, so that there were a possibility that he would open the Door with a car, leaving the guest player to select between two concealed goats.
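
The dependence on the host's rule can be made concrete. In this sketch, a host who always knowingly reveals a goat is compared with a host who opens an unchosen Door at random and merely happens to reveal a goat (both rules are hypothetical fillings of the gap in the problem statement):

```python
import random

def switch_win_rate(host_random: bool, trials: int = 200_000) -> float:
    """Win rate for a switcher, conditioned on the host revealing a goat."""
    wins = shown_goat = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_random:
            opened = random.choice(others)   # host ignores his knowledge
        else:
            opened = next(d for d in others if d != car)  # always shows a goat
        if opened == car:
            continue                         # car revealed; discard this trial
        shown_goat += 1
        final = next(d for d in range(3) if d not in (pick, opened))
        wins += (final == car)
    return wins / shown_goat

# Knowing-host rule: switching wins ~2/3.  Random-host rule, conditioned
# on a goat having been revealed: switching wins only ~1/2.
```

So even granting equal priors, the famous 2/3 answer holds only under one particular, unstated rule for the host.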

Nowhere in the statement of the problem are we told that you are the sole guest player. Something seems to go very wrong with the text-book answer if you are not. Imagine that there are many guest players, and that outcomes are duplicated in cases in which more than one guest player selects or is otherwise assigned the same Door. The host opens Door Number 3, and each of the guest players who were assigned that Door trudges away with a goat. As with the scenario in which only one guest player is imagined, more than one rule may govern this choice made by the host. Now, each guest player who was assigned Door Number 1 is permitted to change his or her assignment to Door Number 2, and each guest player who was assigned Door Number 2 is allowed to change his or her assignment to Door Number 1. (Some of you might recall that I proposed a scenario essentially of this sort in a 'blog entry for 1 April 2009.) Their situations appear to be symmetric, such that if one set of guest players should switch then so should the other; yet if one Door is the better choice for one group then it seems that it ought also to be the better for the other group.

The resolution is in understanding that the text-book solution silently assumed that the host were following a particular rule of selection, and that this rule were known to the guest player, whose up-dating of probabilities thus could be informed by that knowledge. But, in order for the text-book solution to be correct, all players must be targeted in the same manner by the response of the host. When there is only one guest player, it is possible for the host to observe rules that respond to all guest players in ways that are not possible when there are multiple guest players, unless they are somehow all assigned the same Door. It isn't even possible to do this for two sets of players each assigned different Doors.


Given the typical presentation of the problem, the typical statement of ostensible solution is wrong; it doesn't solve the problem that was given, and doesn't identify the problem that was actually solved.


[No goats were harmed in the writing of this entry.]

Strong Independence in Decision Theory

Thursday, 21 July 2016

In the course of some remarks on Subjective Probability by Richard C. Jeffrey, and later in defending a claim by Gary Stanley Becker, I have previously given some explanation of the model of expected-utility maximization and of axiomata of independence.

Models of expected-utility maximization are so intuïtively appealing to some people that they take one of these models to be peculiarly rational, and deviations from any such model thus to be irrational. I note that the author of a popular 'blog seems to have done just that, yester-day.[0]

My own work shows that quantities cannot be fitted to preferences, which pulls the rug from under expected-utility maximization, but there are other problems as well. The paradox that the 'blogger explores represents a violation of the strong independence axiom. What I want to do here is first to explain again expected-utility maximization, and then to show that the strong independence axiom violates rationality.


A mathematical expectation is what people often mean when they say average — a probability-weighted sum of measures of possible outcomes. For example, when a meteorologist gives an expected rainfall or an expected temperature for to-morrow, she isn't actually telling you to anticipate exactly that rainfall or exactly that temperature; she's telling you that, given observed conditions to-day, the probability distribution for to-morrow has a particular mean quantity of rain or a particular mean temperature.

The actual mathematics of expectation is easiest to explain in simple cases of gambling (which is just whence the modern, main-stream theories of probability itself arose). For example, let's say that we have a fair coin (with a 50% chance of heads and a 50% chance of tails); and that if it comes-up heads then you get $100, while if it comes-up tails then you get $1. The expected pay-out is .5 × $100 + .5 × $1 = $50.50. Now, let's say that another coin has a 25% chance of coming-up heads and a 75% chance of coming-up tails, and you'd get $150 for heads and $10 for tails. Its expected pay-out is .25 × $150 + .75 × $10 = $45. More complicated cases arise when there are more than two possible outcomes, but the basic formula is just prob(x1) × m(x1) + prob(x2) × m(x2) + … + prob(xn) × m(xn), where xi is the i-th possible outcome, prob(xi) is the probability of that i-th possible outcome, and m(xi) is some measure (mass, temperature, dollar-value, or whatever) of that outcome. In our coin-flipping examples, each expectation is of form prob(heads) × payout(heads) + prob(tails) × payout(tails).
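
The two coin examples can be computed directly from the basic formula:

```python
def expectation(outcomes):
    """prob(x1)*m(x1) + ... + prob(xn)*m(xn), for (probability, measure) pairs."""
    return sum(p * m for p, m in outcomes)

fair_coin   = [(0.50, 100), (0.50, 1)]    # $100 on heads, $1 on tails
biased_coin = [(0.25, 150), (0.75, 10)]   # $150 on heads, $10 on tails

print(expectation(fair_coin))    # 50.5
print(expectation(biased_coin))  # 45.0
```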

One of the numerical examples of coin-flips offered both a higher maximum pay-out ($150 v $100) and a higher minimum pay-out ($10 v $1) yet a lower expected pay-out ($45 v $50.50). Most people will look at this, and decide that the expected pay-out should be the determining factor, though it's harder than many people reälize to make the case.

With monetary pay-outs, there is a temptation to use the monetary unit as the measure in computing the expectation by which we choose. But the actual usefulness of money isn't constant. We have various priorities; and, when possible, we take care of the things of greatest priority before we take care of things of lower priority. So, typically, if we get more money, it goes to things of lower priority than did the money that we already had. The next dollar isn't usually as valuable to us as any one of the dollars that we already had. Thus, a pay-out of $1 million shouldn't be a thousand times as valuable as a pay-out of $1000, especially if we keep in-mind a context in which a pay-out will be on top of whatever we already have in life. So, if we're making our decisions based upon some sort of mathematical expectation then, instead of computing an expected monetary value, we really want an expected usefulness value, prob(x1) × u(x1) + prob(x2) × u(x2) + … + prob(xn) × u(xn), where u() is a function giving a measure of usefulness. This u is the main-stream notion of utility, though sadly it should be noted that most main-stream economists have quite lost sight of the point that utility as they imagine it is just a special case of usefulness.

A model of expected-utility maximization is one that takes each possible action aj, associates it with a set of probabilities {prob(x1|aj), prob(x2|aj), …, prob(xn|aj)} (the probabilities now explicitly noted as conditioned upon the choice of action), and asserts that we should choose an action ak which gives us an expected utility prob(x1|ak) × u(x1) + prob(x2|ak) × u(x2) + … + prob(xn|ak) × u(xn) as high as or higher than that of any other action.
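
As a sketch of that model — the square-root usefulness measure and the pay-out numbers here are purely illustrative assumptions, chosen to exhibit diminishing marginal usefulness:

```python
import math

outcomes = [0, 2_500, 10_000]                  # possible dollar pay-outs
u        = [math.sqrt(x) for x in outcomes]    # a stand-in usefulness measure

actions = {                                    # prob(x_i | a_j) for each action
    "sure_thing": [0.0, 1.0, 0.0],
    "gamble":     [0.6, 0.0, 0.4],
}

def expected_utility(probs):
    return sum(p * ui for p, ui in zip(probs, u))

best = max(actions, key=lambda a: expected_utility(actions[a]))
# Expected money favors the gamble ($4000 v $2500), but with this
# diminishing usefulness the model picks "sure_thing" (50 v 40 sqrt-units).
```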

If there is a non-monetary measure of usefulness in the case of monetary pay-outs, then there is no evident reason that there should not be such a measure in the case of non-monetary pay-outs. (And, likewise, if there is no such measure in the case of non-monetary pay-outs, there is no reason to suppose one in the case of monetary pay-outs, where we have seen that the monetary pay-out isn't really a proper measure.) The main-stream of economic theory runs with that; its model of decision-making is expected-utility maximization.

The model does not require that people have a conscious measure of usefulness, and certainly does not require that they have a conscious process for arriving at such a measure; it can be taken as a model of the gut. And employment of the model doesn't mean that the economist believes that it is literally true; economists across many schools-of-thought regard idealizations of various sorts as approximations sufficient for their purposes. It is only lesser economists who do so incautiously and without regard to problems of scale.


But, while expected-utility maximization may certainly be regarded as an idealization, it should not be mistaken for an idealization of peculiar rationality nor even for an idealization of rationality of just one variety amongst many. Expected-utility maximization is not rational even if we grant — as I would not — that there is some quantification that can be fitted to our priorities.

Expected-utility maximization entails a proposition that the relevant expectation is of potential outcomes which are taken themselves to be no better or worse for being more or less probable. That is to say that what would be the reälized value of an outcome is the measure of the outcome to be used in the computation of the expectation; the expectation is simply lineär in the probabilities. This feature of the model follows from what is known as the strong independence axiom, so called because Paul Anthony Samuelson, having noticed it, conceptualized it as an axiom. It and propositions suggested to serve in its stead as an axiom (thus rendering it a theorem) have been challenged in various ways. I will not here survey the challenges.

However, the first problem that I saw with expected-utility maximization was with that lineärity, in-so-far as it implies that people do not benefit from the experience of selecting amongst discernible non-trivial lotteries as such.[1]

Good comes from engaging in some gambles as such, exactly because gambling more generally is unavoidable. We need practice to gamble properly, and practice to stay in cognitive shape for gambling. Even if we get that practice without seeking it, in the course of engaging in our everyday gambles, there is still value to that practice as such. A gamble may become more valuable as a result of the probability of the best outcome being made less probable, and less valuable as a result of the best outcome becoming more certain. The value of lotteries is not lineär in their probabilities!

It might be objected that this value is only associated with our cognitive limitations, which limitations it might be argued represented a sort of irrationality. But we only compound the irrationality if we avoid remedial activity because under other circumstances it would not have done us good. Nor do I see that we should any more accept that a person who needs cognitive exercise to stay in cognitive shape is thus out of cognitive shape than we would say that someone who needs physical exercise to stay in physical shape is thus out of physical shape.


[0 (2016:07/22)] Very quickly, in a brief exchange, he saw the error, and he's corrected his entry; so I've removed the link and identification here.

[1] When I speak or write of lotteries or of gambling, I'm not confining myself to those cases for which lay-people normally use those terms, but applying to situations in which one is confronted by a choice of actions, and various outcomes (albeït some perhaps quite impossible) may be imagined; things to which the terms lottery and gamble are more usually applied are simply special cases of this general idea. A trivial lottery is one that most people would especially not think to be a lottery or gamble at all, because the only probabilities are either 0 or 1; a non-trivial lottery involves outcomes with probabilities in between those two. Of course, in real life there are few if any perfectly trivial lotteries, but a lot of things are close enough that people imagine them as having no risk or uncertainty; that's why I refer to discernible non-trivial lotteries, which people see as involving risk or uncertainty.

Just a Note

Thursday, 12 June 2014

Years ago, I planned to write a paper on decision-making under uncertainty when possible outcomes were completely ordered neither by desirability nor by plausibility.

On the way to writing that paper, I was impressed by Mark Machina with the need for a paper that would explain how an incompleteness of preferences would operationalize, so I wrote that article before exploring the logic of the dual incompleteness that interested me.

Returning to the previously planned paper, I did not find existing work on qualitative probability that was adequate to my purposes, so I began trying to formulate just that as a part of the paper, and found that the work was growing large and cumbersome. I have enough trouble getting my hyper-modernistic work read without delivering it in large quantities! So I began developing a paper concerned only with qualitative probability as such.

In the course of writing that spin-off paper, I noticed that a rather well-established proposition concerning the axiomata of probability contains an unnecessary restriction; and that, over the course of more than 80 years, the proposition has repeatedly been discussed without the excessiveness of the restriction being noted. Yet it's one of those points that will be taken as obvious once it has been made. I originally planned to note that dispensability in the paper on qualitative probability, but I have to be concerned about increasing clutter in that paper. Yester-day, I decided to write a note — a very brief paper — that draws attention to the needlessness of the restriction. The note didn't take very long to write; I spent more time with the process of submission than with that of writing.

So, yes, a spin-off of a spin-off; but at least it is spun-off, instead of being one more thing pending. Meanwhile, as well as there now being three papers developed or being developed prior to that originally planned, I long ago saw that the original paper ought to have at least two sequels. If I complete the whole project, what was to be one paper will have become at least six.

The note has been submitted to a journal of logic, rather than of economics; likewise, I plan to submit the paper on qualitative probability to such a journal. While economics draws upon theories of probability, work that does not itself go beyond such theories would not typically be seen as economics. The body of the note just submitted is only about a hundred words and three formulæ. On top of the usual reasons for not knowing whether a paper will be accepted, a problem in this case is exactly that the point made by the paper will seem obvious, in spite of being repeatedly overlooked.

As to the remainder of the paper on qualitative probability, I'm working to get its axiomata into a presentable state. At present, it has more of them than I'd like.

Notions of Probability

Wednesday, 26 March 2014

I've previously touched on the matter of there being markèdly differing notions all associated with the word probability. Various attempts have been made by various writers to catalogue and to coördinate these notions; this will be one of my own attempts.

[an attempt to discuss conceptions of probability]

Decision-Time for the Donkey

Monday, 6 May 2013

Yester-day, I finished reading the 1969 version of Choice without Preference: A Study of the History and of the Logic of the Problem of Buridan's Ass by Nicholas Rescher, which version appears in his Essays in Philosophical Analysis. An earlier version appeared in Kantstudien volume 51 (1959/60), and some version has or versions have appeared in later collections. I have only read the 1969 version, and some of the objections that I raise here may have been addressed by a revision.

The problem of Buridan's ass may not be familiar by name to all of my readers, but I imagine that all of them have encountered some form of it. A creature is given a choice between two options neither of which seems more desirable than the other. The question then is of how, if at all, the creature can make a choice. In the classical presentation, the creature is a donkey or some other member of the sub-genus Asinus of Equus, the choice is between food sources, and a failure to make a choice will result in death by starvation. The problem was not first presented by the Fourteenth-Century cleric and philosopher Jean Buridan, but it has come to be associated with his name. (Unsurprisingly, my paper on indifference and indecision makes mention of Buridan's ass.)

Rescher explores the history of the problem, in terms of the forms that it took, the ultimate purposes for which a principle were sought from its consideration, and the principles that were claimed to be found. Then he presents his own ostensible resolution, and examines how that might be applied to those ultimate purposes.

One of the immediate problems that I have with the essay is that nowhere does Rescher actually define what he means by preference. I feel this absence most keenly when Rescher objects that there is no preference where some author and I think there to be a preference.

As it happens, in my paper on indifference and indecision, I actually gave a definition of strict preference: (X1 pref X2) = ~[{X2} subset C({X1,X2})] which is to say that X1 is strictly preferred to X2 if X2 is not in the choice made from the two of them.[1] So, in that paper, strict preference really just refers to a pattern of choice. I didn't in fact define choice, and I'll return to that issue later.
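
The definition above can be sketched as a minimal check in code; the scoring choice function here is purely hypothetical:

```python
def pref(x1, x2, choice) -> bool:
    """x1 is strictly preferred to x2 iff x2 is not in the choice
    made from {x1, x2} — preference as a pattern of choice."""
    return x2 not in choice(frozenset({x1, x2}))

# A hypothetical choice function selecting the greatest element:
choice = lambda s: {max(s)}

assert pref(3, 1, choice)         # 1 is excluded from the choice
assert not pref(1, 3, choice)
assert not pref(2, 2, choice)     # nothing strictly preferred to itself
```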

The Merriam-Webster Dictionary essentially identifies preference as a gerund of prefer, and offers two potentially relevant definitions of prefer:

  1. to promote or advance to a rank or position
  2. to like better or best
The first seems to be a description of selection as such. The second might be taken to mean something more. But when I look at the definition of like, I'm still wondering what sense I might make of it other than an inclination to choose.

I'm not claiming that Rescher is necessarily caught-up in an illusion. Rather, I'm claiming, first, that he hasn't explained something that is both essential to his position and far from evident; and, second, that his criticism of some authors is based upon confusing their definitions with his own.

When I used the notion of a choice function C( ) in my paper, my conception of choice was no more than one of selection, and that's what I was unconsciously taking Rescher to mean until, towards the end of his essay, speaking of decisions made by flips of coins (and the like), he writes

In either event, we can be said to have "made a choice" purely by courtesy. It would be more rigorously correct to say that we have effected a selection.

Well, no. This isn't a matter of rigor, whatever it might be. The word choice can rigorously refer to selection of any sort. It can also refer to selection with some sort of care, which seems to be what he had in mind.

Some of the authors whom Rescher cites, and Rescher himself, assert that when a choice is to be made in the face of indifference, it may be done by random means. Indeed, Rescher argues that it must be done by such means. But he waits rather a long time before he provides any explicit definition of what he means by random, and he involves two notions without explaining why one must invoke the other, and indeed seemingly without seeing that he would involve two distinct notions. When he finally gives an explicit notion, it is to characterize a choice to be made as random when there is equal weight of evidence in favor of each option. However, when earlier writing of the device by which the selection is to be made, he insists

The randomness of any selection process is a matter which, in cases of importance, shall be checked by empirical means.

Now, one does not test the previously mentioned equal weight of the evidence by empirical means. An empirical test, instead, adds to the fund of evidence. We can judge the weight of the present evidence about the selection device by examining just that present evidence. The options are characterized by equal plausibility, yet Rescher has insisted that the selection device must instead be characterized by equal propensity. It isn't clear why the device can't simply also be characterized by equal plausibility.[2]

Rescher makes a somewhat naïve claim just before that insistence on empirical testing. For less critical choices, he declares

This randomizing instrument may, however, be the human mind, since men are capable of making arbitrary selections, with respect to which they can be adequately certain in their own mind that the choice was made haphazardly, and without any reasons whatsoever. This process is, it is true, open to possible intrusions of unrecognized biases, but then so are physical randomizers such as coins.

Actually, empirical testing of attempts by people to generate random numbers internally shows very marked biases, such that it's fairly easy to find much less predictable physical selectors.

Rescher's confusion of notions of randomness is entangled with a confounding taxonomy of choice which is perhaps the biggest problem with Rescher's analysis. The options that he allows are

  1. decision paralysis
  2. selection favoring the first option
  3. selection favoring the second option
  4. random selection, in which random entails a lack of bias
And, proceeding thence, he seems to confuse utterly the notion that choice without some preference somewhere is impossible with the notion that choice without some preference somewhere is unreasonable. In any case, Rescher insists that only the last of these modes of selection is reasonable, and this insistence would tell Buridan's ass that it must starve unless it can find a perfectly unbiased coin![3] Reason would be a harsher mistress than I take her to be!

Another term that Rescher uses without definition is fair and its coördinates, as when he writes

Random selection, it is clear, constitutes the sole wholly satisfactory manner of resolving exclusive choice between equivalent claims in a wholly fair and unobjectionable manner.

I certainly don't see that random selection should be seen as wholly satisfactory (though I believe it to often be the least unsatisfying manner), and I don't know what Rescher imagines by fair. My experience is that when the word fair is used, it is typically for something more appealing than justice to those inclined to envy. In the case of allotments by coin-flip, there may be no motivation for envy ex ante, but things will be different ex post. People do a great deal of railing against the ostensible unfairness of their luck or of that of another.

I recall one final objection, which moves us quite out of the realm of economics, but which I have none-the-less. One of the applications of these questions of choice without preference (or, at least, without preference except stemming from meta-preference) has been to choices made by G_d. In looking at these problems, Rescher insists that G_d's knowledge must be timeless; I think that he ought to allow for the possibility that it were not.


[1] That might seem an awkward way of saying that X1 is strictly preferred to X2 if only X1 is in the choice from the two of them, but it actually made the proofs less awkward to define strict preference in this odd manner.

[2] Even if one insists that the selection device must be characterized by equal propensity, there is in fact little need for empirical testing, if one accepts the presumptions that a coin may be considered to have unchanging bias and that flips of a coin may be independent one from another. Implicitly making these assumptions, my father proposes a method for the construction of a coin where the chances of heads and of tails would be exactly equal. One starts with an ordinary coin; it comes-up heads sometimes, and tails others. Its bias is unknown; at best approximated. But, whatever the bias may be, says my father, in any pair of flips, the chances of heads-followed-by-tails are exactly equal to the chances of tails-followed-by-heads. So a pair of flips of the ordinary coin that comes-up heads-tails is heads for the constructed coin; a pair of flips of the ordinary coin that comes-up tails-heads is tails for the constructed coin; any other pair for the ordinary coin (heads-heads, tails-tails, or one or both flips on edge) is discarded.
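
The construction in the foot-note — what is elsewhere known as von Neumann's debiasing trick — can be sketched as follows; the 80% bias given to the ordinary coin is an arbitrary illustration:

```python
import random

def fair_flip(biased_flip) -> str:
    """Flip the biased coin in pairs: heads-tails counts as heads for the
    constructed coin, tails-heads as tails; any other pair is discarded."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a   # 'H' for heads-tails, 'T' for tails-heads

biased = lambda: 'H' if random.random() < 0.8 else 'T'   # unknown bias
flips = [fair_flip(biased) for _ in range(100_000)]
# Whatever the bias, P(heads-tails) = P(tails-heads), so the constructed
# coin comes-up heads with frequency very near 0.5.
```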

[3] I don't know that my father could explain his solution to a donkey. I've had trouble explaining it to human beings.

Quantifying Evidence

Friday, 12 August 2011
The only novel thing [in the Dark Ages] concerning probability is the following remarkable text, which appears in the False Decretals, an influential mixture of old papal letters, quotations taken out of context, and outright forgeries put together somewhere in Western Europe about 850. The passage itself may be much older. A bishop should not be condemned except with seventy-two witnesses … a cardinal priest should not be condemned except with forty-four witnesses, a cardinal deacon of the city of Rome without thirty-six witnesses, a subdeacon, acolyte, exorcist, lector, or doorkeeper except with seven witnesses.⁹ It is the world's first quantitative theory of probability. Which shows why being quantitative about probability is not necessarily a good thing.
James Franklin
The Science of Conjecture: Evidence and Probability before Pascal
Chapter 2

(Actually, there is some evidence that a quantitative theory of probability developed and then disappeared in ancient India.[10] But Franklin's essential point here is none-the-less well-taken.)


⁹ Foot-note in the original, citing Decretales Pseudo-Isidorianae, et Capitula Angilramni edited by Paul Hinschius, and recommending comparison with The Collection in Seventy-Four Titles: A Canon Law Manual of the Gregorian Reform edited by John Gilchrist.

[10] In The Story of Nala and Damayanti within the Mahābhārata, there is a character Rtuparna (aka Rituparna, and mistakenly as Rtupama and as Ritupama) who seems to have a marvelous understanding of sampling and is a master of dice-play. I learned about Rtuparna by way of Ian Hacking's outstanding The Emergence of Probability; Hacking seems to have learned of it by way of V.P. Godambe, who noted the apparent implication in A historical perspective of the recent developments in the theory of sampling from actual populations, Journal of the Indian Society of Agricultural Statistics v. 38 #1 (Apr 1976) pp 1-12.

A Well-Expressed Thought

Saturday, 30 April 2011
But to assume from the superiority of Galilean principles in the sciences of inanimate nature that they must provide the model for the sciences of animate behaviour is to make a speculative leap, not to enunciate a necessary conclusion.
Charles Taylor
The Explanation of Behaviour
Pt I Ch I § 4
terminal sentence

Symbols for Preference Relations

Tuesday, 5 April 2011

Since some of the recent visits to this 'blog are by way of search strings containing preference symbol, I put together a table of characters frequently used to represent preference relations. Click on the graphic [detail of screen-shot of PDF file] for a PDF file providing symbols, their interpretation, their Unicode values in hexadecimal and in decimal, the names given to these symbols by the Unicode Consortium, and the LaTeX mark-up that one would enter for each of the symbols.
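For readers without the PDF to hand, a few of the columns described above can be generated directly from the Unicode character database. The three symbols chosen here are my own small sample, not the full table, and the mark-up shown is the amssymb command commonly used for each.

```python
import unicodedata

# A sample of succession-style preference symbols: codepoint (hex and
# decimal), the glyph, the Unicode Consortium's name, and common LaTeX mark-up.
symbols = [
    ("\u227B", r"\succ"),        # strict preference
    ("\u227D", r"\succcurlyeq"), # weak preference
    ("\u227F", r"\succsim"),     # preference-or-indifference (tilde variant)
]
for ch, latex in symbols:
    print(f"U+{ord(ch):04X} ({ord(ch)})  {ch}  {unicodedata.name(ch)}  {latex}")
```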

The Better Claim

Saturday, 19 March 2011

Whether a decision as such is good or bad is never determined by its actual consequences as such.

Decisions are made before their consequences are reälized (made actual). Instead, decisions are made in the face of possible consequences. There may be an ordering of these consequences in terms of plausibility, in which case that ordering should be incorporated into the making of the decision. Most theories even presume that levels of plausibility may be meaningfully quantified, in which case (ex hypothesi) these quantifications should be incorporated into the process. But even in a case where there were only one outcome possible, while the decision could (and should) be made in response to that unique possibility, it still were the possibility of the consequence that informed the decision, and not actuality. (Inevitability is not actuality.)

When the reälized consequences of a decision are undesirable, many people will assert or believe that whoever made the choice (perhaps they themselves) should have done something different. Well, it might be that a bad outcome illustrates that a decision were poor, but that will only be true if the inappropriateness of the decision could have been seen without the illustration. For example, if someone failed to see a possibility as such, then its reälization will show the possibility, but there had to have been some failure of reasoning for a possibility to have ever been deemed impossible. On the other hand, if someone deemed something to be highly unlikely, yet it occurred anyway, that doesn't prove that it were more likely than he or she had thought — in a world with an enormous number of events, many highly unlikely things happen. If an event were highly unlikely but its consequences were so dire that they should have been factored into the decision, and yet were not, the reälization of the event might bring that to one's attention; but, again, that could have been seen without the event actually occurring. The decision was good or bad before its consequences were reälized.
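The observation that, in a world with an enormous number of events, many highly unlikely things happen can be put in numbers. The figures below are purely illustrative, not drawn from the post.

```python
# Probability that at least one of n independent rare events occurs,
# computed as the complement of none of them occurring.
p = 1e-4       # chance of any single rare event (illustrative)
n = 100_000    # number of independent opportunities (illustrative)
p_at_least_one = 1 - (1 - p) ** n
print(p_at_least_one)  # very nearly certain, despite each event being rare
```

So the reälization of some individually improbable event is no evidence, by itself, that its probability was misjudged.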

A painter whose canvas is improved by the hand of another is not a better painter for this, and one whose work is slashed by a madman (other than perhaps himself) is not a worse painter for that. Likewise, choosing well is simply not the same thing as being lucky in one's choice, and choosing badly not the same as being unlucky.

Sometimes people say that this-or-that should have been chosen simply as an expression of the wish that more information had been available; in other cases, they are really declaring a change in future policy based upon experience and its new information. In either case, the form of expression is misleading.

Some readers may be thinking that what I'm saying here is obvious (and some of these may have abandoned reading this entry). But people fail to take reasonable risks because they fear that they will be thought fools should they be unlucky; some have responded to me as if I were being absurd when I've referred to something as a good idea that didn't work; our culture treats people who attempt heinous acts but fail at them as somehow less wicked than those who succeed at them; and I was drawn to thinking about this matter to-day in considering the debate between those who defend a consequentialist ethics and those who defend a deöntological ethics, and the amount of confusion on this issue of the rôle of consequences in decision-making (especially on the side of the self-identified consequentialists) that underlies that debate.