## Theories of Probability — Perfectly Fair and Perfectly Awful

I’ve not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were *impossibly good*; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were *implausibly bad*.

The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word *probability* and its cognates from the Latin *probabilitas* meant *plausibility*.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a *perfect symmetry* (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be *no more* probable is to be *equally* probable, then a natural quantification arises.
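That quantification — probability as the count of favourable cases over the count of equally likely cases — can be sketched in a few lines. This is an illustrative reconstruction of the classical model for two symmetric dice, not anything drawn from Fermat’s or Pascal’s correspondence:

```python
from fractions import Fraction
from itertools import product

# Classical model: each face of a perfectly symmetric die is taken to be
# exactly as plausible as any other, so every ordered pair of faces for
# two dice is one of 36 equally probable cases.
outcomes = list(product(range(1, 7), repeat=2))

def classical_probability(event):
    """Probability as favourable cases over equally likely cases."""
    favourable = [o for o in outcomes if event(o)]
    return Fraction(len(favourable), len(outcomes))

# Six of the 36 cases sum to seven, so the probability is 1/6.
print(classical_probability(lambda o: o[0] + o[1] == 7))  # 1/6
```

The use of `Fraction` keeps the ratios exact, as the classical theorists would have reckoned them.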

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn’t produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure *that consideration of these idealized cases had revealed the proper mathematics for use across all cases*. Some were *so* sure of that mathematics that they inferred that it *must* be possible to describe the world in terms of cases that *were* somehow equally likely, without prior investigation positively *revealing* them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon *experience* to understand what it *means* for a die to be more or less *balanced*; likewise for other devices.)

As subjectivists have it, to say that one thing is more *probable* than another is to say that that first thing is more *believed* than is the other. (GLS Shackle proposed that the probability of something might be measured by how *surprised* one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if `X` is more probable than `Y` and `Y` more probable than `Z`, then `X` must be more probable than `Z`. And the *Bayesian* subjectivists make a particular demand for what they call *coherence*. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as *gambling ratios* without an opponent finding some combination of gambles with those ratios that would *guarantee* that one suffered a net loss. Such a combination is known as a *Dutch book*.
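A minimal sketch of how incoherent quantifications invite a Dutch book — the outcomes, beliefs, and stake here are hypothetical, chosen only to make the guaranteed loss visible:

```python
from fractions import Fraction

# Hypothetical incoherent beliefs: two mutually exclusive, exhaustive
# outcomes whose quantifications sum to more than 1.
beliefs = {"rain": Fraction(3, 5), "no rain": Fraction(3, 5)}

stake = Fraction(1)  # each bet pays this stake if its outcome occurs

def net_result(actual_outcome):
    """Net gain for a bettor who, using beliefs as gambling ratios,
    buys a unit bet on every outcome; only one bet pays off."""
    cost = sum(belief * stake for belief in beliefs.values())
    payoff = stake  # exactly one of the exhaustive outcomes occurs
    return payoff - cost

# Whichever outcome occurs, the bettor is out 1/5: a Dutch book.
for outcome in beliefs:
    print(outcome, net_result(outcome))  # each prints -1/5
```

Because the quoted ratios sum to 6/5 while the winnings can never exceed the unit stake, the loss is guaranteed before either outcome is known — which is exactly what coherence is meant to rule out.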

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some *other* means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would *need* a preëstablished set of quantifications or even could *justify committing* to such a set. (It is also not clear why, if one’s beliefs correspond to measures, one may not use *different* measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would *attempt* to get one to agree to a Dutch book. (I don’t believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure *that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases*.

The *impossible goodness* imagined by the classical theorists was of a different aspect than is the *implausible badness* of the Bayesian subjectivists. A fair coin is not a *friendly* coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy’s Law.

Tags: plausibility, probability, subjectivism
