## Notions of Probability

26 March 2014

I've previously touched on the matter of there being markèdly differing notions all associated with the word *probability*. Various attempts have been made by various writers to catalogue and to coördinate these notions; this will be one of my own attempts.

The word *probable* and cognates used to carry the notion of *plausible*. And, with just one or two exceptions, it was not *arithmetized* until about 1660.[1] That arithmetization was developed to address a problem of dicing,[2] but a broad range of activities (eg insurance) could be seen as or believed to be essentially similar.

The original arithmetization was essentially combinatorial; in each of the cases, it was assumed or presumed that underlying possible outcomes of *equal likelihood* could be found (as with a *fair* coin or *unbiased* die), and all further probabilities computed therefrom. There are a couple of sources for discomfort here. First, there is the problem of *identifying* equally likely cases. A recurring proposal is to use what is sometimes called the *law of indifference*, and sometimes called the *principle of insufficient reason*; it says that if one has possible outcomes with no reason to think any one of them more probable than any other, then each should be assigned equal probability. The problem with this alleged law is that probabilities then become artefacts of taxonomy. If you categorize things one way, then you get one set of results; if you categorize them another, then you get a different set of results. Second, there is the issue that the question of the essence of probability is not answered by telling us that probabilities are computed from cases of equal probability (nor by describing probability in terms of an undefined *likelihood*). None-the-less, an awful lot of pedagogy to this day continues to teach probability in terms of combinatoric manipulation of equally likely cases.
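The taxonomy-dependence can be made concrete with a standard puzzle, sketched here in Python (the puzzle and the numbers are my illustration, not taken from the text above): a factory produces cubes with edge length of at most 2 cm. Applying the alleged law to *edge length* assigns probability 1/2 to "edge at most 1 cm"; applying it instead to *volume*, which ranges up to 8 cm³, assigns probability 1/8 to the very same event, redescribed as "volume at most 1 cm³".

```python
# Hypothetical illustration of the "cube factory" puzzle: the principle of
# insufficient reason gives different probabilities to the SAME event,
# depending on how the space of possibilities is parameterized.

# Indifference over EDGE LENGTH: uniform on (0, 2] cm.
# "Edge <= 1 cm" covers half of the admissible edge lengths.
p_by_edge = 1.0 / 2.0

# Indifference over VOLUME: uniform on (0, 8] cm^3.
# "Volume <= 1 cm^3" is the very same event as "edge <= 1 cm",
# yet it covers only one eighth of the admissible volumes.
p_by_volume = 1.0 / 8.0

print(p_by_edge, p_by_volume)  # 0.5 0.125
```

The two parameterizations are equally natural descriptions of the factory's output, which is exactly why the resulting probabilities look like artefacts of taxonomy rather than facts about the cubes.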

**Logicism** and **subjectivism** are two important notions of probability that are often confused. They start with similar foundations. A logicist sees any probability as an objective relationship between some body of evidence and the believability of some proposition; but data-sets themselves can be privately held, so that potentially no two individuals would use the same probabilities, though each is perfectly rational. A subjectivist sees probability as a ranking or measure of how much a proposition is believed; but subjectivists insist that these rankings or measures should not be formed contrary to principles of effective decision-making. When it comes to *qualitative* probability — ranking without quantifications[3] — the subjectivists look very much like the logicists; both the logicists and the subjectivists believe rules such as that if `X` is more probable than `Y`, and `Y` more probable than `Z`, then `X` is more probable than `Z`.[4] [Edit (2014:04:19): Because logicism treats probability as a relationship between evidence and propositions, technically logicism must regard *all* probabilities as *conditional*, whereäs subjectivists typically begin with *un*conditional probabilities; however, unconditional probabilities could be reïmagined as conditioned on tautologies.]

When pushing onward from qualitative probability to quantitative probability, subjectivism becomes more discernibly distinct from logicism. Logicists have been more apt than have been subjectivists to allow that qualitative probabilities may not represent a complete preördering, so that, for some pairs of outcomes, a person is neither able to say that one is the more likely, nor that the two are *equally* likely; and this incompleteness would preclude assigning a particular quantity to the probability of some events. And, when quantities *may* be fitted, subjectivists are more apt to interpret *chosen* quantities as *measures* of belief.

Some subjectivists have proposed to proceed to quantification by way of **betting quotients**. Imagine a gamble in which a bettor receives (1 - `q`)·`S` if an outcome is realized, and -`q`·`S` if the outcome is not realized; and imagine that the bettor gets to choose the `q`, which is called the *betting quotient*, but `S` will be chosen after this, by whoever gambles against the chooser of `q`. Obviously, if the bettor is rational, then the bet will *conform* to his-or-her beliefs concerning the *qualitative* probability of the possible outcome (in relation to its opposite). Further, if the bettor is compelled to *commit* to betting quotients across some *set* of potential gambles, and is not careful in choosing them, then he or she could be guaranteed to lose money to anyone else who chose the right subset of gambles; the mathematical rules for choosing betting quotients under which that guarantee won't hold are exactly of the sort that are normally said to apply to quantitative probabilities. (It has been objected that the aforementioned *commitment* is contrived; people don't normally have to choose the equivalent of a full set of betting quotients in advance.[5])
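The guaranteed-loss mechanism can be sketched numerically (the function names and the particular quotients below are my illustration, not from the text): suppose a bettor posts quotients on an outcome and on its opposite that sum to more than 1. An opponent who stakes equal positive amounts on both gambles then guarantees the bettor a loss whichever way the outcome goes.

```python
# Sketch of a "Dutch book" against incoherent betting quotients.
# On each gamble the bettor receives (1 - q)*S if the outcome is realized,
# and -q*S if it is not; the opponent chooses the stake S.

def bettor_payoff(q, S, realized):
    """Bettor's payoff on one gamble with quotient q and stake S."""
    return (1.0 - q) * S if realized else -q * S

def total_payoff(q_A, q_not_A, S_A, S_not_A, A_happens):
    """Bettor's combined payoff on the paired gambles, on A and on not-A."""
    return (bettor_payoff(q_A, S_A, A_happens)
            + bettor_payoff(q_not_A, S_not_A, not A_happens))

# Incoherent quotients: they sum to 1.2, exceeding 1.
q_A, q_not_A = 0.7, 0.5
S = 100.0  # opponent stakes the same positive amount on both gambles

loss_if_A = total_payoff(q_A, q_not_A, S, S, True)
loss_if_not_A = total_payoff(q_A, q_not_A, S, S, False)
print(loss_if_A, loss_if_not_A)  # negative (about -20) in both cases
```

With equal stakes the bettor's sure loss is (`q_A` + `q_not_A` - 1)·`S`, so quotients summing to exactly 1 close off this particular trap; that additivity is just the sort of rule normally said to govern quantitative probabilities.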

But I insist that *betting quotients are the map, and the map is not the country*. That is to say that betting quotients must conform to qualitative probabilities, and thus *proxy* qualitative probability, but they may be *nothing more* than proxies; their distinctly quantitative aspect may be no more than *an artefact of the person's being somehow forced to place quantitative bets*. And not only do betting quotients fail, definitively, to do more than proxy purely qualitative relationships of belief; they would proxy purely qualitative logicistic relationships just as well! Some who see themselves as subjectivists do so on the grounds that the logico-mathematical principles of their doctrines do not uniquely determine probabilities, and people are left free to choose any betting quotients that conform; but it's only by identifying these quotients as somehow having an *additional content of belief* that a subjectivism is introduced here.

One should not be greatly exercised over whether probability is an attribute of propositions or of events; any proposition corresponds to the event that the proposition is true, and any event corresponds to the proposition that the event has occurred. But it's worth noting that logicism and subjectivism typically conceptualize probability as a characteristic of *propositions*; however, bets tend to be conceptualized in terms of *events*. And other efforts to give fundamental meaning to the word *probability* are generally cast in terms of events.

**Frequentism** is most often expressed in terms of events. Most people, whatever they might mean by *probability*, would agree that, across cases, the proposition or event that *had been* most probable is most often the thing that *did* in fact prove true or happen. Frequentism proposes to *define* probability exactly as *frequency*.

A couple of real difficulties may be seen. First, there is the problem of identifying frequencies. We learn frequencies by observation, but a frequency of occurrence over some observations may not be maintained over additional observations. One is compelled, then, to think and to speak of *probable* frequencies, and a dire regress results if that probability is itself to be a frequency. In some cases, we face uncertainty where, as a practical matter, we have *no* observations. For example, it is neither perfectly certain nor impossible that Russia will invade those parts of the Ukraine that it does not already occupy; the relevant configuration is meaningfully different from every previous situation in which an invasion were possible; and decomposition into components for which we have observations of any length would be intractably complex even if possible; the frequentist must tell us *No ~~soup~~ probability for you!* and we are left without a rational means of making decisions. Second, frequentism is in sore need of a justification for attempting to use frequencies that describe some population in *making choices* concerning an as yet unobserved population of smaller size, even when the smaller population is a subset of the larger. Emphatically stamping one's foot and declaring that it is *obvious* that one should weight decisions concerning *single* cases based upon frequencies for large populations does not make it so.

Most of those who subscribe to alternative interpretations of probability do agree that the probable is what most often happens, that probabilities (or estimates of probabilities) should be affected by observed frequencies, and that the mathematics of frequencies tell us something about the mathematics of probabilities.

A **propensity** interpretation considers a probability to be a relationship between some two events or between an object and an event such that a *law* associates the production of one by or from the other. At one extreme would be simple causation; at the other would be perfect preclusion; in between would be the rest of the probabilities. One can see then that a propensity interpretation treats probability as a generalization of *causation* or as the inverse thereof. Probabilities interpreted as propensities are most often expressed as attributes of events.

It is often noted that the logicist interpretation can be conceptualized as attempting to generalize from traditional logic (in which all propositions are either true or false) to a logic in which *known* truths and *known* falsehoods are extremes of probability, with other probabilities for *intermediate* states. Logic and causality are more intimately related than is ordinarily recognized,[6] and I have my doubts as to whether a propensity interpretation is not ultimately a logicist interpretation, with the representation in terms of events disguising the underlying identity.

Before I quit this entry, I'd like to return to the issue of what was *meant* by *probability* in the **classical** era of combinatorial manipulation and a presumption of underlying equally likely cases. This question has been mulled and debated, and I think that any simple answer is almost surely false. But I'm inclined to see the classical probabilists as, for the most part, naïve logicists. My inclination is informed by their adoption of the word *probability*, which had a prior meaning of plausibility and which continued to carry that meaning for a long time; they did not typically distinguish their subject from that of plausibility. A failure to define *probability* is more easily explained if they believed themselves to be simply using a *conception* of a preëxisting *concept*. I don't see that subscribing to a propensity interpretation would actually put them in a different camp. As I've noted, almost everyone agrees that there is a relationship between frequency of occurrence and whatever *probability* might mean, so working with frequencies as such isn't much evidence for being frequentists. And the presumption of underlying equally likely cases undermines any attempt to read classical probabilists as subjectivists.

Important logicists include George Boole, Johannes von Kries, William Ernest Johnson, John Maynard Keynes, Harold Jeffreys, Bernard Osgood Koopman, and Carl Gustav Hempel. Modern subjectivism in probability theory began with Frank Plumpton Ramsey; other important subjectivists include Bruno de Finetti, Leonard Jimmie Savage, and Robert Osher Schlaifer. Richard Charles Jeffrey counted himself as a subjectivist and is generally considered such by others, but an unacknowledged logicism tended to creep into his work when he confronted some of the peculiar difficulties of subjectivism. Frequentism is perhaps first clearly seen in the work of Robert Leslie Ellis; other frequentists include John Venn, Charles Sanders Peirce in his earlier work, Richard Edler von Mises, and Hans Reichenbach. Peirce later became an advocate of a propensity interpretation. Largely or completely independent of Peirce, Karl Raimund Popper adopted and argued for a propensity interpretation.

[1] One exception was an anonymous paper written about AD 1400, which seems to have drawn no attention in its day. There is also some evidence that, in India, an arithmetic notion of probability was developed and then lost. The paper of AD 1400 is reproduced in *Il problema delle parti in manoscritti del XIV e XV secolo* by Laura Toti Rigatelli (in *Mathemata: Festschrift für Helmuth Gericke*), and discussed in *The Science of Conjecture* by James Franklin. The Indian evidence is presented in *A historical perspective of the recent developments in the theory of sampling from actual populations* by Vidyadhar Prabhakar Godambe (in *Journal of the Indian Society of Agricultural Statistics* v38 #1), and discussed in *The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference* by Ian Hacking.

[2] As had been the anonymous effort of circa 1400.

[3] A characteristic is quantified exactly and only to the extent that arithmetic manipulation of it is *meaningful*.

[4] The paper on which I am most immediately working notes one difference between logicism and subjectivism that I find very interesting; but intrigued readers will have to wait to be disappointed.

[5] See *Subjective Probability: Criticisms, Reflections and Problems* by Henry Ely Kyburg (in *Journal of Philosophical Logic* v7 #1).

[6] I discussed this relationship in my LiveJournal, back when I still had a LiveJournal. I might write of it here some day.

Tags: betting quotient, decision theory, de Finetti, frequentism, Karl Popper, Keynes, logicism, plausibility, Popper, probability, propensities, quantification, subjectivism
