Posts Tagged ‘subjectivism’

Libertine Bayesianism

Thursday, 24 September 2020

As repeatedly noted by me and by many others, there are multiple theories about the fundamental notion of probability, including (though not restricted to) the notion of probabilities as objective, logical relationships amongst propositions and that of probabilities as degrees of belief.

Though those two notions are distinct, subscribers to each typically agree with subscribers to the other upon a great deal of the axiomatic structure of the logic of probability. Further, in practice the main-stream of the first group and that of the second group both arrive at their estimates of measures of probability by adjusting initial values through repeated application, as observations accumulate, of a principle known as Bayes' theorem. Indeed, the main-stream of one group are called objective Bayesians and the main-stream of the other are often called subjective Bayesians.[1] Where the two main-streams differ in practice is in the source of those initial values.
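For reference, the shared updating rule may be written (in notation of my own choosing, with H a hypothesis and E the accumulated evidence)

\[ P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \]

where P(H) is the prior measure and P(H | E) is the revised measure once E has been observed.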

The objective Bayesians believe that, in the absence of information, one begins with what are called non-informative priors. This notion is evolved from the classical idea of a principle of insufficient reason, which said that one should assign equal probabilities to events or to propositions, in the absence of a reason for assigning different probabilities. (For example, begin by assuming that a die is fair.) The objective Bayesians attempt to be more shrewd than the classical theorists, but will often admit that in some cases non-informative priors cannot be found because of a lack of understanding of how to divide the possibilities (in some cases because of complexity).
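A standard illustration of that last difficulty, not drawn from this post, is a cube whose edge is known only to be at most 2 cm long: a prior uniform over edge length and a prior uniform over volume assign different measures to what is really one proposition, since

\[ P(\text{edge} \le 1\,\text{cm}) = \tfrac{1}{2} \qquad\text{but}\qquad P(\text{volume} \le 1\,\text{cm}^{3}) = \tfrac{1}{8}, \]

even though an edge of at most 1 cm and a volume of at most 1 cm³ are the same state of affairs. Insufficient reason gives different answers depending upon how the possibilities are divided.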

The subjective Bayesians believe that one may use as a prior whatever initial degree of belief one has, measured on an interval from 0 through 1. As measures of probability are taken to be degrees of belief, any application of Bayes' theorem that results in a new value is supposed to result in a new degree of belief.
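Here is a minimal sketch, with a coin of unknown bias and with numbers chosen only for illustration, of how the two main-streams differ in nothing but their starting points:

```python
# A minimal sketch (illustrative numbers only): Bayes' theorem applied to a coin
# of unknown bias, over a small grid of candidate biases.  A non-informative
# (uniform) prior and an arbitrary subjective prior are updated by the same rule.
candidate_biases = [0.1, 0.3, 0.5, 0.7, 0.9]

uniform_prior    = [0.20, 0.20, 0.20, 0.20, 0.20]    # equal weights
subjective_prior = [0.05, 0.10, 0.20, 0.30, 0.35]    # someone's degrees of belief

def update(prior, heads):
    """One application of Bayes' theorem for a single toss of the coin."""
    likelihoods  = [b if heads else 1.0 - b for b in candidate_biases]
    unnormalized = [l * p for l, p in zip(likelihoods, prior)]
    total        = sum(unnormalized)
    return [u / total for u in unnormalized]

tosses = [True, True, False, True, True, True, False, True]   # mostly heads

posterior_u, posterior_s = uniform_prior, subjective_prior
for heads in tosses:
    posterior_u = update(posterior_u, heads)
    posterior_s = update(posterior_s, heads)

print([round(p, 3) for p in posterior_u])
print([round(p, 3) for p in posterior_s])
```

The two schools differ only in the two lists of priors; everything thereafter is common property.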

I want to suggest what I think to be a new school of thought, with a Bayesian sub-school, not-withstanding that I have no intention of joining this school.

If a set of things is completely ranked, it's possible to proxy that ranking with a quantification, such that if one thing has a higher rank than another then it is assigned a greater quantification, and that if two things have the same rank then they are assigned the same quantification. If all that we have is a ranking, with no further stipulations, then there will be infinitely many possible quantifications that will work as proxies. Often, we may want to tighten-up the rules of quantification (for example, by requiring that all quantities be in the interval from 0 through 1), and yet still it may be the case that infinitely many quantifications would work equally well as proxies.
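A minimal sketch of the point, with items and values invented solely for illustration:

```python
# A minimal sketch (illustrative values only): one complete ranking, and two of
# the infinitely many quantifications in [0, 1] that proxy it equally well.
ranking = ["lowest", "middle", "highest"]            # a complete ranking, low to high

quantify_linear  = {item: i / (len(ranking) - 1) for i, item in enumerate(ranking)}
quantify_squared = {item: v ** 2 for item, v in quantify_linear.items()}   # squaring preserves order on [0, 1]

print(quantify_linear)     # {'lowest': 0.0, 'middle': 0.5, 'highest': 1.0}
print(quantify_squared)    # {'lowest': 0.0, 'middle': 0.25, 'highest': 1.0}
```

Any strictly increasing map of the interval into itself turns one such proxy into another, which is one way of seeing that infinitely many proxies remain even after the rules are tightened.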

Sets of measures of probability may be considered as proxies for underlying rankings of propositions or of events by probability. The principles to which most theorists agree when they consider probability rankings as such constrain the sets of possible measures, but so long as only a finite set of propositions or of events is under consideration, there are infinitely many sets of measures that will work as proxies.
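To make that concrete with illustrative numbers of my own: on a space of three outcomes, distinct assignments can satisfy the usual constraints (non-negative, summing to 1, additive over disjoint events) while ranking every event identically.

```python
# A minimal sketch (illustrative numbers only): two distinct probability
# measures on a three-outcome space that induce the same ranking of events.
from itertools import chain, combinations

outcomes  = ("a", "b", "c")
measure_1 = {"a": 0.10, "b": 0.30, "c": 0.60}
measure_2 = {"a": 0.15, "b": 0.25, "c": 0.60}

def events(outcomes):
    """Every subset of the outcome space."""
    return list(chain.from_iterable(combinations(outcomes, r) for r in range(len(outcomes) + 1)))

def prob(measure, event):
    return sum(measure[o] for o in event)

def same_ranking(m1, m2):
    evs = events(outcomes)
    return all((prob(m1, e) < prob(m1, f)) == (prob(m2, e) < prob(m2, f))
               for e in evs for f in evs)

print(same_ranking(measure_1, measure_2))   # True
```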

A subjectivist feels free to use his or her degrees of belief so long as they fit the constraints, even though someone else may have a different set of degrees of belief that also fit the constraints. However, the argument for the admissibility of the subjectivist's own set of degrees of belief is not that it is believed; the argument is that one's own set of degrees of belief fits the constraints. Belief as such is irrelevant. It might be that one's own belief is colored by private information, but then the argument is not that one believes the private information, but that the information as such is relevant (as indeed it might be); and there would always be some other sets of measures that also conformed to the private information.

Perhaps one might as well use one's own set of degrees of belief, but one also might every bit as well use any conforming set of measures.

So what I now suggest is what I call a libertine school, which regards measures of probability as proxies for probability rankings and which accepts any set of measures that conform to what is known of the probability ranking of propositions or of events, regardless of whether these measures are thought to be the degrees of belief of anyone, and without any concern that these should become the degrees of belief of anyone; and in particular I suggest libertine Bayesianism, which accepts the analytic principles common to the objective Bayesians and to the subjective Bayesians, but which will allow any set of priors that conforms to those principles.


[1] So great a share of subjectivists subscribe to a Bayesian principle of updating that often the subjective Bayesians are simply called subjectivists as if there were no need to distinguish amongst subjectivists. And, until relatively recently, so little recognition was given to the objective Bayesians that Bayesian was often taken as synonymous with subjectivist.

Theories of Probability — Perfectly Fair and Perfectly Awful

Tuesday, 11 April 2017

I've not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were impossibly good; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were implausibly bad.


The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word probability and its cognates from the Latin probabilitas meant plausibility.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a perfect symmetry (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be no more probable is to be equally probable, then a natural quantification arises.
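Here is a minimal sketch, with two idealized six-sided dice, of how the counting of equally probable cases quantifies the plausibility of compound outcomes:

```python
# A minimal sketch: two perfectly symmetrical six-sided dice give 36 equally
# probable ordered outcomes, and the probability of any compound event is just
# a count of favourable cases over 36.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))        # the 36 ordered pairs

def probability(event):
    """Probability under the presumption that every outcome is equally probable."""
    favourable = [o for o in outcomes if event(o)]
    return Fraction(len(favourable), len(outcomes))

print(probability(lambda o: sum(o) == 7))     # 1/6
print(probability(lambda o: sum(o) == 12))    # 1/36
```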

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn't produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure that consideration of these idealized cases had revealed the proper mathematics for use across all cases. Some were so sure of that mathematics that they inferred that it must be possible to describe the world in terms of cases that were somehow equally likely, without prior investigation positively revealing them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon experience to understand what it means for a die to be more or less balanced; likewise for other devices.)


As subjectivists have it, to say that one thing is more probable than another is to say that that first thing is more believed than is the other. (GLS Shackle proposed that the probability of something might be measured by how surprised one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if X is more probable than Y and Y more probable than Z, then X must be more probable than Z. And the Bayesian subjectivists make a particular demand for what they call coherence. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as gambling ratios without an opponent finding some combination of gambles with those ratios that would guarantee that one suffered a net loss. Such a combination is known as a Dutch book.
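As a minimal sketch of such a combination, with numbers invented for the illustration: if someone's betting quotients for a proposition and for its negation sum to more than 1, an opponent can sell that person both bets and pocket the difference no matter what happens.

```python
# A minimal sketch (illustrative numbers only): a Dutch book against incoherent
# betting quotients.  The agent's quotients for A and for not-A sum to 1.2, and
# the agent pays (quotient x stake) for a bet that returns the stake if the
# proposition turns out true.
stake = 100.0
quotient_A, quotient_not_A = 0.7, 0.5               # incoherent: 0.7 + 0.5 > 1

price_paid = (quotient_A + quotient_not_A) * stake  # agent buys both bets: 120.0

for A_is_true in (True, False):
    payoff = stake                                  # exactly one of the two bets pays off
    print(A_is_true, payoff - price_paid)           # -20.0 either way
```

Quotients summing to less than 1 invite the mirror-image book, with the opponent buying rather than selling.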

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some other means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would need a preëstablished set of quantifications or even could justify committing to such a set. (It is also not clear why, if one's beliefs correspond to measures, one may not use different measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would attempt to get one to agree to a Dutch book. (I don't believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases.


The impossible goodness imagined by the classical theorists was of a different aspect than is the implausible badness of the Bayesian subjectivists. A fair coin is not a friendly coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy's Law.

Notions of Probability

Wednesday, 26 March 2014

I've previously touched on the matter of there being markèdly differing notions all associated with the word probability. Various attempts have been made by various writers to catalogue and to coördinate these notions; this will be one of my own attempts.

[an attempt to discuss conceptions of probability]