Posts Tagged ‘plausibility’

Theories of Probability — Perfectly Fair and Perfectly Awful

Tuesday, 11 April 2017

I've not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were impossibly good; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were implausibly bad.


The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word probability and its cognates from the Latin probabilitas meant plausibility.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a perfect symmetry (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be no more probable is to be equally probable, then a natural quantification arises.
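
To take a familiar illustration of that quantification: with two dice each imagined as perfectly balanced, none of the 6 × 6 = 36 ordered pairs of upper-most faces is more plausible than any other. Treating those pairs as equally probable, and noting that exactly six of them, to wit (1,6), (2,5), (3,4), (4,3), (5,2), and (6,1), give a sum of seven, one assigns to that sum a probability of 6/36 = 1/6.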

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn't produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure that consideration of these idealized cases had revealed the proper mathematics for use across all cases. Some were so sure of that mathematics that they inferred that it must be possible to describe the world in terms of cases that were somehow equally likely, without prior investigation positively revealing them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon experience to understand what it means for a die to be more or less balanced; likewise for other devices.)


As subjectivists have it, to say that one thing is more probable than another is to say that that first thing is more believed than is the other. (GLS Shackle proposed that the probability of something might be measured by how surprised one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if X is more probable than Y and Y more probable than Z, then X must be more probable than Z. And the Bayesian subjectivists make a particular demand for what they call coherence. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as gambling ratios without an opponent finding some combination of gambles with those ratios that would guarantee that one suffered a net loss. Such a combination is known as a Dutch book.
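
To illustrate with a standard example: suppose that one's quantifications assigned 0.6 both to some outcome A and to its negation. Used as gambling ratios, those numbers commit one to paying 0.6·S for a gamble returning S if A obtains, and 0.6·S for a gamble returning S if A does not obtain. An opponent who sells both gambles collects 1.2·S while paying back exactly S however things turn out, leaving one with a guaranteed loss of 0.2·S. The general claim, as the Bayesians have it, is that quantifications are immune to such books exactly when they obey the usual laws of finitely additive probability.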

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some other means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would need a preëstablished set of quantifications or even could justify committing to such a set. (It is also not clear why, if one's beliefs correspond to measures, one may not use different measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would attempt to get one to agree to a Dutch book. (I don't believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases.


The impossible goodness imagined by the classical theorists was of a different aspect than is the implausible badness of the Bayesian subjectivists. A fair coin is not a friendly coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy's Law.

Headway

Saturday, 7 January 2017

My paper on indecision is part of a much larger project. The next step in that project is to provide a formal theory of probability in which it is not always possible to say of outcomes either that one is more probable than another or that they are equally likely. (One might, for example, be unable to rank the plausibility of rain at noon tomorrow against that of drawing a red ball from an urn of unknown composition, yet be unwilling to declare the two equally likely.) That theory needs to be sufficient to explain the behavior of rational economic agents.

I began struggling actively with this problem before the paper on indecision was published. What I've had is an evolving set of axiomata that resembles the nest of a rat. I've thought that the set has been sufficient; but the axiomata have made over-lapping assertions, there have been rather a lot of them, and one of them has been complex to a degree that made me uncomfortable. Were I better at mathematics, then things might have been put in good order long ago. (I am more able at mathematics than is the typical economist, but I wish that I were still considerably better.) On the other hand, while there are certainly people better at mathematics than am I, no one seems to have accomplished what I seek to do. Economics is, after all, more than its mathematics.

What has most bothered me has been that complex axiom. There hasn't seemed much hope of resolving the general over-lap and of reducing the number of axiomata without first simplifying that particular axiom. On 2 January, I was able to do just that, dissolving that axiom into two axiomata, each of which is acceptably simple. Granted, the number of axiomata increased by one; but now that the parts are each simple, I can begin to see how to reduce their overlap. Eliminating that overlap should either pare down the number of axiomata or vindicate it.

I don't know whether, upon getting results completed and a paper written around them, I would be able to get my work published in a respectable journal. I don't know whether, upon my work's getting published, it would find a significant readership. But the work is deeply important.

Nihil ex Nihilo

Tuesday, 6 December 2016

In his foundational work on probability,[1] Bernard Osgood Koopman would write something of the form α/κ for a suggested observation α in the context of a presumption κ. That's not how I proceed, but I don't actively object to his having done so, and he had a reason for it. Though Koopman well understood that real life rarely offered a basis for completely ordering such things by likelihood, let alone associating them with quantities, he was concerned to explore the cases in which quantification were possible, and he wanted his readers to see something rather like division there. Indeed, he would call the left-hand element α a numerator, and the right-hand element κ the denominator.
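
The division that he presumably wanted seen is the familiar one from the quantified cases: there, writing P for a numerical probability, the probability of α on the presumption κ is the quotient P(α ∧ κ)/P(κ), so that the presumption κ does indeed sit where a denominator would sit.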

He would further use 0 to represent that which were impossible. This notation is usable, but I think that he got a bit lost because of it. In his presentation of axiomata, Koopman verbally imposes a tacit assumption that no denominator were 0. This attempt at assumption disturbs me, not because I think that a denominator could be 0, but because it doesn't bear assuming. And, as Koopman believed that probability theory were essentially a generalization of logic (as do I), I think that he should have seen that the proposition didn't bear assuming. Since Koopman was a logicist, the only thing that he should associate with a denominator of 0 would be a system of assumptions that entailed a self-contradiction; anything else is more plausible than that.

In formal logic, it is normally accepted that anything can follow if one allows a self-contradiction into a system, so that any conclusion as such is uninteresting. If faced by something such as X ∨ (Y ∧ ¬Y) (ie X or both Y and not-Y), one throws away the (Y ∧ ¬Y), leaving just the X; if faced with a conclusion Y ∧ ¬Y, then one throws away whatever forced that awful thing upon one.[2] Thus, the formalist approach wouldn't so much forbid a denominator of 0 as declare everything that followed from it to be uninteresting, of no worth. A formal expression that no contradiction is entailed by the presumption κ would have the form
¬(κ ⇒ [(Y ∧ ¬Y)∃Y])
but this just dissolves uselessly:
¬(¬κ ∨ [(Y ∧ ¬Y)∃Y])
¬¬κ ∧ ¬[(Y ∧ ¬Y)∃Y]
κ ∧ [¬(Y ∧ ¬Y)∀Y]
κ ∧ [(¬Y ∨ ¬¬Y)∀Y]
κ ∧ [(¬Y ∨ Y)∀Y]
κ
(because (X ⇔ [X ∧ (Y ∨ ¬Y)∀Y])∀X).

In classical logic, the principle of non-contradiction is seen as the bedrock principle, not an assumption (tacit or otherwise), because no alternative can actually be assumed instead.[3] From that perspective, one should call the absence of 0-valued denominators simply a principle.


[1] Koopman, Bernard Osgood; The Axioms and Algebra of Intuitive Probability, The Annals of Mathematics, Series 2 Vol 41 #2, pp 269-292; and The Bases of Probability, Bulletin of the American Mathematical Society, Vol 46 #10, pp 763-774.

[2] Indeed, that principle of rejection is the basis of proof by contradiction, which method baffles so many people!

[3] Aristoteles, The Metaphysics, Bk 4, Ch 3, 1005b15-22.

Notions of Probability

Wednesday, 26 March 2014

I've previously touched on the matter of there being markèdly differing notions all associated with the word probability. Various attempts have been made by various writers to catalogue and to coördinate these notions; this will be one of my own attempts.

(an attempt to discuss conceptions of probability)