In his foundational work on probability,[1] Bernard Osgood Koopman would write something of the form `α`/`κ` for a suggested observation `α` in the context of a presumption `κ`. That's not how I proceed, but I don't actively object to his having done so, and he had a reason for it. Though Koopman well understood that real life rarely offered a basis for completely ordering such things by likelihood, let alone associating them with quantities, he was concerned to explore the cases in which quantification were possible, and he wanted his readers to see something rather like *division* there. Indeed, he would call the left-hand element `α` a *numerator*, and the right-hand element `κ` the *denominator*.
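To make the division metaphor concrete in the quantified case: the conditional probability of `α` given `κ` is literally a quotient, and that quotient is undefined exactly when the measure of the presumption is 0. A minimal sketch in Python (the toy outcome space and all names here are mine, merely illustrative of the ratio reading, not of Koopman's own formalism):

```python
from fractions import Fraction

# A toy finite outcome space: two fair coin flips, uniform measure.
outcomes = [(a, b) for a in ("H", "T") for b in ("H", "T")]

def prob(event):
    """Probability of an event (a predicate over outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def conditional(alpha, kappa):
    """P(alpha | kappa) as a literal quotient; undefined when the denominator is 0."""
    denominator = prob(kappa)
    if denominator == 0:
        raise ZeroDivisionError("the presumption kappa has measure 0")
    return prob(lambda o: alpha(o) and kappa(o)) / denominator

# P(first flip is heads | at least one heads) = (2/4)/(3/4) = 2/3
p = conditional(lambda o: o[0] == "H", lambda o: "H" in o)
```

Note that the quotient fails not by *assumption* but by arithmetic: division by a 0-measure presumption simply has no value to offer.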

He would further use 0 to represent that which were *impossible*. This notation is usable, but I think that he got a bit lost because of it. In his presentation of axiomata, Koopman verbally imposes a tacit assumption that no denominator were 0. This attempt at assumption disturbs me, not because I think that a denominator *could* be 0, but because it doesn't bear *assuming*. And, as Koopman believed that probability theory were essentially a generalization of *logic* (as do I), I think that he should have seen that the proposition didn't bear assuming. Since Koopman was a logicist, the *only* thing that he should associate with a denominator of 0 would be a system of assumptions that entailed a *self-contradiction*; *anything* else is more plausible than that.

In formal logic, it is normally accepted that *anything* can follow if one allows a self-contradiction into a system, so that any conclusion as such is uninteresting. If faced by something such as `X` ∨ (`Y` ∧ ¬`Y`) (ie `X` or both `Y` and not-`Y`), one throws away the (`Y` ∧ ¬`Y`), leaving just the `X`; if faced with a conclusion `Y` ∧ ¬`Y`, then one throws away whatever forced that awful thing upon one.[2] Thus, the formalist approach wouldn't so much *forbid* a denominator of 0 as declare everything that followed from it to be *uninteresting*, *of no worth*. A formal expression that no contradiction is entailed by the presumption `κ` would have the form

¬(`κ` ⇒ [(`Y` ∧ ¬`Y`)∃`Y`])

but this just dissolves *uselessly*:

¬(¬`κ` ∨ [(`Y` ∧ ¬`Y`)∃`Y`])

¬¬`κ` ∧ ¬[(`Y` ∧ ¬`Y`)∃`Y`]

`κ` ∧ [¬(`Y` ∧ ¬`Y`)∀`Y`]

`κ` ∧ [(¬`Y` ∨ ¬¬`Y`)∀`Y`]

`κ` ∧ [(¬`Y` ∨ `Y`)∀`Y`]

`κ` (because (`X` ⇔ [`X` ∧ ((`Y` ∨ ¬`Y`)∀`Y`)])∀`X`).
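For any reader who would like to see that dissolution verified mechanically: over truth values, with `Y` ranging over {true, false} as a stand-in for "any proposition `Y`", every step of the chain collapses to `κ` itself. A brute-force check in Python (the names here are mine):

```python
# Brute-force check of the dissolution above: each step of the chain,
# read over truth values, is equivalent to kappa itself.
YS = (True, False)  # Y ranges over truth values, standing in for "any Y"

def implies(p, q):
    """Material implication p => q."""
    return (not p) or q

for kappa in (True, False):
    steps = [
        not implies(kappa, any(y and (not y) for y in YS)),          # ¬(κ ⇒ [(Y ∧ ¬Y)∃Y])
        not ((not kappa) or any(y and (not y) for y in YS)),         # ¬(¬κ ∨ [(Y ∧ ¬Y)∃Y])
        (not (not kappa)) and (not any(y and (not y) for y in YS)),  # ¬¬κ ∧ ¬[(Y ∧ ¬Y)∃Y]
        kappa and all(not (y and (not y)) for y in YS),              # κ ∧ [¬(Y ∧ ¬Y)∀Y]
        kappa and all((not y) or (not (not y)) for y in YS),         # κ ∧ [(¬Y ∨ ¬¬Y)∀Y]
        kappa and all((not y) or y for y in YS),                     # κ ∧ [(¬Y ∨ Y)∀Y]
    ]
    assert all(step == kappa for step in steps)
```

The final assertion passing for both values of `kappa` is exactly the uselessness complained of: the formal demand of non-contradiction says no more than `κ` already said.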

In *classical* logic, the principle of non-contradiction is seen as the *bedrock principle*, not an *assumption* (tacit or otherwise), because no alternative *can* actually be assumed instead.[3] From that perspective, one should call the absence of 0-valued denominators simply a *principle*.

[1] Koopman, Bernard Osgood; *The Axioms and Algebra of Intuitive Probability*, The Annals of Mathematics, Series 2, Vol 41 #2, pp 269-292; and *The Bases of Probability*, Bulletin of the American Mathematical Society, Vol 46 #10, pp 763-774.

[2] Indeed, that principle of rejection is the basis of proof by contradiction, which method baffles so many people!

[3] Aristoteles, The Metaphysics, Bk 4, Ch 3, 1005b15-22.