Archive for the ‘metaphysics’ Category

Helmholtz's Zählen und Messen

Monday, 16 October 2017

When I first encountered mention of Zählen und Messen, erkenntnistheoretisch betrachtet [Numbering and Measuring, Epistemologically Considered] by Hermann [Ludwig Ferdinand] von Helmholtz, which sought to construct arithmetic on an empiricist foundation, I was interested. But for a very long while I did not act on that interest.

A few years ago, I learned of Zahl und Mass in der Ökonomik: Eine kritische Untersuchung der mathematischen Methode und der mathematischen Preistheorie (1893), by Andreas Heinrich Voigt, an early work on the mathematics of utility, and that it drew upon Helmholtz's Zählen und Messen, which impelled me to seek a copy of the latter to read. To my annoyance, I found that there was no English-language version of it freely available on-line. I decided to create one, but was distracted from the project by other matters. A few days ago, I recognized that my immediate circumstances were such that it might be a good time to return to the task.

I have produced a translation, Numbering and Measuring, Epistemologically Considered by Hermann von Helmholtz. It is not much better than serviceable. I don't plan to return to the work to refine the translation, except perhaps where some reader has suggested a clear improvement that I can simply transcribe.

I have not inserted what criticisms I might make of this work into the document. Nor have I presented my thoughts on how Helmholtz's ostensible empiricism and Frege's logicism are not as far apart as might be thought.

Vocal Cues

Monday, 26 June 2017

Many animals, across different classes, have two distinct sounds that may be classified as growls or as whines, respectively. The growls signal threat; the whines signal friendship or appeasement.

The bark of a dog is actually a combination of a growl with a whine; it is thus not a pure signal of aggression, as many take it to be; it is literally a mixed signal, perhaps indicating confusion on the part of the dog, perhaps signalling both that the dog is prepared to fight and that the dog would consider a peaceful interaction.

When women talk with men whom they find attractive, women tend to raise the pitches of their voices. Men tend to do something different when talking with women whom they find attractive; they mix deeper tones than they would normally use with higher tones than they would normally use. The deep tones are signals of masculinity, of being able to do what men are expected to do. The higher tones of men carry much the same significance as do the higher tones of women — with the additional point in contrast to the deep tones that the man does not mean to threaten the woman.

It amused me to reälize consciously that this behavior by men is at least something like barking. Then I grimly considered that some men are actually barking, each telling the woman that he can be nice to her if she is nice to him, but that he will actively make things unpleasant if she is not. But at least it should typically be possible to disambiguate the threatening behavior, based upon where the low notes are used, and of course the choice of words.

Theories of Probability — Perfectly Fair and Perfectly Awful

Tuesday, 11 April 2017

I've not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were impossibly good; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were implausibly bad.


The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word probability and its cognates from the Latin probabilitas meant plausibility.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a perfect symmetry (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be no more probable is to be equally probable, then a natural quantification arises.
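The quantification can be made concrete by counting. In the following Python sketch (mine, and of course anachronistic as applied to the Seventeenth Century), each of the 36 ordered outcomes of two ideal dice is treated as equally probable, and the probability of an event falls out as the ratio of favorable cases to all cases:

from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # the 36 ordered throws of two dice

def prob(event):
    # the ratio of favorable cases to all (equally plausible) cases
    favorable = sum(1 for o in outcomes if event(o))
    return Fraction(favorable, len(outcomes))

print(prob(lambda o: sum(o) == 7))    # 1/6
print(prob(lambda o: sum(o) >= 10))   # 1/6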

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn't produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure that consideration of these idealized cases had revealed the proper mathematics for use across all cases. Some were so sure of that mathematics that they inferred that it must be possible to describe the world in terms of cases that were somehow equally likely, without prior investigation positively revealing them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon experience to understand what it means for a die to be more or less balanced; likewise for other devices.)


As subjectivists have it, to say that one thing is more probable than another is to say that that first thing is more believed than is the other. (GLS Shackle proposed that the probability of something might be measured by how surprised one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if X is more probable than Y and Y more probable than Z, then X must be more probable than Z. And the Bayesian subjectivists make a particular demand for what they call coherence. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as gambling ratios without an opponent finding some combination of gambles with those ratios that would guarantee that one suffered a net loss. Such a combination is known as a Dutch book.
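The force of the coherence demand is easiest to see in a small worked example, with numbers invented purely for illustration: suppose that one announced a gambling ratio of 0.6 for an outcome X and also 0.6 for its negation. Those ratios sum to more than 1, and an opponent who sells one a unit bet on each side collects a sure profit:

ratio_x, ratio_not_x = 0.6, 0.6             # announced ratios; incoherent, summing past 1
stake = 1.0                                 # the opponent sells a unit bet on each side
for x_true in (True, False):
    cost = (ratio_x + ratio_not_x) * stake  # what one pays for the two bets
    payoff = stake                          # exactly one of the two bets pays out
    print(f"X={x_true}: net {payoff - cost:+.2f}")   # -0.20 either way: a Dutch book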

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some other means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would need a preëstablished set of quantifications or even could justify committing to such a set. (It is also not clear why, if one's beliefs correspond to measures, one may not use different measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would attempt to get one to agree to a Dutch book. (I don't believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases.


The impossible goodness imagined by the classical theorists was of a different aspect than is the implausible badness of the Bayesian subjectivists. A fair coin is not a friendly coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy's Law.

Generalizing the Principle of Additivity

Friday, 17 February 2017

One of the principles often suggested as an axiom of probability is that of additivity. The additivity here is a generalization of arithmetic additivity — which generalization, with other assumptions, will imply the arithmetic case.

The classic formulation of this principle came from Bruno de Finetti. De Finetti was a subjectivist. A typical subjectivist is amongst those who prefer to think in terms of the probability of events, rather than in terms of the probability of propositions. And subjectivists like to found their theory of probability on unconditional probabilities. Using somewhat different notation from that here, the classic formulation of the principle of additivity is

(X ∩ Z = ∅ = Y ∩ Z) ⇒ [(X ≽ Y) ⇔ ((X ∪ Z) ≽ (Y ∪ Z))]

in which X, Y, and Z are sets of events, and in which ≽ renders the underscored arrowhead that is again my notation for weak supraprobability, the union of strict supraprobability with equiprobability.

One of the things that I noticed when considering this proposition is that the condition that Y ∩ Z be empty is superfluous. I tried to get a note published on that issue, but journals were not receptive. I had bigger fish to fry than that one, so I threw-up my hands and moved onward.

When it comes to probability, I'm a logicist. I see probability as primarily about relations amongst propositions (though every event corresponds to a proposition that the event happen, and every proposition corresponds to the event that the proposition is true), and I see each thing about which we state a probability as a compound proposition of the form X given c, in which X and c are themselves propositions (though, if c is a tautology, then the proposition operationalizes as unconditional). I've long pondered what would be a proper generalized restatement of the principle of additivity. If you've looked at the set of axiomata on which I've been working, then you've seen one or more of my efforts. Last night, I clearly saw what I think to be the proper statement:

{[(X1 ∧ X2) | c1] is impossible, and (X2 | c1) ≽ (Y2 | c2)} ⇒ {[(X1 | c1) ≽ (Y1 | c2)] ⇔ [((X1 ∨ X2) | c1) ≽ ((Y1 ∨ Y2) | c2)]}

To get de Finetti's principle from it, set c2 = c1 and make it a tautology, and set X2 = Z = Y2. Note that the condition of (X2 | c1) being weakly supraprobable to (Y2 | c2) is automatically met when the two are the same thing. By itself, this generalization implies my previous generalization and part of another principle that I was treating as an axiom; the remainder of that other principle can be got by applying, to this generalization, basic properties of equiprobability and the principle that strict supraprobability and equiprobability are mutually exclusive. The principle that is thus demoted was awkward; the axiom that was recast was acceptable as it was, but the new version is elegant.
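For whatever reassurance it offers, the forward direction of this statement can at least be checked numerically in the special case in which beliefs are fully quantified as measures. The sketch below (all names in it are my own inventions) draws random measures over a small space of outcomes, draws random events, and tests the implication wherever its conditions apply. It is a sanity check of one direction in the quantified special case, not a proof of anything:

import itertools, random

random.seed(0)
OUTCOMES = range(6)
EVENTS = [frozenset(s) for r in range(1, 7)
          for s in itertools.combinations(OUTCOMES, r)]   # the non-empty events

def cond(p, a, c):
    # P(a | c) for a table p of strictly positive point-masses
    return sum(p[w] for w in a & c) / sum(p[w] for w in c)

applicable = 0
for _ in range(100_000):
    weights = [random.random() + 1e-9 for _ in OUTCOMES]
    total = sum(weights)
    p = [w / total for w in weights]
    X1, X2, Y1, Y2, c1, c2 = (random.choice(EVENTS) for _ in range(6))
    if X1 & X2 & c1:                        # require that (X1 and X2 | c1) be impossible
        continue
    if cond(p, X2, c1) < cond(p, Y2, c2):   # require (X2 | c1) >= (Y2 | c2)
        continue
    if cond(p, X1, c1) >= cond(p, Y1, c2):  # the antecedent of the forward direction
        applicable += 1
        # allow a tiny tolerance for floating-point rounding
        assert cond(p, X1 | X2, c1) >= cond(p, Y1 | Y2, c2) - 1e-12
print(applicable, "applicable cases; the forward implication held in each")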

λέγει αὐτῷ ὁ Πιλᾶτος τί ἐστιν ἀλήθεια; [Pilate saith unto him, What is truth?]

Wednesday, 25 January 2017

Years ago, National Lampoon had a monthly column that they entitled True Facts. The title was a joke, not because the contents weren't true (they were an assembly of extraördinary news reports), but because facts cannot be untrue; something untrue is not a fact. Yet many people in various contexts were using terms such as actual fact, real fact, and true fact, almost as if it were possible for some facts to be false, imaginary, unreal. People still do, perhaps even more often. One can find lots of instances of people using imaginary fact; sometimes they do so ironically, but more often they are quite serious. By imaginary fact they mean a proposition that may be untrue, is likely to be untrue, or simply is untrue. In this retasking of the word fact, they've lost the use of the word to talk about facts, unless they add a word such as true. But, with that change in meaning, it not only becomes possible to use a term such as alternative fact to refer to a rival claim, but it becomes harder to see that untrue rival claims don't have equal standing with true rival claims, as they are all supposedly facts.

We aren't at all helped here by the circumstance that a great many people don't understand the words true and truth. That's not simply a problem of vocabulary. Truth is a hard concept, because it entails a meta-propositional act of mapping from a proposition back to itself. That is to say that, in most cases when we apply the word true or an equivalent, and certainly in the case of true facts, we are explicitly or implicitly making a proposition about a proposition. When we say It's true that I went to the store, the actual referent of the grammatic subject is not I, but the proposition that I went to the store; yet the upshot of this sentence is merely what would be conveyed in saying I went to the store. We perhaps don't need this device of recasting a proposition (I went to the store) as a meta-proposition (It is true that I went to the store), but it is useful because we are not omniscient, and must entertain propositions that are uncertain or discovered to be false; the concept of truth complements the conditions of falsehood and of uncertainty. Yet it is very hard to see that function, exactly because we use the concept to discuss itself. Truth is more easily named than described, if indeed a description is possible.

The difficulty in understanding the nature of truth makes it psychologically easier to embrace such notions as that all aspects of past, present, and future are simply artefacts of individual belief or of group belief (expressed with formulæ such as truth is a social construct) or that what one wants or ought to want is to be treated as true. The word fact may then be used for components of narratives; embracing one narrative is seen as licensing one to accept propositions as fact that are alternative to components of rival narratives, and to reject propositions for no better reason than that they participate in rival narratives. Evolution of narratives is seen as licensing one to change the status of a proposition from fact to falsehood, or vice versa, even when discussing history. And we may even observe those socially identified as fact-checkers testing claims against narratives which are themselves never fact-checked, because the checkers implicitly treat their favored narratives as the ultimate determinant of fact.

When Pilate asked What is truth?, perhaps he was truly curious as to the nature of truth, but he may merely have been asking why he should give a damn about it. Our political leaders have become ever more disdainful of truth. They have long offered us alternative facts, and their followers in each of our major political tribes and in most of the smaller groups as well have decided that, for them, these are the facts. Now we have an Administration that does so more baldly and less artfully. One might hope that this practice will explode on them; but, even if that explosion should happen, their opponents are likely to see an expansion of the envelope within which they may disregard the facts.

Deal-Breakers

Saturday, 7 January 2017

Elsewhere, Pierre Lemieux asked In two sentences, what do you think of the Monty Hall paradox? Unless I construct sentences loaded with conjunctions (which would seem to violate the spirit of the request), an answer in just two sentences will be unsatisfactory (though I provided one). Here in my 'blog, I'll write at greater length.


The first appearance in print of what's called the Monty Hall Problem seems to have been in a letter by Steve Selvin to The American Statistician v29 (1975) #1. The problem resembles those with which Monty Hall used to present contestants on Let's Make a Deal, though Hall has asserted that no problem quite like it were presented on that show. The most popular statement of the Monty Hall Problem came in a letter by Craig Whitaker to the Ask Marilyn column of Parade:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, Do you want to pick door No. 2? Is it to your advantage to switch your choice?

(Before we continue, take car and goat to stand, respectively, for something that you want and something that you don't want, regardless of your actual feelings about cars and about goats.)

There has been considerable controversy about the proper answer, but the text-book answer is that, indeed, one should switch choices. The argument is that, initially, one has a 1/3 probability that the chosen Door has the car, and a 2/3 probability that the car is behind one of the other two Doors. When the host opens one of the other two Doors, the probability remains that the car is behind one of the unchosen Doors, but has gone to 0 for the opened Door, which is to say that the probability is now 2/3 that the car is behind the unchosen, unopened Door.
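Before turning to my objections, it may help to make the text-book answer concrete. The following simulation is a sketch of my own; the presumptions built into it (a car placed with equal frequencies, a guest assigned Door Number 1, and a host who always knowingly opens an unchosen Door concealing a goat) are those of the text-book answer, not mine:

import random

random.seed(0)                                 # seeded only for reproducibility
TRIALS = 100_000
wins_by_switching = 0
for _ in range(TRIALS):
    doors = [1, 2, 3]
    car = random.choice(doors)                 # equal frequencies, by presumption
    pick = 1                                   # the guest is assigned Door Number 1
    opened = random.choice([d for d in doors if d != pick and d != car])
    switch_to = next(d for d in doors if d not in (pick, opened))
    wins_by_switching += (switch_to == car)
print(wins_by_switching / TRIALS)              # roughly 0.667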


My first issue with the text-book answer is with its assignment of initial, quantified probabilities. I cannot even see a basis for qualitative probabilities here; which is to say that I don't see a proper reason for thinking either that the probability of the car being behind a given Door is equal to that for any other Door or that the probability of the car being behind some one Door is greater than that of any other Door. As far as I'm concerned, there is no ordering at all.

The belief that there must be an ordering usually follows upon the even bolder presumption that there must be a quantification. Because quantification has proven to be extremely successful in a great many applications, some people make the inference that it can be successfully applied to any and every question. Others, a bit less rash, take the position that it can be applied everywhere except where it is clearly shown not to be applicable. But even the less rash dogma violates Ockham's razor. Some believe that they have a direct apprehension of such quantification. However, for most of human history, if people thought that they had such literal intuitions then they were silent about it; a quantified notion of probability did not begin to take hold until the second half of the Seventeenth Century. And appeals to the authority of one's intuition should carry little if any weight.

Various thinkers have adopted what is sometimes called the principle of indifference or the principle of insufficient reason to argue that, in the absence of any evidence to the contrary, each of n collectively exhaustive and mutually exclusive possibilities must be assigned equal likelihood. But our division of possibilities into n cases, rather than some other number of cases, is an artefact of taxonomy. Perhaps one or more of the Doors is red and the remainder blue; our first division could then be between two possibilities, so that (under the principle of indifference) one Door would have an initial probability of 1/2 and each of the other two would have a probability of 1/4.

Other persons will propose that we have watched the game played many times, and observed that a car has with very nearly equal frequency appeared behind each of the three Doors. But, while that information might be helpful were we to play many times, I'm not aware of any real justification for treating frequencies as decision-theoretic weights in application to isolated events. You won't be on Monty's show to-morrow.

Indeed, if a guest player truly thought that the Doors initially represented equal expectations, then that player would be unable to choose amongst them, or even to delegate the choice (as the delegation has an expectation equal to that of each Door); indifference is a strange, limiting case. However, indecision — the aforementioned lack of ordering — allows the guest player to delegate the decision. So, either the Door was picked for the guest player (rather than by the guest player), or the guest player associated the chosen Door with a greater probability than either unchosen Door. That point might seem a mere quibble, but declaring that the guest player picked the Door is part of a rhetorical structure that surreptitiously and fallaciously commits the guest player to a positive judgment of prior probability. If there is no case for such commitment, then the paradox collapses.


Well, okay now, let's just beg the question, and say not only that you were assigned Door Number 1, but that for some mysterious reason you know that there is an equal probability of the car being behind each of the Doors. The host then opens Door Number 3, and there's a goat. The problem as stated does not explain why the host opened Door Number 3. The classical statement of the problem does not tell the reader what rule is being used by the host; the presentation tells us that the host knows what's behind the doors, but says nothing about whether or how he uses that knowledge. Hypothetically, he might always open a Door with a goat, or he might use some other rule, so that there were a possibility that he would open the Door with a car, leaving the guest player to select between two concealed goats.
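How much turns on the host's rule may itself be simulated. Here is the same sketch of mine with a blind host, who opens an unchosen Door without regard to what it conceals; discarding the rounds in which he reveals the car, switching wins only about half of the remainder:

import random

random.seed(0)
wins, rounds = 0, 0
for _ in range(100_000):
    doors = [1, 2, 3]
    car = random.choice(doors)
    pick = 1
    opened = random.choice([d for d in doors if d != pick])  # a blind choice
    if opened == car:
        continue                   # the car is revealed; the puzzle never arises
    switch_to = next(d for d in doors if d not in (pick, opened))
    rounds += 1
    wins += (switch_to == car)
print(wins / rounds)               # roughly 0.5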

Nowhere in the statement of the problem are we told that you are the sole guest player. Something seems to go very wrong with the text-book answer if you are not. Imagine that there are many guest players, and that outcomes are duplicated in cases in which more than one guest player selects or is otherwise assigned the same Door. The host opens Door Number 3, and each of the guest players who were assigned that Door trudges away with a goat. As with the scenario in which only one guest player is imagined, more than one rule may govern this choice made by the host. Now, each guest player who was assigned Door Number 1 is permitted to change his or her assignment to Door Number 2, and each guest player who was assigned Door Number 2 is allowed to change his or her assignment to Door Number 1. (Some of you might recall that I proposed a scenario essentially of this sort in a 'blog entry for 1 April 2009.) Their situations appear to be symmetric, such that if one set of guest players should switch then so should the other; yet if one Door is the better choice for one group then it seems that it ought also to be the better for the other group.
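That symmetry may also be simulated. In the sketch below (my own construction), the host knowingly opens some Door concealing a goat, but his rule cannot be tailored to each player; keeping just the rounds in which Door Number 3 is the Door opened, neither surviving group profits from switching:

import random

random.seed(0)
wins_by_switching = {1: 0, 2: 0}
rounds = 0
for _ in range(300_000):
    car = random.choice([1, 2, 3])
    opened = random.choice([d for d in (1, 2, 3) if d != car])  # some goat Door
    if opened != 3:
        continue                   # match the story: Door Number 3 is opened
    rounds += 1
    for pick, other in ((1, 2), (2, 1)):
        wins_by_switching[pick] += (other == car)
for pick in (1, 2):
    print(pick, wins_by_switching[pick] / rounds)   # each roughly 0.5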

The resolution is in understanding that the text-book solution silently assumed that the host were following a particular rule of selection, and that this rule were known to the guest player, whose up-dating of probabilities thus could be informed by that knowledge. But, in order for the text-book solution to be correct, all players must be targeted in the same manner by the response of the host. When there is only one guest player, it is possible for the host to observe rules that respond to all guest players in ways that are not possible when there are multiple guest players, unless they are somehow all assigned the same Door. It isn't even possible to do this for two sets of players each assigned different Doors.


Given the typical presentation of the problem, the typical statement of ostensible solution is wrong; it doesn't solve the problem that was given, and doesn't identify the problem that was actually solved.


[No goats were harmed in the writing of this entry.]

Headway

Saturday, 7 January 2017

My paper on indecision is part of a much larger project. The next step in that project is to provide a formal theory of probability in which it is not always possible to say of outcomes either that one is more probable than another or that they are equally likely. That theory needs to be sufficient to explain the behavior of rational economic agents.

I began struggling actively with this problem before the paper on indecision was published. What I've had is an evolving set of axiomata that resembles the nest of a rat. I've thought that the set has been sufficient; but the axiomata have made over-lapping assertions, there have been rather a lot of them, and one of them has been complex to a degree that made me uncomfortable. Were I better at mathematics, then things might have been put in good order long ago. (I am more able at mathematics than is the typical economist, but I wish that I were considerably better still.) On the other hand, while there are certainly people better at mathematics than am I, no one seems to have accomplished what I seek to do. Economics is, after all, more than its mathematics.

What has most bothered me has been that complex axiom. There hasn't seemed much hope of resolving the general over-lap and of reducing the number of axiomata without first reducing that particular axiom. On 2 January, I was able to do just that, dissolving that axiom into two axiomata, each of which is acceptably simple. Granted, the number of axiomata increased by one; but, now that the parts are each simple, I can begin to see how to reduce their over-lap. Eliminating that over-lap should either pare or vindicate the number of axiomata.

I don't know whether, upon getting results completed and a paper written around them, I would be able to get my work published in a respectable journal. I don't know whether, upon my work's getting published, it would find a significant readership. But the work is deeply important.

Humpty Dumpty, Prescriptivism, and Linguistic Evolution

Tuesday, 13 December 2016

In Chapter 6 of Through the Looking Glass by Charles Lutwidge Dodgson (writing as Lewis Carroll), a famous and rather popular position on language is taken:

When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean — neither more nor less.

If Mr Dumpty's words simply mean whatever he intends them to mean, then the rest of us are not in a position to understand them. If he provides us with verbal definitions, we must know what the defining words mean. He could not even declare in a manner intelligible to us that he meant most words in the same sense as do you or I. We might attempt to tease-out meanings by looking for correlations, but then we would be finding meanings as correlations, which assumes properties (such as stability) that represent more than pure choice on the part of Mr Dumpty. Having been made perfectly private, his vocabulary as such would have no practical value except for internal dialogue. There is a paradox here, which Dodgson surely saw, yet which so very many people don't: If Mr Dumpty's apparent declaration were true, then it could not be understood by us. He might actually just be making some claim about breakfast. We might take (or mistake) his claim for a true proposition (that his vocabulary were purely idiosyncratic), but any co-incidence between his intention and our interpretation would be a result of chance. We could not actually recognize it for whatever proposition it actually expressed.

In order to communicate thoughts with language to other persons, we must have shared presumptions not only about definitions of individual words, but also about grammar. The more that such presumptions are shared, the more that we may communicate; the more fine-grained the presumptions, the more precise the communication possible. In the context of such presumptions, there are right ways of using language in attempt to communicate — though any one of these ways may not be uniquely right or even uniquely best — and there are ways that are wrong.

Those who believe that there are right ways and wrong ways to use language are often called prescriptivists, and generally by those who wish to treat prescriptivism as wrong-headed or as simply a position in no way superior to the alternatives. Yet, while one could find or imagine specific cases in which the beliefs concerning what is right or wrong in language-use were indeed wrong-headed, forms of prescriptivism follow logically from a belief that it is desirable for people to communicate, and especially from a belief that communication is, typically speaking, something rather a lot of which is desirable. As a practical matter, altogether rejecting prescriptivism is thoughtless.

To the extent that the same presumptions of meaning are shared across persons, the meanings of words are independent of the intentions of any one person. Meanings may be treated as adhering to the words themselves. Should Mr Dumpty take a great fall, from which recovery were not possible, still his words would mean exactly what they meant when he uttered them. A very weak prescriptivism would settle there, with the meaning of expressions simply being whatever were common intention in the relevant population. This prescriptivism is so weak as not often to be recognized as prescriptivism at all; but even it says that there is a right and wrong within the use of language.

Those more widely recognized as prescriptivists want something rather different from rude democracy. In the eyes of their detractors, these prescriptivists are dogmatic traditionalists or seeking to creäte or to maintain artificial elites; such prescriptivists have existed and do exist. But, more typically, prescriptivism is founded on the belief that language should be a powerful tool for communication as such. When a typical prescriptivist encounters and considers a linguistic pattern, his or her response is conditioned by concern for how it may be expected to affect the ability to communicate, and not merely in the moment, but how its acceptance or rejection will affect our ability to understand what has been said in the past and what will be said in the future. (Such effects are not confined to the repetition of a specific pattern; other specific patterns may arise from analogy; which is to say that general patterns may be repeated.) Being understood is not considered as licensing patterns that will cause future misunderstandings.

In opposing the replacement of can with the negative can't in can hardly, the typical prescriptivist isn't fighting dogmatically nor to oppress the downtrodden, nor merely concerned to protect our ability to refer to the odd-ball cases to which can't hardly with its original sense applies; rather, the prescriptivist is trying to ward-off a more general chaos in which we can hardly distinguish negation from affirmation. (Likewise for the positive could care less standing where the negative couldn't care less would be proper.) When the prescriptivist objects to using podium to refer to a lectern, it's so that we continue to understand prior use and so that we don't lose a word for the exact meaning that podium has had. We already have a word for lecterns, and we can coin new words if there is a felt need for more.

The usual attempt to rebut prescriptivism of all sorts notes that language evolves. Indeed it does, but prescriptivisms themselves — of all sorts — play rôles in that evolution. When a prescriptivist objects to can't hardly being used where can hardly would be proper, he or she isn't fighting evolution itself but participating in an evolutionary struggle. Sometimes traditional forms are successfully defended; sometimes old forms are resurrected; sometimes deliberate innovations (as opposed to spontaneous innovations) are widely adopted. Sometimes the results have benefitted our ability to communicate; sometimes they have not; but all these cases are part of the dynamic of real-world linguistic evolution.

The Evolution Card is not a good one to play in any event. Linguistic evolution may be inevitable, but it doesn't always represent progress. It will not even tend to progress without an appropriate context. Indeed, sometimes linguistic evolution reverses course. For example: English arose from Germanic languages, in which some words were formed by compounding. But English largely abandoned this characteristic for a time, only to have it reïntroduced by scholarly contact with Classical Greek and Latin. (That's largely why our compounds are so often built of Greek or Latin roots, whereäs those of Modern German are more likely to be constructed with Germanic roots.) It was evolution when compounding was abandoned, and evolution when it was reädopted. If compounding were good, then evolution were wrong to abandon it; if compounding were bad, then evolution were wrong to reëstablish it. And one cannot logically leap from the insight that evolution is both inevitable and neither necessarily good nor necessarily bad to the conclusion that any aspect of linguistic practice is a matter of indifference, that nothing of linguistic practice is good or bad. One should especially not attempt to apply such an inference peculiarly to views on practice that one dislikes.

Nihil ex Nihilo

Tuesday, 6 December 2016

In his foundational work on probability,[1] Bernard Osgood Koopman would write something of form α /κ for a suggested observation α in the context of a presumption κ. That's not how I proceed, but I don't actively object to his having done so, and he had a reason for it. Though Koopman well understood that real-life rarely offered a basis for completely ordering such things by likelihood, let alone associating them with quantities, he was concerned to explore the cases in which quantification were possible, and he wanted his readers to see something rather like division there. Indeed, he would call the left-hand element α a numerator, and the right-hand element κ the denominator.

He would further use 0 to represent that which were impossible. This notation is usable, but I think that he got a bit lost because of it. In his presentation of axiomata, Koopman verbally imposes a tacit assumption that no denominator were 0. This attempt at assumption disturbs me, not because I think that a denominator could be 0, but because it doesn't bear assuming. And, as Koopman believed that probability theory were essentially a generalization of logic (as do I), I think that he should have seen that the proposition didn't bear assuming. Since Koopman was a logicist, the only thing that he should associate with a denominator of 0 would be a system of assumptions that entailed a self-contradiction; anything else is more plausible than that.

In formal logic, it is normally accepted that anything can follow if one allows a self-contradiction into a system, so that any conclusion as such is uninteresting. If faced by something such as X ∨ (Y ∧ ¬Y) (ie X or both Y and not-Y), one throws away the (Y ∧ ¬Y), leaving just the X; if faced with a conclusion Y ∧ ¬Y, then one throws away whatever forced that awful thing upon one.[2] Thus, the formalist approach wouldn't so much forbid a denominator of 0 as declare everything that followed from it to be uninteresting, of no worth. A formal expression that no contradiction is entailed by the presumption κ would have the form

¬(κ ⇒ [(Y ∧ ¬Y)∃Y])

but this just dissolves uselessly:

¬(¬κ ∨ [(Y ∧ ¬Y)∃Y])
¬¬κ ∧ ¬[(Y ∧ ¬Y)∃Y]
κ ∧ [¬(Y ∧ ¬Y)∀Y]
κ ∧ [(¬Y ∨ ¬¬Y)∀Y]
κ ∧ [(¬Y ∨ Y)∀Y]
κ

(because (X ⇔ [X ∧ (Y ∨ ¬Y)∀Y])∀X).
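One can also confirm the dissolution mechanically. Under ordinary two-valued semantics, (Y ∧ ¬Y) is false for either value of Y, so the whole expression takes just the value of κ; a trivial script of my own (purely illustrative) checks as much:

# Check, over Boolean valuations, that not(kappa => [(Y and not Y) for some Y])
# has the same truth-value as kappa itself.
def implies(p, q):
    return (not p) or q

exists_contradiction = any(y and not y for y in (False, True))  # always False
for kappa in (False, True):
    assert (not implies(kappa, exists_contradiction)) == kappa
print("not(kappa => [(Y and not Y) for some Y]) dissolves to kappa")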

In classical logic, the principle of non-contradiction is seen as the bedrock principle, not an assumption (tacit or otherwise), because no alternative can actually be assumed instead.[3] From that perspective, one should call the absence of 0-valued denominators simply a principle.


[1] Koopman, Bernard Osgood; The Axioms and Algebra of Intuitive Probability, The Annals of Mathematics, Series 2 Vol 41 #2, pp 269-292; and The Bases of Probability, Bulletin of the American Mathematical Society, Vol 46 #10, pp 763-774.

[2] Indeed, that principle of rejection is the basis of proof by contradiction, which method baffles so many people!

[3] Aristoteles, The Metaphysics, Bk 4, Ch 3, 1005b15-22.

Delusions of Scientific Literacy

Saturday, 19 November 2016

Science is reasoned analysis of — and theorizing about — empirical data. A scientific conclusion cannot be recognized as such unless one understands the science.

It might be imagined that one can recognize a conclusion as scientific without understanding the science, by recognizing the scientists as such. But the popular formula that science is what scientists do is vacuous when taken literally, and wrong in its usual interpretation. Someone can have an institutional certification as having been trained to be a scientist, and have a paid position ostensibly as a scientist, and yet not be a scientist; for those who actually understand some scientific area, it is fairly easy to find historical examples or perhaps present cases.[1] To recognize a scientist as such, one must recognize what he or she does as science, not the other way around.

Even if it is in some contexts reasonable to accept conclusions from such persons on the basis of their social standing, it is not scientific literacy to accept conclusions on that basis; it is simply trust in the social order.

One needn't always have the full understanding of a scientific expert in order to have a scientific understanding of the reasoning behind some of the broad conclusions of a scientific discipline. But in some cases of present controversy with significant policy implications, the dispute over the relevant conclusions turns upon issues of applied mathematics, and perhaps upon other things such as thermodynamics. No one can be scientifically literate in the areas of controversy without understanding that mathematics and so forth.

In many of the disputations amongst lay-persons over these issues, I observe people in at least one group who assert themselves to be scientifically literate, when they are no such thing, and to accept science, when they are not positioned to know whether what they are accepting is science. These are actually people who simply trust some part of the social order — typically, those state-funded institutions that declare themselves to engage in scientific research.


[1] It is certainly easy to find what lay-persons will acknowledge as examples. However, some of these ostensible examples are actually spurious.