Archive for the ‘epistemology’ Category

Helmholtz's Zählen und Messen

Monday, 16 October 2017

When I first encountered mention of Zählen und Messen, erkenntnisstheoretisch betrachtet [Numbering and Measuring, Epistemologically Considered] by Hermann [Ludwig Ferdinand] von Helmholtz, a work that sought to construct arithmetic on an empiricist foundation, I was interested. But for a very long while I did not act on that interest.

A few years ago, I learned of Zahl und Mass in der Ökonomik: Eine kritische Untersuchung der mathematischen Methode und der mathematischen Preistheorie (1893), by Andreas Heinrich Voigt, an early work on the mathematics of utility, and that it drew upon Helmholtz's Zählen und Messen, which impelled me to seek a copy of the latter to read. To my annoyance, I found that there was no English-language version of it freely available on-line. I decided to create one, but was distracted from the project by other matters. A few days ago, I recognized that my immediate circumstances were such that it might be a good time to return to the task.

I have produced a translation, Numbering and Measuring, Epistemologically Considered by Hermann von Helmholtz. It is not much better than serviceable. I don't plan to return to the work, to refine the translation, except perhaps where some reader has suggested a clear improvement and I effect a transcription.

I have not inserted what criticisms I might make of this work into the document. Nor have I presented my thoughts on how Helmholtz's ostensible empiricism and Frege's logicism are not as far apart as might be thought.

Vocal Cues

Monday, 26 June 2017

Many animals, across different classes, have two distinct sounds that may be classified as growls or as whines, respectively. The growls signal threat; the whines signal friendship or appeasement.

The bark of a dog is actually a combination of a growl with a whine; it is thus not a pure signal of aggression, as many take it to be; it is literally a mixed signal, perhaps indicating confusion on the part of the dog, perhaps signalling both that the dog is prepared to fight and that the dog would consider a peaceful interaction.

When women talk with men whom they find attractive, women tend to raise the pitches of their voices. Men tend to do something different when talking with women whom they find attractive; they mix deeper tones than they would normally use with higher tones than they would normally use. The deep tones are signals of masculinity, of being able to do what men are expected to do. The higher tones of men carry much the same significance as do the higher tones of women — with the additional point in contrast to the deep tones that the man does not mean to threaten the woman.

It amused me to reälize consciously that this behavior by men is at least something like barking. Then I grimly considered that some men are actually barking, each telling a woman that he can be nice to her if she is nice to him, but will actively make things unpleasant if she is not. But at least it should typically be possible to disambiguate the threatening behavior, based upon where the low notes are used, and of course upon the choice of words.

Theories of Probability — Perfectly Fair and Perfectly Awful

Tuesday, 11 April 2017

I've not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were impossibly good; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were implausibly bad.


The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word probability and its cognates from the Latin probabilitas meant plausibility.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a perfect symmetry (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be no more probable is to be equally probable, then a natural quantification arises.
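For a concrete illustration (mine, not one drawn from Fermat or from Pascal): if each of the six faces of a die is taken to be no more probable than any other, then each is assigned a probability of 1/6; and any particular sequence of two throws, say a five followed by a three, is assigned a probability of (1/6)·(1/6) = 1/36, there being thirty-six such sequences, each no more plausible than any other.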

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn't produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure that consideration of these idealized cases had revealed the proper mathematics for use across all cases. Some were so sure of that mathematics that they inferred that it must be possible to describe the world in terms of cases that were somehow equally likely, without prior investigation positively revealing them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon experience to understand what it means for a die to be more or less balanced; likewise for other devices.)


As subjectivists have it, to say that one thing is more probable than another is to say that that first thing is more believed than is the other. (GLS Shackle proposed that the probability of something might be measured by how surprised one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if X is more probable than Y and Y more probable than Z, then X must be more probable than Z. And the Bayesian subjectivists make a particular demand for what they call coherence. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as gambling ratios without an opponent finding some combination of gambles with those ratios that would guarantee that one suffered a net loss. Such a combination is known as a Dutch book.
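As a minimal sketch of what coherence rules out (my illustration, not one taken from any particular subjectivist text): suppose that quantifications of 0.6 were assigned both to an outcome and to its negation. Those numbers sum to more than 1, and an opponent who sells the corresponding unit bets at those prices guarantees the believer a loss, whatever happens. A few lines of Python make the arithmetic explicit:

# Hypothetical quantifications, used as gambling ratios for unit bets
# (each bet pays 1 if won, 0 if lost).
q_e     = 0.6   # price paid for a bet on the event E
q_not_e = 0.6   # price paid for a bet on not-E
cost = q_e + q_not_e                  # the believer pays 1.2 for the pair
for e_occurs in (True, False):
    payoff = 1.0                      # exactly one of the two bets pays off
    print(e_occurs, payoff - cost)    # -0.2 either way: a Dutch book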

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some other means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would need a preëstablished set of quantifications or even could justify committing to such a set. (It is also not clear why, if one's beliefs correspond to measures, one may not use different measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would attempt to get one to agree to a Dutch book. (I don't believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases.


The impossible goodness imagined by the classical theorists was of a different aspect than is the implausible badness of the Bayesian subjectivists. A fair coin is not a friendly coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy's Law.

λέγει αὐτῷ ὁ Πιλᾶτος τί ἐστιν ἀλήθεια;

Wednesday, 25 January 2017

Years ago, National Lampoon had a monthly column that they entitled True Facts. The title was a joke, not because the contents weren't true (they were an assembly of extraördinary news reports), but because facts cannot be untrue; something untrue is not a fact. Yet many people in various contexts were using terms such as actual fact, real fact, and true fact, almost as if it were possible for some facts to be false, imaginary, unreal. People still do, perhaps even more often. One can find lots of instances of people using imaginary fact; sometimes they do so ironically, but more often they are quite serious. By imaginary fact they mean a proposition that may be untrue, is likely to be untrue, or simply is untrue. In this retasking of the word fact, they've lost the use of the word to talk about facts, unless they add a word such as true. But, with that change in meaning, it not only becomes possible to use a term such as alternative fact to refer to a rival claim, but it becomes harder to see that untrue rival claims don't have equal standing with true rival claims, as they are all supposedly facts.

We aren't at all helped here by the circumstance that a great many people don't understand the words true and truth. That's not simply a problem of vocabulary. Truth is a hard concept, because it entails a meta-propositional act of mapping from a proposition back to itself. That is to say that, in most cases when we apply the word true or an equivalent, and certainly in the case of true facts, we are explicitly or implicitly making a proposition about a proposition. When we say It's true that I went to the store, the actual referent of the grammatic subject is not I, but the proposition that I went to the store, yet the upshot of this sentence is merely what would be conveyed in saying I went to the store. We perhaps don't need this device of recasting a proposition (I went to the store) as a meta-proposition (It is true that I went to the store), but it is useful because we are not omniscient, and must entertain propositions that are uncertain or discovered to be false; the concept of truth complements the conditions of falsehood and of uncertainty. Yet it is very hard to see that function, exactly because we use the concept to discuss itself. Truth is more easily named than described, if indeed a description is possible.

The difficulty in understanding the nature of truth makes it psychologically easier to embrace such notions as that all aspects of past, present, and future are simply artefacts of individual belief or of group belief (expressed with formulæ such as truth is a social construct) or that what one wants or ought to want is to be treated as true. The word fact may then be used for components of narratives; embracing one narrative is seen as licensing one to accept propositions as fact that are alternative to components of rival narratives, and to reject propositions for no better reason than that they participate in rival narratives. Evolution of narratives is seen as licensing one to change the status of a proposition from fact to falsehood, or vice versa, even when discussing history. And we may even observe those socially identified as fact-checkers testing claims against narratives which are themselves never fact-checked, because the checkers implicitly treat their favored narratives as the ultimate determinant of fact.

When Pilate asked What is truth?, perhaps he was truly curious as to the nature of truth, but he may merely have been asking why he should give a damn about it. Our political leaders have become ever more disdainful of truth. They have long offered us alternative facts, and their followers in each of our major political tribes and in most of the smaller groups as well have decided that, for them, these are the facts. Now we have an Administration that does so more baldly and less artfully. One might hope that this practice will explode on them; but, even if that explosion should happen, their opponents are likely to see an expansion of the envelope within which they may disregard the facts.

Deal-Breakers

Saturday, 7 January 2017

Elsewhere, Pierre Lemieux asked In two sentences, what do you think of the Monty Hall paradox? Unless I construct sentences loaded with conjunctions (which would seem to violate the spirit of the request), an answer in just two sentences will be unsatisfactory (though I provided one). Here in my 'blog, I'll write at greater length.


The first appearance in print of what's called the Monty Hall Problem seems to have been in a letter by Steve Selvin to The American Statistician v29 (1975) #1. The problem resembles those with which Monty Hall used to present contestants on Let's Make a Deal, though Hall has asserted that no problem quite like it were presented on that show. The most popular statement of the Monty Hall Problem came in a letter by Craig Whitaker to the Ask Marilyn column of Parade:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, Do you want to pick door No. 2? Is it to your advantage to switch your choice?

(Before we continue, take car and goat to stand, respectively, for something that you want and something that you don't want, regardless of your actual feelings about cars and about goats.)

There has been considerable controversy about the proper answer, but the text-book answer is that, indeed, one should switch choices. The argument is that, initially, one has a 1/3 probability that the chosen Door has the car, and a 2/3 probability that the car is behind one of the other two Doors. When the host opens one of those other two Doors, the probability remains 2/3 that the car is behind one of them, but has gone to 0 for the opened Door, which is to say that the probability is now 2/3 that the car is behind the unchosen, unopened Door.
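Spelled out (my arithmetic, resting upon the text-book presumptions that the car is equally likely to be behind each Door and that the host always opens an unchosen Door concealing a goat, choosing at random when both unchosen Doors conceal goats): having picked Door 1,

P(host opens Door 3) = (1/3)(1/2) + (1/3)(1) + (1/3)(0) = 1/2
P(car behind Door 1 | host opens Door 3) = (1/3)(1/2) / (1/2) = 1/3
P(car behind Door 2 | host opens Door 3) = (1/3)(1) / (1/2) = 2/3

where the three terms of the first sum correspond to the car being behind Doors 1, 2, and 3, respectively.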


My first issue with the text-book answer is with its assignment of initial, quantified probabilities. I cannot even see a basis for qualitative probabilities here; which is to say that I don't see a proper reason for thinking either that the probability of the car being behind a given Door is equal to that for any other Door or that the probability of the car being behind some one Door is greater than that of any other Door. As far as I'm concerned, there is no ordering at all.

The belief that there must be an ordering usually follows upon the even bolder presumption that there must be a quantification. Because quantification has proven to be extremely successful in a great many applications, some people make the inference that it can be successfully applied to any and every question. Others, a bit less rash, take the position that it can be applied everywhere except where it is clearly shown not to be applicable. But even the less rash dogma violates Ockham's razor. Some believe that they have a direct apprehension of such quantification. However, for most of human history, if people thought that they had such literal intuitions then they were silent about it; a quantified notion of probability did not begin to take hold until the second half of the Seventeenth Century. And appeals to the authority of one's intuition should carry little if any weight.

Various thinkers have adopted what is sometimes called the principle of indifference or the principle of insufficient reason to argue that, in the absence of any evidence to the contrary, each of n collectively exhaustive and mutually exclusive possibilities must be assigned equal likelihood. But our division of possibilities into n cases, rather than some other number of cases, is an artefact of taxonomy. Perhaps one or more of the Doors is red and the remainder blue; our first division could then be between two possibilities, so that (under the principle of indifference) one Door would have an initial probability of 1/2 and each of the other two would have a probability of 1/4.

Other persons will propose that we have watched the game played many times, and observed that a car has with very nearly equal frequency appeared behind each of the three Doors. But, while that information might be helpful were we to play many times, I'm not aware of any real justification for treating frequencies as decision-theoretic weights in application to isolated events. You won't be on Monty's show to-morrow.

Indeed, if a guest player truly thought that the Doors initially represented equal expectations, then that player would be unable to choose amongst them, or even to delegate the choice (as the delegation has an expectation equal to that of each Door); indifference is a strange, limiting case. However, indecision — the aforementioned lack of ordering — allows the guest player to delegate the decision. So, either the Door was picked for the guest player (rather than by the guest player), or the guest player associated the chosen Door with a greater probability than either unchosen Door. That point might seem a mere quibble, but declaring that the guest player picked the Door is part of a rhetorical structure that surreptitiously and fallaciously commits the guest player to a positive judgment of prior probability. If there is no case for such commitment, then the paradox collapses.


Well, okay now, let's just beg the question, and say not only that you were assigned Door Number 1, but that for some mysterious reason you know that there is an equal probability of the car being behind each of the Doors. The host then opens Door Number 3, and there's a goat. The problem as stated does not explain why the host opened Door Number 3. The classical statement of the problem does not tell the reader what rule is being used by the host; the presentation tells us that the host knows what's behind the doors, but says nothing about whether or how he uses that knowledge. Hypothetically, he might always open a Door with a goat, or he might use some other rule, so that there were a possibility that he would open the Door with a car, leaving the guest player to select between two concealed goats.
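A quick simulation (mine, and itself resting upon the begged presumption of equally probable initial placement) shows how much turns on the unstated rule. Under a rule by which the host always reveals a goat behind an unchosen Door, switching wins about two times in three; under a rule by which the host opens one of the unchosen Doors at random and we merely happen to have been shown a goat, switching wins only about half of the time:

import random

def trial(host_always_reveals_goat):
    car = random.choice([1, 2, 3])
    pick = 1                               # the guest player holds Door 1
    others = [d for d in (1, 2, 3) if d != pick]
    if host_always_reveals_goat:
        opened = random.choice([d for d in others if d != car])
    else:
        opened = random.choice(others)     # the host might expose the car
    if opened == car:
        return None                        # discard: the stated situation (a goat shown) did not arise
    switch_to = next(d for d in others if d != opened)
    return switch_to == car                # True exactly when switching wins

for rule in (True, False):
    outcomes = [r for r in (trial(rule) for _ in range(100000)) if r is not None]
    print(rule, sum(outcomes) / len(outcomes))   # roughly 0.67 versus roughly 0.5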

Nowhere in the statement of the problem are we told that you are the sole guest player. Something seems to go very wrong with the text-book answer if you are not. Imagine that there are many guest players, and that outcomes are duplicated in cases in which more than one guest player selects or is otherwise assigned the same Door. The host opens Door Number 3, and each of the guest players who were assigned that Door trudges away with a goat. As with the scenario in which only one guest player is imagined, more than one rule may govern this choice made by the host. Now, each guest player who was assigned Door Number 1 is permitted to change his or her assignment to Door Number 2, and each guest player who was assigned Door Number 2 is allowed to change his or her assignment to Door Number 1. (Some of you might recall that I proposed a scenario essentially of this sort in a 'blog entry for 1 April 2009.) Their situations appear to be symmetrical, such that if one set of guest players should switch then so should the other; yet if one Door is the better choice for one group then it seems that it ought also to be the better for the other group.

The resolution is in understanding that the text-book solution silently assumed that the host were following a particular rule of selection, and that this rule were known to the guest player, whose up-dating of probabilities thus could be informed by that knowledge. But, in order for the text-book solution to be correct, all players must be targeted in the same manner by the response of the host. When there is only one guest player, it is possible for the host to observe rules that respond to all guest players in ways that are not possible when there are multiple guest players, unless they are somehow all assigned the same Door. It isn't even possible to do this for two sets of players each assigned different Doors.


Given the typical presentation of the problem, the typical statement of ostensible solution is wrong; it doesn't solve the problem that was given, and doesn't identify the problem that was actually solved.


[No goats were harmed in the writing of this entry.]

Headway

Saturday, 7 January 2017

My paper on indecision is part of a much larger project. The next step in that project is to provide a formal theory of probability in which it is not always possible to say of outcomes either that one is more probable than another or that they are equally likely. That theory needs to be sufficient to explain the behavior of rational economic agents.

I began struggling actively with this problem before the paper on indecision was published. What I've had is an evolving set of axiomata that resembles the nest of a rat. I've thought that the set has been sufficient; but the axiomata have made over-lapping assertions, there have been rather a lot of them, and one of them has been complex to a degree that made me uncomfortable. Were I better at mathematics, then things might have been put in good order long ago. (I am more able at mathematics than is the typical economist, but I wish that I were considerably better still.) On the other hand, while there are certainly people better at mathematics than am I, no one seems to have accomplished what I seek to do. Economics is, after all, more than its mathematics.

What has most bothered me has been that complex axiom. There hasn't seemed much hope of resolving the general over-lap and of reducing the number of axiomata without first reducing that particular axiom. On 2 January, I was able to do just that, dissolving that axiom into two axiomata, each of which is acceptably simple. Granted, the number of axiomata increased by one; but now that the parts are each simple, I can begin to see how to reduce their over-lap. Eliminating that over-lap should either pare or vindicate the number of axiomata.

I don't know whether, upon getting results completed and a paper written around them, I would be able to get my work published in a respectable journal. I don't know whether, upon my work's getting published, it would find a significant readership. But the work is deeply important.

Humpty Dumpty, Prescriptivism, and Linguistic Evolution

Tuesday, 13 December 2016

In Chapter 6 of Through the Looking Glass by Charles Lutwidge Dodgson (writing as Lewis Carroll), a famous and rather popular position on language is taken:

When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean — neither more nor less.

If Mr Dumpty's words simply mean whatever he intends them to mean, then the rest of us are not in a position to understand them. If he provides us with verbal definitions, we must know what the defining words mean. He could not even declare in a manner intelligible to us that he meant most words in the same sense as do you or I. We might attempt to tease-out meanings by looking for correlations, but then we would be finding meanings as correlations, which assumes properties (such as stability) that represent more than pure choice on the part of Mr Dumpty. Having been made perfectly private, his vocabulary as such would have no practical value except for internal dialogue. There is a paradox here, which Dodgson surely saw, yet which so very many people don't: If Mr Dumpty's apparent declaration were true, then it could not be understood by us. He might actually just be making some claim about breakfast. We might take (or mistake) his claim for a true proposition (that his vocabulary were purely idiosyncratic), but any co-incidence between his intention and our interpretation would be a result of chance. We could not actually recognize it for whatever proposition it actually expressed.

In order to communicate thoughts with language to other persons, we must have shared presumptions not only about definitions of individual words, but also about grammar. The more that such presumptions are shared, the more that we may communicate; the more fine-grained the presumptions, the more precise the communication possible. In the context of such presumptions, there are right ways of using language in attempt to communicate — though any one of these ways may not be uniquely right or even uniquely best — and there are ways that are wrong.

Those who believe that there are right ways and wrong ways to use language are often called prescriptivist, and generally by those who wish to treat prescriptivism as wrong-headed or as simply a position in no way superior to the alternatives. Yet, while one could find or imagine specific cases where the beliefs concerning what is right or wrong in language-use were indeed wrong-headed, forms of prescriptivism follow logically from a belief that it is desirable for people to communicate, and especially from a belief that communication is, typically speaking, something rather a lot of which is desirable. As a practical matter, altogether rejecting prescriptivism is thoughtless.

To the extent that the same presumptions of meaning are shared across persons, the meanings of words are independent of the intentions of any one person. Meanings may be treated as adhering to the words themselves. Should Mr Dumpty take a great fall, from which recovery were not possible, still his words would mean exactly what they meant when he uttered them. A very weak prescriptivism would settle there, with the meaning of expressions simply being whatever were common intention in the relevant population. This prescriptivism is so weak as not often to be recognized as prescriptivism at all; but even it says that there is a right and wrong within the use of language.

Those more widely recognized as prescriptivists want something rather different from rude democracy. In the eyes of their detractors, these prescriptivists are dogmatic traditionalists or seeking to creäte or to maintain artificial elites; such prescriptivists have existed and do exist. But, more typically, prescriptivism is founded on the belief that language should be a powerful tool for communication as such. When a typical prescriptivist encounters and considers a linguistic pattern, his or her response is conditioned by concern for how it may be expected to affect the ability to communicate, and not merely in the moment, but how its acceptance or rejection will affect our ability to understand what has been said in the past and what will be said in the future. (Such effects are not confined to the repetition of a specific pattern; other specific patterns may arise from analogy; which is to say that general patterns may be repeated.) Being understood is not considered as licensing patterns that will cause future misunderstandings.

In opposing the replacement of can with the negative can't in can hardly, the typical prescriptivist isn't fighting dogmatically nor to oppress the downtrodden, nor merely concerned to protect our ability to refer to the odd-ball cases to which can't hardly with its original sense applies; rather, the prescriptivist is trying to ward-off a more general chaos in which we can hardly distinguish negation from affirmation. (Likewise for the positive could care less standing where the negative couldn't care less would be proper.) When the prescriptivist objects to using podium to refer to a lectern, it's so that we continue to understand prior use and so that we don't lose a word for the exact meaning that podium has had. We already have a word for lecterns, and we can coin new words if there is a felt need for more.

The usual attempt to rebut prescriptivism of all sorts notes that language evolves. Indeed it does, but prescriptivisms themselves — of all sorts — play rôles in that evolution. When a prescriptivist objects to can't hardly being used where can hardly would be proper, he or she isn't fighting evolution itself but participating in an evolutionary struggle. Sometimes traditional forms are successfully defended; sometimes old forms are resurrected; sometimes deliberate innovations (as opposed to spontaneous innovations) are widely adopted. Sometimes the results have benefitted our ability to communicate; sometimes they have not; but all these cases are part of the dynamic of real-world linguistic evolution.

The Evolution Card is not a good one to play in any event. Linguistic evolution may be inevitable, but it doesn't always represent progress. It will not even tend to progress without an appropriate context. Indeed, sometimes linguistic evolution reverses course. For example: English arose from Germanic languages, in which some words were formed by compounding. But English largely abandoned this characteristic for a time, only to have it reïntroduced by scholarly contact with Classical Greek and Latin. (That's largely why our compounds are so often built of Greek or Latin roots, whereäs those of Modern German are more likely to be constructed with Germanic roots.) It was evolution when compounding was abandoned, and evolution when it was reädopted. If compounding were good, then evolution were wrong to abandon it; if compounding were bad, then evolution were wrong to reëstablish it. And one cannot logically leap from the insight that evolution is both inevitable and neither necessarily good nor necessarily bad to the conclusion that any aspect of linguistic practice is a matter of indifference, that nothing of linguistic practice is good or bad. One should especially not attempt to apply such an inference peculiarly to views on practice that one dislikes.

Nihil ex Nihilo

Tuesday, 6 December 2016

In his foundational work on probability,[1] Bernard Osgood Koopman would write something of the form α/κ for a suggested observation α in the context of a presumption κ. That's not how I proceed, but I don't actively object to his having done so, and he had a reason for it. Though Koopman well understood that real-life rarely offered a basis for completely ordering such things by likelihood, let alone associating them with quantities, he was concerned to explore the cases in which quantification were possible, and he wanted his readers to see something rather like division there. Indeed, he would call the left-hand element α a numerator, and the right-hand element κ the denominator.

He would further use 0 to represent that which were impossible. This notation is usable, but I think that he got a bit lost because of it. In his presentation of axiomata, Koopman verbally imposes a tacit assumption that no denominator were 0. This attempt at assumption disturbs me, not because I think that a denominator could be 0, but because it doesn't bear assuming. And, as Koopman believed that probability theory were essentially a generalization of logic (as do I), I think that he should have seen that the proposition didn't bear assuming. Since Koopman was a logicist, the only thing that he should associate with a denominator of 0 would be a system of assumptions that entailed a self-contradiction; anything else is more plausible than that.

In formal logic, it is normally accepted that anything can follow if one allows a self-contradiction into a system, so that any conclusion as such is uninteresting. If faced by something such as X ∨ (Y ∧ ¬Y) (ie X or both Y and not-Y), one throws away the (Y ∧ ¬Y), leaving just the X; if faced with a conclusion Y ∧ ¬Y then one throws away whatever forced that awful thing upon one.[2] Thus, the formalist approach wouldn't so much forbid a denominator of 0 as declare everything that followed from it to be uninteresting, of no worth. A formal expression that no contradiction is entailed by the presumption κ would have the form

¬(κ ⇒ [(Y ∧ ¬Y)∃Y])

but this just dissolves uselessly:

¬(¬κ ∨ [(Y ∧ ¬Y)∃Y])
¬¬κ ∧ ¬[(Y ∧ ¬Y)∃Y]
κ ∧ [¬(Y ∧ ¬Y)∀Y]
κ ∧ [(¬Y ∨ ¬¬Y)∀Y]
κ ∧ [(¬Y ∨ Y)∀Y]
κ

(because (X ⇔ [X ∧ (Y ∨ ¬Y)∀Y])∀X).
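The point about discarding the contradictory disjunct can itself be put formally; here is a minimal sketch in Lean 4 (my illustration, nothing of Koopman's), showing that X follows from X ∨ (Y ∧ ¬Y):

-- From X ∨ (Y ∧ ¬Y), keep X; the second disjunct refutes itself.
example (X Y : Prop) (h : X ∨ (Y ∧ ¬Y)) : X :=
  h.elim id (fun hy => absurd hy.1 hy.2)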

In classical logic, the principle of non-contradiction is seen as the bedrock principle, not an assumption (tacit or otherwise), because no alternative can actually be assumed instead.[3] From that perspective, one should call the absence of 0-valued denominators simply a principle.


[1] Koopman, Bernard Osgood; The Axioms and Algebra of Intuitive Probability, The Annals of Mathematics, Series 2 Vol 41 #2, pp 269-292; and The Bases of Probability, Bulletin of the American Mathematical Society, Vol 46 #10, pp 763-774.

[2] Indeed, that principle of rejection is the basis of proof by contradiction, which method baffles so many people!

[3] Aristoteles, The Metaphysics, Bk 4, Ch 3, 1005b15-22.

Delusions of Scientific Literacy

Saturday, 19 November 2016

Science is reasoned analysis of — and theorizing about — empirical data. A scientific conclusion cannot be recognized as such unless one understands the science.

It might be imagined that one can recognize a conclusion as scientific without understanding the science, by recognizing the scientists as such. But the popular formula that science is what scientists do is vacuous when taken literally, and wrong in its usual interpretation. Someone can have an institutional certification as having been trained to be a scientist, and have a paid position ostensibly as a scientist, and yet not be a scientist; for those who actually understand some scientific area, it is fairly easy to find historical examples or perhaps present cases.[1] To recognize a scientist as such one must recognize what he or she does as science, not the other way around.

Even if it is in some contexts reasonable to accept conclusions from such persons on the basis of their social standing, it is not scientific literacy to accept conclusions on that basis; it is simply trust in the social order.

The full understanding of a scientific expert isn't always necessary for a scientific understanding of the reasoning behind some of the broad conclusions of a scientific discipline. But in some cases of present controversy with significant policy implications, the dispute over the relevant conclusions turns upon issues of applied mathematics, and perhaps other things such as thermodynamics. No one can be scientifically literate in the areas of controversy without understanding that mathematics and so forth.

In many of the disputations amongst lay-persons over these issues, I observe people in at least one group who assert themselves to be scientifically literate, when they are no such thing, and to accept science, when they are not positioned to know whether what they are accepting is science. These are actually people who simply trust some part of the social order — typically, those state-funded institutions that declare themselves to engage in scientific research.


[1] It is certainly easy to find what lay-persons will acknowledge as examples. However, some of these ostensible examples are actually spurious.

Location and Identity; of Angels and Pins and Important Things

Saturday, 27 August 2016

Most or all of us have heard or read of the question of how many angels can dance on the head of a pin. This question was introduced to satirize and to dismiss a problem that challenged scholastic philosophers, namely Given that there are things that do not have body but do have location, can two or more of these things occupy the same location at the same time? Now, we might labor the idea of body; but suffice it to say that it were presumed that things with body could not simultaneously occupy the exact same location, but that there were things of another class that could occupy the exact same location as could a thing with body. The question then was whether they could occupy the same location as other things of their own class.

One of the ways in which this puzzle had bearing was on attempts to understand the nature of devils (fallen angels) and how they might interact with ordinary people. The question thus actually had bearing on the witchcraft mania.

But another way in which this question had bearing was in consideration of how we distinguish one thing from another, perceptually and conceptually. In theory, two things might be identical except for location, but the distinct locations perhaps permit us to discern that there are two things, rather than one. And, if the location of an object at any one time is unique to that object, one can combine a description that might fit many other things with that location to identify a singular object, and thus to have an intrinsically singular corresponding concept, as opposed to a concept that might fit more than one thing.

That perhaps seems perfectly sound, but I don't understand space as other than a structure of relationships. For example, when a physicist asserts that objects of mass warp space increasingly with that mass, I take this to be no more or less than a claim that objects of larger or smaller mass have different spatial relationships with things, and cause other things to have different spatial relationships each with others. (The latter implies that a spatial relationship involving the object of mass underlies the spatial relationships amongst the things other than that object.) When we attempt to distinguish objects based upon location, which is a matter of relationships amongst objects, remarkable considerations arise.

First imagine a very small universe, having just one body in it. A universe is not small by virtue of one hitting a wall after travelling some distance; it is small by virtue of having a non-Euclidean geometry, such that after a relatively short amount of travel one finds oneself back where one started. As light travelled from the object, it would eventually find its way back to the object. If one could somehow see, as if within this universe, and looked in various directions, with sufficiently strong vision, one might see the object, seemingly off in the distance, even if the view began as if one were standing right at or on the object. Seemingly beyond the object, one might see the object yet again, and so forth. That's the experience in one sort of universe. Now imagine an infinitely large universe, as if built by tiling duplicate sectors, in which there were infinitely many objects, positioned to give the same experience as in the first universe.

I declared that a universe had just one object; I declared that a universe had infinitely many objects. I don't actually believe that the apparently second universe is intrinsically distinct from the first. I think that we may conceptualize the first universe as the second, and vice versa, and that the count of objects is an artefact of our conceptualization. Of course, if there were no more than indiscernible differences amongst what seemed to be infinitely many objects, then I might claim that there were no practical differences from a universe of one object; but I here make the stronger claim that, if there are no differences beyond perhaps whatever is captured by the two given descriptions of location, then these descriptions are each of the same universe. It would simply beg the question to insist that one universe is different from the other in that one were finite but configured so that it seemed infinitely repeating, while the other truly were infinite and repeating. Granted that viewing as if within the universe seems to locate a means of viewing close to one object. (One might even imagine oneself invisibly located as an additional object in the universe.) But how is the location of that means near one object distinct from its location near all of them? (How is oneself's being located in the universe near one object distinct from the location of a perfect duplicate of oneself being near each of them?)

In a universe more like our own, if we had what seemed to be two otherwise indistinguishable objects at different locations, there would be other discernible objects that seemed to support a distinction. What we might regard as one object would be near to various other objects, and far from still others. Likewise for what we might regard as a different object, with a distinct set of things.

Let's mentally step away from that scenario for a bit, and return to the scholastic problem of things that do not have body but do have location. If two of these things are otherwise indistinguishable, and if they can occupy the same location at the same time, then ex hypothesi there is no way to distinguish one from another when they do occupy the same location. (The space occupied might be different when both moved into it — for example, it might become less translucent — but that doesn't mean that we can distinguish one of two things from another. And if the properties of these things are not in some sense additive or subtractive, but combine according to inclusive disjunction — that is to say that the attribute is either there or not, but has no further possible ordering to it — then we cannot tell just how many of these things are there by discernment of these properties.)

But, when they occupy the same location, I ask whether there are in fact two things. What would be the difference of two such things coming to occupy one location from two otherwise identical things coming to be one thing at one location? Perhaps what seemed one thing might again become two, but that wouldn't prove that they had remained distinct at the one location. Perhaps one thing might have become three, each just like the two from which the one had been formed. My experience (and, as I believe, yours) is that this has never happened. But I know of no logic that prevents it from happening; it merely violates my present best guess of the physical laws (which entail principles of conservation). If what seemed to be two things came together and seemed to be one, and then that one thing seemed to become three, some person might guess that what had earlier seemed to be two things were actually three things, two already in combination. But if all the attributes combined in conformance with inclusive disjunction, then in what sense would that be different from just what had seemed to happen?

If we accept that two otherwise indistinguishable things become one thing when they occupy one location, does that thing continue to exist should it become two things? Is it one of the two things? both of the two things? each of the two things? Is the proper answer different when the two things are indistinguishable except for location from what was one thing?

(If we are transporting some very great criminal by paddy wagon and, upon arrival, find three persons, each indistinguishable from the person whom we tossed into the wagon, and each insisting that he is just that person, do we treat each of them as that person, or charge each with no more than abetting an escape, on the theory that it is most likely of any given one of them that he is not that person? This problem might be primarily epistemological — so that one of the three suspects is our original perpetrator even if we shall never know whom — but that's bad enough; and we can make it still more fundamental if we allow for teleportation and for matter duplication.)

Let's mentally step back to the scenario of two otherwise indistinguishable objects at different locations in a universe rather like our own. Are these actually two objects, or one object that is bi-located, or one object in one location that appears at two locations because of a strangeness of space? Do these three descriptions actually distinguish different realities? If we mark one of these apparently two objects and see an identical mark appear on the other, do we regard it as the same object, or as two objects such that one is in some sort of sympathy with the other? If we mark what seems one object and a mark does not appear on the other, do we regard this as proof that there were never one bilocated object nor a weirdness of space, or do we interpret this case as of one object becoming two distinct objects (with an end to the bilocation or with an adjustment of space), perhaps exactly as a result of our action? (The classic formulation of Ockham's Razor is entia non sunt multiplicanda præter necessitatem. If we posit that there were always two objects, are we conforming to that prescription?)

Because space need not be as Euclid had insightfully assumed and as Platon and Kant had thoughtlessly presumed, we can interpret any case where we have what otherwise must be a single thing simultaneously occupying multiple locations as in fact that single thing in one location. The practical cost, however, is that we are compelled to identify locations by the things occupying them (and not merely by the things about them); but we had set-out to identify things by the locations that they occupied!

Let's say that we have an object in a strange space, so that it is effectively bilocated, and we point to it and say this. Assuming that we don't also say not that, pointing to the other apparent location, is there any problem? It is one thing to incorporate mistaking of one thing for two into an assertion; another simply not to recognize some of the characteristics of that one thing. (There is a problem if Selina Kyle cries I am in love with Bruce Wayne, not with the Batman! but there would have been no such problem had she simply declared I am in love with Bruce Wayne!)

But if the case of two otherwise identical but differently located objects (perhaps each in perfect sympathy with the other) and the case of one apparently bilocated object are really just different descriptions of the same situation, then the applicability of not that — and number more generally — seems in some cases to be an artefact of the descriptive framework. Especially in the context of such implications, some people will insist that one of these descriptions must surely be mistaken, even if as a practical matter we cannot tell which. (Some people will further insist that the description that conforms to simpler spatial relations (that of two objects in perfect sympathy) is the one that is more likely correct; other people will insist that the description that requires fewer objects (that of a bilocated object) is more likely correct.) However, the apparent contradiction isn't internal to either description, and each description may be translated into the other. That one of them is right doesn't make the other wrong.

If I cannot point to something and say this and thereby distinguish it not merely from everything not there but from everything not it, then how can I have an intrinsically singular concept? To baldly incorporate singularity into a concept is just question-begging. (One ought not to say that two spheres are exactly alike except just in-so-far as one is unique, or is uniquely unique.)

Where, then, is singularity to be found? I think that it is to be found in experience, literally. The raw stuff of experience is sensation and sense-perception, not conception. (We may have concepts of sensations, but sensations are not themselves concepts; we may have concepts of sense-perceptions, but sense-perceptions are not themselves concepts.) Percepts and concepts are constructed to explain sensation and sense-perception. Those percepts and concepts may be perfectly accurate, but they are not intrinsically singular except to the extent that we associate them with sensation or with sense-perception. That is to say, for example, that we have a cluster of sensation or of sense-perception, and we have or build a concept of something by which to explain it, which concept is not singular except in-so-far as we implicitly or explicitly add to it the attribute of causing that particular cluster. And, if we do that, then we must in such case commit to a concept that does not allow co-location of otherwise identical things. That is not to say that we forbid co-location in general; but that singular concepts cannot be fitted to such co-located things. (It is probably a very bad idea to construct an explanatory model that employs co-location of otherwise identical things all of whose attributes combine in accordance with inclusive disjunction.)

In any case, the alternatives to exploring such considerations are dogmatism and nihilism. There is nothing intrinsically practical about dogmatism nor about nihilism, which stand in the way of our understanding the universe as deeply as we might and of our helping those who lose (or never find) their ways in their own attempts to understand the world. The scholastics who worried about the relationship of location to identity during what have come to be dismissed as the Dark Ages were concerned with foundational questions of what we ought to practice. It is fine to jest about their efforts only if the joke does not hide the truth.