Archive for the ‘metaphysics’ Category

Against an Argument for Science as Intrinsically Social

Saturday, 19 January 2019

I have argued that persons outside of any social context can be scientists. Recently, I watched and listened to a recording of an interview of one philosopher by another, in which the two agreed that science is intrinsically social, that persons outside of social contexts cannot be scientists.[1]

Towards explaining what was wrong with their argument, I'll first present that argument. One of the most important things that a scientist ought to do is to look for areas of potential vulnerability in theories, and to test those theories against what evidence may practicably be gathered. And any one researcher is imperfect in his or her ability to find such potential vulnerabilities, in knowledge of existing evidence, and in capacity to collect new evidence. It is often particularly difficult for any one researcher to recognize the unconscious presumptions that inform his or her own theories; exposing the work of one researcher to the scrutiny of other researchers may mean that those presumptions are recognized and challenged.

All right; but, just as any one researcher is imperfect, so are jointly any two researchers, or any three researchers, or any n researchers, for all finite values of n. In fact, I am nearly certain that even an infinite number of scientists would be insufficient to overcome weaknesses across the whole body of theories that these scientists could construct; but, in any case, science is not an unattainable limiting case of behavior. One might instead pick a finite n, and insist that one does not have science until one has n participants engaging in behavior of some sort, but the choice of n would seem to be quite arbitrary; and I'd like to know what one should then call the behavior when there are fewer participants.

As a practical matter, it is far from clear that two people each in isolation engaged in that behavior would continue to engage in that behavior when brought together. Social contexts can promote peculiar forms of irrationality. Historically, a great deal of what has been widely taken to be science by participants and by most observers in wider society has been grossly unscientific behavior resulting exactly from social pressures. A great deal of what passes for science these days is socially required to conform to consensus, which is to say that social mechanisms protect widely shared presumptions from scrutiny.


[1] As it happens, both one of those philosophers and I referred to Robinson Crusoe as an individual outside of a social context. It was natural for each of us, independently of the other, to reach for the most famous example within our shared cultural context, but it heightened my sense of annoyance.

Revision

Thursday, 31 May 2018

On 17 May, I received communication from one of the editors of the journal to which, on 20 February, I had sent my paper on qualitative probability. He apologized for the delay, explaining that it were caused by a set of individually small mistakes. He said that, weeks earlier, the editors had reached a decision to request that I revise and resubmit the paper before it were sent to reviewers. They recognized that the set of axiomata had philosophical significance, but felt that the abstract would not attract their readers and that there were not enough philosophical discussion in the exposition of the paper.

I wasn't sure whether I could rewrite the paper sufficiently to get their acceptance without unbearably compromising the mission of the paper. I spent the better part of two days pondering the matter, then saw a plan of revision that I would be willing to effect and that they might find satisfactory.

The major share of the revision was to the introductory section. I pulled content from elsewhere in the paper and put it in that section, so that readers would know more of whither the paper would go. I added material that I think to be over-explanation, but from the reading of which some readers would probably benefit. Additionally, I made what were plainly major improvements to the paragraph on intervals as such. I made various other changes through-out the article.

I do not know that the editors will find these changes sufficient. I think that a major issue is that I see discussion of the formal structure of reason as philosophy, whereäs plainly some academic philosophers do not. In a revision cover-letter, I noted that the axiomata were explicitly justified in the paper as conforming to principles that hold in formal systems across all major interpretations of probability, with the exception of one principle whose justification were labored, and that were I to explain how each interpretation would justify each principle used as an axiom, then the work would mushroom to the size of a book, and its principal contributions would be swamped.

I resubmitted the article. It was quickly returned with a request that it not be submitted in PDF but in LAΤΕΧ mark-up or as a Microsoft Word .DOC. (That demand was probably an artefact of how all revisions are handled, rather than indicating that the revision were considered to be sufficient for the article to be sent to reviewers.) I had composed and entered the article using LyX, a WYSIWYM editor that uses LAΤΕΧ programs for final rendering (and converting the document to Word format would be a dreadful process because of the formulæ). But I had to modify things so that the publisher's own programs could successfully process my files. I spent a considerable amount of time figuring-out what modifications to make. At one point, I bobbled the process, but was rescued by the JEO assistant effecting a reset so that I could begin anew. I completed the resubmission at 03:50 on 30 May.

I am not sanguine about my revisions being considered sufficient. I have one more philosophy journal in-mind, after which I must consider submitting to a journal of a different sort.

If rejection does not come swiftly, then within a very few days I will return to work on my next paper, which is to combine the logic of preference and the logic of plausibility, each allowing incomplete preörderings, into a general theory of decision making.

Policy Paralogism

Thursday, 22 February 2018

Confronted with a real or imagined social problem, most people first grab for an ostensible solution that appeals to their prejudices, and then for an argument (in favor of this policy) that seems plausible to them. That approach is not ideal, but might still result in good policy if people would poke at each such argument, to see whether it were actually logical, and move away from proposed solutions in cases in which none of the arguments withstood examination. Unfortunately, people don't generally test their arguments; words strung together in emotionally satisfying ways are embraced as if any reasonable person would accept them.

I came upon an epitomal example of this behavior, in the wake of a recent mass shooting at a school. Someone posted a graphic macro suggesting how guns might be treated analogously to motor vehicles and declared

Let’s go through this one more time…maybe they will get it. And yes, people will obtain guns illegally. And yes, people kill people. But doing nothing means more die.

(Underscore mine.) Now, there are various problems with the suggestion that guns should be treated analogously to motor vehicles, and perhaps someday I'll labor all that occur to me. But here I want to focus on that assertion doing nothing means more die. To the poster, it apparently seems that any reasonable person would accept that this assertion is an argument for the policy that he favors. Let's poke at this use, to see whether it is actually logical.

It is surely true that if we do nothing different, then people who have not yet died will die, and in this sense more will die. But, as a matter of logic, that doesn't mean that there is something that we can do such that people would not die, or even that fewer would die. If we somehow had an optimal social policy, and found that people died, we could still say that if we did nothing different then people would die. So, one question that we might ask is of whether a change in social policy would cause fewer or more people to die.

And I'm not simply talking about whether a change in social policy would cause fewer or more people to die at the hands of shooters who are not state officials, or even about the more general question of whether a change in policy would cause fewer or more people to die at the hands of shooters of all sorts, but about the question of whether the change in policy would result in fewer or in more deaths across all causes. For example, a policy change might lead to greater use of IEDs. (The deadliest mass murder at a school in American history was effected by a bomb.) The answer is not known a priori.

There is also the issue of other costs. For example, some jurisdictions have a lower rate per capita of homicide, but a higher rate of rape. One doesn't want to switch from one set of policies to another simply on the basis that if we do nothing then more will be raped, and likewise one doesn't want to switch from one set of policies to another simply on the basis that if we do nothing then more will die. I don't think that any utilitarian calculus is actually reasonable, but one that simply counts lives is plainly inhumane. And it would be childlike to think and childish to insist that, with some set of policies, the global minimum for each cost could be achieved simultaneously with that for every other, let alone that such a fantastic minimum could be found by first finding the local minimum for one cost and then seeking the local minimum for another.

The poster has presented an example from just one class of policies, and declared doing nothing means more die. Plainly, there are other possible policy responses, so that the relevant comparison is not simply between adopting the policy that he favors and maintaining the status quo.

Moreover, if his argument were adapted to the defense of other policies, he and others might be provoked to examine that argument more carefully. His words might be left essentially unchanged, but the macro replaced with one discussing a policy of a different sort. For example, someone might propose that each person above the age of 10 years be interned in a mental-health camp, until and unless experts appointed by the state certified that he or she was not a danger to society. I'd like to think that, if the original poster had earlier seen the very same words used in defense of an internment policy, then he would have immediately poked at the argument to find the illogic. I'm quite sure that most people who applauded or would have applauded his words in the context in which he did use them would have found their illogic in the context of an argument for rounding-up American youth and throwing them into camps. Well, they should have poked at the argument where they actually found it.

Again into the Breach

Monday, 15 January 2018

As occasionally noted in publicly accessible entries to this 'blog, I have been working on a paper on qualitative probability. A day or so before Christmas, I had a draft that I was willing to promote beyond a circle of friends.

I sent links to a few researchers, some of them quite prominent in the field. One of them responded very quickly in a way that I found very encouraging; and his remarks motivated me to make some improvements in the verbal exposition.

I hoped and still hope to receive responses from others, but as of to-day have not. I'd set to-day as my dead-line to begin the process of submitting the paper to academic journals, and therefore have done so.

The process of submission is emotionally difficult for many authors, and my past experiences have been especially bad, including having a journal fail to reach a decision for more than a year-and-a-half, so that I ultimately withdrew the paper from their consideration. I even abandoned one short paper because the psychological cost of trying to get it accepted in some journal was significantly impeding my development of other work. While there is some possibility that finding acceptance for this latest paper will be less painful, I am likely to be in for a very trying time.

It is to be hoped that, none-the-less, I will be able to make some progress on the next paper in the programme of which my paper on indecision and now this paper on probability are the first two installments. In the presumably forth-coming paper, I will integrate incomplete preferences with incompletely ordered probabilities to arrive at a theory of rational decision-making more generalized and more reälistic than that of expected-utility maximization. A fourth and fifth installment are to follow that.

But the probability paper may be the most important thing that I will ever have written.

Hume's Abstract of His Treatise

Thursday, 14 December 2017

In an attempt to promote his work A Treatise of Human Nature (1739), David Hume anonymously wrote and in 1740 had published a booklet, An Abstract of a Book Lately Published, Entituled, A Treatiſe of Human Nature, &c. It went nearly unnoticed and unrecognized until republished in 1938, with an introduction by John Maynard Keynes and Piero Sraffa. That edition was reprinted in 1965. The introduction may still be protected by copyright, as may be images of the reset text.

In any event, I did not find any editions of the booklet itself freely available on-line; so I have created one.

Well, actually, two editions. The first retains the use of long ess (‘ſ’) and the convention by which longer passages were quoted, which was a matter of prefixing a quotation mark to each line which continued a quotation from the previous line. The second replaces the long esses with now ordinary lower-case esses, and uses block quotation where now conventional, though the second version otherwise preserves the spelling and punctuation of the original.

The Abstract is about 6,500 words. The booklet was just thirty-two pages, one of which was a title page and one of which was blank. My transcriptions come each to less than nine pages of twelve-point type.

Addendum (2017:12/15): After I posted my transcriptions, a Google search on an Android tablet returned a link not previously returned by a Google search on my Linux box, to a transcription by Carl Mickelsen lacking the original preface contained in the booklet, and with the remaining text extensively edited to change spellings, punctuation, italicization, &c. I also found a wholesale paraphrasing of the Abstract by Jonathan Bennett, with changes far more extensive than the reader is led to believe.

Hyper-Vigilance and Feedback

Tuesday, 14 November 2017

Psychologists vary in precisely what they mean when using the term vigilant or hyper[-]vigilant to describe a personality type. What is common across notions and here relevant is an acute concern about — and sensitivity to — behavior by others that may carry information about intention, about propensity, or about capacity. Hyper-vigilance typically arises as an attempted adaptation in response to seriously hurtful experience; it is in any case a self-defense behavior more focussed on identifying hostile or otherwise threatening intentions, propensities, or capacities, and should be expected to be associated with other defensive behaviors and more generally with personality attributes that arise from injury.

Hyper-vigilance itself is not the same thing as paranoia. When there is an element of irrationality to hyper-vigilance as such, it is in an over-commitment of resources to the tasks of awareness or of interpretation. The hyper-vigilant may otherwise be for the most part rational in their interpretations of behavior. (And one cannot reasonably infer that there is an over-commitment of resources simply from the fact that a hyper-vigilant person is seeking greater awareness.) A paranoid systematically makes important inferences that are themselves unreasonable.

The skills of the hyper-vigilant (and, for that matter, the unreasonable inferential practices of the paranoid) aren't always employed for purposes of self-defense. People may be identified as well-intentioned or peculiarly talented, and cultivated as friends; people may be perceived as having concealed vulnerabilities, and quietly given protection.

When two people interact — whether either is hyper-vigilant or not and so long as they are at all social — they consciously or unconsciously each size-up the other. The behavior of each usually adjusts to anything learned in the present encounter, and that adjustment of behavior may then communicate something new to the other, causing a counter-adjustment on his or her part. When two people have complementary emotional responses each to the other, a feedback loop is creäted, and the responses amplify to some extent. These feedback loops can cause people to take relatively quick and markèd likings or dislikings each to the other.

When hyper-vigilant people interact, complementarity has a still more pronounced effect. They can move to attack or become friends or come to love or indeed fall in love with a speed that startles everyone — including the two people in the feedback loop if they've never considered the dynamic or if they haven't each discerned that the other is not merely ill- or good-willed but also hyper-vigilant. Because hyper-vigilance is a behavior of self-defense, it is likely to be accompanied by a suppression or masking of behaviors that would otherwise expose emotions or reveal defensive abilities or propensities, and hyper-vigilance itself would be one of those behaviors; additionally, a hyper-vigilant person may conceal vigilance to avoid censure (especially as hyper-vigilance is widely equated with paranoia). Thus one or both of two hyper-vigilant people may miss this important insight in the implicit challenge of reading the other, especially when vigilance is operating largely at an unconscious level.

Helmholtz's Zählen und Messen

Monday, 16 October 2017

When I first encountered mention of Zählen und Messen, erkenntnisstheoretisch betrachtet [Numbering and Measuring, Epistemologically Considered] by Hermann [Ludwig Ferdinand] von Helmholtz, which sought to construct arithmetic on an empiricist foundation, I was interested. But for a very long while I did not act on that interest.

A few years ago, I learned of Zahl und Mass in der Ökonomik: Eine kritische Untersuchung der mathematischen Methode und der mathematischen Preistheorie (1893), by Andreas Heinrich Voigt, an early work on the mathematics of utility, and that it drew upon Helmholtz's Zählen und Messen, which impelled me to seek a copy of the latter to read. To my annoyance, I found that there was no English-language version of it freely available on-line. I decided to create one, but was distracted from the project by other matters. A few days ago, I recognized that my immediate circumstances were such that it might be a good time to return to the task.

I have produced a translation, Numbering and Measuring, Epistemologically Considered by Hermann von Helmholtz. It is not much better than serviceable. I don't plan to return to the work, to refine the translation, except perhaps where some reader has suggested a clear improvement and I effect a transcription.

I have not inserted what criticisms I might make of this work into the document. Nor have I presented my thoughts on how Helmholtz's ostensible empiricism and Frege's logicism are not as far apart as might be thought.

Vocal Cues

Monday, 26 June 2017

Many animals, across different classes, have two distinct sounds that may be classified as growls or as whines, respectively. The growls signal threat; the whines signal friendship or appeasement.

The bark of a dog is actually a combination of a growl with a whine; it is thus not a pure signal of aggression, as many take it to be; it is literally a mixed signal, perhaps indicating confusion on the part of the dog, perhaps signalling both that the dog is prepared to fight and that the dog would consider a peaceful interaction.

When women talk with men whom they find attractive, women tend to raise the pitches of their voices. Men tend to do something different when talking with women whom they find attractive; they mix deeper tones than they would normally use with higher tones than they would normally use. The deep tones are signals of masculinity, of being able to do what men are expected to do. The higher tones of men carry much the same significance as do the higher tones of women — with the additional point in contrast to the deep tones that the man does not mean to threaten the woman.

It amused me to reälize consciously that this behavior by men is at least something like barking. Then I grimly considered that some men are actually barking, each telling the woman that he can be nice to her if she is nice to him, but will actively make things unpleasant if she is not. But at least it should typically be possible to disambiguate the threatening behavior, based upon where the low notes are used, and of course the choice of words.

Theories of Probability — Perfectly Fair and Perfectly Awful

Tuesday, 11 April 2017

I've not heard nor read anyone remarking about a particular contrast between the classical approach to probability theory and the Bayesian subjectivist approach. The classical approach began with a presumption that the formal mathematical principles of probability could be discovered by considering situations that were impossibly good; the Bayesian subjectivist approach was founded on a presumption that those principles could be discovered by considering situations that were implausibly bad.


The classical development of probability theory began in 1654, when Fermat and Pascal took-up a problem of gambling on dice. At that time, the word probability and its cognates from the Latin probabilitas meant plausibility.

Fermat and Pascal developed a theory of the relative plausibility of various sequences of dice-throws. They worked from significant presumptions, including that the dice had a perfect symmetry (except in-so-far as one side could be distinguished from another), so that, with any given throw, it were no more plausible that one face should be upper-most than that any other face should be upper-most. A model of this sort could be reworked for various other devices. Coins, wheels, and cards could be imagined as perfectly symmetrical. More generally, very similar outcomes could be imagined as each no more probable than any other. If one presumes that to be no more probable is to be equally probable, then a natural quantification arises.
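
For what it's worth, here is a minimal sketch of that quantification in Python; the two-dice example is my own choice of illustration, not drawn from Fermat's or Pascal's correspondence. The probability of an event is simply the count of favorable cases divided by the count of all the equally probable cases.

    from itertools import product
    from fractions import Fraction

    def classical_probability(outcomes, event):
        """Probability of an event as (favorable cases)/(all cases),
        presuming every listed outcome to be equally probable."""
        favorable = sum(1 for o in outcomes if event(o))
        return Fraction(favorable, len(outcomes))

    # All 36 equally probable results of throwing two ideally symmetric dice.
    two_dice = list(product(range(1, 7), repeat=2))

    # Plausibility that the faces sum to seven: 6/36 = 1/6.
    print(classical_probability(two_dice, lambda pair: sum(pair) == 7))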

Now, the preceptors did understand that most or all of the things that they were treating as perfectly symmetrical were no such thing. Even the most sincere efforts wouldn't produce a perfectly balanced die, coin, or roulette wheel, and so forth. But these theorists were very sure that consideration of these idealized cases had revealed the proper mathematics for use across all cases. Some were so sure of that mathematics that they inferred that it must be possible to describe the world in terms of cases that were somehow equally likely, without prior investigation positively revealing them as such. (The problem for this theory was that different descriptions divide the world into different cases; it would take some sort of investigation to reveal which of these descriptions, if any, results in division into cases of equal likelihood. Indeed, even with the notion of perfectly balanced dice, one is implicitly calling upon experience to understand what it means for a die to be more or less balanced; likewise for other devices.)
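
To illustrate the problem of rival descriptions with a toy example of my own (not one drawn from the classical literature): the same throw of two dice may be described by the ordered pair of faces or merely by the sum of the faces, and treating the enumerated cases of each description as equally likely yields different numbers for the same event.

    from itertools import product
    from fractions import Fraction

    # Description 1: the 36 ordered pairs of faces, taken as equally likely.
    pairs = list(product(range(1, 7), repeat=2))
    p_boxcars_1 = Fraction(sum(1 for d in pairs if sum(d) == 12), len(pairs))

    # Description 2: the eleven possible sums 2..12, taken as equally likely.
    sums = list(range(2, 13))
    p_boxcars_2 = Fraction(sum(1 for s in sums if s == 12), len(sums))

    print(p_boxcars_1, p_boxcars_2)  # 1/36 versus 1/11; the division into cases does the work

Only something like an appeal to the observed behavior of actual dice tells us that the first description, and not the second, is the one whose cases are equally likely.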


As subjectivists have it, to say that one thing is more probable than another is to say that that first thing is more believed than is the other. (GLS Shackle proposed that the probability of something might be measured by how surprised one would be if that something were discovered not to be true.)

But most subjectivists insist that there are rationality constraints that must be followed in forming these beliefs, so that for example if X is more probable than Y and Y more probable than Z, then X must be more probable than Z. And the Bayesian subjectivists make a particular demand for what they call coherence. These subjectivists imagine that one assigns quantifications of belief to outcomes; the quantifications are coherent if they could be used as gambling ratios without an opponent finding some combination of gambles with those ratios that would guarantee that one suffered a net loss. Such a combination is known as a Dutch book.
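
As a minimal sketch of the idea, with betting quotients invented for the purpose: suppose that someone's degrees of belief in an event and in its negation were each 0.6, and that he would pay up to his quotient times the stake for a bet at that quotient. An opponent could then sell him bets on both the event and on its negation and be guaranteed a net gain.

    def net_payoff(quotient, stake, occurs):
        """Net payoff to a bettor who pays quotient*stake for a bet
        that returns stake if the event occurs and nothing otherwise."""
        return (stake if occurs else 0) - quotient * stake

    # Incoherent quotients: belief in A and belief in not-A sum to more than 1.
    q_A, q_not_A = 0.6, 0.6
    stake = 10

    for A_occurs in (True, False):
        total = net_payoff(q_A, stake, A_occurs) + net_payoff(q_not_A, stake, not A_occurs)
        print(A_occurs, total)  # the bettor loses 2 whether or not A occurs: a Dutch book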

But, while quantifications can in theory be chosen that insulate one against the possibility of a Dutch book, it would only be under extraordinary circumstances that one could not avoid a Dutch book by some other means, such as simply rejecting complex contracts to gamble, and instead deciding on gambles one-at-a-time, without losing sight of the gambles to which one had already agreed. In the absence of complex contracts or something like them, it is not clear that one would need a preëstablished set of quantifications or even could justify committing to such a set. (It is also not clear why, if one's beliefs correspond to measures, one may not use different measures for gambling ratios.) Indeed, it is only under rather unusual circumstances that one is confronted by opponents who would attempt to get one to agree to a Dutch book. (I don't believe that anyone has ever tried to present me with such a combination, except hypothetically.) None-the-less, these theorists have been very sure that consideration of antagonistic cases of this class has revealed the proper mathematics for use across all cases.


The impossible goodness imagined by the classical theorists was of a different aspect than is the implausible badness of the Bayesian subjectivists. A fair coin is not a friendly coin. Still, one framework is that of the Ivory Tower, and the other is that of Murphy's Law.

Generalizing the Principle of Additivity

Friday, 17 February 2017

One of the principles often suggested as an axiom of probability is that of additivity. The additivity here is a generalization of arithmetic additivity — which generalization, with other assumptions, will imply the arithmetic case.

The classic formulation of this principle came from Bruno de Finetti. De Finetti was a subjectivist. A typical subjectivist is amongst those who prefer to think in terms of the probability of events, rather than in terms of the probability of propositions. And subjectivists like to found their theory of probability in terms of unconditional probabilities. Using somewhat different notation from that here, the classic formulation of the principle of additivity is

    (X ∩ Z = ∅) ∧ (Y ∩ Z = ∅) ⇒ [(X ≽ Y) ⇔ (X ∪ Z ≽ Y ∪ Z)]

in which X, Y, and Z are sets of events. The '≽' here stands for what I elsewhere write with an underscored arrowhead, my notation for weak supraprobability, the union of strict supraprobability with equiprobability.
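
For what it's worth, here is a small sketch of the principle in the familiar arithmetic case, using a toy additive measure of my own construction; with Z disjoint from X and (in this quantitative setting) from Y, comparing X with Y agrees with comparing X ∪ Z with Y ∪ Z.

    from fractions import Fraction

    # A toy finite space of events, with an additive measure of my own choosing.
    P_atom = {'a': Fraction(1, 2), 'b': Fraction(1, 4),
              'c': Fraction(1, 8), 'd': Fraction(1, 8)}

    def P(event):
        return sum(P_atom[w] for w in event)

    X, Y, Z = {'a'}, {'b', 'c'}, {'d'}  # Z is disjoint from X and from Y

    assert (P(X) >= P(Y)) == (P(X | Z) >= P(Y | Z))
    print(P(X), P(Y), P(X | Z), P(Y | Z))  # 1/2, 3/8, 5/8, 1/2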

One of the things that I noticed when considering this proposition is that the condition that Y ∩ Z be empty is superfluous. I tried to get a note published on that issue, but journals were not receptive. I had bigger fish to fry than that one, so I threw-up my hands and moved onward.

When it comes to probability, I'm a logicist. I see probability as primarily about relations amongst propositions (though every event corresponds to a proposition that the event happen and every proposition corresponds to the event that the proposition is true), and I see each thing about which we state a probability as a compound proposition of the form X given c in which X and c are themselves propositions (though if c is a tautology, then the proposition operationalizes as unconditional).

I've long pondered what would be a proper generalized restatement of the principle of additivity. If you've looked at the set of axiomata on which I've been working, then you've seen one or more of my efforts. Last night, I clearly saw what I think to be the proper statement: To get de Finetti's principle from it, set c₂ = c₁ and make it a tautology, and set X₂ = Z = Y₂. Note that the condition of (X₂ | c₁) being weakly supraprobable to (Y₂ | c₂) is automatically met when the two are the same thing.

By itself, this generalization implies my previous generalization and part of another principle that I was treating as an axiom; the remainder of that other principle can be got by applying basic properties of equiprobability and the principle that strict supraprobability and equiprobability are mutually exclusive to this generalization. The principle that is thus demoted was awkward; the axiom that was recast was acceptable as it was, but the new version is elegant.