Posts Tagged ‘science’

Long COVID as a Description and as a Name

Friday, 15 March 2024

In the case of what has been called long COVID, two opposing camps are lost in a confusion of name with description.

The idea that SarsCoV-2 would have peculiar long-term effects upon health was immediately popular in some circles for appalling reasons, and thus viewed in other circles with strong inclination to disbelief.

Eventually, a cluster of persistent symptoms came to be widely associated with SarsCoV-2. Some of these symptoms are clearly present in some people, and not psychosomatic. But a very reasonable question is that of whether these symptoms are actually caused by SarsCoV-2, or have some other cause or causes. For some months now, the evidence has strongly indicated that these symptoms are, variously, either not effects of SarsCoV-2 at all, or common to respiratory or viral illness more generally. As a description, long COVID has been falsified, but it has lingered as a name.

I continue to encounter recent articles in prestigious, allegedly scientific journals that simply treat as given that these symptoms are caused by SarsCoV-2. An established name is treated as if it were a description. Now some institutions are beginning to insist, reasonably, that the name long COVID be abandoned as inapt. But I'm encountering journalists and pundits who thence infer and claim that long COVID does not exist.

That inference doesn't follow if by long COVID is meant a cluster of symptoms, which symptoms are exactly what have been investigated under the name. Only if long COVID is taken to be defined as these symptoms resulting from SarsCoV-2 could we say that nothing fits the concept corresponding to the name.

I doubt that any Briton defined the French disease as especially French. In any case, telling a typical Briton that what he called the French disease did not exist would be tantamount to telling him that syphilis did not exist. What he should instead have been told was that syphilis were not particularly French, and ought to be called something else.

Likewise, the declarations should not be that long COVID does not exist.

Against an Argument for Science as Intrinsically Social

Saturday, 19 January 2019

I have argued that persons outside of any social context can be scientists. Recently, I watched and listened to a recording of an interview of one philosopher by another, in which the two agreed that science is intrinsically social, that persons outside of social contexts cannot be scientists.[1]

Towards explaining what was wrong with their argument, I'll first explain their argument. One of the most important things that a scientist ought to do is to look for areas of potential vulnerability in theories, and to test those theories against what evidence may practicably be gathered. And any one researcher is imperfect in his or her ability to find such potential vulnerabilities, in knowledge of existing evidence, and in capacity to collect new evidence. It is often particularly difficult for any one researcher to recognize the unconscious presumptions that inform his or her own theories; exposing the work of one researcher to the scrutiny of other researchers may mean that those presumptions are recognized and challenged.

All right; but, just as any one researcher is imperfect, so are jointly any two researchers, or any three researchers, or any n researchers, for all finite values of n. In fact, I am nearly certain that even an infinite number of scientists would be insufficient to overcome weaknesses across the whole body of theories that these scientists could construct; but, in any case, science is not an unattainable limiting case of behavior. One might instead pick a finite n, and insist that one does not have science until one has n participants engaging in behavior of some sort, but the choice of n would seem to be quite arbitrary; and I'd like to know what one should then call the behavior when there are fewer participants.

As a practical matter, it is far from clear that two people each in isolation engaged in that behavior would continue to engage in that behavior when brought together. Social contexts can promote peculiar forms of irrationality. Historically, a great deal of what has been widely taken to be science by participants and by most observers in wider society has often been grossly unscientific behavior resulting exactly from social pressures. A great deal of what passes for science these days is socially required to conform to consensus, which is to say that social mechanisms protect widely shared presumptions from scrutiny.


[1] As it happens, both one of those philosophers and I referred to Robinson Crusoe as an individual outside of a social context. It was natural for us each, independently of the other, to reach for the most famous example within our shared cultural context, but it heightened my sense of annoyance.

Delusions of Scientific Literacy

Saturday, 19 November 2016

Science is reasoned analysis of — and theorizing about — empirical data. A scientific conclusion cannot be recognized as such unless one understands the science.

It might be imagined that one can recognize a conclusion as scientific without understanding the science, by recognizing the scientists as such. But the popular formula that science is what scientists do is vacuous when taken literally, and wrong in its usual interpretation. Someone can have an institutional certification as having been trained to be a scientist, and have a paid position ostensibly as a scientist, and yet not be a scientist; for those who actually understand some scientific area, it is fairly easy to find historical examples or perhaps present cases.[1] To recognize a scientist as such one must recognize what he or she does as science, not the other way around.

Even if it is in some contexts reasonable to accept conclusions from such persons on the basis of their social standing, it is not scientific literacy to accept conclusions on that basis; it is simply trust in the social order.

The full understanding of a scientific expert isn't always necessary to have a scientific understanding of the reasoning behind some of the broad conclusions of a scientific discipline. But in some cases of present controversy with significant policy implications, the dispute over the relevant conclusions turns upon issues of applied mathematics, and perhaps other things such as thermodynamics. No one can be scientifically literate in the areas of controversy without understanding that mathematics and so forth.

In many of the disputations amongst lay-persons over these issues, I observe people in at least one group who assert themselves to be scientifically literate, when they are no such thing, and to accept science, when they are not positioned to know whether what they are accepting is science. These are actually people who simply trust some part of the social order — typically, those state-funded institutions that declare themselves to engage in scientific research.


[1] It is certainly easy to find what lay-persons will acknowledge as examples. However, some of these ostensible examples are actually spurious.

Consciousness and Science

Tuesday, 9 June 2015

The January-February 2012 issue of American Scientist contains an abridged reprinting of an article by BF Skinner, followed by a newer piece, frequently polemical, by a behaviorist, Stephen F. Ledoux.[0] In his polemic, Ledoux contrasts what he insists to be the scientific approach of behaviorology[1] with the ostensibly untestable and mystical approach of reference to an inner agent.

There's a problem here, but it's not unique to behaviorists. A large share of those who would study human nature scientifically do not know what science is.

Although courts and journalists and sociologists have declared that science is what scientists do, this formula is either a perverse begging of the question or simply wrong. The nature of science is not definitionally what is done by those recognized as scientists by academia nor by some narrower or wider society. Science does not start with academic degrees nor with peer review nor with the awarding of grants.

Science is reasoned analysis of — and theorizing about — empirical data.

Some want to use science more narrowly. It's in no way essential to the principal purpose of this essay that all rational analysis and theorizing about empirical data should count as science; but it is essential to see that whatever sort of analysis and theorizing is employed must be rational and that the data must ultimately be empirical. (I doubt that, at this stage, a behaviorist would feel a need to disagree.) To side-step absurd semantic arguments, I will sometimes write rational empiricism for the concept that I would simply call science.

An ostensible science that accepts as fact unjustified empirical propositions is no science at all. That is not to say that each thing that, in everyday language, we call a science (eg, biology) must be a self-contained set of explanations. It is perfectly acceptable for one such science to be built upon the results of a prior rational empiricism (eg, for chemistry to build upon physics).

If we carefully consider what we take to be fact (and which may indeed be fact), we recognize that there is a theoretical or conjectural support to our acceptance of most of it. Such propositions taken as fact cannot be the foundation of rational empiricism, because, for rational empiricism to proceed from them, the support for them must itself have been rational empiricism. Rational empiricism cannot start with measurement[1.50] nor with notions of things to be measured, such as mass or the speed of light; rational empiricism cannot start with a geometry. These notions arise from interpretation and conjecture.[2]

Rational empiricism starts with what may be called brute fact — data the awareness of which is not dependent upon an act of interpretation.[3] If the belief in a proposition depends upon any such act, regardless of how reasonable the act might be, then the proposition is not truly a brute fact.[4]

To develop propositions from brute facts that contradict known brute facts would be to engage in self-contradiction, which is not reasonable in interpretation nor in theorizing. It is especially unreasonable to develop propositions that contradict the very brute facts from which they were developed.[5]

Philosophers have a long history of exposing where propositions are reliant upon prior interpretation and assumption. Towards an extreme, we are asked how we know ourselves not to be brains in vats, fed stimuli corresponding to a virtual reälity. It's not my intention to labor this question, beyond noting that it may be asked, and that acts of interpretation are entailed in any belief about whether we are other than about 3 pounds of tissue, bobbing-about in Pyrex™ jars, with electrodes attached here-and-there, whether the belief (for or against) be knowledge or not.

I referred to this question about whether one is a brain-in-a-vat as towards an extreme, rather than at an extreme, because a case in which stimuli are purely engineered is not an extreme. The presence itself of stimuli is not a brute fact. We conjecture their existence in our explanation of the sensations or sense-perceptions or perceptions that appear in our minds. If those things appear in our minds ex nihilo, then there are no stimuli, engineered or otherwise. That the mind is associated with a brain (or something like it) is not a brute fact. We build a model of reality that includes a body for us, and decide that our minds are housed within that body (as an activity or as a substance) or otherwise associated with it.[6]

The formation of sense-perceptions and of perceptions would seem to involve acts of interpretation; perhaps one would want to claim that the formation even of sensations involves interpretation. However, the presences of such things in the mind are themselves brute facts, whatever may be the theorized or conjectured origins of those things.[7] If by inner we understand the kernel of our belief system, and by outer we understand that which is built around that kernel, and if we begin our notion of mind with the capacity for sensations and the system that interprets these, then we should reälize that rational empiricism begins with the inner agent that the behaviorists and others want to dismiss as fictitious, mystical, superstitious; and it is the outer that is hypothesized in our explanation of the evidence. Those who attempt to deny or otherwise to exclude the inner self are trying to turn science on its head. Rational empiricism starts with a mind, and works its way out. And science, whether we simply equate it with rational empiricism or instead see it as a specific variety thereof, is thus committed to the existence of a mind, which is present in its foundation.


I say a mind advisedly; because, when rational empiricism starts, it starts anew with each mind. Of course, some minds do a better job of the rational empiricism than do others. The mind may be relatively inert rather than interpretive, or its interpretation may be largely irrational from the earliest stages.

If the mind continues, then it may develop an elaborate theory of the world. My own mind has done just this. And one of the important features of this theory is the belief in other minds (implicit in some of what I've been writing). Now, if we set aside issues of rationality, then an elaborate theory of the world might be developed without a belief in other minds. But as I constructed my theory of the world, including a theory of my having a body, it seemed that some of the other things out there exhibited behaviors similar to those of my own body, behaviors which in my own case were in part determined by my mind. Subsequently, my theory of minds in general, including my own, began to be informed by their behavior.[8] According to later features of the theory that I hold of these minds, some minds do a better job of developing a theory of other minds than do other minds. Some never develop such a theory; others develop theories that impute minds to things that have none; some assume that any mind must necessarily be almost identical to their own minds.

As communication developed between my mind and these other minds, my theories of things-more-generally began to be informed by what I was told of those other things. One of my problems from that point forward was ascertaining the reliability of what I was told. (It might here be noted that my aforementioned development of a theory of the world was of course in very large part a wholesale adoption of those claims that I considered reliable.) And that brings us to collaborative theorizing, of which many people now think science to be a special case.

But science is not essentially social. It does not pause between acts of communication, nor do we require the resumption of conversation as such to learn whether our most recent attempts were or were not science (though what we learn in conversation may tell us whether our prior conclusions continue to be scientific).

Consider whether Robinson Crusoe can engage in science, even on the assumptions that Friday will never appear, that Mr Crusoe will never be rescued, and that there is no means for him to preserve his work for future consideration. He can certainly engage in rational empiricism. He can test his conclusions against different sets of observations. (He can even quantify many things, and develop arithmetic models!)

Or imagine that you think that you see Colonel Inchthwaite commit a murder, though you are the only witness. Further, whenever you confront the Colonel and he is sure that there are no other witnesses and no recording devices, he freely admits to the murder. Your hypothesis that he has committed murder is tested every time that you query him. The fact that only you witnessed the apparent murder doesn't make your experience mystical. Your theory is a reasoned conclusion from the empirical evidence available to you.

Of course, others cannot use Mr Crusoe's work. And I will readily grant that it might be unscientific for someone else to believe your theory of murder. (That someone else may have little reason to believe your testimony, may have no independent means to test the theory, may have a simpler explanation to fit the evidence available to him or to her.)

Which is all to say that there can be private science, but it is only when the science of one's position is shared that it may become science for others.[10] (And, even then, they may have other evidence that, brought to bear upon one's position, renders it unscientific.)

The notion of science as intrinsically collaborative proceeds in part from a presumption that science is what those widely recognized as scientists do,[11] and in part from identifying science with the subject of the sociology of those seen (by some researcher) as scientists. But much of what people take to be science is, rather, a set of requirements — or of conventions attempting to meet requirements — for social interaction amongst would-be scientists to be practicably applied in the scientific development of belief.


It might be asked whether the scientists manqués who deny the mind can plausibly have no experience of it, and under what circumstances.

One theory might be that, indeed, some of these alleged scientists have no experience of consciousness; perhaps they are things that behave indistinguishably or almost indistinguishably from creatures with consciousness, yet do not themselves possess it. Perhaps there are natural machines amongst us, which behave like something more, yet are just machines.[12] But I'm very disinclined to accept this theory, which would seem effectively to entail a reproductive process that failed to produce a creature of one sort then successfully produced mimics thereöf, as if bees and bee-flies might have the same parents.

Another theory would be that some of these alleged scientists are autistic, having minds, but having trouble seeing them. There is actually a considerable amount of mind-blindness amongst those who attempt social science. An otherwise intelligent person without a natural propensity to understand people may involve him- or herself in the scientific study of human nature — or in an ostensibly scientific study thereöf — exactly as an outgrowth and continuation of attempts to understand it by unnatural means. These attempts may in fact be fruitful, as natural inclinations may be actively defective. The autistic can offer us an outsider perspective. But outsiders can be oblivious to things of vital importance, as would be the case here.[13]

(And one must always be alert to attempts by people who fail at the ordinary game of life to transform themselves into winners by hijacking the meta-game, rewriting the rules from positions of assumed expertise.)

A remaining theory would be that these are rather more ordinary folk, who encountered what appeared to them to be a profound, transformative theory, and over-committed to it. (There seems to be an awful lot of that sort of thing in the world.) Subsequently, little compels them to acknowledge consciousness. They aren't often competently challenged; they've constructed a framework that steers them away from the problem; and most people seem to be pretty good at not thinking about things.


While the behaviorists have run off the rails in their insistence that minds are a fiction, that does not mean that the study of human behavior with little or no reference to the mind of the subject is always necessarily a poor practice. As I stated earlier, some people assume that any mind must necessarily be almost identical to their own minds, and a great many people assume far too much similarity. I find people inferring that, because they have certain traits, I must also have these same traits, when I know that I do not; I find them presuming that others have traits that I am sure that those others do not, again based upon a presumed similarity. A study of pure behavior at least avoids this sort of error, and is in some contexts very much to be recommended.


[0] I began writing this entry shortly after seeing the articles, but allowed myself repeatedly to be distracted from completing it. I have quite a few other unfinished entries; this one was at the front of the queue.

[1] When behaviorists found other psychologists unreceptive to their approach, some of them decided to decamp, and identify that approach as a separate discipline, which they grotesquely named behaviorology, combining Germanic with Greek.

[1.50 (2015:06/10)] The comment of a friend impels me to write that, by measurement I intended to refer to the sort of description explored by Helmholtz in Zählen und Messen, by Suppes and Zinnes in Basic Measurement Theory, and by Krantz, Luce, Suppes, and Tversky in Foundations of Measurement. This notion is essentially that employed by Lord Kelvin in his famous remark on measurement and knowledge. Broader notions are possible (and we see such in, for example, Rand's Introduction to Objectivist Epistemology).

[2] Under a narrowed definition of science that entails such things as measurement, a reality in which quantification never applied would be one in which science were impossible. Many of those inclined to such narrow definitions, believing that this narrowed concept none-the-less has something approaching universal applicability, struggle to quantify things for which the laws of arithmetic are a poor or impossible fit.

[3] The term brute fact is often instead used for related but distinct notions of fact for which there can be no explanation or of fact for which there is no cause. Aside from a need to note a distinction, I am not here concerned with these notions.

[4] Propositions that are not truly brute fact are often called such, in acts of metaphor, of hyperbole, or of obliviousness.

[5] Even if one insisted on some other definition of science — which insistence would be unfortunate — the point would remain that propositions that contradict known brute fact are unreasonable.

[6] Famously or infamously, René Descartes insisted that the mind interfaced with the brain by way of the pineal gland.

[7] I am sadly sure that some will want to ask, albeït perhaps not baldly, how the mind is to know that its sensation of its sensation is correct, as if one never sensed sensations as such, but only sensations of sensations. And some people, confronted with the proposition put that baldly, will dig-in, and assert that this is indeed the case; but if no sensation can itself be sensed except by a sensation that is not itself, then no sensation can be sensed, as the logic would apply recursively.

[8] Take a moment now, to try to see the full horror of a mind whose first exposures to behavior determined by other minds are largely of neglectful or actively injurious behavior.

[9] If I impute less than certainty to some proposition then, while the proposition may be falsified, my proposition about that proposition — the plausibility that I imputed to it — is not necessarily falsified. None-the-less, it is easier to speak of being wrong about falsified propositions to which one imputed a high degree of plausibility.

[10] The confusion of transmittability with rationality is founded in stupidity. Even if one allowed science to be redefined as a collaborative activity, somehow definitionally requiring transmittability, private rationality would remain rational. But I promise you that some will adopt the madness of insisting that, indeed, any acceptance of private evidence by its holder is mystical.

[11] When would-be scientists imitate, without real understanding, the behavior of those whom they take to be scientists, the would-be scientists are behaving in a way analogous to a cargo cult.

[12] Some people are convinced that they are unique in possessing consciousness, and the rest of us are just robots who do a fair job of faking it. This is usually taken as madness, though there is rather wide acceptance of a certitude that all other sorts of animals are natural machines, and that anything that seems as if it proceeds from love by a dog or by a pig is just the machine performing well.

[13] The presence of consciousness is here a necessary truth, but the proper grounds of its necessity are not obvious to most who are aware of consciousness; thus it should be unsurprising that a markèdly autistic person could not see this truth in spite of its necessity.

Thinking inside the Box

Sunday, 4 March 2012

I recently finished reading A Budget of Paradoxes (1872) by Augustus de Morgan.

Now-a-days, we are most likely to encounter the word paradox as referring to apparent truth that seems to fly in the face of reason, but its original sense, not so radical, was of a tenet opposed to received opinion. De Morgan uses it more specifically for such tenets when they go beyond mere heterodoxy. Subscribers to paradox are those typically viewed as crackpot, though de Morgan occasionally takes pains to explain that, in some cases, the paradoxical pot is quite sound, and it is the orthodox pot that will not hold water. None-the-less, most of the paradoxers, as he calls them, proceed on an unsound basis (and he sometimes rhetorically loses sight of the exceptions).

A recurring topic in his book is attempted quadrature of the circle. Most of us have heard of squaring the circle, though far fewer know to just what it refers.

I guess that most students are now taught to think about geometry in terms of Cartesian coördinates,[1] but there's an approach, called constructive, which concerns itself with what might be accomplished using nothing but a stylus, drawing surface, straight-edge, and compass. The equipment is assumed to be perfect: the stylus to have infinitesimal width; the surface to be perfectly planar, the straight-edge to be perfectly linear, and the pivot of the compass to stay exactly where placed. The user is assumed able to place the pivot of the compass exactly at any marked point and to open it to any other marked point; likewise, the user is assumed to be able to place the straight-edge exactly touching any one or two marked points. A marked point may be randomly placed, or constructed as the intersection of a line with a line, of an arc with an arc, or of an arc with a line. A line may be constructed by drawing along the straight-edge. An arc may be constructed by placing the compass on a marked point, opening it to touch another marked point, and then turning it. (Conceptually, these processes can be generalized into n dimensions.)
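These primitives can be illustrated with a small sketch (my own, with merely hypothetical names, and nothing from de Morgan): marked points and arcs are simple records, and new marked points arise only as intersections. Line-line and line-arc intersections would be handled analogously, and are omitted here.

```python
# A sketch of the idealized equipment described above: marked points, and arcs
# drawn by placing the compass at one marked point and opening it to another.
# New marked points arise only as intersections.
from dataclasses import dataclass
from math import sqrt, isclose

@dataclass(frozen=True)
class Point:
    x: float
    y: float

@dataclass(frozen=True)
class Arc:                    # compass placed at `center`, opened to reach `through`
    center: Point
    through: Point

    @property
    def radius(self) -> float:
        return sqrt((self.through.x - self.center.x) ** 2
                    + (self.through.y - self.center.y) ** 2)

def arc_arc(a: Arc, b: Arc) -> list[Point]:
    """Marked points at the intersection of two arcs (treated as full circles)."""
    dx, dy = b.center.x - a.center.x, b.center.y - a.center.y
    d = sqrt(dx * dx + dy * dy)
    if d == 0 or d > a.radius + b.radius or d < abs(a.radius - b.radius):
        return []             # no intersection, or coincident centers
    t = (d * d + a.radius ** 2 - b.radius ** 2) / (2 * d)
    h = sqrt(max(a.radius ** 2 - t * t, 0.0))
    mx, my = a.center.x + t * dx / d, a.center.y + t * dy / d
    points = [Point(mx + h * dy / d, my - h * dx / d),
              Point(mx - h * dy / d, my + h * dx / d)]
    return points[:1] if isclose(h, 0.0) else points

# Euclid I.1: from marked points A and B, two arcs yield the apex of an
# equilateral triangle on AB.
A, B = Point(0.0, 0.0), Point(1.0, 0.0)
print(arc_arc(Arc(A, B), Arc(B, A)))     # roughly (0.5, -0.866) and (0.5, 0.866)
```

A construction in this model is just a finite sequence of such operations, which is the restriction that matters in the quadrature problem discussed below.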

A classic problem of constructive geometry was to construct a square whose area was equal to that of a given circle. Now, if you think about it, you'll reälize that this problem is equivalent to constructing, from a given unit length, a length of π (or, equivalently, of its square root); with a little more thought, you might see that a finite number of such steps can produce only lengths expressible from the givens by rational operations and the extraction of square roots. π is not such a number; so, assuming that one is restricted to a finite number of steps, the problem is insoluble. It was demonstrated in the middle of the 18th Century that π were irrational, which disposed of the proposed rational values; the impossibility of the construction itself was settled in 1882, when π was shown to be transcendental.
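A compact modern sketch of the impossibility, which was not available to de Morgan and is my gloss rather than anything in the Budget: every length constructible with straight-edge and compass from a unit segment lies in a finite tower of quadratic extensions of the rationals,

\[
\mathbb{Q} = F_0 \subset F_1 \subset \cdots \subset F_k , \qquad [F_{i+1} : F_i] = 2 ,
\]

so a constructible number $\alpha$ is algebraic, with $[\mathbb{Q}(\alpha) : \mathbb{Q}]$ a power of 2. Lambert's mid-18th-Century result gives only that π is not rational; Lindemann's 1882 result is that π is transcendental, so that $[\mathbb{Q}(\pi) : \mathbb{Q}]$ is infinite, and neither π nor √π can lie in any such tower. Hence no finite sequence of constructions yields the required square.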

That demonstration of irrationality not-with-standing, people continued to try to square the circle into de Morgan's day, and some of them fought in print with de Morgan. (One of them, a successful merchant, was able to self-publish repeatedly.) De Morgan tended to deal with them the way that I often deal with people who are not merely wrong but are arguing foolishly — he critiqued the argument as such, rather than attempting to walk them through a proper argument to some conclusion. I think that he did so for a number of reasons. First, bad argumentation is a deeper problem than mistaken conclusions, and de Morgan had greater concern to attack the former than the latter, in a manner that exhibited the defects to his readers. Second, some of these would-be squarers of the circle had been furnished with proper argumentation, but had just plowed-on, without attending to it. (Indeed, de Morgan notes that most paradoxers will not bother to familiarize themselves with the arguments for the systems that they seek to overthrow, let alone master those arguments.) Third, the standard proof that π is not rational is tedious to mount, and tedious to read.

But de Morgan, towards justifying attending as much as he does specifically to those who would square the circle, expresses a concern that they might gain a foothold within the social structure that would allow them to demand positions amongst the learnèd, and that they might thus undermine the advancement of useful knowledge.[2] And, with this concern in-mind, I wonder why I didn't, to my recollection, encounter de Morgan once mentioning that constructive quadrature of the circle would take an infinite number of operations; he certainly didn't emphasize this point. It seems to me that the vast majority of would-be squarers of the circle (and trisectors of the angle) simply don't see how many steps it would take; that their intuïtion fails them exactly there. And their intuïtion is an essential aspect of the problem; a large part of why the typical paradoxer will not expend the effort to learn the orthodox system is that he or she is convinced that his or her intuïtion has found a way around any need to do so. But sometimes a lynch-pin in the intuïtion may be pulled, causing the machine to be arrested, and the paradoxer to pause. Granted, this may not be as potentially edifying to the audience; but if one has real fear of the effects of paradoxers on scientific pursuit, then it is perhaps best to reduce their number by a low-cost conversion.

De Morgan's concern for the effect of these géomètres manqués might seem odd these days, though I presume that it was quite sincere. I've not even heard of an attempt in my life-time actually to square the circle[3] (though I'm sure that some could be found). I think that attempts have gone out of fashion for two reasons. First, a greater share of the population is exposed to the idea that π is irrational almost as soon as its very existence is reported to them. Second, technology, founded upon science, has got notably further along, and largely by using and thereby vindicating the mathematical notions that de Morgan was so concerned to protect because of their importance. To insist now that π is, say 3 1/8, as did some of the would-be circle-squarers of de Morgan's day, would be to insist that so much of what we do use is unusable.


[1] Cartesian coördinates are named for René Descartes (31 March 1596 – 11 February 1650) because they were invented by Nicole Oresme (c 1320 – 11 July 1382).

[2] Somewhat similarly, many people to-day are concerned that paradoxers not be allowed to influence palæobiology, climatology, or economics. But, whereäs de Morgan proposed to keep the foolish paradoxers of his day in-check by exhibiting the problems with their modes of reasoning, most of those concerned to protect to-day's orthodoxies in alleged science want to do so by methods of ostensibly wise censorship that in-practice excludes views for being unorthodox rather than for being genuinely unreasonable. When jurists and journalists propose to operationalize the definition of science with the formula that science is what scientists do (ie, that science may be identified by the activity of those acknowledged by some social class to be scientists), actual science is being displaced by orthodoxy as such.

[3] Trisection of the angle is another matter. As a university undergraduate, I had a roommate who believed that one of his high-school classmates had worked-out how to do it.

Science and Consensus

Thursday, 17 February 2011

Sometimes I've simplistically said that invocation of consensus is not a scientific method. A more accurate claim would be that its use is a way of approximating the results of more rigorous methods — a way of approximation that should never be mistaken for the more rigorous methods, and that is often unacceptable as science.

Calling upon consensus is a generalization of calling upon an expert. Using an expert can be analogous to using an electronic calculator. In some sense, using a calculator could be said to be scientific; there are sound empirical reasons for trusting a calculator to give one the right answer — at least for some classes of problems.

But note that, while possibly scientific, the use of the calculator is, itself, not scientifically expert in answering the question actually asked of the calculator (though some scientific expertise may have gone into answering the questions of whether to use a calculator, and of which calculator to use). Likewise, calling upon opinion from a human expert is not itself scientifically expert in answering the question actually asked. That distinction might not matter much, if scientific expertise from someone (or from some thing) ultimately went into the answer.

The generalization of invoking consensus proceeds in at least one direction, and perhaps in two. First, using consensus generalizes from using one expert to using n experts. But, second, invoking consensus often generalizes from invoking the views of experts to invoking the views of those who are less expert, or even not expert at all.


Individual human experts, like individual electronic calculators, may not be perfectly reliable for answers to some sorts of questions. One response to this problem is the generalization of getting an answer from more than one, and, using a sort of probabilistic reasoning, going with the answer given by a majority of the respondents, or with some weighted sum of the answers. However, this approach goes astray when a common error prevails amongst most of the experts. If one returns to the analogy of digital calculators, various limitations and defects are typical, but not universal; a minority of calculators will answer some questions correctly, even as the majority agree on an incorrect answer. Likewise with human experts. That's not to say that being in the minority somehow proves a calculator or a human being to be correct, but it does indicate that one should be careful in how one responds to minority views as such. (In particular, mocking an answer for being unpopular amongst experts is like mocking an answer for being unpopular amongst calculators.) Counting the votes is a poor substitute for doing the math.

A hugely important special case of the problem of common design flaws obtains when most specialists form their opinions by reference to the opinions of other specialists. In this case, the expert opinion is not itself scientifically expert. Its foundation might be in perfectly sound work by some scientists, or it might be in unsound work, in misreading, in intuïtion and in guess-work, or in wishful thinking; but, in any case, what is taken to be the scientifically expert opinion of n experts proves instead to be that of some smaller number, or of none at all! In such cases, consensus may be little better, or nothing other, than a leap-of-faith. It isn't made more scientific by being a consensus.
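The point about common flaws can be made concrete with a toy simulation (my own illustration, with hypothetical numbers and function names, not anything from the original argument): when errors are independent, majority vote among moderately reliable respondents becomes very reliable as their number grows; when the respondents share a blind spot, or simply copy one another, adding respondents adds nothing.

```python
# Toy illustration with hypothetical numbers: majority vote among experts whose
# errors are independent, versus experts who share a common flaw on some questions.
import random

def majority_vote(n_experts, p_correct, p_shared_flaw=0.0, trials=10_000):
    """Return the fraction of questions that the majority answers correctly.

    p_correct     -- chance that each expert independently gets a question right
    p_shared_flaw -- chance that a question hits a blind spot common to all the
                     experts, in which case every one of them gives the same
                     wrong answer
    """
    right = 0
    for _ in range(trials):
        if random.random() < p_shared_flaw:
            continue                     # shared flaw: the majority is wrong
        votes = sum(random.random() < p_correct for _ in range(n_experts))
        if votes > n_experts / 2:
            right += 1
    return right / trials

random.seed(0)
for n in (1, 5, 25):
    print(n, round(majority_vote(n, 0.7), 3), round(majority_vote(n, 0.7, 0.2), 3))
# With independent errors, reliability climbs toward 1 as n grows; with a 20%
# shared blind spot, it is capped near 0.8 no matter how many experts vote.
```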


In a world in which expert opinion were always scientifically expert, broadening the pool to include those less expert would typically be seeking the center of opinion in less reliable opinion. However, as noted above, a field of expertise isn't necessarily dominated by scientific experts, in which case, people less expert but more scientific may move the center of opinion to a better approximation of a scientific opinion.

Additionally, for an outsider in seeking the opinion of experts, there is the problem of identifying who counts as an expert. The relevant knowledge and the relevant focus do not necessarily reside in the same people. As well as experts failing to behave like scientists, there are often people instead focussed on other matters who yet have as much relevant knowledge as any of those focussed on the subject in question.

So a case can be made for sometimes looking at the opinions of more than those most specialized around the questions. None-the-less, as the pool is broadened, the ultimate tendency is for the consensus to be ever less reliable as an approximation of scientific opinion. One should become wary of a consensus of broadly defined groups, and one should especially be wary if evidence can be shown of consensus shopping, where different pools were examined until a pool was found that gave an optimal threshold of conviction for whatever proposition is being advocated.


What I've really been trying to convey when I've said that invocation of consensus is not a scientific method is that a scientist, acting as a scientist, would never treat invocation of consensus — not even the consensus of bona fide experts — within his or her own area of expertise as scientific method, and that everyone else needs to see consensus for no more than what it is: a second-hand approximation that can fail grotesquely, sometimes even by design.

Disappointment and Disgust

Sunday, 21 March 2010

In his Philosophical Theories of Probability, Donald Gillies proposes what he calls an intersubjective theory of probability. A better name for it would be group-strategy model of probability.

Subjectivists such as Bruno de Finetti ask the reader to consider the following sort of game:

  • Some potential event is identified.
  • Our hero must choose a real number (negative or positive) q, a betting quotient.
  • The nemesis, who is rational, must choose a stake S, which is a positive or negative sum of money or zero.
  • Our hero must, under any circumstance, pay the nemesis q·S. (If the product q·S is negative, then this amounts to the nemesis paying money to our hero.)
  • If the identified event occurs, then the nemesis must pay our hero S (which, if S is negative, then amounts to taking money from our hero). If it does not occur, then our hero gets nothing.
De Finetti argues that a rational betting quotient will capture a rational degree of personal belief, and that a probability is exactly and only a degree of personal belief.
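The force of the game can be glossed as follows (my summary, hedged, rather than de Finetti's own wording). Writing $\mathbf{1}_E$ for 1 if the event occurs and 0 otherwise, the hero's net gain is

\[
S\,\mathbf{1}_E - q\,S \;=\; S\,(\mathbf{1}_E - q) .
\]

If the hero's degree of belief in the event is $p$, then his expected net gain is $S\,(p - q)$; and, since the rational nemesis picks the sign and the size of $S$ after seeing $q$, a quotient outside the interval from 0 to 1 exposes the hero to a certain loss, and any quotient other than $p$ exposes him to an expected loss by his own lights. Only $q = p$ is safe, which is the sense in which the betting quotient is held to reveal the degree of belief.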

Gillies asks us to consider games of the very same sort, except that the betting quotients must be chosen jointly amongst a team of players. Such betting quotients would be at least examples of what Gillies calls intersubjective probabilities. Gillies tells us that these are the probabilities of rational consensus. For example, these are ostensibly the probabilities of scientific consensus.

Opponents of subjectivists such as de Finetti have long argued that the sort of game that he proposes fails in one way or another to be formally identical to the general problem for the application of personal degrees of belief. Gillies doesn't even try to show how the game, if played by a team, is formally identical to the general problem of group commitment to propositions. He instead belabors a different point, which should already be obvious to all of his readers, that teamwork is sometimes in the interest of the individual.

Amongst other things, scientific method is about best approximation of the truth. There are some genuine, difficult questions about just what makes one approximation better than another, but an approximation isn't relevantly better for promoting such things as the social standing as such or material wealth as such of a particular clique. It isn't at all clear who or what, in the formation of genuinely scientific consensus, would play a rôle that corresponds to that of the nemesis in the betting game.


Karl Popper, who proposed to explain probabilities in terms of objective propensities (rather than in terms of judgmental orderings or in terms of frequencies), asserted that

Causation is just a special case of propensity: the case of propensity equal to 1, a determining demand, or force, for realization.

Gillies joins others in taking him to task for the simple reason that probabilities can be inverted — one can talk both about the probability of A given B and that of B given A, whereäs presumably if A caused B then B cannot have caused A.

Later, for his own propensity theory, Gillies proposes to define probability to apply only to events that display a sort of independence. Thus, flips of coins might be described by probabilities, but the value of a random-walk process (where changes are independent but present value is a sum of past changes) would not itself have a probability. None-the-less, while the value of a random walk and similar processes would not themselves have probabilities, they'd still be subject to compositions of probabilities which we would previously have called probabilities.
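A standard textbook example (mine, not Gillies's) makes the distinction concrete. Let

\[
X_n = \sum_{i=1}^{n} \varepsilon_i , \qquad \Pr(\varepsilon_i = +1) = \Pr(\varepsilon_i = -1) = \tfrac{1}{2} ,
\]

with the steps independent. On the proposed definition the steps have probabilities while the walk's value $X_n$ does not; yet

\[
\Pr(X_n = k) = \binom{n}{(n+k)/2}\, 2^{-n}
\]

for $k$ of the same parity as $n$ with $|k| \le n$ (and 0 otherwise); this is nothing but a composition of the step probabilities, exactly the sort of composition that we would previously have called a probability.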

In other words, Gillies has basically taken the liberty of employing a foundational notion of probability, and permitting its extension; he chooses not to call the extension probability, but that's just notation. Well, Popper had a foundational notion of propensity, which is a generalization of causality. He identified this notion with probability, and implicitly extended the notion to include inversions.


Later, Gillies offers dreadful criticism of Keynes. Keynes's judgmental theory of probability implies that every rational person with sufficient intellect and the same information set would ascribe exactly the same probability to a proposition. Gillies asserts

[…] different individuals may come to quite different conclusions even though they have the same background knowledge and expertise in the relevant area, and even though they are all quite rational. A single rational degree of belief on which all rational being should agree seems to be a myth.

So much for the logical interpretation of probability, […].

No two human beings have or could have the same information set. (I am reminded of infuriating claims that monozygotic children raised by the same parents have both the same heredity and the same environment.) Gillies writes of the relevant area, but in the formation of judgments about uncertain matters, we may and as I believe should be informed by a very extensive body of knowledge. Awareness that others might dismiss as irrelevant can provide support for general relationships. And I don't recall Keynes ever suggesting that there would be real-world cases of two people having the same information set and hence not disagreeing unless one of them were of inferior intellect.

After objecting that the traditional subjective theory doesn't satisfactorily cover all manner of judgmental probability, and claiming that his intersubjective notion can describe probabilities imputed by groups, Gillies takes another shot at Keynes:

When Keynes propounded his logical theory of probability, he was a member of an elite group of logically minded Cambridge intellectuals (the Apostles). In these circumstances, what he regarded as a single rational degree of belief valid for the whole of humanity may have been no more than the consensus belief of the Apostles. However admirable the Apostles, their consensus beliefs were very far from being shared by the rest of humanity. This became obvious in the 1930s when the Apostles developed a consensus belief in Soviet communism, a belief which was certainly not shared by everyone else.

Note the insinuation that Keynes thought that there were a single rational degree of belief valid for the whole of humanity, whereäs there is no indication that Keynes felt that everyone did, should, or could have the same information set. Rather than becoming obvious to him in the 1930s, it would have been evident to Keynes much earlier that many of his own beliefs and those of the other Apostles were at odds with those of most of mankind. Gillies' reference to embrace of Marxism in the '30s by most of the Apostles simply looks like irrelevant, Red-baiting ad hominem to me. One doesn't have to like Keynes (as I don't), Marxism (as I don't) or the Apostles (as I don't) to be appalled by this passage (as I am).

Modeling Madness

Monday, 27 April 2009

Some people try to light a candle. Some people curse the darkness. Me? Part of me wants to model the darkness.

I was led to this reälization upon reading the latest entry from zenicurean. In response to news reports about the latest swine-flu concerns, he writes

Plenty of first reactions appear to heavily involve doing things actual health care experts are not chiefly concerned about getting done, but that's how it always works, isn't it?

And I almost immediately thought about why those first reäctions are what they are. For example

  • Officials want to be seen as doing something.
  • People, including officials, often greatly over-estimate their understanding of issues that have (or seem to have) a significant bearing on general welfare.
  • Officials with axes to grind are quick to find excuses for the grinding.
  • Politicians can exploit the prejudices and desires of voters who are predisposed to support various measures (such as blocking foreign trade or travel, or subsidizing some profession).

So, could we pull this all together, and surely other things that don't come so quickly to-mind, perhaps into a mathematical model, or perhaps into something less formal, that would have some predictive efficacy, or at least some distinctive explanatory efficacy?