Illusory Mites

27 October 2021

In the past several years, I have had occasional episodes of a false sense-perception that tiny insects or insect-like creatures are crawling on me. The technical term for such an experience is formication, loosely adapted from formicate, which means to crawl like an ant (from the Latin formica, meaning ant), though in my case the sense-perception is that of being crawled upon by something such as fleas or mites.

I use the term sense-perception because, in my case, an underlying sensation is real; the falsity is in how that sensation is unconsciously interpreted. The sensation is actually produced by an allergic reaction. My skin is allergic to many things in my present environment. I don't know what all of these allergens are, but normally I can keep my discomfort and other symptoms at a low level by avoiding detergents and scented cleaning products. However, sometimes a threshold is exceeded, perhaps by something I recognize, perhaps not, and one consequence might be sensations so much like those of having tiny bugs crawling on me that my unconscious mind signals that just that is happening. None-the-less, when I can see some of the areas in question, no creatures (other than myself) are visible. Further, the inferred creatures don't make any progress; it is as if they are crawling-in-place. And, finally, wounds are not later discoverable on these sites. (As it happens, an episode can be triggered by an actual insect bite, but then further wounds are not found.)

Being rational in how I consciously interpret the sense-perception doesn't seem to cause it to change. That's probably because, for most of my life, when I felt as if creatures were crawling on me, it was because I had creatures crawling on me. Years of neural networking would have to be revised for my brain not first to think that bugs were crawling on me. And a practical adaptation would account for the possibility, when the sensation were felt, that bugs were crawling on me and warranted a swift response.

If I were less rational or didn't have other awareness of my allergies, it would be natural for me to conclude that, while I could not see tiny creatures, they must never-the-less be crawling on my skin or perhaps just below its surface. Let's add to such inclinations that most of those who would deny that these creatures were crawling on-or-under my skin would themselves get matters fundamentally wrong. In particular, failing to distinguish the raw sensation from the sense-perception, most people would deny the reality of the former, not only in their protestations but in their attempts to locate the dysfunction. It wouldn't be all in my head.

On the Practical Uselessness of Minimax Prescriptions

25 October 2021

[I posted the following as an entry to Facebook two years ago.]

In game theory, there's a proposal that, in selecting amongst options, a player should choose that option with the least-bad worst case. So, for example, if the worst that could happen if you go to the bistro is that they get your order wrong, while the worst that could happen if you go to the bar is that you get knifed, then you should go to the bistro.

Such a strategy is usually called minimax. (I think that it should have been called maximin, and indeed some people call it that, but they're very much bucking convention.)
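The rule can be sketched mechanically. Here is a minimal Python illustration of choosing the option whose worst outcome is least bad; the option names and outcome values are hypothetical, invented only for the example (higher numbers are better outcomes):

```python
# Hypothetical outcome values for each option; the worst case of each
# option is what the minimax rule attends to.
options = {
    "bistro": [5, 3, -1],    # worst case: a botched order
    "bar":    [8, 4, -100],  # worst case: getting knifed
}

def least_bad_worst_case(options):
    """Choose the option whose worst (minimum) outcome is greatest."""
    return max(options, key=lambda name: min(options[name]))

print(least_bad_worst_case(options))  # → bistro
```

Note that the bar offers the best best-case (8 versus 5); the rule ignores that entirely and looks only at the floor of each option.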

The proposal seems plausible, especially when measures of probabilities are unknown (so that expected values cannot be calculated). But I think that notions of probabilities as measures inform and thus disinform the apparent reasonableness of minimax strategy.

When one conceptualizes probability as a measure, it is all too easy to think that all events with probability measure 0 might as well be treated as impossible. But they're not quite the same thing if one has to deal with infinitely many possible cases; some precisely specified possibilities would each have to have a probability measure of 0.
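To illustrate the point above with a standard case: under a continuous uniform distribution on [0, 1], every exactly specified value has probability measure 0, yet every draw realizes some exactly specified value. A minimal sketch, with the distribution and the queried values chosen purely for illustration:

```python
# Illustration (hypothetical distribution): under a continuous uniform
# distribution on [0, 1], the measure of any singleton {x} is 0, yet
# intervals carry positive measure, and every draw lands on some exact x.
from fractions import Fraction

def point_probability(x):
    # The probability measure of a single exact value:
    return Fraction(0)

def interval_probability(a, b):
    # The probability measure of the interval [a, b] within [0, 1]:
    return Fraction(b) - Fraction(a)

print(point_probability(Fraction(1, 2)))                     # 0
print(interval_probability(Fraction(1, 4), Fraction(3, 4)))  # 1/2
```

So "probability measure 0" cannot mean "impossible": the draw must land somewhere, and wherever it lands had measure 0.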

Possible events with truly horrible consequences can have a probability measure of zero, and will be disregarded if one treats that probability as equivalent to impossibility.

And, in the absence of actual measures of probability, the habit of treating events with very low measures of probability as if impossible can slide into a practice of more generally disregarding events that have low probability rank.

In a textbook game of poker or of dice or whatever, the worst possible outcome in the model is most often a loss of funds. In a real-world game of poker or of dice or whatever, the worst possible outcome is something far more dire, and hard to identify; we can think of awful possibilities, and then think of something still worse. Puffy may get angry, and shoot us; or he may shoot not only us but in his rage go shoot our loved ones as well.

Suicide Mission

13 October 2021

[I posted the following as an entry to Facebook six years ago.]

Every now and then, one of my Facebook Friends posts or comments to a posting about someone who has lost his battle with depression.

I recently saw one of those postings, and visited the page of the person who was said to have lost the battle. I saw some of his final posts, and some of his pictures. And, yeah, he was battling with depression. If I'd known him, I would have told him to stop.

I don't mean that I would have told him to go somewhere and die. I mean that depression is not to be fought. I very much doubt that a depressive personality can ever be anything else; but I am absolutely certain that fighting it is not how to deal with it.

People who try to fight depression either are always fighting it or have lost to it. They compound the depression with a sense that there is something unacceptable about themselves, which can only be overcome by a fight. If they don't have that much fight in themselves, then they don't accept themselves; their lives hang on their belief in their ability to fight depression, to somehow refuse to be depressives.

It looks an awful lot like an unrecognized internalization of some of the things that the depressive was told as a child, by those who were failing that child, and who in many cases had taught and were teaching the perverted life-lessons that had made the child a depressive.

Depression is to be explained, to be understood, and to be put in context. There is no guarantee that life will then be livable, but at least one doesn't have to die upon losing a fight.

A Narrative to Come

6 October 2021

The mainstream narrative about SARS-CoV-2 has itself mutated many times, but it seems headed towards a crisis from which it will not recognizably survive. I believe that, as progressives try to get out from under their own responsibility for that narrative and for the homicidal and otherwise inhumane effects of the policies developed and defended by it, they will ascribe principal blame to what they will call Big Pharma, and they will insist that the lesson to be drawn is that the power of the state must be extended — to control more thoroughly the development and allocation of medical treatment, to prevent commercial interests (in general) from influencing supposedly scientific research by selective funding, and to prevent commercial interests from influencing state policy.

Some large pharmaceutical firms have played a decidedly unhealthy rôle in the response of important institutions to SARS-CoV-2. At this stage, I would be less surprised to discover that some of the persons at these firms have been guilty of crimes against humanity than that they were simply bunglers. But, when someone says Big Pharma (with or without capitalization), I don't know that he or she refers to just these firms. Nor, for that matter, am I sure that only large firms have been a cause of the problem, though I am quite sure that not all firms are responsible.

Some people had or have been primed to blame what they call Big Pharma from very early into the pandemic, if not indeed from the outset. Until recently, most of the political left wrote and spoke of Big Pharma as an enemy, demanding such things as quicker expiration of drug patents, monopsonistic bargaining by the state to drive-down drug prices, or overt price ceilings. The first time that I encountered the expression Big Pharma was in AD 2000, when Albert Arnold (Al) Gore jr used it as he made attacks on the pharmaceutical industry a key feature of his Presidential campaign. And people not on the political left had been increasingly worried about the pharmaceutical industry as they saw social perverts such as William Henry (Bill) Gates III develop an interest and involvement in pharmaceuticals as part of a broader vision to remake mankind.

But I think that it is far more reasonable to see the firms, large and small, pharmaceutical or otherwise, that have behaved problematically or downright evilly concerning SARS-CoV-2 not as masterminds but as amongst the many mercenaries and whores.

In any case, the changes that I predict that progressives will demand would in practice mean that medicine would be further socialized and made bureaucratic, that the selective funding of the state would be almost the sole determinant of prevalent, ostensibly scientific conclusions, and that those in the non-commercial commanding heights of society would have still greater control over the political process. Each of these changes would deepen the fundamental problems that we observe connected to the present mainstream narrative and to state policy concerning SARS-CoV-2. The left should not be tolerated in some further attempt to suppress dissent and deviation.

Cut to the Goddamn'd Chase!

15 September 2021

Most prefaces, forewords, introductions, and introductory paragraphs are largely or entirely superfluous; most introductory sentences are wastes of time.[1] In the last few years, my annoyance about entropic rhetoric in general and about blathering preambles in particular has become outrage.

The internal state of affairs in the West is more terrible now than ever previously in my lifetime. A great many people believe themselves to have important insights to convey about this state of affairs, and want our time. Our time is scarce, but many of them want to present essays in the form of audio recordings, which deliver words far more slowly than most of us can read. Worse, almost every one of those who offer these recordings prologues for some minutes, usually about the importance of what they will have to say but almost always without the prologues' saying anything important.

I believe that some of these people indeed have important things to say; but, in each case, he or she behaves as if unable to recognize what is important. In each individual case, the probability is especially low that a person not getting to the point will get to an important point. I almost always abandon attention before the prologue ends, possibly well before it ends.

[1]  I acknowledge exceptions. I like to believe that I am responsible for some of them; but, had I always the luxury of being my own editor, some of my work would get more rapidly to its point.

Basic Ontology

2 September 2021

When natural languages first had need to refer to concepts as such, this need was so limited and so vaguely understood that the very same term would refer both to a concept and to that to which the concept pointed. A horse is a mammal. and A horse is on my lawn. seem superficially to be statements of the same sort. Some people, sensing a difference, declare that the first statement is essential, while the second is accidental, but this way of speaking and of writing seems to treat a horse as referring in both cases to the same thing, and embroils us in conflicts over which attributes are essential, which are accidental, and by what methods we all ought to agree on a resolution. The primary difference between the two statements is that in A horse is a mammal. the term a horse typically refers to a concept, whereäs in A horse is on my lawn. the term a horse usually refers to something to which the concept corresponds, which we may call an instantiation of the concept.

I say usually because in theory someone might use a horse only for creatures who were, amongst other things, found on her lawn; but we understand that this practice is not usual, and can find the difference between concept and instantiation by considering usual practice. At the same time, we can see that struggles about essential and accidental attributes are largely rooted in different people simply using related but different concepts.

Even people who are careful to indicate a distinction between some Y and the concept of Y when Y is not itself a concept may fail to do so when it is. But the concept of the concept of X is not the concept of X unless we can find some X which is no more or less than the idea of itself.

From this point, we should see that, to believe that instantiations depend upon their concepts, we must accept an infinite regress. The alternative is not to accept that concepts depend upon that which instantiates them — some concepts are not instantiated — but to understand that concepts must be constructed by employing some thing or things that are not concepts.

In any case, always marking the distinction between concept and instantiation can become a very great burden as we begin to ponder ideas as such, part of which burden would fall upon a reader dealing with compounding of expressions such as the concept of; but, one way or another, we should remember what we are contemplating or discussing.

The confusion in using the same term for concept and instantiation is most acute in existence statements.

The subject in Unicorns do not exist. is the concept of unicorns, not any instantiation of that concept. Grammatically we treat nothing as a something and grammatically we treat non-existence as a property of nothing-as-something. But, underlying this practice, statements about non-existence are really statements that some concepts have no instantiations; if claims about non-existence refer to properties of somethings, then these somethings are concepts. Unicorns do not exist. is not really about unicorns; it is about the idea of unicorns. We can only speak or write of the idea of unicorns.

And, when we speak or write of existence, we are speaking and writing of concepts. The claim Horses exist. is really about the concept, that it is instantiated. Coherent existential claims are no more or less than claims that concepts are instantiated.

That statements of form X exists. unpack to form The concept of X is instantiated. should lead one to recognize that a proper reading of The concept of X exists. unpacks to The concept of the concept of X is instantiated. We don't generally need a concept before we use something that would instantiate it — otherwise the infinite regress of concepts would be needed — but anything that we use is at least potentially an instantiation of multiple concepts. Some might be tempted to conclude that, thus, X exists. needn't refer to the concept of X, and can be unpacked as X is potentially an instantiation of some concept. It may seem doubtful that anyone has ever previously intended such a thing with an existential claim, and certainly existential claims are not usually claims about the ability to find or to construct an idea of a thing said to exist. However, to be potentially an instantiation of some concept is no more or less than to possess properties, so this notion would treat existence as something like a generalization of the concept of property. Still, the formula cannot be adapted to X does not exist. as unpacking it to X is not potentially an instantiation of some concept. is always incoherent when not false, whereäs declarations such as Unicorns do not exist. may be coherent and true. And we are incoherent if by a horse we mean the same thing in A horse has no properties. as in A horse is on the lawn.

I don't propose that we try to reshape our speech and writing nor our work-a-day thinking to distinguish overtly-and-always concept from instantiation. I don't even propose to do so in all philosophic discourse. But, when discussion of existence seems troubling or profound or both, then we may need to bring that distinction to bear.

Some people, encountering a discussion such as the foregoing, will not much attend to it, because they feel certain that they clearly see a truth that contradicts it. I'm going to address propositions of two sorts, mistaken for such truth.

One sort, in which X is something like what is meant by a horse in A horse is on the lawn. says X V because it exists. where the variable V takes the value of a verb. For example, A hot stove burns you because it exists. The first thing to note is that specific values of V that supposedly prove existence aren't universally applicable; we don't say A horse burns you because it exists. Generally, existence is intended to be seen as a necessary but not sufficient condition for X to V. But, when we add conditions to achieve sufficiency, we find that the added conditions (eg, being at a temperature at or above 118°F) are by themselves sufficient, without a mysterious complementary property of existence possessed by X; the notion of such a property results from thoughtlessly confusing a way in which a concept of X may be said to have properties with the way in which X has properties. What we call the horn of a unicorn is itself a concept of a horn.

Though I have encountered at least one would-be follower of Ayn Rand who mistook the tack of because it exists for hers, she made a different mistake. She declared the concept of existence to be irreducible but axiomatic, and that we were to see that it were found and proven in-so-far as a self-contradiction would result from denying Existence exists. However, because we can show that a self-contradiction indeed obtains while interpreting existence and its coördinate terms as in the prior discussion, her attempt to prove the existence of some other concept (profound or otherwise) is a failure.

If we unpack Existence exists. as we can, it is The concept of being an instantiated concept is instantiated. The source of self-contradiction in denial of this proposition is that the concept of a concept being instantiated must have been instantiated for the proposition to be formed, though this proposition could not hold before formation of the concept of existence. And its subjects are not of the sort that Rand and her followers imagined or imagine.

For whatever it's worth, if we grab for potential instantiation then the unpacking is to The concept of potential instantiation is potentially instantiated. that is, more simply put, to A concept can be formed of being the subject of a concept. Again, self-contradiction ensues if we attempt to deny the claim, but in neither expression is the subject that for which Rand reached.

Dissimilar Equivalence

28 August 2021

One of the points taught in a great many introductory courses on microeconomics is that a tax-cut can be expected to have the same effect on schedules of supply and of demand, and thence on the resulting equilibrium, as would a subsidy. In this sense, economics shows that a tax-cut is equivalent to a subsidy. And, ignoring differences in administrative costs, the resources possessed by the state given a tax-cut are equivalent to those after dispensation of a subsidy. But it is only in these effects that microeconomics shows an equivalence; and, even if we confine ourselves to the considerations of non-normative microeconomic theory, we would be speaking or writing rather loosely if we simply said that a tax-cut were the same as a subsidy.
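The textbook equivalence can be sketched numerically. The linear schedules and all numbers below are hypothetical, chosen only for illustration: a per-unit tax cut of 3 and a per-unit subsidy of 3 change the sellers' effective wedge identically, so they yield the same equilibrium quantity and price:

```python
# Hypothetical linear market: demand P = a - b*Q; supply (before any
# intervention) P = c + d*Q. A per-unit tax raises the wedge sellers
# face; a per-unit subsidy lowers it. Only the net wedge matters.
def equilibrium(a, b, c, d, tax, subsidy):
    """Return (Q*, buyer price P*) for the given per-unit tax and subsidy."""
    wedge = tax - subsidy
    q = (a - c - wedge) / (b + d)
    return q, a - b * q

a, b, c, d = 100.0, 1.0, 10.0, 2.0

cut_case = equilibrium(a, b, c, d, tax=6.0, subsidy=0.0)  # tax cut from 9 to 6
sub_case = equilibrium(a, b, c, d, tax=9.0, subsidy=3.0)  # tax kept, subsidy of 3

print(cut_case == sub_case)  # True: identical quantity and price
```

That sameness of equilibrium outcomes is the whole of what the theory establishes; it says nothing about whether the two policies are morally the same.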

In the sphere of normative discourse, whether a party's refraining from taking is equivalent to giving is determined by whether that person or group of people is entitled to take. A person who forgives a debt may be said to give;[1] but the person who does not steal that which is yours does not in this way donate to you. The invaders or other thugs who declared themselves to be lords did not give what they merely did not confiscate from the farmers whom they conquered.

To treat a tax-cut as morally equivalent to a subsidy, or to do as so many progressives and left-wing populists — to insist that a failure to increase some tax on some party to a prior or even new level simply is a subsidy — is to insist that the state is morally entitled to tax at the greater level, that the state owns those resources.

This moral claim is certainly not a principle of economics nor a consequence of bringing economic principles to bear on moral theory, and should not be allowed to pass as such nor by insinuation.

[1] A creditor who forgives a debt has given her rights as surely as if she had assigned them to a third party.

Checked against What?

4 July 2021

Recently, I encountered a bizarre claim about deaths from two different causes, and a link to a supposèd fact-check at Lead Stories, which unequivocally called the claim false. However, when I read the rest of the report, the alleged fact-checker had only failed to find substantiation for the claim. So I sent an inquiry to Alan Duke, the Editor-in-Chief:

Date: Sun, 27 Jun 2021 05:12:03 -0700
Subject: Method?

How do you get from your not knowing of any substantiation of a claim to a declaration that the claim is false?

Merely not knowing that a claim is true is equivalent to merely not knowing that it is false. Declaring it to be unproven would be perfectly reasonable, but that's not always what you do (though it may sometimes be what you do).

When declaring an unproven claim to be false, or its unproven contradiction to be false, do you flip a coin? or do you decide by some other method?

I've not received a reply.

Now, some people will declare You can't prove a negative! But the mathematic form of the claim being checked was x > y. Accepting that x and y correspond to real numbers, the contradiction of the claim is y ≥ x. I don't know that one of these claims should be regarded as positive and the other as negative.
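The complementarity of the two forms can be made concrete. A minimal sketch (the sample values are arbitrary): for any pair of reals, exactly one of the claim and its contradiction holds, so neither is distinguished as "the negative":

```python
# The claim being checked had the form x > y; its contradiction over the
# reals is y >= x. For any pair, exactly one of the two forms holds.
def claim(x, y):
    return x > y

def contradiction(x, y):
    return y >= x

for x, y in [(3, 2), (2, 3), (2, 2)]:
    # Exactly one of the two is true for each pair:
    assert claim(x, y) != contradiction(x, y)

print("each pair satisfies exactly one of the two forms")
```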

Of course, all but the most terribly gullible understand that what is now-a-days called fact-checking is primarily concerned to protect some narrative or to attack some narrative, and will disregard even basic logic if that concern seems best served by doing so.

Rôles of Prescriptive Models in Economics

30 May 2021

In introductory treatments of economics, one often encounters a distinction drawn between what is called positive economics and what is called normative economics. In these names — and in typical discussion — there are problems.

The meaning of positive here is restricted to fact, as opposed to speculation. Now, on the one hand, supposedly positive economics, like all attempts by human beings to understand the world, is permeated by speculations, which in scientific effort are hypotheses. (The philosophic movement called positivism arose with incompetent aspirations.) On the other hand, contrasting the normative with something called positive entails an implication, insinuation, or declaration that the normative cannot be placed on as solid a foundation as the rest of our understanding. Sometimes a lack of present agreement is treated as if proof that there is no objective ethical truth; sometimes the question is just begged. In any case, the distinction is irrational.

Using the terms descriptive and prescriptive instead steps away from the worst aspect of using positive, though it would be less corrosive to refer to non-prescriptive economics as, well, non-prescriptive or as non-normative.

However, in behavioral science, elements drawn from prescriptive theory are often useful non-prescriptively, either as approximations or as bounding cases. Economic rationality and expected-utility maximization (the latter sometimes conflated with the former) are such elements.

Some economists would not even recognize economic rationality or expected-utility maximization as prescriptive in any case, because they are meta-preferential — they express a preference for structures of preference that have ordering properties such as transitivity and acyclicity, but say nothing about ultimate objectives and thus, in themselves, say nothing about whether one should prefer tomatoes to apples or life over death.
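Ordering properties of the sort just mentioned can be checked mechanically. Here is a minimal sketch of testing a strict-preference relation for transitivity; the goods, the rankings, and the cyclic counter-example are all hypothetical, invented only for illustration:

```python
# Check a strict-preference relation prefers(x, y) ("x is strictly
# preferred to y") for transitivity over a finite set of items.
from itertools import product

def is_transitive(prefers, items):
    """True if prefers(x, y) and prefers(y, z) always imply prefers(x, z)."""
    return all(
        prefers(x, z)
        for x, y, z in product(items, repeat=3)
        if prefers(x, y) and prefers(y, z)
    )

# Any preference induced by a numeric ranking is transitive:
ranking = {"tomatoes": 2, "apples": 1, "bread": 0}
print(is_transitive(lambda x, y: ranking[x] > ranking[y], ranking))  # True

# A cyclic preference fails (and likewise fails acyclicity):
cycle = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
print(is_transitive(lambda x, y: (x, y) in cycle, ["rock", "scissors", "paper"]))  # False
```

Note that the check says nothing about which ranking the agent ought to hold; it constrains only the structure of the preferences, which is exactly the meta-preferential character described above.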

The prescriptive arguments for economic rationality and for expected-utility maximization are to the effect that those who conform realize more of their objectives — regardless of what those objectives might be — than those who do not, with it usually treated as tautologic that one desires such maximization.[1]

The non-prescriptive arguments for economic rationality and for expected-utility maximization as approximations note that these are relatively tractable models of behavior for which evolutionary dynamics will select. Because the models are taken from prescriptive work, some people mistake or misrepresent any use of them as necessarily prescriptive, but the claim is neither that social or other biologic evolution ought to select for something approximated by such behavior nor that agents ought to engage in the behavior for which evolution selects. (If anything, what is illuminated is that evolution selects for a propensity to such prescriptions!)

I endorse use of these models as tractable approximations in many cases, but I also embrace use of a weaker notion of economic rationality as a bounding case. A boundary of economic outcomes is given by considering what those outcomes would be were agents economically rational.

Behavioral economics concerns itself with when-and-how people actually behave, and especially with failures of the aforementioned models. Although this research is not what I do, I acknowledge its value. However, a great deal of what passes for behavioral economics involves an inferential leap from identifying a real or apparent deviation of behavior from one of these models to a conclusion that this-or-that result could be obtained by state intervention, with the researcher looking away from any proper examination of the behavior of agents determining practices of the state. Behavioral economics is thus used as the motte for a statist bailey. Additionally, even behavioral researchers with no apparent statist agenda often fail to recognize when behavior that seems at odds with these models is or may be instead at odds with some presumption of the researcher.[2]

[1] The main-stream of economic theory treats completeness of preferences as a feature of economic rationality, but I've never seen a prescriptive argument even attempted for this feature. The prescriptive cases for transitivity and for acyclicity seem to presume an absence of conflicting, prior meta-preferences. The prescriptive argument for expected-utility maximization is especially problematic.

[2] While I have problems with some of the work and with much of the rhetoric of Gerd Gigerenzer, he has ably identified important cases of such failure on the part of researchers.

On Taking the Law into One's Own Hands

17 May 2021

In almost every instance in which the admonition Don't take the law into your own hands! is used, the intention is that one should defer to some other party. But there are various parties to whom one could defer, some of them rival. A choice to defer at all is itself a choice about what is the law and implicitly about how it should be applied. In choosing to defer to one of these parties, rather than to another, one has already taken the law into one's own hands, if only then to let it go. A person is always responsible for such choices. Sometimes, deference is a very appropriate choice, and perhaps even the only appropriate choice, but one is responsible for choosing when and to whom to defer. The only way that a person could perhaps not at all take the law into his-or-her own hands would be in utter passivity — not even acting to draw some other party into the situation as giver or enforcer of law. And, still, to choose passivity would be a choice, and sometimes a morally unacceptable choice.

Those who insist that we should not take the law into our own hands almost always intend that we should defer to those with the most social power concerning law. Various concerns might motivate that intention, but most often the admonition comes from members of that group (state officials), or from people who take it that the social power somehow arises from virtue of some sort, or from those who believe that the only alternative to deferring to those with the most social power is so obviously barbarism that no argument need be made. If a reader believes that I need to critique any of these cases, then he-or-she should comment below to that effect.