Archive for the ‘epistemology’ Category

Rôles of Prescriptive Models in Economics

Sunday, 30 May 2021

In introductory treatments of economics, one often encounters a distinction drawn between what is called positive economics and what is called normative economics. In these names — and in typical discussion — there are problems.

The meaning of positive here is restricted to fact, as opposed to speculation. Now, on the one hand, supposedly positive economics, like all attempts by human beings to understand the world, is permeated by speculations, which in scientific effort are hypotheses. (The philosophic movement called positivism arose with incompetent aspirations.) On the other hand, contrasting the normative with something called positive entails an implication, insinuation, or declaration that the normative cannot be placed on as solid a foundation as the rest of our understanding. Sometimes a lack of present agreement is treated as if proof that there is no objective ethical truth; sometimes the question is just begged. In any case, the distinction is irrational.

Using the terms descriptive and prescriptive instead steps away from the worst aspect of using positive, though it would be less corrosive to refer to non-prescriptive economics as, well, non-prescriptive or as non-normative.

However, in behavioral science, elements drawn from prescriptive theory are often useful non-prescriptively, either as approximations or as bounding cases. Economic rationality and expected-utility maximization (the latter sometimes conflated with the former) are such elements.

Some economists would not even recognize economic rationality or expected-utility maximization as prescriptive in any case, because they are meta-preferential — they express a preference for structures of preference that have ordering properties such as transitivity and acyclicity, but say nothing about ultimate objectives and thus, in themselves, say nothing about whether one should prefer tomatoes to apples or life over death.

The prescriptive arguments for economic rationality and for expected-utility maximization are to the effect that those who conform realize more of their objectives — regardless of what those objectives might be — than those who do not, with it usually treated as tautologic that one desires such maximization.[1]

The non-prescriptive arguments for economic rationality and for expected-utility maximization as approximations note that these are relatively tractable models of behavior for which evolutionary dynamics will select. Because the models are taken from prescriptive work, some people mistake or misrepresent any use of them as necessarily prescriptive, but the claim is neither that social or other biologic evolution ought to select for something approximated by such behavior nor that agents ought to engage in the behavior for which evolution selects. (If anything, what is illuminated is that evolution selects for a propensity to such prescriptions!)

I endorse use of these models as tractable approximations in many cases, but I also embrace use of a weaker notion of economic rationality as a bounding case. A boundary of economic outcomes is given by considering what those outcomes would be were agents economically rational.

Behavioral economics concerns itself with when-and-how people actually behave, and especially with failures of the aforementioned models. Although this research is not what I do, I acknowledge its value. However, a great deal of what passes for behavioral economics involves an inferential leap from identifying a real or apparent deviation of behavior from one of these models to a conclusion that this-or-that result could be obtained by state intervention, with the researcher looking away from any proper examination of the behavior of agents determining practices of the state. Behavioral economics is thus used as the motte for a statist bailey. Additionally, even behavioral researchers with no apparent statist agenda often fail to recognize when behavior that seems at odds with these models is or may be instead at odds with some presumption of the researcher.[2]


[1] The main-stream of economic theory treats completeness of preferences as a feature of economic rationality, but I've never seen a prescriptive argument even attempted for this feature. The prescriptive cases for transitivity and for acyclicity seem to presume an absence of conflicting, prior meta-preferences. The prescriptive argument for expected-utility maximization is especially problematic.

[2] While I have problems with some of the work and with much of the rhetoric of Gerd Gigerenzer, he has ably identified important cases of such failure on the part of researchers.

Nick Hudson on SARS-CoV-2 and the Policy Response

Thursday, 1 April 2021

Alphabet has removed this video from YouTube:

The Paradox of Shadows

Friday, 5 March 2021

For many years, one of the projects on a back-burner in my mind has been the writing of a novel, A Paradox of Shadows, in which the principal character is attempting to reconstruct or to otherwise recover an ancient work, the title of which might have been περὶ τοῦ ἀτόπου τοῦ τῶν σκιῶν, or De [Anomalia de] Obumbratio, or perhaps something else.

Everything about the work is a matter of doubt or of conjecture, including its author and the era and language in which it were originally composed; even that there ever were such a work is uncertain. Its existence is primarily inferred from how parts of it seem to be esoterically embedded in other works; sometimes these passages can be made to fit together like bits of a jigsaw, but different ways of fitting are possible, especially allowing for lacunæ, interpolations, and unintended errors in translation or in transcription. In ancient art and literature are found what may be other references to the work, but these apparent references are subject to alternate interpretation, especially as many of them would be quite oblique if indeed they refer to the work. The search is largely a matter of poring over old manuscripts and documents.

No rational person would look at any one piece of evidence known to the main character and conclude that the work must have existed or just probably existed. Few would take the evidence as jointly establishing such a probability. There is both too much and too little information, so that bold intellectual leaps must be made in chaos or in darkness. A searcher may encounter unscalable cliffs or unbridgeable chasms; and, if forced to stop at any point, one is likely to look pathetic. But the evidence, taken jointly, associates a relative plausibility of recovery with each of various possibilities as to the nature of the work. In that context, the possible profundity is enough to drive the search by the principal character, even with likely failure.

Tiny Spaces

Wednesday, 20 January 2021

Famously, the Euclidean axiomata for space seemed necessary to many, so that various philosophers concluded or argued that some knowledge or something playing a rôle like that of knowledge derived from something other than experience. Yet there were doubters of one of these axiomata — that parallel lines would never intersect — and eventually physicists concluded that the universe would be better described were this axiom regarded as incorrect. Once one axiom was abandoned, the presumption of necessity of the others evaporated.

I think that our concept of space is built upon an experience of an object sometimes affecting another in ways that it sometimes does not, with the first being classified as near when it does and not near when it does not; which ways are associated in the concept of near-ness are selected by experience. The concept of distance — variability of near-ness — develops from the variability of how one object affects another; and it is experience that selects which variabilities are associated with distance. Our concept of space is that of potential (realized or not) of near-ness.

The axiomata of Euclid were, implicitly, an attempted codification of observed properties of distance; in the adoption of this codification or of another, one might revise which variabilities one associated with distance. One might, in fact, hold onto those axiomata exactly by revising which variabilities are associated with distance. In saying that space is non-Euclidean, one ought to mean that the Euclidean axiomata are not the best suited to physics.

Just as the axiomata of Euclid become ill-suited to physics when distances become very large, they may be ill-suited when distances become very small.

Space might not even be divisible without limit. The mathematical construct of continuity may not apply to the physical world. At least some physical quantities that were once imagined potentially to have measures corresponding to any real number are now regarded as having measures corresponding only to integer multiples of quanta; perhaps distance cannot be reduced below some minimum.

And, at some sub-atomic level, any useable rules of distance might be more complex. On a larger scale, non-Euclidean spaces are sometimes imagined to have worm-holes, which is really to say that some spaces would have near-ness by peculiar paths. Perhaps worm-holes or some discontinuous analogue thereöf are pervasive at a sub-atomic level, making space into something of a rat's nest.

Humpty Dumpty and Commerce

Thursday, 7 January 2021

Fairly inexpensive hair combs made of hard rubber — rubber vulcanized to a state in which it is about as firm as a modern plastic — could be found in most American drugstores at least into the mid-'90s. Now-a-days, they have become something of a premium item. I was looking at listings on Amazon supposedly of hard rubber combs and discovered, to my annoyance, that a careful reading of the descriptions showed that most of the combs explicitly described as hard rubber were made of plastic. To me, the situation seemed to be one of pervasive fraud, as it will to many others.

But then I realized that it is more likely to be something else. Fraud, after all, involves deliberate misrepresentation. Whereäs we live in a world in which a great many people believe that no use of a word or phrase is objectively improper — that if they think that hard rubber means a rubbery plastic or a plastic that looks like another substance called hard rubber, then it indeed means just that. (Of course, we cannot trust any verbal explanation from them of these idiosyncratic meanings, as they may be assigning different meanings to any words with which they define other words.)

My defense of linguistic prescriptivism has for the most part been driven by concerns other than those immediate to commercial transactions. And, when I've seen such things on eBay as items described with mint condition for its age or with draped nude, my inclination has been merely to groan or to laugh. But it seems to me that the effects of ignoring or of rejecting linguistic prescription have found their way into commercial transactions beyond the casual.

Well, those who are not prescriptivists are hypocrites if they complain, and they're getting no worse than they deserve.

Perverted Locusts

Wednesday, 9 December 2020

Those who support locking-down in response to SARS-CoV-2 are like weird locusts. Instead of eating the crops, these locusts prevent growth and harvest. That is to say that they prevent economic activity, which is an implicit consumption of an especially perverse sort. In any case, they leave despair and literal starvation in their wake.

Transcription Error

Monday, 23 November 2020

To my chagrin, I find that I made a transcription error for an axiom in Formal Qualitative Probability. More specifically, I placed a quantification in the wrong place. Axiom (A6) should read [image of formula]. I've corrected this error in the working version.

Missed Article

Saturday, 21 November 2020

I found an article that, had I known of it, I would have noted in my probability paper: A Logic of Comparative Support: Qualitative Conditional Probability Relations Represented by Popper Functions, by James Allen Hawthorne, in The Oxford Handbook of Probability and Philosophy, edited by Alan Hájek and Christopher Hitchcock.

Professor Hawthorne adopts essentially unchanged most of Koopman's axiomata from The Axioms and Algebra of Intuitive Probability, but sets aside Koopman's axiom of Subdivision, noting that it may not seem as intuitively compelling as the others. In my own paper, I showed that Koopman's axiom of Subdivision was a theorem of a much simpler, more general principle in combination with an axiom that is equivalent to two of the axiomata in Koopman's later revision of his system. (The article containing that revision is not listed in Hawthorne's bibliography.) I provided less radically simpler alternatives to other axiomata, and included axiomata that did not apply to Koopman's purposes in his paper but did to the purposes of a general theory of decision-making.

Lack of Infrastructure

Saturday, 26 September 2020
[panel from Kirakira • Sutadī — Zettai Gokaku Sengen by Hanabana Tsubomi in which three students react with dismay at something given to them by a fourth student.  One dismayed student declares 'What is this…?!  This is absolutely filled with symbols I've never seen before…'  Another cries 'I don't even understand what the questions are asking…!!']
(from KiraKira★Study by Hanabana Tsubomi, v 2 ch 18)

My work and the problems that most interest me are difficult to discuss with friends and even with colleagues because so much infrastructure is unfamiliar to them.

Libertine Bayesianism

Thursday, 24 September 2020

As repeatedly noted by me and by many others, there are multiple theories about the fundamental notion of probability, including (though not restricted to) the notion of probabilities as objective, logical relationships amongst propositions and that of probabilities as degrees of belief.

Though those two notions are distinct, subscribers to each typically agree with subscribers to the other upon a great deal of the axiomatic structure of the logic of probability. Further, in practice the main-stream of the first group and that of the second group both arrive at their estimates of measures of probability by adjusting initial values through repeated application, as observations accumulate, of a principle known as Bayes' theorem. Indeed, the main-stream of one group are called objective Bayesian and the main-stream of the other are often called subjective Bayesian.[1] Where the two main-streams differ in practice is in the source of those initial values.

The objective Bayesians believe that, in the absence of information, one begins with what are called non-informative priors. This notion evolved from the classical idea of a principle of insufficient reason, which said that one should assign equal probabilities to events or to propositions, in the absence of a reason for assigning different probabilities. (For example, begin by assuming that a die is fair.) The objective Bayesians attempt to be more shrewd than the classical theorists, but will often admit that in some cases non-informative priors cannot be found because of a lack of understanding of how to divide the possibilities (in some cases because of complexity).
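The updating itself can be made concrete with a small sketch. Below, two hypotheses about a die (fair, or loaded so that a six comes up half the time) start from equal priors, per the principle of insufficient reason, and a single observed six revises them via Bayes' theorem. The hypotheses and likelihoods are my own illustrative assumptions, not drawn from any particular author.

```python
# A minimal sketch of Bayesian updating from equal priors.
# The hypotheses and likelihoods here are illustrative assumptions.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities via Bayes' theorem."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # probability of the observation
    return [j / total for j in joint]

# Two hypotheses about a die: fair, or loaded so that a six
# comes up half the time.  Insufficient reason: equal priors.
priors = [0.5, 0.5]

# Probability of rolling a six under each hypothesis.
likelihood_of_six = [1 / 6, 1 / 2]

posterior = bayes_update(priors, likelihood_of_six)
print(posterior)  # approximately [0.25, 0.75]: the loaded hypothesis gains credence
```

The posterior here would in turn serve as the prior for the next observation, which is the sense in which both main-streams adjust initial values as observations accumulate.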

The subjective Bayesians believe that one may use as a prior whatever initial degree of belief one has, measured on an interval from 0 through 1. As measures of probability are taken to be degrees of belief, any application of Bayes' theorem that results in a new value is supposed to result in a new degree of belief.

I want to suggest what I think to be a new school of thought, with a Bayesian sub-school, not-withstanding that I have no intention of joining this school.

If a set of things is completely ranked, it's possible to proxy that ranking with a quantification, such that if one thing has a higher rank than another then it is assigned a greater quantification, and that if two things have the same rank then they are assigned the same quantification. If all that we have is a ranking, with no further stipulations, then there will be infinitely many possible quantifications that will work as proxies. Often, we may want to tighten-up the rules of quantification (for example, by requiring that all quantities be in the interval from 0 through 1), and yet still it may be the case that infinitely many quantifications would work equally well as proxies.
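As a sketch of this point (the items, ranks, and numbers below are arbitrary illustrations of my own), here are two different quantifications, both confined to the interval from 0 through 1, that serve equally well as proxies for one and the same complete ranking:

```python
# Quantifications as proxies for a complete ranking.
# Items, ranks, and numbers are arbitrary illustrations.

rank = {'a': 0, 'b': 1, 'c': 1, 'd': 2}  # a complete ranking; b and c tie

def is_proxy(quantify, items, rank):
    """Check that quantify preserves the ranking's order and its ties."""
    return all(
        (rank[x] < rank[y]) == (quantify[x] < quantify[y]) and
        (rank[x] == rank[y]) == (quantify[x] == quantify[y])
        for x in items for y in items
    )

items = list(rank)

# Two different quantifications, each confined to [0, 1]; each works
# equally well as a proxy, and infinitely many others would also work.
q1 = {'a': 0.0, 'b': 0.5, 'c': 0.5, 'd': 1.0}
q2 = {'a': 0.1, 'b': 0.2, 'c': 0.2, 'd': 0.9}

print(is_proxy(q1, items, rank), is_proxy(q2, items, rank))  # True True
```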

Sets of measures of probability may be considered as proxies for underlying rankings of propositions or of events by probability. The principles to which most theorists agree when they consider probability rankings as such constrain the sets of possible measures, but so long as only a finite set of propositions or of events is under consideration, there are infinitely many sets of measures that will work as proxies.

A subjectivist feels free to use his or her degrees of belief so long as they fit the constraints, even though someone else may have a different set of degrees of belief that also fit the constraints. However, the argument for the admissibility of the subjectivist's own set of degrees of belief is not that it is believed; the argument is that one's own set of degrees of belief fits the constraints. Belief as such is irrelevant. It might be that one's own belief is colored by private information, but then the argument is not that one believes the private information, but that the information as such is relevant (as indeed it might be); and there would always be some other sets of measures that also conformed to the private information.

Perhaps one might as well use one's own set of degrees of belief, but one also might every bit as well use any conforming set of measures.

So what I now suggest is what I call a libertine school, which regards measures of probability as proxies for probability rankings and which accepts any set of measures that conform to what is known of the probability ranking of propositions or of events, regardless of whether these measures are thought to be the degrees of belief of anyone, and without any concern that these should become the degrees of belief of anyone; and in particular I suggest libertine Bayesianism, which accepts the analytic principles common to the objective Bayesians and to the subjective Bayesians, but which will allow any set of priors that conforms to those principles.


[1] So great a share of subjectivists subscribe to a Bayesian principle of updating that often the subjective Bayesians are simply called subjectivists as if there were no need to distinguish amongst subjectivists. And, until relatively recently, so little recognition was given to the objective Bayesians that Bayesian was often taken as synonymous with subjectivist.