Archive for the ‘philosophy’ Category
To my chagrin, I find that I made a transcription error for an axiom in
Formal Qualitative Probability. More specifically, I placed a quantification in the wrong place in Axiom (A6). I've corrected this error in the working version.
Reading a book first published in 1951, I am reminded that, at one time, the definition of
humanities included sciences of human behavior within its scope. Now, one seldom encounters that inclusion in contemporary use, and the Merriam-Webster Dictionary explicitly excludes the study of social relations (though it says nothing explicit about that part of behavior outside of the social).
In the earlier period, there was a question of whether the study of human behavior were fundamentally different from the study of the properties of other things. Those who insisted upon such a difference would speak and write of
science and the humanities as if of two separate things.
But the tools by which the physical, biological, and behavioral sciences were studied were increasingly shared. The physical and biological sciences took-up probability and statistics; the biological sciences have taken-up chemistry, mechanics, and game theory; the behavioral sciences have taken-up biological explanation and mathematical modelling. All have been affected by the same philosophic theories of method. A dichotomy of science and the humanities cannot prevail so long as the behavioral sciences are included amongst what are called sciences.
Apparently that dichotomy was so dear to some of those who insisted upon it that they attempted its preservation by implicitly changing what they intended with
humanities in order to hold fast to it. Of course, the newer definition doesn't maintain the original dichotomy, but replaces it with a new one.
As repeatedly noted by me and by many others, there are multiple theories about the fundamental notion of probability, including (though not restricted to) the notion of probabilities as objective, logical relationships amongst propositions and that of probabilities as degrees of belief.
Though those two notions are distinct, subscribers to each typically agree with subscribers to the other upon a great deal of the axiomatic structure of the logic of probability. Further, in practice the main-stream of the first group and that of the second group both arrive at their estimates of measures of probability by adjusting initial values through repeated application, as observations accumulate, of a principle known as
Bayes' theorem. Indeed, the main-stream of one group are called
objective Bayesian and the main-stream of the other are often called
subjective Bayesian. Where the two main-streams differ in practice is in the source of those initial values.
The objective Bayesians believe that, in the absence of information, one begins with what are called
non-informative priors. This notion is evolved from the classical idea of a principle of insufficient reason, which said that one should assign equal probabilities to events or to propositions, in the absence of a reason for assigning different probabilities. (For example, begin by assuming that a die is
fair.) The objective Bayesians attempt to be more shrewd than the classical theorists, but will often admit that in some cases non-informative priors cannot be found because of a lack of understanding of how to divide the possibilities (in some cases because of complexity).
The subjective Bayesians believe that one may use as a prior whatever initial degree of belief one has, measured on an interval from 0 through 1. As measures of probability are taken to be degrees of belief, any application of Bayes' theorem that results in a new value is supposed to result in a new degree of belief.
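The updating machinery that the two main-streams share can be sketched in a few lines. The numbers below are hypothetical: the prior 0.5 might be a subjectivist's degree of belief, or an objectivist's non-informative prior over two hypotheses about a coin.

```python
# A minimal sketch of the updating shared by objective and subjective
# Bayesians.  All figures here are hypothetical illustrations.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Hypothesis H: the coin lands heads with probability 0.8 (vs. a fair 0.5).
prior = 0.5
for _ in range(3):            # observe three heads in a row
    prior = bayes_update(prior, 0.8, 0.5)

print(round(prior, 4))        # → 0.8038
```

Where the schools differ is only in how the initial value of `prior` is justified; the arithmetic of revision is the same for both.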
I want to suggest what I think to be a new school of thought, with a Bayesian sub-school, not-withstanding that I have no intention of joining this school.
If a set of things is completely ranked, it's possible to proxy that ranking with a quantification, such that if one thing has a higher rank than another then it is assigned a greater quantification, and that if two things have the same rank then they are assigned the same quantification. If all that we have is a ranking, with no further stipulations, then there will be infinitely many possible quantifications that will work as proxies. Often, we may want to tighten-up the rules of quantification (for example, by requiring that all quantities be in the interval from 0 through 1), and yet still it may be the case that infinitely many quantifications would work equally well as proxies.
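The point about proxies can be made concrete with a small sketch; the items, ranks, and quantifications below are hypothetical.

```python
# A sketch of the point above: any order-preserving assignment of numbers
# serves as a proxy for a complete ranking, so many distinct proxies exist.

ranking = {"a": 1, "b": 2, "c": 2, "d": 3}   # 1 = lowest rank; b and c tie

def is_proxy(quantification, ranking):
    """True iff the quantification preserves the ranking exactly."""
    for x in ranking:
        for y in ranking:
            if ranking[x] < ranking[y] and not quantification[x] < quantification[y]:
                return False
            if ranking[x] == ranking[y] and quantification[x] != quantification[y]:
                return False
    return True

# Two different quantifications, both confined to the interval from 0
# through 1, proxy the same ranking equally well.
q1 = {"a": 0.1, "b": 0.5, "c": 0.5, "d": 0.9}
q2 = {"a": 0.2, "b": 0.3, "c": 0.3, "d": 1.0}
print(is_proxy(q1, ranking), is_proxy(q2, ranking))   # True True
```

Nothing singles out `q1` over `q2` as the quantification; each is merely one admissible proxy among infinitely many.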
Sets of measures of probability may be considered as proxies for underlying rankings of propositions or of events by probability. The principles to which most theorists agree when they consider probability rankings as such constrain the sets of possible measures, but so long as only a finite set of propositions or of events is under consideration, there are infinitely many sets of measures that will work as proxies.
A subjectivist feels free to use his or her degrees of belief so long as they fit the constraints, even though someone else may have a different set of degrees of belief that also fit the constraints. However, the argument for the admissibility of the subjectivist's own set of degrees of belief is not that it is believed; the argument is that one's own set of degrees of belief fits the constraints. Belief as such is irrelevant. It might be that one's own belief is colored by private information, but then the argument is not that one believes the private information, but that the information as such is relevant (as indeed it might be); and there would always be some other sets of measures that also conformed to the private information.
Perhaps one might as well use one's own set of degrees of belief, but one also might every bit as well use any conforming set of measures.
So what I now suggest is what I call a
libertine school, which regards measures of probability as proxies for probability rankings and which accepts any set of measures that conform to what is known of the probability ranking of propositions or of events, regardless of whether these measures are thought to be the degrees of belief of anyone, and without any concern that these should become the degrees of belief of anyone; and in particular I suggest
libertine Bayesianism, which accepts the analytic principles common to the objective Bayesians and to the subjective Bayesians, but which will allow any set of priors that conforms to those principles.
 So great a share of subjectivists subscribe to a Bayesian principle of updating that often the subjective Bayesians are simply called
subjectivists as if there were no need to distinguish amongst subjectivists. And, until relatively recently, so little recognition was given to the objective Bayesians that
Bayesian was often taken as synonymous with subjectivist.
In nearly every election of a state official, even those in which only a few hundred voters participate, the margin of victory is more than one vote. What that means is that, if any one voter had refrained from voting, or if any one abstaining voter had not abstained, the candidate who won would still have won. Some people — even some very intelligent people — conclude that the vote of an individual has no efficacy beyond that of other acts of expression. Those people are missing something.
Indeed, one's vote or refusal to vote has absolutely no effect on the election at hand. There are various things that one can do prior to the election which may help one candidate to achieve a margin of victory, or prevent another from achieving such a margin. But one's own vote isn't going to make any difference in that election.
However, as potential candidates and parties decide what to do with future elections in mind, they look at margins of victory in past elections. Potential candidates decide whether to run and, if they choose to run, how to position themselves, informed by those margins. Parties decide their platforms and whom to nominate, informed by those margins. With large margins in their favor, they feel free to alienate a greater number of potential voters; with small margins or losing margins, they consider what to do differently in order to pull voters who previously voted for another, or who didn't vote at all.
Thus, an individual vote or the decision not to vote has a small effect — but its only effect — on later elections and on the behavior of those who act with concern for later elections.
The least effective thing that a potential voter can do is to vote for a candidate whom he or she dislikes. People in America who have held their noses to vote for the Democratic or Republican nominee in order to stop the nominee of the other party did worse than to throw-away their votes; they have helped to ensure that the next pair of choices would likewise be disagreeable, and that the behavior of officials in the mean time would likewise be disagreeable. It is only if one genuinely thought that one of these candidates were worthwhile that one should have voted for him or for her, and then still only to affect the next election and interim behavior of officials.
The most effective thing that a potential voter as such can do is to vote for a candidate of whom that voter approves, even if that candidate has no chance of winning, or to submit a ballot from which no candidate receives a vote. An increasing number of people are doing the latter, either in expression that no candidate is worthy, or to challenge the legitimacy of the process in a way that makes it difficult for these people to be dismissed as apathetic by apologists for the process.
The way in which the political left conceptualizes an economy is a variation on how technocrats more generally conceptualize it. The left imagines the economy as possessing a kernel of processes that take inputs and produce outputs based upon purely technologic considerations. What distinguishes these processes as a kernel is that they are jointly self-sustaining; setting aside natural resources, the kernel produces everything necessary to maintain itself.
Depending upon technology, the kernel may do nothing more than to sustain itself. The left often imagines an economy that does nothing more as a subsistence economy; but, as a matter of logic, they might imagine an economy as technologically constrained to produce exactly what it does to continue replicating itself, yet providing a fairly high standard of living. In any case, they more often imagine the kernel as producing a surplus, which is to say production above and beyond that necessary to sustain the kernel. Allocation and composition of the surplus is imagined to be determined not just by technologic considerations, but also by social power.
This is why the left often does believe, and still more frequently seems to believe, that economics is a zero-sum game; they believe that for some people to get more, they must either leave less of the surplus for others or, still worse, must reduce the kernel. Because performance of the kernel is imagined to be determined purely by technologic factors, while it may be acknowledged that in our world resources have been priced largely by markets and hence inputs have been determined largely by markets, it is believed that, ultimately, the markets had little real choice; that they had to settle on relative prices that simply conformed to technologic considerations. The imagined kernel is as if an inflexible machine, however complex it may be. It is only pricing of commodities within the surplus that is imagined to be flexible.
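The conception described above can be rendered as a toy numerical sketch in the style of a Leontief input-output model; all figures are hypothetical and nothing here endorses the conception itself.

```python
# A toy rendering of the "kernel" conception.  A[i][j] is the amount of
# good i consumed in producing one unit of good j; x[j] is gross output
# of good j.  The kernel is self-sustaining when gross output covers the
# inputs used up (x >= A x, component-wise); the surplus is the remainder.

def surplus(A, x):
    n = len(x)
    used = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [x[i] - used[i] for i in range(n)]

A = [[0.2, 0.3],      # hypothetical input coefficients
     [0.4, 0.1]]
x = [100.0, 80.0]     # hypothetical gross outputs

s = surplus(A, x)
print(s)                          # per-good surplus
print(all(v >= 0 for v in s))     # self-sustaining?
```

In this picture the coefficients in `A` are fixed by technology, so only the disposal of `s` remains to be argued over — which is exactly the zero-sum intuition criticized above.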
The lock-downs that have been the political response to the SARS-CoV-2 pandemic are variously imagined either as shutting down production outside the kernel, with economic activity labelled as
essential continuing, or indeed as going further to shut-down much of the kernel itself. As the lock-downs come to an end, it will be expected by many — including many not on the political left — that the economy will pick-up at about where it was before the lock-downs. If one imagines the proper inputs to each part of the kernel (or of the economy more generally) as technologically determined, then restarting the economy is a simple matter of resuming those proper inputs. If the kernel is believed to have been kept in operation, then what remains is again to allocate the surplus roughly as it was, or (in keeping with left-wing values) with a greater share given to those who are not wealthy.
The economy will not pick-up where it left-off, because the technocratic conception in general and the left-wing conception in particular are so terribly wrong. But the political left will diagnose the failure to restore the economy quickly as
a failure of capitalism — either to solve a problem of technologic programming or to produce a
socially just or
fair division of the surplus. And, so, they will demand that the state become further involved, to take greater command of those industries that they regard as within the kernel, to strengthen worker unions, to establish floors on wages and both floors and ceilings on salaries, and to redistribute income through transfer programmes.
An die Spitze der Erörterung dieses vielberufenen Begriffes sollte gestellt werden, dass es Einheiten des Wertes giebt, dass man also untersuchen kann, wievielmal so gross ein Wert als ein anderer ist und Güter gleichen Wertes durch einander ersetzen kann, dass also der Wert ein eigentliches in einer Kardinalzahl ausdrückbares Mass hat.
which may be translated as
At the forefront of discussion of this much-used concept should be placed that there are units of value, that one thus can investigate how many times as large one value is as another and can replace goods of the same value with each other, that thus the value has a real measure expressible in a cardinal number.
I'll deal first with the claim that one can investigate how many times as large one value is as another.
Numbers are used in many ways. Depending upon the use, what is revealed by arithmetic may be a great deal or very little. Sometimes numbers are ascribed with so little meaning that we may as well consider them strings of numerals, the characters that we use for numbers, and not numbers at all. Sometimes numbers do nothing but provide an arbitrary order, good for something such as a look-up table but nothing else. Sometimes they provide a meaningful order, but one in which the results of most arithmetic operations are meaningless, as when items produced at irregular intervals are given sequential serial numbers. (The difference between any two such numbers tells one which was produced before the other, but little else.) Sometimes the differences between the differences are meaningful, as when items are produced at regular intervals and given sequential serial numbers. And so forth.
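The serial-number case can be sketched directly; the production times and serial numbers below are hypothetical.

```python
# A sketch of the point above: when numbers carry only order, they
# preserve every comparison, but their differences carry no information.

production_hour = {"A": 1, "B": 2, "C": 10}        # irregular intervals
serial          = {"A": 100, "B": 101, "C": 102}   # sequential serial numbers

# The serial numbers preserve the order of production...
order_by_time   = sorted(production_hour, key=production_hour.get)
order_by_serial = sorted(serial, key=serial.get)
print(order_by_time == order_by_serial)            # True

# ...but their differences say nothing about elapsed time:
print(serial["C"] - serial["B"], "vs",
      production_hour["C"] - production_hour["B"])  # 1 vs 8
```

Only if the items had been produced at regular intervals would subtraction of serial numbers track elapsed time; the numerals are the same, but the licensed arithmetic differs.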
Monetary prices are quantities, but they are more specifically quantities of money; that does not make them quantities of value nor proxies of quantities of value. One would have to show that the results of every arithmetic operation on such a quantity of money said something about value for it to be shown that value were itself a quantity.
The second part of Voigt's claim is that one
Güter gleichen Wertes durch einander ersetzen kann [
can replace goods of the same value with each other]. But an equivalence between things corresponding to the same numbers doesn't make results of the application of arithmetic to those numbers meaningful. (Consider lots of items produced at irregular intervals, with each item in the lot given the same serial number, unique to the lot but otherwise random.) And we should ask ourselves under just what circumstances we can and cannot ersetzen one set of commodities of a given price with another of the same price.
Nor does somehow combining the use of quantities of money for prices with a property of equivalence imply that value is a quantity.
Voigt is unusual not in making this unwarranted inference, but in so clearly expressing himself as he does. From the observation that prices are usually quantities of something, which quantities increase as value increases, most people, and even most economists, blithely infer that value itself behaves as a quantity.
In early 2013, I made freely available a transcription of Zahl und Mass in der Ökonomik: Eine kritische Untersuchung der mathematischen Methode und der mathematischen Preistheorie (1893) by Andreas Heinrich Voigt. I have to-day completed a first pass of a translation of this as Number and Measure in Economics: A Critical Examination of Mathematical Method and of Mathematical Price Theory. Although I believe that there are many errors to be corrected in that translation, I am making it available. I do not plan to use a different URI for corrected versions.
I have been very disappointed by my reading of Voigt's article. I regard it as containing more error than insight.
In the course of translation, I found and corrected extremely minor errors in the transcription of the original. A name was at one point misspelled by me, and I failed to capitalize a word beginning a sentence. I also marked a
die die as questionable, which I've since concluded was deliberate. I do not believe that anyone could have been led to a mistaken reading as a result of those errors, but I have naturally corrected them.
I may change the URI for the transcription, moving it from another domain to place it amongst the uploads for this 'blog. If so, then I will edit entries to reflect that change.
On 20 February 2020, a year to-the-day after I submitted my paper
Formal Qualitative Probability to The Review of Symbolic Logic and nearly five months after I was notified that a revised version had been accepted, Cambridge University Press published the manuscript on-line.
(I believe that an unchanging DOI
10.1017/S1755020319000480 will be used for whatever is the latest version of the article, as it is type-set for paper publication and eventually assigned to a specific issue.)
This work was badly treated across journals of philosophy. Regardless of whether any of my future work is perhaps best regarded as philosophic, I will henceforth avoid submitting to such journals.