## Posts Tagged ‘utility’

### l'usage

Thursday, 3 January 2019

In the course of a present investigation of how the main-stream of economics lost sight of the general concept of utility, I looked again at the celebrated article Specimen Theoriæ Novæ de Mensura Sortis by Daniel Bernoulli, in which he proposed to resolve the Saint Petersburg Paradox[1] by revaluing the pay-off in terms of something other than the quantity of money.

The standard translation of his article into English[2] replaces Latin emolument- everywhere with utility, but emolumentum actually meant benefit.[3] Bernoulli's own words in his original paper show no more than that he thought that the actual marginal benefit of money were for some reason diminishing as the quantity of money were increased. However, before Bernoulli arrived at his resolution, Gabriel Cramer had arrived at a resolution with similar characteristics; and, when Bernoulli later learned of it, he quoted Cramer. Cramer declared that money was properly valued à proportion de l'uſage [in proportion to the usage]. The term uſage itself carries exactly the original sense of utility. (Cramer goes on to associate the usefulness of money with plaiſir [pleasure], but does not make it clear whether he has a purely hedonic notion of usefulness.) Bernoulli did not distinguish his position from that of Cramer on this point, so it is perfectly reasonable to read Bernoulli as having regarded the actual gain from money as measured by its usefulness.

Of course, both Cramer and Bernoulli were presuming that usefulness were a measure, rather than a preördering of some other sort.

[1] The classic version of the Saint Petersburg Paradox imagines a gamble. A coin whose probability of heads is equal to that of tails is to be flipped until it comes-up tails; thus, the chance of the gamble ending on the n-th toss is 1/2^n. Initially, the pay-off is 2 ducats, but this is doubled after each time that the coin comes-up heads; if the coin first comes up tails on the n-th flip, then the pay-off of the gamble will be 2^n ducats. So the expected pay-off of the gamble is ∑[(1/2^n)·(2^n ducats)] = 1 ducat + 1 ducat + … = ∞ ducats. Yet one never sees people buying such contracts for very much; and most people, asked to imagine how much they would pay, say that they wouldn't offer very much.
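The divergence of that sum can be exhibited with a small computation; here is a minimal sketch in Python (the truncation to a finite number of flips is mine, purely for illustration):

```python
from fractions import Fraction

def truncated_expectation(max_flips):
    """Expected pay-off of the Saint Petersburg gamble, counting only
    games that end within max_flips tosses of the coin."""
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, max_flips + 1))

# Each additional permitted toss contributes exactly 1 ducat,
# so the expectation grows without bound as max_flips does.
print(truncated_expectation(10))    # 10
print(truncated_expectation(1000))  # 1000
```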

Cramer's resolution did not account for the preëxisting wealth of an individual offered a gamble; he suggested that the usefulness of money might be measured as a square root of the quantity of money. Bernoulli's resolution did account for preëxisting wealth, and suggested that the actual benefit of money were measurable as a natural logarithm.
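Either resolution yields a finite valuation of the gamble. A sketch in Python (the wealth level and the cut-off on the number of flips are mine, purely for illustration) shows the certainty-equivalents implied by Cramer's square root and by Bernoulli's natural logarithm:

```python
import math

def cramer_value(max_flips=200):
    # Expected square-root usefulness: the sum of (1/2^n)·sqrt(2^n).
    eu = sum(2**-n * math.sqrt(2**n) for n in range(1, max_flips + 1))
    return eu**2  # the sure quantity of ducats with the same usefulness

def bernoulli_value(wealth, max_flips=200):
    # Expected gain in logarithmic benefit, given preëxisting wealth.
    eu = sum(2**-n * (math.log(wealth + 2**n) - math.log(wealth))
             for n in range(1, max_flips + 1))
    return wealth * (math.exp(eu) - 1)  # the equivalent sure gain

print(round(cramer_value(), 2))       # about 5.83 ducats
print(round(bernoulli_value(100), 2)) # a modest sum, rising with wealth
```

Note that Bernoulli's valuation, unlike Cramer's, changes with the preëxisting wealth of the person offered the gamble.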

I'm amongst those who note that one cannot buy that which is not sold, and who believe that people asked to imagine what they would pay for such a contract instead imagine what they would pay for what were represented as such a contract, which could not possibly deliver astonishingly large amounts of purchasing power.

[3] In a footnote, translator Louise Sommer claims that mean utility is a free translation of emolumentum medium and then that the literal translation would be mean utility; I believe that she had meant to offer something else as the literal translation, but lost her train of thought.

### Strong Independence in Decision Theory

Thursday, 21 July 2016

In the course of some remarks on Subjective Probability by Richard C. Jeffrey, and later in defending a claim by Gary Stanley Becker, I have previously given some explanation of the model of expected-utility maximization and of axiomata of independence.

Models of expected-utility maximization are so intuïtively appealing to some people that they take one of these models to be peculiarly rational, and deviations from any such model thus to be irrational. I note that the author of a popular 'blog seems to have done just that, yester-day.[0]

My own work shows that quantities cannot be fitted to preferences, which pulls the rug from under expected-utility maximization, but there are other problems as well. The paradox that the 'blogger explores represents a violation of the strong independence axiom. What I want to do here is first to explain again expected-utility maximization, and then to show that the strong independence axiom violates rationality.

A mathematical expectation is what people often mean when they say average — a probability-weighted sum of measures of possible outcomes. For example, when a meteorologist gives an expected rainfall or an expected temperature for to-morrow, she isn't actually telling you to anticipate exactly that rainfall or exactly that temperature; she's telling you that, given observed conditions to-day, the probability distribution for to-morrow has a particular mean quantity of rain or a particular mean temperature.

The actual mathematics of expectation is easiest to explain in simple cases of gambling (which is just whence the modern, main-stream theories of probability itself arose). For example, let's say that we have a fair coin (with a 50% chance of heads and a 50% chance of tails); and that if it comes-up heads then you get \$100, while if it comes-up tails then you get \$1. The expected pay-out is .5 × \$100 + .5 × \$1 = \$50.50. Now, let's say that another coin has a 25% chance of coming-up heads and a 75% chance of coming-up tails, and you'd get \$150 for heads and \$10 for tails. Its expected pay-out is .25 × \$150 + .75 × \$10 = \$45. More complicated cases arise when there are more than two possible outcomes, but the basic formula is just prob(x1)·m(x1) + prob(x2)·m(x2) + … + prob(xn)·m(xn) where xi is the i-th possible outcome, prob(xi) is the probability of that i-th possible outcome, and m(xi) is some measure (mass, temperature, dollar-value, or whatever) of that outcome. In our coin-flipping examples, each expectation is of the form prob(heads)·payout(heads) + prob(tails)·payout(tails).
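The two coin examples can be checked with a few lines of Python (a sketch; the representation of a gamble as a list of probability-and-measure pairs is mine):

```python
def expectation(outcomes):
    """Probability-weighted sum: prob(x1)·m(x1) + … + prob(xn)·m(xn)."""
    return sum(prob * measure for prob, measure in outcomes)

fair_coin   = [(0.50, 100), (0.50, 1)]   # $100 on heads, $1 on tails
second_coin = [(0.25, 150), (0.75, 10)]  # $150 on heads, $10 on tails

print(expectation(fair_coin))    # 50.5
print(expectation(second_coin))  # 45.0
```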

One of the numerical examples of coin-flips offered both a higher maximum pay-out (\$150 v \$100) and a higher minimum pay-out (\$10 v \$1) yet a lower expected pay-out (\$45 v \$50.50). Most people will look at this, and decide that the expected pay-out should be the determining factor, though it's harder than many people reälize to make the case.

With monetary pay-outs, there is a temptation to use the monetary unit as the measure in computing the expectation by which we choose. But the actual usefulness of money isn't constant. We have various priorities; and, when possible, we take care of the things of greatest priority before we take care of things of lower priority. So, typically, if we get more money, it goes to things of lower priority than did the money that we already had. The next dollar isn't usually as valuable to us as any one of the dollars that we already had. Thus, a pay-out of \$1 million shouldn't be a thousand times as valuable as a pay-out of \$1000, especially if we keep in-mind a context in which a pay-out will be on top of whatever we already have in life. So, if we're making our decisions based upon some sort of mathematical expectation then, instead of computing an expected monetary value, we really want an expected usefulness value, prob(x1)·u(x1) + prob(x2)·u(x2) + … + prob(xn)·u(xn) where u() is a function giving a measure of usefulness. This u is the main-stream notion of utility, though sadly it should be noted that most main-stream economists have quite lost sight of the point that utility as they imagine it is just a special case of usefulness.
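To make the distinction concrete, here is a sketch in Python using a square root as a stand-in for u(), so that each additional dollar contributes less usefulness than the one before (the choice of square root is mine, purely illustrative):

```python
import math

def expected_usefulness(outcomes, u):
    """prob(x1)·u(x1) + … + prob(xn)·u(xn)."""
    return sum(prob * u(payout) for prob, payout in outcomes)

# Purely illustrative: usefulness as the square root of dollars.
u = math.sqrt

fair_coin   = [(0.50, 100), (0.50, 1)]
second_coin = [(0.25, 150), (0.75, 10)]

print(round(expected_usefulness(fair_coin, u), 3))    # 5.5
print(round(expected_usefulness(second_coin, u), 3))  # 5.434
```

Under this particular u, the ranking of the two coins happens to agree with that by expected monetary value, but the gap between them narrows considerably; a different u, or an accounting for preëxisting wealth, could reverse the ranking.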

A model of expected-utility maximization is one that takes each possible action aj, associates it with a set of probabilities {prob(x1|aj), prob(x2|aj), …, prob(xn|aj)} (the probabilities now explicitly noted as conditioned upon the choice of action), and asserts that we should choose an action ak which gives us an expected utility prob(x1|ak)·u(x1) + prob(x2|ak)·u(x2) + … + prob(xn|ak)·u(xn) as high or higher than that of any other action.
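In code, such a model reduces to taking a maximum over the available actions; a sketch in Python (the particular numbers are mine, hypothetical):

```python
def expected_utility(probs, utils):
    """prob(x1|a)·u(x1) + … + prob(xn|a)·u(xn) for one action."""
    return sum(p * u for p, u in zip(probs, utils))

def best_action(actions, utils):
    """An action whose conditional probabilities give an expected
    utility as high or higher than that of any other action."""
    return max(actions, key=lambda a: expected_utility(actions[a], utils))

utils = [0.0, 1.0, 4.0]        # u(x1), u(x2), u(x3) -- hypothetical
actions = {
    'a': [0.1, 0.8, 0.1],      # prob(xi | a)
    'b': [0.4, 0.1, 0.5],      # prob(xi | b)
}
print(best_action(actions, utils))  # b
```

Here action b is selected: its expected utility is 2.1, against 1.2 for action a.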

If there is a non-monetary measure of usefulness in the case of monetary pay-outs, then there is no evident reason that there should not be such a measure in the case of non-monetary pay-outs. (And, likewise, if there is no such measure in the case of non-monetary pay-outs, there is no reason to suppose one in the case of monetary pay-outs, where we have seen that the monetary pay-out isn't really a proper measure.) The main-stream of economic theory runs with that; its model of decision-making is expected-utility maximization.

The model does not require that people have a conscious measure of usefulness, and certainly does not require that they have a conscious process for arriving at such a measure; it can be taken as a model of the gut. And employment of the model doesn't mean that the economist believes that it is literally true; economists across many schools-of-thought regard idealizations of various sorts as approximations sufficient for their purposes. It is only lesser economists who do so incautiously and without regard to problems of scale.

But, while expected-utility maximization may certainly be regarded as an idealization, it should not be mistaken for an idealization of peculiar rationality nor even for an idealization of rationality of just one variety amongst many. Expected-utility maximization is not rational even if we grant — as I would not — that there is some quantification that can be fitted to our priorities.

Expected-utility maximization entails a proposition that the relevant expectation is of potential outcomes which are taken themselves to be no better or worse for being more or less probable. That is to say that what would be the reälized value of an outcome is the measure of the outcome to be used in the computation of the expectation; the expectation is simply lineär in the probabilities. This feature of the model follows from what is known as the strong independence axiom; it is known as an axiom because Paul Anthony Samuelson, having noticed the feature, conceptualized it as such. It and propositions suggested to serve in its stead as an axiom (thus rendering it a theorem) have been challenged in various ways. I will not here survey the challenges.

However, the first problem that I saw with expected-utility maximization was with that lineärity, in-so-far as it implies that people do not benefit from the experience of selecting amongst discernible non-trivial lotteries as such.[1]

Good comes from engaging in some gambles as such, exactly because gambling more generally is unavoidable. We need practice to gamble properly, and practice to stay in cognitive shape for gambling. Even if we get that practice without seeking it, in the course of engaging in our everyday gambles, there is still value to that practice as such. A gamble may become more valuable as a result of its best outcome being made less probable, and less valuable as a result of that outcome becoming more certain. The value of lotteries is not lineär in their probabilities!

It might be objected that this value is only associated with our cognitive limitations, which limitations, it might be argued, represent a sort of irrationality. But we only compound the irrationality if we avoid remedial activity because under other circumstances it would not have done us good. Nor do I see that we should any more accept that a person who needs cognitive exercise to stay in cognitive shape is thus out of cognitive shape than we would say that someone who needs physical exercise to stay in physical shape is thus out of physical shape.

[0 (2016:07/22)] Very quickly, in a brief exchange, he saw the error, and he's corrected his entry; so I've removed the link and identification here.

[1] When I speak or write of lotteries or of gambling, I'm not confining myself to those cases for which lay-people normally use those terms, but applying to situations in which one is confronted by a choice of actions, and various outcomes (albeït some perhaps quite impossible) may be imagined; things to which the terms lottery and gamble are more usually applied are simply special cases of this general idea. A trivial lottery is one that most people would especially not think to be a lottery or gamble at all, because the only probabilities are either 0 or 1; a non-trivial lottery involves outcomes with probabilities in between those two. Of course, in real life there are few if any perfectly trivial lotteries, but a lot of things are close enough that people imagine them as having no risk or uncertainty; that's why I refer to discernible non-trivial lotteries, which people see as involving risk or uncertainty.

### Value Doesn't Work that Way

Tuesday, 12 January 2016

Many different conceptions of value are employed in different contexts, and more than one conception is employed in economics. But the notion of value that is most fundamental to economics is that of usefulness.

Usefulness isn't some attribute independent of context, nor does anything have the same usefulness to one person as it does to another. When context changes, value changes. When a thing that had value is moved, it does not carry its value with it; rather, it takes-on a new value associated with its new context. When a thing that had value moves from being the property of one person to being the property of another, its old value is not delivered to the new person; rather, it takes-on a new value associated with its new ownership.

Prices represent a somewhat different sort of value. Prices are quasi-quantified prioritizations, under which things may be exchanged. But, however prices are formed, they work only to the extent that they promote any exchanges that are useful to those potentially making the exchanges, and discourage any that are not. Ostensible prices that do not do so will be ignored in markets, and bring-about economic failure in other systems of allocation. Market values — prices established by markets — are those that conform to the priorities of the parties who choose to exchange. Market values, though different from usefulness, must be informed by usefulness, and must thus reflect the contexts of the things priced.

Monetary prices are quantities of money but not measures of market value.* Prices are first-and-foremost rankings, and treating them as quantifications has limited heuristic value; a thing may be rationally priced at \$1000 without its being 1000 times as useful as something rationally priced at \$1. And, though the first thing may be rationally priced at \$1000 in some context, if the context is changed radically, the thing may cease to have any usefulness, so that its price should be 0. Because of contextual issues, one cannot even say that if the price of one commodity is n times that of another then it will always be possible to buy n times as much of the latter as of the former with the same quantity of money.*

A great deal of the wealth in to-day's world is in the form of financial claims that have no meaning what-so-ever outside of the context of a market. If the market is eliminated, then these claims would have no usefulness and hence a rational price of 0. If the markets in which these claims might be used were somehow preserved, but the claims were seized and redistributed, then their new contexts would correspond to greatly diminished usefulness, and their rational prices would then be much smaller.

The great fallacy of popular notions that poor and middle-income people might be significantly enriched by a large-scale seizure and implicit or explicit redistribution of wealth from billionaires or from the 1% or whatever is the notion that the present prices of the seized wealth reflect an intrinsic economic property of the things seized, which property will be delivered with the things as they are transferred. Instead, the old value will evaporate, and the new value will often be 0.

This point is true even in cases in which the assets seized are not financial instruments. Imagine a community given a Lamborghini Diablo. It had more value than a Honda Fit to the millionaire who owned it; but, for the community, the Honda Fit could be more useful than a Lamborghini Diablo. The respective prices prior to redistribution were plainly poor reflections of what would be the values in the new context.

Wealth is destroyed not only when things of value are seized from the very wealthy and given to those less wealthy, but when there is any sort of large-scale redistribution, including that from the lower- and middle-income groups to the very wealthy. But further indiscriminate redistribution, as by income group, will not restore the wealth lost to past redistribution, and even in hypothetical cases in which only actual perpetrators are penalized and actual victims are compensated, there may be further loss of wealth as such.

So, no. There isn't enough money for the dreams of the Occupation movement nor for the promises made by candidates such as Bernie Sanders, because money doesn't work that way. And there isn't enough wealth, because wealth doesn't work that way. The accountings that claim otherwise are crack-pot.

*These two sentences were added and tweaked on 2020:02/02-03.

### Crime and Punishment

Thursday, 31 December 2015

My attention was drawn this morning to What Was Gary Becker's Biggest Mistake? by Alex Tabarrok, an article published at Marginal Revolution back in mid-September.

Anyone who's read my paper on indecision should understand that I reject the proposition that a quantification may be fit to the structure of preferences. I'm currently doing work that explores the idea (previously investigated by Keynes and by Koopman) of plausibility orderings to which quantifications cannot be fit. I'm not a supporter of the theory that human behavior is well-modelled as subjective expected-utility maximization, which is a guiding theory of mainstream economics. None-the-less, I am appalled by the ham-handed attacks on this theory by people who don't understand this very simple model. Tabarrok is amongst these attackers.

Let me try to explain the model. Each choice that a person might make is not really of an outcome; it is of an action, with multiple possible outcomes. We want these outcomes understood as states of the world, because the value of things is determined by their contexts. Perhaps more than one action might share possible outcomes, but typically the probability of a given outcome varies based upon which action we choose. So far, this should be quite uncontroversial. (Comment if you want to controvert.) A model of expected-utility maximization assumes that we can quantify the probability, and that there is a utility function u() that takes outcomes as its argument, and returns a quantified valuation (under the preferences of the person modelled) of that outcome. Subjective expected-utility maximization takes the probabilities in question to be judgments by the person modelled, rather than something purely objective. The expected utility of a given action a is the probability-weighted sum of the utility values of its possible outcomes; that is p1(a)·u(o1) + p2(a)·u(o2) + … + pn(a)·u(on) where there are n possible outcomes (across all actions), oi is the i-th possible outcome (from any action), and pi(a) is the probability of that outcome given action a.[1] (When oj is impossible under a, pj(a) = 0. Were there really some action whose outcome was fully determinate, then all of the probabilities for other outcomes would be 0.) For some alternative action b the expected utility would be p1(b)·u(o1) + p2(b)·u(o2) + … + pn(b)·u(on) and so forth. Expected-utility maximization is choosing that action with the highest expected utility.

Becker applied this model to dealing with crime. Becker argued that punishments could be escalated to reduce crime, until potential criminals implicitly regarded the expected utility of criminal action to be inferior to that of non-criminal action. If this is true, then when two otherwise similar crimes have different perceived rates of apprehension and conviction, the commission rate of the crime with the lower rate of apprehension and conviction can be lowered to that of the other crime by making its punishment worse. In other words, graver punishments can be substituted for higher perceived rates of apprehension and conviction, and for things that affect (or effect) the way in which people value successful commission of crime.
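Becker's substitution of graver punishment for a higher perceived rate of apprehension and conviction can be illustrated with a sketch in Python for a risk-neutral potential criminal (the numbers and the linear specification are mine, purely for illustration):

```python
def deterring_punishment(gain, p_caught):
    """Least punishment (in the same units as gain) at which the
    expected value of the crime to a risk-neutral actor is
    non-positive: (1 - p_caught)·gain - p_caught·punishment <= 0."""
    return (1 - p_caught) * gain / p_caught

# Halving the perceived rate of apprehension-and-conviction can be
# offset by escalating the punishment:
print(deterring_punishment(100, 0.50))  # 100.0
print(deterring_punishment(100, 0.25))  # 300.0
```

As the perceived rate of apprehension and conviction falls toward zero, the punishment required for deterrence grows without bound, which anticipates the moral objection raised below.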

The simplest model of a utility function is one in which utility itself increases linearly with a quantitative description of the outcome. So, for example, a person with \$2 million might be said to experience twice the utility of a person with \$1 million. Possession of such a utility function is known as risk-neutrality. For purposes of exposition, Becker explains his theory with reference to risk-neutral people. That doesn't mean that he believed that people truly are risk-neutral. Tabarrok quotes a passage in which Becker explains himself by explicit reference to risk-neutrality, but Tabarrok misses the significance — because Tabarrok does not really understand the model, and confuses risk-neutrality with rationality — and proceeds as if Becker's claim hangs on a proposition that people are risk-neutral. It doesn't.

Becker's real thought doesn't even depend upon all those mathematical assumptions that allow the application of arithmetic to the issue. The real thought is simply that, for any contemplated rates of crime, we can escalate punishments to some point at which, even with very low rates of apprehension and conviction, commission will be driven below the contemplated rate. The model of people as maximizers of expected utility is here essentially a heuristic, to help us understand the active absurdity of the once fashionable claim that potential criminals are indifferent to incentives.

However, as a community shifts from relying upon other things (better policing, aid to children in developing enlightened self-interest, efforts at rehabilitation of criminals) to relying upon punishment, the punishments must become increasingly … awful. And that is the moral reason that we are damned if we simply proceed as Becker said that we hypothetically could. A society of monsters licenses itself to do horrific things to people by lowering its commitment to other means of reducing crime.

[1] Another way of writing pi(a) would be prob(oi|a). We could write ui for u(oi) and express the expected utility as p1(a)·u1 + p2(a)·u2 + … + pn(a)·un, but it's important here to be aware of the utility function as such.

Saturday, 23 February 2008

One of the means by which some propose to reduce petroleum consumption is increased technological efficiency. The idea is that if it takes less oil to accomplish our tasks, then we will want and need less oil. However, let's turn that around. If it takes fewer liters of oil to accomplish a given task, then we can accomplish more with a given liter. So what's actually going to happen?

Consider how we normally decide how much of a good or service to buy at any given price, or how much we would be willing to pay for any given quantity of that good or service. Whenever we buy a unit, we are spending money that could be spent on other things. If we are rational, then we decide whether to forgo those other things based upon what they'd do for us, compared to what the unit in question would do for us. All else being equal, the more use (of some sort) that we can get out of that unit, the more that we are willing to forgo of other things. And if something causes the usefulness of a sort of good or service to increase, then we're willing to pay more for it than earlier, and we want to buy more of it at any given price than we would earlier.

It really doesn't matter whether the new usefulness is from an intrinsic change or from an extrinsic change. In other words, if a good or service just itself changes to become more useful (in which case, it's really no longer the same good or service), then we want it more; or if the context changes to allow more to be done with the good or service, then we want it more.

If all of our engines that use petroleum products were magically transformed to do more work-per-gallon — so that petroleum became more useful — then we'd want and use more petroleum.

Here we have the essence of what is called Jevons' Paradox. William Stanley Jevons (one of the preceptors of the Marginal Revolution), in his book The Coal Question (1865), noted that Watt's improvements on the design of the steam engine (so that it could do more work per ton of coal) had been followed by a great increase in the consumption of coal in such engines. The generalization is that, as technological change diminishes the amount of a resource necessary to perform a given task, consumption of that resource may increase.

Note that the point is not merely that the resources left-over by efficiency found use elsewhere, but that efficiency increased over-all use. (I make this point because I've seen Jevons' Paradox misrepresented as if claiming that supply were constant at all prices.)
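The point can be exhibited with a toy demand model; here is a sketch in Python (the constant-elasticity specification and the numbers are mine, purely illustrative):

```python
def fuel_used(efficiency, fuel_price=1.0, elasticity=1.5, scale=1.0):
    """Toy constant-elasticity demand for engine-work.  Greater
    efficiency lowers the cost per unit of work; work demanded rises
    in response; fuel used is work demanded divided by efficiency."""
    cost_per_unit_work = fuel_price / efficiency
    work_demanded = scale * cost_per_unit_work ** -elasticity
    return work_demanded / efficiency

# With demand sufficiently responsive (elasticity > 1 here), doubling
# efficiency *increases* over-all fuel consumption:
print(round(fuel_used(1.0), 3))  # 1.0
print(round(fuel_used(2.0), 3))  # 1.414
```

With an elasticity below 1, the same model has fuel use falling as efficiency rises; whether Jevons' effect obtains is an empirical question about how responsive demand is.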

There's actually not much paradoxical about the alleged paradox; like most economics, it is explained by common sense applied with uncommon care.

So why, then, do petroleum producers join in the protests against legislation mandating greater efficiency? Well, for much the same reason as do the automobile manufacturers. You surely noticed my phrases above, all else being equal and magically transformed. If the technological change mandated by legislation were costless, then industry would rush to adopt it, with or without legislation. But, for industry to want to adopt a technology that has a cost, it has to increase the usefulness of the good or service with a value at least equal to that cost. Otherwise, the increased costs will cut manufacturer profits, in part through reduced sales of automobiles. And it's the latter — fewer cars — that worries the petroleum producers.

Now, one might then say Well, then increased technological efficiency can reduce petroleum consumption, if only in this round-about way! But it really isn't the efficiency that's reducing consumption; it's just the cost. If the same cost were imposed by simply slapping an additional tax on automobiles, petroleum consumption would go down more, because the increased cost wouldn't even be partially offset by greater usefulness from technological efficiency.