## Crime and Punishment

31 December 2015

My attention was drawn this morning to What Was Gary Becker's Biggest Mistake? by Alex Tabarrok, an article published at Marginal Revolution back in mid-September.

Anyone who's read my paper on indecision should understand that I reject the proposition that a quantification may be fit to the structure of preferences. I'm currently doing work that explores the idea (previously investigated by Keynes and by Koopman) of plausibility orderings to which quantifications cannot be fit. I'm not a supporter of the theory that human behavior is well modelled as subjective expected-utility maximization, which is a guiding theory of mainstream economics. Nonetheless, I am appalled by the ham-handed attacks on this theory by people who don't understand this *very simple* model. Tabarrok is amongst these attackers.

Let me try to explain the model. Each choice that a person might make is not *really* of an *outcome*; it is of an *action*, with multiple possible outcomes. We want these outcomes understood as states of the world, because the value of things is determined by their contexts. Perhaps more than one action might share possible outcomes, but typically the *probability* of a given outcome varies based upon which action we choose. So far, this should be quite uncontroversial. (Comment if you want to controvert.) A model of expected-utility maximization assumes that we can quantify the probability, and that there is a utility function `u`() that takes outcomes as its argument, and returns a quantified valuation (under the preferences of the person modelled) of that outcome. __Subjective__ expected-utility maximization takes the probabilities in question to be judgments by the person modelled, rather than something purely objective. The expected utility of a given action `a` is *the probability-weighted sum of the utility values of its possible outcomes*; that is `p`_{1}(`a`)·`u`(`o`_{1}) + `p`_{2}(`a`)·`u`(`o`_{2}) + … + `p`_{n}(`a`)·`u`(`o`_{n}) where there are `n` possible outcomes (across all actions), `o`_{i} is the `i`-th possible outcome (from any action) and `p`_{i}(`a`) is the probability of that outcome given action `a`.[1] (When `o`_{j} is impossible under `a`, `p`_{j}(`a`) = 0. Were there really some action whose outcome was fully determinate, then all of the probabilities for other outcomes would be 0.) For some alternative action `b` the expected utility would be `p`_{1}(`b`)·`u`(`o`_{1}) + `p`_{2}(`b`)·`u`(`o`_{2}) + … + `p`_{n}(`b`)·`u`(`o`_{n}) and so forth. *Expected-utility maximization is choosing that action with the highest expected utility.*
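The calculation above can be sketched in a few lines of code. This is a minimal illustration with made-up outcomes, utilities, and probabilities (none of them from Becker or Tabarrok); it only shows the mechanics of the probability-weighted sum and the maximization:

```python
# Possible outcomes o_1..o_n, and a utility function u() over them.
# The names and numbers here are purely hypothetical.
utility = {"o1": 10.0, "o2": 4.0, "o3": -50.0}

# For each action a, p_i(a): the probability of each outcome given that
# action. Outcomes impossible under an action simply get probability 0.
prob = {
    "a": {"o1": 0.5, "o2": 0.5, "o3": 0.0},
    "b": {"o1": 0.2, "o2": 0.3, "o3": 0.5},
}

def expected_utility(action):
    """p_1(a)*u(o_1) + p_2(a)*u(o_2) + ... + p_n(a)*u(o_n)."""
    return sum(prob[action][o] * utility[o] for o in utility)

# Expected-utility maximization: choose the action whose expected
# utility is highest.
best = max(prob, key=expected_utility)
```

With these particular numbers, action `a` has expected utility 0.5·10 + 0.5·4 = 7, while `b` is dragged down by the half-chance of the very bad outcome, so the modelled person chooses `a`.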

Becker applied this model to dealing with crime. Becker argued that punishments *could* be escalated to reduce crime, until potential criminals implicitly regarded the expected utility of criminal action to be inferior to that of non-criminal action. If this is true, then when two otherwise similar crimes have different perceived rates of apprehension and conviction, the commission rate of the crime with the lower rate of apprehension and conviction can be lowered to that of the other crime by making its punishment worse. In other words, graver punishments can be *substituted* for higher perceived rates of apprehension and conviction, and for things that affect (or effect) the way in which people value successful commission of crime.

The simplest model of a utility function is one in which utility itself increases *linearly* with a quantitative description of the outcome. So, for example, a person with $2 million might be said to experience *twice* the utility of a person with $1 million. Possession of such a utility function is known as risk-neutrality.

*For purposes of exposition, Becker explains his theory with reference to risk-neutral people.* That doesn't mean that he believed that people truly *are* risk-neutral. Tabarrok quotes a passage in which Becker explains himself by *explicit* reference to risk-neutrality, but Tabarrok misses the significance — because Tabarrok does not really *understand* the model, and *confuses* risk-neutrality with *rationality* — and proceeds as if Becker's claim *hangs* on a proposition that people are risk-neutral. It doesn't.
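A one-line sketch makes the meaning of risk-neutrality concrete (the wealth figures are hypothetical, chosen to echo the example above): under a linear utility function, a fair 50/50 gamble between $0 and $2 million is valued exactly like $1 million held with certainty.

```python
# A risk-neutral utility function: utility is linear in the outcome's
# quantitative description (here, hypothetical wealth in dollars).
def u(wealth):
    return wealth  # doubling wealth doubles utility

# 50/50 gamble between $0 and $2 million vs. $1 million for certain.
gamble = 0.5 * u(0) + 0.5 * u(2_000_000)
certain = u(1_000_000)
```

A risk-averse person, by contrast, would have a concave `u` and would value the certain $1 million more highly than the gamble — which is exactly the distinction that survives Becker's expository simplification.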

Becker's real thought doesn't even depend upon all those mathematical assumptions that allow the application of arithmetic to the issue. The *real* thought is simply that, for any contemplated rates of crime, we can escalate punishments to *some* point at which, even with very low rates of apprehension and conviction, commission will be driven below the contemplated rate. The model of people as maximizers of expected utility is here essentially a *heuristic*, to help us understand the *active absurdity* of the once fashionable claim that potential criminals are indifferent to incentives.

*However*, as a community shifts from relying upon other things (better policing, aid to children in developing enlightened self-interest, efforts at rehabilitation of criminals) to relying upon punishment, the punishments must become increasingly … *awful*. And *that* is the moral reason that we are *damned* if we simply proceed as Becker said that we *hypothetically* could. A society of *monsters* licenses itself to do horrific things to people by lowering its commitment to other means of reducing crime.

[1] Another way of writing `p`_{i}(`a`) would be `prob`(`o`_{i}|`a`). We could write `u`_{i} for `u`(`o`_{i}) and express the expected utility as `p`_{1}(`a`)·`u`_{1} + `p`_{2}(`a`)·`u`_{2} + … + `p`_{n}(`a`)·`u`_{n}, but it's important here to be aware of the utility *function* as such.

Tags: Alex Tabarrok, crime, expected utility, Gary Becker, punishment, utility
