## Posts Tagged ‘crime’

### Crime and Punishment

Thursday, 31 December 2015

My attention was drawn this morning to What Was Gary Becker's Biggest Mistake? by Alex Tabarrok, an article published at Marginal Revolution back in mid-September.

Anyone who's read my paper on indecision should understand that I reject the proposition that a quantification may be fit to the structure of preferences. I'm currently doing work that explores the idea (previously investigated by Keynes and by Koopman) of plausibility orderings to which quantifications cannot be fit. I'm not a supporter of the theory that human behavior is well-modelled as subjective expected-utility maximization, which is a guiding theory of mainstream economics. None-the-less, I am appalled by the ham-handed attacks on this theory by people who don't understand this very simple model. Tabarrok is amongst these attackers.

Let me try to explain the model. Each choice that a person might make is not really of an outcome; it is of an action, with multiple possible outcomes. We want these outcomes understood as states of the world, because the value of things is determined by their contexts. Perhaps more than one action might share possible outcomes, but typically the probability of a given outcome varies based upon which action we choose. So far, this should be quite uncontroversial. (Comment if you want to controvert.) A model of expected-utility maximization assumes that we can quantify the probability, and that there is a utility function u() that takes outcomes as its argument, and returns a quantified valuation (under the preferences of the person modelled) of that outcome. Subjective expected-utility maximization takes the probabilities in question to be judgments by the person modelled, rather than something purely objective. The expected utility of a given action a is the probability-weighted sum of the utility values of its possible outcomes; that is p1(a)·u(o1) + p2(a)·u(o2) + … + pn(a)·u(on) where there are n possible outcomes (across all actions), oi is the i-th possible outcome (from any action), and pi(a) is the probability of that outcome given action a.[1] (When oj is impossible under a, pj(a) = 0. Were there really some action whose outcome was fully determinate, then all of the probabilities for other outcomes would be 0.) For some alternative action b the expected utility would be p1(b)·u(o1) + p2(b)·u(o2) + … + pn(b)·u(on) and so forth. Expected-utility maximization is choosing that action with the highest expected utility.
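The arithmetic here can be made concrete with a small sketch. The numbers below are purely illustrative (they are mine, not from any source); the point is only the mechanics: each action assigns probabilities to the same outcome set, and the agent picks the action with the highest probability-weighted sum of utilities.

```python
# Minimal sketch of expected-utility maximization.
# The utilities and probabilities are invented for illustration.

def expected_utility(probs, utilities):
    """Probability-weighted sum p1*u1 + p2*u2 + ... + pn*un."""
    return sum(p * u for p, u in zip(probs, utilities))

# Hypothetical utilities u(o1), u(o2), u(o3) for three outcomes.
utilities = [0.0, 5.0, 10.0]

# Each action assigns its own probabilities to those same outcomes.
actions = {
    "a": [0.5, 0.3, 0.2],   # p1(a), p2(a), p3(a)
    "b": [0.1, 0.6, 0.3],   # p1(b), p2(b), p3(b)
}

eu = {name: expected_utility(p, utilities) for name, p in actions.items()}
best = max(eu, key=eu.get)  # the expected-utility-maximizing action

print(eu)    # {'a': 3.5, 'b': 6.0}
print(best)  # b
```

Note that an outcome impossible under an action simply gets probability 0 in that action's list, matching the convention in the text.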

Becker applied this model to dealing with crime. Becker argued that punishments could be escalated to reduce crime, until potential criminals implicitly regarded the expected utility of criminal action to be inferior to that of non-criminal action. If this is true, then when two otherwise similar crimes have different perceived rates of apprehension and conviction, the commission rate of the crime with the lower rate of apprehension and conviction can be lowered to that of the other crime by making its punishment worse. In other words, graver punishments can be substituted for higher perceived rates of apprehension and conviction, and for things that affect (or effect) the way in which people value successful commission of crime.

The simplest model of a utility function is one in which utility itself increases linearly with a quantitative description of the outcome. So, for example, a person with \$2 million might be said to experience twice the utility of a person with \$1 million. Possession of such a utility function is known as risk-neutrality. For purposes of exposition, Becker explains his theory with reference to risk-neutral people. That doesn't mean that he believed that people truly are risk-neutral. Tabarrok quotes a passage in which Becker explains himself by explicit reference to risk-neutrality, but Tabarrok misses the significance — because Tabarrok does not really understand the model, and confuses risk-neutrality with rationality — and proceeds as if Becker's claim hangs on a proposition that people are risk-neutral. It doesn't.
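The distinction matters because risk-neutrality is a property of the utility function, not of rationality. A quick sketch (with my own invented stakes) shows two agents, both maximizing expected utility and so both "rational" in the model's sense, choosing differently because one has a linear utility function and the other a concave one:

```python
import math

# Sketch contrasting risk-neutrality with risk-aversion.
# Both agents maximize expected utility; only u() differs.

def eu(probs, outcomes, u):
    """Expected utility of a lottery under utility function u."""
    return sum(p * u(x) for p, x in zip(probs, outcomes))

sure_thing = ([1.0], [40.0])             # $40 with certainty
gamble     = ([0.5, 0.5], [0.0, 100.0])  # fair coin: $100 or nothing

linear  = lambda x: x   # risk-neutral utility
concave = math.sqrt     # a risk-averse utility (diminishing marginal)

# The risk-neutral agent prefers the gamble (EU 50 vs 40) ...
assert eu(*gamble, linear) > eu(*sure_thing, linear)
# ... while the risk-averse agent prefers the sure $40 (5 vs ~6.32).
assert eu(*gamble, concave) < eu(*sure_thing, concave)
```

Both choices are expected-utility maximization; neither is more "rational" than the other, which is exactly the conflation at issue.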

Becker's real thought doesn't even depend upon all those mathematical assumptions that allow the application of arithmetic to the issue. The real thought is simply that, for any contemplated rates of crime, we can escalate punishments to some point at which, even with very low rates of apprehension and conviction, commission will be driven below the contemplated rate. The model of people as maximizers of expected utility is here essentially a heuristic, to help us understand the active absurdity of the once fashionable claim that potential criminals are indifferent to incentives.

However, as a community shifts to relying upon punishment from relying upon other things (better policing, aid to children in developing enlightened self-interest, efforts at rehabilitation of criminals), the punishments must become increasingly … awful. And that is the moral reason that we are damned if we simply proceed as Becker said that we hypothetically could. A society of monsters licenses itself to do horrific things to people by lowering its commitment to other means of reducing crime.

[1] Another way of writing pi(a) would be prob(oi|a). We could write ui for u(oi) too, and express the expected utility as p1(a)·u1 + p2(a)·u2 + … + pn(a)·un but it's important here to be aware of the utility function as such.

### Let's Be Rational Here

Saturday, 29 September 2012

Years ago, when I was in graduate school, I got into an argument, about a real-world crime statistic, with another student who didn't have much math-sense. The mathematics itself is very simple, and yet at least one implication of it seems to run counter to the intuïtions of many people.

Let's say that a population p is divided into groups, the i-th group having population pi, so that p = ∑(pi). And let's say that the i-th group has a propensity ci to commit crimes, such that ci · pi gives the sum of the crimes committed (however measured) by members of that group.

If criminals from within each group draw their victims with each person having an equal chance of victimization regardless of his or her own group, then the proportionate share of victims that they draw from the j-th group will be pj / p. The total number of crimes committed against the j-th group by members of the i-th group will then be (pj / p) · (ci · pi), and the ratio of i-on-j crime to j-on-i crime will be [(pj / p) · (ci · pi)] / [(pi / p) · (cj · pj)] = ci / cj. So, if ci = cj, then the ratio of i-on-j crime to j-on-i crime will simply be 1:1.
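A quick numerical check (toy numbers of my own choosing) confirms both halves of the result: with equal propensities the cross-group ratio is 1:1 even for very unequal group sizes, and with unequal propensities the ratio is exactly ci / cj.

```python
# Toy check of the cross-group crime ratio; numbers are invented.

def cross_crime(p_attacker, p_victim, c_attacker, total):
    """i-on-j crimes: victims' population share times crimes by i,
    i.e. (p_j / p) * (c_i * p_i)."""
    return (p_victim / total) * (c_attacker * p_attacker)

p_i, p_j = 1000, 3000      # very unequal group sizes
total = p_i + p_j

# Equal propensities: ratio is 1:1 despite the size difference.
c = 2.0
i_on_j = cross_crime(p_i, p_j, c, total)  # 0.75 * 2000 = 1500
j_on_i = cross_crime(p_j, p_i, c, total)  # 0.25 * 6000 = 1500
print(i_on_j / j_on_i)  # 1.0

# Unequal propensities: ratio equals c_i / c_j.
c_i, c_j = 4.0, 2.0
ratio = cross_crime(p_i, p_j, c_i, total) / cross_crime(p_j, p_i, c_j, total)
print(ratio)  # 2.0, i.e. c_i / c_j
```

The group sizes cancel out of the ratio entirely, which is the point of the algebra above.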

The other graduate student had been sure that, if group i were the smaller group, then the ratio should be larger than 1:1, because group j furnished more potential victims. The proper intuïtion here is that, if one group is larger than another, then it furnishes proportionally both more potential victims and more potential victimizers; or, to say the same thing differently, if one group is smaller than another, then it furnishes proportionally both fewer potential victims and fewer potential victimizers.

If we see a very different ratio, then the difference implies that one group has a greater propensity to criminality than the other, or that one group is seeking (or avoiding) the other in its acts of criminality, or both.

It should be noted that members of the j-th group may be sought or avoided for reasons other than their being members of that group as such. For example, members of the j-th group may happen to have more portable wealth. Still, if one sees a ratio of, say, about 50:1, then it's hard to explain this lop-sided ratio in terms simply of the j-th group having more wealth, or of the i-th group simply having a greater propensity to criminality. With a ratio like that, one should expect that members of the j-th group are indeed being targetted for being in that group, by members of the i-th group.

### A Damn'd Lie Exposed

Monday, 21 September 2009
Records of rape crime distorted from the BBC

Rape claims are being left off official crime records, the BBC has learned.

Figures obtained following a Freedom of Information request showed some UK police forces were failing to record more than 40% of cases.

For some time now, I've been asserting that, when the same statistical protocol is used, the UK is actually ahead of the US (per capita) in all but a couple of forms of violent crime. I'm going to have to revise that assertion. The UK is ahead of the US in all forms of violent crime except criminal homicide.

### Knives Away, Pinkies Out! We're British, You Know!

Monday, 7 July 2008
Jail knife carriers, says Cameron from the BBC
Anyone caught carrying a knife without a good excuse should expect to be sent to prison, David Cameron says.

[…]

Mr Cameron says knife crime is now a problem of epidemic proportions in the UK.

Knife crime is such a problem in the UK because violent crime is a problem. In fact, per capita, there are more incidents of most sorts of violent crime in the UK than in the US, though this fact is generally hidden by the British using different data reporting protocols. Violent crime is at greater levels in the UK in spite of their having gun control, because guns aren't the cause. Nor are knives. The prohibition of guns has largely resulted in a substitution of knives. A prohibition of knives will largely result in some other substitution. And innocent people who aren't protected by the police will be ever more at the mercy of criminals.