Dying Asymptotically

2 July 2015

It seems as if most economists do not know how to handle death.

What I here mean is not that they don’t cope well with the deaths of loved ones or with their own mortality — though I suspect that they don’t. What I mean is that their models of the very long-run are over-simply conceived and poorly interpreted when it comes to life-spans.

In the typical economic model of the very long-run, agents either live forever, or they live some fixed span of time, and then die. Often, economists find that a model begins to fit the real world better if they change it from assuming that people live that fixed amount of time to assuming that people live forever, and some economists then conclude that people are irrationally assuming their own immortality.

Here’s a better thought. In the now, people are quite sure that they are alive. They are less sure about the next instant, and still less sure about the instant after that. The further that they think into the future, the less their expectation of being alive … but there is no time at which most people are dead certain that their lives will have ended. (If I asked you, the reader, how it might be possible for you to be alive in a thousand years, chances are that you could come up with some scenario.)

On the assumption that personalistic probabilities may be quantified, then, imputed probabilities of being alive, graphed against time, would approach some minimum asymptotically. My presumption would be that the value thus approached would be 0 — that most people would have almost no expectation of being alive after some span of years. But it would never quite be zero.
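To make the shape concrete: here is a minimal sketch in Python, with an invented functional form and invented parameter values (the entry itself commits to none), of a subjective survival curve that starts at certainty and decays toward a floor ε without ever reaching it:

```python
import math

def subjective_survival(t, eps=1e-6, tau=80.0, k=4.0):
    # Imputed probability of being alive t years from now: decays from 1
    # toward the floor eps, but never quite reaches it.
    return eps + (1.0 - eps) * math.exp(-((t / tau) ** k))

for t in (0, 40, 80, 120, 1000):
    print(f"t = {t:>4} years: p = {subjective_survival(t):.6g}")
# p falls from 1 at t = 0 toward the asymptote eps = 1e-6 by t = 1000.
```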

While I’m sure that some models will only work on the assumption that people impute absolute certainty to being alive forever, I suspect that an awful lot of models will work simply by accepting that most people embrace neither that madness nor the madness of absolute certainty that they will be dead at some specific time. Other models may need a more detailed description of the probability function.

As I’ve perhaps said or implied somewhere in this 'blog, I don’t think that real-life probabilities are usually quantified; I would therefore be inclined to resist adopting a model with quantified probabilities, though such toys can be very useful heuristics. The weaker notion that probabilities are an incomplete preördering would correspond to some weaker notion than an asymptotic approach, but I haven’t given much thought to what it would be.
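For the curious, here is a minimal formal sketch, in my notation rather than anything from the entry, of the contrast between quantified probability and a mere incomplete preördering of plausibility:

```latex
% Plausibility as an incomplete preorder \succeq on propositions:
A \succeq A                                                  % reflexive
(A \succeq B) \land (B \succeq C) \Rightarrow (A \succeq C)  % transitive
\exists A, B :\ \lnot(A \succeq B) \land \lnot(B \succeq A)  % incomplete
% Quantification would demand p : \mathcal{A} \to [0,1] with
% A \succeq B \iff p(A) \ge p(B), which forces completeness;
% hence an incomplete preorder resists representation by any such p.
```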

An Error of Multiplicities

30 June 2015

Imagine a nation containing two jurisdictions, A and B. Imagine further that the population of jurisdiction A divides neatly into two groups: 51%, who oppose and do not receive transfer benefits from the federal state; and 49%, who receive such benefits (whatever their expressed beliefs). Imagine also that the population of jurisdiction B divides neatly into two groups: 67%, who support but do not receive transfer benefits from the federal state; and 33%, who receive such benefits (whatever their expressed beliefs).

The majority in jurisdiction A oppose transfer benefits; yet a higher share of people in that jurisdiction draw benefits than in jurisdiction B, where a majority support such programmes. None-the-less, these figures provide no evidence of hypocrisy in jurisdiction A. Possibly no one there who draws benefits speaks out against them or works to prevent others from receiving them.

In the real world, things are messier. (There’d be six relevant types of people.) But I sometimes see it argued that the people of certain jurisdictions are hypocrites simply on the basis that a majority there oppose some set of entitlement programmes, while at the same time a higher share of the population in that district (than of populations in other districts) draw benefits from that set. The hypothetical case above illustrates the fallacy of that argument.
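A trivial sketch in Python of the arithmetic above, just to make explicit that a higher benefit share and an opposing majority can coexist with zero overlap between opposers and recipients:

```python
# Populations per 100 residents, as in the hypothetical above.
A = {"oppose, receive nothing": 51, "receive benefits": 49}
B = {"support, receive nothing": 67, "receive benefits": 33}

print("benefit share in A:", A["receive benefits"])             # 49, the higher share
print("benefit share in B:", B["receive benefits"])             # 33
print("opposing majority in A:", A["oppose, receive nothing"])  # 51
# Residents of A who both oppose benefits and receive them: 0 by
# construction, so the figures alone demonstrate no hypocrisy whatsoever.
```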

If we had just one jurisdiction, in which a majority opposed some set of benefits yet a large share of people drew those benefits, the idea that there were some sort of hypocrisy wouldn’t naturally arise, unless it were suggested that a majority drew those same benefits. Knowing about other jurisdictions doesn’t tell one what one needs to know about that one jurisdiction. But many people get befuddled by the multiplicity, especially when the narrator tells them what they are predisposed to believe.

(There’s here also another, perhaps more important fallacy, which I discussed in an entry more than five years ago. People who do not believe that some order should prevail can participate in that order without being hypocrites. It is when they deliberately act to sustain an order against which they express themselves that they are acting as hypocrites.)

A Monumental Error

29 June 2015

Imagine that, under some law passed long ago, some group of persons was able to take $10 000 from you, without your consent. Further imagine that they spent this money on a statue of your beloved dog, Earl, and presented it to you.

The statue is actually rather nice. The artist truly managed to convey Earl’s personality! Setting aside what it cost you, you’d like it a great deal. And, if you’d tried to have one made like it, it would perhaps have cost you $20 000, rather than $10 000. (They have many statues made, and get each at significant discount.)

None-the-less, you don’t like it as much as you’d like $20 000; you don’t like it as much as you’d like to have kept your $10 000. And no one else is willing to pay $10 000 for a statue of your dog.

Most of us would say that you’re entitled to feel yourself worse-off, not-withstanding that, by some accounting, you’ve got a $20 000 return on a $10 000 cost.
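In case the accounting seems slippery, a small sketch in Python; the $6 000 personal valuation is invented for illustration, the story requiring only that it fall short of the $10 000 taken:

```python
taken = 10_000             # taxed away without consent
replacement_cost = 20_000  # what a comparable statue would have cost you
your_valuation = 6_000     # hypothetical: what the statue is worth to you
resale_value = 0           # no one else will pay for a statue of Earl

print("accounting 'gain':", replacement_cost - taken)     # +10000
print("change in your welfare:", your_valuation - taken)  # -4000
```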

Yet officials and other citizens who complain about Federal tax burdens (or about intervention in general from the Federal government) are often mocked as supposed hypocrites if they come from jurisdictions in which the Federal government spends more than it takes in revenue. The principle may be exactly the same. Even if the Federal government delivers money (rather than commodities) to the constituent state, if it requires that the money be spent in a particular way, then this is like compelling someone to buy a statue of Earl. And the constituent states were not themselves the taxpayers, so giving those states money without mandates still leaves people with reason to feel aggrieved, even when the money is more than that taken from taxpayers. (It is not as if each constituent state has just one taxpayer who is also its one voter, able then to direct how the money be spent.)

A Symmetry

24 June 2015

The following advice has become rather common-place:

If you love someone, set them free. If they come back, they’re yours; if they don’t, they never were.

I want to note something about the logick of this formula.

To return is to have gone; implicit in the words come back is that distance develops, whether actively or passively. And, indeed, if neither of two people makes an effort to stay connected, that is what one expects to happen.

If two people each apply the rule of setting the other free and of then awaiting the return of the other, it will not be love but chance-coïncidence or a conspiracy of others or perhaps some action of the collective unconscious that brings them back together — if anything does at all. The formula as popularly given strikes me as potentially very destructive to the purposes of love.

Now, that doesn’t mean that each of two people in love should do entirely the opposite, and attempt to constrain the other person by threats or by impairments. Rather, one wants to empower the other person, yet hope that he or she stays, so that there is no coming back. And, typically, that hope should be expressed to the other person.

But, sometimes, one watches one’s love go away, and prays for a return.

The Instituted Unconscious

22 June 2015

An institution is a persistent organizing practice or relationship within a culture. When most people hear or read the word institution, they think first of a sort of organization, somewhat like a firm though typically for some purpose other than pursuit of pecuniary profit. But, really, the scope is much wider, which is how one may, for example, speak or write of the institution of marriage.

Economists and other social thinkers recognize as institutions a great many practices and relationships that most people don’t conceptualize as such. For example, languages are institutions; markets are institutions, and monies are institutions within those institutions; professional codes of ethics are institutions; and so forth.

Any given society is exactly a society, rather than merely some selection of people, to the extent that it is characterized by institutions.

Institutions can be hard to see as institutions; they can be hard to see at all. That which pervasively informs our thinking can be invisible for lack of contrast. The fact that a competent social thinker will recognize institutions that most people over-look does not mean that any given social thinker will recognize all the institutions of the society that he or she observes, or in which he or she participates. Rather, I do not think that any social thinker manages to attain such a profound awareness. If there is a meaning to most here, then I think that none of us sees most of the institutions. We participate in them, we use them, but we are unconscious of them.

Although one might imagine some outside agency acting to preserve an institution, more typically a practice or relationship will be persistent to the extent that it is self-perpetuating. It might be self-perpetuating in some fairly direct manner, or it might be thus simply by conferring some advantage on those who adopt it. Something that behaves in a self-perpetuating manner can seem to be purposeful. There are, in fact, some who would insist that a thing that behaves in a self-perpetuating manner truly is purposeful, but I don’t want to enter into that debate here. Whether it be purpose or something that merely seems like purpose, there may not be any person to whom one could point and properly say that the purpose were his or were hers. Perhaps no individual wants the institution perpetuated — in some cases[1] participants may actually want an end to the institution — but acting through people the institution perpetuates itself.

So my claim is that we live and act within a rich frame-work of practices and relationships, largely unrecognized, that affect and effect events as if with purposes distinct from our own.

This concept may be related to various things.

In Jungian theory, there is postulated a collective unconscious, which is a set of structures of the unconscious mind, shared amongst animals to the extent that they are biologically related. In general, these structures include instincts; in humans, they also include symbols (called archetypes). Jung believed that the collective unconscious was dormant in the zygote; so that a person whose biological parents were of one ethnic group but who were raised from birth by members of another would have the collective unconscious of the biological parents, rather than of the family in which he or she were raised. I assert that this collective unconscious does not exist; but that something rather like it does, with the very important difference that it is transmitted experientially. The actual collective unconscious is the aforementioned unrecognized institutional frame-work.

Evolutionary psychology, also known as sociobiology, has sought to explain behavior (including human behavior) in terms of some habits leading to more reproductive success than do others. That much is surely part of a proper explanation of human behavior, but these theorists have had a propensity to insist or to presume that the mechanism of transmission is in the DNA of the chromosomes or of the mitochondria. (In this commitment, they have been rather like the Jungians.) After entirely too much delay, some of them acknowledged that cultures as such could be affected by evolutionary pressures. They developed the notion that Richard Dawkins called the meme,[2] and that EO Wilson grotesquely called the culturgen,[3] which was that of a culturally transmitted, self-perpetuating pattern, somewhat analogous to the chromosomal and mitochondrial genes. These patterns are institutions, viewed individually. We would be consciously aware of some of these patterns, but by no means of all.

Some people are convinced that all events are effected to some purpose, a thought typically expressed as Everything happens for a reason. This claim surely goes too far, but one could see how observing many events that seemed to happen towards a purpose, which purpose was not that of any one of us, could suggest a theory that all reälized outcomes were in some sense intended.

Others do not necessarily think that all events are effected to some purpose; but, perceiving in some events apparent purposefulness that cannot plausibly be imputed to any ordinary person, take this apparent purposefulness as evidence that events have been or are being guided by an extraordinary person — G_d. As a metaphor, this works rather well, though the impersonal G_d of Spinoza would be a better fit for the institutional framework; but, in any case, the apparent purposefulness is not good evidence for the involvement of a literal G_d.

Where many believers have been too quick to see the work of G_d, many non-believers have been too quick to see mere chance-coïncidence. But teasing-out the difference between that which is mere accident and that which works to the purposes or quasi-purposes of a frame-work of unrecognized parts is at best extremely difficult, if not impossible. A pattern can be found in any data set, and the number of super-patterns that may potentially be extrapolated from it is infinite. Additionally, most of us want to find significance in our lives, which biases us to see not only purposes but particular sorts of purposes behind events.
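The claim about super-patterns can be made concrete with polynomial interpolation, sketched here in Python: any finite data set is fitted exactly by infinitely many polynomials, which then disagree wildly off the data (the four points are invented):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

p = np.poly1d(np.polyfit(x, y, 3))  # one cubic through all four points
vanishing = np.poly1d(x, r=True)    # (t)(t-1)(t-2)(t-3): zero at every datum

# Adding any multiple of `vanishing` yields another polynomial fitting the
# same data exactly, yet extrapolating to a different "super-pattern".
for k in (0.0, 1.0, -2.0):
    q = p + k * vanishing
    print(f"k = {k:+}: fits the data: {np.allclose(q(x), y)}, q(10) = {q(10):.0f}")
```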


[1] For example, sub-optimal Cournot-Nash equilibria.
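A minimal numerical sketch in Python, with an invented linear demand and cost, of how such an equilibrium can leave every participant worse off than a sustainable alternative would, yet still perpetuate itself:

```python
# Duopoly: inverse demand P = 120 - (q1 + q2), constant unit cost 30.
def profit(q_own, q_other):
    return (120 - q_own - q_other - 30) * q_own

cournot = 30.0  # each firm's Nash-equilibrium output, (120 - 30) / 3
cartel = 22.5   # each firm's half of the joint-profit-maximizing output

print("profit at the Cournot-Nash equilibrium:", profit(cournot, cournot))  # 900.0
print("profit under the (preferred) cartel:", profit(cartel, cartel))       # 1012.5
# Yet each firm gains by deviating from the cartel, so the institution of
# mutual under-cutting perpetuates itself against every participant's wish:
best_reply = (120 - 30 - cartel) / 2
print("profit from deviating on the cartel:", profit(best_reply, cartel))   # 1139.0625
```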

[2] Largely due to laziness and misunderstanding, this word came thereafter to have its popular meaning of any sort of widely spread expression.

[3] It’s appalling how little philological sense is now had by otherwise educated people.

Preserve the Proxies!

22 June 2015

Under the original ethos of the ‘Net, those who registered domain names were required to make publicly available their contact information.

A technical loop-hole was found. One party could register a domain name, and that party could provide its own contact information; yet the party could allow (and perhaps even be contractually required to allow) some other party to use the domain name for its own ends. So the technical registrant was a proxy agent for the practical holder. This loop-hole was challenged, but ultimately allowed to remain.
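For concreteness, here is a hypothetical extract of a proxied registration record (every name and address invented); the practical holder appears nowhere in it, which is why a court order to the proxy service is needed to learn that identity:

```
Domain Name:      EXAMPLE.COM
Registrant Name:  SomePrivacyProxy, LLC        <- the technical registrant
Registrant Email: a1b2c3@relay.proxy.example   <- relays mail to the hidden party
```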

Now pressure is being brought upon ICANN to prohibit proxies for what are deemed commercial sites. The primary motivation appears to be to help firms identify and pursue those who infringe upon trademarks and other intellectual property. (At present, they would have to get a court order requiring the proxy service to release the identity of the practical holder.)

I think that this effort should be strongly resisted. At the time that the use of proxies began, I had mixed feelings about it. But use of the Internet and of the World-Wide Web has evolved, and evolved within the context of this proxied registration being an accepted practice. A rule-change now would impose new costs — sometimes quite significant — on many people, the vast majority of whom are quite innocent of any trespass on intellectual property. Further, I note that most of those who are deliberate in their infringements are unlikely to have qualms about using proxies that simply claim to be practical holders.

You may want to read ICANN’s discussion of the matter.

Comments may be sent to comments-ppsai-initial-05may15@icann.org before 7 July.

Bad Meta

16 June 2015

Although this is a non-commercial 'blog, I’ve no objection to those who make a living at 'blogging. And I’ve no general objection to those who try-but-fail to make a living at 'blogging. But making the principal theme of every few entries a pitch for support amounts to trying to make a living at 'blogging about trying to make a living at 'blogging. And that’s just not a good idea.

Addendum (2015:06/19): It might be art were one attempting to make a living by 'blogging about attempting to make a living by 'blogging about …

Consciousness and Science

9 June 2015

The January-February 2012 issue of American Scientist contains an abridged reprinting of an article by BF Skinner, followed by a newer piece, frequently polemical, by a behaviorist, Stephen F. Ledoux.[0] In his polemic, Ledoux contrasts what he insists to be the scientific approach of behaviorology[1] with the ostensibly untestable and mystical approach of reference to an inner agent.

There’s a problem here, but it’s not unique to behaviorists. A large share of those who would study human nature scientifically do not know what science is.

Although courts and journalists and sociologists have declared that science is what scientists do, this formula is either a perverse begging of the question or simply wrong. The nature of science is not definitionally what is done by those recognized as scientists by academia nor by some narrower or wider society. Science does not start with academic degrees nor with peer review nor with the awarding of grants.

Science is reasoned analysis of — and theorizing about — empirical data.

Some want to use science more narrowly. It’s in no way essential to the principal purpose of this essay that all rational analysis and theorizing about empirical data should count as science; but it is essential to see that whatever sort of analysis and theorizing it employs must be rational and that the data must ultimately be empirical. (I doubt that, at this stage, a behaviorist would feel a need to disagree.) To side-step absurd semantic arguments, I will sometimes write rational empiricism for the concept that I would simply call science.

An ostensible science that accepts as fact unjustified empirical propositions is no science at all. That is not to say that each thing that, in everyday language, we call a science (eg, biology) must be a self-contained set of explanations. It is perfectly acceptable for one such science to be built upon the results of a prior rational empiricism (eg, for chemistry to build upon physics).

If we carefully consider what we take to be fact (and which may indeed be fact), we recognize that there is a theoretical or conjectural support to our acceptance of most of it. Such propositions taken as fact cannot be the foundation of rational empiricism, because the aforementioned support must itself have been rational empiricism for rational empiricism to proceed from these propositions. Rational empiricism cannot start with measurement[1.50] nor with notions of things to be measured, such as mass or the speed of light; rational empiricism cannot start with a geometry. These notions arise from interpretation and conjecture.[2]

Rational empiricism starts with what may be called brute fact — data the awareness of which is not dependent upon an act of interpretation.[3] If the belief in a proposition depends upon any such act, regardless of how reasonable the act might be, then the proposition is not truly a brute fact.[4]

To develop propositions from brute facts that contradict known brute facts would be to engage in self-contradiction, which is not reasonable in interpretation nor in theorizing. It is especially unreasonable to develop propositions that contradict the very brute facts from which they were developed.[5]

Philosophers have a long history of exposing where propositions are reliant upon prior interpretation and assumption. Towards an extreme, we are asked how we know ourselves not to be brains in vats, fed stimuli corresponding to a virtual reälity. It’s not my intention to labor this question, beyond noting that it may be asked, and that acts of interpretation are entailed in any belief about whether we are other than about 3 pounds of tissue, bobbing-about in Pyrex™ jars, with electrodes attached here-and-there, whether the belief (for or against) be knowledge or not.

I referred to this question about whether one is a brain-in-a-vat as towards an extreme, rather than at an extreme, because a case in which stimuli are purely engineered is not an extreme. The presence itself of stimuli is not a brute fact. We conjecture their existence in our explanation of the sensations or sense-perceptions or perceptions that appear in our mind. If those things appear in our mind ex nihilo, then there are no stimuli, engineered or otherwise. That the mind is associated with a brain (or something like it) is not a brute fact. We build a model of reality that includes a body for us, and decide that our minds are housed within that body (as an activity or as a substance) or otherwise associated with it.[6]

The formation of sense-perceptions and of perceptions would seem to involve acts of interpretation; perhaps one would want to claim that the formation even of sensations involves interpretation. However, the presences of such things in the mind are themselves brute facts, whatever may be the theorized or conjectured origins of those things.[7] If by inner we understand the kernel of our belief system, and by outer we understand that which is built around that kernel, and if we begin our notion of mind with the capacity for sensations and the system that interprets these, then we should reälize that rational empiricism begins with the inner agent that the behaviorists and others want to dismiss as fictitious, mystical, superstitious; and it is the outer that is hypothesized in our explanation of the evidence. Those who attempt to deny or otherwise to exclude the inner self are trying to turn science on its head. Rational empiricism starts with a mind, and works its way out. And science, whether we simply equate it with rational empiricism or instead see it as a specific variety thereof, is thus committed to the existence of a mind, which is present in its foundation.


I say a mind advisedly; because, when rational empiricism starts, it starts anew with each mind. Of course, some minds do a better job of the rational empiricism than do others. The mind may be relatively inert rather than interpretive, or its interpretation may be largely irrational from the earliest stages.

If the mind continues, then it may develop an elaborate theory of the world. My own mind has done just this. And one of the important features of this theory is the belief in other minds (implicit in some of what I’ve been writing). Now, if we set aside issues of rationality, then an elaborate theory of the world might be developed without a belief in other minds. But as I constructed my theory of the world, including a theory of my having a body, it seemed that some of the other things out there exhibited behaviors similar to those of my own body, such that those behaviors of my own body were in part determined by my mind. Subsequently, my theory of minds in general, including my own, began to be informed by their behavior.[8] According to later features of the theory that I hold of these minds, some minds do a better job of developing a theory of other minds than do other minds. Some never develop such a theory; others develop theories that impute minds to things that have none; some assume that any mind must necessarily be almost identical to their own minds.

As communication developed between my mind and these other minds, my theories of things-more-generally began to be informed by what I was told of those other things. One of my problems from that point forward was ascertaining the reliability of what I was told. (It might here be noted that my aforementioned development of a theory of the world was of course in very large part a wholesale adoption of those claims that I considered reliable.) And that brings us to collaborative theorizing, of which many people now think science to be a special case.

But science is not essentially social. It does not pause between acts of communication, nor do we require the resumption of conversation as such to learn whether our most recent attempts were or were not science (though what we learn in conversation may tell us whether our prior conclusions continue to be scientific).

Consider whether Robinson Crusoe can engage in science, even on the assumptions that Friday will never appear, that Mr Crusoe will never be rescued, and that there is no means for him to preserve his work for future consideration. He can certainly engage in rational empiricism. He can test his conclusions against different sets of observations. (He can even quantify many things, and develop arithmetic models!)

Or imagine that you think that you see Colonel Inchthwaite commit a murder, though you are the only witness. Further, whenever you confront the Colonel and he is sure that there are no other witnesses and no recording devices, he freely admits to the murder. Your hypothesis that he has committed murder is tested every time that you query him. The fact that only you witnessed the apparent murder doesn’t make your experience mystical. Your theory is a reasoned conclusion from the empirical evidence available to you.

Of course, others cannot use Mr Crusoe’s work. And I will readily grant that it might be unscientific for someone else to believe your theory of murder. (That someone else may have little reason to believe your testimony, may have no independent means to test the theory, may have a simpler explanation to fit the evidence available to him or to her.)

Which is all to say that there can be private science, but it is only when the science of one’s position is shared that it may become science for others.[10] (And, even then, they may have other evidence that, brought to bear upon one’s position, renders it unscientific.)

The notion of science as intrinsically collaborative proceeds in part from a presumption that science is what those widely recognized as scientists do,[11] and in part from identifying science with the subject of the sociology of those seen (by some researcher) as scientists. But much of what people take to be science is, rather, a set of requirements — or of conventions attempting to meet requirements — for social interaction amongst would-be scientists to be practicably applied in the scientific development of belief.


It might be asked whether the scientists manqués who deny the mind can plausibly have no experience of it, and under what circumstances.

One theory might be that, indeed, some of these alleged scientists have no experience of consciousness; perhaps they are things that behave indistinguishably or almost indistinguishably from creatures with consciousness, yet do not themselves possess it. Perhaps there are natural machines amongst us, which behave like something more, yet are just machines.[12] But I’m very disinclined to accept this theory, which would seem effectively to entail a reproductive process that failed to produce a creature of one sort and then successfully produced mimicks thereöf, as if bees and bee-flies might have the same parents.

Another theory would be that some of these alleged scientists are autistic, having minds, but having trouble seeing them. There is actually a considerable amount of mind-blindness amongst those who attempt social science. An otherwise intelligent person without a natural propensity to understand people may involve him- or herself in the scientific study of human nature — or in an ostensibly scientific study thereöf — exactly as an outgrowth and continuation of attempts to understand it by unnatural means. These attempts may in fact be fruitful, as natural inclinations may be actively defective. The autistic can offer us an outsider perspective. But outsiders can be oblivious to things of vital importance, as would be the case here.[13]

(And one must always be alert to attempts by people who fail at the ordinary game of life to transform themselves into winners by hijacking the meta-game, rewriting the rules from positions of assumed expertise.)

A remaining theory would be that these are rather more ordinary folk, who found what appeared to them to be a profound, transformative theory, and over-committed to it. (There seems to be an awful lot of that sort of thing in the world.) Subsequently, little compels them to acknowledge consciousness. They aren’t often competently challenged; they’ve constructed a framework that steers them away from the problem; and most people seem to be pretty good at not thinking about things.


While the behaviorists have run off the rails in their insistence that minds are a fiction, that does not mean that the study of human behavior with little or no reference to the mind of the subject is always necessarily a poor practice. As I stated earlier, some people assume that any mind must necessarily be almost identical to their own minds, and a great many people assume far too much similarity. I find people inferring that, because they have certain traits, I must also have these same traits, when I know that I do not; I find them presuming that others have traits that I am sure that those others do not, again based upon a presumed similarity. A study of pure behavior at least avoids this sort of error, and is in some contexts very much to be recommended.


[0] I began writing this entry shortly after seeing the articles, but allowed myself repeatedly to be distracted from completing it. I have quite a few other unfinished entries; this one was at the front of the queue.

[1] When behaviorists found other psychologists unreceptive to their approach, some of them decided to decamp, and identify that approach as a separate discipline, which they grotesquely named behaviorology, combining Germanic with Greek.

[1.50 (2015:06/10)] The comment of a friend impels me to write that, by measurement I intended to refer to the sort of description explored by Helmholtz in Zählen und Messen, by Suppes and Zinnes in Basic Measurement Theory, and by Suppes, Krantz, and Tversky in Foundations of Measurement. This notion is essentially that employed by Lord Kelvin in his famous remark on measurement and knowledge. Broader notions are possible (and we see such in, for example, Rand’s Introduction to Objectivist Epistemology).

[2] Under a narrowed definition of science that entails such things as measurement, a reality in which quantification never applied would be one in which science were impossible. Many of those inclined to such narrow definitions, believing that this narrowed concept none-the-less has something approaching universal applicability, struggle to quantify things for which the laws of arithmetic are a poor or impossible fit.

[3] The term brute fact is often instead used for related but distinct notions of fact for which there can be no explanation or of fact for which there is no cause. Aside from a need to note a distinction, I am not here concerned with these notions.

[4] Propositions that are not truly brute fact are often called such, in acts of metaphor, of hyperbole, or of obliviousness.

[5] Even if one insisted on some other definition of science — which insistence would be unfortunate — the point would remain that propositions that contradict known brute fact are unreasonable.

[6] Famously or infamously, René Descartes insisted that the mind interfaced with the brain by way of the pineal gland.

[7] I am sadly sure that some will want to ask, albeït perhaps not baldly, how the mind is to know that its sensation of its sensation is correct, as if one never sensed sensations as such, but only sensations of sensations. And some people, confronted with the proposition put that baldly, will dig-in, and assert that this is indeed the case; but if no sensation can itself be sensed except by a sensation that is not itself, then no sensation can be sensed, as the logic would apply recursively.

[8] Take a moment now, to try to see the full horror of a mind whose first exposures to behavior determined by other minds are largely of neglectful or actively injurious behavior.

[9] If I impute less than certainty to some proposition then, while the proposition may be falsified, my proposition about that proposition — the plausibility that I imputed to it — is not necessarily falsified. None-the-less, it is easier to speak of being wrong about falsified propositions to which one imputed a high degree of plausibility.

[10] The confusion of transmittability with rationality is founded in stupidity. Even if one allowed science to be redefined as a collaborative activity, somehow definitionally requiring transmittability, private rationality would remain rational. But I promise you that some will adopt the madness of insisting that, indeed, any acceptance of private evidence by its holder is mystical.

[11] When would-be scientists imitate, without real understanding, the behavior of those whom they take to be scientists, the would-be scientists are behaving in a way analogous to a cargo cult.

[12] Some people are convinced that they are unique in possessing consciousness, and the rest of us are just robots who do a fair job of faking it. This is usually taken as madness, though there is rather wide acceptance of a certitude that all other sorts of animals are natural machines, and that anything that seems as if it proceeds from love by a dog or by a pig is just the machine performing well.

[13] The presence of consciousness is here a necessary truth, but the proper grounds of its necessity are not obvious to most who are aware of consciousness; thus it should be unsurprising that a markèdly autistic person could not see this truth in spite of its necessity.

A Question of Characters

31 May 2015

At various times, I’m confronted with the confusion, by persons and by systems, of characters with glyphs. Most of the time, that confusion is a very minor annoyance; sometimes, as when wrestling with the preparation of a technical document, it can cause many hours of difficulty.

It’s probably rather easier for people first to see that a character may have multiple glyphs. For example, the lower-case letter a has two distinct yet common glyphs (the double-storey ‘a’ typical of print, and the single-storey ‘ɑ’ typical of handwriting); likewise, g has its looptail and opentail forms.

People have a bit more trouble with the idea that a single glyph can correspond to more than one character. Perhaps most educated folk generally understand that a Greek Ρ is not our P, even though one could easily imagine an identical glyph being used in some fonts. But many people think that they’re looking at an o with an umlaut in each of the words coöperate and schön; whereäs the two dots over the o in the first word are a diæresis, an ancient diacritical mark used in various languages to clarify whether and how a vowel is pronounced.[1] The two dots over the o in the German schön are indeed an umlaut, which evolved far more recently from a superscript e.[2] (One may alternately write the same word schoen, whereäs schon is a different word.)

Out of context, what one sees is a glyph. Generally, we need context to tell us whether we’re looking at Ϲ (upper-case lunate sigma), our familiar C, or С (upper-case Cyrillic ess); likewise for many other characters and their similar or identical glyphs. Until comparatively recently, we usually had sufficient context, and mistakes were relatively infrequent and usually unimportant. (Okay, so a bunch of people thought that the Soviet Union called itself the CCCP, rather than the СССР. Meh.) But, with the development of electronic information technology, and with globalization, the distinction becomes more pressing. Most of us have seen the problems of OCR; these are essentially problems of inferring characters from glyphs. It’s not so messy when converting instead from plain-text or from something such as ODF, but when character substitutions were made based upon similarity or identity of glyph, the very same problems can then arise. For example, as I said, one sees glyphs, but what is heard when the text is rendered audible will be phonetic values associated with the characters used. And sometimes the system will process a less-than sign as a left angle bracket, because everyone else is using it as such. In an abstract sense, these are of course problems of transliteration, and of its effects upon translation.
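The point can be made concrete in a few lines of Python, using the standard unicodedata module:

```python
import unicodedata

# Three characters whose glyphs may be pixel-identical in many fonts:
for ch in ("C", "С", "Ϲ"):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0043  LATIN CAPITAL LETTER C
# U+0421  CYRILLIC CAPITAL LETTER ES
# U+03F9  GREEK CAPITAL LUNATE SIGMA SYMBOL

print("CCCP" == "СССР")  # False: the same glyphs, but different characters
```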

Some of you will recognize the contrast between character and glyph as a special case of the contrast between content and presentation — between what one seeks to deliver and the manner of delivery. Some will also note that the boundary between the two shifts. For example, the difference between upper-case and lower-case letters originated as nothing more than a difference in glyphs. Indeed, our R was once no more than a different way of writing the Greek Ρ; our A simply was the Greek Α, and it can remain hard to distinguish them! I don’t know that ſ (long ess) should be regarded as a different character from s, rather than just as an archaïc glyph thereof.

Still, the fact that what is sometimes mere presentation may at other times be content doesn’t mean that we should forgo the gains to be had in being mindful of the distinction and in creating structures that often help us to avoid being shackled to the accidental.


[1] In English and most other languages, a diæresis over the second of two vowels indicates that the vowel is pronounced separately, rather than forming a diphthong. (So here /koˈapəˌret/ rather than /ˈkupəˌret/ or /ˈkʊpəˌret/.) Over a vowel standing alone, as in Brontë, the diæresis signals that the vowel is not silent. (In English and some other languages, a grave accent may be used to the very same effect.) Portuguese cleverly uses a diæresis over the first of two vowels to signal that a diphthong is formed where it might not be expected.

[2] Germans used to use a dreadful script — Kurrentschrift — in which such an evolution is less surprising.

On the Meaning of Entrepreneur

20 May 2015

There has been and is a lot of confusion over the English word entrepreneur. Now, I say English word advisedly, because, though entrepreneur was derived from a French word spelled exactly the same way, a word is not merely a sequence of symbols, but such a sequence in association with a concept or set of concepts, and the English word entrepreneur doesn’t have quite the same meaning as the French word.

The French word means contractor or, more generally, one who undertakes.

We didn’t need a new word for contractor; it would be contemptible affectation of one sort or of another to introduce a longer French word for such purpose. In fact, there was some attempt to engage in that sort of affectation in the 19th Century, first in the entertainment industry.

But the sequence entrepreneur was reïntroduced to English in the mid-20th Century with the intention of identifying a narrower concept that merited a word of its own. That concept was of a person who attempts to create a market where one does not exist — offering a new sort of product, or offering a sort of product to those who have not been purchasers of such things.

The entrepreneur is not merely a small business person, nor an active business person, nor an independent contractor, nor some combination of the three. The entrepreneur is an economic explorer, seeking to cultivate new territory — typically with pecuniary profit in mind, but sometimes just for the satisfaction of having brought a market into existence.

Whatever the motivation, it is in the rôle of attempting to create markets that the entrepreneur is the great hero and the entrepreneuse the great heroine of the market economy. And some unconscious sense of that heroism has passed through our society, causing business people who aren’t such explorers to want to label themselves entrepreneur. The word has become diluted in general use, and many people are using it as if, well, it meant no more than the French word from which it were derived. Economists with a fair understanding of the market process shake their heads in dismay. We need a word for those heroes.