Posts Tagged ‘psychology’

Suicide Mission

Wednesday, 13 October 2021

[I posted the following as an entry to Facebook six years ago.]

Every now and then, one of my Facebook Friends posts or comments to a posting about someone who has lost his battle with depression.

I recently saw one of those postings, and visited the page of the person who was said to have lost the battle. I saw some of his final posts, and some of his pictures. And, yeah, he was battling with depression. If I'd known him, I would have told him to stop.

I don't mean that I would have told him to go somewhere and die. I mean that depression is not to be fought. I very much doubt that a depressive personality can ever be anything else; but I am absolutely certain that fighting it is not how to deal with it.

People who try to fight depression either are always fighting it or have lost to it. They compound the depression with a sense that there is something unacceptable about themselves, which can only be overcome by a fight. If they don't have that much fight in themselves, then they don't accept themselves; their lives hang on their belief in their ability to fight depression, to somehow refuse to be depressives.

It looks an awful lot like an unrecognized internalization of some of the things that the depressive was told as a child, by those who were failing that child, and who in many cases had taught and were teaching the perverted life-lessons that had made the child a depressive.

Depression is to be explained, to be understood, and to be put in context. There is no guarantee that life will then be livable, but at least one doesn't have to die upon losing a fight.

The Instituted Unconscious

Monday, 22 June 2015

An institution is a constructed,[0] persistent organizing practice or relationship within a culture. When most people hear or read the word institution, they think first of a sort of organization, somewhat like a firm though typically for some purpose other than pursuit of pecuniary profit. But, really, the scope is much wider, which is how one may, for example, speak or write of the institution of marriage.

Economists and other social thinkers recognize as institutions a great many practices and relationships that most people don't conceptualize as such. For example, languages are institutions; markets are institutions, and monies are institutions within those institutions; professional codes of ethics are institutions; and so forth.

Any given society is exactly a society, rather than merely some selection of people, to the extent that it is characterized by institutions.

Institutions can be hard to see as institutions; they can be hard to see at all. That which pervasively informs our thinking can be invisible for lack of contrast. The fact that a competent social thinker will recognize institutions that most people over-look does not mean that any given social thinker will recognize all the institutions of the society that he or she observes, or in which he or she participates. Rather, I do not think that any social thinker manages to attain such a profound awareness. If there is a meaning to most here, then I think that none of us sees most of the institutions. We participate in them, we use them, but we are unconscious of them.

Although one might imagine some outside agency acting to preserve an institution, more typically a practice or relationship will be persistent to the extent that it is self-perpetuating. It might be self-perpetuating in some fairly direct manner, or it might be thus simply by conferring some advantage on those who adopt it. Something that behaves in a self-perpetuating manner can seem to be purposeful. There are, in fact, some who would insist that a thing that behaves in a self-perpetuating manner truly is purposeful, but I don't want to enter into that debate here. Whether it be purpose or something that merely seems like purpose, there may not be any person to whom one could point and properly say that the purpose were his or were hers. Perhaps no individual wants the institution perpetuated — in some cases[1] participants may actually want an end to the institution — but acting through people the institution perpetuates itself.
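The example in footnote 1 can be made concrete. Below is a minimal sketch in Python (the demand and cost parameters are invented for the purpose): in a linear Cournot duopoly, each firm's self-interested best reply perpetuates an equilibrium that both firms would prefer to escape, yet neither can escape unilaterally.

# Inverse demand P = a - b*(q1 + q2); each firm produces at unit cost c.
a, b, c = 100.0, 1.0, 10.0

def profit(qi, qj):
    return (a - b * (qi + qj) - c) * qi

def best_response(qj):
    # Output maximizing a firm's own profit, given the rival's output.
    return max((a - c - b * qj) / (2.0 * b), 0.0)

q1 = q2 = 0.0
for _ in range(100):          # iterated best replies converge quickly here
    q1 = best_response(q2)
    q2 = best_response(q1)

collusive_q = (a - c) / (2.0 * b)   # joint-profit-maximizing total output
print("Nash joint profit:     ", profit(q1, q2) + profit(q2, q1))
print("collusive joint profit:", 2.0 * profit(collusive_q / 2.0, collusive_q / 2.0))

With these invented numbers, iterated best replies settle at a joint profit of 1800, where collusion would yield 2025; the inferior pattern persists because no participant can unilaterally improve upon it.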

So my claim is that we live and act within a rich frame-work of practices and relationships, largely unrecognized, that affect and effect events as if with purposes distinct from our own.

This concept may be related to various things.

In Jungian theory, there is postulated a collective unconscious, which is a set of structures of the unconscious mind, shared amongst animals to the extent that they are biologically related. In general, these structures include instincts; in humans, they also include symbols (called archetypes). Jung believed that the collective unconscious were dormant in the zygote; so that a person whose biological parents were of one ethnic group but who were raised from birth by members of another would have the collective unconscious of the biological parents, rather than of the family in which he or she were raised. I assert that this collective unconscious does not exist; but that something rather like it does, with the very important difference that it is transmitted experientially. The actual collective unconscious is the aforementioned unrecognized institutional frame-work.

Evolutionary psychology, also known as sociobiology, has sought to explain behavior (including human behavior) in terms of some habits leading to more reproductive success than do others. That much is surely part of a proper explanation of human behavior, but these theorists have had a propensity to insist or to presume that the mechanism of transmission is in the DNA of the chromosomes or of the mitochondria. (In this commitment, they have been rather like the Jungians.) After entirely too much delay, some of them acknowledged that cultures as such could be affected by evolutionary pressures. They developed the notion that Richard Dawkins called the meme,[2] and that EO Wilson grotesquely called the culturgen,[3] which was that of a culturally transmitted, self-perpetuating pattern, somewhat analogous to the chromosomal and mitochondrial genes. These patterns are institutions, viewed individually. We would be consciously aware of some of these patterns, but by no means of all.

Some people are convinced that all events are effected to some purpose, a thought typically expressed as Everything happens for a reason. This claim surely goes too far, but one could see how observing many events that seemed to happen towards a purpose, which purpose was not that of any one of us, could suggest a theory that all reälized outcomes were in some sense intended.

Others do not necessarily think that all events are effected to some purpose; but, perceiving in some events apparent purposefulness that cannot plausibly be imputed to any ordinary person, take this apparent purposefulness as evidence that events have been or are being guided by an extraordinary person — G_d. As a metaphor, this works rather well, though the impersonal G_d of Spinoza would be a better fit for the institutional framework; but, in any case, the apparent purposefulness is not good evidence for the involvement of a literal G_d.

Where many believers have been too quick to see the work of G_d, many non-believers have been too quick to see mere chance-coïncidence. But teasing-out that which is mere accident from that which works to the purposes or quasi-purposes of a frame-work of unrecognized parts is at best extremely difficult, if not impossible. A pattern can be found in any data set, and the number of super-patterns that may potentially be extrapolated from it is infinite. Additionally, most of us want to find significance in our lives, which biases us to see not only purposes but purposes of particular sorts behind events.
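The claim about patterns and super-patterns admits a small demonstration. Here is a minimal sketch in Python (the four data points are invented): any four observations with distinct x-values are fitted exactly by some cubic, and adding any multiple of a polynomial that vanishes at those points yields yet another curve fitting them exactly, so the perfect fits extrapolate to arbitrarily different predictions.

import numpy as np

# Four invented observations.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([2.0, -1.0, 0.5, 4.0])

# A cubic passes exactly through any four points with distinct x-values.
p = np.poly1d(np.polyfit(xs, ys, 3))

# This polynomial vanishes at every observed x, so adding any multiple
# of it gives another curve fitting the same data exactly.
vanish = np.poly1d(xs, r=True)

for c in (0.0, 1.0, -5.0):
    q = p + c * vanish
    assert np.allclose(q(xs), ys)   # each q "finds the pattern" in the data
    print(c, q(4.0))                # yet they disagree beyond the data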


[0 (2017:07/07)] A discussion of rather different matters impelled me to recognize that I needed to distinguish institutions from unconstructed, persistent organizing practices or relationships within a culture.

[1] For example, sub-optimal Cournot-Nash equilibria.

[2] Largely due to laziness and misunderstanding, this word came thereafter to have its popular meaning of any sort of widely spread expression.

[3] It's appalling how little philological sense is now had by otherwise educated people.

Consciousness and Science

Tuesday, 9 June 2015

The January-February 2012 issue of American Scientist contains an abridged reprinting of an article by BF Skinner, followed by a newer piece, frequently polemical, by a behaviorist, Stephen F. Ledoux.[0] In his polemic, Ledoux contrasts what he insists to be the scientific approach of behaviorology[1] with the ostensibly untestable and mystical approach of reference to an inner agent.

There's a problem here, but it's not unique to behaviorists. A large share of those who would study human nature scientifically do not know what science is.

Although courts and journalists and sociologists have declared that science is what scientists do, this formula is either a perverse begging of the question or simply wrong. The nature of science is not definitionally what is done by those recognized as scientists by academia nor by some narrower or wider society. Science does not start with academic degrees nor with peer review nor with the awarding of grants.

Science is reasoned analysis of — and theorizing about — empirical data.

Some want to use science more narrowly. It's in no way essential to the principal purpose of this essay that all rational analysis and theorizing about empirical data should count as science; but it is essential to see that whatever sort of analysis and theorizing is employed must be rational and that the data must ultimately be empirical. (I doubt that, at this stage, a behaviorist would feel a need to disagree.) To side-step absurd semantic arguments, I will sometimes write rational empiricism for the concept that I would simply call science.

An ostensible science that accepts as fact unjustified empirical propositions is no science at all. That is not to say that each thing that, in everyday language, we call a science (eg, biology) must be a self-contained set of explanations. It is perfectly acceptable for one such science to be built upon the results of a prior rational empiricism (eg, for chemistry to build upon physics).

If we carefully consider what we take to be fact (and which may indeed be fact), we recognize that there is a theoretical or conjectural support to our acceptance of most of it. Such propositions taken as fact cannot be the foundation of rational empiricism, because the aforementioned support must itself have been rational empiricism for rational empiricism to proceed from these propositions. Rational empiricism cannot start with measurement[1.50] nor with notions of things to be measured, such as mass or the speed of light; rational empiricism cannot start with a geometry. These notions arise from interpretation and conjecture.[2]

Rational empiricism starts with what may be called brute fact — data the awareness of which is not dependent upon an act of interpretation.[3] If the belief in a proposition depends upon any such act, regardless of how reasonable the act might be, then the proposition is not truly a brute fact.[4]

To develop propositions from brute facts that contradict known brute facts would be to engage in self-contradiction, which is not reasonable in interpretation nor in theorizing. It is especially unreasonable to develop propositions that contradict the very brute facts from which they were developed.[5]

Philosophers have a long history of exposing where propositions are reliant upon prior interpretation and assumption. Towards an extreme, we are asked how we know ourselves not to be brains in vats, fed stimuli corresponding to a virtual reälity. It's not my intention to labor this question, beyond noting that it may be asked, and that acts of interpretation are entailed in any belief about whether we are other than about 3 pounds of tissue, bobbing-about in Pyrex™ jars, with electrodes attached here-and-there, whether the belief (for or against) be knowledge or not.

I referred to this question about whether one is a brain-in-a-vat as towards an extreme, rather than at an extreme, because a case in which stimuli are purely engineered is not an extreme. The presence itself of stimuli is not a brute fact. We conjecture their existence in our explanation of the sensations or sense-perceptions or perceptions that appear in our minds. If those things appear in our minds ex nihilo, then there are no stimuli, engineered or otherwise. That the mind is associated with a brain (or something like it) is not a brute fact. We build a model of reality that includes a body for us, and decide that our minds are housed within that body (as an activity or as a substance) or otherwise associated with it.[6]

The formation of sense-perceptions and of perceptions would seem to involve acts of interpretation; perhaps one would want to claim that the formation even of sensations involves interpretation. However, the presences of such things in the mind are themselves brute facts, whatever may be the theorized or conjectured origins of those things.[7] If by inner we understand the kernel of our belief system, and by outer we understand that which is built around that kernel, and if we begin our notion of mind with the capacity for sensations and the system that interprets these, then we should reälize that rational empiricism begins with the inner agent that the behaviorists and others want to dismiss as fictitious, mystical, superstitious; and it is the outer that is hypothesized in our explanation of the evidence. Those who attempt to deny or otherwise to exclude the inner self are trying to turn science on its head. Rational empiricism starts with a mind, and works its way out. And science, whether we simply equate it with rational empiricism or instead see it as a specific variety thereof, is thus committed to the existence of a mind, which is present in its foundation.


I say a mind advisedly; because, when rational empiricism starts, it starts anew with each mind. Of course, some minds do a better job of the rational empiricism than do others. The mind may be relatively inert rather than interpretive, or its interpretation may be largely irrational from the earliest stages.

If the mind continues, then it may develop an elaborate theory of the world. My own mind has done just this. And one of the important features of this theory is the belief in other minds (implicit in some of what I've been writing). Now, if we set aside issues of rationality, then an elaborate theory of the world might be developed without a belief in other minds. But as I constructed my theory of the world, including a theory of my having a body, it seemed that some of the other things out there exhibited behaviors similar to those behaviors of my own body that were in part determined by my mind. Subsequently, my theory of minds in general, including my own, began to be informed by their behavior.[8] According to later features of the theory that I hold of these minds, some minds do a better job of developing a theory of other minds than do other minds. Some never develop such a theory; others develop theories that impute minds to things that have none; some assume that any mind must necessarily be almost identical to their own minds.

As communication developed between my mind and these other minds, my theories of things-more-generally began to be informed by what I was told of those other things. One of my problems from that point forward was ascertaining the reliability of what I was told. (It might here be noted that my aforementioned development of a theory of the world was of course in very large part a wholesale adoption of those claims that I considered reliable.) And that brings us to collaborative theorizing, of which many people now think science to be a special case.

But science is not essentially social. It does not pause between acts of communication, nor do we require the resumption of conversation as such to learn whether our most recent attempts were or were not science (though what we learn in conversation may tell us whether our prior conclusions continue to be scientific).

Consider whether Robinson Crusoe can engage in science, even on the assumptions that Friday will never appear, that Mr Crusoe will never be rescued, and that there is no means for him to preserve his work for future consideration. He can certainly engage in rational empiricism. He can test his conclusions against different sets of observations. (He can even quantify many things, and develop arithmetic models!)

Or imagine that you think that you see Colonel Inchthwaite commit a murder, though you are the only witness. Further, whenever you confront the Colonel and he is sure that there are no other witnesses and no recording devices, he freely admits to the murder. Your hypothesis that he has committed murder is tested every time that you query him. The fact that only you witnessed the apparent murder doesn't make your experience mystical. Your theory is a reasoned conclusion from the empirical evidence available to you.

Of course, others cannot use Mr Crusoe's work. And I will readily grant that it might be unscientific for someone else to believe your theory of murder. (That someone else may have little reason to believe your testimony, may have no independent means to test the theory, may have a simpler explanation to fit the evidence available to him or to her.)

Which is all to say that there can be private science, but it is only when the science of one's position is shared that it may become science for others.[10] (And, even then, they may have other evidence that, brought to bear upon one's position, renders it unscientific.)

The notion of science as intrinsically collaborative proceeds in part from a presumption that science is what those widely recognized as scientists do,[11] and in part from identifying science with the subject of the sociology of those seen (by some researcher) as scientists. But much of what people take to be science is, rather, a set of requirements — or of conventions attempting to meet requirements — for social interaction amongst would-be scientists to be practicably applied in the scientific development of belief.


It might be asked whether the scientists manqués who deny the mind can plausibly have no experience of it, and under what circumstances.

One theory might be that, indeed, some of these alleged scientists have no experience of consciousness; perhaps they are things that behave indistinguishably or almost indistinguishably from creatures with consciousness, yet do not themselves possess it. Perhaps there are natural machines amongst us, which behave as if they were more, yet are just machines.[12] But I'm very disinclined to accept this theory, which would seem effectively to entail a reproductive process that failed to produce a creature of one sort and then successfully produced mimics thereöf, as if bees and bee-flies might have the same parents.

Another theory would be that some of these alleged scientists are autistic, having minds, but having trouble seeing them. There is actually a considerable amount of mind-blindness amongst those who attempt social science. An otherwise intelligent person without a natural propensity to understand people may involve him- or herself in the scientific study of human nature — or in an ostensibly scientific study thereöf — exactly as an outgrowth and continuation of attempts to understand it by unnatural means. These attempts may in fact be fruitful, as natural inclinations may be actively defective. The autistic can offer us an outsider perspective. But outsiders can be oblivious to things of vital importance, as would be the case here.[13]

(And one must always be alert to attempts by people who fail at the ordinary game of life to transform themselves into winners by hijacking the meta-game, rewriting the rules from positions of assumed expertise.)

A remaining theory would be that these are rather more ordinary folk, who encountered what appeared to them to be a profound, transformative theory, and over-committed to it. (There seems to be an awful lot of that sort of thing in the world.) Subsequently, little compels them to acknowledge consciousness. They aren't often competently challenged; they've constructed a framework that steers them away from the problem; and most people seem to be pretty good at not thinking about things.


While the behaviorists have run off the rails in their insistence that minds are a fiction, that does not mean that the study of human behavior with little or no reference to the mind of the subject is always necessarily a poor practice. As I stated earlier, some people assume that any mind must necessarily be almost identical to their own minds, and a great many people assume far too much similarity. I find people inferring that, because they have certain traits, I must also have these same traits, when I know that I do not; I find them presuming that others have traits that I am sure that those others do not, again based upon a presumed similarity. A study of pure behavior at least avoids this sort of error, and is in some contexts very much to be recommended.


[0] I began writing this entry shortly after seeing the articles, but allowed myself repeatedly to be distracted from completing it. I have quite a few other unfinished entries; this one was at the front of the queue.

[1] When behaviorists found other psychologists unreceptive to their approach, some of them decided to decamp, and identify that approach as a separate discipline, which they grotesquely named behaviorology, combining Germanic with Greek.

[1.50 (2015:06/10)] The comment of a friend impels me to write that, by measurement I intended to refer to the sort of description explored by Helmholtz in Zählen und Messen, by Suppes and Zinnes in Basic Measurement Theory, and by Suppes, Krantz, and Tversky in Foundations of Measurement. This notion is essentially that employed by Lord Kelvin in his famous remark on measurement and knowledge. Broader notions are possible (and we see such in, for example, Rand's Introduction to Objectivist Epistemology).

[2] Under a narrowed definition of science that entails such things as measurement, a reality in which quantification never applied would be one in which science were impossible. Many of those inclined to such narrow definitions, believing that this narrowed concept none-the-less has something approaching universal applicability, struggle to quantify things for which the laws of arithmetic are a poor or impossible fit.

[3] The term brute fact is often instead used for related but distinct notions of fact for which there can be no explanation or of fact for which there is no cause. Aside from a need to note a distinction, I am not here concerned with these notions.

[4] Propositions that are not truly brute fact are often called such, in acts of metaphor, of hyperbole, or of obliviousness.

[5] Even if one insisted on some other definition of science — which insistence would be unfortunate — the point would remain that propositions that contradict known brute fact are unreasonable.

[6] Famously or infamously, René Descartes insisted that the mind interfaced with the brain by way of the pineal gland.

[7] I am sadly sure that some will want to ask, albeït perhaps not baldly, how the mind is to know that its sensation of its sensation is correct, as if one never sensed sensations as such, but only sensations of sensations. And some people, confronted with the proposition put that baldly, will dig-in, and assert that this is indeed the case; but if no sensation can itself be sensed except by a sensation that is not itself, then no sensation can be sensed, as the logic would apply recursively.

[8] Take a moment now, to try to see the full horror of a mind whose first exposures to behavior determined by other minds are largely of neglectful or actively injurious behavior.

[9] If I impute less than certainty to some proposition then, while the proposition may be falsified, my proposition about that proposition — the plausibility that I imputed to it — is not necessarily falsified. None-the-less, it is easier to speak of being wrong about falsified propositions to which one imputed a high degree of plausibility.

[10] The confusion of transmittability with rationality is founded in stupidity. Even if one allowed science to be redefined as a collaborative activity, somehow definitionally requiring transmittability, private rationality would remain rational. But I promise you that some will adopt the madness of insisting that, indeed, any acceptance of private evidence by its holder is mystical.

[11] When would-be scientists imitate, without real understanding, the behavior of those whom they take to be scientists, the would-be scientists are behaving in a way analogous to a cargo cult.

[12] Some people are convinced that they are unique in possessing consciousness, and the rest of us are just robots who do a fair job of faking it. This is usually taken as madness, though there is rather wide acceptance of a certitude that all other sorts of animals are natural machines, and that anything that seems as if it proceeds from love by a dog or by a pig is just the machine performing well.

[13] The presence of consciousness is here a necessary truth, but the proper grounds of its necessity are not obvious to most who are aware of consciousness; thus it should be unsurprising that a markèdly autistic person could not see this truth in spite of its necessity.

But Baby Made Three

Monday, 31 December 2012

Various unhappy things have occurred in my life since my last entry here. The worst of these was the death of the beloved cat of the Woman of Interest.


Some years before she and I had ever had any contact, the Woman of Interest was passing by a dumpster, and heard a cry as if from a tomcat trapped in it. So she went to free the creature. What she found, to her surprise, was not an adult, but a truly tiny little black kitten, with preternaturally beautiful green eyes. He had been closed in a box with a toy, put in the dumpster, and left to die. (We can only speculate about this combination of apparent affection and ruthlessness.)

He was far too young to be properly weaned and separated from his mother — the Woman of Interest was horrified when a veterinarian estimated his age — but she became the best possible substitute for that mother. He grew to be a cat with a rather large frame (though she took care not to let him become obese, and put him on a diet when he became a bit pudgy).

Actually, though, he really never ceased to be a kitten, and a pretty rambunctious one at that. And, as far as he was concerned, she was always his Best Mommy. When he and I would be alone in her apartment, he would spend a fair amount of time lying where he could watch the door, waiting for her return; I'd not before seen an adult cat do that. (My family had a cat, and I had a cat of my own, so I'm not without prior experience.)

He was a talkative cat, who usually made something more like the sound of a duck quack than a typical meow; and, if he were awake and she were at home, then he'd usually be vocalizing at her — I think that usually what he was saying could be translated as Mom! — though sometimes he'd just roam-about, engaged in apparent commentary to himself. I'd be especially amused when I'd hear him trying to tell her something after she'd fallen asleep while on the phone with me.

He and I got along quite well. I was looking forward to spending years hanging-out with him. Without conscious categorization, I thought of us as buddies.


I won't here rehash the details of how he was lost. Not long before he died, tests established that he had developed diabetes; this was caught much sooner than is typical in house cats (because the Woman of Interest was very mindful of his health), and she began treating it as per the veterinarian's instructions. But what was also discovered at the same time was that he had a common heart condition. Neither, by itself, would have proved fatal. But, jointly, they were too much. She found him dead on the morning of the fourth.


I'm not writing here to grieve (though I sometimes still cry about him), but because I observe something qua social scientist.

My relationship with the Woman of Interest wasn't well-bounded by just the two of us. Her cat played a significant rôle in our relationship, and it was more the rôle of another person than of an impersonal thing. A relationship that I would have categorized as between two persons was actually rather like one amongst three persons. With his death, the relationship of the three of us took a terrible hit, and the relationship of the two of us is henceforth informed by that injury. He did not mean — and could not have meant — the same thing to each of us individually, but he meant something shared in the relationship as such; and, even if the loss to the relationship could be fully decomposed into our losses as individuals (an issue that I don't propose here to labor), still it is important to recognize it as a sort of loss to the relationship, albeït not one to cause us to love each other less or to become more distant.


(Many years ago, I lost my dog while I was in a relationship, but the nature of that relationship was different. For my part, I knew that, without radical and unlikely change, I needed to be freed of that wretched woman; and, for her part, she had already been preparing to discard as much of her responsibilities as she might, including any that were had for the dog. He died before she left us, but she was going to leave him too. So I didn't observe then anything analogous to what I observe now.)

Just Pining

Sunday, 5 August 2012

On Sunday, 27 May, I received a pair of e.mail messages announcing formal acceptance for publication of my paper on indecision, and I ceased being braced for rejection. From 15 June, Elsevier had a version for sale on-line (first the uncorrected proof, then the corrected proof, now the version found in the journal). The issue itself (J Math Econ v48 #4) was made available on-line on 3 August. (I assume that the print copies will be received by subscribers soon.)


Readers may recall that, not very long ago, I was reading A Budget of Paradoxes by Augustus de Morgan, and that when de Morgan used the term paradox he did not use it in the sense of an apparent truth which seems to fly in the face of reason, but in the older sense of a tenet opposed to received opinion. De Morgan was especially concerned with cases of heterodoxy to which no credibility would be ascribed by the established mainstream.

Some paradoxes would later move from heterodoxy to orthodoxy, as when the Earth came to be viewed as closely approximated by a sphere, and with no particular claim to being the center of the universe. But most paradoxes are unreasonable, and have little chance of ever becoming orthodoxy.

I began reading de Morgan's Budget largely because I have at least a passing interest in cranky ideas. But reading it at the time that I did was not conducive to my mental health.


Under ideal circumstances, one would not use a weight of opinion — whether the opinion were popular or that of experts — to approximate most sorts of truth. But circumstances are seldom ideal, and social norms are often less than optimal whatever the circumstances. When confronted with work that is heterodox about foundational matters, the vast majority of people judge the work to be crackpot if it is not treated with respect by some ostensibly relevant population.

In cases where respect is used as the measure of authority, there can be a problem of whose respect is itself taken to have some authority; often a layering obtains. The topology of that layering can be conceptualized in at least three ways, but the point is that the layers run from those considered to have little authority beyond that to declare who has more authority, to those who are considered to actually do the most respected research, with respected popularizers usually in one of the layers in-between. In such structures, absurdities can obtain, such as presumptions that popularizers have themselves done important research, or that the more famous authorities are the better authorities.


As I was reading de Morgan's book, my paper was waiting for a response from the seventh journal to which it had been offered. The first rejection had been peremptory; no reason was given for it, though there was some assurance that this need not be taken as indicating that the paper were incompetent or unimportant. The next three rejections (2nd, 3rd, 4th) were less worrisome, as they seemed to be about the paper being too specialized, and two of them made a point of suggesting what the editor or reviewer thought to be more suitable journals. But then came the awful experience of my paper being held by Theory and Decision for more than a year-and-a-half, with editor Mohammed Abdellaoui refusing to communicate with me about what the Hell were happening. And this was followed by a perverse rejection at the next journal from a reviewer with a conflict of interest. Six rejections[1] might not seem like a lot, but there really aren't that many academically respected journals which might have published my paper (especially as I vowed never again to submit anything to a Springer journal); I was running-out of possibilities.

I didn't produce my work with my reputation in mind, and I wouldn't see damage to my reputation as the worst consequence of my work being rejected; but de Morgan's book drew my attention to the grim fact that my work, which is heterodox and foundational, was in danger of being classified as crackpot, and I along with it.


Crackpots, finding their work dismissed, often vent about the injustice of that rejection. That venting is taken by some as confirmation that the crackpots are crackpots. It's not; it's a natural reäction to a rejection that is perceived to be unjust, whether the perception is correct or not. The psychological effect can be profoundly injurious; crackpots may collapse or snap, but so may people who were perfectly reasonable in their heterodoxy. (Society will be inclined to see a collapse or break as confirmation that the person were a crackpot, until and unless the ostensible authorities reverse themselves, at which point the person may be seen as a martyr.)


As things went from bad to worse for my paper, I dealt with how I felt by compartmentalization and dissociation. When the paper was first given conditional acceptance, my reäction was not one of happiness nor of relief; rather, with some greater prospect that the paper would be published, the structure of compartmentalization came largely undone, and I felt traumatized.


Meanwhile, some other things in my life were going or just-plain went wrong, at least one of which I'll note in some later entry. In any case, the recent quietude of this 'blog hasn't been because I'd lost interest in it, but because properly to continue the 'blog this entry was needed, and I've not been in a good frame-of-mind to write it.


[1] Actually five rejections joined with the behavior of Abdellaoui, which was something far worse than a rejection.

Nicht Sehr Gut

Tuesday, 29 July 2008

I have been reading Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer. Gut Feelings seeks to explain — and in large part to vindicate — some of the processes of intuïtive thinking.

Years ago, I became something of a fan of Gigerenzer when I read a very able critique that he wrote of some work by Kahneman and Tversky. And there are things in Gut Feelings that make it worth reading. But there are also a number of active deficiencies in the book.

Gigerenzer leans heavily on undocumented anecdotal evidence, and an unlikely share of these anecdotes are perfectly structured to his purpose.

Gigerenzer writes of how using simple heuristics in stock-market investment has worked as well as or better than the use of more involved models, and sees this as an argument for the heuristics, but completely ignores the efficient-markets hypothesis. The efficient-markets hypothesis basically says that, almost as soon as relevant information is available, profit-seeking arbitrage causes prices to reflect that information, and then there isn't much profit left to be made, except by luck — unpredictable change. (And one can lose through such change as easily as one might win.) If this theory is correct, then one will do as well picking stocks with a dart board as by listening to an investment counselor. In the face of the efficient-markets hypothesis, the evidence that he presents might simply illustrate the futility of any sort of deliberation.
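What the efficient-markets hypothesis implies for such evidence can be illustrated with a minimal simulation in Python, under the strong assumption that returns are independent noise (all parameters invented): a follow-the-past-winner heuristic and a dart-board pick then have the same expected return, because the past carries no signal.

import random

random.seed(0)
N_STOCKS, N_DAYS, N_TRIALS = 50, 100, 500

def trial():
    # Each stock's past and future daily returns: mean-zero noise.
    history = [[random.gauss(0.0, 0.01) for _ in range(N_DAYS)]
               for _ in range(N_STOCKS)]
    future = [random.gauss(0.0, 0.01) for _ in range(N_STOCKS)]
    # Heuristic: buy the stock with the best past record.
    winner = max(range(N_STOCKS), key=lambda i: sum(history[i]))
    dart = random.randrange(N_STOCKS)
    return future[winner], future[dart]

results = [trial() for _ in range(N_TRIALS)]
print("past-winner mean return:", sum(r[0] for r in results) / N_TRIALS)
print("dart-board mean return: ", sum(r[1] for r in results) / N_TRIALS)
# Both averages hover near zero; the past record carried no signal.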

Gigerenzer makes a point of noting where better decisions seem often to be made by altogether ignoring some information, and provides some good examples and explanations. But he fails to properly locate a significant part of the problem, and very much appears to mislocate it. Specifically, a simple, incorrectly-specified model may predict more accurately than a complex, incorrectly-specified model. Gigerenzer (who makes no reference to misspecification) writes

In an uncertain environment, good intuitions must ignore information

but uncertainty (as such) isn't to-the-point; the consequences of misspecification are what may justify ignoring information. It's very true that misspecification is more likely in the context of uncertainty, but one system which is intrinsically less predictable than another may none-the-less have been better specified.
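That point about misspecification admits a small demonstration. Here is a minimal sketch in Python (data and functional forms invented): the true process is exponential, so a straight line and a degree-9 polynomial are both misspecified, but the simpler wrong model, though it ignores most of the sample's detail, predicts fresh points better.

import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    # The actual process; neither candidate model has this form.
    return np.exp(0.5 * x)

x_train = np.linspace(0.0, 4.0, 12)
y_train = truth(x_train) + rng.normal(0.0, 0.5, x_train.size)

line = np.poly1d(np.polyfit(x_train, y_train, 1))    # simple and wrong
wiggly = np.poly1d(np.polyfit(x_train, y_train, 9))  # complex and wrong

# Fresh points over the sample's range and slightly past it.
x_test = np.linspace(0.0, 4.5, 200)
for name, model in (("line    ", line), ("degree-9", wiggly)):
    rmse = np.sqrt(np.mean((model(x_test) - truth(x_test)) ** 2))
    print(name, "out-of-sample RMSE:", round(float(rmse), 3))
# The degree-9 fit chases the noise in the twelve training points and
# goes wild just past them; the line, though biased, stays close.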

I am very irked by the latest chapter that I've read, Why Good Intuitions Shouldn't Be Logical. In note 2 to this chapter, one reads

Tversky and Kahneman, 1982, 98. Note that here and in the following the term logic is used to refer to the laws of first-order logic.[1]

The peculiar definition has been tucked behind a bibliographical reference. Further, the notes appear at the end of the volume (rather than as actual foot-notes), and this particular note appears well after Gigerenzer has already begun using the word logic (and its adjectival form) baldly. If Gigerenzer didn't want to monkey dance, then he could have found a better term, or kept logic (and derivative forms) in quotes. As it is, he didn't even associate the explanatory note with the chapter title.

Further, Gigerenzer again mislocates errors. Kahneman and Tversky (like many others) mistakenly thought that natural language and, or, and probable simply map to logical conjunction, logical disjunction, and something-or-another fitting the Kolmogorov axiomata; they don't. Translations that presume such simple mappings in fact result in absurdities, as when

She petted the cat and the cat bit her.

is presumed to mean the same thing as

The cat bit her and she petted the cat.

because conjunction is commutative.[2] Gigerenzer writes as if the lack of correspondence is a failure of the formal system, when it's instead a failure of translation. Greek δε should sometimes be translated and, but not always, and vice versa; likewise, ∧ shouldn't always be translated as and, nor vice versa. The fact that such translations can be in error does not exhibit an inadequacy in Greek, in English, nor in the formal system.
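That commutativity can be checked mechanically; a minimal sketch in Python, enumerating the truth table:

from itertools import product

# (P1 and P2) has the same truth-value as (P2 and P1) under every
# assignment, so a bare conjunction cannot preserve the temporal order
# that the English "and" conveys in the cat-sentences above.
assert all((p1 and p2) == (p2 and p1)
           for p1, p2 in product((False, True), repeat=2))
print("conjunction is commutative under all assignments")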


[1] The term first-order logic refers not to a comprehensive notion of abstract principles of reasoning, but to a limited formal system. Perhaps the simplest formal system to be called a logic is propositional logic, which applies negation, conjunction, and disjunction to propositions under a set of axiomata. First-order logic adds quantifiers (for all, for some) and rules therefor to facilitate handling propositional functions. Higher-order logics extend the range of what may be treated as variable.

[2] That is to say that

[(P1 ∧ P2) ⇔ (P2 ∧ P1)] ∀(P1, P2)