Archive for the ‘commentary’ Category

Consciousness and Science

Tuesday, 9 June 2015

The January-February 2012 issue of American Scientist contains an abridged reprinting of an article by BF Skinner, followed by a newer piece, frequently polemical, by a behaviorist, Stephen F. Ledoux.[0] In his polemic, Ledoux contrasts what he insists to be the scientific approach of behaviorology[1] with the ostensibly untestable and mystical approach of reference to an inner agent.

There's a problem here, but it's not unique to behaviorists. A large share of those who would study human nature scientifically do not know what science is.

Although courts and journalists and sociologists have declared that science is what scientists do, this formula is either a perverse begging of the question or simply wrong. The nature of science is not definitionally what is done by those recognized as scientists by academia nor by some narrower or wider society. Science does not start with academic degrees nor with peer review nor with the awarding of grants.

Science is reasoned analysis of — and theorizing about — empirical data.

Some want to use science more narrowly. It's in no way essential to the principal purpose of this essay that all rational analysis and theorizing about empirical data should count as science; but it is essential to see that whatever sort of analysis and theorizing is employed must be rational and that the data must ultimately be empirical. (I doubt that, at this stage, a behaviorist would feel a need to disagree.) To side-step absurd semantic arguments, I will sometimes write rational empiricism for the concept that I would simply call science.

An ostensible science that accepts as fact unjustified empirical propositions is no science at all. That is not to say that each thing that, in everyday language, we call a science (eg, biology) must be a self-contained set of explanations. It is perfectly acceptable for one such science to be built upon the results of a prior rational empiricism (eg, for chemistry to build upon physics).

If we carefully consider what we take to be fact (and which may indeed be fact), we recognize that there is a theoretical or conjectural support to our acceptance of most of it. Such propositions taken as fact cannot be the foundation of rational empiricism, because the aforementioned support must itself have been rational empiricism for rational empiricism to proceed from these propositions. Rational empiricism cannot start with measurement[1.50] nor with notions of things to be measured, such as mass or the speed of light; rational empiricism cannot start with a geometry. These notions arise from interpretation and conjecture.[2]

Rational empiricism starts with what may be called brute fact — data the awareness of which is not dependent upon an act of interpretation.[3] If the belief in a proposition depends upon any such act, regardless of how reasonable the act might be, then the proposition is not truly a brute fact.[4]

To develop propositions from brute facts that contradict known brute facts would be to engage in self-contradiction, which is not reasonable in interpretation nor in theorizing. It is especially unreasonable to develop propositions that contradict the very brute facts from which they were developed.[5]

Philosophers have a long history of exposing where propositions are reliant upon prior interpretation and assumption. Towards an extreme, we are asked how we know ourselves not to be brains in vats, fed stimuli corresponding to a virtual reälity. It's not my intention to labor this question, beyond noting that it may be asked, and that acts of interpretation are entailed in any belief about whether we are other than about 3 pounds of tissue, bobbing-about in Pyrex™ jars, with electrodes attached here-and-there, whether the belief (for or against) be knowledge or not.

I referred to this question about whether one is a brain-in-a-vat as towards an extreme, rather than at an extreme, because a case in which stimuli are purely engineered is not an extreme. The presence itself of stimuli is not a brute fact. We conjecture their existence in our explanation of the sensations or sense-perceptions or perceptions that appear in our minds. If those things appear in our minds ex nihilo, then there are no stimuli, engineered or otherwise. That the mind is associated with a brain (or something like it) is not a brute fact. We build a model of reality that includes a body for us, and decide that our minds are housed within that body (as an activity or as a substance) or otherwise associated with it.[6]

The formation of sense-perceptions and of perceptions would seem to involve acts of interpretation; perhaps one would want to claim that the formation even of sensations involves interpretation. However, the presences of such things in the mind are themselves brute facts, whatever may be the theorized or conjectured origins of those things.[7] If by inner we understand the kernel of our belief system, and by outer we understand that which is built around that kernel, and if we begin our notion of mind with the capacity for sensations and the system that interprets these, then we should reälize that rational empiricism begins with the inner agent that the behaviorists and others want to dismiss as fictitious, mystical, superstitious; and it is the outer that is hypothesized in our explanation of the evidence. Those who attempt to deny or otherwise to exclude the inner self are trying to turn science on its head. Rational empiricism starts with a mind, and works its way out. And science, whether we simply equate it with rational empiricism or instead see it as a specific variety thereof, is thus committed to the existence of a mind, which is present in its foundation.


I say a mind advisedly; because, when rational empiricism starts, it starts anew with each mind. Of course, some minds do a better job of the rational empiricism than do others. The mind may be relatively inert rather than interpretive, or its interpretation may be largely irrational from the earliest stages.

If the mind continues, then it may develop an elaborate theory of the world. My own mind has done just this. And one of the important features of this theory is the belief in other minds (implicit in some of what I've been writing). Now, if we set aside issues of rationality, then an elaborate theory of the world might be developed without a belief in other minds. But as I constructed my theory of the world, including a theory of my having a body, it seemed that some of the other things out there exhibited behaviors similar to those of my own body, behaviors that in my own case were in part determined by my mind. Subsequently, my theory of minds in general, including my own, began to be informed by their behavior.[8] According to later features of the theory that I hold of these minds, some minds do a better job of developing a theory of other minds than do other minds. Some never develop such a theory; others develop theories that impute minds to things that have none; some assume that any mind must necessarily be almost identical to their own minds.

As communication developed between my mind and these other minds, my theories of things-more-generally began to be informed by what I was told of those other things. One of my problems from that point forward was ascertaining the reliability of what I was told. (It might here be noted that my aforementioned development of a theory of the world was of course in very large part a wholesale adoption of those claims that I considered reliable.) And that brings us to collaborative theorizing, of which many people now take science to be a special case.

But science is not essentially social. It does not pause between acts of communication, nor do we require the resumption of conversation as such to learn whether our most recent attempts were or were not science (though what we learn in conversation may tell us whether our prior conclusions continue to be scientific).

Consider whether Robinson Crusoe can engage in science, even on the assumptions that Friday will never appear, that Mr Crusoe will never be rescued, and that there is no means for him to preserve his work for future consideration. He can certainly engage in rational empiricism. He can test his conclusions against different sets of observations. (He can even quantify many things, and develop arithmetic models!)

Or imagine that you think that you see Colonel Inchthwaite commit a murder, though you are the only witness. Further, whenever you confront the Colonel and he is sure that there are no other witnesses and no recording devices, he freely admits to the murder. Your hypothesis that he has committed murder is tested every time that you query him. The fact that only you witnessed the apparent murder doesn't make your experience mystical. Your theory is a reasoned conclusion from the empirical evidence available to you.

Of course, others cannot use Mr Crusoe's work. And I will readily grant that it might be unscientific for someone else to believe your theory of murder. (That someone else may have little reason to believe your testimony, may have no independent means to test the theory, may have a simpler explanation to fit the evidence available to him or to her.)

Which is all to say that there can be private science, but it is only when the science of one's position is shared that it may become science for others.[10] (And, even then, they may have other evidence that, brought to bear upon one's position, renders it unscientific.)

The notion of science as intrinsically collaborative proceeds in part from a presumption that science is what those widely recognized as scientists do,[11] and in part from identifying science with the subject of the sociology of those seen (by some researcher) as scientists. But much of what people take to be science is, rather, a set of requirements — or of conventions attempting to meet requirements — for social interaction amongst would-be scientists to be practicably applied in the scientific development of belief.


It might be asked whether the scientists manqués who deny the mind can plausibly have no experience of it, and under what circumstances.

One theory might be that, indeed, some of these alleged scientists have no experience of consciousness; perhaps they are things that behave indistinguishably or almost indistinguishably from creatures with consciousness, yet do not themselves possess it. Perhaps there are natural machines amongst us, which behave as if they were something more, yet are just machines.[12] But I'm very disinclined to accept this theory, which would seem effectively to entail a reproductive process that failed to produce a creature of one sort and yet successfully produced mimics thereöf, as if bees and bee-flies might have the same parents.

Another theory would be that some of these alleged scientists are autistic, having minds, but having trouble seeing them. There is actually a considerable amount of mind-blindness amongst those who attempt social science. An otherwise intelligent person without a natural propensity to understand people may involve him- or herself in the scientific study of human nature — or in an ostensibly scientific study thereöf — exactly as an outgrowth and continuation of attempts to understand it by unnatural means. These attempts may in fact be fruitful, as natural inclinations may be actively defective. The autistic can offer us an outsider perspective. But outsiders can be oblivious to things of vital importance, as would be the case here.[13]

(And one must always be alert to attempts by people who fail at the ordinary game of life to transform themselves into winners by hijacking the meta-game, rewriting the rules from positions of assumed expertise.)

A remaining theory would be that these are rather more ordinary folk, who encountered what appeared to them to be a profound, transformative theory, and over-committed to it. (There seems to be an awful lot of that sort of thing in the world.) Subsequently, little compels them to acknowledge consciousness. They aren't often competently challenged; they've constructed a framework that steers them away from the problem; and most people seem to be pretty good at not thinking about things.


While the behaviorists have run off the rails in their insistence that minds are a fiction, that does not mean that the study of human behavior with little or no reference to the mind of the subject is always necessarily a poor practice. As I stated earlier, some people assume that any mind must necessarily be almost identical to their own minds, and a great many people assume far too much similarity. I find people inferring that, because they have certain traits, I must also have these same traits, when I know that I do not; I find them presuming that others have traits that I am sure that those others do not, again based upon a presumed similarity. A study of pure behavior at least avoids this sort of error, and is in some contexts very much to be recommended.


[0] I began writing this entry shortly after seeing the articles, but allowed myself repeatedly to be distracted from completing it. I have quite a few other unfinished entries; this one was at the front of the queue.

[1] When behaviorists found other psychologists unreceptive to their approach, some of them decided to decamp, and identify that approach as a separate discipline, which they grotesquely named behaviorology, combining Germanic with Greek.

[1.50 (2015:06/10)] The comment of a friend impels me to write that, by measurement I intended to refer to the sort of description explored by Helmholtz in Zählen und Messen, by Suppes and Zinnes in Basic Measurement Theory, and by Krantz, Luce, Suppes, and Tversky in Foundations of Measurement. This notion is essentially that employed by Lord Kelvin in his famous remark on measurement and knowledge. Broader notions are possible (and we see such in, for example, Rand's Introduction to Objectivist Epistemology).

[2] Under a narrowed definition of science that entails such things as measurement, a reality in which quantification never applied would be one in which science were impossible. Many of those inclined to such narrow definitions, believing that this narrowed concept none-the-less has something approaching universal applicability, struggle to quantify things for which the laws of arithmetic are a poor or impossible fit.

[3] The term brute fact is often instead used for related but distinct notions of fact for which there can be no explanation or of fact for which there is no cause. Aside from a need to note a distinction, I am not here concerned with these notions.

[4] Propositions that are not truly brute fact are often called such, in acts of metaphor, of hyperbole, or of obliviousness.

[5] Even if one insisted on some other definition of science — which insistence would be unfortunate — the point would remain that propositions that contradict known brute fact are unreasonable.

[6] Famously or infamously, René Descartes insisted that the mind interfaced with the brain by way of the pineal gland.

[7] I am sadly sure that some will want to ask, albeït perhaps not baldly, how the mind is to know that its sensation of its sensation is correct, as if one never sensed sensations as such, but only sensations of sensations. And some people, confronted with the proposition put that baldly, will dig-in, and assert that this is indeed the case; but if no sensation can itself be sensed except by a sensation that is not itself, then no sensation can be sensed, as the logic would apply recursively.

[8] Take a moment now, to try to see the full horror of a mind whose first exposures to behavior determined by other minds are largely of neglectful or actively injurious behavior.

[9] If I impute less than certainty to some proposition then, while the proposition may be falsified, my proposition about that proposition — the plausibility that I imputed to it — is not necessarily falsified. None-the-less, it is easier to speak of being wrong about falsified propositions to which one imputed a high degree of plausibility.

[10] The confusion of transmittability with rationality is founded in stupidity. Even if one allowed science to be redefined as a collaborative activity, somehow definitionally requiring transmittability, private rationality would remain rational. But I promise you that some will adopt the madness of insisting that, indeed, any acceptance of private evidence by its holder is mystical.

[11] When would-be scientists imitate, without real understanding, the behavior of those whom they take to be scientists, the would-be scientists are behaving in a way analogous to a cargo cult.

[12] Some people are convinced that they are unique in possessing consciousness, and the rest of us are just robots who do a fair job of faking it. This is usually taken as madness, though there is rather wide acceptance of a certitude that all other sorts of animals are natural machines, and that anything that seems as if it proceeds from love by a dog or by a pig is just the machine performing well.

[13] The presence of consciousness is here a necessary truth, but the proper grounds of its necessity are not obvious to most who are aware of consciousness; thus it should be unsurprising that a markèdly autistic person could not see this truth in spite of its necessity.

A Question of Characters

Sunday, 31 May 2015

At various times, I'm confronted with persons and with systems that confuse characters with glyphs. Most of the time, that confusion is a very minor annoyance; sometimes, as when wrestling with the preparation of a technical document, it can cause many hours of difficulty.

It's probably rather easier for people first to see that a character may have multiple glyphs. For example, the lower-case letter a has two distinct yet common glyphs, the double-storey form typical of roman print faces and the single-storey form typical of handwriting and of many italic faces; likewise, g has a common double-storey (looptail) glyph and a common single-storey (opentail) glyph.

People have a bit more trouble with the idea that a single glyph can correspond to more than one character. Perhaps most educated folk generally understand that a Greek Ρ is not our P, even though one could easily imagine an identical glyph being used in some fonts. But many people think that they're looking at an o with an umlaut both in coöperate and in schön, whereäs the two dots over the o in the first word are a diæresis, an ancient diacritical mark used in various languages to clarify whether and how a vowel is pronounced.[1] The two dots over the o in the German schön are indeed an umlaut, which evolved far more recently from a superscript e.[2] (One may alternately write the same word schoen, whereäs schon is a different word.)

Out of context, what one sees is a glyph. Generally, we need context to tell us whether we're looking at Ϲ (upper-case lunate sigma), our familiar C, or С (upper-case Cyrillic ess); likewise for many other characters and their similar or identical glyphs. Until comparatively recently, we usually had sufficient context, and mistakes were relatively infrequent and usually unimportant. (Okay, so a bunch of people thought that the Soviet Union called itself the CCCP, rather than the СССР. Meh.) But, with the development of electronic information technology, and with globalization, the distinction becomes more pressing. Most of us have seen the problems of OCR; these are essentially problems of inferring characters from glyphs. It's not so messy when converting instead from plain-text or from something such as ODF, but when character substitutions have been made based upon similarity or identity of glyph, the very same problems can arise. For example, as I said, one sees glyphs, but what is heard when the text is rendered audible will be phonetic values associated with the characters used. And sometimes the system will process a less-than sign as a left angle bracket, because everyone else is using it as such. In an abstract sense, these are of course problems of transliteration, and of its effects upon translation.
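
To make the distinction concrete, here is a minimal sketch in HTML. The characters are entered by numeric character reference, so that there is no doubt which character is which; the remark about find-in-page describes typical browser behavior rather than a guarantee.

<!-- Three characters whose glyphs can be identical in many fonts:
     U+0043 LATIN CAPITAL LETTER C
     U+03F9 GREEK CAPITAL LUNATE SIGMA SYMBOL
     U+0421 CYRILLIC CAPITAL LETTER ES -->
<p>&#x0043; &#x03F9; &#x0421;</p>
<!-- A reader sees three nearly indistinguishable marks, but software does not:
     a find-in-page search for C will typically match only the first, and a
     screen reader or a search engine likewise treats the three as distinct. -->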

Some of you will recognize the contrast between character and glyph as a special case of the contrast between content and presentation — between what one seeks to deliver and the manner of delivery. Some will also note that the boundary between the two shifts. For example, the difference between upper-case and lower-case letters originated as nothing more than a difference in glyphs. Indeed, our R was once no more than a different way of writing the Greek Ρ; our A simply was the Greek Α, and it can remain hard to distinguish them! I don't know that ſ (long ess) should be regarded as a different character from s, rather than just as an archaïc glyph thereof.

Still, the fact that what is sometimes mere presentation may at other times be content doesn't mean that we should forgo the gains to be had in being mindful of the distinction and in creating structures that often help us to avoid being shackled to the accidental.


[1] In English and most other languages, a diæresis over the second of two vowels indicates that the vowel is pronounced separately, rather than forming a diphthong. (So here /koˈapəˌret/ rather than /ˈkupəˌret/ or /ˈkʊpəˌret/.) Over a vowel standing alone, as in Brontë, the diæresis signals that the vowel is not silent. (In English and some other languages, a grave accent may be used to the very same effect.) Portuguese cleverly uses a diæresis over the first of two vowels to signal that a diphthong is formed where it might not be expected.

[2] Germans used to use a dreadful script — Kurrentschrift — in which such an evolution is less surprising.

On the Meaning of Entrepreneur

Wednesday, 20 May 2015

There has been and is a lot of confusion over the English word entrepreneur. Now, I say English word advisedly, because, though entrepreneur was derived from a French word spelled exactly the same way, a word is not merely a sequence of symbols, but such a sequence in association with a concept or set of concepts, and the English word entrepreneur doesn't have quite the same meaning as the French word.

The French word means contractor or, more generally, one who undertakes.

We didn't need a new word for contractor; it would be contemptible affectation of one sort or of another to introduce a longer French word for such purpose. In fact, there was some attempt to engage in that sort of affectation in the 19th Century, first in the entertainment industry.

But the sequence entrepreneur was reïntroduced to English in the mid-20th Century with the intention of identifying a narrower concept that merited a word of its own. That concept was of a person who attempts to create a market where one does not exist — offering a new sort of product, or offering a sort of product to those who have not been purchasers of such things.

The entrepreneur is not merely a small business person, nor an active business person, nor an independent contractor, nor some combination of the three. The entrepreneur is an economic explorer, seeking to cultivate new territory — typically with pecuniary profit in mind, but sometimes just for the satisfaction of having brought a market into existence.

Whatever the motivation, it is in the rôle of attempting to create markets that the entrepreneur is the great hero and the entrepreneuse the great heroine of the market economy. And some unconscious sense of that heroism has passed through our society, causing business people who aren't such explorers to want to label themselves entrepreneur. The word has become diluted in general use, and many people are using it as if, well, it meant no more than the French word from which it were derived. Economists with a fair understanding of the market process shake their heads in dismay. We need a word for those heroes.

Not Following the Script

Monday, 13 April 2015

I frequently run across the problem of websites whose coders silently presume that all their visitors of interest have Javascript enabled on their browsers. Yester-day, I found this presumption affecting a page of someone whom I know (at least in passing), which prompts me to write this entry. (The person in question did not generate the code, but could suffer economic damage from its flaw.)

The reason that one should not presume that Javascript is enabled on the browsers of all visitors is that Javascript is itself a recurring source of security problems. Careful users therefore enable Javascript only for sites that they trust; very careful users enable Javascript only for sites that they trust and even then only on an as-needed basis; and paranoid users just won't enable Javascript ever. Now, in theory, the only visitors who might interest some site designers would be careless users, but we should look askance at those designers and at their sites.

(And trusting a site shouldn't be merely a matter of trusting the competence and good will of the owner of the domain. Unless that owner is also owner of the server that hosts the domain, one is also trusting the party from whom the site owner leases hosting. In the past, some of my sites have been cracked by way of vulnerabilities of my host.)

A designer cannot infer that, if-and-when his or her site doesn't work because Javascript is not enabled, the visitor will reälize that Javascript needs to be enabled; many problems can produce the same symptoms. Most of the time that sites don't work with Javascript disabled, they still don't work with it enabled. Further, the party disabling Javascript on a browser might be different from the present user; the present user might have only vague ideas about how web pages work. (An IT technician might disable Javascript for most browsers of users at some corporate site. Some of those users, perhaps very proficient in some areas but not with IT, may be tasked with finding products for the corporation.)

The working assumption should typically be that Javascript is not enabled, as this assumption will not be actively hurtful when Javascript is enabled, whereäs the opposite assumption will be actively hurtful when Javascript is not enabled.

The noscript element of HTML contains elements or content to be used exactly and only if scripting has been disabled. That makes it well suited for announcing that a page will work better if Javascript is enabled

<noscript><p class="alert">This page will provide greater functionality if Javascript is enabled!</p></noscript>

or not at all if it is not enabled.

<noscript><p class="alert">This page requires Javascript!</p></noscript>

(It is possible to put the noscript element to other uses.) So a presumption that Javascript is enabled certainly need not be silent.

However, in many cases, the effect got with Javascript isn't worth badgering the visitor to enable Javascript, and the page could be coded (with or without use of the noscript element) so that it still worked well without Javascript. In other cases, the same effects or very nearly the same effects could be got without any use of Javascript; a great deal that is done with Javascript could instead be done with CSS.
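
To illustrate that last point, here is a minimal sketch, with id values of my own invention, of a disclosure widget of the sort often implemented with a Javascript click-handler, done instead with a hidden checkbox and the CSS :checked pseudo-class. If CSS is unavailable, the visitor simply sees the checkbox and all of the content, which is hardly a catastrophe.

<style>
  /* Hide the checkbox itself and the panel that it controls. */
  #more-toggle { display: none; }
  #more-panel  { display: none; }
  /* Reveal the panel whenever the hidden checkbox is checked. */
  #more-toggle:checked ~ #more-panel { display: block; }
</style>

<input type="checkbox" id="more-toggle">
<label for="more-toggle">Show further details</label>
<div id="more-panel">
  <p>This content is revealed with no Javascript whatsoever.</p>
</div>

Clicking the label toggles the checkbox, and the general-sibling combinator (~) lets the panel's visibility follow the state of that checkbox.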

Am I Very Wrong?

Tuesday, 24 February 2015

Kindness come too late may be cruelty. I wonder whether I am too late.

In the Spotlight

Thursday, 18 December 2014

The most effective way to hide some things is to shine a light directly upon them. People will then not believe what they are shown.

καταγέλως μῶρος

Wednesday, 17 December 2014

There's a recurring joke that proceeds along these lines:

What do you call someone who speaks two languages? Bilingual. What do you call someone who speaks just one language? American.

Sometimes, the reference is instead to the British. But let's consider the reality that lies in back of this joke.

The vast majority of people who are bilingual speak English as their second language. Why English? At base, because of the economic significance of those who speak English, especially of those for whom English is their native tongue. This significance originates in the past scope of the British Empire, especially in North America. The American economy was once the world's largest — at the present, the matter is muddled — and the combined size of the American economy with that of other primarily Anglophonic regions still exceeds that of the Sinophonic[1] or Spanish-speaking regions.

If no language had something like the economic significance of English, then most people who are now bilingual would instead be monolingual. As it is, they had good cause to know English, but it wasn't their first language, so they learned it as their second.

Thus, mocking people for being strictly Anglophonic generally amounts to mocking them for having been raised amongst the peoples of the linguistic group that has the greatest economic significance. It would be actively stupid to mock them deliberately on this score, and doing so thoughtlessly is not a very great improvement.

(I'm certainly not saying that there are no good reasons for those who know English to learn other languages.)


[1] It may also be noted that the differences amongst what are called dialects of Chinese are often greater than the differences amongst what are regarded as separate languages. These variants of Chinese are labelled as dialects as part of a more general effort to create an illusion of national unity. Mandarin is a widely spoken language, but Chinese is really a family of languages.

Fated

Sunday, 30 November 2014

Off-and-on, I work on the plans for a couple of pieces of serial fiction. And thus it is repeatedly brought to my attention that, for the stories really to work, a profound necessity must drive events; essential elements must be predestined and meaningful.

This characterization contrasts markèdly with my view of real life. I think that people may be said to have personal destinies, but that these can be unreälized, as when we say that someone were meant to do or become something, but instead did or became something else. And, if I did believe that the world were a vast piece of clockwork, then I'd be especially disinclined to think that its dial had anything important to say.

December Song

Friday, 24 October 2014

As he lay in his death-bed, he expressed his profound sadness that he'd just never found the woman with whom he were to share his life. Some day, the right one will come along, insisted someone reflexively.

A nurse entered the room. Before the light completely faded into darkness, he saw her look at him and wink.

Disjunctive Jam-Up

Friday, 26 September 2014

The Eighth Amendment to the Constitution of the United States declares

Excessive bail shall not be required, nor excessive fines imposed, nor cruel _and_ unusual punishments inflicted.

(Underscore mine.) The constitution of the state of California has a much more complex discussion of bail; but its Article 1, §17 declares

Cruel _or_ unusual punishment may not be inflicted or excessive fines imposed.

(Underscore mine.) Plainly these words are an adaptation from the US Constitution.

The replacement of and with or was apparently to indicate that cruel punishment were not to be deemed acceptable simply by virtue of being usual. Indeed, Article 1, §17 of the constitution of the state of Florida used to declare

Excessive fines, cruel _or_ unusual punishment, attainder, forfeiture of estate, indefinite imprisonment, and unreasonable detention of witnesses are forbidden.

(underscore mine) and the state supreme court made just that interpretation in cases of the death penalty. (The section has since been radically revised.)

However, a hypothetical problem arises from the replacement. Just as cruel punishment is not acceptable regardless of whether it is unusual, unusual punishment is not acceptable regardless of whether it is cruel. And, if most or all prevailing punishments were cruel, then those prevailing punishments were forbidden as cruel, while punishment of any other sort were unusual, and so forbidden as well. Thus, under such circumstance, all punishment were forbidden!

This problem may not be merely hypothetical, in the context of problems such as prison over-crowding. (Of course, when push comes to shove, lawyers and judges tend to shove logic out the door.)