CHAPTER 4-5

PITFALLS THAT ENTRAP OUR THINKING

When I was nine years old, we moved to a block with a tree nursery at the end of it. The nursery seemed as huge as a jungle to us kids. (As an adult I am shocked at how small an area it covers -- about large enough for two sets of two houses back-to-back.) In our game, "tribe war", one of our favorite stratagems was to dig an "Ethiopian elephant pitfall trap" -- a pit in the path covered with slight sticks and grass for concealment -- and then lead the other tribe of two or three boys through the path in hopes that one would fall into the pit (all the way up to one knee; we never worried about spraining joints or breaking bones, of course).

Those elephant traps are my image of the pitfalls that we face in thinking clearly, except that the thinking traps are not created by malicious nature to trap any of us in particular. Rather, they lie in wait for all of us impartially, without design. But if we do not learn to avoid them, we fall again and again, which not only prevents us from getting ahead, but also engenders such fear of falling that we stand immovable in one spot.

Two tactics help avoid pitfalls. First, know the pitfalls so that you can spot their signs, the way a wary elephant somehow senses that something is amiss and steps around the trap, or tests it with a gingerly foot. Second, go slowly; do not be induced into running heedlessly by others running before you, and test every abstract idea against the concrete experience of first-hand knowledge and scientific data. Do not think that the path is safe just because others seem to run safely on it. The saying "Make haste slowly" fits well here.

The profession of economics is currently graced by a great scholar who has an uncanny propensity for avoiding intellectual pitfalls. For example, unlike almost all the rest of the profession, he avoided becoming a monotheistic devotee of the false god of post-World War II Keynesianism (a period quite different from the unusual circumstances of the 1930s depression, when Keynes's doctrine may have been a useful trick). And I have noticed him avoiding at least two minor theoretical traps that almost every other textbook writer stepped into. (There must be many more such cases that I did not notice.) I have a hunch that he avoided these minor fallacious concepts not because he thought them through, but rather because he did not blindly plunge ahead and use the concepts just because everyone else did. Instead, he seemed to draw back from them because he had not tested them thoroughly himself, and therefore just stepped around them "instinctively".

In contrast, his great rival (I eschew names to avoid unnecessary argument), who is considered the greatest theorist of his generation, and certainly is the deftest person mathematically who has ever plied the trade, has stepped into trap after trap over the years, plunging ahead with mathematical advances without testing the underlying reality of the abstract propositions. This has led him to make some statements so foolish that most grade-school graduates would avoid them by testing them against their own daily experiences. This Nobel prizewinner is so clever that he extricates himself from these traps well before they damage his reputation -- but not before they have damaged the thinking of his readers and followers, who remain trapped in the fallacious concepts, unable to make progress.
It may be relevant that the former of those economists has worked as much with empirical data as with theory, always treating the two sorts of knowledge as inseparable, whereas the latter economist has almost never worked with data. This is powerful testimony in favor of William James's injunction to stick close to concrete experience, and never to be so sure of your skill with abstractions that you do not constantly test them against experience.

So...I advise you to make progress slowly, and to build the habit of testing your abstract ideas against concrete experience.

Now we proceed to the more technical task of analyzing some of the main pitfalls to help you identify them before they trap you. The more you read about the fallacies, the more sensitive to them you will become. Fallacies are here classified as failures of logic, evidence, language, relevance, and overconfidence. Most fallacies, however, could easily be classified under a different heading than the one I chose for them.

The same ideas that emerge in discussion of fallacies appear in at least three other contexts in this book, too: Fallacies are used intentionally by propagandists as the principles for misleading people. These same fallacies cause sadness, anger, and other painful feelings when we mis-evaluate our lives due to bad thinking. And the principles of scientific thinking are devices for preventing fallacies from leading to unsound conclusions in research.

LOGICAL PITFALLS1

The study of logic, and of the logical fallacies, has traditionally been offered as a means of improving your purposeful thinking. But deductive logic is only one part of reasoning, and reasoning is only one part of thinking. The quality of the facts that one adduces obviously matters greatly; if you assume that energy is getting more scarce when in fact it has been becoming less scarce over the centuries, a logical argument based on a premise of increasing scarcity will necessarily be unsound even though its logic may be airtight. The quality of the theories you use matters, too. If you invoke the simple Malthusian theory that the scarcity of oil must eventually increase because there is a limited amount of it and we use some of it up, but this theory is inappropriate because we create more oil by growing oil-bearing plants, then again your conclusion will be unsound though your deductive logic may be airtight.

The most serious drawback of logic is that the propositions it studies are misleading examples of the intellectual material that we must come to grips with. Studies of logic specialize in isolated arguments like "Mammals breathe through lungs. Whales are mammals. Whales therefore breathe through lungs". But most of our thinking concerns propositions that are interlocked with many other propositions, usually in a hierarchical fashion, and therefore cannot be evaluated in isolation. It is the relationship of a proposition to the entire body of our thinking that creates the most subtle problems, and also the most important ones. For example, the Malthusian argument about the scarcity of oil is itself a branch of an entire tree of theory and data, and the statement about oil therefore cannot be sensibly evaluated without reference to the entire hierarchy of arguments; this is typical of scientific work. And focusing on single isolated propositions distracts our attention from that larger task.

The classical logical fallacies are variations on a single theme: a faulty or missing link in the argument.
The arguer gets away with this if the audience is not paying attention. The obvious protection against logical fallacies is to check every link in the argument.

Begging the Question, and Circular Reasoning

These are two varieties of assuming something is so just because it is said to be so. You beg the question (really, you beg that an assumption will be accepted) when you say "Will you grant that statement A is true? If so, then we can deduce that..." and you go on from there. Or you just keep repeating A until it seems believable simply because it has been heard so often. The circular fallacy is proving A with B, and then proving B with A. "Why should you believe in God? Because the Bible says so. Why should you believe what the Bible says? Because it is the word of God."

Non Sequitur

In Latin "non sequitur" means "it does not follow". That is, there is a break in the logic of the argument. For example, in centuries past, in pogroms in Russia this cry was frequently heard: "Save Mother Russia. Beat the Jews". There is no stated connection between the two ideas. But amidst the passions of riots, the hearer does not pay attention to the break in the argument and simply acts as if one proposition implies the other.

Post Hoc Ergo Propter Hoc

The Latin title of this fallacy means "after it and therefore because of it" -- that is, because B came after A, A is assumed to have caused B. But do the birds which fly south cause winter to follow? Precedence alone is a very weak test of causation. (See Chapter 4-3.) It is at the same level of thinking as the superstition that because every time you had corn flakes for breakfast you did well in the tennis match, corn flakes cause you to play well. At best this is a clue to investigate dietary influences, rather than reasonable proof of a causal effect. Indeed, no purely logical argument alone can establish causality. In a complex situation, the decision to describe a relationship as causal is subtle and requires judgment, as described in Chapter 4-3.

False Alternative

"If you are against abortion, you must work for a law against it". This fallacy is rather obvious: You may think that abortion is a bad thing, but also judge that a law against it would be even worse. The rhetorical device known as "false alternative" attempts to box you into a corner from which you cannot escape except by accepting the speaker's desired alternative. It does so by implicitly suggesting that there are only two alternatives, one that you do not accept and the other the speaker's desired alternative. But usually there are many other alternatives. To avoid being dragged into this fallacy, you need only use your imagination to postulate other alternatives, and have the force of personality to deny the speaker's false narrowing of the alternatives.

Long Chains of Reasoning

The longer the chain of argument, the harder it is to check every link in the chain, and therefore the easier it is for an undetected fallacy to occur. A computer program is a nice example of a chain of logical argument. It is obviously easier for an error to slip by when a program contains hundreds or thousands of lines of programming code, all inter-related to other lines of code with a myriad of connections running every which way, than when a program contains just three or four lines of code where the logic is easy to grasp and an error therefore is obvious to your eye.
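The arithmetic behind this point is worth a moment. Suppose, purely as an illustration (the 99-percent figure is my assumption, not a measurement), that each link in a chain of reasoning -- or each line of code -- is correct with probability 0.99, independently of the others. Then the chance that the whole chain holds is 0.99 multiplied by itself once per link, and it decays fast as the chain grows. A minimal sketch in Python:

    # Reliability of a chain of reasoning or code, assuming (hypothetically)
    # that each link is independently correct with probability p.
    def chain_reliability(p_per_link, n_links):
        """Probability that every link in the chain holds."""
        return p_per_link ** n_links

    for n in (3, 30, 300):
        print(n, "links:", round(chain_reliability(0.99, n), 3))
    # prints roughly 0.97, 0.74, and 0.049

Three links at 99 percent each leave the conclusion almost surely sound; three hundred such links leave it more likely broken than not -- which is why checking only the internal logic of a long chain is not enough.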
The same is true of a long argument in mathematics or symbolic logic. (Chapter 4-2 discusses this matter further.) The longer and more complex the argument or computer program, therefore, the more important it is to check it not only by examining its logic but also by examining the output to see if it makes sense and squares with your everyday knowledge. All things being equal, therefore, a policy of a government or an organization that hinges on a shorter line of argument is more attractive than a policy that hinges on a longer line of argument that ostensibly proves that the policy is sound. (More discussion of this rationale in favor of muddling-through policies in preference to radical-change policies may be found in Chapter 3-1.)

PITFALLS IN DEALING WITH INFORMATION

Gathering and evaluating information is a skill. If you do it poorly, your thinking suffers. Lack of skill is a bit different from a fallacy, however, because the same actions that are unsound in one situation are sound in another. The key questions about evidence are which, and how much -- how much evidence to gather, how much attention to give to particular pieces of evidence, which evidence you should consider relevant, and so on.

Let's briefly classify the places where error may occur: First, the aspects of gathering and evaluating evidence are: the elements (observations), the dimensions (variables), the deductive logic used, the devices used to extend the data, predisposing abstractions (theories and world views), mechanisms used, and (perhaps surprisingly) the honesty of your character. Second, for each of these aspects of the information-developing process there are issues about what to include, what to emphasize and de-emphasize (the weights given to various elements), and what to exclude. Space is lacking to cover this classification systematically, so I'll simply touch on some high points. Many fallacies result from the lack of scientific discipline. Indeed, fallacy and scientific discipline are the names of the two opposite sides of the same coin. Hence I'll refer frequently to Chapter 3-3, which discussed scientific methods for gathering information.

We tend to gather and process information in ways that are convenient for us.2 This implies that the information that is closest at hand, easiest to notice, and most familiar to us is more likely to enter into our thinking than information that is harder to come by and less distinct. Sometimes this economy of thought works well. Sometimes, however, we risk falling into error by collecting information hit-or-miss, the easy way. The fallacy of post hoc ergo propter hoc is an example of this kind of erroneous thinking. The event that precedes another event comes to mind as the cause simply because it is the most available explanation of the occurrence.3

We also use convenient similarities as a way of guessing the nature of new instances. For example, Jacques Barzun tells us that many professors pigeon-hole a new student on the basis of her/his similarity to students they have taught earlier. This practice certainly saves a lot of time in studying the new student. (Barzun even recommends the practice.) But it is easy to be fooled by surface characteristics and hence to put the student into the wrong pigeon-hole in your mind. Even (or especially) scientists fall into this pitfall. You have probably read and believed, as I have, that air travel is safer than auto travel per mile.
But driving 600 miles on a rural interstate highway by a 40-year-old, seat-belted, alcohol-free driver in a car 700 pounds or more above average weight is slightly safer than an air trip of 600 miles, and 300 miles on the interstate are twice as safe as a 300-mile air trip. Surprise. And that's for the driver; a passenger sitting in back is even safer.4 But then again, though fatalities may be no greater in such a highway trip, the rate of injuries is about seventy times as great as for an air trip. [get cite from Windle and Dresner] And for a young, drunk, unseatbelted driver of a small car on a winding country road, the fatality rate is more than a thousand times higher than the safe-situation rate.5 Now how should we think about the matter? (For some purposes one might wish to compute a measure of "total accident bodily damage" by adding an economic measure of the cost of injuries plus fatalities, using the sort of valuations of injuries and deaths that a court might arrive at -- very tricky business. Clearly, determining your purpose in advance might be the toughest part of this exercise.)

These are some pitfalls that scientific discipline helps you avoid:

Insufficient Evidence, and Neglect of Variability

A single swallow does not make a summer, and a single anecdote does not constitute proof of a general phenomenon. The fact that Simha had a happy time in Salt Lake City does not imply that Salt Lake City is the best place to go to have a happy time. But a single demonstration that an artificial heart keeps a patient alive does indeed prove that an artificial heart can work. This error of neglecting variability is so important that it is discussed at further length in the next chapter. The point which I want to hammer again (and again) is the importance of recognizing variability in a phenomenon, and responding to variability by collecting a sample of evidence large enough and representative enough to give you a reliable picture despite the variability, as in the example of the quality of taxi rides in various cities in Chapter 3-3.

Improper Weighting of the Elements of Evidence

A single vivid illustration can overwhelm a mass of other evidence. A single picture of a brand-new immigrant receiving welfare can overpower a careful statistical study which shows that immigrant families receive less in welfare than do average native families. The impression of awesome power projected by a marching army on parade can lead to a decision to go to war, under the illusion that the enemy's power cannot be defeated. Somewhere I have read Gamal Abdel Nasser saying that such an impression influenced him to plan war on Israel in 1967. In the opposite direction, Charles de Gaulle wrote that Nazi parades of war equipment in 1937 convinced French observers in Berlin that Germany was undefeatable, which led to French concessions with respect to Central Europe.6 Another illustration: Many people who lived through the depression were so seared by the trauma of money worries that they saved obsessively in later years.

An important variant of improper weighting of the evidence arises from neglect of the proportions of the elements being considered. We often give equal weight to all the elements when they differ greatly in their importance and ought to be weighted very differently. This is like the story of the rabbit-and-horse stew -- proportions 50-50, one rabbit, one horse. This error is discussed at greater length in the next chapter.

I fell into this pitfall yesterday. My friend Mort told me that he was worried about a brain tumor because he noticed that one eye was foggy, and he had run into a man who had had a foggy eye and it turned out to be a brain tumor. I reassured him that the odds probably were 20 to 1 in his favor because there must be 20 causes of a foggy eye other than a brain tumor. Today he called and told me good news -- at worst it is a cataract, not a very serious matter nowadays. (It turned out to be nothing at all.) And I realized that there may be a thousand (I'm guessing) cataract sufferers for each occurrence of brain tumor, let alone other explanations of a foggy eye, and hence the odds I gave were way off the mark. I had made the elementary error of ignoring the frequencies of the explanations, and counting each one equally.
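The arithmetic of my error can be put in a few lines. The frequencies below are invented for illustration -- I do not know the true ones -- but they show how weighting explanations by their frequencies, rather than counting each one equally, transforms the odds. A sketch in Python:

    # Odds that a foggy eye signals a brain tumor, weighting each candidate
    # explanation by an invented, purely illustrative frequency.
    causes_per_100k = {            # hypothetical cases per 100,000 foggy eyes
        "cataract": 50000,
        "other minor causes": 49950,
        "brain tumor": 50,
    }
    total = sum(causes_per_100k.values())
    tumors = causes_per_100k["brain tumor"]
    print("odds against a tumor: about", (total - tumors) // tumors, "to 1")
    # about 1999 to 1 -- a far cry from the 20 to 1 I offered Mort

Counting twenty explanations equally gives 20 to 1; weighting them by how often they actually occur gives odds, on these invented numbers, a hundred times more reassuring.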
Unsound Biasing Preconceptions and Theories

The pitfall of being biased by unsound prior expectations is as big as the world -- literally. Indeed, an unsound conclusion often stems from an unsound world view. For example, if you flip a penny ten times and get ten heads, you may expect a tail on the eleventh flip because you wrongly believe that there is a tendency for such things to "average out". (This is factually wrong, of course; as it is said, the coin has no memory.) Indeed, you may even "see" a tail on the eleventh flip though it really is heads, because you are so sure that it must come out that way, as a result of the unsound theory of "the law of averages". Or you may have acquired the view that Asians work harder than other groups, and you may therefore "observe" that an Asian in your office works harder than anyone else even though this is not so. Yet you must make some generalizations -- about which type of dog is particularly friendly to children, and which plants make you sneeze -- or you could not get through the day.
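If the assertion that the coin has no memory seems too pat, a simulation can check it. This sketch (mine, purely for illustration; the number of trials is arbitrary) flips a fair coin until ten heads arrive in a row and then records the next flip; over many repetitions, that next flip comes up heads about half the time:

    # Testing the "law of averages" superstition: after ten heads in a row,
    # is a tail any more likely on the next flip?
    import random
    random.seed(42)     # fixed seed so the illustration is reproducible

    def flip_after_ten_heads():
        """Flip a fair coin until ten heads arrive in a row; return the next flip."""
        streak = 0
        while streak < 10:
            streak = streak + 1 if random.random() < 0.5 else 0
        return random.random() < 0.5      # True means heads

    trials = 2000
    heads = sum(flip_after_ten_heads() for _ in range(trials))
    print("heads on the flip after a ten-head run:", heads / trials)
    # close to 0.5: the streak does not make a tail any more likely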
Biasing Hopes: The Desire to Believe

Too often we fool ourselves because we want to believe, rather than believing on the basis of evidence and sound arguments. We say "I'll do it tomorrow," and allow ourselves to be convinced, though we put it off again and again with the same argument; this sort of thinking is a staple for all of us. Our capacity to fool ourselves comes home to me every time I tiptoe so as not to wake someone up. I can feel myself closing my ear passages to make it less likely that I will hear the sounds that I make. In other words, I have an almost built-in mechanism to convince myself that I am making less noise than I really am.

FALLACIES IN THE WAY WE USE LANGUAGE

Philosophers have known for centuries how easy it is for others -- and for philosophers, too, or even especially philosophers -- to think badly due to pitfalls that are hidden in language. Indeed, the entire body of philosophical pseudo-knowledge called "metaphysics" is a collection of linguistic fallacies. Some of these pitfalls are specific to certain languages, but the most important of them are the result of speaking any language.

The root of linguistic fallacies is confusion between words and things. People often assume that because there is a word for something, the subject of the word is not just an illusion; this is the characteristic flaw of metaphysics. Just because philosophers have been talking about "the Good" for millennia does not imply that there is a meaningful entity corresponding to that term. (Interestingly, in Hebrew the same word refers to "word" and "thing".)

Consider the confusion created by the great biologist Konrad Lorenz by the use of such terms as "married love" for some behavior of geese. An unwary reader then asks (as one reviewer of a Lorenz book did): "What is married love?" and, after associating a set of characteristics with the term -- for example, a "commitment" to each other -- goes on to say such things as "Would it not be nice if people could have the same commitment toward each other as geese do, instead of being fickle?" (You laugh? People make such errors all the time.) (Stephen Jay Gould, review of Konrad Lorenz by Alec Nisbett, New York Times Book Review, Feb 27, 1979.)

It helps if, instead of asking what a word "means", you ask what people mean when they use a word. It may be that there is no agreement on what they mean; if so, the word is a hopeless mass of confusion, as is often the case in metaphysical discussions. A way to work toward agreement among people about a word is to forge an "operational definition", or "working definition", which contains the operations to be used in identifying or measuring the phenomenon under discussion. Einstein's great breakthrough with the special theory of relativity was the substitution of an operational definition of time (time is what you read on a clock) for a definition in terms of its supposed properties. Just because a writer's language seems to you to be nothing but a mass of confusion, however, to others it may convey useful meaning -- perhaps as poetry does, rather than as physics does. This may be the situation with some theology (though some other theology may be simply muddle).

No one can be considered safe from such confusions -- not even the great Newton. The basis of Einstein's achievement was demonstrating that a definition of time with respect to its supposed properties -- an idea that had been at the heart of Newton's thinking -- was shot full of confusion. Einstein's next step was to replace the ideas of absolute time and absolute space with definitions based on the operations used to measure time -- time is simply what one measures on a clock, and nothing more. Even earlier, Ernst Mach -- from whose writings Einstein learned much in his youth -- wrote about this "illusory notion" of absolute time, saying "It would appear as though Newton...still stood under the influence of the medieval philosophy...It is an idle metaphysical conception". ("Newton's Views on Time, Space, and Motion", from The Science of Mechanics, Chicago: Open Court, 1902, excerpted in Arthur Danto and Sidney Morgenbesser, Philosophy of Science, Cleveland: Meridian, 1960, pp. 337-338.) If Newton could fall into such a linguistic trap, think how easy it is for the rest of us to be entrapped by language.

Perhaps the most troublesome linguistic fallacies stem from the various forms of the verb for existence, "is". Especially in an ambiguous discussion, the term "is" (and its related terms such as "are" and "be") is (sic) one of the greatest sources of confusion in English. This verb has two very different uses -- first, to indicate equality (as in "A car is an automobile" and "two and two are four"), and second, to serve as a joining word (a "copula", as in "Jack is skinny" and "Sereno is a fool"). People think that by understanding the "is" they learn about the world. I once met a person who intended to write a book about "What Is a Human?" It is unlikely that anything except confusion would come out of that endeavor. An example: People ask "Is a foetus a person?" That is, they first decide whether a foetus "is" or "is not" a person, and then deduce from that definition a conclusion about what is to be done.
If they answer "yes", they then conclude that "Destroying a foetus is murder"; if they answer "no", they conclude that the pregnant woman is not precluded from destroying it. Everything here hinges on the word "is". This is a typical example of why, in my view, this is (sic) the worst word in the English language -- but indispensable. (Hebrew lacks such a word, but in the vernacular people use a substitute for it anyway.) Perhaps the law requires such categorizations as whether a foetus "is" a person. But apart from the law, I seldom find this usage necessary (though it is [sic] often convenient), and usually it is (sic) confusing when one is talking of morals. The word "is" is at the root of the grave paradoxes in logic that required unraveling by the pathbreaking discoveries of Bertrand Russell and Alfred North Whitehead. It is a fascinating and illuminating, though difficult, exercise to try to speak or write without using any version of "is", or related terms such as "exist". The result is a purified English which its inventor, David Bourland, Jr., calls "E-prime".7
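For the curious, the discipline of E-prime is easy to mechanize, at least crudely. The sketch below is my own illustration (the word list is deliberately incomplete); it merely flags the forms of "to be" in a sentence, which is the first step in rewriting it:

    # A crude E-prime checker: flag every form of the verb "to be".
    import re

    BE_FORMS = {"is", "are", "am", "was", "were", "be", "been", "being"}

    def flag_be_forms(text):
        """Return the forms of 'to be' that appear in the text."""
        words = re.findall(r"[a-z']+", text.lower())
        return [w for w in words if w in BE_FORMS]

    print(flag_be_forms("A foetus is a person; the question was settled."))
    # prints ['is', 'was']

Rewriting a flagged sentence without "is" -- say, "The law treats a foetus as a person" -- forces the hidden claim into the open.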
There are a zillion ways in which language can purposely be used to obscure meaning, all of them employed on occasion by politicians and advertisers (though I do not mean to suggest that politicians and advertisers are dishonest as a class; I believe them to be no less honest than people in other professions). Someone joked that advertising is the only profession to have invented an entirely new linguistic device, the "indefinite comparative". I see this device now on the dishwashing liquid bottle. "More Real Lemon Juice", says the label. More than what? Advertising can also claim to have removed one part of our anatomy, the armpit, and replaced it with another, the underarm. Quite an achievement, and just with the simple technique of euphemism.

The power of language to mislead us derives in considerable part from its nature as a communicator of abstractions through easy-to-absorb and often emotive symbols. In just three words, "Peace, Bread, and Jobs" conveys a great load of meaning to many people, enough to move people's minds and bodies. Effort and precise skill are necessary to elucidate what is meant by such slogans. But without such elucidation any reaction is just unthinking herd action.

Grand statements about any nation or culture are almost surely wrong. And from-the-hip analysis of national character by an unlearned layperson is particularly obnoxious. Nevertheless, I hazard this excursion into such statements: Numerical slogans are peculiarly prominent in contemporary Chinese culture, perhaps as a mnemonic device. (Was it so in earlier years, too?) Any book about daily life in China is replete with allusions to (for example, in just two pages of a book about the Cultural Revolution: Gao Yuan, Born Red, Stanford U. P., 1987, pp. 82-83) the "Sixteen Points", the "Five Red Categories", the "Seven Black Categories", and the "Four Olds -- the old ideas, culture, customs, and habits of the exploiting classes". Elsewhere in the book: "the army was undertaking a Three Supports and Two Militaries campaign" (p. 200), Liu Shaoqi was "pushing the capitalistic Three Frees and One Contract" (p. 175), the Central Committee's letters to the Soviet Central Committee contained the "three peaces and one less" doctrine (p. 144), and much more. There is a category of treasured possessions known as the "Five Wheels" -- bicycle, watch, tape recorder, and so on. The menu of any Chinese restaurant offers numerical delicacies. Can it be that the process of categorization, in which one of the elements takes on the evocative power of the others and all together have more evocative power than the elements when separated, is connected to the tendency in China for mass movements to arise so quickly and violently?

JUDGING BY THE SOURCE, AND OTHER FALLACIOUS IRRELEVANCIES

If you can ask "What's that got to do with the subject at hand?" or "So what?" (as Rudolf Flesch recommends we do frequently) and the answer is not satisfactory, then you have identified one of the irrelevancy fallacies, of which there are many interesting forms. If you routinely apply this test to the arguments you hear -- and perhaps to your own, too -- you will be shocked how often you find irrelevancies. There are many interesting irrelevancies that do not fall into the categories below, for example the advice to young lawyers: If you have the facts on your side, pound the facts. If you have the law on your side, pound the law. If you have neither going for you, pound the table.

Ad Hominem Attack

Rabble-rousers throughout history have surely attacked an argument by attacking the source of the argument instead. For twenty years biologists such as Paul Ehrlich and Garrett Hardin have attacked my studies of the economics of population by pointing out that once upon a time I was in the mail-order business, and my book on the subject is still selling well (fifth edition coming up!). For those people to whom working in business is a sign of moral decadence -- even for two years, near the beginning of one's work life -- the implication is that one need not believe anything said by a person guilty of such a character flaw. They also attack my intellectual credentials by pointing out that I do not have a PhD in economics, implying that without a PhD in a given field one cannot do credible work. And they complete the triangle by attacking my motives, repeating from an introduction to one of my books that I was depressed for a period; they imply that my showing that the material life of humanity has been improving is simply a device to improve my own morale. If this is not enough to impugn my motives, they suggest that I applaud more people being born because this is good for my supposed constituency in the business community (because I teach business, among other subjects, and have a PhD in that field). They have not yet employed the standard ad hominem charges -- that I have an illegitimate child whom I refuse to acknowledge, and that I sneak money from my employer's till or the public till. Nor have they used on me the ad hominem device which they have used on another economist who did some similar research on the subject -- calling him a Catholic, and worse, a convert -- only because I am not Catholic. This would be funnier if such attacks were not successful. Perhaps the mechanism that makes ad hominem attacks successful is that the hearer does not want to associate her/himself with a given set of views lest it seem, or feel, as if one were associating with a disreputable person.

The varieties of ad hominem attack are many and vivid. Out-and-out insult is basic. Also the "you are one, too" argument. Also the charge of inconsistency. Discrediting the speaker by claiming that she/he has a vested interest in the argument is a staple of journalism, especially in Washington, D.C., where I now live.
(A typical example: A biologist who has consulted to the chemical industry or has had a research grant from a chemical firm is dismissed as unable to render an objective opinion about the effects of a toxic substance, especially if the reporter considers her/himself to be an "environmentalist".)

Showing that the proposer of an argument is not credible is not always unreasonable, however. When the evidence upon which the argument depends is unclear or mostly not available, your judgment must depend upon a complex of reasoning outside of simple deductive logic. It is reasonable to place more trust in arguers who have mostly been right in the past than in arguers who have mostly been wrong. That is, the record of a person's forecasts should be taken into account. It would seem that the track record of those who have forecast doom with respect to resources and the environment over the past two decades should cast doubt on those forecasts as they repeat them now and in the future; just about all their forecasts have been falsified by resources becoming more available and the air and water becoming cleaner rather than dirtier in the U.S. (Curiously, however, this record of failure has not led to a loss of faith in these persons by the environmental organizations or by journalists.)

Argument to Emotion

I remember as a boy of six, in about 1938, hearing a speech by Hitler broadcast on U.S. radio. I could not understand a word of German, but the frenzy with which Hitler delivered the speech, and the frenzy with which his audience in Germany cheered it, were unmistakable. I cannot say for sure that his speech did not contain logical arguments, but it was obvious even to a little boy -- frightened both by the sound of the speech and by the reaction of those in the group around me who understood German -- that emotion was at the heart of Hitler's appeal in that speech.

A prudent speaker enhances an argument with extra-logical elements such as vivid examples and emotional references. (Indeed, that is what I have tried to do in the paragraph above in order to hold your interest and to make the point memorable.) This is not a dirty trick unless the argument is unsound and the emotive elements are used to paper over the lack of logic.

Argument to Authority

The argument to authority is the other side of the coin from the argument "to the person" (argumentum ad hominem). An argument to authority asks you to believe the argument because the proponent is an expert. Again, though this is considered a logical fallacy, there are situations when the authority of the speaker is a reasonable consideration in your decision about whether or not to credit the argument. Even within science, a person's record as a careful researcher and interpreter of evidence should sometimes be taken into account in judging the validity of a study. Lack of space in a publication sometimes makes it impossible to describe in full detail how data are gathered. If a piece of research reaches a conclusion that you find difficult to believe because it runs contrary to other studies, you are more likely to believe that the research is sound, rather than the result of a procedural error, if it was done by someone whose competence you have reason to trust.

Appeal to Consensus

"I have yet to meet anyone familiar with the situation who thinks India will be self-sufficient in food by 1971, if ever". (The Population Bomb, 1968, pp. 40-41.)
This statement by biologist Ehrlich convinced many people that the Indian sub-continent was a "basket case" to be considered beyond saving by itself or by others. In fact, there were at the time many agronomists and agricultural economists who felt that India had a very bright agricultural future if its farmers were unhindered by government regulation. So the appeal to community belief was not even factually sound, as is frequently the case with arguments based upon community belief. But even if there is great consensus on a belief, and even if the consensus includes those persons in the best positions to judge the matter, the consensus may be wrong. And if the argument is not judged apart from the appeal to the general belief, the community may lose a sound idea. "Fifty million Frenchmen can't be wrong." But conventional beliefs are frequently wrong, perhaps more often than most members of the public think. The presumption that the conventional belief is correct is too strong, in my experience. This is certainly the case with economic issues. The views stated in newspaper editorials about tariffs, trade gaps, budget deficits, currency controls, rent control, subsidies, and the like are more often wrong than right, as judged by the overwhelming and long-standing consensus of the economics profession going back to Adam Smith. (Is there a damaging contradiction in what I just said?)

Appeal to Force

One of the most famous lines from the movies comes from the mouth of a violent Mafia capo: "I'm going to make you an offer you can't refuse". Force is not exactly an argument, but it is included in the classic list of fallacies because it can win an argument. You know how it goes: The Mafioso implies that if you refuse the offer you will be dead. More veiled threats are a more "civilized" form of this method. Censorship is a related device.

SUMMARY

FOOTNOTES

1 The following section closely follows David Kelley, The Art of Reasoning, Chapter 6, pp. 123-132, including some examples.

2 This section refers to the availability and representativeness heuristics of Kahneman and Tversky, as discussed in Nisbett and Ross, who review in a comprehensive and interesting fashion the evidence on reliability in human inference.

3 [Cite Kahneman and Tversky on availability.]

4 Leonard Evans, Michael C. Frick, and Richard C. Schwing, "Is It Safer to Fly or Drive? -- A Problem in Risk Communication", xerox, no date. William Freudenburg was kind enough to provide me with this information.

5 Discussion of Evans et al. in Malcolm Gladwell, "Numbers Don't Lie -- But They Can Mislead", Washington Post, August 13, 1990, p. A3.

6 "On May 1, 1937, a complete Panzer division, with hundreds of aircraft flying over it, marched through Berlin. The impression produced on the spectators, and first and foremost on M. Francois-Poncet, the French ambassador, and on our military attaches, was of a force that nothing could stop -- except a similar force." Charles de Gaulle, The Complete War Memoirs, New York: Da Capo, 1984, p. 26.

7 Albert Ellis and Robert A. Harper changed to E-prime when they revised their excellent self-help book, A New Guide to Rational Living (1961; 1975), and they assure us that it clarified their thinking greatly. My knowledge of E-prime comes from the introduction to their book.

With respect to the observations (elements): a) Inclusion: How far back should you collect data, if you are relying on historical experience? How appropriate is the set of data (the universe) that you are working with?
Which theories are you relying upon in making such decisions? b) Exclusion: Which observations are you excluding as "outliers" because they do not fit your theoretical preconceptions? Which observations are you excluding because they are not easily accessible -- looking for the watch under the streetlight because that is where the light is, rather than because you lost the watch there? c) Weighting: How much weight has been given to observations because of their vividness and clarity, or because of the authority behind them? Which predisposing abstractions have influenced the weighting of the observations?

With respect to the variables (dimensions), and to the other aspects of the process, the same questions of inclusion, exclusion, and weighting arise. In summary outline form:

Elements (observations)
  Inclusion: length of time series and appropriateness of the universe; representative sampling; predisposing abstractions (pda)
  Exclusion: excluding outliers because they do not fit preconceptions; the lamp-and-watch error; pda
  Weighting: vividness, clarity, known-ness, and other availability; authority; pda

Dimensions (variables)
  Inclusion: the important rather than the unimportant; the relevant (e.g., ad hominem); pda
  Exclusion: the lamp-and-watch error; political acceptability; pda
  Weighting: political acceptability; authority; pda

Logic
  Inclusion: premises
  Exclusion: premises, arguments
  Connectedness and validity

Mechanisms
  Inclusion: pda
  Exclusion: correct understanding