Many discussions posted to the Internet during 1995-98 centered on competing claims as to what constitutes the scientific method, what constitutes the valid use of probability, and what constitutes rationality. The errors of perfectionism and criticism-unto-destruction lead straight to dead-end nihilism. Sections include:
- Karl Popper’s Falsifiability Principle is False
- Probability and the Inability to Determine if a Coin is Fair
- Truth and the Inability to Know What is Rational Until After the Fact
- Karl Popper’s Philosophy of Science is Irrational
Karl Popper’s Falsifiability Principle is False
Many technicists conflate science with rationality. They think that rationality consists in openness to criticism, when it is science that consists in openness to criticism. Rationality, customarily defined as giving reasons for actions or for holding beliefs, is truly the sophist servant of desire. Rationality comes closer to being scientific when the desire is to aim at objective truth, but reasons can be attached to many other aims besides this one, including many areas not open to testability yet still open to criticism. I would categorize science as a subset of rationality.
Many scientists think that statements must be falsifiable or the statements are not “scientific.” Recently, there has been some devastating criticism of Karl Popper’s ‘Falsifiability Principle’. It is sometimes useful to know that scientists are standing on quicksand when they invoke ‘falsifiability’ criteria. First, some extended quotes from Elliott Sober’s Philosophy of Biology (1993), then some of my commentary. At the end, David Stove’s criticism will be included.
[Karl] Popper’s criterion of falsifiability requires that we be able to single out a special class of sentences and call them observation sentences. A proposition is then said to be falsifiable precisely when it is related to observation sentences in a special way: Proposition P is falsifiable if and only if P deductively implies at least one observation sentence O. One problem with Popper’s proposal is that it requires that the distinction between observation statements and other statements be made precise. To check the statement ‘The chicken is dead,’ you must know what a chicken is and what death is. This problem is sometimes expressed by saying that observation is theory laden.
Every claim that people make about what they observe depends for its justification on their possessing prior information. Popper addresses this problem by saying that what one regards as an observation statement is a matter of convention. But this solution will hardly help one tell, in a problematic case, whether a statement is falsifiable. For Popper’s criterion to have some bite, there must be a nonarbitrary way to distinguish observation sentences from the rest. To date, no one has managed to do this in a satisfactory manner.
The problems with Popper’s falsifiability criterion go deeper. First, there is the so-called tacking problem. Suppose that some proposition S is falsifiable. It immediately follows that the conjunction of S and any other proposition N is falsifiable as well. That is, if S makes predictions that can be checked observationally, so does the conjunction S & N. This is an embarrassment to Popper’s proposal since he wanted that proposal to separate nonscientific propositions N from properly scientific propositions S. Presumably, if N is not scientifically respectable, neither is S & N. The falsifiability criterion does not obey this plausible requirement.
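The tacking problem can be made concrete with a brute-force truth-table check. The following sketch is my illustration, not Sober’s; it treats S, N, and O as bare propositional variables and simply verifies that whenever S entails O, the conjunction S & N entails O as well:

```python
# Brute-force truth-table sketch of the "tacking problem" (my illustration,
# not Sober's). S, N, O are propositional variables; we check that in every
# row where "S implies O" holds, "(S and N) implies O" holds too.
from itertools import product

tacking_preserved = all(
    (not (S and N)) or O                      # (S & N) -> O
    for S, N, O in product([True, False], repeat=3)
    if (not S) or O                           # keep only rows where S -> O holds
)
print(tacking_preserved)  # True: tacking any N onto a falsifiable S keeps it "falsifiable"
```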
Another problem with Popper’s proposal is that it has peculiar implications about the relation of a proposition to its negation. Consider a statement of the form ‘All As are B.’ Popper judges this statement falsifiable since it would be falsified by observing a single A that fails to be B. But now consider the negation of the generalization - the statement that says ‘There exists an object that is both A and not-B.’ This statement is not falsifiable; no single observed object or finite collection of them can falsify this existence claim. So the generalization is falsifiable, though its negation is not. But this is very odd - presumably, if a statement is ‘scientific,’ so is its negation. This suggests that falsifiability is not a good criterion for being scientific.
Still another problem with Popper’s proposal is that most theoretical statements in science do not, all by themselves, make predictions about what can be checked observationally. Theories make testable predictions only when they are conjoined with auxiliary assumptions. Typically, T does not deductively imply O; rather, it is T & A that deductively implies O (here, T is a theory, O is an observation statement, and A is a set of auxiliary assumptions). This idea is sometimes called Duhem’s Thesis...
The final problem with Popper’s proposal is that it entails that probability statements in science are unfalsifiable. Consider the statement that a coin is fair - that its probability of landing heads when tossed is 0.5. ...It is possible for a fair coin to land heads on all ten tosses, to land heads on nine and tails on one, and so on. Probability statements are not falsifiable in Popper’s sense. In fact, something like the Likelihood Principle is what Popper himself adopted when he recognized that probability statements are not falsifiable.
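As a quick numerical aside (mine, not Sober’s): the probability that a fair coin lands heads on all ten tosses is small but strictly positive, which is exactly why the observation cannot deductively refute the fairness hypothesis.

```python
# Sketch (mine, not Sober's): any particular outcome of n tosses of a fair coin
# has probability (1/2)**n -- small, but never zero for finite n -- so no finite
# run of tosses deductively falsifies the statement "this coin is fair".
p = 0.5
print(p ** 10)                  # 0.0009765625, about 1 in 1024, for ten heads in a row
for n in (10, 50, 100):
    print(n, p ** n)            # improbable, yet strictly positive
```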
Popper held that there is an asymmetry between falsification and verification. A vestige of Popper’s asymmetry can be restored if we include the premise that the auxiliary assumptions (A) are true:
Falsification (deductively valid):
If T & A, then O
A
not-O
____________
Therefore, not-T

Verification (deductively invalid):
If T & A, then O
A
O
____________
Therefore, T
Although we now seem to have a difference between verification and falsification, it is important to notice that the argument falsifying T requires that we be able to assert that the auxiliary assumptions A are true. Auxiliary assumptions are often highly theoretical; if we can’t verify A, we will not be able to falsify T by using the deductively valid argument form just described. In the last pair of displayed arguments, one is deductively valid and the other is not. However, this does nothing to support Popper’s asymmetry thesis. In fact, we should draw precisely the opposite conclusion: The falsification argument suggests that if we cannot verify theoretical statements, then we cannot falsify them either.
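For concreteness, the validity and invalidity of the two displayed forms can be checked mechanically; the following sketch is my own illustration, not Sober’s:

```python
# Truth-table sketch (my illustration, not Sober's) of the two displayed argument forms.
# Premises: (T & A) -> O, plus A, plus either not-O (falsification) or O (verification).
from itertools import product

def entails(premises, conclusion):
    """True if every assignment satisfying all premises also satisfies the conclusion."""
    return all(conclusion(T, A, O)
               for T, A, O in product([True, False], repeat=3)
               if all(p(T, A, O) for p in premises))

impl = lambda T, A, O: (not (T and A)) or O        # (T & A) -> O

falsification = entails([impl, lambda T, A, O: A, lambda T, A, O: not O],
                        lambda T, A, O: not T)
verification  = entails([impl, lambda T, A, O: A, lambda T, A, O: O],
                        lambda T, A, O: T)
print(falsification)   # True: the falsification form is deductively valid
print(verification)    # False: the verification form is deductively invalid
```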
One problem with Popper’s asymmetry thesis is that it equates what can be known with what can be deduced validly from observation statements. However, science often makes use of nondeductive argumentation, in which the conclusion is said to be rendered plausible or to be well supported by the premises. In such arguments, the premises do not absolutely guarantee that the conclusion must be true.
On the face of it, vulnerability appears to be a defect, not a virtue. Why is it desirable that the hypotheses we believe should be refutable? Wouldn’t science be more secure if it were invulnerable to empirical disconfirmation? The Likelihood Principle helps answer these questions. A consequence of this principle is that if O favors H1 over H2, then not-O would favor H2 over H1. This is because if P(O/H1) > P(O/H2), then P(not-O/H1) < P(not-O/H2). We want our beliefs to be supported by observational evidence. For this to be possible, they must be vulnerable; there must be possible observations that would count against them. This requirement is not a vestige of the discredited falsifiability criterion. It flows from the Likelihood Principle itself.
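Before turning to my commentary, here is a minimal numerical sketch of the consequence just stated; the bias values are my own, not Sober’s.

```python
# Minimal sketch (bias values are mine, not Sober's) of the Likelihood Principle's
# consequence: if P(O|H1) > P(O|H2), then P(not-O|H1) < P(not-O|H2).
# Let O = "the coin lands heads", H1 = "bias 0.7 toward heads", H2 = "bias 0.4".
p_O_H1, p_O_H2 = 0.7, 0.4

print(p_O_H1 > p_O_H2)                    # True: O favors H1 over H2
print((1 - p_O_H1) < (1 - p_O_H2))        # True: not-O favors H2 over H1
```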
My commentary on Sober’s quotes:
His points about there being no “nonarbitrary way to distinguish observation sentences from the rest” and “observation is theory laden” deal with semantic and logical limitations inherent in our biological nature. I think that deductive logic and mathematics are wonderful tools for producing truth. The truth they produce, however, is bounded within the axiomatic system chosen.
The point about probability statements not being falsifiable addresses induction problems. Any time probability is used anywhere, for anything pertaining to future events, it is an announcement of ignorance of what will happen next, meaning an ignorance of causes. Probability statements are maps of objective truth, generalizations open to interpretations from whatever theory you subscribe to at the moment that satisfies your needs.
To me, verifiability is every bit as important as falsifiability. I agree that, where possible, we want our beliefs to be supported by vulnerable observational evidence. Experimental evidence, though, begins with theory, which is wide open, and ends with relative boundaries being drawn around the events and objects that constitute what the experiment is and what the evidence is. The only real event and object that can be described in an absolute sense is the whole knowable universe. Since we can’t do this, we fall back on utility: we describe each event and object as best fits our needs of the moment. In science, we try to get as much consensual agreement on these relative boundaries as possible, so that our experiments can be repeated. Obviously, no experiment or event is ever exactly duplicated; there are only degrees of similitude. All events are historical, unique, and irreversible, but many scientifically agreed-on events are similar enough to suit our needs of the moment. Verifiability occurs when we bring scientific knowledge to bear on our life through technology, or the natural refinement of words into actuality.
Probability and the Inability to Determine if a Coin is Fair
The use of probability statements in discussion is often an area of misunderstanding. I recently had a long discussion with some scientists and mathematicians on whether a coin is fair or not. They never did bring themselves to believe my position that a fair coin can come up tails forever. Probability statements are unfalsifiable, as noted above. Below are some more quotes from Elliott Sober’s Philosophy of Biology (Ch. 3), followed by some of my commentary:
The actual frequency interpretation of probability is an objective interpretation; it interprets probability in terms of how often an event actually happens in some population of events. There is an alternative interpretation of probability that is subjective in character. We can talk about how much certainty or confidence we should have that a given proposition is true. Not only does this concept describe something psychological, it also is normative in its force. It describes what our degree of belief ought to be. A third interpretation of probability says that an event’s probability is its hypothetical relative frequency. A probability value of x does not entail an actual frequency equal to x, but it does entail that the frequency in an ever-lengthening hypothetical sequence of tosses will converge on the value x. Both the actual frequency and the degree of belief interpretations of probability say that we can define probability in terms of something else. However, closer attention to the hypothetical relative frequency interpretation of probability shows that this interpretation offers no such clarification. For, if it is not overstated, this interpretation is actually circular.
With an infinite number of tosses, each specific sequence has a zero probability of occurring; yet, one of them will actually occur. For this reason, we cannot equate a probability of zero with the idea of impossibility, nor a probability of one with the idea of necessity; this is why a fair coin won’t necessarily converge on a relative frequency of 50 percent heads. If the frequency of heads does not have to converge on the coin’s true probability of landing heads, how are these two concepts related? The Law of Large Numbers... provides the answer: P(the coin lands heads/the coin is tossed) = 0.5 if and only if P(the frequency of heads = 0.5 ± e/the coin is tossed n times) approaches 1 as n goes to infinity. Here, e is any small number you care to name. The probability of coming within e of 0.5 goes up as the number of tosses increases. Notice that the probability concept appears on both sides of this if-and-only-if statement. The hypothetical relative frequency interpretation of probability is not really an interpretation at all, if an interpretation must offer a noncircular account of how probability statements should be understood.
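A short simulation sketch (the code and parameters are mine, not Sober’s) shows both halves of the point: the relative frequency of heads tends to settle within e of 0.5 as the number of tosses grows, yet the particular sequence actually observed becomes astronomically improbable.

```python
# Simulation sketch (mine, not Sober's) of the Law of Large Numbers for a fair coin:
# as n grows, the relative frequency of heads tends to land within e of 0.5,
# although no finite run guarantees it.
import random

random.seed(0)                      # fixed seed so the run is repeatable
e = 0.01
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    freq = heads / n
    print(n, round(freq, 4), abs(freq - 0.5) < e)

# Meanwhile the probability of the exact sequence just observed is (1/2)**n,
# which shrinks toward zero -- yet some sequence certainly occurred.
```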
The last interpretation of probability I will discuss has enjoyed considerable popularity, even though it suffers from a similar defect. This is the propensity interpretation of probability. Propensities are probabilistic dispositions, so I’ll begin by examining the idea of a dispositional property. Dispositional properties are named by words that have ‘-ible’ suffixes. Solubility, for example, is a disposition. It can be defined as follows: X is soluble if and only if, if X were immersed under normal conditions, then X would dissolve. This definition says that an object is soluble precisely when a particular if/then statement is true of it. According to the definition, soluble substances are not simply ones that probably dissolve when immersed in the right way - they are substances that must dissolve when immersed. The suggestion is that probabilistic if/then statements are true because objects possess a special sort of dispositional property, called a propensity. The propensity interpretation stresses an analogy between deterministic dispositions and probabilistic propensities. There are two ways to find out if an object is soluble. The most obvious way is to immerse it in water and see if it dissolves. In principle, we could examine the physical structure of a sugar lump and find out that it is water soluble without ever having to dissolve it in water. Thus, a dispositional property has an associated behavior and a physical basis. We can discover whether an object has a given dispositional property by exploring either of these.
The same is true of probabilistic propensities. We can discover if a coin is ‘fair’ in one of two ways. We can toss it some number of times and gain evidence that is relevant. Or, we can examine the coin’s physical structure and find out if it is evenly balanced. In other words, the probabilistic propensities of an object can be investigated by attending to its behavior and also to its physical structure. The if/then statement (‘if it were immersed, then it would dissolve’) describes a relation between cause and effect. However, there are many probability statements that do not describe any such causal relation. Only sometimes does a conditional probability of the form P(A/B) describe the causal tendency of B to produce A. The more fundamental problem, however, is that ‘propensity’ seems to be little more than a name for the probability concept we are trying to elucidate. We have no way to understand a coin’s ‘propensity to land heads’ unless we already know what it means to assign it a probability of landing that way. An interpretation of probability, to be worthy of the name, should explain the probability concept in terms that we can understand even if we do not already understand what probability is. The propensity interpretation fails to do this.
We now face something of a dilemma. The two coherent interpretations of probability mentioned so far are actual relative frequency and subjective degree of belief. If we think that probability concepts in science describe objective facts about nature that are not interpretable as actual frequencies, we seem to be in trouble. If we reject the actual frequency interpretation, what could it mean to say that a coin has an objective probability of landing heads of 0.5? One possible solution to this dilemma is to deny that probabilities are objective. This is the idea that Darwin expresses in passing in On the Origin of Species (Harvard, 1964, originally published 1859, p. 131) when he explains what he means by saying that novel variants arise ‘by chance.’ ‘This,’ he says, ‘of course, is a wholly incorrect expression, but it serves to acknowledge plainly our ignorance of the cause of each particular variation.’ One might take the view that probability talk is always simply a way to describe our ignorance; it describes the degree of belief we have in the face of incomplete information. According to this idea, we talk about what probably will happen only because we do not have enough information to predict what certainly will occur.
According to quantum mechanics, chance is an objective feature of natural systems. Even if we knew everything relevant, we still could not predict with certainty the future behavior of the systems described in that physical theory. Perhaps, as Darwin said, we should interpret the probabilistic concepts in evolutionary theory as expressions of ignorance and nothing else. Perhaps probability describes objective features of the world but cannot be defined noncircularly. This might be called an objectivist no-theory theory of probability.
The idea that probability can be defined noncircularly is no more plausible than the idea that a term in a scientific theory can be defined in purely observational language. However, there is another reason to use probabilities. This pertains to the goal of capturing significant generalizations. R. Levins (“The strategy of model building in population biology” American Scientist 54, 1966) proposes an analogy between biological models and maps. One of his points is that a good map will not depict every object in the mapped terrain. The welter of detail provided by a complete map (should such a thing be possible) would obscure whatever patterns we might wish to make salient. A good map depicts some objects but not others.
My commentary on Sober’s quotes:
When Sober says: “With an infinite number of tosses, each specific sequence has a zero probability of occurring; yet, one of them will actually occur. For this reason, we cannot equate a probability of zero with the idea of impossibility...” he is referring to what I call the discrete at rest actuality of an event versus the movement towards an end by mathematical definition.
The Law of Large Numbers is invoked here to justify the flat-out arbitrary definition of a fair coin. In essence, by specifying an e, we are simply admitting that we are only trying to satisfy each other, by consent, that a coin is fair. We are not trying to reach the truth of whether a coin is fair or not, because a fair coin could come up tails forever; we are just being practical, making a contract so to speak, by agreeing on subjective definitions.
Note that Darwin did not deify randomness as a cause of evolutionary changes; he correctly stated that ‘chance’ simply meant an epistemological problem: objective reality is too fine-grained at the causal level for us to discover all the causes. Evolution does not occur according to a randomness force. There is no ontological randomness; such a concept would make the universe unintelligible. There is only epistemological randomness, a.k.a. ignorance.
It seems hard for some scientists to believe, but one fair coin, tossed forever, could come up tails every time. Or it could come up heads every time, forever. And still be fair. This is one of the reasons probability statements are unfalsifiable, and are used in science not to prove things, but to convert our ignorance of cause-and-effect into something practical, something useful if used with care and caution. A fair coin can come up either heads or tails with equal probability. But there is absolutely no way to determine whether a coin is fair or not in actuality, because a fair coin could come up tails every time to infinity. Every coin toss is a fresh, unique event, standing on its own, regardless of what has happened previously. Every time, it can come up heads or tails with equal probability. There is no force that, at some point in an infinite sequence of events, will eliminate the probability of coming up tails and force it to come up heads. It is unfalsifiable to hypothesize that a coin is fair.
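A small sketch (mine) of the standard independence model behind that claim: every finite run of straight tails has positive probability, and a long run of tails leaves the probability of heads on the next toss exactly where it was.

```python
# Sketch (mine) of the standard independence model of a fair coin:
# a run of tails never changes the next toss, and every finite run of
# straight tails has strictly positive probability.
n = 100
p_n_tails = 0.5 ** n                        # probability of n tails in a row
p_n_tails_then_heads = 0.5 ** (n + 1)       # probability of n tails followed by heads

print(p_n_tails)                            # tiny, but not zero
print(p_n_tails_then_heads / p_n_tails)     # 0.5: prior tails do not "force" heads

for m in (10, 100, 1_000):
    print(m, 0.5 ** m)                      # positive for every finite m
```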
When ‘in the limit’ is used, it means that the numbers being referred to are no longer at rest, discrete. They are in motion. You can see that points of zero length do not ‘join’ to form a line; they move together to form a line. The book about mathematicians being forced to move numbers in order to deal with infinity or to ever reach any ‘limit’ is by N. Ya. Vilenkin, transl. by Abe Shenitzer, In Search of Infinity (Birkhäuser, 1995).
“In Ch. 3 Vilenkin takes us through the origins of numbers and figures and shows us how the theory of the infinite attempts to reconcile the graphical with the numerical. He also explains how science has moved from the definition of a curve as a set of infinitesimal points - as in differential calculus - to the path of a continuously moving point with no intervening gaps.”
In quantum theory, the epistemology and ontology are mixed up. In my opinion, it is improper to assume that we could ever know everything of a causal nature at the finest grained structure of objective reality because we cannot separate the observer from the observed. If we could, then of course we could predict with near certainty the next instant, although changes at the boundaries of the knowable universe prevent absolute certainty. We would revert to subjective probabilities for the instants after that one. It is improper to assume objective randomness just because of this.
I like the ignorance interpretation best, which assumes, properly, a cause-and-effect objective reality, but which recognizes the epistemological problem of the inaccessible fine grain structure. It furthermore ties to the significant generalization idea, which is an aspect of the subjective psychological interpretation. This interpretation is closest to biological epistemology’s emphasis on categorizing perceptions on value prior to semantics, then using semantics at the highest level of consciousness to categorize patterns again on value. Patterns are what we are interested in.
Truth and the Inability to Know What is Rational Until After the Fact
The problems with the Falsifiability Principle are related to the general problems of the truth of assertions, as revealed by the inability to determine whether a fair coin is really fair. These topics of rationality and context, and the specific origin of the subjective degree-of-belief interpretation, are discussed by Robert Nozick in The Nature of Rationality (1993), Chs. 3 & 4. I have added my commentary after these quotes.
But why is it necessary to believe any statement or proposition? Why not simply assign probabilities to each and every statement without definitely believing any one and, in choice situations, act upon these probabilities by (perhaps) maximizing expected utility? Such is the position of radical Bayesianism, and it has some appeal. The term ‘radical Bayesianism’ is Richard Jeffrey’s. See his Probability and the Art of Judgment (1992). Moreover, the cost of radical Bayesianism is not so apparent. According to it, the scientist, or the institution of science at a time, does not accept or believe theories or lawlike statements; rather, Carnap tells us, science assigns these statements particular degrees of probability.
Not only is belief tied to context, so is rationality. To term something rational is to make an evaluation: its reasons are good ones (of a certain sort), and it meets standards (of a certain sort) that it should meet. Beliefs are tied to contexts within which possibilities incompatible with them are excluded or deemed unworthy of consideration - let us call this view ‘radical contextualism.’
Philosophers have faced the task of grounding Reason, grounding what we take to be evident. Hume’s problem of induction was to find a rational argument to the conclusion that reason, or that portion of it embodied in inductive reasoning, (probably) works. Even if this problem could be solved, there would remain the question of what grounds that rational argument, that is, why we should trust any rational argument. If inductive reasoning is rational, then there is such a rational argument, the inductive one; this gets dismissed as circular. So the problem is held to be one of supporting one portion of Reason, inductive reasoning, by other portions of Reason, that is, in a noncircular way. This was the problem Descartes faced - why must self-evident propositions, basking in the natural light of reason, correspond to reality? - and it has given rise to an extensive literature on the ‘Cartesian Circle.’ Kant held that the rationalists could not show why our knowledge or intuition, our ‘reason’ in my sense here, would conform to objects, and he suggested - this was his ‘Copernican Revolution’ - that objects must conform to our knowledge, to the constitution of the faculty of our intuition.
If reason and the facts were independent factors, said Kant, then the rationalists could produce no convincing reason why the two should correspond. Why should those two independent variables be correlated? So he proposed that the (empirical) facts were not an independent variable; their dependence upon reason explains the correlation and correspondence between them. But there is a third alternative: that it is reason that is the dependent variable, shaped by the facts, and its dependence upon the facts explains the correlation and correspondence between them. It is just such an alternative that our evolutionary hypothesis presents. Reason tells us about reality because reality shapes reason, selecting for what seems ‘evident.’ Such a view, we have said, can explain only the past correlation; it cannot guarantee that the future facts will fit the present reason. Hence it does not provide a reason-independent justification of reason, and, although it grounds reason in facts independent of reason, this grounding is not accepted by us independently of our reason. Hence the account is not part of first philosophy; it is part of our current ongoing scientific view.
The rationality of acting on probability is expressed within utility theory by the Von Neumann-Morgenstern condition that if a person prefers x to y, then the person prefers that probability mixture giving the higher probability of x, among two probability mixtures that yield (only) x and y with differing probabilities. If x is preferred to y, then the mixture [px, (1-p)y] is preferred to [qx, (1-q)y] if and only if p is greater than q. This last sentence has exactly the form of a Carnapian reduction sentence and so suggests the project of implicitly defining probability in its terms [Rudolf Carnap, “Testability and Meaning,” in Philosophy of Science (1936)]. Instead of worrying over justifying why we should act on probabilities, define probabilities in terms of how we should act. This project was carried out by L.J. Savage, who laid down a set of normative (and structural) conditions on behavior, on preference among actions, sufficient to define a notion of personal probability. It was only after talking with Hilary Putnam and seeing a recent unpublished essay of his, “Pragmatism and Moral Objectivity” (forthcoming), where independently he presses the issue of why one should act upon the more probable, that I came to see this as a serious problem, not just as a finicky difficulty. And to this issue so central to the notion of instrumental rationality and the rationality of belief - why in a particular instance we should act on the most probable or believe the most probable will occur - the resources of rationality have thus far not provided a satisfactory answer.
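A numerical sketch of the Von Neumann-Morgenstern condition just quoted, with utility values invented by me rather than taken from Nozick: when u(x) > u(y), the mixture giving x with probability p is preferred to the one giving x with probability q exactly when p is greater than q.

```python
# Numerical sketch (utility values invented by me, not Nozick's) of the
# Von Neumann-Morgenstern condition: if x is preferred to y, the mixture
# [p x, (1-p) y] is preferred to [q x, (1-q) y] if and only if p > q.
u_x, u_y = 10.0, 4.0                    # assume x is preferred to y: u(x) > u(y)

def mixture_utility(p):
    """Expected utility of the lottery giving x with probability p, else y."""
    return p * u_x + (1 - p) * u_y

for p, q in [(0.9, 0.2), (0.6, 0.59), (0.3, 0.7)]:
    print(p, q, mixture_utility(p) > mixture_utility(q), p > q)   # last two columns always agree
```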
My comments on Nozick:
The evolutionary hypothesis concerning the development of reason, reflecting past correlation and correspondence of reason to facts, is a good one. We can also see clearly that, since objective reality is contingent and variable - that is, the new is always coming into being - reason will not necessarily apply to future events. From this macro-scale hypothesis, we can proceed down to our own human scale and see that reasons are ultimately pronounced good, and assertions are ultimately pronounced true, when they proceed from words to actuality in fact. In other words, bad reasons and untrue assertions are selected out, just as unfit organisms are. In fact, if humanity (Aristotle’s “rational animals”) blows itself up in the future, then one would assert, if one were around, that rationality itself was selected out; that is, it was really irrational all along.
The question is, how can we know in advance which reasons and which assertions will be selected for, when selection is only recognized after the fact? This is another way of putting the perennial philosophy question: from whence comes humanity’s guidance? John Donne put it long ago, “The ends crowne our workes, but thou crown’st our ends...” Whether ‘thou’ is conceived as ‘God the Father’, as ‘Necessity’, or as the modern ‘Selection’ really makes no difference; it’s the same concept.
The link between ends and reason was put this way by Ernst Mach:
The biological task of science is to provide the fully developed human individual with as perfect a means of orientating himself as possible. No other scientific ideal can be realized, and any other must be meaningless. No point of view has absolute, permanent validity. Each has importance only for some given end.
This says that, indeed, as the ancient Greeks held, reason is the sophist servant of desire. This quote of Mach’s appears to be the point of departure for Einstein’s theorizing about relativity.
Karl Popper’s Philosophy of Science is Irrational
Roger Kimball has written an essay, “Who Was David Stove?” (The New Criterion, March 1997), about an Australian philosopher who analytically ripped Popper and his epigoni to pieces.
David Stove died in 1994 but left behind some very interesting, though little-known, books. Kimball writes mostly about Popper and After: Four Modern Irrationalists, which was originally published in 1982 by Pergamon Press; a new edition is being published by Macleay Press, retitled Anything Goes: Origins of the Cult of Scientific Irrationalism. It looks like a doozy for all die-hard Popper fans. Stove also wrote Cricket Versus Republicanism (a collection of very politically incorrect essays published posthumously), Darwinian Fairytales (also published posthumously; it highlights Darwinism’s absurdities at the boundary between human genetic and cultural evolution), The Plato Cult and Other Philosophical Essays (1991) (seven essays attacking philosophical idealism), The Rationality of Induction (1986), and Probability and Hume’s Inductive Scepticism (1973).
I am quoting Kimball’s account of Stove’s critique of Popper:
The long struggle of empiricism since Bacon had yielded a straightforward but powerful conception of science. Scientific propositions were distinguished from speculative or pseudo-scientific propositions by the degree to which they were verifiable; the method of science was essentially inductive, which means that it moved from the observed or known to the unobserved or unknown; the procedures of science were marked by caution; its results were held to be certain or at least highly probable.
Stove’s technical specialty within philosophy was induction; Popper’s specialty was deduction. These two don’t mix well.
Popper stood all this on its head. In his philosophy of science, we find the curious thought that falsifiability, not verifiability, is the distinguishing mark of scientific theories; this means that, for Popper, one theory is better than another if it is more disprovable than the other. Popper was apparently fond of referring to ‘the soaring edifice of science.’ But in fact his philosophy of science robbed that edifice of its foundation. Refracted through the lens of Popper’s theories, the history of modern science is transformed from a dazzling string of successes into a series of ‘problems’ or ‘conjectures and refutations.’ On the traditional view, scientific knowledge can be said to be cumulative: we know more now than we did in 1897, more then than in 1697. Popper’s theory, demoting scientific laws to mere guesses, denies this: in one of his most famous phrases, he speaks of science as ‘conjectural knowledge,’ an oxymoronic gem that, as Stove remarks, makes as much sense as ‘a drawn game which was won.’
It gets worse.
...Popper’s ideas did not only propound an irrationalist view of science: they also helped to license irrationalism for an entire generation. Without the bedrock - or, rather, the sandbank - of Popper’s theories upon which to build, the other philosophers of science Stove discusses - Imre Lakatos, Thomas Kuhn, and Paul Feyerabend - could never have developed their own influential permutations of irrationalism. And without the example of these and other such gentlemen, the blasé irrationalism that infects the humanities and social sciences today - and, indeed, that infects our entire ‘postmodern’ culture - might never have achieved epidemic proportions. Kuhn’s famous book The Structure of Scientific Revolutions (1962), which in effect denies that there is such a thing as progress in science, has by itself done incalculable intellectual damage to innumerable professors looking for excuses to deny the claims of scientific truth.
Obviously the spread of irrationalism was extremely rapid, because scientific authority lays claim to a political monopoly on all authority. Our Western democratic governments were founded on the notion of rationality derived from the Enlightenment. Science is largely government funded. Science must maintain its corner on being the sole authority on what is to be labeled rationality. If science loses its monopoly on this authority, the particular form of government that feeds it loses its authority, jeopardizing science’s food supply. Once a philosopher moves the herd of state scientists in a different direction, stragglers are cut out and picked off. The herd has learned to move right smartly en masse, survival of the fattest.
Irrationalism, to be plausible, must be disguised, and Stove devotes the first half of his book to a brilliant analysis of the literary devices used to achieve plausibility. There are two basic techniques. The first is to neutralize what Stove calls ‘success words’ - words like ‘knowledge,’ ‘discovery,’ ‘facts,’ ‘verified,’ ‘explanation.’ Such words carry an implication of cognitive achievement. No philosopher of science can do without them entirely. But the simple addition of scare quotes alters everything: ‘Galileo discovers x’ means something quite different from ‘Galileo “discovers” x.’ The element of ambiguity is essential: consider the effect of a sign advertising “fresh” fish. The same trick can obviously be used with words of cognitive failure: ‘mistake,’ ‘false,’ ‘refuted,’ etc. A “refuted” theory is not the same as a refuted theory.
The second technique involves deliberately conflating the history or sociology of science with the logic of science. Stove focuses especially on what he calls ‘sabotaging logical expressions.’ By embedding a logical statement in a historical context, one thereby undermines its logical status while preserving the impression that a logical claim has been made. A simple example is the difference between ‘P entails Q’ and ‘P entails Q according to most logicians.’ The first is a logical statement; the second is a historical claim; it is what Stove calls a ‘ghost logical statement’: it poaches on the prestige of logical entailment without actually making any logical claim at all: it is therefore completely immune to criticism on logical grounds.
Immunity to criticism on logical grounds is a very powerful memetic tool, as long as the rubes don’t see the trick.
Stove’s analysis of how his authors manage to make their irrationalism plausible to their readers is a tour de force. So is his analysis of how they made irrationalism plausible to themselves. The key, at least so far as Popper was concerned, was the challenge to Newtonian physics by relativity and quantum mechanics. As Stove points out, this ‘changed the entire climate of philosophy of science,’ replacing the nineteenth century’s blissful confidence about the impregnable certainty of science with a profound skepticism.
Radical skepticism is indistinguishable from nihilism.
Stove shows how Popper and his other authors, attempting ‘to ensure that no scientific theory should ever again become the object of over-confident belief,’ overreacted and embraced instead a form of irrationalism whose philosophical roots go back to Hume. At bottom, Stove shows, his authors embrace irrationalism because of ‘a certain extreme belief, by which their minds are dominated, about what is required for one proposition to be a reason to believe another.’ They all acknowledge that absolute certainty is impossible; but they assume that only absolute certainty will do as a warrant for rational belief.
I can’t begin to count how many discussions have been devoted to quibbling over the meaning of “certainty.”
They exhibit, in other words, ‘a variety of perfectionism.’ It is, of course, a disappointed perfectionism. Disappointed perfectionism has also led to ‘the frivolous elevation of “the critical attitude” into a categorical imperative.’ The principal result, as Stove notes, has been ‘to fortify millions of ignorant graduates and undergraduates in the belief, to which they are already too firmly wedded by other causes, that the adversary posture is all, and that intellectual life consists in “directionless quibble.”’
I like this guy! If authoritative science proclaims that no purpose and no meaning are anywhere to be found in the universe, then of course, intellectual life must, by scientific definition, consist of ‘directionless quibble.’ I call this stance ‘smug nihilism.’ I wonder how science itself escapes being ‘directionless quibble’? Could it be that scientific laws don’t apply to scientists themselves? Why are scientists so hungry for other people’s money all the time if they are really just a collection of random fluctuations of nothing?
Stove reportedly held Hume in the highest regard, but this did not prevent him from correcting Hume’s ‘perfectionistic’ errors. Stove’s point about the efficacy of the cumulative results of induction can be seen in a common-sensical way if you picture yourself as a scientist seeking grant money from the government or private sources to do scientific experiments. It is commonplace to obtain funding for an experiment seeking to verify predictions of a highly regarded (meaning politically sanctioned) theory, or to perform duplicate experiments to immediately verify previous highly regarded experimental results. But it is hard to obtain funding for doing experiments on ‘crackpot’ theories (where did the authority to label them ‘crackpot’ come from?), and it becomes exceedingly difficult to fund the duplication of well-established experimental results more than a few times.
Some scientists claim that “Popper’s idea is that we ought to test our ideas to destruction.” But this is clearly irrational on the face of it. No scientist, and no funding agency, is going to indefinitely work on duplicating the same experiment over and over and over, “to destruction.” Scientists working in the real world don’t want to waste their lives increasing the cumulative inductive probabilities from a very high percentage to an infinitesimally higher percentage; nor do agencies with money to spend on science. This is the real rational world, not Popper’s irrational critical-unto-destruction world.
Verifiability comes when the words of scientific theory are actualized in physical reality. This is why there is no boundary between technology and science. Now, as to why the Big Bang is not science: it is because physicists claim both that
- energy cannot be created from nothing and
- energy was created from nothing.
This looks fishy to me. This is an indication of the irrationality that has crept into the scientific endeavor.
Reilly Jones © 2001