Reading David Armstrong

David Malet Armstrong’s book Belief, Truth and Knowledge (1973, pp.150-61) contains an important analysis of the infinite regress of inferences – “reasons behind the reasons” – first noticed by Plato in the Theaetetus (200D-201C).

Knowledge traditionally entails true belief, but true belief does not entail knowledge.

Knowledge is true belief plus some justification in the form of reasons or evidence. But that evidence must itself be knowledge, which in turn must be justified, leading to a regress.

Following some unpublished work of Gregory O’Hair, Armstrong identifies and diagrams several possible ways to escape Plato’s regress, including:

  • Skepticism – knowledge is impossible
  • The regress is infinite but virtuous
  • The regress is finite, but has no end (Coherence view)
  • The regress ends in self-evident truths (Foundationalist view)
  • Non-inferential credibility, such as direct sense perceptions
  • Externalist theories (O’Hair is the source of the term “externalist”):
      • Causal view (Ramsey)
      • Reliability view (Ramsey)

Armstrong is cited by Hilary Kornblith and other epistemologists as restoring interest in “externalist” justification of knowledge. Since Descartes, epistemology had been focused on “internalist” justifications.

Armstrong does not subscribe to traditional views of justifying true beliefs, but he cites “causal” and “reliabilist” theories as direct, non-inferential validation of knowledge. Direct validation or justification avoids the problem of the infinite regress of inferences.

Causal and reliabilist theories were not original with Armstrong either. He referred to the 1929 work of Frank Ramsey. Today these ideas are primarily associated with the name of Alvin Goldman, who put forward both “causal” and “reliabilist” theories of justification for true beliefs.

Here is how Armstrong described “causal” and “reliabilist” views:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. But they still have a peculiar importance in the case of non-inferential knowledge because they serve to solve the problem of the infinite regress.

Externalist theories may be further sub-divided into ‘Causal’ and ‘Reliability’ theories.

6 (i) Causal theories. The central notion in causal theories may be illustrated by the simplest case. The suggestion is that Bap [‘a believes p’] is a case of Kap [‘a knows p’] if ‘p’ is true and, furthermore, the situation that makes ‘p’ true is causally responsible for the existence of the belief-state Bap. I not only believe, but know, that the room is rather hot. Now it is certainly the excessive heat of the room which has caused me to have this belief. This causal relation, it may then be suggested, is what makes my belief a case of knowledge.
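As a compact gloss on Armstrong’s notation (our own illustrative formalization, not Armstrong’s), the simple causal analysis can be put in a few lines of Lean, treating belief and causation as opaque relations:

```lean
-- A minimal sketch of the simple causal analysis quoted above (our labels).
-- `B a p` reads "a believes p" (Armstrong's Bap); `Causes p b` stands in for
-- "the situation making p true is causally responsible for belief-state b".
variable {Agent : Type} (B : Agent → Prop → Prop) (Causes : Prop → Prop → Prop)

-- Kap: a knows p iff p is true, a believes p, and the p-situation caused the belief.
def K (a : Agent) (p : Prop) : Prop :=
  p ∧ B a p ∧ Causes p (B a p)
```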

The source for causal theories is Frank Ramsey (1929):
Ramsey’s brief note on ‘Knowledge’, to be found among his ‘Last Papers’ in The Foundations of Mathematics, puts forward a causal view. A sophisticated recent version of a causal theory is to be found in ‘A Causal Theory of Knowing’ by Alvin I. Goldman (Goldman 1967).

Causal theories face two main types of difficulty. In the first place, even if we restrict ourselves to knowledge of particular matters of fact, not every case of knowledge is a case where the situation known is causally responsible for the existence of the belief. For instance, we appear to have some knowledge of the future. And even if all such knowledge is in practice inferential, non-inferential knowledge of the future (for example, that I will be ill tomorrow) seems to be an intelligible possibility. Yet it could hardly be held that my illness tomorrow causes my belief today that I will be ill tomorrow. Such cases can perhaps be dealt with by sophisticating the Causal analysis. In such a case, one could say, both the illness tomorrow and today’s belief that I will be ill tomorrow have a common cause, for instance some condition of my body today which not only leads to illness but casts its shadow before by giving rise to the belief. (An ‘early-warning’ system.)

In the second place, and much more seriously, cases can be envisaged where the situation that makes ‘p’ true gives rise to Bap, but we would not want to say that A knew that p. Suppose, for instance, that A is in a hypersensitive and deranged state, so that almost any considerable sensory stimulus causes him to believe that there is a sound of a certain sort in his immediate environment. Now suppose that, on a particular occasion, the considerable sensory stimulus which produces that belief is, in fact, a sound of just that sort in his immediate environment. Here the p-situation produces Bap, but we would not want to say that it was a case of knowledge.

I believe that such cases can be excluded only by filling out the Causal Analysis with a Reliability condition. But once this is done, I think it turns out that the Causal part of the analysis becomes redundant, and that the Reliability condition is sufficient by itself for giving an account of non-inferential (and inferential) knowledge.

6 (ii) Reliability theories. The second ‘Externalist’ approach is in terms of the empirical reliability of the belief involved. Knowledge is empirically reliable belief. Since the next chapter will be devoted to a defence of a form of the Reliability view, it will be only courteous to indicate the major precursors of this sort of view which I am acquainted with.

Ramsey is the source for reliabilist views as well:
Once again, Ramsey is the pioneer. The paper ‘Knowledge’, already mentioned, combines elements of the Causal and the Reliability view. There followed John Watling’s ‘Inference from the Known to the Unknown’ (Watling 1954), which first converted me to a Reliability view. Since then there has been Brian Skyrms’ very difficult paper ‘The Explication of “X knows that p”’ (Skyrms 1967), and Peter Unger’s ‘An Analysis of Factual Knowledge’ (Unger 1968), both of which appear to defend versions of the Reliability view. There is also my own first version in Chapter Nine of A Materialist Theory of the Mind. A still more recent paper, which I think can be said to put forward a Reliability view, and which in any case anticipates a number of the results I arrive at in this Part, is Fred Dretske’s ‘Conclusive Reasons’ (Dretske 1971).

Hilary Kornblith on Armstrong
The Terms “Internalism” and “Externalism”
The terms “internalism” and “externalism” are used in philosophy in a variety of different senses, but their use in epistemology for anything like the positions which are the focus of this book dates to 1973. More precisely, the word “externalism” was introduced in print by David Armstrong in his book Belief, Truth and Knowledge in the following way:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. (Belief, Truth and Knowledge, p.157)

So in Armstrong’s usage, “externalism” is a view about knowledge, and it is the view that when a person knows that a particular claim p is true, there is some sort of “natural relation” which holds between that person’s belief that p and the world. One such view, suggested in 1967 by Alvin Goldman, was the Causal Theory of Knowledge. On this view, a person knows that p (for example, that it’s raining) when that person’s belief that p was caused by the fact that p. A related view, championed by Armstrong and later by Goldman as well, is the Reliability Account of Knowledge, according to which a person knows that p when that person’s belief is both true and, in some sense, reliable: on some views, the belief must be a reliable indicator that p; on others, the belief must be produced by a reliable process, that is, one that tends to produce true beliefs. Frank Ramsey was a pioneer in defending a reliability account of knowledge. Particularly influential work in developing such an account was also done by Brian Skyrms, Peter Unger, and Fred Dretske.

Accounts of knowledge which are externalist in Armstrong’s sense mark an important break with tradition, according to which knowledge is a kind of justified, true belief. On traditional accounts, in part because justification is an essential ingredient in knowledge, a central task of epistemology is to give an account of what justification consists in. And, according to tradition, what is required for a person to be justified in holding a belief is for that person to have a certain justification for the belief, where having a justification is typically identified with being in a position, in some relevant sense, to produce an appropriate argument for the belief in question. What is distinctive about externalist accounts of knowledge, as Armstrong saw it, was that they do not require justification, at least in the traditional sense. Knowledge merely requires having a true belief which is appropriately connected with the world.

But while Armstrong’s way of viewing reliability accounts of knowledge has them rejecting the view that knowledge requires justified true belief, Alvin Goldman came to offer quite a different way of viewing the import of reliability theories: in 1979, Goldman suggested that instead of seeing reliability accounts as rejecting the claim that knowledge requires justified true belief, we should instead embrace an account which identifies justified belief with reliably produced belief. Reliability theories of knowledge, on this way of understanding them, offer a non-traditional account of what is required for a belief to be justified. This paper of Goldman’s, and his subsequent extended development of the idea, have been at the center of epistemological discussion ever since.

See David Armstrong on I-Phi

Reading Harry Frankfurt

Wednesday, December 10th, 2008

Harry G. Frankfurt is the inventor of wildly unrealistic but provocative “thought experiments” designed to show that a person can be morally responsible even though he “could not have done otherwise.” Specifically, his goal is to deny that moral responsibility requires the freedom to choose among alternative possibilities. The traditional argument for free will requires alternative possibilities so that an agent could have done otherwise; without them, it is said, there is no moral responsibility.
In 1969 Frankfurt famously defined what he called “The Principle of Alternate Possibilities” or PAP, then proceeded to deny it.

“a person is morally responsible for what he has done only if he could have done otherwise.”

Frankfurt’s thought experiments are attacks on this principle. Considering the absurd nature of his attacks (Frankfurt asks us to imagine an agent who can control the minds of others, or a demon inside one’s mind that can intervene in our decisions), the recent philosophical literature is surprisingly full of articles on “Frankfurt-type cases,” purported counterexamples to PAP designed to defend moral responsibility in the absence of alternative possibilities. Frankfurt changed the debate on free will and moral responsibility with his hypothetical intervening demon. For example, John Martin Fischer’s semicompatibilism assumes with Frankfurt that we can have moral responsibility, even if determinism (and/or indeterminism) is incompatible with free will.

Frankfurt’s basic claim is as follows:

“The principle of alternate possibilities is false. A person may well be morally responsible for what he has done even though he could not have done otherwise. The principle’s plausibility is an illusion, which can be made to vanish by bringing the relevant moral phenomena into sharper focus.”

Frankfurt posits a counterfactual demon who can intervene in an agent’s decisions if the agent is about to do something different from what the demon wants the agent to do. Frankfurt’s demon will block any alternative possibilities, but leave the agent to “freely choose” to do the one possibility desired by the demon. Frankfurt claims the existence of the hypothetical control mechanisms blocking alternative possibilities is irrelevant to the agent’s free choice. This is true when the agent’s choice agrees with the demon, but obviously false should the agent disagree. In that case, the demon would have to block the agent’s will, and the agent would surely notice.

(IRR) There may be circumstances that in no way bring it about that a person performs a certain action; nevertheless, those very circumstances make it impossible for him to avoid performing that action.

Compatibilists have long been bothered by alternative possibilities, apparently needed in order that agents “could do otherwise.” They knew that determinism allows only a single future, one actual causal chain of events. They were therefore delighted to get behind Frankfurt’s examples as proof that alternative possibilities, perhaps generated in part by random events, are not needed for moral responsibility. Frankfurt argued for moral responsibility without libertarian free will.

Note, however, that Frankfurt assumes that genuine alternative possibilities do exist. If not, there is nothing for his counterfactual intervening demon to block. Furthermore, without alternatives, Frankfurt would have to admit that there is only one “actual sequence” of events leading to one possible future. “Alternative sequences” would be ruled out. Since Frankfurt’s demon, much like Laplace’s demon, has no way of knowing the actual information about future events – such as the agent’s decisions – until that information comes into existence, such demons are not possible, and Frankfurt-style thought experiments, entertaining as they are, cannot establish the compatibilist version of free will.

Incompatibilist libertarians like Robert Kane, David Widerker, and Carl Ginet have mounted attacks on Frankfurt-type examples, in defense of free will. Their basic idea is that in an indeterministic world Frankfurt’s demon cannot know in advance what an agent will do. As Widerker put it, there is no “prior sign” of the agent’s deliberate choice. In information-theoretic terms, the information about the choice does not yet exist in the universe. So in order to block an agent’s decision, the demon would have to act in advance, and that would destroy the presumed “responsibility” of the agent for the choice, whether or not there are available alternative possibilities. We could call this the “Information Objection.”

And note that no matter how many alternative possibilities are blocked by Frankfurt’s hypothetical intervener, the simple alternative of not acting always remains open, and in cases of moral actions, not acting almost always has comparable moral significance. This could be called the “Yes/No Objection.”

Here is a discussion of the problem, from Kane’s A Contemporary Introduction to Free Will, 2005 (p.87):

5. The Indeterminist World Objection

While the “flicker of freedom” strategy will not suffice to refute Frankfurt, it does lead to a third objection that is more powerful. This third objection is one that has been developed by several philosophers, including myself, David Widerker, Carl Ginet, and Keith Wyma. We might call it the Indeterministic World Objection. I discuss this objection in my book Free Will and Values. Following is a summary of this discussion:

Suppose Jones’s choice is undetermined up to the moment when it occurs, as many incompatibilists and libertarians require of a free choice. Then a Frankfurt controller, such as Black, would face a problem in attempting to control Jones’s choice. For if it is undetermined up to the moment when he chooses whether Jones will choose A or B, then the controller Black cannot know before Jones actually chooses what Jones is going to do. Black may wait until Jones actually chooses in order to see what Jones is going to do. But then it will be too late for Black to intervene. Jones will be responsible for the choice in that case, since Black stayed out of it. But Jones will also have had alternative possibilities, since Jones’s choice of A or B was undetermined and therefore it could have gone either way. Suppose, by contrast, Black wants to ensure that Jones will make the choice Black wants (choice A). Then Black cannot stay out of it until Jones chooses. He must instead act in advance to bring it about that Jones chooses A. In that case, Jones will indeed have no alternative possibilities, but neither will Jones be responsible for the outcome. Black will be responsible since Black will have intervened in order to bring it about that Jones would choose as Black wanted.

In other words, if free choices are undetermined, as incompatibilists require, a Frankfurt controller like Black cannot control them without actually intervening and making the agent choose as the controller wants. If the controller stays out of it, the agent will be responsible but will also have had alternative possibilities because the choice was undetermined. If the controller does intervene, by contrast, the agent will not have alternative possibilities but will also not be responsible (the controller will be). So responsibility and alternative possibilities go together after all, and PAP would remain true — moral responsibility requires alternative possibilities — when free choices are not determined. If this objection is correct, it would show that Frankfurt-type examples will not work in an indeterministic world in which some choices or actions are undetermined. In such a world, as David Widerker has put it, there will not always be a reliable prior sign telling the controller in advance what agents are going to do. Only in a world in which all of our free actions are determined can the controller always be certain in advance how the agent is going to act. This means that, if you are a compatibilist, who believes free will could exist in a determined world, you might be convinced by Frankfurt-type examples that moral responsibility does not require alternative possibilities. But if you are an incompatibilist or libertarian, who believes that some of our morally responsible acts must be undetermined, you need not be convinced by Frankfurt-type examples that moral responsibility does not require alternative possibilities.

There are also many defenders of Frankfurt’s attack on alternative possibilities, notably John Martin Fischer. Many of these positions appear in the 2006 book Moral Responsibility and Alternative Possibilities, edited by David Widerker and Michael McKenna.

Reading Derk Pereboom

In Living Without Free Will, Derk Pereboom offers a “hard incompatibilism” that makes both free will and moral responsibility incompatible with determinism. Although Pereboom claims to be agnostic about the truth of determinism, he argues that we should admit there is neither human freedom nor moral responsibility and that we should learn to live without free will.

He is close to a group of thinkers who share a view that William James called “hard determinism,” including Richard Double, Ted Honderich, Saul Smilansky, Galen Strawson, and the psychologist Daniel Wegner.

Some of them call for the recognition that “free will is an illusion.”

But note that Pereboom’s “hard incompatibilism” is not only the case if determinism is true. It is equally the case if indeterminism is true. Pereboom argues that neither provides the control needed for moral responsibility. This is the standard argument against free will. As Pereboom states his view:

I argue for a position closely related to hard determinism. Yet the term “hard determinism” is not an adequate label for my view, since I do not claim that determinism is true. As I understand it, whether an indeterministic or a deterministic interpretation of quantum mechanics is true is currently an open question. I do contend, however, that not only is determinism incompatible with moral responsibility, but so is the sort of indeterminacy specified by the standard interpretation of quantum mechanics, if that is the only sort of indeterminacy there is.
(Living Without Free Will, p.xviii)

I will grant, for purposes of argument, that event-causal libertarianism allows for as much responsibility-relevant control as compatibilism does. I shall argue that if decisions were indeterministic events of the sort specified by this theory, then agents would have no more control over their actions than they would if determinism were true, and such control is insufficient for responsibility.
(Living Without Free Will, p.46)

In his 1995 essay stating the case for “Hard Incompatibilism,” Pereboom notes…

The demographic profile of the free will debate reveals a majority of soft determinists, who claim that we possess the freedom required for moral responsibility, that determinism is true, and that these views are compatible. Libertarians, incompatibilist champions of the freedom required for moral responsibility, constitute a minority. Not only is this the distribution in the contemporary philosophical population, but in Western philosophy this has always been the pattern. Seldom has hard determinism — the incompatibilist endorsement of determinism and rejection of the freedom required for moral responsibility — been defended. One would expect hard determinism to have few proponents, given its apparent renunciation of morality. I believe, however, that the argument for hard determinism is powerful, and furthermore, that the reasons against it are not as compelling as they might at first seem.

The categorization of the determinist position by ‘hard’ and ‘soft’ masks some important distinctions, and thus one might devise a more fine-grained scheme. Actually, within the conceptual space of both hard and soft determinism there is a range of alternative views. The softest version of soft determinism maintains that we possess the freedom required for moral responsibility, that having this sort of freedom is compatible with determinism, that this freedom includes the ability to do otherwise than what one actually will do, and that even though determinism is true, one is yet deserving of blame upon having performed a wrongful act. The hardest version of hard determinism claims that since determinism is true, we lack the freedom required for moral responsibility, and hence, not only do we never deserve blame, but, moreover, no moral principles or values apply to us. But both hard and soft determinism encompass a number of less extreme positions. The view I wish to defend is somewhat softer than the hardest of the hard determinisms, and in this respect it is similar to some aspects of the position recently developed by Ted Honderich. In the view we will explore, since determinism is true, we lack the freedom required for moral responsibility. But although we therefore never deserve blame for having performed a wrongful act, most moral principles and values are not thereby undermined.
(Noûs 29, 1995, reprinted in Free Will, ed. D. Pereboom, 1997, p.242)

Pereboom concludes:

Given that free will of some sort is required for moral responsibility, then libertarianism, soft determinism, and hard determinism, as typically conceived, are jointly exhaustive positions (if we allow the “deterministic” positions the view that events may result from indeterministic processes of the sort described by quantum mechanics). Yet each has a consequence that is difficult to accept.

If libertarianism were true, then we would expect events to occur that are incompatible with what our physical theories predict to be overwhelmingly likely.

If soft determinism were true, then agents would deserve blame for their wrongdoing even though their actions were produced by processes beyond their control.

If hard determinism were true, agents would not be morally responsible — agents would never deserve blame for even the most cold-blooded and calmly executed evil actions.

I have argued that hard determinism could be the easiest view to accept. Hard determinism need not be of the hardest sort. It need not subvert the commitment to doing what is right, and although it does undermine some of our reactive attitudes, secure analogues of these attitudes are all one requires for good interpersonal relationships.

Consequently, of the three positions, hard determinism might well be the most attractive, and it is surely worthy of more serious consideration than it has been accorded. (p.272)

Pereboom distinguishes two libertarian positions, agent-causal and event-causal. While his agent-causal positions involve metaphysical freedom if not immaterial substance, his event-causal views assume that indeterminism is the direct or indirect cause of the action. He then traces decisions determined by character back to early character-forming events. Since these are always in turn either themselves determined, or at best undetermined, we cannot be responsible for our characters either.

According to the libertarian, we can choose to act without being causally determined by factors beyond our control, and we can therefore be morally responsible for our actions. Arguably, this is the common-sense position. Libertarian views can be divided into two categories. In agent causal libertarianism, free will is explained by the existence of agents who can cause actions not by virtue of any state they are in, such as a belief or a desire, but just by themselves — as substances. Such agents are capable of causing actions in this way without being causally determined to do so. In an attractive version of agent-causal theory, when such an agent acts freely, she can be inclined but not causally determined to act by factors such as her desires and beliefs. But such factors will not exhaust the causal account of the action. The agent herself, independently of these factors, provides a fundamental element. Agent-causal libertarianism has been advocated by Thomas Reid, Roderick Chisholm, Richard Taylor, Randolph Clarke, and Timothy O’Connor. Perhaps the views of William of Ockham and Immanuel Kant also count as agent-causal libertarianism.

In the second category, which I call event-causal libertarianism, only causation involving states or events is permitted. Required for moral responsibility is not agent causation, but production of actions that crucially involves indeterministic causal relations between events. The Epicurean philosopher Lucretius provides a rudimentary version of such a position when he claims that free actions are accounted for by uncaused swerves in the downward paths of atoms. Sophisticated variants of this type of libertarianism have been developed by Robert Kane and Carl Ginet.
(Living Without Free Will, p.xv)

On Ginet’s and Kane’s conceptions, are free choices indeed partially random events (or perhaps even totally random events on Ginet’s account) for which agents cannot be morally responsible? At this point, one might suggest that there is an additional resource available to bolster Ginet’s and Kane’s account of morally responsible decision. For convenience, let us focus on Kane’s view (I suspect that Ginet’s position will not differ significantly from Kane’s on this issue). One might argue that in Kane’s conception, the character and motives that explain an effort of will need not be factors beyond the agent’s control, since they could be produced partly as a result of the agent’s free choices. Consequently, it need not be that the effort, and thus the choice, is produced solely by factors beyond the agent’s control and no further contribution of the agent. But this move is unconvincing. To simplify, suppose that it is character alone, and not motives in addition, that explains the effort of will. Imagine first that the character that explains the effort is not a product of the agent’s free choices, but rather that there are factors beyond his control that determine this character, or nothing produces it, or factors beyond his control contribute to the production of the character without determining it and nothing supplements their contribution to produce it. Then, by incompatibilist standards, the agent cannot be responsible for his character. But in addition, neither can he be responsible for the effort that is explained by the character, whether this explanation is deterministic or indeterministic. If the explanation is deterministic, then there will be factors beyond the agent’s control that determine the effort, and the agent will thereby lack moral responsibility for the effort. If the explanation is indeterministic, given that the agent’s free choice plays no role in producing the character, and nothing besides the character explains the effort, there will be factors beyond the agent’s control that make a causal contribution to the production of this effort without determining it, while nothing supplements the contribution of these factors to produce the effort. Here, again, the agent cannot be morally responsible for the effort.

However, prospects for moral responsibility for the effort of will are not improved if the agent’s character is partly a result of his free choices. For consider the first free choice an agent ever makes. By the above argument, he cannot be responsible for it. But then he cannot be responsible for the second choice either, whether or not the first choice was character-forming. If the first choice was not character-forming, then the character that explains the effort of will for the second choice is not produced by his free choice, and then by the above argument, he cannot be morally responsible for it. Suppose, alternatively, that the first choice was character-forming. Because the agent cannot be responsible for the first choice, he also cannot be responsible for the resulting character formation. But then, by the above argument, he cannot be responsible for the second choice either. Since this type of reasoning can be repeated for all subsequent choices, Kane’s agent can never be morally responsible for his efforts of will.

Given that such an agent can never be morally responsible for his efforts of will, neither can he be responsible for his choices. For in Kane’s picture, there is nothing that supplements the contribution of the effort of will to produce the choice. Indeed, all free choices will ultimately be partially random events, for in the final analysis there will be factors beyond the agent’s control, such as his initial character, that partly produce the choice, while there will be nothing that supplements their contribution in the production of the choice, and by the most attractive incompatibilist standard, agents cannot be responsible for such partially random events.
(Living Without Free Will, pp.48-50)
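Pereboom’s regress over an agent’s successive choices has the skeleton of a simple mathematical induction. Here is a minimal sketch in Lean (the predicate `R` and the premise names are our illustrative labels for his assumptions, not Pereboom’s own formalism):

```lean
-- Sketch of the inductive skeleton of Pereboom's regress (our formalization).
-- R n   : "the agent is morally responsible for his (n+1)-th choice".
-- h0    : he cannot be responsible for his first choice.
-- hstep : responsibility for a later choice would require responsibility
--         for the earlier choice that shaped the character behind it.
example (R : Nat → Prop) (h0 : ¬R 0) (hstep : ∀ n, R (n + 1) → R n) :
    ∀ n, ¬R n := by
  intro n
  induction n with
  | zero => exact h0
  | succ k ih => exact fun h => ih (hstep k h)
```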

See Derk Pereboom on I-Phi

Reading John Martin Fischer

Four Views

Since the time of Peter van Inwagen’s 1983 classic An Essay on Free Will, which introduced the Consequence Argument, John Martin Fischer has been arguing the case for a compatibilism that focuses on moral responsibility and agent control rather than compatibilist free will per se.

Fischer was inspired by Peter Strawson’s influential 1962 essay, Freedom and Resentment, which changed the subject from the intractable free will problem to moral responsibility alone.

As Fischer says in his new 4-volume set Free Will (but mostly about moral responsibility and alternative possibilities), “Some philosophers do not distinguish between freedom and moral responsibility. Put a bit more carefully, they tend to begin with the notion of moral responsibility, and “work back” to a notion of freedom; this notion of freedom is not given independent content (separate from the analysis of moral responsibility). For such philosophers, “freedom” refers to whatever conditions are involved in choosing or acting in such a way as to be morally responsible.” (Free Will, vol.I, p.xxiii)

Fischer also has been influenced by Harry Frankfurt’s attack on what Frankfurt called the Principle of Alternate Possibilities (PAP). Before Frankfurt, compatibilists and incompatibilists alike had argued that alternative possibilities seemed to be a condition not only for free will but for moral responsibility.

Frankfurt’s clever examples changed the debate from compatibilism vs. incompatibilism to the very existence of alternative possibilities.

Although attacks and counterattacks continue, Frankfurt-style examples have become far too arcane and unlikely to win support outside a small number of compatibilists and incompatibilists.

Nevertheless, Fischer has tried to carve out a position called semicompatibilism, which de-emphasizes alternative possibilities and emphasizes agent control. Fischer hopes that semicompatibilism will be resistant to any discovery by science that strict causal determinism is true. He does this by dividing the needed agent control into two parts, “regulative control” and “guidance control.”

Regulative control involves alternative possibilities, which lead to what Fischer calls “alternative sequences” of action. Fischer thinks he can simply deny that agents have regulative control, and bypass the question of alternative possibilities, based on Frankfurt-style examples. Although Fischer generally supports Frankfurt-style examples, he is the author of one of the cleverest counterattacks, the idea that the mere possibility that the agent might try an alternative gives rise to a “flicker of freedom” (The Metaphysics of Free Will: An Essay on Control, pp.131-159).

Fischer wants to focus our attention on the more critical guidance control, which describes the “reasons-responsiveness” and “sourcehood” involved in the “actual sequence” of events leading up to the agent’s action. For Fischer, no alternative sequences, however many and however they flicker with freedom, are as relevant as the actual sequence.

Being the source of our actions allows us to say that our actions are “up to us,” that we can take ownership of our actions. This is what Fischer regards as the “freedom-relevant condition.”  It is what Robert Kane calls our “ultimate responsibility (UR).” And it is what Manuel Vargas calls the “self-governance condition” in his Revisionism.

Kane, Vargas, and Derk Pereboom contributed to Fischer’s recent book Four Views on Free Will. Pereboom also focuses on moral responsibility like Fischer, but he disagrees with Fischer that moral desert justifies praise and blame, reward and punishment. At the most, says Pereboom,  responsibility can justify that we can be “legitimately called to moral improvement.” Desert implies retributivism. Pereboom says the most we can justify is moral rehabilitation, for its beneficial consequences to society.

Although Fischer is officially agnostic on the ancient problem of free will versus determinism, he shows a strong commitment to causality and determinism over his years of defending compatibility with determinism.

Nevertheless, Fischer’s dividing of agent control issues into regulative control (involving alternative possibilities) and guidance control (what happens in the actual sequence) is an excellent approach that allows us to situate the indeterminism that many thinkers feel is critical to any libertarian model. Fischer notes that indeterminism in the alternative possibilities might generate “flickers of freedom.” And he says clearly (Four Views, p.74) that guidance control is not enhanced by positing indeterminism.

In his 1998 book Responsibility and Control, written with Mark Ravizza, Fischer describes what he calls the Direct and Indirect Arguments for incompatibilism. The Indirect Argument says that determinism rules out moral responsibility indirectly, by ruling out alternative possibilities. From his semicompatibilist view, that does not threaten moral responsibility. Only in the Direct Argument for incompatibilism does determinism rule out moral responsibility directly.

So might Fischer agree with a view that 1) allows the “freedom-relevant condition”  (reasons responsiveness and ownership) in the actual sequence to be governed by what he calls “almost causal determinism” (Responsibility and Control, p.15n) and 2) allows indeterminism in the generation of the alternative possibilities (flickers of freedom)?

That is the view we offer in the I-Phi Cogito model. Although they do not endorse it themselves, Daniel Dennett and Alfred Mele have also offered this view as something libertarians should like.

Indeterminism is important only in microscopic structures, but that is enough to introduce noise and randomness into our thoughts, especially when we are rapidly generating alternatives for action by random combinations of past experiences. But our brain and our neurons can suppress microscopic noise when they need to, ensuring what we call adequate determinism, what Fischer calls almost causal determinism, and what Ted Honderich calls near determinism – in our willed actions.

In Robert Kane’s contribution to Four Views on Free Will, he correctly identifies noise in messages as generated indeterministically, but mistakenly thinks these are merely a “hindrance or obstacle” that raises our level of effort when making his rare but morally significant “self-forming actions.”

The role of indeterminism in free will is better seen as simply generating Fischer’s AP “flickers of freedom.” These alternative possibilities are then the “free” part of “free will” (Fischer’s regulative control).

The “will” part (Fischer’s guidance control) is “almost causally” determined to be reasons responsive and to take ownership for the determination to act in a fashion consistent with the agent’s character and values.

Event-causal libertarians like Kane and Laura Waddell Ekstrom think this kind of freedom is not enough. And agent-causal libertarians like Randolph Clarke and Timothy O’Connor want even more “metaphysical” freedom. They say that if the will is determined to act in a rational way consistent with its character and values, then the agent will make exactly the same decision in exactly the same circumstances.

Such consistency of action does not bother the common sense thinker or the compatibilist (even a hard incompatibilist?) philosopher.

Kane, Ekstrom, and others continue to invoke some indeterminism in the decision process itself. As Daniel Dennett recommended as early as 1978 (in Brainstorms) and Alfred Mele has been promoting as a “modest libertarianism” in his recent books (Autonomous Agency and Free Will and Luck), indeterminism is best kept in the early stage of a two-stage process.

We first need “free” (alternative possibilities) and then “will” (adequately determined actions) in a temporal sequence. First chance, then choice.
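As a purely illustrative sketch (our toy code, not anyone’s published model), the temporal separation can be shown in a few lines: an indeterministic first stage generates alternative possibilities, and an adequately determined second stage evaluates them against the agent’s values:

```python
import random

def generate_alternatives(experiences, n=5, seed=None):
    """Stage 1 ("free"): indeterministically generate alternative possibilities
    by randomly recombining past experiences."""
    rng = random.Random(seed)          # seed=None stands in for genuine noise
    return [tuple(rng.sample(experiences, 2)) for _ in range(n)]

def choose(alternatives, values):
    """Stage 2 ("will"): adequately determined evaluation -- the same
    alternatives and the same values always yield the same choice."""
    return max(alternatives, key=lambda alt: sum(values.get(a, 0) for a in alt))

experiences = ["help", "wait", "ask", "leave", "argue"]
values = {"help": 3, "ask": 2, "wait": 1, "leave": 0, "argue": -2}

options = generate_alternatives(experiences)   # first chance...
decision = choose(options, values)             # ...then choice
print(options, "->", decision)
```

Given the same alternatives and the same values, `choose` always returns the same decision; all of the indeterminism lives in the first stage.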

I think that John Martin Fischer’s guidance control, perfectly compatible with his “almost causal determinism,” validates not only his semicompatibilist view of moral responsibility, but also supports the common sense or popular view of free will that is found in the opinion surveys of experimental philosophers Joshua Knobe and Shaun Nichols.

While limited compared to “metaphysical” freedom, this view is consistent with a broadly scientific world view, a requirement for any systematic revisionism that Manuel Vargas calls “naturalistic plausibility” (Four Views, p.153).

Ironically perhaps, this view would be the very opposite of a revisionism, in the sense that the diagnostic (descriptive) analysis of common sense would agree remarkably well with what Vargas calls the prescriptive view for philosophers. Or perhaps it is the philosophers’ views that need revision?

As an illustration of just how naturalistically plausible this new view of free will is, consider the case of biological evolution. The evidence is overwhelming that variations in the gene pool are driven by random mutations in the DNA. Many of these mutations are caused by indeterministic quantum mechanical events, cosmic ray collisions for the most part. Think of the mutations as alternative possibilities for new species. An adequately determined process of natural selection then weeds out those random variations that cannot reproduce themselves and compete with their ancestors. First chance, then selection.
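The same two-stage structure, sketched for the evolutionary analogy (a toy model under obviously unrealistic assumptions about genomes and fitness):

```python
import random

def mutate(genome, rate=0.1, rng=random):
    """Stage 1: random mutation (quantum-driven, in the analogy) creates variants."""
    return "".join(c if rng.random() > rate else rng.choice("ACGT") for c in genome)

def select(variants, fitness):
    """Stage 2: adequately determined selection keeps the fitter half."""
    return sorted(variants, key=fitness, reverse=True)[: len(variants) // 2]

fitness = lambda g: g.count("G")   # toy fitness function: reward G-rich genomes
population = ["ACGTACGT"] * 8
for _ in range(10):
    variants = [mutate(g) for g in population]            # first chance (mutation)...
    population = select(variants + population, fitness)   # ...then selection
print(population[0])
```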

Indeed, the story of life is maintaining some information stability (parts of our DNA have been the same for 2.8 billion years) in a chaotic environment – and not the pseudo-random deterministic chaos of the computer theorists, but real irreducible chaos.

Only a believer in metaphysical determinism would deny the evolutionary evidence for indeterminism and two stages, the first microscopic and random (chance), the second macroscopic and adequately determined (choice). Sadly, such a metaphysical belief is the intelligent design position of the creationists.

Of course we are discussing only science, not logical certainty.

So we can also ameliorate John Martin Fischer’s nightmare of waking up one morning to a New York Times headline “Causal Determinism Is True”  (Four Views, p.44).

Nothing in science is logically true, in the sense of true in all possible worlds, true by the principle of non-contradiction or the weaker law of the excluded middle.  It is the excluded middle argument that leads us to the muddled standard argument against free will.

Our two-stage argument is quite old. We can trace it back to William James (1884, in The Dilemma of Determinism), Henri Poincaré (1906), Arthur Holly Compton (1935), and Karl Popper (1961).

What does Information Philosophy have to do with the two-stage model?

Information is the principal reason that biology is not reducible to chemistry and physics. Information is what makes an organism an individual, each with a different history. No atom or molecule has a history. Information is what makes us ourselves. Increasing information is involved in all “emergent” phenomena.

In information philosophy, the future is unpredictable for two basic reasons. First, quantum mechanics shows that some events are not predictable. The world is causal but not determined. Second, the early universe does not contain the information of later times, just as early primates do not contain the information structures for intelligence and verbal communication, and infants do not contain the knowledge and remembered experience they will have as adults.

The universe began in a state of minimal information nearly fourteen billion years ago. Information about the future is always missing, not present until it has been created, after which it is frozen.

John Martin Fischer calls this the “Principle of the Fixity of the Past” (Responsibility and Control, p.22). It suggests that even divine foreknowledge is not present in our open expanding universe, lending support to the religious view called Open Theism.

_____________________

I am indebted to Kevin Timpe’s new book Free Will: Sourcehood and Its Alternatives, which clarified many of the terms in the current debates and greatly aided my rereading of Four Views, especially in elucidating the positions of Fischer and Vargas.

See John Martin Fischer on I-Phi

Reading Kevin Timpe

Free Will

Kevin Timpe is a Christian philosopher who wrote the Internet Encyclopedia of Philosophy article on Free Will and serves as the IEP editor for Religion and Philosophy.

Timpe believes that free will can only be grounded if the ultimate source for actions lies entirely within the agent, if our actions are “up to us” (Aristotle’s ἐφ ἡμῖν). This can only be the case if causal determinism is false.

While Timpe focuses on the “sourcehood” of the agent’s origination of – or ultimate responsibility for – actions, he accepts as a corollary that the agent will have genuine alternative possibilities for action, since the existence of alternative possibilities is an indicator of the absence of causal determinism.

But Timpe departs from a prime assumption of those compatibilists who have defended Harry Frankfurt’s attacks on alternative possibilities. That assumption is the first premise in what Timpe calls the Basic Argument:

  1. Free will requires the ability to do otherwise (alternative possibilities).
  2. If causal determinism is true, then no agent has the ability to do otherwise (no alternative possibilities).
  3. Therefore, free will requires the falsity of causal determinism (indeterminism is true and alternative possibilities exist).
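The Basic Argument is a straightforward modus tollens; here is a minimal formal sketch (the propositional labels are ours, chosen to mirror the premises above):

```lean
-- Timpe's Basic Argument as a propositional derivation (our labels).
-- FW : the agent has free will, D : causal determinism is true,
-- AP : the agent has alternative possibilities (the ability to do otherwise).
example (FW D AP : Prop) (h1 : FW → AP) (h2 : D → ¬AP) : FW → ¬D :=
  fun hfw hd => h2 hd (h1 hfw)
```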

For Timpe, alternative possibilities are merely a corollary of “sourcehood.” He calls himself a Sourcehood Incompatibilist, a position John Martin Fischer calls an “actual-sequence incompatibilist.”

The basic requirement of sourcehood for libertarian free will is that some indeterminism occurs in the “actual sequence” of events leading up to the agent’s action. Timpe does not describe in detail how, when, and where such indeterminism might enter the sequence. He does deny that “luck” is a problem, suggesting he is aware that chance must not be the direct cause of an action.

Note that sourcehood incompatibilists can be hard determinists, like Derk Pereboom, who denies free will and moral responsibility.

Since 1962, when Peter Strawson changed the subject from free will to moral responsibility (emphasizing the natural existence of reactive attitudes and moral behavior), and since 1969, when Frankfurt changed the debate from free will models to his denial of what he called “alternate” possibilities, the focus of attention in “free will debates” has been on moral responsibility and the agential control needed for responsibility.

Compatibilists have leaped at the opportunity to deny alternative possibilities because the determinism that they feel is compatible with free will does not allow alternative possibilities in anything but what Timpe calls a “subjunctive sense.” The agent could have done otherwise if he or she had decided to do otherwise, which is possible if the past had been different, an argument first introduced formally by G. E. Moore, but present as early as the Hobbes-Bramhall debates.

In his 2008 book, Free Will: Sourcehood and Its Alternatives, Timpe has an excellent review of the last thirty-five years of debates, especially on the Kane–Widerker arguments, which showed that a Frankfurt demon depended on determinism to predict which actions needed to be blocked to ensure the agent would “freely” choose the action the intervener wanted. Thus Frankfurt examples “beg the question,” assuming determinism to attack alternative possibilities.

Timpe reviews the many abortive attempts by compatibilists to refute Kane-Widerker and other attacks on Frankfurt, including the “flicker of freedom” attack developed by Fischer (though Fischer is himself a compatibilist). The idea is that just at the moment of deciding, the agent could decide to “try” one of the alternative possibilities being blocked by the intervener.

Timpe’s assessment of these decades of debate is severe:

In these sorts of circumstances, Fischer thinks, further arguments would be begging the question since the two sides of the debate begin with different premises, often based on intuitions that the other side denies: “I suggest that some of the debates about whether alternative possibilities are required for moral responsibility may at some level be fueled by different intuitive pictures of moral ‘responsibility.’”

If this is true, then perhaps it would be true to say that not much philosophical headway has been made in the past 35 years of debate begun by Frankfurt’s article. It is certainly true that much is made of various and conflicting intuitions in the debate surrounding the compatibilism/incompatibilism debate. Perhaps the debate is ultimately over which set of intuitions is more plausible, in which case we should not be surprised by the lack of a clear victor.
(Free Will: Sourcehood and Its Alternatives, p.67)

See Kevin Timpe on I-Phi

Reading Paul Russell on Free Will and Hume

Freedom and Moral Sentiment

Paul Russell has provided a new interpretation of David Hume’s naturalism and moral sentiments and their connection to the reactive attitudes of Peter Strawson.
In his discussion of freedom, Russell offers a concise statement of the standard two-part argument against free will.

…the well-known dilemma of determinism. One horn of this dilemma is the argument that if an action was caused or necessitated, then it could not have been done freely, and hence the agent is not responsible for it. The other horn is the argument that if the action was not caused, then it is inexplicable and random, and thus it cannot be attributed to the agent, and hence, again, the agent cannot be responsible for it. In other words, if our actions are caused, then we cannot be responsible for them; if they are not caused, we cannot be responsible for them. Whether we affirm or deny necessity and determinism, it is impossible to make any coherent sense of moral freedom and responsibility.
(Freedom and Moral Sentiment: Hume’s Way of Naturalizing Responsibility, 1995, p.14)
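The two horns combine into a simple case analysis; here is a minimal formal sketch (propositional labels ours; the case split on whether the action is caused uses classical logic):

```lean
-- The dilemma of determinism as a case analysis (our formalization).
-- C : the action was caused; R : the agent is responsible for it.
-- horn1 : if caused, not responsible.  horn2 : if uncaused, not responsible.
example (C R : Prop) (horn1 : C → ¬R) (horn2 : ¬C → ¬R) : ¬R :=
  Classical.byCases (fun hc : C => horn1 hc) (fun hnc : ¬C => horn2 hnc)
```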

But then Russell attempts to reconcile some chance with otherwise determined actions. His suggestion is very close to a resolution of the Randomness Objection, but we suggest that he should move randomness back into the alternative possibilities and allow both will and action to be adequately determined. Then “will” as an act of determination agrees better with the common sense use of the term.

The success or force of the antilibertarian argument, it seems, depends very largely on a particular interpretation of the libertarian position. Contrary to what compatibilists generally suppose, liberty of indifference and liberty of spontaneity may not be incompatible with each other. What, then, is the alternative interpretation to be considered? According to the antilibertarian argument (on the classical interpretation), if actions were not caused, then it would be unreasonable to attribute them to the agent or hold the agent responsible for them. The target here is liberty of indifference interpreted, on this account, as the view that our actions are uncaused. However, it may be argued that this is not the only position which is available to libertarians or defenders of “free will”. They may locate the requisite “break in the causal chain” elsewhere. It is important to distinguish between the following two types of liberty of indifference: a notion of liberty of indifference which suggests that actions are not caused or determined by antecedent conditions and a notion of liberty of indifference which suggests that our willings are not caused or determined by antecedent conditions (our willings being understood as the causal antecedents of action). For convenience, let us call the first liberty of indifference in acting (LIA) and the second liberty of indifference in willing (LIW). Both of these notions of liberty of indifference are vulnerable to well-known objections, but LIA is open to some objections to which the LIW is not liable. The libertarian may seek to evade the antilibertarian argument by conceding that our actions must be caused by our antecedent willings, thereby rejecting LIA, but refuse to abandon or reject LIW. By rejecting LIA, the defender of “free will” can avoid the main thrust of the antilibertarian argument, namely, that liberty of indifference would render actions random and capricious and would make it impossible to attribute such actions to the agent. Those who accept LIW may, quite consistently, maintain that free action is determined by the antecedent willings of the agent and thus reject any suggestion that they licence random events at the level of action. Any randomness that LIW permits (assuming, as we do, that any alternative metaphysical conception of causation is excluded) occurs only at the level of the determination of the will.

The presence of random events at the level of willing will not prevent an agent from enjoying liberty of spontaneity. Such an agent may well be able to act in accordance with the determinations of her (capricious) will. Nor would it be impossible to attribute actions to such an agent, because it would be her (capricious) motives, desires, and so on, which caused them. Clearly, then, liberty of indifference, interpreted in terms of LIW, is compatible with, and thus need not exclude, liberty of spontaneity. It is true that the actions of an agent who enjoys LIW will be quite unpredictable, and it is also true that her future actions will not be amenable to the conditioning influences of punishments and rewards. In this way, LIW is still liable to other serious criticisms (especially if one interprets responsibility in terms of amenability to the conditioning influence of rewards and punishments). However, the actions of an agent who enjoys LIW share much with those of an agent whose will is necessitated by (external) antecedent causes. Liberty of spontaneity does not require that agents be able to determine their own wills, and it therefore makes little difference, on the face of it, whether our wills are determined by external causes or are merely capricious. In this way, it may be argued that the (classical) antilibertarian argument is not straightforwardly effective against the libertarian position when the notion of liberty of indifference is interpreted in terms of LIW rather than LIA.

It is, perhaps, tempting to suggest that the significance of these observations lies with the fact that they reveal certain limitations of the antilibertarian argument and that they may, therefore, open up new avenues of defence for the libertarian position. I believe that the real interest and significance of these observations lies elsewhere. What they bring to light are certain serious inadequacies in the spontaneity argument. An agent who enjoys LIW may also enjoy liberty of spontaneity, and this is a point that many defenders of classical compatibilism may find rather awkward and embarrassing. It follows from the fact that liberty of spontaneity is compatible or consistent with LIW that we may reasonably hold an individual responsible for actions caused by her capricious, random willings. Clearly, then, there is, in these circumstances, as much, or as little, reason to hold an agent responsible for actions due to a capricious will as there is to hold an agent responsible for actions that are due to a will that is conditioned by antecedent external causes. Both agents may equally enjoy liberty of spontaneity. If we have reason to conclude that LIW constitutes an inadequate foundation for freedom and responsibility, then surely we must also conclude that there is more to freedom and responsibility than liberty of spontaneity. In short, compatibilists must either concede that agents whose actions are due to LIW are nevertheless free and responsible or else acknowledge that the spontaneity argument provides us with an inadequate and incomplete account of freedom and responsibility.
(Freedom and Moral Sentiment, p.18)

See Paul Russell on I-Phi

Reading Robert Nozick

In his Philosophical Explanations, 1981, Robert Nozick sketched a view of how free will is possible, how without causal determination of action a person could have acted differently yet nevertheless does not act at random or arbitrarily. (He admits the picture is somewhat cloudy.)

Philosophical Explanations

Despite approaching the problem from several different directions, he found it so intractable, so resistant to illuminating solution, that he was forced to conclude “No one of the approaches turns out to be fully satisfactory, nor indeed do all together.”

Nozick admits that “Over the years I have spent more time thinking about the problem of free will — it felt like banging my head against it — than about any other philosophical topic except perhaps the foundations of ethics.”

Nozick introduces quantum mechanics to consider an analogy with the weighting of reasons for a decision. He does not, however, claim any applicability to the decision process or free will, since that alone would make the decision merely random.

Is this conception of decision as bestowing weights coherent? It may help to compare it to the currently orthodox interpretation of quantum mechanics. The purpose of this comparison is not to derive free will from quantum mechanics or to use physical theory to prove free will exists, or even to say that nondeterminism at the quantum level leaves room for free will. Rather, we wish to see whether quantum theory provides an analogue, whether it presents structural possibilities which if instanced at the macro-level of action — this is not implied by micro-quantum theory — would fit the situation we have described. According to the currently orthodox quantum mechanical theory of measurement, as specified by John von Neumann, a quantum mechanical system is in a superposition of states, a probability mixture of states, which changes continuously in accordance with the quantum mechanical equations of motion, and which changes discontinuously via a measurement or observation. Such a measurement “collapses the wave packet”, reducing the superposition to a particular state; which state the superposition will reduce to is not predictable. Analogously, a person before decision has reasons without fixed weights; he is in a superposition of (precise) weights, perhaps within certain limits, or a mixed state (which need not be a superposition with fixed probabilities). The process of decision reduces the superposition to one state (or to a set of states corresponding to a comparative ranking of reasons), but it is not predictable or determined to which state of the weights the decision (analogous to a measurement) will reduce the superposition. (Let us leave aside von Neumann’s subtle analysis, in Chapter 6, of how any placing of the “cut” between observer and observed is consistent with his account.) Our point is not to endorse the orthodox account as a correct account of quantum mechanics, only to draw upon its theoretical structure to show our conception of decision is a coherent one. Decision fixes the weights of reasons; it reduces the previously obtaining mixed state or superposition. However, it does not do so at random.
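As a structural illustration only (our toy code; Nozick draws an analogy, not a mechanism), one can mimic a “superposition” of candidate weightings that a decision reduces to one fixed weighting, after which the action follows deterministically:

```python
import random

# Candidate weightings of three reasons -- a "mixture" of weight-vectors
# standing in for Nozick's superposition of (precise) weights.
candidate_weights = [
    {"duty": 0.6, "pleasure": 0.1, "prudence": 0.3},
    {"duty": 0.2, "pleasure": 0.5, "prudence": 0.3},
    {"duty": 0.3, "pleasure": 0.3, "prudence": 0.4},
]

def decide(candidates, rng=random):
    """The 'measurement': unpredictably reduce the mixture to one weighting."""
    return rng.choice(candidates)

def act(weights, reasons_for):
    """Once the weights are fixed, the action follows deterministically."""
    return max(reasons_for, key=lambda action: sum(
        weights[r] * strength for r, strength in reasons_for[action].items()))

reasons_for = {
    "keep promise":  {"duty": 1.0, "pleasure": 0.0, "prudence": 0.5},
    "break promise": {"duty": 0.0, "pleasure": 1.0, "prudence": 0.2},
}

w = decide(candidate_weights)   # which weighting gets fixed is not predictable
print(w, "->", act(w, reasons_for))
```

The unpredictability is confined to which weighting gets fixed; once `decide` has “collapsed” the mixture, `act` is fully determined by the fixed weights.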

See Robert Nozick on I-Phi

Reading Colin McGinn

In June I visited Jason Anthony Pannone, the librarian at the Harvard Philosophy Department’s Robbins Library. Jason offers individual and group tutorials on locating materials in Harvard libraries. He gave me a number of tips on using the extensive Philosophy Resources at Harvard.

I was carrying Ted Honderich’s huge Theory of Determinism, and Jason mentioned a recent post on his Robbins Library Notes blog about the spat between Honderich and Cornell philosopher Colin McGinn.

The Making of a Philosopher, by Colin McGinn

I picked up a copy of McGinn’s popular “The Making of a Philosopher,” which follows him from his undergraduate degree in psychology and an interest in Continental philosophy, especially Edmund Husserl, to his years at the center of Anglo-American Analytic (AAA) philosophy, from winning the John Locke Prize at Oxford to becoming a professor in the U.S. at Rutgers.

McGinn contrasts the perennial static condition of philosophy with the dynamic growth of science, with a wry comment on older scientists (like me) who venture into philosophy.

“I venture to suggest that philosophers tend on the whole to be persons of considerable intelligence, many of them highly competent at science, and endowed with excellent thinking skills. It’s not that if you let some real scientists loose on philosophical problems they would have all the answers for you in a matter of days. In fact, when scientists, particularly distinguished ones, try their hand at philosophy—usually after they have retired—the results are often quite inept, risibly so. So what is it that makes philosophy so hard? Why do we still have no proof that there is an external world or that there are minds other than our own? Why is freedom of the will still so hotly debated? Why do we have so much trouble figuring out what kind of thing the self is? Why is the relation between consciousness and the brain so exasperatingly hard to pin down?” (p.200)

McGinn describes three views of philosophy.

  • the traditional Platonic view of philosophy as the elevated and profound study of ultimate reality, the soul, etc.
  • the analytic view, associated with the later Wittgenstein and the Ordinary Language philosophers, that philosophical problems are a bunch of meaningless pseudo-questions. ‘Thus we never normally say “the will is free” or “human actions are determined by law and causality” in ordinary speech, so these sentences are ipso facto under suspicion of meaninglessness.’ (p.202)
  • the view that philosophy is just immature science. ‘Here the idea is that what we now call philosophy is just the residue of problems left over as science has eaten up more and more of what used to be called philosophy.’ (p.203)

McGinn’s own view, dubbed “mysterian” by Owen Flanagan, is that our human intelligence, our epistemic apparatus, is just “not cut out for the job.”

“Perhaps, then, that is the explanation of philosophical intractability more broadly; philosophical problems are of a kind that does not suit the particular way we form knowledge of the world. The question then is what it is about the problems and our intelligence that makes the latter unsuited to the former.” (p.204)

Problems in Philosophy, by Colin McGinn

McGinn developed his mysterian ideas in his 1993 book Problems in Philosophy: The Limits of Inquiry. Then he wrote a popular treatment in The Mysterious Flame (1999).

“The central conjecture of my book is that there is a certain cognitive structure that shapes our knowledge of the world, and that this structure is inappropriate when it comes to key philosophical problems. I call this the CALM conjecture, short for Combinatorial Atomism with Lawlike Mappings. Roughly speaking, you understand something when you know what parts it has and how they are put together, as well as how the whole changes over time; then you have rendered the phenomenon in question–CALM.” (p.206)

“Natural entities are basically complex systems of interacting parts that evolve over time as a result of various causal influences. This is obviously true of inanimate physical objects, which are spatial complexes made of molecules and atoms and quarks, and subject to the physical forces of nature. But it is also true of biological organisms, in which now the parts include kidneys, hearts, lungs, and the cells that compose these. The same abstract architecture applies to language also: Sentences are complexes of simpler elements (words and phrases) put together according to grammatical rules. Mathematical entities such as triangles, equations, and numbers are also complexes decomposable into simpler elements. In all these cases we can appropriately bring to bear the CALM method of thinking: We conceptualize the entities in question by resolving them into parts and articulating their mode of arrangement.”

“Find the atoms and the laws of combination and evolution, and then derive the myriad of complex objects you find in nature. If incomprehension is a state of anxiety or chaos, then CALM is what brings calm. Question: Does CALM work in philosophy?”

“In Problems in Philosophy I argue that …[t]here are yawning gaps between [some] phenomena and the more basic phenomena they proceed from, so that we cannot apply the CALM format to bring sense to what we observe. The essence of a philosophical problem is the unexplained leap, the step from one thing to another without any conception of the bridge that supports the step. For example, a free decision involves a transition from a set of beliefs and desires to a particular choice; but this choice is not dictated by what precedes it—hence it seems like an unmediated leap. The choice, that is, cannot be accounted for simply in terms of the beliefs and desires that form the input to it, just as conscious states cannot be accounted for in terms of the neural processes they emanate from. In both cases we seem to be presented with something radically novel, issuing from nowhere, as if a new act of creation were necessary to bring it into being. And this is the mark of our lack of understanding. The existence of animal life seems like an eruption from nowhere (or an act of God) until we understand the process of evolution by natural selection; we can then begin to see how the transitions operate, from the simple to the more complex. But in philosophy we typically lack the right kind of explanatory theory, and hence find ourselves deeply puzzled by how the world is working.” (p.209)

“This message is not very congenial to the optimistic philosopher who wants to solve the deep problems that brought him or her to philosophy. For I am saying that this is a futile aim; my book could equally have been called The Futility of Philosophy…” (p.210)

On the contrary, I found McGinn’s CALM methodology quite congenial for understanding the classic “free-will” problem, largely because of its strong analogy with the process of evolution by gene variation and natural selection, which works for McGinn as an explanatory theory.

For what is Freedom but chance “combinatorial atoms,” possible thoughts or actions that can be combined in new ways as “alternative possibilities”?

And what is Will but the choice of an adequately determined mind acting in accordance with its character and values, making “lawlike mappings” of those fortuitous opportunities?

Going back to McGinn’s youthful training in ordinary language philosophy, which claimed that the confusion was all the result of misuse of language, we can ask: what is the ordinary use of “free will”?

As John Locke so clearly told us long ago, it is inappropriate to describe the Will itself as Free. The Will is a Determination. It is the Man who is Free. “I think the question is not proper, whether the will be free, but whether a man be free.” “This way of talking, nevertheless, has prevailed, and, as I guess, produced great confusion.” (Essay Concerning Human Understanding, Book II, Chapter XXI, Of Power, section 24)

In our Cogito model, “Free Will” combines two distinct conceptual “atoms.” Free is the chance and randomness of the Micro Mind. Will is the adequate determinism of the Macro Mind. And these occur in a temporal sequence.
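
A toy sketch of this temporal sequence may help (this is my own illustration with hypothetical names, a schematic of the Cogito model rather than a claim about how minds actually compute):

```python
import random

def micro_mind(possibilities, k=3):
    """Stage one, "Free": chance generates alternative possibilities.
    Randomness here simply samples k candidate thoughts or actions."""
    return random.sample(possibilities, k)

def macro_mind(candidates, value_of):
    """Stage two, "Will": an adequately determined choice. Given the
    same candidates and the same values, it always selects the same
    option: a lawlike mapping from alternatives to action."""
    return max(candidates, key=value_of)

# Hypothetical usage: value_of stands in for character and values.
possibilities = ["walk", "read", "write", "phone a friend", "nap"]
candidates = micro_mind(possibilities)         # free: random variation
choice = macro_mind(candidates, value_of=len)  # willed: determined selection
print(candidates, "->", choice)
```

The randomness is confined to the generation of alternatives; the selection itself is as lawlike as any compatibilist could wish.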

Compatibilists and Determinists are right about the Will, but wrong about Freedom.

Libertarians are right about Freedom, but wrong about the Will.

McGinn’s career as a professional philosopher turned on such a moment of freedom. He was awarded the prestigious and remunerative (£150) John Locke Prize at Oxford. How he came to get it was a semi-causal chain, started by a free action we should see as causa sui.

McGinn attended A. J. Ayer’s class, in which a single book would be elucidated in the course of the term.

“At the first session Ayer asked who had read the book to be discussed — The Nature of Things by Anthony Quinton. I happened to have just finished reading it, so I raised my hand; to my surprise no other hand went up, and a cold shiver went through me as Ayer fixed me with a beady eye.” (p.78)

“The session continued, with Ayer giving the first presentation of the term, followed by what seemed to me like a very high-powered discussion, to which I did not even think of making a contribution. At the end of the class, however, Professor Ayer suddenly fixed his gaze on me, hunched at the back of the room, and announced, “The man at the back can pay for his virtue and give the presentation on chapter two for next week.” He didn’t even wait to see if I was agreeable to this brilliant suggestion. That was it: me, next week.”

“I duly found myself in front of about forty clever people, ready to find fault with whatever I had to say. I read my essay aloud, staring self-protectively down at the page. When I had finished, I looked up, as red as a beetroot, with very clammy palms (which I always get when I am nervous), and Professor Ayer said, matter-of-factly, “Very good.”

“In order to improve my chances on the B.Phil I decided to enter for a voluntary examination called the John Locke Prize. This examination is for people aiming to win the prize of one hundred and fifty pounds, along with the prestige that goes with it. Traditionally, the brightest philosophy postgraduate at Oxford wins the prize…The examiners that year, 1972, for the John Locke Prize were Professor Ayer, Professor Hare (who had let me onto the B.Phil), and Brian Farrell, the Wilde Reader in Mental Philosophy—a fairly formidable crew.” (p.82)

“I turned up in ‘subfusc’ for the first examination: white bowtie and shirt, dark suit, black shoes, cap and gown (this was compulsory: anything missing and you would be denied entry to the examination hall). I buckled down to the questions, writing about logical form, the coherence theory of truth, the nature of necessity, personal identity, the analysis of knowledge, and so on.”

“About a week later Professor Ayer informed me that my handwriting was so bad that I would need to have my papers typed by a professional typist in the presence of an invigilator to make sure I hadn’t cheated. Moreover, I would personally have to pay for this to be done. I expressed my misgivings, saying I had not acquitted myself at all well, and worried about the enormous expense of about fifty pounds that this was going to cost me. Ayer replied that I, or it, was “worth it,” so I reluctantly agreed—and anyway, you didn’t not do as Sir Alfred told you to do. I accordingly read my atrociously written papers aloud to a bored typist in the presence of an equally bored invigilator, who awoke to take exception to my inelegant use of the phrase “chunk of reality,” wincing all the way. I really must improve my handwriting, I thought. (Even today my writing is a miracle of illegibility.)”

“Then a week or so later, as I was sitting down for one of Kripke’s John Locke lectures, Professor Ayer conspicuously approached me in front of about five hundred people, clapped me on the back, and told me I had won the John Locke Prize—and by a wide margin.”

Can you see the causal chain? Ayer picks out the one raised hand to lead the next discussion. He says “Very good.” He asks McGinn to invest £50 in a typed manuscript. McGinn, or it, is “worth it.” He claps McGinn’s back at the John Locke lecture.

Can you see the alternative possibilities? What if four students had raised their hands? Ayer might have selected the one closest to him to start next week.

Can you see the free action? McGinn raised his hand! There might have been others who had read Quinton but did not raise their hands. They did otherwise, as McGinn could have done otherwise.

McGinn’s reaction says it all.

“I wonder now what would have happened to me if Ayer had never asked me to have my papers typed (a highly unusual step, in fact), or if I had walked out when I felt like it or if I had just not sat for the John Locke Prize at all. Things would undoubtedly have been very different, and even now I feel a cold sweat at the alternative possibilities. Life and chance, chance and life.” (p.85)

I went back to read more of McGinn’s technical discussion of the free will problem in his Problems in Philosophy, and wrote him up for the Information Philosopher.

See Colin McGinn on I-Phi

Reading G. E. M. Anscombe

Elizabeth Anscombe was a student of Ludwig Wittgenstein and later served, with G. H. von Wright and Rush Rhees, as an executor of his papers and as an editor of his Philosophical Investigations. Her Inaugural Lecture as Professor of Philosophy at Cambridge University in 1971 was entitled “Causality and Determination.” She explained that we have no empirical grounds for believing in a determinism that is logically necessary, or even in the physical determinism that appears to be required by natural laws like Newton’s.

The high success of Newton’s astronomy was in one way an intellectual disaster: it produced an illusion from which we tend still to suffer. This illusion was created by the circumstance that Newton’s mechanics had a good model in the solar system. For this gave the impression that we had here an ideal of scientific explanation; whereas the truth was, it was mere obligingness on the part of the solar system, by having had so peaceful a history in recorded time, to provide such a model. (p.20)

She asks…

Must a physicist be a ‘determinist’? That is, must he believe that the whole universe is a system such that, if its total states at t and t’ are thus and so, the laws of nature are such as then to allow only one possibility for its total state at any other time? No.

Anscombe is familiar with developments in quantum physics. She notes that Max Born dissociated causality from determinism. And she mentions Richard Feynman’s suggestion (following Arthur Holly Compton) of a Geiger counter whose firing might be connected to a bomb. “There would be no doubt of the cause of the explosion if the bomb did go off,” she says. So there can be causality without determinism. (p.24)
She notes that C. D. Broad, in his 1934 inaugural lecture, had considered indeterminism, but he had added that whatever happened without being determined was “accidental.”

He did not explain what he meant by being accidental; he must have meant more than not being necessary. He may have meant being uncaused; but, if I am right, not being determined does not imply not being caused. Indeed, I should explain indeterminism as the thesis that not all physical effects are necessitated by their causes. But if we think of Feynman’s bomb, we get some idea of what is meant by “accidental”. It was random: it ‘merely happened’ that the radioactive material emitted particles in such a way as to activate the Geiger counter enough to set off the bomb. Certainly the motion of the Geiger counter’s needle is caused; and the actual emission is caused too: it occurs because there is this mass of radioactive material here. (I have already indicated that, contrary to the opinion of Hume, there are many different sorts of causality.) But all the same the causation itself is, one could say, mere hap. It is difficult to explain this idea any further. (p.25)
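
A toy simulation of the bomb example (my illustration, not Anscombe’s or Feynman’s) makes the point concrete: each step in the chain is caused by the one before, yet nothing determines in advance whether the bomb goes off, because the first step is mere hap:

```python
import random

def emission(decay_prob=0.5):
    """Whether the radioactive material emits particles in this interval
    is undetermined; random() stands in for quantum indeterminacy."""
    return random.random() < decay_prob

def counter_fires(particle_emitted):
    """The Geiger counter fires if and only if particles arrive:
    fully caused by the emission, with nothing random here."""
    return particle_emitted

def bomb_explodes(counter_fired):
    """The bomb explodes if and only if the counter fires:
    again fully caused, with nothing random here."""
    return counter_fired

exploded = bomb_explodes(counter_fires(emission()))
# If it exploded, there is no doubt about the cause of the explosion,
# yet no prior state of the world necessitated that it explode.
print("Boom!" if exploded else "Silence.")
```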

Indeed it is. We wish that Anscombe had tried. But she goes on to say that Broad assumed our actions would therefore be randomly caused. Apparently aware that randomness as a cause of action had been criticized since antiquity, she calls Broad naive.

Broad used the idea to argue that indeterminism, if applied to human action, meant that human actions are ‘accidental’. Now he had a picture of choices as being determining causes, analogous to determining physical causes, and of choices in their turn being either determined or accidental. To regard a choice as such – i.e. any case of choice – as a predetermining causal event, now appears as a naif mistake in the philosophy of mind, though that is a story I cannot tell here.

Again, we could hope she would have told us more.

Anscombe recounts the severe criticism of scientists’ suggestions that indeterminism could account for human freedom.

It was natural that when physics went indeterministic, some thinkers should have seized on this indeterminism as being just what was wanted for defending the freedom of the will. They received severe criticism on two counts: one, that this ‘mere hap’ is the very last thing to be invoked as the physical correlate of ‘man’s ethical behaviour’; the other, that quantum laws predict statistics of events when situations are repeated; interference with these, by the will’s determining individual events which the laws of nature leave undetermined, would be as much a violation of natural law as would have been interference which falsified a deterministic mechanical law. (p.25)

Ever since Kant it has been a familiar claim among philosophers, that one can believe in both physical determinism and ‘ethical’ freedom. The reconciliations have always seemed to me to be either so much gobbledegook, or to make the alleged freedom of action quite unreal. My actions are mostly physical movements; if these physical movements are physically predetermined by processes which I do not control, then my freedom is perfectly illusory. The truth of physical indeterminism is then indispensable if we are to make anything of the claim to freedom. But certainly it is insufficient. The physically undetermined is not thereby ‘free’. For freedom at least involves the power of acting according to an idea, and no such thing is ascribed to whatever is the subject (what would be the relevant subject?) of unpredetermination in indeterministic physics. (p.26)

Nevertheless, Anscombe is surprised that indeterministic physics has had so little effect on the thinking of philosophers of mind, who remain mostly determinists.

It has taken the inventions of indeterministic physics to shake the rather common dogmatic conviction that determinism is a presupposition, or perhaps a conclusion, of scientific knowledge. Not that that conviction has been very much shaken even so. Of course, the belief that the laws of nature are deterministic has been shaken. But I believe it has often been supposed that this makes little difference to the assumption of macroscopic determinism: as if undeterminedness were always encapsulated in systems whose internal workings could be described only by statistical laws, but where the total upshot, and in particular the outward effect, was as near as makes no difference always the same. What difference does it make, after all, that the scintillations, whereby my watch dial is luminous, follow only a statistical law – so long as the gross manifest effect is sufficiently guaranteed by the statistical law? Feynman’s example of the bomb and Geiger counter smashes this conception; but as far as I can judge it takes time for the lesson to be learned. I find deterministic assumptions more common now among people at large, and among philosophers, than when I was an undergraduate. (p.28)

See G. E. M. Anscombe on I-Phi