Reading Jacques Monod

Jacques Monod’s 1971 book Chance and Necessity was a landmark in the popular science literature for its unequivocal statement that the origin of life is purely a product of chance.

…chance alone is at the source of every innovation, of all creation in the biosphere. Pure chance, absolutely free but blind, at the very root of the stupendous edifice of evolution: this central concept of modern biology is no longer one among other possible or even conceivable hypotheses. It is today the sole conceivable hypothesis, the only one that squares with observed and tested fact. And nothing warrants the supposition — or the hope — that on this score our position is likely ever to be revised.
(Chance and Necessity, p. 112)

Monod correctly denies any teleological forces are needed to create life from inanimate matter, but he finds that teleonomic purposeful behavior is one of the fundamental characteristics of life, along with what he calls autonomous morphogenesis (life is “self-constructing”) and reproductive invariance (life is “self-replicating”).

Information philosophy agrees that with the emergence of life, information structures with purposes entered the universe.

But there must have been information-creating, ergodic processes at work before terrestrial life appeared. They created the informational substrate for life, in particular, the sun and the planetary environment hospitable to the origin of life on earth.

Monod says that some biologists have been unhappy with his idea of teleonomy, that living beings are endowed with a purpose or a project, but he says this is essential to the definition of living beings. His next criterion is autonomous morphogenesis. He says,

…a living being’s structure results from a … process … that owes almost nothing to the action of outside forces, but everything, from its overall shape down to its tiniest detail, to “morphogenetic” interactions within the object itself.

We now know this is only “adequate determinism.”

It is thus a structure giving proof of an autonomous determinism: precise, rigorous, implying a virtually total “freedom” with respect to outside agents or conditions — which are capable, to be sure, of impeding this development, but not of governing or guiding it, not of prescribing its organizational scheme to the living object. Through the autonomous and spontaneous character of the morphogenetic processes that build the macroscopic structure of living beings, the latter are absolutely distinct from artifacts, as they are, furthermore, from the majority of natural objects whose macroscopic morphology largely results from the influence of external agents.

Crystals are one of the few purely physical “ergodic” processes, reducing the entropy locally.

To this there is a single exception: that, once again, of crystals, whose characteristic geometry reflects microscopic interactions occurring within the object itself. Hence, utilizing this criterion alone, crystals would have to be classified together with living beings, while artifacts and natural objects, alike fashioned by outside agents, would comprise another class.
(Chance and Necessity, p. 10)

The quantum cooperative atomic phenomena that form crystals are of course the same as those that form the macromolecules of life (DNA, RNA, etc.).

Monod thinks there is an “internal, autonomous determinism” that “guarantees the formation of the extremely complex structures of living beings.” The “guarantee” cannot be perfect, as a result of statistical physics. Monod is fully aware of quantum indeterminacy. After discussing chance in terms of probability and games of chance, he says,

on the microscopic level there exists a further source of still more radical uncertainty, embedded in the quantum structure of matter. A mutation is in itself a microscopic event, a quantum event, to which the principle of uncertainty consequently applies. An event which is hence and by its very nature essentially unpredictable.

Monod identifies the key evolutionary process as the transmission of information from one living information structure to the next. Note that this is accomplished in the constant presence of thermal and quantal noise.


Such structures represent a considerable quantity of information whose source has still to be identified: for all expressed — and hence received — information presupposes a source. He says “the source of the information expressed in the structure of a living being is always another, structurally identical object.”

[Living beings have the] ability to produce and to transmit ne varietur the information corresponding to their own structure. A very rich body of information, since it describes an organizational scheme which, along with being exceedingly complex, is preserved intact from one generation to the next. The term we shall use to designate this property is invariant reproduction, or simply invariance. With their invariant reproduction we find living beings and crystalline structures once again sharing a property that renders them unlike all other known objects in the universe. Certain chemicals in supersaturated solution do not crystallize unless the solution has been inoculated with crystal seeds. We know as well that in cases of a chemical capable of crystallizing into two different systems, the structure of the crystals appearing in the solution will be determined by that of the seed employed.
(Chance and Necessity, p. 12)

Monod claims that the main distinction between crystals and living things is the quantity of information transmitted between the generations. He thus neglects the creativity inherent in the acquisition and transmission of knowledge by living things.

Crystalline structures, however, represent a quantity of information by several orders of magnitude inferior to that transmitted from one generation to another in the simplest living beings we are acquainted with. By this criterion — purely quantitative, be it noted — living beings may be distinguished from all other objects, crystals included.

In his major contribution toward an informational approach to biology, Monod goes on to make a quantitative estimate of what he calls the “teleonomic level” of a species, arranging them in a hierarchy based purely on information content. This is an important beginning for information-based biological science.

…since a structure’s degree of order can be defined in units of information, we shall say that the “invariance content” of a given species is equal to the amount of information which, transmitted from one generation to the next, assures the preservation of the specific structural standard. As we shall see later on, with the help of a few assumptions it will be possible to arrive at an estimate of this amount. That in turn will enable us to bring into better focus the notion most immediately and plainly inspired by the examination of the structures and performances of living beings, that of teleonomy. Analysis nevertheless reveals it to be a profoundly ambiguous concept, since it implies the subjective idea of “project.” [Consider] the example of the camera: if we agree that this object’s existence and structure realize the “project” of capturing images, we must also agree, obviously enough, that a similar project is accomplished with the emergence of the eye of a vertebrate.

But it is only as a part of a more comprehensive project that each individual project, whatever it may be, has any meaning. All the functional adaptations in living beings, like all the artifacts they produce, fulfill particular projects which may be seen as so many aspects or fragments of a unique primary project, which is the preservation and multiplication of the species.

To be more precise, we shall arbitrarily choose to define the essential teleonomic project as consisting in the transmission from generation to generation of the invariance content characteristic of the species. All the structures, all the performances, all the activities contributing to the success of the essential project will hence be called “teleonomic.”

This allows us to put forward at least the principle of a definition of a species’ “teleonomic level.” All teleonomic structures and performances can be regarded as corresponding to a certain quantity of information which must be transmitted for these structures to be realized and these performances accomplished. Let us call this quantity “teleonomic information.” A given species’ “teleonomic level” may then be said to correspond to the quantity of information which, on the average and per individual, must be transferred to assure the generation-to-generation transmission of the specific content of reproductive invariance.
(Chance and Necessity, pp. 13-14)

For François Jacob, who shared the Nobel Prize with Jacques Monod, teleonomy was a basic characteristic of every cell. Jacob said that the basic purpose and desire of every cell is to become two cells.


But Monod sees that his teleonomy appears to be in conflict with a basic tenet, the very cornerstone, of modern science.

The cornerstone of the scientific method is the postulate that nature is objective. In other words, the systematic denial that “true” knowledge can be got at by interpreting phenomena in terms of final causes – that is to say, of “purpose.” An exact date may be given for the discovery of this canon. The formulation by Galileo and Descartes of the principle of inertia laid the groundwork not only for mechanics but for the epistemology of modern science, by abolishing Aristotelian physics and cosmology. To be sure, neither reason, nor logic, nor observation, nor even the idea of their systematic confrontation had been ignored by Descartes’ predecessors. But science as we understand it today could not have been developed upon those foundations alone. It required the unbending stricture implicit in the postulate of objectivity — ironclad, pure, forever undemonstrable. For it is obviously impossible to imagine an experiment which could prove the nonexistence anywhere in nature of a purpose, of a pursued end. But the postulate of objectivity is consubstantial with science; it has guided the whole of its prodigious development for three centuries. There is no way to be rid of it, even tentatively or in a limited area, without departing from the domain of science itself.

Objectivity nevertheless obliges us to recognize the teleonomic character of living organisms, to admit that in their structure and performance they act projectively — realize and pursue a purpose. Here therefore, at least in appearance, lies a profound epistemological contradiction. In fact the central problem of biology lies with this very contradiction, which, if it is only apparent, must be resolved; or else proven to be utterly insoluble, if that should turn out indeed to be the case.
(Chance and Necessity, pp. 21-22)

Monod’s resolution of his “profound epistemological contradiction” is to make teleonomy secondary to – and a consequence of – reproductive invariance.

Since the teleonomic properties of living beings appear to challenge one of the basic postulates of the modern theory of knowledge, any philosophical, religious, or scientific view of the world must, ipso facto, offer an implicit if not an explicit solution to this problem. [T]he single hypothesis that modern science here deems acceptable: namely, that invariance necessarily precedes teleonomy. Or, to be more explicit: the Darwinian idea that the initial appearance, evolution, and steady refinement of ever more intensely teleonomic structures are due to perturbations occurring in a structure which already possesses the property of invariance — hence is capable of preserving the effects of chance and thereby submitting them to the play of natural selection.

Ranking teleonomy as a secondary property deriving from invariance — alone seen as primary — the selective theory is the only one so far proposed that is consistent with the postulate of objectivity. It is at the same time the only one not merely compatible with modern physics but based squarely upon it, without restrictions or additions. In short, the selective theory of evolution assures the epistemological coherence of biology and gives it its place among the sciences of “objective nature.”
(Chance and Necessity, pp. 23-24)

Monod summarizes the history of philosophy more or less as we do (and as Karl Popper does), along the lines of the great division, or dualism, between idealists and materialists.

We see the distinction as between those who think information is an invariant and those who see it as constantly increasing. Monod’s focus on reproductive invariance may prevent him from seeing the importance of novelty and the creation of new information.

Ever since its birth in the Ionian Islands almost three thousand years ago, Western philosophy has been divided between two seemingly opposed attitudes. According to one of them the authentic and ultimate truth of the world can reside only in perfectly immutable forms, by essence unvarying. According to the other, the only real truth resides in flux and evolution. From Plato to Whitehead and from Heraclitus to Hegel and Marx, it is clear that these metaphysical epistemologies were always closely bound up with their authors’ ethical and political biases. These ideological edifices, represented as self-evident to reason, were actually a posteriori constructions designed to justify preconceived ethico-political theories.
(Chance and Necessity, p. 99)

Monod on Knowledge and Value

Like many scientists, Monod regards the open search for knowledge and truth as of intrinsic value. Can he go on to make knowledge itself a value in the objective world of “value-free” science? Monod seeks an “ethic of knowledge.”

Must one adopt the position once and for all that objective truth and the theory of values constitute eternally separate, mutually impenetrable domains? This is the attitude taken by a great number of modern thinkers, whether writers, or philosophers, or indeed scientists. For the vast majority of men, whose anxiety it can only perpetuate and worsen, this attitude I believe will not do; I also believe it is absolutely mistaken, and for two essential reasons. First, and obviously, because values and knowledge are always and necessarily associated in action just as in discourse.

Second, and above all, because the very definition of “true” knowledge reposes in the final analysis upon an ethical postulate.

Each of these two points demands some brief clarification.

Ethics and knowledge are inevitably linked in and through action. Action brings knowledge and values simultaneously into play, or into question. All action signifies an ethic, serves or disserves certain values; or constitutes a choice of values, or pretends to. On the other hand, knowledge is necessarily implied in all action, while reciprocally, action is one of the two necessary sources of knowledge.

The moment one makes objectivity the conditio sine qua non of true knowledge, a radical distinction, indispensable to the very search for truth, is established between the domains of ethics and of knowledge. Knowledge in itself is exclusive of all value judgment (all save that of “epistemological value”) whereas ethics, in essence nonobjective, is forever barred from the sphere of knowledge.

The postulate of objectivity…prohibits any confusion of value judgments with judgments arrived at through knowledge. Yet the fact remains that these two categories inevitably unite in the form of action, discourse included. In order to abide by our principle we shall therefore take the position that no discourse or action is to be considered meaningful, authentic unless — or only insofar as — it makes explicit and preserves the distinction between the two categories it combines. Thus defined, the concept of authenticity becomes the common ground where ethics and knowledge meet again; where values and truth, associated but not interchangeable, reveal their full significance to the attentive man alive to their resonance.

In an objective system…any mingling of knowledge with values is unlawful, forbidden. But — and here is the crucial point, the logical link which at their core weds knowledge and values together — this prohibition, this “first commandment” which ensures the foundation of objective knowledge, is not itself objective. It cannot be objective: it is an ethical guideline, a rule for conduct. True knowledge is ignorant of values, but it cannot be grounded elsewhere than upon a value judgment, or rather upon an axiomatic value. It is obvious that the positing of the principle of objectivity as the condition of true knowledge constitutes an ethical choice and not a judgment arrived at from knowledge, since, according to the postulate’s own terms, there cannot have been any “true” knowledge prior to this arbitral choice. In order to establish the norm for knowledge the objectivity principle defines a value: that value is objective knowledge itself. Thus, assenting to the principle of objectivity one announces one’s adherence to the basic statement of an ethical system, one asserts the ethic of knowledge.

By the very loftiness of its ambition the ethic of knowledge might perhaps satisfy this urge in man to project toward something higher. It sets forth a transcendent value, true knowledge, and invites him not to use it self-servingly but henceforth to enter into its service from deliberate and conscious choice. At the same time it is also a humanism, for in man it respects the creator and repository of that transcendence.

The ethic of knowledge is also in a sense “knowledge of ethics,” a clear-sighted appreciation of the urges and passions, the requirements and limitations of the biological being. It is able to confront the animal in man, to view him not as absurd but strange, precious in his very strangeness: the creature who, belonging simultaneously to the animal kingdom and the kingdom of ideas, is simultaneously torn and enriched by this agonizing duality, alike expressed in art and poetry and in human love.

Conversely, the animist systems have to one degree or another preferred to ignore, to denigrate or bully biological man, and to instill in him an abhorrence or terror of certain traits inherent in his animal nature. The ethic of knowledge, on the other hand, encourages him to honor and assume this heritage, knowing the while how to dominate it when necessary. As for the highest human qualities, courage, altruism, generosity, creative ambition, the ethic of knowledge both recognizes their sociobiological origin and affirms their transcendent value in the service of the ideal it defines.
(Chance and Necessity, pp. 173-179)

Monod’s Historical Error on Chance and Necessity

Monod took the title of his work from a statement by Democritus that he imagined or misremembered (an example of the Cogito Model for human creativity). He opens his book with this quotation:

Everything existing in the Universe is the fruit of chance and necessity.
(Democritus)

Unfortunately, Democritus made no such statement. As the founder of determinism, he and his mentor Leucippus were adamantly opposed to chance or randomness. Leucippus insisted on an absolute necessity which leaves no room in the cosmos for chance.

“Nothing occurs at random (matēn), but everything for a reason (logos) and by necessity.”
οὐδὲν χρῆμα μάτην γίνεται, ἀλλὰ πάντα ἐκ λόγου τε καὶ ὑπ’ ἀνάγκης

See Jacques Monod on I-Phi

Reading Peter van Inwagen

Peter van Inwagen made a significant reputation for himself by bucking the trend among philosophers in most of the twentieth century to accept compatibilism, the idea that free will is compatible with a strict causal determinism. Indeed, van Inwagen has been given credit for rehabilitating the idea of incompatibilism in the last few decades. He explains that the old problem of whether we have free will or whether determinism is true is no longer being debated. In the first chapter of his landmark 1983 book, An Essay on Free Will, van Inwagen says:

1.2 It is difficult to formulate “the problem of free will and determinism” in a way that will satisfy everyone. Once one might have said that the problem of free will and determinism — in those days one would have said ‘liberty and necessity’ — was the problem of discovering whether the human will is free or whether its productions are governed by strict causal necessity. But no one today would be allowed to formulate “the problem of free will and determinism” like that, for this formulation presupposes the truth of a certain thesis about the conceptual relation of free will to determinism that many, perhaps most, present-day philosophers would reject: that free will and determinism are incompatible. Indeed many philosophers hold not only that free will is compatible with determinism but that free will entails determinism. I think it would be fair to say that almost all the philosophical writing on the problem of free will and determinism since the time of Hobbes that is any good, that is of any enduring philosophical interest, has been about this presupposition of the earlier debates about liberty and necessity. It is for this reason that nowadays one must accept as a fait accompli that the problem of finding out whether free will and determinism are compatible is a large part, perhaps the major part, of “the problem of free will and determinism”.
(Essay on Free Will, p. 1)

Unfortunately for philosophy, the concept of incompatibilism is very confusing. It contains two opposing concepts, libertarian free will and hard determinism.

And like determinism versus indeterminism, compatibilism versus incompatibilism is a false and unhelpful dichotomy. J. J. C. Smart once claimed he had an exhaustive description of the possibilities, determinism or indeterminism, and that neither one allowed for free will. (Since Smart, dozens of others have repeated this standard logical argument against free will.)

Van Inwagen has replaced the traditional “horns” of the dilemma of determinism – “liberty” and “necessity” – and now divides the problem further:

I shall attempt to formulate the problem in a way that takes account of this fait accompli by dividing the problem into two problems, which I will call the Compatibility Problem and the Traditional Problem. The Traditional Problem is, of course, the problem of finding out whether we have free will or whether determinism is true. But the very existence of the Traditional Problem depends upon the correct solution to the Compatibility Problem: if free will and determinism are compatible, and, a fortiori, if free will entails determinism, then there is no Traditional Problem, any more than there is a problem about how my sentences can be composed of both English words and Roman letters.
(Essay on Free Will, p. 2)

Van Inwagen defines determinism very simply. “Determinism is quite simply the thesis that the past determines a unique future.” (p. 2) He concludes that such a Determinism is not true, because we could not then be responsible for our actions, which would all be simply the consequences of events in the distant past that were not “up to us.” This approach, known as van Inwagen’s Consequence Argument, is the perennial Determinism Objection in the standard argument against free will.

Note that in recent decades the debates about free will have been largely replaced by debates about moral responsibility. Since Peter Strawson, many philosophers have claimed to be agnostic on the traditional problem of free will and determinism and focus on whether the concept of moral responsibility itself exists. Some say that, like free will itself, moral responsibility is an illusion. Van Inwagen is not one of those. He hopes to establish free will.
Van Inwagen also notes that quantum mechanics shows indeterminism to be “true.” He is correct. But we still have a very powerful and “adequate” determinism. It is this adequate determinism that R. E. Hobart and others have recognized we need when they say that “Free Will Involves Determination and is Inconceivable Without It.” Our will and actions are determined. It is the future alternative possibilities in our thoughts that are undetermined. Sadly, many philosophers mistake indeterminism to imply that nothing is causal and therefore that everything is completely random. This is the Randomness Objection in the standard argument.

Van Inwagen states his Consequence Argument as follows:

“If determinism is true, then our acts are the consequences of the laws of nature and events in the remote past. But it is not up to us what went on before we were born, and neither is it up to us what the laws of nature are. Therefore, the consequences of these things (including our present acts) are not up to us.” (Essay on Free Will, 1983, p. 16)

Exactly how this differs from the arguments of centuries of libertarians is not clear, but van Inwagen is given a great deal of credit in the contemporary literature for this obvious argument. See, for example, Carl Ginet’s article “Might We Have No Choice?” in Freedom and Determinism, ed. K. Lehrer, 1966. We note that apparently Ginet also thought his argument was original. What has happened to philosophers today that they so ignore the history of philosophy?

Van Inwagen offers several concise observations leading up to his Consequence Argument, including concerns about the terminology used (which concerns arise largely because of his variations on the traditional problem terminology).

Determinism may now be defined: it is the thesis that there is at any instant exactly one physically possible future. Let us now see what can be done about defining free will. I use the term ‘free will’ out of respect for tradition.

When I say of a man that he “has free will” I mean that very often, if not always, when he has to choose between two or more mutually incompatible courses of action, each of these courses of action is such that he can, or is able to, or has it within his power to carry it out.

It is in these senses that I shall understand ‘free will’ and ‘determinism’. I shall argue that free will is incompatible with determinism. It will be convenient to call this thesis incompatibilism and to call the thesis that free will and determinism are compatible compatibilism.

I have no use for the terms ‘soft determinism’, ‘hard determinism’, and ‘libertarianism’. I do not object to these terms on the ground that they are vague or ill-defined. They can be easily defined by means of the terms we shall use and are thus no worse in that respect than our terms.

Van Inwagen does not seem to mind that “incompatibilism” lumps together opposite schools – hard determinists and libertarians.

Soft determinism is the conjunction of determinism and compatibilism; hard determinism is the conjunction of determinism and incompatibilism; libertarianism is the conjunction of incompatibilism and the thesis that we have free will.

I object to these terms because they lump together theses that should be discussed and analysed separately. They are therefore worse than useless and ought to be dropped from the working vocabulary of philosophers.

‘Contra-causal freedom’ might mean the sort of freedom, if freedom it would be, that someone would enjoy if his acts were uncaused. But that someone’s acts are undetermined does not entail that they are uncaused.

Incompatibilism can hardly be said to be a popular thesis among present-day philosophers (the “analytic” ones, at any rate). Yet it has its adherents and has had more of them in the past. It is, however, surprisingly hard to find any arguments for it. That many philosophers have believed something controversial without giving any arguments for it is perhaps not surprising; what is surprising is that no arguments have been given when arguments are so easy to give.

Perhaps the explanation is simply that the arguments are so obvious that no one has thought them worth stating. If that is so, let us not be afraid of being obvious. Here is an argument that I think is obvious (I don’t mean it’s obviously right; I mean it’s one that should occur pretty quickly to any philosopher who asked himself what arguments could be found to support incompatibilism):

We call this the Determinism Objection.

If determinism is true, then our acts are the consequences of the laws of nature and events in the remote past. But it is not up to us what went on before we were born, and neither is it up to us what the laws of nature are. Therefore, the consequences of these things (including our present acts) are not up to us. I shall call this argument the Consequence Argument.

We call this the Randomness Objection.

[What van Inwagen calls] The Mind argument proceeds by identifying indeterminism with chance and by arguing that an act that occurs by chance, if an event that occurs by chance can be called an act, cannot be under the control of its alleged agent and hence cannot have been performed freely. Proponents of [this argument] conclude, therefore, that free will is not only compatible with determinism but entails determinism. (p. 16)

Note that van Inwagen’s Mind argument adds the second horn of the dilemma of determinism. He named it the Mind Argument after the philosophical journal Mind, where objections to chance were often published. Thus van Inwagen’s Consequence and Mind Arguments are the two parts of the standard argument against free will.

Although van Inwagen is famous for the first horn of the dilemma, the determinism objection to free will (also known as the Direct Argument), he has also contributed significantly to the second – and much more difficult to reconcile – randomness objection (also known as the Indirect Argument).

Free Will Remains a Mystery for van Inwagen

Van Inwagen dramatized his understanding of the indeterministic brain events needed for agent causation by imagining God “replaying” a situation to create exactly the same circumstances and then arguing that decisions would reflect the indeterministic probabilities. He mistakenly assumes that possibilities translate directly into probabilities. He also mistakenly assumes that random possibilities directly cause human actions.

Now let us suppose that God a thousand times caused the universe to revert to exactly the state it was in at t1 (and let us suppose that we are somehow suitably placed, metaphysically speaking, to observe the whole sequence of “replays”). What would have happened? What should we expect to observe? Well, again, we can’t say what would have happened, but we can say what would probably have happened: sometimes Alice would have lied and sometimes she would have told the truth. As the number of “replays” increases, we observers shall — almost certainly — observe the ratio of the outcome “truth” to the outcome “lie” settling down to, converging on, some value. We may, for example, observe that, after a fairly large number of replays, Alice lies in thirty percent of the replays and tells the truth in seventy percent of them—and that the figures ‘thirty percent’ and ’seventy percent’ become more and more accurate as the number of replays increases. But let us imagine the simplest case: we observe that Alice tells the truth in about half the replays and lies in about half the replays. If, after one hundred replays, Alice has told the truth fifty-three times and has lied forty-eight times, we’d begin strongly to suspect that the figures after a thousand replays would look something like this: Alice has told the truth four hundred and ninety-three times and has lied five hundred and eight times. Let us suppose that these are indeed the figures after a thousand [1001] replays. Is it not true that as we watch the number of replays increase we shall become convinced that what will happen in the next replay is a matter of chance?
(“Free Will Remains a Mystery,” in Philosophical Perspectives, vol. 14, 2000, p. 14)
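Statistically, van Inwagen’s replay scenario is a sequence of independent chance trials, and the “settling down” of the truth/lie ratio that his observers see is simply the law of large numbers at work. A minimal simulation sketch (the function name, the 50 percent lie probability, and the fixed seed are our illustrative assumptions, not part of van Inwagen’s text):

```python
import random

def replay_ratio(n_replays, p_lie=0.5, seed=2000):
    """Simulate n_replays of Alice's undetermined decision.

    Each replay is an independent chance event; return the
    observed fraction of replays in which Alice lies.
    """
    rng = random.Random(seed)
    lies = sum(rng.random() < p_lie for _ in range(n_replays))
    return lies / n_replays

# The observed ratio converges on p_lie as the replays accumulate,
# yet no single replay is thereby any less a matter of chance.
for n in (100, 1000, 100000):
    print(n, round(replay_ratio(n), 3))
```

The convergence of the long-run ratio is exactly the regularity van Inwagen describes; his point is that this statistical regularity gives Alice no control over what happens in any particular replay.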

Van Inwagen reveals that he clearly thinks that indeterminism directly results in actions. No wonder on his account that “free will remains a mystery!” He repeated the argument more recently:

If God caused Marie’s decision to be replayed a very large number of times, sometimes (in thirty percent of the replays, let us say) Marie would have agent-caused the crucial brain event and sometimes (in seventy percent of the replays, let us say) she would not have… I conclude that even if an episode of agent causation is among the causal antecedents of every voluntary human action, these episodes do nothing to undermine the prima facie impossibility of an undetermined free act.
(“Van Inwagen on Free Will,” in Freedom and Determinism, ed. Joseph Keim Campbell et al., 2004, p. 227)

Van Inwagen, Kane, and Compatibilism compared to the Cogito Model

Robert Kane has argued that randomness in the decision need not be there all the time, just enough to be able to say we are not completely determined. But even if only a small percentage of decisions are random, we could not be responsible for those random decisions. We can make a quantitative comparison of the outcomes of 1000 thought experiments (or “instant replays” by God, as van Inwagen imagines) that shows how the indeterminism in the Cogito Model is limited to generating alternative possibilities for action.

Van Inwagen’s result after 1000 experiments is approximately 500 replays in which Alice lies and 500 in which she tells the truth.

Robert Kane is well aware of the problem that chance reduces moral responsibility, especially in his sense of Ultimate Responsibility (UR).

In order to keep some randomness but add rationality, Kane says perhaps only some small percentage of decisions will be random, thus breaking the deterministic causal chain, but keeping most decisions predictable. Laura Ekstrom and others follow Kane with some indeterminism in the decision.

Let’s say randomness enters Kane’s decisions only ten percent of the time. The other ninety percent of the time, determinism is at work. In those cases, presumably Alice tells the truth. Then Alice’s 500 random lies in van Inwagen’s first example would become only 50.

But this in no way explains moral responsibility for those few cases.

Compare the Information Philosophy Cogito model, which agrees with compatibilism/determinism except in cases where something genuinely new and valuable emerges as a consequence of randomness.

In our two-stage model, we have first “free” – random possibilities, then “will” – adequately determined evaluation of options and selection of the “best” option.

Alice’s random generation of alternative possibilities will include 50 percent of options that are truth-telling, and 50 percent lies.

Alice’s adequately determined will evaluates these possibilities based on her character, values, and current desires.

In the Cogito model, she will almost certainly tell the truth. So it predicts almost the same outcome as a compatibilist/determinist model.

The Cogito model is not identical, however, since it can generate new alternatives.

It is possible that among the genuinely new alternative possibilities generated, there will be some that determinism could not have produced.

It may be that Alice will find one of these options consistent with her character, values, desires, and the current situation she is in. One might include a pragmatic lie, to stay with van Inwagen’s example.

In a more positive example, it may include a creative new idea that information-preserving determinism could not produce.

Alice’s thinking might bring new information into the universe. And she can legitimately accept praise (or blame) for that new action or thought that originates with her.
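As an illustration only, the two-stage structure described above can be sketched in a few lines of Python. The option names and the scoring of Alice’s character and values are hypothetical, invented for this sketch; the point is just the separation of a random generation stage from a deterministic selection stage.

```python
import random

def generate_possibilities(rng, n=4):
    """Stage 1 ("free"): randomly generate alternative possibilities."""
    pool = ["tell the truth", "tell a pragmatic lie",
            "stay silent", "change the subject"]
    return [rng.choice(pool) for _ in range(n)]

def evaluate(option, character_values):
    """Stage 2 ("will"): adequately determined scoring by character and values."""
    return character_values.get(option, 0)

def decide(rng, character_values):
    options = generate_possibilities(rng)
    # Deterministic selection of the "best" option, given her character.
    return max(options, key=lambda o: evaluate(o, character_values))

# Hypothetical weights for Alice's character, values, and desires.
alice_values = {"tell the truth": 10, "stay silent": 3,
                "change the subject": 2, "tell a pragmatic lie": 1}

print(decide(random.Random(0), alice_values))
```

In this sketch the “free” stage may or may not serve up a given option on a given run, but whenever truth-telling comes to mind it is deterministically selected, matching the claim that Alice will almost certainly tell the truth.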

To summarize the results:

                     Van Inwagen   Kane   Cogito   Compatibilism
Alice tells truth        500        950    1000*       1000
Alice lies               500         50       0*          0

* (Alice tells the truth unless a good reason emerges from her free deliberations in the Cogito Model, in which case, to stay with van Inwagen’s example, she might tell a pragmatic lie.)
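The tallies in the table are simple expected-value arithmetic over the 1000 imagined replays. A short Python sketch, using the percentages assumed in the text above (not empirical figures), reproduces them:

```python
# Probability that Alice lies in a given replay, under each model.
# These figures are the text's assumptions, not empirical data.
P_LIE = {
    "van Inwagen": 0.50,    # pure chance: truth and lie equally likely
    "Kane": 0.05,           # random only 10% of the time; half of those are lies
    "Cogito": 0.00,         # the adequately determined will selects the truth
    "Compatibilism": 0.00,  # fully determined: Alice always tells the truth
}

REPLAYS = 1000

def expected_tally(p_lie, replays=REPLAYS):
    """Expected (truths, lies) counts over the replays."""
    lies = round(p_lie * replays)
    return replays - lies, lies

for model, p in P_LIE.items():
    truths, lies = expected_tally(p)
    print(f"{model:14s} truth: {truths:4d}   lie: {lies:4d}")
```

The Cogito row carries the same asterisk as the table: zero lies only until a good reason for a pragmatic lie emerges from Alice’s free deliberations.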

We should also note the Moral Luck criticism of actions that have a random component in their source.

Alfred Mele would perhaps object that the alternative possibilities depend on luck, and that this compromises moral responsibility.

On the Cogito Model view, Mele is right with respect to moral responsibility. But Mele is wrong that luck compromises free will.

Free will and creativity may very well depend on fortuitous circumstances, having the new idea “coming to mind” at the right time, as Mele says.

The universe we live in includes chance, so luck, including moral luck, is very real; but it is not a valid objection to our libertarian free will model (or to Mele’s “modest libertarianism”).

How to Think about the Problem of Free Will
Van Inwagen recently produced a very clear proposal for thinking about free will, in a paper to appear in The Journal of Ethics entitled “How to Think about the Problem of Free Will.” It starts with a very concise wording of the Standard Argument against Free Will that includes the Determinism, Randomness, and Responsibility Objections.

There are seemingly unanswerable arguments that (if they are indeed unanswerable) demonstrate that free will is incompatible with determinism. And there are seemingly unanswerable arguments that (if indeed . . . ) demonstrate that free will is incompatible with indeterminism.

But if free will is incompatible both with determinism and indeterminism, the concept “free will” is incoherent, and the thing free will does not exist.

There are, moreover, seemingly unanswerable arguments that, if they are correct, demonstrate that the existence of moral responsibility entails the existence of free will, and, therefore, if free will does not exist, moral responsibility does not exist either. It is, however, evident that moral responsibility does exist.

Van Inwagen concludes:

It must, therefore, be that at least one of the following three things is true:

The seemingly unanswerable arguments for the incompatibility of free will and determinism are in fact answerable; these arguments are fallacious.

The seemingly unanswerable arguments for the incompatibility of free will and indeterminism are in fact answerable; these arguments are fallacious.

The seemingly unanswerable arguments for the conclusion that the existence of moral responsibility entails the existence of free will are in fact answerable; these arguments are fallacious.

(We call this third argument the Responsibility Objection.)

The “problem of free will” is just this problem (this is my proposal): to find out which of these arguments is fallacious, and to enable us to identify the fallacy or fallacies on which they depend.

Van Inwagen recognizes that the philosophical discussions of free will are clouded by the use of vague terminology. He recommends some terms be avoided – ‘libertarianism’, ‘hard determinism’, and ‘soft determinism’ – and that terms be confined to ‘the free-will thesis’, ‘determinism’, ‘compatibilism’, and ‘incompatibilism.’ He says

There is a tendency among writers on free will to oppose ‘compatibilism’ and ‘libertarianism’; but the fundamental opposition is between compatibilism and incompatibilism. Here is a major example (not entirely unconnected with my minor example). Philosophers who use the term ‘libertarianism’ apparently face an almost irresistible temptation to speak of ‘libertarian free will.’

What is this libertarian free will they speak of? What does the phrase ‘libertarian free will’ mean?

Although van Inwagen says he has presented the free-will problem “in a form in which it is possible to think about it without being constantly led astray by bad terminology and confused ideas,” he himself is apparently confused by the ambiguous term incompatibilism. Incompatibilists are of two opposing types: libertarians, who take incompatibilism plus the free will thesis to mean that determinism is not true, and determinists, who deny the free will thesis because determinism is true. So “libertarian free will” and “compatibilist free will” nicely distinguish between an indeterminist view of free will and the view that free will is compatible with determinism.

And it is impossible to define a libertarian with just one of van Inwagen’s set of terms.

Van Inwagen makes his confusion clear:

Noun-phrases like ‘free will’ and ‘compatibilist free will’ and ‘libertarian free will’ are particularly difficult for me. I find it difficult to see what sort of thing such phrases are supposed to denote. In serious philosophy, I try never to use an abstract noun or noun-phrase unless it’s clear what ontological category the thing it purports to denote belongs to. For many abstract noun-phrases, it’s not at all clear what sort of thing they’re supposed to denote, and I therefore try to use such phrases only in introductory passages, passages in which the reader’s attention is being engaged and a little mush doesn’t matter.

Van Inwagen then looks closely at the noun phrase “free will” and asserts that it always means the same thing, that the agent is/was able to do otherwise.

‘free will’, ‘incompatibilist free will’, ‘compatibilist free will’ and ‘libertarian free will’ are four names for one and the same thing. If this thing is a property, they are four names for the property is on some occasions able to do otherwise. If this thing is a power or ability, they are four names for the power or ability to do otherwise than what one in fact does. All compatibilists I know of believe in free will. Many incompatibilists (just exactly the libertarians: that’s how ‘libertarian’ is defined) believe in free will. And it’s one and the same thing they believe in.

This seems to be word jugglery. Libertarians and compatibilists are using the same noun phrase, but they are denoting two different models for free will, two different ways that free will might operate. Free will is not just the words in a set of propositions to be adjudicated true or false by analytic language philosophers.

John Locke explicitly warned us of the potential confusion in such noun phrases, and carefully distinguished the freedom in “free” from the determined “will.” Van Inwagen’s problem stems in part from taking this phrase to be a single entity. In Latin and all the Romance languages, as well as the Germanic languages – in short, all the major philosophical languages (excepting the Greek of Aristotle, before the Stoics created the problem we have today and Chrysippus invented compatibilism) – the concept of free will is presented as a complex of two simple ideas – free and will.

liberum arbitrium (Latin), libre arbitre (French), libera volontà (Italian), livre arbítrio (Portuguese), liber arbitru (Romanian), libre voluntad (Spanish); Willensfreiheit (German), fri vilje (Danish), vrije wil (Dutch), fri vilja (Swedish)

ελεύθερη βούληση (Greek), свободная воля (Russian), स्वतंत्र इच्छा (Hindi).

Even some non-Indo-European languages combine two elementary concepts – vapaasta tahdosta (Finnish).

Polish – woli – is an exception to the rule.

The reason Aristotle did not conflate freedom with will, according to his third-century commentator Alexander of Aphrodisias, was because for Aristotle the problem was always framed in terms of responsibility, whether our actions are “up to us” (in Aristotle’s Greek, ἐφ’ ἡμῖν), whether the causes behind our actions, including Aristotelian accidents (συμβεβηκός), come from within us (ἐν ἡμῖν).

Coming back to van Inwagen, he then asks what it is that libertarians, including himself, really want. For one thing, he wishes that free will could be compatible with determinism. “It would be so simple,” he says. But reason has convinced him it is incompatible. Will van Inwagen be satisfied to learn that free will is compatible with the adequate determinism that we really have in the world? And that the microscopic indeterminism that we have need not be the direct cause of our actions?

Let us turn from what libertarians want to have to what they want to be true. Do libertarians want libertarianism to be true? Well, libertarianism is the conjunction of the free-will thesis and incompatibilism. To want libertarianism to be true, therefore, would be to want both the free-will thesis and incompatibilism to be true. I will stipulate, as the lawyers say, that libertarians want the free-will thesis to be true. (And who wouldn’t? Even hard determinists, or most of them, seem to regard the fact — they think it’s a fact — that we do not have free will as a matter for regret.) But do libertarians want incompatibilism to be true? Perhaps some do. I can say only that I don’t want incompatibilism to be true. Just as hard determinists regard the non-existence of free will as a matter for regret, I regard the fact — I think it’s a fact — that free will is incompatible with determinism as a matter for regret. But reason has convinced me that free will is incompatible with determinism, and I have to accept the deliverances of reason, however unpalatable they may be. I should think that any philosopher in his or her right mind would want compatibilism to be true. It would make everything so simple. But we can’t always have what we want and things are not always simple.

Sadly, incompatibilist libertarians have been right about indeterministic freedom, but wrong about the Will, which must be adequately determined. And compatibilists have been right about the adequately determined Will, and wrong about indeterminist Freedom, which is never the direct cause of human actions.

See the Cogito model for more details.

Van Inwagen then congratulates himself for having reintroduced the standard argument for the incompatibility of free will and determinism. As our history of the free will problem shows, this argument has been around since Epicurus.

The Consequence Argument is my name for the standard argument (various more-or-less equivalent versions of the argument have been formulated by C. D. Broad, R. M. Chisholm, David Wiggins, Carl Ginet, James Lamb, and myself) for the incompatibility. It is beyond the scope of this paper seriously to discuss the Consequence Argument. I will, however, make a sociological point. Before the Consequence Argument was well known (Broad had formulated an excellent version of it in the 1930s, but no one was listening), almost all philosophers who had a view on the matter were compatibilists. It’s probably still true that most philosophers are compatibilists. But it’s also true that the majority of philosophers who have a specialist’s knowledge of the ins and outs of the free-will problem are incompatibilists. And this change is due entirely to the power, the power to convince, the power to move the intellect, of the Consequence Argument. If, therefore, the Consequence Argument is fallacious (in some loose sense; it certainly contains no logical fallacy), the fallacy it embodies is no trivial one. Before the Consequence Argument was well known, most philosophers thought that incompatibilists (such incompatibilists as there were) were the victims of a logical “howler” that could be exposed in a paragraph or two.

[Eddy Nahmias, in a post called Counting Heads on the Garden of Forking Paths blog, has surveyed philosophers and finds the ratio of compatibilists to incompatibilists (2:1) to be about the same in the general and specialist populations.]

Van Inwagen concludes:

The problem of free will, I believe, confronts us philosophers with a great mystery. Under it our genius is rebuked. But confronting a mystery is no excuse for being in a muddle. In accusing others of muddle, I do not mean to imply that that they are muddled because they do not believe what I do about free will. I do not mean to imply that they are muddled because they are compatibilists.

Describing the problem of free will as whether compatibilism or incompatibilism is true – a redescription that van Inwagen takes most of the credit for – is likely a major contribution to the philosophical muddle we find ourselves in.

See Peter van Inwagen on I-Phi

Reading Randolph Clarke

From Randolph Clarke’s web page at Florida State University

My primary research interests are issues concerning human agency, particularly intentional action, free will, and moral responsibility. I’ve also written on practical reason, mental causation, and dispositions. I favor a causal theory of action, on which something counts as an intentional action in virtue of being appropriately caused by mental events of certain sorts, such as the agent’s having an intention with pertinent content. This kind of action theory takes human agency to be a natural phenomenon, something of a kind with (even if differing in sophistication from) the agency of many non-human animals.

Many philosophers have thought that free and morally responsible action would be ruled out if our actions were causally determined by prior events. My book, Libertarian Accounts of Free Will, examines whether indeterminism of any sort is more hospitable. Though I defend libertarian views (accounts requiring indeterminism) from several common objections, I argue that none of these accounts is adequate. If responsibility isn’t compatible with determinism, then, I think, it isn’t possible.

Clarke introduced the terms “broad incompatibilism” and “narrow incompatibilism.” A narrow incompatibilist is an incompatibilist on free will and a compatibilist on moral responsibility. A broad incompatibilist sees determinism as incompatible with both free will and moral responsibility. Narrow incompatibilism resembles John Martin Fischer’s term semicompatibilism. Semicompatibilism is the idea that moral responsibility is compatible with determinism. The term “incompatibilism” is used to characterize both determinists and libertarians.

Thus broad incompatibilism resembles Derk Pereboom’s term “hard incompatibilism.” Hard incompatibilism is the idea that free will and moral responsibility do not exist. Some hard incompatibilists, like Saul Smilansky and Daniel Wegner, call free will an illusion.

Like many philosophers, Clarke tends to equate moral responsibility with simple responsibility or accountability, that is, being the cause of an action.

In recent years, it has come to be a matter of some dispute whether moral responsibility requires free will, where the latter is understood as requiring an ability to do otherwise. I shall not take sides here in these disputes. I treat the thesis that responsibility is incompatible with the truth of determinism as a separate claim, and I call incompatibilism without this further claim “narrow.” Narrow incompatibilism holds that free will, understood as indicated above, is incompatible with determinism, but it (at least) allows that responsibility and determinism may be compatible. I call the position that free will and determinism are not compatible but responsibility and determinism are compatible “merely narrow incompatibilism.” A semicompatibilist may endorse merely narrow incompatibilism, but she need not, as she may remain uncommitted on the question whether determinism precludes the ability to do otherwise. I call the view that both free will and responsibility are incompatible with determinism “broad incompatibilism.” (Libertarian Accounts of Free Will, p. 11)

The technical language of philosophers specializing in free will is a total mess, quite opposite to the stated goal of analytic language philosophy to make conceptual analysis clear. This makes it very difficult for outsiders (and some insiders) to follow their contentious debates.

Conceptual analysis would be much easier if a more careful separation of concepts was made, for example “free” from “will,” and “free will” from “moral responsibility.”

Clarke’s Objections to Dennett, Mele, Ekstrom, Kane and our Cogito Model.

Clarke defines additional new terms in his Libertarian Accounts of Free Will. He calls Daniel Dennett’s two-stage model of decision making “deliberative,” since quantum randomness internal to the mind is limited to the deliberations. And he calls Robert Kane’s model “centered,” by which he means the quantum randomness is in the center of the decision itself. Clarke accepts the Kane and Ekstrom view that if the agent’s decision simply results from the events in the deliberation phase, then it could not be what he calls “directly free.” Clarke calls this deliberative freedom “indirect.”

“Indirectly free” is a reasonable description for our Cogito Model, which has indeterminism in the “free” deliberation stage and “adequate” determinism in the “will” stage.

Although Clarke says that a “centered event-causal libertarian view provides a conceptually adequate account of free will,” he doubts that it can provide for moral responsibility. He says that

An event-causal libertarian view secures ultimate control, which no compatibilist account provides. But the secured ultimacy is wholly negative: it is just (on a centered view) a matter of the absence of any determining cause of a directly free action. The active control that is exercised on such a view is just the same as that exercised on an event-causal compatibilist account.

It is a bit puzzling to see how the active control of a libertarian decision based on quantum randomness is “just the same as that exercised” on a compatibilist account, unless it means, as Double argued, no control at all. So it may be worth quoting Clarke at length.

Dennett requires only that the coming to mind of certain beliefs be undetermined; Mele maintains that (in combination with the satisfaction of compatibilist requirements) this would suffice, as would the undetermined coming to mind of certain desires. Likewise, on Ekstrom’s view, we have undetermined actions — the formations of preferences — among the causes of free decisions. But she does not require that these preference-formations either be or result from free actions. Nor can she require this. Any free action, she holds, must be preceded by a preference-formation. An infinite regress would be generated if these preference-formations had to either be or result from free actions. And a similar regress would result if Dennett or Mele required that the undetermined comings-to-mind, attendings, or makings of judgments that figure in their accounts had to either be or result from free actions.

Thus, given the basic features of these views, all three must allow that an action can be free even if it is causally determined and none of its causes, direct or indirect, is a free action by that agent. Setting aside the authors currently under discussion, it appears that all libertarians disallow such a thing. What might be the basis for this virtual unanimity?

When an agent acts with direct freedom — freedom that is not derived from the freedom of any earlier action — she is able to do other than what she, in fact, does. Incompatibilists (libertarians included) maintain that, if events prior to one’s birth (indirectly) causally determine all of one’s actions, then one is never able to do other than perform the actions that one actually performs, for one is never able to prevent either those earlier events or the obtaining of the laws of nature.

Clarke now claims that even prior events thought up freely by the agent during deliberations will “determine” the agent’s decision. This is roughly what the Cogito Model claims. After indeterminism in the “free” deliberation stage, we need “adequate” determinism in the “will” stage to insure that our actions are consistent with our character and values (including Kane’s SFAs), with our habits and (Ekstrom’s) preferences, and with our current feelings and desires. Clarke oddly attempts to equate events prior to our births with events in our deliberations, claiming that they are equally deterministic. He says,

If this is correct, then a time-indexed version of the same claim is correct, too. If events that have occurred by time t causally determine some subsequent action, then the agent is not able at t to do other than perform that action, for one is not able at t to prevent either events that have occurred by t or the obtaining of the laws of nature. An incompatibilist will judge, then, that, on Dennett’s and Mele’s views, it is allowed that once the agent has made an evaluative judgment, she is not able to do other than make the decision that she will, in fact, make, and that, on Ekstrom’s view, it is allowed that once the preference is formed, again the agent is not able to avoid making the decision that she will, in fact, make. If direct freedom requires that, until an action is performed, the agent be able to do otherwise, then these views do not secure the direct freedom of such decisions. Mele confronts this line of thinking head-on. Some libertarians, he acknowledges, do hold that a decision is directly free only if, until it is made, the agent is able to do other than make that decision, where this is taken to require that, until the action occurs, there is a chance that it will not occur. But such a position, Mele charges, is “mere dogmatism” (1995a: 218). It generates the problem of control that he (along with Dennett and Ekstrom) seeks to evade, and hence libertarians would do well to reject this position.

There is, however, a decisive reason for libertarians not to reject this position, a reason that stems from the common belief — one held by compatibilists and incompatibilists alike — that, in acting freely, agents make a difference, by exercises of active control, to how things go. The difference is made, on this common conception, in the performance of a directly free action itself, not in the occurrence of some event prior to the action, even if that prior event is an agent-involving occurrence, causation of the action by which importantly connects the agent, as a person, to her action. On a libertarian understanding of this difference-making, some things that happen had a chance of not happening, and some things that do not happen had a chance of happening, and in performing directly free actions, agents make the difference. If an agent is, in the very performance of a free action, to make a difference in this libertarian way, then that action itself must not be causally determined by its immediate antecedents. In order to secure this libertarian variety of difference-making, an account must locate openness and freedom-level active control in the same event — the free action itself — rather than separate these two as do deliberative libertarian views.

On the views of Dennett, Ekstrom, and Mele, agents might be said to make a difference between what happens but might not have and what does not happen but might have, but such a difference is made in the occurrence of something nonactive or unfree prior to the action that is said to be free, not in the performance of the allegedly free action itself. Failure to secure for directly free actions this libertarian variety of difference-making constitutes a fundamental inadequacy of deliberative libertarian accounts of free action.
(Libertarian Accounts of Free Will, p.63-4)

We need only extend the process of decision to include everything from the start of free deliberations to the moment of willed choice to see that the Cogito Model allows the agent to make a real difference. The agent is justified in saying “I could have done otherwise,” “This action was up to me,” and “I am the originator of my actions and the author of my life.” Clarke goes on to consider his “centered” event-causal view, and initially claims that it provides an adequate account of free will, but his “adequate” is damning with faint praise.

Clarke finds a “conceptually adequate account of free will” for narrow, but not for broad, incompatibilism. His “centered” account, like those of Kane, van Inwagen, Ekstrom, and Balaguer, includes indeterminism in the decision itself. It is not limited to deliberations as in most two-stage models.

If merely narrow incompatibilism is correct, then an unadorned, centered event-causal libertarian view provides a conceptually adequate account of free will. Such a view provides adequately for fully rational free action and for the rational explanation — simple, as well as contrastive — of free action. The indeterminism required by such a view does not diminish the active control that is exercised when one acts. Given incompatibilism of this variety, a libertarian account of this type secures both the openness of alternatives and the exercise of active control that are required for free will.

It is thus unnecessary to restrict indeterminism, as deliberative accounts do, to locations earlier in the processes leading to free actions. Indeed, so restricting indeterminism undermines the adequacy of an event-causal view. Any adequate libertarian account must locate the openness of alternatives and freedom-level active control in the same event — in a directly free action itself. For this reason, an adequate event-causal view must require that a directly free action be nondeterministically caused by its immediate causal antecedents.

If, on the other hand, broad incompatibilism is correct, then no event-causal account is adequate. An event-causal libertarian view secures ultimate control, which no compatibilist account provides. But the secured ultimacy is wholly negative: it is just (on a centered view) a matter of the absence of any determining cause of a directly free action. The active control that is exercised on such a view is just the same as that exercised on an event-causal compatibilist account.

This sort of libertarian view fails to secure the agent’s exercise of any further positive powers to causally influence which of the alternative courses of events that are open will become actual. For this reason, if moral responsibility is precluded by determinism, the freedom required for responsibility is not secured by any event-causal libertarian account. (pp.219-20)

Reading David Armstrong

David Malet Armstrong’s book Knowledge, Truth and Belief (1973, pp. 150-61) contains an important analysis of the infinite regress of inferences – “reasons behind the reasons” – first noticed by Plato in the Theaetetus (200D-201C).

Knowledge traditionally entails true belief, but true belief does not entail knowledge.

Knowledge is true belief plus some justification in the form of reasons or evidence. But that evidence must itself be knowledge, which in turn must be justified, leading to a regress.

Following some unpublished work of Gregory O’Hair, Armstrong identifies and diagrams several possible ways to escape Plato’s regress, including:

Skepticism – knowledge is impossible

The regress is infinite but virtuous

The regress is finite, but has no end (Coherence view)

The regress ends in self-evident truths (Foundationalist view)

Non-inferential credibility, such as direct sense perceptions

Externalist theories (O’Hair is the source of the term “externalist”)

Causal view (Ramsey)

Reliability view (Ramsey)

Armstrong is cited by Hilary Kornblith and other epistemologists as restoring interest in “externalist” justification of knowledge. Since Descartes, epistemology had been focused on “internalist” justifications.

Armstrong does not subscribe to traditional views of justifying true beliefs, but he cited “causal” and “reliabilist” theories as direct non-inferential validation of knowledge. Direct validation or justification avoids the problem of the infinite regress of inferences.

Causal and reliabilist theories also were not original with Armstrong. He referred to the 1929 work of Frank Ramsey. Today these ideas are primarily associated with the name of Alvin Goldman, who put forward both “causal” and “reliabilist” theories of justification for true beliefs.

Here is how Armstrong described “causal” and “reliabilist” views:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. But they still have a peculiar importance in the case of non-inferential knowledge because they serve to solve the problem of the infinite regress.

Externalist theories may be further sub-divided into ‘Causal’ and ‘Reliability’ theories.

6 (i) Causal theories. The central notion in causal theories may be illustrated by the simplest case. The suggestion is that Bap [‘a believes p’] is a case of Kap [‘a knows p’] if ‘p’ is true and, furthermore, the situation that makes ‘p’ true is causally responsible for the existence of the belief-state Bap. I not only believe, but know, that the room is rather hot. Now it is certainly the excessive heat of the room which has caused me to have this belief. This causal relation, it may then be suggested, is what makes my belief a case of knowledge.

The source for causal theories is Frank Ramsey (1929)
Ramsey’s brief note on ‘Knowledge’, to be found among his ‘Last Papers’ in The Foundations of Mathematics, puts forward a causal view. A sophisticated recent version of a causal theory is to be found in ‘A Causal Theory of Knowing’ by Alvin I. Goldman (Goldman 1967).

Causal theories face two main types of difficulty. In the first place, even if we restrict ourselves to knowledge of particular matters of fact, not every case of knowledge is a case where the situation known is causally responsible for the existence of the belief. For instance, we appear to have some knowledge of the future. And even if all such knowledge is in practice inferential, non-inferential knowledge of the future (for example, that I will be ill tomorrow) seems to be an intelligible possibility. Yet it could hardly be held that my illness tomorrow causes my belief today that I will be ill tomorrow. Such cases can perhaps be dealt with by sophisticating the Causal analysis. In such a case, one could say, both the illness tomorrow and today’s belief that I will be ill tomorrow have a common cause, for instance some condition of my body today which not only leads to illness but casts its shadow before by giving rise to the belief. (An ‘early-warning’ system.)

In the second place, and much more seriously, cases can be envisaged where the situation that makes ‘p’ true gives rise to Bap, but we would not want to say that A knew that p. Suppose, for instance, that A is in a hypersensitive and deranged state, so that almost any considerable sensory stimulus causes him to believe that there is a sound of a certain sort in his immediate environment. Now suppose that, on a particular occasion, the considerable sensory stimulus which produces that belief is, in fact, a sound of just that sort in his immediate environment. Here the p-situation produces Bap, but we would not want to say that it was a case of knowledge.

I believe that such cases can be excluded only by filling out the Causal Analysis with a Reliability condition. But once this is done, I think it turns out that the Causal part of the analysis becomes redundant, and that the Reliability condition is sufficient by itself for giving an account of non-inferential (and inferential) knowledge.

6 (ii) Reliability theories. The second ‘Externalist’ approach is in terms of the empirical reliability of the belief involved. Knowledge is empirically reliable belief. Since the next chapter will be devoted to a defence of a form of the Reliability view, it will be only courteous to indicate the major precursors of this sort of view which I am acquainted with.

Ramsey is the source for reliabilist views as well
Once again, Ramsey is the pioneer. The paper ‘Knowledge’, already mentioned, combines elements of the Causal and the Reliability view. There followed John Watling’s ‘Inference from the Known to the Unknown’ (Watling 1954), which first converted me to a Reliability view. Since then there has been Brian Skyrms’ very difficult paper ‘The Explication of “X knows that p” ‘ (Skyrms 1967), and Peter Unger’s ‘An Analysis of Factual Knowledge’ (Unger 1968), both of which appear to defend versions of the Reliability view. There is also my own first version in Chapter Nine of A Materialist Theory of the Mind. A still more recent paper, which I think can be said to put forward a Reliability view, and which in any case anticipates a number of the results I arrive at in this Part, is Fred Dretske’s ‘Conclusive Reasons’ (Dretske 1971).

Hilary Kornblith on Armstrong
The Terms “Internalism” and “Externalism”
The terms “internalism” and “externalism” are used in philosophy in a variety of different senses, but their use in epistemology for anything like the positions which are the focus of this book dates to 1973. More precisely, the word “externalism” was introduced in print by David Armstrong in his book Belief, Truth and Knowledge in the following way:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. (Belief, Truth and Knowledge, p.157)

So in Armstrong’s usage, “externalism” is a view about knowledge, and it is the view that when a person knows that a particular claim p is true, there is some sort of “natural relation” which holds between that person’s belief that p and the world. One such view, suggested in 1967 by Alvin Goldman, was the Causal Theory of Knowledge. On this view, a person knows that p (for example, that it’s raining) when that person’s belief that p was caused by the fact that p. A related view, championed by Armstrong and later by Goldman as well, is the Reliability Account of Knowledge, according to which a person knows that p when that person’s belief is both true and, in some sense, reliable: on some views, the belief must be a reliable indicator that p; on others, the belief must be produced by a reliable process, that is, one that tends to produce true beliefs. Frank Ramsey was a pioneer in defending a reliability account of knowledge. Particularly influential work in developing such an account was also done by Brian Skyrms, Peter Unger, and Fred Dretske.

Accounts of knowledge which are externalist in Armstrong’s sense mark an important break with tradition, according to which knowledge is a kind of justified, true belief. On traditional accounts, in part because justification is an essential ingredient in knowledge, a central task of epistemology is to give an account of what justification consists in. And, according to tradition, what is required for a person to be justified in holding a belief is for that person to have a certain justification for the belief, where having a justification is typically identified with being in a position, in some relevant sense, to produce an appropriate argument for the belief in question. What is distinctive about externalist accounts of knowledge, as Armstrong saw it, was that they do not require justification, at least in the traditional sense. Knowledge merely requires having a true belief which is appropriately connected with the world.

But while Armstrong’s way of viewing reliability accounts of knowledge has them rejecting the view that knowledge requires justified true belief, Alvin Goldman came to offer quite a different way of viewing the import of reliability theories: in 1979, Goldman suggested that instead of seeing reliability accounts as rejecting the claim that knowledge requires justified true belief, we should instead embrace an account which identifies justified belief with reliably produced belief. Reliability theories of knowledge, on this way of understanding them, offer a non-traditional account of what is required for a belief to be justified. This paper of Goldman’s, and his subsequent extended development of the idea, have been at the center of epistemological discussion ever since.

See David M. Armstrong on I-Phi

Reading Harry Frankfurt

Wednesday, December 10th, 2008

Harry G. Frankfurt is the inventor of wildly unrealistic but provocative “thought experiments” designed to show that a person “could not have done otherwise.” Specifically, his goal is to deny that a person is free to choose among alternative possibilities. The traditional argument for free will requires alternative possibilities so that an agent could have done otherwise, without which there is no moral responsibility.
In 1969 Frankfurt famously defined what he called “The Principle of Alternate Possibilities” or PAP, then proceeded to deny it.

“a person is morally responsible for what he has done only if he could have done otherwise.”

Frankfurt’s thought experiments are attacks on this principle. Considering the absurd nature of his attacks (Frankfurt asks us to imagine an agent who can control the minds of others, or a demon inside one’s mind that can intervene in our decisions), the recent philosophical literature is surprisingly full of articles with “Frankfurt-type cases,” logical counterexamples to Frankfurt’s attempt to defend moral responsibility in the absence of alternative possibilities. Frankfurt changed the debate on free will and moral responsibility with his hypothetical intervening demon. For example, John Martin Fischer’s semicompatibilism assumes with Frankfurt that we can have moral responsibility, even if determinism (and/or indeterminism) is incompatible with free will.

Frankfurt’s basic claim is as follows:

“The principle of alternate possibilities is false. A person may well be morally responsible for what he has done even though he could not have done otherwise. The principle’s plausibility is an illusion, which can be made to vanish by bringing the relevant moral phenomena into sharper focus.”

Frankfurt posits a counterfactual demon who can intervene in an agent’s decisions if the agent is about to do something different from what the demon wants the agent to do. Frankfurt’s demon will block any alternative possibilities, but leave the agent to “freely choose” to do the one possibility desired by the demon. Frankfurt claims that the existence of the hypothetical control mechanisms blocking alternative possibilities is irrelevant to the agent’s free choice. This is true when the agent’s choice agrees with the demon’s, but obviously false should the agent disagree. In that case, the demon would have to block the agent’s will, and the agent would surely notice.

(IRR) There may be circumstances that in no way bring it about that a person performs a certain action; nevertheless, those very circumstances make it impossible for him to avoid performing that action.

Compatibilists have long been bothered by alternative possibilities, apparently needed in order that agents “could do otherwise.” They knew that determinism allows only a single future, one actual causal chain of events. They were therefore delighted to get behind Frankfurt’s examples as proofs that alternative possibilities, perhaps generated in part by random events, did not exist. Frankfurt argued for moral responsibility without libertarian free will.

Note, however, that Frankfurt assumes that genuine alternative possibilities do exist. If not, there is nothing for his counterfactual intervening demon to block. Furthermore, without alternatives, Frankfurt would have to admit that there is only one “actual sequence” of events leading to one possible future. “Alternative sequences” would be ruled out. Since Frankfurt’s demon, much like Laplace’s demon, has no way of knowing the actual information about future events – such as an agent’s decisions – until that information comes into existence, such demons are not possible, and Frankfurt-style thought experiments, entertaining as they are, cannot establish the compatibilist version of free will.

Incompatibilist libertarians like Robert Kane, David Widerker, and Carl Ginet have mounted attacks on Frankfurt-type examples, in defense of free will. Their basic idea is that in an indeterministic world Frankfurt’s demon cannot know in advance what an agent will do. As Widerker put it, there is no “prior sign” of the agent’s deliberate choice. In information-theoretic terms, the information about the choice does not yet exist in the universe. So in order to block an agent’s decision, the demon would have to act in advance, and that would destroy the presumed “responsibility” of the agent for the choice, whether or not there are available alternative possibilities. We could call this the “Information Objection.”

And note that no matter how many alternative possibilities are blocked by Frankfurt’s hypothetical intervener, the simple alternative of not acting always remains open, and in cases of moral actions, not acting almost always has comparable moral significance. This could be called the “Yes/No Objection.”

Here is a discussion of the problem, from Kane’s A Contemporary Introduction to Free Will, 2005, (p.87)

5. The Indeterminist World Objection

While the “flicker of freedom” strategy will not suffice to refute Frankfurt, it does lead to a third objection that is more powerful. This third objection is one that has been developed by several philosophers, including myself, David Widerker, Carl Ginet, and Keith Wyma.5 We might call it the Indeterministic World Objection. I discuss this objection in my book Free Will and Values. Following is a summary of this discussion:

Suppose Jones’s choice is undetermined up to the moment when it occurs, as many incompatibilists and libertarians require of a free choice. Then a Frankfurt controller, such as Black, would face a problem in attempting to control Jones’s choice. For if it is undetermined up to the moment when he chooses whether Jones will choose A or B, then the controller Black cannot know before Jones actually chooses what Jones is going to do. Black may wait until Jones actually chooses in order to see what Jones is going to do. But then it will be too late for Black to intervene. Jones will be responsible for the choice in that case, since Black stayed out of it. But Jones will also have had alternative possibilities, since Jones’s choice of A or B was undetermined and therefore it could have gone either way. Suppose, by contrast, Black wants to ensure that Jones will make the choice Black wants (choice A). Then Black cannot stay out of it until Jones chooses. He must instead act in advance to bring it about that Jones chooses A. In that case, Jones will indeed have no alternative possibilities, but neither will Jones be responsible for the outcome. Black will be responsible since Black will have intervened in order to bring it about that Jones would choose as Black wanted.

In other words, if free choices are undetermined, as incompatibilists require, a Frankfurt controller like Black cannot control them without actually intervening and making the agent choose as the controller wants. If the controller stays out of it, the agent will be responsible but will also have had alternative possibilities because the choice was undetermined. If the controller does intervene, by contrast, the agent will not have alternative possibilities but will also not be responsible (the controller will be). So responsibility and alternative possibilities go together after all, and PAP would remain true — moral responsibility requires alternative possibilities — when free choices are not determined.6

If this objection is correct, it would show that Frankfurt-type examples will not work in an indeterministic world in which some choices or actions are undetermined. In such a world, as David Widerker has put it, there will not always be a reliable prior sign telling the controller in advance what agents are going to do.7 Only in a world in which all of our free actions are determined can the controller always be certain in advance how the agent is going to act. This means that, if you are a compatibilist, who believes free will could exist in a determined world, you might be convinced by Frankfurt-type examples that moral responsibility does not require alternative possibilities. But if you are an incompatibilist or libertarian, who believes that some of our morally responsible acts must be undetermined, you need not be convinced by Frankfurt-type examples that moral responsibility does not require alternative possibilities.

There are also many defenders of Frankfurt’s attack on alternative possibilities, notably John Martin Fischer. Many of these positions appear in the 2006 book Moral Responsibility and Alternative Possibilities, edited by David Widerker and Michael McKenna.

Reading Derk Pereboom

In Living Without Free Will, Derk Pereboom offers a “hard incompatibilism” that makes both free will and moral responsibility incompatible with determinism. Although Pereboom claims to be agnostic about the truth of determinism, he argues that we should admit there is neither human freedom nor moral responsibility and that we should learn to live without free will.

He is close to a group of thinkers who share a view that William James called “hard determinism,” including Richard Double, Ted Honderich, Saul Smilansky, Galen Strawson, and the psychologist Daniel Wegner.

Some of them call for the recognition that “free will is an illusion.”

But note that Pereboom’s “hard incompatibilism” is not only the case if determinism is true. It is equally the case if indeterminism is true. Pereboom argues that neither provides the control needed for moral responsibility. This is the standard argument against free will. As Pereboom states his view:

I argue for a position closely related to hard determinism. Yet the term “hard determinism” is not an adequate label for my view, since I do not claim that determinism is true. As I understand it, whether an indeterministic or a deterministic interpretation of quantum mechanics is true is currently an open question. I do contend, however, that not only is determinism incompatible with moral responsibility, but so is the sort of indeterminacy specified by the standard interpretation of quantum mechanics, if that is the only sort of indeterminacy there is.
(Living Without Free Will, p.xviii)

I will grant, for purposes of argument, that event-causal libertarianism allows for as much responsibility-relevant control as compatibilism does. I shall argue that if decisions were indeterministic events of the sort specified by this theory, then agents would have no more control over their actions than they would if determinism were true, and such control is insufficient for responsibility.
(Living Without Free Will, p.46)

In his 1995 essay stating the case for “Hard Incompatibilism,” Pereboom notes…

The demographic profile of the free will debate reveals a majority of soft determinists, who claim that we possess the freedom required for moral responsibility, that determinism is true, and that these views are compatible. Libertarians, incompatibilist champions of the freedom required for moral responsibility, constitute a minority. Not only is this the distribution in the contemporary philosophical population, but it has always been the pattern in Western philosophy. Seldom has hard determinism — the incompatibilist endorsement of determinism and rejection of the freedom required for moral responsibility — been defended.

One would expect hard determinism to have few proponents, given its apparent renunciation of morality. I believe, however, that the argument for hard determinism is powerful, and furthermore, that the reasons against it are not as compelling as they might at first seem.

The categorization of the determinist position by ‘hard’ and ‘soft’ masks some important distinctions, and thus one might devise a more fine-grained scheme. Actually, within the conceptual space of both hard and soft determinism there is a range of alternative views. The softest version of soft determinism maintains that we possess the freedom required for moral responsibility, that having this sort of freedom is compatible with determinism, that this freedom includes the ability to do otherwise than what one actually will do, and that even though determinism is true, one is yet deserving of blame upon having performed a wrongful act. The hardest version of hard determinism claims that since determinism is true, we lack the freedom required for moral responsibility, and hence, not only do we never deserve blame, but, moreover, no moral principles or values apply to us. But both hard and soft determinism encompass a number of less extreme positions. The view I wish to defend is somewhat softer than the hardest of the hard determinisms, and in this respect it is similar to some aspects of the position recently developed by Ted Honderich. In the view we will explore, since determinism is true, we lack the freedom required for moral responsibility. But although we therefore never deserve blame for having performed a wrongful act, most moral principles and values are not thereby undermined.
(Noûs 29, 1995, reprinted in Free Will, ed. D. Pereboom, 1997, p.242)

Pereboom concludes:

Given that free will of some sort is required for moral responsibility, then libertarianism, soft determinism, and hard determinism, as typically conceived, are jointly exhaustive positions (if we allow the “deterministic” positions the view that events may result from indeterministic processes of the sort described by quantum mechanics). Yet each has a consequence that is difficult to accept.

If libertarianism were true, then we would expect events to occur that are incompatible with what our physical theories predict to be overwhelmingly likely.

If soft determinism were true, then agents would deserve blame for their wrongdoing even though their actions were produced by processes beyond their control.

If hard determinism were true, agents would not be morally responsible — agents would never deserve blame for even the most cold-blooded and calmly executed evil actions.

I have argued that hard determinism could be the easiest view to accept. Hard determinism need not be of the hardest sort. It need not subvert the commitment to doing what is right, and although it does undermine some of our reactive attitudes, secure analogues of these attitudes are all one requires for good interpersonal relationships.

Consequently, of the three positions, hard determinism might well be the most attractive, and it is surely worthy of more serious consideration than it has been accorded. (p.272)

Pereboom distinguishes two libertarian positions, agent-causal and event-causal. While his agent-causal positions involve metaphysical freedom if not immaterial substance, his event-causal views assume that indeterminism is the direct or indirect cause of the action. He then traces decisions determined by character back to early character-forming events. Since these are always in turn either themselves determined or, at best, undetermined, we cannot be responsible for our characters either.

According to the libertarian, we can choose to act without being causally determined by factors beyond our control, and we can therefore be morally responsible for our actions. Arguably, this is the common-sense position. Libertarian views can be divided into two categories. In agent causal libertarianism, free will is explained by the existence of agents who can cause actions not by virtue of any state they are in, such as a belief or a desire, but just by themselves — as substances. Such agents are capable of causing actions in this way without being causally determined to do so. In an attractive version of agent-causal theory, when such an agent acts freely, she can be inclined but not causally determined to act by factors such as her desires and beliefs. But such factors will not exhaust the causal account of the action. The agent herself, independently of these factors, provides a fundamental element. Agent-causal libertarianism has been advocated by Thomas Reid, Roderick Chisholm, Richard Taylor, Randolph Clarke, and Timothy O’Connor. Perhaps the views of William of Ockham and Immanuel Kant also count as agent-causal libertarianism.

In the second category, which I call event-causal libertarianism, only causation involving states or events is permitted. Required for moral responsibility is not agent causation, but production of actions that crucially involves indeterministic causal relations between events. The Epicurean philosopher Lucretius provides a rudimentary version of such a position when he claims that free actions are accounted for by uncaused swerves in the downward paths of atoms. Sophisticated variants of this type of libertarianism have been developed by Robert Kane and Carl Ginet.
(Living Without Free Will, p.xv)

On Ginet’s and Kane’s conceptions, are free choices indeed partially random events (or perhaps even totally random events on Ginet’s account) for which agents cannot be morally responsible? At this point, one might suggest that there is an additional resource available to bolster Ginet’s and Kane’s account of morally responsible decision. For convenience, let us focus on Kane’s view (I suspect that Ginet’s position will not differ significantly from Kane’s on this issue). One might argue that in Kane’s conception, the character and motives that explain an effort of will need not be factors beyond the agent’s control, since they could be produced partly as a result of the agent’s free choices. Consequently, it need not be that the effort, and thus the choice, is produced solely by factors beyond the agent’s control and no further contribution of the agent. But this move is unconvincing. To simplify, suppose that it is character alone, and not motives in addition, that explains the effort of will. Imagine first that the character that explains the effort is not a product of the agent’s free choices, but rather that there are factors beyond his control that determine this character, or nothing produces it, or factors beyond his control contribute to the production of the character without determining it and nothing supplements their contribution to produce it. Then, by incompatibilist standards, the agent cannot be responsible for his character. But in addition, neither can he be responsible for the effort that is explained by the character, whether this explanation is deterministic or indeterministic. If the explanation is deterministic, then there will be factors beyond the agent’s control that determine the effort, and the agent will thereby lack moral responsibility for the effort. 
If the explanation is indeterministic, given that the agent’s free choice plays no role in producing the character, and nothing besides the character explains the effort, there will be factors beyond the agent’s control that make a causal contribution to the production of this effort without determining it, while nothing supplements the contribution of these factors to produce the effort. Here, again, the agent cannot be morally responsible for the effort.

However, prospects for moral responsibility for the effort of will are not improved if the agent’s character is partly a result of his free choices. For consider the first free choice an agent ever makes. By the above argument, he cannot be responsible for it. But then he cannot be responsible for the second choice either, whether or not the first choice was character-forming. If the first choice was not character-forming, then the character that explains the effort of will for the second choice is not produced by his free choice, and then by the above argument, he cannot be morally responsible for it. Suppose, alternatively, that the first choice was character-forming. Because the agent cannot be responsible for the first choice, he also cannot be responsible for the resulting character formation. But then, by the above argument, he cannot be responsible for the second choice either. Since this type of reasoning can be repeated for all subsequent choices, Kane’s agent can never be morally responsible for effort of will.

Given that such an agent can never be morally responsible for his efforts of will, neither can he be responsible for his choices. For in Kane’s picture, there is nothing that supplements the contribution of the effort of will to produce the choice. Indeed, all free choices will ultimately be partially random events, for in the final analysis there will be factors beyond the agent’s control, such as his initial character, that partly produce the choice, while there will be nothing that supplements their contribution in the production of the choice, and by the most attractive incompatibilist standard, agents cannot be responsible for such partially random events.
(Living Without Free Will, p.48-50)

See Derk Pereboom on I-Phi

Reading John Martin Fischer

Four Views

Since the time of Peter van Inwagen’s 1983 classic Essay on Free Will, which introduced the Consequence Argument, John Martin Fischer has been arguing the case for a compatibilism that focuses on moral responsibility and agent control rather than compatibilist free will per se.

Fischer was inspired by Peter Strawson’s influential 1962 essay, Freedom and Resentment, which changed the subject from the intractable free will problem to moral responsibility alone.

As Fischer says in his new 4-volume set Free Will (but mostly about MR and AP), “Some philosophers do not distinguish between freedom and moral responsibility. Put a bit more carefully, they tend to begin with the notion of moral responsibility, and “work back” to a notion of freedom; this notion of freedom is not given independent content (separate from the analysis of moral responsibility). For such philosophers, “freedom” refers to whatever conditions are involved in choosing or acting in such a way as to be morally responsible.” (Free Will, vol. I, p.xxiii)

Fischer also has been influenced by Harry Frankfurt’s attack on what Frankfurt called the Principle of Alternate Possibilities (PAP). Before Frankfurt, compatibilists and incompatibilists alike had argued thatalternative possibilities seemed to be a condition not only for free will but for moral responsibility.

Frankfurt’s clever examples changed the debate from compatibilism vs. incompatibilism to the very existence of alternative possibilities.

Although attacks and counterattacks continue, Frankfurt-style examples have become far too arcane and unlikely to win support outside a small number of compatibilists and incompatibilists.

Nevertheless, Fischer has tried to carve out a position called semicompatibilism, which de-emphasizes alternative possibilities and emphasizes agent control. Fischer hopes that semicompatibilism will be resistant to any discovery by science that strict causal determinism is true. He does this by dividing the needed agent control into two parts, “regulative control” and “guidance control.”

Regulative control involves alternative possibilities, which lead to what Fischer calls “alternative sequences” of action. Fischer thinks he can simply deny that agents have regulative control, and bypass the question of alternative possibilities, based on Frankfurt-style examples. Although Fischer generally supports Frankfurt-style examples, he is the author of one of the cleverest counterattacks, the idea that the mere possibility that the agent might try an alternative gives rise to a “flicker of freedom” (The Metaphysics of Free Will: An Essay on Control, pp. 131-159).

Fischer wants to focus our attention on the more critical guidance control, which describes the “reasons-responsiveness” and “sourcehood” involved in the “actual sequence” of events leading up to the agent’s action. For Fischer, no alternative sequences, however many and however they flicker with freedom, are as relevant as the actual sequence.

Being the source of our actions allows us to say that our actions are "up to us," that we can take ownership of our actions. This is what Fischer regards as the "freedom-relevant condition." It is what Robert Kane calls our "ultimate responsibility" (UR). And it is what Manuel Vargas calls the "self-governance condition" in his Revisionism.

Kane, Vargas, and Derk Pereboom contributed to Fischer's recent book Four Views on Free Will. Pereboom also focuses on moral responsibility, like Fischer, but he disagrees with Fischer that moral desert justifies praise and blame, reward and punishment. At most, says Pereboom, responsibility can justify that we can be "legitimately called to moral improvement." Desert implies retributivism. Pereboom says the most we can justify is moral rehabilitation, for its beneficial consequences to society.

Although Fischer is officially agnostic on the ancient problem of free will versus determinism, he shows a strong commitment to causality and determinism through his years of defending the compatibility of moral responsibility with determinism.

Nevertheless, Fischer’s dividing of agent control issues into regulative control (involving alternative possibilities) and guidance control (what happens in the actual sequence) is an excellent approach that allows us to situate the indeterminism that many thinkers feel is critical to any libertarian model. Fischer notes that indeterminism in the alternative possibilities might generate “flickers of freedom.” And he says clearly (Four Views, p.74) that guidance control is not enhanced by positing indeterminism.

In his 1998 book Responsibility and Control, written with Mark Ravizza, Fischer describes what he calls the Direct and Indirect Arguments for incompatibilism. The Indirect Argument says that determinism rules out alternative possibilities.  From his semicompatibilist view, that does not threaten moral responsibility. Only in the Direct Argument for incompatibilism does determinism rule out moral responsibility.

So might Fischer agree with a view that 1) allows the “freedom-relevant condition”  (reasons responsiveness and ownership) in the actual sequence to be governed by what he calls “almost causal determinism” (Responsibility and Control, p.15n) and 2) allows indeterminism in the generation of the alternative possibilities (flickers of freedom)?

That is the view we offer in the I-Phi Cogito model. Although they do not endorse it themselves, Daniel Dennett and Alfred Mele have also offered this view as something libertarians should like.

Indeterminism is important only in microscopic structures, but that is enough to introduce noise and randomness into our thoughts, especially when we are rapidly generating alternatives for action by random combinations of past experiences. But our brain and our neurons can suppress microscopic noise when they need to, ensuring what we call adequate determinism, what Fischer calls almost causal determinism, and what Ted Honderich calls near determinism – in our willed actions.

In Robert Kane's contribution to Four Views on Free Will, he correctly identifies noise in neural messages as indeterministically generated, but mistakenly treats it as merely a "hindrance or obstacle" that raises the level of effort needed for his rare but morally significant "self-forming actions."

The role of indeterminism in free will is better seen as simply generating Fischer’s AP “flickers of freedom.” These alternative possibilities are then the “free” part of “free will” (Fischer’s regulative control).

The “will” part (Fischer’s guidance control) is “almost causally” determined to be reasons responsive and to take ownership for the determination to act in a fashion consistent with the agent’s character and values.

Event-causal libertarians like Kane and Laura Waddell Ekstrom think this kind of freedom is not enough. And agent-causal libertarians like Randolph Clarke and Timothy O'Connor want even more "metaphysical" freedom. They say that if the will is determined to act in a rational way consistent with its character and values, then the agent will make exactly the same decision in exactly the same circumstances.

Such consistency of action does not bother the common sense thinker or the compatibilist (even a hard incompatibilist?) philosopher.

Kane, Ekstrom, and others continue to invoke some indeterminism in the decision process itself. As Daniel Dennett recommended as early as 1978 (in Brainstorms) and Alfred Mele has been promoting as a "modest libertarianism" in his recent books (Autonomous Agents and Free Will and Luck), indeterminism is best kept in the early stage of a two-stage process.

We first need "free" (alternative possibilities) and then "will" (adequately determined actions) in a temporal sequence. First chance, then choice.
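The two-stage sequence can be sketched as a toy generate-and-test program. This is only an illustration of the logic: the function names, the "experiences," and the scoring of options against "values" are all invented for the example, not anything drawn from Fischer or the Cogito model itself.

```python
import random

def generate_alternatives(experiences, n=5, rng=None):
    """Stage one (chance): indeterministically combine past
    experiences into candidate options for action."""
    rng = rng or random.Random()
    return [tuple(rng.sample(experiences, 2)) for _ in range(n)]

def evaluate(option, values):
    """Score an option against the agent's fixed character and values."""
    return sum(values.get(item, 0) for item in option)

def choose(alternatives, values):
    """Stage two (choice): an adequately determined selection.
    Given the same alternatives and values, the same option wins."""
    return max(alternatives, key=lambda opt: evaluate(opt, values))

experiences = ["help", "wait", "speak", "listen", "act"]
values = {"help": 3, "listen": 2, "speak": 1, "wait": 0, "act": 1}

alternatives = generate_alternatives(experiences)  # "free": random options
decision = choose(alternatives, values)            # "will": determined choice
```

Note the division of labor: only `generate_alternatives` uses randomness; `choose` is a pure function of its inputs, so repeating it with the same alternatives and values always yields the same decision.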

I think that John Martin Fischer’s guidance control, perfectly compatible with his “almost causal determinism,” validates not only his semicompatibilist view of moral responsibility, but also supports the common sense or popular view of free will that is found in the opinion surveys of experimental philosophers Joshua Knobe and Shaun Nichols.

While limited compared to “metaphysical” freedom, this view is consistent with a broadly scientific world view, a requirement for any systematic revisionism that Manuel Vargas calls “naturalistic plausibility” (Four Views, p.153).

Ironically perhaps, this view would be the very opposite of a revisionism, in the sense that the diagnostic (descriptive) analysis of common sense would agree remarkably well with what Vargas calls the prescriptive view for philosophers. Or perhaps it is the philosophers’ views that need revision?

As an illustration of just how naturalistically plausible this new view of free will is, consider the case of biological evolution. The evidence is overwhelming that variations in the gene pool are driven by random mutations in the DNA. Many of these mutations are caused by indeterministic quantum mechanical events, cosmic ray collisions for the most part. Think of the mutations as alternative possibilities for new species. An adequately determined process of natural selection then weeds out those random variations that cannot reproduce themselves and compete with their ancestors. First chance, then selection.
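This chance-then-selection loop is the skeleton of any toy evolutionary simulation. The sketch below illustrates only the logic of random variation followed by adequately determined selection; the genome alphabet, fitness measure, and parameters are all invented for the example.

```python
import random

def mutate(genome, rng, rate=0.1):
    """Stage one (chance): random point mutations, the analogue of
    indeterministic copying errors in DNA."""
    return "".join(rng.choice("ACGT") if rng.random() < rate else base
                   for base in genome)

def fitness(genome, target):
    """How well a genome matches a hypothetical well-adapted sequence."""
    return sum(a == b for a, b in zip(genome, target))

def evolve(target, generations=200, pop_size=20, seed=0):
    """Stage two (necessity): deterministic selection keeps the
    fittest variant in each generation."""
    rng = random.Random(seed)
    best = "".join(rng.choice("ACGT") for _ in target)
    for _ in range(generations):
        variants = [mutate(best, rng) for _ in range(pop_size)]     # chance
        best = max(variants + [best], key=lambda g: fitness(g, target))  # selection
    return best
```

The selection step is strictly deterministic given the variants on offer, while the variants themselves arise randomly: the same division of labor as in the two-stage model of free will.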

Indeed, the story of life is maintaining some information stability (parts of our DNA have been the same for 2.8 billion years) in a chaotic environment – and not the pseudo-random deterministic chaos of the computer theorists, but real irreducible chaos.

Only a believer in metaphysical determinism would deny the evolutionary evidence for indeterminism and two stages, the first microscopic and random (chance), the second macroscopic and adequately determined (choice). Sadly, such a metaphysical belief is the intelligent design position of the creationists.

Of course we are discussing only science, not logical certainty.

So we can also ameliorate John Martin Fischer's nightmare of waking up one morning to a New York Times headline "Causal Determinism Is True" (Four Views, p.44).

Nothing in science is logically true, in the sense of true in all possible worlds, true by the principle of non-contradiction or the weaker law of the excluded middle.  It is the excluded middle argument that leads us to the muddled standard argument against free will.

Our two-stage argument is quite old. We can trace it back to William James (1884, in "The Dilemma of Determinism"), Henri Poincaré (1906), Arthur Holly Compton (1935), and Karl Popper (1961).

What does Information Philosophy have to do with the two-stage model?

Information is the principal reason that biology is not reducible to chemistry and physics. Information is what makes an organism an individual, each with a different history. No atom or molecule has a history. Information is what makes us ourselves. Increasing information is involved in all “emergent” phenomena.

In information philosophy, the future is unpredictable for two basic reasons. First, quantum mechanics shows that some events are not predictable. The world is causal but not determined. Second, the early universe does not contain the information of later times, just as early primates do not contain the information structures for intelligence and verbal communication, and infants do not contain the knowledge and remembered experience they will have as adults.

The universe began in a state of minimal information nearly fourteen billion years ago. Information about the future is always missing, not present until it has been created, after which it is frozen.

John Martin Fischer calls this the “Principle of the Fixity of the Past” (Responsibility and Control, p.22). It suggests that even divine foreknowledge is not present in our open expanding universe, lending support to the religious view called Open Theism.


I am indebted to Kevin Timpe's new book Free Will: Sourcehood and Its Alternatives, which clarified many of the terms in the current debates and greatly aided my rereading of Four Views, especially in elucidating the positions of Fischer and Vargas.

See John Martin Fischer on I-Phi