Reading Alexander Bain

Alexander Bain was a Scottish philosopher who influenced the Americans Charles Sanders Peirce and William James. Meetings of their short-lived “Metaphysical Club” in the early 1870s often included discussions of Bain’s work. Peirce thought the core idea of his new philosophy of pragmatism came from Bain’s definition of a belief as “that upon which a man is prepared to act.”

William James gave belief agential force in his own version of pragmatism, most famously in “The Will to Believe.” The “difference between the objects of will and belief is entirely immaterial, as far as the relation of the mind to them goes.” (Principles, vol.2, p.320) “When a thing is such as to make us act on it, then we believe it, according to Bain,” said James (p.322).

In his 1859 book Emotions and the Will, Bain said

It remains to consider the line of demarcation between belief and mere conceptions involving no belief – there being instances where the one seems to shade into the other. It seems to me impossible to draw this line without referring to action, as the only test, and the essential import of the state of conviction. Even in cases the farthest removed in appearance from any actions of ours, there is no other criterion.
(Emotions and the Will, ch. XI, Liberty and Necessity, sect. 22, p.595)

Bain on Free Will
Bain followed John Locke and considered it absurd to describe the will as “free.” His psychological theory marked the beginning of psychophysical parallelism, and it denied a purely physical or materialist explanation of mind. Knowledge and all mental events flowed from the sensations. The physical body could generate spontaneous movements, but these movements remained lawful and would, in principle, be predictable by a Laplacian intelligence.

Spontaneity, Self-determination. – These names are introduced into the discussion of the will, as aids to the theory of liberty, which they are supposed to elucidate and unfold. That there is such a thing as ‘spontaneity,’ in the action of voluntary agents has been seen in the foregoing pages. The spontaneous beginnings of movement are a result of the physical mechanism under the stimulus of nutrition… There is nothing in all this that either takes human actions out of the sweep of law, or renders liberty and necessity appropriate terms of description… The physical, or nutritive, stimulus is a fact of our constitution, counting at each moment for a certain amount, according to the bodily condition; and if anyone knew exactly the condition of a man or animal in this respect, a correct allowance might be made in the computation of present motives.
(Emotions and the Will, ch. XI, Liberty and Necessity, sect. 7, p.552)

Bain thought that the mind also could generate “outgoing” thoughts and new associations at “random,” but it is likely that his idea of randomness was the prevalent 19th-century view that randomness and chance were just the result of human ignorance and our incapacity to make arbitrarily accurate measurements, following the views of Adolphe Quételet and Henry Thomas Buckle.

When Watt invented his ‘parallel motion’ for the steam engine, his intellect and observation were kept at work, going out in all directions for the chance of some suitable combination rising to view; his sense of the precise thing to be done was the constant touchstone of every contrivance occurring to him, and all the successive suggestions were arrested, or repelled, as they came near to, or disagreed with, this touchstone. The attraction and repulsion were purely volitional effects; they were the continuance of the very same energy that, in his babyhood, made him keep his mouth to his mother’s breast while hunger was felt, and withdraw it when satisfied…

No formal resolution of the mind, adopted after consideration or debate, no special intervention of the ‘ego,’ or the personality, is essential to this putting forth of the energy of retaining on the one hand, or repudiating on the other, what is felt to be clearly suitable, or clearly unsuitable, to the feelings or aims of the moment. The inventor sees the incongruity of a proposal, and forthwith it vanishes from his view. There may be extraneous considerations happening to keep it up in spite of the volitional stroke of repudiation, but the genuine tendency of the mind is to withdraw all further consideration, on the mere motive of unsuitability; while some other scheme of an opposite nature is, by the same instinct, embraced and held fast.

In all these new constructions, be they mechanical, verbal, scientific, practical, or aesthetical, the outgoings of the mind are necessarily at random; the end alone is the thing that is clear to the view, and with that there is a perception of the fitness of every passing suggestion. The volitional energy keeps up the attention, or the active search, and the moment that anything in point rises before the mind, it springs upon that like a wild beast on its prey. I might go through all the varieties of creative effort, detailed under the law of constructive association, but I should only have to repeat the same observation at every turn.
(Emotions and the Will, ch. IV, Control of Feeling and Thoughts, sect. 8, Constructive Association a Voluntary Process, p.413-4)

Among Bain’s many accomplishments was the founding of the influential philosophical journal, Mind, in 1876.

See Alexander Bain on I-Phi

Reading Philippa Foot

Philippa Foot was an Oxford-trained philosopher who argued for a neo-Aristotelian virtue ethics as opposed to deontology, utilitarianism, or consequentialism in ethics.

Foot created the famous moral thought experiment known as the trolley problem.

In 1957 she wrote an article in The Philosophical Review entitled “Free Will As Involving Determinism.” Foot criticized arguments that free will requires determinism, and in particular the idea that one could not be held responsible for “chance” actions chosen for no particular reason.

Her article begins with the observation that determinism has become widely accepted as compatible with free will.

The idea that free will can be reconciled with the strictest determinism is now very widely accepted. To say that a man acted freely is, it is often suggested, to say that he was not constrained, or that he could have done otherwise if he had chosen, or something else of that kind; and since these things could be true even if his action was determined it seems that there could be room for free will even within a universe completely subject to causal laws. (The Philosophical Review, vol. LXVI, 1957, p.439)

Foot’s estimate of the wide acceptance of determinism is correct, but hard to reconcile with quantum indeterminacy in modern physics, as Elizabeth Anscombe pointed out years later in her 1971 inaugural lecture at Cambridge.

It has taken the inventions of indeterministic physics to shake the rather common dogmatic conviction that determinism is a presupposition or perhaps a conclusion, of scientific knowledge. Not that that conviction has been very much shaken even so…I find deterministic assumptions more common now among people at large, and among philosophers, than when I was an undergraduate. (Causality and Determination, 1971, p.28)

Foot examines arguments by David Hume, R. E. Hobart (the pseudonym of Dickinson S. Miller, a student and later colleague of William James), P. H. Nowell-Smith, Gilbert Ryle, and A. J. Ayer.

Foot correctly doubted that the ordinary-language sense in which our actions are “determined” by motives means the same thing as strict physical determinism, which assumes a causal law that determines every event in the future of the universe. She cites Bertrand Russell’s view of causal determinism:

The law of universal causation . . . may be enunciated as follows:…given the state of the whole universe,…every previous and subsequent event can theoretically be determined.

Foot is also skeptical of the simple logical argument that everything happens either by chance or because it is causally determined. This is the standard argument against free will that makes indeterminism and determinism the two horns of a logical dilemma.

Foot notes that our normal use of “determined” does not imply universal determinism.

For instance, an action said to be determined by the desires of the man who does it is not necessarily an action for which there is supposed to be a sufficient condition. In saying that it is determined by his desires we may mean merely that he is doing something that he wants to do, or that he is doing it for the sake of something else that he wants. There is nothing in this to suggest determinism in Russell’s sense. (ibid, p.441)

And when we do something “by chance” it may not mean physically undetermined, and may not be used to deny responsibility.

It is not at all clear that when actions or choices are called “chance” or “accidental” this has anything to do with the absence of causes… Ayer says, “Either it is an accident that I choose to act as I do, or it is not.” The notion of choosing by accident to do something is on the face of it puzzling; for usually choosing to do something is opposed to doing it by accident. What does it mean to say that the choice itself was accidental? (p.449-50)

If I say that it was a matter of chance that I chose to do something,…I do not imply that there was no reason for my doing what I did, and I say nothing whatsoever about my choice being undetermined. If we use “chance” and “accident” as Ayer wants to use them, to signify the absence of causes, we shall have moved over to a totally different sense of the words, and “I chose it by chance” can no longer be used to disclaim responsibility. (p.450)

Foot does not see that the role of chance and indeterminism might simply be to provide “free” alternative possibilities for action, to be deliberated upon and used as causes or reasons behind motives of our “will” as we choose to act.

She also does not seem to know that Hobart’s 1934 article was entitled “Free Will As Involving Determination And Inconceivable Without It.” In her reference (note 5), she thinks Hobart’s article has the same title she is using – “Free Will As Involving Determinism”.

See Philippa Foot on I-Phi

Reading David Armstrong

David Malet Armstrong’s book Belief, Truth and Knowledge (1973, pp.150-61) contains an important analysis of the infinite regress of inferences – “reasons behind the reasons” – first noticed by Plato in the Theaetetus (200D-201C).

Knowledge traditionally entails true belief, but true belief does not entail knowledge.

Knowledge is true belief plus some justification in the form of reasons or evidence. But that evidence must itself be knowledge, which in turn must be justified, leading to a regress.

Following some unpublished work of Gregory O’Hair, Armstrong identifies and diagrams several possible ways to escape Plato’s regress, including:

  • Skepticism – knowledge is impossible
  • The regress is infinite but virtuous
  • The regress is finite, but has no end (Coherence view)
  • The regress ends in self-evident truths (Foundationalist view)
  • Non-inferential credibility, such as direct sense perceptions
  • Externalist theories (O’Hair is the source of the term “externalist”)
      • Causal view (Ramsey)
      • Reliability view (Ramsey)

Armstrong is cited by Hilary Kornblith and other epistemologists as restoring interest in “externalist” justification of knowledge. Since Descartes, epistemology had been focused on “internalist” justifications.

Armstrong does not subscribe to traditional views of justifying true beliefs, but he cited “causal” and “reliabilist” theories as direct non-inferential validation of knowledge. Direct validation or justification avoids the problem of the infinite regress of inferences.

Causal and reliabilist theories also were not original with Armstrong. He referred to the 1929 work of Frank Ramsey. Today these ideas are primarily associated with the name of Alvin Goldman, who put forward both “causal” and “reliabilist” theories of justification for true beliefs.

Here is how Armstrong described “causal” and “reliabilist” views:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. But they still have a peculiar importance in the case of non-inferential knowledge because they serve to solve the problem of the infinite regress.

Externalist theories may be further sub-divided into ‘Causal’ and ‘Reliability’ theories.

6 (i) Causal theories. The central notion in causal theories may be illustrated by the simplest case. The suggestion is that Bap [‘a believes p’] is a case of Kap [‘a knows p’] if ‘p’ is true and, furthermore, the situation that makes ‘p’ true is causally responsible for the existence of the belief-state Bap. I not only believe, but know, that the room is rather hot. Now it is certainly the excessive heat of the room which has caused me to have this belief. This causal relation, it may then be suggested, is what makes my belief a case of knowledge.

Ramsey’s brief note on ‘Knowledge’, to be found among his ‘Last Papers’ in The Foundations of Mathematics, puts forward a causal view. A sophisticated recent version of a causal theory is to be found in ‘A Causal Theory of Knowing’ by Alvin I. Goldman (Goldman 1967).

Causal theories face two main types of difficulty. In the first place, even if we restrict ourselves to knowledge of particular matters of fact, not every case of knowledge is a case where the situation known is causally responsible for the existence of the belief. For instance, we appear to have some knowledge of the future. And even if all such knowledge is in practice inferential, non-inferential knowledge of the future (for example, that I will be ill tomorrow) seems to be an intelligible possibility. Yet it could hardly be held that my illness tomorrow causes my belief today that I will be ill tomorrow. Such cases can perhaps be dealt with by sophisticating the Causal analysis. In such a case, one could say, both the illness tomorrow and today’s belief that I will be ill tomorrow have a common cause, for instance some condition of my body today which not only leads to illness but casts its shadow before by giving rise to the belief. (An ‘early-warning’ system.)

In the second place, and much more seriously, cases can be envisaged where the situation that makes ‘p’ true gives rise to Bap, but we would not want to say that A knew that p. Suppose, for instance, that A is in a hypersensitive and deranged state, so that almost any considerable sensory stimulus causes him to believe that there is a sound of a certain sort in his immediate environment. Now suppose that, on a particular occasion, the considerable sensory stimulus which produces that belief is, in fact, a sound of just that sort in his immediate environment. Here the p-situation produces Bap, but we would not want to say that it was a case of knowledge.

I believe that such cases can be excluded only by filling out the Causal Analysis with a Reliability condition. But once this is done, I think it turns out that the Causal part of the analysis becomes redundant, and that the Reliability condition is sufficient by itself for giving an account of non-inferential (and inferential) knowledge.

6 (ii) Reliability theories. The second ‘Externalist’ approach is in terms of the empirical reliability of the belief involved. Knowledge is empirically reliable belief. Since the next chapter will be devoted to a defence of a form of the Reliability view, it will be only courteous to indicate the major precursors of this sort of view which I am acquainted with.

Once again, Ramsey is the pioneer. The paper ‘Knowledge’, already mentioned, combines elements of the Causal and the Reliability view. There followed John Watling’s ‘Inference from the Known to the Unknown’ (Watling 1954), which first converted me to a Reliability view. Since then there has been Brian Skyrms’ very difficult paper ‘The Explication of “X knows that p” ‘ (Skyrms 1967), and Peter Unger’s ‘An Analysis of Factual Knowledge’ (Unger 1968), both of which appear to defend versions of the Reliability view. There is also my own first version in Chapter Nine of A Materialist Theory of the Mind. A still more recent paper, which I think can be said to put forward a Reliability view, and which in any case anticipates a number of the results I arrive at in this Part, is Fred Dretske’s ‘Conclusive Reasons’ (Dretske 1971).

Hilary Kornblith on Armstrong
The Terms “Internalism” and “Externalism”
The terms “internalism” and “externalism” are used in philosophy in a variety of different senses, but their use in epistemology for anything like the positions which are the focus of this book dates to 1973. More precisely, the word “externalism” was introduced in print by David Armstrong in his book Belief, Truth and Knowledge in the following way:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. (Belief, Truth and Knowledge, p.157)

So in Armstrong’s usage, “externalism” is a view about knowledge, and it is the view that when a person knows that a particular claim p is true, there is some sort of “natural relation” which holds between that person’s belief that p and the world. One such view, suggested in 1967 by Alvin Goldman, was the Causal Theory of Knowledge. On this view, a person knows that p (for example, that it’s raining) when that person’s belief that p was caused by the fact that p. A related view, championed by Armstrong and later by Goldman as well, is the Reliability Account of Knowledge, according to which a person knows that p when that person’s belief is both true and, in some sense, reliable: on some views, the belief must be a reliable indicator that p; on others, the belief must be produced by a reliable process, that is, one that tends to produce true beliefs. Frank Ramsey was a pioneer in defending a reliability account of knowledge. Particularly influential work in developing such an account was also done by Brian Skyrms, Peter Unger, and Fred Dretske.

Accounts of knowledge which are externalist in Armstrong’s sense mark an important break with tradition, according to which knowledge is a kind of justified, true belief. On traditional accounts, in part because justification is an essential ingredient in knowledge, a central task of epistemology is to give an account of what justification consists in. And, according to tradition, what is required for a person to be justified in holding a belief is for that person to have a certain justification for the belief, where having a justification is typically identified with being in a position, in some relevant sense, to produce an appropriate argument for the belief in question. What is distinctive about externalist accounts of knowledge, as Armstrong saw it, was that they do not require justification, at least in the traditional sense. Knowledge merely requires having a true belief which is appropriately connected with the world.

But while Armstrong’s way of viewing reliability accounts of knowledge has them rejecting the view that knowledge requires justified true belief, Alvin Goldman came to offer quite a different way of viewing the import of reliability theories: in 1979, Goldman suggested that instead of seeing reliability accounts as rejecting the claim that knowledge requires justified true belief, we should instead embrace an account which identifies justified belief with reliably produced belief. Reliability theories of knowledge, on this way of understanding them, offer a non-traditional account of what is required for a belief to be justified. This paper of Goldman’s, and his subsequent extended development of the idea, have been at the center of epistemological discussion ever since.

See David M. Armstrong on I-Phi

Epistemology Next

For the time being, the Information Philosopher will turn attention to our second major effort, the significance of information for the problem of knowledge.

Knowing how we know is a fundamentally circular problem when it is described in human language. And knowing something about what is adds another circle, if the knowing being must itself be one of those things that exists.

These circular definitions and inferences need not be vicious circles. They may simply be a coherent set of ideas that we use to describe ourselves and the external world. If the descriptions are logically valid, or verifiable empirically, we think we are approaching the “truth” about things and acquiring knowledge.

How then do we describe the knowledge itself – as an existing thing in our existent minds and in the existing external world? Information philosophy does it by basing everything on the abstract but quantitative notion of information.

Information is stored or encoded in structures. Structures in the world build themselves, following natural laws, including physical and biological laws. Structures in the mind are partly built by biological processes and partly built by human intelligence, which is free, creative, and unpredictable.

Knowledge is information created and stored in minds and in human artifacts like stories, books, and internetworked computers.

Knowledge is actionable information in minds that forms the basis for thoughts, actions, and beliefs.

Knowledge includes all the cultural information created by human societies. It also includes the theories and experiments of scientists, who collaborate to establish our knowledge of the external world. This knowledge comes closest to being independent of any human mind.

To the extent of the correspondence, the isomorphism, the one-to-one mapping, between information structures (and processes) in the world and representative structures and functions in the mind, information philosophy claims that we have quantifiable personal or subjective knowledge of the world.

To the extent of the agreement (again a correspondence or isomorphism) between information in the minds of an open community of inquirers seeking the best explanations for phenomena, information philosophy further claims that we have quantifiable inter-subjective knowledge of other minds and an external world. This is as close as we come to “objective” knowledge, and knowledge of objects – to Kant’s “things in themselves.”

Knowledge has historically been identified by philosophers with language, logic, and human beliefs. Epistemologists, from Plato’s Theaetetus and Aristotle’s Posterior Analytics to modern language philosophers, identify knowledge with statements or propositions that can be logically analyzed and validated.

Specifically, traditional epistemology defines knowledge as “justified true belief.” Subjective beliefs are usually stated in terms of propositions. For example,

S knows that P if and only if
(i) S believes that P,
(ii) P is true, and
(iii) S is justified in believing that P.
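The three conditions can be written as a compact formal sketch, here in Lean notation. `Believes`, `Justified`, and `Knows` are hypothetical names for the attitudes of a fixed subject S; they are illustrations, not anything defined in the epistemological literature itself.

```lean
-- Hypothetical primitive attitudes of a fixed subject S (assumed, not derived).
axiom Believes : Prop → Prop
axiom Justified : Prop → Prop

-- The traditional JTB analysis: knowledge is justified true belief.
def Knows (P : Prop) : Prop :=
  Believes P ∧ P ∧ Justified P
```

The sketch makes visible why each of the three conjuncts can fail independently, which is the point of the numbered objections that follow.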

In the long history of the problem of knowledge, all three of these knowledge or belief “conditions” have proved very difficult for epistemologists. Among the reasons…

(i) A belief is an internal mental state beyond the full comprehension of expert external observers. Even the subject herself has limited immediate access to all she knows or believes. On deeper reflection, or consulting external sources of knowledge, she might “change her mind.”

(ii) The truth about any fact in the world is vulnerable to skeptical or sophistical attack. The concept of truth should be limited to uses within logical and mathematical systems of thought. Real world “truths” are always fallible and revisable in the light of new knowledge.

(iii) The notion of justification of a belief by providing reasons is vague, circular, or an infinite regress. What reasons can be given that do not themselves require further reasons? In view of (i) and (ii), what value is there in a “justification” that is fallible or, worse, false?

(iv) Epistemologists have primarily studied personal or subjective beliefs. Fearful of competition from empirical science and its method for establishing knowledge, they emphasize that justification must be based on reasons internally accessible to the subject. Some mis-describe as “external” a subject’s unconscious beliefs or beliefs unavailable to immediate memory. These are merely inaccessible, perhaps only temporarily.

(v) The emphasis on logic has led some epistemologists to claim that knowledge is closed under (strict or material) implication. This assumes that the process of ordinary knowing is informed by logic, in particular that

(Closure) If S knows that P, and P implies Q, then S knows that Q.

We can only say that S is in a position to deduce Q, and then only if she is trained in logic.

It is no surprise that epistemologists have failed in every effort to put knowledge on a sound basis, let alone establish knowledge with apodeictic certainty, as Plato and Aristotle expected and René Descartes thought he had established beyond any reasonable doubt.

Perhaps overreacting to the threat from science as a demonstrably more successful method for establishing knowledge, epistemologists have hoped to differentiate and preserve their own philosophical approach. Some have held on to the goal of logical positivism (e.g., Russell, early Wittgenstein, and the Vienna Circle) that philosophical analysis would provide an a priori normative ground for merely empirical scientific knowledge.

Logical positivist arguments for the non-inferential self-validation of logical atomic perceptions like “red, here, now” have perhaps misled some epistemologists to think that personal perceptions can directly justify some “foundationalist” beliefs.

The philosophical method of linguistic analysis (inspired by the later Wittgenstein) has not achieved much more. It is unlikely that knowledge of any kind reduces simply to the careful conceptual analysis of sentences, statements, and propositions.

Information philosophy looks deeper than the surface ambiguities of language.


Information philosophy distinguishes at least three kinds of knowledge, each requiring its own special epistemological analysis:

  • Subjective or personal knowledge, including introspection and intuition, as well as communications with and perceptions of other persons and the external world.
  • Communal or social knowledge of cultural creations, including fiction, myths, conventions, laws, history, etc.
  • Knowledge of a mind-independent physical external world.

When information is stored in any structure, whether the world, human artifacts, or a mind, two fundamental physical processes occur. First is a collapse of a quantum mechanical wave function. Second is a local decrease in the entropy corresponding to the increase in information. Entropy greater than this decrease must be transferred away to satisfy the second law.
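The correspondence between information gained and the local entropy decrease can be made concrete with Shannon’s formula. This is a minimal sketch, assuming a hypothetical eight-state register; the function name `entropy_bits` is illustrative, not from any source discussed here.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An unwritten 8-state register: all states equally likely.
before = entropy_bits([1 / 8] * 8)   # 3.0 bits of uncertainty

# After one definite state is recorded, uncertainty drops to zero.
after = entropy_bits([1.0])          # 0.0 bits

# The 3-bit local decrease in entropy equals the information gained;
# at least that much entropy must be exported to the surroundings
# to satisfy the second law.
information_gained = before - after
print(information_gained)  # 3.0
```

The register here stands in for any structure that stores information; the bookkeeping is the same whether the structure is physical, biological, or mental.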

These quantum-level processes are susceptible to noise. Information stored may have errors. When information is retrieved, it is again susceptible to noise, which may garble the information content. In information science, noise is generally the enemy of information. But some noise is the friend of freedom, since it is the source of novelty, of creativity and invention, and of variation in the biological gene pool.

Biological systems have maintained and increased their invariant information content over billions of generations. Humans increase our knowledge of the external world, despite logical, mathematical, and physical uncertainty. Both do it in the face of random noise, bringing order (or cosmos) out of chaos. Both do it with sophisticated error detection and correction schemes that limit the effects of chance. The scheme we use to correct human knowledge is science, a combination of freely invented theories and adequately determined experiments.
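The idea of an error-correction scheme that limits the effects of chance can be illustrated with the simplest such code, a triple-repetition code. This is a hedged sketch of the general principle only, not a model of any biological or scientific mechanism mentioned above.

```python
from collections import Counter

def encode(bits):
    """Triple-repetition code: transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each triple corrects any single bit-flip per triple."""
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(Counter(triple).most_common(1)[0][0])
    return out

message = [1, 0, 1, 1]
sent = encode(message)

# Random noise flips one copy in two of the triples;
# majority voting still recovers the original message.
noisy = sent[:]
noisy[1] ^= 1  # corrupt one copy of the first bit
noisy[9] ^= 1  # corrupt one copy of the last bit

print(decode(noisy) == message)  # True
```

Real schemes (parity checks, Hamming codes, DNA proofreading enzymes) are far more efficient, but all share this structure: redundancy plus a detection rule that brings order out of noise.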