Schrödinger’s Cat

Erwin Schrödinger’s intention for his infamous cat-killing box was to discredit certain non-intuitive implications of quantum mechanics, of which his wave mechanics was an early mathematical formulation.
Albert Einstein originated the suggestion that the superposition of Schrödinger’s wave functions implies that two different physical states can exist at the same time. This is correct for so-called “entangled” states, but it applies only to atomic-level phenomena and over limited distances that preserve the coherence of the wave functions.

Einstein wrote to Schrödinger with the idea that the decay of a radioactive nucleus could be arranged to set off a large explosion. Since the moment of decay is unknown, Einstein argued that the superposition of decayed and undecayed nuclear states implies the superposition of an explosion and no explosion. Many years later, Richard Feynman made this a nuclear explosion! (What is it about some scientists?)

Einstein and Schrödinger did not like the fundamental randomness implied by quantum mechanics. They wanted to restore determinism to physics. Indeed, Schrödinger’s wave equation predicts a perfectly deterministic time evolution of the wave function. Randomness enters only when a measurement is made and the wave function “collapses.”

Schrödinger devised a variation in which the random radioactive decay would kill a cat. Observers cannot know what has happened until the box is opened. The details of the tasteless experiment include:

  • a bit of radioactive material with a half-life such that it is likely to emit an alpha particle during a time T
  • a Geiger counter which produces an avalanche of electrons when the alpha particle passes through it
  • an electrical circuit energized by the electrons which drops a hammer
  • a flask of deadly hydrocyanic acid gas, smashed open by the hammer.

The gas will kill the cat, but the exact time of death is unpredictable and random because of irreducible quantum indeterminacy. This thought experiment is widely misunderstood. It was meant to suggest that quantum mechanics describes the simultaneous (and obviously contradictory) existence of a live and dead cat. Here is the famous diagram with a cat both dead and alive.

What’s wrong with this picture?

Quantum mechanics claims only that the time evolution of the Schrödinger wave functions for the probability amplitudes of nuclear decay accurately predicts the proportion of nuclear decays that will occur in a given time interval. More specifically, quantum mechanics provides us with the accurate prediction that if this experiment is repeated many times (the SPCA would disapprove), half of the experiments will result in dead cats.

Note that this is a problem in epistemology. What knowledge is it that quantum physics provides?

If we open the box at the time T when there is a 50% probability of an alpha particle emission, the most a physicist can know is that there is a 50% chance that the radioactive decay will have occurred and the cat will be observed as dead or dying.

If the box were opened earlier, say at T/2, there is only about a 29% chance that the cat has died (the decay probability is 1 − 2^(−t/T), which is not linear in time). Schrödinger’s superposition of live and dead cats would look like this.

If the box were opened later, say at 2T, there is only a 25% chance that the cat is still alive. Quantum mechanics is giving us only statistical information – knowledge about probabilities.
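These percentages follow directly from the exponential decay law: the probability that the nucleus has decayed by time t is 1 − 2^(−t/T). A minimal sketch in Python (the unit half-life and sample times are illustrative):

    def prob_dead(t, half_life):
        """Probability that the decay (and hence the cat's death) has occurred by time t."""
        return 1.0 - 2.0 ** (-t / half_life)

    T = 1.0  # measure time in units of the half-life
    for t in (T / 2, T, 2 * T):
        print(f"t = {t:>4} x T:  P(decay) = {prob_dead(t, T):.1%}")

    # Prints roughly 29.3% at T/2, 50.0% at T, and 75.0% at 2T
    # (so a 25% chance the cat is still alive at 2T).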

Schrödinger is simply wrong that the mixture of nuclear wave functions that accurately describes decay can be magnified to the macroscopic world to describe a similar mixture of live cat and dead cat wave functions and the simultaneous existence of live and dead cats.

What do exist simultaneously in the macroscopic world are genuine alternative possibilities for future events. This is what bothered physicists like Einstein, Schrödinger, and Max Planck, who wanted a return to deterministic physics. It also bothers determinist and compatibilist philosophers who have what William James calls an “antipathy to chance.”

Until the information comes into existence, the future is indeterministic. Once information is macroscopically encoded, the past is determined.

How does information physics resolve the paradox?

As soon as the alpha particle sets off the avalanche of electrons in the Geiger counter (an irreversible event with a significant entropy increase), new information is created in the world. For example, a simple pen chart recorder attached to the Geiger counter could record the time of decay. Notice that, as usual in information creation, the energy expended by the recorder increases the entropy more than the increased information decreases it, thus satisfying the second law of thermodynamics.
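Landauer’s principle (not named in the text, but it makes the same point quantitative) puts a number on this trade-off: recording one bit lowers the record’s entropy by at most k ln 2, so at least kT ln 2 of heat must be dissipated into the environment. A minimal sketch, assuming room temperature:

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T_room = 300.0       # room temperature, kelvin

    # Recording one bit ("decayed" or "not decayed") lowers the record's
    # entropy by at most k_B * ln 2 ...
    entropy_decrease = k_B * math.log(2)

    # ... so the recorder must export at least this much entropy as heat.
    min_heat = T_room * entropy_decrease

    print(f"Entropy decrease for one recorded bit: {entropy_decrease:.2e} J/K")
    print(f"Minimum heat dissipated at 300 K:      {min_heat:.2e} J")

Any real pen recorder dissipates enormously more than this minimum, which is why the total entropy always increases.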

Even without a mechanical recorder, the cat’s death sets in motion biological processes that constitute an equivalent, if gruesome, recording. When a dead cat is the result, a sophisticated autopsy can tell when Schrödinger’s cat died, because the cat’s body acts as an event recorder. There never is a superposition of live and dead cats.

The paradox points clearly to the Information Philosophy solution to the problem of measurement. Human observers are not required to make measurements. The cat is the observer.

In most physics measurements, the new information is captured by the apparatus well before any physicist has a chance to read any dials or pointers that indicate what happened. Indeed, in today’s high-energy particle interaction experiments, the data may be captured but not fully analyzed until many days or even months of computer processing establish what was observed. In this case, the experimental apparatus is the observer.

See Erwin Schrödinger on I-Phi 

Reading David Armstrong

David Malet Armstrong’s book Belief, Truth and Knowledge (1973, pp. 150-61) contains an important analysis of the infinite regress of inferences – “reasons behind the reasons” – first noticed by Plato in the Theaetetus (200D-201C).

Knowledge traditionally entails true belief, but true belief does not entail knowledge.

Knowledge is true belief plus some justification in the form of reasons or evidence. But that evidence must itself be knowledge, which in turn must be justified, leading to a regress.

Following some unpublished work of Gregory O’Hair, Armstrong identifies and diagrams several possible ways to escape Plato’s regress, including:

  • Skepticism – knowledge is impossible
  • The regress is infinite but virtuous
  • The regress is finite, but has no end (Coherence view)
  • The regress ends in self-evident truths (Foundationalist view)
  • Non-inferential credibility, such as direct sense perceptions
  • Externalist theories (O’Hair is the source of the term “externalist”)
      – Causal view (Ramsey)
      – Reliability view (Ramsey)

Armstrong is cited by Hilary Kornblith and other epistemologists as restoring interest in “externalist” justification of knowledge. Since Descartes, epistemology had been focused on “internalist” justifications.

Armstrong does not subscribe to traditional views of justifying true beliefs, but he cites “causal” and “reliabilist” theories as providing direct, non-inferential validation of knowledge. Direct validation or justification avoids the problem of the infinite regress of inferences.

Causality and reliabilism also were not original with Armstrong. He referred to the 1929 work of Frank Ramsey. Today these ideas are primarily associated with the name of Alvin Goldman, who put forward both “causal” and “reliabilist” theories of justification for true beliefs.

Here is how Armstrong described “causal” and “reliabilist” views:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. But they still have a peculiar importance in the case of non-inferential knowledge because they serve to solve the problem of the infinite regress.

Externalist theories may be further sub-divided into ‘Causal’ and ‘Reliability’ theories.

6 (i) Causal theories. The central notion in causal theories may be illustrated by the simplest case. The suggestion is that Bap [‘a believes p’] is a case of Kap [‘a knows p’] if ‘p’ is true and, furthermore, the situation that makes ‘p’ true is causally responsible for the existence of the belief-state Bap. I not only believe, but know, that the room is rather hot. Now it is certainly the excessive heat of the room which has caused me to have this belief. This causal relation, it may then be suggested, is what makes my belief a case of knowledge.

The source for causal theories is Frank Ramsey (1929).

Ramsey’s brief note on ‘Knowledge’, to be found among his ‘Last Papers’ in The Foundations of Mathematics, puts forward a causal view. A sophisticated recent version of a causal theory is to be found in ‘A Causal Theory of Knowing’ by Alvin I. Goldman (Goldman 1967).

Causal theories face two main types of difficulty. In the first place, even if we restrict ourselves to knowledge of particular matters of fact, not every case of knowledge is a case where the situation known is causally responsible for the existence of the belief. For instance, we appear to have some knowledge of the future. And even if all such knowledge is in practice inferential, non-inferential knowledge of the future (for example, that I will be ill tomorrow) seems to be an intelligible possibility. Yet it could hardly be held that my illness tomorrow causes my belief today that I will be ill tomorrow. Such cases can perhaps be dealt with by sophisticating the Causal analysis. In such a case, one could say, both the illness tomorrow and today’s belief that I will be ill tomorrow have a common cause, for instance some condition of my body today which not only leads to illness but casts its shadow before by giving rise to the belief. (An ‘early-warning’ system.)

In the second place, and much more seriously, cases can be envisaged where the situation that makes ‘p’ true gives rise to Bap, but we would not want to say that A knew that p. Suppose, for instance, that A is in a hypersensitive and deranged state, so that almost any considerable sensory stimulus causes him to believe that there is a sound of a certain sort in his immediate environment. Now suppose that, on a particular occasion, the considerable sensory stimulus which produces that belief is, in fact, a sound of just that sort in his immediate environment. Here the p-situation produces Bap, but we would not want to say that it was a case of knowledge.

I believe that such cases can be excluded only by filling out the Causal Analysis with a Reliability condition. But once this is done, I think it turns out that the Causal part of the analysis becomes redundant, and that the Reliability condition is sufficient by itself for giving an account of non-inferential (and inferential) knowledge.

6 (ii) Reliability theories. The second ‘Externalist’ approach is in terms of the empirical reliability of the belief involved. Knowledge is empirically reliable belief. Since the next chapter will be devoted to a defence of a form of the Reliability view, it will be only courteous to indicate the major precursors of this sort of view which I am acquainted with.

Ramsey is the source for reliabilist views as well.

Once again, Ramsey is the pioneer. The paper ‘Knowledge’, already mentioned, combines elements of the Causal and the Reliability view. There followed John Watling’s ‘Inference from the Known to the Unknown’ (Watling 1954), which first converted me to a Reliability view. Since then there has been Brian Skyrms’ very difficult paper ‘The Explication of “X knows that p”’ (Skyrms 1967), and Peter Unger’s ‘An Analysis of Factual Knowledge’ (Unger 1968), both of which appear to defend versions of the Reliability view. There is also my own first version in Chapter Nine of A Materialist Theory of the Mind. A still more recent paper, which I think can be said to put forward a Reliability view, and which in any case anticipates a number of the results I arrive at in this Part, is Fred Dretske’s ‘Conclusive Reasons’ (Dretske 1971).

Hilary Kornblith on Armstrong
The Terms “Internalism” and “Externalism”
The terms “internalism” and “externalism” are used in philosophy in a variety of different senses, but their use in epistemology for anything like the positions which are the focus of this book dates to 1973. More precisely, the word “externalism” was introduced in print by David Armstrong in his book Belief, Truth and Knowledge in the following way:

According to “Externalist” accounts of non-inferential knowledge, what makes a true non-inferential belief a case of knowledge is some natural relation which holds between the belief-state, Bap [‘a believes p’], and the situation which makes the belief true. It is a matter of a certain relation holding between the believer and the world. It is important to notice that, unlike “Cartesian” and “Initial Credibility” theories, Externalist theories are regularly developed as theories of the nature of knowledge generally and not simply as theories of non-inferential knowledge. (Belief, Truth and Knowledge, p.157)

So in Armstrong’s usage, “externalism” is a view about knowledge, and it is the view that when a person knows that a particular claim p is true, there is some sort of “natural relation” which holds between that person’s belief that p and the world. One such view, suggested in 1967 by Alvin Goldman, was the Causal Theory of Knowledge. On this view, a person knows that p (for example, that it’s raining) when that person’s belief that p was caused by the fact that p. A related view, championed by Armstrong and later by Goldman as well, is the Reliability Account of Knowledge, according to which a person knows that p when that person’s belief is both true and, in some sense, reliable: on some views, the belief must be a reliable indicator that p; on others, the belief must be produced by a reliable process, that is, one that tends to produce true beliefs. Frank Ramsey was a pioneer in defending a reliability account of knowledge. Particularly influential work in developing such an account was also done by Brian Skyrms, Peter Unger, and Fred Dretske.

Accounts of knowledge which are externalist in Armstrong’s sense mark an important break with tradition, according to which knowledge is a kind of justified, true belief. On traditional accounts, in part because justification is an essential ingredient in knowledge, a central task of epistemology is to give an account of what justification consists in. And, according to tradition, what is required for a person to be justified in holding a belief is for that person to have a certain justification for the belief, where having a justification is typically identified with being in a position, in some relevant sense, to produce an appropriate argument for the belief in question. What is distinctive about externalist accounts of knowledge, as Armstrong saw it, was that they do not require justification, at least in the traditional sense. Knowledge merely requires having a true belief which is appropriately connected with the world.

But while Armstrong’s way of viewing reliability accounts of knowledge has them rejecting the view that knowledge requires justified true belief, Alvin Goldman came to offer quite a different way of viewing the import of reliability theories: in 1979, Goldman suggested that instead of seeing reliability accounts as rejecting the claim that knowledge requires justified true belief, we should instead embrace an account which identifies justified belief with reliably produced belief. Reliability theories of knowledge, on this way of understanding them, offer a non-traditional account of what is required for a belief to be justified. This paper of Goldman’s, and his subsequent extended development of the idea, have been at the center of epistemological discussion ever since.

See David Armstrong on I-Phi

Epistemology Next

For the time being, the Information Philosopher will turn attention to our second major effort, the significance of information for the problem of knowledge.

Knowing how we know is a fundamentally circular problem when it is described in human language. And knowing something about what is adds another circle, if the knowing being must itself be one of those things that exists.

These circular definitions and inferences need not be vicious circles. They may simply be a coherent set of ideas that we use to describe ourselves and the external world. If the descriptions are logically valid, or verifiable empirically, we think we are approaching the “truth” about things and acquiring knowledge.

How then do we describe the knowledge itself – as an existing thing in our existent minds and in the existing external world? Information philosophy does it by basing everything on the abstract but quantitative notion of information.

Information is stored or encoded in structures. Structures in the world build themselves, following natural laws, including physical and biological laws. Structures in the mind are partly built by biological processes and partly built by human intelligence, which is free, creative, and unpredictable.

Knowledge is information created and stored in minds and in human artifacts like stories, books, and internetworked computers.

Knowledge is actionable information in minds that forms the basis for thoughts, actions, and beliefs.

Knowledge includes all the cultural information created by human societies. It also includes the theories and experiments of scientists, who collaborate to establish our knowledge of the external world. This knowledge comes closest to being independent of any human mind.

To the extent of the correspondence, the isomorphism, the one-to-one mapping, between information structures (and processes) in the world and representative structures and functions in the mind, information philosophy claims that we have quantifiable personal or subjective knowledge of the world.

To the extent of the agreement (again a correspondence or isomorphism) between information in the minds of an open community of inquirers seeking the best explanations for phenomena, information philosophy further claims that we have quantifiable inter-subjective knowledge of other minds and an external world. This is as close as we come to “objective” knowledge, and knowledge of objects – to Kant’s “things in themselves.”

Knowledge has historically been identified by philosophers with language, logic, and human beliefs. Epistemologists, from Plato’s Theaetetus and Aristotle’s Posterior Analytics to modern language philosophers, identify knowledge with statements or propositions that can be logically analyzed and validated.

Specifically, traditional epistemology defines knowledge as “justified true belief.” Subjective beliefs are usually stated in terms of propositions. For example,

S knows that P if and only if
(i) S believes that P,
(ii) P is true, and
(iii) S is justified in believing that P.

In the long history of the problem of knowledge, all three of these knowledge or belief “conditions” have proved very difficult for epistemologists. Among the reasons…

(i) A belief is an internal mental state beyond the full comprehension of expert external observers. Even the subject herself has limited immediate access to all she knows or believes. On deeper reflection, or on consulting external sources of knowledge, she might “change her mind.”

(ii) The truth about any fact in the world is vulnerable to skeptical or sophistical attack. The concept of truth should be limited to uses within logical and mathematical systems of thought. Real-world “truths” are always fallible and revisable in the light of new knowledge.

(iii) The notion of justification of a belief by providing reasons is vague, circular, or an infinite regress. What reasons can be given that do not themselves require justifying reasons? In view of (i) and (ii), what value is there in a “justification” that is fallible or, worse, false?

(iv) Epistemologists have primarily studied personal or subjective beliefs. Fearful of competition from empirical science and its method for establishing knowledge, they emphasize that justification must be based on reasons internally accessible to the subject. Some mis-describe as “external” a subject’s unconscious beliefs or beliefs unavailable to immediate memory. These are merely inaccessible, perhaps only temporarily.

(v) The emphasis on logic has led some epistemologists to claim that knowledge is closed under (strict or material) implication. This assumes that the process of ordinary knowing is informed by logic, in particular that

(Closure) If S knows that P, and P implies Q, then S knows that Q.

We can say only that S is in a position to deduce Q, and then only if she is trained in logic.
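In standard epistemic-logic notation, closure under material implication reads

    Kp ∧ (p → q) → Kq

while the weaker and more commonly defended principle requires that the implication itself be known:

    Kp ∧ K(p → q) → Kq

Even the weaker form fails for ordinary knowers, who must actually perform the deduction.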

It is no surprise that epistemologists have failed in every effort to put knowledge on a sound basis, let alone establish knowledge with apodeictic certainty, as Plato and Aristotle expected and René Descartes thought he had established beyond any reasonable doubt.

Perhaps overreacting to the threat from science as a demonstrably more successful method for establishing knowledge, epistemologists have hoped to differentiate and preserve their own philosophical approach. Some have held on to the goal of logical positivism (e.g., Russell, early Wittgenstein, and the Vienna Circle) that philosophical analysis would provide an a priori normative ground for merely empirical scientific knowledge.

Logical positivist arguments for the non-inferential self-validation of logical atomic perceptions like “red, here, now” have perhaps misled some epistemologists to think that personal perceptions can directly justify some “foundationalist” beliefs.

The philosophical method of linguistic analysis (inspired by the later Wittgenstein) has not achieved much more. It is unlikely that knowledge of any kind reduces simply to the careful conceptual analysis of sentences, statements, and propositions.

Information philosophy looks deeper than the surface ambiguities of language.


Information philosophy distinguishes at least three kinds of knowledge, each requiring its own special epistemological analysis:

  • Subjective or personal knowledge, including introspection and intuition, as well as communications with and perceptions of other persons and the external world.
  • Communal or social knowledge of cultural creations, including fiction, myths, conventions, laws, history, etc.
  • Knowledge of a mind-independent physical external world.

When information is stored in any structure, whether the world, human artifacts, or a mind, two fundamental physical processes occur. First is a collapse of a quantum mechanical wave function. Second is a local decrease in the entropy corresponding to the increase in information. Entropy greater than that decrease must be transferred away to satisfy the second law.

These quantum-level processes are susceptible to noise. Stored information may contain errors. When information is retrieved, it is again susceptible to noise, which may garble the information content. In information science, noise is generally the enemy of information. But some noise is the friend of freedom, since it is the source of novelty, of creativity and invention, and of variation in the biological gene pool.

Biological systems have maintained and increased their invariant information content over billions of generations. Humans increase our knowledge of the external world, despite logical, mathematical, and physical uncertainty. Both do it in the face of random noise, bringing order (or cosmos) out of chaos. Both do it with sophisticated error detection and correction schemes that limit the effects of chance. The scheme we use to correct human knowledge is science, a combination of freely invented theories and adequately determined experiments.
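A minimal sketch of such a scheme in Python, using a three-fold repetition code as the simplest possible illustration (the text does not commit to any particular code):

    import random

    def encode(bits):
        # Store each bit three times: redundancy is the price of reliability.
        return [b for bit in bits for b in (bit, bit, bit)]

    def add_noise(bits, p=0.05):
        # Flip each stored bit with probability p, modeling random noise.
        return [b ^ 1 if random.random() < p else b for b in bits]

    def decode(bits):
        # Majority vote within each triple corrects any single flipped bit.
        return [1 if sum(bits[i:i + 3]) >= 2 else 0
                for i in range(0, len(bits), 3)]

    message = [random.randint(0, 1) for _ in range(1000)]
    recovered = decode(add_noise(encode(message)))
    errors = sum(m != r for m, r in zip(message, recovered))
    print(f"Residual errors: {errors} per {len(message)} bits")

Majority voting reduces a 5% raw error rate to well under 1%, at the cost of tripling the stored information. Biological and scientific error correction make the same trade on a vastly more sophisticated scale.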

A Glossary of Terms in the Free Will Debates

We now have a design for the Glossary page on Information Philosopher and have glossed more than 50 terms so far.

See www.informationphilosopher.com/afterwords/glossary/

We announced the new Glossary on the Garden of Forking Paths blog.

The glossary takes advantage of the web to include recursive links to other terms in the glossary, to I-Phi web pages on specific concepts, and external links to Wikipedia or the Stanford Encyclopedia of Philosophy where available.

Each term also has a “Search I-Phi” link which retrieves all the web pages mentioning the search term from the Information Philosopher website.
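A minimal sketch of how such a cross-linked entry might be generated, in Python (the data structure and URLs here are hypothetical, not the site’s actual code):

    GLOSSARY = {
        "Ethical Fallacy": {
            "text": "Assuming that free choices are restricted to moral decisions.",
            "see_also": ["Restrictivism"],
        },
        "Restrictivism": {
            "text": "Claiming that free actions are a tiny fraction of all actions.",
            "see_also": ["Ethical Fallacy"],
        },
    }

    def render_entry(term):
        entry = GLOSSARY[term]
        links = ", ".join(
            f'<a href="/glossary/{t.lower().replace(" ", "_")}.html">{t}</a>'
            for t in entry["see_also"]
        )
        search = f'<a href="/search?q={term.replace(" ", "+")}">Search I-Phi</a>'
        return f"<h3>{term}</h3><p>{entry['text']}</p><p>See also: {links} | {search}</p>"

    print(render_entry("Ethical Fallacy"))

Each rendered gloss links recursively to related glosses and ends with the search link, mirroring the layout shown below.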

As an example of how the links to other glosses work, consider the gloss for Ethical Fallacy:

Ethical Fallacy
The Ethical Fallacy is to assume that free choices are restricted to moral decisions. Robert Kane does this, as did Plato and the Scholastics. This is not to deny that moral responsibility is historically intimately connected with free will and even dependent on the existence of free will (for libertarians and broad compatibilists). Any decision can be free. Our freedom to act also includes merely practical, financial, and fiduciary judgments, as well as occasional irrational flip decisions and even misjudgments.

We see that limiting free actions to moral/ethical choices is a form of Restrictivism. Clicking on the Restrictivism link goes to:

Restrictivism
Restrictivist theories claim that the number of “free” actions is a tiny fraction of all actions. Robert Kane, for example, limits them to rare “self-forming actions” (SFAs) in which weighty and difficult moral decisions are made. Limiting freedom to moral decisions is the ethical fallacy. Peter van Inwagen restricts free will to cases where the reasons that favor either alternative are not clearly stronger. This is the ancient liberty of indifference. Susan Wolf restricts free decisions to those made rationally according to “the True and the Good.”

See also – Search I-Phi

Here we see that other restrictivists are Peter van Inwagen and Susan Wolf.

You can click on the Search I-Phi links above. They are active.