Einstein’s lifelong search for a Unified Field Theory ended in the thought that fields may only be averages over large numbers of particles

Einstein in his later years grew pessimistic about the prospects for deterministic continuous field theories, compared with the indeterministic, statistical, discontinuous particle theories of quantum mechanics.

Although Einstein initially was a strong critic of quantum theory and its implications for indeterminism and a statistical nature of reality, from the 1930s on he never said that quantum mechanics is “incorrect” – as far as it goes – only that something else would likely be added to quantum physics in the future, making it “complete.”

As early as 1930, Einstein marveled at the logical strength of the theory, especially its formulation by Paul Dirac, “to whom, in my opinion, we owe the most perfect exposition, logically, of this [quantum] theory.”

To Leopold Infeld he wrote in 1941,

“I tend more and more to the opinion that one cannot come further with a continuum theory.”

In his 1949 autobiography (he called it his obituary) for his Schilpp volume, he wrote an extensive analysis of his criticisms of the quantum theory, repeating the concerns he had first developed in 1935. They are worth looking at in full here.

I must take a stand with reference to the most successful physical theory of our period, viz., the statistical quantum theory which, about twenty-five years ago, took on a consistent logical form (Schrödinger, Heisenberg, Dirac, Born). This is the only theory at present which permits a unitary grasp of experiences concerning the quantum character of micro-mechanical events. This theory, on the one hand, and the theory of relativity on the other, are both considered correct in a certain sense, although their combination has resisted all efforts up to now. This is probably the reason why among contemporary theoretical physicists there exist entirely differing opinions concerning the question as to how the theoretical foundation of the physics of the future will appear.

Will it be a field theory; will it be in essence a statistical theory? I shall briefly indicate my own thoughts on this point.

Physics is an attempt conceptually to grasp reality as it is thought independently of its being observed. In this sense one speaks of “physical reality.” In pre-quantum physics there was no doubt as to how this was to be understood. In Newton’s theory reality was determined by a material point in space and time; in Maxwell’s theory, by the field in space and time. In quantum mechanics it is not so easily seen.

(“Autobiographical Notes,” in Albert Einstein: Philosopher-Scientist, ed. Paul Arthur Schilpp, 1949, p.1, in German and English)

Einstein’s dream of a continuous field theory was fading fast.

Einstein wrote his friend Michele Besso in 1954 to express his lost hopes for a continuous field theory like that of electromagnetism or gravitation…

“I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included, [and of] the rest of modern physics.”

quoted in Subtle is the Lord…, Abraham Pais, 1982, p.467

The fifth edition of The Meaning of Relativity included a new appendix on Einstein’s field theory of gravitation. In the final paragraphs of this work, his last, published posthumously in 1956, Einstein wrote:

Is it conceivable that a field theory permits one to understand the atomistic and quantum structure of reality? Almost everybody will answer this question with “no”… One can give good reasons why reality cannot at all be represented by a continuous field. From the quantum phenomena it appears to follow with certainty that a finite system of finite energy can be completely described by a finite set of numbers (quantum numbers). This does not seem to be in accordance with a continuum theory, and must lead to an attempt to find a purely algebraic theory for the description of reality. But nobody knows how to obtain the basis of such a theory.

The Meaning of Relativity, 1956, pp.165-66


The Libet Experiments. Does brain activity before a decision make the decision? Maybe it’s just thinking about alternative possibilities?

One commentator (planksip®) asked: How do you reconcile Benjamin Libet’s EEG experiments (the readiness potential), which detected a brain’s motor cortex response 300 milliseconds prior to a person being consciously aware of said response?

The neurologist Benjamin Libet performed a sequence of remarkable experiments in the early 1980s that were enthusiastically, if mistakenly, adopted by determinists and compatibilists to show that human free will does not exist.

His measurements of the time before a subject is aware of self-initiated actions have had an enormous, mostly negative, impact on the case for human free will, despite Libet’s view that his work does nothing to deny human freedom.

Since free will is best understood as a complex idea combining two antagonistic concepts – freedom and determination, “free” and “will,” in a temporal sequence, Libet’s work on the timing of events can also be interpreted as supporting our “two-stage model” of free will.

Indeed, Libet himself argued that there was still room for a veto of a decision that may have been made unconsciously over 300 milliseconds before the agent is consciously aware of the decision to flex a finger, yet before the muscles actually flex.
The original discovery that an electrical potential (of just a few microvolts – μV) is visible in the brain long before the subject flexes a finger was made by Kornhuber and Deecke (1964). They called it a “Bereitschaftspotential” or readiness potential.

As shown on Kornhuber and Deecke’s RP diagram, the rise in the readiness potential was clearly visible at about 550 milliseconds before the flex of the wrist (blue arrow).

We can correlate the beginnings of the readiness potential (350ms before Libet’s conscious will time “W” appears) with the early stage of our two-stage model of free will, when alternative possibilities are being generated, in part at random. The early stage may be delegated to the subconscious, which is capable of considering multiple alternatives (William James’ “blooming, buzzing confusion”) that would congest the single stream of consciousness.

Bob Doyle is the Information Philosopher. Helping communities communicate, podcasting, and information immortality.

2016 was a very hectic year getting out my second and third books, Great Problems and Metaphysics, while postponing what I view as my more important work on Albert Einstein.

I have been struggling for decades to understand what Einstein thought was going on with the quantum waves of probability he first saw in 1905 and first expressed clearly in 1909, then wrote about again and again until his death, with almost no one taking seriously what he was saying. This is especially true of his 1935 EPR paper, which was attacked then but is now the foundation of the second quantum revolution of nonlocality and entanglement!

2017 has also been very busy since I decided to supplement the information in my enormous website (and the printed books) with daily lecture videos – for the rest of my productive years! It took me many months to design and build my iTV-Studio. I have spent many decades helping others communicate. Now I am doing it myself.

My wife Holly despaired that I would be further suspending my work on Einstein.

But I actually have managed to make significant progress understanding (despite Feynman’s edict) Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics, the next two chapters in my Einstein book.

The progress is simply to see the wave function as purely abstract information, and therefore in the same metaphysical category as mind. I will pursue Einstein’s idea of it as a “ghost field” (Gespensterfeld) with a mysterious (spooky) influence or control (though only statistical) over matter. This remains the “one mystery” in quantum mechanics, as Feynman called it, and I hope to explain it.

As you know, I see our essential selves as abstract forms through which concrete matter and energy flow, under the management of active information structures from the molecular machines in our cells to the thinking in our minds, ideas stored in our experience recorder and reproducer (ERR).

I see immortality as the survival of those ideas, to become a contribution to human knowledge (my SUM). You might view my life’s work as getting as many of my ideas as possible out of my mind and into humanity’s knowledge base.

I think I might lecture about this progress today. It has enormous implications for my fifth planned book – Mind: The Scandal in Psychology, which I hope will interest you.

Hard Determinism and Incompatibilism: An Escape from Moral Responsibility

William James introduced the terms “hard determinism” and “soft determinism” in the 1880s. Today his “soft determinism” is known as “compatibilism,” the self-contradictory idea that free will is compatible with determinism.

James called this a “wretched subterfuge.” Immanuel Kant called it “word juggling.” Since much of philosophy today is juggling words, playing with their possible meanings, even redefining words to mean their very opposite, it is no surprise that we have a scandal in philosophy. For example, Daniel Dennett defines free will as moral responsibility.

Despite Dennett, most incompatibilists and determinists accept the traditional idea that determinism means the lack of moral responsibility.

Besides hard incompatibilism, incompatibilists have staked out nuanced versions of the familiar positions with new jargon like semicompatibilism, and illusionism. To see which philosophers hold which positions, take a look at our history of the free will problem.
Let’s look at the taxonomy of deterministic positions and see where hard incompatibilism fits.

Taxonomy of Determinist Positions

Semicompatibilists are narrow incompatibilists who are agnostic about free will and determinism.

Hard incompatibilists think both free will and moral responsibility are not compatible with determinism. Illusionists are incompatibilists who say free will is an illusion.

The old incompatibilism explains freedom. It cannot explain the will. Hard incompatibilism denies both freedom and responsibility.

Only the “two-stage” Cogito model is genuine free will.


The Arrow of Time. If we could reverse the time, would it look like a movie going backwards?

The laws of nature, except the second law of thermodynamics, are symmetric in time. Reversing the time in the dynamical equations of motion simply describes everything going backwards. The second law is different. Entropy never decreases in time, except for brief statistical fluctuations.
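The statistical character of the second law can be illustrated with a standard toy example (not from the text): the Ehrenfest urn model. Its microscopic rule is perfectly reversible, yet a macroscopic count started far from equilibrium relaxes toward it, with only small, brief fluctuations back.

```python
import random

def ehrenfest(n_balls=100, steps=2000, seed=42):
    """Ehrenfest urn model: each step, one ball chosen at random moves
    to the other urn. The microscopic rule is reversible, yet the count
    in urn A relaxes toward n_balls/2 and then merely fluctuates."""
    random.seed(seed)
    in_a = n_balls                     # start far from equilibrium: all balls in urn A
    history = [in_a]
    for _ in range(steps):
        # the chosen ball is in urn A with probability in_a / n_balls
        if random.random() < in_a / n_balls:
            in_a -= 1                  # it moves A -> B
        else:
            in_a += 1                  # it moves B -> A
        history.append(in_a)
    return history

history = ehrenfest()
# Early counts are near 100; late counts fluctuate around 50.
print(history[0], sum(history[-500:]) / 500)
```

The run never shows the count climbing back to 100, even though every individual step is as likely as its reverse; this is the second law as a statistical, not absolute, rule.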

Many natural processes are apparently irreversible. Irreversibility is intimately connected to the direction of time. Identifying the physical reasons for the observed irreversibility, the origin of irreversibility, would contribute greatly to understanding the apparent asymmetry of nature in time, despite nature’s apparently perfect symmetry in space.

In 1927, Arthur Stanley Eddington coined the term “Arrow of Time” in his book The Nature of the Physical World. He connected “Time’s Arrow” to the one-way direction of increasing entropy required by the second law of thermodynamics. This is now known as the “thermodynamic arrow.”
(The Nature of the Physical World, 1927, pp.328-9)

In his later work, Eddington identified a “cosmological arrow,” the direction in which the universe is expanding, as shown by Edwin Hubble about the time Eddington first defined the thermodynamic arrow.
(New Pathways in Science, 1937, pp.328-9)

There are now at least five other proposed arrows of time (discussed below). We can ask whether one arrow is a “master arrow” that all the others are following, or perhaps time itself is just a given property of nature that is otherwise irreducible to something more basic, as is space.

Given the four-dimensional space-time picture of special relativity, and given that the laws of nature are symmetric in space, we may expect the laws to be invariant under a change in time direction. The laws do not depend on position in space or direction, they are invariant under translations and rotations, space is assumed uniform and isotropic. But time is not just another spatial dimension. It enters into calculations of event separations as an imaginary term (multiplied by the square root of minus 1). Nevertheless, all the dynamical laws of motion are symmetric under time reversal.
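The imaginary time term can be made explicit. Writing the time coordinate as ict, the invariant separation between two events is:

```latex
% Invariant interval between two events in special relativity.
% With coordinates (x, y, z, ict), time enters as an imaginary term:
s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 + (ic\,\Delta t)^2
    = \Delta x^2 + \Delta y^2 + \Delta z^2 - c^2 \Delta t^2
```

Because t enters only squared, replacing t with −t leaves the interval, and the dynamical laws built on it, unchanged; this is the time-reversal symmetry the paragraph above describes.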

So the basic problem is – how can macroscopic irreversibility result from microscopic processes that are fundamentally reversible?

Is Quantum Mechanics Incomplete? Einstein Thought Dirac’s Statistical Theory Was Perfect, Though Not A Field Theory

In 1931, Einstein said of Paul Dirac

“Dirac, to whom, in my opinion, we owe the most perfect exposition, logically, of this [quantum] theory, rightly points out that it would probably be difficult, for example, to give a theoretical description of a photon such as would give enough information to enable one to decide whether it will pass a polarizer placed (obliquely) in its way or not.”

“Maxwell’s Influence on the Evolution of the Idea of Physical Reality,” 1931, in Ideas and Opinions, p.270

Four years later, in the infamous EPR paper, Einstein said quantum theory appears to be “incomplete.” He believed that quantum theory, as good as it is (and he never saw anything better), is incomplete because its statistical predictions (phenomenally accurate in the limit of large numbers of identical experiments – “ensembles,” Einstein called them) tell us nothing but “probabilities” about individual systems. Even worse, he saw that the wave functions of entangled two-particle systems predict faster-than-light correlations of properties between events in a space-like separation. He mistakenly thought this violated his theory of relativity. Although this was the heart of his famous EPR paradox paper in 1935, we shall see that Einstein was already concerned about faster-than-light transfer of energy and that he saw spherical light waves “collapsing” instantaneously in his very first paper on quantum theory.

In most general histories, and in the brief histories included in modern quantum mechanics textbooks, the problems raised by Einstein are usually presented as arising after the “founders” of quantum mechanics and their Copenhagen Interpretation in the late 1920’s. Modern attention to Einstein’s work on quantum physics often starts with the Einstein-Podolsky-Rosen paper of 1935, when the mysteries of nonlocality, nonseparability, and entanglement are first clearly understood by Einstein’s opponents. Physicists today think of quantum mechanics as beginning with the Heisenberg (particle) formulation and the Schrödinger (wave) formulation. The popular image of Einstein post-EPR is either in the role of critic trying to expose fundamental flaws in the “new” quantum mechanics or as an old man who simply didn’t understand the new quantum theory. Both these images of Einstein are seriously flawed, as we shall see.

Many histories of quantum theory, most starting from the Copenhagen perspective of Bohr, Heisenberg, Born, Jordan, and Pauli, focus on Einstein’s failed attempts in debates with Bohr to challenge the uncertainty principle. EPR is described as failing to show that quantum mechanics is “incomplete.” This is a verbal quibble. Quantum mechanics is indeed incomplete in that it cannot predict simultaneously the position and momentum of a particle, nor the “real” path of a particle between measurements. Most important, QM is only a statistical theory, as Einstein maintained. Its results are only confirmed by large numbers of identical experiments. Continuous matter and radiation only appear when we average over large numbers of discrete particles.

Few histories point out that it was Einstein who over three decades invented (or discovered) nonlocality and entanglement, as well as the ontological chance in quantum mechanics that is the real basis of acausality that Heisenberg thought he saw in his uncertainty principle.

Reduction to Physics? Or Emergence of Life, Mind, Consciousness, and Knowledge? Information Philosophy Identifies the Transitions

Reductionism is a concept in philosophy that claims a description of properties in a complex system can be “reduced” to the lower-level properties of the system’s components. For example, the laws and properties of chemistry can be reduced to the laws of physics.

More specifically, the properties of molecules can be reduced to those of atoms, the properties of biological cells can be reduced to those of molecules, the properties of plants and animals can be reduced to those of cells, and the mind can be reduced to neurons in the brain.

Beyond the properties, reductionists claim that causal laws of nature in the base level must causally determine the laws of a higher level. These thinkers usually have a highly simplistic, materialistic, and deterministic view of the most fundamental laws of nature, namely the laws of classical physics, or the interpretations of quantum physics that deny indeterminism.

Anti-reductionists deny claims that deterministic causal laws can in principle reduce everything, including life and mind, to the fundamental particles of physics. They include emergentists, who think at least some higher level properties and laws cannot be reduced, but must emerge as sui generis entities that need new explanations. They also include vitalists, who believe that a dualistic, non-physical, immaterial substance is needed to explain life, mind, and consciousness.

Information philosophy identifies the transition from non-life to life as the active replication of information structures with immaterial, but physical, information processing; the transition to a free and creative mind as that immaterial information processing; the transition to consciousness as the development of an experience recorder and reproducer (ERR); and the transition to human knowledge (SUM) as the external communication and storage of information in all human artifacts.

A New I-Phi Web Page for Donald Hebb

Donald Hebb

Donald O. Hebb was a Canadian psychologist whose 1949 book The Organization of Behavior put forward what he called his “neuropsychological postulate,” the assumption that cognitive processes like perception and learning can be understood in terms of the connections between assemblies of neurons. Hebb’s thesis was that behavior is to be understood entirely in terms of brain function.

He is considered the father of neural network theory, which is central to artificial intelligence research. These networks or “cell assemblies” are connected in ways that control the responses to various stimuli.

It is a model for learning often called “Hebbian learning.” He described his “neuropsychological postulate” as this assumption:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.

Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley and Sons, p.62

This assumption is often paraphrased as “Neurons that fire together wire together.”
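In neural network theory, Hebb’s postulate is often formalized as a simple weight-update rule, Δw = η · x · y, where x is the presynaptic activity, y the postsynaptic activity, and η a learning rate. The rule and variable names below are standard textbook material, not Hebb’s own notation:

```python
def hebbian_update(w, x, y, eta=0.1):
    """Hebb's rule: the weight from each presynaptic activity x[i] to a
    postsynaptic activity y grows when the two are active together."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

# Two input neurons; only the first repeatedly fires along with the output:
w = [0.0, 0.0]
for _ in range(5):
    x = [1.0, 0.0]          # first input fires, second stays silent
    y = 1.0                 # the output neuron fires too
    w = hebbian_update(w, x, y)

print(w)  # → [0.5, 0.0]: the co-active connection strengthens, the silent one does not
```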

The Experience Recorder and Reproducer (ERR) model of information philosophy is built on Hebb’s assumption as the basis of its Recorder stage. The Reproducer depends on an extension of Hebb’s insight:

Neurons that were wired together in the past will fire together in the future.

Donald Hebb: Neurons That Fire Together Wire Together.

And neurons that were wired together in the past will fire together in the future. This extension of Hebb’s idea is the basis for our Experience Recorder and Reproducer (ERR).

The ERR is simpler than, but superior to, the computational models of the mind popular in today’s neuroscience and cognitive science, the “software in the brain hardware.”

Although we see mind as immaterial information, we think that man is not a machine and the mind is not a computer.

The biological and neurological basis for our proposed ERR is very straightforward.

    • The ERR Recorder: Neurons become wired together (strengthening their synaptic connections to other neurons) during an organism’s experiences, across multiple sensory and limbic systems.
    • The ERR Reproducer: Later firing of even a part of the previously wired neurons can stimulate firing of all or part of the original complex, thus “playing back” similar past experiences (including the critically important emotional reaction to those experiences).
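These two stages can be sketched as a Hopfield-style associative memory: Hebbian storage for the Recorder, and partial-cue playback for the Reproducer. This is only an illustrative sketch under that assumption; the class name and details are hypothetical, not the author’s implementation.

```python
import numpy as np

class ExperienceRecorder:
    """Illustrative sketch of the ERR as a Hopfield-style associative
    memory: experiences wire co-active units together (Recorder), and a
    partial cue replays the whole stored pattern (Reproducer)."""

    def __init__(self, n_units):
        self.w = np.zeros((n_units, n_units))

    def record(self, pattern):
        # Hebbian storage: units active together become wired together.
        p = np.asarray(pattern, dtype=float)
        self.w += np.outer(p, p)
        np.fill_diagonal(self.w, 0.0)   # no self-connections

    def reproduce(self, cue, steps=5):
        # Playback: each unit fires according to the summed input
        # from the units it was wired to in past experience.
        s = np.asarray(cue, dtype=float)
        for _ in range(steps):
            s = np.sign(self.w @ s)
        return s

err = ExperienceRecorder(6)
experience = [1, 1, 1, -1, -1, -1]   # one stored "experience" (+/-1 coding)
err.record(experience)

partial_cue = [1, 0, 0, -1, 0, 0]    # only part of the current experience matches
replayed = err.reproduce(partial_cue)
print(replayed)                      # recovers the full stored pattern
```

The point of the sketch is only that no stored program, address, or data bus is involved: the “playback” is the network settling into the pattern it was wired with.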

Our ERR mind model grows out of the biological question of what sort of “mind” would provide the greatest survival value for the lowest (or the first) organisms that evolved mind-like capabilities.

We propose that a minimal primitive mind would need only to “play back” past experiences that resemble any part of current experience. Remembering past experiences has obvious relevance (survival value) for an organism. But beyond survival value, the ERR touches on the philosophical problem of “meaning.” We suggest the epistemological “meaning” of information perceived may be found in the past experiences that are reproduced by the ERR.

The ERR model is a memory model for long-term potentiation stored in the neocortical synapses. Short-term memory must have a much faster storage mechanism. While storage is slow, we shall see that ERR retrieval is just as fast, and it does not fade as does short-term, working memory.

We propose that the ERR reproduces the entire complex of the original sensations experienced, together with the emotional response to the original experience (pleasure, pain, fear, etc.). Playback of past experiences is stimulated by anything in the current experience that resembles something in the past experiences, in the five dimensions of the senses (sound, sight, touch, smell, and taste).

The ERR model stands in contrast to the popular cognitive science or “computational” model of a mind as a digital computer with a “central processor” or even many “parallel processors.” No algorithms or stored programs are needed for the ERR model. There is nothing comparable to the addresses and data buses used to store and retrieve information in a digital computer.

Free Will: Are Biological Organisms Completely Determined?

Biological communication, the information exchanged in messages between biological entities, is far more important than the particular physical and chemical entities themselves. These material entities are used up and replaced many times in the life cycle of a whole organism, while the messaging remains constant, not just over the individual life cycle, but over that of the whole species.

In fact most messages, and the specific molecules, e.g., DNA,  that embody and encode those messages, have been only slowly varying for billions of years.

As a result, the sentences (or statements or “propositions”) in biological languages may have a very limited vocabulary compared to human languages, although the number of words added to human languages in a typical human lifetime is remarkably small.

Biological information is far more important than matter and energy for another reason. Beyond biological information as “ways of talking” in a language, we will show that the messages do much more than send signals, they encode the architectural plans for biological machines that have exquisite control over individual molecules, atoms, and their constituent electrons and nuclei. In digital computer terms, these are biological algorithms or code.

Far from the materialist idea that fundamental physical elements have “causal control” over living things, we find that biological information processing systems are machines. They are intelligent robotic machines. They assemble themselves and build their own replacements when they fail. And they use the flow of free energy and material with negative entropy to run the “programs” that manipulate their finest parts at astonishingly high speeds.

The data rates are well beyond the largest and fastest digital computers and the programs they are running have evolved through Darwinian evolution. These are programs without a programmer.