Consciousness and the Interface Theory of Perception
by Donald D. Hoffman, Ph.D.
On evolutionary grounds, brain activity does not cause consciousness, and on mathematical grounds, consciousness is not identical to functional properties of the brain. Consciousness is fundamental and must be modeled precisely in its own right.
1. Introduction: The Mystery
“What is the biological basis of consciousness?” This is the version of the classic mind-body problem that is widely assumed in current research. 1 Much of this research is focused on finding neural correlates of consciousness (NCCs); an NCC is a minimal collection of neural events or mechanisms that is highly correlated with a specific conscious experience, such as an itch or a headache. 2 To its credit, recent research has found many NCCs. But to its dismay, it has failed to furnish a theory: It’s a mystery how NCCs can be, or cause, or give rise to conscious experiences.
This mystery, in one form or another, has puzzled thinkers since before Plato. In 1866, it puzzled Thomas Huxley, who wrote, “How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nervous tissue, is just as unaccountable as the appearance of Djin when Aladdin rubbed his lamp.” 3 Despite recent progress in unearthing NCCs, this mystery still puzzles researchers such as Christof Koch, who observes, “That is the universe in which we find ourselves, a universe in which particular vibrations of highly organized matter trigger conscious feelings. It seems as magical as rubbing a brass lamp and having a djinn emerge who grants three wishes.” 4
Why is the mind-body problem still a mystery? One answer, indeed the one most widely tendered, is that the key discovery that will solve this mystery has not been made but, when it is, the solution will be obvious. 5 This has happened before in the history of science. The mystery of life and inheritance, for instance, was solved by discovering the structure of DNA. Given the history of science, this reply is fair.
A second answer is that we have been short-changed by evolution. NCCs indeed cause or give rise to consciousness, but we have not been endowed by evolution with the concepts needed to understand how this happens. We don’t expect spiders to possess the concepts needed to understand quantum physics; perhaps Homo sapiens don’t possess the concepts needed to understand the mind-body problem. This appears to be the view of Colin McGinn: “We know that brains are the de facto causal basis of consciousness, but we have, it seems, no understanding whatever of how this can be so.” 6 Given what we know of evolution, this, too, is a reasonable reply.
2. Let’s Question Our Assumptions
However, it’s also possible that the mystery persists because our formulation of the problem harbors false assumptions. This has happened before in the history of science. For instance, the way that objects called “black bodies” radiate energy mystified classical physics, and was only understood when quantum theory rejected the classical assumption that energy varies continuously, replacing it with the counterintuitive assumption that energy, such as the energy in light and heat, comes packaged in discrete quanta. 7
The problem with this proposal is that scientific theories, including current attempts at the mind-body problem, make many assumptions. 8 If false assumptions are indeed hindering progress, then it might be difficult to discern which are the offenders.
But it’s worth a try. Here I question two assumptions of many current theories: (1) Natural selection favors true perceptions. (2) The mind is what the brain does.
Why these? One reason is that these assumptions are central. We assume, for instance, that there is a biological basis for consciousness in part because we assume that our perceptions of space, time, and physical objects (such as brains and neurons) are generally true. If it turned out instead that our perceptions of space, time, and physical objects are adaptive fictions, not genuine insights, then we would be less inclined to assume that some of those fictions, namely neurons, are the basis of consciousness.
A second reason is that both assumptions can be rigorously tested. The first can be tested using, for example, genetic algorithms and evolutionary games. 9−12 The second can be formulated as a mathematical proposition, and proven true or false. 13,14
A third reason, which no doubt you’ve already guessed, is that I think both assumptions are, in fact, false. For the first assumption, a variety of evolutionary games and genetic algorithms demonstrate that true perceptions are dominated by simple heuristics that are tuned to fitness. 9−12 For the second assumption, a theorem establishes that conscious experiences cannot be identical to functional properties of a complex system such as the brain. 13,14
In what follows, I outline the evidence against the assumptions that natural selection favors true perceptions and that the mind is what the brain does, keeping mathematical discourse to a minimum. I then propose an “interface theory” of perception, 15,18 and a “conscious realist” ontology in which consciousness, rather than space-time and physical objects, is taken as fundamental. 19,20 These new assumptions transform the mind-body problem. Rather than being a puzzle about how matter gives rise to consciousness, it becomes the problem of how consciousness gives rise to space-time and matter. A scientific theory that starts with consciousness requires, of course, a mathematically precise theory of consciousness. I propose some ideas in that direction, aiming for a genuine theory that makes risky and testable predictions.
There is a simple way to dismiss the project just outlined: If natural selection favors untrue perceptions, then surely it favors untrue logic and math. If so, then this project refutes itself. It uses logic and math to conclude that logic and math are unreliable.
This would be a showstopper. I think, however, that the same evolutionary games that reveal selection pressures against true perception also reveal selection pressures toward reliable logic and math. This is an open issue, but I sketch reasons to be hopeful, similar in flavor to Dutch book arguments for the axioms of probability. 21
3. Some Common Intuitions About Selection and Perception
Does natural selection favor true perceptions? Many vision researchers claim that it does. In his textbook Vision Science, Stephen Palmer tells the reader that “Evolutionarily speaking, visual perception is useful only if it is reasonably accurate…. Indeed, vision is useful precisely because it is so accurate. By and large, what you see is what you get. When this is true, we have what is called veridical perception . . . perception that is consistent with the actual state of affairs in the environment. This is almost always the case with vision….” 22
Noë and O’Regan argue that “perceivers are right to take themselves to have access to environmental detail and to learn that the environment is detailed” and that “the environmental detail is present, lodged, as it is, right there before individuals and that they therefore have access to that detail by the mere movement of their eyes or bodies.” 23
Geisler and Diehl suggest, “In general, it is true that much of human perception is veridical under natural conditions.” 24
Marr claims, “We…very definitely do compute explicit properties of the real visible surfaces out there, and one interesting aspect of the evolution of visual systems is the gradual movement toward the difficult task of representing progressively more objective aspects of the visual world.” 25
Physicists and philosophers have also weighed in on this issue. The physicist Abner Shimony, for instance, argues that “evolution has eventuated in animals which transform their sensitive reactions so that their resulting cognitive states are quite accurate indices of crucial distal characteristics of the environment.” 26
The philosopher Thomas Nagel argues, “If there is a mind-independent physical world, the systematic inability to detect the basic truth about our surroundings (setting aside more sophisticated scientific truth) would be disastrous for our reproductive fitness. Realism about the physical world is a fundamental aspect of any Darwinian explanation of our perceptual and cognitive faculties, as well as of our motives and capacities for action.” 27
The intuition behind these claims seems to be that truer perceptions are ipso facto more fit. In consequence, those of our predecessors who saw more truly had a fitness advantage over those who saw less truly, and were more likely to have offspring. We are the descendants of those who saw more truly, and thus can count on our perceptions to be generally accurate.
The problem with relying on this intuition is that evolution is complex, and intuitions are fallible guides to its workings. Fortunately, there are mathematical formulations of evolution, such as evolutionary game theory and genetic algorithms, which permit us to rigorously investigate whether natural selection indeed favors truer perceptions. 28,29
4. What Is a Perceptual Strategy?
We want to carefully investigate what evolution entails about perception. What kinds of perception does evolution favor? What kinds are likely to go extinct? To try to answer these questions, we must have a clear idea about what we mean by “kinds of perception.” We need to precisely define different kinds of perception, so that we can see precisely what evolution will do with them.
There is a long and interesting history of philosophical debate about the nature of perception, which is a good source to draw on here. 30,31 But we must transform this debate into precisely defined perceptual strategies that we can then allow to compete in evolutionary games. These perceptual strategies might, in turn, aid philosophical debates by providing a precise language for discourse.
So we start by specifying the classes of perceptual strategies to be tested in our evolutionary simulations. If we denote the objective world by some set W and the perceptions of an organism by some set X—where we assume for the moment that we know nothing about W or X—then a perceptual strategy is a function, call it P, from W to X. This is illustrated in Figure 1, and written P : W → X.
Figure 1. Perceptual strategy. The region labeled W represents possible states of the objective world. The region labeled X represents possible perceptual states of an organism. A perceptual strategy is a way of mapping the states of the world onto the perceptions of an organism, and is labeled P.
We can distinguish different classes of perceptual strategies by their assumptions about W, X, and P. The strongest assumption, which we call naïve realism, claims that our perceptions are identical to the world; that is, X = W and P is the identity function.
A weaker assumption, which we call strong critical realism, claims that our perceptions are identical to a subset of the world; that is, X ⊂ W and P is the identity function on this subset.
A yet weaker assumption, which we call weak critical realism, claims that our perceptions need not be identical to any subset of the world, but that the relationships among our perceptions accurately reflect relationships in the world; that is, it is allowed that X ⊄ W, but required that P is a so-called homomorphism of the structures on W.
A yet weaker assumption, which we call interface perceptions, claims that our perceptions need not be identical to any subset of the world, and that the relationships among our perceptions need not reflect relationships in the world except measurable relationships (i.e., relationships needed to describe probabilities); that is, it is allowed that X ⊄ W, and that P is not a homomorphism of the structures on W (except for measurable structures).
Finally, the weakest assumption, which we call arbitrary perceptions, claims that our perceptions need not be identical to any subset of the world, and that the relationships among our perceptions need not reflect any relationships in the world; that is, it is allowed that X ⊄ W, and that P need not be a homomorphism of any structures on W.
The relationship among these classes of perceptual strategies is illustrated by the diagram in Figure 2.
Figure 2. Venn diagram showing the inclusion relationship among the five classes of perceptual strategies.
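To make these nested classes concrete, here is a minimal Python sketch. The toy world W, the percept sets, and the strategy functions are hypothetical illustrations invented for this example, not definitions from the text:

```python
# Toy world: four possible objective states (hypothetical example).
W = {1, 2, 3, 4}

# Naive realism: X = W and P is the identity function.
def P_naive(w):
    return w

# Strong critical realism: X is a proper subset of W, and P is the
# identity on that subset (states outside X are simply not perceived).
X_strong = {1, 2}
def P_strong(w):
    return w if w in X_strong else None

# Interface perception: X is not a subset of W at all, and P need not
# preserve the structure (e.g., the ordering) of W.
X_interface = {"red", "green"}
def P_interface(w):
    # "green" does not track magnitude; it only groups world states.
    return "green" if w in (2, 3) else "red"

# Sanity checks on the class definitions.
assert all(P_naive(w) == w for w in W)                # identity on all of W
assert all(P_strong(w) == w for w in X_strong)        # identity on a subset
assert set(P_interface(w) for w in W) == X_interface  # percepts cover X
assert X_interface.isdisjoint(W)                      # X is not a subset of W
```

The point of the sketch is only that each weaker class drops one requirement of the class above it: first the requirement that X exhaust W, then that P be the identity, then that P preserve W's structure.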
5. Could Our Perceptions Be Like a User Interface?
Few researchers are naïve realists, because there is reason to believe that we do not perceive some aspects of the objective world. The human eye, for instance, only sees light whose wavelengths lie in a window between 400 and 700 nanometers, whereas the electromagnetic spectrum extends well beyond this window. Despite such evidence, some philosophers still defend versions of naïve realism. 30
Many researchers are strong critical realists, and assume that our perceptions are, in the normal case, identical to a part of the objective world. According to them, when you see a round, white baseball, there really is a round, white baseball that exists even if you don’t look.
Some researchers are weak critical realists, and assume that our perceptions need not be identical to any part of the objective world, but that they do accurately portray its true structure. Colors, for instance, might not exist apart from our perceptions, but colors nonetheless accurately convey aspects of the world that exist even when we don’t look.
Few researchers buy the interface theory, which allows that our perceptions are not identical to any part of the objective world, and that they do not accurately portray its true structure (apart from the structure needed to describe probabilities). 15−17 Indeed, when I introduce the interface theory in lectures at universities and conferences, the audience finds it amusing, obviously false, and almost beneath dignifying with a response. After all, they argue, if our perceptions need not be accurate, even about a part of the objective world, then how could they be useful? Illusions would not be the exception; they would be the rule.
But an analogy often helps. Consider the desktop of your laptop or mobile device. Suppose that there is a file icon on the desktop that is round, blue, and in the middle of the screen. Does that mean that the file itself is round, blue, and in the middle of the computer? Obviously not. Files have no colors or shapes, and their positions on the screen needn’t mirror their locations in the computer. The colors, shapes, and positions of an icon are not true depictions of the objective properties of the corresponding file. Nor are they intended to be. It’s not that the interface is trying to deceive you. It’s simply that its purpose is not to depict objective reality, but rather to hide it. The reality is too complex, and understanding it is not necessary if one wants to delete a file or edit a photo. Indeed, if you were forced to deal explicitly with all the diodes, resistors, voltages, and magnetic fields that constitute the file, you might never finish editing that photo.
So here is a case where accurately perceiving the objective truth is not useful; it’s an impediment. The interface theory of perception allows that natural selection might have shaped our perceptions to be analogous to interfaces that hide the complexity of objective reality and instead provide a useful guide to behavior. If so, then space-time could simply be our desktop, and physical objects with their colors, shapes, textures, and motions just icons on that desktop.
6. Evolutionary Games: A Matter of Life or Death
There is a long history of philosophical debate about the nature of perception, 31 and recently this debate has included arguments from evolution. 32−36 Remarkably, until recently, no one formalized these arguments and tested them using evolutionary games. When this is done, interface perceptual strategies are typically more fit than realist strategies. 9−12
Consider, for instance, a game in which two animals compete to obtain a resource, say water, that is in three distinct territories, as illustrated in Figure 3. An animal looks at each territory and chooses one, obtaining its resources and the corresponding fitness payoff. Once a territory is chosen, the other animal must choose one of the two remaining territories, and obtains its resources and its fitness payoff. On each trial of the game, we randomly select which animal chooses first.
Figure 3. An evolutionary game. Two organisms (e.g., rabbits) compete for water resources in three territories. In this example, the quantity of water happens to be 14 in the first territory, 92 in the second, and 48 in the third. These quantities are not fitness payoffs. It might be, for instance, that the fitness payoff for 92 is less than for 48.
The quantity of water in a territory might vary, say, from 1 to 100, where 1 indicates little water and 100 indicates a lot. In different games, we can play with the statistics of water quantity, perhaps using a uniform distribution, a normal distribution, or some other distribution of interest.
In different games, we can also play with the fitness payoffs associated with different resource quantities. We could, for instance, consider games in which greater resources yield greater fitness payoffs. But we could also consider games in which, say, resource values nearer 50 have higher fitness payoffs. This could model a case where too little water is bad for fitness (e.g., dying from thirst), too much water is bad for fitness (e.g., dying from drowning), and some intermediate quantity is just right.
Once we have the distribution of resources and the fitness payoff function, we can compute the expected payoffs that different perceptual strategies would obtain when competing with each other. For instance, if some animals use an interface strategy (IS) and others a weak critical realist strategy (WS), we can compute the expected payoff to an IS animal when it competes with a WS, the expected payoff to an IS animal when it competes with another IS, the expected payoff to a WS when competing with an IS, and the expected payoff to a WS when competing with another WS. There are 2 × 2 = 4 such expected payoffs to compute. If there is a third strategy, say a strong critical realist strategy (SR), then we can compute the 3 × 3 = 9 different expected payoffs; if there is a fourth strategy, then there are 16 such payoffs, and so on.
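As a sketch of such a computation, the following Python code estimates the 2 × 2 expected-payoff table for an interface strategy (IS) versus a realist strategy (WS) in the three-territory water game by Monte Carlo. The payoff function, the percept definitions, and all numbers are hypothetical choices made for illustration, in the spirit of the games described here rather than the published simulations:

```python
import math
import random

def payoff(q):
    # Hypothetical fitness payoff: intermediate water quantities are best.
    return math.exp(-((q - 50) ** 2) / (2 * 15.0 ** 2))

# Each strategy is a percept function of the true quantity; an animal
# always chooses the available territory with the largest percept.
realist = lambda q: q            # percept tracks true quantity (WS)
interface = lambda q: payoff(q)  # percept tracks fitness payoff (IS)

def expected_payoff(mine, theirs, trials=20000):
    """Monte Carlo estimate of my expected payoff against one opponent."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        q = [rng.randint(1, 100) for _ in range(3)]  # three territories
        available = [0, 1, 2]
        def pick(strategy):
            t = max(available, key=lambda i: strategy(q[i]))
            available.remove(t)
            return t
        if rng.random() < 0.5:                       # who chooses first?
            my_territory = pick(mine); pick(theirs)
        else:
            pick(theirs); my_territory = pick(mine)
        total += payoff(q[my_territory])
    return total / trials

# 2 strategies -> 2 x 2 = 4 expected payoffs.
strategies = {"IS": interface, "WS": realist}
table = {(a, b): expected_payoff(sa, sb)
         for a, sa in strategies.items() for b, sb in strategies.items()}
# The interface player earns more against a realist than vice versa.
assert table[("IS", "WS")] > table[("WS", "IS")]
```

With a payoff function peaked at intermediate quantities, the realist's habit of picking the largest quantity is systematically punished, which is exactly the asymmetry the table exposes.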
Given these expected payoffs, there are formal models of evolution that we can use to predict which strategies will dominate, coexist, or go extinct. 37−40 We can, for instance, use evolutionary game theory, which assumes infinite populations of competing strategies with complete mixing, in which the fitness of a strategy varies with its relative frequency in the population. In the case in which just two strategies, say S1 and S2, are competing, we can write down the four expected payoffs in a simple table, as shown in Figure 4. The expected payoff to S1 is a when competing with S1 and b when competing with S2; the expected payoff to S2 is c when competing with S1 and d when competing with S2.
Figure 4. Expected payoffs in a competition between two strategies, S1 and S2.
Then it can be shown that S1 dominates (i.e., drives S2 to extinction) if a > c and b > d; S2 dominates if a < c and b < d; they are bistable if a > c and b < d; they coexist if a < c and b > d; and they are neutral if a = c and b = d. Similar results can be obtained when more strategies compete, but new outcomes are possible. For instance, with three strategies, it might be that S1 dominates S2, S2 dominates S3, and S3 dominates S1, as in the popular children’s game of Rock-Paper-Scissors, in which rock beats scissors, which beats paper, which beats rock. With four or more strategies, the dynamics can have more complex behaviors known as limit cycles and chaotic attractors.
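These five conditions translate directly into code. Here is a small sketch of the classification, using the payoff labels a, b, c, d from Figure 4 (the numeric examples in the assertions are invented for illustration):

```python
def classify(a, b, c, d):
    """Outcome of two-strategy evolutionary dynamics.

    a: payoff to S1 vs S1;  b: payoff to S1 vs S2
    c: payoff to S2 vs S1;  d: payoff to S2 vs S2
    """
    if a > c and b > d:
        return "S1 dominates"   # S1 drives S2 to extinction
    if a < c and b < d:
        return "S2 dominates"
    if a > c and b < d:
        return "bistable"       # outcome depends on initial frequencies
    if a < c and b > d:
        return "coexist"        # a stable mixture of both strategies
    if a == c and b == d:
        return "neutral"
    return "boundary case"      # a tie on exactly one comparison

assert classify(3, 2, 1, 1) == "S1 dominates"
assert classify(1, 1, 3, 2) == "S2 dominates"
assert classify(3, 1, 1, 2) == "bistable"
assert classify(1, 3, 2, 2) == "coexist"
assert classify(2, 2, 2, 2) == "neutral"
```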
In a large series of evolutionary games, realist and interface perceptual strategies have been allowed to compete. The result is that interface strategies, in most cases, drive realist strategies to extinction. 9−11 One key reason is illustrated in Figure 5.
Figure 5. One reason true perceptions go extinct. (a) A resource that varies in quantity from 1 to 100, and a fitness payoff function that rewards intermediate quantities. (b) A realist perceptual strategy that sees resource quantities 1 to 50 as red, and 51 to 100 as green; it is a realist strategy because green truly indicates greater resource quantities than does red. (c) An interface perceptual strategy where green does not truly indicate greater resource quantities than red but does indicate greater fitness payoffs.
Figure 5a illustrates a fitness payoff function, in which the payoff varies as a resource varies in quantity from 1 to 100. The payoff is greatest for intermediate quantities of the resource. Figure 5b illustrates a realist perceptual strategy that can see only two colors, red and green. It is a realist strategy because the perceived colors accurately report information about the true resource quantity: all resource quantities seen as green are greater than those seen as red. Figure 5c illustrates an interface perceptual strategy that also sees only red and green. It is not a realist strategy because the perceived colors do not accurately report information about the true resource quantity: resource quantities seen as green are not all greater than those seen as red. However, all resource quantities seen as green do have greater fitness payoffs than those seen as red. In consequence, when the interface strategy competes with the realist strategy in evolutionary games, the interface strategy will systematically reap greater fitness payoffs and drive the realist strategy to extinction.
The key point of this example is that fitness and truth are distinct. A perceptual strategy that is tuned to fitness will, in general, outcompete one that is tuned to truth. Truer perceptions are not, in general, fitter perceptions, and evolution “cares” only about fitness, not truth.
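The extinction dynamics can be sketched with the discrete-time replicator equation. The payoff matrix below is hypothetical, chosen so that the interface strategy earns a higher payoff in every matchup, as in the Figure 5 scenario; the code then shows that even a rare interface mutant takes over the population:

```python
# Payoff matrix (hypothetical numbers): row player's expected payoff.
# Index 0 = interface strategy (IS), index 1 = realist strategy (WS).
A = [[0.8, 0.9],   # IS vs IS, IS vs WS
     [0.3, 0.4]]   # WS vs IS, WS vs WS

x = 0.01  # initial frequency of the interface strategy (a rare mutant)
for _ in range(200):
    f_is = A[0][0] * x + A[0][1] * (1 - x)   # fitness of IS
    f_ws = A[1][0] * x + A[1][1] * (1 - x)   # fitness of WS
    mean = x * f_is + (1 - x) * f_ws         # mean population fitness
    x = x * f_is / mean                      # discrete replicator update

# The interface strategy has driven the realist strategy (nearly) extinct.
assert x > 0.999
```

Because a > c and b > d in this matrix, the classification above predicts that IS dominates, and the iteration confirms it: the realist frequency shrinks toward zero regardless of where it starts.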
7. The Mind Is Not What the Brain Does
Is it possible that the colors you see are quite different from the colors I see? This question occurs to many imaginative kids, and it occurred to John Locke who, in his 1690 Essay Concerning Human Understanding, asked whether it’s possible that “the idea that a violet produced in one man’s mind by his eyes were the same that a marigold produced in another man’s, and vice versa.” This so-called spectrum-inversion question continues to be debated, because the fate of many theories of consciousness turns on its outcome. These theories propose that it is the functional properties of complex systems, such as brains, that are responsible for the presence and properties of consciousness. 42−47
These theories come in two classes: reductive functionalism and nonreductive functionalism. Reductive functionalist theories pick out some particular functional properties of, say, the brain, and propose that conscious experience is identical to those functional properties, where they mean identical in the same sense that 12 and a dozen are identical: they’re just different names for one and the same thing. Nonreductive functionalist theories also pick out some particular functional properties of, say, the brain, and propose that those functional properties cause or give rise to conscious experience.
Functionalist proposals that are nonreductive incur a promissory note: They owe us a theory that explains how and why the particular functional properties that they specify can cause, or give rise to, conscious experience. This promissory note has never yet been paid by any nonreductive functionalist theory, and the relevant bank accounts look pretty empty.
Functionalist proposals that are reductive incur no such promissory note: They owe us no causal or emergence theories because they make no claims about cause or emergence. Their claim is one of identity: “The mind is what the brain does” is the informal and popular statement of this claim. Now, of course, such a claim is intended to be a scientific hypothesis, not mere armchair speculation, and so it must, in principle, be falsifiable.
How could it be falsified? One approach is to use imagination. If a reductive functionalist proposes that some functional property F of neural activity is identical to our conscious experience of, say, a particular shade of red, then one can try to imagine the experience of red happening when F does not occur, and vice versa. If one can imagine this, then it is logically possible, and the identity claim fails. If, for instance, one could imagine a triangle that didn’t have exactly three sides, this would falsify the claim that triangles are identical to three-sided polygons. (Good luck trying!)
The problem with this approach is that it is not conclusive. If a theory proposes that F is identical to some conscious experience, and someone claims that they can imagine otherwise, then a supporter of the F theory can simply reply that their opponent didn’t really succeed in imagining what they claimed to imagine. This leads to fruitless debates about intuitions.
There is a better approach. One can formulate reductive functionalism as a specific mathematical claim and then try to disprove it. Then one can have profitable debates about the assumptions made by the mathematics and about the correctness of the disproof.
Reductive functionalism has been mathematically formulated and disproven. 13,14 The disproof is called the scrambling theorem. Each reductive functionalist theory of the mind-body problem is therefore false. This is not the place to give mathematical details of the scrambling theorem, but a simple example can convey the key ideas.
Suppose, for simplicity, that Jack and Jill each have only two color experiences, say red and green; that they each can say only two words, “red” and “green”; and that they each look at only two objects, ripe tomatoes and ripe limes. Every time we show Jack and Jill a ripe tomato and ask them what color it is, they each say “red”; every time we show them a ripe lime, they each say “green.” Thus, there are functional mappings that relate tomatoes and limes to the conscious experiences red and green and to the verbal reports “red” and “green.”
Now consider a reductive functionalist claim that the experiences red and green are identical to these functional mappings. This would entail, for instance, that whenever Jack is shown a tomato and says “red,” he necessarily has the same color experience that Jill has when she is shown a tomato and says “red.”
But could Jack and Jill be functionally identical, and yet have different color experiences? Indeed they could, as illustrated in Figure 6. Here Jack and Jill each see tomatoes and limes and, in consequence, have color experiences red and green and give verbal reports “red” and “green.” However, as the straight arrows in the middle indicate, Jack’s color experiences are not identical to Jill’s; instead they are swapped. When Jack, for instance, sees a tomato, he has the color experience red, but when Jill sees a tomato, she has the color experience green. Nevertheless, Jack and Jill each report that the tomato is “red” and the lime is “green.” They are functionally identical, even though their color experiences are inverted.
Figure 6. How Jack and Jill can have differing color experiences and yet be functionally identical.
This is a simple example, but the scrambling theorem proves that no matter how complex the example gets, no matter how many conscious experiences are involved, and no matter how much scrambling there is between the conscious experiences of Jack and Jill, it is always possible to arrange the arrows so that they are functionally identical in every experiment that could be performed, including any psychophysical, brain-imaging, and neural-recording experiments. The scrambling theorem holds regardless of the geometry or symmetries of the space of conscious experiences, contrary to prior proposals. 48
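The Jack-and-Jill example is easy to render as code. This is a toy sketch (all names invented here): each person's experiences form a mapping from stimuli to private experiences, and each person's reports form a mapping from experiences to words. Jill's two mappings are scrambled relative to Jack's, yet every observable report agrees:

```python
stimuli = ["tomato", "lime"]

# Private experience mappings: Jill's are swapped relative to Jack's.
jack_experience = {"tomato": "RED-quale", "lime": "GREEN-quale"}
jill_experience = {"tomato": "GREEN-quale", "lime": "RED-quale"}

# Report mappings (experience -> word): Jill's are swapped too, so the
# two scramblings cancel in everything an experimenter can observe.
jack_report = {"RED-quale": "red", "GREEN-quale": "green"}
jill_report = {"RED-quale": "green", "GREEN-quale": "red"}

def report(experience_map, report_map, stimulus):
    # The only publicly observable output: the spoken word.
    return report_map[experience_map[stimulus]]

# Functionally identical: identical reports for every stimulus...
assert all(report(jack_experience, jack_report, s) ==
           report(jill_experience, jill_report, s) for s in stimuli)
# ...even though their private experiences differ on every stimulus.
assert all(jack_experience[s] != jill_experience[s] for s in stimuli)
```

Any experiment that measures only inputs and outputs sees the composed mapping, and the composition is identical for Jack and Jill; the swap lives entirely in the private middle layer.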
The scrambling theorem applies to a theory of consciousness called integrated information theory (IIT), developed by Giulio Tononi and Gerald Edelman. 47,49−53 One intuition driving IIT is that each conscious state is highly informative, in the sense that it is but one of a large repertoire of potential conscious states. A second intuition is that, “Phenomenologically, every experience is an integrated whole, one that means what it means by virtue of being one, and that is experienced from a single point of view. For instance, the experience of a red square cannot be decomposed into the separate experience of red and the separate experience of a square.” 47
These intuitions are formalized in a definition of integrated information, denoted Φ, which quantifies the amount of information a system generates as a whole beyond what is generated independently by its minimal parts. Specific qualia (i.e., specific conscious experiences) are represented by particular shapes in an information-theoretic qualia space denoted Q. The theory then proposes, “According to the IIT, consciousness is one and the same thing as integrated information.” 47
This is a reductive functionalist proposal: it identifies consciousness with the functional property Φ together with the structure of Q. This proposal contradicts the scrambling theorem and is thus false.
This does not mean that Φ and Q are useless in the study of consciousness. To the contrary, once one drops the false claim that Φ and Q are identical to consciousness, one can then explore the interesting empirical claim that Φ and Q correlate well with the amounts and kinds of consciousness in a variety of systems. If they do, then one can try to develop a scientific theory of consciousness that explains why and how this is so. In the process, one must account for empirical phenomena that appear to violate the claim that consciousness is an integrated whole. For instance, in some experiments, observers demonstrate illusory conjunctions, in which they incorrectly bind visual features such as color and form in their conscious experiences. 54,55 In other experiments, observers exhibit change blindness, failing to integrate into their conscious experience visual features that are right before their eyes. 56 Although casual examination of conscious phenomenology suggests that it is an integrated whole, perhaps this reveals little about the true nature of consciousness and more about our inability to be aware of our own blindnesses. These are the kinds of empirical and theoretical challenges that Φ and Q face once we give up the false claim that they are identical to consciousness and begin the serious work of building a genuine theory.
So IIT, properly understood, proposes correlates of consciousness but offers no explanatory theory of consciousness. The mystery that puzzled Huxley in 1866 is no less puzzling to IIT today.
9. Let's Abandon False Assumptions
So far, we have questioned two key assumptions of most current attempts to construct a scientific theory of consciousness. The first assumption, that natural selection favors true perceptions, finds little support in empirical studies using evolutionary games and genetic algorithms. The second assumption, that the mind is what the brain does, is provably false.
It's not easy to abandon the first assumption, to let go of a realist interpretation of our perceptual experiences and instead adopt an interface interpretation. As Thomas Nagel put it, "[S]cientific realism would be undermined if we abandoned a realistic interpretation of the perceptual experiences on which science is based." 27 In particular, scientific realism about neurons and neural activity would be undermined, and this, in turn, would undermine the quest for a biological basis for consciousness. More generally, scientific realism about space-time and physical objects would be undermined, and this, in turn, would undermine the quest for a physicalist theory of consciousness. Not a happy idea for most current researchers. But evolutionary game theory, applied to perceptual evolution, sends a clear message. It tells us not to reify our perceptions.
It's also not easy to abandon the second assumption. The mind-body problem is so mysterious that claims of identity between consciousness and brain function seem to be the only way out. But the scrambling theorem clearly shows that such identity claims are simply giving up, throwing in the towel. It tells us that we cannot shirk the job of developing a scientific theory by instead trying to pawn off a claim of identity. It tells us not to reify our descriptions.
Since most current research is predicated on the two assumptions we've just rejected, the obvious question is: How shall we now proceed in our quest for a scientific theory of consciousness? What different assumptions shall we try?
At points like this, there are no formulas for how to proceed. Even principles like Occam's Razor are fallible guides. (I once heard Francis Crick, at a meeting of the Helmholtz Club, wryly remark, "Many men have slit their throats with Occam's Razor.") These are points of creativity, of revolution, of risk. We strike out in a direction, knowing full well we are likely to be wrong.
10. Let's Assume That Consciousness Is Fundamental
It's in this spirit that I suggest we try to develop a scientific theory of consciousness that takes consciousness as fundamental, not as derivative on neural activity or functional complexity. I call this approach conscious realism. Abandoning a physicalist ontology is, of course, not ipso facto renouncing scientific methodology. To the contrary, it is scientific methodology, and the spectacular failure of physicalist theories, that prompts the proposal of conscious realism.
A conscious realist theory of consciousness owes us a mathematically precise theory of consciousness qua consciousness. What structures and dynamics does consciousness itself have? How are these related to the structures and dynamics in well-established theories of physics such as relativity and quantum theory? For ideas and constraints on such a theory of consciousness, we can consider, inter alia, NCCs, psychophysical experiments, brain imaging, and mathematical correlates such as Φ and Q, but our goal is not a theory of reduction or emergence. It is a mathematical theory of consciousness on its own terms.
Conscious realism is not the transcendental idealism of Kant. For Kant, the noumenal world, the thing in itself, was beyond description and, thus, beyond the ken of science. Conscious realism proposes that consciousness is the thing in itself and is within the purview of science.
The goal of conscious realism differs from that of the prior history of subjective and objective idealism, which has never produced a mathematically precise scientific theory of consciousness (and indeed has sometimes been promoted as adversarial to science). The goal of conscious realism is a rigorous and falsifiable theory of consciousness that takes consciousness as fundamental but makes full contact with current theories in physics (i.e., explains how these theories fit within the framework of conscious realism).
There has been some progress toward a mathematical theory of consciousness qua consciousness, and toward integrating this theory with quantum theory. 10,12,19,20 This is not the place for mathematical details, but the flavor of the approach can be appreciated if one knows just a bit about so-called Markovian kernels, as illustrated in Figure 7.
Figure 7. Markovian kernels. In (a) is shown a perceptual strategy that is a function: a given world state w triggers only one perceptual state x. In (b) is shown a perceptual strategy that is a Markovian kernel: a given world state w can, in each act of observation, trigger one of several perceptual states x1, x2, x3, and so on. In this example, the probability of triggering perceptual state x1 is .1, the probability of triggering x2 is .6, and the probability of x3 is .3.
In section 3, we defined a perceptual strategy to be a function P : W → X, where W denotes possible states of the objective world (whatever it might be) and X is a set of possible perceptual states of some organism. This models the situation where a specific state of the world, say state w, triggers a specific perceptual response, say x1. But what if things aren't so simple? What if sometimes w triggers x1, but other times it instead triggers x2 or x3? In this case, we can no longer use a function to describe the perceptual strategy. But all is not lost. We can use probabilities instead. We can say that if w obtains, then the probability that we will see x1 is such and such, the probability that we will see x2 is such and such, and so on for all the relevant possible perceptions. This is similar to saying that if we roll a fair die, then the probability of rolling a 1 is such and such, the probability of rolling a 2 is such and such, and so on. But if the die is not fair, then we will need to assign different probabilities to these outcomes. Thus, for each different state of the world, we get a different set of probabilities for the perceptions that might be triggered by that state of the world. The mathematical object that does this, that for each possible state of the world gives the probabilities of the various possible perceptions, is called a Markovian kernel. For each state w of the world, it gives a probability distribution on the possible perceptions that might occur.
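The kernel of Figure 7(b) is easy to sketch in code. This is only a toy illustration, not part of the formalism in the cited references; the second world state "w2" and its probabilities are hypothetical additions made to show the general shape.

```python
import random

# A Markovian kernel, sketched as a mapping from each world state to a
# probability distribution over perceptual states. The probabilities for
# state "w" follow the example in Figure 7(b); world state "w2" and its
# numbers are hypothetical, added only for illustration.
P = {
    "w":  {"x1": 0.1, "x2": 0.6, "x3": 0.3},
    "w2": {"x1": 0.5, "x2": 0.25, "x3": 0.25},
}

def perceive(world_state, kernel, rng=random):
    """Draw one perceptual state, given a world state, per the kernel."""
    states, probs = zip(*kernel[world_state].items())
    return rng.choices(states, weights=probs, k=1)[0]

# Each row of a Markovian kernel must be a probability distribution.
for dist in P.values():
    assert abs(sum(dist.values()) - 1.0) < 1e-9
```

Repeated calls to perceive("w", P) return x2 about 60% of the time, mirroring the repeated acts of observation described above.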
Now that we know a bit about Markovian kernels, we can use them to describe not just perceptions but also decisions and actions. Suppose that an organism has a repertoire of possible behaviors, say G. A particular action gi might be, say, to take one step forward; another action gj might be to turn 90 degrees to the right; and so on. Then we can think of a decision as choosing a behavior based on one's current perceptions and goals. If my current perceptions are xi, then I might, with a certain probability, choose behavior gj or gk, and so on. Thus, we can model decisions by a Markovian kernel, call it D, that describes, for each of our possible perceptions, the probabilities of the various behaviors we might choose to perform.
Once we have decided on a behavior, we then act on the world and change the state of the world. If we act using behavior gi, then we can assume that there is some probability that the new state of the world W will be wj, another probability that the new state will be wk, and so on. Thus, once again we can use a Markovian kernel to model our actions on the world.
We can represent these ideas in a simple diagram, the PDA (Perception-Decision-Action) loop, as shown in Figure 8. The perceptual kernel P maps the world W to the organism's perceptions X; the decision kernel D maps the organism's perceptions to behaviors; the action kernel A maps the organism's behaviors onto changed states of the world.
Figure 8. The Perception-Decision-Action (PDA) loop.
We can then, as a first step toward a mathematically precise theory of consciousness qua consciousness, propose a definition of the technical term conscious agent. A conscious agent is a 5-tuple (X, G, P, D, A), where X is a set of perceptions, G a set of behaviors, and P, D, and A are Markovian kernels as shown in Figure 8.
This definition of conscious agent is not intended to be a reductive functionalist theory of consciousness. To the contrary, the term conscious agent is here treated as a technical term that has a precise mathematical definition. It is then an empirical question how well conscious agents perform as a descriptive and predictive model of consciousness. If empirical research turns up shortcomings of conscious agents, we can revise the definition or abandon it altogether in favor of a better theory.
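A minimal simulation may make the definition concrete. Everything below is a hypothetical toy: the state spaces and all the probabilities are invented; only the structure, a 5-tuple (X, G, P, D, A) with three Markovian kernels cycled through a PDA loop, comes from the text.

```python
import random

# Toy state spaces for a conscious agent (X, G, P, D, A); all names and
# probabilities here are invented for illustration.
X = ["x1", "x2"]  # perceptions
G = ["g1", "g2"]  # behaviors

# Markovian kernels: P maps world states to perceptions, D maps
# perceptions to behaviors, A maps behaviors to new world states.
P = {"w1": {"x1": 0.8, "x2": 0.2}, "w2": {"x1": 0.3, "x2": 0.7}}
D = {"x1": {"g1": 0.9, "g2": 0.1}, "x2": {"g1": 0.4, "g2": 0.6}}
A = {"g1": {"w1": 0.5, "w2": 0.5}, "g2": {"w1": 0.1, "w2": 0.9}}

def step(kernel, state, rng):
    """Sample one transition of a finite Markovian kernel."""
    outs, probs = zip(*kernel[state].items())
    return rng.choices(outs, weights=probs, k=1)[0]

def pda_loop(w0, n_steps, seed=0):
    """Run the Perception-Decision-Action loop; return world states."""
    rng = random.Random(seed)
    w, trajectory = w0, [w0]
    for _ in range(n_steps):
        x = step(P, w, rng)  # perception: world -> experience
        g = step(D, x, rng)  # decision: experience -> behavior
        w = step(A, g, rng)  # action: behavior -> new world state
        trajectory.append(w)
    return trajectory
```

Calling pda_loop("w1", 10) yields a sequence of world states produced by the agent's own perceptions, decisions, and actions, which is all the loop of Figure 8 asserts.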
11. Conscious Agents Are a Promising Model of Consciousness
But of course, I propose this definition of conscious agents because I think it might do well as a theory of consciousness. There are several reasons why.
Conscious Agents and Bayesian Perception
First, researchers have had striking success in modeling perception, multimodal integration, and perceptually guided behavior as so-called Bayesian inference. 57−59 Consider, for instance, the conscious experience of apparent motion, which you can see in the Sphere Applet. The applet shows you a sequence of movie frames in which dots appear in each frame. If the frames are shown slowly enough, then you see discrete frames with dots that are unrelated from one frame to the next. But if the frames are shown more quickly, your conscious experience suddenly transforms: you see dots moving smoothly, as you can check for yourself in the applet.
This transformation of conscious experience is a remarkable feat. Figure 9 shows why. For simplicity, let's consider a movie in which each frame has only two dots, and let's focus on two successive frames of this movie. Figure 9a shows this situation, in which the two dots in the first frame are colored black and, in the second frame, red. Now if we are to experience smooth movement of the dots, then the visual system must decide to move the dots either as shown in Figure 9b or as in Figure 9c. This is known as the "correspondence problem": deciding, for each dot in one frame, where it moves to in the next frame. If there are many dots, then there are many possible correspondences. But we only see one correspondence from frame to frame and, thus, one smooth motion.
Figure 9. The correspondence problem in apparent motion.
We can model this perception as an inference. The premises of the inference are the positions of the dots in the two frames. Given these premises, the visual system tries to infer the "best" correspondence. For instance, the visual system seems to prefer correspondences in which all the dots move as little as possible from one frame to the next. It turns out that one can model these preferences, and the choice of correspondence, using Bayesian inference. 60 Briefly, if the positions of the dots in the two frames are denoted D and the possible correspondences C, then the visual system is effectively computing the conditional probability p(C | D) and then choosing the particular correspondence that, say, maximizes this conditional probability.
But p(C | D) is properly understood as a Markovian kernel: for each set of dot positions D, it gives a probability measure on the possible correspondences C. Thus, our conscious experience of smooth motion can be properly modeled using Markovian kernels.
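The computation of p(C | D) for the two-dot case of Figure 9 can be sketched as follows. The coordinates and the small-motion preference (a Gaussian-style falloff) are invented for illustration; the text only specifies that the visual system prefers correspondences with minimal motion.

```python
import math

# Toy correspondence problem (Figure 9): two dots in frame 1, two dots
# in frame 2. All coordinates and the motion prior are invented.
frame1 = [(0.0, 0.0), (1.0, 0.0)]
frame2 = [(0.1, 0.0), (1.1, 0.0)]

# Candidate correspondences: entry i gives, for dot i of frame 1, the
# index of its partner in frame 2.
candidates = [(0, 1), (1, 0)]

def weight(corr):
    """Prefer correspondences with small total motion (unnormalized)."""
    total = sum(math.dist(frame1[i], frame2[j]) for i, j in enumerate(corr))
    return math.exp(-total ** 2)

weights = [weight(c) for c in candidates]
z = sum(weights)
posterior = [w / z for w in weights]  # p(C | D), normalized over C

best = candidates[posterior.index(max(posterior))]
# The small-motion correspondence (0, 1) wins, matching the visual
# system's observed preference for minimal motion.
```

Since posterior assigns a probability to each correspondence for the given dot positions, it is exactly one row of a Markovian kernel from D to C.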
Indeed, all perceptual experiences can be modeled using Bayesian inference and, thus, by Markovian kernels. For this reason, using Markovian kernels to model perceptions and decisions (i.e., the maps P and D of Figure 8) in the definition of conscious agents allows this definition to immediately inherit substantial support, both empirical and theoretical, from current research on conscious perceptual experiences.
The Sphere Applet also demonstrates a second transformation of conscious experience. As you can check for yourself, not only do the dots appear to move smoothly, but they also appear to pop out in 3D, forming a sphere. The applet lets you play with this. If you click the button labeled "More Slant" six times, the sphere will disappear, and the dots will appear to move only in a plane. By clicking the "More Slant" and "Less Slant" buttons, you can make the sphere appear and disappear.
This conscious experience is called "structure from motion" and can also be modeled as Bayesian inference. 61 Here the visual system starts with the correspondences C and infers 3D objects T. In the process, the visual system is effectively computing the conditional probability p(T | C), which can also be represented by a Markovian kernel. Thus, we see that conscious agents can build on each other to create new and more complex conscious experiences. In the sphere applet, one conscious agent uses the kernel p(C | D) to create the conscious experience of smooth motion in 2D, and a second builds on this, using the kernel p(T | C) to create the conscious experience of a 3D object.
Conscious Agents and the Combination Problem
Those who take consciousness as fundamental face what is known as the "combination problem." 62−65 William Seager defines this as "the problem of explaining how the myriad elements of 'atomic consciousness' can be combined into a new, complex and rich consciousness such as that we possess." 62 William James understood this problem back in 1890: "Where the elemental units are supposed to be feelings, the case is in no wise altered. Take a hundred of them, shuffle them and pack them as close together as you can (whatever that may mean); still each remains the same feeling it always was, shut in its own skin, windowless, ignorant of what the other feelings are and mean. There would be a hundred-and-first feeling there, if, when a group or series of such feelings were set up, a consciousness belonging to the group as such should emerge. And this 101st feeling would be a totally new fact; the 100 original feelings might, by a curious physical law, be a signal for its creation, when they came together; but they would have no substantial identity with it, nor it with them, and one could never deduce the one from the others, or (in any intelligible sense) say that they evolved it…. The private minds do not agglomerate into a higher compound mind."
Conscious agents provide a natural solution to the combination problem. We just saw an example of this in the case of motion perception. One group of conscious agents starts with discrete frames of static dots and creates conscious experiences of dots that move smoothly in 2D. These conscious experiences are combined as the input to a higher conscious agent that creates a literally new dimension of conscious experience, namely, a 3D experience.
Formally, conscious agents can model such combinations of consciousness by so-called kernel "tensor products," "direct sums," and "composition." This is not the place to delve into mathematical details. But intuitively, the tensor products and sums of kernels can be used to take the output experiences of one group of conscious agents and arrange them to be the proper input for a higher conscious agent that creates a new kind of conscious experience (e.g., a 3D experience out of input experiences that are only 2D). The hierarchy relationship between conscious agents can be modeled formally by kernel composition. 19
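As a rough illustration of a kernel tensor product (with invented matrices; the actual constructions are developed in reference 19), two finite Markovian kernels can be combined with a Kronecker product, and the result is again a Markovian kernel on the paired states:

```python
import numpy as np

# Two invented perceptual kernels, one per agent; rows index world
# states, columns index perceptions, and each row sums to 1.
P1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
P2 = np.array([[0.6, 0.4],
               [0.3, 0.7]])

# The Kronecker product gives a joint kernel on paired states:
# (P1 (x) P2)[(w1, w2), (x1, x2)] = P1[w1, x1] * P2[w2, x2].
joint = np.kron(P1, P2)

# The joint kernel is again Markovian (each row sums to 1), so the
# composite structure still has the kernel form the definition of a
# conscious agent requires.
assert np.allclose(joint.sum(axis=1), 1.0)
```

This only shows that the combination has the right mathematical type; whether such combinations model the phenomenology of combined experience is the empirical question the text raises.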
More intuitively, conscious agents can be mathematically combined to create new conscious agents. That is, when conscious agents are properly combined, the new composite mathematical structure satisfies the definition of a conscious agent and, thus, is a conscious agent. The sphere applet, as we just discussed, illustrates the corresponding phenomenology. There are many examples in visual perception of similar phenomena, in which conscious visual experiences of one type are combined to form the inputs for a new conscious visual experience with literally new phenomenological features that cannot be reduced to or identified with the component experiences.
Conscious Agents and Quantum Bayesianism
Conscious agents provide a promising link with quantum theory. In standard formulations of quantum theory, observers play a key but controversial role; the field of quantum measurement tries to understand this role. 67−69 Although there is no consensus among experts in quantum theory about the relationship between consciousness and quantum mechanics, some theories of consciousness build on aspects of quantum theory. 70−71
One interpretation of quantum theory that has arisen from recent work in quantum information and computation is called "quantum Bayesianism," or QBism for short. 72−73 According to QBism, the state of a quantum system is not a description of an objective reality independent of any observer. Instead, the quantum state depends on the observer and, indeed, "a quantum state is a state of belief about what will come about as a consequence of his actions upon the system." 70 Just as the interface theory of perception claims that our perceptions do not faithfully represent the true nature of reality, the QBist claims "there is no sense in which the quantum state itself represents (pictures, copies, corresponds to, correlates with) a part or whole of the external world, much less of a world that just is. In fact, the very character of the theory seems to point to the inadequacy of the representationalist program when attempted on the particular world we live in." 70 In consequence, quantum measurements are not reports of objective reality: "At the instigation of a quantum measurement, something new comes into the world that was not there before; and that is about as clear an instance of creation as one can imagine." 70
Why should it be that quantum states are not reports of objective reality? When a quantum state describes a quantum object in terms of position, momentum, and so forth, it is using predicates grounded in our perceptions (e.g., of space and time and physical objects). Now physics doesn't use our perceptual predicates just as they are in our untutored perceptions. In our untutored perception of space, for instance, the moon looks about as far away from us as the stars. Physics takes our untutored perceptual predicates and extends them (e.g., using symmetry groups) to new predicates. But the basic predicates of space, time, and physical objects are simply adaptations that have been shaped by natural selection into the perceptual systems of Homo sapiens and, as we have seen from evolutionary game theory, natural selection does not, in general, favor true perceptions. Our perceptions were shaped to guide adaptive behavior, not to report truth.
So evolution by natural selection is the reason why quantum states are not reports of objective reality. Instead, as QBism says, the information in an observer's quantum state gives "The consequences (for me) of my actions upon the physical system." 70 Of course, natural selection has shaped our perceptions exactly for the purpose of informing us about the fitness consequences of our behaviors. Fitness, not truth, is the coin of the perceptual realm. My perceptions have been shaped by natural selection to tell me about the fitness consequences for me of my actions.
The concrete technical challenge here is to connect the formal definition of a conscious agent with the formalism of quantum theory as it is interpreted by QBism. For instance, referring again to Figure 8, the act of measurement within the formalism of conscious agents would be modeled by the kernel composition AP. The conscious agent can know the kernel AP, since this kernel is a map from its possible behaviors G to its possible perceptions X, and these are clearly known by the conscious agent. But the conscious agent cannot know A and P separately, because each kernel involves the unknown world W. So, according to the theory of conscious agents, every quantum measurement must be modeled as a composition of two kernels, AP, factoring through an unknown world W. What constraint does this place on models of measurement? How does it relate to the unusual calculus of probabilities that arises in the Born rule for quantum measurement, in which probabilities are given by squares of complex amplitudes? QBists have shown that the appearance of complex amplitudes in the measurement process is merely a computational convenience and not a fundamentally more powerful calculus of probabilities. One could, in principle, dispense with complex numbers and do quantum theory entirely with standard probabilities. The Born rule then turns out to be simply a quantum law of total probability, relating actual measurements to counterfactual measurements. 70 How is this related to the kernel AP of conscious agents, which always factors real measurements through an unknown world W?
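The point that AP is knowable while its factors are not can be illustrated with finite stochastic matrices (all numbers invented): composition of kernels is matrix multiplication, and distinct factorizations through a hidden W can yield the same observable AP.

```python
import numpy as np

# Invented stochastic matrices; each row sums to 1. A maps behaviors G
# to world states W; P maps world states W to perceptions X.
A = np.array([[0.7, 0.3],   # rows: g1, g2; columns: w1, w2
              [0.2, 0.8]])
P = np.array([[0.9, 0.1],   # rows: w1, w2; columns: x1, x2
              [0.4, 0.6]])

# Kernel composition is matrix multiplication: AP maps G directly to X.
AP = A @ P

# AP is itself a Markovian kernel from behaviors to perceptions.
assert np.allclose(AP.sum(axis=1), 1.0)

# A second, trivially different factorization through another hidden
# "world" gives the very same observable kernel, so the agent cannot
# recover A and P from AP alone.
A2 = AP.copy()    # G -> W', where W' happens to mirror X
P2 = np.eye(2)    # W' -> X, the identity kernel
assert np.allclose(A2 @ P2, AP)
```

This is only a finite-dimensional sketch of the factoring claim in the text, not a model of quantum measurement itself.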
12. Objections and Replies
Questioning fundamental and widely believed assumptions is no easy task. Such assumptions are widely held for good reason, and it is natural and healthy that new proposals, such as those offered here, should be met with skepticism. In this last section, I canvass a few objections and offer responses.
Your interface theory of perception is clearly false. It says that physical objects are just icons of a species-specific interface and, thus, are not real. But if a bus hurtles down a road at high speed, would you step in front of it? If you did, you would find out that it is not just an icon, it is real, and your theory is nonsense.
The interface theory of perception does indeed assert that physical objects are simply icons of a species-specific perceptual interface. Still, I would not step in front of the bus, for the same reason I wouldn't carelessly drag a file icon on my desktop to the trashcan. Why? I don't take the icon literally, but I do take it seriously. The color, shape, and position of the icon are not literally true descriptions of the file; indeed, color and shape are even the wrong language to attempt a true description. But the interface is designed to guide useful behaviors, and those behaviors have consequences even if the interface does not literally resemble the truth. Natural selection shaped our perceptions, in part, to keep us alive long enough to reproduce. We had better take our perceptions seriously. If you see a tiger, keep away. If you see a cliff, don't step over. Natural selection ensures that we must take our perceptions seriously. But it is a logical error to conclude that we must, therefore, take our perceptions literally.
As discussed before, the interface theory of perception fits well with QBist interpretations of quantum theory, which say that we should not take quantum states literally as descriptions of an objective reality independent of the observer. Thus, the interface theory is not falsified by current physics but instead fits well with, and even offers evolutionary explanations for, puzzling aspects of quantum physics.
The objection uses the word real. This word is used with two very different meanings. In the objection, it is used to mean that something exists even if it is not observed. So the bus is argued to be real in the sense that it would exist even if no one observed it. But there is another sense of real, as when I say I have a real headache. The headache would not exist if no one (e.g., me) observed it. But if you claimed on those grounds that my headache wasn't real, I would be cross with you. So the interface theory says that physical objects such as a bus are real in the headache sense of real. But it denies that they are real in the sense of existing whether or not they are observed.
Doesn't the interface theory say that the moon is only there when you look? That's clearly absurd.
Yes, the interface theory says that the moon is only there when I look. However, the interface theory does not deny that, when I see the moon, something exists whether I observe it or not. But that something is not the moon, and it is probably not anything in space and time. Space, time, and the moon are just the best that I, as a humble member of the species H. sapiens, can come up with. There is a reality that exists independent of my perceptions; the interface theory does not endorse metaphysical solipsism. But it is an elementary mistake to assume that what exists in any way resembles what I perceive.
The moon is my perceptual experience. When you see the moon, you have your own perceptual experience that is distinct from (not numerically identical to) my perceptual experience. So when we both look up at "the moon," there are actually two moons, one of your experience and one of mine. There is something that exists that triggers each of us to create an experience of the moon, but that something, in all probability, does not resemble the moon.
Actually, the interface theory is nothing new. Physicists have been telling us for decades that objects are mostly empty space. That desk looks solid, but it is really just particles whizzing through empty space at high speeds.
Indeed, physicists have been telling us this for some time. But the claim of the interface theory is different, and more radical. It says that the particles themselves, and the empty space through which they travel, are not the objective reality. They are still part of the interface. Suppose I admit that the icon on my desktop is not the reality of the file, but then I whip out a magnifying glass, look closely at the icon, and conclude that the pixels I see are the reality. I've made a fundamental mistake. The pixels are still part of the desktop interface, and they don't resemble the real file any more than the icon does. The same is true of the particles whizzing through empty space.
The interface theory of perception means science is not possible. If our senses don't deliver the truth, then how can science possibly proceed?
The interface theory poses no problem to science. It simply says that one particular theory is incorrect, viz., the theory that objective reality consists in part of space, time, and physical objects. Discarding false theories is genuine scientific progress. Now that we know not to take our perceptions at face value, we can be more sophisticated in their interpretation. We now understand that our perceptions are shaped by natural selection to inform us about fitness, not truth. We can still construct theories about the nature of objective reality and about how that reality relates to our perceptions. We can then make empirical predictions that can be tested. The methodology of science is not called into question by the interface theory.
You use evolutionary game theory to conclude that our perceptions do not report the truth. But how about our logic and mathematics? Does evolution also shape them to be incorrect? And if so, isn't this a defeater for your whole program? You use the logic and mathematics of evolution to conclude that logic and mathematics are unreliable.
I agree that if evolutionary games show that natural selection favors incorrect logic and mathematics, then I have a real problem. It would be self-refuting. This is clearly an important research area.
I think, however, that it will turn out that the same evolutionary games which demonstrate that natural selection does not favor true perceptions will also demonstrate that natural selection favors true logic and mathematics. Suppose, for instance, that the objective world contains two resources and that the fitness payoff of these resources, for a specific organism, depends on the sum of the resource quantities. Then an organism whose perceptual system performs the sum correctly will be better able to reap the fitness benefits of those resources than one that does not. More generally, if the fitness payoffs are some function f of structures in the objective world, then selection pressures will shape organisms to correctly compute f.
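A tiny simulation makes the two-resource example concrete. Everything in it is invented for illustration: the uniform resource amounts, the choice between two territories, and the rival strategy that ignores the second resource. It only shows the qualitative point that an organism computing the fitness-relevant sum correctly outcompetes one that does not.

```python
import random

def true_sum(r1, r2):
    return r1 + r2   # perceives the fitness-relevant sum correctly

def first_only(r1, r2):
    return r1        # ignores the second resource entirely

def expected_fitness(strategy, trials=10_000, seed=1):
    """Average payoff when the strategy picks between two territories."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Two territories, each with two resource quantities; the
        # organism picks the one its strategy rates higher and collects
        # that territory's true summed payoff.
        a = (rng.random(), rng.random())
        b = (rng.random(), rng.random())
        chosen = a if strategy(*a) >= strategy(*b) else b
        total += sum(chosen)
    return total / trials
```

Because both calls use the same seed, the strategies face identical territories, and the sum-computing strategy's average payoff is strictly higher: selection would favor it.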
There are a couple of provisos. First, the selection pressures will only shape organisms to correctly compute the portions of f that are, in fact, relevant to fitness. If, for instance, the payoff function rewards only one element of the range of f and gives no rewards for any other elements of its range, then an organism that correctly computes only the pullback of that single element will be able to reap all the fitness rewards. However, as the behavioral repertoire of the organism increases and other elements of the range of f are rewarded for different behaviors, the organism will need to correctly compute the pullbacks of these elements as well. Thus, the selection pressures are toward truth, even if, in practice, they don't get all the way there.
A second proviso is that it is not clear that selection pressures will uniquely determine the range of a function. It appears that, as long as all the pullbacks are computed correctly, they can be randomly assigned (even incorrectly) to different elements of the range, and the organism can still reap all the fitness benefits. Thus, it might turn out that selection pressures are toward the truth, but only up to automorphisms of the range of functions.
Now I have been speaking of logic and math as they apply in the normal functioning of our perceptual processing, not as they are used in our deliberate reasoning. It is quite possible that our deliberate reasoning has evolved not as a guide to truth but simply to serve some other useful function. Dan Sperber and his colleagues, for instance, argue that reasoning evolved to allow us to devise and evaluate arguments designed to persuade others about what we want. 75 The goal of our reasoning is successful argument, not truth. And this, they suggest, is one reason for the notorious confirmation bias in human reasoning.
The ideas discussed here have implications for long-standing debates about whether evolution is compatible with the claim that our cognitive faculties are reliable. Plantinga, for instance, argues that evolution and naturalism together make it improbable, or at best inscrutable, that our cognitive faculties are reliable; this, he says, is a defeater for all our beliefs, including our beliefs in evolution and naturalism. 35 But the ideas discussed here suggest that the question must be refined if we are to make real progress. Asking whether evolution is likely to produce reliable cognitive faculties is too broad a question. Perhaps evolution produces untrue perceptions but reliable logic and mathematics. We shall have to look at each aspect of human cognition separately and ask, using tools such as evolutionary games and genetic algorithms, what natural selection is likely to do with that aspect.
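As one example of such a tool, here is a minimal replicator-dynamics sketch. The payoff matrix is hypothetical, chosen so that a fitness-only strategy strictly dominates a costlier truth-perceiving strategy; the point is only to show the kind of calculation these debates call for.

```python
# Two strategies: T perceives truth (paying a cost), F perceives fitness only.
# Hypothetical payoff matrix: A[i][j] is the payoff to strategy i against j.
A = [[3.0, 1.0],   # T vs (T, F)
     [4.0, 2.0]]   # F vs (T, F)

x, dt = 0.5, 0.01  # x = fraction of the population playing T
for _ in range(10_000):
    fT = A[0][0] * x + A[0][1] * (1 - x)
    fF = A[1][0] * x + A[1][1] * (1 - x)
    mean = x * fT + (1 - x) * fF
    x += dt * x * (fT - mean)   # replicator equation: dx/dt = x (fT - mean)

# F strictly dominates T here, so the truth-perceiving strategy goes extinct.
assert 0.0 <= x < 1e-3
```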
When you dismissed the integrated information theory (IIT) of consciousness, you dismissed the measure Φ of integrated information, which may turn out to be useful in the study of consciousness. This is a serious mistake.
I did not dismiss IIT tout court. I dismissed Tononi's claim of identity between consciousness and Φ. That claim is false, as is established by the scrambling theorem. But I am certainly open to the possibility that Φ will turn out to be a useful measure in the study of consciousness. If so, it can be applied within the formalism of conscious agents. The Markovian kernels within that formalism are amenable to IIT analyses such as effective information and Φ.
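For concreteness, effective information for a finite Markov kernel is the mutual information between a maximum-entropy (uniform) input and the resulting output; a minimal sketch, assuming a finite state space and Tononi's definition of the measure:

```python
import math

def effective_information(K):
    """Effective information of a Markov kernel K, where K[x][y] = P(y | x):
    the mutual information I(X; Y) when the input X is perturbed to the
    uniform (maximum-entropy) distribution, in bits."""
    n = len(K)
    m = len(K[0])
    py = [sum(K[x][y] for x in range(n)) / n for y in range(m)]
    ei = 0.0
    for x in range(n):
        for y in range(m):
            if K[x][y] > 0:
                ei += (1 / n) * K[x][y] * math.log2(K[x][y] / py[y])
    return ei

# A deterministic, invertible kernel on 4 states carries log2(4) = 2 bits.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert abs(effective_information(identity) - 2.0) < 1e-9

# A kernel that ignores its input carries 0 bits.
noise = [[0.25] * 4 for _ in range(4)]
assert abs(effective_information(noise)) < 1e-9
```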
Your interface theory of perception and conscious-agent theory of consciousness make no predictions and are thus not genuine scientific theories.
Here are some predictions. No physical object has real values of dynamical physical properties (such as position, momentum, or spin) when it is not observed. If we found definitive evidence otherwise, my theories would be in ruins. The experimental evidence so far is that quantum objects violate Bell's inequalities, which is often interpreted as a refutation of local realism; 67 such an interpretation is exactly what the interface theory of perception predicts. However, other interpretations, such as Bohm's, which keeps realism at the expense of locality, and Everett's, which keeps realism at the expense of counterfactual definiteness, are not ruled out.
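The violation at issue can be checked numerically in the CHSH form of Bell's inequality. Assuming the standard quantum correlation E(a, b) = −cos(a − b) for a singlet pair and the usual angle choices, the quantum value exceeds the local-realist bound of 2:

```python
import math

# Quantum correlation for the singlet state at measurement angles a and b.
E = lambda a, b: -math.cos(a - b)

# Standard angle choices that maximize the quantum violation of CHSH.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Any local-realist model obeys S <= 2; quantum mechanics reaches 2*sqrt(2).
assert S > 2
assert abs(S - 2 * math.sqrt(2)) < 1e-9
```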
Another prediction: No physical object has any causal powers. I call this doctrine epiphysicalism: Consciousness creates physical objects and their properties, but physical objects themselves have no causal powers. This is the converse of epiphenomenalism, which claims that physical objects, such as brains, create conscious experiences, but conscious experiences themselves have no causal powers. If any physical object were shown to have causal powers, my theories would be in ruins.
Another prediction: Every perceptual capacity can be represented by the conscious-agent formalism. If there were some perceptual capacity whose formal statement could not be represented within the formalism of conscious agents, then the conscious-agent formalism would be falsified. This claim about conscious agents and perceptual capacities is analogous to the claim made about Turing machines and effective procedures. The Church-Turing thesis states that every algorithm can be instantiated by some Turing machine. Were someone to produce an algorithm that could not be so instantiated, the Church-Turing thesis would be falsified, and Turing machines would be an inadequate representation of algorithms. Similarly, the Conscious-Agent thesis states that every perceptual capacity can be instantiated by some conscious agent. Were someone to produce a perceptual capacity that could not be so instantiated, the Conscious-Agent thesis would be falsified. The Conscious-Agent thesis is, in effect, the claim that conscious agents are an adequate formalism for representing all conscious perceptual experiences.
Acknowledgements
For helpful discussions, I thank P. Foley, B. Marion, J. Mark, C. Prakash, M. Singh, G. Souza, and K. Stephens. Any errors are, of course, mine, not theirs. I also thank Elaine Ku for the rabbit images used in Figure 3.
References
[1] G. Miller, "What is the Biological Basis of Consciousness?" Science, 309 (2005), 79.
[2] C. Koch, The Quest for Consciousness: A Neurobiological Approach (Englewood, CO: Roberts & Company, 2004).
[3] T. H. Huxley, Lessons in Elementary Physiology, 8 (1866), 210.
[4] C. Koch, Consciousness: Confessions of a Romantic Reductionist (Cambridge, MA: MIT Press, 2012).
[5] F. Crick, The Astonishing Hypothesis: The Scientific Search for the Soul (New York: Scribner's, 1994).
[6] C. McGinn, "Can We Solve the Mind-body Problem?" Mind, 98 (1989), 349–366.
[7] A. Peres, Quantum Theory: Concepts and Methods (Boston: Kluwer, 1995).
[8] W. V. O. Quine, "Two Dogmas of Empiricism," The Philosophical Review, 60 (1951), 20–43.
[9] J. Mark, B. Marion, and D. D. Hoffman, "Natural Selection and Veridical Perceptions," Journal of Theoretical Biology, 266 (2010), 504–515.
[10] D. D. Hoffman and M. Singh, "Computational Evolutionary Perception," Perception, 41 (2012), 1073–1091.
[11] D. D. Hoffman, M. Singh, and J. Mark, "Does Evolution Favor True Perceptions?" Proceedings of the SPIE, Human Vision and Electronic Imaging XVIII (2013), doi:10.1117/12.2011609.
[12] M. Singh and D. D. Hoffman, "Natural Selection and Shape Perception," in Shape Perception in Human and Computer Vision: An Interdisciplinary Perspective, edited by S. Dickinson and Z. Pizlo (New York: Springer, 2013).
[13] D. D. Hoffman, "The Scrambling Theorem: A Simple Proof of the Logical Possibility of Spectrum Inversion," Consciousness and Cognition, 15 (2006), 31–45.
[14] D. D. Hoffman, "The Scrambling Theorem Unscrambled: A Response to Commentaries," Consciousness and Cognition, 15 (2006), 51–53.
[15] D. D. Hoffman, Visual Intelligence: How We Create What We See (New York: W. W. Norton, 1998).
[16] D. D. Hoffman, "The Interface Theory of Perception," in Object Categorization: Computer and Human Vision Perspectives, edited by S. Dickinson, M. Tarr, A. Leonardis, and B. Schiele (Cambridge: Cambridge University Press, 2009), 148–165.
[17] J. J. Koenderink, "Vision and Information," in Perception Beyond Inference: The Information Content of Visual Processes, edited by L. Albertazzi, G. van Tonder, and D. Vishwanath (Cambridge, MA: MIT Press, 2011).
[18] J. J. Koenderink, "World, Environment, Umwelt, and Innerworld: A Biological Perspective on Visual Awareness," Proceedings of the SPIE, Human Vision and Electronic Imaging XVIII (2013), doi:10.1117/12.2011874.
[19] B. M. Bennett, D. D. Hoffman, and C. Prakash, Observer Mechanics: A Formal Theory of Perception (San Diego: Academic Press, 1989).
[20] D. D. Hoffman, "Conscious Realism and the Mind-body Problem," Mind & Matter, 6 (2008), 87–121.
[21] R. Briggs, "Distorted Reflection," Philosophical Review, 118 (2009), 59–85.
[22] S. Palmer, Vision Science: Photons to Phenomenology (Cambridge, MA: MIT Press, 1999).
[23] A. Noë and J. K. O'Regan, "On the Brain-basis of Visual Consciousness: A Sensorimotor Account," in Vision and Mind: Selected Readings in the Philosophy of Perception, edited by A. Noë and E. Thompson (Cambridge, MA: MIT Press, 2002).
[24] W. S. Geisler and R. L. Diehl, "A Bayesian Approach to the Evolution of Perceptual and Cognitive Systems," Cognitive Science, 27 (2003), 379–402.
[25] D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (San Francisco: Freeman, 1982).
[26] A. Shimony, "Perception from an Evolutionary Point of View," Journal of Philosophy, 68, 19 (1971), 571–583.
[27] T. Nagel, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False (Oxford University Press, 2012).
[28] M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Cambridge, MA: Belknap Press of Harvard University Press, 2006).
[29] M. Mitchell, An Introduction to Genetic Algorithms (Cambridge, MA: MIT Press, 1998).
[30] W. Fish, Perception, Hallucination, and Illusion (Oxford University Press, 2009).
[31] W. Fish, Philosophy of Perception: A Contemporary Introduction (New York: Routledge, 2010).
[32] K. Lorenz, Behind the Mirror: A Search for a Natural History of Human Knowledge (New York: Harcourt Brace Jovanovich, 1973).
[33] H. Kornblith, Naturalizing Epistemology (Cambridge, MA: MIT Press, 1987).
[34] G. Radnitzky and W. W. Bartley, Evolutionary Epistemology, Rationality, and the Sociology of Knowledge (La Salle, IL: Open Court, 1993).
[35] J. Beilby, Naturalism Defeated? (Ithaca, NY: Cornell University Press, 2002).
[36] B. Skyrms, Signals: Evolution, Learning, and Information (Oxford University Press, 2010).
[37] J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics (Cambridge University Press, 1998).
[38] J. Maynard Smith, Evolution and the Theory of Games (Cambridge University Press, 1982).
[39] M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Cambridge, MA: Belknap Press, 2006).
[40] L. Samuelson, Evolutionary Games and Equilibrium Selection (Cambridge, MA: MIT Press, 1997).
[41] J. Locke, An Essay Concerning Human Understanding (Oxford University Press, 1690/1979).
[42] N. Block and J. Fodor, "What Psychological States Are Not," Philosophical Review, 81 (1972), 159–181.
[43] J. Bickle, Philosophy and Neuroscience: A Ruthlessly Reductive Account (Dordrecht: Kluwer Academic Publishers, 2003).
[44] D. Chalmers, The Conscious Mind (Oxford University Press, 1996).
[45] P. S. Churchland, Brain-wise: Studies in Neurophilosophy (Cambridge, MA: MIT Press, 2002).
[46] D. Dennett, Consciousness Explained (Boston: Back Bay Books, 1992).
[47] G. Tononi, "Consciousness as Integrated Information: A Provisional Manifesto," Biological Bulletin, 215 (2008), 216–242.
[48] S. Palmer, "Color, Consciousness, and the Isomorphism Constraint," Behavioral and Brain Sciences, 22 (1999), 923–989.
[49] G. Tononi and G. Edelman, "Consciousness and Complexity," Science, 282 (1998), 1846–1851.
[50] G. Tononi and O. Sporns, "Measuring Information Integration," BMC Neuroscience, 4 (2003), 31.
[51] G. Tononi, "An Information Integration Theory of Consciousness," BMC Neuroscience, 5 (2004), 42.
[52] G. Tononi and C. Koch, "The Neural Correlates of Consciousness: An Update," Annals of the New York Academy of Sciences, 1124 (2008), 239–261.
[53] A. B. Barrett and A. K. Seth, "Practical Measures of Integrated Information for Time-series Data," PLOS Computational Biology, 7 (2011), 1.
[54] A. Treisman and H. Schmidt, "Illusory Conjunctions in the Perception of Objects," Cognitive Psychology, 14(1) (1982), 107–141.
[55] P. T. Quinlan, "Visual Feature Integration Theory: Past, Present, and Future," Psychological Bulletin, 5 (2003), 643–673.
[56] D. J. Simons and M. S. Ambinder, "Change Blindness: Theory and Consequences," Current Directions in Psychological Science, 14 (2005), 44–48.
[57] D. Knill and W. Richards, Perception as Bayesian Inference (Cambridge University Press, 1996).
[58] D. Kersten, P. Mamassian, and A. Yuille, "Object Perception as Bayesian Inference," Annual Review of Psychology, 55 (2004), 271–304.
[59] T. E. Hudson, L. T. Maloney, and M. S. Landy, "Optimal Compensation for Temporal Uncertainty in Movement Planning," PLOS Computational Biology, 4(7) (2008), e1000130, doi:10.1371/journal.pcbi.1000130.
[60] J. C. Read, "A Bayesian Model of Stereopsis Depth and Motion Direction Discrimination," Biological Cybernetics, 86 (2002), 117–136.
[61] D. A. Forsyth, S. Ioffe, and J. Haddon, "Bayesian Structure from Motion," Proceedings of the 7th IEEE International Conference on Computer Vision, 1 (1999), 660–665.
[62] W. Seager, "Consciousness, Information, and Panpsychism," Journal of Consciousness Studies, 2 (1995), 272–288.
[63] P. Goff, "Why Panpsychism Doesn't Help Us Explain Consciousness," Dialectica, 63 (2009), 289–311.
[64] M. Blamauer, "Is the Panpsychist Better Off as an Idealist? Some Leibnizian Remarks on Consciousness and Composition," Eidos, 15 (2011), 48–75.
[65] S. Coleman, "The Real Combination Problem: Panpsychism, Micro-subjects, and Emergence," Erkenntnis (2013), doi:10.1007/s10670-013-9431-x.
[66] W. James, The Principles of Psychology (Vol. 1) (New York: Cosimo, 1890/2007).
[67] D. Albert, Quantum Mechanics and Experience (Cambridge, MA: Harvard University Press, 1992).
[68] J. A. Wheeler and W. H. Zurek, Quantum Theory and Measurement (Princeton University Press, 1983).
[69] G. Greenstein and A. G. Zajonc, The Quantum Challenge (Sudbury, MA: Jones and Bartlett, 2005).
[70] N. Herbert, Elemental Mind: Human Consciousness and the New Physics (New York: Plume, 1993).
[71] S. Kak, R. Schild, R. Penrose, and S. Hameroff, Cosmology of Consciousness: Quantum Physics and Neuroscience of Mind (Cambridge, MA: Cosmology Science Publishers, 2011).
[72] S. Kak, R. Penrose, and S. Hameroff, Quantum Physics of Consciousness (Cambridge, MA: Cosmology Science Publishers, 2011).
[73] C. A. Fuchs, "QBism, the Perimeter of Quantum Bayesianism," arXiv:1003.5209 (2010).
[74] C. A. Fuchs and R. Schack, "A Quantum-Bayesian Route to Quantum-state Space," Foundations of Physics, 41 (2011), 345–356.
[75] H. Mercier and D. Sperber, "Why Do Humans Reason? Arguments for an Argumentative Theory," Behavioral and Brain Sciences, 34 (2011), 57–111.