Sunday, 8 July 2012

John Searle: Consciousness and Causality

            Abstract: How do neurobiological processes in the brain cause consciousness? I think this is the most important question in the biological sciences today. Two related questions: Where exactly is consciousness realized in the brain and how does it function causally in our behavior? We know consciousness happens and we know the brain does it. How does it work? How do we approach this problem scientifically? The standard way is to go through three steps. First, try to find the neurobiological correlate of consciousness. Second, try to test if the correlations are in fact causal. Do the neurobiological states cause consciousness? Third, try to formulate a theory. Why do these processes cause consciousness at all, and why do these specific processes cause these specific conscious states? One depressing feature of this entire research project is that it does not seem to be making much progress.

The Mystery of Consciousness, a book by me (an NYRB book)
The Problem of Consciousness
How to study consciousness scientifically
Free Will as a Problem in Neurobiology

Comments invited



Is language a *feeling* or a *doing* function?

Could it be a bit of both? Could we split language in 3 subfunctions: 1) speech (*doing*), 2) semantics or, if you prefer, symbol grounding (*feeling*) and 3) syntax (*doing*)?

    Can we identify any other function absolutely requiring *feeling*?


If not, this could explain the strength of the Turing test, since in order to succeed one would have to implement *feeling* in the program. Maybe Searle, in his Chinese Room, does not *feel* that he speaks Chinese because he is not ready to admit that the program includes a *feeling* function overwriting his own *feeling* function.

    If only language requires *feeling*, why would organisms without language need a *feeling* function? Ockham says no...





    (i) Language is both doing and feeling (just as most things we do while awake are): T2 capacity to respond appropriately to verbal input/output is doing. Understanding and meaning is, in addition to doing, feeling. But, until further notice, the function that language performs (the functional advantage it confers) is just doing, like all evolutionary advantages: It is what helps survival and reproduction. Like us, the "Blind-Watchmaker" is no mind-reader.

    (ii) Sensorimotor symbol-grounding is also doing; the fact that it is also felt is unexplained.

    (iii) If we could discover (and explain) any function requiring (let alone "absolutely requiring") feeling, we would have solved the hard problem.

    (iv) Passing the Turing test only generates doing; if it also generates feeling, we cannot explain how or why. (When Searle is manipulating meaningless Chinese symbols according to formal rules, and feels that he does not understand Chinese, he is indeed not understanding Chinese [do you doubt that?]; he is at best just "doing" Chinese.)

    (v) If any of the many waking functions that are felt functions *requires* feeling, then no one has yet given a hint of how or why. (Ockham may "say" that therefore we can forget about feeling, because everything can be done without it, and hence it is functionally and explanatorily superfluous, but that does not solve the hard problem, it merely re-states it. And although we should make sure to focus on the epistemic problem of explaining feeling -- rather than the ontic problem of puzzling about whether there exists one kind of "stuff" in the world, or two -- everyone knows that maintaining an embarrassed silence about feeling does not make feeling disappear from the world, nor explain it!)

    1. You cannot say that "Passing the Turing test only generates doing" until you have passed the Turing test. My point is precisely that if the TT were only doing, engineers would have solved the problem.

      For the Chinese Room, Searle assumes 1) that the program in the book can pass the Turing test and 2) that "Passing the Turing test only generates doing", but he does not know what is required for 1 to be true. If 1 includes a *feeling* function (agreed, we have no clue about that *feeling* function, but we have no clue about its non-requirement either), maybe after integrating the program he would feel differently.

      I don't doubt what he thinks he would feel after learning the program; I simply know that he cannot have a clue about how he would feel after learning a program that he cannot write, a program that nobody in the world has been able to imagine after decades of research with a $1,000,000 reward.

      I have no ontic problem, no puzzling about substance dualism, nor about property dualism. I am with Searle all the way about causality and reality. I don't want to hide *feeling*, on the contrary I don't think we can make real progress in Artificial Intelligence if we don't understand it. Luckily, it's strictly causal...


      Good luck with hoping that the Turing-Test-passing model will somehow explain feeling -- but remember that the TT-passer will not just have to have verbal capacity (T2) but robotic capacity (T3), hence the model will not be just computation.

      And even if it had been just computation, try testing whether having a subject memorize the computer program for playing and winning binary-decimal tic-tac-toe gives that player an understanding that he is in reality playing tic-tac-toe. (Substitute, for every "move" in the tic-tac-toe matrix abc-def-ghi, the decimal number for the binary number of each row -- 001-000-100 becomes 1-0-4 and the winning move is then 1-2-4.) If the player does not thereby understand that he is playing tic-tac-toe, even though he is playing (doing) tic-tac-toe, Searle would not be understanding Chinese, even though he was doing it. No need to create the T2-passing program to see that. Searle's thought-experiment is enough.
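      The binary-to-decimal substitution in the thought-experiment is easy to make concrete. A minimal sketch (the function name is made up, just to show the arithmetic):

```python
# Encode each row of a tic-tac-toe board as the decimal value of its
# binary occupancy pattern, as in the thought-experiment:
# 001-000-100 becomes 1-0-4.

def encode_board(rows):
    """rows: three strings of '0'/'1' marking the occupied cells per row."""
    return [int(r, 2) for r in rows]

print(encode_board(["001", "000", "100"]))  # [1, 0, 4]
```

      A player manipulating only the decimal triples is doing tic-tac-toe without any felt understanding that that is what he is playing.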

    3. UNDERSTANDING UNDERSTANDING... the reason I came to Cognitive Informatics.

      Robotic capacities are just *doing* (e.g. Google's driverless car). True verbal capacity (i.e., being a penpal for 30 years) is NOT just *doing*. So, T3 is not sufficient to pass T2. T4 is full brain *doing*, including *feeling*, and is required to pass T2.

      T4 is robotic *doing* realized through a *feeling* brain. While T3 is a sensorimotor brain, T4 is a conscious brain. A strictly sensorimotor brain cannot be your penpal for 30 years.

      There are many ways to play tic-tac-toe from a sensorimotor point of view. The ways to play it consciously are much more limited: not limited to biological ways, but certainly limited to neuronal ways.

      Neuronal doesn't mean not computational. "Consciousness is causal, physically causal" Searle dixit. "Physical causality is computable" Turing dixit.


      T2 is the capacity to speak and reply indistinguishably from a person. Speaking and replying is just doing.

      T3 includes T2. Both are just doing. So is T4 (brain doing).

      You miss the point about tic-tac-toe: You can do it without understanding (feeling) that you are playing tic-tac-toe, just as Searle would be "doing" Chinese speaking and replying without understanding Chinese.

      Neural secretion (for example) is not computation. That something is computable does not mean that it is computation. Planetary positions are computable: orbiting is not computation.

  2. I asked the question, and the answer was off enough that I didn't see how to get the conversation back on track.

    There are models of intentionality and representation (and therefore description) that are observer-independent. Cf. Ruth Millikan's intentionality and pushmi-pullyu representations, Andy Clark's action-oriented representations, similar models by Wheeler and others, Richard Menary's intentionality, etc.

    Now, these models a) explain (or provide a framework for explaining) how intentionality is realized and b) do so without invoking intentionality. So basically they're superior, aren't they?

    Saying that descriptions of these models are observer-dependent, and therefore demand a sort of intentionality that demands consciousness, is wrong, because on these models intentionality and descriptions do not demand consciousness.

    1. I think that observer-dependent objects are those objects whose properties directly depend on their use by humans (and thus on how humans understand those objects). For instance, money would not exist (and would only be paper) without humans attributing exchange value to it and culturally agreeing about this value. Similarly, if you define intentionality as both the doing and feeling aspects of top-down actions, the doing part would be observer-independent whereas the feeling part would be observer-dependent. In fact, you do not need to have a human or other system understand what it means and what it feels like to be a system exerting top-down control in order for this system to effectively exert this top-down control. In other words, it is possible to explain how a system displays top-down behaviors without having to refer to its understanding and feeling of such behaviors.

    2. What do you mean exactly by "top" and "down" in "a system display[ing] top-down behaviors without having to refer to its understanding and feeling of such behaviors"?

      To me, sensorimotor behaviors without understanding and feeling is pretty much "down-down", isn't it?


    1. Descartes was right: You can be wrong about whether this is warmer or cooler than that, but you can't be wrong that this *feels* warmer (or cooler) than that -- or, more fundamentally still, you can't be wrong *that* you are feeling whatever you are feeling, when you are feeling.

    2. When I deliberately raise my hand, it feels like "I" caused it; but what actually caused my hand to rise is not evident. And if it's whatever caused the motor potential (or the acetylcholine secretion) it's not evident how or why that's "me" -- let alone why it's felt.

    3. Observer-dependence is not what's hard, feeling is. There is observer-dependence in the viewpoint of a robot, but that's trivial if the robot doesn't feel. And if it feels, it's back to the hard problem: how and why does it feel.

    4. The neural correlate of duck/rabbit perception is just a neural correlate of feeling, not a causal explanation. NCCs don't explain a thing; they just predict. And finding the light-switch is *not* a causal explanation.

    5. Feeling is a biological trait, no doubt. The brain causes feeling, no doubt. But how and why? A light-switch won't explain it. It explains light. Feeling's not light. Light is something that electromagnetic frequencies do. Feeling is not doing. And physiological, biological, neural and engineering explanations are explanations of doings in terms of doings. (What seeing light feels like to a feeler is another matter, but that sure isn't explained by the light-switch, internal or external, nor by electromagnetic oscillations.)

    1. Concerning the cause of John Searle's "raising my hand example" (point 2.), I think that he simply meant that even though we can identify the physical causes of actions, it does not refute the fact that all actions are caused by the self. This conclusion is based on a definition of the self as an entity that comprises both subjective and objective epistemology (to use John Searle's jargon), i.e. perceptions of the experience vs. perceptions of the physical world and agreed factual conventions, respectively.

    2. Couldn't follow: Could you say it again more simply?

  4. Stevan: "NCCs don't explain a thing; they just predict. And finding the light-switch is *not* a causal explanation."

    I agree that NCC do not explain consciousness. But biological mechanisms *can* explain conscious content/feelings within scientific norms. Science can do no more than propose theoretical models/mechanisms that predict or post-dict relevant empirical findings. Science is not omniscient and cannot explain the *sheer existence* of any phenomenon. However, using the working definition of consciousness as a transparent brain representation of the world from a privileged egocentric perspective, a system of neuronal brain mechanisms (called the retinoid model) has been proposed that can realize this kind of brain representation. The logical implications of its neuronal structure and dynamics have successfully predicted/post-dicted many previously puzzling conscious experiences. For example see here:

    and here:

    Why shouldn't we accept the retinoid model as a candidate biological explanation of conscious content/feelings?


      Because even if the "retinoid model" were to successfully explain brain function, and a mechanism based on it could successfully pass T3 or T4, it still wouldn't give a hint of a hint as to why any of it is felt, rather than just done.

      Feeling is there in the world, no doubt. And surely it's the brain that generates it. But explaining how and why the brain generates feeling rather than just doing is the hard problem, the "doing/feeling problem," and one cannot explain it away by saying feeling's just "there," as a property of the world, like gravitation or electromagnetism. Feeling is not one of the fundamental forces. It calls for an explanation.

      And explanation is not just correlational prediction.

      But Arnold, please let's not re-enact our prior discussions in this Forum. It is sufficient to point the interested reader to the PhilPapers thread on the "Explanatory Gap."

    2. Stevan, I don't want to re-enact our prior discussions, but I think the problem is important enough to extend our prior discussions.

      You say "And explanation is not just correlational prediction."

      I whole-heartedly agree. But exposing the details of a *biological mechanism* that can be demonstrated to generate a conscious experience IS an explanation.

      In my seeing-more-than-is-there (SMTT) experiment, I was able to demonstrate that the neuronal structure and dynamics of the brain's putative *retinoid mechanism* successfully predicted that subjects would have a vivid conscious experience/feeling of a triangle oscillating in space when, in fact, there was no such object in their visual field. Moreover, the properties of the putative retinoid mechanism enabled the experimenter to independently control the height of the subject's phenomenal/felt triangle while the subject was able to control the width of the phenomenal triangle to actually maintain approximate height-width equality. It seems to me that this SMTT experiment is analogous to the double-slit experiment in physics that demonstrated that light has the complementary properties of particle and wave, because it demonstrates that conscious content/feeling has the complementary properties of a particular kind of brain activity (3rd-person perspective) and phenomenal experience/feelings (1st-person perspective).

      Given this experimental demonstration that shows how a biological mechanism can generate a conscious experience, why shouldn't we consider the retinoid model to be a biological explanation of consciousness within scientific norms?

    3. Neural firing frequency is correlated with and hence predicts perceived stimulus intensity. That doesn't explain how or why intensity is felt. (But it does explain how we *do* things that are a function of stimulus intensity.) Your retinoid prediction/correlation, Arnold, is just a more elaborate example of exactly the same thing.

      Another way to put it is: *Given* that we feel, the neural mechanisms can predict what we will feel; but they don't explain how or why we feel. They just beg the question.

    4. Stevan: "Neural firing frequency is correlated with and hence predicts perceived stimulus intensity."

      You miss the point, Stevan. In the SMTT experiment THERE IS NO PERCEIVED STIMULUS. The vivid conscious experience/feeling of a triangle oscillating in the space in front of the subject is wholly an internal CONSTRUCTION by a particular kind of brain mechanism that realizes an egocentric perspective/subjectivity. There is only one kind of plausible biological design, as far as I have been able to find, that can do this job and it has the neuronal structure and dynamics of the retinoid model. I might add that no one that I have asked (and I have asked many knowledgeable people) knows of any artifact that contains an internal analog representation of the space in which it exists including a fixed physical locus of perspectival origin. In the retinoid theory, this singular capacity of the brain is the key to understanding consciousness/subjectivity/feeling.

    5. Stevan, I should add this:

      If you are a monist who asserts

      a. Physical processes (physical doings) are all that exist.

      and also asserts

      b. Physical processes/doings cannot be feelings.

      Then your assertions are either incoherent or what you call "feelings" do not exist.

    6. Arnold, I am a monist utterly bored with ontics. I am asserting that how and why we feel has not been explained.

    7. So, as good monists, when we do arrive at a standard model/explanation of how and why we feel, it will be based on physical processes/doings. We can shake hands on this.

  5. After all the questions asked during the talk and the discussion, something continues to bother me:

    If we reject materialism and dualism because both conceptions are grounded in a misleading vocabulary (namely the distinction between "mind" and "body"), could we say that biological naturalism is a third kind of metaphysical hypothesis where only objects of experience are valid entities?

    If this point is correct, what is the place of consciousness amongst these entities? Isn't it the blindspot that cannot observe itself?

    1. When Searle discussed protons, electrons, dark matter and dark energy, it seemed pretty clear to me that he meant to imply that all of these things are genuine entities, just as much as objects of experience are. (Well, provided that our mature scientific theories still posit the existence of entities of this kind.) That was the point of rejecting the distinction between 'mind' and 'body'. What you're describing as 'biological naturalism' is in fact very different from what is usually called 'naturalism'. It would be more adequately described as 'idealism', which is a view that Searle also rejects.

  6. I agree with Searle's view of consciousness. In particular, I was pleased with his emphasis on what he called the "conscious field" as a *prerequisite* for perception and all of the other enriching content of phenomenal experience. I have proposed a theoretical model of *retinoid space* which I think fits the requirement of a pre-existing conscious field. If John Searle is willing to engage in this discussion, I would be very much interested in his views about my argument with Stevan Harnad (see above). An overview of the retinoid theory of consciousness/subjectivity can be had here:

  7. Vincent LeBlankart posted to Turing Consciousness

    Could someone explain to me why there is no information in nature that is observer-independent?
    Could DNA code for different proteins than those it codes for?

    Vincent LeBlankart added

    I think Searle thought my comment was irrelevant, so I see two options:
    1. I can't figure out what he means
    2. I'm naive as to what the goal of scientists is ;)

    Pierre Vadnais replied

    Well, it depends what you mean by information ;-)
    Information is often defined as well-formed, meaningful and true data. Many will say that it cannot be "meaningful" except for an observer. Truth is even more complex.

    Some accept that info might exist without a receiver, but not without a conscious emitter.

    If you stretch the definition, causality might be seen as a kind of information unconsciously emitted; it becomes meaningful only when interpreted by a conscious observer, but it exists before observation.

    Does a tree falling in the forest make noise? Noise, yes; the sound wave is there. Information, not until somebody hears it. Searle said that the rainbow doesn't exist; the light rays have certainly been relatively sorted, but you need a head to call it a rainbow.

    Well-formed data is user-independent, and it is true as long as it has not been interpreted. However, it is not meaningful, therefore not information, until it is semantically interpreted by a conscious observer.
    Turing Consciousness replied

    It all depends on whether you mean felt or unfelt observation, and on what you mean by information.


    Let's say information is data (e.g., digitized as strings of 0's and 1's) the way Shannon & Weaver defined it.

    There are plenty of observer-independent data in nature, both unfeeling-observer-independent and feeling-observer-independent. It's just that except when they are feeling-observer-dependent, the data don't mean anything. They are merely uninterpreted data, meaningless 0's and 1's (though some data do have the remarkable property of being systematically *interpretable,* by and to feeling observers).

    There's nothing mysterious about this. Data are just squiggles and squoggles except when interpreted by feeling minds. If a T3 robot is nonfeeling, it is grounded but has no meaning. Hence it can do anything and everything we can do with data, but the data have no meaning for the T3 robot. If the T3 robot feels, hence is a "feeling observer," then the data have not just sensorimotor grounding (in doing) but also meaning.

    Calling this "observer-relative" or "observer-dependent" is needlessly fancy and confusing. As Searle agreed, the "geometric" angle of gaze of even a toy robot is observer-dependent, but the observation is unfelt, so it's irrelevant. (This should not be in FB but in the blog!)

    1. A good reference for information is Luciano Floridi's entry in the Stanford Encyclopedia of Philosophy on "Semantic Conceptions of Information" or his book "The Philosophy of Information" (2011).

      For Floridi, unfelt observation means missing the information. This is getting only the signal, but not the message. If you look at an MP3 file with a hexadecimal editor, you can see the binary code (or its hexadecimal equivalent), but you don't hear the music. That's unfelt observation. The music is all there: if you send the same file to an MP3 player, you hear the music. The miracle of decoding: suddenly the symbols have meaning.
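      The hex-editor analogy can be made concrete: the very same bytes, viewed raw and decoded. A minimal sketch (using the three header bytes of an ID3-tagged MP3 file as the data):

```python
# The same bytes, viewed two ways: as an uninterpreted hex dump
# (the "signal") and as decoded content (the "message").
data = bytes([0x49, 0x44, 0x33])  # first three bytes of an ID3-tagged MP3

hex_view = data.hex(" ")          # what the hex editor shows
decoded = data.decode("ascii")    # what a decoder extracts

print(hex_view)  # 49 44 33
print(decoded)   # ID3
```

      Same file, same bytes; only the decoding step turns the signal into something interpretable.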

      That meaning is only the first level. If you like music, you might recognize the tune, a level of meaning not accessible to someone who has never heard the piece. If there is a song over that music, you might recognize words, yet another level of meaning not readily available to people speaking a different language and certainly not available to any non-human animal.

      Shannon's "Mathematical Theory of Communication" clearly requires a conscious emitter and receiver. It's all about encoding and decoding.

      There is something called environmental information, something like the rings on a cut tree stump. It is available for anything with eyes to see: animals, T3 robots, whatever... Only humans could discover that these rings are related to the age of the tree. Humans can program T3 robots to detect the rings and calculate the age, but T3 robots would never have discovered that trick on their own... that requires consciousness.

      This environmental information had no conscious emitter. It doesn't really get meaning until some conscious observer receives it. For most philosophers, no meaning implies no information. Still it is there, seen or not, even before the tree is cut... as much as any molecule, mountain or tectonic plate (as Searle would say)... and as much as the noise of the tree falling in the forest or the unattended rainbow.


      The usual way to try to ground knowing according to contemporary theory of knowledge is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on a food-dispenser and a data-knower who already knows the correct number. The success criterion, hence the grounding, is whether the seeker’s probability of lunch is indeed increasing (hence uncertainty is decreasing) as a result of the interaction. Floridi also suggests that his philosophy of information casts some light on the problem of consciousness. I’m not so sure.

      Harnad, Stevan (2011) Lunch Uncertain [Review of: Floridi, Luciano (2011) The Philosophy of Information (Oxford)]. Times Literary Supplement, 5664, 22-23.
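      The food-dispenser exchange can be sketched as a toy simulation in which each yes/no answer from the data-knower halves the seeker's candidate set, so uncertainty (in bits) falls step by step (a hypothetical illustration, not Floridi's own formalism):

```python
import math

# A data-knower knows which of 8 buttons dispenses lunch; the data-seeker
# halves the candidate range with each yes/no question (binary search).
buttons = list(range(8))
correct = 5  # the button the knower already knows

uncertainty = [math.log2(len(buttons))]  # bits of uncertainty before asking
lo, hi = 0, len(buttons)
while hi - lo > 1:
    mid = (lo + hi) // 2
    if correct >= mid:   # knower answers "yes, it's in the upper half"
        lo = mid
    else:                # knower answers "no, it's in the lower half"
        hi = mid
    uncertainty.append(math.log2(hi - lo))

print(uncertainty)  # [3.0, 2.0, 1.0, 0.0] -- uncertainty falls to zero
print(lo)           # 5 -- the seeker now knows which button to press
```

      The success criterion is exactly the one in the review: the seeker's probability of lunch rises (uncertainty falls) as a result of the interaction.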

    3. Thanks for the reference; I read it before and was not very surprised by your conclusion: "Consciousness is feeling".

      However, I find your summary of Floridi's proposal very interesting. What if the data-seeker (DS) and the data-knower (DK) were two parts of the same brain? (I guess this is what Floridi intends to do with his two-machine artificial agent, AM2.) DK could be a standard sensorimotor neural network learning through experience and evolving under natural selection. At some point, a new network appears interconnecting symbols: a semantic network. The symbols become connected to their counterparts in the sensorimotor network; that's your idea of grounding. But the symbols also have to be interconnected among themselves; that's semantics, not only the meaning of the words, but the meaning of interacting words. The semantic network becomes a DS relative to the sensorimotor network, since it is trying to reproduce between the words the rules that helped DK get a better lunch. Better semantic rules then help DK get a better lunch even more often, and DK gets better at avoiding natural elimination.

      It is the same message I have been trying to convey from the beginning; I am just hoping that using your metaphor (DS-DK) for Floridi's proposal will make the message clearer.


      Spoken words are of the same nature as rainbows or the noise of falling trees, except that they have a conscious emitter. They are short-lived and have no effect unless they fall on a conscious ear.

      Written words are of the same nature as spoken words; they also have a conscious emitter. However, they enjoy a longer life. Think of Egyptian hieroglyphs, which remained undecipherable for centuries. Humans knew that they meant something because their regularity could only be explained by a conscious emitter.

      This line of reasoning is dangerous, though, since it can also lead to the conclusion that regularities in nature require intelligent design. But we all know that the rings of cut tree stumps are the result of physical causality, not conscious design. Therefore we all know that true, well-formed (potentially meaningful) data exist without a conscious emitter and before conscious reception.

    5. (1) Whether the information (data) transfer is occurring within one system, or between two systems, it is simply information transfer unless it is felt. And it is not the causal role of the information transfer (doing) that is at issue here, but the causal role of the fact that it is felt.

      (2) Words are sounds, and hearing them feels like something, just like hearing any sound does (if audible, and heard). Words (or other data) can also mean something (if semantically interpretable); and both saying them with an intended meaning and understanding what they mean likewise feel like something.

    6. So, you really don't want to buy that consciousness (feelings, if you prefer) lies in the intricacies of semantic interpretation... That would at least reduce the domain of research.

    7. No, I really don't. The preceding sentence is semantically interpretable. In your head, if you understand it, it not only allows you to do what understanding it entails -- answer, infer, act (doings, all) -- but it also feels like something to be understanding it. If, in a T3 robot, the sentence only allows it to answer, infer and act, Turing-indistinguishably, but without feeling a thing, then the sentence has (T3) grounding, but not meaning. (In Searle's terms, its semantics remain extrinsic rather than intrinsic.)

      (Ceterum censeo: I happen to believe that a T3 robot would feel. But I cannot say why or how...)

    8. Sounds are audible and heard. Words are audible, heard and understood. Feeling is understanding that you understand; two levels of the same process.

      Words have a composability that sounds don't have. Music is a composition of sounds, but it is not fully understood. It gets to the feeling level without full understanding. You feel something, but you cannot really decipher the message.

    9. I have found this to be a useful way to think about information:

      Information is any property of any object, event, or situation that can be detected, classified, measured, or described in any way.

      1. The existence of information implies the existence of a complex physical system consisting of (a) a source with some kind of structured content (S), (b) a mechanism that systematically encodes the structure of S, (c) a channel that selectively directs the encoding of S, and (d) a mechanism that selectively registers and decodes the encoding of S.

      2. A distinction should be drawn between *latent* information and what might be called *kinetic* information. All structured physical objects contain latent information. This is as true for undetected distant galaxies as it is for the magnetic pattern on a hard disc or the ink marks on the page of a book. Without an effective encoder, channel, and decoder, latent information never becomes kinetic information. Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies. None of this implies consciousness.

      3. A distinction should be drawn between kinetic information and *manifest* information. Manifest information is what is contained in our phenomenal experience. It is conceivable that some state-of-the-art photo-to-digital translation system could output equivalent kinetic information on reading English and Russian versions of War and Peace, but a Russian printing of the book provides me no manifest information about the story, while an English version of the book allows me to experience the story. The *explanatory gap* is in the causal connection between kinetic information and manifest information.

      I should add that manifest information requires subjectivity; i.e., registration of kinetic information within an egocentric space.
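      The four components in point 1 can be sketched as a minimal pipeline (a toy illustration; the function names and the encoding are made up for the sketch):

```python
# A toy instance of the four components in point 1:
# (a) a structured source, (b) an encoder, (c) a channel, (d) a decoder.
source = "SOS"                                  # (a) structured content S

def encode(s):                                  # (b) systematic encoding
    return [format(ord(c), "08b") for c in s]   # each character as 8 bits

def channel(signal):                            # (c) selective transmission
    return list(signal)                         # (lossless in this sketch)

def decode(signal):                             # (d) registration + decoding
    return "".join(chr(int(b, 2)) for b in signal)

received = decode(channel(encode(source)))
print(received)  # SOS
```

      Remove any one of the four stages and the latent structure of the source never becomes kinetic information at the receiving end.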

  8. Google's car and consciousness

    John Searle goes to a party and gets totally wasted. His host, a very rich man, happens to have Google's driverless car in his garage. John Searle sits in the passenger seat, has just enough time to utter his home address, and passes out.

    When he opens his eyes, because his wife is yelling "OPEN YOUR EYES!!!" (probably at the recommendation of Dr Plourde), he has no recollection of how he got there.

    Was the car conscious or not? Of course not, but Searle wasn't either.

    If feelings were involved in this trip, they were probably felt by the programmers who built some meaning in the perceptions (sensors) of the car. This suggests that feeling is in the building of the semantic network, not in the using.

    I know, I'm still missing the point...

    1. Yes, you're missing the point. Feelings were not involved in this trip: doings were.

  9. I am not convinced that Doctor Searle has successfully proven epiphenomenalism is wrong. Indeed, many talks during the summer school, such as Doctor Brembs’ and Doctor Cisek’s, seem to imply: A) that the actual decision to move, for instance, is completely determined by neuronal dynamics, and B) that the feeling of volition is constructed post facto. Accounts such as Doctor Graziano’s further lend support to the idea that consciousness is the end product of cognitive representation—it could be involved in all action, yet still not be causally responsible for anything.

    Perhaps one could say, as Doctor Searle has, that the feeling of volition and the behavior occur simultaneously, which yields my feeling of having been the source of my action. However, the actual dynamics of my action, at least it would seem, are such that the cause of both my behavior and my feeling of volition is an underlying neural decision mechanism, which depends on dynamical system features such as nonlinearity and criticality. Hence, despite my feeling like my mental state (the will to raise my arm) caused my physical state (my arm moving), there need not be any causal link between the two.

    Furthermore, simply declaring that the mind-body problem is solved is not, IMHO, a very convincing way to deal with the problem.

    Doctor Searle adds that consciousness couldn’t have no function—it has too many! Indeed, he points out that almost all my doing is accompanied by feeling, and I agree with this point of view. Consciousness indeed seems to play an essential part in all higher-order (and most lower-order) cognitive tasks.
    While I find this intuition appealing, I believe it still does not imply that consciousness itself plays any causal role in decision-making. Consciousness could be part of all complex behavior, as Doctor Searle insists, yet still not be causing any of the behaviors Doctor Searle believes it is responsible for.

    1. The hard problem is an explanatory (hence epistemic) one: What is the causal role of feeling?

      Epiphenomenalism is just a metaphysical dogma, which does not answer the question but simply restates it as follows: Feeling has no causal role; it's just there.

      Forget epiphenomenalism and take your pick between these two: (1) we haven't explained the causal role of feeling yet; or (2) the causal role of feeling cannot be explained (but then be prepared to explain why not).

  10. Consciousness is your occurrent phenomenal world. Which of your volitional behaviors are not contingent on salient properties of your phenomenal world? I would argue that your phenomenal world (consciousness) *must* be a part of the causal chain in all of your non-reflexive behavior.

    1. IMHO, the fact that all volitional action comes to be represented phenomenally does not necessarily imply that all volitional behaviors are contingent upon salient features of my phenomenal world. Indeed, it seems to me that the factors that actually determine volitional action are themselves quite unconscious until they reach a given critical point and stabilize into a decision. And I agree that phenomenality is part of the causal chain; I just think consciousness might be located at the end of that chain.
      Attention probably has a lot to do with causal action-selection, but attention acts on both conscious and unconscious cognitive processes. As for consciousness, I’m not so sure.

    2. Maxwell J. Ramstead: "Indeed, it seems to me that the actual deterministic factors that determine volitional action are themselves quite unconscious.."

      Yes, the decision to make a volitional act is pre-conscious and seems to occur about 500 milliseconds before we are consciously aware of the decision. But the critical point is that the whole volitional episode STARTS IN RESPONSE TO SOME PARTICULAR ASPECT OF OUR PHENOMENAL WORLD. No phenomenal world, no volitional acts; no volitional acts, no human-style adaptation. This explains why we feel. You might find it helpful to read "The Pragmatics of Cognition" on pp. 300-301 in *The Cognitive Brain* here:

      Maxwell: "And phenomenality being part of the causal chain I agree with—I just think consciousness might be located at the end of that chain."

      So do you actually believe that if you were not conscious you would be able to begin to respond to my comments here? If so, please explain how this could happen.

      Maxwell: "Attention probably has a lot to do with causal action-selection, but attention acts on both conscious and unconscious cognitive processes. As for consciousness, I’m not so sure."

      Almost all current discussion of attention is vague. If you think about the neuronal mechanisms of attention, I think you will find that what we call selective attention is a directed shift of neuronal activation in the phenomenal world around us. So selective attention is an integral part of our conscious experience. We are not conscious of other kinds of attentional processes such as priming and gating of our pre-conscious synaptic matrices. This is explained in detail in *The Cognitive Brain* (1991).


    "Why consciousness? why life? why the universe?"

      "The universe is not a biological trait, hence no adaptive function. Life is a precondition for adaptive function (though it's possible that the self-replicative property of genes is at the root of adaptive function). But feeling (consciousness), like feeding and flying, is a biological trait. Hence it is natural to ask what is its adaptive function (i.e. "why?")."

      "Actually it was not "why" for the adaptive function but rather: why did it appear in the first place? Any biological trait that appears and is maintained by natural selection did not appear because it was adaptive. A trait appears first by mere chance, exactly as life probably did and exactly as the universe probably did. I agree they are maintained because they turned out to be adaptive, but they surely didn't appear because they were adaptive. They are all simply the result of purposeless events. So of course the universe is not biological, but the universe is physical and life is a part of this physical universe. They all follow the same rules of physics and all emerged for no purposeful reason.
      So why might consciousness not be one of those purposeless events that appeared, and never disappeared, for no specific reason?"

      "Adaptive traits can start as random mutations or, more often, as changes in existing developmental processes; they may already be there (because of some other, possibly expired adaptive function) and their newfound adaptiveness could be because of a new environmental change."


    "To expand from Searle's interesting comments about summer vacations and what exists by itself, here's a recent paper by Dennett about analytic metaphysics, what is real, and what depends on the observer."

  13. Why keep using the robot argument to discredit the utility of consciousness?
    As Searle nicely mentioned, it cannot be a good argument because the consciousness we study is consciousness in the context of living beings, and since living beings are the result of evolution, they cannot be compared with robots.

    And what if consciousness were a biological "constraint" (in the sense that it appeared, was not necessarily deleterious, and so it remained and evolution had to cope with it), and all the abilities we have now that require consciousness (e.g. language, decision making, the senses, etc.) evolved coping with consciousness even though consciousness was not absolutely necessary?
    If you think about it a bit more, another reason the robot argument is not a good one is that if you removed every piece of consciousness from a human being, that human would unfortunately be unable to do anything; therefore consciousness is necessary to a human being's life, meaning that consciousness has to have an adaptive function.

    1. If you remove his consciousness, a human being would be closer to a vegetable than to a robot. But thinking about a conscious robot is like thinking about a conscious vegetable. It doesn't make any sense.

    2. Well, I beg to differ. Turing asked the question: "Can machines think?" and many rephrased the question with a computationalist or a connectionist twist. For a robot to pass the Turing test, it better be conscious. To be able to do it, one certainly has to define consciousness and understand its functionality (if not its function).

      I don't think "the robot argument [is there] to discredit the utility of consciousness". It shows all the functions that can be done without consciousness, hopefully helping to zoom in on the "function" of consciousness. And I tried to emphasize that grounded language, not just speech, is impossible (for now) in robots...

    3. Well Pierre, I 100% agree with you on the language argument. However, some people keep insisting that language doesn't necessitate consciousness because they use the robot argument...

      Also, when I said "discredit", I meant that some people tend to say that consciousness doesn't have any adaptive function because in theory a robot can do anything a human can do, without consciousness. But I keep wondering: if we were able to create a robot that behaves, in every single aspect, as humans do, would we have created a conscious robot?

    4. To me, that's the magic of the Turing test. I'm not sure Turing realized that his test, based on verbal communication, implied understanding consciousness. No robot could ever be a pen pal with Stevan without being conscious. It is possible to program the most extensive dictionary into a robot and make it utter the nicest arguments, but none of that will be grounded, as Stevan himself keeps reminding us.

      The grounding of words, and even more of sentences, requires not only learning but understanding... and understanding is synonymous with consciousness (NOT with feeling).

      To answer your question specifically: a robot that behaves, in every single aspect, as humans do would have to be conscious. The Turing test is the same as "behaving, in every single aspect, as humans do".

  14. Xavier Dery ‏@XavierDery

    Searle says the ontological subjectivity of consc. is not an obstacle to understanding it from an epistemic objective POV: science #TuringC

    2:46 PM - 9 Jul 12 via Twicca Twitter app

  15. I vaguely remember that Searle talked about the photon being a wave and a particle at the same time in theory, while in practice only one of the two can be seen by an observer. Can someone give some more details about that?

    Was it used as an example to say that consciousness would only be seen as neuronal interactions when an observer looks at it?
    This article left me perplexed.


      Forget about wave/particle duality. It is quantum mechanics' "hard problem", but it has nothing to do with our hard problem. Every time someone brings it up, it's a red herring. A problem is not solved by conflating it with yet another unsolved problem...

  16. For my own understanding of consciousness, Searle's "jargon" was actually quite enlightening. I was looking at my notes though, and something is unclear to me.

    I understand that (1) all observer-relative phenomena are created by consciousness. But is this enough of a reason to say that (2) consciousness itself cannot be explained in observer-relative terms? Either I missed something, or it just feels strange to me to assume that (2) follows from (1)...

  17. The comparison of consciousness with vitalism has been brought up many times, but it is not quite appropriate; whereas the vitalists assumed, with no empirical backing, that there was some extra magic involved in life itself, consciousness clearly does exist, since each of us has it.
    Thus although mechanical (chemical, biological, whatever) operations can explain life, it is not a given that they can explain consciousness, for all the reasons given in Harnad's talk.

  18. I may be missing the nuance differentiating two things Dr. Searle said; otherwise, they seem contradictory.

    When talking about what he considered to be facts about consciousness, Dr. Searle stated that it is real and irreducible; further, since consciousness has a subjective ontology, it cannot be reduced or explained using a third-person ontology. Later, Dr. Searle stated that we will only understand consciousness as a natural biological phenomenon that can be described at many levels (and subsequently gave the example of the car with the pooched spark plug, alternatively described as an insufficient movement of electrons unable to produce the electric arc required for the oxidation of hydrocarbons).

    To me, these two statements seem incongruous. Sure, the experience of consciousness is entirely subjective — but if one believes that consciousness is caused by activity of the brain (which we all do believe), that activity can be observed in the third person, and thus consciousness can be described as a natural phenomenon at more than one level. Consistent with Dr. Searle's second point, consciousness can be described as a natural phenomenon at more than one level; in contradiction to his first, only one of these levels is the subjective experience. Other perspectives on the matter would be welcome.

  20. While interacting in our day-to-day life, we need to act or react to bodily processes and the happenings in the world, sometimes instantly, to obtain beneficial outcomes.

    Consciousness is designed by the evolutionary process to make data from such interactions, which require judgment, available for making decisions, thereby conferring the capability of making free-will decisions (if there were no free will, there would be no need for consciousness).
