Monday, July 18, 2011

The Mind is Flat


I. Introduction

What is the mind? Where is it? What is consciousness? Where is that? How does thinking work? These questions become even more interesting when one realizes that their widely accepted answers are often recursive. That is, the mind is sometimes defined as a grand composition of all things mental, consciousness is often thought to include things that happen to, or are done by, conscious agents, and so on. Contemporary theory, however, seems to be rapidly approaching a better explanation, one that does not rely on itself for validity. The white knights of the embodied cognitive revolution claim the victory of a solution. However, some of their claims seem flawed, incomplete, or sophistic; they have perhaps claimed a victory that they have not won. Here I will attempt to stake my position among the ranks of modern cognitive theorists, and propose some action that could facilitate the development of a more definitive answer to the question: what is the mind?


II. Attempts at definition

The discussion of these topics necessarily begins with a few definitions. Unfortunately, it is within these very definitions that much of the dissent in the field exists, and as such very few of the terms involved have widely agreed-upon meanings. This section will serve as the pedestal upon which I build my position, so I will strive to be precise and thorough, but the discussion contained herein will be far from exhaustive. Space will simply not allow a complete review of 3000 years’ thinking on the topic of the mind, nor would I wish to bore the reader with pedantry. Therefore, I will focus only on the features and theses I find most relevant to the topic of embodied cognition.

a. Consciousness (n.)-

A working definition of consciousness is necessary before adjudicating the claims of embodied cognition. Here, I will select one that I think will provide a useful metric against which to gauge embodied views of consciousness. In my view, consciousness can be seen as roughly equivalent to an agent’s awareness of itself and the world around it. Unfortunately, this definition is an incomplete and simplified one, and will be insufficient for determining the validity of certain theses within the realm of embodied cognition; I will elaborate below. I tend to agree superficially with Uriah Kriegel’s self-representational theory of consciousness. Kriegel posits that, in order to experience consciousness (awareness) of a blue sky, one must experience the phenomenal character of the “bluish way [the sky] is like for me” (2009). This phenomenal character has two parts: the qualitative character of “bluish-ness,” and the subjective character of “for-me-ness” (ibid.). The qualitative character is what makes a conscious experience what it is, and the subjective character is what makes the experience conscious at all. That means there cannot be consciousness without both a subject (to have an experience) and a quality (for that subject to experience).

Before continuing with Kriegel, let me point out that this compositional model of consciousness is important in that it requires an agent to possess several abilities for consciousness to occur. First, the agent must have a method for interacting with-- or at the very least a mechanism for registering changes in-- an environment (this predication will become especially important in discussing enactivism). Second, the agent must be able to recognize the difference between things that are part of it and things that are not (this will have minor implications for the silliness that is panpsychism, but more importantly, in thinking about the extended mind). Third, it must be able to integrate-- or at least associate-- the qualitative and subjective characteristics of its experience; the pieces must form a whole, or at least they must be able to respond to one another. Finally, and importantly, an agent must be able to experience ad hoc the phenomenal and subjective characteristics of this integrated set of characteristics. That is, the nature of conscious experience demands that it be able to react and respond to itself.

Kriegel’s reasoning provides a mechanism for the agent’s ability to respond to its own consciousness by virtue of the “self-representational” nature of the subjective character (2009). Self-representation, in his view, is an intrinsic property of the subjective character of a conscious experience that yokes the agent to its experience, and thus to its consciousness. Stated another way, agency is a component of subjective character. Kriegel does not go so far as to claim this, but I conceive of the self-representational nature of subjective experience as the linchpin that holds together the disparate abilities required for consciousness, such that they can be united under the banner of conscious experience. Self-representation is what gives the agent agency. Agency is then bound to, associated with, and integrated in the qualitative character of experience, and consciousness arises.

Consciousness can therefore be ostensibly defined as a phenomenon wherein an agent experiences the qualitative and subjective character of some phenomenon. Note that this definition casts consciousness as a phenomenon itself. Consciousness by this definition is not an activity or a property but a result of a conglomeration of circumstances, including but not limited to the existence of an agent with the abilities I described, and the existence of some occurrence of which the agent may be conscious. This definition does not include any mention of the brain, cognition, or biology; nor does it invoke necessary activities or uses for consciousness. These concepts are important for an a priori understanding of what consciousness is, and should be analyzed in terms of this definition, but they are not necessary for my purposes here.

It is also important to note that this definition does not preclude consciousness from being epiphenomenal. I am, as yet, unconvinced that consciousness deserves the deification that seems to underlie some of the barriers to understanding it, and I am open to the possibility that consciousness is an over-glorified serendipity. Though it deserves brief mention, and I consider the possibility of epiphenomenal consciousness intriguing, a more in-depth discussion falls outside the scope of this paper.


b. The Mind (n.)-


My definition of the mind is significantly simpler than my (and Kriegel’s) definition of consciousness. The mind, it would seem, can simply be defined as whatever part of the agent contains consciousness. This conceptualization makes no attempt to solve or allow for any robust analysis of Chalmers’ hard problem, and lends itself equally to the whims of embodied and classical theorists. Agency may be limited to the brain, even to a single neuron, or extended to include the entire universe without contradicting this definition. The mind can be any entity one claims it to be, so long as one can provide evidence that said entity (wholly and without any external constitution) experiences consciousness. So, while the definition is not limiting prima facie, some external limits on what can be considered to have a mind should be enforced. The thing that has a mind and therefore is conscious (the agent) must be fully accounted for; no parts may be left out, and nothing that does not constitute part of the mind may be included. The correct conceptualization of the mind therefore depends on the veracity of embodied theories, rather than imposing limits on them. This also provides an imperative for researching embodied theory, since precisification of what portion of an agent produces cognition and consciousness promises also to identify the location of the mind.

c. Cognition (n.)-

Defining cognition in a way that all theorists (classical and embodied alike) can accept is perhaps the most important obstacle-- and the most critical benchmark-- for determining conclusively whether embodied or classical theories of cognition are correct. As such, the definition, as well as the need for a proper definition, are major sources of contention in the embodied cognition literature.
Andy Clark and David Chalmers proposed in their paper “The Extended Mind” that the environment plays “an active role... [in]... driving cognitive processes” (1998). They point to many common practices that involve offloading cognitive tasks onto the environment (e.g., using pen and paper to perform long multiplication, the use of instruments and tools, reading and writing) as evidence that cognitive processes can and do take place outside of the body. They argue that since cognitive tasks which are offloaded onto the environment do not differ functionally from similar tasks that take place within the brain, they should be included in the realm of cognitive operations. Stated another way, “...if a state [outside the body] plays the same causal role in the cognitive network as a mental state, then there is a presumption of mentality, one that can only be defeated by displaying a relevant difference between the two (and not merely the brute difference between inner and outer)” (Clark 2008, from Adams 2010). This parity principle demands that if one is to defend the brain as the limit of cognition (say, that cognition takes place wholly and exclusively in the head), one must show that operations outside the head are different from those within.
This principle seems viable. Consider two subjects who are read a list of words, and instructed to remember as many words from the list as possible. Subject A is allowed to take notes, and jot down the words as they are read, while Subject B must rely on memory. It is clear that Subject A’s recall of words from the list should be better than Subject B’s; in fact, Subject A’s performance should be near-perfect. Clark and Chalmers would argue that Subjects A and B were performing nearly identical tasks; they were both perceiving words from a list and encoding them for later retrieval. The only difference is the location of storage, which Clark and Chalmers would dismiss as functionally irrelevant.
They also draw on the theoretical possibility that technology could one day advance sufficiently to produce brain implants which augment or replace biological cognitive processes (think of the Borg from Star Trek). They point out that if we accept that these implants could be part of cognition, then contemporary objects that perform the same tasks are just as cognitive as the implants would be. Thus, we should have no problem considering calculators, dictionaries, and pens as part of the cognitive apparatus.
Other theorists, even some who consider themselves embodied cognitivists, have taken issue with these claims for various reasons. A major criticism relevant to the development of a definition of cognition comes from Frederick Adams. Adams points out that a definition delineating what is cognitive and what is not is abundantly lacking in academia (2010). He argues that the offloading of tasks onto the environment aids cognition, but does not constitute cognition. This distinction between aiding and constituting cognition is very important, and will be explored in detail in section V. For now, suffice it to say that Adams complains that people like Clark and Chalmers are getting away with violating the distinction between causation and constitution because academia lacks a formal mark of what can be considered cognitive and what cannot. Adams’ evaluation rightly suggests that the development and acceptance of such a mark would draw a figurative circle around the mind. Things inside the circle will be considered cognitive, and things outside the circle will not. It seems that the development of a mark of the cognitive would put to rest the debate over the possibility of extended mind.
Adams has recently proposed a set of conditions that may fulfill this need. He suggests that the following conditions be met for a process to be categorized as cognitive:

1. Cognitive processes involve states that are semantically evaluable.

2. The contents carried by cognitive systems do not depend for their content on other minds.

3. Cognitive contents can be false or even empty, and hence are detached from the actual environmental causes.

4. [Cognitive states] cause and explain in virtue of their representational content.

(Adams 2010)


Summarized briefly, for something to be cognitive: it must not merely carry information, but bear on the truth or falsehood of that information. The content must have meaning that is accessible to and produced specifically for the relevant agent. The content may be, and often is, false, incomplete, or inconsistent with objective reality (this is less of a requisite and more of a let or exception for some types of content that are clearly cognitive but might otherwise be wrongly excluded). Finally, things that are cognitive are able to produce behavioral results in virtue of their semantic content (more simply, things that are cognitive can drive behavior and things that are not cognitive cannot).
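To make the summary above concrete, here is a toy sketch in Python. It is my own illustration, not Adams’ formalism: the field names, the example states, and the way each condition is encoded as a boolean flag are all invented for the purpose of illustration.

```python
# A toy encoding (my own, not Adams') of the four conditions as a predicate
# over hypothetical descriptions of a state.

def is_cognitive(state: dict) -> bool:
    """Return True only if the state meets all four conditions."""
    return (
        state.get("semantically_evaluable", False)                 # 1. can be true/false
        and not state.get("content_depends_on_other_minds", True)  # 2. intrinsic content
        and state.get("can_misrepresent", False)                   # 3. detachable from causes
        and state.get("causes_via_content", False)                 # 4. content drives behavior
    )

# A remembered word plausibly satisfies all four conditions...
memory_trace = dict(semantically_evaluable=True,
                    content_depends_on_other_minds=False,
                    can_misrepresent=True,
                    causes_via_content=True)

# ...while marks on a notepad derive their content from a reader's mind.
notepad_marks = dict(semantically_evaluable=True,
                     content_depends_on_other_minds=True,
                     can_misrepresent=True,
                     causes_via_content=False)

print(is_cognitive(memory_trace))   # True
print(is_cognitive(notepad_marks))  # False
```

The interesting work, of course, is hidden in deciding how to set those flags for a real case like the notepad; that is exactly where the extended-mind debate lives.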
Adams’ paper is fairly recent, and it remains to be seen if his suggestions will be widely accepted, but if they are, they could be damning to proponents of extended mind. The conditions may allow for something like a brain implant to become part of the cognitive process, but deny the possibility of truly external processes constituting cognition. This will be explored more in-depth later.
This section ends without a conclusive definition of cognition. Such a conclusion would likely signal the end of the debate over extended mind, and I cannot presume to have solved such a problem. Deciding what constitutes cognition will have an impact in many of the following sections, and Adams’ conditions may provide a reasonable metric in some cases, but it remains to be seen if his conditions are faultless.

III. The Symbol Grounding Problem

a. The Need for Grounding

John Searle’s (1980) famous Chinese Room thought experiment can be invoked to illustrate the need for symbol grounding. The concept of grounding, in brief, is that, in order for symbols to acquire meaning for a particular agent, the symbols must have some basis in that agent’s experience (Shapiro 2011). Symbols may interact with and build upon one another infinitely to produce such abstruse phenomena as abstract nouns and hypothesis testing, but Searle demonstrates that at least some basic symbols must be grounded in real experience in order for any meaning to be acquired. For example, I may describe to you a six-foot tall feathered animal that runs very quickly and can be found in many zoos. I can tell you that this thing is called an ostrich. In any case, you will probably be able to remember the set of symbols associated with this new symbol (ostrich), but unless you have experience with the symbols that ‘ostrich’ is built on (the foot as a unit of measurement, running, quickness, feathers, zoos), you will fail to derive any real meaning from this new symbol.
Searle’s thought experiment demonstrates this principle (perhaps more adequately than I), but the point is the same. The presence of meaning, or as Shapiro points out, the ability to understand meaning (2011), is what differentiates cognition from computation, and symbols cannot be meaningfully understood without some kind of grounding. The Chinese room was originally articulated as a critique of the classical computational model of cognition, and embodied cognitivists claim their solution to the problem as one of their greatest victories.

b. Embodied Grounding and the Indexical Hypothesis

Some embodied cognitivists claim that they have discovered the solution to the symbol grounding problem (Glenberg and Kaschak 2002, Barsalou 1999, Shapiro 2011). The solution is known as the indexical hypothesis, and comes in three stages.

First, words are indexed, or mapped, to perceptual symbols (Glenberg & Kaschak 2002, but see also: Barsalou 1999). The words are stand-ins for the classical theorist’s amodal symbol, and indeed that’s exactly what a word is: a symbol for an object that is not based in any one modality. The strength of this indexing is that it allows perceptual information to collapse across modalities, and be represented by a single amodal representation (i.e., the amodal symbol for cat can include a cat’s appearance, its smell, the sounds it makes, etc.).

In the second stage, affordances are derived from these symbols (Glenberg & Kaschak 2002). Affordances are important facets of meaning, because they explain why my understanding of a chair will differ from a mouse’s understanding of a chair (if a mouse can understand such a thing), even if we have identical histories with and memories of chairs. It seems to me that an affordance can be readily likened to the subjective character of phenomenal consciousness. The “for-me-ness” of an instantiation of conscious experience requires that I use knowledge about myself to experience the chair. The chair shows up in the world for me and the mouse, but because (this part is important) we have different bodies (different selves, different self-representation), the chair will afford different things to us. To me, a chair is a place to sit; chairs afford sitting. Though the mouse may presumably use a chair to sit (can mice sit?), it may also use it to climb, to hide under, to chew on, etc. The chair affords different things to the mouse and me, and the embodied cognitivist will readily proclaim (rightly so) that those affordances depend on the type of body you have. I would go so far as to say that the derivation of affordances is a component of the subjective character of the phenomenon of consciousness.

In the third stage, these affordances for different perceptual symbols are “meshed.” This meshing allows the affordances for different symbols of objects in the world to interact with one another. The example given by Shapiro points out that a vacuum cleaner affords hanging (things can hang on it), and a coat affords being hang-able (it can hang on things) (2011). The affordances of these two objects mesh, and provide the basis for our understanding of the sentence “hang the coat on the upright vacuum cleaner.”
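The three stages can be sketched as a toy pipeline. This is my own minimal model, not Glenberg and Kaschak’s: the feature sets, the rules for deriving affordances, and the meshing test are all invented stand-ins for what would really be rich, body-relative perceptual knowledge.

```python
# A minimal toy model of the indexical hypothesis (my own illustration).

# Stage 1: words are indexed to perceptual symbols (here, crude feature sets).
perceptual_symbols = {
    "vacuum": {"upright", "rigid", "tall"},
    "coat":   {"flexible", "has_loop", "wearable"},
}

# Stage 2: affordances are derived from the symbols, relative to a body.
def affordances(word: str) -> set:
    features = perceptual_symbols[word]
    derived = set()
    if {"upright", "rigid"} <= features:
        derived.add("supports_hanging")   # things can hang on it
    if "has_loop" in features:
        derived.add("hangable")           # it can hang on things
    return derived

# Stage 3: affordances mesh when they are complementary; "hang the coat on
# the upright vacuum cleaner" is understandable only if they do.
def mesh(support_word: str, hung_word: str) -> bool:
    return ("supports_hanging" in affordances(support_word)
            and "hangable" in affordances(hung_word))

print(mesh("vacuum", "coat"))  # True: the sentence is sensible
print(mesh("coat", "vacuum"))  # False: a vacuum can't hang on a coat
```

The asymmetry of the last two lines is the point: meshing is directional, which is why Shapiro’s vacuum-and-coat sentence makes sense in one direction and not the other.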

The point to take away from the indexical hypothesis is that meaning (or our ability to understand meaning) is grounded in embodiment. Any agent’s understanding of a thing in the world depends on what that thing affords to the agent, and affordances depend strictly on the type of body that the agent has. I heartily support this hypothesis, but its strength is undermined in a few ways that deserve mention.

First, the indexical hypothesis may not be incompatible with classical cognitive theory (Shapiro 2011). That is, there is no reason to suspect a computational form of cognition couldn’t index perceptions onto amodal symbols and mesh their affordances. Shapiro points out that “All that is necessary [for classical cognition to explain the indexical hypothesis] is [an] additional causal step-- from the modal representation of a [symbol] to an amodal representation” (2011). I suppose Shapiro is suggesting that a middle step that accounts for the transduction of modal symbols into amodal symbols would allow classical cognition to provide an account of the acquisition of meaning that does not differ appreciably from the embodied cognitivist’s account. If classical cognitivists can explain the acquisition of meaning, then they will have no motivation to depart from their computational models, which only deal with symbols after meaning is acquired. So, the indexical hypothesis may not provide a reason to prefer the embodied account.

The second problem with the indexical hypothesis requires an explanation of the action-sentence compatibility effect (ACE) demonstrated by Glenberg and Kaschak (2002). Their experiment asked participants to judge the sensibility of statements presented on a screen. In order to provide a sensibility judgment, the subjects had to press a “Yes” or “No” button. The buttons were positioned such that the “Yes” response required an action either away from or toward the subject’s body (yes-is-close and yes-is-far conditions). The authors found that sentences which implied action away from the body (e.g., close the drawer) were judged to be sensible more quickly when the action required to respond “Yes” was also a movement away from the body. In keeping with standard analysis of reaction time studies, they concluded that, since reaction times were slower for sentences which implied an action opposite the action required to provide a correct response, the ability to understand an action and the ability to perform an action must be subserved by the same system. In other words, the cognitive processes necessary for understanding an action-related sentence are the same cognitive processes that are necessary for performing that action. Subjects are faster when the two variables are compatible, because the action itself is primed by understanding the action, and slower when they are incompatible, because understanding and acting interfere with one another.
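The logic of the analysis can be illustrated with a small sketch. The numbers below are invented for illustration (they are not Glenberg and Kaschak’s data); the sketch only shows the shape of the comparison, where trials are grouped by whether the sentence’s implied direction matches the response direction.

```python
# Toy illustration of the ACE analysis: compare mean reaction times for
# compatible vs. incompatible trials. All RTs below are invented numbers.
from statistics import mean

# Hypothetical RTs in ms, keyed by (sentence_direction, response_direction).
trials = {
    ("away", "away"):     [610, 590, 605],  # compatible
    ("away", "toward"):   [670, 660, 655],  # incompatible
    ("toward", "toward"): [600, 615, 595],  # compatible
    ("toward", "away"):   [665, 650, 672],  # incompatible
}

compatible   = mean(rt for (s, r), rts in trials.items() if s == r for rt in rts)
incompatible = mean(rt for (s, r), rts in trials.items() if s != r for rt in rts)

# The ACE prediction: compatible trials are faster on average.
print(compatible < incompatible)  # True
```

Whether that difference reflects interference in understanding or merely in the do-ability of the action, as I argue below, is precisely what the raw comparison cannot settle.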

The objection to this experiment arises from Glenberg’s reasoning behind what, exactly, a sensibility judgment is. Fred Adams (forthcoming, from Shapiro 2011) articulates a reasonable objection to Glenberg’s reasoning, but I would take the objection one step further. Adams points out that the incompatibility of affordances does not make a sentence un-understandable. To use Adams’ example, the sentence “climb the pencil” does not allow proper meshing of affordances for an agent with a body like a person’s. Pencils do not afford climbing to people. However, one can still understand the sentence. In a metaphorical sense, I can imagine what it would be like to climb a pencil. The ACE effect, then, does not demonstrate interference in the understanding of sentences (as the experimenters claim), but rather interference in the do-ability of a sentence. In essence, the experiment does not seem to be related to understanding, and as such has little to do with the indexical hypothesis’ claims about the origins of meaning and understanding, but it may provide an embodied account of affordance judgment.

I would propose that the experiment be conducted again with a different aim and different “nonsense” sentences. Comparing nonsense sentences-- more precisely, collections of words that can be categorically judged to lack any semantic content-- with sentences that do make sense would show whether the ACE effect exists with regard to the understanding of meaning. As an example of a completely nonsensical sentence, I would invoke a well-known Internet meme popularized on various forums and image boards: “Has anyone really been far even as decided to use even go want to do look more like?” (sic; this sentence doesn’t have an identifiable source, but I must note that I did not come up with it. For more info see encyclopediadramatica.com, and others). The sentence, and others like it, lacks any identifiable semantic content. It simply doesn’t make any sense. The words taken individually, however, do make sense; they retain their individual meanings and affordances. What the sentence lacks is any meshing of affordances. If an experimenter were able to show that judging sentences like this one as nonsensical takes longer than judgments for sentences like “climb the pencil,” then he or she could conclude that a) affordances exist and b) the proper meshing of affordances is involved in understanding the meaning of a sentence.


IV. Enactivism
a. Strong Enactivism

Alva Noe and Ken Aizawa’s debate over enactivism is an interesting one. Noe claims that consciousness is an action; something that is done rather than something that happens or is possessed (2009). Noe’s argument is that consciousness requires active, dynamic involvement with an environment. Aizawa refers to this view as Strong Enactivism (SE), and correctly points out that “the central problem for strong enactivism is the existence of perception despite complete immobilization,” (forthcoming). If consciousness works the way Noe claims, then the onset of paralysis (immobilization) should mark the agent’s loss of consciousness. Noe seems to be saying (though these are my words), that agency is constituted by the agent’s ability to react to the environment. It surprised me, but this does not seem to contradict my (and Kriegel’s) definition of consciousness. After all, part of conscious experience is its qualitative character, and qualitative character comes from the environment. However (here is where I break from Noe and Kriegel), I have described consciousness as a phenomenon that arises in reaction to other phenomena. This allows for the possibility that consciousness, being a phenomenon, may react to itself. Devoid of any environmental cues, consciousness may persist by means of a positive feedback loop. Each instantiation of consciousness-- or moment of conscious awareness-- is perceived, understood and reacted to, such that consciousness recursively perpetuates itself.

Fatally for Noe’s view, Aizawa points out that consciousness does persist after complete immobilization by drug-induced paralysis (forthcoming). On my view, however, consciousness that normally arises in response to the environment could persist by reacting to itself. If Noe allowed for this circular causation, he could hope to rescue his strong enactivism. As excited as I am by this possibility, I still have to take issue with Noe’s analysis of SE. Noe seems to believe that the involvement of the environment in conscious awareness is constitutive. That is, he believes that things happening in the environment are an inextricable component of conscious experience. On my view, the environment provides causal support for consciousness. I will discuss this fallacy in detail in section V.


b. Weak Enactivism


Noe seems to back away from claims of SE, and instead defends what Aizawa calls weak enactivism. Noe’s reasoning holds that it is not necessary to actually interact with the environment; it is only necessary to know how to interact with the environment. He refers to this actionable skill set as sensorimotor knowledge (SMK), and defends his position by referring to amodal completion of occluded figures. He argues that our perception of occluded figures does not exist in a one-to-one relationship with reality. What we consciously experience has a dual content on his view. There is the “raw data”: the visual map or qualitative content of what we see, but the experience also contains (on his view) the perception of a completed, viewpoint-invariant figure. He claims that we are able to complete the figure because we have SMK about figures “like that one.” If something looks like it could be a square, we are readily convinced that it is, because of our experience with other square-like objects. Aizawa points out that if the occluded portion of the figure as it exists in reality doesn’t match our filled-in mental representation of it, we experience the object erroneously. In pointing this out he exposes a flaw in the perceptual system, but that flaw does not necessarily preclude SMK’s involvement in perception. Noe’s SMK is based on habit, and provides a basis for belief and expectation. Noe does not claim that SMK is based in flawless fundamental truth, or justified true belief. Aizawa’s critique seems to be limited to pointing out that SMK does not always provide an agent with an accurate representation of the world. Aizawa fails to explain why this flaw in SMK means SMK can’t exist. Consider an analogy.

My computer is designed to take the inputs that I give it, manipulate them according to a set of rules, and provide an output that I can react to. It is designed to do this consistently and without failure. Unfortunately, sometimes I provide too many inputs and my computer can’t respond to them all. Sometimes, some of my inputs aren’t registered, and are simply skipped. Sometimes I produce input so quickly, that I interrupt the response to a previous input. This causes a corruption of the data stream by storing information in RAM in a place that it doesn’t belong. My computer doesn’t know what to do with this data, and either freezes or crashes. By Aizawa’s logic, the rules that my computer uses to manipulate my inputs don’t exist. This argument is silly at best. Providing an example of how some process can fail does not prove its non-existence. There may be valid and fatal critiques to the concept of SMK, but this is not one.


V. The Causation-Constitution Fallacy and Consciousness

The distinction between causation and constitution has implications in every preceding section of this essay, and I have touched on it many times. It seems that every aforementioned embodied cognitive theorist has committed this fallacy, and in my view, it may sound the death knell of embodied cognition. Embodied cognitive theorists, almost without exception, claim that processes external to the brain are a part of cognition. They claim that things external to the brain may and do constitute cognition. Lawrence Shapiro delineates the distinction between causation and constitution aptly:
“… if C is a constituent of an event or process P, C exists where and when that event or process exists. Thus, for some process P, if C takes place prior to P’s occurrence (even if in the location where P eventually occurs), or if C takes place apart from P’s occurrence (even if during the time span of P), then C is not a constituent of P.” (2011)

To harp on my earlier example, my input to a computer does not constitute the computer’s treatment of my input. But my input does cause my computer’s treatment of that input. Hitting the enter key to begin a Google search does not constitute a Google search (its constituents are the algorithms that turn my input into some output), but it is a very necessary piece of the causal chain that produces Google’s output.
Proponents of embodiment, like Clark and Chalmers, Noe, Glenberg and Kaschak, Barsalou and others, allow for the possibility that cognitive processes are or can be constituted by, well, anything. Their individual viewpoints vary widely from Chalmers’ panpsychism to Glenberg’s indexical hypothesis, but they all believe that cognition is constituted at least in part by something other than processes in the brain. This, I simply can’t abide. The brain, in my view, is fundamentally an input/output device, like a computer. The inputs are limited to information that can enter through the sensory modalities, but the nature of the output is a bit more nebulous. The output of the brain includes behavior, of course, but also includes consciousness and importantly, memory. The self-perpetuating and reactive nature of conscious experience means that an agent can effect a change in the environment, the consequence of which is actionable or useful. Let me explain.

An agent, more specifically a person, can use a pen and paper to do long division, perhaps 16,384 divided by 3. First, the agent must obtain a desire to find the answer to this problem of how many 3’s are in 16,384; this desire is an input provided by the environment. Then he will decide to write the problem on a sheet of paper; this decision is an output, but it is also an input that causes him to behave in such a way that he actually writes the problem on a sheet of paper. The behavior that created the image was an output, but the image itself is accessible to vision, and can therefore be an input. Then he must begin to solve the problem. He asks himself “how many threes go into 16?” and somehow, he comes up with the output “5 threes, with one left over.” The internal process that results in the output is cognition. Any process that turns an input into an output may be considered cognition. This is essentially the classical view.
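The cycle just described can be made concrete with a toy sketch (my own illustration; the function and the way the “paper” is modeled are invented), in which each written digit is an output that immediately becomes a visual input for the next step:

```python
# Toy sketch of pen-and-paper long division (16,384 divided by 3), with the
# "paper" as an external store. Each step reads the page (input), performs a
# small internal computation (cognition), and writes a digit (output).

def long_division(dividend: int, divisor: int):
    paper = []          # the external store: digits written down so far
    remainder = 0
    for digit in str(dividend):
        # input: read the next digit off the page, combine with the carry
        current = remainder * 10 + int(digit)
        # internal process: "how many divisors go into current?"
        paper.append(str(current // divisor))   # output: write a digit
        remainder = current % divisor
    return int("".join(paper)), remainder

print(long_division(16384, 3))  # (5461, 1)
```

On my view this division of labor is exactly right: the paper is indispensable causal support for the computation, but the step that turns each input into an output-- the integer division inside the loop-- is where the cognition is.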

I view consciousness as an output; a phenomenon that is produced by brains, just like behavior. This means that consciousness (remember when I say consciousness, I mean the experience of the qualitative and subjective character of perception; what something is like for me) is not constitutive of cognition. Consciousness is not a mental process. It simply requires the causal support of cognition and mental processes. Likewise cognition (which causes consciousness) depends on the agent’s body for causal support. Cognitive processes in the brain could not take place without the digestive or circulatory systems. It also seems evident that the meshing of affordances that are grounded in bodily states may be necessary for the (output) experience of understanding meaning. But again, the roles of these processes are causal.


VI. Conclusion

My views on consciousness aside, embodied cognition does not seem to have produced a hypothesis or experimental result that could not be explained by the classical view. Embodied cognitive theory has raised some interesting questions about cognition, but ultimately Adams and Aizawa are correct. Embodiment is an important, perhaps even necessary step in the causal chain that produces cognition and consciousness, but the mind is flat. If you sail across the mind, you will eventually fall off. What I mean is, there must be a limit on what can be considered a cognitive process, and in my view cognitive processing, and therefore the mind, is necessarily confined to the brain.








Works Cited

Adams, F. (2010). Why we still need a mark of the cognitive. Cognitive Systems Research, xxx, xxx-xxx.
Aizawa, K. (forthcoming). Don’t give up on the brain.
Barsalou, L. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577-609.
Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford: Oxford University Press.
Clark, A., & Chalmers, D. (1998). The extended mind. The Philosopher's Annual, XXI, 59-74.
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558-565.
Kriegel, U. (2009). Subjective consciousness: A self-representational theory. Oxford: Oxford University Press.
Noe, A. (2009). Out of our heads: Why you are not your brain, and other lessons from the biology of consciousness. New York: Hill and Wang.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.
Shapiro, L. (2011). Embodied cognition. New York: Routledge.





Sunday, November 22, 2009

First and For Most

This is a blog. Blogs are places where people post things that they find interesting, and then guilt you into reading about those things in hopes that you will care about them too. This one belongs to me.

My mission here is to provide a window into my thought processes. In conversing with people, it has become apparent that the way I think and the beliefs I hold are markedly different from those of the main.