
Mind, meaning, and neural causation

Derek Bolton and Jonathan Hill

in Mind, Meaning and Mental Disorder

Second edition

Published on behalf of Oxford University Press

Published in print March 2004 | ISBN: 9780198515609
Published online March 2013 | e-ISBN: 9780191754296 | DOI: http://dx.doi.org/10.1093/med/9780198515609.003.0002

Series: International Perspectives in Philosophy & Psychiatry


Abstract

The main claim of the first chapter was that explanations of action in terms of meaningful mental states are effective in prediction, and the inference was drawn that such explanations are causal. The main burden of the present chapter has been to reconcile this conclusion with the material causation of behaviour by the brain. We began in Section 2.1 by noting that mind–brain identity theory promises a quick reconciliation, but that it runs up against several obstacles, chief of which in the present context is that mental states have intentionality while brain states arguably do not. It was suggested on the other hand that brain processes can be legitimately regarded as carrying (encoding) information, and indeed are so regarded at the interface between cognitive psychology and neuroscience, particularly for the purpose of explaining the role of the brain in regulating action. Subsequent sections were devoted to explication of this idea that brain states encode information, or meaning, and in particular to defending it from various lines of thought much discussed in the current literature.

In Section 2.2 we considered Fodor's Language of Thought Hypothesis, according to which there are sentence-like structures in the brain, which serve both to encode meaning and to regulate (cause) behaviour. However, this particularly literal reading of the encoding thesis is apparently incompatible with connectionist models of cognitive function and its implementation in the brain. In these models there are, at least at a micro-level of analysis, no physical (hardware) counterparts to meaningful symbols, or sentences. The relation between connectionist models of cognitive function and those of traditional, symbolic AI, the ones conducive to the Language of Thought Hypothesis, is complex, however. The philosophical assumption relevant here is that, if attribution of meaningful mental states is valid, in particular if it affords causal explanations of action, then such attributions must pick out corresponding states in the brain. This assumption can be combined with affirmation of its antecedent in order to derive its consequent, in the (sophisticated) form of the Language of Thought Hypothesis. Alternatively, the assumption can be combined with denial of its consequent (on connectionist grounds), leading to denial of its antecedent, that is, to the ‘elimination’ of folk psychology. Either way, the assumption is problematic, and reasons for rejecting it were presented. Psychological descriptions attribute meaningful states to the brain, or better, to the person as agent. However, they are based on (high-level) regularities in organism–environment interactions, and it is on this basis that they predict such interactions well (enough). There is no reason to suppose that in accomplishing this task, psychological descriptions also have to pick out neural structures that correspond to them. The assumption that folk psychology seeks to define the neural structures which serve information-processing and action, still more that this is its primary aspiration, is based on a misconception of the logic of psychological description, and hence of its ontological commitments. The upshot of these considerations is that the brain can be said to encode meaning, or information, though not because there is a one-one correspondence between meaningful states and neural states. The language of meaning and of encoded meaning is based in organism–environment interactions, and can be applied to the brain only insofar as the brain serves in the regulation of those interactions. The main claim of the chapter—that meaningful mental causation can be reconciled with neural causation by appeal to the idea that the brain encodes meaning—can thus be preserved, though with these qualifications.

We turned next to related issues in AI theory, concerning the relation between syntax and semantics. In Section 2.3 we considered Searle's well-known ‘Chinese room argument’ against the conflation of semantics with syntax; symbol-manipulation is not enough for intentionality. As to what is, we invoked again the claim, as throughout the essay, that intentionality is grounded in activity. Made in terms of semantics and syntax, the point is that the ‘symbols’ which carry meaning must be essentially involved in the mediation of sensory-motor pathways.

The considerations in the next two sections, 2.4 and 2.5, had to do particularly with the idea that meaning encoded in the brain plays a causal role. This claim about causality is, as indicated in the first section, the main rationale of the encoding thesis. In Section 2.4 we considered a line of argument pervasive in the literature, the main thrust of which is that it would have to be neural syntax which causes behaviour, not the alleged neural semantics, because only the former is local to the effects. Semantic properties, so the argument runs, are about distal features and therefore cannot be causes. Our reply was that locally defined neural causes will have locally defined effects, namely motor behaviour which is not about the environment. By contrast, causal explanation of intentional behaviour has to cite intentional causes, causes that are about what the behaviour is about. Such causes are identified in non-local terms, though they are local to the behaviour that they regulate. This combination of intentional features with local instantiation is of course precisely the point of the thesis that meaning (information) is encoded in the brain.

In Section 2.5 we turned to a distinct though related argument that would refute the main proposal, an extremely influential argument based on Putnam's so-called twin-Earth thought-experiments. It leads to the conclusion that ‘meanings ain't in the head’, and hence (among other things) are not causes of behaviour. The argument seeks to establish the claim that mental states as individuated by meaningful content can (be imagined to) vary while brain states remain the same, this variation being due to variation in the represented states of affairs. This claim has been accepted by most commentators, leading either to the more or less reluctant dismissal of folk psychological meaning as being irrelevant to causal explanation, or else to the problematic attempt to define a ‘narrow’ as opposed to a ‘broad’ content. Putnam's argument was criticized in the remainder of the section. The standard reading of the twin-Earth thought-experiments, the one which supports the conclusion that meanings ‘ain't in the head’, posits differences in mental contents which are not reflected in behavioural differences, in particular, not in discriminative behaviours. This dissociation between mental content and behavioural criteria, whatever can be said for it on other grounds, is at odds with what is required in the context of a cognitive-behavioural science. In this context, the postulate of meaningful states serves the purpose of explaining and predicting action, in particular discriminative behaviour, and is otherwise unjustified. The implication is that meaningful content is to be defined essentially in terms of organism–environment interactions. Given this kind of definition of content, several features of folk psychological meaning that are problematic or incompatible according to the standard reading of the doppelgänger thought-experiments emerge as valid. Folk psychological meaning, intentionality in general, is and should be all of the following: ‘world-involving’, supervenient on the neural states that regulate action, and invoked in the causal explanation of action.

A very different sort of objection to the encoding thesis was considered in Section 2.6. It starts from a view found in Wittgenstein's philosophy and other post-modern semantics, that meaning is based in human practice and culture, and it therefore resists the idea that meaning is ‘in the brain’. The underlying view of meaning was accepted, however, and indeed has been emphasized throughout the chapter to rescue the encoding thesis from the various objections to it so far. However, the language of encoding, when appropriately understood, remains valid in this context of post-modern semantics. Meaningful psychological ‘states’ are manifested in action. It is possible to describe these states as being encoded in the brain, but this description presupposes that the brain is functioning within the person, and that the person is acting in the environment. In this sense, the subject of psychological states is primarily the person (as agent). Nevertheless it is possible and legitimate to say that the brain encodes these meaningful states: this is the way in which brain function has to be described when the task is to explain how it regulates action.


Subjects: Psychiatry
