Holism Without Tears:
Local and Global Effects
in Cognitive Processes

Ron McClamrock
University at Albany, SUNY

Published in Philosophy of Science, June 1989

The suggestion that cognition is holistic has become a prominent criticism of optimism about the prospects for cognitive science. This paper argues that the standard motivation for this holism, that of epistemological holism, does not justify this pessimism. An illustration is given of how the effects of epistemological holism on perception are compatible with the view that perceptual processes are highly modular. A suggestion for generalizing this idea to conceptual cognitive processing is made, and an account of the holists' failure is offered.

The following question has been emerging as one of the central problems facing cognitive science: If it doesn't come in parts, how are we going to build it? In this paper, I'll be looking at both the suggestion that the mind doesn't "come in parts", and the claim that decomposability is a precondition on our understanding and modeling cognitive processes in the ways that contemporary cognitive science proposes. The holistic pessimism embodied in these two claims has recently become pervasive in some philosophical circles, spreading across the philosophy of psychology from Hubert Dreyfus' strong anti-cognitivist position to Jerry Fodor's equally strong pro-cognitivist one. The worry that makes for such strange bedfellows is that cognitive processes are holistic, global, and non-decomposable; and as such, they promise to evade explanation and modeling by the "computer metaphor" strategy central to cognitive psychology and artificial intelligence.

The strategy in this paper will be the following: First, I'll try to clarify the nature of the holistic worry. Second, I'll look at ways in which perceptual processes may or may not be holistic in the worrisome sense, and draw a small moral from that. And finally, I'll try to apply this moral more generally to the case of cognitive processing, and suggest how it bears on the holists' worries about cognition. Let me emphasize that I won't be trying to show here that the holists' claims about cognition must be wrong; but rather, that one can perfectly well accept some of the central assumptions of the holistic argument without accepting its conclusion. It's not that I think that it couldn't turn out that cognitive processes are holistic in a sense which impedes their explanation; that is, I think, a question which is still open, and is to be settled by the real successes and failures in psychological theory-building. The point here will be just that the philosophical arguments advanced by the holists don't justify their pessimism.

Locality and globality

The notion of the decomposability of a complex system has a fairly obvious and intuitive core: What should be looked at in considering the behavior of some functional component of a system is not the entire functioning of the overall embedding system. Instead, consideration should focus on a more limited class of facts: facts about the subsystem's own internal structure, plus some constrained class of effects from the rest of the overall system. So, for example, the arithmetic subroutine of some program may not care what goes into determining that it's 24x19 instead of 56x82 that it's asked to compute; all that matters for considering its behavior is its own internal structure plus what numbers (and function descriptions) are given to it. The processing here is local: it only has access to some constrained, functionally local properties of the overall system. It is just this property of locality of process that the holists claim cognitive systems do not have. Cognition does not, they say, come in discrete chunks or parts. There are not local "frames" or "sub-worlds" which are the parts which comprise the whole of cognitive structure. Or to give it a slightly more mysterious tone: Meaningfulness is at the heart of any account of cognition, and nothing could be more holistic in nature than that.
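To fix ideas, here is a toy sketch of locality of process in modern programming terms (the routine names and figures are merely illustrative): the subroutine's behavior is fixed by its own internal structure plus the inputs it is handed, whatever the caller's reasons for supplying them.

    # A toy illustration of locality of process: the arithmetic
    # subroutine sees only the arguments it is handed, not the
    # caller's reasons for choosing them.

    def multiply(a, b):
        # Internal structure plus the given inputs are all that
        # matter for this subsystem's behavior.
        return a * b

    def main_routine():
        # The caller's "global" concerns -- why it wants 24 x 19
        # rather than 56 x 82 -- never reach the subroutine.
        order_quantity, unit_price = 24, 19
        return multiply(order_quantity, unit_price)

    print(main_routine())  # 456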

The underlying theme to this view is that of epistemological holism, or the holism of belief fixation. In contemporary analytic philosophy, this idea is most closely associated with Quine's "web of belief" metaphor. Roughly put, it's the idea that anything can conceivably bear on anything else. Depending on what the overall structure of my knowledge is, it's possible that finding out about what brand of coffee I'm drinking could affect my beliefs about whom to vote for. (E.g., I find it's brand X, you bought it, everybody knows the good guys are boycotting brand X, so you're not a good guy, so I shouldn't let you convince me to vote Republican.) It's also the problem which in discussions of artificial intelligence is called the "frame" problem: the problem of trying to put a "frame" around the representations which need to be updated on the basis of some new bit of information, or around those which should be brought to bear in dealing with some problem or question. If essentially anything could in principle bear on anything else, then such boundaries on what should ideally be looked at are at least in principle undrawable. (A good philosopher's introduction to the frame problem is given in Dennett 1984.)

As mentioned earlier, the view of the holism of cognition as a potentially severe problem for cognitive science is held in common by commentators who are otherwise as far apart as Jerry Fodor and Hubert Dreyfus. With the nature of the worry somewhat clearer, let me say a little more about that. As for Dreyfus, his pessimism here is unsurprising, given his well-known pessimistic views about cognitive science in general. As he puts it at one point, "... sub-worlds are not related like isolable physical systems to larger systems that they compose; rather they are local elaborations of a whole which they presuppose." (Dreyfus 1979, p. 14)

On the other hand, Fodor's holistic pessimism is quite surprising, given his usual role as one of the more optimistic defenders of cognitive science. His pessimism comes in two parts: The first is what he calls "Fodor's First Law of the Non-Existence of Cognitive Science." This is the assertion that "the more global a cognitive process is, the less anybody understands it." (Fodor 1983, p. 107) The second part is his claim (based on the idea of epistemological holism or the frame problem) that central processes -- paradigmatically, the processes of non-demonstrative fixation of belief -- are robustly global (i.e. non-local) in nature. It's this conclusion which (at the end of Modularity of Mind) leads him to such pessimistic musings as the comment that "if someone -- a Dreyfus, for example -- were to ask us why we should even suppose that the digital computer is a plausible mechanism for the simulation of global cognitive processing, the answering silence would be deafening." (p. 129)

Before disturbing that silence, let me make a brief aside about what won't be considered here: the issue of meaning holism. Very roughly, meaning holism is the idea that the meaning of any term is a function of (at least) its overall conceptual or inferential position within an entire conceptual structure (or theory). On this view -- sometimes called "conceptual role semantics" (see Block (1986) for an up-to-date discussion of this view) -- changing any part of the "web of belief" will change (at least a little) the relationship of all of the parts, and will thus change the meanings of the terms in the structure. So, for example, the holistically individuated conceptual role of 'polio' in some world-view can not only be altered by finding out about the nature of the underlying etiology of the disease, but also by finding out that, say, my Aunt Sally had it as a young girl.

Once you accept some kind of epistemological holism, it may be an easy slide into meaning holism, but it's also one you don't have to make. I don't in fact happen to accept meaning holism and conceptual role semantics myself; but the reasons for this are outside the scope of the present discussion. The point of bringing up the topic here at all is just to distinguish those issues from the issues about holism of processing with which this paper is concerned. I think it's easy to run the two kinds of issues together. What's at issue here is a different move from the idea of epistemological holism; one which leads to a holistic view of mental processing.

Decomposition and functional analysis

There are two ideas (one of which has been discussed widely in the literature) which I would like to bring up for use here: The more familiar one is the idea of functional analysis by decomposition; the less familiar is Robert Cummins' distinction between interpretive and descriptive analysis.

Functional analysis by decomposition: A standard approach to the scientific explanation of the workings of a complex system is to divide and conquer. We carve the system up into functionally characterized subsystems, and then explain its behavior by characterizing the job done by each of the parts, and how all of those jobs fit together to produce the overall complex behavior. So, we explain the workings of the circulatory system by characterizing the functional role of the heart as that of pumping blood, of the veins as carrying it back to the heart, and so on; and similarly for the functional structure of bureaucracies, radios, etc. Further, there is no requirement of physical saliency for functional components -- the purchasing department might be spread around the corporation, and subroutines of programs typically don't have any distinct physical location.

Let me briefly mention two ways in which this notion has been elaborated in the literature. One of the best known is Herbert Simon's characterization of nearly decomposable systems (Simon 1981). For Simon, a system is nearly decomposable if it is composed of component subsystems where the interactions between subsystems are "weak, but not negligible" -- that is to say, where "intracomponent linkages are stronger than intercomponent linkages." (p. 217) This is typical, in Simon's view, in complex systems ranging from corporations (where the components might be departments) to molecular structures (where atomic bonds are enough stronger than molecular ones that nuclei can be considered indivisible).
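A crude numerical sketch may make the idea more concrete (the interaction strengths below are invented purely for illustration): a system counts as nearly decomposable just when the within-component linkages dominate the between-component ones.

    # A minimal sketch of Simon's near decomposability, with made-up
    # interaction strengths: intracomponent linkages are strong,
    # intercomponent linkages are "weak, but not negligible".

    interaction = {
        ("A1", "A2"): 0.9, ("A2", "A1"): 0.9,    # within component A
        ("B1", "B2"): 0.8, ("B2", "B1"): 0.8,    # within component B
        ("A1", "B1"): 0.05, ("B1", "A1"): 0.05,  # across components
    }

    components = {"A": {"A1", "A2"}, "B": {"B1", "B2"}}

    def same_component(x, y):
        return any(x in members and y in members for members in components.values())

    intra = [w for (x, y), w in interaction.items() if same_component(x, y)]
    inter = [w for (x, y), w in interaction.items() if not same_component(x, y)]

    print(sum(intra) / len(intra), sum(inter) / len(inter))  # 0.85 vs 0.05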

This idea of explanation by functional decomposition has also been fleshed out in various ways specific to the notion of an information-processing system. John Haugeland, in "The Nature and Plausibility of Cognitivism" (Haugeland 1978), characterizes the systematic decomposition of an information-processing system via the specification of functionally characterized component Intentional Black Boxes (IBBs) and their interrelations. For Haugeland, the component IBBs of a decomposable information-processing system must have some constrained class of representational (or "quasilinguistic") inputs and outputs which are their only real functional link to the other IBBs in the system. In the simplest sort of case, we can imagine the decomposition of a chess-playing machine into components which generate possible moves, make some evaluation of the strength of that move, and decide when a good enough move has been found. On this sort of analysis, we might see the joints at which to carve up the system for functional decomposition as information bottlenecks at which the flow of information is dramatically constrained with respect to the total information used by the subsystem. So, for example, the subroutine in a computer which does the floating-point addition may have access to information about the location in memory of a given floating-point accumulator, but the main routine which balances the books may not. It just asks the arithmetic subroutine for answers to problems, and doesn't care how they're solved.

Interpretive and descriptive analysis: In chapter 2 of his book The Nature of Psychological Explanation (Cummins 1983), Robert Cummins presents the distinction between interpretive and descriptive specifications of sub-functions of a complex system. As he points out, a natural cleavage can be made between characterizations of a functional subsystem in terms of locally determined, intrinsic, descriptive properties of the subsystem, and characterizations in terms of properties which it has by virtue of the functional role it plays in the embedding system -- properties which are globally determined, relational, and interpretive. For example, a given action of a computer's central processor, such as loading an internal register with the contents of some specified memory location, might be specified intrinsically as, say, "load A with the contents of X". But that very same action might, given the program in which it is embedded, be specified as "get the next significant digit in the calculation"; or in another place in that program, "get the next letter on the current line of text". The very same descriptively characterized operation can be an instance of the former interpretively specified action in one instance and of the latter on some other occasion. Generally, a descriptive characterization of an operation or a subsystem specifies it in terms of properties which supervene on functionally or physically local properties. In contrast, an interpretive specification characterizes it in terms of its relationally specified, more literally "functional" properties -- properties which needn't supervene on any particular localized proper subset of the properties of the overall system.
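A small sketch may help fix the distinction (the function and variable names are invented for illustration): one and the same descriptively characterized operation -- fetching the value stored at a given location -- plays two quite different interpretive roles depending on the program it is embedded in.

    # The same descriptively characterized operation under two
    # different interpretive analyses.

    def fetch(memory, address):
        # Descriptive characterization: "load the value stored at
        # the given address."
        return memory[address]

    # Interpretive role 1: "get the next significant digit in the
    # calculation."
    digits = {0: 7, 1: 3, 2: 2}
    next_digit = fetch(digits, 0)

    # Interpretive role 2: "get the next letter on the current line
    # of text."
    line = {0: 'h', 1: 'i'}
    next_letter = fetch(line, 0)

    print(next_digit, next_letter)  # 7 h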

Two more quick points about the distinction will be helpful later on. First, note that it's quite plausible to make the descriptive/interpretive distinction at more than one level of analysis of a system. So, for example, from the point of view of a Lisp program, a function like (car list) (i.e. "get the first item on the list named `list'") might be thought of as a descriptive characterization of that action, whereas the correct interpretive characterization of that operation in a particular case might be "get the name of the student with the highest score on the midterm". Of course, from the machine language point of view, the Lisp characterization would be seen as an interpretive analysis of some sequence of machine language instructions -- a sequence of instructions which might play a different role in some other context.

The second point is that for at least one level of application of this distinction, it's plausible to see it as one way of making the distinction between syntactic and semantic properties of a representational system. The syntactic or formal properties of a representation are typically thought of as those which the representation has in virtue of its shape, or if you like, its local structural properties. Relationships to other representations in the system which are inferentially linked to it are not typically considered syntactic properties of the representation, but rather semantic ones. The significance of the representation for the system, its overall functional (or sometimes even "conceptual") role -- these are the sorts of properties often thought of as bound up with the idea of the meaning rather than the form of the representation. These are also just the sorts of properties which lie at the heart of the interpretive side of the interpretive/descriptive distinction.

The point of these distinctions for the task at hand will, I hope, become clear as we progress. With them now in hand, let me turn to the consideration of some issues about perception which will lead us back to the question of cognitive holism.

Information flow in perception

When talking about information-processing approaches to perception, it's common to separate two main camps: On the one hand, there are those who take perception to be substantially top-down. The central idea of this approach is that our general knowledge of the world (including even such things as which scientific theories we believe) permeates our perceptual processing, and thus affects how we perceive the world. Perceptual processing is taken to be simply a part of our general problem-solving activity, and there are essentially no limitations seen on what information might be brought to bear on the processing occurring between stimulus and percept. Let me call this kind of view perceptual holism, since on this view, anything can affect perception -- i.e., there are no principled constraints on what part of the information the organism represents is available to perceptual processes.

On the other hand, there are those who see very substantial constraints on the quantity and character of top-down information flow in perceptual processing, and who hold that the character of the stimulus more rigidly determines how we perceive things. Perception is seen as primarily bottom-up -- or better put, it is (in Zenon Pylyshyn's terms) cognitively impenetrable (Pylyshyn 1984, esp. pp. 130-45) or (in Fodor's) informationally encapsulated (Fodor 1983). Perceptual systems are taken to have only a kind of very restricted access to our background knowledge. (Just how restricted is a question which will come up later.) What I know about, say, politics, generally has no influence on my perceptual processes at all. They are driven (almost) exclusively by the character of the external stimuli along with the rules and constraints on interpreting stimuli which are internal to the perceptual system (e.g., see it as a rigid three-dimensional object if possible). These internal rules and the representations they manipulate may in fact be largely inaccessible from other points in the cognitive system. Viewing things in this way makes the mind (in Simon's terms) a nearly decomposable system: perceptual systems are mental subsystems for which intracomponent structure is more significant than intercomponent links.

The first, holistic view of perception has been fairly popular in the last 30 years or so, and is at the heart of the so-called "New Look" perceptual psychology typified by researchers such as Jerome Bruner. However, the second view, the modular or encapsulated view of perception, has come into its own somewhat in the last several years, particularly in the work of theorists such as David Marr and Shimon Ullman in vision and Merrill Garrett and Ken Forster in psycholinguistics. The present point in discussing these views will not be to try to show that either of these approaches must be right, given the current state of the evidence. Rather, what I'll do is point out how it could well turn out that perceptual processes are in an interesting sense both informationally encapsulated and holistically sensitive, and then make some use of that possibility. In order to do this, let me first note two sorts of phenomena which bear on these issues: first, the persistence of illusion, in order to show one example in which some kind of impenetrability of perceptual processing is undeniable; and second, some general top-down effects on perceptual recognition tasks which present at least a prima facie problem for an encapsulated account of perceptual processing.

The persistence of illusion: It's a banality that everyone is familiar with -- knowing that an illusion is illusory doesn't make it go away. Look at any of the standard visual illusions all you want; it will still look as though the thick lines in the Hering figure are curved (Fig. 1a), and as though the forked line in the Müller-Lyer is longer than the arrow (Fig. 1b). I have cognitive access to the information

- - - - - Figures 1a and 1b here - - - - -

that these are illusions, but my visual processor won't listen. This of course suggests that at least some kinds of information which are directly relevant to visual tasks are not accessible by my visual processes. The processes which mediate between proximal stimulus and conscious percept have access to a lot of information (e.g. retinal light gradients, rules for depth interpretation, etc.), but one bit of information I have that those processes don't is that the lines are really the same length.

Top-down effects in recognition: It's another banality that knowing what you're looking for makes it easier to find it. So, for example, knowing that a stimulus has a certain character allows us to perceive it that way when we might not otherwise have done so. This effect seems to surface both in speech and visual perception. In speech recognition, we might think of this as the "Mick Jagger" effect: Listen to Mick sing; you probably won't be able to tell whether he's saying "he prayed" or "depraved". But if you find out what the words are (e.g. by looking at a lyric sheet or by inferring from context), you then hear them differently -- you perceive them as the words they are, rather than as the slurred noises you heard earlier.

The situation is similar with vision. Most people who are simply shown Figure 2 won't see it as anything much. But tell them it's a

- - - - - Figure 2 here - - - - -

dalmatian sniffing the ground with its nose there, and they will usually see it that way. Not only will they say they see it that way, but they'll also then be able to give you new information about the picture that they would not have had without coming to see it as a dog -- e.g., information about which of the dog's paws is farthest forward. Knowing (by being told) what's there allows you to alter your perception of the picture; hence this background knowledge does affect perception.

The question to consider is whether this shows the modular view of perception to be false. In what follows, I'll say why the answer to this is `no', and then try to draw a slightly more general conclusion from the rationale given for that answer.

Perceptual priming -- global sensitivity and local processes

There's a fair bit known now about how priming -- the prior presentation of a related stimulus -- can affect different sorts of recognition tasks. For example, showing a subject the word 'nurse' immediately prior to the word 'doctor' will speed up the recognition of 'doctor' as a word. The prior presentation of the closely related item facilitates recognition in a way that 'nurse' doesn't facilitate the recognition of 'bread', although 'butter' does. (See, for example, Meyer and Schvaneveldt 1971.) In the modularist's way of talking, we might say that the perceptual module is activated by the first stimulus in a way that makes it more ready for the presentation of the related item.

It's important to note that priming effects are generally quite transitory. Association effects (e.g. between `doctor' and `nurse' in the above case) decay quite quickly. Furthermore, they exhibit substantial interference effects (e.g. priming for both red and green will slow reaction times for recognizing either, relative to priming for just one). Thus, the effects of priming are in a certain respect quite limited.

I'd like to use the idea of priming to suggest a way to view the top-down effects in perception like those we just saw in the cases of Mick's lyrics and the dog picture. The rough idea is just this: We recognize the dog after we're told it's there because we explicitly feed the visual system the image of a dog. We thereby prime it internally in much the way the external stimulus does in the standard external priming cases. This might be seen as roughly giving the visual system an explicit hypothesis to test. Similarly for the lexical case: recognition of the word is enhanced by internally priming the language perception system with an explicit candidate for the word being spoken or sung. We don't know what the exact mechanism underlying external priming is; but for present purposes, the details aren't critical. We know priming does work for stimuli which are externally presented. It's not too implausible to think responses to internally generated primes might work in the same way.

There are in fact some experimental results around which are suggestive of this. For example, Cooper and Shepard (1973) showed that reaction times for determining whether the second of two visual presentations of a symbol (such as a letter) is the same as or a mirror-image of the first are slowed by rotating the second presentation of the symbol into a non-upright position. But this slowing was shown to be eliminated by giving the subject independent information about the rotated orientation that the second object would be presented in. Providing an intervening interval in which to perform an internal rotation on the representation of the first stimulus was critical for achieving the facilitation of recognition.

So let's pretend for the time being that explicit internal priming of the perceptual system is the way in which recognition is facilitated when you look for something in particular in a degraded image, or when you listen for something in particular in degraded speech. I think it's possible, given what we know now, that this is in fact the right explanation for this effect. I don't know of any evidence which directly conflicts with it. So in order to make a conceptual point about encapsulation and holism, let me imagine it's true and see where this takes us. For present purposes, this needn't be exactly the way this actually works. What's critical in the current context is showing how certain aspects of holism and locality of process are compatible in a possibly real psychological mechanism.

Suppose then that something like this is more or less correct. Where then would this leave us on the question of perceptual holism? Even though the perceptual module receives some input from higher level systems, it still looks like a component in a system which counts as being nearly decomposable in Simon's sense. It is still (as Haugeland would say) an individual IBB with some small, constrained set of inputs. It's just that in addition to the proximal representation of stimuli, you can have one extra input -- the internal prime. All the perceptual system has access to is the representation of the proximal stimulus, the representation it's primed with, and its own internal rules and information. Note that no new kinds of processes are required by this -- the same processes at work in external priming can be assumed to apply. When you think of what's given to the perceptual module as, say, a shape description or a word to use as a possible hypothesis about the distal stimulus, the module still looks like a very substantially informationally encapsulated subsystem. It's still encapsulated from everything except its two inputs; all that directly affects the perceptual processes is the actual explicit representation that is used in priming the module on that occasion.

If this is right, then we have an example of a process which is local and encapsulated when considered from the point of view of which particular representations it has direct access to; but holistic in the epistemological sense. After all, which representations we in fact prime our visual systems with is a holistic and idiosyncratic matter. If I'm walking down a path late at night which I have good reason to believe is teeming with dangerous wildlife, I'm probably more likely to explicitly prime my visual system with images of potentially harmful shadow-lurkers.

One way to make it clearer how processes can be both local and sensitive to top-down information is through the idea of changes in representational format. For example, suppose there are distinct codes for (visual) imagistic or perceptual representations and for propositional representations. Then visual processes might be sensitive to top-down effects from propositional knowledge, but only to those bits of information which have been explicitly recoded or translated into the perceptual format. What information is actually translated this way on a given occasion can in principle depend on anything -- it's a paradigmatically holistic process, like fixation of belief. But the internal workings of the visual system itself are only sensitive to what's been explicitly translated on this occasion. Those processes are then functionally local, but nonetheless are sensitive to top-down information flow and (indirectly) the effects of epistemological holism.
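Here is a schematic sketch of the proposal (the structure and names are merely illustrative, not a worked-out model): which prime gets translated into the module's format may depend on anything the system believes, but the module itself sees only its two inputs and its own internal rules.

    # A schematic sketch of local processing with holistic, indirect
    # top-down influence. Deciding what to prime with is holistic;
    # the module's processing is local.

    def central_system(beliefs):
        # Holistic part: what gets recoded into the perceptual format
        # may depend on any belief whatsoever.
        if "dangerous wildlife nearby" in beliefs:
            return translate_to_perceptual_format("lurking animal shape")
        return None

    def translate_to_perceptual_format(description):
        # Stand-in for recoding propositional content into the
        # module's own representational format.
        return {"shape": description}

    def perceptual_module(proximal_stimulus, prime=None):
        # Local part: only the stimulus, the prime's shape
        # description, and internal rules figure in processing here.
        if prime and prime["shape"] in proximal_stimulus:
            return "recognized: " + prime["shape"]
        return "unrecognized"

    beliefs = {"dangerous wildlife nearby", "the Celtics will cover the spread"}
    prime = central_system(beliefs)
    print(perceptual_module("degraded image containing a lurking animal shape", prime))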

Another way to look at this is through the interpretive/descriptive distinction made earlier in kinds of functional analyses. Recall that an interpretive analysis of some system will characterize its sub-functions in terms of the overall role they play in the system (e.g., getting the next significant digit); whereas in a descriptive analysis, sub-functions would be characterized by their local or intrinsic properties (e.g., loading register A with the contents of the memory location register X is pointing to). On the current way of viewing things, the module is itself directly sensitive only to the descriptive properties (the shapes or forms) of the representations that it receives as inputs. It is only indirectly sensitive to the content, global properties, and interpretive role those representations have relative to the rest of the cognitive system; sensitive only in that those properties are important in determining whether the module gets that representation as an input in the first place. The only aspect of the internal prime which figures in the internal information-processing analysis of the module is the shape of the syntactic object which it gets as input. The interpretive role of the incoming representation with respect to the rest of the cognitive system is irrelevant to that analysis.

When we consider the workings of the module, the representation that it is primed with is (to use Haugeland's term) de-interpreted; it's given a new interpretive analysis on the basis of its role in the functioning of that subsystem only. Intuitively speaking, the priming representation may mean "my dog Fido" for the central system, but it only means "looks like ...." for the module. The visual system only has access to the shape specified for it by the representation it's given, even though the central systems have access to the higher-level content of that representation -- e.g. that it means "Fido". Thus, what's semantically holistic (i.e. potentially sensitive to the content of any belief) may of course still be syntactically local (that is, have access only to the shape or form of some particular small class of representations).

A little generalization

In the last section, I distinguished between two kinds of information carried in an input to a relatively encapsulated subsystem: First, there is the information to which the subsystem is directly sensitive, which is relevant to the consideration of the internal working of the subsystem. Second, there is the information to which that subsystem is only indirectly sensitive -- representational properties of the input which play no role in the internal processing of the subsystem, but which are relevant as an external determinant of what inputs the subsystem actually gets. The point of the distinction was to point out that something can be a local, substantially encapsulated mechanism (and thus have access only to some constrained class of inputs and their shapes) while nonetheless being indirectly sensitive to essentially any information in the system in which the local mechanism is embedded. Even if the sensitivity of the subsystem is only to local syntactic properties of its own constrained set of inputs, it can be indirectly sensitive to the content or interpretive role of the representations that it sees. The content of the representation may be in part responsible for its being given to that mechanism at that time; but once it is so given, its local shape is all that matters. Locality of mechanism, near decomposability, and all those desiderata are compatible with sensitivity to the effects of epistemological holism.

Let's suppose now that Quine, Fodor, and Dreyfus are all right, and that some kind of epistemological holism is true in the conceptual realm. Where should that now leave us on the question of cognitive holism? If we're to be worried about the conceptual realm turning out to be non-understandable because it doesn't come in isolable parts, then the worry should be primarily one about whether processing is localized. After all, if processing is localized, then the system will be nearly decomposable along the lines of processing realms. I suggest that even though there can be epistemic effects from one domain to another, the kinds of representational bottlenecks in information flow that we imagined in the case of perception might still serve to isolate the subsystems into which we might functionally decompose the conceptual realm as well.

The point is that there may be locally mappable structures of inference and association at the conceptual level which have some internal degree of complexity, but that the outside conceptual connections which a representation in that structure has are limited by the descriptive properties of the representation in roughly the way that a representation's effect on a perceptual module was limited by its shape. Although I suggested in the case of the encapsulation of perception that one way to view the informational bottleneck was via an explicit change in format (e.g. from a propositional to an imagistic code), it's important to note here that such a shift isn't essential to the idea. The change in format considered earlier was simply one way to pick out the point of interface between isolated processing domains. The isolation occurs not because of a difference in format, but because of the limited information flow at that point -- which in the perceptual case may happen to be a point of format change. What's essential is that the properties of a passed representation which are relevant for analyzing its role in each processing subsystem are independent of its interpretive role in other subsystems.

The internal structure of various conceptual regions or domains might then be thought of as isolated virtual machines, each with some constrained set of possible inputs from outside. Each domain cares about the representations it deals with only under the interpretive analysis determined by its own internal structure. The interplay between these domains is then such that the interpretive properties of representations in one domain can have an indirect effect on the behavior of some other domain. But the story about a representation's internal role does not appeal at all to its interpretive properties in the old domain. The new domain is only sensitive to the new interpretive analysis assigned to that representation from within the new domain. Data (viewed under a descriptive analysis) is passed; but interpretive roles are not.

That's the central idea of processing encapsulation between conceptual domains: Don't try to require that the behavior of one domain be totally independent of the content of the representations of other domains; that's more than is required in the perceptual case. In the perceptual case, modularity requires that the module be sensitive directly only to the local, syntactic, descriptive properties of the representation that it's been primed with. And so similarly in the conceptual case: If you can isolate domains so that what you have is localized data passing, the overall system can still be seen as functionally decomposable.

We can see now how it's perfectly possible to have both (in principle) epistemic holism and functional decomposition (along the lines of local processes) in the conceptual realm. That was the philosophical goal of this paper: showing that the immediate inference from epistemological holism to the denial of functional decomposability of conceptual processes doesn't go through. If you believe what's been said so far, then that's been done. Before concluding, though, let me briefly raise three sorts of further considerations relevant to the overall issue of cognitive holism: first, how a couple of facts about how we really think about things might bear on the issue; second, the nature of another kind of holistic worry which has not been addressed in this paper; and finally, what I take to be the right way to view the general kind of mistake the holist has made.

The following moral seems to run through a good bit of the psychological literature on problem-solving: We often use fairly gross local heuristics in figuring things out, and totally ignore significant and relevant background information. Here are a couple of examples: In one well-known study (see Kahneman and Tversky 1982 for a discussion of this and related examples), subjects were presented with the following problem: Jones lives in a city where 85% of the cabs are green and 15% are blue. He witnesses a hit-and-run accident one night, and says the cab involved looked like a blue one. We test him under similar perceptual conditions, and find that he correctly identifies blue cabs 80% of the time (taking them to be green the other 20% of the time), and similarly for green cabs. Should we conclude that Jones is probably right, and that it was likely a blue cab he saw?

Most subjects say "yes"; but the answer is "no". If we assume that the "a priori" probability that the cab was green is fixed by the base rate of 85%, and that cabs of either color would be misidentified with respect to their color 20% of the time, then the probability is .85 x .2 = .17 that he saw a green cab but took it to be blue, while the probability that he saw a blue cab and took it to be blue is .15 x .8 = .12. So it's more likely he saw a green cab and misidentified it than that he saw a blue cab and correctly identified it. Of course what the subjects seem to do here is simply ignore the relevant information about the base rates, and reason roughly as follows: "He's usually right about the colors of cabs, so he's probably right this time."
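Spelled out as a calculation (a minimal sketch; the figures are the ones given in the example above):

    # The taxicab arithmetic from the example, spelled out.

    p_green, p_blue = 0.85, 0.15   # base rates of cab colors
    p_correct = 0.80               # the witness identifies colors correctly 80% of the time

    p_green_and_said_blue = p_green * (1 - p_correct)  # 0.17
    p_blue_and_said_blue = p_blue * p_correct          # 0.12

    # Probability that the cab really was blue, given the report:
    p_blue_given_said_blue = p_blue_and_said_blue / (p_blue_and_said_blue + p_green_and_said_blue)
    print(round(p_blue_given_said_blue, 2))  # about 0.41 -- more likely green than blue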

A second consideration from reality: Substantial isolation of information across domains seems to hold even within science, Fodor's favorite example of a realm permeated by holistic belief fixation. As Clark Glymour puts it: "In arguing for a theory or hypothesis, or in getting evidence pertinent to it, scientists appeal to a very restricted range of facts. Astrophysicists never cite botanical facts, except out of whimsy, and botanists never cite facts that are extragalactic." (Glymour 1985) It's similar, of course, for the more mundane realms. I'm hard pressed to see how any of my views about, say, politics, have ever affected my judgements about whether the Celtics are going to cover the spread tonight. It's not that they couldn't do so, or that they shouldn't; it's just that in practice, my judgements about basketball games are almost entirely immune from infection by beliefs about politics, physics, split-brain research, or just about anything else.

For present purposes, the point to take from these sorts of cases is that belief and judgement fixation may be ideally holistic, but in practice is grossly local. In real belief fixation, we not only typically fail to bring to bear evidence from distant domains which may be in principle relevant (as in the latter kinds of cases), we also often fail to bring to bear evidence which is quite closely related to the problem, and which we have had explicitly pointed out to us (as in the former sorts of cases).

Given that there is in the real case some substantial degree of isolation of reasoning, it looks like an open question whether the interactions that do occur between domains might be viewed as the top-down effects in perception were. That is, we may be able to see them as data flow across processing bottlenecks which separate the functionally encapsulated components (or frames, or domains) which comprise the overall complex conceptual architecture. The idea that there are "domains" in which there is substantial interconnection but which interconnect with other domains in very constrained ways is central to much of the current "conceptual modeling" work in AI -- and particularly in the now ubiquitous "expert systems". The class of data that an expert system deals with is typically constrained in a roughly a priori way within a single domain. The recent successes in this area have not been insignificant. Full steam ahead, and damn the philosophers.

Concluding remarks

Let me conclude with two more general points: First, a comment about a problem which hasn't been avoided; and second, a suggestion about the basic nature of what I see as the holists' mistake.

There is a somewhat independent worry about a different sort of holism which isn't answered by any of this. Regardless of whether the internal structure of the cognitive machinery is to some very large extent modular, the fact remains that the system comes as a package, and we can't easily poke in between whatever parts it might have. If there are some routes for information flow between modules (e.g. some way for background beliefs to influence perception), theoretical isolation of components becomes very difficult. Of course, that's a large part of why psychology is hard: you have to be very clever with chronometric studies (and studies of common mistakes, and lesion studies, and so on) in trying to tease apart distinct internal components and processes. None of what I've said should be taken as suggesting anything contrary to this. The argument here isn't for the claim that decomposition of the cogitator isn't hard; it's just that it isn't impossible, given what we know so far. (This practical problem of decomposition is what Fodor has said (in conversation) he's really worried about, and clearly what worries Forster (in Forster 1985). The way of viewing things presented here may give them little solace; but will perhaps provide more for those in AI who want to be able to build the mind one piece at a time.)

After all this, how should we view the mistake of sliding from epistemological holism to processing holism? Let me make a suggestion. David Marr, in his book Vision (Marr 1982), distinguishes for us two "levels" of theoretical characterization of information-processing systems above that of physical implementation. The first, which he (perhaps somewhat misleadingly) calls the computational level, is "the level of what the device does and why". The second is that of representation and algorithm, and "this second level specifies how.... [It tells us] what is the algorithm for the transformation". (pp. 23-5) This distinction is, Marr claims, "roughly [Chomsky's] distinction between competence and performance." (p. 28) So, for example, take a cash register. As for the computational level, Marr tells us that "what it does is arithmetic, so our first task is to master the theory of addition." (p. 22) Whereas at the algorithmic level, "we might choose Arabic numerals for the representations, and for the algorithm we could follow the usual rules about adding the least significant digits first and 'carrying' if the sum exceeds 9." (p. 23)
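As a rough illustration of the algorithmic level (a sketch; representing numbers as lists of digits is just one way of filling in Marr's description), here is the usual carrying procedure made explicit:

    # A sketch of the algorithmic-level story for the cash register:
    # Arabic-numeral (decimal digit) representations, adding the least
    # significant digits first and carrying when the sum exceeds 9.

    def add_decimal(a_digits, b_digits):
        """Add two numbers given as lists of digits, least significant first."""
        result, carry = [], 0
        for i in range(max(len(a_digits), len(b_digits))):
            a = a_digits[i] if i < len(a_digits) else 0
            b = b_digits[i] if i < len(b_digits) else 0
            total = a + b + carry
            result.append(total % 10)   # keep the ones digit
            carry = total // 10         # carry the rest
        if carry:
            result.append(carry)
        return result

    # 47 + 85 = 132, with digits stored least significant first
    print(add_decimal([7, 4], [5, 8]))  # [2, 3, 1]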

With this in hand, here is a general way to characterize the weakness of the holists' worry: At the computational or competence level of theorizing about cognition, things may in fact be inescapably holistic; but at the algorithmic or performance level, things might nonetheless turn out to be localized and modular. If this is so, cognitive theory might be a case where the computational or competence theory is idealized to the point where we never really get very close to it in practice; that is, a case where the theory of competence and the theory of performance just don't mesh all that well.

It would then be an example of a kind of overidealization in the competence theory. This isn't a surprising thing to have happen in a science. We were lucky with ideal gases, in that real gases sufficiently approximate point masses for the laws to be useful; but we've been pretty unlucky with the idealizations of, say, economics, where our real "optimizing market agents" don't even read the labels. Given the degree to which real thinkers are misdescribed by a "computational" theory of central processes which works by the standards of perfectly rational nondemonstrative fixation of belief, puzzles about the possibility of engineering (reverse or otherwise) a system which would perfectly satisfy such a theory may not be so bothersome after all. A holistic computational theory of belief fixation may give you roughly the results you'd get given, say, unlimited working space, lots of time to propagate information through the network, and so on. But you and me -- we're just not that smart.


References