September, 1998

Ecological Validity: Then and Now

Kenneth R. Hammond
Professor Emeritus, Department of Psychology
University of Colorado


Note to the reader: this Web-Essay is in two parts. The first Part reproduces the first 13 pages of a monograph that I wrote in 1978 titled (grandiosely) "Psychology's scientific revolution: Is it in danger?" The second Part ("Now") brings the argument up to date by examining the treatment of "ecological validity" in a 1996 Behavioral and Brain Sciences article.



Contents

Then (1978)

Now (1998)


This monograph was rejected by the American Psychologist, not for any technical or scientific reason, but because the editor thought it too bellicose, or words to that effect, and suggested that I try to make my point in some less aggressive fashion. In short (although there were no ad hominem remarks): tone it down! Naturally I was angered, and considered this response just one more expression of the academic establishment's intolerance of criticism of psychology -- the same intolerance that had hampered Brunswik's efforts to loosen the methodological restrictions of his day. Nevertheless, I was discouraged and made no further attempts to publish the monograph.

My reason for presenting it here is that I believe the situation has developed much as I feared: an important Brunswikian concept -- invented by Brunswik because it had to be invented -- has been thoroughly corrupted by those ignorant of its origins, and indeed, those origins may be lost. My evidence for this can be found in the second ("Now") Part of this Web-Essay, which refers to a 1996 article published in Behavioral and Brain Sciences titled "The base rate fallacy reconsidered: Descriptive, normative and methodological challenges," written by Jonathan J. Koehler (1996), that includes commentaries from 48 contributors. The term "ecological validity" was used on 46 occasions (!) by the author and 16 of the 48 commentators. The flagrant misuse of this Brunswikian term in this article demonstrates clearly that its misuse is persistent and growing to the point at which it is now becoming casual. For example, Wright (1996) misleads his readers by equating the term "real world" with "ecological validity" and, inadvertently, illustrates the utter confusion into which this central issue has descended, thus: "The thrust of [Banaji and Crowder (1989)]'s argument was that researchers were becoming more concerned about how well their study mimicked the real world (i.e., ecological validity) than whether their results were likely to be generalizable" (italics in original). A further example of the casual corruption of ecological validity can be found in Cosmides and Tooby (1996) who, in a sharp denial of the conclusions drawn by Tversky and Kahneman regarding the general incompetence of "people's" judgments under uncertainty, state "...we show that correct Bayesian reasoning can be elicited in 76% of subjects -- indeed 92% in the most ecologically valid condition -- simply by expressing the problem in frequentist terms" (p. 1).


Why should I demand that the original meaning of these terms not be destroyed? Because scientists should not take a scientific term that has had an established definition, meaning, and use for over half a century and employ it in an arbitrary fashion that robs it of any meaning whatever.

The misuse of this term is doubly regrettable because its correct use, and its relation to a second Brunswikian concept -- representative design -- are exactly what the authors need in their effort to make their case about the false generalization of results. Therefore the purpose of the comments that follow is to alert Brunswikians and others to this regrettable turn of events and to enlist their aid in preventing a further debasement of the valuable concept of "ecological validity" and its confusion with representative design.

Why should I demand that the original meaning of these terms not be destroyed? Because scientists should not take a scientific term that has had an established definition, meaning, and use for over half a century and employ it in an arbitrary fashion that robs it of any meaning whatever. No one has a right to do that. It is bad science, bad scholarship, and bad manners. And that is what the psychologists who misuse it are guilty of, and will continue to be guilty of until something stops them. No one should think that this is merely an attempt to resurrect the purity of a term used by an author 50 years ago; it is a term currently in frequent (proper) use by many psychologists who are conducting research and developing theory. Therefore its arbitrary change of meaning is a barrier and an affront to those attempting to develop a cumulative science.

Then (1978)

Psychology's scientific revolution began with Brunswik's argument for a reversal in psychology's approach to its subject matter. He (a) urged the abandonment of the search for nomothetic, deterministic laws of behavior in favor of idiographic-statistical descriptions (now "models") of the behavior of individuals, (b) advocated the replacement of the systematic design of experiments in favor of the design of experiments that are representative of the organism's ecology, or habitat, and (c) suggested that the geographer's descriptive efforts rather than the physicist's law-seeking endeavor should provide the proper model for psychologists (1943, 1952, 1956). These points were buttressed by a detailed methodological argument, empirical studies and a sophisticated analysis of trends in the history of psychology which pointed to the eventual acceptance of these revolutionary ideas. (See Hammond, 1966 for an overview.)

The reversal in approach urged by Brunswik is now being advocated by an increasing number of psychologists. From the time of Koch's (1959) review of psychology in which he asserted that there is a "stubborn refusal of psychological findings to yield to empirical generalization," continuing through the present to Cronbach's gloomy pronouncement that "generalizations decay" (1975, p. 122), there has been a growing recognition that the present paradigm is failing; that change is needed. Leaders in the fields of experimental psychology (e.g., Jenkins, 1974), developmental psychology (e.g., Bronfenbrenner, 1977), and educational psychology (e.g., Cronbach, 1975), and social psychology in particular (e.g., Gergen, 1973, 1976; McGuire, 1973; Smith, 1976; see also Elms, 1975) have noted that the traditional approach leaves us with results that are restricted to the laboratory, and therefore of dubious value. Increasingly, the general demand is for some methodological change (often no more than brave, if vague, calls for more "field research") that will permit the achievement of results that will carry stable meaning for behavior outside the laboratory.

As the pace of change increases, however, the principal concepts of psychology's scientific revolution are becoming confused, emptied of meaning, reinvented, and their origins lost. In the hope of preventing further negative developments of this kind, I shall (a) indicate briefly the nature of certain key Brunswikian concepts, (b) show that they are in danger of losing their meaning as their use increases, (c) call attention to the original meaning of these concepts in an effort to restore their theoretical coherence, and (d) demonstrate how false, yet widely accepted, conclusions have been drawn from research that fails to maintain the integrity of these concepts. It must be emphasized that it is not merely the priority of usage of terms that is at stake; rather, it is my purpose to serve the development of psychology as a cumulative science, and to spare its students from having to cope with the idiosyncratic proliferation of the meanings of concepts taken from what was originally a theoretically coherent context. Since it is nearly twenty-five years since Brunswik's death, however, his academic background is briefly described first.

Egon Brunswik

Brunswik (1903-1955) began his work during the 1930s in Austria and continued it at the University of California at Berkeley from 1937 to 1955. Although Brunswik earned high esteem as a scholar for his profound analyses of the history, method and theory of psychology, as well as his research, his arguments for change were rejected during his lifetime (see Hammond, 1968; Tolman, 1956, for biographical information; for contrasting views with Hull and Lewin, see Brunswik, 1943; Hull, 1943; Lewin, 1943).


"He [Brunswik] asks us . . . to revamp our fundamental thinking . . . . It is an onerous demand . . . . His work is an object lesson in theoretical integrity"

To what extent this rejection accounted for his suicide in 1955 is uncertain; what is certain is that the acceptance of his ideas has increased steadily since his death; there have been over 175 references to his work in the last five years. What is equally certain is that he called for a reversal of fundamental practices. As noted by Gibson (1957) in his review of Brunswik's book (1956): "He [Brunswik] asks us . . . to revamp our fundamental thinking . . . . It is an onerous demand . . . . His work is an object lesson in theoretical integrity" (p. 35). Only by maintaining that "theoretical integrity" will psychology be able to maintain its momentum toward the attainment of laboratory results that carry meaning for situations outside the laboratory.

Three Revolutionary Concepts in Danger

The current treatment of three concepts introduced by Brunswik (representative design, ecological validity, and intra-ecological correlation) is discussed below. In what follows I show how the current treatment of these concepts produces the confusion, loss of origin and meaning, and reinvention mentioned above.

Representative Design

The concept of representativeness is fundamental to generalization. Just as the subjects in an experiment must represent those not included in the experiment if generalization over subjects is to be achieved, so also must the conditions of an experiment represent those conditions outside the laboratory over which generalization is to be achieved.


Systematic arrangement of conditions in the experiment that do not represent the nonsystematic arrangement of conditions outside of it prevents both logical and empirical generalization of results.

Systematic arrangement of conditions in the experiment that do not represent the nonsystematic arrangement of conditions outside of it prevents both logical and empirical generalization of results. Moreover, if experiments are to produce results that will generalize to circumstances outside the laboratory, they must not merely include substantive material that is representative of the outside situation, but the formal, that is, structural, aspects of the situation outside the laboratory as well. As Brunswik put it: "Generalizability of results concerning. . . the variables involved [in the experiment] must remain limited unless the range, but better also the distribution. . . of each variable, has been made representative of a carefully defined set of conditions" (1956, p. 53). Brunswik's admonition regarding the representativeness of the formal aspects of the conditions of experiments also includes the (ecological) intercorrelation among the independent variables in the experiment, thus challenging the typical factorial design in which variables are set in orthogonal relation to one another.

One way to achieve representativeness of task, or environmental, conditions is to sample conditions, just as one achieves representativeness of the subject population by sampling subjects. In this way the range, distribution, and intercorrelation among environmental variables will appear in the laboratory sample, and, therefore, the laboratory conditions will be representative of the conditions toward which generalization is intended--within the limits of sampling error. The size of such sampling error can be estimated and controlled by the size of the sample, precisely as in subject sampling.
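To make this sampling logic concrete, here is a minimal sketch in Python (the ecology is simulated and every name in it is invented for illustration; nothing here comes from Brunswik's own studies). It draws a random sample of situations from a recorded ecology and checks that the sample reproduces the ecology's cue intercorrelation within sampling error:

    import numpy as np

    rng = np.random.default_rng(0)

    # A recorded "ecology": each row is one naturally occurring situation,
    # each column one environmental variable. Here it is simulated; in
    # practice the rows would come from observation of the habitat toward
    # which generalization is intended.
    ecology = rng.multivariate_normal(
        mean=[0.0, 0.0],
        cov=[[1.0, 0.6],    # the two task variables are correlated (r = .6)
             [0.6, 1.0]],   # in the ecology, not orthogonal
        size=5000,
    )

    # Representative design: draw situations at random, rather than
    # crossing factor levels orthogonally.
    n = 200
    sample = ecology[rng.choice(len(ecology), size=n, replace=False)]

    # Within sampling error (controllable by n), the sample preserves the
    # ecology's ranges, distributions, and intercorrelations.
    print("ecology intercorrelation:", round(np.corrcoef(ecology.T)[0, 1], 2))
    print("sample intercorrelation: ", round(np.corrcoef(sample.T)[0, 1], 2))

The point of the final check is Brunswik's: under situation sampling it is the ecology, not the experimenter, that fixes the ranges, distributions, and intercorrelations of the task variables.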

This argument for the need for representativeness of task conditions has slowly but surely gained acknowledgement since Brunswik introduced it in the 1940s (see, e.g., Brunswik, 1943), particularly in the area of person-perception. For when task conditions include persons, the argument is straightforward and its implementation is not difficult. It is easy to see that if one intends to generalize over persons, whether they are subject-persons or object-persons, sampling is required. If one wants to ascertain whether short people are perceived to be less (or more) aggressive than tall people, one must expose to the subject sample an adequate sample of short and tall person-objects, since short and tall persons will vary in many other dimensions (e.g., weight, sex). And, indeed, object-person sampling has increased substantially in research on social perception and social judgment in the last two decades. From a time during the 50's when virtually all studies in social perception and social judgment ignored object sampling (see Crow, 1957; Hammond, 1948, 1954 for examples in this period), and thus made their results useless, research in social judgment and social perception has progressed to the point where nearly all researchers now employ reasonably sized numbers, if not true samples, of person-objects, or target persons. This aspect of the revolution is almost secure.

The use of systematic design has not altogether been given up, however. Studies which employ intensive subject-person sampling but no object-person sampling are still published in the Journal of Personality and Social Psychology as well as other prominent journals. In 1977, for example, Nisbett and Wilson (1977) had 118 subjects rate a single object-person, and Selby, Calhoun, and Brock (1977) had 47 subjects rate one object-person. The most recent issue to arrive (April, 1978) contains a study of lying that includes a single "liar" (Kraut, 1978). A sobering fact is that when Thorndike (1920) discovered the "halo effect" over a half-century ago, he got it right (there were 8 subjects and 137 object-persons in the study he described), as did the agronomist, Henry Wallace, when he studied the corn judge's ratings of ears of corn in 1923. The price psychology pays for clinging to the conventional systematic research paradigm is evidenced in a review of research on the effect of the sex of the experimenter in psychological research (Rumenik, Capasso, and Hendrick, 1977). They found that only 39 of 63 studies used as many as two members of each sex as experimenters (p. 874) and few used more than that. (Only 8 of the 63 studies used as many as 10 in each sex group!) Many of the studies that failed to employ object (experimenter) sampling were conducted in the 1970s. Such wasted effort should not be surprising, however; students are instructed in a fashion that perpetuates these errors (see, for example, Aronson and Carlsmith, 1968) by researchers whose own work provides examples of the same mistakes (e.g., Aronson, Willerman, and Floyd, 1966).

The choice between representative and systematic design of experiments has surfaced episodically since the beginning of scientific research in psychology. As Gillis and Schneider (1966) point out, Wundt recognized it in 1896 and chose systematic design; McDougall recognized it in 1922 and chose representative design; Wundt was followed and McDougall was not. When Brunswik recognized the necessity for representative design, he laid out in great detail the profound implications of that choice for theory and method, and for the nature of psychology as a scientific discipline (1952, 1956). He went so far as to suggest that choosing representativeness in experimental design would mean giving up the law-seeking physicist as the role model for psychologists and choosing the description-seeking geographer instead.


The revolutionary implications of this idea are still imperfectly understood because they have not yet been fully explored.

The revolutionary implications of this idea are still imperfectly understood because they have not yet been fully explored. But it is a revolutionary idea that is gaining acceptance (Cronbach, 1975). Its acceptance will be unduly slowed, however, if the specific meanings of revolutionary concepts are eroded, for psychologists and their students will then be required to relive their history in a morass of confused and indistinct concepts and amidst studies that produce results of doubtful utility--a situation not far from that which exists today. The erosion of meaning from the concept of ecological validity and its confusion with the concept of representative design (and its aim of generalization) provide clear examples of these dangers.

Ecological Validity

Brunswik introduced the term ecological validity to indicate the degree of correlation between a proximal (e.g., retinal) cue and the distal (e.g., object) variable to which it is related (see Brunswik, 1956, pp. 48-52, on the "Ecological Validity of Potential Cues and Their Utilization in Perception"). Thus, in a perceptual task, ecological validity refers to the objectively measured correlation between, say, vertical position and size of an object (larger objects tend to be higher up in the visual field) over a series of situations. Or, more broadly, one may compare the ecological validity of the cue "height of forehead" with the cue "vocabulary level" as indicators of a person-object's intelligence. (See, for example, the section entitled "Classification of cues in terms of ecological validity, representative design," Brunswik, 1957, or "Ecological validity of potential cues and their utilization in perception," pp. 48-50, 1956.) In short, ecological validity refers to the potential utility of various cues for organisms in their ecology (or natural habitat). Of course, the difference between the ecological validity of a cue and its actual use by an organism provides important information about the effective use of information by that organism (see Fig. 1).

[Figure 1. The lens model: on the environmental side, the inferred (distal) state, the objective values of cues, and the objective inter-correlations among cues (ecological validity); on the organismic side, the judgment, the subjective values of cues, and the subjective inter-correlations among cues (cue utilization).]
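In computational terms, the two sides of Figure 1 amount to two correlations computed over the same set of situations. Here is a minimal sketch in Python with simulated data (all coefficients are arbitrary and purely illustrative; nothing here comes from Brunswik's studies):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Environmental side of Figure 1: a distal variable and a proximal cue
    # that is a probabilistic, not perfect, indicator of it.
    distal = rng.normal(size=n)
    cue = 0.7 * distal + rng.normal(scale=0.7, size=n)

    # Organismic side: a judge who relies on the cue, imperfectly.
    judgment = 0.9 * cue + rng.normal(scale=0.5, size=n)

    ecological_validity = np.corrcoef(cue, distal)[0, 1]  # cue <-> distal state
    cue_utilization = np.corrcoef(cue, judgment)[0, 1]    # cue <-> judgment

    print(f"ecological validity: {ecological_validity:.2f}")
    print(f"cue utilization:     {cue_utilization:.2f}")

The difference between the two printed values is exactly the comparison described above: the cue's potential utility in the ecology versus its actual use by the organism.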
Erosion of Meaning

Prior to 1974, the concepts of representative design and ecological validity were kept distinct, and psychologists used them as they were intended to be used by their author. Among those who employed these terms as Brunswik defined them are Bruner, Goodnow, and Austin, 1956; Dudycha and Naylor, 1966; Einhorn, 1972; J. Gibson, 1957; Gillis, 1975; Goldberg, 1970; Hammond, 1955; Hammond, Rohrbaugh, Mumpower and Adelman, 1977; Hammond, Stewart, Brehmer, and Steinmann, 1975; Heider, 1958; Hochberg, 1966; Jarvik, 1966; Keeley and Doherty, 1972; Leeper, 1966; Lewin, 1943; Lindell, 1976; Loevinger, 1966; Murrell, 1977; Osgood, 1957; Postman and Tolman, 1959; Rappoport and Summers, 1973; Slovic and Lichtenstein, 1971; Smedslund, 1955; Steinmann and Doherty, 1972; Stewart, 1976; and several authors of chapters in the Handbook of Social Psychology (1968; see, e.g., Tajfel, pp. 315-394), among others. In short, these terms have had established meaning for over three decades (1947-1977). Their meanings are no longer unique to Brunswik, and for the many research workers who have used these terms in a precise way their meanings are not arbitrary. To assign new meanings without reference to previous use not only introduces confusion, it is bad science. Once a term loses its meaning, it is virtually impossible to recover it (cf. Hochberg, 1956).

Unfortunately, however, we find that since 1974 the term ecological validity has been used in a number of very different ways by different authors who do not recognize either Brunswik's use of the term or anyone else's use of it. Thus, Jenkins (1974) talks about the "ecologically valid problems of everyday life." Bronfenbrenner (1977) refers to the ecological validity of experiments, as do Berman and Kenny (1976), as well as Graham (1977), who, after mistakenly attributing the concept to Orne (1970) (who mistakenly refers to "Egon Brunswik's concept of the ecological validity of research" [p. 259]), defines this term as "the extent to which the setting in which research takes place is capable of producing results that are valid." Neisser (1976), on the other hand, refers to the ecological validity of theories. And a number of authors (Christensen, 1977; Eaton and Clore, 1975; Frodi, 1974; Greenwald, 1976; Orne, 1970; Parke, 1976; Silverstein and Strang, 1976) refer to the ecological validity of results.

Erosion of meaning from the concept of ecological validity is perhaps best illustrated by Neisser's use of it (1976, p. 48). Neisser acknowledges that the term "ecological validity was coined by Brunswik," but adds that Brunswik's use of the term "was slightly different from the one that is popular today." Unfortunately, Neisser offers neither Brunswik's definition of the term, the one that is "popular today," nor his own, and thus empties the concept of meaning.

Since 1974, the term ecological validity has come to be used by some authors to refer to the degree to which results obtained in the psychological laboratory "generalize" to circumstances outside the laboratory. Jenkins, for example, in his 1974 presidential address to members of the Division of Experimental Psychology admonishes them that "it is true. . . that a whole theory of an experiment can be elaborated without contributing in an important way to the science because the situation is artificial and non-representative. . . . In short, contextualism stresses relating one's laboratory problems to the ecologically valid problems of everyday life" (p. 794). Readers should not let the puzzling phrase "ecologically valid problems of everyday life" distract them from the main point: Jenkins confuses the concept of ecological validity with generalization and the representative design of experiments.

Bronfenbrenner (1977), in his presidential address to the members of the Division of Personality and Social Psychology in 1974, also confuses the concept of ecological validity with generalization and representative design. He begins his critique of current and past psychological research by observing that "the emphasis on rigor has led to experiments that are elegantly designed but often limited in scope. . . . Many of these experiments involve situations that are unfamiliar, artificial, and short-lived and that call for unusual behaviors that are difficult to generalize (italics mine) to other settings" (p. 513). Having expressed his dissatisfaction with the lack of representativeness of past and current research design, which makes it difficult to generalize laboratory findings to non-laboratory situations, Bronfenbrenner turns to the concept of ecological validity (p. 515); after stating that "although this term has, as yet, no accepted definition" (thus joining Neisser and Jenkins in ignoring three decades of empirical research and a substantial body of psychological theory), he proceeds to change the established definition of ecological validity by saying that "one can infer from discussions of the topic a common underlying conception: An investigation is ecologically valid if it is carried out in a naturalistic setting and involves objects and activities from everyday life." Finding his own new definition not only "too simplistic" and "scientifically unsound" "as it is currently used" (no reference), he also finds it to have "no logical relation to the classical definition of validity--namely, the extent to which a research procedure measures what it is supposed to measure." This statement, even in its idiosyncratic form, is simply false. In the articles written prior to 1974, the concept of ecological validity was consistently used within the classical definition of validity, as it should be. Indeed, in his Table 2, on p. 30, Brunswik (1956) not only defines ecological validity in test measurement terms, but defines the ecological reliability of cues in test measurement terms also, thus preserving the kind of theoretical coherence any science requires if it is to be cumulative.

Ignoring all past work on the representative design of experiments and the ecological validity of cues, and finding his own redefinition wanting, Bronfenbrenner then offers a second new definition of ecological validity, namely: "Ecological validity refers to the extent to which the environment experienced by the subjects in a scientific investigation has the properties it is supposed or assumed to have by the investigator" (p. 516). Needless to say, all connection with previous usages is now hopelessly lost.

The consequences of steady erosion of meaning from this concept are now beginning to appear; equally prestigious psychologists are now protesting against "indiscriminate use" of the term and implying that its "indiscriminate use" has already destroyed its value. Bandura (1978), for example, has protested against "the slighting of experimentation by recourse to the. . . ready invocation of ecological validity. This notion," he correctly observes, "has lost much of its identity from its earlier parentage, (and) is in danger of being transformed into a cliche through indiscriminate use."


In short, despite three decades of consistent and growing use, the established meanings of these valuable concepts of representative design and ecological validity are being eroded, confused, changed arbitrarily.

In short, despite three decades of consistent and growing use, the established meanings of these valuable concepts of representative design and ecological validity are being eroded, confused, changed arbitrarily. Indeed, these concepts may have already been corrupted beyond retrieval, perhaps to the point where their abandonment could become necessary because they will have become reduced to cliches, and their scientific value thereby lost, circumstances Bandura sees as imminent. If that should happen, psychologists will be deprived of concepts already proven to be so useful that they will have to be reinvented.

From a dictionary or common language point of view, Neisser, Jenkins and Bronfenbrenner, or anyone else, is free to use the terms ecological validity and representative design in whatever fashion ordinary usage will permit. But scientists do not use the common language in the common way; they assign special definitions to specific terms. And when that occurs, and when those special definitions acquire stable and significant meanings for many workers in the discipline over a period of time, then indifference to established usage together with arbitrary redefinition become obstacles to progress. The discipline that permits such obstacles cannot be a cumulative science--a matter about which psychology has reason to be tender.

Now (1998)

Koehler's (1996) article provides a very comprehensive and highly informative analysis of the many studies of the "base rate fallacy" and certainly deserves our attention. I will not try to consider the "descriptive and normative challenges" put forward so effectively by Koehler, but will restrict my comments to the methodological matters that are related to the misuse of the term "ecological validity." Here is Koehler's abstract.

We have been oversold on the base rate fallacy in probabilistic judgment from an empirical, normative and methodological standpoint. At the empirical level, a thorough examination of the base rate literature (including the famous lawyer-engineer problem) does not support the conventional wisdom that people routinely ignore base rates. Quite the contrary, the literature shows that base rates are almost always used and that their degree of use depends on task structure and representation. (Note: this is a conclusion Brunswikians would anticipate.) Specifically, base rates play a relatively larger role in tasks where base rates are implicitly learned or can be represented in frequentist terms. Base rates are also used more when they are reliable and relatively more diagnostic than available individuating information. At the normative level, the base rate fallacy should be rejected because few tasks map unambiguously into the narrow framework that is held up as the standard of good decision making. Mechanical applications of Bayes's Theorem to identify performance errors are inappropriate when (1) key assumptions of the model are either unchecked or grossly violated, and (2) no attempt is made to identify the decision maker's goals, values, and task assumptions. Methodologically, the current approach is criticized for its failure to consider how the ambiguous, unreliable and unstable base rates of the real world are and should be used. Where decision makers' assumptions and goals vary, and where performance criteria are complex, the traditional Bayesian standard is insufficient. Even where predictive accuracy is the goal in commonly defined problems, there may be situations (e.g., informationally redundant environments) in which base rates can be ignored with impunity. A more ecologically valid research program is called for. (Italics mine.) This program should emphasize the development of prescriptive theory in rich, realistic decision environments. (p. 1)
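To anchor the "narrow framework" and the "mechanical applications of Bayes's Theorem" that the abstract refers to, the Bayesian standard for a base rate task is the posterior

    P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}.

With purely hypothetical numbers chosen for illustration (they are not Koehler's): a base rate P(H) = .30 and individuating evidence for which P(D \mid H) = .80 and P(D \mid \neg H) = .40 give

    P(H \mid D) = \frac{.80 \times .30}{.80 \times .30 + .40 \times .70} = \frac{.24}{.52} \approx .46,

whereas a judge who ignored the base rate (implicitly treating P(H) as .50) would arrive at .67. The "fallacy" literature treats the gap between such answers as the error; Koehler's point is that this standard is too often applied mechanically.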


The reader will have noted that Koehler makes a number of cogent criticisms of the broad conclusions that have been drawn about the "base rate fallacy" ("we have been oversold on the base rate fallacy in probabilistic judgment") and will have also noted that he emphasizes the need for a "more ecologically valid" research program in order to remedy the "oversold" condition of the base rate fallacy. This is a remarkable statement in that Koehler reports exactly what Brunswikians have come to expect from psychological experiments, namely, overgeneralization of results from experiments designed without regard to a theory of the environment or, at a minimum, to a statement of the conditions to which the results are intended to apply. Curiously, however, at the same time Koehler refers -- altogether inappropriately -- to "A more ecologically valid research program" when what he really means is "representative design." It is the general awkwardness of this well-intentioned effort that defines the current state of methodology in psychology today, and lends support to the worries I expressed in my rejected article in 1978. To put it briefly, Koehler's article demonstrates two circumstances: (1) there is a growing visibility of the limitations of traditional laboratory methods, and (2) attempts to remedy matters are being made through the misuse of a Brunswikian concept. As a result, the effort is being terribly bungled.

Those who know Brunswikian research and theory will wonder what Koehler could possibly mean by his suggested remedy to employ "A more ecologically valid research program." They know that "ecological validity" refers to the degree of relation between cue and distal variable, and that it does not refer to, has nothing to do with, the representative design of an experiment. They will also be puzzled by Koehler's call for the "development of prescriptive theory in rich, realistic decision environments." How, they will wonder, can a theory be developed in a "decision environment" of any kind? All of this language indicates just how inept psychological research efforts can become. Without any theory of environments, psychologists must take refuge in referring to a concept they do not understand, or one that is absurd, namely "the real world." The 46 references in the Koehler article to "ecological validity" -- nearly all based on a misunderstanding -- indicate a scientific discipline adrift, searching for a method to call its own, and failing to see what has been offered to it for almost half a century.

Brunswik did not use the term "ecological validity" to refer to the problem of generalization of results from laboratory experiments. He used the term "ecological validity" to refer to the degree of relation between distal variable and cue. He had to invent this term -- and Brunswik did invent it -- because both the perception and learning psychologists of the 30's, 40's and 50's almost without exception designed their experiments so that there was always a perfect relation between cue and distal variable, or cue and reward. (There were few exceptions; see Hammond, 1966.) Because he wanted to broaden the horizons of the perception and learning psychologists to include the idea of a probabilistic environment, however, he needed a term to express the degree of relation between cue and distal variable or reward, and thus break up the research design that always employed a rigid, one-to-one relationship between cue and distal variable or cue and reward. And "ecological validity" is the term he chose, and the one that Brunswikians have been using ever since for that purpose and that purpose alone. It does not refer to the degree to which the conditions of an experiment represent some set of conditions toward which the generalization is intended, as Koehler and so many others have come to believe. What Koehler apparently wants when he says that "A more ecologically valid research program is called for" is a research program built on experimental designs that -- somehow -- represent some broader set of conditions than those ordinarily used. Of the 48 authors, only Funder has mentioned the confused relation between "representativeness" and "ecological validity" (p. 23).

Of course, Brunswikians have long urged that move, and have given specific reasons for doing so. But Koehler apparently has not studied Brunswik, so he has no recommendation for us other than a vague request (ignored by Anderson, p. 17, and ridiculed by Dawes, p. 20). What Koehler and his critics -- and all those who abuse the term "ecological validity" -- do not know is that Brunswik met the challenge of generalizing the results of experiments by laying out in detail exactly what it is that needs to be represented. Ignorance of that material is what misled Dawes into uttering the absurdity that "we have too much ecological validity." That is an absurdity because this is a Brunswikian term that refers to the empirical relation between cue and distal object. Therefore it makes no sense whatever to say that "we have too much ecological validity." Unless Dawes wants to claim that he is free to use this long-established term in any fashion he sees fit, and thus become an irresponsible member of the scientific community, he should stop misusing this term.


It should be plain that there is no connection between the legitimate usage of the term ecological validity and the problem of generalizing results from conditions used in experiments to other conditions of interest.

It should be plain that there is no connection between the legitimate usage of the term ecological validity and the problem of generalizing results from conditions used in experiments to other conditions of interest. But the repeated misuse of this term -- not only in the commentaries on Koehler's article but in a wide variety of research contexts -- makes it clear that psychology has a serious -- and perennial -- problem, namely, how should the results from psychological experiments be interpreted?

In coming to grips with this problem it is essential to note that the concept of representative design does not call for generalization to "real world" conditions (a meaningless demand). Rather, representative design calls for, first, a specification of the conditions toward which the generalization is intended, and second, a specification of how those conditions are represented in the experimental conditions. The first requirement is one that is seldom, if ever, met in academic psychology, although it is often met in human factors research, or other forms of applied research. Typically, academic experimental psychologists construct an experiment to test a theory or hypothesis, and do so in the most expeditious and controlled fashion possible, and then generalize their results -- somehow. It is only in the "so what" phase of criticizing the experiment that the significance of the results is evaluated in terms of their generalizability (or, as we have seen, "ecological validity"). The "so what" criticism may take the form of saying "well, I don't deny that the base rate fallacy appeared under those conditions, but where in the 'real world' will those conditions ever appear? Nowhere. So why should I care about that result?" Koehler's article and the commentaries on the base rate fallacy illustrate that course of events quite well. For example, Koehler states: "Even where the distinction between base rates and individuating information can be retained in a laboratory study, Connolly warns that the two-parameter Bayesian framework may not survive translation to complex, real world (sic) judgment tasks. If he is right, then those who study judgment in ecologically valid settings may have little use for the base rate fallacy" (Koehler, 1996, p. 43). In other words, if Connolly is right, the base rate studies are a waste of time and money, if one wishes to extrapolate the findings -- any findings -- to what? To "ecologically valid" settings. But what are these? Since no one has given us a definition of these settings, and since the only meaning ever given to the term "ecological validity" makes nonsense of the statement "ecologically valid settings," what can we conclude about this discourse? Only that psychologists have not yet come to grips with the question of the generalization of results from experiments, or, more broadly, how to decide what the results of their experiments mean. Koehler seems to agree, for he goes on to say that "most commentators agree that it would be desirable to improve the ecological validity of base rate research.... As Funder notes, real world base rates tend to be less reliable than the laboratory base rates that the subjects are expected to treat as 'definitionally true.' ...[Thus] it is hard to discern the significance of laboratory responses for real world decision behavior" (p. 44). If Koehler's final sentence stopped at "responses," we would be in agreement on the broad conclusion.


There is, of course, no such thing as a "real world."

But Koehler's criticism gets us nowhere for two reasons: it misuses the term "ecological validity" and employs the meaningless concept of a "real world." There is, of course, no such thing as a "real world." It has been assigned no properties and no definition; it is used simply because of the absence of a theory of tasks or other environments, and thus does not responsibly offer a frame of reference for the generalization. Brunswikians do have a theory of tasks, and it has been enunciated often since 1935 (Brunswik and Tolman), recently very specifically in terms of the "principle of parallel concepts." This principle is used to describe judgment tasks in terms of concepts parallel to the organism's cognitive system. And the two parallel systems find quantitative expression in the lens model equation (about which more on another occasion).
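(For reference, the form of the lens model equation standard in the social judgment theory literature -- see, e.g., Hammond, Stewart, Brehmer, and Steinmann, 1975; Stewart, 1976 -- is

    r_a = G R_E R_S + C \sqrt{(1 - R_E^2)(1 - R_S^2)},

where r_a is achievement, the correlation between judgment and distal criterion; R_E is the predictability of the environment from the cues; R_S is the consistency with which the judge uses the cues; G is the correlation between the linearly predictable components of the two parallel systems; and C is the correlation between their residuals.)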

Brunswik did not simply ask for more "real world" studies. He asked that psychologists provide a justification for their generalizations. When he did want to generalize to a specific ecology outside the laboratory, as in his 1944 perception experiment on constancy, to demonstrate how the Gestalt psychologists' results with illusions were restricted to the highly controlled conditions of the laboratory, he specified his ecology in advance. That is, he indicated that he intended to measure the perceptual accuracy of judgments of the size of objects made by a Berkeley resident in a random sample (n = 174) of situations in her natural habitat, and found that she was highly accurate in her judgments of size over this respectably large sample. This result contradicted, of course, what the hundreds, if not thousands, of laboratory studies of perceptual illusions would have led us to expect, namely, the susceptibility of visual perception to illusion.

It will be of interest to contrast Brunswik's study of the accuracy of visual perception with Harlow's highly esteemed work with monkeys in his laboratory. One of his conclusions (Harlow, 1959) was that "Tall, unstable stimuli ...elicit avoidant, hesitant behavior." What should a reader make of that conclusion? Surely, this is the behavior that Harlow observed in his laboratory. Won't readers be surprised when they encounter monkeys living in tall, unstable trees, and exhibiting anything but "avoidant, hesitant behavior"? Would it be unfair to criticize Harlow's conclusion as the product of circumstances as far removed from the monkey's natural habitat as the Gestalt psychologists' illusion-producing circumstances were from our natural habitat as represented by the environs of Berkeley? Harlow simply failed to heed Brunswik's admonition that "Generalizability of results concerning the variables involved must remain limited unless the range, but better also the distribution ...of each variable, has been made representative of a carefully defined set of conditions" (1956, p. 53).

There is more to this matter, but this is not the place for presenting it. For those who are lucky enough to have access to his 1956 book, "Perception and the Representative Design of Psychological Experiments," his general theoretical system for describing an environment is to be found on pp. 1-26.

The rebuttal to Koehler's criticism of the lack of generality (and to my denunciation of the misuse of "ecological validity" and the use of "real world") is that we need have no interest in whether the conditions of the experiment will ever appear anywhere other than the laboratory. That is, if the experiment adequately tests the hypothesis, and the results tell us what we want to know, then that is that. This rebuttal has had a long life, although it appears only sporadically, and then to rebut expressions of discontent with psychologists' methodological orthodoxy. It appeared in pristine form in the 1941 symposium that involved a debate between Hull, Lewin and Brunswik. Here is Lewin rebutting Brunswik on his assertions about the importance of the ecology and objective probabilities, and the view that generalization is an essential part of psychology, thus: "[Brunswik] wishes to include in the psychological field those parts of the physical and sociological world which, to my mind, have to be excluded. These parts, he states, have to be studied in a statistical way, and the probability of the occurrence of events calculated" (Lewin, 1943, p. 308). He became more explicit: "To my mind, the main issue is what the term 'probability' refers to. Does Brunswik want to study the ideas of the driver of a car about the probability of being killed or does he want to study the accident statistics which tell the 'objective probability' of such an event? If an individual sits in a room trusting that the ceiling will not come down, should only his 'subjective probability' be taken into account for predicting behavior or should we also consider the 'objective probability' of the ceiling's coming down as determined by the engineers? To my mind, only the first has to be taken into account, but to my inquiry, Brunswik answered that he meant also the latter" (p. 308).

These remarks neatly divide two of the most important theorists of the 20th century. Lewin is interested only in the subjective life (space) of his subjects; therefore he has no interest in extrapolating the results of an experiment to a world in which there is an objective probability of events ("the ceiling falling down"); whether it actually comes down is of no interest to psychologists. Brunswik is interested in how the subjective probabilities match up with objective ones, and that means we have to ascertain what they are; if we wish to generalize our results to specific probabilistic circumstances, that leads us to representative design. (See Stewart's work on weather forecasters for examples.) I see no way in which Lewin could be persuaded.

Since Lewin's view was presented over a half century ago, one might assume that it is of mere historical interest. That would be a mistake. In responding to Koehler's remarks about ecological validity, Dawes expresses Lewin's view once more. For example: "True external validity results not from sampling various problems that are representative of 'real world' decision making, but from reproducing an effect in the laboratory with minimal contamination (including real world factors)." In short, Dawes not only wants to refrain from including objective probabilities in the experiment (see his first phrase), but wants to make certain that there is no "contamination" from them. Thus not only is he uninterested in whether the "ceiling comes down" or not (exactly as Lewin would have it), but he wants to make sure there is nothing in the experiment that will remind the subject that ceilings fall. He deplores the fact that it is so difficult to rid nonsense syllables of their everyday connotations, and exclaims "Would that we could (his italics) separate our experiments from the natural ecology!" (p. 20). Given this exhibition of complete misunderstanding of the issue of generalization, as in the case with Lewin, I see no way in which Dawes or others who hold similar views could be persuaded.

If these remarks, together with the misguided remarks of Koehler's and others who seek greater "ecological validity" (with whom I am obviously sympathetic in principle), do not demonstrate a scientific discipline at sea, utterly confused and irreparably divided with respect to its methodological goals, I do not know what further evidence is needed, unless it is to be found in the cynical remarks of Stalker (pp. 38-39).

If that is true, what should the remedy be? I once believed that only frequent empirical demonstrations of the failure of generalizations in studies of person perception that failed to sample person-objects would clarify and advance our methodological competence. Or that making visible the contradictions in research findings, such as those pointed out by Funder (do we stereotype, or ignore base rates?), would help. But in view of remarks such as Dawes' that reinvigorate the position taken by Lewin 50 years ago, I see that I was wrong. So now I can only hope that advances in technology that make it easy to represent the circumstances toward which we wish to generalize our findings (never mind the "real world"!) will make the issue moot, and critics like myself mute.

References

Armelius, B.-A., and Armelius, K. (1974). The use of redundancy in multiple-cue judgments: Data from a suppressor-variable task. American Journal of Psychology, 87(3), 385-392.

Armelius, B.-A., and Armelius, K. (1975). Note on detection of cue intercorrelation in multiple-cue probability. Scandinavian Journal of Psychology, 16, 37-41.

Armelius, K., and Armelius, B. A. (1976). The effects of cue-criterion correlations, cue intercorrelations and the sign of the cue intercorrelation on performance in suppressor variable tasks. Organizational Behavior and Human Performance, 17, 241-250.

Aronson, E., and Carlsmith, J. (1968). Experimentation in social psychology. In G. Lindzey and E. Aronson (Eds.), The handbook of social psychology (2nd ed., Vol. 3). Reading, MA: Addison-Wesley Publishing Co.

Aronson, E., Willerman, B., and Floyd, J. (1966). The effect of a pratfall on increasing personal attractiveness. Psychonomic Science, 4, 227-228.

Bakan, D. (1954). A generalization of Sidman’s results on group and individual functions, and a criterion. Psychological Bulletin, 51, 63-64.

Bandura, A. (1978). On paradigms and recycled ideologies. Cognitive Therapy and Research, 2, 79-103.

Berman, J. S., and Kenny, D. A. (1976). Correlational bias in observer ratings. Journal of Personality and Social Psychology, 34(2), 263-273.

Block, J. (1977). Correlational bias in observer ratings: Another perspective on the Berman and Kenny study. Journal of Personality and Social Psychology, 35(12), 873-880.

Brehmer, B. (1974). The effect of cue intercorrelation on interpersonal learning of probabilistic inference tasks. Organizational Behavior and Human Performance, 12, 397-412.

Brehmer, B. (1975). Policy conflict and policy change as a function of task characteristics. IV. The effect of cue intercorrelations. Scandinavian Journal of Psychology, 16, 85-96.

Brehmer, B. (1976). Social judgment theory and the analysis of interpersonal conflict. Psychological Bulletin, 83(6), 985-1003.

Brehmer, B., and Hammond, K. R. (1977). Cognitive factors in interpersonal conflict. In D. Druckman (Ed.), Negotiations: Social-psychological perspectives (pp. 79-103). Beverly Hills: Sage.

Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32(7), 513-531.

Bruner, J. S., Goodnow, J. J., and Austin, G. A. (1956). A study of thinking. New York: Wiley.

Brunswik, E. (1943). Organismic achievement and environmental probability. Psychological Review, 50, 255-272.

Brunswik, E. (1952). The conceptual framework of psychology. In International encyclopedia of unified science (Vol. 1, no. 10, pp. 4-102). Chicago: University of Chicago Press.

Brunswik, E. (1956). Historical and thematic relations of psychology to other sciences. Scientific Monthly, 83, 151-161.

Brunswik, E. (1956). Perception and the representative design of psychological experiments. (2nd ed.). Berkeley: University of California Press.

Brunswik, E. (1957). Scope and aspects of the cognitive problem. In H. Gruber, K. R. Hammond, and R. Jessor (Eds.), Contemporary approaches to cognition (pp. 5-31). Cambridge: Harvard University Press.

Chapman, L. J. (1967). Illusory correlation in observational report. Journal of Verbal Learning and Verbal Behavior, 6, 151-155.

Chapman, L. J., and Chapman, J. P. (1967). Genesis of popular but erroneous psychodiagnostic observations. Journal of Abnormal Psychology, 72(3), 193-204.

Chapman, L. J., and Chapman, J. P. (1969). Illusory correlation as an obstacle to the use of valid psychodiagnostic signs. Journal of Abnormal Psychology, 74(3), 271-280.

Christensen, L. B. (1977). Experimental methodology. Boston, MA: Allyn and Bacon.

Clark, H. H. (1973). The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of Verbal Learning and Verbal Behavior, 12, 335-359.

Cosmides, L., and Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition: International Journal of Cognitive Science, 58, 1-73.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.

Crow, W. (1957). The need for representative design in studies of interpersonal perception. Journal of Consulting Psychology, 21, 321-325.

Dowling, J. F., and Graham, J. R. (1976). Illusory correlation and the MMPI. Journal of Personality Assessment, 40(5), 531-538.

Dudycha, A. L., and Naylor, J. C. (1966). The effect of variations in the cue R matrix upon the obtained policy equation of judges. Educational and Psychological Measurement, 26, 583-603.

Eaton, W. O., and Clore, G. L. (1975). Interracial imitation at a summer camp. Journal of Personality and Social Psychology, 32, 1099-1105.

Einhorn, H. J. (1972). Expert measurement and mechanical combination. Organizational Behavior and Human Performance, 7, 86-106.

Elms, A. C. (1975). The crisis of confidence in social psychology. American Psychologist, 30, 967-976.

Fischhoff, B. (1976). Attribution theory and judgment under uncertainty. In J. H. Harvey, W. J. Ickes, and R. F. Kidd (Eds.), New directions in attribution research (pp. 421-452). Hillsdale, NJ: Erlbaum.

Fisher, R. A. (1947). The design of experiments (4th ed.). New York: Hafner.

Frodi, A. (1974). On the elicitation and control of aggressive behavior. Göteborg Psychological Reports, 4, 16.

Gergen, K. J. (1973). Social psychology as history. Journal of Personality and Social Psychology, 26, 309-320.

Gergen, K. J. (1976). Social psychology, science and history. Personality and Social Psychology Bulletin, 2, 373-383.

Gibson, J. J. (1957). Survival in a world of probable objects [Review of Perception and the representative design of psychological experiments]. Contemporary Psychology, 2(2), 33-35.

Gillis, J. S. (1975). The effects of selected anti-psychotic drugs on objective task learning, and interpersonal learning with acute schizophrenics. In K. R. Hammond and C. R. B. Joyce (Eds.), Psychoactive drugs and social judgment: Theory and research. New York: Wiley.

Gillis, J., and Schneider, C. (1966). The historical preconditions of representative design. In K. Hammond (Ed.), The psychology of Egon Brunswik (pp. 204-236). New York: Holt, Rinehart, and Winston.

Goldberg, L. R. (1970). Man versus model of man: A rationale, plus some evidence, for a method of improving on clinical inferences. Psychological Bulletin, 73(6), 422-432.

Golding, S. L., and Rorer, L. G. (1972). Illusory correlation and subjective judgment. Journal of Abnormal Psychology, 80(3), 249-260.

Graham, K. R. (1977). Psychological research: Controlled interpersonal interaction. New York: Brooks Cole.

Greenwald, A. G. (1976). Within-subjects designs: To use or not to use? Psychological Bulletin, 83(2), 314-320.

Hamilton, D. L., and Gifford, R. K. (1976). Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments. Journal of Experimental Social Psychology, 12, 392-407.

Hammond, K. R. (1948). Subject and object sampling: A note. Psychological Bulletin, 45, 530-533.

Hammond, K. R. (1954). Representative vs. systematic design in clinical psychology. Psychological Bulletin, 51(2), 150-159.

Hammond, K. R. (1955). Probabilistic functioning and the clinical method. Psychological Review, 62, 255-262.

Hammond, K. R. (1968). Brunswik, Egon. In International encyclopedia of the social sciences (pp. 156-158). New York: Macmillan.

Hammond, K. R. (Ed.). (1966). The psychology of Egon Brunswik. New York: Holt, Rinehart and Winston.

Hammond, K. R., Rohrbaugh, J., Mumpower, J., and Adelman, L. (1977). Social judgment theory: Applications in policy formation. In M. F. Kaplan and S. Schwartz (Eds.), Human judgment and decision processes in applied settings (pp. 2-27). New York: Academic Press.

Hammond, K. R., Stewart, T. R., Brehmer, B., and Steinmann, D. O. (1975). Social judgment theory. In M. F. Kaplan and S. Schwartz (Eds.), Human judgment and decision processes (pp. 271-312). New York: Academic Press.

Hartsough, W. R. (1975). Illusory correlation and mediated association: A finding. Canadian Journal of Behavioural Science, 7(2), 151-154.

Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.

Hochberg, J. (1956). Perception: Toward the recovery of a definition. Psychological Review, 63, 400-405.

Hochberg, J. (1966). Representative sampling and the purposes of perceptual research: Pictures of the world and the world of pictures. In K. R. Hammond (Ed.), The psychology of Egon Brunswik. New York: Holt, Rinehart, and Winston.

Hull, C. L. (1943). The problem of intervening variables in molar behavior theory. Psychological Review, 50, 273-291.

Jarvik, M. (1966). A functional view of memory. In K. R. Hammond (Ed.), The psychology of Egon Brunswik. New York: Holt, Rinehart, and Winston.

Jenkins, J. J. (1974). Remember that old theory of memory? Well, forget it! American Psychologist, 29, 785-795.

Keeley, S. M., and Doherty, M. E. (1972). Bayesian and regression modeling of graduate admission policy. Organizational Behavior and Human Performance, 8, 297-323.

Knowles, B. A., Hammond, K. R., Stewart, T. R., and Summers, D. A. (1971). Positive and negative redundancy in multiple cue probability tasks. Journal of Experimental Psychology, 90, 157-159.

Knowles, B. A., Hammond, K. R., Stewart, T. R., and Summers, D. A. (1972). Detection of redundancy in multiple cue probability tasks. Journal of Experimental Psychology, 93, 425-427.

Koch, S. (Ed.). (1959-1963). Psychology: A study of a science (Vols. 1-6). New York: McGraw-Hill.

Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19(1), 1-53.

Kraut, R. (1978). Verbal and nonverbal cues in the perception of lying. Journal of Personality and Social Psychology, 36, 380-391.

Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Leeper, R. W. (1966). A critical consideration of Egon Brunswik’s probabilistic functionalism. In K. R. Hammond (Ed.), The psychology of Egon Brunswik. New York: Holt, Rinehart, and Winston.

Lewin, K. (1943). Defining the "field at a given time." Psychological Review, 50, 292-310.

Lindell, M. K. (1976). Cognitive and outcome feedback in multiple-cue probability learning tasks. Journal of Experimental Psychology, 2, 739-745.

Lindell, M. K., and Stewart, T. R. (1974). The effects of redundancy in multiple-cue probability learning. American Journal of Psychology, 87, 393-398.

Loevinger, J. (1966). Psychological tests in the conceptual framework of psychology. In K. R. Hammond (Ed.), The psychology of Egon Brunswik. New York: Holt, Rinehart, and Winston.

McGuire, W. J. (1973). The yin and yang of progress in social psychology: Seven koan. Journal of Personality and Social Psychology, 26(3), 446-456.

Mumpower, J. L., and Hammond, K. R. (1974). Entangled task dimensions: An impediment to interpersonal learning. Organizational Behavior and Human Performance, 11, 377-389.

Murrell, G. (1977). Combination of evidence in perceptual judgment. In M. F. Kaplan and S. Schwartz (Eds.), Human judgment and decision processes in applied settings. New York: Academic Press.

Naylor, J. C., and Schenck, E. A. (1968). The influence of cue redundancy upon the human inference process for tasks of varying degrees of predictability. Organizational Behavior and Human Performance, 3, 47-61.

Neisser, U. (1976). Cognition and reality. San Francisco: W. H. Freeman and Co.

Nisbett, R. E., and Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35, 250-256.

Orne, M. T. (1970). Hypnosis, motivation, and ecological validity. In W. Arnold and M. Page (Eds.), Nebraska symposium on motivation. Lincoln, NE: University of Nebraska Press.

Osgood, C. (1957). Discussion. In H. Gruber, R. Jessor, and K. Hammond (Eds.), Contemporary approaches to cognition: A symposium held at the University of Colorado. Cambridge, MA: Harvard University Press.

Parke, R. P. (1976). Social cues, social control and ecological validity. Merrill Palmer Quarterly, 22, 111-123.

Postman, L., and Tolman, E. C. (1959). Brunswik’s probabilistic functionalism. In S. Koch (Ed.), Psychology: A study of science (Vol. 1). New York: McGraw-Hill.

Rappoport, L. H., and Summers, D. A. (1973). Human judgment and social interaction. New York: Holt, Rinehart and Winston.

Rosen, G. J. (1975). On the persistence of illusory correlation associated with the Rorschach. Journal of Abnormal Psychology, 84, 571-573.

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. Advances in Experimental Social Psychology, 10, 173-220.

Rotter, J. (1973). The future of clinical psychology. Journal of Clinical and Consulting Psychology, 40, 313-321.

Rumenik, D., Capasso, D., and Hendrick, C. (1977). Experimenter sex effects in behavioral research. Psychological Bulletin, 84, 852-877.

Selby, J. W., Calhoun, L. G., and Brock, T. A. (1977). Sex differences in the perception of rape victims. Personality and Social Psychology Bulletin, 3, 412-415.

Sidman, M. (1952). A note on functional relations obtained from group data. Psychological Bulletin, 49(3), 263-267.

Silverstein, C. H., and Strang, D. G. (1976). Seating position and interaction in triads: A field study. Sociometry, 39, 166-170.

Slovic, P., and Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.

Smedslund, J. (1955). Multiple probability learning. Oslo: Akademisk Forlag.

Smith, M. B. (1976). Social psychology, science, and history: So what? Personality and Social Psychology Bulletin, 2, 438-444.

Starr, B. J., and Katkin, E. S. (1969). The clinician as an aberrant actuary: Illusory correlation and the incomplete sentences blank. Journal of Abnormal Psychology, 74(6), 670-675.

Steinmann, D. O., and Doherty, M. E. (1972). A lens model analysis of a bookbag and poker chip experiment: A methodological note. Organizational Behavior and Human Performance, 8, 450-455.

Stewart, T. R. (1976). Components of correlation and extensions of the lens model equation. Psychometrika, 41(1), 101-120.

Tajfel, H. (1968). Social and cultural factors in perception. In G. Lindzey and E. Aronson (Eds.), The handbook of social psychology (2nd ed., Vol. 3). Reading, MA: Addison-Wesley Publishing Co.

Thorndike, E. L. (1920). A constant error in psychological ratings. Journal of Applied Psychology, 4, 25-29.

Tolman, E. C. (1956). Eulogy: Egon Brunswik: 1903-1955. American Journal of Psychology, 69, 315-342.

Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79(4), 281-299.

Tversky, A., and Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.

Wallace, H. A. (1923). What is in the corn judge's mind? Journal of the American Society of Agronomy, 15, 300-304.

Wike, E. L., and Church, J. D. (1976). Comments on Clark's "The language-as-fixed-effect fallacy." Journal of Verbal Learning and Verbal Behavior, 15, 249-255.

Wright, D. B. (1996). Issues for applied cognitive psychology. Theory and Psychology, 6, 287-291.

Wyer, R. S., Jr. (1975). The role of probabilistic and syllogistic reasoning in cognitive organization and social inference. In M. F. Kaplan and S. Schwartz (Eds.), Human judgment and decision processes (pp. 229-289). New York: Academic Press.

