The Brunswik Society Newsletter

Volume 10 Number 1
Boulder, Colorado and Albany, New York
Fall 1995

This edition of the Brunswik Society Newsletter was edited by Tom Stewart (T.STEWART@ALBANY.EDU) and Mary Luhring (MLUHRING@CLIPR.COLORADO.EDU) and supported by the Center for Policy Research (University at Albany, State University of New York) and the Center for Research on Judgment and Policy (University of Colorado, Boulder).


Table of Contents

New Books on Social Judgment Theory and Methods

Research Summaries

Recommended reading--Brunswik's classic papers

Kenneth Hammond, University of Colorado

My Brunswikian activities have been limited to working on the manuscript of my book, which will be published by Oxford University Press in June 1996. The table of contents follows:

Hammond, K.R. (in press). Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York: Oxford University Press.


Part I: Rivalry

1. Irreducible Uncertainty and the Need for Judgment
2. Duality of Error and Policy Formation
3. Coping with Uncertainty: The Rivalry Between Intuition and Analysis

Part II: Tension

4. Origins of Tension Between Coherence and Correspondence Theories of Competence in Judgment and Decision Making
5. The Evolutionary Roots of Correspondence Competence

Part III: Compromise and Reconciliation

6. Reducing Rivalry Through Compromise
7. Task Structure, Cognitive Change, and Pattern Recognition
8. Reducing Tension Between Coherence and Correspondence Through Constructive Complementarity

Part IV: Possibilities

9. Is It Possible to Learn by Intervening?
10. Is It Possible to Learn from Representing?
11. Possibilities for Wisdom
12. The Possible Future of Cognitive Competence

Return to contents

Ray Cooksey, University of New England

The book, "Judgment Analysis: Theory, Methods, and Applications," by Ray Cooksey has been completed and, if all goes well, should appear in print from Academic Press by December of this year. It is intended as a comprehensive guide to the paradigm, starting with theoretical foundations and moving on to design, execution, and analysis. The book is liberally illustrated with JA examples from a variety of research domains and provides practical guidance on how to conduct JA research. The following abbreviated table of contents (the first two heading levels only) should give Brunswikians a good idea of what is covered.

Cooksey, R.W. (in press). Judgment Analysis: Theory, Methods, and Applications. New York: Academic Press.

Chapter 1: Theoretical Foundations of Judgment Analysis
Chapter 2: Designing Judgment Analysis Research
Chapter 3: Constructing Judgment Analysis Tasks
Chapter 4: Capturing Judgment Policies
Chapter 5: Comparing Systems: The Lens Model Equation
Chapter 6: Aggregating and Comparing Judgment Policies
Chapter 7: Special Topics and Issues in Judgment Analysis
Chapter 8: Future Directions for Judgment Analysis Research
Appendix A: Computer Programs for Supporting Judgment Analysis Research
Appendix B: Glossary of Important Terms

Return to contents

Using the Multitrait-Multimethod Matrix to Evaluate the Functional Requirements for Different Decision Aids

Len Adelman, George Mason University (LADELMAN@GMUVAX.GMU.EDU)

We continued our research with Patriot personnel this past year, and began laboratory research evaluating alternative human-computer interface concepts based on that research. However, what I want to describe briefly here are the results of an exploratory study using the multitrait-multimethod matrix to evaluate the functional requirements proposed for eight Army decision aids. Each of our seventeen respondents rated the usefulness of each aid's proposed functional capabilities using three subjective methods. A within-subjects ANOVA found a significant method x aid interaction; the mean usefulness ratings were method dependent. However, application of the multitrait-multimethod matrix (with the aids being the traits) permitted us to examine the pattern of agreement in the respondents' ratings. This allowed us to identify those aids for which the convergent validity coefficients were higher than the discriminant validity coefficients, and hence less method dependency, and those for which this was not the case. Aids whose heterotrait-heteromethod (discriminant validity) coefficients were similar to, or higher than, their convergent validity coefficients were of particular concern because they represented cases with strong method dependencies, where respondents may not have fully understood (or could not distinguish between) the functional capabilities of different aids. Although exploratory, these results suggest that the multitrait-multimethod matrix approach can support user requirements analysis.
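For readers who want to see the mechanics, here is a minimal sketch, in Python, of how convergent and heterotrait-heteromethod coefficients can be read off a multitrait-multimethod correlation matrix. The seventeen-respondent, three-method layout is borrowed from the study, but the ratings are entirely fabricated and the eight aids are reduced to three for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated usefulness ratings: 17 respondents x 3 aids (traits) x 3 methods.
# Each aid has a latent usefulness; each method adds its own shared bias.
n_resp, n_aids, n_methods = 17, 3, 3
latent = rng.normal(size=(n_resp, n_aids))               # trait signal
method_bias = rng.normal(size=(n_resp, 1, n_methods))    # method signal
ratings = (latent[:, :, None] + 0.5 * method_bias
           + 0.5 * rng.normal(size=(n_resp, n_aids, n_methods)))

# Flatten each (aid, method) pair into one variable; correlate over respondents.
flat = ratings.reshape(n_resp, n_aids * n_methods)
R = np.corrcoef(flat, rowvar=False)

def cell(aid_i, m_i, aid_j, m_j):
    """One entry of the MTMM matrix: (aid_i via method m_i) x (aid_j via m_j)."""
    return R[aid_i * n_methods + m_i, aid_j * n_methods + m_j]

# Convergent validity: same aid (trait), different methods.
convergent = [cell(a, m, a, n)
              for a in range(n_aids)
              for m in range(n_methods) for n in range(m + 1, n_methods)]

# Discriminant comparison: different aids AND different methods.
hetero_hetero = [cell(a, m, b, n)
                 for a in range(n_aids) for b in range(n_aids) if a != b
                 for m in range(n_methods) for n in range(n_methods) if m != n]

print("mean convergent (monotrait-heteromethod):", np.mean(convergent))
print("mean heterotrait-heteromethod:", np.mean(hetero_hetero))
```

With a dominant trait signal, as simulated here, the convergent mean exceeds the heterotrait-heteromethod mean; an aid whose pattern reversed would raise exactly the concern described above.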

Return to contents

Jim Hogge, Vanderbilt University (HOGGEJH@CTRVAX.VANDERBILT.EDU)

Since the November 1994 meeting I have been working (with Stephen Schilling, who recently joined the faculty here in my department) on the application of generalizability theory (Cronbach, Gleser, Nanda, & Rajaratnam, 1972; Shavelson & Webb, 1991) to the assessment of the reliability of global judgments based upon multiple cues. Generalizability theory recognizes that measurement error can have multiple sources, and it yields estimates of the relative magnitude of the various components of error variation. In turn, these estimates facilitate the design of subsequent studies in which error is minimized and reliability is maximized. Steve and I are working on a presentation for J/DM in which we plan to demonstrate how judgments obtained on two occasions (with both cross-validation cases and repeated cases) can be analyzed according to generalizability theory.
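As an illustration of the kind of decomposition generalizability theory provides, here is a minimal sketch with fabricated scores (the design and all numbers are invented, not Hogge and Schilling's): one judge rates the same cases on two occasions, and the case, occasion, and residual variance components are recovered from the two-way ANOVA mean squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated design: one judge rates the same 30 cases on 2 occasions.
# True case-to-case (universe score) sd is 1.0; residual error sd is 0.6.
n_c, n_o = 30, 2
case = rng.normal(0, 1.0, size=(n_c, 1))       # universe-score component
error = rng.normal(0, 0.6, size=(n_c, n_o))    # residual (case x occasion, e)
scores = 5 + case + error

# Mean squares for a two-way (case x occasion) random-effects layout.
grand = scores.mean()
c_means = scores.mean(axis=1)
o_means = scores.mean(axis=0)
ms_c = n_o * np.sum((c_means - grand) ** 2) / (n_c - 1)
ms_o = n_c * np.sum((o_means - grand) ** 2) / (n_o - 1)
resid = scores - c_means[:, None] - o_means[None, :] + grand
ms_res = np.sum(resid ** 2) / ((n_c - 1) * (n_o - 1))

# Solve the expected-mean-square equations for the variance components.
var_res = ms_res
var_c = max(0.0, (ms_c - ms_res) / n_o)
var_o = max(0.0, (ms_o - ms_res) / n_c)

# Generalizability coefficient for judgments from a single occasion.
g_coef = var_c / (var_c + var_res)
print(f"var(case)={var_c:.2f} var(occasion)={var_o:.2f} var(resid)={var_res:.2f}")
print(f"G (single occasion) = {g_coef:.2f}")
```

The estimated components then tell you where to invest: a large occasion or residual component argues for averaging judgments over more occasions before trusting them.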

Return to contents

Josh Klayman, University of Chicago, University of Melbourne (FAC_JLK@GSBVAX.UCHICAGO.EDU)

Since January, I have been on leave from Chicago at the psychology department of the University of Melbourne, Australia, where I will be until next March. I received a Fulbright award to do collaborative research with Alex Wearing and others on learning and decision making in complex and dynamic environments. Melbourne has quite a group of people interested in different aspects of decision making. Alex, many of you know, has worked on dynamic decision making using laboratory tasks of varying complexity. Chris Ball is working on analysis of decision strategies in multiattribute choice, especially with uncertain and missing information. Jeanette Lawrence studies real decisions by professionals at different levels in fields such as jurisprudence and nursing. Over in the Business School, Leon Mann (some of you will know his work with Irving Janis) studies managerial decision making, most recently in the area of R&D investments. And Mary Omodei, at nearby Latrobe University, is working on dynamic decision making and on novel ways of providing feedback for better learning in real-time tasks such as sports coaching. You can perhaps see why the Fulbright people believed me when I told them that Melbourne really was a good place for international collaboration.

Alex and I are starting a project to study how and what people learn in dynamic systems. These are tasks that have one or more of the following characteristics: (a) they progress over time, so that the value of a variable at one time is in part a function of the system variables at a previous time; (b) any given variable may simultaneously be a cause and an effect, e.g., there may be cycles of causation; (c) the judge is part of the system, so that judgments affect the subsequent state of other system variables. Here, we are drawing primarily on learning research in the MCPL tradition as well as recent research and concepts from the area of System Dynamics (e.g., work by John Sterman and colleagues at MIT).
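A toy stock-management example (all dynamics invented for illustration; this is not one of the Melbourne tasks) shows the three properties in a few lines:

```python
# Minimal dynamic-decision task with the three properties above:
# (a) the state at t+1 depends on the state at t, (b) the variables cause
# each other in a loop, and (c) the judge's action enters the system.

def step(stock, demand, order):
    """One time step: stock and demand feed back; the order is the judgment."""
    new_stock = stock + order - demand        # (a) state carried forward
    new_demand = 0.8 * demand + 0.1 * stock   # (b) demand responds to stock
    return new_stock, new_demand

def anchor_policy(stock, target=100):
    """A simple judgment rule: order enough to close the gap to a target."""
    return max(0.0, target - stock)           # (c) judgment feeds the system

stock, demand = 80.0, 20.0
history = []
for t in range(10):
    order = anchor_policy(stock)
    stock, demand = step(stock, demand, order)
    history.append(round(stock, 1))

print(history)
```

Even in a system this small, a judge who ignores the stock-to-demand feedback loop will systematically mispredict the consequences of his or her own orders, which is exactly the kind of learning difficulty the project aims to study.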

Also, Alex, Chris Ball, and Mary Omodei have received funding for new studies of decision making in complex, dynamic decision tasks. I'm working on that project as a consultant/collaborator. We are focusing particularly on the flow of information in such situations when more than one person is involved (distributed decision making), and when information is incomplete and probabilistic.

I am also continuing work with Claudia Gonzalez-Vallejo at Albany and Jack Soll back at Chicago on overconfidence. Our basic premise is that Gerd Gigerenzer and others were Brunswikian (that is, correct) in saying that you have to be very careful how you pick your test questions when comparing people's confidence with their accuracy. However, we now have good evidence that miscalibration does still occur, and that the level of over- or underconfidence depends strongly on the difficulty of the questions, even when care is taken that environments are representatively sampled. We tested this with both two-choice and confidence-interval judgments. So, there are still interesting phenomena to be explained in the area of confidence judgments, and we are now working on understanding the effects of random error and cognitive biases (yes, I still dare use the B word!) on judgments of confidence.

All of this research is still in the formative stages, so we will be very grateful for any suggestions and information.

Return to contents

Alex Wearing, University of Melbourne (ALEX_WEARING@MUWAYF.UNIMELB.EDU.AU)

With regard to current research, see Josh Klayman's report to you. He more or less says it all. Having Josh Klayman among us for a year is, as we say, a long way better than a poke in the eye with a burnt stick. In fact, he is making an invaluable contribution to our research activity. To flesh out one or two details.....

Jeanette Lawrence and I are finishing, at long last, a simulation of a theory of magistrates' decision processes. Chris Ball, Mary Omodei, Josh Klayman, and I are beginning a project on distributed dynamic decision making (yes, the project has Berndt Brehmer's fingerprints on it). Peter Hart and I have been looking at decision making and organizational effectiveness by characterizing the causal structure of organizations as perceived by the staff.

Return to contents

Terry Connolly, University of Arizona (CONNOLLY@CCIT.ARIZONA.EDU)

I have recently been revisiting an issue I worked on, and dropped, some years ago: the causal sequence subjects assume to exist between cues and distal variables. It seems to me that at least two such causal sequences can be distinguished (and probably many more, mixed and intermediate cases). In the first case, the proximal information is naturally thought of as reflecting or indicating the value of the underlying variable, and is thus causally subsequent to it: the patient first gets sick, then the symptoms reflect this fact. I read this as typically what Brunswikians have in mind, and call the indicators CUES. In the second case, the proximal measures are causally prior, and the distal variable subsequent: the patient ingests various substances, and the sickness is the result. I refer to the proximal attributes in this case as COMPONENTS. In many cases the interpretation is ambiguous: the defective ashtray in the new car may be either trivial (a minor component) or significant (a cue to shoddy work on more important matters). I worked out the (tedious) algebra of optimal inference for the two cases years ago: the optimal combining rules are, of course, quite different.

I recently ran, with a student here, Joydeep Srivastava, a couple of experiments showing that subjects are sensitive to the difference. In both experiments we presented subjects with the same attribute values (for example, ratings of products they were choosing between), but varied the accompanying verbal context to suggest either a cue-type or a component-type interpretation. Subjects seem to have had no problem picking up these context shifts and changing their combination rules appropriately -- sometimes in somewhat subtle ways, as in their dealing with missing attribute information. This suggests, minimally, that those of us who work on multiattribute evaluation processes need to be a lot more careful about exactly how we describe the "attributes" we present to subjects. More fundamentally, it raises interesting questions about the extent to which subjects are active sense-makers, rather than mechanical responders, when presented with attribute vectors. (Our evidence, plainly, argues for the former.)

The gory details are in a forthcoming OBHDP article, "Cues and Components in Multiattribute Evaluation". If you can't wait for it to come forth, or would otherwise like to join the discussion, please drop me a note and I'll send you a preprint.

Return to contents

Jim Holzworth, University of Connecticut (Holz@UCONNVM.UCONN.EDU)

During the past year, a number of judgment studies have been conducted in our research group at the University of Connecticut.

Nancy Carrafiello, with my assistance, has been analyzing judgments concerning shoplifting. Appropriate severity of punishment and/or amount of sympathy for shoplifters was judged for 60 cases of shoplifting by 84 adults, in three different presentation conditions. Cases varied in terms of nine cues: age, gender, race, social status, employment status, value of stolen object, prior criminal record, perceived object need, and locus of control. Punishment judgments were less severe if made along with, or after, sympathy judgments. Sympathy judgments were not as much affected by punishment judgments. Punishment judgments were influenced mainly by value of object and criminal record. Sympathy judgments were influenced mainly by age and social status.

Martha Hennen asked ten lame duck employees (employees who knew they were going to be laid off in the near future) to judge 50 classified advertisements. In response to each ad, participants were asked to indicate how likely they would be to reply to such an ad and how attractive they would find the advertised position. Six cues were manipulated: job title, salary, company logo, contact names provided, experience requirements, and whether benefits programs were explicitly mentioned in the ad. Three cues seemed most important to participants when making both reply and attractiveness judgments: salary, experience requirements, and benefits. The signs of the beta weights for particular cues differed across participants, indicating individual differences in how participants used the information.

During this year's spring training, when major league baseball owners and players were at odds, Steven Mellor, Mike Paley, and I asked 120 baseball fans to make judgments concerning support of Major League Baseball. The judgment task consisted of 37 announcements concerning major league baseball games. Each announcement varied in terms of three cues: percentage discount on ticket prices, percentage of replacement players on the field, and number of Major League Baseball Players Association members from the home team expected to form picket lines in front of the ballpark entrance. After reading an announcement, each fan indicated what s/he would do in response, drawing a line separating those activities s/he would do from those s/he would not. Response options were: read about the game in the newspaper the next day, talk about the game with others who attended the game, watch the game on television, purchase tickets for the game for others but not attend the game, attend the game. Significant squared multiple correlation values were obtained for 73 of the 80 fans who varied their responses. Statistical relative weights were: .19 for ticket discounts, .64 for percentage of replacement players, and .17 for presence of pickets. There were individual differences among fans concerning cue utilization.
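For readers unfamiliar with statistical relative weights, here is a sketch of one common recipe: fit a regression to standardized cues and judgments, multiply each standardized beta by its zero-order cue-judgment correlation (these products sum to R-squared), and normalize. The simulated judge below loosely echoes the replacement-player-dominated pattern reported above, but the cue values and judgments are fabricated:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated judgment task: 37 announcements x 3 cues (discount, replacement
# players, pickets). The middle cue dominates, as in the pattern above.
n = 37
cues = rng.normal(size=(n, 3))
judgments = (0.2 * cues[:, 0] + 0.7 * cues[:, 1] + 0.2 * cues[:, 2]
             + 0.2 * rng.normal(size=n))

# Standardize, fit OLS, then form each cue's share of R-squared as the
# product of its standardized beta and its zero-order validity.
Z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
y = (judgments - judgments.mean()) / judgments.std()
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
validities = np.array([np.corrcoef(Z[:, i], y)[0, 1] for i in range(3)])
raw = beta * validities          # these terms sum to R-squared
rel_weights = raw / raw.sum()    # normalize to sum to 1
print(np.round(rel_weights, 2))
```

With correlated cues, the beta-times-validity products are only one of several defensible definitions of "relative weight," so the choice of index is worth reporting alongside the numbers.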

My colleague Janet Barnes-Farrell and I have continued our research on performance appraisal. We are currently analyzing data from our third study. Four groups of experienced university supervisors made judgments concerning how to deal with instances of poor performance by hypothetical subordinates (with or without knowledge of outcomes). Scenarios of poor performance varied in terms of nine cues. Each supervisor rated the appropriateness of nine different corrective actions for each scenario, then picked one of the nine actions as the preferred one. We reported some preliminary results at last year's Brunswik Society meeting. Supervisors relied on different subsets of information for different corrective actions. Cluster analysis identified groups of actions that show similar patterns in the kinds of information that influence judgments of their appropriateness. We are also finding that knowledge of outcome information significantly affects perceptions of contextual cues.

Return to contents

Clare Harries, Plymouth University

I've been using judgment analysis to study British General Practitioners' decision making on lipid lowering therapy, prophylaxis for migraine, and HRT, and their judgments of coronary heart disease risk. The information used and the quantity of information used were compared across these tasks. (An average of four cues influenced decision making on each task. Differences were seen in the treatment of smokers and overweight people, and in the account taken of the patient's attitude to treatment.) The first study has been written up in two articles: (1) Evans, Harries, Dennis, & Dean (1995). General practitioners' tacit and stated policies in the prescription of lipid lowering agents. British Journal of General Practice, 45, 15-18; (2) Harries, Evans, Dennis, & Dean (in press). A clinical judgment analysis of prescribing decisions in general practice. Le Travail Humain.

Other studies look more at the pattern of stated cue use in relation to tacit cue use. Using an information selection task we found that stated cue use correlated better with cue selection than with actual cue use. Tacit policies could be predicted just as well from knowledge of the cues selected as from doctors' stated use of cues. Where doctors appear to lack self-insight it is not related to non-linear cue use.

I've also sort of replicated Reilly and Doherty's (1989, 1992) self-recognition studies. Although GPs managed to pick out their own policies at considerably greater than chance rates, this appeared at one level to be affected by the similarity between the policies presented. GPs have also made judgments about risk and the likelihood of prescription of lipid lowering drugs for the same patients, and these policies have been compared.

Return to contents

Roy Poses, Brown University (ROYPOSES@BROWNVM.BROWN.EDU)

Roy Poses, Wally Smith, and Donna Alexander-Forti are continuing their work on physicians' judgments for patients with congestive heart failure. Most of this year's efforts went to collection, entry, cleaning, and management of our vast clinical data set. Our first priority now is to create models of the ecology, in this case, of the outcomes of congestive heart failure. These have immediate medical interest. Once these models are constructed, and we have a better idea of the cues that predict the outcomes of heart failure, we will be ready to do the analysis of the judgments. Since clinical guidelines suggest physicians need to predict the outcomes of heart failure to make important decisions, such as whether to admit a patient to an intensive care unit; and since we have already shown physicians are not very good at making such judgments, a lens model analysis revealing why their judgments are not very good may actually get the attention of clinicians and policy makers.

Return to contents


Dick Joyce, Royal College of Surgeons in Ireland

Dick Joyce continues with his colleagues at the Royal College of Surgeons in Ireland (a full, and respectable, medical school despite the title) to develop methods based on JA for the study of Individual Quality of Life. We also have a book on this subject in preparation. Wearing a different hat, as a Swiss resident, he continues to initiate, monitor, and evaluate the Federal programme of research in Complementary/Alternative Medicine, for which Individual Quality of Life turns out to be an extremely, if not uniquely, appropriate outcome measure.

Return to contents

Mike DeKay, Veterans Affairs Medical Center, Philadelphia (

Except for the big pile of leftover projects from graduate school (with Gary McClelland, Ken Hammond, Reid Hastie, et al. in Colorado), most of my current research is in the medical domain (with David Asch, Peter Ubel, Jack Hershey, etc.) and most of it is non-Brunswikian (for now anyway). Current projects/papers include:

(1) A major rewrite of the endangered species valuation stuff, some of which I presented at the Brunswik conference a few years ago, though the focus then was on subjects' insight into their own policies (revision under review).

(2) A paper on the difference between probabilistic standards of proof and Blackstone's ratio of judicial errors (under review)

(3) Cost-effectiveness of population screening strategies for Cystic Fibrosis (under review)

(4) An EU analysis of the defensive use of diagnostic testing (paper should go out very soon -- also an MDM or JDM poster)

(5) Attitude-behavior relationships in euthanasia (paper very close to going out)

(6) Research on the different decisions that are made when physicians and others consider treating individuals versus treating groups (following up on some work by Redelmeier & Tversky -- maybe an MDM or JDM poster)

(7) Research on organ allocation strategies. The current priority system puts heavy emphasis on antigen matching to increase transplant success rates. For a number of reasons, this favors Whites over Blacks. In one study, we look at the effects of this information on people's allocations of organs to potential recipient groups. (paper in preparation)

(8) The only thing likely to be of more direct interest to Brunswikians is some other work we are beginning on organ allocation. This project is a collaborative effort between people here at the VA and Penn and Gary McClelland back in Colorado. Basically, it turns out that many of the issues we uncovered when we studied preferences for endangered species preservation programs have close analogues in the organ allocation domain. So we intend to use the judgment analysis software that we developed for the species studies in the new domain.

An example of the similarity between the two domains is that the things that need to be ranked (species or organ recipients) vary in terms of probability attributes (e.g., probability of survival with and without intervention) and nonprobability or "utility" attributes (e.g., a species' uniqueness or a patient's time on the waiting list). Expected utility theory says that these two types of attributes should be combined in a particular multiplicative (i.e., configural) fashion. However, the current priority systems in both domains treat these two types of attributes in an essentially additive manner. Judgment analysis studies in the species domain indicated that most subjects combined probabilities and utilities additively and continued to make distinctions among species on utility attributes even when the preservation programs were completely ineffective (i.e., when preservation programs did not increase species' probability of survival). There are good reasons to expect similar results in the organ allocation domain. Regardless of the results, the EU model might be reasonably advocated as an appropriate normative model for organ allocation. If the empirical results are similar to those in the species domain, there would be additional policy implications. For example, we might want to use judgment analysis (or some other method like decision analysis) to elicit preferences on the "utility" dimensions, but not on the probability dimensions, because (1) people don't use probabilities linearly, and (2) people don't combine probabilities and utilities appropriately. This would provide a nice example of why it is best to separate facts and values in creating social policy and why it may often be best for experts to do the aggregation.
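The additive-versus-multiplicative contrast can be made concrete with a small sketch: simulate a judge who combines the two attribute types additively, then compare how well an additive rule and an EU-style multiplicative rule fit the resulting judgments. All profiles and judgments here are fabricated, not data from these studies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated profiles: a probability attribute (chance the intervention
# succeeds) and a "utility" attribute (e.g., time on the waiting list),
# plus judgments from a simulated *additive* judge.
n = 60
p = rng.uniform(0, 1, size=n)          # probability attribute
u = rng.uniform(0, 1, size=n)          # nonprobability ("utility") attribute
judgments = 0.5 * p + 0.5 * u + 0.1 * rng.normal(size=n)

def r2(X, y):
    """R-squared of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid.var() / y.var()

additive_fit = r2(np.column_stack([p, u]), judgments)   # p + u rule
eu_fit = r2((p * u)[:, None], judgments)                # p x u (EU-like) rule
print(f"additive R2 = {additive_fit:.2f}, multiplicative R2 = {eu_fit:.2f}")
```

Because p and u are positively correlated with their product over this range, the multiplicative model fits an additive judge deceptively well; designs that vary probability down to zero (as the species studies did with ineffective programs) are what separate the two rules cleanly.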

I expect the final product from this research (or one product, at least) to look very much like the paper by Raymark, Baltzer, Doherty, Warren, Meeske, Tape, & Wigton in the current issue of Medical Decision Making, although their topic is somewhat different (judgments of whether to accept life-sustaining treatments).

(9) On a related issue, I'd also like to know if anyone has thought of a good way to quantify self-reports of configural policies. In other words, subjects can tell us, for example, that they placed more weight on variable X1 when variable X2 was small, but it would be nice to have a self-reported SIZE of this effect so that it could be compared to the size of the statistical interaction. Then we could use regression-based measures (including something like G) to get at self-insight into the configural portion of judgment. This would be nice for configural policies like those reported in my species paper and in the Raymark et al. paper mentioned above, where simple importance ratings almost certainly underestimate Ss' insight (because they do not capture Ss' insight into configural components). Thoughts?

Return to contents

David Funder, University of California, Riverside (FUNDER@UCRAC1.UCR.EDU)

I continue my research on the accuracy of personality judgment. My graduate students and I are analyzing a large data set gathered on a sample of UCR undergraduates that includes their self-judgments of personality; judgments by college peers, hometown friends, and parents; several videotaped behaviors per subject; and data on their daily life experiences gathered through diary and experience-sampling ("beeper") methods. Recent studies, coming out soon in JPSP and the Journal of Personality, examine the basis of the acquaintanceship effect (the robust phenomenon that the longer you know somebody, the better the self-other and other-other agreement with which you will describe his or her personality), and compare the accuracy of self vs. peers' judgments of personality against the criterion of predicting behavior (the peers' judgments win by a small margin). Also forthcoming in Psychological Review is a presentation of the theoretical approach underlying my work. I call it the "Realistic Accuracy Model" (or, regrettably, RAM), and it is a thinly disguised Brunswikian approach. Indeed, the major figure in the paper is labeled "A model of the process of accurate personality judgment" and, to those familiar, will look a lot like a lens. Copies of all these papers are available on request.

Return to contents

Rob Hamm, University of Oklahoma Health Sciences Center (

The recognition of domestic violence. I am working with a team including MDs, dentists, and nurses on a study of health care providers' perceptions of the possibility that a patient has been a victim of violence. This project, funded by the Presbyterian Health Foundation, is a grand-scale judgment study using a between-subjects design. Sixteen vignettes have been produced, describing a patient or client in the health care provider's office. These are women at four ages (teenager, young pregnant wife, separated mother of four, and elderly dependent woman). For each, the story was told several ways, varying the strength of the evidence suggesting the woman has been a victim of domestic violence. For example, in the story there may or may not be a visible bruise; the woman may be frank or act defensive. Each respondent reacts to just one version of one vignette, as well as answering several personality measures and demographics. Respondent reactions include judgments of danger and choices of action. Over 3000 questionnaires were mailed to Oklahoma physicians (MDs and DOs), dentists, nurses (LPNs and RNs), social workers, and physician assistants, and over 1000 have been returned. Analysis will (a) determine the effect of the stimulus woman's identity and violence cues upon the health care provider's perceptions and clinical responses, and (b) see whether these responses vary as a function of profession or of personal characteristics.

Teens' perceptions of the possibility of violence in their personal relationships. I have worked with a team of researchers, led by Dr. Mary Lawler of our department, on a project studying teens' awareness of violence. Using questionnaires, teenaged subjects are shown a vignette describing a situation in a teen's personal relationships, in which there is the potential for violence. They rate their perceptions of the danger, and their evaluation of the characters. They also fill out several personality, demographics, and risk behavior questionnaires. Pilot studies (Lawler, Hamm, Crandall, Pryor, Ralls, Rettig, and Davis, STFM Violence Education Conference, 1994) led to a proposal for this project being funded by the Presbyterian Health Foundation. In the funded study, the above approach is being repeated. In addition, I am responsible for a study in which individual subjects judge a number of scenarios, using the more conventional Lens Model techniques. The content of these judgment tasks is currently being developed.

Patient understanding of non-insulin-dependent diabetes. I am developing materials for a study of non-insulin-dependent diabetics' understanding of their disease. This includes a model of a cognitive system (a number of interconnected concepts), to be specified using several judgment tasks. Elements of the system are: preferences for lifestyles; beliefs about the effects of lifestyles upon diabetic consequences; and preferences for diabetic consequences. We have piloted the tasks measuring patients' beliefs about the effects of lifestyle (diet and exercise factors) upon weight, and their beliefs about the effects of lifestyle and weight upon blood glucose. Another possibility with these materials would be to compare patient understanding with the doctor's understanding of the patient's understanding. This would use techniques I have developed before (Public Choice article) for characterizing the degree of projection, accuracy, and simple agreement in one person's understanding of the other, here the doctor's understanding of the patient. Remaining work includes collaborating with an endocrinologist or diabetologist to find or produce a model of the true effects of the NIDDM diabetic's lifestyle upon weight, glucose, and the eventual long-term consequences of diabetes, so that patients' judgments can be compared with an objective standard.

Return to contents

Tom Tape, University of Nebraska (TGTAPE@UNMC.EDU)

I will be presenting some work (done with David Steele and Bob Wigton) at the J/DM meeting in a session being organized by Gretchen Chapman on medical J/DM issues. In brief, the work tests cognitive feedback and probabilistic feedback for learning a non-linear four-cue clinical prediction rule for meningitis. Our preliminary results show that (like our previous experience with real vs. abstract tasks) cognitive feedback does not work well at all compared with feedback of the actual probabilities calculated from the rule.

Return to contents

Robert Gifford, University of Victoria (SNAP@UVVM.UVIC.CA)

I am doing (that is, just starting, in progress, or recently completed) the following lens-based studies:

1. Identifying defensible space features of houses that lead burglars to assess a house as vulnerable to burglary, in comparison to

2. Residents' assessments of the same cues, and

3. Police assessments of the same cues.

4. Nonverbal and

5. Verbal cues that lead observers to conclude a person has a given personality trait.

6. Verbal and nonverbal cues to measured intelligence (just like on p. 29 of Brunswik's 1956 book, Perception and the Representative Design of Psychological Experiments).

7. Vocal cues to attractiveness, using voice personals ads in relation to self- and other-ratings.

8. A large study of modern office building features that lead architects and laypersons to judge buildings as attractive or not. (Attempt to explain just why--in terms of specific cues--architects and laypersons usually disagree).

9. Have shown how, in the nonverbal-personality studies (#5 above), the lens model can be used as a causal model.

10. Looking at how verbal & nonverbal behaviors of job applicants in interviews influence the judgments of personnel officers, and how they reflect the applicant's views of self.

These may seem disparate, but I see no borders between social, personality, and environmental psychology, and I like to use the lens model as the unifying framework.

Return to contents

John Gillis, Oregon State University (GILLISJ@CLA.ORST.EDU)

Frank Bernieri and I continue our work on the encoding and decoding (judgment) of rapport using videotaped two-person interactions. All of our early work was done with tapes of people in mildly adversarial situations: debating some issues. Now we have begun studying interactions in quite another context: a cooperative trip-planning activity. Comparisons between the two contexts are possible on both the ecological and judgment sides. Rapport turns out to be encoded pretty well by nonverbal cues in both circumstances. Perhaps surprisingly, it is actually more predictable in the adversarial than the cooperative situation, R squares being .68 and .59 respectively. Further, rapport levels are rather impressively predictable from a limited subset of cues in both contexts: 5-cue models yield R squares of .56 (debates) and .47 (trip-planning). Accuracy of judges, however, assessed by correlating judgments of rapport with the ratings of rapport given by the taped interactants themselves, is better in the cooperative situation, the less well encoded one! This is because observers make their assessments of rapport on the basis of a few cues indicating expressivity, animation, and extroversion. These policies are not sensitive to context; they are applied across situations. Accuracy is thus dependent on how closely these stable policies match ecological realities in specific contexts. In the cooperative activity, expressivity and its correlates are valid predictors of rapport, so judges do relatively well. It looks as though our college-student judges keep doing what they do, and if we confront them with circumstances where this is appropriate, they do well. There is much more in the data from these comparisons, which we hope will be in press soon.
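As a minimal illustration of the kind of encoding analysis reported above, a linear model can be fit to cue profiles and its R-squared computed. The cue names, sample size, and weights below are hypothetical stand-ins, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for five nonverbal cues coded from videotape
# (e.g., expressivity, animation, smiling, gaze, posture).
n_dyads = 50
cues = rng.normal(size=(n_dyads, 5))

# Hypothetical rapport criterion: a cue-based composite plus noise, mimicking
# a situation where rapport is well (but imperfectly) encoded in the cues.
true_weights = np.array([0.8, 0.5, 0.3, 0.2, 0.1])
rapport = cues @ true_weights + rng.normal(scale=0.8, size=n_dyads)

# Fit a 5-cue linear model (with intercept) and compute R^2.
X = np.column_stack([np.ones(n_dyads), cues])
beta, *_ = np.linalg.lstsq(X, rapport, rcond=None)
predicted = X @ beta
ss_res = np.sum((rapport - predicted) ** 2)
ss_tot = np.sum((rapport - rapport.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"5-cue model R^2 = {r_squared:.2f}")
```

With real data, the .56 and .47 figures above would come from exactly this kind of fit, one per context.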

We also continue our cross-cultural studies of social perception; data from studies with our tapes in Pakistan have arrived. More on this project soon.

Return to contents

Marilyn Rothert, Michigan State University (ROTHERT@MSU.EDU)

The research team at MSU has had a productive year. Members include Marilyn Rothert, Margaret Holmes-Rovner, Jill Kroll, Georgia Padonu, Geri Talarczyk, Dave Rovner, and Neal Schmitt. We completed a study addressing support interventions for women making judgments about menopause. We also completed the pilot study using judgment cases with low-income African American women (AAW). We found that low-income AAW had few significant policies (about 30 of 200), with test/retest reliability and lack of variance the major problems. There needs to be a better understanding of the use of judgment cases with this population. We have submitted a grant proposal to NIH to develop and test a decision support intervention with AAW, which may provide some data.

We are exploring some possible uses of written judgment cases with large populations. We're wrestling with issues of how to efficiently cluster subjects by policy, and of what people can self-report, i.e., subjective weights and identification of their own policy (remembering Mike Doherty's work). We want to explore the educational implications of responding to judgment cases. The thinking process involved may help people identify what is important to them regarding the judgment. The task may assist individuals to make decisions that are both informed and more consistent with their values, which may result in greater satisfaction with their decision and behavior congruent with their decision. We are also interested in exploring the use of the cluster data to tailor messages back to people, expecting a tailored message to have greater impact on decisions and behavior than a standard message.

We continue to look at issues of relations among decision analysis, judgment analysis, and behavior to understand and support individual decision making. We welcome comments and feedback.

Return to contents

Jeryl Mumpower, University at Albany (JLM21@CNSIBM.ALBANY.EDU)

My Brunswikian-related research activities can be divided into three parts.

First, I have continued my work incorporating a Brunswikian-influenced view of negotiations. John Rohrbaugh and I just had a paper entitled "Negotiation and Design: Supporting Resource Allocation Decisions through Analytical Mediation" accepted by Group Decision and Negotiation. This paper represents an effort to integrate and summarize our work in this area during the past ten years or so.

John and I are also working on a paper, entitled "Negotiation Support for Multi-Party Resource Allocation: Developing Recommendations for Decreasing Transportation-Related Air Pollution in Budapest," with our colleagues Anna Vari and Tom Darling. This paper will describe our efforts to provide negotiation support for five task-force members who were trying to reach agreement about how to allocate a limited amount of money among programs intended to improve the air quality in Budapest. Also on the negotiation front, I am working on a paper, with my co-authors Jim Sheffield, Tom Darling, and Richard Milter, that will report on our multi-year, multi-study effort to make some sense of interpersonal learning in negotiations. This is the work that I reported on at the last Brunswik meeting. The gist of this work is that interpersonal learning in negotiation settings is not very good, but not because negotiators are afflicted by a "fixed-pie bias," as is commonly assumed.

Second, the greatest fraction of my time during the past year has been spent in trying to develop a computer-supported judgment aid for use in crisis decision making in psychiatric emergency rooms. The decision is whether or not to admit people who present at psychiatric emergency rooms. The three most important factors are their mental status, the degree of danger they pose to themselves, and the degree of danger they pose to others. A number of imperfectly valid cues are associated with each of these factors. It would be nice if there were sufficient data to build a model of the environmental system that described the relation between cues and criteria, but no such data are available. Instead, we are trying to build judgment models that represent experts' judgments about the relations between cues and criteria. Informed by a review of the research, we've used groups of "experts," consisting of psychiatrists, other clinicians, consumers, and family members, to build such a model. We are currently testing a prototype in a field setting.

Third, Tom Stewart and I have made a commitment to Mike Doherty to finish the paper "Why Experts Disagree" this upcoming year. It shouldn't be hard -- we've been working on it for five years already, so we must be close to finishing by now.

Return to contents

Tim Earle and George Cvetkovich, Western Washington University (TIMEARLE@NESSIE.CC.WWU.EDU)

Our on-the-fringes-of-Brunswik research continued to be focused during the past year on the study of judgments of social trust. In July, 1995, our long-promised book on social trust was finally published: "Social Trust: Toward a Cosmopolitan Society," Praeger, Westport, CT. Like all books of its type, it is outrageously priced, but perhaps you may influence your local academic library to buy a copy. With the book out of the way, we were able to write a couple of articles describing our empirical work on social trust. The first, "Social Trust and Culture in Risk Management," presents two studies, an original and an independent replication, that support the main claim of our theory--that social trust judgments are based on value similarity. Instead of being deduced from evidence, social trust is inferred from value-bearing narratives. People tend to trust other people and institutions that "tell stories" expressing currently salient values, stories that interpret the world in the same way they do. The second article, "Culture, Cosmopolitanism and Risk Management," explores the distinction we make between two types of social trust, Pluralistic and Cosmopolitan. Briefly, Pluralistic social trust is "within-group" (within-narrative), based on the (differing) values of existing groups. Because of this, it is not useful in the management of complex societal problems. Cosmopolitan social trust is "across-group," based on new sets of values (narratives) that are constructed for the solution of specific societal problems. Our current work continues this line, as we attempt to understand how narratives work in judgments of social trust. We would be happy to send pre-publication versions of the two articles to anyone who drops us a line.

Return to contents

Dan Gigone & Reid Hastie, Center for Research on Judgment and Policy, University of Colorado

We have been working on applying the lens model to the study of small group judgment. We have completed 3 studies that demonstrate what we have called the Common Knowledge Effect: the impact of a cue on the group's judgments depends on the number of group members who know that cue prior to group discussion. The more members who know the cue, the more impact that cue has on the group judgments. In terms of the lens model of the group judgments, the cue coefficients on the judgment side of the model depend on the distribution of information to group members. We have demonstrated the effect for both quantitative judgments and binary choices. We have developed a more general model of small group judgment that is based on the lens model. The model allows us to quantify the impact of group discussion of a cue on the group judgment. We are also just finishing the preparation of a review of small group judgment accuracy. In the review, we apply the lens model equation and SJT to group judgment accuracy. We advocate an analysis of accuracy in terms of a mean squared error decomposition that includes the lens model equation.
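The lens model equation referred to above decomposes achievement (the judgment-criterion correlation) into knowledge (G), environmental predictability (Re), judgmental consistency (Rs), and unmodeled agreement (C): r_a = G·Re·Rs + C·sqrt(1-Re²)·sqrt(1-Rs²). A minimal sketch with simulated data (the cue weights, noise levels, and sample size are arbitrary illustrations, not values from these studies):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated profile set: 3 cues, a criterion, and a (group) judgment.
n = 200
cues = rng.normal(size=(n, 3))
criterion = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.7, size=n)
judgment = cues @ np.array([0.5, 0.4, 0.0]) + rng.normal(scale=0.9, size=n)

def linear_fit(X, y):
    """Return OLS fitted values from a regression of y on X (plus intercept)."""
    Xa = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return Xa @ beta

yhat_e = linear_fit(cues, criterion)   # modeled environment
yhat_s = linear_fit(cues, judgment)    # modeled judge

Re = np.corrcoef(criterion, yhat_e)[0, 1]   # environmental predictability
Rs = np.corrcoef(judgment, yhat_s)[0, 1]    # judgmental consistency
G = np.corrcoef(yhat_e, yhat_s)[0, 1]       # knowledge (model matching)
C = np.corrcoef(criterion - yhat_e, judgment - yhat_s)[0, 1]  # residual agreement

ra = G * Re * Rs + C * np.sqrt(1 - Re**2) * np.sqrt(1 - Rs**2)
print(f"achievement r_a = {ra:.2f}")
```

Because the fitted values come from least-squares regressions on the same cues, the decomposition reproduces the raw judgment-criterion correlation exactly; the Common Knowledge Effect shows up as shifts in the judgment-side cue coefficients as information distribution varies.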

Return to contents

Kathy Mosier, NASA Ames Research Center

The research we have been doing is not quite "Brunswikian," but I would like to put this summary out to the group for possible feedback from the cue experts! We (Linda Skitka at the University of Illinois at Chicago and I) have been investigating the use of automated cues in decision making and system control. We hypothesized that individuals would use automated cues as a heuristic replacement for vigilant information seeking and processing, i.e., that they would go no further than the automated cues to make decisions, and would not utilize other, more traditional cues available to them. We did find evidence of this tendency in several studies using college students in a very low fidelity laboratory flight task simulation, and have extended the study to commercial, glass-cockpit pilots. In a part-task simulation, pilots flew interactive flight scenarios, into which several automation "events" were inserted. Events involved some type of conflict between expected and actual automation performance, with verifying or disconfirming information available from the cockpit display. For each of three events, half or more of the pilots missed the automation event, i.e., they did not look past the automated cues and did not realize that a malfunction had occurred or take action to correct it. A fourth event concerned a false automated warning of an engine fire. During training, subjects were told that an engine fire would be indicated by 6 cues, including an electronically generated message. During the event, subjects received ONLY this automated message, and had to decide whether or not to shut down the supposedly affected engine. All of the subjects shut down the engine. Most interestingly, I think, 75-80% of the subjects, when responding to a post-experimental questionnaire, "remembered" at least one more cue being present during the engine fire event. Several of the pilots "remembered" three or four cues, including warning bells and lights, that were never present. We've been examining the literature on eyewitness reports, schemas, and associative recall to explain this phenomenon.

Return to contents

Claudia Gonzalez-Vallejo, University at Albany

Last October I joined Jeryl Mumpower and Tom Stewart at the Center for Policy Research, Albany, after seven months of working as a consultant for the United Nations Development Fund for Women. I am enjoying my interactions with Jeryl and Tom very much, as well as my reading of all the interesting research posted on the Brunswik list. Besides Tom and Jeryl, my other Brunswikian connection began in 1992 when I visited the University of Chicago and started my collaboration with Josh Klayman and Jack Soll at the Center for Decision Research. Being a graduate of Chapel Hill, I feel I am becoming more of a hybrid as the days go by, enjoying issues in policy, psychology, marketing, medical decision making, and many other areas where Brunswikian, Thurstonian, and mixed theoretical approaches can prove exciting and useful in answering different research questions.

As Josh mentioned in his research summary, we are investigating issues of overconfidence with particular emphasis on the effects of question difficulty, strategies for question sampling, and strategies for question presentation. We are also trying to better understand the effects of random error on confidence judgments as well as the role of feedback on measures of accuracy and confidence.

My work at the Center for Policy Research entails new and exciting projects in the areas of medical decision making and management. In medical decision making, we are currently working on two issues: a) the diagnosis and treatment of acute otitis media in children and b) clinical judgments and decisions in the health care management of terminally ill patients. The otitis media project deals with two interrelated problems: on the one hand, the infectious condition of acute otitis media is difficult to diagnose, and thus much variability exists regarding its treatment; on the other, the non-infectious condition of recurrent otitis media with effusion often results in placement of ear tubes, an expensive procedure intended to prevent long-term consequences of mild hearing loss about which there is less than full agreement. So far we have studied the judgments of five physicians on realistic patient cases. The patient cases included information about the patients' history, examination findings, and parents' attitudes. As is often the case, we found physicians varied in their diagnostic judgments for equivalent cases, but all physicians agreed that the examination cues pertaining to coloration and swelling of the ear drum were the most important factors influencing their decisions. With regard to the surgery decision for the effusion problem, we are currently working on a Markov model, in a decision analysis framework, to better understand the process, consequences, and uncertainties surrounding this decision. We will be presenting this work at the upcoming meeting of the Society for Medical Decision Making.
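A Markov model of this kind tracks a hypothetical patient cohort through health states over repeated cycles, with transition probabilities governing movement between states. The states, probabilities, and time horizon below are illustrative placeholders, not the actual model under development:

```python
import numpy as np

# Hypothetical 3-state Markov cohort model for recurrent otitis media with
# effusion. All states and transition probabilities are invented for
# illustration only.
states = ["effusion", "resolved", "hearing_loss"]
P = np.array([
    [0.60, 0.35, 0.05],   # from effusion: persist, resolve, or progress
    [0.10, 0.90, 0.00],   # from resolved: possible recurrence
    [0.00, 0.00, 1.00],   # hearing_loss treated as absorbing
])

cohort = np.array([1.0, 0.0, 0.0])   # everyone starts with effusion
for cycle in range(24):               # e.g., 24 monthly cycles
    cohort = cohort @ P               # propagate the cohort one cycle

for state, share in zip(states, cohort):
    print(f"{state}: {share:.3f}")
```

In an actual decision analysis, each state would carry costs and utilities, and the surgery and watchful-waiting strategies would be compared by running the cohort under each strategy's transition matrix.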

Our work on health care management for patients in the last months of life includes, at the moment, a statistical predictive mortality model. Results indicate that admission information can accurately predict in-hospital death for a substantial fraction of patients. We expect to look at primary care physicians' decisions about performing medical procedures when the uncertainty of prognoses is reduced, among other factors. Because many of the issues in this area are complex and of an ethical nature, we hope to convene a panel of health care providers to discuss the issues involved in these difficult decisions.

In relation to my UN experience and my interest in development issues, I have begun some collaborative work at the University at Albany in the area of organizational behavior and management. We are developing a management training program tailored to aid development efforts, especially those that focus on women. Using the Competing Values Framework, developed by Robert Quinn and John Rohrbaugh, we hope to study women's grass-roots organizations in Latin America and design a leadership methodology to strengthen the management skills of organizational leaders. Our work is very preliminary at this stage, as we are trying to secure some funds to carry out our ambitious plans.

Other current research includes studies on the effects of vague information on consumers' preferences (in collaboration with Sanjay Dhar at the University of Chicago) and modeling choice behavior and confidence judgments within a stochastic framework. My modeling work in the areas of choice and confidence is a mixture of Thurstonian and Brunswikian views. The models are quite Brunswikian, in the sense that individuals' behavior is assumed to be both variable and dependent on specific environmental conditions, and Thurstonian in terms of the specific ways random error is handled.

Return to contents

Mark Chaput de Saintonge, London Hospital Medical College (RDK1001@UX.LHMC.LON.AC.UK)

I maintain some activity with Vinod Diwan and Goran Tomsen at the International Health Care Unit at the Karolinska as part of an EEC study. This is an RCT of the effects of group auditing on improving drug therapy in primary care. We are examining the effects of educational interventions on prescribing in acute asthma and urinary tract infections. Part of the intervention package will be CJA feedback, but I doubt we will be able to segregate its contribution from other factors. Baseline measurements and most interventions are now complete. Behavioral measures include changes in prescription rates for individual GPs. We should at least be able to link changes in judgments on vignettes with changes in prescription rates. We are still optimistic, and I still wear the fine 1950's Stetson trilby I purchased second-hand in the market at Albany!

I also maintain some activity with Roy Poses.

Return to contents

Alex Kirlik, Center for Human-Machine Systems Research, Georgia Institute of Technology (kirlik@isye.isye.gatech.EDU)

At the Center for Human-Machine Systems Research at Georgia Tech, we have been investigating real-time human interaction with dynamic systems within an ecological (Brunswikian/Gibsonian) framework. Our goal is to develop a theoretical approach to interaction that explicitly acknowledges both the perceptual/judgmental and the action-oriented interfaces between the organism and the environment. It is sometimes overlooked that the conceptual precursor to the Lens model originally developed by Tolman and Brunswik gave equal attention to both of these (perceptually-oriented and action-oriented) proximal-distal relations between the organism and the external world. Our empirical studies have led us to believe that a fruitful understanding of human interaction with dynamic environments will emerge only when both perception/judgment and action are seen as cognitive resources, rather than viewing action as merely a cognitively-neutral "implementation" of judgments or decisions.

We have come to this view through our attempts to characterize the nature of expertise in dynamic interaction. In short, we have observed that experts are not only more efficient in their selection of action on the basis of environmental information, but also that experts employ strategies for interacting with the environment that make information available to them that is not available to novices. Importantly, we are not here referring to differences between experts and novices in terms of information search, or in abilities to make perceptual judgments. Instead, we have observed that experts employ strategies that involve constraining the dynamic behavior of the environment in such a way that variables that are distal to novices are made proximal to experts. That is, when vicarious functioning is supported, we have observed that skill acquisition is characterized by the development of action strategies that raise "depth" variables to the environmental surface. The mechanism whereby this aspect of skill acquisition occurs lies in the selection of an invariant strategy which induces constraint in the behavior of the dynamic environment above and beyond the constraint due to its "internal" dynamics. This constraint, which does not characterize the novice's environment, is exploited by the expert in terms of information: the expert's environment possesses a higher degree of regularity in its behavior, and thus the perceptual information available from the expert's environment is more informative about the environment than is the perceptual information available to the novice.

We have been using a mathematical approach based on systems theory and multi-dimensional information theory to formalize these intuitions. We have applied the mathematical model to an actual situation (short-order cooks' grilling strategies) and have found that the model is capable of capturing how expert strategies significantly reduce the uncertainty associated with the distal variables in this task. We have thus given one, we hope provocative, answer to the question "how do experts make it look easy?" Our answer is that, for the expert, it IS easier. We continue to refine and formalize these ideas, and plan to test their applicability to a variety of dynamic situations.
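The core intuition, that an expert's constrained environment carries less uncertainty about distal variables than a novice's, can be sketched in terms of Shannon entropy. The distal variable, its states, and the probabilities below are hypothetical illustrations, not values from the grilling study:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical distal variable (e.g., an item's cooking state) with 4 states.
# Unconstrained ("novice") environment: states occur unpredictably.
p_novice = [0.25, 0.25, 0.25, 0.25]

# Constrained ("expert") environment: an invariant strategy (e.g., placing
# items on the grill in a fixed spatial order) makes the state largely
# determined by surface cues, concentrating the distribution.
p_expert = [0.85, 0.05, 0.05, 0.05]

h_novice = entropy(p_novice)
h_expert = entropy(p_expert)
print(f"H(novice) = {h_novice:.2f} bits, H(expert) = {h_expert:.2f} bits")
```

The expert's self-imposed constraint shows up as a lower-entropy distribution over the distal variable; the multi-dimensional information analysis described above generalizes this idea across sets of interacting variables.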

Click here for more detail on A Mathematical Model of Strategic Interaction

Return to contents

Recommended reading--Brunswik's classic papers

In addition to his books (The Conceptual Framework of Psychology. Univ. of Chicago Press, 1952, and Perception and the Representative Design of Psychological Experiments, Univ. of Calif. Press, 1956) the following papers are particularly important (list courtesy of Ken Hammond). How many have you read?

Tolman, E., and Brunswik, E. (1935). The Organism and the Causal Texture of the Environment. Psychol. Rev., 42, 43-77.

Brunswik, E. (1937). Psychology as a Science of Objective Relations. Phil. Sci., 4, 227-260.

Brunswik, E. (1939). Probability as a Determiner of Rat Behavior. J. exp. Psychol., 25, 172-197.

Brunswik, E. (1940). Thing constancy as measured by correlation coefficients. Psychological Review, 47, 69-78.

Brunswik, E. (1943). Organismic Achievement and Environmental Probability. Psychol. Rev., 50, 255-272.

Brunswik, E. (1946). Points of View. In Harriman, P.L. (Ed.), Encyclopedia of Psychology, Philosophical Library, 523-537.

Brunswik, E. (1950). Remarks on Functionalism in Perception. In Bruner, J. and Krech, D. (Eds.), Perception and Personality, Durham, N.C.: Duke University Press, 56-65.

Brunswik, E. and Herma, H., (1951). Probability Learning of Perceptual Cues in the Establishment of a Weight Illusion. J. Exp. Psychol., 41, 281-290.

Brunswik, E. (1951). Notes on Hammond's Analogy Between "Relativity and Representativeness." Phil. Sci., 18, 212-217.

Brunswik, E. and Kamiya, J. (1953). Ecological Cue-Validity of "Proximity" and of other Gestalt Factors. Amer. J. Psychol., 66, 20-32.

Brunswik, E. (1955). "Ratiomorphic" models of perception and thinking. Acta Psychologica, 11, 108-109.

Brunswik, E., (1955). Representative Design and Probabilistic Theory. Psychol. Rev., 62, 193-217.

Brunswik, E., (1955). In Defense of Probabilistic Functionalism: A Reply. Psychol. Rev., 62, 236-242.

Brunswik, E., (1957). Scope and Aspects of the Cognitive Problem. In Gruber, H., Jessor, R., and Hammond, K. (Eds.), Cognition: The Colorado Symposium. Cambridge, Mass.: Harvard University Press, 5-31.

Brunswik, E., (1959). Ontogenetic and Other Developmental Parallels to the History of Science. In Evans, H. (Ed.), Men and Moments in the History of Science. Seattle: University of Washington Press, 3-21.

Return to contents

Back to the Brunswik Society Home Page