Education and a Civil Society: Teaching Evidence-Based Decision Making

Chapter 1: Decision Making and Its Development

Eamonn Callan, Tina Grotzer, Jerome Kagan, Richard E. Nisbett, David N. Perkins, and Lee S. Shulman
Teaching Evidence-Based Decision Making in K-16 Education

David Perkins

Review your activities over the last few weeks. Perhaps you can recall a decision that did not turn out so well. The outcome might not have been your fault: Even when we ponder long and wisely, circumstances intervene—a hurricane, a flu, a flat tire. But people often recognize something they could have done to make a better decision in the first place. “If I had thought about it more carefully, I would have recognized that five or six things could go wrong, so at least one of them was almost bound to.” “I could easily have bought some time to get more information.” “If I had considered the possible side effects more, I would have realized that plan B was really better than plan A.” “I let myself be influenced far too much by what others were saying.” And so on.

I get this simple exercise from a friend and colleague of many years, Robert Swartz, a prominent figure in the critical-thinking movement. The exercise is suitable for teachers, children, professors, businesspeople. In groups it generates remarkably rich conversations about the art and craft of decision making. Almost everyone can respond with at least one significant decision that did not go so well and how the decision might have been better approached by paying more attention to one or another kind of evidence.

We should not be surprised. Our lives are woven out of decisions. We are constantly deciding—whether to do homework or go to a movie, whether to accept a job offer or keep looking, whether to call someone for a second date, whether to vote for this person or that, support this policy or that. Moreover, decision making is not one of those human capabilities like breathing or everyday conversation that pretty much takes care of itself. A complicated world and our complicated selves ensure that decision making is often hard and sometimes, to our later chagrin, disastrously easy.

Looking to the larger scale, the decisions of leaders are likewise not untroubled. Historian Barbara Tuchman (1984) in The March of Folly systematically analyzed a number of historical decisions that she argues were “folly”—unwisely taken at the time, with ample warning from insiders and outsiders. Examples include King George’s decision to play tough with the American colonies, leading to the American Revolution; and U.S. involvement in Vietnam. The January 1986 decision to launch the Challenger space shuttle despite serious concerns from engineers is another often-mentioned example: executive concerns about the appearance of progress inappropriately overrode technical reservations (Starbuck and Milliken 1988). The shuttle exploded shortly after launch. The poorly conceived Bay of Pigs invasion of Cuba is a further common target of critique (e.g., Johnson 2004).

Imagining a society where evidence-based decision making is more of a deliberate craft, where most people make decisions somewhat better than they typically do today—because education, mentoring, and other means of fostering good decision making have brought this about—is enticing. So, too, is imagining a society where adolescents are more cautious about experimenting with sex and drugs, where adults less often fall into dead-end jobs or dead-end relationships, where care for the elderly involves more planning and foresight. So, too, is imagining a world where not only do individuals better navigate their way through the complexities of life but where larger-scale decisions—matters of public office and public policy—reflect more informed and attentive leaders and citizens. My aim in this essay is briefly to explore the prospects of creating such a world.

Focusing on everyday decision making, the discussion moves through several themes: first, what distinguishes decision making from other kinds of thinking; then, descriptions of how decision making goes wrong and prescriptions for what counts as good decision making and how education might develop good decision making; finally, a particular problem: how better decision making might serve not just individual interests but civic engagement and society at large.

Research tells us a lot today about how decision making goes right or wrong. What we know adds up not so much to a template for perfect decision making as to a vision of artful self-regulation responsive to the circumstances, sometimes more analytic, sometimes more intuitive, sometimes making checklists, sometimes telling stories and counterstories to oneself. Mode needs to match occasion. The best efforts to improve decision making through education have beneficial results. However, these efforts focus only on some aspects of the challenge. Educational and other institutions would do well to invest much more in this fundamental cognitive craft through which we shape our lives and our larger society.


How does decision making differ from other kinds of thinking, for instance, planning, reasoning, or analyzing? The boundaries between such everyday categories inevitably are fuzzy, all the more so because they tend to draw on one another—one decides on a plan or plans how to decide, reasons out a decision or decides on the most important reasons (e.g., Moshman 2005). That acknowledged, decision making most centrally is a problem of choice (Galotti 2002). One must select from among alternative paths leading toward the future. The paths may comprise alternative plans, go/no-go choices, or beliefs to accept or reject—as with religious faiths, political philosophies, scientific theories, or personal policies.

Often the challenge goes beyond deciding between given alternatives. One might need to find reasonable alternatives in the first place or to look creatively beyond the obvious options to others more promising. Even so, choice remains central. In the search for a good choice, many factors can figure: goals and values to be pursued, time pressure, side effects, opportunity costs, stable versus changing circumstances, available knowledge, interests of others as well as oneself, and more (Byrnes 1998; Galotti 2002; Jacobs and Klaczynski 2005).

While decision-making situations vary enormously in their particulars, this account centers on what might be called everyday decision making—for instance, purchase decisions; forming and severing personal relationships; voting and other political commitments; educational and job decisions; and choices regarding sex, recreational drugs, and the like. People addressing such everyday decisions typically face possibilities of action rather than, or in addition to, belief; have time for reflection rather than needing to respond quickly; have some familiarity with the area in question but lack deep professional expertise; and feel they are personally involved rather than making technical decisions on matters not close to them.

An ordinary example: A couple of years ago, I attended a conference in Athens. My wife came along and we took time to enjoy the classic city. On our last day we spent an hour-and-a-half looking at Oriental rugs in a small shop. We spent considerable thoughtful time sorting out which rugs we liked best and ended up buying three medium-size rugs. We knew something about the pricing of these rugs, and the costs seemed reasonable. Our decision, like countless others that people make every day, had all the earmarks of a typical decision-making process: an action taken after reflection, based on some familiarity with the area, and having personal involvement.

The decision also was a mistake, we later concluded: not a serious one, but a mistake nonetheless. Why? How do decisions go wrong?


The next day we felt that we should have bought only one or perhaps two of the rugs. Our imminent departure had led us to overpurchase. Leaving behind an attractive buy was difficult. An accident of schedule—our imminent departure—unduly influenced us (the shop owners did nothing to press us) in a way that advertisers frequently mimic with headlines like “One day only” or “Limited time offer.” Amsel et al. (2005) argue that avoidance of future regret often strongly and inappropriately figures in decision making. In effect, my wife and I feared the regret of a lost opportunity, a decision-making pitfall we should have recognized from prior experience.

Were we sure we made a mistake? This is an important question because appraisal of decisions after the fact suffers from what is called “hindsight bias,” the distorting effect of knowing how things turned out (e.g., Fischhoff and Beyth 1975; Teigen 1986). We were confident we had erred. We recalled how during shopping we were much more attracted by one of the rugs, significantly by the second, and found the third simply pleasant. This should have been a signal for caution. After all, the rugs were being shipped, so we could have taken a couple of days to decide about the second and third rugs and then emailed the shop owner with little realistic risk that others would have purchased exactly those rugs. We could even have photographed the rugs with our digital camera for another look later. We had done both these things in the past.

To generalize from this and many other ordinary examples: decision making, broadly speaking, goes wrong because it is not sufficiently evidence based. People often act impulsively and impressionistically, with a limited view of the matter at hand, not looking beyond what the obvious options offer; not sufficiently consulting past experiences, general knowledge, informed friends and colleagues, and other sources; or not weaving evidence together in a coherent way. The technical literature on the psychology of decision making foregrounds how poor decision making reflects lack of self-regulation to elaborate and clarify one’s goals and the goals of relevant others, imaginatively search for promising options, carefully assess the costs and consequences of adopting those options, and so on (e.g., Baron 1988; Byrnes 1998; Galotti 2002; Janis 1989).

That is the big picture. Now for some details.

The Challenge of Representing Complexity

One obvious challenge of decision making is simply that decisions are often complex, involving several options, multiple criteria, and diverse knowledge about the situation. Managing all the elements is hard, and overlooking something important is easy. Means are needed to organize and integrate the ideas and information important to a decision.

The good news is that people have conceptual tools for coping with this complexity. The tools are not always fully or well used, but they are powerful. Two rather different modes of representation figure prominently in both everyday and more expert decision making: quantitative/tabular (quantitative for short) and narrative.

The quantitative approach often takes simple list form, as in checklists of product features or tallies of pros and cons. At the far end of technical finesse are expected-utility models. These involve an intricate tabular analysis, with each of several options scored for each of several criteria and the criteria weighted for relative importance (e.g., Baron 1988; Dawes and Corrigan 1974; von Winterfeldt and Edwards 1986). Multiplying each option score by the weight of its criterion and summing the results yields an overall score for the option. The probabilities of various consequences can also be incorporated into the analysis. Impressively sophisticated as this is, it is a method that does not serve all situations well, a matter elaborated later.
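As a concrete sketch, the weighted-sum core of such a model takes only a few lines of code. The options, criteria, scores, and weights below are invented purely for illustration:

```python
# Weighted-additive sketch of an expected-utility style table.
# All names and numbers here are illustrative, not from the text.

def weighted_score(scores, weights):
    """Multiply each criterion score by its weight and sum the results."""
    return sum(scores[c] * weights[c] for c in weights)

# Criteria weighted for relative importance (weights sum to 1 here,
# though the method only requires consistent relative weights).
weights = {"price": 0.5, "quality": 0.3, "convenience": 0.2}

# Each option scored on each criterion, say on a 1-10 scale.
options = {
    "option_a": {"price": 7, "quality": 5, "convenience": 9},
    "option_b": {"price": 4, "quality": 9, "convenience": 6},
}

totals = {name: weighted_score(s, weights) for name, s in options.items()}
best = max(totals, key=totals.get)
```

Incorporating probabilities of consequences would multiply each score by its likelihood before summing, but the tabular skeleton stays the same.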

Narrative offers another mode of representation for decision making. Simple versions of the narrative mode involve concise causal stories in favor of or against options: “I’m sympathetic with his goals, but okaying this would set a precedent we’d have to live with forever.” “If we order today, we’ll have to have the materials sent by express or they simply won’t get here in time.” Complex versions involve elaborate argument. In criminal law, for instance, consider how the prosecution motivates the jury to return a guilty verdict by weaving the evidence into a story, a “theory of the crime” that combines the three themes of motive, method, and opportunity. This story must stand against counterstories advanced by the defense, which cast the evidence in a different light and challenge some of it.

While the conceptual tools of quantitative and narrative representation can help with decision making, their ready, widespread use is complicated by a number of factors. For decisions that invite technical approaches, the more intricate analyses require knowledge and skills many people do not have. On the quantitative side, most people do not know when to attempt or how to manage the expected utility procedures. On the narrative side, most people are unlikely to be conversant with the niceties of hypothetico-deductive reasoning. However, lack of technical knowledge is probably not the principal mischief maker. A considerable amount of decision making is simply shallow. People do not try hard enough to be better informed, to consider multiple possibilities, and to carefully evaluate them. Why is this?

The Trend to Oversimplification

Benign factors lurk behind many quick and dirty decisions. Occasionally choices arise in urgent circumstances. With no time for elaborate analysis, one must take one’s best shot, which can in fact be quite good. Other benign factors include the fact that investment in a complex analytic process can be costly, many decisions are not all that important, and the prospective gains from taking the longer, more thorough route might not outweigh the prospective benefits of available shortcuts. For important decisions, the available information may not support the most thorough styles of analysis anyway. Figuring in relative probabilities of consequences will add minimal value if the probabilities are little more than guesses.

On the other hand, many not-so-benign factors lead to shallow explorations of decisions. A pointed folk saying—“Between two options, choose the third”—points to one of the most basic causes: important options beyond the obvious ones often exist. The same can be said of criteria. Many decision situations do not make salient some of the promising options and pertinent criteria. The situations create the illusion of a completely stated problem: a couple of apparent choices and two or three obvious considerations. However, a little digging reveals hidden layers.

This was true of the rugs story. Our focus on which rugs we liked best made it easier to overlook deferred-purchase strategies. Similarly, given the option of buying a neighbor’s used car, many people will assume their choices are yes or no after a test drive. But perhaps they could “choose the third” and make a deal to rent the car for a week to see how they like it, the rental going toward the purchase price if they’re happy.

Problems of saliency aside, elaborate decision making demands intense mental effort. Herbert Simon (1957) famously characterized the problem of human beings’ limited rationality: cognitive bottlenecks steer us toward expeditious but sometimes unfortunate shortcuts. Touching on the rugs purchase once more, my wife and I had a lot on our minds and little room for much more. Analogously, a manager focusing on the substance of a tricky decision can easily forget a colleague whose judgment should be heard both for counsel and to maintain the relationship.

Keeping things simple is a natural and generally adaptive tendency. Contemporary “dual processing” models argue that people have two modes of processing: intuitive and analytic (e.g., Epstein 1994; Klaczynski 2005; Reyna et al. 2005; Stanovich 1999; for a critical assessment, see Evans 2008). Intuitive decision making is rapid, is only marginally conscious, and at its best is deeply informed by experience in the domain in question. Imagine how you might size up a job candidate in the course of a brief conversation, responding to all sorts of nuances you could not readily name. In contrast, analytic decision making is more conscious, deliberate, effort-demanding, and systematic, pondering a number of options and integrating a range of considerations toward a conclusion by using narrative, quantitative, or mixed approaches.

Efficient, fairly intuitive processing dominates our day-to-day activities— we would need a month to get through a day otherwise—and also many medium-to-high-stakes decisions. Often this serves well enough, but sometimes the limited horizons and technical shortfalls of intuitive processing leave us blind to significant options, benefits, and risks. Moreover, people sometimes deceive others and even themselves about their commitment to explicit evidence, formulating articulate rationales that are no more than post hoc justifications of intuitive convictions.

The Hazards of Heuristics and Biases

Intuitive processing is replete with what are sometimes called heuristics and biases. These rules of thumb often serve in a rough-and-ready way but can prove deeply misleading (Kahneman et al. 1982). Sometimes characterized as cognitive illusions, these include, for instance, confirmation bias (the tendency to seek confirmatory and fail to seek disconfirmatory evidence), social stereotyping, the fundamental attribution error (the tendency to explain others’ behavior as reflecting abiding personality characteristics while seeing one’s own behavior as a response to situational demands), the neglect of opportunity cost (the cost incurred by foregoing the benefits of A when you choose B over A), and the “waste not” heuristic around sunk costs (the feeling that you have to get your money’s worth out of a cost already irretrievably incurred, even though a change in circumstances, or something else, may have made the effort of doing so no longer worth the gain).

Another important bundle of biases, termed prospect theory, won Daniel Kahneman the 2002 Nobel Prize for economics (his partner in this work, Amos Tversky, had unfortunately died; otherwise he would have shared the award) (Kahneman and Tversky 1979). Prospect theory focuses on decisions that involve gains and losses, often under conditions of uncertainty.

The theory has several nuances, but one of the most basic points is easily stated: People are strongly loss-averse—a change viewed as a loss is usually felt considerably more keenly than an equivalent change viewed as a gain is enjoyed. Contrary to Ben Franklin, a penny saved is worth more than a penny earned. A $100 bill is worth more to you, by roughly twice as much, when you have it and think of yourself as potentially losing or spending it than when you don’t have it and might gain it. For instance, people usually won’t bet $100 against a fifty-fifty chance of getting $X until X gets up to about $200 (e.g., Tversky and Kahneman 1992).
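The asymmetry behind that bet can be sketched in a few lines. The loss-aversion coefficient of 2 below is an illustrative round number in the spirit of the “roughly twice as much” claim, not a fitted parameter, and the function names are invented:

```python
# Minimal sketch of loss aversion in the spirit of prospect theory.
# lam = 2 is an illustrative coefficient: losses count roughly double.

def subjective_value(x, lam=2.0):
    """Losses loom larger: a loss of size x hurts about lam times
    as much as a gain of size x pleases."""
    return x if x >= 0 else lam * x  # negative x is a loss

def bet_feels_worthwhile(gain, loss=100, p=0.5):
    """Does a p chance of winning `gain` subjectively outweigh
    a (1 - p) chance of losing `loss`?"""
    expected = p * subjective_value(gain) + (1 - p) * subjective_value(-loss)
    return expected > 0
```

With these toy numbers, a fifty-fifty bet risking $100 only starts to feel attractive once the upside exceeds about $200, mirroring the experimental pattern described above.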

To see how losing versus gaining works in a nonmonetary case, imagine that Tony is more of a Superman than a Spiderman fan and Patrick the opposite. Unfortunately, Tony receives a Spiderman action figure on his birthday, and Patrick receives a Superman action figure. Will they trade? Probably not, because once Tony possesses a Spiderman figure, trading it away gets coded as a loss, likely to seem too dear compared to gaining a Superman action figure unless the initial preference difference is very great.

Such differences reflect what is called a framing effect: losses and gains are counted relative to the baseline from which decision makers see themselves as starting. Framing effects are errors of reasoning because the underlying situation is the same no matter how framed. Tony would enjoy the Superman figure more and Patrick the Spiderman figure more if only they could get past their loss aversion and make the trade.

Framing effects also influence risky situations. People tend to be risk-averse about gains, preferring a smaller sure thing to a chance at a greater gain; and they tend to be risk-seeking about losses, preferring a risk of a greater loss to a certain smaller loss. One notable study asked a number of ambulatory patients, graduate students, and physicians about their preference between two different treatments for lung cancer (none of the subjects actually suffered from the malady) (McNeil et al. 1982). For about half the subjects the results of the two treatments were expressed in gain terms (e.g., survival rates at various points in time), and for the rest in loss terms (e.g., mortality rates). One of the treatments involved a higher risk of early death but greater long-term survival. When the treatments were described in terms of survival rates, subjects tended to shun the risky “early death” treatment; they tended to choose it when the treatments were described in terms of mortality rates.

In principle, people might notice that the treatments could be described either way and thus more objectively think through the matter. After all, the job of analytic processing is to monitor the output of intuitive mechanisms and catch shortcuts when they seem likely to do serious mischief. One problem, though, is that analytic processing inevitably misses a lot. Constant checking would require far too much of our cognitive resources.

Another problem is that analytic processing generally is unaware of specific glitches such as loss aversion or sunk costs. For instance, proceeding in a broadly analytic way, one might happily register a sunk cost as a real loss in a pro-con list. Sophisticated analytic practice involves not only an elaborated quantitative or narrative approach but alertness to specific traps.

The Neglect of Intuition at Its Best

Without question quick, intuitive decision making serves well to move us through the minor decisions of the day. When the stakes are low, and even more so when we have experience to build on, why think hard about a decision? If we are occasionally wrong, so what? We can hope to learn from our mistakes and do better next time.

However, for more important decisions, the moral of the story so far might seem to be “analytic: good; intuitive: bad.” To avoid Simon’s limited rationality bottleneck, decision makers should fight the intuitive impulse and adopt well-elaborated narrative and quantitative approaches with the help of support systems from pencil-and-paper to spreadsheets.

A significant body of research, however, shows that sometimes intuitive decisions are better. Malcolm Gladwell’s (2005) popular book Blink reviews a range of such examples, albeit somewhat uncritically. In general, intuition can serve well when the decisions have a personal character, for instance preferences for food, art, or perhaps even another person to spend one’s life with. For example, in laboratory experiments where subjects were invited to choose a jam they liked or a poster to hang in their apartments, subjects asked to approach the task intuitively showed greater satisfaction later on with their choices than did subjects asked to take an analytic approach (Wilson et al. 1993; Wilson and Schooler 1991).

Why is this? Often we do not have precise access to our reasons for liking something. More broadly, in some areas of life we cannot simply read off the tops of our minds the many tacit goals that guide our behavior. Adopting an analytic approach can disrupt our intuitive attunement to our deep interests, leading to worse decisions. (These tacit goals may themselves invite surfacing and reconsideration, but that is another story.)

Research also argues that an intuitive approach often better serves when people have well-developed experience in a domain. Studies of experts have yielded a model called recognition-primed decision making (RPD) (e.g., Klein 1998; Lipshitz et al. 2001). Experienced firefighters, ship commanders, tank platoon leaders, and so on tend to make decisions by sizing up a situation rapidly in order to understand it. From this, a good prospective solution quickly and naturally emerges. When the expert has doubts, instead of brainstorming many solutions and comparing them, he or she typically adopts a narrative tactic of limited scope, assessing the solution at hand by doing a mental simulation and, if the solution seems wanting, reaching for another prospect. Especially in time-pressured circumstances or where efficiency is important, the point is not so much to choose the absolute best among all possible options as it is to arrive at a good workable solution and move forward.

RPD should not be seen as limited to professional expertise. In many of the ordinary activities of life—choosing a smart alternative route when you encounter a traffic jam on your usual commute, deciding whether to try to fix a plumbing problem yourself or call the plumber—RPD may well figure effectively. When my wife and I were buying Oriental rugs in Greece, we had enough experience to recognize a risk of overpurchasing. Perhaps next time our recognition will be better primed!

Does this suggest a moral directly opposite “analytic: bad; intuitive: good”? Do we just need to develop enough knowledge and experience in the domain in question to underwrite intuitive decision making? As a general policy this will not work. Many life decisions occur only now and then, with most people unlikely to accumulate the rich expertise that could drive sound intuitive decision making.

Moreover, considerable research shows that when multiple reasonably clear criteria bear on a decision people are not good at intuitively integrating their impressions into an overall judgment. For instance, college admissions processes that depend on people sizing up candidates are markedly less predictive of academic success than adding up scores for the multiple criteria, even when some of the individual scores are themselves subjective judgments (Dawes 1982). Human judges lean too much toward seeing each case as individual and foregrounding this or that attractive or unattractive feature. Unfortunately this makes the final judgments worse, a trend difficult to convince people of because we are enormously attracted to the idea that nuanced holistic judgment always wins over adding up numbers (Galotti 2002, ch. 5).
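The score-adding alternative that Dawes describes can be sketched as follows. The applicants and their criterion scores are invented, and standardizing each criterion before summing is one simple way to make differently scaled scores comparable; it is offered here as an illustration, not as the procedure used in the cited studies:

```python
# Sketch of Dawes-style "adding up scores" for an admissions-like judgment.
# Applicants and numbers are made up for illustration.

from statistics import mean, pstdev

def standardize(values):
    """Convert raw scores to z-scores so criteria on different
    scales (GPA, test score, essay rating) add up fairly."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Each applicant scored on: GPA, test score, essay rating.
applicants = {"ann": [3.9, 700, 4], "bo": [3.2, 760, 5], "cy": [3.5, 650, 2]}

# Transpose to one column per criterion, then standardize each column.
criteria_columns = list(zip(*applicants.values()))
z_columns = [standardize(col) for col in criteria_columns]

# Equal-weight sum across standardized criteria for each applicant.
totals = {name: sum(col[i] for col in z_columns)
          for i, name in enumerate(applicants)}
ranked = sorted(totals, key=totals.get, reverse=True)
```

The mechanical ranking deliberately ignores the “this case is special” impressions that, per the research above, tend to make holistic judgments less predictive.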

Summing Up the Challenges of Decision Making

Decisions often involve a range of options not apparent at first, as well as consequences of character, magnitude, and likelihood that are easily missed or mistaken. Quantitative and narrative styles of representation help us to manage the complexity, but they are useless if we do not seek out that complexity. Instead, intuitive impressions rule. Often they are not well grounded in rich experience and personal sensibility. Often, misleading heuristics and biases are in the mix. Thus, thoughtful, careful decision makers must attend to and watch out for a number of factors.

However, relatively intuitive approaches sometimes yield good service, and more elaborate analyses sometimes yield no better choices and occasionally worse ones by disrupting sound intuitions. The tradeoffs between relatively more analytic and intuitive approaches as well as between quantitative and narrative approaches lead to the question: What in the end is good decision making? If we want to educate people to make better choices, what practices should we be cultivating?


The quality of decision making can be evaluated in two ways: by examining how well particular choices work out and by examining the processes leading up to those choices. The first approach is, in a sense, the final word on the matter. However, systematically applying it is hard.

For one problem, decisions often fall short or indeed succeed because of unforeseen and not reasonably foreseeable events. Moreover, we often have no sure way of knowing what would have happened had we chosen differently. If a year ago we bought stock A rather than stock B, today we can look up how stock B did; but if we took job A rather than job B, who’s to say how job B would have gone?

Furthermore, another troublesome cognitive illusion, hindsight bias, makes it difficult to judge what choice should have been made earlier knowing only what one knew at the time. People drastically overestimate the before-the-fact likelihood of events they know have actually occurred, and they also tend to view outcomes as unsurprising—“everyone knows that!” In one study, students read accounts of obscure historical events with different outcomes reported for different students and one group of students receiving no information about outcomes (Fischhoff and Beyth 1975). Asked to estimate the likelihood of various outcomes before the events, students who read that a particular outcome had actually occurred assigned it a much greater probability than did the no-outcome group. In another study, Teigen (1986) had one group of students rate the “truth” of a proverb (e.g., “absence makes the heart grow fonder”) while another group rated the “truth” of its opposite (e.g., “out of sight, out of mind”). Each proverb and its opposite generally earned “true” ratings.

In summary, defining good decision making by whether the particular choice actually works out well is a problematic approach. We do better to examine the character of the decision-making process itself.

The Limits of Utility Models

As mentioned earlier, another way of characterizing good decision making involves the idea of expected utility. The utility strategy calls for constructing a tabular representation rating each option on each criterion, with the criteria weighted for their importance. If all this is done right, the decision recommended by the model can be shown to maximize utility for the decision maker (e.g., Baron 1988). Accordingly, the utility model might be a tempting normative standard for good decision making, providing a rigorous foundation for educational efforts.

However, the utility model does not offer the universal answer it may seem to at first. The model leaves out its sometimes high cost-in-effort and also assumes the decision maker’s capability and available time. Moreover, the conclusion that utility will be maximized requires several background assumptions. For instance, all relevant options and criteria need to be in play, yet decision making often suffers from neglecting a less obvious option or criterion. Decision makers’ ratings have to reflect their true goals and values, which sometimes are not consciously accessible.

Also the criteria need to add to the overall utility independently of one another rather than interacting. However, one characteristic commonly enhances or undermines the value of another. For instance, in a stereo system the contributions of speaker quality and amplifier quality are not independent: Higher-quality speakers contribute less without a good amplifier to drive them.

Such reservations do not challenge the considerable utility of the utility model, which often proves valuable in situations with several criteria and many tradeoffs across different options. This can easily happen for competing purchase decisions or job opportunities with different advantages and disadvantages. The quantitative/tabular approach provides a way to represent and review the situation, even when the ideal conditions for its application are not met. The issue is not whether the utility model is a good tool in the toolkit—it is—but whether it provides a universal normative model for good decision making—it doesn’t.


If the utility model does not offer the perfect account of a good decision-making process, what does? Perhaps no one template will suffice. Effective strategy requires sizing up the situation; it requires “deciding how to decide.” Good decision making is thus a situation-responsive double balancing act: intuitive against analytic, and narrative against quantitative, with methods suited to the circumstances.

That is, good decision making is an adaptive response to the particulars (Payne et al. 1993), which might help to explain the finding from Galotti and colleagues (2006) that students’ self-reported decision-making styles (e.g., more intuitive versus analytic) were not predictive of the number of options or attributes they considered in evaluating college majors. The demands of the specific decision may influence what people do more than their stylistic predilections. This would fit the general social science observation that behavior is determined more by situational characteristics than by personal characteristics (Ross and Nisbett 1991).

Deciding how to decide can be an almost reflexive or a highly deliberative matter. In any case, it often will lead the decision maker to a middle level of investment in decision making that neither risks an extremely intuitive decision nor bears the high costs of the most elaborate styles of analysis. For example, one line of experimentation asked subjects to consider options such as possible apartment rentals, applying a number of criteria to a range of alternatives. Faced with many options, people typically will quickly eliminate most of them as failing to meet one or another condition they view as necessary (for several similar examples, see Galotti 2002, ch. 4). A shopper might cross off any apartment on a busy street and any apartment over a certain cost per month. Such tactics reduce the options to a few that invite closer consideration. Strictly speaking, the elimination process is a normative mistake unless the elimination criteria are truly necessary conditions. The strategy risks missing the rare hidden gem; for example, an over-budget apartment so beautiful and convenient that one rethinks one’s budget. However, the strategy makes the decision process much more manageable.
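The elimination tactic itself is simple enough to sketch in code. The Python fragment below is an invented illustration, with hypothetical apartments and thresholds: cross off any option failing a condition treated as necessary, leaving a short list for closer narrative or quantitative consideration.

```python
# A sketch of elimination by "necessary" conditions. Data and thresholds
# are invented for illustration.

def eliminate(options, conditions):
    """Keep only the options satisfying every condition."""
    return [o for o in options if all(cond(o) for cond in conditions)]

apartments = [
    {"name": "elm_st", "rent": 1400, "busy_street": False},
    {"name": "main_st", "rent": 1300, "busy_street": True},
    {"name": "oak_ave", "rent": 1900, "busy_street": False},
]

conditions = [
    lambda a: a["rent"] <= 1500,     # budget cap
    lambda a: not a["busy_street"],  # quiet street only
]

shortlist = eliminate(apartments, conditions)
# Note the risk: the expensive but possibly gem-like oak_ave never gets
# a closer look, because the budget cap is treated as truly necessary.
```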

A narrative approach also can bring the decision maker to a middle level of investment, one that is more than a quick take but much less than a saga. Recall how research on expert decision making reveals clear patterns of narrative elaboration, such as testing promising solutions with mental simulations (Klein 1998; Lipshitz et al. 2001). Other strategies for making the most of one’s narratives without letting them become hideously complex include settling time, such as “sleeping on it” or taking a walk, and repeated rehearsal to reappraise a narrative. I remember one person telling me that he talked over decisions with his cat. A deliberate counternarrative, such as the defense’s response to the prosecution, is another tactic and a good way to test the integrity of a narrative.

Narrative and quantitative modes of representation involve tradeoffs that affect how well they suit a particular situation. Narratives can deflect needed critical examination through the neatness of the stories they tell. Studying informal everyday reasoning, Perkins et al. (1991) argued that people taking positions on controversial issues often displayed a makes-sense epistemology, suspending critical judgment as soon as they arrived at one way of telling the story that made sense. Also, narrative forms quickly become unmanageable as they incorporate more dimensions and details. Quantitative/tabular forms can accommodate many options and criteria readily, if tediously, by adding rows and columns.

On the other hand, even simple quantitative forms are hard to track without the help of pencil and paper or a spreadsheet. The causal-intentional character of narrative and counternarrative supports memory. Also, a narrative embodies an explanatory account of the promise of an option that might be more revealing than toting up scores.

Some situations clearly lend themselves more to a quantitative or narrative style. Apartment features like price, number of rooms, and location can be evaluated relatively independently of one another for their contribution to the overall desirability of an apartment, thus inviting a quantitative style. In contrast, to evaluate a poem by scoring it on independent dimensions of rhythm, metaphor, allusion, and so on would be odd: too much depends on how the various features integrate with one another.

Finally, mixed modes are often natural. For instance, a narrative style may morph into a quantitative style as more and more factors surface for consideration and demand organization. Alternatively, a quantitative exploration might reveal interacting factors that crystallize into a narrative more compelling than a tally of scores. Deciding how to decide is not just a single choice as one tackles a decision but a choice remade as ideas and information develop.

With intuitive and analytic approaches and narrative and quantitative modes of representation available, what factors inform a good double balance? At least three are worth considering: stakes, knowledge, and personal resonance.

The Significance of Stakes for Balanced Decision Making

Stakes refer to the import of a decision for oneself and relevant others. Stakes vary enormously, from trivial, such as what flavor of ice cream to order for dessert, to substantial, such as whom to marry or whether to accept an enticing job offer. In the double balancing act of decision making, the basic guideline is simple: High stakes recommend more investment in narrative and/or quantitative analysis to improve the quality of the decision, assuming available time and information.

High stakes do not necessarily favor a quantitative style with options, attributes, and ratings. The appropriateness of an analytic approach depends on the character of the decision. However, high stakes do caution against leaving either the quantitative or the narrative mode out of the picture. The two offer different perspectives, they do not necessarily yield the same recommendation, and they can function as checks on each other, with discrepancy an invitation to deeper processing.

Although high stakes recommend significant analysis, significant does not necessarily mean maximal. Often more and more investment in thinking through a decision encounters sharply diminishing returns. On the narrative side, efforts to explore complex stories and counterstories can proliferate endlessly in the absence of key information. One example of this is the construction and appraisal of conspiracy theories around the Kennedy assassination. Nor do we need the grand scale of national events to doubt whether an elaborate tale with shaky grounds reveals what’s really going on. Everyday institutional life in corporations and academic institutions displays plenty of tenuous tale spinning.

Elaborate quantitative approaches also commonly show diminishing returns. In principle, good decision making calls for keeping even seemingly weak options on the checklist: in the search for an apartment, an initially unlikely prospect might prove competitive because of high scores on features neglected in a first impression. However, if the initial culling of a long list is not too aggressive, this is unlikely. In principle one can improve the quality of a decision by carefully weighting criteria or estimating the probabilities of various options succeeding—but if one cannot reliably do so, one may just be introducing noise into the analysis. Also, criteria are often correlated with one another, in which case relative weighting does not matter so much. Dawes (1982), studying the use of utility models to predict graduate school performance based on admissions criteria, found that different weightings of different criteria—say grade point average and GRE scores and certain interviewer impressions—did not matter much because the criteria overlapped a lot. One might as well keep it simple and weight them all equally.
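Why relative weighting matters so little for correlated criteria can be seen in a toy simulation. The Python sketch below uses wholly invented data: two “admissions criteria” sharing a common ability component, so that equal and skewed weightings produce composites that rank candidates almost identically.

```python
# A toy illustration of the equal-weighting point: when criteria are
# substantially correlated, different weightings yield nearly the same
# composite. All data are simulated, not drawn from any real study.
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two correlated criteria per candidate: each equals a shared ability
# component plus independent noise.
ability = [random.gauss(0, 1) for _ in range(500)]
gpa = [a + random.gauss(0, 0.5) for a in ability]
test = [a + random.gauss(0, 0.5) for a in ability]

equal = [g + t for g, t in zip(gpa, test)]                 # equal weights
skewed = [0.8 * g + 0.2 * t for g, t in zip(gpa, test)]    # skewed weights

r = pearson(equal, skewed)  # typically well above 0.9 here
```

With criteria this correlated, the two composites order candidates nearly the same way, so agonizing over the weights buys almost nothing.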

A final strategic point for deciding how to decide says that the stakes might not be quite as high as they seem. For some decisions, how one pursues the opportunities of a choice once made may be more important than which among reasonably promising options one selects. For example, the “right” college or graduate program seems as much a matter of what one makes of it as which one picks.

The Significance of Knowledge for Balanced Decision Making

Knowledge, experience, and understanding of a situation are tremendously important in the balancing act of good decision making. Fundamental to the pursuit of sound choices is taking stock of what one knows and critically appraising it. For example, in purchase decisions people often assume a strong correlation between price and quality. However, such relationships vary considerably. In product areas where standards are enforced and prominent brand names figure, price differences are likely to reflect what manufacturers spend on advertising more than anything else. Buy generic drugs!

Consulting with others potentially more knowledgeable is a common recourse in decision making—but it also requires critical assessment of the input. Are these others really more expert than you? Do they have any reason to be biased? Would you do better to trust the most expert among them or go with the trend of their responses (on “the wisdom of crowds,” see Surowiecki 2004)? Individual consultations aside, the age of the Internet makes certain kinds of information very accessible but not necessarily very reliable. For example, a quick Internet search for the promise of weight-reducing drugs reveals a cascade of strong claims in scientific language, but the claims are roundly debunked by reputable medical Internet sites. Assessing reliability remains a crucial part of the process.

Besides the search for objective information, there is the knowledge of lived experience. Rich experience in a domain makes possible and even recommends a somewhat intuitive narrative mode, especially when the stakes are low to moderate or time is a factor. Expertise enables the decision maker to build quickly an understanding of the situation from which a viable path forward emerges. “Expert” need not mean professional expertise but simply the sort of experience a serious weekend gardener or experienced commuter might attain. Byrnes (1998, 38) notes four sources from which decision makers can draw options: memory, analogical reasoning, causal reasoning, and advice. The first three benefit from a reasonable measure of experience in the area in question.

Experience can, however, prove entrapping: The better constructed the box, the harder to think outside it. So long as the solution is somewhere “in the box,” experience should help. However, sometimes the solution lies elsewhere. The history of innovation includes many examples where entrenched belief systems stood in the way of novel theories—for example, the theory of continental drift and the bacterial theory of ulcers offered much-resisted counternarratives to the received narratives of their times. For more personal decisions, seriously pursued counternarratives can help people shake a false sense that they know the score. If you’re afraid that leaving the formality of the workplace for the freedom of retirement will leave you at loose ends, disorganized and with little to fill your days, ask yourself to envision in some detail what sort of life you might construct. If you have always felt you would be uncomfortable in an urban environment, ask yourself to envision in some detail what life might be like in this urban setting today.

With some choices, we don’t have enough pertinent experience to tell ourselves reliable stories. We are rare visitors to many of life’s puzzles. Not often do we face major medical or care decisions for aging parents. Gut responses are likely to be unreliable. When we are not well-oriented, deliberately seeking out information and elaboration is especially important, and the more detached quantitative style, even if only in the form of a pro-con list, provides a way of organizing what information we do have or can gather from friends and other sources. This information can then be supplemented by whatever sensible stories we can tell ourselves. Also, we can try to remind ourselves about the sorts of heuristics and biases discussed earlier, lest they lead us to construct superficially persuasive but poorly grounded stories.

Situations where we are “visitors” shade into even more challenging situations where we find ourselves profoundly disoriented, unsure about how to characterize the problem at hand. Consider a medical situation where the doctors themselves do not know what is going on, or financial anomalies whose cause is unclear and could be the result of computer error or fraud or sloppiness. In such circumstances, a quick decision might be ill advised. Deciding how to decide might best be a matter of deciding not to decide yet! If doing so is safe, stall, buying time to build a better understanding. If decisions of some sort must be made, sometimes they can take the form of hedges or choices that can later be reversed.

Research on expertise warns that we sometimes are not the experts we think we are. Overall, people’s judgments in college admission processes actually are not as predictive of academic success for admittees as simply adding up scores (Dawes 1982). Extensive experience with an activity does not always add up to discernment. Consider how many people never get good at sports or games they regularly play. Or consider how some people have a pattern of troubled relationships, falling into the same pitfalls over and over. Investigators have singled out some of the characteristics that help us build reliable judgment from extended experience: recurrent rather than unique situations, a greater focus on things than on human behavior, feedback available regarding success rather than no feedback, stable versus dynamic situations, and the like (Shanteau 1992).

The Significance of Personal Resonance for Balanced Decision Making

For some kinds of decisions more than others, one’s personal intuitions about the situation have an intrinsic relationship to the desired outcomes. Compare investing $10,000 in the stock market with spending $10,000 on a painting for your living room. If you are an experienced investor, you might have a feel for the merits of the stock, but the ultimate aim is financial return. In contrast, the ultimate aim of purchasing the painting is to acquire an object that you will still enjoy looking at many years later. Your feelings about the painting today are a sample of what you ultimately want out of it.

An intrinsic relationship between one’s intuitive feelings and the matter at hand informs many life decisions, for instance regarding the relationships one sustains, the place where one lives, or the profession one pursues. One can compose a narrative or quantitative analysis for any of these high-stakes decisions, but to make such choices without asking whether one liked the other person, the dwelling, or the profession would be odd. In such cases, the intuitive voice of personal resonance (or dissonance) is a tremendously important data point, often approaching a necessary condition.

When personal resonance has special bearing, good narrative elaboration may serve the decision process better than quantitative/tabular elaboration. Consider how poetry or prose can capture essential personal qualities. In contrast, decisions regarding stock investments, insurance policies, and washing machines have a kind of neutrality that invites treating them largely in terms of objective advantages and disadvantages.

Balanced Decision Making in Summary

Good decision making can be seen as a double balancing act. The decision maker decides how to decide, adopting a more intuitive or analytic approach in a more narrative or quantitative or mixed style suited to stakes, knowledge, and personal resonance.

  • High-stakes decisions recommend a more analytic approach with both narrative and quantitative/tabular representations to crosscheck one another but with caution about diminishing returns for extended elaboration.
  • Extensive knowledge and experience enable more effective use of intuition and narrative modes of representation—but with awareness that the situation may require “thinking out of the box” in ways that challenge one’s knowledge and experience.
  • Personal resonance gives a special priority to personal intuitive reasons—not because they offer a more accurate read on an objective reality but because they represent personal affinities or aversions that are likely to continue to color one’s experience of the path taken.
  • Finally, even a relatively intuitive decision of the moment deserves due analytic consideration of commonly overlooked matters such as opportunity costs, cognitive illusions (e.g., loss aversion), and biasing influences (e.g., social pressure).

Although such principles and their refinements offer no calculus of normative correctness, they do show how the thoughtful decision maker might take stock and choose a smart approach. However, as the saying goes, advice is cheap. What does it take to put good decision-making practices into action in learners’ everyday lives?


Fostering better decision making might seem simply a matter of informing learners about how to play the game. Here are the good moves and a little practice, now go forth and decide better! However, information and exercise alone are not likely to do the job.

First, good decision making, as with any everyday kind of thinking, is highly dispositional. Appropriate knowledge and skills need the company of alertness to occasions and readiness to engage them seriously and thoughtfully. Several lines of research argue that sophisticated thinking depends not just on abilities but dispositions, with people commonly performing below capacity for lack of commitment or sensitivity to occasion (e.g., Dweck 2000; Perkins and Ritchhart 2004; Perkins et al. 2000; Stanovich 1999). Individuals vary considerably in their openness or aversion to the sorts of ambiguities and complexities that emerge in working through a difficult choice, as gauged by such dispositional indices as need for cognitive closure (Kruglanski and Webster 1996) or need for cognition (Cacioppo et al. 1996). For instance, tricky, emotionally loaded decisions can prompt costly procrastination during which the problem becomes greater—especially when the decision maker postpones the decision without using the additional time to gather information or counsel.

Besides appropriate dispositions, good decision making calls for self-regulation (Byrnes 1998). Effective decision makers decide how to decide in particular situations, monitor their progress, perhaps revise their strategies in midstream, and take stock afterward to improve future practice. Accordingly, interventions should foster relevant skills and knowledge, cultivate positive dispositions, and develop metacognitive self-regulation of decision-making practices.

Byrnes (1998) among others has pointed out two fundamentally different approaches to cultivating better decision making. One targets high-priority areas such as adolescent sexuality, conflict and violence, drug and alcohol use, or parenting practices. The other aims at teaching general decision-making skills and dispositions.

Teaching Decision Making in High-Priority Areas

Adolescence offers an attractive zone for interventions because of patterns of adolescent risk-taking. Byrnes cautions that the surge in risk-taking does not appear to reflect anything about the teenage mind as such but rather the reduction of parental controls during that period. Indeed, adults show similar patterns of risk-taking. However, adolescents, as a group still in school, provide a convenient and important intervention population.

Programs focusing on adolescent sexuality and risk provide a class of well-researched examples of the targeted approach. Reyna et al. (2005) provide a convenient review in the context of discussing a particular developmental theory of decision making. “Just say no” programs that push abstinence appear not to work well; their impact on sexual restraint and on taking precautions is minimal. Moreover, abstinence programs can hardly be considered efforts to foster decision making. Instead, they promote a particular decision. More effective programs engage the student in understanding the risks of sexual activities and the ways of protecting oneself and one’s partners. Several such programs have been shown to produce near-term effects on rates of abstinence and use of protection.

Often the impact fades in a few months. However, the authors particularly like a program called Reducing the Risk: Building the Skills to Prevent Pregnancy, STD, and HIV (Barth 1996), for which studies suggest somewhat longer-term effects. Reyna et al. (2005) foreground some critical features of the program’s pedagogy. Students learn what the dangers are, what can be done about them, and their own capabilities to act effectively. Role-playing and other techniques help develop in-the-moment dispositions and capabilities. “Social inoculation” equips students to deal effectively with social pressure. All this unfolds over a considerable period of time.

What makes such interventions effective? They speak more to intuitive decision making in a particular context than to analytic decision making in general. They equip learners with a range of basic understandings that offer clear immediate implications for sexual caution. Reyna et al. (2005) characterize this as a matter of accumulating and using appropriate “gists,” relatively simple ways of understanding situations that allow ready inference of appropriate courses of action.

For another example, the Resolving Conflict Creatively Program (RCCP) focuses on violence prevention with elementary school children (Aber et al. 1998). Some fifty lessons foster attitudes and skills around deciding how to handle conflict situations, including building cultural awareness and sensitivity to interpersonal and intergroup relationships. The methods include examples, role-playing, discussion, and reflection.

Propensity to violence increases considerably during the elementary school years. Systematic research on RCCP implementations suggests that the program dampens this trend for many learners while not reversing it. Results were measured in multiple ways: hostile attributions to hypothetical scenarios, response choices to hypothetical scenarios, reported aggressive fantasies, depression, and actual conduct problems. Results varied depending on gender, age, depth of implementation, and other factors (Aber et al. 1998; Aber et al. 2003).

RCCP and similar programs typically emphasize narrative approaches to decision making. Less commonly used are coolly analytical tools such as pro-con lists or tables of options crossed with dimensions of evaluation. Reyna et al. (2005) argue that in high-risk areas such as sexual behavior efforts to reason through complex tradeoffs of risks and gains can be counterproductive. More insightful gists, broad understandings that strongly recommend cautionary practices, afford better management of one’s behavior. The learning process in these programs develops practical and prudent expertise and uses role-playing rehearsals of highly charged situations to address not only knowledge and skill but dispositions and self-regulation. In doing so, such programs appropriately treat decisions around sexuality and violence as high stakes (e.g., the risk of venereal disease or injury), requiring knowledge and experience (built up in part through gists and role-playing), and reflecting positive and negative personal resonance (sexual attractions or surges of temper addressed through discussion and role-playing).

Targeted instruction in decision making in high-priority areas can work, even in areas as sensitive and gland-driven as adolescent sexuality and violence. But the strength of such programs is also their weakness: By intent and design they target particular areas rather than general decision-making capabilities and dispositions. Many of the important decisions people face are not in chronic, high-risk areas but are encountered only now and then. Developing a full-scale program of instruction for every type of decision would not make sense.

Teaching Decision Making as a General Craft

Byrnes (1998) reviews three programs designed to cultivate the general craft of decision making, drawing on an earlier review of efforts to teach adolescents decision making by Beyth-Marom et al. (1991). The GOFER program’s emphasis is clear from its acronym: Goals clarification, Option generation, Fact-finding, consideration of Effects, and Review (Mann et al. 1988). The Personal Decision Making program emphasizes a decision-making process of five steps: identifying alternatives, formulating criteria, applying the criteria to alternatives, summarizing the results, and self-evaluation (Ross 1981). The Odyssey program to teach thinking is an extensive intervention of one hundred lessons, including a ten-lesson unit on decision making, with attention to anticipating outcomes, acquiring and assessing information, articulating preferences, and applying weights to dimensions to manage complex decisions (Herrnstein et al. 1986).

All three programs address adolescents. All consist of a few lessons on decision making to be taught in a school context. All emphasize application of multiple attributes to options in order to score them. All operate as independent mini-courses within the curriculum. All include the teaching of specific strategies or procedures toward understanding the decision situation, identifying options, examining consequences, and evaluating options in terms of their consequences. They seek to help learners develop the basic skills and dispositions of an analytic, quantitative style of decision making.

Some interventions with a similar strategic emphasis engage the rhythm of schooling in a different way. The literature on the thinking-skills movement makes a distinction between standalone and infused interventions (e.g., Swartz and Perkins 1989; Perkins 1995, ch. 8). Whereas standalone interventions constitute courses or mini-courses of their own, infused interventions combine the cultivation of thinking with learning in the disciplines.

Robert Swartz and his colleagues have worked on infused models of the teaching of thinking for many years (Swartz and Parks 1994; Swartz et al. 2007). Their approach includes considerable attention to decision making, fostering attention to options, consequences, costs, and related matters. Graphic organizers often are used to help learners express and order their thinking around content-related issues. In one typical application, students take on the role of Harry Truman pondering whether to drop atomic bombs on Hiroshima and Nagasaki (Swartz et al. 1998; for a brief account and graphic organizer, see Swartz 2000). The students are allowed to reason only with information available up to 1945. Intervention at early grades is possible too. A sample lesson for first graders uses Horton Hatches the Egg by Dr. Seuss and a simplified decision-making strategy—What could we do? What would happen if we did those things? What’s the best thing to do?

The designers of infused programs would not expect occasional decision-making activities in history or reading to make learners better decision makers. Rather, they encourage frequent and consistent use of such practices across a variety of disciplines, aiming both to deepen disciplinary learning and instill decision-making practices that will spill over to other disciplines and to learners’ everyday lives.

How confident can we be that such initiatives, standalone or infused, have the desired impact? A prior question concerns the entire enterprise of the teaching of general thinking skills and dispositions. This issue has been controversial (Anderson et al. 1996; Brown et al. 1989; Perkins and Salomon 1987, 1989). Some have argued, using concepts like situated learning (e.g., Lave and Wenger 1991), that enhancing cognitive capabilities in a general sense is unlikely. Effective initiatives need a contextual focus, as with the programs on adolescent sexuality and violence. For a while, the debate took a polarized “can’t be done / can be done” form. My sense is that a more moderate position now prevails. In any case, I have argued elsewhere that the literature now offers a number of clear empirically grounded cases of enhancing various general cognitive skills (Grotzer and Perkins 2000; Perkins 1995; Ritchhart and Perkins 2005).

However, decision making per se has not been the focus of the best research. Byrnes (1998) observes that the evaluations of the GOFER, Personal Decision Making, and Odyssey programs all included some positive results—but on measures that emphasized students’ knowledge of decision making. Researchers made no post-test effort to inventory students’ decision-making behavior in real-world contexts or to engage them in simulated decision-making processes.

One can hope that the students’ gains in knowledge and understanding would translate into shifts in practical behavior, but Byrnes warns that the pedagogical approaches taken seem somewhat didactic. Ideas about transfer of learning point to features that should enhance impact: metacognitive reflection; intensive simulated rehearsal, such as is characteristic of the targeted approaches; and direct encouragement to apply the ideas widely, such as log-keeping activities around everyday decision making (e.g., Perkins and Salomon 1989; Salomon and Perkins 1989).

Many examples of infused approaches exist, but little research has been done in this area because embedding thinking strategies in disciplinary learning introduces many hard-to-control variables. Two kinds of outcomes—enhancement of disciplinary understanding and enhancement of thinking—are relevant, and even without controlled studies we can reasonably claim that the extensive use of infused decision making or other thinking practices almost certainly fosters disciplinary understanding. Extensive research has demonstrated that thoughtful elaborative processing of content builds understanding (e.g., Cohen et al. 1993; Meyer and Land 2006; Wiske 1998).

On the other hand, the impact of infused approaches on students’ general decision-making practices lacks rigorous evidence one way or the other. Informal reports of deeper writing about decision situations are encouraging, as are frequent anecdotes from students about how they applied one or another technique. However, these do not offer direct evidence of lasting general changes in decision making. Although inconvenient to investigate, this is an area that would be worth the effort.

To summarize, pedagogical approach is important. Didactic styles are likely to be less effective than approaches that involve metacognitive reflection and simulations. Fairly frequent activities over a considerable period of time are likely to have greater impact than a brief unit. Approaches that focus on single disciplines or topics are likely to be less effective than approaches that deliberately range across disciplines and invite learners to make connections to their out-of-school lives, fostering transfer of learning.

Also, typical interventions such as GOFER lean toward the quantitative/tabular style of analysis, with options and attributes and ratings or checklists. They also neglect the importance of regulating decision making according to the critical dimensions of stakes, knowledge, and personal resonance, generally assuming medium to high stakes and making the necessary knowledge available; not much attention is paid to sizing up the circumstances and deciding how to decide. The programs also often do not directly deal with typical heuristics and biases, including social influences like bandwagon effects. Accordingly, we might look toward interventions that take a more balanced decision-making approach, with learners encouraged to develop positive dispositions and self-regulation regarding which approach or what mixture applies when.


Better decision making has not just personal but social importance. The participative character of democracies calls for informed citizens who thoughtfully consider matters of policy and justice, adding to the collective wisdom by voting, campaigning, protesting, and engaging in informal discourse. One of the principal goals of education is generally taken to be preparation for such roles.

Thus, however schools address decision making in general, they would do well to engage the peculiar dilemmas of personal decision making for the public good. Imagine, for instance, that civic engagement were approached much the same way as programs on adolescent sexuality and violence, with assiduous attention to naive beliefs, social inoculation against misleading pressures, simulation activities to foster behavioral patterns of thoughtful civic participation resistant to undue influence, and so on.

Interventions with such a clear social agenda should include ample attention to the contribution of social interactions to decision making. One baleful influence is groupthink (Janis 1972), a phenomenon that runs all the way from PTA meetings to national politics. Groupthink is the tendency for group members to influence one another in ways that lead to an artificial and misleading consensus. It’s a matter of following the crowd, deferring to influential figures, and keeping contrary thoughts to oneself when a trend gathers momentum.

Quite the opposite phenomenon is documented by James Surowiecki (2004) in his recent The Wisdom of Crowds. Surowiecki reviews numerous research results and natural circumstances demonstrating that the common trend across a number of individuals often yields remarkably good judgments and predictions—better than most of the individuals involved, even the experts. Cases range from estimating the number of marbles in a jar to predicting elections and economic trends.

The wisdom of crowds is good news for the potential of democratic societies and free markets. But it comes with a seeming paradox: How can we have both groupthink and the wisdom of crowds? The answer is that they reflect different circumstances. Groupthink thrives on close interactive relationships between the people involved, so that opinions can snowball. In contrast, the wise crowd generally involves people not in close touch, with varied sources of information, and reaching independent judgments.
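The contrast between the two circumstances can be illustrated with a small simulation of the marbles-in-a-jar case: when estimates are independent and errors are unbiased, the crowd average beats most individual guessers. The true count, the number of guessers, and the error model below are invented for illustration.

```python
# A toy simulation of the "wisdom of crowds" condition: many independent,
# unbiased guesses about the number of marbles in a jar.
import random

random.seed(0)      # fixed seed so the run is reproducible
true_count = 1000   # marbles actually in the jar (hypothetical)

# 200 guessers, each independently off by unbiased additive noise --
# the independence condition, not the groupthink condition
guesses = [true_count + random.gauss(0, 300) for _ in range(200)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_count)
individual_errors = [abs(g - true_count) for g in guesses]

# count how many individuals the crowd average beats
beaten = sum(err > crowd_error for err in individual_errors)
print(round(crowd_estimate), "-> crowd beats", beaten, "of", len(guesses))
```

Groupthink corresponds to breaking the independence assumption: once guessers hear and follow one another, their errors become correlated, and averaging no longer cancels them out.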

Although students certainly could prepare better for decision making in civic contexts, the challenge goes well beyond individual commitment and capability. Thoughtful independent judgment is not so well served by typical political processes and institutions. Even reasonably committed members of the polity do not find it easy to function in an ideal way. Consider the inordinate burden of understanding complex public issues. Whether the theme is immigration, tax laws, military commitments, or healthcare, the media are replete with debate from powerful figures and interest groups. Every statement has a counterstatement. Misinformation is commonplace, as registered by watchdog groups that fact-check political speeches. Statistics are used to lead and mislead. The principal voices strive for simplistic, compelling stories that will win allegiance. Sources written in the neutral manner of reasoned opinion are often deeply biased.

Now consider the challenge from the perspective of balanced decision making and the three factors: stakes, knowledge, and personal resonance. In civic engagement, stakes often have the unfortunate configuration of low personal but high public significance. Many individuals correctly judge that large-scale policy or leadership decisions will not much affect their personal lives. Even when the personal impact could be great, they note that their opinion or vote is a drop in the bucket; their expected return on investment of effort is very low. As to knowledge, most people cannot be deeply knowledgeable about the welter of issues in play, and efforts to acquire some basic and reliable grounding are frustrated by the adversarial and often manipulative character of political PR. As to personal resonance, the natural turn of public controversy is to personalize everything with broad appeals to identity and emotion, even for issues that would better be considered in detached, analytic ways. In summary, the typical pattern of public discourse around major issues and offices is singularly unsupportive of thoughtful citizenly engagement; in many ways, it actively undermines it.
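The "drop in the bucket" point is, at bottom, an expected-value calculation. The figures below are entirely hypothetical; they serve only to show why the arithmetic so often comes out against investing personal effort, even when the public stakes are enormous.

```python
# Toy expected-value sketch of the "drop in the bucket" argument.
# All numbers are hypothetical, chosen only to illustrate the shape of it.
p_decisive = 1e-7          # chance that one vote changes the outcome
personal_benefit = 5000.0  # dollar-equivalent value to the voter if their side wins
cost_of_informing = 20.0   # dollar-equivalent cost of becoming well informed

expected_return = p_decisive * personal_benefit
print(expected_return)                      # a tiny fraction of a dollar
print(expected_return < cost_of_informing)  # True: the effort "doesn't pay" individually
```

This is precisely the unfortunate configuration described above: the personal expected return is negligible while the collective importance is great, so individually rational effort-accounting undercuts civically valuable engagement.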

To develop citizenly skills and dispositions while neglecting large-scale patterns that undermine thoughtful engagement would be shortsighted. To get results, we need to attend to both. One can envision institutions that help rather than hinder civic engagement by clarifying stakes, providing reliable knowledge, and handling personal resonance in an honest way. Imagine, for instance, that major newspapers, regardless of their political leanings, routinely made available fact checks of political statements, with lead-ins always on the front page. Imagine that Google provided prominent front-page links to such analyses for every user who did not explicitly opt out. Imagine that the media took a cue from the way technology products are often analyzed in reviews, organizing summaries of issues with tables itemizing factors and briefly characterizing features. Occasionally one sees this, but it could be commonplace. Imagine that important and orienting factual information—not information under significant debate—were widely and actively “advertised” to produce better public calibration around important issues (because the reality is that most people remain seriously underinformed even when critical information is widely accessible).

Ideas such as these bring with them many challenges. The point is not that any one of these ideas is a magic bullet but that educating for more thoughtful civic engagement makes little sense if the social institutions that undermine thoughtful civic engagement continue to be disregarded.


Just as the role of decision making in individual and collective endeavor is enormous, so is the research on decision making extensive. Many prominent themes are hardly touched here, including moral aspects of decision making, the significance of time pressure, the complexity of rapidly changing situations, the detection of important decision points handled by default but meriting reconsideration, the role of will and intentions in following through on in-principle decisions (the “New Year’s Resolutions” problem), goal revision in contrast with evaluating options relative to current goals, decision avoidance, regret avoidance as a driver of decision making, decision-making styles, and the intricacies and pitfalls of group decision making.

Although speaking to all these motifs, the literature also displays significant blind spots. Galotti (1995) notes that relatively little research has tracked real-life decision making in nonexperts, her studies of college choice and choice of college major being an important exception. We know little about which of the heuristics and biases investigated in the laboratory come up most in realistic circumstances. Evidence, positive or negative, of the impact of interventions on actual everyday behavior is sparse.

These limits acknowledged, the present exploration advocates for more attention to be paid to the following areas of neglect:

1. Beyond analytical to well-grounded intuitive decision making. Much of the literature on decision making and most educational interventions targeting general decision making treat it as ideally an analytical enterprise. However, research tells us that in certain contexts well-grounded intuitive thinking serves decision making better. Accordingly, what has here been characterized as balanced decision making, with the right balance sought situation by situation as the person decides how to decide, would serve personal and public interests better than a doggedly analytic approach.

2. Beyond quantitative/tabular to narrative styles of decision analysis. Approaches to cultivating general decision-making capabilities tend to foreground quantitative analysis—options, attributes, ratings, scores, or junior versions of the same, such as pro-con lists and tallies. This is often a powerful mode of analysis, but research on decision making suggests that narrative analysis is potent as well. The particular character of the decision can lend itself to one or the other or to a blend.

3. Both high-priority targets and general decision making. While some people hope that the cultivation of general decision-making skills and dispositions will serve well enough, high-priority persistent trouble spots such as adolescent sexuality, drug use, violence, and civic participation almost certainly require targeted treatment. Good practice depends too much on context-specific knowledge encoded in ways that make it fluently available in the moment, and on managing tricky aspects of personal resonance, for anyone to expect that general decision-making skills and dispositions will powerfully inform such areas.

However, personal and public life include a myriad of medium-to-high-stakes decisions that occur only now and then. One cannot prepare young learners for hundreds of specific decisions case-by-case. Some researchers have claimed that general cognitive skills cannot be significantly enhanced, but plenty of evidence exists to the contrary. Therefore, general decision-making skills and dispositions are important too.

4. Beyond improving people to improving institutions. Improving citizenly decision making tends to be viewed as solely a matter of improving citizens— better people for a better democracy. However, the quality of decision making greatly depends on contextual support as well as the skills and dispositions of participants. In many ways, current public institutions and patterns of discourse undermine rather than sustain good decision making. This invites attention to social innovations that would make thoughtful civic participation more natural and responsible—and the same applies to innovations on a smaller scale, as within corporate, government, and educational institutions.

My recommendation is emphatically expansive: Much more can be done than is being done. We know a lot today about how decision making might go better, and this knowledge provides a platform on which to build. Moreover, the best of current programs to improve decision making, both targeted and general, seem likely to have some beneficial effects.

However, comprehensive efforts to enhance decision making should include more attention to intuitive and narrative approaches. Efforts to improve general decision making should not be seen as substitutes for addressing high-priority targets and vice versa. Making people better decision makers should figure as part of a larger agenda of developing social settings to be more supportive of thoughtful decision making.

As to research, we do not know enough about the impact of interventions, particularly those focused on improving decision making generally. Nor do we have good maps of the variety of decisions people face in their lives and the pitfalls that cause the most trouble outside of laboratory studies and simulations. Such research is challenging because it requires finding ways to track people’s decision making “in the wild,” but it is important enough to pursue with more vigor.

In present patterns of schooling, as in our society at large, the improvement of everyday decision making gets meager attention. Yes, one can point to the occasional course or other intervention, but the reality is that most people spend little learning time on decision making as such. Few areas of skill and character promise so much payout and receive so little pay-in. Whether to invest in fostering everyday decision making is itself a decision. These pages are something of an argument for its importance and priority.


Many thanks to my longtime colleague and friend Robert Swartz for his comments on the first version of this paper, and to the close reading and counsel from an anonymous critical reader. The first version was written during my sabbatical stay at the Center for Advanced Study in the Behavioral Sciences at Stanford, which I thank for financial and logistical support, a splendid setting, and helpful conversations.


1. Other terms are sometimes used.


Aber, J., J. L. Brown, and S. M. Jones. 2003. Developmental trajectories toward violence in middle childhood: Course, demographic differences, and response to school-based intervention. Developmental Psychology 39:324–348.

Aber, J., S. M. Jones, J. L. Brown, N. Chaudry, and F. Samples. 1998. Resolving conflict creatively: Evaluating the developmental effects of a school-based violence prevention program in the neighborhood and classroom context. Development and Psychopathology 10:187–213.

Amsel, E., T. Bowden, J. Cottrell, and J. Sullivan. 2005. Anticipating and avoiding regret as a model of adolescent decision making. In The Development of Judgment and Decision Making in Children and Adolescents, ed. J. E. Jacobs and P. A. Klaczynski, 119–156. Mahwah, NJ: Erlbaum.

Anderson, J. R., L. M. Reder, and H. A. Simon. 1996. Situated learning and education. Educational Researcher 25(4):5–11.

Baron, J. 1988. Thinking and Deciding. New York: Cambridge University Press.

Barth, R. P. 1996. Reducing the Risk: Building Skills to Prevent Pregnancy, STDs, and HIV. Santa Cruz, CA: ETR Associates.

Beyth-Marom, R., B. Fischhoff, M. J. Quadrel, and L. Furby. 1991. Teaching adolescents decision making. In Teaching Decision Making to Adolescents, ed. J. Baron and R. V. Brown, 19–60. Mahwah, NJ: Erlbaum.

Brown, J. S., A. Collins, and P. Duguid. 1989. Situated cognition and the culture of learning. Educational Researcher 18(1):32–42.

Byrnes, J. P. 1998. The Nature and Development of Decision Making: A Self-Regulation Model. Mahwah, NJ: Erlbaum.

Cacioppo, J. T., R. E. Petty, J. A. Feinstein, and W. B. G. Jarvis. 1996. Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin 119(2):197–253.

Cohen, D. K., M. W. McLaughlin, and J. E. Talbert, eds. 1993. Teaching for Understanding: Challenges for Policy and Practice. San Francisco: Jossey-Bass.

Dawes, R. M. 1982. The robust beauty of improper linear models in decision making. In Judgment under Uncertainty: Heuristics and Biases, ed. D. Kahneman, P. Slovic, and A. Tversky, 391–407. Cambridge, UK: Cambridge University Press.

Dawes, R. M., and B. Corrigan. 1974. Linear models in decision making. Psychological Bulletin 81:95–106.

Dweck, C. S. 2000. Self-theories: Their Role in Motivation, Personality, and Development. Philadelphia: Psychology Press.

Epstein, S. 1994. Integration of the cognitive and psychodynamic unconscious. American Psychologist 49:709–724.

Evans, J. 2008. Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology 59:255–278.

Fischhoff, B., and R. Beyth. 1975. “I knew it would happen”: Remembered probabilities of once-future things. Organizational Behavior and Human Performance 13:1–16.

Galotti, K. 1995. A longitudinal study of real-life decision making: Choosing a college. Applied Cognitive Psychology 9:459–484.

Galotti, K. 2002. Making Decisions That Matter: How People Face Important Life Choices. Mahwah, NJ: Erlbaum.

Galotti, K. M., E. Ciner, H. E. Altenbaumer, H. J. Geertz, A. Rupp, and J. Woulfe. 2006. Decision-making styles in a real-life decision: Choosing a college major. Personality and Individual Differences 41:629–639.

Gladwell, M. 2005. Blink: The Power of Thinking without Thinking. New York: Little, Brown.

Grotzer, T. A., and D. N. Perkins. 2000. Teaching intelligence: A performance conception. In Handbook of Intelligence, ed. R. J. Sternberg, 492–515. New York: Cambridge University Press.

Herrnstein, R. J., R. S. Nickerson, M. Sanchez, and J. A. Swets. 1986. Teaching thinking skills. American Psychologist 41:1279–1289.

Jacobs, J. E., and P. A. Klaczynski, eds. 2005. The Development of Judgment and Decision Making in Children and Adolescents. Mahwah, NJ: Erlbaum.

Janis, I. 1972. Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Boston: Houghton Mifflin.

Janis, I. L. 1989. Crucial Decisions. New York: The Free Press.

Johnson, D. D. P. 2004. Overconfidence and War: The Havoc and Glory of Positive Illusions. Cambridge, MA: Harvard University Press.

Kahneman, D., and A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47:263–291.

Kahneman, D., P. Slovic, and A. Tversky. 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press.

Klaczynski, P. A. 2005. Metacognition and cognitive variability: A dual-process model of decision making and its development. In The Development of Judgment and Decision Making in Children and Adolescents, ed. J. E. Jacobs and P. A. Klaczynski, 39–76. Mahwah, NJ: Erlbaum.

Klein, G. 1998. Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.

Kruglanski, A., and D. Webster. 1996. Motivated closing of the mind: “Seizing” and “freezing.” Psychological Review 103(2):263–283.

Lave, J., and E. Wenger. 1991. Situated Learning: Legitimate Peripheral Participation. New York: Cambridge University Press.

Lipshitz, R., G. Klein, J. Orasanu, and E. Salas. 2001. Taking stock of naturalistic decision making. Journal of Behavioral Decision Making 14:331–352.

Mann, L., R. Harmoni, C. Power, G. Beswick, and C. Ormond. 1988. Effectiveness of the GOFER course in decision making for high school students. Journal of Behavioral Decision Making 1:159–168.

McNeil, B., S. Pauker, H. Sox, and A. Tversky. 1982. On the elicitation of preferences for alternative therapies. New England Journal of Medicine 306:1259–1262.

Meyer, J. H. F., and R. Land, eds. 2006. Overcoming Barriers to Student Understanding: Threshold Concepts and Troublesome Knowledge. London: Routledge.

Moshman, D. 2005. Commentary: The development of thinking. In The Development of Judgment and Decision Making in Children and Adolescents, ed. J. E. Jacobs and P. A. Klaczynski, 327–334. Mahwah, NJ: Erlbaum.

Payne, J. W., J. R. Bettman, and E. J. Johnson. 1993. The Adaptive Decision Maker. New York: Cambridge University Press.

Perkins, D. N. 1995. Outsmarting IQ: The Emerging Science of Learnable Intelligence. New York: The Free Press.

Perkins, D. N., M. Farady, and B. Bushey. 1991. Everyday reasoning and the roots of intelligence. In Informal Reasoning, ed. J. Voss, D. N. Perkins, and J. Segal, 83–105. Hillsdale, NJ: Lawrence Erlbaum Associates.

Perkins, D. N., and R. Ritchhart. 2004. When is good thinking? In Motivation, Emotion, and Cognition: Integrative Perspectives on Intellectual Functioning and Development, ed. D. Y. Dai and R. J. Sternberg, 351–384. Mahwah, NJ: Erlbaum.

Perkins, D. N., and G. Salomon. 1987. Transfer and teaching thinking. In Thinking: The Second International Conference, ed. D. N. Perkins, J. Lochhead, and J. Bishop, 285–303. Hillsdale, NJ: Lawrence Erlbaum Associates.

Perkins, D. N., and G. Salomon. 1989. Are cognitive skills context bound? Educational Researcher 18(1):16–25.

Perkins, D. N., S. Tishman, R. Ritchhart, K. Donis, and A. Andrade. 2000. Intelligence in the wild: A dispositional view of intellectual traits. Educational Psychology Review 12(3):269–293.

Reyna, V. F., M. B. Adam, K. M. Poirier, C. W. LeCroy, and C. J. Brainerd. 2005. Risky decision making in childhood and adolescence: A fuzzy-trace theory approach. In The Development of Judgment and Decision Making in Children and Adolescents, ed. J. E. Jacobs and P. A. Klaczynski, 77–106. Mahwah, NJ: Erlbaum.

Ritchhart, R., and D. N. Perkins. 2005. Learning to think: The challenges of teaching thinking. In Cambridge Handbook of Thinking and Reasoning, ed. K. Holyoak and R. Morrison, 775–802. New York: Cambridge University Press.

Ross, J. A. 1981. Improving adolescents’ decision making skills. Curriculum Inquiry 11:279–295.

Ross, L., and R. E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press.

Salomon, G., and D. N. Perkins. 1989. Rocky roads to transfer: Rethinking mechanisms of a neglected phenomenon. Educational Psychologist 24(2):113–142.

Shanteau, J. 1992. Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes 53:252–266.

Simon, H. A. 1957. Models of Man: Social and Rational. New York: Wiley.

Stanovich, K. E. 1999. Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Erlbaum.

Starbuck, W. H., and F. J. Milliken. 1988. Challenger: Fine-tuning the odds until something breaks. Journal of Management Studies 25:319–340.

Surowiecki, J. 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. New York: Little, Brown.

Swartz, R. J. 2000. Thinking about decisions. In Developing Minds: A Resource Book for Teaching Thinking, ed. A. L. Costa, 58–66. Alexandria, VA: ASCD.

Swartz, R. J., A. L. Costa, B. K. Beyer, R. Regan, and B. Kallick. 2007. Thinking-Based Learning: Activating Students’ Potential. Norwood, MA: Christopher-Gordon Publishers.

Swartz, R. J., S. Fischer, and S. Parks. 1998. Infusing the Teaching of Critical and Creative Thinking into Secondary Science: A Lesson Design Handbook. Pacific Grove, CA: Critical Thinking Press and Software.

Swartz, R. J., and S. Parks. 1994. Infusing the Teaching of Critical and Creative Thinking into Content Instruction: A Lesson Design Handbook for the Elementary Grades. Pacific Grove, CA: Critical Thinking Press and Software.

Swartz, R. J., and D. N. Perkins. 1989. Teaching Thinking: Issues and Approaches. Pacific Grove, CA: Midwest Publications.

Teigen, K. 1986. Old truths or fresh insights? A study of students’ evaluations of proverbs. British Journal of Social Psychology 25:43–49.

Tuchman, B. 1984. The March of Folly. New York: Alfred A. Knopf.

Tversky, A., and D. Kahneman. 1992. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5:297–323.

von Winterfeldt, D., and W. Edwards. 1986. Decision Analysis and Behavioral Research. New York: Cambridge University Press.

Wilson, T. D., D. J. Lisle, J. W. Schooler, S. D. Hodges, K. J. Klaaren, and S. J. LaFleur. 1993. Introspecting about reasons can reduce post-choice satisfaction. Personality and Social Psychology Bulletin 19:331–339.

Wilson, T. D., and J. W. Schooler. 1991. Thinking too much: Introspection can reduce the quality of preferences and decisions. Journal of Personality and Social Psychology 60:181–192.

Wiske, M. S., ed. 1998. Teaching for Understanding: Linking Research with Practice. San Francisco, CA: Jossey-Bass.