Making Decisions
Nov 02, 2008
When we make decisions we think we're in control, making rational choices. But are we? This is the central question posed by Dan Ariely...
Our topic this week is the irrationality of human decision making. As philosophers, I’m sure that John and I would like to believe that we make decisions in a perfectly rational way. Indeed, I’m sure that most people think of themselves as pretty rational decision makers. How would thoroughly rational decision making go? Well, first, you’d decide what things you want, and how much you really want them. Second, you’d survey your options for getting what you want. Third, you would assess the upside benefits and downside costs of each alternative. And last but certainly not least, you’d choose the alternative that has either the greatest upside or the least downside, depending on whether you were risk-averse or risk-seeking. It’s pretty simple, really.
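For concreteness, here is a minimal sketch of that procedure in Python. The options, probabilities, and payoffs are invented purely for illustration, and the sketch takes the risk-neutral route of simply maximizing expected value; a risk-averse or risk-seeking chooser would weigh the downsides and upsides differently.

# A toy version of the four-step procedure: decide what you want, survey the
# options, weigh each option's upsides and downsides, then pick the best one.
# All numbers here are made up for illustration.

options = {
    "take the new job":   [(0.6, 500), (0.4, -200)],   # (probability, payoff)
    "stay where you are": [(0.9, 100), (0.1, -50)],
}

def expected_value(outcomes):
    """Weigh each upside benefit and downside cost by how likely it is."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected value = {expected_value(outcomes):.1f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("choose:", best)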
Decades of psychological research have shown, though, that although philosophers may be paragons of rationality -- ahem, ahem -- in fact most people (and probably most philosophers too) are pretty irrational in their decision-making. People go wrong at every turn. We aren’t so good at figuring out what we want. Our preferences aren’t very stable or coherent. We’re bad at assessing risks and rewards. You name it: when it comes to decision making, we’re bad at it.
Here’s a little game you can play with a partner that helps illustrate how irrational we can be. Let’s call it Sellers and Choosers. If you’re reading this alone and you want to play along, go get a partner now and let’s play the game together. I’ll be the referee. I’ve got two mugs – one for you, one for your partner. The mugs are exactly alike. I’m just going to flat out give you one of the mugs. (I can’t really do that over the internet just yet. But use your imagination and play along.) Anyway, the mug is yours to keep. It’s a really beautiful mug and very well made. Or, if you like, you can sell it. No doubt you’d be willing to sell the mug for the right price. So go ahead, write down the price at which you’d be willing to sell your lovely little mug.
Now, as for your partner. I’m going to offer your partner a choice. I’m not going to flat out give her (or him) the identical mug. She or he has to choose between an identical mug and a sum of money. How much money, you ask? Well, I’ve written an amount of money on the bottom of the mug. She doesn’t get to see it. Instead, what she has to do is write down an amount of money such that, if she had a choice between the mug and the money, the choice between the two would be a wash. She gets the mug only if the price she writes down as a fair price for the mug is higher than the price I’ve written on the bottom of the mug.
You may be wondering where I’m going with this and what it has to do with irrational decision-making. Don’t worry, the punch line is about to come. Here’s the thing: suppose we run this little experiment thousands of times and put people in different roles -- sometimes in the role of Seller and sometimes in the role of Chooser. You know what we find? We find that people in the role of the Seller place a significantly higher price -- like more than twice the price -- on the mug than people in the role of the Chooser do. What that means is that if the mug is already yours (and you have to set a sell price) you’ll think it’s worth a lot more than a similar mug that isn’t yet yours (on which you have to place a “willing to purchase it” price).
One way to think of this is as an instance of loss aversion. You’ve got your precious mug in hand and you don’t want to lose it. It means a lot to you. And so you set a very high price on it. That is, people tend to value things they already have and might lose much more highly than things they don’t have but could get.
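One way to see how loss aversion could produce a roughly two-to-one gap is with a back-of-the-envelope sketch like the one below. The loss-aversion coefficient of about 2 and the dollar figure for the mug are illustrative assumptions, not data from the experiments.

# Hypothetical numbers: suppose enjoying the mug is worth about $4 to everyone,
# and losses loom roughly twice as large as equivalent gains.

LOSS_AVERSION = 2.0
MUG_VALUE = 4.00

def chooser_price(value):
    # The chooser treats getting the mug as a plain gain, so the cash
    # amount that makes the choice "a wash" is just the mug's value.
    return value

def seller_price(value, loss_aversion=LOSS_AVERSION):
    # The seller treats giving up the mug as a loss, so the asking price
    # has to compensate for a loss that feels amplified.
    return loss_aversion * value

print(f"chooser's indifference price: ${chooser_price(MUG_VALUE):.2f}")
print(f"seller's asking price:        ${seller_price(MUG_VALUE):.2f}")
# With a coefficient near 2, sell prices come out roughly double choose prices,
# which is the pattern the mug experiments report.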
That seems pretty irrational, doesn’t it? Go back to what I was saying earlier about calculating upside benefits and downside costs. It looks like those calculations are highly skewed, depending on whether we’re talking about gains or losses. That doesn’t make any sense.
We’ve looked at just one tiny little example of apparent human irrationality. There are literally hundreds of experiments demonstrating that we are massively irrational in the way we make decisions. And luckily for us, we’ve got one of the world’s leading investigators of human irrationality as our guest this week: Dan Ariely, author of the bestselling Predictably Irrational: The Hidden Forces That Shape Our Decisions.
By the way, Ariely has a follow-up book out – The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home. We’d love to have him back on the show to talk about the new book. This week’s episode, though, is less about the upside of irrationality than the downside. But I think one can get a glimmer of how irrationality might have an upside by considering last week’s topic – loyalty. From the standpoint of pure self-centered cost-benefit analysis, it can be hard to make sense of loyalty. You might even call loyalty a form of irrationality. But without loyalty (and trust) all kinds of relationships wouldn’t be possible. So if loyalty is a form of irrationality, it may be a darned good thing that we are irrational in that way. But that’s a topic for another show.
Comments (10)
Guest
Thursday, July 22, 2010 -- 5:00 PM
Interesting exercise.
The first step in the process that Ken describes seems by far the most critical: what is the goal or "want," and what values does it reflect? If my goals are to proclaim my affluence to the world and bully my way through traffic, I might decide to buy a Hummer (R.I.P.!) despite the obvious conflicts with other values (humility, reduction of resource consumption and carbon footprint, traffic safety and so forth). But I could achieve the same goals by other means, like buying a supersized Ford Intimidator or something similar. The decision about a suitable means seems more amenable to rational analysis, and less laden with emotional baggage, than the initial "want" calculation.
Another example: if a society decides to ensure adequate health care for 100% of its population, a "single-payer" system quickly emerges as the most rational means to accomplish that (although other schemes may be worthy of consideration). But when other values (like allegedly saving taxpayer money and "personal responsibility") conflict, the means analysis leads to the kind of sausage legislation that emerged this year.
This is not to say that the decision about means is unimportant and always rational, but that choice often follows from the fundamental decision about ends. Or so it seems to me at the moment.
I look forward to downloading the podcast.
Guest
Friday, July 23, 2010 -- 5:00 PM
Intriguing. This may change the way I see my boss; what he values is my production, because it is his responsibility; my happiness is not his, so he assumes it can go away without any change in production.
Guest
Monday, July 26, 2010 -- 5:00 PM
Throughout our whole lives we seek out the thing we really want but fail to identify it. If we never understand ourselves, then how could we judge that we are rational? It's like an illusion.
Guest
Tuesday, July 27, 2010 -- 5:00 PM
This is an area which is in desperate need of some intro phil of logic to make some simple distinctions and points. Here is what I think philosophers could contribute, starting with their understanding of translating the English "if, then" into material implication.
Recap on "if, then" and material implication.
If we want a simple formal model of the English "if, then", our formal translation is uniquely satisfied by material implication if:
a) the translation is truth-functional, i.e., it doesn't depend on the semantic content of the parts in "if a, then b";
b) the translation preserves two formal features of the English "if, then":
1) "if, then" is reflexive;
2) "if, then" is not symmetric;
c) finally, the truth value of "if a, then b" when a is true and b is false is false.
We translate the English "if, then" so that the translation is truth-functional and preserves these formal features.
As a consequence, we get both the paradoxes of material implication and the failure of even smart individuals to locate material implication in Wason's selection task.
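Here is a minimal check of that uniqueness claim, written as a short Python script. Condition (a), truth-functionality, is built in because we only enumerate truth tables; the script then keeps the connectives that are reflexive, not symmetric, and false when a is true and b is false.

from itertools import product

inputs = [(True, True), (True, False), (False, True), (False, False)]

survivors = []
for outputs in product([True, False], repeat=4):
    f = dict(zip(inputs, outputs))                      # one candidate connective
    reflexive = f[(True, True)] and f[(False, False)]   # "if a then a" always true
    symmetric = all(f[(a, b)] == f[(b, a)] for a, b in inputs)
    false_when_a_true_b_false = not f[(True, False)]
    if reflexive and not symmetric and false_when_a_true_b_false:
        survivors.append(f)

for f in survivors:
    print(f)
# Exactly one table survives, and it is material implication: (not a) or b.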
These two features, some odd formal paradoxes and the inability of English users to recognize the formal translation, are good analogues to what is happening in preference theory. Here is why.
Preference Theory
We start with the English relation "x is preferred to y" or the choices "x is chosen over y" and translate the preference talk or choices into a formal structure, a formal translation.
But unlike if then, this translation depends upon us first identifying a body of choices or preferences that appear reasonable.
Then, given certain formal considerations, say properties alpha and beta in choice theory, we can translate the choices and determine whether they are formally rational.
If they are not, we have two essentially different ways to go: a) the original choice pattern was not reasonable, or b) our concept of formal rationality has to be modified. (Too many of these difficult choices would be a ground to move away from rational choice theory.)
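For what it's worth, here is a minimal sketch of what checking one such condition could look like. Property alpha is taken here in its usual contraction-consistency form (anything chosen from a larger menu must still be chosen from any smaller menu containing it), and the choice data are invented to show a violation.

# Hypothetical choice data: the agent picks b from {a, b}, but picks a
# once a third option c is added to the menu.
choices = {
    frozenset({"a", "b"}):      {"b"},
    frozenset({"a", "b", "c"}): {"a"},
}

def alpha_violation(choices):
    """Return the first violation of property alpha found, if any."""
    for big, chosen_big in choices.items():
        for small, chosen_small in choices.items():
            if small < big:                        # proper subset of menus
                for x in chosen_big & small:       # chosen from big, present in small
                    if x not in chosen_small:
                        return x, set(small), set(big)
    return None

print(alpha_violation(choices))
# Reports that 'a' is chosen from the three-item menu but not from {a, b}.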
What most economists fail to do is to specify whether the alleged failure shows:
a) the translation was not truth-functional, and the preference talk lacks certain formal characteristics. Framing examples come to mind here.
b) the translation or model was truth-functional, but the irrationality in the model shows that we were mistaken about the original choice talk being reasonable. We have now found something new. The money pump arguments come to mind here (a minimal sketch follows this list).
c) the translation failed, much the same way English users don't perceive themselves to be using material implication.
d) the original choice talk was of reasonable behavior, but the formal considerations have to be changed in order to accommodate this translation. In this context, it is worth noting that most of the examples of context-dependent choice fail to be captured by set-theoretic choice theory. If the addition of a changes the relationship between b and c, then the relationship between b and c is not an ordered pair.
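To illustrate the money pump argument mentioned under (b), here is a minimal sketch; the cyclic preference pattern and the one-cent trading fee are invented for illustration.

# An agent who strictly prefers A to B, B to C, and C to A will pay a small
# fee for each "upgrade" and cycle back around, poorer each time.
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
FEE = 0.01

holding, cash = "C", 1.00
for _ in range(9):                                   # a few rounds of trade offers
    for better, worse in prefers:
        if holding == worse and prefers[(better, worse)] and cash >= FEE:
            holding, cash = better, cash - FEE       # pay to trade up
print(f"after trading: holding {holding}, cash left ${cash:.2f}")
# The agent ends up holding one of the same three goods it started among,
# but has steadily handed over money -- that is the sense in which cyclic
# choices get called irrational.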
Getting an economist to hew to one of these positions would be useful. (Notwithstanding, I do like Ariely's experiments because they are fresh -- but if someone is predictable in their choices, then they are maximizing something.)
Guest
Thursday, August 5, 2010 -- 5:00 PM
Most people make decisions not based on reality, but based on fear or emotions. That is where the problems lie. If we make decisions based on what is real, then good decisions are made.
Guest
Saturday, August 7, 2010 -- 5:00 PM
Is not part of the excitement of being human that we are irrational, a bit odd, and make decisions with emotions, which, depending on our mood at the time, can be great ideas or really bad?
N
Tuesday, August 10, 2010 -- 5:00 PM
I don't really see how that's irrational. We know more about the things we have than the things we don't (in the example of the mug, the person who's used it would know more about it than the person who's only seen it), so it seems logical to me to hold on to what you have and know rather than give it up for something you don't already have.
Harold G. Neuman
Monday, August 16, 2010 -- 5:00 PM
Of course---we all make rational decisions. We think. But those rational decisions are based upon our world views---what we think rational may not translate well in some other country. This is why such concepts as democracy fail in those places where it has never been embraced, much less understood. I do not need to elaborate upon current dilemmas. I am only beginning to learn the value of brevity, taught to me by an older friend.
Might be back---maybe not.
HGN.
Guest
Friday, August 27, 2010 -- 5:00 PM
Very few of us are trained to think properly. Few of us get any psychology and fewer any neuroscience.
For starters, many choke on poor language skills. Parsing errors are ubiquitous. Minimizing ambiguity is both science and art.
Then add a heavy dose of appetizing logical fallacies.
Finally, toss in a wide variety of half-baked heuristics, cognitive biases, and off-label implicit associations and nobody can agree on the basic facts much less digest what they might mean.
In this internet + Google world there is no excuse not to shift more k-12 curriculum towards linguistic and reasoning skills--from facts to fact checking.
Poor Richard
Poor Richard's Almanack 2010
Harold G. Neuman
Sunday, October 3, 2010 -- 5:00 PM
sum, ergo, sum. Go ask Alice, I think she'll know...