Intuition and Uncertainty: Reflections on the Monty Hall Problem

Dan Ariely recently posted an intriguing offer on his blog: buy his second book, and enter to win dinner with him. I can hardly resist! Once again, I started commenting on the post, but got a little carried away:

This is priceless! I’m half considering buying a second copy just for the chance to have dinner with Dr. Ariely. I’m finding it very difficult to calculate the expected cost benefit here: immeasurably large reward * infinitesimal probability – cost of buying a copy > 0 ???

Back to reality: people have a very hard time reasoning about probabilities. One classic example is the Monty Hall Problem, named after the host of the American game show Let’s Make a Deal. I have to admit, I find this problem very difficult to analyze correctly, even though I know the correct solution!

The basic scenario is this: You are presented with three doors. Traditionally, there is a car behind one of the doors, and a goat behind each of the other two. Your job is to choose which door to open, revealing your prize. Now, suppose you’ve chosen one of the doors, but before it is opened, the host opens one of the other doors, revealing a goat. You are then given a second choice: you can open the door you originally chose, or switch to the remaining door. Which of the two remaining doors maximizes your probability of driving away in a new car?

I won’t spoil the ending for those who want to solve this puzzle for themselves, but the fact that most people have a difficult time arriving at the correct solution shows that we are quite bad at reasoning about probabilities. That being the case, how can we possibly do a good job deciding in real-life scenarios, which tend to be much more complicated?

If, by some happy accident, we usually end up making the right choices in cases involving uncertainty, that leaves many questions unanswered about how we decide. If not by sound reasoning, what strategies do we use to arrive at the choices we make? And if those strategies lead us to behave rationally much of the time even though we are not thinking rationally (heuristics may play a big role, even when they have no logical basis), how should we regard the statement “humans are rational beings”? Sorry to go off on a philosophical tangent, but I find such questions fascinating.
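For readers who don’t trust their intuition (and don’t mind the simulation spoiling the answer), the nice thing about this puzzle is that you can just play the game many times and count. Here is a minimal Monte Carlo sketch in Python; the function and variable names are my own, not from any standard treatment:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        choice = random.randrange(3)   # contestant's initial pick
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")
print(f"switch: {play(switch=True):.3f}")
```

Running it a few times makes the pattern hard to deny, even if the analytical argument still feels slippery: one strategy wins about twice as often as the other.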


11 Responses to Intuition and Uncertainty: Reflections on the Monty Hall Problem

  1. Alon says:

    What makes you assume that we behave rationally most of the time? :)

    • allyourcode says:

      I’m surprised you believe I’m making that assumption. My whole intention was to question the notion that humans think, or at least behave, rationally. For example, I implied that rational behavior is a “happy accident”. Of course, I didn’t intend for that to be taken literally; if rational behavior were a purely random event, natural selection would have killed us off already.

      • Alon Nir says:

        Sorry about that, I guess I gave too much weight to the first part of your closing statement.

        Anyway, as I’ve always seen it, the world works and life continues in spite of irrationalities. It doesn’t work as well as it could (wars, financial crashes, etc.) but we’re still around.

      • allyourcode says:

        I agree, but most people seem to have the opposite attitude: irrationality is very much the atypical behavior. These people need to read more of Dr. Ariely’s books.

      • Alon Nir says:

        Perhaps the reason for this lies in a variant of another irrationality: the “it won’t happen to me” bias (a.k.a. the optimism bias). Maybe people think that they are unaffected by such irrationalities. :)

      • allyourcode says:

        I can see why an individual might believe himself to be quite rational, but why would he believe the same of others? As Dr. Ariely’s studies show, we are also quite bad at estimating how irrational other people will be.

        I was recently at a conference where a few of us were sitting around a table discussing the sorry state of the US education system. The business folks seemed to agree that performance-based pay for teachers would greatly improve the situation; the premise seems to be that providing teaching incentives necessarily results in stronger learning. Being a fan of Dr. Ariely and a contrarian, I began describing Dr. Ariely’s studies in India that tested the effect of bonuses on performance in various puzzle games. The venture capitalist I spoke to was very surprised to hear that the five-month-bonus group performed the worst. I was delighted that he responded so positively, yet dismayed that people seem to take rationality for granted. I like to think that my explaining this study affected his way of thinking, or at the very least convinced him to read about our systematic irrationalities, but people seem very convinced of rationality, whether their own or that of people in general.

  2. allyourcode says:

    Good point. I mentioned something similar in my conversation with the business folks: when you use simplistic metrics, people will try to optimize for them without thinking about the consequences for overall success.

    Being a software engineer, my favorite examples are bug counts and lines of code, which Joel Spolsky discusses in his blog post Incentive Pay Considered Harmful. When management decides that’s how they’re going to measure performance, you can easily guess what happens. Tell people they’ll be penalized for writing bugs, and they stop writing code; tell them they’ll be rewarded for writing more lines, and they’ll write lots of shoddy code. Both outcomes are disastrous for a software company.

    What’s the solution? According to my venture capitalist friend, we need good principals and administrators who know how to evaluate teachers, as opposed to blindly relying on artificial and simplistic metrics. I couldn’t agree more; however, once you recognize the need for good management, you’re no longer talking about performance-based pay as it’s generally proposed, because states rely heavily on standardized tests to measure student/teacher/school performance.

    Relying on a principal’s opinion is not without problems either, because it introduces subjectivity and creates a large potential for conflicts of interest. This is one of the main reasons metrics are so attractive in the first place: they remove the human element. But it is unrealistic to believe that the human element can be removed, because teacher performance is fundamentally complicated. While some practices are clearly good (e.g. assigning relevant homework) and some are clearly bad (e.g. spending class time watching entertainment films), there are no silver bullets in education; we should therefore expect a narrow set of metrics to paint a distorted picture of how well a teacher is doing.

    PS: I hate to toot my own horn, but I do have an older blog post about performance-based pay. It covers a number of areas that this comment does not, so you may want to take a look.

  3. Alon Nir says:

    It is perplexing. To support your last comment, I’d just add that there was a chapter in Freakonomics about incentives for teachers (which, if I remember correctly, showed that teachers actually started to cheat). Furthermore, research by Uri Gneezy showed that monetary incentives can work the opposite way.
    cheers,
    A

  4. Alon says:

    Well, you have covered the two biggest problems of setting incentives, my friend. And it’s not like that only in education, of course; it’s everywhere.

    To return to the original topic of your post, I’d just like to add that a behavior even more confusing, in my view, is subjecting oneself to commitment contracts. It’s quite peculiar that people are rational enough to know they will not act rationally in a certain situation at some point in the future, and hence calmly decide to PAY to LIMIT their own choice set.

    • allyourcode says:

      Interesting indeed. Perhaps we would find that people have very good meta-rationality if it were to be investigated scientifically. If so, people might be able to use meta-rationality to compensate for their (regular) irrationality.

      • Alon says:

        Interesting concept. It reminds me of models of procrastination (Akerlof’s, I think, for one) that distinguish between different types of people: some procrastinators are aware of their fallibility, while others are oblivious. The first group makes different choices from the latter.

        Another way to go is dual-process (“two-system”) theory. http://en.wikipedia.org/wiki/Dual_process_theory
