PETER ZHANG – APRIL 7TH, 2021

Professor Don A. Moore is the Lorraine Tyson Mitchell Chair in Leadership and Communication at Berkeley Haas and serves as Associate Dean for Academic Affairs. His research interests are confidence and overconfidence, with a focus on forecasting, judgment, and decision making. BER Staff Writer Peter Zhang interviewed Professor Moore over Zoom on March 17th, 2021. Interested readers may learn about research opportunities on Professor Moore’s website.

Peter Zhang: I would ask you for your personal background but I know the readers can find a wonderful “Self-Aggrandizing Autobiographical Sketch” on your website. Could you talk a bit about what brought you to your research?

Professor Moore: Sure. I could tell you a long winding story about my personal development, teenage angst, and all of that, but I try to tell an interesting version of that in the book. I’ll give you the short, nerdy version:

Most of my dissertation was sort of boring, but there was this one curious finding that emerged from it. Chasing that one down, and figuring out what was going on, led me down the rabbit hole that became my fascination with overconfidence. I’ve been pursuing that mystery ever since. I think I’ve solved the question that my dissertation posed, but that led me to other questions, which have proven a professional obsession.

 

Peter Zhang: Let’s talk a bit about confidence. You’ve recently written a book on it called Perfectly Confident. Like the young Professor Moore, most of us look at Elon Musk and come to see confidence as an unequivocal good, even a prerequisite for success. What’s wrong with this view?

Professor Moore: So many things are wrong with that view! Thank you for that question.

The first thing that is wrong with that view is that it selects on the dependent variable. It’s worth asking: what is the population of entrepreneurs or would-be entrepreneurs from which Elon Musk was drawn? If the most confident are the ones who choose to dive into this risky endeavor, and their confidence is not perfectly calibrated with their promise or prospects, what we will see is variation in outcomes.

Of course, there’s a lot of difficulty in predicting outcomes. The entrepreneurs don’t know, the venture capitalists don’t know—if the VCs knew, they would have a better hit rate on their investments; in fact, nine out of ten of their companies don’t go IPO and don’t make them lots of money, so there’s room for them to improve—it’s hard to predict how entrepreneurial ventures are going to perform.

We’ve got variations in outcomes, with a lot of new ventures going belly up, failing. Those who are confident enough to get in and give it a try anyway will, on average, just due to adverse selection, be overconfident. If, from that set, you just pay attention to those who hit it big, you’re neglecting the entire group who gave it a try, most of whom fail.

So if you aspire to be like Elon Musk, if you could jump straight to the multi-billionaire part, that’d be fine. But there are steps you have to get through before that, and if your confidence is not well calibrated, you’re running the risk of making a mistake: sinking too much of your precious time and money into a venture that will ultimately fail and that will tar you as having been overconfident.

 

Peter Zhang: I want to explore the idea of confidence and the dangers a bit more. Some of your recent research suggests that confidence is contagious. My question: can people with malicious intent—casinos, advertisers, swindlers—manipulate our confidence?

Professor Moore: Oh, man. Nefarious actors attempt to manipulate our confidence all the time. They run the gamut.

In the talks that I give, I often show a slide with a report by the National Lottery of the United Kingdom on optimism. If you’re going to sell your customers negative expected value bets, you would like them to delude themselves about their prospects of winning. You’d like them to keep sinking money into this losing prospect that’s profitable to you as the seller. You want them to keep being optimistic. That is an institutionalized swindler.

But there are also confidence men who attempt to gain our confidence so they can make their way into our wallets, making us believe them by pretending to be more confident than they deserve to be. That is one of the ways they play this social game of confidence: they try to display more confidence than the situation actually warrants. They’re putting on a show in which they’re attempting to gain our trust and build our confidence in them.

 

Peter Zhang: I think most of us would agree that, at a personal level, overconfidence is dangerous. But someone might reasonably ask: why does this matter at a macro level? Surely, institutions like companies and governments aren’t vulnerable, are they?

Professor Moore: They’re highly vulnerable.

The cemetery of powerful corporations that have gone bankrupt is full of confident leaders who were sure that their positions were unassailable. Kodak and Blockbuster Video were once flying high and didn’t think they had to bother with pesky rivals in digital photography or streaming video. Their overconfidence proved to be their demise.

Lots of corporations get themselves in trouble with bad beliefs, bad theories about confidence and its role in their success. Like the rest of us, corporations often fool themselves into thinking that more confidence is better. Because in so much of life we observe a correlation between confidence and performance, they think: “If only I can jack up the confidence of the people at my company, with big, hairy, audacious goals, then we will triumph. Mmm!”

They wind up overcommitting to customers who are ultimately disappointed by what they failed to deliver; they introduce aircraft to market ahead of the requisite safety checks and wind up killing their passengers and customers. Companies and governments get themselves into all sorts of trouble by pretending to be confident when a wiser, well-calibrated judge would not have been so confident.

 

Peter Zhang: Could you elaborate a bit more on the danger for governments? And maybe here you can also tie in your work with the Good Judgment Project.

Professor Moore: I see that at a couple levels.

One has to do with contests for leadership, most notably election campaigns. Voters often have very little to go on and will attend too closely to what a candidate says. When a candidate sounds confident, when they brag about all they can achieve or Making America Great Again, or even bringing a new tone of bipartisanship to Washington—pfft—it sounds great. And if it is the case that those who are more confident are, on average, more capable and can deliver more, it might not be crazy for voters to choose the more confident candidate.

But in doing so, we will also guarantee that we’re selecting the overconfident candidate, that they will disappoint us, and that they can’t actually deliver all the hope and change they make us believe is possible.

So that’s just part of the political game. Voters could get smarter about it if they attended more closely to candidates’ track records of achievements and took their campaign promises with a little more of a grain of salt, understanding the complex political dynamics at work.

The other level has to do not with the elected officials who are running the show but with the mid-level managers—those who are running agencies—and their attempts to help the government get better at what it does.

As in every institution, effective policy planning depends on good forecasting. You made reference to the Good Judgment Project. That was an attempt by IARPA—the Intelligence Advanced Research Projects Activity, which is to the intelligence agencies what DARPA is to the Defense Department—to help the intelligence agencies get better at forecasting. They were interested in forecasting geopolitical events like the fall of foreign presidents and financial crises around the world.

But the truth is that every policy decision—in fact, every decision—depends on a forecast of its consequences. My involvement with the Good Judgment Project came through my interest in confidence and overconfidence and the value of well-calibrated confidence for making good forecasts.

Have you read Superforecasting?

Peter Zhang: Yep.

Professor Moore: Then you know the central role that self-doubt, questioning, and humility played in the superpowers of the superforecasters. One of the things that made them super was their willingness to doubt themselves, to question their assumptions, to go back and revise forecasts, and to pay close attention to the evidence even when it suggests that their assumptions are wrong.

 

Peter Zhang: Daniel Kahneman, an enormous figure in behavioral economics, has said in interviews that overconfidence is the most harmful of the biases, often precisely because of the bureaucratic misjudgements that you mentioned. But he also seems to be a lot less confident in a solution. In concluding Thinking, Fast and Slow, he bemoans how our slow-thinking rationality fails to adjust our automatic biases in precisely the moments where we need it the most. So, he directs his book towards third parties—towards the critics. You seem to direct your book to the decision-makers themselves. What is your prescription for correcting overconfidence? Do you agree with Kahneman’s pessimism?

Professor Moore: As someone who is down in the weeds on overconfidence, I have many thoughts and they relate in part to different forms of confidence.

I think the type of overconfidence that Kahneman identifies as getting us into the most trouble is overplacement—when we think we’re better than others and we’re not. That is the overconfidence that leads would-be entrepreneurs to cash out their investments and ruin their marriages for a venture that will ultimately fail. It’s what gets us into losing wars. It leads to enormous amounts of wasted effort and tragic outcomes, where we undertake projects that are collectively destructive of value.

I think there are useful antidotes to overplacement, and overplacement is not universal. There are predictable circumstances in which people think that they’re worse than others. When I ask my students how they think they’re going to perform on a test of Russian literature or plants of the Sahara desert, everyone in the class thinks they’re going to be worse than average. So overplacement is not universal.

By contrast, I am much more ready to concede Kahneman’s claim of hardwired universality on overprecision—the excessive faith that we know the truth. That is the sort of overconfidence that gets forecasters into trouble, when they’re too sure that their knowledge is correct. And even though overprecision is pervasive—in my research I get it almost every time I look—even there I’m not ready to throw in the towel.

I think of people as flawed but corrigible. There are big differences in the size of the bias depending on how you ask the question. You can elicit people’s confidence in a way that really exacerbates overconfidence. If you ask, for instance, “We’re going to make our targets, right?” the people who work for you will say, “Yes, boss.”

Much better is to say, “how likely is it that we will complete project X by this date, by this date, by this date?” You get people to think through the full distribution of probabilities. You force them to think, “how likely is it that I’m wrong about this forecast?” You still get some overprecision, but it is so much less. And, you get so much more useful information out of the process.
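[To make that elicitation concrete, here is a minimal sketch in Python with hypothetical numbers (the deadlines and probabilities below are illustrative, not drawn from Professor Moore’s research), showing how answers to “how likely by this date?” become an explicit probability distribution rather than a single yes/no commitment.]

```python
# Hypothetical elicitation: cumulative probability that a project is
# finished by each of several dates, as reported by the forecaster.
deadlines = ["Jun 1", "Jul 1", "Aug 1", "later"]
cumulative = [0.30, 0.70, 0.90, 1.00]

# Convert the answers into the implied probability for each interval.
per_interval = [cumulative[0]] + [
    round(later - earlier, 2)
    for earlier, later in zip(cumulative, cumulative[1:])
]

print(dict(zip(deadlines, per_interval)))
# {'Jun 1': 0.3, 'Jul 1': 0.4, 'Aug 1': 0.2, 'later': 0.1}
# The 10% left for "later" is the forecaster's explicit answer to
# "how likely is it that I'm wrong about this forecast?"
```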

 

Peter Zhang: Making ourselves quantify our beliefs is one way to mitigate overconfidence. Are there other ways that we, as individuals, could adjust our own confidence?

Professor Moore: The simple prescription there is to ask yourself why you might be wrong. There are a lot of ways to do that.

One of the easiest is to accept the gift offered by our rivals, critics, and enemies. Listen to their critiques. You have something to learn there. It’s possible that they’re just haters trying to tear you down, and there isn’t substance to their critiques. On the other hand, it’s possible that they’re on to something, and understanding their criticisms can provide useful input that helps you get stronger.

Companies do this. When the boss has the courage to call a premortem to discuss why their favorite plan is likely to fail, the meeting gathers people with the explicit purpose of discussing, “What’s wrong with my plan, and what are its greatest weaknesses?” Thinking hard about those criticisms, and about whether you can somehow insulate your plan or hedge against your greatest risks, will help you better calibrate your confidence and better protect yourself against those weaknesses.

 

Peter Zhang: In the book, you tell the story of Alfred P. Sloan, a CEO of General Motors who purposely sought out disagreement. You also mention early in the book that you did debate in high school, so maybe you can relate to this. In my experience, debaters—some of the people most exposed to disagreement and diversity of thought—tend to also be some of the most stubborn and absolutist people. Open-ended question: what do you think is going on here?

Professor Moore: I remember those people in debate too!

Debate should be good for helping people think about different perspectives. You’re assigned to a position and have to argue as persuasively as you can from that position. It should help people consider the opposite perspective. But it also develops in them a penchant for passionate argumentation which isn’t necessarily compatible with a balanced view.

There are a number of ways for me to take up your invitation. One is to wonder about how one can persuade those who disagree. In particular, I’ve thought a lot about this in the context of political partisanship. You encounter someone whose beliefs seem so foreign and so misguided, but you have the chance to talk to them: what’s the right thing to do?

Telling them that they’re insane and QAnon is a baseless conspiracy theory is not going to win you a lot of allies, even if it’s true. The more successful interpersonal strategies are cousins of the effective intrapersonal strategies, where you come at the problem sympathetically, in an attempt to understand rather than persuade. The other person thinks of themselves as rational and well-intentioned just like you do. They’re living in a different information environment.

To understand is to forgive. Understanding what they’re paying attention to and the information they’re basing their judgements on can clarify how they’ve come to their conclusions and may open the way to useful dialogue.

 

Peter Zhang: Shifting gears a little bit—economics in recent years has undergone a big shift in becoming more rigorous and empirical. Could you tell me a bit about an experiment you’ve performed? Maybe a personal favorite?

Professor Moore: Wow, lots to talk about.

I could tell you about my analysis of the Survey of Professional Forecasters, identifying overconfidence in forecasts of economic outcomes, but that’s not an experiment I ran.

If you want me to tell you about an experiment, I might tell you about one I just ran with a doctoral student, here at Haas, named Sandy Campbell. We began with some real world data from a game show [the Million Dollar Money Drop] that provided high-stakes decision-making that made it possible for us to look at overconfidence in the wild. Were these game show contestants overconfident?

The answer appears to be yes. That’s evidence with high stakes, though the show isn’t using a purely incentive-compatible mechanism; it’s a linear payoff scheme, which has some problems.

I have other data. The stakes aren’t quite as high, but in my MBA class I give multiple-choice exams where I invite students to report the probability that each of the given options is the right one. I reward them with a quadratic scoring rule that makes it optimal for a rational student who wants to maximize their grade to honestly report their confidence in each of the answers, so I can analyze those data. Much like the Million Dollar Money Drop, that also makes it look like people are overconfident—overprecise in their judgement, too sure that they’re right.
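[A minimal sketch of a quadratic, Brier-style scoring rule of the kind Professor Moore describes. The exact formula and point scale of his exams are assumptions here, but the key property carries over: honest probability reports maximize a student’s expected score, while confident wrong answers are punished.]

```python
def quadratic_score(reported_probs, correct_index):
    """Brier-style quadratic score for one multiple-choice question.

    Higher is better; full confidence on the right answer scores 1.0,
    and confident wrong answers are penalized heavily.
    """
    return 1.0 - sum(
        (p - (1.0 if i == correct_index else 0.0)) ** 2
        for i, p in enumerate(reported_probs)
    )

# A student who is 70% sure of option A on a four-option question:
report = [0.7, 0.1, 0.1, 0.1]
print(quadratic_score(report, correct_index=0))  # about 0.88 (A was right)
print(quadratic_score(report, correct_index=1))  # about -0.32 (B was right)
```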

But, there are all sorts of differences between the Million Dollar Money Drop and my class. Sandy and I wanted to understand the degree to which the payoff scheme might have influenced the ways that people responded. So we just ran an experiment where we played a quiz game with people in our study, and we randomly assigned them to either the linear payoff scheme like the Million Dollar Money Drop or the quadratic rule like in my class and asked the question: does their degree of overconfidence vary with the payoff scheme?

If they were perfectly rational, it should. In reality, it doesn’t. It made almost no difference. That’s helpful for making the case that the data from the game show and the data from the class are indicative of people’s beliefs. They are not driven by the payoff scheme as much as rational-actor theories would imply.
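[The incentive difference at issue can be illustrated with a small sketch; the payoff formulas below are simplified stand-ins, not the show’s or the experiment’s actual rules. Under a linear rule, a rational player maximizes expected payoff by piling everything on the single most likely answer, while under a quadratic (proper) rule, honest reporting does better, which is why rational reports “should” differ across the two schemes.]

```python
def expected_linear(report, beliefs):
    # Linear rule: payoff equals the probability placed on whichever
    # option turns out to be correct, averaged over the player's beliefs.
    return sum(b * r for b, r in zip(beliefs, report))

def expected_quadratic(report, beliefs):
    # Quadratic (Brier-style) rule, in expectation over the true answer.
    return sum(
        b * (1.0 - sum((r - (1.0 if j == i else 0.0)) ** 2
                       for j, r in enumerate(report)))
        for i, b in enumerate(beliefs)
    )

beliefs = [0.7, 0.1, 0.1, 0.1]   # what the player actually thinks
honest = beliefs
extreme = [1.0, 0.0, 0.0, 0.0]   # all-in on the favorite answer

print(expected_linear(honest, beliefs), expected_linear(extreme, beliefs))
# about 0.52 vs 0.70: the linear rule rewards overstating your confidence
print(expected_quadratic(honest, beliefs), expected_quadratic(extreme, beliefs))
# about 0.52 vs 0.40: the quadratic rule rewards reporting honestly
```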

 

Peter Zhang: I know you’re involved with BITSS and I see on Twitter that you are a big fan of replications. Can you tell me a bit about research transparency and reproducibility?

Professor Moore: I am inspired by the changes going on in social science and the ways in which we’re stepping up our game to pre-register our studies and post our materials. I think that it portends good things for the future of science, and I try very hard to abide by the highest standards of reporting and transparency in everything that I do. That includes:

  • pre-registering studies before I run them;
  • posting all the data and materials afterwards;
  • sharing the analysis code when I can;
  • using a rigorous threshold of statistical significance (I’m on a paper arguing that everyone should use a significance threshold of 0.005 rather than 0.05, and I try to do that in my studies).

I have played the gadfly role within my institution, the Haas School of Business, where it’s my job, when someone comes up for promotion or tenure, to note whether they have observed good scientific practice in their work: whether they’re posting data and pre-registering their studies. I see the trends as positive, but we have a long way to go.

 

Peter Zhang: In thinking about the dangers—p-hacking or overlooking confounding factors—how do you think that relates to your research on confidence? Do you think it’s an example of wishful thinking?

Professor Moore: Eh, yeah. I can make that connection. I’m not sure it’s a real strong one. I do think that many of the doubters and skeptics of the need for change—the “old guard,” who have stood in the way or put the brakes on moves towards open science—are too sure about the reliability of prior published results. My willingness to ask, “How might I be wrong?” makes me skeptical of my own results and skeptical of other people’s results. Recent replication efforts should lead us all to suspect that the published literature has entirely too many false positives in it. It would be overconfident and gullible to think that just because it’s published, it’s true.

 

Peter Zhang: Since this is probably going to get published around finals week, for students like me, what advice would you give to help us calibrate our expectations and succeed?

Professor Moore: When it comes to expectation and performance on exams and grades, that’s a place where the best students—students who get into institutions like UC Berkeley—often motivate themselves with what psychologist Julie Norem has called defensive pessimism. That is, imagining failure, envisioning catastrophe, thinking about how embarrassing it would be if you failed, and thereby motivating yourself to work hard to study and prepare for the exam.

So, take heart, calibrate your confidence, study as much as you need to. Disaster is not imminent. But also don’t party too much the night before!

Featured Image Source: Berkeley Haas

Disclaimer: The views published in this journal are those of the individual authors or speakers and do not necessarily reflect the position or policy of Berkeley Economic Review staff, the Undergraduate Economics Association, the UC Berkeley Economics Department and faculty, or the University of California, Berkeley in general.
