Tuesday, October 10, 2017

Defending Thaler from the guerrilla resistance


So, Richard Thaler won the Nobel Prize, which is pretty awesome. If you've read Thaler's memoir, you'll know that it was a long, hard, contentious fight for him to get his ideas accepted by the mainstream. And even though Thaler is now a Nobelist and has been the AEA president - i.e., he has completely convinced the commanding heights of the econ establishment that behavioral econ is a crucial addition to the canon - resistance still pops up with surprising frequency in certain corners of the econ world. It's a sort of ongoing guerrilla resistance.

An example is this blog post by Kevin Bryan of A Fine Theorem. Kevin is one of the best research-explainers in the econ blogosphere, and his Nobel explainer posts are uniformly excellent. This time, however, instead of explaining Thaler's research, Kevin decided to challenge it, in a rather dismissive manner. In fact, his criticisms are pretty classic anti-behavioral stuff - mostly the same arguments Thaler talks about in his memoir.

Anyway, let's go through some of these criticisms, and see why they don't really hit the mark.


1. The invisible hand-wave

First, a random weird thing. Kevin writes:
Much of my skepticism is similar to how Fama thinks about behavioral finance: “I’ve always said they are very good at describing how individual behavior departs from rationality. That branch of it has been incredibly useful. It’s the leap from there to what it implies about market pricing where the claims are not so well-documented in terms of empirical evidence.”
This is Fama, not Kevin, but it's a very odd quote. Behavioral finance has been very good at documenting asset price anomalies - in fact, this is almost all of what it's good at. This is what Shiller got the Nobel for in 2013, and it's what Thaler himself is most famous for within the finance field. Behavioral finance has struggled (though not entirely failed) to explain most of these anomalies in terms of psychology, especially in terms of insights drawn from experimental psychology. But in terms of empirical evidence, behavioral finance is pretty solid.

Anyway, that might be a sidetrack. Back to Kevin:
[S]urely most people are not that informed and not that rational much of the time, but repeated experience, market selection, and other aggregative factors mean that this irrationality may not matter much for the economy at large. 
This is a dismissal that Thaler refers to as "the invisible hand-wave". It's basically a claim that markets have emergent properties that make a bunch of not-quite-rational agents behave, in aggregate, like a group of completely rational agents. The justifications typically given for this assumption - for example, the idea that irrational people will be competed out of the market - are vague and unsupported. In fact, it's not hard at all to write down a model where this doesn't happen - for example, the noise trader model of DeLong et al. But for some reason, some economists have very strong priors that nothing of this sort goes on in the real world, and that the emergent properties of markets approximate individual rationality.
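To see why the "competed out of the market" story isn't airtight, here's a toy simulation in the spirit of the noise trader literature (a sketch, not the actual DeLong et al. model; all the return, volatility, and portfolio-share numbers are invented for illustration). Overconfident "noise traders" hold more of the risky asset than rational traders do. They bear more risk, but precisely because of that, their average wealth grows faster - so market selection doesn't reliably weed them out:

```python
import numpy as np

rng = np.random.default_rng(0)

T, SIMS = 200, 2000          # periods per run, number of simulation runs
MU, SIGMA = 0.06, 0.18       # risky asset: mean and volatility of returns
RF = 0.02                    # risk-free rate

# Portfolio shares in the risky asset (illustrative assumptions):
# rational traders hold a moderate share; overconfident noise traders hold more.
W_RATIONAL, W_NOISE = 0.5, 0.9

returns = rng.normal(MU, SIGMA, size=(SIMS, T))

def terminal_wealth(share):
    # Wealth compounds under a constant portfolio share each period.
    growth = 1 + RF + share * (returns - RF)
    return np.prod(growth, axis=1)

rational = terminal_wealth(W_RATIONAL)
noise = terminal_wealth(W_NOISE)

# Noise traders bear more risk, and are compensated for it on average,
# so selection does not automatically drive them out of the market.
print("fraction of runs where noise traders end richer:",
      np.mean(noise > rational))
print("ratio of average terminal wealth (noise / rational):",
      noise.mean() / rational.mean())
```

The point isn't that noise traders always win - plenty of individual runs end badly for them - but that there's no automatic mechanism guaranteeing that irrational agents lose money and exit, which is exactly what the hand-wave assumes.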


2. Ethical concerns

Kevin, like many critics of Thalerian behavioral economics, raises ethical concerns about the practice of "nudging":
Let’s discuss ethics first. Simply arguing that organizations “must” make a choice (as Thaler and Sunstein do) is insufficient; we would not say a firm that defaults consumers into an autorenewal for a product they rarely renew when making an active choice is acting “neutrally”. Nudges can be used for “good” or “evil”. Worse, whether a nudge is good or evil depends on the planner’s evaluation of the agent’s “inner rational self”, as Infante and Sugden, among others, have noted many times. That is, claiming paternalism is “only a nudge” does not excuse the paternalist from the usual moral philosophic critiques!...Carroll et al have a very nice theoretical paper trying to untangle exactly what “better” means for behavioral agents, and exactly when the imprecision of nudges or defaults given our imperfect knowledge of individual’s heterogeneous preferences makes attempts at libertarian paternalism worse than laissez faire.
There are, indeed, very real problems with behavioral welfare economics. But the same is true of standard welfare economics. Should we treat utilities as cardinal, and sum them to get our welfare function, when analyzing a typical non-behavioral model? Should we sum the utilities nonlinearly? Should we consider only the worst-off individual in society, as John Rawls might have us do?
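To make those questions concrete, here's a toy comparison (the utility numbers are made up for illustration) showing how three standard welfare criteria can rank the same two allocations differently:

```python
import numpy as np

def utilitarian(u):        # sum of cardinal utilities
    return np.sum(u)

def nash(u):               # a nonlinear (log/product) aggregation
    return np.sum(np.log(u))

def rawlsian(u):           # max-min: only the worst-off individual counts
    return np.min(u)

a = np.array([10.0, 10.0, 1.0])   # higher total utility, very unequal
b = np.array([6.0, 6.0, 6.0])     # lower total utility, equal

# Utilitarian summation prefers a; the other two criteria prefer b.
for w in (utilitarian, nash, rawlsian):
    print(w.__name__, "prefers", "a" if w(a) > w(b) else "b")
```

Standard welfare economics has to pick one of these (or something like them) before it can call any policy "better" - the same kind of choice behavioral welfare economics gets criticized for.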

Those are nontrivial questions. And they apply to pretty much every economic policy question in existence. But for some reason, Kevin chooses to raise ethical concerns only for behavioral econ. Do we see Kevin worrying about whether efficient contracts will lead to inequality that's unacceptable from a welfare perspective? No. Kevin seems to be very very very worried about paternalism, and generally pretty cavalier about inequality.

Perhaps this reflects Kevin's libertarian values? I actually have no idea what Kevin believes in. But hopefully the Nobel committee tries to base its awards on positive rather than normative considerations. After all, the physics Nobel often goes to scientists whose discoveries could be used to make weapons, right? I just don't see the need to automatically mix in ethics and values when assessing the importance of behavioral economics.


3. The invisible hand-wave, again

Kevin writes:
Thaler has very convincingly shown that behavioral biases can affect real world behavior, and that understanding those biases means two policies which are identical from the perspective of a homo economicus model can have very different effects. But many economic situations involve players doing things repeatedly with feedback – where heuristics approximated by rationality evolve – or involve players who “perform poorly” being selected out of the game. For example, I can think of many simple nudges to get you or I to play better basketball. But when it comes to Michael Jordan, the first order effects are surely how well he takes cares of his health, the teammates he has around him, and so on. I can think of many heuristics useful for understanding how simply physics will operate, but I don’t think I can find many that would improve Einstein’s understanding of how the world works.
This argument makes little sense to me. Most people aren't Michael Jordan or Einstein. And the Michael Jordans and Einsteins of the world surely haven't competed all the other basketball players and physicists out of the market. Why does the existence of a few perfectly rational people mean that nudges don't matter in aggregate? Also, why should we assume that non-Michael-Jordans can quickly or completely learn heuristics that make nudges unnecessary? If that were true, why would players even have coaches?

It seems like another case of the invisible hand-wave.

(Also, when it's used as an object, it's "you and me", not "you and I". This grammar overcorrection is my one weakness. If you ever need to defeat me in battle, just use "X and I" as an object, and I'll fly into an insane rage and walk right into your perfectly executed jujitsu move.)

Kevin continues:
The 401k situation [that Thaler's most famous nudge policy deals with] is unusual because it is a decision with limited short-run feedback, taken by unsophisticated agents who will learn little even with experience. The natural alternative, of course, is to have agents outsource the difficult parts of the decision, to investment managers or the like. And these managers will make money by improving people’s earnings. No surprise that robo-advisors, index funds, and personal banking have all become more important as defined contribution plans have become more common! If we worry about behavioral biases, we ought worry especially about market imperfections that prevent the existence of designated agents who handle the difficult decisions for us.
Assuming that a market for third-party advice will take care of behavioral problems seems like both a big leap and a mistake. First, there's the assumption that someone with nontrivial behavioral biases will be completely rational in her choice of an adviser. Big assumption. Remember that people are typically paying financial advisers a fifth of their life's savings or more. Big price tag. How confident are we that someone who treats opt-in and opt-out pensions differently is going to get good value for that huge and opaque expenditure?

Also, suppose that financial advisers really do earn their keep, i.e. a fifth of your life's savings. If the market for financial advice is efficient, and financial advice is all about countering your own behavioral biases, that means that behavioral biases are so severe that their impact is worth a fifth of your lifetime wealth! If a cheap little nudge could make all of that vast expenditure unnecessary - i.e., if it could get you to do the thing that you'd otherwise pay a financial adviser 20% of your lifetime wealth to do for you - then the nudge seems like a huge efficiency-booster.
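Some rough arithmetic on the size of that expenditure (a back-of-the-envelope sketch; the 7% gross return, 1% annual fee, and 40-year horizon are my illustrative assumptions, not anyone's actual fee schedule):

```python
# How a small annual advisory fee compounds into a large share of
# lifetime savings. All parameters are illustrative assumptions.
GROSS, FEE, YEARS = 0.07, 0.01, 40

with_fee = (1 + GROSS - FEE) ** YEARS       # wealth multiple net of fees
without_fee = (1 + GROSS) ** YEARS          # wealth multiple with no fees

drag = 1 - with_fee / without_fee
print(f"share of terminal wealth lost to the fee: {drag:.0%}")
```

Under these assumptions, a seemingly small 1% annual fee compounds into losing roughly three-tenths of terminal wealth - the same ballpark as the "fifth of your life's savings" figure, which is why a cheap nudge that substitutes for paid advice would be such a large efficiency gain.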

So this point of Kevin's also seems to miss the mark.


4. Endowment effects and money pumps

Kevin writes:
Consider Thaler’s famous endowment effect: how much you are willing to pay for, say, a coffee mug or a pen is much less than how much you would accept to have the coffee mug taken away from you. Indeed, it is not unusual in a study to find a ratio of three times or greater between the willingness to pay and willingness to accept amount. But, of course, if these were “preferences”, you could be money pumped (see Yaari, applying a theorem of de Finetti, on the mathematics of the pump). Say you value the mug at ten bucks when you own it and five bucks when you don’t. Do we really think I can regularly get you to pay twice as much by loaning you the mug for free for a month? Do we see car companies letting you take a month-long test drive of a $20,000 car then letting you keep the car only if you pay $40,000, with some consumers accepting? Surely not.
First of all, the endowment effect isn't a money pump if it only works once per object. It's only a money pump if you can keep loaning and reselling the same thing to someone. Otherwise, people's maximum potential losses from this bias are finite - some fraction of their lifetime consumption. Maybe not 300%, but something.

But anyway, Kevin says that we don't see car companies letting you take a month-long test drive. Hmm. I guess that is true...for cars.



5. External validity of lab effects

Everyone knows external validity of laboratory findings is a big problem for experimental economics (and psychology, and biology...). Also problematic is ecological validity - even if a lab effect consistently exists in the real world, it might not matter quantitatively compared to other stuff. External and ecological validity do present big challenges for behaviorists who want to take insights from the lab and use them to predict real-world outcomes.

But Kevin chooses some highly questionable examples to illustrate the problem. For example:
Even worse are the dictator games introduced in Thaler’s 1986 fairness paper. Students were asked, upon being given $20, whether they wanted to give an anonymous student half of their endowment or 10%. Many of the students gave half! This experiment has been repeated many, many times, with similar effects. Does this mean economists are naive to neglect the social preferences of humans? Of course not! People are endowed with money and gifts all the time. They essentially never give any of it to random strangers – I feel confident assuming you, the reader, have never been handed some bills on the sidewalk by an officeworker who just got a big bonus! Worse, the context of the experiment matters a ton (see John List on this point). Indeed, despite hundreds of lab experiments on dictator games, I feel far more confident predicting real world behavior following windfalls if we use a parsimonious homo economicus model than if we use the results of dictator games.
Does Kevin seriously think that any behaviorist believes that dictator games imply that people walk around giving away half of any gifts they receive? That makes no sense at all. In the dictator game, there's one other person - in the real world, there are effectively infinite other people. What would it even mean for a person on the street to behave analogously to a person in a dictator game? The situations aren't equivalent at all.

As John List says, context matters. Wage negotiations at a company are different from family gift exchanges, which are different from financial windfalls, which are different from randomly being handed money on the street. Norms in these situations are different. If someone gives you a gift, there's probably a norm of not re-gifting it. If someone hands you money in a dictator game, you probably don't treat it as a personal gift. Etc.

To me, this is clearly not a reason to assume that norms and values only matter in the lab, and that real-world people always behave perfectly selfishly. Quite the contrary. It's a reason to pay more attention to norms and values, not less. Why does Bill Gates give away so much of his money? Why do people give money to some beggars and buskers but not to others? Do these behaviors bear any similarity to how people behave when asking for (or handing out) raises in the workplace? Do they bear any similarity to the way people haggle over the price of a car or a house?

These are not trivial questions to be waved away, simply because if you hand someone cash on the street they don't instantly hand half of it to the first person they see.

Kevin follows this up with what seems like another bad example:
To take one final example, consider Thaler’s famous model of “mental accounting”. In many experiments, he shows people have “budgets” set aside for various tasks. I have my “gas budget” and adjust my driving when gas prices change. I only sell stocks when I am up overall on that stock since I want my “mental account” of that particular transaction to be positive. But how important is this in the aggregate? Take the Engel curve. Budget shares devoted to food fall with income. This is widely established historically and in the cross section. Where is the mental account? Farber (2008 AER) even challenges the canonical account of taxi drivers working just enough hours to make their targeted income. As in the dictator game and the endowment effect, there is a gap between what is real, psychologically, and what is consequential enough to be first-order in our economic understanding of the world.
Kevin's argument appears to be that if mental accounting only matters in some domains, it doesn't matter overall. That makes no sense to me. If mental accounting is important for investing and driving, but not for food purchases or taxi jobs, does that mean it's not important "in the aggregate"? Of course not! Gas is a substantial monthly expense. The compounded rate of return on your stock portfolio can make a huge difference to your lifetime consumption. Even if mental accounting mattered only for these two things, it would matter in the aggregate.


So, Kevin's attacks on Thaler's research paradigm pretty much uniformly miss the mark. Because of this, I half suspect that Kevin - usually the most careful and incisive of bloggers - is playing devil's advocate here, taking cheap shots at behaviorism simply because it's fun. This guerrilla resistance is more like paintball.