Thursday, May 26, 2016

101ism, overtime pay edition

John Cochrane wrote a blog post criticizing the Obama administration's new rule extending overtime pay to low-paid salaried employees. Cochrane thinks about overtime in the context of an Econ 101 type model of labor supply and demand. I'm not going to defend the overtime rule, but I think Cochrane's analysis is an example of what I've been calling "101ism".

One red flag indicating that 101 models are being abused here is that Cochrane applies the same model in two different ways. First, he models overtime pay as a wage floor:

Then he alternatively models it as a negative labor demand shock:

Well, which is it? A wage floor, or a negative labor demand shock? The former makes wages go up, while the latter makes wages go down, so the answer is clearly important. If using the 101 model gives you two different, contradictory answers, it's a clue that you shouldn't be using the 101 model.

In fact, overtime rules are not quite like either wage floors or negative labor demand shocks. Overtime rules stipulate not a wage level, but a ratio between the base wage and the wage paid on each hour a given worker works beyond a certain threshold.

In the Econ 101 model of labor supply and demand, there's no distinction between the extensive and the intensive margin - hiring the same number of employees for fewer hours each is exactly the same as hiring fewer employees for the same number of hours each. But with overtime rules, those two are obviously not the same. For a given base wage, under overtime rules, hiring 100 workers for 40 hours each is cheaper than hiring 40 workers for 100 hours each, even though the total number of labor hours is the same. That breaks the 101 model.
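As a back-of-the-envelope check on that claim (the base wage here is illustrative, not from the post; the 1.5x premium is the standard FLSA rate), the two hiring plans buy the same 4,000 labor hours at very different cost:

```python
# Same total labor hours, different cost under a 1.5x overtime premium
# on hours beyond 40 per week. The $20 base wage is illustrative.
BASE_WAGE = 20.0

# Plan 1: 100 workers x 40 hours each -- nobody crosses the threshold.
plan_100x40 = 100 * (40 * BASE_WAGE)

# Plan 2: 40 workers x 100 hours each -- 60 overtime hours per worker.
plan_40x100 = 40 * (40 * BASE_WAGE + 1.5 * BASE_WAGE * 60)

print(plan_100x40)  # 80000.0
print(plan_40x100)  # 104000.0 -- 30% more for the identical 4,000 hours
```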

With overtime rules, weird things can happen. First of all, base wages can fall while keeping employment the same, even if labor demand is elastic. Why? Because if companies fix the hours that their employees work, they can just set the base wage lower so that overall compensation stays the same, leading to the exact same equilibrium as before.
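A minimal sketch of that offset (all numbers are made up: hours fixed at 50 per week, overtime premium 1.5x past 40 hours):

```python
# If hours are fixed, the firm can cut the base wage so that total
# weekly pay under the 1.5x overtime rule equals the old flat-wage pay.
HOURS = 50        # fixed weekly hours; 10 of them are overtime
OLD_WAGE = 20.0   # flat hourly wage before the rule

target_pay = OLD_WAGE * HOURS   # $1000/week the firm wants to keep paying

# Under the rule: pay = w*40 + 1.5*w*(HOURS - 40), so solve for w.
new_base = target_pay / (40 + 1.5 * (HOURS - 40))
new_pay = new_base * 40 + 1.5 * new_base * (HOURS - 40)

print(round(new_base, 2))  # 18.18 -- the base wage falls...
print(round(new_pay, 2))   # 1000.0 -- ...but total pay is unchanged
```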

Overtime rules can also raise the level of employment. Suppose a firm is initially indifferent between A) hiring a very productive worker for 60 hours a week at $50 an hour, and B) hiring a very productive worker for 40 hours a week at $50 an hour, and hiring 2 less productive workers at 40 hours a week each for $25 an hour. Overtime rules immediately change that calculation, raising the cost of option (A) while leaving option (B) untouched, and thus tipping the firm toward (B). In general equilibrium, in a model with nonzero unemployment (because of reservation wages, or demand shortages, etc.), overtime rules should cut hours for productive workers and draw some less-productive workers into employment. In fact, this is exactly what Goldman Sachs expects to happen.
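Plugging the post's numbers into a quick sketch (assuming the standard 1.5x premium past 40 hours) shows the tilt toward option (B):

```python
# Weekly labor cost of the post's two hiring options, with and without
# a 1.5x overtime premium on hours beyond 40.

def weekly_cost(hours, wage, premium=1.0):
    """Cost of one worker, paying premium * wage on hours past 40."""
    overtime = max(hours - 40, 0)
    return wage * min(hours, 40) + premium * wage * overtime

for premium, label in [(1.0, "no overtime rule"), (1.5, "1.5x overtime")]:
    a = weekly_cost(60, 50.0, premium)  # option (A): one worker, 60 hrs
    b = weekly_cost(40, 50.0, premium) + 2 * weekly_cost(40, 25.0, premium)  # option (B)
    print(f"{label}: A = ${a:.0f}, B = ${b:.0f}")
# Without the rule, A costs $3000 and B costs $4000 (the firm is
# indifferent because the options differ in output as well as cost).
# Under the rule, A rises to $3500 while B stays at $4000, so B gains
# $500 relative to the firm's old indifference point.
```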

Now, to understand the true impact of overtime rules, we probably have to include more complicated stuff, like unobservable effort (what if people work longer but less hard?), laws regarding the number of work hours, unobservable hours (since the new rule is for salaried employees), sticky wages, etc. But even if we want to think about the simplest possible case, we can't use the basic 101 model, since the essence of overtime rules is to force firms to optimize over two different margins, and S-D graphs represent optimization over only one margin.

Using 101 models where they clearly don't apply is 101ism!

Monday, May 23, 2016

Theory vs. Evidence: Unemployment Insurance edition

The argument over "theory vs. evidence" is usually oversimplified and silly, since you need both to understand the world. But there is a sense in which I think evidence really does "beat" theory most of the time, at least in econ. Basically, I think empirical work without much theory is usually more credible than the reverse.

To show what I mean, let's take an example. Suppose I was going to try to persuade you that extended unemployment insurance has big negative effects on employment. But suppose I could only show you one academic paper to make my case. Which of these two papers, on its own, would be more convincing?

Paper 1: "Optimal unemployment insurance in an equilibrium business-cycle model", by Kurt Mitman and Stanislav Rabinovitch

The optimal cyclical behavior of unemployment insurance is characterized in an equilibrium search model with risk-averse workers. Contrary to the current US policy, the path of optimal unemployment benefits is pro-cyclical – positively correlated with productivity and employment. Furthermore, optimal unemployment benefits react nonmonotonically to a productivity shock: in response to a fall in productivity, they rise on impact but then fall significantly below their pre-recession level during the recovery. As compared to the current US unemployment insurance policy, the optimal state-contingent unemployment benefits smooth cyclical fluctuations in unemployment and deliver substantial welfare gains.

Some excerpts:
The model is a Diamond–Mortensen–Pissarides model with aggregate productivity shocks. Time is discrete and the time horizon is infinite. The economy is populated by a unit measure of workers and a larger continuum of firms...Firms are risk-neutral and maximize profits. Workers and firms have the same discount factor β...Existing matches [i.e., jobs] are exogenously destroyed with a constant job separation probability δ...All worker–firm matches are identical: the only shocks to labor productivity are aggregate shocks...[A]ggregate labor productivity...follows an AR(1) process...The government can insure against aggregate shocks by buying and selling claims contingent on the aggregate state...The government levies a constant lump sum tax τ on firm profits and uses its tax revenues to finance unemployment benefits...The government is allowed to choose both the level of benefits and the rate at which they expire. Benefit expiration is stochastic...

Paper 2: "The Impact of Unemployment Benefit Extensions on Employment: The 2014 Employment Miracle?", by Marcus Hagedorn, Iourii Manovskii, and Kurt Mitman

We measure the aggregate effect of unemployment benefit duration on employment and the labor force. We exploit the variation induced by Congress' failure in December 2013 to reauthorize the unprecedented benefit extensions introduced during the Great Recession. Federal benefit extensions that ranged from 0 to 47 weeks across U.S. states were abruptly cut to zero. To achieve identification we use the fact that this policy change was exogenous to cross-sectional differences across U.S. states and we exploit a policy discontinuity at state borders. Our baseline estimates reveal that a 1% drop in benefit duration leads to a statistically significant increase of employment by 0.019 log points. In levels, 2.1 million individuals secured employment in 2014 due to the benefit cut. More than 1.1 million of these workers would not have participated in the labor market had benefit extensions been reauthorized.

Some excerpts:
[W]e exploit the fact that, at the end of 2013, federal unemployment benefit extensions available to workers ranged from 0 to 47 weeks across U.S. states. As the decision to abruptly eliminate all federal extensions applied to all states, it was exogenous to economic conditions of individual states. In particular, states did not choose to cut benefits based on, e.g. their employment in 2013 or expected employment growth in 2014. This allows us to exploit the vast heterogeneity of the decline in benefit duration across states to identify the labor market implication of unemployment benefit extensions. Note, however, that the benefit durations prior to the cut, and, consequently, the magnitudes of the cut, likely depended on economic conditions in individual states. Thus, the key challenge to measuring the effect of the cut in benefit durations on employment and the labor force is the inference on labor market trends that various locations would have experienced without a cut in benefits. Much of the analysis in the paper is devoted to the modeling and measurement of these trends. 
The primary focus of the formal analysis in the paper is on measuring the counterfactual trends in labor force and employment that border counties would have experienced without a cut in benefits...The first one...allows for permanent (over the estimation window) differences in employment across border counties which could be induced by the differences in other policies (e.g., taxes or regulations) between the states these counties belong to. Moreover, employment in each county is allowed to follow a distinct deterministic time trend. The model also includes aggregate time effects and controls for the effects of unemployment benefit durations in the pre-reform period...The second and third models...reflect the systematic response of underlying economic conditions across counties with different benefit durations to various aggregate shocks and the heterogeneity is induced by differential exposure of counties to these aggregate disturbances. 

These two papers have results that agree with each other. Both conclude that extended unemployment insurance causes unemployment to go up by a lot. But suppose I only showed you one of these papers. Which one, on its own, would be more effective in convincing you that extended UI raises U a lot?

I submit that the second paper would be a lot more convincing. 

Why? Because the first paper is mostly "theory" and the second paper is mostly "evidence". That's not totally the case, of course. The first paper does have some evidence, since it calibrates its parameters using real data. The second paper does have some theory, since it relies on a bunch of assumptions about how state-level employment trends work, as well as having a regression model. But the first paper has a huge number of very restrictive structural assumptions, while the second one has relatively few. That's really the key.

The first paper doesn't test the theory rigorously against the evidence. If it did, it would easily fail all but the most gentle first-pass tests. The assumptions are just too restrictive. Do we really think the government levies a lump-sum tax on business profits? Do we really think unemployment insurance benefits expire randomly? No, these are all obviously counterfactual assumptions. Do those false assumptions severely impact the model's ability to match the relevant features of reality? They probably do, but no one is going to bother to check, because theory papers like this are used to "organize our thinking" instead of to predict reality.

The second paper, on the other hand, doesn't need much of a structural theory in order to be believable. Unemployment insurance discourages people from working, eh? Duh, you're paying people not to work! You don't need a million goofy structural assumptions and a Diamond-Mortensen-Pissarides search model to come up with a convincing individual-behavior-level explanation for the empirical findings in the second paper.

Of course, even the second paper isn't 100% convincing - it doesn't settle the matter. Other mostly-empirical papers find different results. And it'll take a long debate before people agree which methodology is better. 

But I think this pair of papers shows why, very loosely speaking, evidence is often more powerful than theory in economics. Humans are wired to be scientists - we punish model complexity and reward goodness-of-fit. We have little information criteria in our heads.

Update: Looks like I'm not the only one who had this thought... :-)

Also, Kurt has a new discussion paper with Hagedorn and Manovskii, criticizing the methodology of some empirical papers that find only a small effect of extended UI. In my opinion, Kurt's team is winning this one - the method of identifying causal effects of UI on unemployment using data revisions seems seriously flawed.

Friday, May 20, 2016

What's the difference between macro and micro economics?

Are Jews for Jesus actually Jews? If you ask them, they'll surely say yes. But go ask some other Jews, and you're likely to hear the opposite answer. A similar dynamic tends to prevail with microeconomists and macroeconomists. Here is labor economist Dan Hamermesh on the subject:
The economics profession is not in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them.
Ouch. But not too different from lots of other opinions I've heard. "I went to a macro conference recently," a distinguished game theorist confided a couple of years back, sounding guilty about the fact. "I couldn't believe what these guys were doing." A decision theorist at Michigan once asked me "What's the oldest model macro guys still use?" I offered the Solow model, but what he was really suggesting was that macro, unlike other fields, is driven by fads and fashions rather than, presumably, hard data. Macro folks, meanwhile, often insist rather acerbically that there's actually no difference between their field and the rest of econ. Ed Prescott famously refuses to even use the word "macro", stubbornly insisting on calling his field "aggregate economics".

So who's right? What's the actual distinction between macro and "micro"? The obvious difference is the subject matter - macro is about business cycles and growth. But are the methods used actually any different? The boundary is obviously going to be fuzzy, and any exact hyperplane of demarcation will necessarily be arbitrary, but here are some of what I see as the relevant differences.

1. General Equilibrium vs. Game Theory and Partial Equilibrium

In labor, public, IO, and micro theory, you see a lot of Nash equilibria. In papers about business cycles, you rarely do - it's almost all competitive equilibrium. Karthik Athreya explains this in his book, Big Ideas in Macroeconomics:
Nearly any specification of interactions between individually negligible market participants leads almost inevitably to Walrasian outcomes...The reader will likely find the non-technical review provided in Mas-Colell (1984) very useful. The author refers to the need for large numbers as the negligibility hypothesis[.]
Macro people generally assume that there are too many companies, consumers, etc. in the economy for strategic interactions to matter. Makes sense, right? Macro = big. Of course there are some exceptions, like in search-and-matching models of labor markets, where the surplus of a match is usually divided up by Nash bargaining. But overall, Athreya is right.

You also rarely see partial equilibrium in macro papers, at least these days. Robert Solow complained about this back in 2009. You do, however, see it somewhat in other fields, like tax and finance (and probably others).

2. Time-Series vs. Cross-Section and Panel

You see time-series methods in a lot of fields, but only in two areas - macro and finance - is it really the core empirical method. Look in a business cycle paper, and you'll see a lot of time-series moments - the covariance of investment and GDP, etc. Chris Sims, one of the leading empirical macroeconomists, won a Nobel mainly for pioneering the use of SVARs in macro. The original RBC model was compared to data (loosely) by comparing its simulated time-series moments side by side with the empirical moments - that technique still pops up in many macro papers, but not elsewhere. 

Why are time-series methods so central to macro? It's just the nature of the beast. Macro deals with intertemporal responses at the aggregate level, so for a lot of things, you just can't look at cross-sectional variation - everyone is responding to the same big things, all at once. You can't get independent observations in cross section. You can look at cross-country comparisons, but countries' business cycles are often correlated (and good luck with omitted variables, too). 

As an illustration, think about empirical papers looking at the effect of the 2009 ARRA stimulus. Nakamura and Steinsson - the best in the business - looked at this question by comparing different states, and seeing how the amount of money a state got from the stimulus affected its economy. They find a large effect - states that got more stimulus money did better, and the causation probably runs in the right direction. Nakamura and Steinsson conclude that the fiscal multiplier is relatively large - about 1.5. But as John Cochrane pointed out, this result might have happened because stimulus represents a redistribution of real resources between states - states that get more money today will not have to pay more taxes tomorrow, to cover the resulting debt (assuming the govt pays back the debt). So Nakamura and Steinsson's conclusion of a large fiscal multiplier is still dependent on a general equilibrium model of intertemporal optimization, which itself can only be validated with...time-series data.

In many "micro" fields, in contrast, you can probably control for aggregate effects, as when people studying the impact of a surge of immigrants on local labor markets use methods like synthetic controls to control for business cycle confounds. Micro stuff gets affected by macro stuff, but a lot of times you can plausibly control for it.

3. Few Natural Experiments, No RCTs

In many "micro" fields, you now see a lot of natural experiments (also called quasi-experiments). This is where you exploit a plausibly exogenous event, like Fidel Castro suddenly deciding to send a ton of refugees to Miami, to identify causality. There are few events that A) have big enough effects to affect business cycles or growth, and B) are plausibly unrelated to any of the other big events going on in the world at the time. That doesn't mean there are none - a big oil discovery, or an earthquake, probably does qualify. But they're very rare. 

Chris Sims basically made this point in a comment on the "Credibility Revolution" being trumpeted by Angrist and Pischke. The archetypical example of a "natural experiment" used to identify the impact of monetary policy shocks - cited by Angrist and Pischke - is Romer & Romer (1989), which looks at changes in macro variables after Fed announcements. But Sims argues, persuasively, that these "Romer dates" might not be exogenous to other stuff going on in the economy at the time. Hence, using them to identify monetary policy shocks requires a lot of additional assumptions, and thus they are not true natural experiments (though that doesn't mean they're useless!). 

Also, in many fields of econ, you now see randomized controlled trials. These are especially popular in development econ and in education policy econ. In macro, doing an RCT is not just prohibitively difficult, but ethically dubious as well.

So there we have three big - but not hard-and-fast - differences between macro and micro methods. Note that they all have to do with macro being "big" in some way - either lots of actors (#1), shocks that affect lots of people (#2), or lots of confounds (#3). As I see it, these differences explain why definitive answers are less common in macro than elsewhere - and why macro is therefore more naturally vulnerable to fads, groupthink, politicization, and the disproportionate influence of people with forceful, aggressive personalities.

Of course, the boundary is blurry, and it might be getting blurrier. I've been hearing about more and more people working on "macro-focused micro," i.e. trying to understand the sources of shocks and frictions instead of simply modeling the response of the economy to those shocks and frictions. The first time I heard that exact phrase was in connection with this paper by Decker et al. on business dynamism. Another example might be the people who try to look at price changes to tell how much sticky prices matter. Another might be studies of differences in labor market outcomes between different types of workers during recessions. I'd say the study of bubbles in finance also qualifies. This kind of thing isn't new, and it will never totally replace the need for "big" macro methods, but hopefully more people will work on this sort of thing now (and hopefully they'll continue to take market share from "yet another DSGE business cycle model" type papers at macro conferences). As "macro-focused micro" becomes more common, things like game theory, partial equilibrium, cross-sectional analysis, natural experiments, and even RCTs may become more common tools in the quest to understand business cycles and growth. 

Monday, May 16, 2016

How bigoted are Trump supporters?

Jason McDaniel and Sean McElwee have been doing great work analyzing the political movement behind Donald Trump. For example, they've shown pretty conclusively that Trump support is driven at least in part by what they call "racial resentment" - the notion that the government unfairly helps nonwhites.

But "racial resentment" is not the same thing as outright bigotry. Believing that the government unfairly helps black people doesn't necessarily mean you dislike black people. So McDaniel and McElwee did another survey asking about people's attitudes toward various groups. Here's a graph summarizing their basic findings:

So, from this graph, I gather:

1. Trump supporters, on average, say they like Blacks, Hispanics, Scientists, Whites, and Police. On average, they say they dislike Muslims, Transgenders, Gays, and Feminists.

2. Trump supporters, on average, say they like Whites a bit more than average, Muslims a lot less, and Transgenders a bit less. They also might say they like Hispanics, Gays, and Feminists somewhat less, though the statistical significance is borderline.

Now here's how Sean McElwee interpreted this same graph:

This interpretation doesn't appear to be supported by Sean's own data. In fact, his data appear to support the opposite of what he claims.

Now, the main caveat to all this is that surveys like this almost certainly don't do a good job of measuring people's real attitudes toward other groups. When someone calls you on the phone (or hands you a piece of paper) and asks you if you like Hispanics, whether you say "yes" or "no" is probably much more dependent on what you think you ought to say than what you really feel. So this survey is probably mainly just measuring differences in how Trump supporters feel they ought to answer surveys.

But even if this survey really did measure people's true attitudes, it still wouldn't tell us what Sean claims it does. Trump supporters, overall, say they like Blacks. And the degree to which they say they like Blacks is not statistically distinguishable from the national average. Only when it comes to Muslims and Transgender folks do Trump supporters appear clearly more bigoted than the national average.

But going back to the main problem with surveys like this, it might be that Trump supporters are simply more willing to express their dislike of Muslims and Transgender people in a survey. This may just reflect their general lack of education. More educated people are plugged into the mass media culture, which generally discourages overt expressions of bigotry toward any group. Less educated folks are less likely to have gotten the message that you're not supposed to say bad things about Muslims and Trans people.

So in conclusion, this survey doesn't seem to support the narrative that Trump supporters are driven by bigotry. That narrative might still be true, of course - there are certainly some very loud and visible bigots within Trump's support base (and within his organization). But after looking at this data, my priors, which were pretty ambivalent about that narrative to begin with, haven't really been moved at all.

Sunday, May 15, 2016

Russ Roberts on politicization, humility, and evidence

The Wall Street Journal has a very interesting interview with Russ Roberts about economics and politicization. Lots of good stuff in there, and one thing I disagree with. Let's go through it piece by piece!

1. Russ complains about politicization of macroeconomic projections:
He cites the Congressional Budget Office reports calculating the effect of the stimulus package...The CBO gnomes simply went back to their earlier stimulus prediction and plugged the latest figures into the model. “They had of course forecast the number of jobs that the stimulus would create based on the amount of spending,” Mr. Roberts says. “They just redid the estimate. They just redid the forecast.”
I wouldn't be quite so hard on the CBO. It's their job to forecast the effect of policy, and it's also their job to evaluate the impact of policy after the fact. For both tasks they have to choose a model, and of course they're going to choose the same model, even if that makes the evaluation job just a repeat of the forecasting job. I do wish, however, that the CBO would try various alternative models, and show the differing estimates for each. That would be better than what they currently do.

I think a better example of politicization of policy projections was given not by Russ, but by Kyle Peterson, who wrote up the interview for the WSJ. Peterson cited Gerald Friedman's projection of the impact of Bernie Sanders' spending plans. Friedman also could have incorporated model uncertainty, and explored the sensitivity of his projections to his key modeling assumptions. And unlike the CBO, he didn't have a deadline, and no one made him come up with a single point estimate to feed to the media. And some of the people who defended Friedman's paper from criticism definitely turned it into a political issue.

So I think Russ is on point here. There's lots of politicization of policy projections.

2. Peterson (the interviewer) cites a recent survey by Haidt and Randazzo, showing politicization of economists' policy views. This is really interesting. Similar surveys I've seen in the past haven't shown a lot of politicization. A more rigorous analysis found a statistically significant amount of politicization, though the size of the effect didn't look that large to me. So I'd like to see the numbers Haidt and Randazzo get. Anyway, it's an interesting ongoing debate.

3. Russ highlights the continuing intellectual stalemate in macroeconomics:
The old saw in science is that progress comes one funeral at a time, as disciples of old theories die off. Economics doesn’t work that way. “There’s still Keynesians. There’s still monetarists. There’s still Austrians. Still arguing about it. And the worst part to me is that everybody looks at the other side and goes ‘What a moron!’ ” Mr. Roberts says. “That’s not how you debate science.”
Russ is right. But it's very important to draw a distinction between macroeconomics and other fields here. The main difference isn't in the methods used (although there are some differences there too), it's in the type of data used to validate the models. Unlike most econ fields, macro relies mostly on time-series and cross-country data, both of which are notoriously unreliable. And it's very hard, if not impossible, to find natural experiments in macro. That's why none of the main "schools" of macro thought have been killed off yet. In other areas of econ, there's much more data-driven consensus, especially recently. 

I think it's important to always make this distinction in the media. Macro is econ's glamour division, unfortunately, so it's important to remind people that the bulk of econ is in a very different place.

4. Russ makes a great point about econ and the media:
If economists can’t even agree about the past, why are they so eager to predict the future? “All the incentives push us toward overconfidence and to ignore humility—to ignore the buts and the what-ifs and the caveats,” Mr. Roberts says. “You want to be on the front page of The Wall Street Journal? Of course you do. So you make a bold claim.” Being a skeptic gets you on page A9.
Absolutely right. The media usually hypes bold claims. It also likes to report arguments, even where none should exist. This is known as "opinions on the shape of the Earth differ" journalism. This happens in fields like physics - people love to write articles with headlines like "Do we need to rewrite general relativity?". But in physics that's harmless and fun, because the people who make GPS systems are going to keep on using general relativity. In econ, it might not be so harmless, because policy is probably more influenced by public opinion, and public opinion can be swayed by the news.

5. Russ makes another good point about specification search:
Modern computers spit out statistical regressions so fast that researchers can fit some conclusion around whatever figures they happen to have. “When you run lots of regressions instead of just doing one, the assumptions of classical statistics don’t hold anymore,” Mr. Roberts says. “If there’s a 1 in 20 chance you’ll find something by pure randomness, and you run 20 regressions, you can find one—and you’ll convince yourself that that’s the one that’s true.”...“You don’t know how many times I did statistical analysis desperately trying to find an effect,” Mr. Roberts says. “Because if I didn’t find an effect I tossed the paper in the garbage.”
Yep. This is a big problem, and probably a lot bigger than in the past, thanks to technology. Most of science, not just econ, is grappling with this problem. It's not just social science, either - bio is having similar issues.
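Russ's 1-in-20 arithmetic is easy to check by simulation. Under the null hypothesis a well-behaved test's p-value is uniform on [0, 1], so we can skip the regressions themselves and just draw p-values (a generic sketch, not any particular study's design):

```python
# How often does at least one of 20 null regressions come up
# "significant" at the 5% level? Analytically: 1 - 0.95**20, about 0.64.
import random

random.seed(0)
TRIALS = 10_000
false_finds = 0
for _ in range(TRIALS):
    p_values = [random.random() for _ in range(20)]  # null p-values are U(0,1)
    if min(p_values) < 0.05:
        false_finds += 1

print(false_finds / TRIALS)  # close to 0.64: most batches contain a "result"
```

So a researcher who runs 20 independent specifications on pure noise will stumble on a "significant" effect nearly two times out of three.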

6. Russ calls for more humility on the part of economists:
Roberts is saying that economists ought to be humble about what they know—and forthright about what they don’t...When the White House calls to ask how many jobs its agenda will create, what should the humble economist say? “One answer,” Mr. Roberts suggests, “is to say, ‘Well we can’t answer those questions. But here are some things we think could happen, and here’s our best guess of what the likelihood is.’” That wouldn’t lend itself to partisan point-scoring. The advantage is it might be honest.
I agree completely. People are really good at understanding point estimates, but bad at understanding confidence intervals, and really bad at understanding confidence intervals that arise from model uncertainty. "Humility" is just a way of saying that economists should express more uncertainty in public pronouncements, even if their political ideologies push them toward presenting an attitude of confident certainty. A "one-handed economist" is exactly what we have too much of these days. Dang it, Harry Truman!

7. Russ does say one thing I disagree with pretty strongly:
Economists also look for natural experiments—instances when some variable is changed by an external event. A famous example is the 1990 study concluding that the influx of Cubans from the Mariel boatlift didn’t hurt prospects for Miami’s native workers. Yet researchers still must make subjective choices, such as which cities to use as a control group. 
Harvard’s George Borjas re-examined the Mariel data last year and insisted that the original findings were wrong. Then Giovanni Peri and Vasil Yasenov of the University of California, Davis retorted that Mr. Borjas’s rebuttal was flawed. The war of attrition continues. To Mr. Roberts, this indicates something deeper than detached analysis at work. “There’s no way George Borjas or Peri are going to do a study and find the opposite of what they found over the last 10 years,” he says. “It’s just not going to happen. Doesn’t happen. That’s not a knock on them.”
It might be fun and eyeball-grabbing to report that "opinions on the shape of the Earth differ," but that doesn't mean it's a good thing. Yes, it's always possible to find That One Guy who loudly and consistently disagrees with the empirical consensus. That doesn't mean there's no consensus. In the case of immigration, That One Guy is Borjas, but just because he's outspoken and consistent doesn't mean that we need to give his opinion or his papers anywhere close to the same weight we give to the many researchers and studies that find the opposite. 

Anyway, it's a great interview write-up, and I'd like to see the full transcript. Overall, I'm in agreement with Russ, but I'll continue to try to convince him of the power of empirical research!

Friday, May 13, 2016

Review: Ben Bernanke's "The Courage to Act"

I wrote a review of Ben Bernanke's book, The Courage to Act, for the Council on Foreign Relations. Here's an excerpt:
Basically, Bernanke wants the world to understand why he did what he did, and in order to understand we have to know everything.  
And the book succeeds. Those who are willing to wade through 600 pages of history, and who know something about the economic theories and the political actors involved, will come away from this book thinking that Ben Bernanke is a good guy who did a good job in a tight spot. 
But along the way, the book reveals a lot more than that. The most interesting lessons of The Courage to Act are not about Bernanke himself, but about the system in which he operated. The key revelation is that the way that the U.S. deals with macroeconomic challenges, and with monetary policy, is fundamentally flawed. In both academia and in politics, old ideas and prejudices are firmly entrenched, and not even the disasters of crisis and depression were enough to dislodge them.
The main points I make in the review are:

1. Bernanke was the right person in the right place at the right time. He was almost providentially well-suited to the task of steering America through both the financial crisis and the Great Recession that followed. A lot of that had to do with his unwillingness to downplay the significance of the Great Depression (as Robert Lucas and others did), and with his unwillingness to ignore the financial sector (as other New Keynesians did).

2. However, the institutional, cultural, and intellectual barriers against easy monetary policy that were created in the 1980s, as a reaction to the inflation of the '70s, held firm, preventing Bernanke and the Fed from taking more dramatic steps to boost employment, and preventing a thorough rethink of conventional macroeconomic wisdom.

3. Fiscal Keynesianism, however, has also survived, despite generations of efforts by monetarists, New Classicals, Austrians, and others to kill it off. Deep down, Americans still believe that stimulus works.

4. The political radicalism of the Republican party was a big impediment to Bernanke's efforts to revive the economy. Anti-Fed populism, from both the right (Ron Paul) and the left (Bernie Sanders), also interfered with the goal of putting Americans back to work.

You can read the whole thing here!

Michael Strain and James Kwak debate Econ 101

Very interesting debate over Econ 101, between Michael Strain and James Kwak. Strain attempts to defend Econ 101 from the likes of Paul Krugman and Yours Truly. He especially criticizes my call for more empirics in 101:
Critics suggest that introductory textbooks should emphasize empirical studies over these models. There are many problems with this suggestion, not the least of which that economists’ empirical studies don’t agree on many important policy issues. For example, it is ridiculous to suggest that economists have reached consensus that raising the minimum wage won’t reduce employment. Some studies find non-trivial employment losses; others don’t. The debates often hinge on one’s preferred statistical methods. And deciding which methods you prefer is way beyond the scope of an introductory course. 
As you might predict, I have some problems with this. First of all, I don't like the idea that if the empirics aren't conclusively settled, we should just teach theories and forget about the facts. I agree with Kwak, who writes:
I don’t understand this argument. The minimum wage may or may not increase unemployment, depending on a host of other factors. The fact that economists don’t agree reflects the messiness of the world. That’s a feature, not a bug.
Totally! This clearly seems like the intellectually honest thing to do. It seems bad to give kids too strong of a false sense of certainty about the way the world works. When a debate is unresolved, I think you shouldn't simply ignore the evidence in favor of a theory that supports one side of the debate.

As a side note, I think the evidence on short-term employment effects of minimum wage is more conclusive than Strain believes, though also more nuanced than is often reported in the media and in casual discussions.

Strain also writes this, which I disagree with even more:
Even more problematic, some of the empirical research most celebrated by critics of economics 101 contradicts itself about the basic structure of the labor market. The famous “Mariel boatlift paper” finds that a large increase in immigrant workers doesn’t lower the wages of native workers. The famous “New Jersey-Pennsylvania minimum wage paper” finds that an increase in the minimum wage doesn’t reduce employment. If labor supply increases and wages stay constant — the Mariel paper — then the labor demand curve must be flat. But if the minimum wage increases and employment stays constant — New Jersey-Pennsylvania — then the labor demand curve must be vertical. Reconciling these studies is, again, way beyond the scope of an intro course. (emphasis mine)
Strain is using the simplest, most basic Econ 101 theory - a single S-D graph applying to all labor markets - to try to understand multiple results at once. He finds that this super-simple theory can't simultaneously explain two different empirical stylized facts, and concludes that we should respond by not teaching intro students about one or both of the empirical stylized facts.
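For what it's worth, the flat-vs.-vertical tension is easy to put in numbers. Here's a minimal back-of-the-envelope sketch of the arc elasticity of labor demand that each stylized fact implies, under Strain's single-demand-curve assumption (the magnitudes are illustrative, not the papers' actual estimates):

```python
# Arc elasticities of labor demand implied by the two stylized facts,
# IF you assume one demand curve applies to both episodes.
# All numbers are illustrative, not the papers' actual point estimates.

def arc_elasticity(q0, q1, w0, w1):
    """Arc elasticity of demand: %-change in employment / %-change in wage."""
    dq = (q1 - q0) / ((q1 + q0) / 2)
    dw = (w1 - w0) / ((w1 + w0) / 2)
    return float("inf") if dw == 0 else dq / dw

# Mariel-style fact: labor supply jumps several percent, wages roughly flat.
mariel = arc_elasticity(q0=100, q1=107, w0=10.00, w1=10.00)

# NJ-PA-style fact: minimum wage rises ~19%, employment roughly flat.
njpa = arc_elasticity(q0=100, q1=100, w0=4.25, w1=5.05)

print(mariel)  # infinite elasticity: a perfectly flat demand curve
print(njpa)    # zero elasticity: a perfectly vertical demand curve
```

The same formula spits out "perfectly flat" for one episode and "perfectly vertical" for the other, which is exactly Strain's contradiction; the question is whether that indicts the evidence or the one-curve assumption.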

But what if super-simple theory is just not powerful enough to describe both these situations at once? What if there isn't just one labor demand curve that applies to all labor markets at once? Maybe in the case of minimum wage, monopsony models are better than good old supply-and-demand. Maybe in the case of immigration, general equilibrium effects are important. Maybe search frictions are a big deal. There are lots of other possibilities too.

Strain's implicit assumption - that there's just one labor demand curve - seems like an example of what I call "101ism". A good 101 class, in my opinion, should teach monopsony models, and at least give a brief mention of general equilibrium and search frictions. And even more importantly, a good 101 class should stress that models are situational tools, not Theories of Everything. Assuming that there's one single labor demand curve that applies to all labor markets is a way of taking a simple model and trying to make it function as a Theory of Everything; no one should be surprised when that attempt fails. And our response to that failure shouldn't be to just not teach the empirics. It should be to rethink the way we use the theory.

Anyway, I agree with what Kwak says here:
People like Krugman and Smith (and me) aren’t saying that Economics 101 is useless; we all think that it teaches some incredibly useful analytical tools. The problem is that many people believe (or act as if they believe) that those models are a complete description of reality from which you can draw policy conclusions [without looking at evidence].

Sunday, May 08, 2016

Regulation and growth

As long as we're on the topic of regulation and growth, check out this post I recently wrote for Bloomberg View:
I’m very sympathetic to the idea that regulation holds back growth. It’s easy to look around and find examples of regulations that protect incumbent businesses at the expense of the consumer -- for example, the laws that forbid car companies from selling directly to consumers, creating a vast industry of middlemen. You can also find clear examples of careless bureaucratic overreach and inertia, like the total ban on sonic booms over the U.S. and its territorial waters (as opposed to noise limits). These inefficient constraints on perfectly healthy economic activity must reduce the size of our economy by some amount, acting like sand in the gears of productive activity. 
The question is how much...If regulation is less harmful than the free-marketers would have us believe, we risk concentrating our attention and effort on a red herring... 
[F]ocusing too much on deregulation might actually hurt our economy. Many government rules, such as prohibitions on pollution, tainted meat, false advertising or abusive labor practices, are things that the public would probably like to keep in place. And reckless deregulation, like the loosening of restrictions on the financial industry in the period before the 2008 credit crisis, can hurt economic growth in ways not captured by most economic models. Although burdensome regulation is certainly a worry, a sensible approach would be to proceed cautiously, focusing on the most obviously useless and harmful regulations first (this is the approach championed by my Bloomberg View colleague Cass Sunstein). We don’t necessarily want to use a flamethrower just to cut a bit of red tape.

Also, on Twitter I wrote a "tweetstorm" (series of threaded tweets) about the regulation debate. Here are the tweets:

The regulation issue is really a multifaceted, complex, and important set of different issues. It's an important area of policy debate, but it can't be boiled down to one simple graph - and shouldn't be boiled down to one simple slogan.

Friday, May 06, 2016

Brad DeLong pulpifies a Cochrane graph

When Bob Lucas, Tom Sargent, and Ed Prescott remade macroeconomics in the 70s and 80s, what they were rebelling against was reduced-form macro. So you think you have a "law" about how GDP affects consumption? You had better be able to justify that with an optimization problem, said Lucas et al. Otherwise, your "law" is liable to break down the minute you try to take advantage of it with government policy.

Lots of people are unhappy with what Lucas et al. invented to replace the "old macro". But few would argue that the old reduced-form approach didn't need replacing. Identifying correlations in aggregate data really doesn't tell you a lot about what you can accomplish with policy.

Because of this, I've always been highly skeptical of John Cochrane's claim that if we simply launched a massive deregulatory effort, it would make us many times richer than we are today. Cochrane typically shows a graph of the World Bank's "ease of doing business" rankings vs. GDP, and claims that this graph essentially represents a menu of policy options - that if we boost our World Bank ranking slightly past the (totally hypothetical) "frontier", we can make our country five times as rich as it currently is. This always seemed like the exact same fallacy that Lucas et al. pointed out with respect to the Phillips Curve. 

You can't just do a simple curve-fitting exercise and use it to make vast, sweeping changes to national policy. 

Brad DeLong, however, has done me one better. In a short yet magisterial blog post, DeLong shows that even if Cochrane is right that countries can move freely around the World Bank ranking graph, the policy conclusions are incredibly sensitive to the choice of functional form. 

Here is Cochrane's graph, unpacked from its log form so you can see how speculative it really is:

DeLong notes that this looks more than a little bit crazy, and decides to do his own curve-fitting exercise (which for some reason he buries at the bottom of his post). Instead of a linear model for log GDP, he fits a quadratic polynomial, a cubic polynomial, and a quartic polynomial. Here's what he gets:

Cochrane's conclusion disappears entirely! As soon as you add even a little curvature to the function, the data tell us that the U.S. is actually at or very near the optimal policy frontier. DeLong also posts his R code in case you want to play with it yourself. This is a dramatic pulpification of a type rarely seen these days. (And Greg Mankiw gets caught in the blast wave.)
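The functional-form sensitivity is easy to replicate. Here's a minimal sketch of the same exercise in Python (with synthetic data standing in for the World Bank numbers - DeLong's actual code is in R, and these magnitudes are made up for illustration):

```python
# Illustrative sketch of DeLong's point: fit polynomials of increasing
# degree to log GDP vs. a "Doing Business" score, then extrapolate to the
# frontier. Data below are synthetic, not the actual World Bank numbers.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: 50 countries, scores between 30 and 90, log GDP
# rising with the score but flattening near the top, plus noise.
score = rng.uniform(30, 90, 50)
log_gdp = 6 + 0.05 * score - 0.0002 * (score - 60) ** 2 + rng.normal(0, 0.3, 50)

# The same curve-fitting exercise under four different functional forms,
# each extrapolated past the data to a hypothetical frontier score of 100.
for degree in (1, 2, 3, 4):
    coeffs = np.polyfit(score, log_gdp, degree)
    at_frontier = np.polyval(coeffs, 100.0)
    print(f"degree {degree}: predicted log GDP at the frontier = {at_frontier:.2f}")
```

The in-sample fits look nearly identical, but the out-of-sample predictions at the "frontier" scatter widely across degrees - which is the whole problem with reading the graph as a policy menu.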

DeLong shows that even if Cochrane is right that we can use his curve like macroeconomists thought we could use the Phillips Curve back in 1970, he's almost certainly using the wrong curve. You'd think Cochrane would care about this possibility enough to at least play around with slightly different functional forms before declaring in the Wall Street Journal that we can boost our per capita income to $400,000 per person by launching an all-out attack on the regulatory state. I mean, how much effort does it take? Not much.

And this is an important issue. An all-out attack on the regulatory state would inevitably destroy many regulations that have a net social benefit. The cost would be high. Economists shouldn't bend over backwards to try to show that the benefits would be even higher. That's just not good policy advice.

(Also, on a semi-related note, Cochrane's WSJ op-ed (paywalled) uses China's nominal growth as a measure of the rise in China's standard of living. That's just not right - he should have used real growth. If that's just an oversight, it should be corrected.)


Cochrane responds to DeLong. His basic responses are 1) drawing plots with log GDP is perfectly fine, and 2) communist regimes like North Korea prove that the relationship between regulation and growth is causal.

Point 1 is right. Log GDP on the y-axis might mislead 3 or 4 people out there, but those are people who have probably been so very misled by so very many things that this isn't going to make a difference.

Point 2 is not really right. Sure, if you go around shooting businesspeople with submachine guns, you can tank GDP by making it really hard to do business. No one doubts that. But that's a far, far, far cry from being able to boost GDP to $400k per person by slashing regulation and taxes. Cochrane's problem isn't just causality, it's out-of-sample extrapolation. DeLong shows that if you fit a cubic or quartic polynomial to the World Bank data, you find that too much "ease of doing business" is actually bad for your economy, and doing what Cochrane suggests would reduce our GDP substantially. Is that really true? Who knows. Really, what this exercise shows is that curve-fitting-and-extrapolation exercises like the one Cochrane does in the WSJ are silly sauce.

Anyway, if you're interested to read more stuff I wrote about regulation and growth, see this post.

Monday, April 25, 2016

Life Update: Leaving Stony Brook, joining Bloomberg View

Short version: I've joined Bloomberg View as a full-time writer. I'm leaving Stony Brook, and leaving academia, effective August 15, 2016. Bloomberg View had approached me a year ago about possibly working for them full-time. So I took a 1-year leave from Stony Brook, partially to finish some research stuff I had to do, and partially to decide whether I should switch jobs (I retained my Stony Brook affiliation during that time, and kept advising students and working with Stony Brook professors, but didn't teach classes). Early this year, Bloomberg made me a very nice offer for a full-time position, and I decided to take it. The offer included the chance to live in the San Francisco Bay Area, where I've long wanted to live, so I've moved to SF.

Longer version: Back in 2006, the original reason I thought of getting an econ PhD was actually to become an econ pundit and writer. I saw the quality of the econ commentary out there, and decided that it could be much improved - that there was a huge breakdown in the pipeline of good and useful ideas between academia and the public debate. I admired economists like Brad DeLong and Paul Krugman, and writers like Matt Yglesias, who took some steps to bridge that gap, but I thought that much more needed to be done. I wanted to make sure that good ideas, rather than politically motivated propaganda or silly oversimplifications, made it out of the ivory tower and into the public consciousness.

As soon as I started grad school, I essentially forgot that dream entirely. I got absorbed in the grad school stuff, first in macroeconomics, then later in my dissertation and behavioral finance (which was much more fun and satisfying than macro). And I enjoyed the academic life at Stony Brook, especially the people there. Now, with this Bloomberg job, I'll sadly be leaving that behind - but in the end, I came right back around to where I started. In terms of bringing good ideas from academia into the public consciousness, progress has been made, but much remains to be done. Fortunately, Bloomberg is a great platform to do this, and I'll be working with some great econ writers like Narayana Kocherlakota, Justin Fox, and many others.

As for Stony Brook, I will miss everyone there. It's a good, fast-growing department. The behavioral finance group there is strong and growing, with Danling Jiang, Stefan Zeisberger, and others. The people in charge of the College of Business, including the dean, Manuel London, are really excellent leaders, and the department is much friendlier and less politics-ridden than basically any other I've seen. They'll be hiring my replacement soon, so if you're a job candidate in behavioral finance, and you'd like to live in New York, I'd recommend Stony Brook.

Anyway, to you grad students out there, I certainly wouldn't regard myself as a role model - my career is weird and unusual, and was probably always destined to be that way. But I'd still recommend the economics PhD, and the life of an economist, to a whole lot of people out there.

That's all that's changed. I'll still be blogging here, and I'll still be around on Twitter!

Sunday, April 24, 2016

Baseline models

Your macro diss of the day comes via Kevin Grier. Grier is responding to a blog post where David Andolfatto uses a simple macro model to think about interest rates and aggregate demand. Kevin, employing a somewhat tongue-in-cheek tone, criticized David's choice of model:
OK, everybody got that. Representative agent? check. Perfect capital markets? check, lifetime income fixed and known with certainty? check. Time-separable preferences? check. 
People, it would be one thing if models like this fit the data, but they don't. 
The consumption CAPM is not an accurate predictor of asset prices, The degree of risk aversion required to make the numbers work in the equity premium puzzle is something on the order of 25 or above, the literature is littered with papers rejecting [the Permanent Income Hypothesis]. 
So we are being harangued by a model that is unrealistic in the theory and inaccurate to the extreme in its predictions. 
And that's pretty much modern macro in a freakin' nutshell.
Mamba out. 
Kevin is saying that if simple models of this type - models with representative agents, perfect capital markets, deterministic income, and time-separable preferences - haven't performed well empirically, we shouldn't use them to think about macro questions, even in a casual setting like a blog post.

I think Kevin is basically right about the garbage-in, garbage-out part: bad models lead to bad thinking.

A defender of David's approach might say that this model is just a first-pass approximation, good for a first-pass analysis. That even if simple models like this can't solve the Equity Premium Puzzle or predict all of people's consumption behavior, they're good enough for thinking about monetary policy in a casual way.

But I don't think I'd buy that argument. We know that heterogeneity can change the results of monetary policy models a lot. We know incomplete markets can also change things a lot, in different ways. And I think it's pretty well-established that stochasticity and aggregate risk can change the answers a lot as well.

So by using a representative-agent, perfect-foresight, complete-markets model, David is ignoring a bunch of things that we know can totally change the answers to the exact policy questions David is thinking about.

So what should we do instead? One problem is that models with things like heterogeneity, stochasticity, and imperfect markets are a lot more complicated, and therefore harder to apply in a quick or casual way. If we insist on using models with those elements, then it's going to be very hard to write blog posts thinking through monetary policy issues in a formal way. Maybe that's just the sad truth.

Another problem is that we don't really know that models with things like heterogeneity, stochasticity, and imperfect markets are going to be much better. Most of these models can match a couple features of the data, but as of right now there's no macro model in existence that matches most or all of the stylized facts about business cycles, finance, consumption, etc. 

So it might be a choice between using A) a simple model that we know doesn't work very well, and B) a complicated model that we know doesn't work very well. Again, the best choice might be just to throw up our hands and stop using formal models to think casually about monetary policy.

Kevin also says that "being harangued by a model that is unrealistic in the theory and inaccurate to the extreme in its predictions" is "pretty much modern macro in a freakin' nutshell." Is that true? 

Actually, I'd say it's more of a problem in fields like international finance, asset pricing, and labor that try to incorporate macro models into their own papers. Usually, in my experience, they pick a basic RBC-type model, because it's easy to use. They then add their own elements, like labor search, financial assets, or multiple countries. But since the basic foundation is a macro model that doesn't even work well for the purpose it was originally conceived for (explaining the business cycle), the whole enterprise is probably doomed from the start.

In the core macro field, though, I think there's a recognition that simple models don't work, and an active search for better ones. From what I've personally seen, most leading macroeconomists are also pretty cautious and circumspect when they give advice to policymakers directly, and don't rely too strongly on any one model. 

Saturday, April 23, 2016

Policy recommendations and wishful thinking

There was a bit of a blow-up earlier this year over Gerald Friedman's analysis of Bernie Sanders' economic plans. Paul Krugman, Christina and David Romer, Brad DeLong and others (including yours truly) said that Friedman was being overly optimistic about the effects of stimulus - some said he had overestimated the remaining output gap, others questioned the use of "Verdoorn's Law" to predict that stimulus can increase productivity growth to very high levels. Others, like JW Mason and Dean Baker, defended Sanders.

To me, it seemed that the coup de grâce was delivered by Justin Wolfers:
When I pointed Mr. Friedman to this critique of his analysis, he simultaneously accepted and rejected it 
He accepted it, telling me that “I may have made a mistake.” 
But he also rejected this critique, arguing that his figures are based on an alternative view of the world, stating: “To me, when the government spends money, stimulates the economy, hires people who spend, that stimulates more private investment. That remains, and at the next year, you’re starting at the higher level.” He admits that this “is not standard macro,” and described it as the understanding of an earlier generation of economists — a sub-tribe of Keynesians he called “Joan Robinson Keynesians.”
When you get someone to admit they made a mistake in their analysis, it seems like it's over. Friedman admits he made a mistake and then says that his conclusion was right anyway, because we can go find some alternative assumptions that make his original conclusion hold. To me this is transparently assuming the conclusion. That's a big no-no, and while a lot of macroeconomists probably do this, it looks really bad to admit to it!

(I'm also starting to realize that "Joan Robinson" is a sort of an invincible rhetorical refuge for lefty macro types, the way "Friedrich Hayek" is for righty macro types.)

Anyway, the fracas quieted down for a while, but now it's back. Friedman and allies are no longer saying that their analysis is "just standard economics", since they had to switch to non-standard economics to make the conclusions come out the way they wanted. The line now is that Krugman, the Romers, et al. are just a bunch of pessimists, who are unintentionally playing into the hands of conservatives.

Here's Friedman, writing at the INET website:
Professional economists tend to embrace an economic theory that government can do little more than fuss around the edges. From that stance, what do they have to offer ordinary people for whom the economy is not working? Not a whole lot...The angry reaction to my report revealed that by some combination of rationalization and the dominance of neoclassical microeconomics since the 1970s, liberal economists have virtually abandoned Keynesian economics... 
There is, of course, a politics as well as a psychology to this economic theory...The role of economists and other policy elites (Paul Krugman is fond of the term “wonks”) is to explain to the general public why they should be reconciled with stagnant incomes, and to rebuke those, like myself, who say otherwise[.]
And here's Mason, being interviewed in Jacobin:
The position on the other side, the CEA chairs and various other people who’ve been the most vocal critics of [Friedman's] estimates, has been implicitly or explicitly: “This is as good as we can do.”...“No you can’t.” That’s the other side here: all the reasons for why you can’t do anything. Just give up! Then this notion that Republicans make everything impossible is just another bit of ammunition for “No you can’t.”... 
Right now, we have a system that says as soon as wages start rising, you have to throttle back demand. In many ways, the people running the show don’t necessarily want very fast growth. They prefer an economy that’s sort of sputtering along because it’s one that involves a lot of insecurity and a lot of weakness for working people. When there’s a chronic oversupply of labor people can’t rock the boat.
On Twitter, Mason clarified that when he talks about "the people running the show," he meant the Republicans, not Krugman, the Romers, et al. Basically, he's accusing mainstream liberal economists of unintentionally playing into the hands of conservatives.

Krugman was not happy about this, and blogger ProGrowthLiberal was pretty mad:
The claim that economists like Christina and David Romer bought into the New Classical revolution is both absurd and dishonest...[W]e critics do admit we are below full employment and we have been calling for fiscal stimulus. On this score, the latest from J.W. Mason is even more dishonest than the latest from Gerald Friedman. Guys – you do not win a debate by lying about the other side’s position.
I think PGL is going a little far here - Friedman and Mason aren't lying about their liberal opponents' positions. They aren't claiming Krugman, et al. are New Classicals, only that in the current political and economic situation, they might as well be. Also, Jacobin appeared to put words in Mason's mouth.

But anyway, I don't like what Friedman and Mason are doing. I think economists have a duty to look at the facts as objectively as they can, regardless of their emotions and desires. You shouldn't prefer Model B over Model A just because one leads to "hope" and the other to "hopelessness". 

Suppose you're a doctor, and your patient has knee pain, so you prescribe some anti-inflammatories. The inflammation goes away and the knee pain gets somewhat better, but doesn't go away entirely, and you conclude that inflammation wasn't the only thing that was causing pain. You don't prescribe a 10x dose of the original anti-inflammatory just because doing otherwise would mean abandoning hope. That would be silly! Even if the patient has an evil boss who doesn't want him to recover, you still don't recommend the 10x dose of anti-inflammatories.

Friedman and Mason seem to be arguing that our belief about the facts should be driven, at least in part, by our desire to avoid a feeling of powerlessness. They also seem to be saying that if the facts seem to support conservative policies, even a tiny bit, we should reinterpret the facts.

I don't like this approach. It seems anti-rationalist to me, and I think that if wonks behave this way, they'll end up recommending lots of bad policies. 

Monday, April 11, 2016

Astrologers and macroeconomists

I like to keep track of "econ diss" articles, since that's what this blog was mostly about for its first few years of existence. Most of them leave a lot to be desired. But here's one I really like, in Aeon magazine, by philosophy prof Alan Levinovitz.

Levinovitz likens modern-day macroeconomics to mathematical astrology in the early Chinese empire. And in fact, the parallel sounds pretty accurate. The article is worth reading just to learn about classical Chinese astrology, actually.

But anyway, Levinovitz draws heavily on the econ disses of Paul Romer:

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’
 ...and Paul Pfleiderer:
Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’...Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’
He also rightly (in my opinion) identifies Robert Lucas as a key figure in the turn away from empiricism in macro:
Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:
"The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory."
A lot of what Levinovitz is writing is just a synthesis of things that smart economists have been complaining about in private for decades, and in public since the 2008 crisis. I think econ needs more critics like this, who are willing and able to go talk to the smart dissidents within the econ mainstream, rather than just accepting at face value the arguments of "heterodox" outsiders due to political affinity (as some econ critics sadly do).

Levinovitz, however, leaves out what I think is the most important development: the empirical revolution in econ. This has been most important in micro fields, since data is much more abundant, but it's also starting to influence macro. "Micro-focused macro" - using firm-level or area-level data to test the assumptions of macro models directly, rather than just throwing in a bunch of obviously wrong assumptions and hoping they yield aggregate results you like - is a big deal these days, and getting bigger. Soon, we may even see people insisting in seminars that DSGE models only use assumptions that have been rigorously tested on high-quality micro data! That dream is still far off, but it seems to be getting closer.

I also think Levinovitz should have given a shout-out to the successes of applied micro theory - auction theory, matching theory, discrete choice models, and the rest. He writes:
Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline.
But that's actually not right. Econ theory powers lots of useful technology, from Google's ad auctions to kidney transplant allocation systems. Economic engineering isn't a term people use, but it's a real thing, and mathematical econ theories sometimes do an excellent job of describing human behavior in ways that can be consistently applied.

But anyway, Levinovitz's article is very good (and very well-written), and is worth a read. Just remember that econ is a lot more than macro, that it has become much more data-centric, and that it has produced a number of useful engineering applications.

Saturday, April 09, 2016

101ism in action: minimum wage edition

A while ago I went on a rant about the dangers of "101ism", which is a word I made up for when people use an oversimplified or just plain wrong version of Econ 101 in policy discussions. Well, here I have a perfect example for you. And among the culprits was me.

It started when American Enterprise Institute scholar Mark J. Perry tweeted the following graph about minimum wage:

I was annoyed by the word "actually". My current pet peeve is people not paying attention to empirical evidence - I think if you say "actually", there should be more than just a theory backing you up, especially if evidence is actually available. So I started giving Mark a hard time about ignoring the empirical evidence on the minimum wage question. 

That's when Alex Tabarrok jumped in and defended the cartoon, saying that it's just a basic supply-and-demand model:

But that's not right. This cartoon actually doesn't show the basic D-S model at all. Let's look at it again:

The basic, Econ 101 D-S model is a model of a market for a single homogeneous good. In the case of the labor market, that good is labor. There's one kind of labor, and everyone who does it gets paid the same wage. Since the wage in that model is equal to the marginal revenue product of labor, this means everyone's labor generates the same amount of revenue (this is also obvious just from the assumption that labor is homogeneous; if everyone's doing the exact same work, they can't each be generating different amounts of revenue). A wage floor in the basic D-S model will put some people out of work, and will raise the amount of revenue generated by each person who keeps her job, thus raising wages as well.
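To make that concrete, here's a minimal numerical sketch of the textbook wage-floor story, using toy linear demand and supply curves of my own invention (none of these numbers come from the post):

```python
# Toy Econ 101 labor market: linear inverse demand and supply.
# (Illustrative parameters only.)

def labor_demand_wage(L):
    """Wage employers will pay for the L-th unit of labor,
    i.e. the marginal revenue product of labor (MPL)."""
    return 20.0 - 0.5 * L

def labor_supply_wage(L):
    """Wage needed to attract the L-th unit of labor."""
    return 2.0 + 0.4 * L

# Equilibrium: 20 - 0.5L = 2 + 0.4L  =>  L* = 20, w* = 10.
L_star = (20.0 - 2.0) / (0.5 + 0.4)
w_star = labor_demand_wage(L_star)

# Impose a binding wage floor above equilibrium. Employment is now
# read off the demand curve: 20 - 0.5L = 14  =>  L = 12.
w_floor = 14.0
L_floor = (20.0 - w_floor) / 0.5

print(L_star, w_star)              # pre-floor employment and wage
print(L_floor)                     # employment falls under the floor
print(labor_demand_wage(L_floor))  # MPL of remaining jobs rises to 14
```

Note the key feature: after the floor cuts employment from 20 to 12, the marginal revenue product of everyone still employed has risen from 10 to 14. The wage floor costs jobs, but it also *raises* the revenue generated by each surviving job.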

In the cartoon, however, different jobs are stated to generate different amounts of revenue. Also, the last panel implies that a wage floor leaves the revenue generated by workers unchanged. So while the cartoon and the D-S model both predict that minimum wage causes job loss, it's only a coincidence - they're not the same model at all. 

The cartoon could be trying to portray a sophisticated model of heterogeneous labor in a highly segmented market. Or, far more likely, it could just be some sloppy political crap made by a cartoonist who doesn't remember his intro econ class very well. Either way, Econ 101 it ain't.

When Alex claimed that the cartoon is an "accurate portrayal" of the D-S model, I waved away his protest, basically saying "Who cares, evidence comes first." But (possibly because I had a nasty virus...excuses, excuses), I failed to notice until this morning that the cartoon is not the D-S model at all! I gave it the benefit of the doubt and assumed Alex was right. But Alex must not have been paying close attention - since he teaches the D-S model in online videos, he obviously does know how that model works.

So the cartoonist, and Mark J. Perry as well, are peddling bad economics. But they managed to momentarily convince both me and Alex that they're just peddling good ol' simple Econ 101. How did they do that?

In my case, it was because I committed the fallacy of the converse. I assumed that because the basic Econ 101 model says minimum wages cause job loss, and the cartoon says the same, the latter must be equal to the former. That's like saying "Horses have legs, I have legs, therefore I must be a horse." I suspect that Alex made the same mistake. And so we both gave a stupid cartoon far more credit than it deserved.

This is 101ism at its worst. It got me too, people. It's a plague, I tell you! A plague!


This post has stimulated a lot of interesting discussion about what the basic Econ 101 supply-and-demand model actually says (see comments, also Twitter).

One point has been that the definition of "the amount of revenue a job generates" - the language in the cartoon - is not clear. I took it to mean "marginal product of labor", but some people take it to mean "average product of labor". Either way, though, the APL generally changes as total labor consumed changes, so the cartoon still doesn't make sense if we define "revenue generated" as APL.

Alex Tabarrok, in the comments, seems to suggest a model in which one "job" is not equal to one differential unit of undifferentiated labor, but actually represents several units. If this is the case, each job will have a different total revenue benefit to employers, and the MPL of a job can't even be well-defined (since it's a discrete unit rather than a differential). So with these definitions, you can definitely say that "each job generates a different amount of revenue". 

But the point is, no matter how you define a job, or the revenue generated by a job, that amount will in general change for each job under a wage floor. The amount of revenue one person's job generates depends on who else is working. That's what Econ 101 teaches - or ought to teach, anyway. And that's what the cartoon gets wrong. It shows a wage floor eliminating every job whose "revenue generated" is lower than the wage floor before the implementation of the wage floor. Actually, basic Econ 101 D-S teaches that a wage floor eliminates every job whose total revenue benefit to employers (the integral of marginal revenue product over some range represented by the "job") is less than the wage floor (representing the cost of hiring the worker) after the introduction of the wage floor. Since the wage floor changes the quantity of labor consumed, and since the marginal revenue product of labor is in general not constant, those things are not the same. 
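Here's a toy sketch of that point, again with illustrative numbers of my own (a quadratic production function, so marginal product falls with employment). It contrasts the cartoon's rule - delete every job whose pre-floor "revenue generated" is below the floor - with what the D-S model actually implies:

```python
# Toy production function with diminishing returns:
# F(L) = 20L - 0.25L^2, so MPL(L) = 20 - 0.5L (output price = 1).
# (Illustrative parameters only.)

def F(L):
    return 20.0 * L - 0.25 * L ** 2

def marginal_product(L):
    """Revenue lost if one of L identical workers is removed -
    this depends on how many OTHER people are working."""
    return F(L) - F(L - 1)

w_floor = 14.0
L_before = 20  # pre-floor employment

# Pre-floor, each of the 20 workers "generates" F(20) - F(19) = 10.25.
rev_before = marginal_product(L_before)

# Cartoon rule: 10.25 < 14, so the cartoon would eliminate EVERY job.
# 101 rule: employers shed workers only until the marginal worker
# covers the floor - and shedding workers raises everyone's MPL.
L_after = L_before
while L_after > 0 and marginal_product(L_after) < w_floor:
    L_after -= 1

rev_after = marginal_product(L_after)
print(rev_before, L_after, rev_after)
```

In this example, employment falls from 20 to 12, and each remaining job now generates 14.25 - above the floor - even though every job generated only 10.25 beforehand. Evaluating "revenue generated" before the floor, as the cartoon does, gets the model's prediction badly wrong.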

And that is why the cartoon is a bad representation of Econ 101. Good Econ 101, in my opinion (and probably in most people's opinions), should teach how marginal benefits and costs change according to the quantity consumed. The cartoon shows them not changing. That's not good Econ 101.