Thursday, August 24, 2017

The Market Power Story


So, there's this story going around the econosphere, which says that the economy is being throttled by market power. I've sort of bought into this story. It certainly seems to be getting a lot of attention from top economists. Autor, Dorn, Katz, Patterson and van Reenen have blamed industrial concentration for the fall in labor's share of income. Now there's a new paper out by De Loecker and Eeckhout blaming monopoly power for much more than that - lower wages, lower labor force participation, slower migration, and slow GDP growth. The paper is getting plenty of attention.

That's a big set of allegations. Everyone knows that the U.S. economy has been looking anemic since the turn of the century, and now a growing chorus of papers by well-respected people is claiming that we've found the culprit. Monopoly power could potentially become Public Enemy #1 for economists, the way taxes and unions were in the 70s, and antitrust could become the new silver bullet policy.

With those kinds of stakes, it was inevitable that pushback and skepticism would rev up - after all, you don't just let a big theory like that go unchallenged. My Bloomberg View colleague Tyler Cowen is one of the first to step up to the plate, with a blog post criticizing the De Loecker and Eeckhout paper (BTW I just spelled both of those correctly from memory. I want some kind of prize.)

Tyler's post really made me think. It raises some important issues and caveats. But ultimately I don't think it does that much to derail the Market Power Story. Here are some of my thoughts on Tyler's points.


1. Monopolistic Competition

Tyler:
There are two ways these mark-ups could go up: first there may be more outright monopoly, second there may be more monopolistic competition, with high mark-ups but also high fixed costs, and firms earning close to zero profits....Consider my local Chinese restaurant.  Maybe the fixed cost of a restaurant has gone up, due to rising rents and the need to invest in information technology.  That can mean higher fixed costs, but still a positive mark-up at the margin.
First of all, and most importantly, monopolistic competition is perfectly consistent with the Market Power Story. Monopolistic competition in general does not produce an efficient outcome. Though monopolistic competition doesn't generate long-term profits the way outright monopoly does, it does generate deadweight losses. This is true even when market power comes from product differentiation, as in the typical Dixit-Stiglitz formulation. Monopolistic competition does involve market power, so it could also explain the drop in labor share, wages, etc.
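
To see why, here's the standard textbook logic in one line (a generic sketch, not anything specific to the De Loecker and Eeckhout paper): a firm facing constant demand elasticity $\varepsilon > 1$ with marginal cost $c$ sets

\[ P \;=\; \frac{\varepsilon}{\varepsilon - 1}\, c \;>\; c. \]

Every unit that consumers value at more than $c$ but less than $P$ goes unproduced - that's the deadweight loss. Free entry can drive profits (net of fixed costs) to zero without ever closing the wedge between price and marginal cost.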

So this objection of Tyler's doesn't really go against the Market Power Story, which was always about monopolistic competition rather than outright monopoly.

What about markups vs. profits? In general, Tyler is right - higher markups could indicate higher fixed costs rather than higher profit margins.
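
A toy bit of arithmetic (mine, purely illustrative) shows how that can happen: a firm with marginal cost of $10 that sells 100 units at $15 has a markup of 1.5 and a gross margin of $500; if its fixed costs are also $500, its profit is exactly zero despite the 50% markup. So markups alone can't distinguish rising market power from a shift toward fixed-cost-heavy production, which is why the profit evidence below matters.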

But what would these fixed costs be? Tyler suggests rent, but that is a variable cost, not a fixed cost. He also suggests information technology costs -- buying computers for your office, software for the computers, point-of-sale tech, etc. But advances in IT seem just as likely to reduce fixed costs as to raise them. Typewriters cost as much in the 60s as computers do now, but computers can do infinitely more. So much business can be done on the internet, using freely available tools like Google Sheets and Google Docs and free chat apps for workplace communications. Internet outsourcing also dramatically lowers fixed costs by turning them into variable costs.

I'm open to the idea that fixed costs have increased, but I can't easily think of what those fixed costs would be. Maybe modern business organizations are more complex, and therefore require more up-front investment in firm-specific human capital? I'm just hand-waving here.


2. Profits

Tyler:
The authors consider whether fixed costs have risen in section 3.5.  They note that measured corporate profits have increased significantly, but do not consider these revisions to the data.  Profits haven’t risen by nearly as much as the unmodified TED series might suggest.
Tyler is referring to the fact that foreign sales aren't counted when calculating official profit margins, leading these margins to be overstated. Here is Jesse Livermore's corrected series, which uses gross value added in the denominator:


Profit margins are at an all-time high, but not that much higher than in the 50s and 60s.

A more accurate measure of true economic profits (i.e., what you'd expect market power to produce) would also subtract the opportunity cost of capital in the numerator. Simcha Barkai does this in a recent paper, also using gross value added in the denominator. Here's his graph for the last 30 years:


His series tells basically the same story as Livermore's - profits have gone up up up. But it doesn't extend back to the 50s, so it's not clear whether accounting for capital costs back then would shrink the high profit margins seen on Livermore's graph. Interest rates in the 50s and 60s were similar to what they are now, though, so it seems likely that Barkai's method would produce a large-ish profit share back then as well.
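
For reference, Barkai's accounting is roughly the following (my paraphrase of the approach, not his exact notation):

\[ \text{profit share}_t \;\approx\; \frac{P_t Y_t \;-\; w_t L_t \;-\; R_t\, P^K_t K_t}{P_t Y_t}, \]

where $P_t Y_t$ is gross value added, $w_t L_t$ is labor compensation, $K_t$ is the capital stock, and $R_t$ is a user cost of capital built from interest rates, expected inflation, and depreciation. Whatever is left after paying labor and paying capital its required return is economic profit - the thing market power should generate.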

So it does seem clear that profit has gone way up in recent decades. But a full account should say why profit was also high in the 50s and 60s, and whether this too was caused by market power.

Also, as an interesting side note, Barkai mentions how corporate investment has fallen. That's interesting, because it definitely doesn't square with the "increasing fixed costs" story. Here's Barkai's graph:


If this is a rise in fixed costs we're looking at, where's the investment spending?


3. Market Concentration

Tyler:
In most areas we have more choice, maybe much more choice, than before...ask yourself a simple question — in how many sectors of the American economy do I, as a consumer, feel that concentration has gone up and real choice has gone down?  Hospitals, yes.  Cable TV?  Sort of, but keep in mind that program quality and choice wasn’t available at all not too long ago.  What else?  There are Dollar Stores, Wal-Mart, Amazon, eBay, and used goods on the internet.  Government schools.  Hospitals.  Government.  Did I mention government?
Hmm. Autor et al. show that market concentration has increased in basically all broad industrial categories. On the one hand, that doesn't take geography and local market power into account - if there's only one store in town, does it matter whether it's an indie store or a Wal-Mart? On the other hand, I think it gives us systematic, reliable information that Tyler's anecdotes don't.

Also, Tyler is thinking only of consumer sectors. Much of the economy consists of intermediate goods and services - B2B. These could easily be getting more concentrated, even though we don't come into contact with them very often. 

(And one random note: Tyler at one point seems to treat greater product choice as evidence against market concentration, in the case of TV channels. But that's not right. If Netflix were the world's only distribution service, it could jack up the price of watching TV and movies even if it offered an infinite selection of movies and shows.)

That said, the example of retail is an interesting one. Autor shows that retail concentration has gone up, but I'm sure people now have more choice of retailers than they used to. I think the distinction between national concentration and local concentration probably matters a lot here. And that means maybe it matters for other industries too.

But as for which industries seem more concentrated than before, just off the top of my head...let me think. Banks. Airlines (which is why they aren't all going bankrupt right now). Pharma. Energy. Consumer nondurables. Food. Semiconductors. Entertainment. Heavy equipment manufacturing. So anecdotally, it does seem like there's a lot of this going on, and it's not just health care and government.


4. Output restriction

Tyler:
Similarly, the time series for manufacturing output is a pretty straight upward series, especially once you take out the cyclical component.  If there is some massive increase in monopoly power, where does the resulting output restriction show up in that data?  Once you ask that simple question, the whole story just doesn’t add up.
This is an important point. The basic model of monopoly power is that it restricts output. That's where the deadweight loss comes from (and the same for monopolistic competition too). But overall output is going up in most industries. What gives?
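
For reference, the textbook version of the output restriction (a generic example, not tied to any of the papers): with linear demand $P = a - bQ$ and constant marginal cost $c$,

\[ Q_{comp} = \frac{a-c}{b}, \qquad Q_{mono} = \frac{a-c}{2b}, \qquad DWL = \tfrac{1}{2}\,(P_{mono}-c)(Q_{comp}-Q_{mono}) = \frac{(a-c)^2}{8b}. \]

In this simple case the monopolist produces half the competitive quantity, so a big economy-wide rise in market power should, naively, show up as output falling relative to trend - which is exactly why Tyler's question bites.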

I think the answer is that it's very hard to know a counterfactual. How many more airline tickets would people be buying if the industry had more competition? How much more broadband would we consume? How many more bottles of shampoo would we buy? How many more miles would we drive? It's hard to know these things.

Still, I think this question could and should be addressed with some event studies. Did big mega-mergers change output trends in their industries? That's a research project waiting to be done. 
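
Here's a rough sketch of what such an event study could look like, written in Python with statsmodels; the data file and the column names (industry, year, log_output, treated, post_merger) are hypothetical placeholders, not an actual dataset:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per industry-year, with log output, a dummy
# for industries that experienced a mega-merger (treated), and a dummy for
# the years after that industry's merger (post_merger).
df = pd.read_csv("industry_output_panel.csv")  # placeholder file name

# Difference-in-differences with industry and year fixed effects: did output
# in merging industries change relative to non-merging ones after the merger?
model = smf.ols(
    "log_output ~ treated:post_merger + C(industry) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["industry"]})

print(model.summary())
# A negative coefficient on treated:post_merger would be consistent with
# output restriction following consolidation.

A negative and robust interaction coefficient wouldn't be a smoking gun on its own, but a pile of these across industries would be pretty suggestive.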


So overall, I think that while Tyler raises some interesting and important points, and provides lots of food for thought, he doesn't really derail the Market Power Story. Even more importantly, that story relies on more than just the De Loecker and Eeckhout paper (and dammit, I had to look up the spelling this time!). The Autor et al. paper is important too. So is the Barkai paper. So are many other very interesting papers by credible economists. So is the body of work showing how antitrust enforcement has weakened in the U.S. To really take down the story, either some common problem will have to be found with all of these papers, or each one (and others to come) will have to be debunked independently, or some compelling alternate explanation will have to be found.

The Market Power Story is still alive, and still worrying. 


Update

Forgot to mention this in the original post, but basically I see the case of the Market Power Story - or any big economic story like this - as detective work. We're collecting circumstantial evidence, and while no piece of evidence is a smoking gun, each adds to the overall picture. IF the economy were being throttled by increased market power, we'd expect to see:

1. Increased market concentration (Check! See Autor et al.)

2. Increased markups (Check! See De Loecker and Eeckhout)

3. Increased profits (Check! See Barkai)

4. Decreased investment (Check! See Gutierrez and Philippon)

5. Increased prices following mergers (Probably check! See Blonigen and Pierce)

6. Weakened antitrust enforcement (Check! See Kwoka)

7. Decreased output (Not sure yet)

So, as I see it, the evidence is piling up from a number of sides here. Economists need to investigate the question of whether output has been restricted. But those who want to come up with an alternate story for the recent changes in industrial organization need one that's consistent with the facts found by all these sleuthing detectives.


Update 2

Robin Hanson and Karl Smith both have posts responding to De Loecker and Eeckhout's paper and attacking the Market Power Story. Both give reasons why they think rising markups indicate monopolistic competition, rather than entry barriers. But both seem to forget that monopolistic competition causes deadweight loss. Just because it has the word "competition" in it does NOT mean that monopolistic competition is efficient. It is not.  


Update 3

Tyler has another post challenging the De Loecker and Eeckhout paper and the Market Power Story in general. His new post makes a variety of largely unconnected points. Briefly...

Tyler on general equilibrium:
If every sector of an economy becomes monopolistic, output will contract in each sector, and it might appear that productivity will decline.  But for the most part this output reduction will not be achieved by burning crops in the fields.  Rather, less will be produced and factors of production will be freed up for elsewhere.  New sectors will arise, and offer goods and services too, perhaps with monopolies as well... 
You can cite the deadweight loss of monopoly all you want, but we’re getting more outputs of other stuff.  Value-added could be either higher or lower, productivity too.
This seems like a hand-waving argument that economic distortions in one sector are never bad, because they free up resources to be used elsewhere. That's obviously wrong, though. To see this, suppose the government levied a 10000% tax on food. Yes, the labor and capital freed up from the contraction of the food industry would get used elsewhere. NO, overall this outcome would not be good for the economy. Monopoly acts like a tax, so a similar principle applies. 
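
The analogy is exact in a simple case (standard textbook algebra, not anything from Tyler's post or the paper): with constant demand elasticity $\varepsilon$ and marginal cost $c$, a monopolist charges

\[ P_{mono} = \frac{\varepsilon}{\varepsilon-1}\, c = (1+t)\, c, \qquad t = \frac{1}{\varepsilon-1}, \]

which is the same consumer price a competitive industry would charge under an ad valorem tax at rate $t$ - except that the "tax revenue" shows up as profit rather than going to the government. Nobody argues that taxing every sector at that rate would be harmless just because the freed-up resources get reemployed somewhere else.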

No, resource reallocation does not make market distortions efficient. 

Tyler on innovation: 
The Schumpeterian tradition, of course, suggested that market power would boost innovation.  There are at least two first-order effects pushing in this direction.  First, the monopoly has more “free cash” for R&D, and second there is a lower chance of the innovation benefiting competing firms too.  I don’t view the “monopoly boosts innovation” hypothesis as confirmed, but it probably has commanded slightly more sympathy from researchers than the opposite point of view.  Bell Labs did pretty well.
This is actually a good and important point, and I don't think we can dismiss it at all. There are economists who argue monopoly reduces innovation, and others who argue it increases it. 

Tyler on product diversity:
[Y]ou must compare [the efficiency loss from monopolistic competition] to the rise in product diversity that follows from monopolistic competition.
Does market power increase product diversity? That was certainly Edward H. Chamberlin's theory back in the 1930s. When you start getting technical, the question becomes less clear.

Tyler on De Loecker and Eeckhout, again:
But under those same conditions, profits are zero and so the mark-up arguments from the DeLoeker and Eeckhout paper do not apply and indeed cannot hold.
That seems incorrect to me. The fact that long-term profits are zero does NOT make monopolistic competition efficient. So the De Loecker and Eeckhout argument can indeed hold, quite easily. This basic fact - the inefficiency of monopolistic competition in standard theory - keeps coming up again and again, and it's one that the bloggers now rushing to attack the De Loecker and Eeckhout paper have not yet taken into account.

Thursday, August 17, 2017

"Theory vs. Data" in statistics too


Via Brad DeLong -- still my favorite blogger after all these years -- I stumbled on this very interesting essay from 2001, by statistician Leo Breiman. Breiman basically says that statisticians should do less modeling and more machine learning. The essay has several responses from statisticians of a more orthodox persuasion, including the great David Cox (whom every economist should know). Obviously, the world has changed a lot since 2001 -- where random forests were the hot machine learning technique back then, it's now deep learning -- but it seems unlikely that this overall debate has been resolved. And the parallels to the methodology debates in economics are interesting.

In empirical economics, the big debate is between two different types of model-makers. Structural modelers want to use models that come from economic theory (constrained optimization of economic agents, production functions, and all that), while reduced-form modelers just want to use simple stuff like linear regression (and rely on careful research design to make those simple models appropriate).

I'm pretty sure I know who's right in this debate: both. If you have a really solid, reliable theory that has proven itself in lots of cases so you can be confident it's really structural instead of some made-up B.S., then you're golden. Use that. But if economists are still trying to figure out which theory applies in a certain situation (and let's face it, this is usually the case), reduced-form stuff can both A) help identify the right theory and B) help make decently good policy in the meantime.

Statisticians, on the other hand, debate whether you should actually have a model at all! The simplistic reduced-form models that structural econometricians turn up their noses at -- linear regression, logit models, etc. -- are the exact things Breiman criticizes for being too theoretical! 

Here's Breiman:
[I]n the Journal of the American Statistical Association (JASA), virtually every article contains a statement of the form: "Assume that the data are generated by the following model: ..." 
I am deeply troubled by the current and past use of data models in applications, where quantitative conclusions are drawn and perhaps policy decisions made... 
[Data generating process modeling] has at its heart the belief that a statistician, by imagination and by looking at the data, can invent a reasonably good parametric class of models for a complex mechanism devised by nature. Then parameters are estimated and conclusions are drawn. But when a model is fit to data to draw quantitative conclusions... 
[t]he conclusions are about the model’s mechanism, and not about nature’s mechanism. It follows that...[i]f the model is a poor emulation of nature, the conclusions may be wrong... 
These truisms have often been ignored in the enthusiasm for fitting data models. A few decades ago, the commitment to data models was such that even simple precautions such as residual analysis or goodness-of-fit tests were not used. The belief in the infallibility of data models was almost religious. It is a strange phenomenon—once a model is made, then it becomes truth and the conclusions from it are [considered] infallible.
This sounds very similar to the things reduced-form econometric modelers say when they criticize their structural counterparts. For example, here's Francis Diebold (a fan of structural modeling, but paraphrasing others' criticisms):
A cynical but not-entirely-false view is that structural causal inference effectively assumes a causal mechanism, known up to a vector of parameters that can be estimated. Big assumption. And of course different structural modelers can make different assumptions and get different results.
In both cases, the criticism is that if you have a misspecified theory, results that look careful and solid will actually be wildly wrong. But the kind of simple stuff that (some) structural econometricians think doesn't make enough a priori assumptions is exactly the stuff Breiman says (often) makes way too many.

So if even OLS and logit are too theoretical and restrictive for Breiman's tastes, what does he want to do instead? Breiman wants to toss out the idea of a model entirely. Instead of making any assumption about the DGP, he wants to use an algorithm - a set of procedural steps to make predictions from data. As discussant Brad Efron puts it in his comment, Breiman wants "a black box with lots of knobs to twiddle." 

Breiman has one simple, powerful justification for preferring black boxes to formal DGP modeling: it works. He shows lots of examples where machine learning beat the pants off traditional model-based statistical techniques, in terms of predictive accuracy. Efron is skeptical, accusing Breiman of cherry-picking his examples to make machine learning methods look good. But LOL, that was back in 2001. As of 2017, machine learning - in particular, deep learning - has accomplished such magical feats that no one now questions the notion that these algorithmic techniques really do have some secret sauce. 
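
For a flavor of the kind of horse race Breiman had in mind, here's a toy sketch in Python using scikit-learn (synthetic data and my own example, not one of Breiman's):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification task; multiple clusters per class give it some
# structure a linear model can't fully capture.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_clusters_per_class=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "data model": logistic regression, with its parametric assumptions.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "algorithmic model": a random forest, Breiman's own black box.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("logit accuracy: ", logit.score(X_test, y_test))
print("forest accuracy:", forest.score(X_test, y_test))

Breiman's claim was that on a lot of real prediction problems, the second number comes out higher - and that statisticians should care.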

Of course, even Breiman admits that algorithms don't beat theory in all situations. In his comment, Cox points out that when the question being asked lies far out of past experience, theory becomes more crucial:
Often the prediction is under quite different conditions from the data; what is the likely progress of the incidence of the epidemic of v-CJD in the United Kingdom, what would be the effect on annual incidence of cancer in the United States of reducing by 10% the medical use of X-rays, etc.? That is, it may be desired to predict the consequences of something only indirectly addressed by the data available for analysis. As we move toward such more ambitious tasks, prediction, always hazardous, without some understanding of underlying process and linking with other sources of information, becomes more and more tentative.
And Breiman agrees:
I readily acknowledge that there are situations where a simple data model may be useful and appropriate; for instance, if the science of the mechanism producing the data is well enough known to determine the model apart from estimating parameters. There are also situations of great complexity posing important issues and questions in which there is not enough data to resolve the questions to the accuracy desired. Simple models can then be useful in giving qualitative understanding, suggesting future research areas and the kind of additional data that needs to be gathered. At times, there is not enough data on which to base predictions; but policy decisions need to be made. In this case, constructing a model using whatever data exists, combined with scientific common sense and subject-matter knowledge, is a reasonable path...I agree [with the examples Cox cites].
In a way, this compromise is similar to my post about structural vs. reduced-form models - when you have solid, reliable structural theory or you need to make predictions about situations far away from the available data, use more theory. When you don't have reliable theory and you're considering only a small change from known situations, use less theory. This seems like a general principle that can be applied in any scientific field, at any level of analysis (though it requires plenty of judgment to put into practice, obviously).

So it's cool to see other fields having the same debate, and (hopefully) coming to similar conclusions.

In fact, it's possible that another form of the "theory vs. data" debate could be happening within machine learning itself. Some types of machine learning are more interpretable, which means it's possible - though very hard - to open them up and figure out why they gave the correct answers, and maybe generalize from that. That allows you to figure out other situations where a technique can be expected to work well, or even to use insights gained from machine learning to allow the creation of good statistical models.

But deep learning, the technique that's blowing everything else away in a huge array of applications, tends to be the least interpretable of all - the blackest of all black boxes. Deep learning is just so damned deep - to use Efron's term, it just has so many knobs on it. Even compared to other machine learning techniques, it looks like a magic spell. I enjoyed this cartoon by Valentin Dalibard and Petar Veličković (tweeted by Dendi Suhubdy):




Deep learning seems like the outer frontier of atheoretical, purely data-based analysis. It might even count as a new type of scientific revolution - a whole new way for humans to understand and control their world. Deep learning might finally be the realization of the old dream of holistic science or complexity science - a way to step beyond reductionism by abandoning the need to understand what you're predicting and controlling.

But this, as they say, would lead us too far afield...


(P.S. - Obviously I'm doing a ton of hand-waving here, I barely know any machine learning yet, and the paper I'm writing about is 16 years out of date! I'll try to start keeping track of cool stuff that's happening at the intersection of econ and machine learning, and on the general philosophy of the thing. For example, here's a cool workshop on deep learning, recommended by the good folks at r/badeconomics. It's quite possible deep learning is no longer anywhere near as impenetrable and magical as outside observers often claim...)