Tuesday, April 30, 2013

Can "culture" predict economic development?

In this essay, Daniel Altman predicts that China will fall short of the West, because of its "Confucian" culture:
[Unconditional] convergence didn't seem to be happening in many parts of the world [in the last century]...[S]ome countries that appeared to be catching up to the West for a few decades, like Japan, hit a wall before they reached the same standards of living, falling inexplicably short of the target. 
In the very long term, [cultural] factors may turn out to be the most important ones [in China's development]. 
Confucianism is perhaps the leading influence on Chinese business practices...The teachings of Confucius date back centuries, and they are deeply ingrained in Chinese society...Yet some of its central tenets, though they may have benefits at the social level, are not necessarily conducive to economic growth. 
Confucian ethics teach that one should value the collective over the individual...A second and related tenet of Confucianism...encompasses the “respect for elders” that is a hallmark of many East Asian civilizations. In Confucianism, this deference belongs not just in family relationships but also between ruler and subject, master and servant, and employer and employee. 
Together, these tenets of Confucianism — and the way they have been interpreted by the Chinese authorities in recent times — have helped to maintain rigid hierarchies in Chinese businesses... 
There is one other cultural current that runs just as deeply as Confucianism...Chinese people learn a very particular story of the birth of their nation, in which the great struggle through the millennia has been to unite the enormous land mass and diverse ethnicities of China into one nation...The message is clear: to be united and realize the dreams of a great Chinese nation, the Chinese people need strong rulers who brook little dissent. 
The message carries through to the boardrooms of Chinese companies, which tend to concentrate the instruments of power in the hands of a single strongman... 
All of these factors will combine to lower the target for material living standards in China — or, to put it more technically, they reduce the level of per capita income toward which China is converging. With these factors in place, China simply is not in the same convergence club as the United States...
China may just manage to catch the United States and become the world’s biggest economy. But it will hold onto the title for only a few years before the United States, growing more quickly in both population and the productivity of its workers, passes China again... 
[A]s Japan’s example goes to show, holding onto culture — and other deep factors — can keep the limits to growth in place.
This column provides an object lesson in the degree to which using Twitter has limited my vocabulary. I'm struggling to think of a concise description of this essay that does not involve the word "derp".

First, I need to deal with the most glaringly annoying part: the Japan example. Altman claims that Japan failed to catch up with the West. This is laughably false. Here are the 2012 per-capita GDP numbers (at PPP) for Japan and its three closest analogues among the Western nations, the rich, medium-sized, ethnically homogeneous nations of West Europe (source: IMF):
  • Germany: $39,028
  • UK: $36,941
  • Japan: $36,266
  • France: $35,548
In case you wondered, here are the nominal GDP numbers (source: IMF):
  • Japan: $46,736
  • Germany: $41,513
  • France: $41,141
  • UK: $38,589
Hopefully, you are convinced that Japan has fully caught up to the West. Its per-capita GDP rivals those of the countries that produced Locke and Hume and Adam Smith, Wittgenstein and Kant, Descartes and Voltaire. +1 for convergence, -1 for "culture". Why Altman feels justified in his casual assertion that Japan "fell short" of the West remains a mystery.

Anyway, let's move on to the claims about China's "Confucianist" culture. Just for fun, here are the GDP (PPP) numbers for two other East Asian countries commonly labeled as "Confucianist" - South Korea and Taiwan:
  • Taiwan: $38,749
  • Korea: $32,272
As you can see, Confucianism has not stopped these countries from rivaling Western ones in wealth. Taiwan, in particular, is populated by people of the exact same cultural heritage as mainland China, and yet has managed to overtake both the UK and France in GDP. Singapore, a city-state populated mostly by Chinese people, is even richer, rivaling the small countries of North Europe.

Anyway, I could sit here and question every assertion Altman makes about China's "Confucianist" culture - "How do you know that's culture and not institutions?" "Where's your data?" "Have you ever even worked in China?" - but I think the Taiwan and South Korea GDP numbers do the trick. I rest my case. +1 for convergence, -1 for "culture".

This clearly illustrates the perils of engaging in what I like to call "phlogistonomics" (a term coined by Matt Yglesias). The method goes like this:

Step 1: Take some hard-to-understand phenomenon, like economic growth. Explain the parts you can explain with standard economics (capital, labor, prices, etc.). What's left - the part that really drives the model - is the phlogiston.

Step 2: Label the phlogiston. Make sure you choose a name that refers to something people in general already believe in. "Culture" is great. "Confidence" works too, as do "institutions", "technology", "power", "the true desires of the Fed", and of course, "irrational expectations" (the favorite of us behavioral finance types, hehe).

Step 3: Act like you know exactly how the phlogiston behaves. Predict its effects based on commonly held national/ethnic/gender stereotypes ("Greece is in trouble because Greeks are lazy!"), or your political beliefs ("Obama the Kenyan Muslim socialist is killing business confidence!"), or any plausible-sounding story that plays to popular prejudices, preconceptions, fears, or hopes.

Yes, in the end, conventional wisdom and stereotypes and politics end up driving the model. But along the way, your careful selection of like-minded sources, and your authoritative tone, allow you to seem really wise and sagely in front of an audience of people who were primed to believe your conclusion.

Unfortunately, you may run into a problem: Someone may use the same phlogiston, but different assumptions, to reach the exact opposite conclusion. Scott Sumner, for example, believes that China's culture is precisely what makes its catch-up to the West inevitable:
Like Japan, like Britain, like France, indeed like almost all developed countries, [China] will grow to be about 75% as rich as the US, and then level off.  It won’t get there unless it does lots more reforms.  But the Chinese are extremely pragmatic, so they will do lots more reforms... 
If we want to learn from the Chinese culture, learn from Singapore (or Hong Kong), which is how idealistic Chinese technocrats would prefer to manage an economy; indeed it’s how China itself would be managed if selfish rent-seeking special interest groups didn’t get in the way.  But they do get in the way—hence China won’t ever be as rich as Singapore; it will join the ranks of Japan, Korea, Taiwan, and the other moderately successful East Asian countries... 
I expect China to end up in the “normal” category, mostly based on its cultural similarity to other moderately rich East Asian countries.
Altman, you have met your match. Now all we, the readers, have to do is decide which of these European-Americans has a deeper, subtler understanding of the Chinese culture, and we'll know which one to believe!

(For the record, I'd go with Sumner. Also, Chinese culture seems a lot like American culture to me, but that's mainly based on my students, who of course chose to move here. If I had to predict, I'd say China will reach 50% of U.S. GDP, but that equaling us will be hard because of global resource constraints.)

Of course we could always admit that, well...we don't really know what's going to happen to Chinese growth. But we don't want to admit that. Because we don't like to not know things. Not knowing things is scary. There is safety in derp.

Update: Altman responds, noting that Japan's GDP is markedly less than that of the U.S., Canada, and Australia. Of course, I could have pointed out that Singapore, with a GDP (PPP) per capita of $60,410, is considerably richer than any of the countries named. But I thought it more appropriate to compare countries of similar population sizes and resource endowments...

Saturday, April 27, 2013

Will Abe address Japan's number one problem after all?

First Abe surprised me by actually following through with his monetary policy promises; he appointed Haruhiko Kuroda to the BOJ, and together they are embarking on the most ambitious attempt at "reflation" ever tried by a central bank. It remains to be seen if this will actually work, of course; Japan remains mired in deflation even after the announcement, casting further doubt on the effectiveness of the "expectations channel" of monetary policy. But Abe is trying, and that is the important thing.

Now, Abe is talking about an issue that I think is far more important than monetary policy - and one which I had even less hope that he would address. I'm referring to the status of women in the Japanese economy.

One of the essential things that differentiates Japan's economy from ours is that in Japan, women still form an economic underclass. Japan's labor market has an infamous "two-tiered" structure, in which there are two kinds of workers: "Real workers" and "contract workers". The former have (theoretically) lifetime employment guarantees, guaranteed yearly raises, bonuses, and full benefits, with the possibility of promotion to top management. The latter have low, stagnant salaries, few benefits, few guarantees, and little if any possibility of promotion. The former are mostly men. The latter are mostly women.

Not only is this a tremendous waste of talent, it discourages women from entering the workforce. For this reason, most Japanese mothers quit work when they have kids, and working Japanese women tend to have few kids. In addition to holding down Japan's GDP, this is often cited as one cause of Japan's low fertility rate.

Many of Japan's peculiarities seem less peculiar once you know this fact. For example, Japan's unemployment rate is famously low. But Japan's labor force participation rate is even lower than ours. Women make up much of the difference (teenagers and early forced retirees make up the rest).

Anyway, it has long been known that women's exclusion from the Japanese corporate system is one of the main things holding back Japan. In addition to boosting U.S. total GDP by getting more people into the formal workforce, women's increased economic equality is thought to have boosted American productivity by quite a lot. Japan has received no such boost. Pretty much everyone knows that Japan needs to make women more equal; everyone from Aung San Suu Kyi to the U.S. Embassy to the IMF harps on the point. A thousand articles have been written on the topic, but not much has changed.

Why hasn't much changed? Japan's protected economy, heavily subsidized "zombie" companies, and weak corporate governance insulate it from the Beckerian free-market forces that probably helped advance gender equality in the U.S. in the 80s and 90s. In the absence of such market pressures, the most proven route to gender equality is the Swedish/French route, in which the government basically just tells companies "Thou shalt hire and promote women." This method has proven successful in those highly regulated, somewhat protected European countries.

However, Japan's politics has long been dominated not by France/Sweden-type social democrats, but by arch-conservatives. These arch-conservatives made their home in the long-reigning Liberal Democratic Party, which ruled uninterrupted for 55 years and squelched most efforts at social reform. Nobusuke Kishi, the founder of the LDP and its most important leader, was Shinzo Abe's grandfather.

During Abe's first term, he appeared entirely uninterested in addressing the problems of women's equality. His foreign minister and right-hand man was the late Shoichi Nakagawa, who once said:
"Women have their proper place: they should be womanly...They have their own abilities and these should be fully exercised, for example in flower arranging, sewing, or cooking. It's not a matter of good or bad, but we need to accept reality that men and women are genetically different."
So you can see why I have been skeptical about Abe's commitment to women's equality.

However, Abe may surprise me again. According to all reports, Abe is contemplating a big push to put more women in corporate boardrooms:

Japanese Prime Minister Shinzo Abe moved Friday to compel corporate Japan to promote more women to executive roles, asking business leaders to set a target of at least one female executive per company... 
“Women are Japan’s most underused resource,” [Abe] said... 
More details are expected in June, when the government is to unveil a “national growth strategy” of deregulation measures and other structural changes designed to make the economy more dynamic.
Just by saying this, Abe has surprised me, actually. But given his party's strongly sexist traditions, it is far too soon to declare a revolution. As he did with monetary policy, Abe must convince me with dramatic, unprecedented, massive action...and more importantly, he must convince Japan itself.

But if he does...then Abe will have outdone even his predecessor and patron, Junichiro Koizumi...and maybe even his own grandfather as well.

Friday, April 26, 2013

Book Review: The Occupy Handbook

This is a book you should read. It's been a year and a half since the Occupy protests, and they've mostly disappeared off of the public radar. Doesn't matter. The Occupy Handbook (edited by Janet Byrne) is a great general guide to a number of the economic problems our country is facing, the solutions people have put forth, and the grassroots movements that have sprung up to vent people's dissatisfaction.

The Occupy Handbook consists of 55 chapters, each chapter written by a different author (though there are a couple repeat appearances). The authors include famous economists, no-name activists, authors and TV personalities, and more. Among said economists are Paul Volcker, Robert Shiller, Paul Krugman, Daron Acemoglu and James Robinson, Brad DeLong, Tyler Cowen, Peter Diamond and Emmanuel Saez, Jeffrey Sachs, Nouriel Roubini, Raghuram Rajan, and others. The topics range from statistics on inequality in America, to the social structure of protest movements, to the history of Marxism, to the nature of third-world informal economies, and more. Almost all of the chapters are brief and to the point, and there are very few that did not teach me something interesting and new.

The overall message of the book is (or should be) that America's problems are complicated and deep, and not confined to a cyclical recession. They are related to our industrial structure, our class structure, our political institutions, and our government policies. And there are lots of people working on solving these problems in a number of different ways, from the halls of academia to the streets of New York to the corridors of government. No person sees all of the problems. Each person has only a piece of the elephant. And no one's solution is completely right. We all have something to learn from each other.

Anyway, some of the chapters really stood out as excellent, even in a very strong field:

John Cassidy asks the question "What good is Wall Street?", a question that (surprisingly!) receives too little discussion in the rest of the book.

Michael Hiltzik gives a great history of protest movements during the Great Depression.

James Miller has a truly excellent discussion of the problems of "consensus" decision-making, and the reason we use majority-rule democracy instead.

Robert M. Buckley writes a history of Marxism that forever changed my thinking about that movement. Specifically, he presents Marxism as a quiet but ever-present underlying threat in Western societies - a spectre that continues to haunt Europe - that forces elites to share power and wealth with the masses. This is quite possibly the best chapter in the whole book.

The incomparable Michael Lewis has two brief, witty chapters whose writing outshines the rest of the anthology.

Martin Wolf's chapter serves as a microcosm for the entire book. It represents one of the most succinct summaries of the West's economic problems that I've ever read.

Felix Salmon has the single most sensible policy proposal in the book, a call for banks to write down the principal on underwater mortgages.

The ideological distribution of the authors is naturally centered on the left, but there is definitely a spread. Tyler Cowen gives a reasonable (if not entirely convincing) conservative rebuttal to many of the complaints about inequality voiced elsewhere in the book. If I were the editor, I would have included one or two more of these, just to make Cowen's piece seem slightly less out of place, but it's not a big problem.

The book does whiff badly a couple of times - with over 50 authors, that's really inevitable. In particular, a guy named Brandon Adams is given three (three!!) chapters, more than any other author, in which he spouts a bunch of derp about how American culture is going down the tubes. These chapters can be safely skipped.

Also, Thomas Philippon really should have had a chapter.

But these are very minor quibbles. Overall, The Occupy Handbook is perhaps the most important, comprehensive guide to America's discontents since...well, I can't even think of another such guide in recent decades, and we haven't had this many discontents for quite a while. Its influence seems likely to outlast the Occupy movement itself. So, go read it, if you haven't already.

Wednesday, April 24, 2013

KrugTron the Invincible

If you grew up in the 80s you probably remember Voltron. Although the show often had convoluted plotlines, it would somehow always end with Voltron (a super-powerful robot formed from five mechanical lions) facing off against a monster called a "Robeast". Voltron had plenty of weapons, but he would invariably strike the killing blow with his "Blazing Sword". Eventually the show became kind of routine, but to a four-year-old, it was pure gold.

In the econ blogosphere, a similar dynamic has played out over the last few years. Each week a Robeast will show up, bellowing predictions of inflation and/or soaring interest rates. And each week, Paul Krugman...I mean, KrugTron, Defender of the Blogoverse, will strike down the monster with a successful prediction of...low inflation and continued low interest rates. Goldbugs, "Austrians", New Classical economists, and harrumphing conservatives of all stripes have eagerly gone head-to-head with KrugTron in the prediction wars, and have been summarily cloven in twain.

Don't remember? Well here's a quick (partial) episode guide:

It's really kind of amazing. And in case there was any doubt as to KrugTron's prognosticatorial puissance, just ask the experts, who found that he pummeled all other pundits in prediction prowess, getting 14 out of 15 predictions right.

So it's fair to ask: What is KrugTron's Blazing Sword? How does he keep vanquishing the Robeast of the Week?

Well, Krugman himself will tell you that his secret weapon is simple, elementary Keynesian economics - a rough-and-ready IS-LM view of the world, backed up by sophisticated "liquidity trap" models like this one. In those models, low aggregate demand will always keep the economy trapped in a low-inflation, low-interest-rate world.
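The liquidity-trap logic can be summarized in a standard textbook IS-LM sketch (my rendering in generic textbook notation, not Krugman's exact model):

```latex
% IS-LM with a zero lower bound on the nominal rate i
\begin{aligned}
\text{IS:}\quad & Y = C(Y - T) + I(i - \pi^{e}) + G \\
\text{LM:}\quad & M/P = L(i,\, Y), \qquad i \ge 0
\end{aligned}
```

When the zero bound binds (i = 0), money demand is satiated, so further increases in M move neither i nor Y; with output stuck below potential, inflation stays low as well. That is the low-inflation, low-interest-rate world these models describe.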

But I'm not so sure. Keynesian models aren't really used for forecasting the world; they're used as guides for policy. A Keynesian model, be it IS-LM or Liquidity Trap, tells you "If you do fiscal policy, the economy will respond thus." It doesn't tell you how the economy will do in total; that is jointly determined by policy and by the external "shocks" that the Keynesian models (like all macro models) take as given. 

Keynesian models didn't predict that unconventional monetary policy (QE2) would be insufficient to raise expectations of future inflation, and thus would be unable to bust us out of the liquidity trap. Nor did Keynesian models predict that private investors would be willing to ignore the possibility of a U.S. sovereign default, thus allowing the U.S. to avoid a spike in interest rates.

But Krugman did predict both of these things.

And here's the most interesting one. Krugman's earliest prediction victory came at the expense of John Paulson, one of history's most successful investors (although unlike the Robeasts described above, Paulson didn't seek out a battle with Krugman; he was set up as the anti-Krugman by a writer at Businessweek). In 2010, Paulson predicted a strong economic recovery. Such a recovery, if it had come, would have busted us straight out of the liquidity trap and allowed monetary policy to cause inflation. Paulson backed up his bet with billions, and rolled snake eyes.

But Paulson is no mere Robeast. He is no inflationista, "Austrian" econo-troll, or conservative ideologue. In fact, he has a large group of very skilled macroeconomists working for him. There is no way his team doesn't know Keynesian econ backwards and forwards.

Nor does Keynesian theory, of the type used by Krugman, insist that an economy will remain mired in recession without a fiscal stimulus to prime the pump. Sure, somewhere out there, there are models in which the economy can fall into a bad equilibrium that requires fiscal policy to kick it out (in fact, Miles Kimball and Bob Barsky are building such a model, but they are severely late in publishing the working paper; so hurry up, guys!). But IS-LM and the Eggertsson-Krugman model don't have this feature. In those Keynesian models, growth can recover on its own.

So how did Krugman know growth would be slow? He didn't (I hope) put his trust in Reinhart and Rogoff's assertion that growth is always slow after financial crises. Maybe he just assumed that the underlying drivers of aggregate demand are sluggish, but I think Paulson's team could have done that just as easily.

No, I think Krugman's real secret weapon is something else: Like Voltron before him, he's borrowing heavily from Japan.

See, I myself am fairly agnostic about Keynesian ideas. But I've expected nothing but low growth, low interest rates, and low inflation since 2008 (though I haven't been as confident about these things as Krugman, and am thus not in his class as a super-robot). I expected these things because of one simple proposition: We are like Japan.

Since its land bubble popped in 1990, Japan has had low inflation and low interest rates and low growth, even as government debt mounted and quantitative easing was tried. Paul Krugman was there. He watched Japan carefully, and he often states that it deeply affected his thinking. In fact, it might not be an exaggeration to say that watching Japan made Krugman the Keynesian he is today.

Meanwhile, the Robeasts have all used a different example to inform their understanding of the world: America in the 70s and early 80s. That was a time when government intervention in the economy (seemingly) led to high inflation. This taught generations of conservative economists, politicians, pundits, and regular folks that government intervention leads to inflation. And that if you wait long enough (or maybe enact the right structural reforms), growth will come back on its own.

But America 2008-present has not looked like America 1975-1985. It has looked like Japan, 1990-present. The proper comparison was across space, not across time. Assuming that other countries are fundamentally different than ours - that cultural differences, or institutional differences, etc. make cross-country comparisons utterly worthless - has proven to be a losing bet.

So if you want to get into the economic prediction game, and you don't want to be sliced and diced by KrugTron's Blazing Sword, but you can't bring yourself to fully embrace Keynesianism, I have a suggestion: Take a good close look at Japan.

Meanwhile, the Austrians, goldbugs, and other assorted Robeasts will continue to provide us with our weekly entertainment.

Sunday, April 21, 2013

Why did Reddit get the wrong guy? (Or: the Wisdom of Crowds vs. the Madness of Mobs)

Short answer: It didn't. Or more accurately, we'll never know if it did, because we don't really have a way of knowing what "Reddit" thinks, only what some people on Reddit seem to think.

Long answer: OK, let's back up. When the Boston Marathon bombing manhunt began, there was a Reddit forum (subreddit) devoted to finding the bombers. A lot of people had high hopes for this effort. But the main "suspect" to emerge out of Reddit was a guy named Sunil Tripathi, who had no relation whatsoever to the bombings. Meanwhile, in about the same amount of time, police found the real guys, Tamerlan and Dzhokhar Tsarnaev. If you're interested in the details of Reddit's epic fail, see here. (And more here.)

Which brings us to the question, which someone asked me on Twitter: Why, exactly, did Reddit whiff so badly?

In recent decades, we've heard a lot about the "wisdom of crowds". James Surowiecki, who wrote an excellent book on the topic, mentions things like the stock market's identification of the reason for the Challenger disaster, or the ability of a group of non-experts to collectively outguess an expert on questions like "How many jelly beans are in this jar?". More recently, we've learned that prediction markets are more accurate than polls at predicting election outcomes, and in fact that they beat sophisticated "expert" forecasts in many situations. Companies have experimented with internal prediction markets to tap the collective wisdom of their employees. In general, we have come to believe more and more in the ability of large groups of non-experts relative to the ability of small groups of experts.

Should that belief be challenged by the Sunil Tripathi fiasco?

Not necessarily. The key is that the "wisdom of crowds" may work very well in some cases, while in other cases it may give way to the "madness of mobs". We don't know exactly which case is which, but we do have a general idea what sets them apart. Surowiecki summarizes it well in his book, in fact.

Basically, when we have a method for aggregating the information of diverse, independent individuals, crowds will perform very well. When the individuals in a crowd coordinate, however, diversity and independence break down, and crowds can pounce on the wrong answer.

We see this in finance experiments. A number of experiments, including classic work by Charles Plott, have established the ability of financial markets to aggregate the private information of diverse participants to arrive at the "right" price. However, other experiments, e.g. by Colin Camerer, have shown that when people pay attention to the actions of others instead of to their own private information, then information can become "trapped", and markets can arrive at the wrong price. There are a number of different theoretical reasons why herd behavior might take over from efficient information aggregation; some of these are "rational" explanations and others are "irrational", but they all rely on individuals having some reason to ignore their private information and focus on what other people do.
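The contrast between these two regimes is easy to see in a toy simulation (a stylized sketch in the spirit of the Bikhchandani-Hirshleifer-Welch cascade model, not a replication of any particular experiment): give every agent the same quality of private information, then compare what happens when signals are pooled independently versus when agents act in sequence and can see each other's choices.

```python
import random

def run_trials(n_agents=100, p=0.7, n_trials=2000, seed=0):
    """Compare independent aggregation with sequential herding.

    Each agent gets a private signal that matches the true state
    (here, '1') with probability p. 'Independent': majority vote of
    private signals. 'Cascade': agents choose in sequence, and once
    the public history leans far enough one way, they rationally
    ignore their own signal, trapping later information.
    """
    rng = random.Random(seed)
    indep_wrong = cascade_wrong = 0
    for _ in range(n_trials):
        signals = [1 if rng.random() < p else 0 for _ in range(n_agents)]

        # Regime 1: independent aggregation (majority of private signals).
        if sum(signals) * 2 < n_agents:
            indep_wrong += 1

        # Regime 2: sequential choices with naive Bayesian herding.
        actions = []
        for s in signals:
            lead = 2 * sum(actions) - len(actions)  # (#1-actions) - (#0-actions)
            if lead >= 2:
                actions.append(1)   # cascade on 1: own signal is ignored
            elif lead <= -2:
                actions.append(0)   # cascade on 0: own signal is ignored
            else:
                actions.append(s)   # history uninformative: follow own signal
        if sum(actions) * 2 < n_agents:
            cascade_wrong += 1

    return indep_wrong / n_trials, cascade_wrong / n_trials

indep, cascade = run_trials()
print(f"crowd majority wrong: independent {indep:.3f}, herding {cascade:.3f}")
```

The herding rule here ("ignore your signal once the public lead reaches two") is the simplest version of rational cascading; the point is not the exact numbers but that the same signals yield a reliable crowd under one aggregation scheme and an error-prone mob under the other.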

You can definitely see the herding dynamic at work in the case of the Sunil Tripathi fiasco. A few guys started saying "It was Sunil Tripathi!" And a lot of other people on the subreddit started focusing on that name, and looking for information about Tripathi. The Tripathi idea was a wrong idea that was initially concentrated among a small group of individuals, who pushed that idea loudly and confidently. Meanwhile, a large number of people on the subreddit may have had small, weak pieces of information pointing to the Tsarnaev brothers. But since Reddit had no way of collecting and aggregating these dispersed small pieces of information, it might have become "trapped", just like in a Colin Camerer experiment.

So let me return to the "short answer" at the beginning of the post. It's not really right to say that "Reddit" picked Sunil Tripathi. Some people on Reddit picked Tripathi, and their voices emerged loud and clear from the chaos, not because most people agreed with them, but because they were the loudest and most strident minority voice. So anyone paying attention to Reddit picked out a few shrill cries of "Tripathi!" rising above the cacophony, and concluded that this was Reddit's consensus verdict. Meanwhile, the attention of other Redditors was turned toward Tripathi, and they spent their time and effort evaluating the Tripathi hypothesis instead of generating alternative hypotheses.

In other words, because it had no way of aggregating information, Reddit became less like a prediction market and more like a lynch mob.

Would Reddit have done better if people could have voted on who they thought did it? I doubt it, because the set of hypotheses was not properly mapped. In an election prediction market, you know the set of candidates. In a jellybean jar contest, you know the set of numbers of jellybeans that might be in the jar (i.e., the nonnegative integers). But a "whodunit" poll can't list every human being as a potential culprit; it has to limit the choices to a few popular hypotheses. In Reddit's case, a poll would have included 1. Tripathi, and 2. Someone Else. Not very helpful. A prediction market would have suffered from the same problem.

So is there any hope for crowdsourcing terrorism investigations? I think that there already is such a method: Police tip hotlines. Tips tend to be independent, since people usually don't know who else is calling in a tip. And in a high-profile case like a terrorist attack, people who call in tips tend to be fairly diverse, since so many different kinds of people are paying attention. Finally, police can tabulate the number of similar tips, which is a method of aggregation. So tip hotlines satisfy the loose, general criteria for the "wisdom of crowds" to overcome the "madness of mobs". I think it's no coincidence that in the Boston bombing case, a victim's tip ended up being hugely helpful to the police.

Anyway, it's worth pointing out that these criteria for "crowd wisdom" aren't clear-cut. How do you know how independent and diverse a crowd's members are? What is the optimal method of aggregating their beliefs? This is a large, important, open area of research. So have at it, smart people. Just don't pay too much attention to what others in the field are doing...

Thursday, April 18, 2013

The reason macroeconomics doesn't work very well

I don't think it's politics (mostly). I don't think it's the culture of consensus and hierarchy. I don't think it's too much math or too little math. I don't think it's the misplaced assumptions of representative agents, flexible prices, efficient financial markets, rational expectations, etc.

Fundamentally, I think the problem is: Uninformative data.

I was planning to write a long post about this, and I never got around to it, and now Mark Thoma has written it better than I could have. So I'll just steal most of his excellent, awesome post, and add some boldface:
The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics...it's about the quantity and quality of the data we use to draw important conclusions in macroeconomics. 
Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data can’t settle our theoretical disputes – and it doesn’t appear that it can – then our claim for scientific validity has little or no merit. 
There are many reasons for this. For example, the use of historical rather than “all else equal” laboratory/experimental data makes it difficult to figure out if a particular relationship we find in the data reveals an important truth rather than a chance run that mimics a causal relationship. If we could do repeated experiments or compare data across countries (or other jurisdictions) without worrying about the “all else equal assumption” we could perhaps sort this out. It would be like repeated experiments. But, unfortunately, there are too many institutional differences and common shocks across countries to reliably treat each country as an independent, all else equal experiment. Without repeated experiments – with just one set of historical data for the US to rely upon – it is extraordinarily difficult to tell the difference between a spurious correlation and a true, noteworthy relationship in the data. 
Even so, if we had a very, very long time-series for a single country, and if certain regularity conditions persisted over time (e.g. no structural change), we might be able to answer important theoretical and policy questions (if the same policy is tried again and again over time within a country, we can sort out the random and the systematic effects). Unfortunately, the time period covered by a typical data set in macroeconomics is relatively short (so that very few useful policy experiments are contained in the available data, e.g. there are very few data points telling us how the economy reacts to fiscal policy in deep recessions). 
There is another problem with using historical as opposed to experimental data, testing theoretical models against data the researcher knows about when the model is built...It’s not really fair to test a theory against historical macroeconomic data, we all know what the data say and it would be foolish to build a model that is inconsistent with the historical data it was built to explain – of course the model will fit the data, who would be impressed by that? But a test against data that the investigator could not have known about when the theory was formulated is a different story – those tests are meaningful... 
By today, I thought, I would have almost double the data I had [in the 80s] and that would improve the precision of tests quite a bit... 
It didn’t work out that way. There was a big change in the Fed’s operating procedure in the early 1980s... 
So, here we are 25 years or so later and macroeconomists don’t have any more data at our disposal than we did when I was in graduate school. And if the structure of the economy keeps changing – as it will – the same will probably be true 25 years from now. We will either have to model the structural change explicitly (which isn’t easy, and attempts to model structural breaks often induce as much uncertainty as clarity), or continually discard historical data as time goes on (maybe big data, digital technology, theoretical advances, etc. will help?). 
The point is that for a variety of reasons – the lack of experimental data, small data sets, and important structural change foremost among them – empirical macroeconomics is not able to definitively say which competing model of the economy best explains the data. There are some questions we’ve been able to address successfully with empirical methods, e.g., there has been a big change in views about the effectiveness of monetary policy over the last few decades driven by empirical work. But for the most part empirical macro has not been able to settle important policy questions... 
I used to think that the accumulation of data along with ever improving empirical techniques would eventually allow us to answer important theoretical and policy questions. I haven’t completely lost faith, but it’s hard to be satisfied with our progress to date. It’s even more disappointing to see researchers overlooking these well-known, obvious problems – for example the lack of precision and sensitivity to data errors that come with the reliance on just a few observations – to oversell their results. (emphasis mine)
This is the clearest and best statement of the problem that I've ever seen. (Update: More from Thoma here.)

I'd like to add one point about the limits of time-series econometrics. To do time-series, you really need two assumptions: 1) ergodicity, and 2) stationarity. Mark addressed the ergodicity problem when he talked about trend breaks. As for stationarity, it sometimes matters a lot - for example, if technology has a unit root, then positive technology shocks should cause recessions. But the statistical tests that we use to figure out if a time-series has a unit root or not all have very low power. There are some pretty deep theoretical reasons for this.
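That low power is easy to see in a quick simulation. Here's a toy sketch of mine (the sample length, replication count, and critical value are my own rough choices, not taken from any cited source): it applies a simple Dickey-Fuller regression test to series that are stationary but highly persistent (rho = 0.95) and counts how often the unit-root null is correctly rejected.

```python
import numpy as np

def df_tstat(y):
    """t-statistic on y_{t-1} in the Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se_b

rng = np.random.default_rng(0)
T, reps, crit = 100, 2000, -2.89  # approximate 5% critical value with a constant, T = 100

def rejection_rate(rho):
    """Fraction of simulated AR(1) series for which the test rejects a unit root."""
    hits = 0
    for _ in range(reps):
        shocks = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + shocks[t]
        hits += df_tstat(y) < crit
    return hits / reps

size = rejection_rate(1.0)    # null is true: rejects ~5% of the time by construction
power = rejection_rate(0.95)  # stationary but persistent: rejections stay rare
print("size:", size, "power:", power)
```

In runs like this, the rejection rate under rho = 0.95 stays far below 100% at macro-style sample lengths; in other words, the test usually can't distinguish a very persistent stationary process from a true unit root.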

Anyway, that's just yet one more reason macro data is uninformative. That problem isn't going to be solved by gathering more accurate data, or by seeking out new macroeconomic aggregates to measure (though we should probably do both of those things anyway).

So what are the implications of this basic fundamental limitation of macro? I think there are three.

1. Beware of would-be prophets from outside the mainstream. There are a number of people, usually associated with alternative or "heterodox" schools of thought, who claim that macro's relative uselessness is based on an obviously faulty theoretical framework, and that all we have to do to get better macro is to use different kinds of theories - philosophical "praxeology", or chaotic systems of nonlinear ODEs, etc. I'm not saying those theories are wrong, but you should realize that they are all just alternative theories, not alternative empirics. The weakness of macro empirics means that we're going to be just as unable to pick between these funky alternatives as we are now unable to pick between various neoclassical DSGE models.

2. Macroeconomists should try to stop overselling their results. Just matching some of the moments of aggregate time series is way too low of a bar. When models are rejected by statistical tests (and I've heard it said that they all are!), that is important. When models have low out-of-sample forecasting power, that is important. These things should be noted and reported. Plausibility is not good enough. We need to fight against the urge to pretend we understand things that we don't understand.

3. To get better macro we need better micro. The fact that we haven't found any "laws of macroeconomics" need not deter us; as many others have noted, with good understanding of the behavior of individual agents, we can simulate hypothetical macroeconomies and try to do economic "weather forecasting". We can also discard a whole slew of macro theories and models whose assumptions don't fit the facts of microeconomics. This itself is a very difficult project, but there are a lot of smart decision theorists, game theorists, and experimentalists working on this, so I'm hopeful that we can make some real progress there. (But again, beware of people saying "All we need to do is agent-based modeling." Without microfoundations we can believe in, any aggregation mechanism will just be garbage-in, garbage-out.)

After the financial crisis, a bunch of people realized how little macroeconomists do know. I think people are now slowly realizing just how little macroeconomists can know. There is a difference.

Tuesday, April 16, 2013

What if all those times really were different?

Hopefully you've been following the tussle between the famed Reinhart and Rogoff and the researchers who tried to replicate their results (Herndon, Ash, and Pollin). This post isn't really about that, but here's a quick summary: RR claim that after a country's debt hits 90% of GDP or more, it grows much more slowly. HAP take RR's data and find that RR have done some dubious things with it, including A) arbitrarily excluding countries from the sample, B) weighting outcomes by "growth period" instead of by year, and C) making an Excel error. The Excel error is not a big deal, just a "gotcha", but the other two things are pretty important, especially the weird weighting. When the data are weighted by year and the excluded countries are allowed back in the sample, HAP find that:

1) There is no "sudden deceleration" or "structural break" in growth when debt hits 90% of GDP, i.e. there is nothing special about 90% as RR often claim, and

2) Countries with more debt may tend to grow more slowly, but the effect is considerably smaller than RR claim, and not statistically significant.
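For concreteness, here's a toy version of the weighting issue, with made-up numbers (the 2.4% and -7.6% growth rates and the 19-to-1 split are hypothetical, chosen only to make the arithmetic stark): one country logs 19 high-debt years at 2.4% growth, another logs a single high-debt year at -7.6%.

```python
# Hypothetical high-debt-year growth rates: country A has 19 years at 2.4%,
# country B has 1 year at -7.6%. All numbers are made up for illustration.
obs = [("A", 2.4)] * 19 + [("B", -7.6)]

# Year weighting: every country-year counts once.
by_year = sum(g for _, g in obs) / len(obs)

# Country ("growth period") weighting: average within each country first,
# then average the country means, so each country counts once.
countries = sorted({c for c, _ in obs})
country_means = [
    sum(g for c2, g in obs if c2 == c) / len([1 for c2, _ in obs if c2 == c])
    for c in countries
]
by_country = sum(country_means) / len(country_means)

print("by year:", round(by_year, 2))        # 1.9
print("by country:", round(by_country, 2))  # -2.6
```

Weighted by year, the bad year is 1 observation out of 20 and the average is solidly positive; weighted by country, that one year counts as much as the other country's two decades, and the average flips negative.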

RR respond by email and defend themselves, focusing mainly on the first part of the latter result (i.e. that higher-debt countries grow more slowly), largely ignoring the other findings, and not really defending the oddities of their approach. In particular, the notion that 90% is some kind of special threshold - which Reinhart and Rogoff have repeated time and again when making the public case for austerity - appears to lack material support; RR's email response repeats the 90% figure but never really addresses its disappearance in HAP's results.

Hopefully a more comprehensive response to HAP's findings is forthcoming. (Update: Here is that response. Still pretty unsatisfying.)

Also, many smart people point out that even the weak result that slower-growing countries may tend to have (slightly) higher debt levels is not necessarily a causal relationship - it may be that slow growth makes countries borrow more in relation to GDP, not vice versa. Reinhart and Rogoff claim, in their email response, that they never presented this relationship as causal in their book, This Time Is Different, or in their papers. However, in their public op-eds, they clearly and unambiguously assert that debt causes slow growth rather than vice versa.

(In fact, as an aside, I don't even think that RR's usual critics go far enough in challenging the debt-growth result. RR's cross-country sample does not control for a country's level of development. So if debt/GDP levels tend to grow over time, and if countries converge a la the Solow growth model, then slower growth and higher debt could have a purely benign non-causal relationship.)

In short, RR appear to be doing everything they can to imply that correlation = causation, while never seriously addressing the possibility that it might not.

But actually, this post is not about that dispute. This post is about Reinhart and Rogoff's book, This Time Is Different.

The book basically presents a whole lot of features of financial crises across time and space, and shows how these features are similar. The authors thus try to draw some general conclusions about the "anatomy" of a crisis. The book has been hailed as an alternative to the sort of formal modeling done by most academic macroeconomists, and I agree that the naturalistic approach adds value. I just think its limitations are too rarely recognized. Naturalistic observations can be all wrong.

Reinhart and Rogoff carefully document a number of apparent similarities between financial crises. The implication is that financial crises are all inherently similar - that even if we don't know exactly what causes them, they must have a common cause because they look so similar. In other words, this time isn't different.

But that's not really valid! Similar-looking things can have a bunch of different underlying causes. Wars might be fought in similar fashion, but the causes of the wars might vary wildly - religious fervor for one war, desire for grazing land for another. Communicable diseases may all involve fever and weakness, but some are bacterial and some are viral.

Similarly, financial crises might be different animals that simply look the same. Some might be caused by domestic asset bubbles, others by currency pegs and foreign capital flows. Some might involve borrowing in external currencies, others mostly domestic borrowing. Some might be triggered by wars or political instability, others by the collapse of unsustainable growth models, others by overly complex financial systems. 

Bottom line: If you just pick out the similar features, you will bias yourself toward concluding in favor of structural underlying similarity, where no such conclusion is warranted!

And here's my second problem with the methodology of This Time Is Different: Documenting similarities automatically biases oneself (and one's readers) toward drawing the kind of inappropriate causal conclusions that many draw from RR's finding on debt and growth. If you focus mainly on the fact that all deadly communicable diseases involve fevers, you will probably try to treat them by dipping sick people's heads in buckets of ice. That's not going to work!

"Similar symptoms, disparate causes." When evaluating the history of financial crises, we should constantly keep this phrase in mind. This is the alternative hypothesis; maybe each time is different. Proving a common cause, or even a common structure, requires more than simply tabulating lists of similarities.

Now a disclaimer: I strongly suspect that RR are onto something, and that certain causal features of financial crises really do crop up again and again across time and space. But I think that books like This Time Is Different are merely jumping-off points for an investigation of that hypothesis; they do not constitute any kind of proof. Naturalism is where understanding of the world begins, but not where it ends.

Sunday, April 14, 2013

Will land prices rise as population rises?

Just a quick post here. Bill McBride, like many others, thinks that rising population implies that housing prices should have a long-term upward trend. I'm not sure he's right. McBride writes:
A key reason for the upward slope in real house prices is because some areas are land constrained, and with an increasing population, the value of land increases faster than inflation...The bottom line is there is an upward slope to real house prices.
I'm not sure this is true. The reasoning is intuitive: More people + same amount of land = land is more scarce. But here's why I think that reasoning is not quite right: Most of the value of land is the value of its proximity to centers of economic activity. Any urban economic model that incorporates geometry, geography, etc. will tell you this.

In other words, New York City real estate is high-priced because New York City is an agglomeration of economic activity. It is not high-priced because an increasing number of people are being forced to live in New York City. That isn't even the case! No law makes people cram themselves into NYC (except in that Kurt Russell movie!); you are legally free to move out to North Texas and get a nice ranch. People choose to live in the heart of New York City because of the economic (and social) opportunities offered by proximity to all the other people living there. So they're willing to pay lots for land.

To use an old real estate cliche: Location, location, location.

So will land-use restrictions around big cities cause land prices there to increase? Maybe, maybe not. If those land-use restrictions hurt agglomeration, they will choke off the economic value of living in the city center, which might lower land prices. Also, a lot depends on what else is going on in the region, and in the rest of the country. Agglomeration effects can be weirdly non-linear and history-dependent. You can even have long-term de-agglomeration, simply due to shifting geographic patterns of trade and industry; as an example, just look at the Rust Belt. Even if land prices up-trended nationally due to population pressure, regional prices might experience long secular downtrends due to rearrangements of national economic activity.

Also, there's technology, which can change a lot faster than population. Urban economists like Masahisa Fujita will tell you that once transportation costs get really low - for example, the more we switch to a digital online economy - the less economically important it is for companies and people to be in close physical proximity. That will cause land prices to decline in some places, and rise in others, even as total economic value increases across the country. And if the imperatives of those technologies cause land-use restrictions to be altered, all bets really are off. Remember, the total amount of American land actually occupied by humanity is tiny. And (as a commenter points out), there is plenty of unoccupied cheap land in the near vicinity of many cities.

So I say, don't fall for the easy logic of more people = more expensive land. It might work that way, but it very well might not.

Update: Paul Krugman has some data to back up my suspicion; it turns out that the average American actually lives in a less densely populated area than a decade ago. My suspicion is that this trend stretches back farther than the year 2000. But as Krugman notes:
McBride’s point that actual real housing prices do seem to have an upward trend remains important, and needs explaining.
My intuition says that it's because we've been occupying larger and larger living spaces. Now, that wouldn't always tend to raise land prices (maybe it happened because agglomeration effects decreased), but if it indicates a shift in consumer demand for dwelling space, it would. If the value of proximity stays the same and people at a given spot choose to spend their money on bigger dwellings, they'll crowd each other out, and land close to agglomeration centers will become more scarce. Also, it might be that agglomeration effects themselves have increased in some places like San Francisco and New York, where industrial clustering in tech and finance has raised the value of living close to other people in the same industry. Anyway, if I had more time, I'd look up data and try to solve the mystery; those are just guesses.

Update 2: A couple more issues I didn't raise before, but which are interesting:

* In the very long run, McBride is right; if population goes up enough, land prices will eventually have to rise. But there's good reason to think we'll never reach that very long run. American fertility rates are slightly below replacement levels, and most rich countries are far lower; developing countries aren't far behind (China's working-age population is already falling, India and Latin America and Southeast Asia are all at or near replacement levels). Our population is growing slowly and steadily because of immigration, but that will eventually trickle off as countries' incomes converge around the world (Mexican immigration is already at zero or negative). So even if population growth is positive right now, expectations of long-term future population growth may be negative or flat, which might affect land prices.

* If median incomes fall, people will have less to spend on land, which will exert downward pressure on land prices. This has been the norm in America for over a decade now; no one knows if it will continue.

* Public transit policy matters. If cities build a lot of light rail, a lot of people can give up their cars, freeing up lots of urban land that used to be garages and parking lots.

* For many investors, what matters for housing as an investment is not really whether land prices rise, but whether they rise faster than overall GDP (or some other benchmark like the risk-free rate or the stock market). Since population growth raises GDP, it's not clear whether any realistic amount of population growth will make land appreciate faster than - or anywhere near as fast as - the economy's stock of productive assets.

So these are all caveats to think about before concluding that population growth gives land a long-term upward price trend.

Friday, April 12, 2013

Solar is libertarian, nuclear is statist

Libertarians have always seemed to have a soft spot for nuclear energy and an instinctive dislike of solar power. This confuses me a bit, because I've always been a fan of both nuclear and solar, but the part of me that likes nuclear has always been the statist part. With its huge monument-like cooling towers and its association with the Manhattan Project and the heyday of Big Science, nuclear always put me in mind of the power and glory of the American nation. Solar, on the other hand, increasingly looks like a libertarian's dream technology.

First of all, solar is decentralized. Attach solar panels to your house, and the electrical grid becomes merely a backup. This means that you rely a lot less on a government-backed monopoly for your power; if the government disappeared tomorrow, your house would still have electricity during peak hours (if not during the night). Eventually, when storage technology improves, rooftop solar will let us forget the grid entirely. 

Also, rooftop solar requires much less infrastructure than grid electricity. That means much less of a role for the government, which libertarians should like. 

Contrast this with nuclear power. Nuclear has huge fixed costs, which are difficult for private companies to pay; thus, most nuclear plants are built with the help of government loans or subsidies. The close state-corporate collusion required by nuclear power was starkly exposed in the recent Fukushima disaster. Also, nuclear plants are giant and centralized, meaning the electricity must be piped to your house via a grid, which is constructed and controlled by the government.

Also, solar is much more entrepreneurial than nuclear. Nuclear has such high costs that only the hugest of companies, like GE, can create nuclear plants (and even then, often only with government help). Solar, on the other hand, has low fixed costs, so entrepreneurs can create solar farms with relatively little startup capital. Also, R&D in the nuclear sector often has high fixed costs and must be state-subsidized, while solar lends itself more to cheap private-sector R&D.

Finally, nuclear waste creates a lot of thorny land-use issues. Public goods are involved, since people are afraid that the waste may leak and injure them. This means that the location and operation of nuclear plants will always partially be decided by planning boards, environmental agencies, and angry town hall meetings. This is simply unavoidable in American society. But solar power has no such issues, and so the entrepreneur or independent-minded rooftop solar generator can operate largely unmolested by government.

Now, libertarians complain about government subsidies to solar companies. I think that complaint is very valid, and that much of the money used for subsidies would be better spent doing basic research. Libertarians may also dislike government-funded basic research, but they should recall that similar research was involved in the creation of the internet, which has proven to be an enormously effective tool for individual freedom, entrepreneurship, and decentralization of power.

To sum up: Libertarians should envision a world in which rooftop solar, small independent solar farms, and enterprising solar tech entrepreneurs bring a glorious end to the era of public utilities, government-built electrical grids, and collusion between government and big energy companies. Let statists yearn in vain for the days when government-sponsored nuclear plants reared up like monuments to the power of central planning.

Thursday, April 11, 2013

Kauffman forum video: Blogging and your economics career

I was unfortunately unable to attend the Kauffman Economics Bloggers' Forum this year. In lieu of a personal appearance, Brad DeLong asked me to do a quick video on the topic of how blogging might affect one's career in economics. So here is that video. It's a topic I've covered before, so not a ton of new stuff here if you're a regular reader. But with the video version, you get to see my office bookshelf! Awesome!

(Note: I now realize that this video makes it sound as if Justin Wolfers started blogging after me, when actually, he was blogging before I was.)

Update: As some people have pointed out, this video contains a mistake. I say that I spend "1 to 1.5 hours a week" blogging. I totally misspoke. I spend 1 to 1.5 hours per post. At 2-3 posts a week, that's 2 to 5 hours per week. Thanks to the people who caught that...

Anyway, for much much more on the Kauffman forum, visit Brad DeLong's blog.

Wednesday, April 10, 2013

Nuthin' but a 'g' thang

I had a professor at Michigan who always thought in pictures. He would much rather draw a graph than write down a system of equations. As he drew the curves on the board, he would say "OK, here is the thing, and it touches this thing, and they go like this." And that would be it! And I would be sitting there scratching my head and thinking "OK, now are you going to tell us what you just did?" Because I've always been the exact opposite kind of guy; I would rather write down equations and define words for everything. Rarely do I think in pictures. On IQ tests, I do much worse on the "visual" sections.

Over the course of my academic existence, I've often observed this dichotomy. You have the Einstein-type people who seem to visualize everything, and then you have the Heisenberg-type people who would rather use the symbols. So I've always had the intuitive hypothesis that there are different types of intelligence; that different people tend to process information in different ways, whether due to habit or nature.

But then there are all those people who say that intelligence can be boiled down to a single factor, the mysterious "g" (which I assume stands for either "general intelligence" or "gangsta"). Since this went against years of casual observation, I was somewhat pleased to see the eminent Cosma Shalizi write an essay debunking the notion of "g". But then I saw this blog post defending the notion of "g", and claiming that Shalizi makes a bunch of errors. Basically, the disagreement revolves around the question of why most or all psychometric tests and tasks seem positively correlated with each other. Shalizi points out that this correlation structure will naturally lead to the emergence of a "g"-like factor, even if one doesn't really exist; his opponent counters that if no "g" exists, it should be possible to design uncorrelated psychometric tests, which so far has proven extremely difficult to do.

The latter post, by a pseudonymous blogger calling himself "Dalliard", contains a bunch of references to psychometric research that I don't know about and have neither the time nor the will to evaluate, so I'm a bit stumped. Normally I'd leave the matter at that, shrug, and go read something else, but I realized that my intuitive hypothesis about intelligence didn't really seem to be explicitly stated in either of the posts. So I thought I'd explain my conjecture about how intelligence works.

In a nutshell, it's this: What if there are multiple "g's"?

Suppose that simple mental tasks (of the kind apparently used in all psychometric tests) can be performed by a number of different but highly substitutable mental systems. In other words, suppose that any simple information-processing task can be solved using spatial modeling, or solved using symbolic modeling, or solved using some combination of the two. That would result in a positive correlation between all simple information-processing tasks, without any dependence between the two mental abilities.

Let's illustrate this with a simple mathematical example. Suppose the performances of subject i on tests m and n are given by:

P_mi = a + b_m * X_i + c_m * Y_i + e_mi
P_ni = a + b_n * X_i + c_n * Y_i + e_ni

Here, X and Y are two different cognitive abilities. b and c are positive constants. Assume X and Y are uncorrelated, and assume e, the error term, is uncorrelated across tests and across individuals.

In this case, computing the covariance of performances on two tests m and n across a pooled sample of subjects, we will have:

Cov(P_m, P_n) = b_m * b_n * Var(X) + c_m * c_n * Var(Y) > 0

So even though the two cognitive abilities are uncorrelated (i.e. there is no true, unique “g”), all tests are positively correlated, and thus a “g”-type factor can be extracted for any set of tests.

Now suppose that by luck, we did manage to find "pure" tests for X and Y. In other words:

P_xi = a + b_x * X_i + e_xi
P_yi = a + c_y * Y_i + e_yi

These tests would have no correlation with each other. But they would have positive correlations with every other test in our (large) battery of tests! So the "positive manifold" (psychometricians' name for the all-positive correlation structure between tests) would still hold, with the one zero-correlation pair attributed to statistical error. Only if we found a whole bunch of tests that each depended only on X or only on Y could we separate the "single g" model from the "two g" model. But doing that would be really hard, especially because in general test-makers try to make the various tests in a battery different from each other, not similar.

Notice that all I need for my "two-g" model to fit the data is that most of the b and c coefficients are nonzero and positive. It makes sense they'd all be positive; more of some mental ability should never hurt when trying to do some task. And the "nonzero" part comes from the conjecture that simple mental tasks can be performed by a number of different, substitutable systems. (Note: the functional form I chose has the two abilities be perfect substitutes, but that is not necessary for the result to hold, as you can easily check.)
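This is easy to verify by simulation. Here's a minimal sketch (the number of tests, the loading ranges, and the sample size are arbitrary choices of mine): it draws two uncorrelated abilities X and Y, generates a battery of tests that each load positively on both, and checks that every pairwise test correlation is positive and that a single "g"-like leading factor emerges anyway.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_tests = 5000, 8

# Two independent abilities -- no true single "g" in the data-generating process.
X = rng.standard_normal(n_subj)
Y = rng.standard_normal(n_subj)

# Each test loads positively on both abilities (loading ranges are arbitrary).
b = rng.uniform(0.3, 1.0, n_tests)
c = rng.uniform(0.3, 1.0, n_tests)
noise = rng.standard_normal((n_subj, n_tests))
P = np.outer(X, b) + np.outer(Y, c) + noise  # P_mi = b_m*X_i + c_m*Y_i + e_mi

R = np.corrcoef(P, rowvar=False)
offdiag = R[~np.eye(n_tests, dtype=bool)]
print("all pairwise correlations positive:", (offdiag > 0).all())

# First principal component of the correlation matrix: a "g"-like factor
# with all-positive loadings, extracted from two-ability data.
evals, evecs = np.linalg.eigh(R)
g = evecs[:, -1] * np.sign(evecs[:, -1].sum())
print("first-factor loadings all positive:", (g > 0).all())
```

The all-positive first-factor loadings aren't an accident of the seed: by the Perron-Frobenius theorem, any correlation matrix with all-positive entries has a one-signed leading eigenvector, so factor extraction will always "find" a g here even though the data were generated with two independent abilities.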

Update: A commenter reminds me of the time Richard Feynman discovered that while he counted numbers by internally reciting the numbers to himself, his friend counted by visualizing pictures of the numbers scrolling by. This is the kind of thing I'm thinking of.

So anyway, there's my proposed model of basic intelligence. For those of you who didn't follow it, just imagine several dozen hyperplanes, and project them all onto one hyperplane... ;-)

An addendum: Why do people care whether there is one "g" or several? According to "Dalliard", the notion of multiple types of intelligence is attractive because it suggests "that everybody could be intelligent in some way." Well, if that's what you want, then realize this: It's true! Remember that psychometric tests are simple mental tasks, but most of the mental tasks we do are complex, like computer programming or chess or writing. And for those tasks, learning and practice matter as much as innate skill, or more (for example, see this study about the neurology of chess players). Therefore, everyone can be "smart" in some way, if "smart" means "good at some complex mental task". Which, in adult American society, it typically does. So don't worry, America: We're all stupid in most of the ways that matter.

...Wait, that didn't come out right...

(Final note: Looking through "Dalliard's" blog, I see that most of it is an attempt to prove that black people are dumber than white people. Sigh. Depressing but hardly surprising. Needless to say, the fact that I addressed a "Dalliard" blog post is not intended as an endorsement of his views or his general interests...)

Update: A commenter points out that Dalliard's post does consider a theory similar to the one I outline here, which he calls the "sampling" theory. Dalliard cites some fairly weak arguments against the theory by someone from long ago, but recognizes that these arguments are weak. Dalliard also makes a good point, which is that for many applications - say, separating kids into classes based on test-taking skill - it doesn't matter whether there is one "g" or many.

Update 2: Here's some recent evidence supporting my conjecture.

Two versions of Goodhart's Law

Goodhart's Law is often called a generalization of the Lucas Critique, but it's really not. The "law" actually comes in several forms, one of which seems clearly wrong and one of which seems clearly right. Here's the wrong one:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
That's obviously false. An easy counterexample is the negative correlation between hand-washing and communicable disease. Before the government made laws to encourage hand-washing by food preparation workers (and to teach hand-washing in schools), there was a clear negative correlation between frequency of hand-washing in a population and the incidence of communicable disease in that population. Now the government has placed pressure on that regularity for control purposes, and the correlation still holds.

Incidentally, this also means that the oft-repeated statement that "Social science can never discover any laws of nature, because human beings react to the discovery of the 'law'" is false. For example, suppose you see that when gas prices rise, people drive less. You place pressure on that regularity for control purposes by instituting a gas tax, which raises the consumer price of gas. Lo and behold, people drive less! Sometimes people actually do what you try to get them to do.

Anyway, here's the version of Goodhart's Law that seems obviously true:
As soon as the government attempts to regulate any particular set of financial assets, these become unreliable as indicators of economic trends.
This seems obviously true if you define "economic trends" to mean "economic factors other than the government's actions." In fact, you don't even need any kind of forward-looking expectations for this to be true; all you need is for the policy to be effective. In other words, this law could just as easily describe "Milton Friedman's thermostat" as the Lucas Critique.

An application of this correct form of Goodhart's Law is the suggested use of financial market outcomes as guides for Fed policymaking. In a speech at a conference at Michigan last year (and elsewhere), Narayana Kocherlakota suggested that instead of using its own internal inflation forecasts, the Fed should set policy according to market forecasts of inflation. Specifically, he suggested using the prices of TIPS and other inflation-dependent financial assets to calculate risk-neutral probabilities of future inflation, and to set Fed policy accordingly. His reasoning was that unlike the Fed's own forecasts, risk-neutral probabilities take into account the degree to which people value price stability in different states of the world.
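As a minimal sketch of the market-based gauge involved (all yields made up for illustration): the simplest version is the TIPS "breakeven" rate, the nominal Treasury yield minus the TIPS real yield. Kocherlakota's actual proposal backs out full risk-neutral probabilities from richer asset prices, but the breakeven conveys the flavor:

```python
# Toy illustration with hypothetical yields: the TIPS "breakeven" rate,
# the simplest market-implied inflation forecast. Full risk-neutral
# probabilities require richer asset prices than this.
nominal_yield = 0.025   # 10-year nominal Treasury yield (hypothetical)
tips_yield = 0.005      # 10-year TIPS real yield (hypothetical)

breakeven_inflation = nominal_yield - tips_yield
print(f"Market-implied average inflation: {breakeven_inflation:.1%}")
# With these numbers, the market "expects" about 2.0% inflation
# (under risk neutrality, and ignoring liquidity premia).
```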

I raised an issue with this approach. The risk-neutral probabilities obtained from financial markets are unconditional probabilities - in other words, markets may have already taken into account what they expect the Fed to do, but they have not yet taken into account what the Fed actually decides to do. The Fed's ultimate policy decision is something the Fed knows and the markets do not. Thus, the Fed is able to make internal conditional forecasts for each possible policy choice, and use only one of those conditional forecasts when making its decision. If it relies on markets, it must necessarily use unconditional forecasts, which may be less informative about conditional outcomes (which is what the Fed really cares about).
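To make the conditional-versus-unconditional distinction concrete, here's a toy calculation (all numbers hypothetical) using the law of total expectation: the market's forecast is a probability-weighted average over policy choices the Fed hasn't made yet, while the Fed can condition on each choice.

```python
# Toy numbers: the market's (unconditional) forecast mixes over policy
# choices the Fed hasn't made yet; the Fed can use conditional forecasts.
p_ease = 0.6             # market's odds that the Fed eases (hypothetical)
infl_if_ease = 0.03      # inflation forecast conditional on easing
infl_if_hold = 0.01      # inflation forecast conditional on holding

market_forecast = p_ease * infl_if_ease + (1 - p_ease) * infl_if_hold
print(f"Market (unconditional) forecast: {market_forecast:.1%}")
print(f"Fed's forecast if it eases:      {infl_if_ease:.1%}")
print(f"Fed's forecast if it holds:      {infl_if_hold:.1%}")
# The 2.2% market number matches neither conditional forecast --
# and the conditional forecasts are what the Fed needs to choose between.
```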

This seems to me to be an application of the second version of Goodhart's Law. If you set interest rates mechanically (i.e. according to some rule) based on current market expectations of inflation, you will change those expectations, and markets will move, requiring you to set interest rates differently according to your policy rule. It's possible that markets and policy might converge to some stable equilibrium, but also possible that market volatility might increase once it was known that policy was being set based on market prices themselves.
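Here's a toy simulation of that feedback loop (the rule coefficient and the expectation-response parameters are invented for illustration): whether market expectations and policy settle into a stable equilibrium depends on the strength of the feedback.

```python
# Toy feedback loop, all parameters hypothetical: the Fed sets its rate
# off the market's inflation expectation, and the expectation reacts to
# the rate. The loop converges or blows up depending on the gain.
def simulate(gain, steps=50):
    expectation = 0.03              # market's initial inflation expectation
    path = [expectation]
    for _ in range(steps):
        rate = 1.5 * expectation    # Taylor-style rule: lean against expectations
        expectation = 0.02 - gain * (rate - 0.03)  # expectations react to policy
        path.append(expectation)
    return path

stable = simulate(gain=0.3)    # feedback slope 0.45 < 1: converges
unstable = simulate(gain=1.0)  # feedback slope 1.50 > 1: oscillates, diverges
print(f"low gain, last value:  {stable[-1]:.4f}")
print(f"high gain, last value: {unstable[-1]:.2e}")
```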

To link this to Goodhart's Law, if the Fed targeted the prices of inflation-linked assets, those prices would mostly contain information about expectations of Fed policy decisions (which the Fed already knows better than the asset market), rather than about non-Fed economic forces that might affect inflation or people's utility of stable prices (i.e. the things the Fed wants to use the asset prices to ascertain).

So Goodhart's Law, in its obviously true form, seems very important for policymaking.

Saturday, April 06, 2013

A world without macroeconomists?

Miles Kimball, some other folks, and I were having a discussion on Twitter the other day about the question of "What would happen if there were no economists?" So we decided to do some blog posts on the topic.

Of course, the definition of "economist" is fuzzy; if you make a chart of past GDP growth rates on your home computer just for fun, does that make you an "economist"? So I'll assume that "economist" means "academic economists, or economists working at government research institutes." In other words, paid econ researchers.

Anyway, the question is much too hard! There are a LOT of kinds of economists, and they do lots of different things. Tax economists study tax policy. Labor economists study individuals as they make their way through the economy. Decision theorists study how individuals make decisions. Applied game theorists study how people bid on auctions, and lots of other stuff like that. Financial economists study financial markets. Econometricians do statistics. Development economists study how countries get rich. Trade economists study patterns of trade. Industrial organization economists study...um...Well, anyway, there are a lot of different types of economist. The question of what would happen if they all vanished is fun far-out speculation, but I'll leave that task to somebody else.

Instead I thought I'd just stick to the type of economist everyone hears about: Macroeconomists. What if universities, and the Fed, stopped hiring people to do macroeconomics?

Well, for one thing, universities probably wouldn't save any money. The demand for econ profs is based mainly on the demand for undergrad econ classes, which probably wouldn't much change if one subfield of econ research were abandoned. Econ classes would just focus more on micro and on subfields of micro, and would be taught by micro researchers. As for the U.S. government, it would save a tiny bit of money, but not much, since macroeconomists have tiny research budgets.

Would we lose the ability to forecast the macroeconomy accurately? It's tempting to conclude "No," since macro models of the type overwhelmingly used, created, and studied by modern macro researchers (DSGE models) haven't really proven better at forecasting than the non-structural spreadsheet-type models used by most private firms, or than consensus individual forecasts (actually one DSGE model has performed slightly better in recent years, but this might just be data mining and publication bias). It's telling that private firms don't hire people to make DSGE models, but do hire people to make forecasts with much simpler tools.

However, here's an interesting thing about research, and about science: Past discovery is no guide to future discovery. Chemists were basically a joke for centuries before they stumbled on a few key principles, and rapidly turned into the most reliable discovery-factories in all of science. Biologists had an even longer history of uselessness before they became incredibly useful thanks to new technologies. So someday, macroeconomists might learn how to forecast the economy extremely well. We really just don't know. A breakthrough in forecasting power would yield huge payoffs to society.

OK, what about policy? Macroeconomists will gladly tell you that modern models are not a lot of use in forecasting - that their main use is in giving policy recommendations, conditional on your assumptions (i.e. "If for whatever reason you believe that the economy works this way, here's what you should think you can do with policy.") But as I've often griped, I don't think they really do this particularly well...it's too hard to choose which one to use, even if you know your own general priors about how the economy works. And people have silly priors anyway. Not to mention that even if you choose a model, its output may be incredibly hard to interpret and use. And when the world seems to hit on a consensus policy that seems to work well - the Fed using interest rates to lean against fluctuations in inflation and GDP, for example - the models seem to follow the policy rather than the other way around.

So does this mean that macro research is useless for policymaking? No! Not at all!! Because here's an interesting thing about policymaking: No matter who advises the policymakers, policy is going to get made. That includes economic policy. So if there were no academic and Fed macroeconomists around to advise policymakers, who would policymakers listen to on economic matters?

My guess: Some very dangerous people. 

For all the talk of academic macro being politicized, it's much less politicized than the macroeconomic discussion outside of the research community. My own experience is that most macroeconomists are pretty apolitical, and research supports that...but even if my sample is biased, macro's interventionist and laissez-faire schools are pretty close to each other ideologically, compared to, say, A) armchair-theorizing politicians, B) TV commentators, or C) the denizens of internet forums. It really is a jungle out there. You have David Stockman. You have Ron Paul and his followers. You have David Graeber and his followers. And worse. You have "Austrians" who think all of economics can be deduced from some vague derp. You have Marxists who think - well, I'm not sure, because they tend to denounce and vilify you if you even ask them what they mean, but it sounds nuts. In short you have a cavalcade of vast unending wackitude, often with a proven track record of wrecking economies and societies.

So it's possible to see macroeconomists as doing plenty of good, simply by sitting there not being absolute wackaloons. A million DSGE models from which it is impossible to select sounds a lot better to me than three or four totally nutcase worldviews, the selection of any one of which is likely to cause human tragedy on a vast scale. (Note: This idea, of macroeconomists as a vaccine against macro-lunacy, was first suggested to me by Justin Wolfers.)

(Update: In the comments, Robert Waldmann writes:
A world without macroeconomists wouldn't be a world without economists who talk to journalists about the macroeconomy. The risk is that the voice of the economics profession will be trade theorists, economic historians, behavioral economists and finance economists such as Krugman, DeLong, C Romer, Goolsbee, Fama and Cochrane. 
I think Waldmann is right, and the danger from having zero dedicated macroeconomic researchers is not as great as I'm making it out to be; other economists could still hold back the crazies, they'd just have a harder time doing it with no empirical research or macro theory to back them up. So the crazies might not immediately take over, but they might gain more clout and respect.)

And actually, there's a huge area of macroeconomic research that I haven't even mentioned yet: macro empirics. Empirical macro is fundamentally constrained by the limitations of time-series data ("history only happens once", i.e. ergodicity and stationarity are near-impossible to verify) and cross-country comparisons. But within those limits, macro empiricists can tell us a lot, just by quantifying things and noting seeming regularities. That's enormously helpful both for policy and for forecasting. You need controlled experiments to create a reliable predictive theory of the world, but there's a huge amount you can know without controlled experiments, just by watching the world go by and keeping careful track of what you see. And macro empiricists can certainly do the latter. It's thanks to them we know things like Okun's Law, which tells us that fast output growth is consistently associated with falling unemployment. It's also thanks to them that we know that investment is much more sensitive to the business cycle than consumption. Or that slightly less than half of people seem to be "hand-to-mouth" consumers who don't obey the Permanent Income Hypothesis. Etc. Without macroeconomists, we just wouldn't know these things, and reasonable people would argue about them.
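As a sketch of what this kind of empirical work looks like, here's a regression on synthetic data that recovers an Okun-style coefficient. The -0.5 slope, the 2.5% trend growth rate, and the noise level are all made up for illustration, though -0.5 is in the ballpark of textbook estimates:

```python
import numpy as np

# Synthetic illustration of Okun's Law: regress the change in the
# unemployment rate on real GDP growth and recover a negative slope.
# All "data" here is simulated; parameters are invented.
rng = np.random.default_rng(0)
gdp_growth = rng.normal(2.5, 2.0, size=200)                  # percent per year
du = -0.5 * (gdp_growth - 2.5) + rng.normal(0.0, 0.3, size=200)

# OLS with an intercept
X = np.column_stack([np.ones_like(gdp_growth), gdp_growth])
intercept, slope = np.linalg.lstsq(X, du, rcond=None)[0]
print(f"Estimated Okun coefficient: {slope:.2f}")  # close to the true -0.5
```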

Also, I didn't even mention spinoff products. Macro empiricists, struggling to wring some insight out of terrible data, often come up with truly remarkable innovations that may be useful in many disciplines. I'd put Chris Sims' work with structural vector autoregressions in this category.
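For flavor, here's the reduced-form first step behind a vector autoregression, estimated by OLS on simulated two-variable data. The coefficient matrix and sample size are invented, and the structural identification step that was Sims' real contribution (e.g. a Cholesky ordering of shocks) is not shown:

```python
import numpy as np

# Minimal sketch of the reduced-form step behind a structural VAR:
# estimate y_t = A @ y_{t-1} + e_t by OLS on simulated data.
rng = np.random.default_rng(1)
A_true = np.array([[0.7, 0.1],
                   [0.2, 0.5]])   # stationary: both eigenvalues inside unit circle
T = 1000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0.0, 0.1, size=2)

# OLS: regress y_t on y_{t-1}
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))  # should be close to A_true
```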

So, in short, from macroeconomists we get at least the following benefits:

1. Quantification of macroeconomic phenomena.

2. Observation of correlations between macroeconomic variables (and between macroeconomic and microeconomic variables).

3. The creation of "spinoffs" like sVARs.

4. The unknown potential for big breakthroughs in forecasting methods.

5. An "anchor" that keeps policy away from the dangerous extremes urged on us by various politically-motivated fringe groups.

So without macroeconomists, it seems to me that we would lose quite a bit, actually. And since macroeconomists are very cheap, all in all, it seems we should keep them around.

That doesn't mean we bloggers have to stop complaining about them, of course. ;-)