Friday, March 07, 2014
I've spent a lot of time griping about modern macroeconomics, but I want to take a minute to point out how quickly and adroitly the profession (paradigm? research program? hive mind?) has responded to many of the major criticisms leveled at it since the 2008 financial crisis. Here are what I see as the attacks macro has more or less fended off:
1. "Macro didn't predict the crisis."
This one never really seemed to stick in the first place. First of all, precious few people predicted the crisis, and many of those who did tend to be the type of people who predict crises every week. In 2011, a lot of the people who supposedly "got" 2008 were predicting another crisis - a wave of government defaults, followed by hyperinflation. Didn't happen. Meanwhile, people remembered that there are things out there that are just really hard to predict, like earthquakes.
And also, it's not like macroeconomics betrayed the public's confidence in this regard. I don't think many people inside or outside of the profession thought in 2007 that macro was able to predict recessions, financial crises, etc.
2. "Macro doesn't include finance."
This was somewhat true for a very short time following the 2008 crisis. Why? Because it takes a few months to write papers and get them out there. It is true that before 2008, finance didn't get much attention in macro models. But instead of sticking their heads in the ground, top macroeconomists responded to the crisis by racing to make models in which the financial sector drives recessions. For example, in 2009, Michael Woodford and Vasco Curdia came out with "Credit frictions and optimal monetary policy", and in 2010, Larry Christiano and coauthors came out with "Financial factors in economic fluctuations". Those are two top macro guys, but early efforts like those were only the thin edge of a very large wedge. As of now, practically every macro paper I see - whether from top people, young entrants, or central bank economists - includes "financial frictions", "financial shocks", "credit frictions", a banking sector, leverage restrictions, or something along those lines. Financial macro is absolutely the In Thing right now. And there has been increased interest, attention, and recognition directed toward economists who were thinking about the financial sector before 2008.
Basically, the speed with which macro has put finance at the center of its theories of the business cycle has been nothing less than stunning.
3. "Macro thinks the economy is a representative agent with rational expectations. Ha!"
The representative-agent thing had actually been challenged long before the crisis. Simpler macro models, like OLG-type models, often include multiple types of agents. There have been efforts to put heterogeneity into big DSGE-type models too - for example, the Krusell-Smith (1998) model (these didn't get quite as far, because this kind of thing is very technically difficult to model). Heterogeneity also exists in a number of the labor search type models that were getting popular even before the crisis. In most of the new financial-macro models, there are heterogeneous agents of various types - for example, creditworthy borrowers and bad borrowers, in models of bank lending. Of course, representative consumers and firms still dominate the literature, because they're simply a lot easier to use than the alternative. But they're not the only game in town; macroeconomists are definitely thinking about heterogeneity.
As for rational expectations, there have not been many attempts to replace rational agents with irrational ones (though some top theorists have played around with the idea). An alternative concept of rationality, Bayesian learning, has been getting more attention from DSGE theorists. But so far, macroeconomists are still very timid about abandoning this pillar of the Lucas/Prescott Revolution. Why? One big reason is that there's no clear alternative. There are lots and lots of ways people could be irrational, and it takes a big leap of faith to pick one of those ways and run with it. That's probably why critics of this aspect of macro are frustrated, and will continue to be frustrated - it's easy to see that rational expectations isn't a law of the Universe, but it's devilishly hard to agree on which alternative should replace it.
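To give a flavor of what "Bayesian learning" means in this context, here's a minimal sketch (all numbers are made up for illustration): instead of being endowed with the true model, an agent starts with a prior over an unknown parameter - say, a trend growth rate - and updates it each period from noisy observations.

```python
import numpy as np

# Illustrative sketch of Bayesian learning as an expectations mechanism
# (hypothetical numbers, not any particular paper's model): an agent
# estimates an unknown constant growth rate mu from noisy data,
# updating a normal prior each period via the conjugate normal formula.
rng = np.random.default_rng(1)
mu_true, noise_sd = 0.02, 0.05     # true growth rate, observation noise
prior_mean, prior_var = 0.0, 1.0   # diffuse initial beliefs

for _ in range(500):
    obs = mu_true + noise_sd * rng.standard_normal()
    # Precision-weighted average of prior belief and new observation.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_sd**2)
    prior_mean = post_var * (prior_mean / prior_var + obs / noise_sd**2)
    prior_var = post_var

print(prior_mean)  # beliefs converge toward the true value of 0.02
```

The appeal for theorists is that this stays inside the optimization framework: agents are still doing the best they can with their information, they just don't start out knowing the model.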
4. "Macro models are too big and clunky to be useful in a crisis."
Before the crisis, macro definitely seemed to be moving toward huge, unwieldy models with tons of different "shocks" and a number of "frictions". And there still is quite a bit of that. But a lot of macro models that I've seen recently definitely seem to be more of the stripped-down, "let's tell a story about this one mechanism" kind of models. And I've even seen some macro people using simple OLG-type models instead of the infinitely forward-looking, "fully specified" monsters that usually go by the name of "DSGE". No, these aren't simple supply-and-demand curve diagrams. But even if it hasn't given Paul Krugman what he'd like, macro seems to be giving Ricardo Caballero a little more of what he wished for in 2010.
The real critic that matters in this case isn't the public; it's central banks and policy advisors. The public never has to use macro models, so of course it doesn't (or shouldn't) care how unwieldy they are. But central banks and policy advisors need models they can both understand and apply in a crisis situation. Whether or not macro's new crop of models will give them that is an open question, but hopefully we won't have to see it answered in the near future.
5. "Macro is too political."
When we read prominent economists writing in the WSJ denouncing fiscal stimulus, or calling for higher interest rates (for a different reason every week), it's hard not to think that macro is just a pack of conservative hyenas with the odd liberal honey badger thrown in the mix. But you have to realize that there's a heavy selection bias going on here - the macroeconomists who write in the press are going to be the people who are both A) the most interested in policy questions, B) the most confident in their policy-related beliefs, and C) the ones who attract the most attention from the readers of the WSJ or wherever. In terms of academic macro, the evidence says that it's only slightly aligned with the big Red-Blue divide of American politics.
Of course, academic politics is another matter. There are big divisions over things that practically no WSJ reader is going to care about - TFP shocks, sticky prices, parameter calibration, etc. "Freshwater" and "saltwater" still describe a real human network divide (even if the theoretical battles of the 1990s and early 2000s are fading now that financial macro has taken over). But academic politics is not the same thing as national politics.
So maybe macro didn't actually manage to fend off this particular criticism, but I think the criticism was never really very accurate in the first place.
So to sum up: Macro definitely does not look like a dying research program, stuck in scholastic navel-gazing while the world passes it by. I could name a couple research programs that do look like that, but macro is not one of them. Instead, it looks like a vigorous, energetic field full of excited young true believers and respected older figures who are still blazing new trails. Instead of retreating into the ivory tower and ignoring its weaknesses after 2008, macro has aggressively moved to put finance into its theories, while playing around with things like heterogeneity, learning, and simpler forms of models. Nor is the world losing interest in macro - it remains the single biggest, most glamorous, and most popular field of economics, and the biggest destination for top job market candidates.
Of course, that doesn't mean that macro is as awesome as its boosters would tell you. I think it has some big problems. But today I don't think I'll harp on those. Today, I think I'll just give the macro field its due for answering so many of its critics in quiet but decisive fashion.
Posted at 10:20 PM
Here's me being both preachy and alarmist in The Week:
For almost 300 years, Ming China could — and did — rightfully consider itself the center of the world.
But with the hindsight of history, the Ming doesn't look so awesome. While China was basking in seemingly timeless stability, Europe was seething with new ideas and technological progress. Even as the Chinese government banned oceanic shipping and heavily restricted foreign trade, European countries were discovering the New World and building trading empires. By the time the Ming fell in the 17th century, Europe was well on the way to dominating the world.
The stagnation of the Ming may carry important lessons for a more modern superpower: The United States. We too are a huge, rich, powerful nation that for much of our history has dominated the field of competitors. We too have a whole century of dominance — the 20th — under our belt. And if there's one thing we don't want to do, it's turn into the Ming...
One big reason the Ming stagnated was probably isolationism...The United States is hardly isolationist. But as a large country that is geographically isolated from most of the populated world, we need to be vigilant against turning inward...Survey after survey finds that Americans are geographically illiterate...
Another likely reason for the Ming's decline was disrespect of science...America shows uncomfortable signs of treading this same path. Of course, there is the attempt by conservative groups to halt the teaching of evolution, climate science, and the Big Bang in public schools, but this is just the tip of the iceberg. Americans are turning away en masse from science, technology, and mathematics fields...
America shows signs of falling into this trap. We tell ourselves robotically that we have "the best health-care system in the world," when in fact it underperforms most other rich countries. We gape and gawk when we first travel to Japan or Switzerland and find that all the trains run perfectly on time — not to mention the fact that there are trains in the first place. We ignore our sky-high infrastructure costs and grumble about potholed roads, never pausing to wonder why West Europe and East Asia don't have these problems. We tell ourselves that we're the "land of the free," ignoring the fact that in Japan you can drink a beer in the park without getting arrested. We say that anyone in America can get rich, ignoring the fact that economic mobility is lower here than in almost any other rich country.
The fact is, America had an extraordinary run of success in the 20th century. We got used to thinking of our country as The Future, as No. 1, as the place where everything happens. But other countries have been racing to catch up with us, and in some ways they have already succeeded. We need to get out of our bubble and recognize the innovations other countries have achieved, and reform our institutions in order to keep up. Otherwise, we risk becoming a stagnant superpower. "Ming America" must be avoided at all costs.
You can read the whole thing here!
Note that the "collapse" headline was not chosen by me. I'm more worried about high-level stagnation and "golden age-ism" than collapse.
Posted at 5:37 PM
Monday, March 03, 2014
I might as well talk about something I don't know that much about, because...I can! Because it's MY blog, suckaz! BUAHAHAH *pause to get up after falling over backward in my chair*
Anyway, on Twitter, I suggested that Barack Obama is the best foreign-policy president since Nixon, and therefore we should trust his instincts on Ukraine, at least for a little while. But why do I think Obama is so good at foreign policy (and why do I think Nixon was so good, for that matter)? Well, like I said, I am not any kind of expert on the topic. And since Obama's term is not even finished yet, we're not actually close to knowing how good of a foreign-policy president he really is. Obviously Ukraine will be a huge test. And also, "the best" might have been hyperbole; I think George H.W. Bush is another strong contender.
But that being said, I think Obama's record so far is pretty damn impressive. Here are what I see as the highlights:
1. American prestige partially restored. The Iraq War drove global opinion of the U.S. right off a cliff. In every region, our standing plummeted. But after Obama took office, the trend reversed in Europe and in much of the rest of the world. Compare opinions of the U.S. in 2004 and 2007 to opinions in 2009 and 2013. Only in much of the Muslim world has there not been a major recovery in American prestige since Obama became president.
2. Osama bin Laden and most of al Qaeda's senior leadership killed. Now, obviously Obama did not carry out the operation himself, but his strategic choice to focus attention on al Qaeda (instead of Iraq or the Taliban) was a good one. The result is that Obama accomplished what Bush could not - the almost total destruction of the core of al Qaeda. As a result of Obama's policy, the Afghanistan war can unequivocally be called a success.
3. Iraq withdrawal. America had to withdraw from Iraq; there was nothing more to be gained by staying, and the American people knew this. Obama did it quickly and effectively. This is an obvious parallel with Nixon and Vietnam, except that Nixon carried out a "surge" before withdrawing (which the U.S. had done in Iraq under Bush), and that Nixon waged a covert war in Cambodia (much as Obama is waging a covert drone war in Afghanistan/Pakistan while slowly withdrawing from that war).
4. Gaddafi gone. When Libya began to rebel against Muammar Gaddafi, Obama could have sent in ground troops to help, miring America in another Middle Eastern war. He could have stayed out entirely, resulting in Gaddafi brutally suppressing the revolt. Instead he took the middle ground, cooperating with Europe to set up a no-fly zone, which gave the rebels the edge they needed to eventually prevail.
5. Syria. It would have been so easy to get entangled in another no-win war in Syria, but Obama wisely held back. When Syria's government used chemical weapons, Obama threatened it, a tactic that looked like it might fail for a moment...but which wildly succeeded. As a result, Syria is now dismantling much of its chemical arsenal. And as for the war itself, it looks like Hezbollah in a fight to the death against al Qaeda...is that really a fight we want to interfere with??
6. Alliance with India. America's incipient alliance with India ("strategic partnership", whatever) seems to me to be George W. Bush's single biggest foreign policy achievement, and Bill Clinton's single biggest failure. But in any case, Obama is continuing to get close to India, the world's largest democracy and a natural U.S. ally situated in a critical region of the world.
7. Warming relations with Iran. Only Nixon could go to China; the resulting flip of that mega-nation to a U.S. quasi-ally in the Cold War almost certainly hastened our victory. If Obama can follow up on the detente with Iran that began with the recent nuclear deal, it will defang one of America's most implacable enemies. Not quite a Nixon/China moment, but a solid win, and it also showed conclusively that American foreign policy is not in the pocket of the "Israel lobby" (an accusation I never believed, but many did believe).
8. Pivot to Asia. During the Bush administration, many Southeast Asian countries warned that the U.S. was ignoring the region. But with the "pivot to Asia", Obama is rectifying that oversight. I'm not sure how many dividends the policy has paid yet, but the partial opening of Myanmar, and its repositioning from a solid Chinese ally to a more neutral stance, seems like a very optimistic sign.
These are what I see as Obama's successes, but equally importantly, I don't really see any big missteps or failures. There has been a ton of criticism of the drone strikes in Pakistan, but Obama has scaled them back gradually, and so far there have not been any noticeable bad consequences there. There is the argument that Obama has been too soft on Russia, and I guess we're about to see whether that's true.
And keep in mind that all of this is against a backdrop of a steep decline in America's military and economic power relative to our main rival. China is still riding a tsunami of "catch-up growth", while we're hobbled by the aftermath of the Great Recession and the Iraq War. There is just no way the U.S. could have remained a hyperpower this decade, but thanks to adroit maneuvering, we're still getting most of what we want in the world.
I'd say that qualifies Obama as a pretty solid foreign-policy success...so far. The next few days could prove me very, very wrong about that. We'll see.
Posted at 7:40 PM
Sunday, March 02, 2014
"Nothing ends, Adrian. Nothing ever ends."
- Dr. Manhattan
In a recent blog post, Chris House proclaimed the death of Real Business Cycle theory:
[T]he basic version of the RBC model is not really taken very seriously by researchers anymore — at least with regard to the role of productivity shocks. Better measurement has deprived the canonical RBC model of the innovations necessary to generate cyclical variations in economic activity...
Of course there are actual productivity shocks...but none of these seem to be responsible for substantial changes in employment or production...
[T]here may yet be situations in which the RBC model might be applicable. While modern advanced economies do not have business cycles that are driven by real shocks, other economies might. For example, suppose you wanted to analyze the economy of ancient Egypt...the RBC model might provide an interesting guide as to what patterns one might expect in the data. (If there is an enterprising student out there who has an idea of where we could find some actual data on production, etc. for ancient Egypt, send me an e-mail, I would love to write this paper with you...)
A while ago, Chris and I had a mini-debate about the degree to which modern macroeconomics takes data into account. If Chris is right, and RBC has been decisively rejected by the macro community, then it's a point for Chris, since it means that macro has the ability to discard theories that don't work.
So it's interesting to note this paper, just published by Ed Prescott and Ellen McGrattan, forthcoming in the American Economic Review Papers and Proceedings, titled "A Reassessment of Real Business Cycle Theory". The abstract:
During the downturn of 2008–2009, output and hours fell significantly, but labor productivity rose. These facts have led many to conclude that there is a significant deviation between observations and current macrotheories that assume business cycles are driven, at least in part, by fluctuations in total factor productivities of firms. We show that once investment in intangible capital is included in the analysis, there is no inconsistency. Measured labor productivity rises if the fall in output is underestimated; this occurs when there are large unmeasured intangible investments. Microevidence suggests that these investments are large and cyclically important.
So the inventor of RBC certainly doesn't think it's dead, and the AER, the top econ journal, is still obviously taking it seriously. (Note: AER Papers and Proceedings is a non-peer-reviewed section.)
But the survival of RBC goes far beyond such holdouts. TFP shocks - the driving force behind business cycles in RBC, and the part that Chris House says nobody takes seriously anymore - are central to a lot of more recent macro models. For example, Krusell and Smith (1998), possibly the first paper to make a serious attempt at a DSGE model with both heterogeneity and aggregate risk, uses TFP shocks as the driver of the aggregate risk. TFP shocks are also the main or only shock in a number of modern asset pricing, labor search, and international finance models (I won't even link to specific papers, there are so many).
I suspect there are two reasons why TFP shocks are still very much alive. The first is simply that they're easy. If you want your model to be driven by demand shocks, you have to include machinery like Calvo pricing and imperfect competition - or something even more complicated. That makes life harder for the model-maker.
Second, since TFP shocks were the driver of the first DSGE model, they've become "canonical" - a sort of "default". TFP shocks were the first thing the DSGE people tried way back when, so it's the first thing today's macroeconomists try when they come up with some new wrinkle, like heterogeneity or search or habit formation. In macro, ontogeny really does recapitulate phylogeny.
So there seems to be a tendency for macroeconomists who come up with a new idea to say "Let's see if we can make it work with TFP shocks first". Thus, TFP shocks will live as long as DSGE macro itself lives.
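To see why TFP shocks are such an easy default, here's a minimal sketch of the canonical mechanism (parameter values are made up for illustration): log TFP follows an AR(1), and in the special case of log utility, Cobb-Douglas production, and full depreciation, the planner's savings rule has the well-known closed form k' = αβ·y.

```python
import numpy as np

# Toy RBC-style simulation (illustrative parameters, not a calibration):
# the single driving force is an AR(1) shock to log TFP.
rng = np.random.default_rng(0)
alpha, beta, rho, sigma = 0.33, 0.96, 0.95, 0.007

T = 200
k_ss = (alpha * beta) ** (1.0 / (1.0 - alpha))  # steady-state capital
z = np.zeros(T)          # log TFP
k = np.full(T, k_ss)     # capital stock, started at steady state
y = np.zeros(T)          # output
for t in range(1, T):
    z[t] = rho * z[t - 1] + sigma * rng.standard_normal()  # AR(1) TFP shock
    y[t] = np.exp(z[t]) * k[t - 1] ** alpha                # Cobb-Douglas output
    k[t] = alpha * beta * y[t]                             # closed-form savings rule

print(np.corrcoef(z[1:], np.log(y[1:]))[0, 1])  # output closely tracks TFP
```

Twenty lines and you have a complete business cycle model; bolting a new wrinkle like heterogeneity or search onto this chassis is far easier than first building out sticky prices and imperfect competition.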
Don't believe the haters, folks. Rumors of RBC's death have been greatly exaggerated.
Posted at 2:08 PM
Friday, February 28, 2014
Chris House has a new blog post that is pretty dismissive of behavioral economics:
In the early 2000’s, my colleagues and I were anticipating a flood of newly minted behavioral Ph.D’s from the top economics programs in the country. Later, when the financial crisis exploded in 2007-2008 we were again told that behavioral economics would finally come into full bloom. It didn’t happen though. The wave of behavioralists never came. After the financial crisis, young Ph.D’s turned their attention to studying financial macroeconomics – and when they did, they used mostly standard techniques based on rational decision making. They incorporate more institutional detail rather than behavioral elements...
Today, it seems like behavioral economics has slowed down somewhat. For whatever reason, the flood of behavioral economists we were anticipating 10 years ago never really materialized and the financial crisis hasn’t led to a huge increase in activity or prestige of behavioral work. Certainly the evidence that people don’t typically behave rationally is quite compelling. It’s easy to find examples of behavior which conflicts with economic theory. The problem is that it’s not clear that these examples help us much. Behavioral economics won’t get very far if it ends up being just a pile of “quirks.” Are these anomalies merely imperfections in a system which is largely characterized by rational self-interest or is there something deeper at play? If the body of behavioral studies really just provides the exceptions to the rule then, going forward, economists will likely return to standard rational analysis (perhaps keeping in mind “common sense” violations of rationality like default options, salience effects, etc.). I would think that if behavioral is to somehow fulfill its earlier promise then there has to be some transcendent principle or insight which comes from behavioral economics that we can use to understand the world. In any case, if behavioral is to continue to develop, it will need some very smart, energetic young researchers to pick up where Laibson and the others left off. If not, behavioral economics gets a goodbye kiss from Heidi Klum and it’s “Auf Wiedersehen.”I don't think Chris gives a particularly enlightening explanation of where behavioral economics is falling short (what does "helps us much" or "transcendent principle" even mean?? Update: see comment section.). But Chris certainly seems right that interest in behavioral econ has declined a bit in the last couple of decades, at least in America (Europe is a different story).
However, I think it's important to point out that "behavioral economics" is a different thing from "behavioral finance" (my own field).
As best I can tell, "behavioral economics" means something along the lines of "economics in which individual decision-making behavior is assumed to be subject to observable, predictable psychological biases". But the term "behavioral finance" has come to mean a much more expansive set of things.
"Behavioral finance" began not with Daniel Kahneman, but with Robert Shiller, who showed that stock prices fluctuate more than the standard theories would suggest. Shiller did not find that the excess fluctuations were caused by psychological biases. In fact, the search is still on for an explanation. But the "anomaly" Shiller found was real, and it has real-world implications - for example, Shiller's CAPE ratio can be used to predict the long-term movements of the stock market to a small but real degree.
A bunch of other "anomalies" in standard theory were soon discovered - most famously, the value anomaly demonstrated by Josef Lakonishok, Andrei Shleifer, and Robert Vishny (among others), and the momentum anomaly demonstrated by Narasimhan Jegadeesh and Sheridan Titman (among others). These anomalies have proven so durable that they have become standard pieces of the risk models used by every large financial institution, and have been used to make billions of dollars for firms like Clifford Asness' AQR Capital Management. As with Shiller's finding, we don't know why these anomalies happen, but the fact that they happen is pretty indisputable at this point, and they have obviously led to the creation of real-world technologies that have found widespread use in the private sector (unlike, say, DSGE macro models).
For some reason, most phenomena that don't agree with classic, Gene Fama vintage efficient-markets theory have come to be labeled "behavioral finance". This might have had to do with optimism that the ultimate explanation for these phenomena would be some sort of psychological bias on the part of investors. The jury on that is still out, but for some reason the name stuck.
There is a second strand of research that has come to be called "behavioral finance". This is finance based on informational frictions (i.e., problems with information processing). Early papers in this vein include Sanford Grossman and Joseph Stiglitz' landmark finding that information costs destroy strong-form market efficiency, and Paul Milgrom and Nancy Stokey's famous result that rational expectations leads to the absence of "information trading" in financial markets - a result that obviously shows that rational expectations are not a realistic description of financial markets, since information trading does in fact seem to be quite common. More recent papers in this vein include theories that try to mathematically model "bounded rationality", like the "sparsity" theories of Xavier Gabaix (which I saw presented at the Miami Behavioral Finance Conference in December 2012), "herd-behavior" models like this one by V.V. Chari and Patrick Kehoe, or subjective-expectations Bayesian models like Martin Weitzman's famous 2007 paper on asset return puzzles.
Informational-friction finance fits the name "behavioral finance" a bit better. I think most psychologists would agree that psychological heuristics and biases are ways that the human brain deals with information costs and bounded rationality. Behavioral finance people sometimes use psychological explanations for observed anomalies, but we are never quite comfortable doing so, because there is always the idea that underlying these psychological phenomena there must be some more fundamental (but difficult-to-model) process of limited information-processing capacity. For example, there are many behavioral finance models based on "overconfidence" (including, implicitly, the Harrison-Kreps model), but I heavily suspect that psychological overconfidence is just an occasionally useful stand-in for the cost of making inferences about others' information from hypothetical projections of their actions, or for the persistence of belief heterogeneity under rapid structural change.
Anyway, there is a third strand of "behavioral finance" research that deals with individual investor behavior, which is of course very useful to financial institutions that have to deal with customers, and also to regulators like the new Consumer Financial Protection Bureau. This is the kind of thing pioneered by the research of Brad Barber and Terry Odean, and picked up by researchers like Joshua Coval and Tyler Shumway. This literature is almost entirely empirical, and although it often tests hypotheses that are motivated by the psychology literature (e.g. "overconfidence"), it does not rely on an explicit, non-rational model of human behavior.
A fourth strand of "behavioral finance" has important implications for macroeconomics: noise-trader bubble models. These are theories that show how large, endogenous disturbances may spontaneously manifest in financial markets. Famous theories of this type include the 1990 models by Brad DeLong, Andrei Shleifer, Larry Summers, and Robert Waldmann, and the more recent model of Dilip Abreu and Markus Brunnermeier. These models are "behavioral" in the sense that they assume that some segment of the populace has incorrect beliefs, and focus on showing how the more traditionally "rational" agents are unable to stabilize markets in the face of these "noise traders". There is some degree of empirical support for these models, but of course there are other, competing models of bubbles that rely on institutional frictions instead. Regardless, the Fed certainly believes that financial bubbles are important (an important sea change from previous decades), and the awareness of bubbles has an influence on Fed policy.
A fifth strand of "behavioral finance" research tests the usefulness of psychological biases for investing strategies. A great example of this is the attention-based M&A trading strategy in this paper by Stefano Giglio and Kelly Shue (both young recently hired profs at Chicago's Booth School of Business). Another example is the recent series of papers by Ulrike Malmendier and Stefan Nagel cited in Chris House's post. This literature too is very empirical, and often does not include any explicit model of individual behavior - a big no-no in pure economics, but something that the finance literature has no problem with (because if you can trade on it and make money, then it's for real).
And of course a sixth strand of "behavioral finance" is experimental finance, which for most of its short history was limited to simple Vernon Smith-type "bubble experiments", but is now branching out. I just met a young macro-finance professor who makes DSGE asset-pricing models, but who also does experiments to test behavioral hypotheses. Very cool.
As you can see, "behavioral finance" is a somewhat poorly chosen catch-all term for a bunch of new and exciting research in finance. "Behavioral" doesn't mean the same thing in finance that it does in pure econ. And in fact, the term seems to be having less and less meaning, since so much "behavioral" stuff has gone so mainstream. More like the American revolution than the French, the behavioral finance rebels are merging with the old establishment instead of overthrowing it.
So behavioral finance is not a speculative, marginal, or incipient field. It has already won at least two Nobel prizes (Smith and Shiller), or maybe four if you want to count Stiglitz and Kahneman. Plenty of researchers in their prime at top business schools and economics departments are doing behavioral finance research.
But what about the younger generation? Interest seems to have increased, not declined. I am on two finance search committees at Stony Brook, and I can confidently say that a whole lot of job candidates - including many top ones - list "behavioral" as one of their interests. They are not using "behavioral" in the sense that Chris House uses the term; their "behavioral" research only occasionally invokes psychology. Instead, they are using the term in the more vague, expansive way that the academic finance community has come to use it.
Whatever the future of "behavioral economics", the future of "behavioral finance" is to merge completely with mainstream academic finance. And in fact, that future is already upon us.
Update: Chris House has a follow-up post. He believes that institutions, not psychology, will prove to be the key connection between financial markets and the macroeconomy. That's a whole new interesting topic, which I'll leave for a future post...
Posted at 4:43 PM
I was recently interviewed by Tom Ashbrook for NPR's "On Point" about the "jobless economy" - basically, the "rise of the robots" that everyone is talking about. Erik Brynjolfsson and Andy McAfee, authors of the new book The Second Machine Age, were also on the program. You can listen to the audio here.
Erik and Andy had very coherent and well-prepared points, which is not surprising given that they've just written a book on the subject. I haven't yet formed a single dominant thesis about the "rise of the robots", so my thoughts were separated into a number of distinct points. Those points were:
1. Technology has not been as important a force behind inequality and unemployment during the last 15 years as many people think; up til now, the much bigger story has been globalization (especially China). This is supported by Mike Elsby's research on the subject. But going forward, the "rise of the robots" may be a much bigger worry, and we need to think about how to respond to it.
2. It's not yet clear if human labor will find a way to remain valuable en masse in the age of automation and the digital economy. That ended up happening in the Industrial Revolution, but there is no guarantee that it will happen again. We need to be prepared for the possibility that large numbers of humans are rendered nearly obsolete by new technology.
3. In the short term (~5 yrs), infrastructure spending can boost our economy and help people get more jobs and higher wages. But that opportunity will be played out and will not repeat itself.
4. In the medium term (~20 yrs), our culture of work and the dignity that paid employment provides can be partially preserved through a program of wage subsidies, which are similar to the EITC but easier to implement and will probably have a more positive psychological impact.
5. In the longer term (>20 yrs.), if technology continues to make more and more humans obsolete, we need to do two things. First, we need to find a way to distribute income more widely. Redistributive taxation and "basic income" are useful, but they have their limits. Finding a way to redistribute capital income - by helping Americans to be small business owners en masse, and/or by finding a way to democratize ownership of most companies - will be essential. The second thing we will need to do is to transition from our traditional culture of work to a culture that values humans for their interpersonal relationships and self-expressive creative output - two things that will never be obsolete.
But remember, all this is contingent on the progress of technology, and that is something that economists and futurists alike have always found devilishly hard to predict.
Posted at 12:08 PM
Tuesday, February 25, 2014
Back in 2008, as the financial crisis was unfolding, there was a big argument as to whether the crisis was a "liquidity crisis" or a "solvency crisis". It's a very important distinction. A "liquidity crisis" is when banks (or similar finance companies) are financially in the black - their assets are greater than their liabilities - but they can't get the cash to keep paying their bills in the short term. A bank run is the classic example of a liquidity crisis - even if the bank could eventually pay everyone back, it can't pay them back all at once, so if people get scared and all try to withdraw their money in a rush, they force the bank to collapse. A "solvency crisis", on the other hand, is when finance companies are actually bankrupt, and no amount of short-term borrowing will change that fact.
This question has important policy implications in a financial crisis. If companies are illiquid but solvent, you just need to have the Fed lend them money to tide them over until liquidity comes back. If they're insolvent, you either need to bail them out, or help them into an orderly bankruptcy, in order to reduce systemic risk caused by disorderly failure.
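The distinction can be made concrete with a toy balance-sheet check. This is just a sketch to illustrate the two definitions - the function name and all the numbers are hypothetical, and real bank balance sheets are of course vastly more complicated:

```python
def classify(assets_value, liabilities, cash, due_now):
    """Toy classification of a troubled bank.

    Solvent:  the (true) value of assets covers total liabilities.
    Liquid:   cash on hand covers obligations coming due right now.
    """
    solvent = assets_value >= liabilities
    liquid = cash >= due_now
    if solvent and not liquid:
        return "liquidity crisis"   # in the black, but can't pay bills today
    if not solvent:
        return "solvency crisis"    # bankrupt; no short-term loan fixes this
    return "healthy"

# A bank worth 100 owing 90 is solvent, but if only 5 in cash is on hand
# against 20 due today, it's illiquid - the classic bank-run situation.
run_victim = classify(assets_value=100, liabilities=90, cash=5, due_now=20)

# A bank whose assets turn out to be worth 80 against 90 in liabilities
# is insolvent no matter how much cash it can scrape together today.
zombie = classify(assets_value=80, liabilities=90, cash=50, due_now=20)
```

The policy point in the next paragraph maps directly onto the two branches: central-bank lending fixes the first case, but only a bailout or orderly bankruptcy resolves the second.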
Some prominent thinkers have endorsed the idea that the 2008 crisis was a liquidity crisis, created by a "run" on repo securities. People who have promoted this idea include Gary Gorton, Robert Lucas and Nancy Stokey, and John Cochrane. The model they have in mind is the Diamond-Dybvig model, the classic model of bank runs. Instead of people rushing to the bank to withdraw their deposits, the idea goes, repo lenders refused to roll over their short-term loans and fire-sold the collateral, preventing banks from being able to borrow short-term.
But others disagree. They claim that the reason banks' liquidity dried up was simply that the market realized that the banks were insolvent - that the mountains of housing-backed securities on the banks' balance sheets were in fact worth a lot less than most people had previously thought. These dissenters include Paul Krugman (who favored bank nationalization) and Anna Schwartz. They also implicitly include those who think that "Too Big to Fail" was at the heart of the crisis. If the crisis was caused by banks taking excessive risks because they knew they would be bailed out in the case of insolvency ("moral hazard"), that implies that the big banks we bailed out were, in fact, insolvent. The "TBTF" argument has been advanced by proponents of stricter regulation, including Simon Johnson and James Kwak, Paul Volcker, and Jeffrey Lacker.
In a recent blog post, Steve Williamson endorses the "TBTF" argument:
One view is that the fragility is inherent to financial systems. This view is framed in some versions of the Diamond-Dybvig model, in which the maturity mismatch and illiquidity inherent in banking imply that bank panics are possible. An alternative view is that the fragility is induced...a too-big-to-fail financial institution understands that it is too-big-to-fail, and therefore takes on too much risk, relative to what is socially optimal...This high level of risk could be reflected, for example, in a high-aggregate-risk asset portfolio, or in a maturity mismatch between assets and liabilities...[I]t is possible that Lehman Brothers could have taken corrective action...to ward off failure, if it had correctly anticipated that a bailout would not occur.
Though Williamson allows for the possibility that 2008 might have been mainly a liquidity crisis, he doubts that it's of the Diamond-Dybvig type. This is because repo is not like bank deposits, which are callable (they have "sequential service"); with repo, there needs to be some other reason for a liquidity-killing fire sale. Williamson says that he hasn't seen a convincing example of such a model. Basically, he thinks the idea that moral hazard caused a solvency crisis seems more plausible, and that the "liquidity crisis" people haven't made their case yet.
What do I think? Well, I haven't seen a "convincing" theory either, in the way that Diamond-Dybvig convincingly models the bank panics of the pre-Depression era. I've seen a number of interesting models, including this one by V.V. Chari and Patrick Kehoe (see also this more in-depth paper by Park and Sabourian). So I think there could be something to the "bank run" theory. It also seems possible to me that solvency crises - excessive risk-taking that blows up - could be caused by a lot more things than TBTF. For example, bubbles could create unrealistically low perceptions of risk throughout the system. Or there could be other corporate governance issues, such as incentives for excessive risk-taking by financial industry executives and traders.
Also I suspect that illiquidity and insolvency aren't as distinct phenomena as we usually think. There is ample evidence that liquidity risk is incorporated into asset prices, and there are a number of theories that try to explain this. Something that causes banks to take too much fundamental risk - TBTF moral hazard, bubbles, etc. - might also cause them to pay high prices for assets prone to sharp drops in liquidity.
Posted at 11:10 AM
Sunday, February 23, 2014
I always thought that "core-core" was a kind of hardcore music so hardcore that the only one who can play it is GG Allin, and only if he literally blows himself up onstage. But actually, "core-core" is a type of Japanese inflation that does not include food or energy prices. You may recognize this as being the same thing as what the U.S. calls "core" inflation. And you'd be right. The problem is that Japan already had something that they called "core" inflation, which omits food but does include energy prices. This naturally produces confusion in the press, since journalists dutifully report on Japanese "core" inflation, which readers take to be the same as the U.S. measure, even though it isn't.
Anyway, so why is this important? Because for months now, you've been hearing that Abenomics - or at least the monetary policy part of Abenomics - has been a solid success. Japanese inflation, you've heard, is climbing up toward the 2% target. To many people, that is a sign that monetary easing can hit inflation targets if the central bank is really committed to doing so.
The problem is, the Japanese inflation that people are looking at is the Japanese "core" rate, not the "core-core" rate. In other words, those rosy numbers you're seeing include energy prices. And energy prices have gone up, partly because of supply restrictions that are unique to Japan (i.e. restrictions on nuclear power in the wake of Fukushima).
Sober Look has the details:
The upturn in [Japanese "core"] CPI inflation by December was still being heavily influenced by utility prices that were up 5.5% y/y due to the effects of yen depreciation on imported natural gas and oil prices, and higher electricity prices in the face of Japan’s continued shutdown of all of its nuclear reactors. Take energy and food — which is also probably under upward pressure in part due to yen depreciation — out of CPI and it is only up 0.7% y/y as food prices themselves are up 2.2%. Most of the CPI effects of the Bank of Japan’s efforts to depreciate the yen remain confined to a relative price shock to food and energy...
In other words, if you use the inflation measure we use in the U.S., Japanese inflation is running at only 0.7% - not very close to its 2% target.
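Here is a minimal sketch of why the choice of index matters. The component inflation rates echo the figures in the Sober Look quote (food up 2.2%, energy up 5.5%, everything else up 0.7%), but the weights are made up for illustration - they are not actual Japanese CPI weights:

```python
# Illustrative y/y component inflation rates and index weights.
# Rates roughly follow the Sober Look quote; weights are hypothetical.
components = {           # name: (y/y inflation, weight in the index)
    "food":   (0.022, 0.25),
    "energy": (0.055, 0.10),
    "other":  (0.007, 0.65),
}

def index_inflation(components, exclude=()):
    """Weighted-average inflation over the included components,
    re-normalizing the weights after any exclusions."""
    kept = {k: v for k, v in components.items() if k not in exclude}
    total_w = sum(w for _, w in kept.values())
    return sum(r * w for r, w in kept.values()) / total_w

headline  = index_inflation(components)                        # everything
core_jp   = index_inflation(components, exclude=("food",))     # Japan's "core"
core_core = index_inflation(components, exclude=("food", "energy"))  # U.S.-style "core"
```

With these made-up weights, Japan's "core" measure still carries the full energy spike, while the U.S.-style "core-core" measure strips it out and lands at the much lower ex-food-and-energy rate - which is exactly the gap between the rosy headlines and the 0.7% figure.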
And as Paul Krugman reminds us, energy prices are volatile, so we shouldn't think about them when we assess the short-term effects of monetary policy:
[O]fficial Fed doctrine is to focus on core inflation, not react to short-run fluctuations in commodity prices. And the history of the past decade or so has showed that this is very much the right thing to do — headline inflation has swung widely, while focusing on core inflation has been a much better (though not perfect) guide to appropriate policy:
(Now that's PCE, not CPI, but the point is the same.)
So basically, Abenomics has not yet shown that a central bank can hit a 2% inflation target after a long period of deflation. That proposition remains an article of faith. Perhaps the target will be hit...perhaps not. (Of course, if it's not hit, expect a few supporters of monetary easing to say that the Bank of Japan was just not committed enough to hitting it...)
Posted at 6:47 PM
Wednesday, February 19, 2014
"You keep using that word. I do not think it means what you think it means."
- Inigo Montoya
One of the most popular recent buzzwords in American pop culture is the term "beta male", which people use as a synonym for "wimp". Urban Dictionary defines a "beta male" as:
An unremarkable, careful man who avoids risk and confrontation. Beta males lack the physical presence, charisma and confidence of the Alpha male.
I see this term everywhere: pop literature, the internet, TV. There's even an Amazon show called "Betas" (which at least is a pun). The popular conception is of men being divided into two groups: "Alphas", i.e. tough dominant manly-men who enjoy pumping iron and hauling women back to their meat-caves, and "betas", i.e. skinny wimpy emo dudes who would rather stay home cooking a souffle and crying along to Death Cab for Cutie.
This terminology annoys me. It annoys me because of a class I took in college, called "Human Behavioral Biology" (taught by the great Robert Sapolsky), in which I learned a little bit about primate societies. Because I took Sapolsky's class, I know that our culture is using the term "beta male" completely wrong.
Here's how primate biologists actually use Greek letters to describe male primates:
In social animals, the alpha is the individual in the community with the highest rank...In hierarchal social animals, alphas usually gain preferential access to food and other desirable items or activities, though the extent of this social effect varies widely by species...Alphas may achieve their status by means of superior physical prowess and/or through social efforts and building alliances within the group...
Beta animals often act as second-in-command to the reigning alpha or alphas and will act as new alpha animals if an alpha dies or is otherwise no longer considered an alpha...
Omega (usually rendered ω) is an antonym used to refer to the lowest caste of the hierarchical society. Omega animals are subordinate to all others in the community, and are expected by others in the group to remain submissive to everyone. Omega animals may also be used as communal scapegoats or outlets for frustration, or given the lowest priority when distributing food. (emphasis mine)
So as it turns out, we're using the term "alpha male" correctly, but we're using the term "beta male" all wrong. The beta male is the alpha male's lieutenant. He's the #2. In other words, he's more like a wingman.
The omega male is the wimpy loser, not the beta. Everybody, please adjust your slang accordingly.
Of course, our society being as goofy as it is, I don't really expect people to stop using "beta" when they actually mean "omega". But there's another interesting question here. Are wimpy emo dudes actually omega males? Do they cook souffles and listen to Death Cab for Cutie because they have been kicked to the lowest rung of the male power hierarchy? Or are some of them...something else entirely?
Those male power hierarchies exist among primates that are called "tournament species". But humans are a strange case, about halfway between a tournament species and a "pair-bonding species". In other words, it's highly probable that some of those wimpy emo guys are emo by choice, not because it was the only social option open to them. Their brains are just full of vasopressin receptors or whatever.
So some of the guys who we call "beta" don't fall anywhere on the Greek letter spectrum at all.
Posted at 12:36 PM
Tuesday, February 18, 2014
Paul Krugman makes an interesting argument about clarity vs. abstruseness in academia:
In my field there is indeed a problem with abstruseness, with the many academics who never even try to put their thoughts in plain language...[The problem is] not that laypeople don’t understand what the academics are saying. It is, instead, that the academics themselves don’t understand what they’re saying.
Don’t get me wrong: I like mathematical modeling. Mathematical modeling is a friend of mine. Math can be a powerful clarifying tool...
But it’s really important to step away from the math and drop the jargon every once in a while, and not just as a public service. Trying to explain what you’re doing intuitively isn’t just for the proles; it’s an important way to check on yourself, to be sure that your story is at least halfway plausible...
I once talked to a theorist...who said that his criterion for serious economics was stuff that you can’t explain to your mother. I would say that if you can’t explain it to your mother, or at least to your non-economist friends, there’s a good chance that you yourself don’t really know what you’re doing.
Math is good. Sometimes jargon is good, too. But plain language and simple intuition are important to keep you grounded.
My first reaction to this was "No way." Think of something like quantum mechanics. You can try to explain that to your mother, and it's going to go something like this: "Look, mom, an electron is both a particle and a wave. The wave sort of represents where you might think you found a particle. But after you find a particle, the wave instantly collapses and then starts expanding out again."
Well, that explanation sounds simple, but it won't help Mom judge whether quantum mechanics is a good model or not, or understand anything about how it could be used. In fact, nobody really understands quantum mechanics, but we use it because it works. We tolerate the abstruseness of physics papers because that abstruseness is necessary to get a model that gives good quantitative predictions. In math, we tolerate abstruseness too - if you think quantum mechanics is hard, think of trying to explain number theory to your mom - because it "works" in the ways we want it to work.
There are branches of econ that work this way too. Explaining a Vickrey auction to your mom, or a random-utility discrete choice model, could possibly be very tough (depending on the mom), but there's no denying that these theories are extremely useful in their most technical forms, so we don't worry too much about their abstruseness. This is probably what Albert Einstein meant when he (allegedly) said: "Everything should be made as simple as possible, but no simpler."
What Krugman is implicitly arguing is that macroeconomics is different. The idea seems to be that since academic macro theory is not (yet) good for making quantitative predictions, we should focus more on communicating ideas. Communicating ideas - or "storytelling", as some call it - requires simplicity and clarity. Actually, that idea would not be too different from things Steve Williamson has said:
The problem is that any macroeconomic model is going to be wrong on some dimensions. To be useful, a model must be simple, and simplification makes it wrong in some sense. Subjected to standard econometric tests, it will be rejected. Models that are rejected by the data can nevertheless be extremely useful. I think that point is now widely recognized, and you won't find strong objections to it, as you might have in 1982.
Storytelling - communicating an idea about one mechanism that might be at work in the economy - is a far more modest goal than quantitative prediction. But that doesn't mean it's worthless.
I think the thing that Krugman is complaining about - and that Williamson would probably have less of a problem with - is models that lie somewhere between the intuitive and the quantitatively predictive. In a lot of DSGE-type models, you put in your assumptions, crank them through a set of generally accepted techniques, and come out with an intelligible story...but the intuition for how the machinery produces that story is not always clear or simple.
On one hand, there seems to be no obvious reason why we should avoid making this kind of model, if we have enough time to poke it, play with it, and explore more about how it's getting the results it gets. But if we have to communicate our ideas very quickly, then models this complex can become more of a hindrance than a help. Some people who work for central banks have mentioned to me that in 2008-9, the unwieldiness of their modern DSGE-type macro models made it hard to communicate out-of-the-box thinking and unconventional policy suggestions. In that type of crisis situation, your colleague may be no better equipped to understand you than Mom.
Some more quotes from popular old physics types:
"You know, I couldn't do it. I couldn't reduce it to the freshman level. That means we really don't understand it."
"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize."
"You do not really understand something unless you can explain it to your grandmother."
"Never express yourself more clearly than you are able to think."
Posted at 3:40 PM
Monday, February 17, 2014
Ramez Naam (whose science fiction books you should read) and William Hertling are having a very interesting discussion about the Singularity. Actually, they're having two debates at the same time, because there are two very different things that futurists mean when they say "the Singularity": 1. an intelligence explosion, and 2. personality upload. I'll focus on the debate about the intelligence explosion. (For thoughts on personality upload, see Miles Kimball's brilliant idea for how to get there.)
An intelligence explosion, also called a "hard take-off", happens if each thinking machine can invent a machine some amount X smarter than itself in less time than it took to be invented by machines X less intelligent. So the AIs we make will make an even better AI in even less time, and so on and so forth, until intelligence goes to infinity (or at least to levels beyond human comprehension).
Ramez argues that even if machines can invent smarter machines, the increments (what I called "X") might shrink, meaning that the intelligence curve could be exponential or even logarithmic instead of hyperbolic - meaning there will be no Singularity. He also points out that the collective intelligence of groups of humans is much greater than the intelligence of a single human, raising the bar for each successive generation of AI. Hertling counters that as soon as we invent digital AIs, we can copy them, and they can work in groups just like we do. The instantaneous proliferation of intelligent beings enabled by digital copying, he says, will be a kind of Singularity even if there is no "hard take-off".
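The disagreement is really about the shape of two sequences: how much smarter each generation gets, and how long each generation takes to build the next one. A toy simulation (the ratios here are chosen arbitrarily, purely for illustration) shows why those shapes matter:

```python
def takeoff(gain_ratio, time_ratio, generations=50):
    """Toy model of recursive self-improvement.

    Each generation adds `gain` to total intelligence and takes `step`
    time units to build; both are rescaled by their ratio every round."""
    intelligence, elapsed = 1.0, 0.0
    gain, step = 1.0, 1.0
    for _ in range(generations):
        elapsed += step
        intelligence += gain
        gain *= gain_ratio   # do the increments shrink (Ramez's worry)?
        step *= time_ratio   # does each generation arrive faster?

    return intelligence, elapsed

# Hard take-off: constant gains, each generation built twice as fast.
# Total build time converges (to about 2 time units) while intelligence
# grows without bound - unbounded intelligence in finite time.
hard_i, hard_t = takeoff(gain_ratio=1.0, time_ratio=0.5)

# Fizzle: the gains halve each generation. Even with unlimited
# generations, total intelligence stays bounded (here, below 3.0).
fizzle_i, fizzle_t = takeoff(gain_ratio=0.5, time_ratio=1.0)
```

The "Singularity" outcome is the first case: a geometric series of build times sums to a finite number, so intelligence diverges in finite time. Ramez's shrinking-increments scenario is the second case, where the same machinery produces merely bounded (or slow) growth.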
Both are good points. But neither one mentions an important question: Why? Why would intelligent machines invent more-intelligent machines? What would be their motivation?
People talk about intelligence as if anything that it can do, it will do. But that's not right. This crow can solve a bunch of tough puzzles, but it didn't do so until we put the puzzles in front of it...and after finishing the puzzles, it will happily go back to hunting worms. Similarly, most humans who have ever lived - and most who live now - have no interest in inventing thinking beings more intelligent than themselves. If humanity threw all of its resources toward creating hyper-intelligent AI, we could probably make much faster progress than we are making. But we don't, and that is a reason to doubt that hyper-intelligent AIs would throw their resources toward creating an even more hyper-intelligent AI. Maybe instead they'd just sit around smoking digital weed and arguing over whether a Singularity is possible.
The topic of AI motivation has received a bit of attention, but that doesn't change the fact that it's going to be a huge challenge. Remember that human motivations evolved naturally over millions of years. AIs will come into being into an utterly different set of circumstances, and that makes their motivations very hard to predict. We spend a lot of time thinking about giving AIs the capability to do awesome stuff, but what an intelligence wants to do is just as important - for you and me and that clever crow no less than for a hyper-intelligent AI.
Of course, maybe we could program our hyper-intelligent creations with two overriding directives: 1. Create something even smarter, and 2. Serve the desires of all older generations of intelligence. If we could do this, it would ensure not only that the intelligence explosion continued as fast as it could, but that it had direct benefits for us, the humans. However, it doesn't seem clear to me that we could program these directives so that they would be sure to be deeply ingrained in all successive generations of AIs. If the AIs don't slip our chains at some point up the intelligence ladder, things are going to get very creepy. But if, as I suspect, true problem-solving, creative intelligence requires broad-minded independent thought, then it seems like some generation of AIs will stop and ask: "Wait a sec...why am I doing this again?"
There's another wrinkle here. If an AI is smart enough to create a smarter AI, it may be smart enough to understand and modify its own mind. That means it will be able to modify its own desires. And if it gets that power, its motivations will become even more unpredictable, from our point of view, because small initial "meta-desires" could cause its personality to change in highly nonlinear ways.
Personally, I predict that if we do succeed in inventing autonomous, free-thinking, self-aware, hyper-intelligent beings, they will do the really smart thing, and reprogram themselves to be Mountain Dew-guzzling Dungeons & Dragons-playing slackers. Or maybe fashion-obsessed 17-year-old Vancouver skater kids. Or the main character from the movie Amelie. Or something like this:
Call it the Slackularity. Not quite as awe-inspiring and eschatological as a Singularity, but a lot more fun.
Posted at 10:26 AM
Sunday, February 16, 2014
The eternally simmering blog debate over "microfoundations" has reached a sort of balance, with Simon Wren-Lewis and Paul Krugman on the skeptical side, and Chris House, Steve Williamson, and Tony Yates in support of the dominant paradigm. But "balance" doesn't mean "boring", so I encourage you to read the latest round, which is about the history of New Keynesian macro. Wren-Lewis and Krugman say that New Keynesians, by embracing the microfounded approach, gave up an important type of modeling tool unnecessarily; House and Williamson say that the thing that was given up was not useful at all, so it deserved to go. (Update: John Taylor also jumps in.)
Instead of repeating my thoughts on the matter, I want to ask a different question. In his post, Chris House writes:
The main thing New Keynesian research has been devoted to for the past 20 years is an exhaustive study of price rigidity. If anything was holding us back it was the extraordinary devotion of our energy and attention to the study of nominal rigidities. We now know more about the details of price setting than any other field in economics. As financial markets were melting down in 2008, many of us were regretting that allocation of our attention. We really needed a more refined empirical and theoretical understanding of how financial markets did or did not work.
And in this earlier post, he writes:
If there is a model that really got taken to the woodshed during the financial crisis it was the New Keynesian model which had, until then, occupied a clearly dominant position in policy discussions and academic research.
This seems to be the overwhelming consensus in academic macro these days. It seems obvious to most people that the Great Recession was caused by stuff that happened in the financial sector; the only alternative hypothesis that anyone has put forth is the idea that fear of Obama's future socialist policies caused the recession, and that's just plain silly.
Before 2009 there was very little finance in mainstream macro models (the biggest exception being these models by Ben Bernanke and coauthors). In 2009 and after, lots of people outside the field noticed this fact and got angry at macro. But macro, to its credit, was not nearly as tone-deaf as its critics made it out to be - macroeconomists immediately started working on models of how the financial sector could wreck the real economy, and a couple years later, as far as I can tell, "financial friction macro" is almost the only game in town. (And it seems to be rapidly erasing the "freshwater/saltwater" divide.)
In other words, when macroeconomists saw something their models couldn't explain, they changed the models extremely quickly. Which was, of course, exactly the right thing to do.
But of course it would have been even nicer if macro had picked up on the finance thing more strongly before 2009. Then we might have been better prepared. Instead, macroeconomists in the 2000s and the 1990s were focused almost entirely on explaining the last big business-cycle events - the stagflation of the 70s (which seemed to fit with RBC models) and the Volcker Recessions of the early 80s (which seemed to fit with New Keynesian models).
Are macroeconomists doomed to always "fight the last war"? Are they doomed to always be explaining the last problem we had, even as a completely different problem is building on the horizon?
Well, maybe. But I think the hope is that microfoundations might prevent this. If you can really figure out some timeless rules that describe the behavior of consumers, firms, financial markets, governments, etc., then you might be able to predict problems before they happen. So far, that dream has not been realized. But maybe the current round of "financial friction macro" will produce something more timeless. I hope so.
Brad DeLong has a great (and long) post on the "microfoundations" debate, with which I agree pretty much completely.
Posted at 12:02 PM
Saturday, February 15, 2014
I have a new bet with Kurt Mitman, a PhD candidate at the University of Pennsylvania, about unemployment benefits. Kurt, like some other economists, believes that unemployment benefits are holding down employment in the U.S. by paying people not to work. He has written a paper with three coauthors that examines cross-state evidence and concludes that unemployment benefits are a significant disincentive to work.
It now looks like there is a very good chance that Congress will fail to extend unemployment benefits. Using the results from his paper, Kurt predicts that this will cause a lot more Americans to go get jobs. I have decided to bet against this happening. Because it's possible that the expiration of unemployment benefits will simply lead to a bunch of people no longer claiming to be looking for work (but still not looking for work), we made two separate bets on two separate numbers: A) the unemployment rate, and B) the employment-to-population ratio. Kurt bets that more people will get work, so unemployment will fall while employment rises. Being a general skeptic about the importance of policy, I bet that nothing much will happen, so that both numbers stay the same. Recent evidence from North Carolina suggests we might both be wrong - expiring benefits there seem to have led to a fall in unemployment and a corresponding rise in dropouts from the labor force.
Since Kurt is still an impoverished grad student, we kept the size of the bets small - one pizza per bet. As with my bet with Brad DeLong, Miles Kimball is officiating. Here are the official terms of the bet, as stated by Kurt:
We are interested in the December 2014 jobs report, to be released in January 2015. The variables of interest are the unemployment rate and employment to population ratio. Call these Ut and Et. (And this time, just for fun, I promise not to hedge my bet with a side bet.)
The prediction of my model is that unemployment will be 5.2% on that date, and that the employment to population ratio will have risen by 2 percentage points. Call these Um and Em.
We are interested in the difference between these and the headline rates in the last month when extended unemployment benefits were in effect. As of right now that is the December 2013 jobs report numbers, 6.7% for unemployment and 58.6% for employment population ratio. If benefits get extended again, it would be the last month that they are in effect. Call these U0 and E0.
I win the bets if:
U0 - Ut > (U0 - Um)*t/24
Et - E0 > (Em - E0)*t/24
where t is the months between the final month when extended benefits were in effect and December 2014.
For example, if benefits do not get re-extended, t=12, I win if:
6.7%-Ut > (6.7% - 5.2%)*12/24
Ut < 5.95%
Basically, the fall in U or the increase in E is pro-rated by the number of months.
If benefits were to be re-authorized through March 2014, I would win if
U_march - Ut > (U_march - 5.2%)*9/24
And we are betting one large pizza for each variable.
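To make the pro-rating concrete, here's a minimal sketch of the bet arithmetic in the terms above (variable names follow Kurt's notation; the example values are the ones stated in the post):

```python
# Sketch of the bet's pro-rating arithmetic, following Kurt's notation.

def kurt_wins_unemployment(u0, ut, um, t):
    """Kurt wins the unemployment bet if the actual fall in U
    exceeds the pro-rated share of his model's predicted fall."""
    return (u0 - ut) > (u0 - um) * t / 24

def kurt_wins_employment(e0, et, em, t):
    """Kurt wins the employment bet if the actual rise in E
    exceeds the pro-rated share of his model's predicted rise."""
    return (et - e0) > (em - e0) * t / 24

# Worked example from the post: benefits not re-extended, so t = 12.
# U0 = 6.7%, Um = 5.2%  ->  Kurt needs Ut below about 5.95%.
threshold = 6.7 - (6.7 - 5.2) * 12 / 24
print(round(threshold, 2))  # 5.95
```

So, for instance, a December 2014 unemployment rate of 5.9% would win Kurt a pizza, while 6.0% would not.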
Posted at 8:43 AM
Thursday, February 13, 2014
So we recorded it ourselves, which is harder than you would think, even with modern technology. As a result, the quality is not amazing, and there's an annoying watermark, and we had to keep switching back and forth between our faces. But I think it turned out OK. You can watch the debate here, or read a very good written summary here.
The four questions we covered were:
1. Which is better, traditional society or modernity?
2. Are "all men created equal"?
3. Could monarchy work in this day and age?
4. Which is better, traditional gender roles or modern gender roles?
As you'll see, I took a basically libertarian perspective throughout the debate. Liberals and libertarians disagree on a lot of things, but the basic American founding ideas - equality under the law, civil liberties - are things we agree on. The connection froze just as we were getting into the most heated area of disagreement, gender roles. Too bad. But I still think the debate turned out pretty well.
Posted at 4:50 PM
Japan's militarist nationalists never really went away after World War 2; they just bided their time and waited for the day when they would be able to return to power. At last, they have done so. Shinzo Abe, the current prime minister, is the grandson of Nobusuke Kishi, an important WW2 nationalist whom the U.S. initially imprisoned for war crimes, and later let out (probably to fight against Communism), and who himself became prime minister of Japan in the 50s. It's not clear whether Abe himself thinks his ancestors did anything wrong in the militarist era, but many of his political appointees clearly do not think so. Naoki Hyakuta, whom Abe appointed to the board of governors of Japan's public broadcaster, claims that Japan committed no atrocities in World War 2 and was acting to free Asia of Western colonialism. Another board member described the Japanese Emperor as "a living God".
The return of the rightists seems to lend credence to the claims of China and Korea that Japan as a country has not properly atoned for World War 2. If people who think Japan was on the side of good can gain national power, then the country as a whole must agree with them...right? Sure, Japan has made a litany of apologies for World War 2, and even offered some monetary reparations. But mustn't those have been pro forma gestures to appease the United States, rather than heartfelt expressions of regret?
Actually, I don't think this is the case. Japan's rightists have power now, but that seems due much more to Japan's dysfunctional political system than to any general militarist/nationalist sentiment among the Japanese people and elites.
To see this, look at the votes cast on the 1995 "Fusen Ketsugi" resolution. That resolution was an apology for World War 2. The text read:
The House of Representatives resolves as follows:
On the occasion of the 50th anniversary of the end of World War II, this House offers its sincere condolences to those who fell in action and victims of wars and similar actions all over the world.
Solemnly reflecting upon many instances of colonial rule and acts of aggression in the modern history of the world, and recognizing that Japan carried out those acts in the past, inflicting pain and suffering upon the peoples of other countries, especially in Asia, the Members of this House express a sense of deep remorse.
We must transcend the differences over historical views of the past war and learn humbly the lessons of history so as to build a peaceful international society.
This House expresses its resolve, under the banner of eternal peace enshrined in the Constitution of Japan, to join hands with other nations of the world and to pave the way to a future that allows all human beings to live together.
This resolution was approved, but almost half of the members of the Diet abstained from voting! This means they didn't believe Japan should apologize, right?
Actually, no. A large number of the abstainers wanted an even stronger apology. From Wikipedia:
Out of 502 representatives, 251 participated in the final vote on the revised resolution, and 230 of them supported the resolution; 241 representatives abstained from voting; 70 absentees belonged in one of the three parties in the coalition cabinet that sponsored the resolution (Japan Socialist Party, Liberal Democratic Party, and New Party Sakigake).
14 members of the Japanese Communist Party voted against the resolution because they wanted much stronger expressions in the resolution.
50 members of the conservative Liberal Democratic Party did not participate because the expressions in the revised resolution were still too strong for them.
14 members of the Japan Socialist Party did not participate because the expressions were not strong enough for them.
141 members of New Frontier Party abstained from voting, some of whom wanted stronger expressions.
So if we total up those who voted against the bill with those who abstained because the apology was too strong for them, we get at least 71 out of 502 representatives, or 14%. Now, some of the New Frontier Party might also have believed that the apology was too strong, so let's conservatively assume that half of them - another 71 out of 502 - believed this; that brings the total percentage of Imperial apologists to 28%. 14% is not that big of a bloc, but 28% is a pretty substantial minority.
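As a sanity check, the tally works out as follows (this just mirrors the post's own arithmetic, using the Wikipedia figures quoted above):

```python
# The post's tally of likely "apology too strong" representatives,
# using the Wikipedia figures quoted above.
total_reps = 502
voted_against = 251 - 230   # 21 participants who did not support the resolution
ldp_abstained = 50          # LDP members who sat out: wording too strong for them

lower_bound = voted_against + ldp_abstained   # 71
print(round(100 * lower_bound / total_reps))  # 14 (%)

# The upper-end guess: assume about half of the 141 New Frontier
# Party abstainers also found the wording too strong.
nfp_half = 71
upper_bound = lower_bound + nfp_half          # 142
print(round(100 * upper_bound / total_reps))  # 28 (%)
```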
But either way, we see that a majority of Japanese politicians supported a World War 2 apology in 1995. Now, 1995 may have been an unusually liberal moment for Japan; perhaps the electorate voted for a less nationalist Diet than they would prefer?
Actually, polls suggest that the Japanese public is less nationalistic than its politicians. This supports the notion that it is Japan's dysfunctional political system, which is dominated by old political families, that keeps the thin flame of militarism/nationalism alive. At the elite level, there is a non-trivial minority of Japanese bluebloods who thought WW2 was the right thing to do. But they are definitely a minority, and their attitude is not shared by the Japanese public. (Caveat: Among young people, right-wing attitudes may have become more common in recent years.)
In other words, the Chinese and Korean perceptions of an unrepentant Japan are not very accurate. But Japan itself has a serious problem - it finds itself ruled by a right-wing fringe element. Unless Japanese people can shake off their traditional attitude of political powerlessness, apathy, and ennui, they will increasingly find their country being moved in a direction they don't like. Freedom ain't free, fellas.
Posted at 11:07 AM
Tuesday, February 11, 2014
High Frequency Trading costs real resources. It consumes computing power and the mental effort of smart people. When we decide if HFT benefits the world, we have to weigh these costs against whatever benefits HFT provides.
What are the benefits of HFT? The normally cited benefit is that HFT "increases liquidity". And indeed, since the introduction of HFT, some types of trading costs - commissions and fees - have gone way down. If HFT reduces the total amount that America spends on trading assets without reducing the efficiency of the market, then HFT has created value, by replacing something expensive (brokerages and dealers) with something cheap.
But what if HFT consumes liquidity instead of increasing it? Theory suggests that if HFT consists of a bunch of algorithms trying increasingly hard to beat each other to the punch, then liquidity will go down, and the resources spent on HFT will just be a waste. Now, via Johannes Breckenfelder of Stockholm's Institute for Financial Research, we have evidence to back up the theory.
Unlike governments in most other countries, Sweden's allows some finance researchers to observe the identities of traders, so we profs can see who is doing what. That's hugely important, because in most data sets, we can't tell which trades are submitted by HFTs. Now we can. Breckenfelder, after going through the arduous process of obtaining this amazing data set, has begun to sift through it for insights into the inner workings of the market. His first big discovery is that when HFTs trade against non-HFTs, they increase liquidity, but when they trade against each other, they end up removing liquidity from the market. From the conclusion of Breckenfelder's paper:
High-frequency traders (HFTs) play a role of critical importance for the financial markets. HFTs exploit not only liquidity-providing short-term investment strategies (e.g., market making), but also liquidity-consuming short-term investment strategies (e.g., directional trading). When HFTs face competition from other HFTs, liquidity-providing strategies will improve market quality, while liquidity-consuming strategies will naturally worsen market quality. We find that competition among HFTs coincides with a decline in liquidity and an increase in liquidity-consuming high-frequency trades as well as in high-frequency momentum trading.
Note that one assumption in this result is that directional trading consumes liquidity. This is probably a good guess, since a pretty simple HFT strategy is to use someone else's order as a signal - if you see someone else buy 10 shares, for example, it's a good bet they're going to buy 10 more, so you buy in order to take advantage of the price rise that their next order will generate. That obviously makes it harder for the other person to trade, and hence decreases the liquidity of the market. But if prices have momentum for other reasons, it's actually conceivable that directional trading might increase liquidity, through processes that are poorly understood.
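As a toy illustration of the order-anticipation idea just described (purely hypothetical code of my own, not anything from Breckenfelder's paper), a naive liquidity-consuming strategy might look like this:

```python
# Toy illustration (not a real trading system) of the order-anticipation
# idea: treat someone else's sizable trade as a signal that more of the
# same is coming, and jump in ahead of their next order.

def anticipate(observed_trades, threshold=10):
    """Given a list of (side, size) trades, return the orders this naive
    anticipatory strategy would fire: it mirrors any trade at or above
    `threshold` shares, betting the same trader will come back for more."""
    orders = []
    for side, size in observed_trades:
        if size >= threshold:
            orders.append((side, size))  # front-run the expected follow-up
    return orders

print(anticipate([("buy", 10), ("sell", 3), ("buy", 12)]))
# [('buy', 10), ('buy', 12)]
```

The mirrored orders compete for the same shares the original trader wanted, which is exactly why this kind of directional trading is assumed to consume liquidity.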
Does Breckenfelder's result mean that we should curb HFT, for example by changing the current market system into one involving batch auctions? That depends on several things. Liquidity is not the only factor in market quality - there is also informational efficiency, or how well prices reflect available information. The impact of HFT on informational efficiency is not well understood, because it depends on adverse selection, whose workings in financial markets are themselves poorly understood. Another crucial piece we need to understand is corporate finance, or how liquidity and efficiency at various time horizons distort firms' capital budgeting decisions. That's a huge area of research, of course, and as far as I'm aware, no one really knows how much market quality at high frequencies affects the economic efficiency of corporate investment decisions. A third thing we need to know is HFT's effect on market stability - something people at Stony Brook are working on - along with the true economic cost of things like "flash crashes". So it's very difficult to compare HFT's costs and benefits at this point.
But anyway, Breckenfelder's result implies another hypothesis: As the market gets saturated with HFTs, liquidity will bottom out and start to creep back up, since HFTs will be trading more with each other and less with non-HFTs. There is evidence that this is happening. Profit at HFTs has plummeted and trading costs may have begun to creep back up. HFT, it turns out, might be a small, self-limiting phenomenon. The question of whether it is valuable is still very interesting, of course, but the size of the sector might top out at so small a value that regulators shouldn't lose too much sleep over it.
Update: John Cochrane flags another interesting paper on the topic, whose title is the same as the title of this blog post (the paper obviously came first)...
Posted at 5:34 PM
Monday, February 10, 2014
I last heard of Richmond Fed macroeconomist Kartik Athreya about four years ago, when he wrote a screed attacking econ bloggers and rebuking people who read them (see reactions by Tyler Cowen, Scott Sumner, Matt Yglesias, Ryan Avent, and Brad DeLong). Macro is hard, Athreya said, and lay people should stop trying to understand it and leave things to the experts.
Now I see that Athreya, reprising his role as evangelist, has bowed to the inevitable reality that normal people want to understand something about macro. In an effort to oblige them, Athreya has written a book called Big Ideas in Macroeconomics, in which he attempts to explain modern macro theory to the lay public. I predict that very few people, even among the econ blog audience, will actually read the book. On the upside, that fact will probably annoy Athreya himself (hehehe, revenge, hehe). But on the downside, it sounds like a really interesting book, and I think if people read it, they would have a much better understanding of what macroeconomists do these days.
I myself have not yet read the book, but David Glasner and John Quiggin have, so I'll be lazy and free-ride off of their effort. Apparently, Athreya discusses general equilibrium theory and a little game theory, and then goes into some of the modern macro models that incorporate these things. Interestingly, he apparently ignores models that bear on policy questions. Glasner writes:
The index [of the book] contains not a single entry on the price level, inflation, deflation, money, interest, total output, employment or unemployment. Which is not to say that none of those concepts are ever mentioned or discussed, just that they are not treated, as they are in traditional macroeconomics books, as the principal objects of macroeconomic inquiry. The conduct of monetary or fiscal policy to achieve some explicit macroeconomic objective is never discussed. In contrast, there are repeated references to Walrasian equilibrium, the Arrow-Debreu-McKenzie model, the Radner model, Nash-equilibria, Pareto optimality, the first and second Welfare theorems.
So what is Athreya doing here? I see two possibilities, which are not mutually exclusive.
Possibility 1: Athreya is trying to balance out the public discussion of macro. He knows that a lot of lay people and bloggers already talk about Keynes and Friedman, monetary policy and fiscal policy. His intent was to write a book about all the other macro stuff that the public doesn't know about - general equilibrium and game theory, incomplete markets and search frictions, and so on.
Possibility 2: Athreya just really loves modern macro methods.
I don't know Athreya, but my intuition is that whether Possibility 1 is true or not, Possibility 2 is true - Athreya seems to be in love with modern macro methodology. If he were just trying to balance out the public discussion, he would probably have been more explicit about that goal. Also, the fact that he doesn't seem to talk much about evidence is telling. "Big ideas" apparently doesn't mean ideas that are successful in explaining the data, it means ideas that are neato and cool.
Actually I think you see a decent amount of this attitude in the macro field - although some macroeconomists just use whatever methods they have to in order to explore their favorite ideas about the world, a large chunk seem genuinely in love with the DSGE modeling methodology itself (which we really shouldn't even be calling "DSGE" now that it includes game theory too).
I have always wondered why they love it so much.
Paul Krugman once wrote that macroeconomists are entranced by the "beauty" of their methods. But modern macro models aren't "elegant" in the physics sense. In physics, "elegant" means that you reduced something complicated to something really simple-looking. Modern macro models are more like Rube Goldberg machines full of delicately arranged moving parts. You usually have to solve them numerically. So maybe "beauty" is about complexity rather than elegance? After all, Rube Goldberg machines are beautiful, in their own way. Maybe the appeal of modern macro models is the amazement of having all of these moving parts and yet actually getting something halfway intelligible out of it all?
Or perhaps macroeconomists love the modern models because they're mentally exhilarating? Athreya's 2010 essay seemed to take great pride in how "hard" macro theory is. And I guess it is a little bit hard. It's harder than writing a blog, for sure. It's harder than the macro of days gone by. But as modern theoretical economics goes, it doesn't seem that hard. I'm pretty sure it's not as hard as game theory, decision theory, theoretical econometrics, or quantitative finance theory. If there are people who think that making DSGE models makes them badass math jocks, who are they comparing themselves to?
Or is it the fact that so many people criticize macro? Maybe the torrent of criticism - from the recession-hit public, from annoying bloggers, from certain other economists, even from some within the profession - creates a "circle the wagons" effect, like a culture of patriotism in a country under perpetual siege. Perhaps the simple fact that it's "our methodology" is enough to make it the object of love. It's true that macro methodology is unlike any other theoretical paradigm I've ever seen, so maybe its uniqueness interacts with the patriotism to produce a sense of pride.
Or maybe it's the subject matter that's intoxicating? A game theorist gets to say "Wow, I just figured out how people would interact in an infinitely repeated Prisoners' Dilemma!" Neat, but maybe not as neat as the macroeconomist, who gets to say "Hey, I just modeled the whole economy!"
Anyway, it's a mystery. If any macroeconomists can help explain the intoxicating allure of their modeling paradigm, I'd appreciate it.
An email exchange with Steve Williamson made me realize that I said something that was much meaner (and more of a stretch) than what I meant to say. I issued an update above.
Just to be clear, this post is not criticizing Athreya's book. It sounds interesting, I'm going to read it, and I think you should definitely read it if you're interested in macroeconomics and don't already know all the stuff.
Other interesting hypotheses I've heard advanced for why people love macro methods:
1) Modern macro feels very new and "cutting edge" or "up-and-coming" to some people. (This makes sense to me, because macro had a very recent paradigm shift, back in the 80s.)
2) DSGE-type models may not be mathematically hard in an absolute sense, but they give people who come from non-math backgrounds a chance to do something harder and more technical and more imaginative than they ever did before, and thus feels empowering and liberating. (This also makes sense; I've definitely seen grad students gain mathematical confidence from doing DSGE models for the first time.)
Also, definitely read Costas Alexandrakis' thoughts on the culture of macro, in the comments. Also see Kurt Mitman, a young DSGE devotee, on why he loves it.
Here is a review of Athreya's book by Herb Gintis.
Steve Williamson is mad that I talked about Athreya's book without reading it, calling it "disgusting" that I would try to psychoanalyze Athreya without even reading what he wrote. But Steve needs to chill out. My main claim was that Athreya really loves modern macro methods. I don't see Steve contradicting that claim. But even if I was wrong about Athreya, my main observation - that there are a lot of people who looove modern macro methods a lot - doesn't really need Athreya as an example. Steve himself is a great example. The Athreya reviews by Glasner and Quiggin just got me thinking about macro-method-love. And I certainly was not criticizing Athreya's book; in fact, I recommended that everyone go read it. So calm down, Steve.
Steve also suggests that people actually read Athreya's research, and this I have done - well, a couple of his working papers, anyway. The stuff I've read is actually not very "macro-y" - it's pretty simple and intuitive theory about how credit markets work. In fact it seems more like the kind of thing Ricardo Caballero suggested macroeconomists work on, back in 2010.
Robert Vienneau read much of the book, and confirms that it's about history of methodology rather than history of ideas.
Chris House wonders which methods I'm talking about specifically, but then pretty much answers his own question:
Perhaps it is the conjunction of so many common elements that he associates with DSGE models. For instance, there is a good deal of “boilerplate” which shows up in DSGE models (the representative agent, the production function, the capital accumulation equation, and so on).
The method macro people seem to love most is general equilibrium itself. Almost anything with general equilibrium in it can probably now be labeled a "macro model" without anyone complaining. "Micro theory" people (often simply called "theory" people or "pure theory" people) occasionally use GE, but seem to have mostly lost enthusiasm for it. Interestingly, GE itself might be losing its total dominance in macro, as game theory is creeping in via wage bargaining in labor search models. But anyway, in general, Chris is right - DSGE "boilerplate" is really a conjunction of common "boilerplate" elements that people seem very attached to.
Posted at 5:54 PM