Needed: Meaningful Progress on Income Inequality

Income inequality in the United States has been rising for decades. As I’m sure many of you know, the best source for data on income inequality is Piketty and Saez.  The pictures below (which I’m shamelessly stealing from Piketty and Saez) give you a pretty good sense of the problem. [1]

[Figure: share of total pre-tax income going to the top 10 percent of earners (Piketty and Saez)]

Today, the top 10 percent of income earners take home roughly half of all pre-tax income earned in a year.  Inequality was really high in the first 40 years of the twentieth century but then fell sharply and remained low for the next 30 years.  Then, sometime around 1970, income inequality began rising gradually.  This basic pattern holds essentially regardless of how you define the top income earners.  The next figure shows the dreaded top 1 percent of income earners.

[Figure: share of total pre-tax income going to the top 1 percent of earners]

Keep the magnitude of these figures in mind. The top 1 percent claims almost 25 percent of all income.  The top 0.1 percent is even more striking.  They take home more than 10 percent of all income.  That is, these households earn essentially 100 times what the average household earns in a year.
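The arithmetic behind that last claim is simple: if a group receives a share s of total income but makes up only a fraction f of all tax units, its average income relative to the overall average is s/f.

```latex
\frac{\text{average income within the top 0.1\%}}{\text{average income overall}}
  \;=\; \frac{sY/(fN)}{Y/N}
  \;=\; \frac{s}{f}
  \;=\; \frac{0.10}{0.001}
  \;=\; 100 ,
```

where Y is total income and N is the number of tax units.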

[Figure: share of total pre-tax income going to the top 0.1 percent of earners]

The source of this income is also noteworthy.  The graph below decomposes the income of the top 0.1 percent by source.  Clearly, a lot of the increase in income inequality is due to wage income.  In the past, the ultra-rich were rich because they were owners of capital; this isn’t entirely true today.  Many households in this group are rich because of extraordinary labor income.  (Many CEOs are compensated for the “work” they do rather than for their ownership stake in the company.)

[Figure: top 0.1 percent income share decomposed by source of income]

I submit to you that this state of affairs is simply unacceptable.  The current degree of income inequality is probably the most disruptive, most corrosive and most troubling problem confronting the U.S. economy today.  Even if inequality is a “natural” consequence of market-based economies, that doesn’t mean we should tolerate it.  (Bee stings and allergies are also natural, but you don’t just stand there and do nothing while your friend goes into anaphylactic shock.)  I only need to watch 10 minutes of the Real Housewives of Orange County before I become convinced that we are really in dire need of aggressive income redistribution.  It would be nice to see someone make a reality show called The Real Housewives of Gary, Indiana; or The Real Housewives of Flint, Michigan; or The Real Housewives of Allentown, Pennsylvania.

In the past, some hard-core economists might have responded to inequality by saying that we can’t make meaningful comparisons across people because utility is only an ordinal concept.  This response is totally unpersuasive to non-economists – and the non-economists are right.  Empathy is a very real human trait and it is completely reasonable to desire a more even distribution of income for its own sake.  (Perhaps utility has a cardinal component to it after all.)  The challenge for economists and policy makers is to propose policies which effectively redistribute income to produce a more tolerable distribution of well-being while at the same time causing the least amount of damage to markets and incentives.  This is a challenge for both Republicans and Democrats.

Republicans need to come to the realization that extraordinary income inequality is real and it’s a huge problem.  A very cynical view might be that we have to deal with it because not dealing with it courts a populist movement which could usher in a wave of really bad economic policies (think of a maximum wage policy or something similar).  But the correct view is this: it’s a huge problem because none of us (even the most stoic Republican out there) wants to live in a country where we have people living in obscene opulence while, just a few miles away, we have people living in obscene poverty – the kind of poverty where basic health needs become an issue; the kind of poverty where food and heat become luxuries.  We aren’t dealing with this problem and we need to.  It’s as simple as that.

For Democrats, the challenge is realizing that the distorting effects of taxation and careless redistribution are real and must be properly confronted by policy makers.  We need to come up with aggressive policies (policies which make meaningful progress on inequality) that don’t cause huge market inefficiencies.  These policies are not necessarily going to be politically popular and there are many ways of screwing things up if we don’t think carefully about how best to achieve our goals.  The recent article by the Harvard economist Sendhil Mullainathan is exactly right and every well-meaning liberal should take a moment to internalize these ideas.  Incomes in the United States really are substantially higher than incomes in Europe and it’s no accident.  (If you are a Democrat and you are thinking that the first thing we need to do is raise the minimum wage, you are confused.)  I suspect that many liberal concerns about market failures are really just stand-ins for a concern about inequality.  If income were very evenly distributed, would people really care about “predatory lending”?

[1] These figures all correspond to pre-tax income for individual tax units.  After-tax measures would undoubtedly look less extreme.  In addition, the composition of the households matters: joint filers will typically have more income than a single filer, though this doesn’t really reflect income inequality.  Correcting for these factors is important but it’s not going to undo the stark reality behind these figures.

Cost-Benefit Calculus and Stimulus Spending

I’ve been thinking a bit about stimulus spending recently.  In part this is because Emi Nakamura and Jon Steinsson recently published a paper on the fiscal multiplier in the American Economic Review, but it also came up as I was skimming through Paul Krugman’s lecture slides for his Great Recession class.

The logic behind economic stimulus spending is pretty straightforward.  If you are in a recession caused by low demand, the government can step in as a surrogate spender to restore demand and hopefully get the economy out of trouble.  Here is Rachel Maddow describing how she understands stimulus spending during the crisis.  Her description is actually pretty good.  The only thing she leaves out is any real mention of the multiplier: the ratio of the final change in overall spending to the initial change in government spending.  If the government spends money, the workers it employs spend some of their new income at other businesses, and so on …  In theory, this chain of spending can imply a multiplier greater than 1.  If stimulus is to be effective, it helps to have a multiplier as large as possible.
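To see where a multiplier bigger than 1 could come from, here is the textbook version of that chain of spending (an illustration only, not the estimates discussed below): if each recipient spends a fraction c of any new income (the marginal propensity to consume), an initial increase in government purchases ΔG sets off a geometric series of induced spending.

```latex
\Delta Y \;=\; \Delta G \left( 1 + c + c^{2} + c^{3} + \cdots \right)
        \;=\; \frac{\Delta G}{1 - c},
\qquad\text{so}\qquad
\frac{\Delta Y}{\Delta G} \;=\; \frac{1}{1 - c} \;>\; 1
\quad\text{for } 0 < c < 1 .
```

With c = 0.5, for example, the implied multiplier is 2.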

There is actually a fairly clear picture of how big multipliers are.  The Nakamura and Steinsson paper is part of a family of papers that use cross-sectional variation to quantify the effect of stimulus.  They compare regions that get additional government spending to regions that don’t and ask whether the spending encourages economic activity.  (Other papers that focus on cross-sectional variation include Shoag (2011), Wilson (2012) and Hausmann (2013).)  In the cross-sectional studies, the estimated multipliers seem to be quite large (roughly between 1.5 and 2.5).

A different set of studies focuses on aggregate variation in government spending.  The aggregate studies have a much more sobering message.  For the most part, they suggest that the government spending multiplier is less than 1 (typical estimates are between 0.5 and 0.8).  With a multiplier less than 1, private spending contracts in response to increased government spending.  (See Ramey and Zubairy 2013, Ramey, Owyang and Zubairy 2013, Hall 2009, Ramey and Shapiro 1998, and Barro and Redlick 2011.)  It’s perhaps not surprising that the aggregate studies find smaller multipliers.  Large aggregate changes in spending entail some crowding out which the idiosyncratic spending in the cross-sectional studies does not.  The aggregate changes also come with price tags – if the Federal Government is going to spend more, then U.S. taxpayers are eventually on the hook for the cash.  This isn’t true for a cross-sectional experiment: if Pensacola gets a new Naval contract, the money is coming from the rest of the country (and only partially from the people who live in Florida).  If we reserve stimulus spending for periods of economic slack (or liquidity traps / ZLB events), then, in theory, the multipliers will be somewhat bigger.  The IS/LM model predicts that fiscal policy will have its greatest effects if the economy is in a liquidity trap.  This intuition carries over to fully articulated DSGE models (see Christiano, Eichenbaum and Rebelo 2011).  The empirical evidence is not as clear on this point, though a study by Auerbach and Gorodnichenko (2012) seems to find evidence in support of this idea.

Even if the multiplier is substantially above 1, it is not obvious that stimulus spending is a good idea. The reason is that we are not trying to maximize output and employment – we are trying to maximize overall social well-being. At a basic level, the idea behind stimulus spending is that the government will spend money on stuff that it wouldn’t have purchased if we weren’t in a recession. The classic caricature of stimulus spending is the idea of paying a worker to dig a hole and then paying another worker to fill the hole in. This type of stimulus spending will increase employment and GDP but it won’t really enhance social welfare. True, we might get the beneficial effects of the stimulus but we could achieve that by simply giving the workers the money without requiring that they dig the holes. If we simply give out the money, GDP increases by less but social well-being goes up by more since the work effort and time wasn’t required.
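A stylized accounting of that comparison, with hypothetical numbers: let w be the total payment to the workers and e > 0 the disutility of the pointless digging, and ignore second-round spending, which is the same in either case because the workers receive the same income.

```latex
\Delta GDP_{\text{dig}} - \Delta GDP_{\text{transfer}} \;=\; w \;>\; 0,
\qquad
\Delta W_{\text{dig}} - \Delta W_{\text{transfer}} \;=\; -\,e \;<\; 0 ,
```

where W denotes social welfare: the dig-and-fill project adds w of measured government purchases to GDP but subtracts the wasted time and effort e from welfare.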


Even though the Keynesian hole-digging example is silly, the same argument can be applied to any type of government spending.  If a project doesn’t meet the basic cost/benefit test, then it shouldn’t be funded, regardless of the need for stimulus.  Of course, one form of fiscal stimulus used in the ARRA was providing funds to state governments so they could maintain services that they would normally provide.  This is perfectly sound policy because it allows the government to continue funding projects that (presumably) do pass the cost/benefit test.  If the social value of a government project exceeds its social cost then we should continue to fund the project whether we are in a recession or not.  If the social value falls short of the social cost then, even if the economy is in “dire need” of stimulus, we should not fund it.  If we really need stimulus but there are no socially viable projects in the queue then the government should use tax cuts.  Tax cuts can be adopted quickly and aggressively and, unlike spending initiatives, apply to virtually all Americans.

There are other “legitimate” reasons for the government to expand spending during a recession.  The most obvious is that many things are relatively cheap in recessions.  Reductions in manufacturing and construction employment may lower the cost of government projects.  But again, this decision can be made on a simple cost/benefit basis.  If prices fall because of a recession and this makes some projects socially viable as a result, then it’s perfectly correct for the government to fund those projects.

If it makes people feel better we could re-label tax cuts as spending.  I could pay people $200 to look around for better-paying jobs.  This would be counted as $200 of job-searching services purchased by the government but in reality the money would be essentially the same as a tax cut.  In the clip above, Rachel Maddow jokingly says that it might be better to simply put money in envelopes and hand them out to low-income families.  If the choice is either to hand out money in envelopes or to pay the same amount of money to have people perform work that doesn’t meet the cost/benefit test, then Rachel is right.  The envelopes would be better.

UPDATE: Rudi Bachmann points me to a paper by Eric Sims and Jonathan Wolff.  An excerpt:

(M)ovements in the welfare multiplier are quantitatively much larger than for the output multiplier. The output multiplier is high in bad states of the world resulting from negative “supply” shocks and low when bad states result from “demand” shocks. The welfare multiplier displays the opposite pattern – it tends to be high in demand-driven recessions and low in supply-driven downturns. In an historical simulation based on estimation of the model parameters, the output multiplier is found to be countercyclical and strongly negatively correlated with the welfare multiplier.

UPDATE No. 2: In the comments JADHazell points me to this “sketch” by Paul Krugman, which also seems related.  My initial reaction is that Krugman is saying that the marginal social cost of government spending drops sharply if the economy enters the liquidity trap (i.e., the ZLB).  This means that the cost/benefit calculation points to an opportunity for the government to load up on goods and services during such periods.  It does not say that further expansion is justified in the name of fiscal stimulus.  That is, I suspect that a version of Krugman’s model in which the marginal benefit of government spending to consumers were zero (say due to satiation) would not justify stimulus spending even if the economy were below full employment.

UPDATE No. 3: In the comments thread at MarginalRevolution, Tom West says that “(i)t sounds like the author is advocating that certain benefits be excluded from (the cost/benefit) analysis.”  This is exactly what I am saying.  If the direct social benefit of a bridge is $100, then all the government needs to consider is whether the cost of building the bridge is greater or less than $100.  If you then tell me that, because we are in a recession, there are additional stimulus benefits from the project (e.g., the workers who build the bridge take their new wage income and buy goods and services from other businesses, further stimulating demand, increasing employment, and so on), the government should exclude these additional benefits from its calculation.
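A minimal sketch of this decision rule in code, using the hypothetical $100 bridge from the example above (the function name and the numbers are mine, purely for illustration): direct social benefit is compared with social cost, and any estimated stimulus side-benefits are deliberately ignored.

```python
def should_fund(direct_benefit: float, social_cost: float,
                stimulus_benefit: float = 0.0) -> bool:
    """Fund a project iff its direct social benefit covers its social cost.

    stimulus_benefit is accepted but deliberately ignored: under the rule
    described in the post, second-round demand effects should not tip the
    cost/benefit calculation.
    """
    _ = stimulus_benefit  # intentionally unused
    return direct_benefit >= social_cost


# Hypothetical bridge with $100 of direct social benefit:
print(should_fund(direct_benefit=100, social_cost=80))                        # True  -> build it
print(should_fund(direct_benefit=100, social_cost=120, stimulus_benefit=50))  # False -> don't build it,
                                                                              # even in a recession
```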

Disclosing One’s Biases in the Classroom

Over at Crookedtimber, Harry Brighouse poses a very good question.

My own policy is to disclose very little, particularly with undergraduate students.  The professor is not supposed to dictate positions to his or her students — I want my students to come to their own conclusions (of course they have to know the material for the exams but that’s a separate issue).  In the graduate courses I disclose more.  I think (hope) graduate students are less likely to passively accept the instructor’s views as the final word on a topic.  Also, grad students are interested in what faculty think about the probable future direction of research.

For the record, when I discuss behavioral economics, I usually say that I have something of a “bias” against behavioral, though I emphasize that my views are in the minority.  I’m not sure whether my students feel that my treatment of behavioral is fair (and balanced?) or not.

Oscar Do-Overs

I’m not a film connoisseur.  Then again, I’m also not an expert in football or basketball but it doesn’t take a genius to see that picking Greg Oden over Kevin Durant was a blunder of epic proportions.[1]  Sports drafts are littered with blunders like this: Kwame Brown drafted #1 overall by the Washington Wizards? Ryan Leaf drafted #2 by the Chargers?[2]  Sam Bowie drafted #2 by Portland (the #3 pick was someone named Michael Jordan).[3]  Basically any first or second round pick by the NY Jets during the 80s and 90s….

Similarly, the Academy is guilty of making many ridiculous award choices over its history.  Perhaps the worst award blunders came in the late 70s and early 80s.  Look at this record:

1978: Best Picture: Annie Hall over Star Wars. [Annie Hall is an exquisite movie and would normally be a no-brainer as best picture – unfortunately it’s up against the biggest landmark picture in history …]

1980:  Best Picture: Kramer vs. Kramer over Apocalypse Now [this is not completely implausible but it is probably a mess up].

1981: Best Picture: Ordinary People over Raging Bull [Oops.]

1982: Best Picture: Chariots of Fire over Raiders of the Lost Ark.

Even 1983 (Ghandi Gandhi [4] over E.T. and Tootsie) seems potentially questionable.  Here I’ll make my recommendations as to how the awards since 1995 might be revised by a benevolent outsider with the ability to re-do history.  I’m going to leave out this year’s nominees since I haven’t seen any of these movies yet …

Best Picture Do-Overs:

2013.  Winner: Argo                                                  Do-Over: Zero Dark Thirty.

2013 was a very strong year.  Nominees also included Django Unchained. This is pretty close. My choice would be Zero Dark Thirty (Kathryn Bigelow out-does her own work in the Hurt Locker and Jessica Chastain turns in a flawless performance).

2012.  Winner: The Artist                                         Do-Over: The Help

Not a particularly strong year.  The Help takes it easily – amazing performances all around including Jessica Chastain again.

2011.   Winner: The King’s Speech                           Do-Over: The King’s Speech.

2010.   Winner: The Hurt Locker                             Do-Over: Avatar

I know what you’re thinking.

“But Avatar has clichéd dialogue and is filled with stock characters…”  Yep.

“But Avatar’s storyline is so predictable …”  Yep.

“But Avatar relies so overtly on special effects …”  Yep.

Avatar. Best Picture. It’s not close. (BTW, if I had to give the award to some other film that year, it would be Inglourious Basterds, not The Hurt Locker.)

2009.  Winner: Slumdog Millionaire                        Do-Over: Slumdog Millionaire

2008.   Winner: No Country for Old Men                Do-Over: Ratatouille

The Academy has committed one of the great sins of omission by passing over Pixar studios again and again over the past 20 years.  One could make a strong case that computer animation is the main change in the industry in this period  – it has essentially made every story accessible to the big screen – and Pixar is one of the leaders in developing the medium.  Pixar marries their incredible technique with some of the best writing and production on screen.  Really, how many “kids movies” use the phrase “demi chef de partie” in their dialogue?

No Country is a good movie but it’s clearly not the Coen bros.’ best work.

2007.  Winner: The Departed                                   Do-Over: The Departed.

The Academy knows it messed up big-time with Martin Scorsese.  Giving him the award for The Departed is their way of admitting their boo boo.

2006.   Winner: Crash                                               Do-Over: Brokeback Mountain

That sound you hear is a collective groan coming from the voting members of the Academy…

2005.   Winner: Million Dollar Baby                         Do-Over: Million Dollar Baby

2004.   Winner: LOTR: The Return of the King        Do-Over: Monster

OK, they had to award Peter Jackson for his work – I understand the choice.  The irony is that Return of the King is easily the weakest of the three LOTR movies.  By the way, if you are thinking that I’m moving Monster into the top slot based primarily on Charlize Theron’s performance, you are correct sir!

2003.  Winner: Chicago                                             Do-Over: The Pianist

2002.  Winner: A Beautiful Mind                              Do-Over: LOTR: The Fellowship of the Ring

This has to be a better allocation doesn’t it?

2001.  Winner: Gladiator                                           Do-Over: Erin Brockovich

2000.   Winner: American Beauty                            Do-Over: American Beauty

I’m resisting the temptation to give the 2000 award to The Insider.

1999.   Winner: Shakespeare In Love                      Do-Over: Shakespeare In Love

1998.   Winner: Titanic                                              Do-Over: Titanic

An incredibly strong year.  Other nominees included Good Will Hunting and The Full Monty.

1997.   Winner: The English Patient                        Do-Over: Fargo.

1996.   Winner: Braveheart                                      Do-Over: Braveheart

I’m tempted to give the award to Apollo 13 but I’ll let it stand.

1995.  Winner: Forrest Gump                                    Do-Over: Pulp Fiction

Perhaps the strongest year of those I considered above.  The nominees included (in addition to Forrest Gump and Pulp Fiction) The Shawshank Redemption, Quiz Show and Four Weddings and a Funeral.  Forrest Gump is excellent but it’s not as good as either Shawshank or Pulp Fiction.  Pulp Fiction has to take the award – another huge landmark in Hollywood.

[1] Greg Oden may be making a comeback with the Heat.  That said, passing on Durant will go down as one of the great head-slapping choices for the Trailblazers.

[2] Many observers at the time were openly wondering whether Ryan Leaf should be taken #1 overall (over some guy named Peyton Manning…).  Double head-slap.

[3] Triple head-slap.

[4] I’m a hideous speller but that in no way justifies this mess-up. Thanks to Uday Rajan (UM Ross Finance) for pointing out my error.

Are You a Behavioralist or an Institutionalist?

Noah Smith’s response to my previous post on the future of behavioral economics makes an interesting point: behavioral economics is indeed somewhat more established in finance.  I think this is correct and the examples Noah cites are all worthy representatives of the sub-field of behavioral finance.

Is behavioral finance gaining ground in the wake of the financial crisis?  My own reaction to the financial crisis is that macroeconomists indeed have a lot of work to do.  I don’t think simple modifications to existing DSGE models are going to do the trick.  Instead, I think that we need to improve our understanding of the architecture of financial markets by gathering data on the financial flows and contracts of the key participants in these markets and, hopefully, getting a better sense of how these markets connect with the “real” economy.  We can then incorporate this institutional detail into quantitative DSGE models so we can be better prepared for the next crash.

An alternative reaction is to say that what matters most is the psychology of the market participants.  Perhaps asset price bubbles and speculative behavior are fundamentally tied to “sentiment” or other factors that have traditionally been in the domain of psychology.

Of course we don’t have to make a choice here.  It’s not as though learning more about the role of psychological or subjective forces in markets is a bad thing.  Even if I think that we would be best served by focusing on financial market institutions, we would certainly be made better off with a greater understanding of the behavioral tendencies of financial market participants.

Nevertheless, I think it’s telling to ask yourself where more work is needed.  Do you think “originate to distribute” behavior is best understood as rational people reacting to poorly designed market institutions (or poor policies), or do you think this behavior is caused by irrational over-optimism?  When the loan markets “froze” (a terribly imprecise term, incidentally), was that a rational decision on the part of lenders who were justifiably concerned that they either wouldn’t get paid back or needed to retain their liquid assets rather than commit them to the markets, or were the frozen loan markets caused by irrational panicking?  How you answer these questions might tell you a bit about whether the future of behavioral finance is bright or dim.

Is Behavioral Economics the Past or the Future?

There are fads in every field.  As Heidi Klum would say “one day you’re in, and the next day you’re out.”  Economics is not an exception.  Trendy topics come and go.  At any moment, it’s difficult to tell whether the current hot topic is here to stay or whether it is simply enjoying the academic equivalent of Andy Warhol’s 15 minutes of fame.

When I was getting my Ph.D., behavioral economics was absolutely the hot topic.  To hear some people talk, behavioral economics promised to revolutionize macroeconomics, finance … basically every corner of the field.  Today, however, it’s not clear at all what the future has in store for behavioral.

I think the reason behavioral economics was originally so intriguing was that it undercut the basic principles that govern standard economic analysis.  The basic organizing philosophy in economics is that allocations are guided by self-interest.  Or, the way economists would say it, allocations are based on rational decisions.  What economists mean by rational is that (1) people know their own preferences and, (2) their choices are based on these preferences.  Rationality is an extremely powerful card that economists play often.  If a social planner actually cares about the well-being of her subjects, she can accomplish a lot by simply allowing them to make choices based on their own likes and dislikes.  Not surprisingly, rationality often leads to neo-liberal policy conclusions.  At a very basic level, behavioral economics considers the possibility that allocations violate one or both of the conditions above.  Either people don’t know what they really like, or they have difficulty making choices that conform to their preferences. 

In the early 2000s, my colleagues and I were anticipating a flood of newly minted behavioral Ph.D.s from the top economics programs in the country.  Later, when the financial crisis exploded in 2007-2008, we were again told that behavioral economics would finally come into full bloom.  It didn’t happen though.  The wave of behavioralists never came.  After the financial crisis, young Ph.D.s turned their attention to studying financial macroeconomics – and when they did, they used mostly standard techniques based on rational decision making.  They incorporated more institutional detail rather than behavioral elements.

In my graduate macroeconomics class, I usually devote one or two lectures to results from behavioral economics.  The papers I discuss are the best that behavioral has to offer and many of the students find the topics intriguing.  I cover David Laibson’s (1998) paper on hyperbolic discounting and self-control problems.  I cover a famous empirical paper by Stefano Della Vigna and Ulrike Malmendier (2006).  I briefly mention the paper by Brunnermeier and Parker on “optimal expectations,” a theoretical setting in which individuals can indulge in unrealistic (irrational) beliefs at the cost of making bad decisions (e.g., you can enjoy an irrational belief that you are likely to win the lottery but only if you buy a lottery ticket).  There are also excellent papers by Malmendier and Stefan Nagel (2011, 2013) who show that expectations depend importantly on the events people have personally experienced during their lifetimes.  (In one of their papers, people who lived through the Great Depression had beliefs, and made asset choices, which placed greater weight on the possibility of a financial crisis.)  There are several interesting papers by Caplin and Leahy who consider, among other things, the possibility that people may get utility just from anticipating future events.  If you know you have to get a painful shot, you might experience feelings of dread or panic above and beyond the physical pain from the shot itself.  My colleagues Miles Kimball, Justin Wolfers and Betsy Stevenson analyze the determinants of people’s subjective happiness (as distinct from “utility”).  In finance, there are classic papers on “agreeing to disagree” by Harrison and Kreps (1978) and the more recent variations considered by John Geanakoplos (see e.g., “The Leverage Cycle,” 2009: if pessimists face short-selling constraints, the market price of financial assets will exceed the “fundamental value” of the assets).
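For readers who haven’t seen it, the workhorse formulation in the Laibson line of work is the quasi-hyperbolic (“beta-delta”) discounter; a minimal sketch of the preferences and the resulting self-control problem:

```latex
U_t \;=\; u(c_t) \;+\; \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k}),
\qquad 0 < \beta < 1,\;\; 0 < \delta < 1 .
```

With β < 1 the individual discounts the immediate future more heavily than the distant future, so the plan made today for tomorrow is no longer the plan the individual wants to follow once tomorrow arrives; that time inconsistency is the self-control problem.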

Perhaps the most compelling behavioral paper I know of deals with the effect of labelling a choice the “default” option.  The paper I know best on this effect is by Beshears, Choi, Laibson, and Madrian (2006).  They show that simply calling a retirement savings option the default option sharply increases the likelihood that people choose that option.  Clearly, this doesn’t sound like rational decision making.  If option A is an ideal choice for you then you should continue to pick A even if I label option B the default option.  I find this study particularly compelling because the empirical evidence is clear and convincing and also because the potential consequences of this behavioral pattern seem important.

Today, it seems like behavioral economics has slowed down somewhat.  For whatever reason, the flood of behavioral economists we were anticipating 10 years ago never really materialized and the financial crisis hasn’t led to a huge increase in activity or prestige of behavioral work. Certainly the evidence that people don’t typically behave rationally is quite compelling.  It’s easy to find examples of behavior which conflicts with economic theory.  The problem is that it’s not clear that these examples help us much.  Behavioral economics won’t get very far if it ends up being just a pile of “quirks.”  Are these anomalies merely imperfections in a system which is largely characterized by rational self-interest or is there something deeper at play?  If the body of behavioral studies really just provides the exceptions to the rule then, going forward, economists will likely return to standard rational analysis (perhaps keeping in mind “common sense” violations of rationality like default options, salience effects, etc.).  I would think that if behavioral is to somehow fulfill its earlier promise then there has to be some transcendent principle or insight which comes from behavioral economics that we can use to understand the world.  In any case, if behavioral is to continue to develop, it will need some very smart, energetic young researchers to pick up where Laibson and the others left off.  If not, behavioral economics gets a goodbye kiss from Heidi Klum and it’s “Auf Wiedersehen.”

UPDATE: One of the readers has asked for some citations for the work mentioned in the post.  Here is a list of relevant citations.  You should be able to find .pdf versions by “googling” the titles.

The Fed in 2008

I’ve been reading through the recently released transcripts of the Federal Reserve meetings during the financial crisis and there are many noteworthy features which seem relevant for students of the crisis and modern monetary policy. 

First, not surprisingly, there is a lot of confusion in most of these meetings.  This is to be expected given the volume of data that the board was receiving, the noise in the data and the sometimes conflicting nature of the statistics.  I think it’s virtually impossible for economists today to look back and give a fair assessment of the Fed’s interpretation of the data at the time.  We have the burden of hindsight and the luxury of being able to casually contemplate possible courses of action – neither of which was available to the Fed in 2008.  I know that Matthew Yglesias, Brad DeLong and Paul Krugman have weighed in on some of the policy makers but I don’t really think this is fair.  If I think a coin flip is going to turn up heads and you think it’s tails, it is not really fair for me to say “well, it turned out to be heads, so you were a fool and I was a hero.”

Second, I am struck by the amount of detailed discussion of the architecture of the financial system in the transcripts. I’m sure many of you are thinking “duh — what else do you think the Fed discusses at its meetings?” Well, I agree, but the contrast with academic treatments of monetary policy is stark. As I wrote in a previous post, in my assessment, many macroeconomic researchers have been far too concerned with the details of price rigidity and far too indifferent about the details of financial arrangements.  It seems that these details were occupying center stage during the financial crisis and we had better start to get a better picture of how these arrangements interact with monetary policy actions if we hope to respond appropriately to the next crisis.*

Third, as many commentators have pointed out, there were people who were concerned about inflation.  This seems odd given what we know followed (and odd given that a bit more inflation would be welcome news today) but, at least to a small extent, it was part of the data at the time.  Some commodity prices, oil in particular, were rising, which seemed odd given what policy makers were hearing from lenders.  Jim Bullard has an interesting recent presentation on this in which he seems to argue that oil supply shocks may have shaped the Fed’s assessment of the problem that summer.

Finally, the Fed was clearly viewing the crisis both as a liquidity crisis and as a solvency crisis.  At the time, many market observers felt that the crisis was primarily one of solvency.  Problems in the loan markets were seen by many as being tied to counterparty risk (“I won’t lend to you because I don’t trust that you will be able to pay me back.”).  This view led many to advocate for the realignment of the TARP funds toward equity injections rather than asset purchases.  While I am sure that solvency played a large role in the crisis, I am also convinced that liquidity problems were a big part of the story (“I won’t lend to you because I don’t trust the collateral you are offering me.”).  On this dimension the Fed was perhaps ahead of the curve both in its understanding of the problems and in its efforts to address the situation.  The many liquidity facilities put in place, in particular the TSLF which traded Treasuries for non-standard collateral (“other stuff” in the words of one of the governors), were key to stabilizing many of the markets at the time.

* One detail of which I wasn’t aware deals with the resolution of Repo contracts in the event of a bankruptcy of a financial institution.  Most Repo contracts are exempt from automatic stay in bankruptcy proceedings.  That is, if I borrow from you with a Repo, you would own the collateral in the event that I go bankrupt.  This is one of the features that makes Repo contracts so attractive.  For other collateralized loans, you might think that your loan is secured by specific collateral, but, if I go bankrupt, you won’t be able to get access to the collateral until the bankruptcy proceedings have been completed (or worse – you might find out during the proceedings that someone else has a claim to the same assets which supersedes your own).  However, this exemption from automatic stay does not necessarily apply if the borrower is a brokerage firm.  When a brokerage firm fails, it will likely fall under the Securities Investor Protection Act (SIPA), which does not exempt Repos from the automatic stay.  When Lehman was failing, the Fed was concerned that many of the Repos would be tied up by SIPA, which could cause the problem to spread to any institution that had Repo contracts with Lehman.  (See here for details, in particular footnotes 5 and 29.)

Is there a use for Real Business Cycle Models?

The Real Business Cycle (RBC) model receives a lot of criticism from online bloggers and from other economists.  A lot of the criticism is justified.  The model assumes away all frictions and market failures.  It assumes that consumers and workers can be analyzed as though they were all essentially the same or, perhaps, as though we could pay attention only to an average individual’s preferences.  The most contentious aspect of the RBC model, however, has always been the assumed source of business cycle fluctuations.  In the RBC model, variations in productivity, perhaps brought on by the inevitable unevenness in the pace of innovation, drive all of the variations in hours worked, investment, production and so forth.
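For concreteness, here is a bare-bones sketch of the model being described (standard notation, one representative household, technology shocks as the only disturbance):

```latex
\max \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t,\, 1 - n_t)
\quad \text{subject to} \quad
c_t + k_{t+1} \;=\; z_t\, k_t^{\alpha} n_t^{1-\alpha} + (1-\delta)\, k_t ,
\qquad
\ln z_t = \rho \ln z_{t-1} + \varepsilon_t .
```

All of the model’s fluctuations in consumption, hours, investment and output are driven by movements in the productivity term z_t.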

I mentioned in a previous post that this stark version of the RBC model is not really taken very seriously by researchers anymore — at least with regard to the role of productivity shocks.  Better measurement has deprived the canonical RBC model of the innovations necessary to generate cyclical variations in economic activity.  While early RBC models used Solow residuals as proxies for actual changes in productivity, subsequent research demonstrated that these measures were almost entirely due to variations in unobserved utilization (capital utilization, labor effort, etc.).  In the data, measured TFP varies at seasonal frequencies (which is pretty difficult to believe in the economy we live in today) and even in response to tax stimulus (an investment tax credit will stimulate investment and also “cause” measured TFP to rise).  Even worse, the measured seasonal variations and the responses to tax changes are essentially of the same magnitude as the variations observed over the business cycle.  Papers that do attempt to adjust for unobserved input variations (say, by including measured energy use) typically find that the adjustment eliminates a huge amount of the variation in productivity.  The well-known study by Basu, Fernald and Kimball (2006) produces “cleansed” Solow residuals which are at best unrelated to cyclical variations in GDP (Basu et al. actually claim that true productivity variations are negatively correlated with detrended GDP).  Of course there are actual productivity shocks (e.g., Hurricane Katrina, the terrible 2011 Japanese tsunami, the two-day blackout in the northeastern U.S. in 2003, …) but none of these seem to be responsible for substantial changes in employment or production.
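The measurement problem is easiest to see in the growth-accounting formula itself.  With a Cobb-Douglas technology, the standard Solow residual picks up unobserved utilization and effort along with true productivity (a textbook decomposition, not specific to any one paper):

```latex
% Measured Solow residual:
\Delta \ln A_t^{\text{Solow}} \;=\; \Delta \ln Y_t \;-\; \alpha\, \Delta \ln K_t \;-\; (1-\alpha)\, \Delta \ln L_t .
% If actual output is  Y_t = z_t (u_t K_t)^{\alpha} (e_t L_t)^{1-\alpha},
% with utilization u_t and effort e_t unmeasured, then
\Delta \ln A_t^{\text{Solow}} \;=\; \Delta \ln z_t \;+\; \alpha\, \Delta \ln u_t \;+\; (1-\alpha)\, \Delta \ln e_t .
```

Procyclical utilization and effort therefore show up as procyclical “productivity” even if true productivity z_t barely moves.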

This raises the question: if the RBC model does not survive as a model of actual business cycle fluctuations, why do we still teach it in graduate macroeconomics?

I can think of three answers to this question.  The first is the model’s prominent historical place in the development of the field.  Macroeconomics changed forever after the first-generation RBC models were developed.  These models ushered in new methods and techniques, many of which are still in use today.  Similarly, the fact that we know that real shocks do not cause business cycle fluctuations (at least not the way they were conceived by the original RBC theorists) is an important component of our understanding.  Even when you are on a long voyage, it is often important to look back every now and then.

Second, the RBC model is an excellent pedagogical device.  The RBC model is almost always the first DSGE model students confront and it also functions as the standard backdrop for more advanced DSGE frameworks.  Many of the intuitions carry over and present themselves in more modern instances of the model.  For instance, researchers have extended the basic framework to analyze tax policy, international business cycles, government spending shocks, and of course monetary policy.  Often the correct intuition required for the more elaborate models can be seen in the original RBC framework.

Last, there may yet be situations in which the RBC model is applicable.  While modern advanced economies do not have business cycles that are driven by real shocks, other economies might.  For example, suppose you wanted to analyze the economy of ancient Egypt.  The Egyptian economy would be closely tied to the flooding of the Nile river and other types of weather shocks.  If the waters don’t rise enough, food production will fall.  If there is a particularly good year for growing, the Egyptians will accumulate a large stock of food which might well be traded or stored.  I suspect that these real shocks would have had tremendous impacts on production, consumption, work, storage and so on, and the RBC model might provide an interesting guide to the patterns one might expect in the data.  (If there is an enterprising student out there who has an idea of where we could find some actual data on production, etc. for ancient Egypt, send me an e-mail; I would love to write this paper with you …)

A Faustian Bargain?

Reflecting on a recent blog post by Simon Wren-Lewis, Paul Krugman argues that the modern insistence on microfoundations has impoverished macroeconomics by shutting down early understandings of financial markets “because (they) didn’t conform to a particular, highly restrictive definition of what was considered valid theory.”  In Krugman’s libretto, the role of Mephistopheles is played by “freshwater” macroeconomists. 

Krugman uses James Tobin as an example of one of the casualties of adopting freshwater methodology, saying that as far as he could tell, Tobin “disappeared from graduate macro over the course of the 80s, because his models, while loosely grounded in some notion of rational behavior, weren’t explicitly and rigorously derived from microfoundations.”  Tobin has not disappeared.  In my course, for instance, Tobin shows up in the section on investment, which is centered around Tobin’s Q (my co-author Matthew Shapiro constantly emphasizes that it should be called Brainard-Tobin’s Q).  My students (and any graduate student familiar with David Romer’s Advanced Macroeconomics) are well aware of Tobin’s role in this line of work.  Tobin’s early ideas on Q-theory were sketches – plausibility arguments – which were subsequently developed in greater detail by Andy Abel, Fumio Hayashi and Larry Summers (and also Michael Mussa).

Adopting microfoundations does come with a cost.  As I mentioned in a previous post, being precise and exact prevents economists from engaging in glib, hand-waving theorizing.  Many analysts (and commentators) see this as a serious limitation.  But using this methodology also has advantages.  Being specific allows you to (1) make the theory clear by exposing the necessary components, (2) quantify the effects by attaching plausible values to parameters and (3) learn from the model.  This last advantage is one of the biggest benefits of microfoundations.  Setting out a list of assumptions and then following them where they lead may expose flaws in your own understanding; it may lead you to new ideas, and so on.  Let me give you two examples.

Suppose someone says that if demand goes up, prices will fall.  Here is their argument: if demand goes up, the price is bid up; the price increase reduces demand and so ultimately the price falls.  Every statement in this argument sounds reasonable but the conclusion is incorrect.  The way to find the mistake is with a model – in this case a supply and demand model.  (The error is a confusion of movements along a demand curve versus shifts in the demand curve.)
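A one-line linear example makes the error concrete (the parameters are purely illustrative): let demand be Q^d = a - bP and supply be Q^s = c + dP with b, d > 0, and let “demand goes up” mean an increase in the intercept a.

```latex
a - bP^{*} \;=\; c + dP^{*}
\;\;\Longrightarrow\;\;
P^{*} \;=\; \frac{a - c}{b + d},
\qquad
\frac{\partial P^{*}}{\partial a} \;=\; \frac{1}{b + d} \;>\; 0 .
```

The outward shift in demand raises the equilibrium price.  The higher price does reduce the quantity demanded relative to the new demand curve (a movement along the curve), but it never pushes the price back below its starting point.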

Here is another example.  In the traditional IS/LM model, investment demand is assumed to depend negatively on the real interest rate.  This assumption is important for the functioning of the model – it makes the IS curve slope down.  The assumption itself is based on a slight confusion between the demand for capital and the demand for investment.  What would happen if we added some microfoundations?  Suppose we removed the ad hoc investment demand curve and instead required that the marginal product of capital equal the real interest rate (the user-cost relationship).  In this case, there would be a positive relationship between output and the real interest rate (the IS curve would slope up!  Higher output would require more employment, which would raise the marginal product of capital and raise the real interest rate).  An increase in the money supply would cause the real rate (and the nominal rate) to rise.  How should we interpret this?  One interpretation is that we need to think a bit more about the investment demand component of the model.  An alternative reaction would be to say “I know that the original IS/LM model is right; I don’t need the microfoundations; they are just preventing me from getting the right answer.”
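A quick sketch of why the modified relation slopes up (this is just the logic of the paragraph above, not Tobin’s exact 1955 setup): suppose output is produced with a standard production function and the capital stock is fixed in the short run.

```latex
Y \;=\; F(K, N), \qquad r \;=\; F_K(K, N), \qquad F_{KN} \;>\; 0
\quad \text{(e.g., Cobb-Douglas)} .
```

Producing more output requires more employment N; with F_{KN} > 0, the extra labor raises the marginal product of capital, so the user-cost condition forces r to rise along with Y.  The locus of (Y, r) pairs consistent with the condition therefore slopes upward.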

Who came up with this twisted version of the IS/LM model you might ask? Wait for it …

…yep … James Tobin. (1955, see Sargent’s 1987 Macroeconomic Theory text for a brief description of Tobin’s “Dynamic Aggregative Model.”)

Even today, when we analyze the New Keynesian model, it is often done without any investment (this is like having an IS/LM model without the “I”). Adding investment demand can sometimes result in odd behavior. In particular you often get inverted Fisher effects in which monetary expansions are associated with higher output but strangely, higher real interest rates and higher nominal interest rates.  (If you teach New Keynesian models to graduate students I would encourage you to take a look at Tobin’s model.)

It seems that Paul Krugman wants to revise the history of the field a bit.  Reading his post, it almost sounds as though he wants us to believe that the Keynesians would have figured out financial market failures if they hadn’t been led astray by microfoundations and rational expectations.  This is not true.  The main thing New Keynesian research has been devoted to for the past 20 years is an exhaustive study of price rigidity.  If anything was holding us back, it was the extraordinary devotion of our energy and attention to the study of nominal rigidities.  We now know more about the details of price setting than about almost any other area of economics.  As financial markets were melting down in 2008, many of us were regretting that allocation of our attention.  We really needed a more refined empirical and theoretical understanding of how financial markets did or did not work.

The Latest Macro Dust-Up

There have been several blog posts commenting on Kartik Athreya’s book, Big Ideas in Macroeconomics: A Nontechnical View, and I wanted to make a couple of passing remarks pertaining to the blog posts I’ve read.  I haven’t read the book yet and, to be completely honest, I’m not sure I will ever get to it given the huge pile of work I have.  I will not discuss the book itself.  Instead I’ll focus on some of the noteworthy remarks made by bloggers.

Overall, there seems to be lots of misplaced DSGE hand-wringing going on.  I think one of the main reasons that some economists dislike DSGE models is that they place limitations on our ability to engage in hand-waving theorizing.  “Microfoundations” is really just code for saying that we want you to be specific and clear.  Researchers who try to brush past important parts of their business cycle theories will have a really tough time if they cannot provide the details behind their hand waving.  As soon as a researcher appeals to some arbitrary relationship which comes out of thin air, they will immediately feel pressure to back up that component of their theory.  Unlike many macroeconomists, I am willing to let people make use of plausible ad hoc theoretical relationships provided that they come with an acknowledgement that this is unfinished work which needs to be filled in later.  In the past, wise old economists could get by making sweeping statements about the nature of business cycles and the correct policy cocktail which they thought would save the day, without having to spell out what they really meant.  Today, macroeconomists are typically not granted this latitude.

In his comment, Noah Smith argues that many macroeconomists are “in love with modern macro methodology.”  I think Noah is partially correct.  It’s true that there are many economists (not just macro guys) who focus far too much on the tools we use to analyze problems rather than the problems themselves.  In the end, tools are only valuable if they are put to good use.  My own view is that grad students need to have a solid grasp of fundamental, basic tools so they can get started.  But, after learning these basic tools, they should not go out of their way to learn more advanced tools unless they have a specific need to do so.  There are other people who feel quite differently and I can appreciate this alternate view even if I don’t ultimately agree with it.  On the other hand, I’m not so sure I know what Noah means by “macro methodology.”  The techniques used by macroeconomists are for the most part used in every area of economics.  Dynamic programming is used in labor and IO as well as macro.  Bayesian estimation, maximum likelihood techniques and so on are used by most fields.  General equilibrium analysis is again used throughout economics.  There are some tools which are used almost solely by macroeconomists (the Blanchard-Kahn decomposition comes to mind) but I don’t think this is what he has in mind.  Perhaps it is the conjunction of so many common elements that he associates with DSGE models.  For instance, there is a good deal of “boilerplate” which shows up in DSGE models (the representative agent, the production function, the capital accumulation equation, and so on).  It might be interesting to hear exactly which techniques he views as belonging primarily to DSGE research, and which of them he questions.

John Quiggin takes the opportunity to eulogize some of the modern research which he feels met its end during the financial crisis.  He writes that “(t)he crisis that erupted in 2008 destroyed (the) spurious consensus (in macroeconomics).”  There might be some truth to this statement as well, though I’m not entirely sure what class of theories he has in mind.  I suspect that he thinks that the crisis undercut real business cycle models or perhaps rational expectations models more generally.  I don’t think this is the case.  The productivity shocks at the heart of standard real business cycle models have not been viewed as plausible sources of business cycle fluctuations for quite a while and rational expectations theories are most likely here to stay.  If there is a model that really got taken to the woodshed during the financial crisis it was the New Keynesian model which had, until then, occupied a clearly dominant position in policy discussions and academic research.  The “New Old Keynesians” as he calls them aren’t having a much better time.  They also don’t have a good framework for understanding the financial crisis (there is no meaningful financial sector in the traditional IS/LM model) and their versions of the supply side of the model aren’t doing very well at all.  Quiggin might be referring to the absence of Keynesian demand channels in many DSGE models.  Here he might have more of a case.  Getting traditional Keynesian demand models to work the way they ought to is not easy.  Again, the mechanics of the DSGE approach might be limiting us.  There are lots of background assumptions made in most DSGE models which play important roles in how the models function and which likely prevent Keynesian swings in aggregate demand from occurring (though Roger Farmer’s work is overtly pushing in this direction).  The basic Walrasian supply and demand framework is surely one of the key features of DSGE models which limit Keynesian channels.  A worker who cannot find employment can simply lower his wage.  A firm that cannot sell its goods can simply lower the price if it needs to sell.  Of course, sticky prices and sticky wages are the standard refrain, but this ties the model to the prediction that recessions should be accompanied by deflation (which really didn’t happen during the Great Recession).  I personally have some faith in the basic Keynesian demand story but we don’t really have a good useable version of it at the moment.

Paul Krugman also weighs in with a short comment.  He makes two remarks which are noteworthy.  First he says that DSGE models are “basically, the particular form of modeling that is more or less the only thing one can publish in journals these days.”  This is simply not true.  Second, his closing remark is (to me) somewhat cryptic:

I think, that somebody is going to end up in the dustbin of history. I wonder who?

I confess, I really don’t know who or what he is talking about when he writes this.

More to come I’m sure …