Is Behavioral Economics the Past or the Future?

There are fads in every field.  As Heidi Klum would say, “one day you’re in, and the next day you’re out.”  Economics is no exception.  Trendy topics come and go.  At any moment, it’s difficult to tell whether the current hot topic is here to stay or whether it is simply enjoying the academic equivalent of Andy Warhol’s 15 minutes of fame.

When I was getting my Ph.D., behavioral economics was absolutely the hot topic.  To hear some people talk, behavioral economics promised to revolutionize macroeconomics, finance … basically every corner of the field.  Today, however, it’s not clear at all what the future has in store for behavioral.

I think the reason behavioral economics was originally so intriguing was that it undercut the basic principles that govern standard economic analysis.  The basic organizing philosophy in economics is that allocations are guided by self-interest.  Or, the way economists would say it, allocations are based on rational decisions.  What economists mean by rational is that (1) people know their own preferences and, (2) their choices are based on these preferences.  Rationality is an extremely powerful card that economists play often.  If a social planner actually cares about the well-being of her subjects, she can accomplish a lot by simply allowing them to make choices based on their own likes and dislikes.  Not surprisingly, rationality often leads to neo-liberal policy conclusions.  At a very basic level, behavioral economics considers the possibility that allocations violate one or both of the conditions above.  Either people don’t know what they really like, or they have difficulty making choices that conform to their preferences. 

In the early 2000s, my colleagues and I were anticipating a flood of newly minted behavioral Ph.D.s from the top economics programs in the country.  Later, when the financial crisis exploded in 2007-2008, we were again told that behavioral economics would finally come into full bloom.  It didn’t happen though.  The wave of behavioralists never came.  After the financial crisis, young Ph.D.s turned their attention to studying financial macroeconomics – and when they did, they used mostly standard techniques based on rational decision making.  They incorporated more institutional detail rather than behavioral elements.

In my graduate macroeconomics class, I usually devote one or two lectures to results from behavioral economics.  The papers I discuss are the best that behavioral has to offer and many of the students find the topics intriguing.  I cover David Laibson’s (1998) paper on hyperbolic discounting and self-control problems.  I cover a famous empirical paper by Stefano DellaVigna and Ulrike Malmendier (2006).  I briefly mention the paper by Brunnermeier and Parker on “optimal expectations,” a theoretical setting in which individuals can indulge in unrealistic (irrational) beliefs at the cost of making bad decisions (e.g., you can enjoy an irrational belief that you are likely to win the lottery but only if you buy a lottery ticket).  There are also excellent papers by Malmendier and Stefan Nagel (2011, 2013) who show that expectations depend importantly on whether people have personally experienced events during their lifetimes. (In one of their papers, people who lived through the Great Depression had beliefs and made asset choices that placed greater weight on the possibility of a financial crisis.)  There are several interesting papers by Caplin and Leahy who consider, among other things, the possibility that people may get utility just from anticipating future events.  If you know you have to get a painful shot, you might experience feelings of dread or panic above and beyond the physical pain from the shot itself.  My colleagues Miles Kimball, Justin Wolfers and Betsy Stevenson analyze the determinants of people’s subjective happiness (as distinct from “utility”).  In finance, there are classic papers on “agreeing to disagree” by Harrison and Kreps (1978) and the more recent variations considered by John Geanakoplos (see, e.g., “The Leverage Cycle,” 2009: if pessimists face short-selling constraints, the market price of financial assets will exceed the “fundamental value” of the assets).
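The self-control problem in Laibson’s work comes from quasi-hyperbolic (“beta-delta”) discounting, which is easy to sketch. The parameter values below are illustrative assumptions of mine, not estimates from the paper:

```python
# A minimal sketch of quasi-hyperbolic ("beta-delta") discounting.
# A reward x received t periods from now is valued at x if t == 0,
# and at beta * delta**t * x otherwise. Parameters are illustrative.

def discounted_value(x, t, beta=0.7, delta=0.95):
    """Present value of reward x received t periods from now."""
    return x if t == 0 else beta * delta**t * x

# Choice 1 (both rewards distant): $100 in 10 periods vs. $110 in 11 periods.
far_small = discounted_value(100, 10)
far_large = discounted_value(110, 11)

# Choice 2 (same pair of rewards, viewed when the small one is immediate):
# $100 now vs. $110 in 1 period.
near_small = discounted_value(100, 0)
near_large = discounted_value(110, 1)

# Preference reversal: patient at a distance, impatient up close.
print(far_large > far_small)    # True: prefers the larger, later reward
print(near_small > near_large)  # True: now prefers the smaller, immediate reward
```

The reversal in the final two lines is the self-control problem: today's self makes plans that tomorrow's self overturns.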

Perhaps the most compelling behavioral paper I know of deals with the effect of labelling a choice the “default” option.  The paper I know best on this effect is by Beshears, Choi, Laibson, and Madrian (2006).  They show that simply calling a retirement savings option the default option sharply increases the likelihood that people choose that option.  Clearly, this doesn’t sound like rational decision making.  If option A is an ideal choice for you then you should continue to pick A even if I label option B the default option.  I find this study particularly compelling because the empirical evidence is clear and convincing and also because the potential consequences of this behavioral pattern seem important.

Today, it seems like behavioral economics has slowed down somewhat.  For whatever reason, the flood of behavioral economists we were anticipating 10 years ago never really materialized and the financial crisis hasn’t led to a huge increase in the activity or prestige of behavioral work. Certainly the evidence that people don’t typically behave rationally is quite compelling.  It’s easy to find examples of behavior which conflict with economic theory.  The problem is that it’s not clear that these examples help us much.  Behavioral economics won’t get very far if it ends up being just a pile of “quirks.”  Are these anomalies merely imperfections in a system which is largely characterized by rational self-interest or is there something deeper at play?  If the body of behavioral studies really just provides the exceptions to the rule then, going forward, economists will likely return to standard rational analysis (perhaps keeping in mind “common sense” violations of rationality like default options, salience effects, etc.).  If behavioral is to somehow fulfill its earlier promise, there has to be some transcendent principle or insight from behavioral economics that we can use to understand the world.  In any case, if behavioral is to continue to develop, it will need some very smart, energetic young researchers to pick up where Laibson and the others left off.  If not, behavioral economics gets a goodbye kiss from Heidi Klum and it’s “Auf Wiedersehen.”

UPDATE: One of the readers has asked for some citations for the work mentioned in the post.  Here is a list of relevant citations. You should be able to find .pdf versions by “googling” the titles.

The Fed in 2008

I’ve been reading through the recently released transcripts of the Federal Reserve meetings during the financial crisis and there are many noteworthy features which seem relevant for students of the crisis and modern monetary policy. 

First, not surprisingly, there is a lot of confusion in most of these meetings. This is to be expected given the volume of data that the board was receiving, the noise in the data and the sometimes conflicting nature of the statistics. I think it’s virtually impossible for economists today to look back and give a fair assessment of the Fed’s interpretation of the data at the time. We have the benefit of hindsight and the luxury of being able to casually contemplate possible courses of action – neither of which was available to the Fed in 2008. I know that Matthew Yglesias, Brad DeLong and Paul Krugman have weighed in on some of the policy makers but I don’t really think this is fair. If I think a coin flip is going to turn up heads and you think it’s tails, it is not really fair to say “well it turned out to be heads so you were a fool and I was a hero.”

Second, I am struck by the amount of detailed discussion of the architecture of the financial system in the transcripts. I’m sure many of you are thinking “duh — what else do you think the Fed discusses at its meetings?” Well, I agree, but the contrast with academic treatments of monetary policy is stark. As I wrote in a previous post, in my assessment, many macroeconomic researchers have been far too concerned with the details of price rigidity and far too indifferent to the details of financial arrangements.  These details occupied center stage during the financial crisis, and we had better get a better picture of how such arrangements interact with monetary policy actions if we hope to respond appropriately to the next crisis.*

Third, as many commentators have pointed out, there were people who were concerned about inflation. This seems odd given what we know followed (and odd given that a bit more inflation would be welcome news today) but, at least to a small extent, it was part of the data at the time. Some commodity prices, oil in particular, were rising, which seemed odd given what policy makers were hearing from lenders. Jim Bullard has an interesting recent presentation on this in which it seems like he is arguing that oil supply shocks may have shaped the Fed’s assessment of the problem that summer.

Finally, the Fed was clearly viewing the crisis both as a liquidity crisis and as a solvency crisis. At the time, many market observers felt that the crisis was primarily one of solvency. Problems in the loan markets were seen by many as being tied to counterparty risk (“I won’t lend to you because I don’t trust that you will be able to pay me back”). This view led many to advocate for the realignment of the TARP funds toward equity injections rather than asset purchases. While I am sure that solvency played a large role in the crisis, I am also convinced that liquidity problems were a big part of the story (“I won’t lend to you because I don’t trust the collateral you are offering me”). On this dimension the Fed was perhaps ahead of the curve both in its understanding of the problems and in its efforts to address the situation. The many liquidity facilities put in place, in particular the TSLF which traded Treasuries for non-standard collateral (“other stuff” in the words of one of the governors), were key to stabilizing many of the markets at the time.

* One detail of which I wasn’t aware deals with the resolutions of Repo contracts in the event of a bankruptcy for a financial institution. Most Repo contracts are exempt from automatic stay in bankruptcy proceedings. That is, if I borrow from you with a Repo, you would own the collateral in the event that I go bankrupt. This is one of the features that makes Repo contracts so attractive. For other collateralized loans, you might think that your loan is secured by specific collateral, but, if I go bankrupt, you won’t be able to get access to the collateral until the bankruptcy proceedings have been completed (or worse – you might find out during the proceedings that someone else has a claim to the same assets which supersedes your own). However, this exemption from automatic stay does not necessarily apply if the borrower is a brokerage firm. When a brokerage firm fails, it will likely fall under the Securities Investor Protection Act (SIPA) which does not make exemptions for Repos in automatic stay. When Lehman was failing, the Fed was concerned that many of the Repos would be tied up by SIPA which could cause the problem to spread to any institution that had Repo contracts with Lehman. (See here for details, in particular footnotes 5 and 29.) 

Is there a use for Real Business Cycle Models?

The Real Business Cycle (RBC) model receives a lot of criticism from online bloggers and from other economists. A lot of the criticism is justified. The model assumes away all frictions and market failures. It assumes that consumers and workers can be analyzed as though they were all essentially the same, or perhaps as though only an average individual’s preferences mattered. The most contentious aspect of the RBC model, however, has always been the assumed source of business cycle fluctuations. In the RBC model, variations in productivity, perhaps brought on by the inevitable unevenness in the pace of innovation, drive all of the variations in hours worked, investment, production and so forth.

I mentioned in a previous post that this stark version of the RBC model is not really taken very seriously by researchers anymore — at least with regard to the role of productivity shocks. Better measurement has deprived the canonical RBC model of the innovations necessary to generate cyclical variations in economic activity. While early RBC models used Solow residuals as proxies for actual changes in productivity, subsequent research demonstrated that these measures were almost entirely due to variations in unobserved utilization (capital utilization, labor effort, etc.). Thus, in the data, measured TFP varies at seasonal frequencies (pretty difficult to believe as genuine technological change in the economy we live in today) and even in response to tax stimulus (an investment tax credit will stimulate investment and also “cause” measured TFP to rise). Even worse, the measured increases at seasonal frequencies or in response to tax changes are essentially of the same magnitude as the variations observed over the business cycle. Papers that do attempt to adjust for unobserved input variations (say by including measured energy use) typically find that doing so eliminates a huge amount of the variation in measured productivity. The well-known study by Basu, Fernald and Kimball (2006) produces “cleansed” Solow residuals which are at best unrelated to cyclical variations in GDP (Basu et al. actually claim that true productivity variations are negatively correlated with detrended GDP). Of course there are actual productivity shocks (e.g., Hurricane Katrina, the terrible 2011 Japanese tsunami, the two-day blackout in the northeastern US in 2003, …) but none of these seem to be responsible for substantial changes in employment or production.
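The growth-accounting arithmetic behind this critique is easy to sketch. The numbers below are purely illustrative assumptions; the point is only that unobserved utilization shows up in the naive Solow residual as a phantom productivity shock:

```python
# A stylized sketch of the Solow residual and how unobserved utilization
# contaminates it. All numbers are illustrative assumptions.

def solow_residual(dY, dK, dL, alpha=0.33):
    """Growth-accounting residual (in log changes): dA = dY - alpha*dK - (1-alpha)*dL."""
    return dY - alpha * dK - (1 - alpha) * dL

# Suppose true technology is unchanged, but in a boom firms work existing
# capital harder and workers exert more effort, so effective inputs rise
# more than measured inputs do.
dY = 0.03           # measured output growth
dK_measured = 0.01  # measured capital growth (utilization unobserved)
dL_measured = 0.02  # measured hours growth (effort unobserved)

dK_effective = 0.03 # capital services including higher utilization
dL_effective = 0.03 # labor input including higher effort

naive = solow_residual(dY, dK_measured, dL_measured)
corrected = solow_residual(dY, dK_effective, dL_effective)

print(round(naive, 4))      # a positive "productivity shock" -- an artifact
print(round(corrected, 4))  # roughly zero once utilization is accounted for
```

The naive residual registers a sizable procyclical "technology shock" even though, by construction, technology never moved.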

This raises the question: if the RBC model does not survive as a model of actual business cycle fluctuations, why do we still teach it in graduate macroeconomics?

I can think of three answers to this question. The first is the model’s prominent historical place in the development of the field. Macroeconomics changed forever after the first-generation RBC models were developed. These models ushered in new methods and techniques, many of which are still in use today. Similarly, the fact that we know that real shocks do not cause business cycle fluctuations (at least as they were conceived by the original RBC theorists) is an important component of our understanding. Even when you are on a long voyage, it is often important to look back every now and then.

Second, the RBC model is an excellent pedagogical device. The RBC model is almost always the first DSGE model students confront and it also functions as the standard backdrop in more advanced DSGE frameworks. Many of the intuitions carry over and present themselves in more modern instances of the model. For instance, researchers have extended the basic framework to analyze tax policy, international business cycles, government spending shocks, and of course monetary policy. Often the correct intuition required for the more elaborate models can be seen in the original RBC framework.

Last, there may yet be situations in which the RBC model is applicable. While modern advanced economies do not have business cycles that are driven by real shocks, other economies might. For example, suppose you wanted to analyze the economy of ancient Egypt. The Egyptian economy would be closely tied to the flooding of the Nile river and other types of weather shocks. If the waters don’t rise enough, food production will fall. If there is a particularly good year for growing, the Egyptians will accumulate a large stock of food which might well be traded or stored. I suspect that these real shocks had tremendous impacts on production, consumption, work, storage and so on, and the RBC model might provide an interesting guide as to what patterns one might expect in the data. (If there is an enterprising student out there who has an idea of where we could find some actual data on production, etc. for ancient Egypt, send me an e-mail, I would love to write this paper with you … )

A Faustian Bargain?

Reflecting on a recent blog post by Simon Wren-Lewis, Paul Krugman argues that the modern insistence on microfoundations has impoverished macroeconomics by shutting down early understandings of financial markets “because (they) didn’t conform to a particular, highly restrictive definition of what was considered valid theory.”  In Krugman’s libretto, the role of Mephistopheles is played by “freshwater” macroeconomists. 

Krugman uses James Tobin as an example of one of the casualties of adopting freshwater methodology, saying that as far as he could tell, Tobin “disappeared from graduate macro over the course of the 80s, because his models, while loosely grounded in some notion of rational behavior, weren’t explicitly and rigorously derived from microfoundations.” Tobin has not disappeared. In my course, for instance, Tobin shows up in the section on investment, which is centered around Tobin’s Q (my co-author Matthew Shapiro constantly emphasizes that it should be called Brainard-Tobin’s Q).  My students (and any graduate student familiar with David Romer’s Advanced Macroeconomics) are well aware of Tobin’s role in this line of work. Tobin’s early ideas on Q-theory were sketches – plausibility arguments – which were subsequently developed in greater detail by Andy Abel, Fumio Hayashi and Larry Summers (and also Michael Mussa).

Adopting microfoundations does come with a cost. As I mentioned in a previous post, being precise and exact prevents economists from engaging in glib, hand-waving theorizing. Many analysts (and commentators) see this as a serious limitation.  But using this methodology also has advantages. Being specific allows you to (1) make the theory clear by exposing its necessary components, (2) quantify effects by attaching plausible values to parameters and (3) learn from the model. This last advantage is one of the biggest benefits of microfoundations.  Setting out a list of assumptions and then following them where they lead may expose flaws in your own understanding; it may lead you to new ideas, and so on. Let me give you two examples.

Suppose someone says that if demand goes up, prices will fall. Here is their argument: if demand goes up, the price is bid up. The price increase reduces demand and so ultimately the price falls. Every statement in this argument sounds reasonable but the conclusion is incorrect. The way to find the mistake is with a model – in this case a supply and demand model. (The error is a confusion of movements along a demand curve versus shifts in the demand curve.)
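A minimal sketch of the model exposes the error. The linear curves and all the numbers below are arbitrary assumptions chosen only for illustration:

```python
# Supply and demand with assumed linear curves: Qd = a - b*p, Qs = c + d*p.

def equilibrium_price(a, b, c, d):
    """Solve a - b*p = c + d*p for the market-clearing price."""
    return (a - c) / (b + d)

# Baseline market.
p0 = equilibrium_price(a=100, b=2, c=10, d=1)

# Demand shifts up (the intercept a rises): the whole demand curve moves out.
p1 = equilibrium_price(a=130, b=2, c=10, d=1)

# The higher price induces a movement ALONG the new demand curve (quantity
# demanded is lower there than at the old price on the new curve), but the
# curve itself does not shift back, so the equilibrium price stays higher.
print(p1 > p0)  # True: an increase in demand raises the price
```

Writing the model down forces you to distinguish the demand curve (the function) from quantity demanded (a point on it), which is exactly where the verbal argument goes wrong.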

Here is another example. In the traditional IS/LM model, investment demand is assumed to depend negatively on the real interest rate. This assumption is important for the functioning of the model – it makes the IS curve slope down. The assumption itself is based on a slight confusion between the demand for capital and the demand for investment. What would happen if we added some microfoundations? Suppose we removed the ad hoc investment demand curve and instead required that the marginal product of capital equal the real interest rate (the user-cost relationship).  In this case, there would be a positive relationship between output and the real interest rate (the IS curve would slope up! Higher output would require more employment which would raise the marginal product of capital and raise the real interest rate.) An increase in the money supply would cause the real rate (and the nominal rate) to rise. How should we interpret this? One interpretation is that we need to think a bit more about the investment demand component of the model. An alternative reaction would be to say “I know that the original IS/LM model is right; I don’t need the microfoundations; they are just preventing me from getting the right answer.”
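Here is a rough sketch of the user-cost logic, assuming a Cobb-Douglas production function and a capital stock fixed in the short run (a stylized illustration of the mechanism, not Tobin’s actual model; all parameter values are assumptions):

```python
# User-cost sketch: Y = K**alpha * N**(1 - alpha) with K fixed in the short
# run. Requiring MPK = r ties the real rate to the level of output.

ALPHA = 0.33
K = 10.0  # short-run capital stock (assumed fixed)

def output(N, K=K, alpha=ALPHA):
    """Cobb-Douglas output given employment N."""
    return K**alpha * N**(1 - alpha)

def mpk(N, K=K, alpha=ALPHA):
    """Marginal product of capital: alpha * Y / K."""
    return alpha * output(N, K, alpha) / K

# Higher employment -> higher output -> higher MPK -> higher real rate r = MPK.
N_low, N_high = 1.0, 1.5
r_low, r_high = mpk(N_low), mpk(N_high)

print(output(N_high) > output(N_low))  # True
print(r_high > r_low)                  # True: output and the real rate move together
```

With K fixed, any expansion of output must come through employment, which mechanically raises the marginal product of capital, so the output-rate relationship is positive: an upward-sloping "IS curve."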

Who came up with this twisted version of the IS/LM model you might ask? Wait for it …

…yep … James Tobin. (1955, see Sargent’s 1987 Macroeconomic Theory text for a brief description of Tobin’s “Dynamic Aggregative Model.”)

Even today, when we analyze the New Keynesian model, it is often done without any investment (this is like having an IS/LM model without the “I”). Adding investment demand can sometimes result in odd behavior. In particular you often get inverted Fisher effects in which monetary expansions are associated with higher output but strangely, higher real interest rates and higher nominal interest rates.  (If you teach New Keynesian models to graduate students I would encourage you to take a look at Tobin’s model.)

It seems that Paul Krugman wants to revise the history of the field a bit. Reading his post it almost seems like he wants us to believe that the Keynesians would have figured out financial market failures if they hadn’t been led astray by microfoundations and rational expectations. This is not true. The main thing New Keynesian research has been devoted to for the past 20 years is an exhaustive study of price rigidity. If anything was holding us back it was the extraordinary devotion of our energy and attention to the study of nominal rigidities. We now know more about the details of price setting than any other field in economics. As financial markets were melting down in 2008, many of us were regretting that allocation of our attention. We really needed a more refined empirical and theoretical understanding of how financial markets did or did not work. 

The Latest Macro Dust-Up

There have been several blog posts commenting on Kartik Athreya’s book, Big Ideas in Macroeconomics: A Nontechnical View, and I wanted to make a couple of passing remarks pertaining to the blog posts I’ve read. I haven’t read the book yet, and to be completely honest, I’m not sure I will ever get to it given the huge pile of work I have. I will not discuss the book itself. Instead I’ll focus on some of the noteworthy remarks made by bloggers.

Overall, there seems to be lots of misplaced DSGE hand-wringing going on. I think one of the main reasons that some economists dislike DSGE models is that they place limitations on our ability to engage in hand-waving theorizing. “Microfoundations” is really just code for saying that we want you to be specific and clear. Researchers who try to brush by important parts of their business cycle theories will have a really tough time if they cannot provide the details that accompany their hand waving. As soon as a researcher appeals to some arbitrary relationship which comes out of thin air, they will immediately feel pressure to back up that component of their theory. Unlike many macroeconomists, I am willing to let people make use of plausible ad hoc theoretical relationships provided that they come with an acknowledgement that this is unfinished work which needs to be filled in later. In the past, wise old economists could get by making sweeping statements about the nature of business cycles and the correct policy cocktail which they thought would save the day without having to spell out what they really meant. Today, macroeconomists are typically not granted this latitude.

In his comment, Noah Smith argues that many macroeconomists are “in love with modern macro methodology.” I think Noah is partially correct. It’s true that there are many economists (not just macro guys) who focus far too much on the tools we use to analyze problems rather than the problems themselves. In the end, tools are only valuable if they are put to good use. My own view is that grad students need to have a solid grasp of fundamental tools so they can get started. But, after learning these basic tools, they should not go out of their way to learn more advanced tools unless they have a specific need to do so. There are other people who feel quite differently and I can appreciate this alternate view even if I don’t ultimately agree with it. On the other hand, I’m not so sure I know what Noah means by “macro methodology.” The techniques used by macroeconomists are for the most part used in every area of economics. Dynamic programming is used in labor and IO as well as macro. Bayesian estimation, maximum likelihood techniques and so on are used by most fields. General equilibrium analysis is again used throughout economics. There are some tools which are used almost solely by macroeconomists (the Blanchard-Kahn decomposition comes to mind) but I don’t think this is what he has in mind. Perhaps it is the conjunction of so many common elements that he associates with DSGE models. For instance, there is a good deal of “boilerplate” which shows up in DSGE models (the representative agent, the production function, the capital accumulation equation, and so on). It might be interesting to hear exactly which techniques he views as primarily in the domain of DSGE research, and which of them he questions.

John Quiggin takes the opportunity to eulogize some of the modern research which he feels met its end during the financial crisis. He writes that “(t)he crisis that erupted in 2008 destroyed (the) spurious consensus (in macroeconomics).” There might be some truth to this statement as well though I’m not entirely sure what class of theories he has in mind. I suspect that he thinks that the crisis undercut real business cycle models or perhaps rational expectations models more generally. I don’t think this is the case. The productivity shocks at the heart of standard real business cycle models have not been viewed as plausible sources of business cycle fluctuations for quite a while and rational expectations theories are most likely here to stay. If there is a model that really got taken to the woodshed during the financial crisis it was the New Keynesian model which had, until then, occupied a clearly dominant position in policy discussions and academic research. The “New Old Keynesians,” as he calls them, aren’t having a much better time. They also don’t have a good framework for understanding the financial crisis (there is no meaningful financial sector in the traditional IS/LM model) and their versions of the supply side of the models aren’t doing very well at all. Quiggin might be referring to the absence of Keynesian demand channels in many DSGE models. Here he might have more of a case. Getting traditional Keynesian demand models to work the way they ought to is not easy. Again, the mechanics of the DSGE approach might be limiting us. There are lots of background assumptions made in most DSGE models which play important roles in how the models function and which likely prevent Keynesian swings in aggregate demand from occurring (though Roger Farmer’s work is overtly pushing in this direction). The basic Walrasian supply and demand framework is surely one of the key features of DSGE models which limits Keynesian channels.
A worker who cannot find employment can simply lower his wage. A firm that cannot sell its goods can simply lower the price if it needs to sell. Of course, sticky prices and sticky wages are the standard refrain, but this ties the model to the prediction that recessions should be accompanied by deflation (which really didn’t happen during the Great Recession). I personally have some faith in the basic Keynesian demand story but we don’t really have a good usable version of it at the moment.

Paul Krugman also weighs in with a short comment. He makes two remarks which are noteworthy. First he says that DSGE models are “basically, the particular form of modeling that is more or less the only thing one can publish in journals these days.” This is simply not true. Second, his closing remark is (to me) somewhat cryptic:

I think, that somebody is going to end up in the dustbin of history. I wonder who?

I confess, I really don’t know who or what he is talking about when he writes this.

More to come I’m sure …

The Fight Against Spinal Muscular Atrophy

As many of you know, my daughter, Abigail, has Spinal Muscular Atrophy (SMA).  SMA is a terrible genetic disorder which affects children all over the world.  Kids with SMA lack a gene which produces a protein necessary for the proper development of motor neurons.  Because the motor neurons are underdeveloped, there is inadequate communication between the brain and the muscles and, as a result, kids with SMA have extremely weak muscles throughout their bodies.  They can’t walk.  Many cannot sit upright by themselves or hold their heads up on their own.  They sometimes have difficulty chewing and swallowing food.  They often have very weak coughs, which are a major source of health problems for these kids: many children with SMA get serious pneumonias which require prolonged stays in hospitals.  No one really thinks about coughing much but it is a very basic mechanism for fighting off respiratory illnesses.

About 1 in 40 people are carriers of the genetic defect which causes SMA. (You read that right, 1 in 40.)  The gene is recessive, however, so for a child to have SMA, he or she needs to get the bad gene from both parents.  Roughly 1 in every 10,000 children has SMA, making it a “common” rare disorder.  There are three types of SMA.  Type I is the most severe, Type II is intermediate and Type III is the least severe.  Abbey is a strong Type II and she is actually in fairly good health overall.
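For what it’s worth, the back-of-envelope arithmetic for a recessive disorder lines up with these figures (a rough sketch that ignores de novo mutations and variation in carrier frequency):

```latex
P(\text{affected child})
  = \underbrace{\tfrac{1}{40}\times\tfrac{1}{40}}_{\text{both parents carriers}}
    \times
    \underbrace{\tfrac{1}{4}}_{\text{child inherits both bad copies}}
  = \tfrac{1}{6400},
```

which is the same order of magnitude as the observed incidence of roughly 1 in 10,000.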

The good news about SMA is that its days are numbered.  Scientists know quite a bit about SMA – they know the exact gene which causes the problem; they know exactly the type of protein which needs to be synthesized to encourage motor neuron development and so forth.  The medical community senses that they are close to cracking this problem and SMA attracts lots of researchers as a result.  With any luck, I will see SMA eliminated in my lifetime. 

Every year, my wife, Melissa, helps raise money for SMA research through a Walk-and-Roll fundraiser for Families of Spinal Muscular Atrophy (FSMA) in Abigail’s name.  If you would like to donate to the cause you can click on the link below. 

http://www.fsma.org/LWC/MelissaHouse8381

If you are a graduate student (either at Michigan or anywhere else) then I would suggest that you don’t give money.  I understand the permanent income hypothesis but the reality of grad student life is one of poverty.  I suggest that you “pay it forward” and donate to a worthy charity once you are done with your degree and have a job.  If you are a graduate student in my class, please do not donate as I think this would create a conflict of interest.

If you follow the link to Abbey’s site, you will find a picture with my wife and two kids (Sam and Abbey).  The fat non-photogenic person standing behind them is me.