Is Behavioral Economics the Past or the Future?

There are fads in every field.  As Heidi Klum would say, “one day you’re in, and the next day you’re out.”  Economics is no exception.  Trendy topics come and go.  At any moment, it’s difficult to tell whether the current hot topic is here to stay or whether it is simply enjoying the academic equivalent of Andy Warhol’s 15 minutes of fame.

When I was getting my Ph.D., behavioral economics was absolutely the hot topic.  To hear some people talk, behavioral economics promised to revolutionize macroeconomics, finance … basically every corner of the field.  Today, however, it’s not at all clear what the future has in store for behavioral economics.

I think the reason behavioral economics was originally so intriguing was that it undercut the basic principles that govern standard economic analysis.  The basic organizing philosophy in economics is that allocations are guided by self-interest.  Or, the way economists would say it, allocations are based on rational decisions.  What economists mean by rational is that (1) people know their own preferences, and (2) their choices are based on these preferences.  Rationality is an extremely powerful card that economists play often.  If a social planner actually cares about the well-being of her subjects, she can accomplish a lot by simply allowing them to make choices based on their own likes and dislikes.  Not surprisingly, rationality often leads to neo-liberal policy conclusions.  At a very basic level, behavioral economics considers the possibility that allocations violate one or both of the conditions above.  Either people don’t know what they really like, or they have difficulty making choices that conform to their preferences.

In the early 2000s, my colleagues and I were anticipating a flood of newly minted behavioral Ph.D.s from the top economics programs in the country.  Later, when the financial crisis exploded in 2007–2008, we were again told that behavioral economics would finally come into full bloom.  It didn’t happen, though.  The wave of behavioralists never came.  After the financial crisis, young Ph.D.s turned their attention to studying financial macroeconomics – and when they did, they used mostly standard techniques based on rational decision making.  They incorporated more institutional detail rather than behavioral elements.

In my graduate macroeconomics class, I usually devote one or two lectures to results from behavioral economics.  The papers I discuss are the best that behavioral has to offer, and many of the students find the topics intriguing.  I cover David Laibson’s (1998) paper on hyperbolic discounting and self-control problems.  I cover a famous empirical paper by Stefano DellaVigna and Ulrike Malmendier (2006).  I briefly mention the paper by Brunnermeier and Parker on “optimal expectations,” a theoretical setting in which individuals can indulge in unrealistic (irrational) beliefs at the cost of making bad decisions (e.g., you can enjoy an irrational belief that you are likely to win the lottery but only if you buy a lottery ticket).  There are also excellent papers by Malmendier and Stefan Nagel (2011, 2013), who show that expectations depend importantly on whether people have personally experienced events during their lifetimes.  (In one of their papers, people who lived through the Great Depression held beliefs and made asset choices that placed greater weight on the possibility of a financial crisis.)  There are several interesting papers by Caplin and Leahy who consider, among other things, the possibility that people may get utility just from anticipating future events.  If you know you have to get a painful shot, you might experience feelings of dread or panic above and beyond the physical pain from the shot itself.  My colleagues Miles Kimball, Justin Wolfers and Betsy Stevenson analyze the determinants of people’s subjective happiness (as distinct from “utility”).  In finance, there are classic papers on “agreeing to disagree” by Harrison and Kreps (1978) and the more recent variations considered by John Geanakoplos (see, e.g., “The Leverage Cycle,” 2009: if pessimists face short-selling constraints, the market price of financial assets will exceed the “fundamental value” of the assets).
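The self-control mechanism in Laibson’s work is easy to illustrate numerically.  Under quasi-hyperbolic (β-δ) discounting, a payoff t periods away is weighted β·δ^t for t ≥ 1, and the present-bias parameter β < 1 produces preference reversals that a standard exponential discounter never exhibits.  The sketch below uses made-up payoffs and parameter values, not numbers from the paper:

```python
# Quasi-hyperbolic (beta-delta) discounting: an illustrative sketch.
# A payoff t periods away is weighted delta**t under exponential
# discounting, but beta * delta**t (for t >= 1) under quasi-hyperbolic
# discounting.  Present bias (beta < 1) generates preference reversals.

def exponential_weight(t, delta=0.95):
    return delta ** t

def quasi_hyperbolic_weight(t, beta=0.7, delta=0.95):
    return 1.0 if t == 0 else beta * delta ** t

# Choice: $100 at date t vs $110 at date t+1, evaluated today (date 0).
def prefers_early(weight, t):
    return 100 * weight(t) > 110 * weight(t + 1)

# Viewed from a distance (t = 10), both discounters patiently wait:
assert not prefers_early(exponential_weight, 10)
assert not prefers_early(quasi_hyperbolic_weight, 10)

# When the smaller payoff is immediate (t = 0), the present-biased
# agent grabs the $100 -- a reversal the exponential agent never makes:
assert not prefers_early(exponential_weight, 0)
assert prefers_early(quasi_hyperbolic_weight, 0)
```

The reversal is the whole story: the agent plans to be patient tomorrow but is impatient today, which is the formal version of a self-control problem.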

Perhaps the most compelling behavioral paper I know of deals with the effect of labeling a choice the “default” option.  The paper I know best on this effect is by Beshears, Choi, Laibson, and Madrian (2006).  They show that simply calling a retirement savings option the default option sharply increases the likelihood that people choose that option.  Clearly, this doesn’t sound like rational decision making.  If option A is an ideal choice for you, then you should continue to pick A even if I label option B the default option.  I find this study particularly compelling because the empirical evidence is clear and convincing and because the potential consequences of this behavioral pattern seem important.

Today, it seems like behavioral economics has slowed down somewhat.  For whatever reason, the flood of behavioral economists we were anticipating 10 years ago never really materialized, and the financial crisis hasn’t led to a huge increase in the activity or prestige of behavioral work.  Certainly the evidence that people don’t typically behave rationally is quite compelling.  It’s easy to find examples of behavior that conflict with economic theory.  The problem is that it’s not clear these examples help us much.  Behavioral economics won’t get very far if it ends up being just a pile of “quirks.”  Are these anomalies merely imperfections in a system which is largely characterized by rational self-interest, or is there something deeper at play?  If the body of behavioral studies really just provides the exceptions to the rule then, going forward, economists will likely return to standard rational analysis (perhaps keeping in mind “common sense” violations of rationality like default options, salience effects, etc.).  If behavioral economics is to fulfill its earlier promise, some transcendent principle or insight has to emerge from it that we can use to understand the world.  In any case, if behavioral is to continue to develop, it will need some very smart, energetic young researchers to pick up where Laibson and the others left off.  If not, behavioral economics gets a goodbye kiss from Heidi Klum and it’s “Auf Wiedersehen.”

UPDATE: One of the readers has asked for some citations for the work mentioned in the post.  Here is a list of relevant citations. You should be able to find .pdf versions by “googling” the titles.


The Fed in 2008

I’ve been reading through the recently released transcripts of the Federal Reserve meetings during the financial crisis and there are many noteworthy features which seem relevant for students of the crisis and modern monetary policy. 

First, not surprisingly, there is a lot of confusion in most of these meetings. This is to be expected given the volume of data that the board was receiving, the noise in the data and the sometimes conflicting nature of the statistics. I think it’s virtually impossible for economists today to look back and give a fair assessment of the Fed’s interpretation of the data at the time. We have the burden of hindsight and the luxury of being able to casually contemplate possible courses of action – neither of which were available to the Fed in 2008. I know that Matthew Yglesias, Brad DeLong and Paul Krugman have weighed in on some of the policy makers but I don’t really think this is fair. If I think a coin flip is going to turn up heads and you think it’s tails, it is not really fair to say “well it turned out to be heads so you were a fool and I was a hero.”

Second, I am struck by the amount of detailed discussion of the architecture of the financial system in the transcripts. I’m sure many of you are thinking “duh — what else do you think the Fed discusses at its meetings?” Well, I agree, but the contrast with academic treatments of monetary policy is stark. As I wrote in a previous post, in my assessment, many macroeconomic researchers have been far too concerned with the details of price rigidity and far too indifferent about the details of financial arrangements.  These details occupied center stage during the financial crisis, and we need a much better picture of how these arrangements interact with monetary policy actions if we hope to respond appropriately to the next crisis.*

Third, as many commentators have pointed out, there were people who were concerned about inflation. This seems odd given what we know followed (and odd given that a bit more inflation would be welcome news today) but, at least to a small extent, it was part of the data at the time. Some commodity prices, oil in particular, were rising, which seemed puzzling given what policy makers were hearing from lenders. Jim Bullard has an interesting recent presentation on this in which it seems like he is arguing that oil supply shocks may have shaped the Fed’s assessment of the problem that summer.

Finally, the Fed was clearly viewing the crisis both as a liquidity crisis and as a solvency crisis. At the time, many market observers felt that the crisis was primarily one of solvency. Problems in the loan markets were seen by many as being tied to counterparty risk (“I won’t lend to you because I don’t trust that you will be able to pay me back.”) This view led many to advocate for the realignment of the TARP funds toward equity injections rather than asset purchases. While I am sure that solvency played a large role in the crisis, I am also convinced that liquidity problems were a big part of the story (“I won’t lend to you because I don’t trust the collateral you are offering me.”) On this dimension the Fed was perhaps ahead of the curve both in its understanding of the problems and in efforts to address the situation. The many liquidity facilities put in place, in particular the TSLF which traded Treasuries for non-standard collateral (“other stuff” in the words of one of the governors), were key to stabilizing many of the markets at the time.

* One detail of which I wasn’t aware deals with the resolutions of Repo contracts in the event of a bankruptcy for a financial institution. Most Repo contracts are exempt from automatic stay in bankruptcy proceedings. That is, if I borrow from you with a Repo, you would own the collateral in the event that I go bankrupt. This is one of the features that makes Repo contracts so attractive. For other collateralized loans, you might think that your loan is secured by specific collateral, but, if I go bankrupt, you won’t be able to get access to the collateral until the bankruptcy proceedings have been completed (or worse – you might find out during the proceedings that someone else has a claim to the same assets which supersedes your own). However, this exemption from automatic stay does not necessarily apply if the borrower is a brokerage firm. When a brokerage firm fails, it will likely fall under the Securities Investor Protection Act (SIPA) which does not make exemptions for Repos in automatic stay. When Lehman was failing, the Fed was concerned that many of the Repos would be tied up by SIPA which could cause the problem to spread to any institution that had Repo contracts with Lehman. (See here for details, in particular footnotes 5 and 29.) 

Is there a use for Real Business Cycle Models?

The Real Business Cycle (RBC) model receives a lot of criticism from online bloggers and from other economists. A lot of the criticism is justified. The model assumes away all frictions and market failures. It assumes that consumers and workers can be analyzed as though they were all essentially the same – as though only an average individual’s preferences mattered. The most contentious aspect of the RBC model, however, has always been the assumed source of business cycle fluctuations. In the RBC model, variations in productivity, perhaps brought on by the inevitable unevenness in the pace of innovation, drive all of the variations in hours worked, investment, production and so forth.

I mentioned in a previous post that this stark version of the RBC model is not really taken very seriously by researchers anymore — at least with regard to the role of productivity shocks. Better measurement has deprived the canonical RBC model of the innovations necessary to generate cyclical variations in economic activity. While early RBC models used Solow residuals as proxies for actual changes in productivity, subsequent research demonstrated that these measures were almost entirely due to variations in unobserved utilization (capital utilization, labor effort, etc.). Thus, in the data, measured TFP varies at seasonal frequencies (which is pretty difficult to believe in the economy we live in today) and even in response to tax stimulus (an investment tax credit will stimulate investment and also “cause” measured TFP to rise). Even worse, the measured increases at seasonal frequencies or in response to tax changes are essentially of the same magnitude as the variations observed over the business cycle. Papers that do attempt to adjust for unobserved input variations (say, by including measured energy use) typically find that the adjustment eliminates a huge amount of the variation in measured productivity. The well-known study by Basu, Fernald and Kimball (2006) produces “cleansed” Solow residuals which are at best unrelated to cyclical variations in GDP (Basu et al. actually claim that true productivity variations are negatively correlated with detrended GDP). Of course there are actual productivity shocks (e.g., Hurricane Katrina, the terrible 2011 Japanese tsunami, the two-day blackout in the northeastern US in 2003, …) but none of these seem to be responsible for substantial changes in employment or production.
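The Solow residual itself is mechanical to compute: under a Cobb-Douglas production function Y = A·K^α·L^(1−α), TFP growth is output growth minus the share-weighted growth of measured inputs. The sketch below (illustrative numbers, not real data) shows why unmeasured utilization contaminates the residual: if firms work their capital and workers harder in a boom, the extra output lands in the residual and masquerades as a “productivity shock.”

```python
# Solow residual: dlogA = dlogY - alpha * dlogK - (1 - alpha) * dlogL.
# All growth rates below are invented for illustration.

def solow_residual(dy, dk, dl, alpha=0.33):
    """Measured TFP growth from growth rates of output, capital and labor."""
    return dy - alpha * dk - (1 - alpha) * dl

# A boom: output up 3%, measured capital up 1%, measured hours up 2%.
measured = solow_residual(0.03, 0.01, 0.02)

# Suppose part of the output gain came from unmeasured utilization
# (machines run longer shifts, workers exert more effort).  "True"
# input growth is then higher than the measured input growth:
adjusted = solow_residual(0.03, 0.01 + 0.01, 0.02 + 0.005)

print(f"measured TFP growth: {measured:.4f}")
print(f"utilization-adjusted TFP growth: {adjusted:.4f}")
# Correcting for utilization shrinks the residual:
assert adjusted < measured
```

This is exactly the pattern the utilization-correction literature finds: once input intensity is measured better, much of the apparent cyclical “productivity” disappears.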

This raises the question: If the RBC model does not survive as a model of actual business cycle fluctuations, why do we still teach it in graduate macroeconomics?

I can think of three answers to this question. The first is the model’s prominent historical place in the development of the field. Macroeconomics changed forever after the first-generation RBC models were developed. These models ushered in new methods and techniques, many of which are still in use today. Similarly, knowing that real shocks do not cause business cycle fluctuations (at least not as conceived by the original RBC theorists) is an important component of our understanding. Even when you are on a long voyage, it is often important to look back every now and then.

Second, the RBC model is an excellent pedagogical device. The RBC model is almost always the first DSGE model students confront, and it also functions as the standard backdrop for more advanced DSGE frameworks. Many of the intuitions carry over and present themselves in more modern instances of the model. For instance, researchers have extended the basic framework to analyze tax policy, international business cycles, government spending shocks, and of course monetary policy. Often the correct intuition required for the more elaborate models can be seen in the original RBC framework.

Last, there may yet be situations in which the RBC model is applicable. While modern advanced economies do not have business cycles that are driven by real shocks, other economies might. For example, suppose you wanted to analyze the economy of ancient Egypt. The Egyptian economy would be closely tied to the flooding of the Nile river and other types of weather shocks. If the waters don’t rise enough, food production will fall. If there is a particularly good year for growing, the Egyptians will accumulate a large stock of food which might well be traded or stored. I suspect that these real shocks would have tremendous impacts on production, consumption, work, storage and so on, and the RBC model might provide an interesting guide as to what patterns one might expect in the data. (If there is an enterprising student out there who has an idea of where we could find some actual data on production, etc. for ancient Egypt, send me an e-mail, I would love to write this paper with you … )
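The Nile thought experiment can be made concrete with a toy storage model. In the sketch below (every parameter is invented for illustration), harvests alternate between flood-driven feast and famine and a crude rule carries grain between years; the result is exactly the RBC-style prediction that consumption fluctuates less than the underlying real shock.

```python
# Toy "real shock" economy: fluctuating harvests, grain storage,
# consumption smoothing.  All numbers are invented for illustration.

DEPRECIATION = 0.1   # fraction of stored grain lost to spoilage each year
SAVE_RATE = 0.5      # fraction of any surplus above the mean carried forward
MEAN_HARVEST = 100.0

harvests = [150.0, 50.0] * 5   # alternating Nile flood shocks
stock = 0.0
consumptions = []
for harvest in harvests:
    resources = harvest + stock
    # Crude smoothing rule: store part of any windfall above the mean
    # harvest; eat through the stocks in bad years.
    stored = max(0.0, SAVE_RATE * (resources - MEAN_HARVEST))
    consumptions.append(resources - stored)
    stock = stored * (1 - DEPRECIATION)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Storage smooths: consumption fluctuates much less than the harvest.
assert variance(consumptions) < variance(harvests)
print(f"harvest variance: {variance(harvests):.0f}")
print(f"consumption variance: {variance(consumptions):.0f}")
```

A real RBC treatment would replace the ad hoc saving rule with an optimizing planner, but even this caricature reproduces the qualitative pattern one would look for in Egyptian data.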

A Faustian Bargain?

Reflecting on a recent blog post by Simon Wren-Lewis, Paul Krugman argues that the modern insistence on microfoundations has impoverished macroeconomics by shutting down early understandings of financial markets “because (they) didn’t conform to a particular, highly restrictive definition of what was considered valid theory.”  In Krugman’s libretto, the role of Mephistopheles is played by “freshwater” macroeconomists. 

Krugman uses James Tobin as an example of one of the casualties of adopting freshwater methodology saying that as far as he could tell, Tobin “disappeared from graduate macro over the course of the 80s, because his models, while loosely grounded in some notion of rational behavior, weren’t explicitly and rigorously derived from microfoundations.” Tobin has not disappeared. In my course for instance, Tobin shows up in the section on investment, which is centered around Tobin’s Q (my co-author Matthew Shapiro constantly emphasizes that it should be called Brainard-Tobin’s Q).  My students (and any graduate student familiar with David Romer’s Advanced Macroeconomics) are well aware of Tobin’s role in this line of work. Tobin’s early ideas on Q-theory were sketches – plausibility arguments – which were subsequently developed in greater detail by Andy Abel, Fumio Hayashi and Larry Summers (and also Michael Mussa).

Adopting microfoundations does come with a cost. As I mentioned in a previous post, being precise and exact prevents economists from engaging in glib, hand-waving theorizing. Many analysts (and commentators) see this as a serious limitation.  Using this methodology also has advantages. Being specific allows you to (1) make the theory clear by exposing the necessary components, (2) quantify the effects by attaching plausible values to parameters and (3) learn from the model. This last advantage is one of the biggest benefits of microfoundations.  Setting out a list of assumptions and then following them where they lead may expose flaws in your own understanding; it may lead you to new ideas, and so on. Let me give you two examples.

Suppose someone says that if demand goes up, prices will fall. Here is their argument: if demand goes up, the price is bid up. The price increase reduces demand and so ultimately the price falls. Every statement in this argument sounds reasonable but the conclusion is incorrect. The way to find the mistake is with a model – in this case a supply and demand model. (The error is a confusion of movements along a demand curve versus shifts in the demand curve.)
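The supply-and-demand resolution of that argument is easy to verify directly. In a linear sketch (made-up coefficients, chosen only for illustration), shifting the demand curve out raises both the price and the quantity; the “price increase reduces demand” step confuses a movement along the new demand curve with a shift of the curve back down.

```python
# Linear supply and demand, illustrative coefficients only.
# Demand: Qd = a - b*P   (a shifts out when "demand goes up")
# Supply: Qs = c + d*P
# Equilibrium: a - b*P = c + d*P  =>  P* = (a - c) / (b + d)

def equilibrium(a, b=1.0, c=0.0, d=1.0):
    p = (a - c) / (b + d)
    q = c + d * p
    return p, q

p0, q0 = equilibrium(a=10.0)   # initial demand curve
p1, q1 = equilibrium(a=12.0)   # demand shifts out (a rises)

# Both price AND quantity rise: the higher price is a movement along
# the new, higher demand curve, not a shift of demand back down.
assert p1 > p0 and q1 > q0
print(f"price: {p0} -> {p1}, quantity: {q0} -> {q1}")
```

The model forces you to keep track of which object moved (the curve) and which object responded (the position on the curve), which is exactly where the verbal argument went wrong.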

Here is another example. In the traditional IS/LM model, investment demand is assumed to depend negatively on the real interest rate. This assumption is important for the functioning of the model – it makes the IS curve slope down. The assumption itself is based on a slight confusion between the demand for capital and the demand for investment. What would happen if we added some microfoundations? Suppose we removed the ad hoc investment demand curve and instead required that the marginal product of capital equal the real interest rate (the user-cost relationship).  In this case, there would be a positive relationship between output and the real interest rate (the IS curve would slope up! Higher output would require more employment, which would raise the marginal product of capital and raise the real interest rate). An increase in the money supply would cause the real rate (and the nominal rate) to rise. How should we interpret this? One interpretation is that we need to think a bit more about the investment demand component of the model. An alternative reaction would be to say “I know that the original IS/LM model is right; I don’t need the microfoundations; they are just preventing me from getting the right answer.”
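The upward-sloping IS logic can be checked with a Cobb-Douglas sketch (parameter values are mine, chosen only for illustration). With Y = K^α·L^(1−α) and the capital stock fixed in the short run, higher output requires more labor, and more labor raises the marginal product of capital, MPK = α·Y/K; if the real rate must equal the MPK, higher output goes hand in hand with a higher real rate.

```python
# Tobin-style exercise with illustrative parameters: replace the ad hoc
# investment demand curve with the user-cost condition r = MPK.
# Production: Y = K**alpha * L**(1 - alpha), with K fixed in the short run.

ALPHA = 0.33
K = 100.0

def output(L, k=K, alpha=ALPHA):
    return k ** alpha * L ** (1 - alpha)

def mpk(L, k=K, alpha=ALPHA):
    """Marginal product of capital: alpha * Y / K for Cobb-Douglas."""
    return alpha * output(L, k, alpha) / k

# Higher output means more employment, and more labor raises the MPK:
low_L, high_L = 50.0, 80.0
assert output(high_L) > output(low_L)
assert mpk(high_L) > mpk(low_L)
# With r pinned down by the MPK, output and the real interest rate move
# together: the "IS curve" from this microfoundation slopes UP, not down.
```

Nothing here says the user-cost condition is the right short-run model of investment; the point is that writing the assumption down makes its consequences unavoidable.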

Who came up with this twisted version of the IS/LM model you might ask? Wait for it …

…yep … James Tobin. (1955, see Sargent’s 1987 Macroeconomic Theory text for a brief description of Tobin’s “Dynamic Aggregative Model.”)

Even today, when we analyze the New Keynesian model, it is often done without any investment (this is like having an IS/LM model without the “I”). Adding investment demand can sometimes result in odd behavior. In particular you often get inverted Fisher effects in which monetary expansions are associated with higher output but strangely, higher real interest rates and higher nominal interest rates.  (If you teach New Keynesian models to graduate students I would encourage you to take a look at Tobin’s model.)

It seems that Paul Krugman wants to revise the history of the field a bit. Reading his post it almost seems like he wants us to believe that the Keynesians would have figured out financial market failures if they hadn’t been led astray by microfoundations and rational expectations. This is not true. The main thing New Keynesian research has been devoted to for the past 20 years is an exhaustive study of price rigidity. If anything was holding us back it was the extraordinary devotion of our energy and attention to the study of nominal rigidities. We now know more about the details of price setting than any other field in economics. As financial markets were melting down in 2008, many of us were regretting that allocation of our attention. We really needed a more refined empirical and theoretical understanding of how financial markets did or did not work. 

The Latest Macro Dust-Up

There have been several blog posts commenting on Kartik Athreya’s book, Big Ideas in Macroeconomics: A Nontechnical View, and I wanted to make a couple of passing remarks pertaining to the blog posts I’ve read. I haven’t read the book yet, and to be completely honest, I’m not sure I will ever get to it given the huge pile of work I have. I will not discuss the book itself. Instead I’ll focus on some of the noteworthy remarks made by bloggers.

Overall, there seems to be lots of misplaced DSGE hand-wringing going on. I think one of the main reasons that some economists dislike DSGE models is that they place limitations on our ability to engage in hand-waving theorizing. “Microfoundations” is really just code for saying that we want you to be specific and clear. Researchers who try to brush by important parts of their business cycle theories will have a really tough time if they cannot provide the details that accompany their hand waving. As soon as a researcher appeals to some arbitrary relationship which comes out of thin air, they will immediately feel pressure to back up that component of their theory. Unlike many macroeconomists, I am willing to let people make use of plausible ad hoc theoretical relationships provided that they come with an acknowledgement that this is unfinished work which needs to be filled in later. In the past, wise old economists could get by making sweeping statements about the nature of business cycles and the correct policy cocktail they thought would save the day without having to spell out what they really meant. Today, macroeconomists are typically not granted this latitude.

In his comment, Noah Smith argues that many macroeconomists are “in love with modern macro methodology.” I think Noah is partially correct. It’s true that there are many economists (not just macro guys) who focus far too much on the tools we use to analyze problems rather than the problems themselves. In the end, tools are only valuable if they are put to good use. My own view is that grad students need to have a solid grasp of fundamental / basic tools so they can get started. But, after learning these basic tools, they should not go out of their way to learn more advanced tools unless they have a specific need to do so. There are other people who feel quite differently and I can appreciate this alternate view even if I don’t ultimately agree with it. On the other hand, I’m not so sure I know what Noah means by “macro methodology.” The techniques used by macroeconomists are for the most part used in every area of economics. Dynamic programming is used in labor and IO as well as macro. Bayesian estimation, maximum likelihood techniques and so on are used by most fields. General equilibrium analysis is again used throughout economics. There are some tools which are used almost solely by macroeconomics (the Blanchard-Kahn decomposition comes to mind) but I don’t think this is what he has in mind. Perhaps it is the conjunction of so many common elements that he associates with DSGE models. For instance, there is a good deal of “boilerplate” which shows up in DSGE models (the representative agent, the production function, the capital accumulation equation, and so on). It might be interesting to hear exactly what he views as techniques which are primarily in the domain of DSGE research which he questions.

John Quiggin takes the opportunity to eulogize some of the modern research which he feels met its end during the financial crisis. He writes that “(t)he crisis that erupted in 2008 destroyed (the) spurious consensus (in macroeconomics).” There might be some truth to this statement as well, though I’m not entirely sure what class of theories he has in mind. I suspect that he thinks that the crisis undercut real business cycle models or perhaps rational expectations models more generally. I don’t think this is the case. The productivity shocks at the heart of standard real business cycle models have not been viewed as plausible sources of business cycle fluctuations for quite a while, and rational expectations theories are most likely here to stay. If there is a model that really got taken to the woodshed during the financial crisis it was the New Keynesian model, which had, until then, occupied a clearly dominant position in policy discussions and academic research. The “New Old Keynesians,” as he calls them, aren’t having a much better time. They also don’t have a good framework for understanding the financial crisis (there is no meaningful financial sector in the traditional IS/LM model) and their versions of the supply side of the models aren’t doing very well at all. Quiggin might be referring to the absence of Keynesian demand channels in many DSGE models. Here he might have more of a case. Getting traditional Keynesian demand models to work the way they ought to is not easy. Again, the mechanics of the DSGE approach might be limiting us. There are lots of background assumptions made in most DSGE models which play important roles in how the models function and which likely prevent Keynesian swings in aggregate demand from occurring (though Roger Farmer’s work is overtly pushing in this direction). The basic Walrasian supply and demand framework is surely one of the key features of DSGE models which limits Keynesian channels.
A worker who cannot find employment can simply lower his wage. A firm that cannot sell its goods can simply lower the price if it needs to sell. Of course, sticky prices and sticky wages are the standard refrain, but this ties the model to observations in which recessions should be accompanied by deflation (which really didn’t happen during the Great Recession). I personally have some faith in the basic Keynesian demand story but we don’t really have a good usable version of it at the moment.

Paul Krugman also weighs in with a short comment. He makes two remarks which are noteworthy. First he says that DSGE models are, “basically, the particular form of modeling that is more or less the only thing one can publish in journals these days.” This is simply not true. Second, his closing remark is (to me) somewhat cryptic:

I think, that somebody is going to end up in the dustbin of history. I wonder who?

I confess, I really don’t know who or what he is talking about when he writes this.

More to come I’m sure …

The Fight Against Spinal Muscular Atrophy

As many of you know, my daughter, Abigail, has Spinal Muscular Atrophy (SMA).  SMA is a terrible genetic disorder which affects children all over the world.  Kids with SMA lack a gene which produces a protein necessary for the proper development of motor neurons.  Because the motor neurons are underdeveloped, there is inadequate communication between the brain and the muscles and, as a result, kids with SMA have extremely weak muscles throughout their bodies.  They can’t walk.  Many cannot sit upright by themselves or hold their heads up on their own.  They sometimes have difficulty chewing and swallowing food.  They often have very weak coughs, and weak coughs are a major source of health problems for these kids.  Many children with SMA get serious pneumonias which require prolonged stays in hospitals.  No one really thinks about coughing much but it is a very basic mechanism for fighting off respiratory illnesses.

About 1 in 40 people are carriers of the genetic defect which causes SMA.  (You read that right, 1 in 40.)  The gene is recessive, however, so for a child to have SMA, he or she needs to get the bad gene from both parents.  Roughly 1 in every 10,000 children has SMA, making it a “common” rare disorder.  There are three types of SMA.  Type I is the most severe.  Type II is the intermediate type and Type III is the least severe.  Abbey is a strong type II and she is actually in fairly good health overall.
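The arithmetic behind these incidence figures is straightforward: with a 1-in-40 carrier frequency, both parents are carriers in 1 of every 1,600 couples, and each child of two carriers has a 1-in-4 chance of inheriting both copies.  The back-of-the-envelope sketch below works this out; note that it gives roughly 1 in 6,400 births, somewhat higher than the cited 1-in-10,000 figure (reported carrier frequencies vary by population, which accounts for much of the gap).

```python
from fractions import Fraction

# Recessive-inheritance arithmetic for SMA.  Back-of-the-envelope only;
# reported carrier frequencies vary across populations.

carrier = Fraction(1, 40)          # chance a random person is a carrier
both_parents = carrier * carrier   # chance both parents are carriers
affected_child = both_parents * Fraction(1, 4)  # recessive: 1-in-4 risk

print(both_parents)    # 1/1600
print(affected_child)  # 1/6400

# A carrier frequency of 1 in 50 instead gives roughly the observed
# 1-in-10,000 incidence:
alt = Fraction(1, 50) ** 2 * Fraction(1, 4)
print(alt)             # 1/10000
```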

The good news about SMA is that its days are numbered.  Scientists know quite a bit about SMA – they know the exact gene which causes the problem; they know exactly the type of protein which needs to be synthesized to encourage motor neuron development and so forth.  The medical community senses that they are close to cracking this problem and SMA attracts lots of researchers as a result.  With any luck, I will see SMA eliminated in my lifetime. 

Every year, my wife, Melissa, helps raise money for SMA research through a Walk-and-Roll fundraiser for Families of Spinal Muscular Atrophy (FSMA) in Abigail’s name.  If you would like to donate to the cause you can click on the link below.

If you are a graduate student (either at Michigan or anywhere else) then I would suggest that you don’t give money.  I understand the permanent income hypothesis but the reality of grad student life is one of poverty.  I suggest that you “pay it forward” and donate to a worthy charity once you are done with your degree and have a job.  If you are a graduate student in my class, please do not donate, as I think this would create a conflict of interest.

If you follow the link to Abbey’s site, you will find a picture with my wife and two kids (Sam and Abbey).  The fat non-photogenic person standing behind them is me.

Brad DeLong Blows a Gasket

In an all-too-common gesture of blog civility, Brad DeLong has decided that I’m an idiot because he thinks I’m asking for some sort of balance in viewpoints on economics regardless of the soundness or lack thereof of the views.  Put differently, it seems that Brad thinks that my earlier post was saying that we should grant some credibility to Prescott’s outlandish claims because Paul Krugman has said some outlandish things before.

That is not at all what the earlier post was saying. The only substantive economic statement in the post dealt with the effects of monetary policy. Here’s exactly what I said regarding that statement: “Prescott is wrong.” Now, I realize that in any statement there is always some room for interpretation but there doesn’t seem to be a lot of room in that statement.  

The main point of my earlier post is that we should think a bit more before immediately turning to Nobel laureates for policy analysis. We should ask ourselves why we are soliciting their opinions. It could be that we turn to them because “hey, they won a Nobel Prize; let’s see what they have to say.” It could be that we turn to them because we think “oh, they have a Nobel Prize and therefore they are going to know better than most other economists about economic issues.”  I worry that some people think that the laureates know better about all economic issues than people who haven’t won a Nobel Prize, or people who just work in the industry or in the private sector.

Now, unquestionably, when you consider the Nobel laureates, you are looking at some of the smartest people in our field.  If you want to get an opinion, talking to a smart person is definitely not a bad thing to do.  But it doesn’t mean that it is the best thing to do. Again, let me use Paul Krugman as an example.  I’ll try to be very careful to avoid offending Brad’s delicate sensibilities.  I’m not using Krugman as an example because I think his analysis is particularly flawed (or flawed at all). I’m using him as an example because he is very prominent in the popular press and because he finds himself in this position a lot. Krugman is asked to weigh in on all sorts of areas. For instance, Paul Krugman has said many things about health care policy. That’s fine. Paul is really smart and he knows a lot about economics.  Moreover, I think he tries hard to keep up to date on the details of current policies. You could do a lot worse than asking Paul Krugman for his opinion. That said, Krugman doesn’t really have any particular expertise about the economics of health policy.  (At least I don’t think he does.) There is nothing wrong with getting the opinions of a smart informed economist but that’s really all you are getting when Krugman starts talking about health policy. The same thing is true when he talks about the minimum wage. Krugman knows about the literature on the minimum wage; I’m sure he is very interested, as most economists are, in the research on the minimum wage but he’s not an expert on that area. In any case, he is the one who is asked to write about the minimum wage. Why? Because he has a Nobel Prize. 

Interestingly, in Brad’s post he inadvertently gives a very good example of what I’m talking about.  He presents an example of Robert Lucas talking to the Council on Foreign Relations on issues surrounding bailouts and the financial crisis (and also testifying to Congress about fiscal policy). Like Krugman, Lucas is also very, very smart.  In fact you could argue that Lucas has looked deeper into the field than any other economist in the latter half of the twentieth century. (There is really no way around it – Lucas’s work is incredibly significant to the development of modern economics.) In any case, there he is talking to the Council on Foreign Relations about bailouts. Why? Does he know anything about bailout policies? Not really. But then why is Lucas being asked to give his opinion?  Because he has a Nobel Prize.

Noah Smith and Matthew Yglesias on Conservative Bias

Both Noah Smith and Matthew Yglesias have weighed in on my earlier post in which I speculated that one reason that economics faculty are sharply more conservative than faculty in other departments on college campuses is that the facts and analysis in economics tend to lead to somewhat more conservative interpretations of the world.  The posts by Noah and Matthew both make good points and I wanted to comment on them briefly before moving on.

First, I should point out that my original post was really about the different political viewpoints of college professors, not typical members of the Democratic or Republican Party.  Both Noah and Matthew point out correctly that the facts roughly agree with what an informed policy analyst would believe (whether a moderate Democrat or a moderate Republican).  Despite the usual public disputes between Republicans and Democrats, better informed members of these parties (e.g., people like Peter Orszag and Douglas Holtz-Eakin, or even Paul Krugman and Greg Mankiw) would actually have surprisingly similar policy assessments.  The range of differences in policy recommendations among mainstream economists is overall pretty narrow.  This is probably due in part to the disciplining nature of quantitative facts.

In Noah’s response, he says that in the 1980’s,

conservatives moved away from the facts because they could, and liberals moved closer to the facts because they had to.

I think this is correct though the question of why this occurred is left open.  It’s possible that the electorate simply became more conservative and this forced both parties to realign.  Alternatively, it could be that the accumulation of facts may have forced liberals to adopt a more conservative tone as our understanding of the world was refined.  Republicans in contrast could afford to adopt a less realistic view that was even further to the right. I thought initially that Noah’s quote was more like the latter view but I’m not sure…

Both Noah and Matthew suggest that there are many facts which fit better with modern liberal agendas.  Matthew mentions a variety of standard market failures that are typically highlighted in “Econ101” texts.  I’m not sure I entirely follow his line of reasoning – my original reaction was that he seems to be agreeing with Noah’s statement above.  That is, the modern liberal approach to economic policy is to start with a basic free-market approach and then perhaps use corrective (Pigouvian) taxation to improve the allocations as needed (is Matthew channeling his inner Greg Mankiw?).  You won’t find many economists who would take issue with this view, but at the same time it seems like he is conceding the point — start with Milton Friedman and then season lightly with a little bit of Paul Krugman?

Noah points out that because many of the incentive effects are small, liberals are tempted to ignore them in pursuit of their goals.  The problem with this argument is that it applies equally well to jelly doughnuts and cigarettes.  One jelly doughnut, or one cigarette, does no appreciable health damage.  This doesn’t mean that you can eat jelly doughnuts (or smoke cigarettes) without worrying about their long-term health consequences.  If you pile on tax after tax, impediment after impediment, and so on, then the costs will end up being relatively high even if the costs associated with any one regulation are small.

Noah also points out that wealthier nations typically have greater fractions of GDP allocated by the government.  The implication, I guess, is that this is a sign that more government involvement can’t be that bad if the wealthier countries are opting for it.  I don’t think I agree with this statement.  It’s true that most European nations have greater government involvement in their economies but it is also true that, on the whole, these nations are not as productive as the U.S.  Many prominent European nations (e.g., UK, Italy, France, Germany, Spain) have per capita levels of GDP which are roughly 65 – 75 percent of U.S. per capita GDP.  This is a comparison of average GDP per capita.  I would anticipate that median GDP per capita is much closer (since income distributions are much more balanced in Europe on average), but even then, these economies are starting out with much less average output per person to allocate (these measures include government provided goods and services).

I would encourage Noah and Matthew (and anyone else who’s interested) to take a look at the papers by Djankov and Shleifer (and many coauthors) linked below.*  These papers present an empirical analysis of economic performance across nations together with measures of corporate taxation, labor market regulation and regulation of business entry.  Of course, making such a comparison is difficult – there is no sense in which we can take these measures as causal.  Even more problematic, there are many variables that are changing simultaneously.  There are hundreds of policy differences, cultural differences, etc.  This will make interpreting the results extremely difficult.  Nevertheless, there are noteworthy patterns in this data and the basic message is not encouraging.

From the abstract of the corporate tax paper:

[High corporate tax rates have] a large adverse impact on aggregate investment, FDI, and entrepreneurial activity. … Corporate tax rates are also negatively correlated with growth, and positively correlated with the size of the informal economy.

From the abstract of the paper on regulation of business entry:

Countries with heavier regulation of entry have higher corruption and larger unofficial economies, but not better quality of public or private goods. Countries with more democratic and limited governments have fewer entry regulations.

From the abstract of the paper on labor regulation:

richer countries regulate labor less than poorer countries do, although they have more generous social security systems. The political power of the left is associated with more stringent labor regulations and more generous social security systems. … Heavier regulation of labor is associated with a larger unofficial economy, lower labor force participation, and higher unemployment, especially of the young.

Of course, many European nations are trying to undo some of the policies they have adopted over the years because they are looking at the same data we are looking at.  Unfortunately, we don’t really know which particular policy (or policies) is to blame for stagnant labor markets, but this is little comfort to a nation with low labor force participation, low productivity and high unemployment.  Hopefully the more mature liberalism that Noah argues now characterizes the center left in the U.S. has learned from the experience of its forerunners.

*These papers are a bit out of date.  The most recent is from 2008 and the earliest dates back to 2000.  There are probably more up-to-date papers on these topics.  If there is an expert who wants to weigh in, I would be interested in hearing about more current work.

A Well-Known Liberal Bias

Professors on America’s college campuses are politically quite liberal compared with the general public.  A 2005 study that was discussed widely in the popular press at the time reported that perhaps as many as 70 percent of college professors were self-described liberals.  A similar 2007 study by Neil Gross and Solon Simmons reported that roughly 10 percent of faculty members described themselves as far left while virtually none self-identified as far right.  The Gross and Simmons study also showed that in the Social Sciences, roughly 17 percent of the faculty identify as Marxists. 

It is not obvious why American college faculty members have such strong liberal political views.  Many conservatives argue that it is due to discrimination in hiring and that students receive a sharply biased education as a consequence.  I don’t think that the intellectual capacities of conservatives are below those of liberals, though I suppose that could be a possible explanation.  My own guess is that it is due to selection on attitudes toward accepting received wisdom as truth.  As I mentioned in a previous post, academics are encouraged to break new ground or even to overturn established ideas.  Einstein is an excellent example of such a thinker.  So are Stephen Hawking, John Maynard Keynes, Alfred Blalock, etc.*  Deliberately setting out to tear down established parts of your field requires a certain mindset, and this mindset might be more common among people who have liberal political views.  (While it is clear that faculty on American college campuses are quite liberal, I am not convinced that we need to do anything about it.  If the best chemists and physicists also happen to be Democrats then so be it.  Particularly if they confine their teaching to their area of expertise, I wouldn’t think political bias by itself should be a problem.) 

Another possibility is that, as Paul Krugman has written on a few occasions, “the facts have a well-known liberal bias.”  I’m not entirely sure I know what he means by this.  It could be that Krugman is thinking mostly about the tension between facts and far-right positions.  Creationist ideas don’t really survive close contact with the facts.  Extreme supply-side economic ideas don’t either. 

The 2005 study presented results on political views by department.  As a field, economics ranks as one of the least stereotypically liberal, and most conservative, fields on campus.  While I cannot think of a single Republican in my own department (and no, I’m not a Republican), it is quite common to hear faculty members emphasizing the benefits of limited regulation, the gains from trade, and the harm caused by market intrusions like minimum wages, capital taxation and import tariffs.  I am quite sure that many of the economists in my department would be viewed as radical right-wing conservatives by members of other departments at Michigan.  Let me use Paul Krugman, a self-proclaimed liberal, as an example again.  If you read Peddling Prosperity or Pop Internationalism you will find him touting the virtues of trade and the folly of the poorly thought-out market interventions being discussed at the time.  Krugman is not an outlier either.  My colleague Miles Kimball’s blog is titled Confessions of a Supply-Side Liberal – a title that discloses both his liberal motives and his more libertarian/supply-side/conservative perspective. 

I suspect that the relatively high proportion of conservative views among economics faculty is largely due to contact with the data and with standard economic analysis.  In economics, the facts and the analysis seem to have much more of a conservative slant than a liberal one.  It really is true that taxing labor income reduces labor supply (a little).  It really is true that extending unemployment benefits encourages people to delay looking for a job (a little).  It really is true that taxation can reduce the demand for labor; that excessive business regulation seems to be correlated with reduced levels of business formation; that union concentration has a detrimental effect on industries; and on and on.  Moreover, the theoretical analysis seems to fit with the observations.  Now of course, a liberal’s response to this would normally be “OK, perhaps these effects are there in the data, but the magnitude of the effects is usually pretty small.”  That’s often true.  It’s certainly true with the minimum wage.  At its current level, and I suspect at the proposed new $10.10 level, the effects of the minimum wage on employment will probably be modest.  Republicans tend to describe the minimum wage as though it is a “job killer” that could cripple the economy – a description which is completely over the top.  That type of exaggeration is not really serving them well, and it might be what Krugman has in mind when he says that the facts have a liberal bent. 

Nevertheless, many basic empirical patterns and basic economic ideas have a fairly conservative profile.  I suspect that one of the major reasons why faculty in economics departments are so much more conservative than faculty in other departments is that they are constantly confronted with these facts and with reasoning that runs counter to stereotypical liberal positions, and it is difficult to maintain these positions when one is faced with the analysis in the field.  Economics constantly impresses on its students the benefits that flow from allocations guided by self-interest while stressing the lost opportunities associated with poorly planned government interventions, even when the intentions are good. 

The 2007 study’s findings about Marxism are notable.  I suspect that the Marxists in the Social Sciences and elsewhere on campus are, for the most part, not in economics departments.  Marx is just not taken very seriously by economists anymore and hasn’t been for quite some time.  In most economics departments you are much more likely to find the ideas of Friedman than of Marx.  (And that’s a good thing.) 

Of course, not all free market ideas fit neatly into the standard conservative talking points.  Economists almost surely are more open to legalizing drugs, gambling, prostitution, etc.  They are also more likely to see the benefits that come from allowing free immigration.  None of these are staples of conservatism.  

On the whole however, it would seem like the “facts” in economics don’t really have a strong liberal bias.  On balance, they probably break the other way.  

* OK, Alfred Blalock seems a bit out of place on this list.  Who is Alfred Blalock, you might ask?  He is actually one of the pioneers of modern heart surgery (though I understand that the famous procedure named after him – the Blalock-Thomas-Taussig shunt – isn’t, technically speaking, heart surgery).  The reason I know about Blalock is that I recently saw the movie “Something the Lord Made,” which is about Blalock and the legendary surgical technician Vivien Thomas, who was instrumental in developing the procedure.  I *highly* recommend this movie.  The performances by Alan Rickman and Mos Def are both outstanding, and the movie provides a compelling look at both race relations at the time and medical history.