Cripes, Maybe I’m in the Echo Chamber …

Paul Krugman is not happy with my post on echo-chamber etiquette for America’s public intellectuals.  In that post I argued that, as widely read public intellectuals, Krugman and Noah Smith have an obligation to present a description of the world that is as accurate as possible, and that they have an even greater responsibility to liberal readers who are attracted to ideas that fit well with their preconceptions about the nature of reality.

Krugman points out that he wasn’t really attacking Sargent but rather the commentators who praised Sargent’s commencement address.  Fair enough.  Paul is not attacking Sargent himself, and I think he’s justified in pointing out that it is curious that such favorable commentary is being highlighted now, when the U.S. economy is still in a weakened state and when problems with income inequality are finally getting the attention they deserve.

He also suggests that the right-wing echo chamber is worse than the left-wing echo chamber.  He writes:

America, it goes without saying, has a powerful, crazy right wing. There’s nothing equivalent on the left — yes, there are individual crazy leftists, but nothing like the organized, lavishly financed madness on the right.

I don’t think I would put it quite so strongly, but again I think Krugman is basically correct – the “crazy right wing” has assumed too much influence over broad swaths of conservative America, and this is surely a bad thing. The left, thankfully, is not at that point yet.

This, however, gets at what I was trying (perhaps unsuccessfully) to say in my earlier post.  The current condition of the right is the inevitable consequence of an environment in which crazy ideas are bandied about cavalierly and amplified to the point at which they go unquestioned.  I think this was also the point of Nicholas Kristof’s NYT column.

Now, neither Noah’s post nor Krugman’s is crazy.  Neither is “wrong.”[1] But neither delivers the message that their more liberal readers need to hear.  To me it seems obvious that the proper response to an extreme right-wing statement like “tax cuts pay for themselves” can’t be “oh yeah, well, extra government spending pays for itself.” If I could dictate the reading habits of my fellow Americans, I might put Piketty’s Capital in the Twenty-First Century on the bedside table of every conservative, but I would put Sargent’s 12 principles on the bedside table of every liberal.

[1] Krugman points out (again correctly) that there may be efficiency gains from having a more equal distribution of income. This is true, though I strongly suspect that the overall efficiency costs of achieving a more desirable income distribution will outweigh these benefits. Of course, I also strongly suspect that, if we are careful about how we go about redistributing income, the overall gains in welfare from a more equal income distribution will outweigh the loss in efficiency.

Watch What You Say in the Echo Chamber

In a recent blog post, Paul Krugman points out that there is no conflict between “standard economics” and concern about growing income inequality.

You can be perfectly conventional in your economics […] while still taking inequality very seriously.

This is absolutely correct and it’s a stance which economists need to embrace more often than they do. In an earlier post I argued that concern about income inequality is legitimate irrespective of how you think the economy works. You don’t need additional justification to desire a more equitable distribution of income or well-being. I personally count myself as an economist who more or less accepts the basic principles of the field, but who also recognizes that dealing effectively with the current state of income inequality is simply a necessity.

However, while Krugman is quite right in this case, both he and Noah Smith have made some remarks which I think liberals should never make. The remarks came in response to a commencement speech by Tom Sargent that has attracted a surprising amount of online commentary. In the speech, Sargent outlined 12 principles of economics that he felt college graduates should know. The principles were pretty uncontroversial and probably a good thing for Berkeley undergraduates to hear before they go out into the real world. But one of Sargent’s principles struck a nerve:

There are tradeoffs between equality and efficiency.

Paul Krugman took issue with this remark, saying that reducing income inequality might actually increase economic growth and that (for an economy at the zero lower bound) government spending “more than pays for itself.” He closes his remarks by saying that those commenting favorably on Sargent’s commencement address are simply advancing “anti-Keynesian propaganda, cloaked in the form of a widely respected and liked economist uttering what sound like eternal truths.” [1]

Noah Smith similarly found fault with the equality-efficiency remark. He writes that it’s not generally true that there is such a tradeoff, and he cites the Second Welfare Theorem as justification. As he goes through the list, Noah remarks that he can “start to see a policy implication emerge from the list” and that in the end it all shapes up to be “one big caution against well-meaning government intervention in the economy by do-gooding liberals concerned about promoting equality and helping the poor.”

I understand the reactions by Noah and Paul. I just wish they wouldn’t succumb to the temptation to write this stuff. The truth is that most of the principles of standard economics carry with them a fairly conservative / neo-liberal message. I know there are exceptions, but they are just that – exceptions. If we want to really attack the problem of income inequality (promote equality and help the poor), then we are going to have to take stuff away from richer people and channel it to poorer people. This kind of action will most likely have consequences for markets, and these consequences will be unsavory. Paul and Noah could argue (quite strongly I suspect) that these costs might be relatively small, but they should not act as though they really think there are no costs. (In case you’re wondering, the Second Welfare Theorem says that if we can costlessly transfer resources across individuals then any efficient outcome can be supported as a market equilibrium…)
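For readers who want the theorem itself rather than my parenthetical paraphrase, here is a rough textbook statement (the exact regularity conditions vary across treatments):

```latex
% Second Welfare Theorem, rough textbook statement (regularity conditions vary):
\text{If } x^{*} \text{ is Pareto efficient and preferences and production sets are convex,}
\text{ then there exist prices } p \text{ and lump-sum transfers } T_1,\dots,T_I,\ \textstyle\sum_i T_i = 0,
\text{ such that } (x^{*}, p) \text{ is a competitive equilibrium with transfers.}
```

The entire bite of the caveat is in the words “lump-sum”: real-world redistribution works through distortionary taxes and transfers, which is exactly where the efficiency costs come from.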

If you are a liberal, let me give you a sense of why this is a costly statement to make. Suppose one of Sargent’s principles included a public finance tradeoff: there is a tradeoff between low tax rates and high tax revenue. A conservative might take issue with this claim and write ‘you know, this isn’t necessarily true. According to some models, lower tax rates actually lead to more revenue because they encourage economic activity.’ The conservative would of course be “correct” in a very narrow sense (this is a theoretical possibility) but he or she would be offering a very tempting fiction to the audience and to the public – the idea that you can have everything you want at no cost. Neither liberals nor conservatives should make remarks like this. When they do, it invariably costs them credibility and serves to drive a further wedge between voters who are already too polarized.
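To see how narrow that theoretical possibility is, here is a stylized constant-elasticity example. The functional form, the base B_0, and the elasticity η are all hypothetical, chosen only for illustration:

```latex
% Stylized constant-elasticity example (B_0 and \eta are hypothetical).
% Tax base: B(t) = B_0 (1-t)^{\eta};  revenue: R(t) = t\,B(t).
R(t) = t\,B_0\,(1-t)^{\eta},
\qquad
R'(t) = B_0\,(1-t)^{\eta-1}\bigl[(1-t) - \eta\, t\bigr].
% A rate cut raises revenue only if R'(t) < 0, i.e. only if t > 1/(1+\eta).
% With \eta = 0.5, the current rate would have to exceed 2/3.
```

So the claim can be true, but in this example only at tax rates far above the ones the political argument is usually about, which is why treating it as a general proposition is a tempting fiction.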

Talking in an echo chamber can be fun, but public intellectuals like Paul and Noah have a greater responsibility to self-censor than most because they have large audiences. They have a responsibility to the public and also a responsibility to their liberal readers who take their statements to heart. The conservative echo chamber is probably worse than the liberal echo chamber (you can cut tax rates and raise tax revenue, cutting spending will stimulate the economy, the Affordable Care Act is going to cripple the economy, …) and conservatives have paid a hefty price as a consequence. They have boxed themselves into an intellectual corner that is going to be very difficult to escape from, largely because they have adopted a narrative filled with soothing fictions.

Paul’s second post is the correct path to take. Liberals can simply say “Yes, there are costs to redistribution, but it is in society’s interest to bear these costs to fix the problem of inequality.” Similarly, conservatives should say “Yes, there are problems with income inequality, but we have to be smart and careful about how we design transfers so as to avoid too much interference with normal market functions.”

Is that so hard?

[1] In the earlier post I misinterpreted Paul Krugman’s statement to be commenting directly on Sargent’s speech when in fact he was referring to people who were themselves commenting favorably on Sargent’s speech.  Hopefully the above correction makes clear Krugman’s actual intention. Thanks to those in the comments section who noted my mistake. CH

Fixing Terrible Economics Presentations

Most economics seminar presentations are terrible. Basically, we suck at presenting our work.  

There are many guides to improving presentations that you can find online. There’s a recent set of suggestions by Jesse Shapiro on Greg Mankiw’s blog. These guides provide reasonable, uncontroversial advice, and for the most part I agree with their suggestions (big font, lead with a question, have a bottom line, etc.). Here I’m going to make some suggestions that I don’t often hear others making and which some of you might find controversial.

These are suggestions for presentations. I’ll have a post directed at advice for seminar participants soon.

Before I get going let me acknowledge that I am not the best presenter in the world myself. However, I have completed step 1 on the path to recovery – I have recognized that I have a problem. In fact I have completed step 2 – I am trying to do something to fix the problem.  OK, here’s my list of suggestions: 

1. FINISH ON TIME. So many other problems flow from this one simple blunder that it’s hard to overstate how costly it is. By this, I don’t mean that you start panicking with 5 minutes left and then skip a dozen slides to get to your crappy conclusion slide. I also don’t mean talking really fast and blowing by each and every one of your 50 slides to make it to the end on time. What makes it even worse is that we usually have more time than we really need. Seriously, how many papers require more than half an hour to present adequately? Finish on time – plan to comfortably cover a reasonable amount of material in the time you have. It’s best if you are worried about the possibility of finishing ahead of time (it will probably never happen).

2. Be as clear and simple as possible. Most graduate students actually entertain the idea that what they have done is so simple that the audience might figure it out during the seminar. This is almost never true. Presenters have a completely distorted view of their own work. They have been doing detailed meticulous work “under the hood” for so long that they have managed to convince themselves that everything they’ve done is trivial. It’s not true. The audience is completely unprepared and it is the speaker’s obligation to explain the material clearly. 

Examples (with actual numbers) are an excellent tool for clarity. Don’t present the general theorem; present the example and mention the theorem.

3. Use fewer slides. I’ve seen some recommendations that you can cover at most one slide per minute. That would be a ridiculous pace. I suggest no more than one slide every 3 to 5 minutes. That means that for a 90-minute presentation you get between 18 and 30 slides. Closer to 18 would be best …

4. What should you cut out? Let’s start with the “Preview of Results” slide. This sorry excuse for a presentation tactic is becoming more and more common, but it’s really just a signal that you haven’t fixed problems #1 and #2. If you are going to cover the material clearly and on time, you don’t need the preview slide. [Think of how this would play in the movie The Sixth Sense. Bruce Willis meets Haley Joel Osment and then we get to the “Preview of the Results: Bruce Willis’ character is a ghost.”]

You can also cut out the “Related Literature” slide. It’s important for a researcher to be aware of the related work. This doesn’t mean that you should put up a reference list in your presentation.

5. Slide Content: Put as few words as possible on your slides. Don’t write sentences (an exception would be if you are going to put up a quote). Your slides are not your presentation notes! The stuff on the slides should complement what you are saying. It isn’t a substitute for what you are saying.

Don’t use bullet points or lists if you can avoid them.

Use equations and math sparingly. Be choosy about the equations you are going to present and plan on re-stating the meaning of the variables for each equation. Presenting the “boilerplate” of the model is typically not a good use of time. Focus on the stuff that is specific to your work. It’s almost a foregone conclusion that the audience won’t be able to remember the notation (even if you have a notation slide). Choose notation carefully (C is consumption, not capital; labor is N or L; marginal cost is MC; …). I’ve had presentations that were disasters simply because of my poor notation choices.

The best things for slides are pictures, graphs, scatter-plots …

6. You are giving a research seminar – Don’t be “cute.”

7. Software: I know “everyone” uses Beamer now. Beamer is another really bad feature of economics presentations. It’s as though someone went to PowerPoint, found the most drab, depressing format and then mandated that every economics talk had to be given in that style. The only thing Beamer has going for it is that if you write in LaTeX then you can enter the math easily. And this might not actually be a good thing (see #5 on math).

There’s a lot of stuff in Beamer that should simply be dropped. The drop-shadow behind the math panels, the little sphere bullet points, the little triangle bullet points, the terrible purple haze color palette that everyone uses, the slide counter (I think this is so the audience knows how many more terrible slides they will have to endure before the end), the ridiculous bubbles at the top which get filled in when you cover a section of the talk … it’s all hideous.

In fairness, the other options aren’t much better. PowerPoint is OK, but you have to avoid the pre-packaged formats or you will end up running into the same problems that Beamer has. There are other options that will hopefully get better over time: I’ve never tried Keynote but it might be worth a shot. There’s Prezi and similar web-based presentation software, … whatever. Heck, even chalk or a whiteboard. [1]

8. Speaking: Please don’t read your slides! (Remember, your slides are not your notes!). 

Try to eliminate “filler sounds” – uh, um, you know, like, right?, I mean, what do I want to say here?, what I’m trying to say is, sorta, kinda, … Silence is better than this kind of stammering. Have the opening sentences of the talk essentially memorized.

One trick for refining your speaking ability is to record yourself giving a talk. Count the filler words (you’ll be horrified). This will give you a sense of whether you have a problem.

Another trick is to try to emulate good speakers. Here are two clips of Sam Harris [clip1 clip2].  TRIGGER WARNING: Harris is a “new atheist” who frequently makes disparaging remarks about major religions; he is particularly critical of Islam. If you are offended by this type of discussion then you might want to find a different example. I’m including him here because he’s an excellent public speaker.

You’ll notice that he speaks in complete sentences with very few filler sounds. His eyes are always on the audience (most of the time he doesn’t even have slides). He speaks very slowly with deliberate pauses. The sentences and his delivery are carefully orchestrated (even though they sound improvised to an extent). A scientific presentation doesn’t have to be this smooth – Harris is a public / pop intellectual who is trying to get big picture ideas across to a general audience. This isn’t typically what we are doing in a research seminar but the techniques he uses still carry over to some extent.

 

[1] I was at a conference once where Randy Wright got up to give his discussion and he didn’t have any overheads prepared (this was back when people used overhead projectors). Instead, he decided to simply write his slides as he talked. It was the best discussion at the conference.

Are the Micro-foundations of Money Demand Important?

EC607 is rapidly coming to a close. I’ve finished the RBC model and now I am on to discussing nominal rigidities and New Keynesian Economics. This transition is always somewhat awkward because I have to say something about the demand for money.

Prior to the crisis, money demand had nearly disappeared from mainstream macroeconomics. This might seem strange since so much of macroeconomics involves money, but it’s true. In Advanced Macroeconomics, when money is introduced, Romer simply adds real money balances to the utility function and then moves on to tackle the more important problems of macroeconomics (price rigidities in this case). That’s basically it for money demand: one paragraph and an ad hoc addition to the utility function that is then never mentioned again.
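For concreteness, here is a generic money-in-the-utility-function setup of the kind Romer has in mind (the notation is mine, not his, and the budget constraint is suppressed):

```latex
% A generic money-in-the-utility-function setup (my notation, not Romer's).
\max_{\{C_t, M_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\!\left(C_t,\; \tfrac{M_t}{P_t}\right)
\quad \text{subject to the usual budget constraint.}
% The first-order conditions deliver a money demand schedule of the form
\frac{u_m(C_t, M_t/P_t)}{u_c(C_t, M_t/P_t)} \;=\; \frac{i_t}{1+i_t},
% so desired real balances rise with consumption and fall with the nominal
% interest rate (the opportunity cost of holding money).
```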

I think the reason for the marginalization of money demand is twofold. First, getting money into a neoclassical economic model is really tough. Fiat money (money that isn’t backed by anything with actual value) is simply not valued by market participants in a Walrasian setting. The Walrasian value of something that is intrinsically worthless … is zero. The fictitious Walrasian auctioneer is simply too nimble, too efficient to permit an equilibrium with valued fiat money. Getting money into these models (with any micro-foundations at all) requires that we open up some kind of “gap” in markets that leaves room for an unbacked currency.

There are models that do the trick. Researchers working on money demand often use frameworks that are descendants of the Kiyotaki-Wright (1993) matching model. This model imagines that all transactions take place through random matching between individual traders. Because the probability of a “double coincidence of wants” in the Kiyotaki-Wright model is low (it’s unlikely that you will bump into someone who wants what you have and has what you want), a fiat currency can circulate in equilibrium. More modern versions use an extension suggested by Lagos-Wright (2005) in which traders interact in two sub-periods: in one they meet bilaterally in an anonymous, decentralized trading stage, and in the other they trade in a frictionless centralized market.
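A crude way to see why the double-coincidence problem bites: suppose there are k types of goods and each trader’s produced good and desired good are drawn independently at random. This is a deliberately stripped-down caricature, not the Kiyotaki-Wright environment itself, but it shows that single coincidences occur with probability about 1/k while double coincidences occur with probability about 1/k²:

```python
# Stripped-down illustration of the double-coincidence problem (not the actual
# Kiyotaki-Wright model): with k good types and random tastes/technologies,
# single coincidences occur with probability ~1/k, double coincidences ~1/k^2.
import numpy as np

rng = np.random.default_rng(0)
k, n_meetings = 10, 200_000

# Column 0 is "me," column 1 is the trader I am randomly matched with.
produce = rng.integers(0, k, size=(n_meetings, 2))   # good each trader produces
want = rng.integers(0, k, size=(n_meetings, 2))      # good each trader wants

single = produce[:, 1] == want[:, 0]                 # partner makes what I want
double = single & (produce[:, 0] == want[:, 1])      # ...and I make what they want

print(f"single coincidence: {single.mean():.3f}   (1/k   = {1/k:.3f})")
print(f"double coincidence: {double.mean():.4f}  (1/k^2 = {1/k**2:.4f})")
```

With barter this rare, an intrinsically worthless token that everyone expects others to accept can improve matters, which is the sense in which fiat money circulates in equilibrium in these models.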

These are elegant models and they do capture elements of the motives behind holding money. However, most macroeconomists rarely use them, regarding them as simply too abstract to be useful. Their abstract nature also makes empirical analysis of these models extremely difficult. (Incidentally, if you are a graduate student looking for a research topic, I would encourage you to look outside of this area. It’s a very difficult area and it doesn’t sell very well on the academic job market. Search and matching in general is very hot right now but “money-search” is not.)

The second reason why money demand has been largely relegated to the sidelines is that there are moderately persuasive arguments that we don’t actually need to understand it to study the macroeconomy – even to study monetary economics itself. The argument goes something like this: the Federal Reserve conducts monetary policy in terms of a nominal interest rate target. Once it decides on the setting for the funds rate, it adjusts the money supply to enforce its target. The New Keynesian model is an excellent example of this approach. The simplest NK model has an equation governing the demand for goods and services, an equation governing inflation, and an equation describing the Fed’s operating rule. No mention is made of money supply or demand, and many (most?) macroeconomists are perfectly happy with this state of affairs. The possibility that we could avoid the issue of money demand is very attractive – particularly given the difficulties of successfully modeling money demand.
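To see how completely money drops out, here is one textbook version of that three-equation system (the generic formulation; notation and details vary across treatments):

```latex
% One textbook three-equation New Keynesian system (notation varies by author).
\begin{aligned}
x_t   &= E_t x_{t+1} - \tfrac{1}{\sigma}\bigl(i_t - E_t \pi_{t+1} - r_t^{\,n}\bigr)
        && \text{(demand for goods and services)} \\
\pi_t &= \beta\, E_t \pi_{t+1} + \kappa\, x_t
        && \text{(inflation: NK Phillips curve)} \\
i_t   &= \phi_\pi\, \pi_t + \phi_x\, x_t
        && \text{(the Fed's operating rule)}
\end{aligned}
% Here x_t is the output gap, \pi_t is inflation, and i_t is the nominal rate.
% Money balances appear nowhere: the money supply adjusts silently in the
% background to support whatever interest rate the rule dictates.
```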

Money demand may be making a comeback though. During the crisis, a lot of concern centered on malfunctioning markets for money substitutes. Recent work by Arvind Krishnamurthy and Annette Vissing-Jorgensen, Stefan Nagel, and Adi Sunderam emphasizes the liquidity aspects of many assets that are not traditionally considered “money.” Treasury bills, commercial paper, and highly rated securitized assets all have important liquidity components to their market values. In addition, many people think that the demand for liquid, low-risk securities encouraged the creation of more and more securitized subprime loans. Not having a suitable model for money (or money substitutes) seems like a particular shortcoming given recent history.

In 1978, there was an amazing conference at the Federal Reserve Bank of Minneapolis devoted explicitly to the study of the micro-foundations of money demand. The papers at the conference were later collected in Models of Monetary Economies.[1] In it, there is an interesting discussion by James Tobin, who writes in part:

Why does fiat money … have value? What determines its value? This conference [is] based on two premises. One is that the two questions have not been satisfactorily and rigorously answered. The other is that the answer to the second question […] can be achieved if and only if [we have] a precise answer to the first question […]. I am dubious of both premises.

In hindsight, I think it’s clear that Tobin’s suggestion that we already had a satisfactory and rigorous understanding of why people hold money was, at best, not entirely correct. His second suggestion – that we might not need to rigorously understand why people hold money – might be right, though my faith in that argument has definitely been shaken by recent events.

[1] This volume is available online here. The 1978 conference lineup was amazingly good and the manuscript includes, among other things, Lucas’s “Pure Currency” model, Townsend’s “Turnpike” model, and an excellent paper by Neil Wallace on money demand in the overlapping generations model. While it doesn’t have any of the modern matching models, it is still an impressive and insightful volume and should be required reading for anyone interested in the pure theory of money.

What the Heck is “Calibration” Anyway?

Every year I teach EC607, I arrive at the Real Business Cycle model and run into a problem. No, it’s not struggling to answer “why are you teaching the RBC model if you don’t think it is useful for understanding business cycles?”  No, the problem occurs when I get to the subject of calibration.  I would like to tell my students exactly what we mean when we say that we calibrate parameters.  I can’t tell them, however, since I don’t really know myself.

In my own work, I do things that I would describe as calibration. I even have an intuitive sense of what I mean when I say that some parameter has been calibrated.  However, I do not have a precise notion of what it means to calibrate a model. In fact, I am not sure anyone has a precise statement of what it means.[1]

Calibration is a way of assigning values to the parameters which determine how our models function. Unlike estimation, calibration does not assign parameter values to make the model fit the data. Some descriptions of calibration suggest that the parameter values should come from separate data sources – separate, that is, from the data that you are analyzing with the model.  In Advanced Macroeconomics, David Romer describes calibration as follows (emphasis added):

The basic idea of calibration is to choose parameter values on the basis of microeconomic evidence and then to compare the model’s predictions [with the data].  

This is a fairly reasonable description of what many people mean when they use the term ‘calibration’ but it is problematic for at least a couple of reasons.  First, economic data typically don’t come with useful labels like “Microeconomic Data: Approved for use with calibrating models” or “Macroeconomic Data: WARNING – DO NOT USE FOR CALIBRATION!” You might think that it’s obvious which is which but it’s not. Certainly panel data like the PSID sounds like data an applied microeconomist might use. What about price (inflation) data? Is that aggregate “macro data”? What about unemployment rate data? What about data on type-specific investment? Is that micro data?

Second, many of the calibrations used in practice seem to come from macro datasets anyway. Take, for instance, the calibration of the labor share parameter in the production function. This calibration is typically justified by calculating the average ratio of total employee compensation to total income – figures which both come from the NIPA.
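Schematically, under the Cobb-Douglas production function that typically accompanies this calibration, the recipe is just a sample average of an aggregate ratio:

```latex
% Cobb-Douglas production, Y_t = A_t K_t^{\alpha} N_t^{1-\alpha}, implies a constant
% labor share of 1-\alpha, and the standard calibration sets it to the NIPA average:
1 - \alpha \;=\; \frac{1}{T} \sum_{t=1}^{T} \frac{\text{employee compensation}_t}{\text{total income}_t}.
```

However we classify the PSID, that ratio is a moment of aggregate data.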

Romer also says that we should choose the parameter values before comparing the model with the data.  I hear sentiments like this a lot, though again it doesn’t really hold up when we look at standard practice. The labor share parameter is again a case in point. We set that parameter by fitting a single moment of the data (we match the model’s average labor share to the observed labor share). Another example concerns a standard calibration of investment adjustment costs in business cycle models. These parameters are sometimes calibrated to match the model’s predicted investment volatility to observed investment volatility. These examples make calibration sound suspiciously like estimation. (Hopefully, calibration isn’t just estimation without bothering to report a standard error.)
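To make the “suspiciously like estimation” point concrete, here is a toy version of the adjustment-cost calibration. The function sigma_model below is a made-up stand-in for “solve the model at a given φ and compute its investment volatility,” and the target number is hypothetical; the point is only that the procedure is method-of-moments in everything but name:

```python
# Toy "calibration by moment matching" (the model mapping and target are made up).
# Pick the adjustment-cost parameter phi so that the model's investment
# volatility matches the observed investment volatility.
from scipy.optimize import brentq

sigma_data = 0.05            # hypothetical observed volatility of investment

def sigma_model(phi):
    # Stand-in for "solve the business cycle model at phi and measure the
    # volatility of investment"; higher adjustment costs damp investment.
    return 0.12 / (1.0 + phi)

phi_star = brentq(lambda phi: sigma_model(phi) - sigma_data, 0.0, 50.0)
print(f"calibrated phi = {phi_star:.2f}")
# This is method-of-moments estimation in all but name, minus the standard error.
```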

Nevertheless, even though I don’t really have a precise definition of what I mean by ‘calibration’, I believe that it may indeed have an important role to play in economic analysis. In particular, calibration might work quite well in situations in which we believe the model is wrong. (I know what you’re thinking – we always think the model is wrong! True. This means that calibration may indeed be very valuable.)

Let’s take a specific example. Suppose we have data on wages and employment, and we have a labor supply / labor demand model that we propose to explain the observations. Suppose further that all of the changes in employment are driven by shifts in labor demand.  The only thing missing is the labor supply elasticity parameter.  An estimation-based approach would do the following: invoke the null hypothesis that the model is correct and then estimate the missing labor supply elasticity from the observed data (just run OLS, for instance). A calibration approach would not assume that the model is correct. Instead, a calibrated model would (somehow) obtain a parameter value from elsewhere, plug it into the model, and compare the model output with the observed data. Let’s assume that the analyst calibrates the labor supply elasticity at roughly 0.5.

Suppose that (unfortunately for the econometrician) the model is mis-specified. In fact, the wage is stuck above the market-clearing wage and there are many workers who are involuntarily unemployed. Every labor demand shift is resolved by simply absorbing available workers at the fixed wage.  The econometrician estimates the model and finds that the labor supply elasticity is very high indeed (near infinity, in fact): employment moves a lot while the wage barely moves at all, so rationalizing the data with a supply curve requires an enormous elasticity. The analyst using the calibrated model finds that his model predicts virtually no changes in employment.  Notice that the analyst using the calibrated model is actually on to something. There is a tension between his calibrated labor model and the observables. Moreover, this tension seems to provide an important clue as to how the model needs to be modified.[2] The econometrician, on the other hand, is happy with his estimates and will go about his business content in the belief that all is well with the model.
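Here is a minimal numerical version of this thought experiment. All of the numbers are hypothetical, and I let the wage move a tiny bit with demand (rather than not at all) so that the regression is well defined:

```python
# Minimal sketch of the mis-specified labor market example (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(0)
T = 200
z = rng.normal(0.0, 0.02, T)       # labor demand shifts (log deviations)

# True data-generating process: the wage is (almost) stuck above market clearing,
# so employment simply absorbs the demand shifts at the fixed wage.
log_w = 0.001 * z                  # the wage barely moves
log_n = z                          # employment tracks demand one-for-one

# The econometrician assumes the supply curve log_n = eps * log_w is correct
# and estimates eps by OLS: the result is enormous (about 1000 here).
eps_hat = np.polyfit(log_w, log_n, 1)[0]
print(f"estimated labor supply elasticity: {eps_hat:.0f}")

# The calibrated model instead plugs in an outside elasticity of 0.5 and
# predicts essentially no movement in employment, an obvious tension with the data.
eps_cal = 0.5
log_n_model = eps_cal * log_w
print(f"std of employment, data vs. calibrated model: {log_n.std():.4f} vs {log_n_model.std():.5f}")
```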

Naturally, the missing link in this narrative is the source of the outside information that the calibrated model draws on.  Where does this initial parameterization come from? Perhaps there were earlier studies that provide some information on the labor supply elasticity. Perhaps the analyst just arrived at the number through sheer introspection. (If I were offered a wage increase, how would I respond?)  In a sense, calibration shares a common thread with Bayesian estimation, which requires a prior to guide the estimates (and, like calibration, the exact source of the prior is somewhat mysterious). In fact, many prominent researchers who advocate the use of Bayesian techniques come from backgrounds that embrace calibration (Jesus Fernandez-Villaverde was trained at Minnesota, for instance).

One other thing that strikes me is that researchers who use calibration are often much more interested in the performance and insights generated by the models and much less interested in the parameter values themselves.  Estimation, it seems, naturally places much more emphasis on the point estimates themselves than on their consequences.

In any case, calibration will likely continue to be used as an important analytical technique, even if no one knows what it actually is …

 

[1] My coauthor Jing Zhang assures me that calibration does indeed have a specific meaning, though she has never articulated what that meaning is.  (Actually, when I asked her what she meant by ‘calibration,’ her first reaction was to laugh at me, after which she told me that I didn’t have proper training …)

[2] Paul Krugman seems to arrive at a similar conclusion in a past blog post (though you will have to put up with the obligatory “fresh water bashing” before the end of the post).