Every year I teach EC607 I arrive at the Real Business Cycle model and run into a problem. No, it’s not struggling to answer “why are you teaching the RBC model if you don’t think it is useful for understanding business cycles?” No, the problem occurs when I get to the subject of calibration. I would like to tell my students exactly what we mean when we say that we calibrate parameters. I can’t tell them, however, since I don’t really know myself.
In my own work, I do things that I would describe as calibration. I even have an intuitive sense of what I mean when I say that some parameter has been calibrated. However, I do not have a precise notion of what it means to calibrate a model. In fact, I am not sure anyone has a precise statement of what it means.
Calibration is a way of assigning values to the parameters that determine how our models function. Unlike estimation, calibration does not choose parameter values to make the model fit the data. Some descriptions of calibration suggest that the parameter values should come from separate data sources – separate, that is, from the data that you are analyzing with the model. In Advanced Macroeconomics, David Romer describes calibration as follows (emphasis added):
The basic idea of calibration is to choose parameter values on the basis of microeconomic evidence and then to compare the model’s predictions [with the data].
This is a fairly reasonable description of what many people mean when they use the term ‘calibration’ but it is problematic for at least a couple of reasons. First, economic data typically don’t come with useful labels like “Microeconomic Data: Approved for use with calibrating models” or “Macroeconomic Data: WARNING – DO NOT USE FOR CALIBRATION!” You might think that it’s obvious which is which but it’s not. Certainly panel data like the PSID sounds like data an applied microeconomist might use. What about price (inflation) data? Is that aggregate “macro data”? What about unemployment rate data? What about data on type-specific investment? Is that micro data?
Second, many of the calibrations used in practice seem to come from macro datasets anyway. Take for instance the calibration of the labor’s share parameter in the production function. This calibration is typically justified by calculating the average ratio of total employee compensation to total income – figures which both come from the NIPA.
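As a back-of-the-envelope sketch of that calculation (the compensation and income figures below are invented stand-ins for the NIPA series, not actual data):

```python
# Calibrating labor's share as the average ratio of total employee
# compensation to total income. Figures are hypothetical, in trillions.
compensation = [6.2, 6.5, 6.9, 7.1]
income = [9.5, 10.0, 10.4, 10.9]

labor_share = sum(c / y for c, y in zip(compensation, income)) / len(income)
print(round(labor_share, 2))  # 0.65 with these invented numbers
```

The point is simply that both series in the ratio are aggregate national accounts data, not microeconomic evidence.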
Romer also says that we should choose the parameter values before comparing the model with the data. I hear sentiments like this a lot, though again they don’t really hold up against standard practice. The labor’s share parameter is again a case in point: we set that parameter by fitting a single moment of the data (we match the model’s average labor share to the observed labor share). Another example concerns a standard calibration of investment adjustment costs in business cycle models. These parameters are sometimes calibrated so that the model’s predicted investment volatility matches observed investment volatility. These examples make calibration sound suspiciously like estimation. (Hopefully, calibration isn’t just estimation without bothering to report a standard error.)
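To see why this looks like estimation, here is a toy version of the adjustment-cost calibration: pick the parameter so that a model-implied moment equals its observed counterpart. The mapping from the parameter to investment volatility below is invented for illustration; in practice it would come from solving and simulating the model.

```python
def model_inv_vol(kappa, sigma=0.1):
    # Invented stand-in mapping: higher adjustment costs -> smoother investment.
    return sigma / (1.0 + kappa)

target_vol = 0.04  # "observed" investment volatility (illustrative)

# Solve model_inv_vol(kappa) = target_vol by bisection.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if model_inv_vol(mid) > target_vol:
        lo = mid  # model investment still too volatile: raise the cost
    else:
        hi = mid
kappa_cal = 0.5 * (lo + hi)
print(round(kappa_cal, 4))  # 1.5, since 0.1 / (1 + 1.5) = 0.04
```

This is exactly method-of-moments estimation with one moment and one parameter, just without a standard error attached.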
Nevertheless, even though I don’t really have a precise definition of what I mean by ‘calibration’, I believe that it may indeed have an important role to play in economic analysis. In particular, calibration might work quite well in situations in which we believe the model is wrong. (I know what you’re thinking – we always think the model is wrong! True. This means that calibration may indeed be very valuable.)
Let’s take a specific example. Suppose we have data on wages and employment and a labor supply / labor demand model which we propose to explain the observations. Suppose further that all of the changes in employment are driven by shifts in labor demand. The only thing missing is the labor supply elasticity parameter. An estimation-based approach would do the following: invoke the null hypothesis that the model is correct and then estimate the missing labor supply elasticity from the observed data (just run OLS, for instance). A calibration approach would not assume that the model is correct. Instead, a calibrated model would (somehow) obtain a parameter value from elsewhere, plug it into the model, and compare the model output with the observed data. Let’s assume that the analyst calibrates the labor supply elasticity at roughly 0.5.
Suppose that (unfortunately for the econometrician) the model is mis-specified. In fact, the wage is stuck above the market-clearing level and many workers are involuntarily unemployed. Every labor demand shift is resolved by simply absorbing available workers at the fixed wage. The econometrician estimates the model and finds that the labor supply elasticity is very high indeed (near infinity, in fact). The analyst using the calibrated model finds that his model predicts virtually no changes in employment. Notice that the analyst using the calibrated model is actually on to something. There is a tension between his calibrated labor model and the observables, and this tension provides an important clue as to how the model needs to be modified. The econometrician, on the other hand, is happy with his estimates and will go about his business content in the belief that all is well with the model.
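The thought experiment can be simulated in a few lines. Everything here is invented for illustration: employment simply tracks labor-demand shocks at the stuck wage, and I give the wage a tiny demand pass-through so the regression is well defined; the calibrated elasticity of 0.5 comes from the story above.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
demand = rng.standard_normal(T)   # labor-demand shifts (log deviations)
employment = demand               # firms absorb workers at the stuck wage
wage = 0.001 * demand             # wage barely moves (tiny pass-through)

# Econometrician: assume a supply curve n = eps * w, estimate eps by OLS.
eps_hat = np.polyfit(wage, employment, 1)[0]

# Calibrator: plug in the outside elasticity of 0.5 and simulate.
eps_cal = 0.5
n_model = eps_cal * wage

print(round(eps_hat))              # 1000: the estimated elasticity explodes
print(round(employment.std(), 2))  # observed employment moves a lot
print(round(n_model.std(), 4))     # calibrated model predicts almost none
```

The OLS estimate mechanically blows up because the wage barely varies, while the calibrated model's counterfactual prediction flags the tension with the data.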
Naturally, the missing link in this narrative is the source of the outside information that the calibrated model draws on. Where does this initial parameterization come from? Perhaps there were some earlier studies that provide information on the labor supply elasticity. Perhaps the analyst just arrived at the number through sheer introspection. (If I were offered a wage increase, how would I respond?) In a sense, calibration shares a common thread with Bayesian estimation, which requires a prior to guide the estimates (like calibration, the exact source of the prior is somewhat mysterious). In fact, many prominent researchers who advocate the use of Bayesian techniques come from backgrounds that embrace calibration (Jesus Fernandez-Villaverde was trained at Minnesota, for instance).
One other thing that strikes me is that researchers who use calibration are often much more interested in the performance and insights generated by their models and much less interested in the parameter values themselves. Estimation, it seems, naturally places much more emphasis on the point estimates themselves than on their consequences.
In any case, calibration will likely continue to be used as an important analytical technique, even if no one knows what it actually is …
My coauthor Jing Zhang assures me that calibration does indeed have a specific meaning, though she has never articulated what this meaning is. (Actually, when I asked her what she meant by ‘calibration’ her first reaction was to laugh at me, after which she told me that I didn’t have proper training …).
Paul Krugman seems to arrive at a similar conclusion in a past blog post (though you will have to put up with the obligatory “fresh water bashing” before the end of the post).