Noah Smith and Frances Coppola have written recent columns discussing the prevalence of linear methods in economics, and in macroeconomics in particular.
Coppola acknowledges that adding financial components to DSGE models is a step in the right direction but that it “does not begin to address the essential non-linearity of a monetary economy. … Until macroeconomists understand this, their models will remain inadequate.” (This is ok but to me it sounds like she’s going into Deepak Chopra mode a bit.)
Noah gives a good description of the properties and benefits of linear models.
“[Linear models are] easy to work with. Lines can only intersect at one point, so […] there’s only one thing that can happen.” In non-linear models “the curves can bend back around and [could] meet in some faraway location. Then you have a second equilibrium – another possible future for the economy. […] if you go with the full, correct [non-linear] versions of your models, you stop being able to make predictions about what’s going to happen to the economy. […] Also, linearized models […] are a heck of a lot easier to work with, mathematically. […] As formal macroeconomic models have become more realistic, they’ve become nastier and less usable. Maybe their days are simply numbered.”
There are elements of truth in both columns but on the whole I don’t share their assessments. In fact, I am firmly of the opinion that linear solution methods are simply superior to virtually any other solution method in macroeconomics (or any other field of economics for that matter). Moreover, I suspect that many younger economists and many entering graduate students are gravitating towards non-linear methods for no good reason and I also suspect they will end up paying a price if they adopt the fancier modelling techniques.
Why am I so convinced that linear methods are a good way to proceed? Let me count the ways:
1. Most empirical work is done in a (log) linear context. In particular, regressions are linear. This matters because when macroeconomists (or economists of any stripe) try to compare predictions with the data, the empirical statements are usually already stated in linear terms. The comparison between theory and data is easier if the models make predictions that are couched in a linear framework. In addition, there aren’t many empirical results that speak clearly to non-linearities in the data. Empirical economists have told me several times that a well-known economist once said that “either the world is linear, or it’s log-linear, or God is a son-of-a-bitch.”
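As a concrete (and entirely made-up) illustration of why theory stated in logs lines up so naturally with empirical work, here is a minimal Python sketch: a Cobb-Douglas relationship is exactly linear in logs, so ordinary least squares recovers the elasticity directly. The functional form and all the numbers are my own invented example.

```python
import numpy as np

# Hypothetical data: a Cobb-Douglas relationship y = A * k**alpha is
# exactly linear in logs, so OLS recovers the elasticity alpha directly.
rng = np.random.default_rng(0)
alpha_true = 0.33
k = rng.lognormal(mean=1.0, sigma=0.5, size=500)
y = 2.0 * k**alpha_true * rng.lognormal(mean=0.0, sigma=0.1, size=500)

# Regress ln(y) on a constant and ln(k).
X = np.column_stack([np.ones_like(k), np.log(k)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print(f"estimated elasticity: {coef[1]:.3f}  (true value {alpha_true})")
```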
2. Linear doesn’t mean simple. When I read Frances’s column, it seems that her main complaint lies with the economic substance of the theories rather than with the solution method or the approximate linearity of the solutions. She talks about lack of rationality, heterogeneity, financial market imperfections, and so forth. None of these things requires a fundamentally non-linear approach. On the contrary, the more mechanisms and features you shove into a theory, the more you will benefit from a linear approach. Linear systems can accommodate many features without much additional cost.
3. Linear DSGE models can be solved quickly and accurately. Noah mentioned this but it bears repeating. One of the main reasons to use linear methods is that they are extremely efficient and extremely powerful. They can calculate accurate (linear) solutions in milliseconds. In comparison, non-linear solutions often require hours or days (or weeks) to converge to a solution. [Fn 1]
4. The instances where non-linear results differ importantly from linear results are few and far between. The premise behind adopting a non-linear approach is that knowing the slopes (or elasticities) of the demand and supply curves is not sufficient; we have to know about their curvatures too. On top of the fact that we don’t really know much about these curvature terms, the presumption itself is highly suspect. If we are picking teams, I’ll take the first-order terms and let you take all of the higher-order terms you want, any day of the week (and I’ll win). In cases in which we can calculate both linear and non-linear solutions, the differences are typically embarrassingly small, and even when there are noticeable differences, they often go away as we improve the non-linear solution. A common exercise is to calculate the convergence path for the neoclassical (Ramsey) growth model using discrete dynamic programming techniques and then compare the solution to the one you get from a linearized solution in the neighborhood of the balanced growth path. The dynamic programming approach is non-linear – it allows for an arbitrary reaction on a discrete grid space. When you plot out the two responses, it is clear that the two solutions aren’t the same. However, as we add more and more grid points, the two solutions look closer and closer. (The time required to compute the dynamic programming solution grows, of course.)
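Here is a minimal sketch of that exercise in Python. All of the specifics (log utility, Cobb-Douglas production, the parameter values, the grid bounds) are illustrative assumptions of mine, not taken from any particular paper. It also times the two approaches, which makes the speed claim in point 3 concrete: the linear step is effectively instantaneous, while the dynamic programming step slows down as the grid grows.

```python
import time
import numpy as np

# Illustrative parameters (my assumptions): log utility, Cobb-Douglas output.
alpha, beta, delta = 0.33, 0.96, 0.1
f = lambda k: k**alpha + (1 - delta) * k            # resources available at k
fp = lambda k: alpha * k**(alpha - 1) + 1 - delta   # f'(k)

# Steady state from the Euler equation: beta * f'(k*) = 1.
kss = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))

# --- Linearized solution: stable root of the linearized Euler equation ---
def euler(k0, k1, k2):
    """Euler-equation residual along a path (k_t, k_{t+1}, k_{t+2})."""
    return 1 / (f(k0) - k1) - beta * fp(k1) / (f(k1) - k2)

t0 = time.perf_counter()
h = 1e-6  # finite-difference step for the derivatives at the steady state
a = (euler(kss + h, kss, kss) - euler(kss - h, kss, kss)) / (2 * h)
b = (euler(kss, kss + h, kss) - euler(kss, kss - h, kss)) / (2 * h)
c = (euler(kss, kss, kss + h) - euler(kss, kss, kss - h)) / (2 * h)
# Characteristic polynomial c*x^2 + b*x + a = 0; the stable root is the
# slope of the linearized policy rule around the steady state.
lam = [r.real for r in np.roots([c, b, a]) if abs(r) < 1][0]
print(f"linear solution: {1e3 * (time.perf_counter() - t0):.3f} ms")

# --- Discrete dynamic programming on a grid ------------------------------
def vfi_policy(n):
    grid = np.linspace(0.5 * kss, 1.5 * kss, n)
    cons = f(grid)[:, None] - grid[None, :]          # c for each (k, k') pair
    util = np.where(cons > 0, np.log(np.maximum(cons, 1e-12)), -1e10)
    V = np.zeros(n)
    while True:                                      # value function iteration
        V_new = (util + beta * V[None, :]).max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    return grid, grid[(util + beta * V[None, :]).argmax(axis=1)]

# Compare convergence paths starting halfway below the steady state.
k0, T = 0.5 * kss, 60
lin_path = kss + lam ** np.arange(T) * (k0 - kss)
for n in (50, 500, 2000):
    t0 = time.perf_counter()
    grid, policy = vfi_policy(n)
    k, path = k0, []
    for _ in range(T):
        path.append(k)
        k = policy[np.argmin(np.abs(grid - k))]      # nearest grid point
    gap = np.max(np.abs(np.array(path) - lin_path))
    print(f"{n:5d} grid points: max gap {gap:.4f}, "
          f"DP time {time.perf_counter() - t0:.2f} s")
```

If the sketch is right, the printed gaps shrink as the grid is refined while the dynamic programming times climb – exactly the pattern described above.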
5. Unlike with non-linear models, approximate analytical results can be recovered from linear systems. This is an underappreciated side benefit of adopting a linear approach. The linear equations can be solved by hand to yield productive insights. There are many famous examples of log-linear relationships that have spawned well-known empirical studies based on their predictions: Hall’s log-linear Euler equation, the New Keynesian Phillips Curve, log-linear labor supply curves, and so on. Mankiw, Romer and Weil (1992) used a linear approximation to crack open the important relationship between human capital and economic growth.
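To see what I mean by the first example, take CRRA utility and a constant interest rate r. This is the textbook derivation rather than a quote from Hall’s paper, but it shows how a log-linearization turns a stochastic Euler equation into a regression-ready line:

```latex
% Euler equation: 1 = E_t[ beta (1 + r) (c_{t+1}/c_t)^{-gamma} ].
% Take logs, use ln(1+r) ~ r, drop second-order (variance) terms,
% and write rho = -ln(beta) for the rate of time preference:
\[
  E_t \, \Delta \ln c_{t+1} \;\approx\; \frac{1}{\gamma}\,(r - \rho)
\]
% Expected consumption growth is linear in the interest rate with
% slope 1/gamma -- exactly the kind of restriction OLS can test.
```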
Given the huge benefits of linear approaches, the main question I have is not why researchers don’t adopt non-linear, global solution approaches but rather why linear methods aren’t used even more widely than they already are. One example in particular concerns the field of Industrial Organization (IO). IO researchers are famous for adopting complex non-linear modelling techniques that are intimidating and impressive. They often seem to brag about how it takes even the most powerful computers weeks to solve their models. I’ve never understood this aspect of IO. IO is also known to feature the longest publication and revision lags of any field. It’s possible that some of this is due to the techniques they are using; I’m not sure. I have asked friends in IO why they don’t use linear solutions more often, and the impression I am left with is that it is a combination of an assumption that a linear solution simply won’t work for the kinds of problems they analyze and a lack of familiarity with linear methods.
There are of course cases in which there is important non-linear behavior that needs to be featured in the results. For these cases it does indeed seem like a linear approach is not appropriate. Brad DeLong argued that accommodating the zero lower bound on interest rates (i.e., the “flat part of the LM curve”) is such a case. I agree. There are cases like the liquidity trap that clearly entail important aggregate non-linearities, and in those instances you are forced to adopt a non-linear approach. For most other cases however, I’ve already chosen my team …
[Fn 1] This reminds me of a joke from the consulting business. I once asked an economic consultant why he thought he could give valuable advice to people who have been working in an industry their whole lives. He told me that I was overlooking one important fact – the consultants charge a lot of money.
“There are of course cases in which there is important non-linear behavior that needs to be featured in the results. […] For most other cases however, I’ve already chosen my team”
Given that we want to do microfounded macro, and given that micro is full of nonlinearities, the conclusion is the opposite of yours. Now, I am all for simple stuff like IS-LM in macro, but I fail to see why one should build elaborate microfounded models like DSGE that feature virtually no nonlinearities or market failures except for menu costs plus imperfect competition in product markets.
If you want simple heuristics and rough policy recommendations, you don’t need DSGE. And if you want properly microfounded models that feature all kinds of things, like e.g. financial market imperfections, you automatically land in a topsy-turvy nonlinear world. But you cannot have both at the same time.
Doing microfounded macro has very little to do with nonlinearities. If I’m doing supply and demand from a standard micro standpoint, we will get accurate predictions from an approximation that treats the supply curve and the demand curve as though they were (log) linear. If we are analyzing the behavior of a firm that faces a marginal revenue curve and a marginal cost curve, both of which depend on the firm’s quantity – MR(q), MC(q) – we will get accurate predictions from an approximation that treats these curves as though they were linear. There is nothing inherently nonlinear in such analysis.
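To check that claim numerically, here is a minimal Python sketch. The demand and cost curves are invented to have genuine curvature (exponential demand, linear marginal cost); nothing here comes from the thread. It solves MR(q) = MC(q) exactly, then compares the exact response of q* to a cost shock with the prediction from a linear (first-order) approximation at the original optimum.

```python
import numpy as np
from scipy.optimize import brentq

# Invented curves with genuine curvature:
# demand p(q) = a*exp(-b*q)  =>  MR(q) = a*exp(-b*q)*(1 - b*q);
# marginal cost MC(q) = c0 + d*q.
a, b, d = 10.0, 0.3, 0.5

def q_star(c0):
    """Exact profit-maximizing quantity solving MR(q) = MC(q)."""
    foc = lambda q: a * np.exp(-b * q) * (1 - b * q) - (c0 + d * q)
    return brentq(foc, 1e-6, 1 / b - 1e-6)  # MR > 0 requires q < 1/b

c0 = 2.0
q0 = q_star(c0)
# Slope dq*/dc0 at the baseline, by a small finite difference.
slope = (q_star(c0 + 1e-5) - q_star(c0 - 1e-5)) / 2e-5

for shock in (0.01, 0.05, 0.20):            # 1%, 5%, 20% cost shocks
    exact = q_star(c0 * (1 + shock))
    linear = q0 + slope * c0 * shock        # first-order approximation
    print(f"shock {shock:4.0%}: exact {exact:.4f}, linear {linear:.4f}")
```

For small shocks the two answers should agree to several decimal places; only for large shocks does the curvature start to matter – which is the point.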
A huge component of IO modelling is dynamic discrete choice models, either of consumer behavior (which brand to buy, when to adopt a new technology) or of firm behavior (when to enter a market, which market or markets to enter, when to adopt a new technology). Not sure you can linearize these effectively…
Good point. This might be the main reason that IO guys don’t use linear approaches.
Machine learning techniques have a lot to offer here. All the usual concerns about under- and over-fitting models to the data apply. If the system is not linear, then linear models are only approximately right in the near term of a time series and may diverge quickly as the time window widens. Economists in 2015 need to become experts in modelling in general. There has been a lot of progress in both theory and technique in this area in the last decade, and computing power and data storage are many orders of magnitude more powerful.
It seems to me that the complaint is about the word ‘model’ not the word ‘linear’.
Even non-linear models will still be models…
The basic problem is that we are left making models, not replicas.
If you want to think of it as a problem.
The best defense of linear models is that they are useful models (not problems). They are good tools that can offer insights into the relationships between the various pieces of an economy. They are not in fact an economy – that is sort of the point. An actual economy is too complex to understand, so we summarize it in order to help illuminate our understanding of it. A model that is too complex is simply the wrong tool for the problem at hand.
“Linear” is a confusing description in any case, as the output is not a line except in single-factor models. The “linear” describes a constant relationship between one factor and another – one coefficient in a linear model. Of course, in a regression that parameter is itself a random variable, and for all the interesting questions the data aren’t rich enough to pin down the relationship beyond a range of varying size. That problem isn’t helped by going non-linear, so in that case the complaint is better made against the data.
So let’s complain about incomplete data and models, and then everyone can agree.
Macroeconomists seem unwilling to accept, or are simply unaware, that their inability to solve a model does not prove that the model cannot be solved. The economists’ mathematical tool kit is very limited and very old. Mathematicians are an order of magnitude brighter than economists and use techniques that economists have never even heard of.
Economists use mathematical tools developed largely before 1920 – real analysis, some topology, a little measure theory, some Ito calculus – plus developments of the calculus of variations (Pontryagin’s Maximum Principle and Bellman’s Dynamic Programming) that date back to the late 50s and early 60s. Ask a mathematician and she will tell you that more, and deeper, math has been done in the last 50 years than in the previous 3,500 years.
With the exception of von Neumann and Stephen Smale, no first-class mathematician has worked in economics. By first class I mean Fields Medalists and their ilk.
Economists preach the division of labor. They need to start talking to mathematicians about economic problems and seeking their help in perhaps developing new math to deal with the fiendishly difficult problems that economists try to solve using their stone age tools.
I’m not really sure what methods you have in mind. There are lots of mathematicians who work in economics, and there are many economists who spend time reviewing mathematics with the intention of finding useful techniques, but none of them has really come up with something worthy of replacing basic linear analysis. It’s true that many of the techniques we use are quite old, but so what? It’s not as though addition and subtraction are invalid just because they have been in use since antiquity; quite the opposite.