Feels Like it Was Yesterday

Stay safe everyone. 

End of an Era


On August 13 Judit Polgar announced that she was retiring from competitive chess.  This came as a shock to most people who follow chess – it certainly came as a shock to me though I have heard on several occasions that Judit wanted to put more emphasis on raising her family.  On her website she said that she was going to spend more time with her children and developing her foundation (The Judit Polgar Chess Foundation promotes pioneering cognitive skills development for school children). 

Judit is often described as the strongest female chess player ever and I have no reason to doubt that assessment.  At the peak of her career, Judit was one of the strongest players of either gender.  Her peak Elo rating was a staggering 2735 and she was ranked #8 overall in the world in 2005.  According to Wikipedia, she has been the #1 rated female chess player in the world since 1989 (!). Judit typically played in open tournaments – in fact, she never competed for the women’s world championship. Over her career, she has defeated a slew of famous players including Magnus Carlsen (the current world #1 and world champion), Viswanathan Anand (the previous world #1), Anatoly Karpov, Boris Spassky, … the list goes on and on. [1]

There is a famous anecdote that Garry Kasparov once described Judit as a “circus puppet” and asserted that women chess players should stick to having children. I’m not actually sure where this story comes from – it was relayed by The Guardian in 2002 without further elaboration. The statement is so over the top that I wonder whether it’s actually true.  It might be true – chess players are known for making zany statements like this from time to time (Bobby Fischer comes to mind). Perhaps Kasparov was trying to stir up some controversy … who knows. In any case, Kasparov asked for it and he got it.  In 2002 Polgar beat Kasparov – the world’s #1 ranked player at the time – and added him to her trophy room. 

Another fascinating aspect of Judit’s chess life is that her father, Laszlo Polgar, apparently decided to use his children to “prove” that geniuses are “made, not born.” He made a conscious effort to train his children in chess starting when they were each very young. Perhaps Laszlo was right: in addition to Judit, her sisters Susan and Sofia also became elite players (Susan, like Judit, holds the grandmaster title). 

Speaking for myself, Judit has been my favorite active chess player for a while now.  Her style is somewhat of an anachronism – she is known for a hyper-aggressive, dramatic playing style.  She often sacrifices material to gain the initiative and attacking chances. For the most part, men’s chess is actually much tamer – many of the best male players are “positional” players who grind out games looking for small advantages which they eventually convert into a win (Alexey Shirov is a counter-example – see below). Stylistically, Judit reminds me a lot of Mikhail Tal – perhaps the greatest chess tactician of all time.

Below is a video of one of Judit’s most famous games, in which she plays against Alexey Shirov. The game commentary is by Mato Jelic. If you are interested in learning more about chess or about Judit’s games, Mato’s YouTube channel is a great place to start. Among other things, he has a great collection of Judit Polgar’s games with commentary. 


[1] Here’s a funny quote from Judit about her sister Susan. “My sister Susan — she was 16 or 17 — said that she never won against a healthy man. After the game, there was always an excuse: ‘I had a headache. I had a stomach ache.’ There is always something.”  

More Thoughts on Agent Based Models

My recent post on Agent Based Models (ABMs) generated a few interesting responses and I thought I would briefly reply to a couple of them in this post.  In particular, two responses came from people who actually have direct experience with ABMs.

Rajiv Sethi posts a response on his own blog.  Some excerpts:

Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good.

[Well that doesn’t sound too good …]

Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time.

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches [.]

In a similar vein, in the comments section to the earlier post, Leigh Tesfatsion offered several thoughts, many of which fit squarely with Rajiv’s opinion.  Professor Tesfatsion uses ABMs in multiple settings, including economics and climate change – I’m quite sure that she has much more experience with such models than I do (I basically don’t know anything beyond a couple of papers I’ve encountered as a referee here and there).  Here are some excerpts from Leigh’s comments:

Agents in ABMs can be as rational (or irrational) as their real-world counterparts…

The core difference between agent modeling in ABMs and agents in DSGE models is that agents in ABMs are required to be “locally constructive,” meaning they must specify and implement their goals, choice environments, and decision making procedures based on their own local information, beliefs, and attributes. Agent-based modeling rules out “top down” (modeler imposed) global coordination devices (e.g., global market clearing conditions) that do not represent the behavior or activities of any agent actually residing within the model. They do this because they are interested in understanding how real-world economies work.

Second, ABM researchers seek to understand how economic systems might (or might not) attain equilibrium states, with equilibrium thus studied as a testable hypothesis (in conjunction with basins of attraction) rather than as an a priori maintained hypothesis.

I was struck by the similarity between Professor Sethi’s and Professor Tesfatsion’s comments. The parts that really strike me are: (1) the agents in an ABM can have rational rules; (2) in an ABM, there is no global coordination imposed by the modeler – that is, agents’ behaviors don’t have to be mutually consistent; and (3) ABMs are focused on explaining disequilibrium, in contrast to DSGE models, which operate under the assumption of equilibrium at all points.

On the first point (1) I agree with Rajiv and Leigh on the basic principle. Agents in an ABM could be endowed with rational behavioral rules – that is, they could have rules which are derived from an individual optimization problem of some sort. The end result of an economic optimization problem is a rule – a contingency plan that specifies what you intend to do and when you intend to do it. This rule is typically a function of some individual state variable (what position are you in?). In an ABM, the modeler specifies the rule as he or she sees fit and then goes from there. If this rule were identical to the contingency plan from a rational economic actor then the two modelling frameworks would be identical along those dimensions. However, in an ABM there is nothing which requires that these rules adhere to rationality. The models could accommodate rational behavior but they don’t have to. To me this still seems like a significant departure from standard economic models that typically place great emphasis on self-interest as a guiding principle. In fact, the first time I read Rajiv’s post, my initial thought was that an ABM with a rational decision rule would be essentially a DSGE model. All actions in DSGE models are based on private beliefs about the system. Both the system and the beliefs can change over time.  I for one would be very interested if there were any ABMs that fit Rajiv’s description that are in use today.
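To make the distinction concrete, here is a minimal sketch (all functional forms below are my own illustrative assumptions, not anything from Rajiv’s or Leigh’s comments). A “rational” rule derived from a toy two-period optimization problem and an ad hoc behavioral rule are both, in the end, just functions of an individual state variable – the difference is where the rule comes from.

```python
# Sketch: both a "rational" rule and an ad hoc rule are just functions of an
# individual state variable. All functional forms here are assumptions.

def rational_rule(wealth, beta=0.96):
    # Policy function derived from a two-period log-utility problem:
    # max log(c) + beta*log(wealth - c)  =>  c = wealth / (1 + beta)
    return wealth / (1.0 + beta)

def adhoc_rule(wealth, mpc=0.2):
    # An ABM modeler is free to posit any rule, e.g. a fixed marginal
    # propensity to consume with no optimization behind it.
    return mpc * wealth

w = 100.0
print(rational_rule(w))  # ≈ 51.02
print(adhoc_rule(w))     # 20.0
```

In a DSGE model only the first kind of rule is admissible; an ABM can use either.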

The second point (2) on mutual consistency is interesting. It is true that in most DSGE models, plans are indirectly coordinated through markets.  Each person in a typical economic model is assumed to be in (constant?) contact with a market and each confronts a common price for each good.  As a result of this common connection, the plans of individuals in economic models are assumed to be consistent in a way that they are not in ABMs.  On the other hand, there are economic models that do not have this type of mutual consistency.  Search-based models are the most obvious example.  In many search models, individuals meet one-on-one and make isolated bargains about trades.  There are thus many trades and exchanges occurring in such model environments and the equilibria can feature many different prices at any point in time.  This might mean that search / matching models are a half-way point between pure Walrasian theories on the one hand and ABMs on the other.

The last issue (3) that Rajiv and Leigh brought up was the idea that ABMs seek to model “disequilibrium” of some sort. I suspect that this is more an issue of terminology than substance, but there may be something more to it.  Leigh’s comment in particular suggests that she is reserving the term “equilibrium” for a classical rest point at which the system is unchanging. I mentioned to her that this doesn’t match up with the term “equilibrium” in economics. In economic models (e.g., DSGE models) equilibria can feature erratic dynamic adjustment over time as prices and markets gradually adjust (e.g., the New Keynesian model), or as unemployment and vacancies are gradually brought into alignment (e.g., the Mortensen-Pissarides model), or as capital gradually accumulates over time (e.g., the Ramsey model).  Indeed, the equilibria can be “stochastic” so that they directly incorporate random elements over time. There is no supposition that an equilibrium is a rest point in the sense that (I think) she intends.  When I mentioned this she replied:

As for your definition of equilibrium, equating it with any kind of “solution,” I believe this is so broad as to become meaningless. In my work, “equilibrium” is always used to mean some type of unchanging condition that might (or might not) be attained by a system over time. This unchanging condition could be excess supply = 0, or plans are realized (e.g., no unintended inventories), or expectations are consistent with observations (so updating ceases), or some such condition. Solution already means “solution” — why debase the usual scientific meaning of equilibrium (a system at “rest” in some sense) by equating it with solution?

Professor Tesfatsion, who also has a strong background in the natural sciences, prefers to use the term “equilibrium” as it would be used in, say, physics.[1] In economics, an outcome which is constant and unchanging would be called a “steady state equilibrium” or a “stationary equilibrium.”  As I mentioned above, there are non-stationary equilibria in economic models as well.  Even though quantities and prices are changing over time, the system is still described as being “in equilibrium.”  The reason most economists use this terminology is subtle: even though the observable variables are changing, agents’ decision rules are not – the decision rules or contingency plans are at a rest point even though the observables move over time.

Consider this example. Suppose two people are playing chess. The player with the white pieces is accustomed to opening with e4. She correctly anticipates that her opponent will respond with c5 – the Sicilian Defense. White will then play the Smith-Morra Gambit, to which black will respond with the Scheveningen variation. The two have played each other several times and they are used to the positions they get out of this opening. To an economist, this is an equilibrium.  White plays the Smith-Morra Gambit and black plays the Scheveningen variation. Both correctly anticipate the opening responses of the other and neither wants to deviate in the early stages of the game. Neither strategy changes over time even though the position of the board changes as they play through the first several moves. (In fact this is common in competitive chess – two players who meet often will rapidly fire off 8-10 moves and arrive at a well-known position.)

In any case, I’m not sure whether this means economists are “debasing” the usual scientific meaning of equilibrium, but that’s how the term is used in the field.

One last point that came up in Rajiv’s post which deserves mention is the following:

A typical (though not universal) feature of agent-based models is an evolutionary process, that allows successful strategies to proliferate over time at the expense of less successful ones.

This is absolutely correct.  I didn’t think to mention this in the earlier post but I clearly should have done so.  Features like this are used often in evolutionary game theory.  In those settings, we gather together many individuals and endow them with different rules of behavior.  Whether a rule survives, dies, proliferates, etc. is governed by how well it succeeds at maximizing an objective.  Rajiv is quite correct that such behavior is common in many ABMs and he is right to point out its similarity with learning in economic models (though it is not exactly the same as learning).

[1] A reader pointed out that Leigh Tesfatsion’s Ph.D. is in economics and so she is well aware of non-stationary equilibria and stochastic equilibria. My original post incorrectly suggested that she might be unaware of economic terminology (sorry Leigh). Leigh prefers to reserve the term “equilibrium” for a constant state, as it is in many other fields. Her choice of terminology is fine as long as she and I are clear about what we are each talking about.

More Thoughts on the Welfare Consequences of Stimulus Spending

Noah Smith has a short post questioning the reasoning of an earlier post of mine about stimulus spending. He includes a short numerical example which I reproduce below.  Here’s Noah:

Suppose [the] fiscal “multiplier” is substantial. Specifically, suppose … $100 of tax rebates will increase GDP by $110. In this case, stimulus spending is a “free lunch.”

Now suppose that instead of doing tax rebates, the government can build a bridge. The social benefit of the bridge is $90, and the bridge would cost $100. In the absence of stimulus effects, therefore, the bridge would not pass a cost-benefit analysis. For simplicity’s sake, suppose that spending money on the bridge would create exactly the same stimulus effect as doing a tax rebate – spend $100 on the bridge, and GDP goes up by $110 from the stimulus effect.

In this case, the net social benefit of spending $100 building the bridge is $90 + $110 – $100 = $100.
And the net social benefit of spending $100 on a tax rebate is $110 – $100 = $10.

Bridge wins!

Noah has a couple of subtle mistakes in his calculation. First, the increase in GDP from the stimulus isn’t completely a social benefit. Typically, every good produced is produced at some cost. If I go out to eat for lunch and buy $10 of Szechuan chicken, GDP goes up by $10. However, the net social benefit doesn’t go up by this amount. The gains to me (G) must be at least worth $10 otherwise I wouldn’t have willingly spent the money. The cost to the restaurant (C) can’t be more than $10 otherwise they wouldn’t have willingly provided that dish.  The social benefit (SB) is the difference between the gains and the cost.

SB = G – C

If markets are fairly competitive then at the margin both G and C will be “close” to $10 and so the social benefit will in all likelihood be less than $10.  What this means for Noah’s calculation is that the $110 stimulus might not really be worth $110.  That’s OK however, since this error is symmetric. It is worth pointing out, though, that stimulus which is guided by the private sector typically passes cost/benefit tests of the kind I emphasize: in my example, G > 10 and C < 10, so SB = G – C must be positive. 
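A toy version of the lunch example, with the specific surplus numbers invented for illustration:

```python
# GDP records the $10 transaction, but the social benefit is the surplus:
# the gap between the diner's willingness to pay and the restaurant's cost.
# The particular values of G and C below are assumptions.

price = 10.0
G = 12.0   # willingness to pay: must be at least the $10 price
C = 9.0    # resource cost: can't exceed the $10 price
SB = G - C
print(SB)  # 3.0: positive, but far less than the $10 recorded in GDP
```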

The second error is more important. In his calculation, Noah is counting the out-of-pocket revenue outlay for the tax cut as a cost. This isn’t correct. The tax cut is a transfer. There are no direct social costs associated with the tax cut.

Let me try to rephrase Noah’s example to make it clear. He is considering two options:

OPTION 1: tax cut (or transfer) of $100

OPTION 2: government spending of $100

(i) In both cases the Treasury is deprived of $100. 

(ii) In both cases the private sector gains $100 in after-tax income.

(iii) In both cases the private sector uses the additional income to spend or save (this is the source of the multiplier). 

(iv) In OPTION 2 a bridge is built. The social costs (C) of the bridge include the value of time, energy, materials, effort, etc. The social benefit (B) of the bridge is the value of having the bridge.

In comparing these two options, we can ignore (i) – (iii) since they are the same under both policies. The only difference is (iv). Whether government spending is preferable depends only on whether B is bigger than C.  Note that the magnitude of the multiplier doesn’t enter the comparison – it is symmetric in both cases. My argument is not “diametrically opposed to Econ 102 textbook Keynesianism” —  GDP will go up by more under OPTION 2.  The subtlety is that in Econ 102 we typically act as if maximizing GDP is the correct objective of public policy. This isn’t true though. GDP maximization can rationalize building pyramids, the Maginot Line, the bridge to nowhere, ethanol subsidies, … A social welfare criterion would skip these projects.
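The accounting can be sketched directly with Noah’s numbers, treating the tax cut as a costless transfer as argued above:

```python
# Redoing Noah's comparison with the transfer treated correctly. The
# stimulus benefit (however it is valued) is identical across the two
# options, so it nets out of the comparison.

stimulus_gain = 110.0   # GDP boost from deploying $100, same under either option
bridge_benefit = 90.0   # social benefit B of the bridge
bridge_cost = 100.0     # social cost C of building it (time, materials, effort)

# Option 1: a $100 tax cut. The $100 is a transfer, not a social cost.
net_option_1 = stimulus_gain
# Option 2: $100 spent on the bridge. Only the real resource cost counts.
net_option_2 = stimulus_gain + bridge_benefit - bridge_cost

# The comparison reduces to B vs C: the bridge wins only if B > C.
print(net_option_2 - net_option_1)  # -10.0: here B < C, so the tax cut wins
```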

In the comments section to Noah’s post a number of readers make some points which are worth mentioning. One commenter points out that many people don’t pay taxes and so wouldn’t benefit from a tax cut. Certainly, unemployed people won’t benefit from a tax cut. However, we could construct transfers that would reach these people – Rachel Maddow’s suggestion of sending people envelopes with money in them would work just fine. Also, many people and firms do pay taxes. Payroll tax cuts like those passed by the Obama administration are a particularly effective way of introducing stimulus: they put money in people’s pockets while also providing beneficial incentive effects. 

Another commenter points out that calculating the social costs and benefits of a given project is quite difficult.  This is absolutely true, but it is a calculation that should be carefully considered nonetheless.  In Noah’s example, the cost of the bridge is probably closely approximated by $100.  The benefit of the bridge is more difficult to assess.  Such difficulties probably come up often.  What is the benefit of protecting or cleaning up a wetland?  What is the benefit of political stabilization in the Middle East? Tricky questions, but questions that should be confronted all the same. 

Are Agent-Based Models the Future of Macroeconomics?

A couple months back, Mark Buchanan wrote an article in which he argued that ABMs might be a productive way of trying to understand the economy.  In fact, he went a bit further – he said that ABMs would likely be the future of economics and he warned young economists not to get caught watching the paint dry and to get on board with this new approach.  In contrast, he pointed to the failings of the DSGE models that mainstream economists use to understand most economic issues.

An ABM is a computer model comprised of many individual participants. These actors interact with one another and these interactions produce outcomes – in our case, economic outcomes.  These outcomes can then be added up to compute GDP, aggregate investment – any of the economic statistics that we macroeconomists are accustomed to looking at can be calculated in an ABM.

Now, if you are an economist – a macroeconomist in particular – you are probably thinking that so far this sounds very familiar.  It sounds like a DSGE model: a DSGE model is also populated by many agents whose interactions produce economic outcomes, and we add up those outcomes to get GDP and so forth.  So how are ABMs at all different?

Well, there are actually a few important differences.  More accurately, it seems to be a combination of three key differences that distinguishes the ABMs.

Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the individuals who populate the environment.[1] For example, we could have an ABM with a purely Keynesian consumption rule. We could assume that every time a consumer earns a dollar of income, he or she spends 20 cents and saves the remaining 80 cents. We would make similar behavioral assumptions to govern every possible interaction.  We would make these assumptions for consumers, workers, firms, investors, etc. In an ABM, behavior is the point at which the modeler starts making assumptions.
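As a purely illustrative toy (not a model anyone actually uses), here is what such a hard-coded rule looks like in an ABM-style simulation: every agent follows the 20/80 consumption rule, and one agent’s spending becomes another agent’s income.

```python
import random

# Toy ABM with a hard-coded Keynesian consumption rule. Each period every
# agent spends 20% of cash on hand; each dollar spent is routed to a
# randomly chosen agent as income. All parameters here are assumptions.

random.seed(0)
N, T, MPC = 100, 50, 0.2
cash = [100.0] * N

for t in range(T):
    spending = [MPC * c for c in cash]            # each agent's behavioral rule
    cash = [c - s for c, s in zip(cash, spending)]
    for s in spending:                            # spending is someone's income
        cash[random.randrange(N)] += s

total_spending = sum(MPC * c for c in cash)
print(round(total_spending, 2))                   # aggregate "spending" next period
```

Note that the rule itself never responds to anything in the environment – which is exactly the point made below about the missing feedback loop.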

People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived.  The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

A second key difference is that interactions are often restricted to bilateral (or at least limited) connections: a limited number of links between a given consumer and potential sellers, a single connection between a worker and a firm, and so on.

Lastly, the individuals are “heavy.”  That is, the models keep track of each and every individual in the system and the behavior of each and every one matters (to some extent) to determine the outcome. This is again unlike many standard economic systems.  In most macroeconomic systems the behavior of a single individual can be altered without having a perceptible impact on the aggregate outcome. This isn’t typically the case in ABMs. In an ABM, one individual can influence the outcome at least to a degree.

Now, all of these features have been analyzed in economics. Macroeconomists have models that explore the consequences of matching (or search) in which the individuals make deals on a bilateral basis. In game theory, we often have environments in which each player has a substantial influence on the outcome. And there are well-known models that consider ad hoc rule-of-thumb behavior. However, these modifications are rarely considered in combination.

Of the three features, the absence of rationality is the most significant.  Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.
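A minimal sketch of that circularity, with a demand rule assumed purely for illustration: each agent’s demand depends on the price, the market-clearing price depends on aggregate demand, and the equilibrium price is the fixed point of an adjustment process.

```python
# Sketch of the two-way feedback that makes equilibria fixed points.
# The demand rule and adjustment speed below are illustrative assumptions.

N, supply = 100, 50.0

def demand(p):
    # aggregate demand from N identical agents, each with rule q_i = 1/p
    return N / p

# Equilibrium: find p such that demand(p) = supply (a fixed-point condition),
# via tatonnement-style updating: raise p when there is excess demand.
p = 1.0
for _ in range(200):
    p += 0.01 * (demand(p) - supply)
print(round(p, 3))  # → 2.0, the price at which plans are mutually consistent
```

In an ABM the arrow from the environment back to the rule is cut: the rule `demand` would be fixed in advance and the modeler would simply simulate, with no requirement that a market-clearing condition ever hold.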

The absence of rational behavior also means that the ABMs are much easier to solve than traditional models. Once you have settled on your preferred choice functions, you just assemble many such individuals together in a computer environment and simply simulate.

In fact, the predecessors of ABMs have been around for quite a while. The earlier versions were called “cellular automata.”  The most famous of these was John Conway’s “Game of Life.” This “game” takes place on a grid (often a torus).  Each square on the grid is either on or off (alive or dead). If a cell is alive at time t, it remains alive if 2 or 3 of its eight neighboring squares are also alive.  If 1 or 0 of its neighbors are alive then the cell “dies” due to isolation.  If 4 or more are alive, the cell dies due to congestion. Cells can also come to life: an inactive (dead) cell comes to life if it has exactly 3 live neighbors. The dynamics can be very intricate. Starting from a random pattern of active/inactive cells, a huge array of outcomes can be supported – there can be bursts of activity followed by a dramatic collapse.
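The rules are simple enough to code directly. Here is a minimal sketch of Conway’s game on a small torus, checked against the classic “blinker” pattern:

```python
# Conway's Game of Life on a torus, following the rules described above.

def step(grid):
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # count live cells among the eight neighbors, wrapping around
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            if grid[i][j] == 1:
                new[i][j] = 1 if live in (2, 3) else 0   # survival rule
            else:
                new[i][j] = 1 if live == 3 else 0        # birth rule
    return new

# A "blinker": a row of three live cells oscillates with period 2.
g = [[0] * 5 for _ in range(5)]
g[2][1] = g[2][2] = g[2][3] = 1
g = step(g)
print([g[i][2] for i in range(5)])  # the row has become a column: [0, 1, 1, 1, 0]
```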

In fact, you can get simple versions of cellular automata as apps on your iPhone. Two good ones are SPEED sim and CA2D.  CA2D has many cellular automata rules built in but it doesn’t have the Game of Life.  Below are three pictures of Conway’s Game of Life taken from SPEED sim.  The first panel shows a purely random starting point. The second panel shows the system after 25 iterations – you can see that patterns have emerged “naturally” from the initial random state. The last frame shows the steady state.


Initial Random State


After 25 Iterations


Steady State (Periodic)

Clearly even in this simple ABM we can get very complicated patterns and behavior.

Whether ABMs have any future in economics is not clear. I suspect that the rule-based approach at the heart of ABMs will ultimately limit their usefulness – particularly if outcomes depend importantly on subtle differences in the specifications of the rules, or if individuals have to adhere to simple rules even when the system starts behaving wildly.

Another problem facing ABMs is that they appear to be offered as a solution to a problem that might not exist.  In their 2009 Nature article, J. Doyne Farmer and Duncan Foley write that DSGE models “assume a perfect world, and by their very nature rule out crises of the type” we experienced in 2007-2008.  DSGE models do not “assume a perfect world.” Economists are enthusiastically adding frictions and modifications to DSGE models which incorporate many real-world types of market failures.  I will concede that I would be reluctant to offer a particular version of a DSGE model that I felt accurately captured what was going on during the crisis, but I don’t think this is because of a limitation of the DSGE approach.  It’s a limitation of economists’ fundamental understanding of financial crises.  And that’s not something that ABMs are going to fix ….

[1] An attentive reader pointed out that my original description incorrectly described the rules as applying at the system level. This is incorrect. The rules of behavior are attached to the individuals and each can have a different rule. 

Paul Krugman’s View of Aggregate Demand and Aggregate Supply

Paul Krugman responds to my earlier post.  He has a lot to say that’s worth commenting on.  I think we actually agree on quite a bit though there are points where we disagree.  Let’s take a look. (Paul’s comments in italics).

As I see it, we [i.e., Keynesian macroeconomists] have a general proposition — most recessions are the result of inadequate demand.

I basically agree with this though I admit that I don’t have a particularly clear definition of what we really mean by “aggregate demand.”  I think often this is meant to capture changes in consumer sentiment, fluctuations in government demand for goods and services or other incentives to purchase market goods – incentives which would include tax subsidies, monetary stimulus, … etc.

[…] we have a pretty good model of aggregate demand, and of how monetary and fiscal policy affect that demand. That model is IS-LM, with endogenous money as appropriate. 

Again I basically agree, with the caveat that the IS-LM model is at most just a sketch.  The consumption block of the IS-LM model is much better treated by a modern consumption demand component augmented suitably with credit constraints, some hand-to-mouth behavior and perhaps some myopia.  The investment block needs some serious work. My own sense is that it is also much better handled with a modern formulation rather than a simple relationship between investment and the real interest rate.  I would also stress that while the IS-LM sketch is pretty good as it stands, it desperately needs to incorporate a serious treatment of the financial sector. In a sense there is a third market / third curve missing from the model – one which gives the interest rate / loan terms faced by consumers as a function of collateral, net worth, etc.  We would then get a lot closer to a sketch which could capture important elements of the financial crisis.  (See below for a more precise description.)

We do not have an equally good model of aggregate supply.

I completely agree.

What we have, instead, is an observation: prices and wages clearly are sticky in the short run, and maybe for longer than that. There’s overwhelming evidence for that proposition, but in trying to justify it we engage in various kinds of hand-waving about menu costs and bounded rationality.

On the evidence I am again in complete agreement.  I actually think Paul is being too dismissive of the justifications for why we see price and wage rigidity.  Macroeconomists have invested a huge amount of time and energy into studying price setting behavior and these studies give a pretty clear picture of what is going on at the “micro level.”  It’s not just hand-waving.  Paul continues…

we can […] be fairly sure that expansionary policies in a depressed economy won’t be inflationary, and we can use the pretty good demand side model to tell us that monetary expansion won’t work but fiscal policy will when we’re at the zero lower bound.

I sort-of agree with this.  Certainly if the interest rate is zero then conventional monetary expansions won’t do anything.  Whether government stimulus is a good move is unclear.  I am sure that government spending increases employment and output to an extent but it is very important how the money is spent. In an earlier post I argued that even in a severely depressed economy, there is rarely a good justification for spending on projects that aren’t socially valuable.  In all likelihood the best fiscal policies will involve some sort of transfer (like payroll tax cuts) or other tax cut rather than government spending.

What the data actually look like is an oldfashioned non-expectations Phillips curve. 

OK, here is where we disagree.  Certainly this is not true for the data overall.  It seems like Paul is thinking that the system governing the relationship between inflation and output changes between something with essentially a vertical slope (a “Classical Phillips curve”) and a nearly flat slope (a “Keynesian Phillips Curve”).  I doubt that this will fit the data particularly well and it would still seem to open the door to a large role for “supply shocks” – shocks that neither Paul nor I think play a big role in business cycles.

Paul ends his post with some bait.  He writes “it remains true that Keynesians have been hugely right on the effects of monetary and fiscal policy, while equilibrium macro types have been wrong about everything.”  OK, I’m again going to try not to take the bait.  Let me just point out that this is a difficult statement to take very seriously.  For the most part, the Keynesians are a subset of the equilibrium types.  Moreover, there are many “equilibrium types” who are not Keynesian but who are instead finance guys who played a crucial role in analyzing the economy during the crisis.  I presume he is taking an obligatory shot at Minnesota / Sargent / Lucas / mathematical modelling etc. but … I’m not taking the bait.


APPENDIX: The traditional IS-LM system is something like this:

Y = C(Y,r) + I(r) + G + NX(e)

r = max{-π, aY + b(π)}

The first equation is the IS curve governing the goods market.  Consumption demand is increasing in Y and decreasing in r.  Investment demand is decreasing in r and net export demand is a function of the (real) exchange rate.  The second equation is the LM curve which I have written as a Taylor rule with the restriction that the nominal interest rate (i = r + π) is subject to the ZLB.
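To make the system concrete, here is a minimal numerical sketch in Python.  The linear forms for C and I, the Taylor-rule coefficients, and every parameter value are hypothetical choices for illustration – nothing is pinned down by the two equations except the signs of the derivatives.

```python
# Minimal numerical sketch of the IS-LM system above with a ZLB.
# All functional forms and parameter values are hypothetical.

def policy_rate(Y, pi, a=0.5, b=0.5):
    # LM/Taylor rule for the real rate r; the nominal rate i = r + pi
    # cannot fall below zero, so r is bounded below by -pi.
    return max(-pi, a * (Y - 1.0) + b * pi)

def excess_demand(Y, pi, G=0.2, NX=0.0):
    # IS curve: C(Y, r) + I(r) + G + NX - Y, with simple linear C and I.
    r = policy_rate(Y, pi)
    C = 0.3 + 0.6 * Y - 0.2 * r   # consumption: increasing in Y, decreasing in r
    I = 0.1 - 0.3 * r             # investment: decreasing in r
    return C + I + G + NX - Y

def solve_output(pi, lo=0.0, hi=5.0, tol=1e-12):
    # Bisection on Y works because excess demand is strictly decreasing in Y:
    # the marginal propensity to consume is below one and r rises with Y.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid, pi) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Y = solve_output(pi=0.02)   # equilibrium output when inflation is 2 percent
r = policy_rate(Y, 0.02)    # implied real rate; the nominal rate is r + 0.02
```

With these made-up numbers the ZLB does not bind; under sufficiently severe deflation the max{} floor binds, r is pinned at -π, and the Taylor rule drops out – the depressed-economy case discussed in the text.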

A simple improvement over this system would be to introduce a different borrowing rate for firms and households – call this rate R.  R is equal to the base rate r plus an “external finance premium.” The EFP could be a decreasing function of asset values A and income Y. We now have the modified system

Y = C(Y,R) + I(R) + G + NX(e)

r = max{-π, aY + b(π)}

R = r + EFP(Y, A)

I’ll call the third equation the FE curve since it provides a condition for financial market equilibrium.  This model will have a spread R – r which reflects financial stress.  Of course the IS-LM-FE system is again just a sketch.  We still have no explicit role for liquidity, solvency concerns, bank runs, etc. and the FE block would need to be fleshed out.  In fact this model is essentially a static sketch of the financial accelerator model of Bernanke, Gertler and Gilchrist (1999).
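The modified system can be sketched numerically the same way.  Every functional form and parameter below is again a hypothetical choice (net exports are held fixed for simplicity); the point is just the mechanism: a fall in asset values A raises the external finance premium, widens the spread R – r, and depresses output.

```python
# Illustrative IS-LM-FE sketch: households and firms borrow at R = r + EFP(Y, A),
# where the external finance premium falls with income Y and asset values A.
# All functional forms and parameter values are hypothetical.

def solve(pi, A, G=0.2):
    def r_policy(Y):
        # Taylor rule for the real rate with the ZLB floor r >= -pi
        return max(-pi, 0.5 * (Y - 1.0) + 0.5 * pi)

    def efp(Y):
        # external finance premium: decreasing in Y and A, floored at zero
        return max(0.0, 0.3 - 0.1 * Y - 0.2 * A)

    def excess(Y):
        # IS curve with the borrowing rate R taken from the FE curve
        R = r_policy(Y) + efp(Y)
        C = 0.3 + 0.6 * Y - 0.2 * R
        I = 0.1 - 0.3 * R
        return C + I + G - Y          # net exports held fixed at zero

    lo, hi = 0.0, 5.0                 # bisection: excess demand falls in Y
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    Y = 0.5 * (lo + hi)
    return Y, efp(Y)                  # output and the spread R - r

# A fall in asset values widens the spread and depresses output:
Y_hi, spread_hi = solve(pi=0.02, A=1.0)
Y_lo, spread_lo = solve(pi=0.02, A=0.0)
```

In the high-A case the premium is zero and the model collapses back to the plain IS-LM sketch; in the low-A case the spread opens up and equilibrium output falls – the static analogue of the financial accelerator.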

Traditional Macroeconomic Models and the Great Recession

A common narrative: analysts who used traditional Keynesian tools to understand the crisis made better predictions and were in a better position to diagnose the problem. This narrative may be comforting to some but unfortunately it’s not correct.

In some sense, the truth of our predicament is even scarier.  Macroeconomists were caught completely off-guard by the financial crisis.  None of the models we were accustomed to using provided insights or policy recommendations that could be used for fighting the crisis.  This is particularly true for New and Old Keynesian models.  The New Keynesian model (particularly its DSGE manifestations) was the dominant macroeconomic paradigm in the pre-crisis period and, judging by many of the presentations at the National Bureau of Economic Research (NBER) summer meetings, New Keynesian and Old Keynesian models (referred to as “paleo Keynesian” by some of the meeting participants) continue to serve as the primary lens through which we try to make sense of the macroeconomy.

In its standard form, neither the New Keynesian model nor its paleo-Keynesian antecedent features a meaningful role for financial market failures.  As a result, the policy response to the crisis was largely improvised.  This is not to say that the improvised policy actions were bad.  Improvisation guided by Ben Bernanke was about as good as we could hope for.  Nevertheless, for the most part, the models we were accustomed to using to deal with business cycle fluctuations were simply incapable of making sense of what was going on.  In one of Stephanie Kelton’s recent podcasts, economist Randy Wray makes exactly this point.  While I typically do not grant much credence to heterodox economists, in this instance Professor Wray’s diagnosis is completely correct.  Fortunately, as Noah Smith pointed out in an earlier column, macroeconomists have been working, and continue to work, on developing models that can be used to analyze financial market failures.

In addition to the fact that the prevailing business cycle theories did not incorporate a financial sector, the components that were featured prominently were not performing well.  The cornerstone of pre-crisis macroeconomic theory was price rigidity.  In New Keynesian models, price rigidity results in a Phillips curve relationship – more specifically, a New Keynesian Phillips curve.  According to the Phillips curve, if inflation was unusually high then output would be above trend.  If inflation was unusually low then output would be below trend.
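For reference, the textbook New Keynesian Phillips curve takes the form below (this is the standard formulation, not an equation taken from any of the posts discussed here):

π_t = βE_t[π_t+1] + κ(Y_t – Y*_t)

where β is a discount factor, the slope κ is smaller the stickier are prices, and Y_t – Y*_t is the gap between output and trend.  Holding expected inflation fixed, high inflation goes together with output above trend and low inflation with output below trend.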

The financial crisis of 2007-2008 and the Great Recession that followed proved to be a particularly bad episode for the New Keynesian model.  Contrary to Paul Krugman’s assertion that traditional Keynesian models performed well, the key mechanism in the New Keynesian framework – the Phillips curve – was a virtual failure.  In a recent post commenting on John Cochrane, Noah Smith plots quarterly price growth and notes that inflation did fall somewhat during the recession.  In his words, “inflation not only plunged during the recession, but remained low after the recession.”  The chart below shows core inflation (inflation for all goods excluding food and energy) since 2004.  Clearly inflation fell once the recession took hold.  Prior to 2008 annual inflation had been roughly 2 percent.  Inflation fell during the recession [1] to roughly 1 percent.


To put this change in perspective, the next chart plots annual inflation for the entire post-war period. The blue line is the inflation rate for all goods in the CPI. The red line is core inflation.


There have been many large swings in inflation during U.S. history.  Compared to historical variations in inflation throughout the post-war period, the changes in price growth during the Great Recession were quite mild.  Notice that because the large drop in the overall inflation rate does not appear in the core measure, this movement reflects changes in food and energy prices (primarily oil prices).  If a 1 percent reduction in core inflation is sufficient for the Keynesian model to generate the huge recession we just went through, then where was the huge recession in the late 1990s?  Where was the enormous recession in 1986?

In his 2011 AEA presidential address Bob Hall proposed modifying the Keynesian model by treating inflation as “nearly exogenous.”  One might interpret this modification as a “hyper-Keynesian” element – the exogeneity of inflation arises because the Phillips curve is essentially flat, so even minor variations in inflation are associated with sharp changes in output.  Alternatively, one could interpret the modification as a capitulation of sorts.  The inflation block of the model is incorrect and so Hall removed it, letting inflation march to its own beat, unaffected by developments within the system.

Traditional Liquidity Trap models predict that inflation should not only be low but it should be falling. Instead, even though interest rates were pushed to zero and even though economic activity contracted dramatically, the inflation rate barely budged. In his paper, Hall writes [of the New Keynesian Phillips curve] “luckily the theory is wrong.” Were the Phillips curve true, inflation would have fallen making the real interest rate even more negative, further depressing output and employment.

While commentators like Paul Krugman are correct to point to a few success stories of some models (like some aspects of the Liquidity Trap), they should also own up to the mounting evidence that the older models (even the paleo-Keynesian models that some prefer) clearly failed on some important dimensions.  They couldn’t tell us much of anything about what caused the financial crisis itself, they couldn’t really tell us how to deal with it, and they made clear predictions about inflation – predictions that were supposedly at the center of the New Keynesian mechanism – that never materialized.



[1] I am plotting the percent change in the price indices from year t-1 to year t.
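In code, the computation described in [1] is just the following (the index values are made up for illustration):

```python
# Year-over-year inflation: percent change in a price index from year t-1 to t.
# The index values below are hypothetical, not actual CPI data.

def yoy_inflation(index):
    return [100.0 * (index[t] / index[t - 1] - 1.0) for t in range(1, len(index))]

cpi = [100.0, 102.0, 104.04]   # hypothetical annual price-index levels
rates = yoy_inflation(cpi)     # 2 percent inflation in each year
```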