Cripes, Maybe I’m in the Echo Chamber …

Paul Krugman is not happy with my post on echo-chamber etiquette for America’s public intellectuals.  As widely read public intellectuals, Paul Krugman and Noah Smith have an obligation to present a description of the world that’s as accurate as possible.  They have an even greater responsibility to liberal readers who are attracted to ideas that fit well with their preconceptions about the nature of reality. 

Krugman points out that he wasn’t really attacking Sargent but rather commentators who posted favorable comments about Sargent’s commencement address.  Fair enough.  Paul is not attacking Sargent, and I think he’s justified in pointing out that it is curious that such favorable comments are being highlighted now when the U.S. economy is still in a weakened state and when problems with income inequality are finally getting the attention they deserve.  

He also suggests that the right wing echo chamber is worse than the left wing echo chamber.  He writes that

America, it goes without saying, has a powerful, crazy right wing. There’s nothing equivalent on the left — yes, there are individual crazy leftists, but nothing like the organized, lavishly financed madness on the right.

I don’t think I would put it quite so strongly, but again I think Krugman is basically correct – the “crazy right wing” has assumed too much influence over broader conservative America, and this is surely a bad thing. The left, thankfully, is not at that point yet. 

This, however, gets at what I was trying (perhaps unsuccessfully) to say in my earlier post.  The current condition of the right is the inevitable consequence of an environment in which crazy ideas are bandied about cavalierly and amplified to the point at which they become unquestioned.  I think this was also the point of the NYT column by Nicholas Kristof.

Now, neither Noah’s post nor Krugman’s post is crazy.  Neither is “wrong.”[1] But neither is the message that their more liberal readers need to hear.  To me it seems obvious that the proper response to an extreme right-wing statement like “tax cuts pay for themselves” can’t be “oh yeah, well extra government spending pays for itself.” If I could dictate the reading habits of my fellow Americans, I might put Piketty’s Capital in the Twenty-First Century on the bedside table of every conservative but I would put Sargent’s 12 principles on the bedside table of every liberal. 

[1] Krugman points out (again correctly) that there may be efficiency gains from having a more equal distribution of income. This is true, though I strongly suspect that the overall efficiency costs of achieving a more desirable income distribution will outweigh these benefits. Of course, I also strongly suspect that, if we are careful about how we go about redistributing income, the overall gains in welfare from having a more equal income distribution will outweigh the loss in efficiency.  

Watch what you Say in the Echo Chamber

In a recent blog post, Paul Krugman points out that there is no conflict between “standard economics” and concern about growing income inequality.

You can be perfectly conventional in your economics […] while still taking inequality very seriously.

This is absolutely correct and it’s a stance which economists need to embrace more often than they do. In an earlier post I argued that concern about income inequality is legitimate irrespective of how you think the economy works. You don’t need additional justification to desire a more equitable distribution of income or well-being. I personally count myself as an economist who more or less accepts the basic principles of the field, but who also recognizes that dealing effectively with the current state of income inequality is simply a necessity.

However, while Krugman is quite right in this case, both he and Noah Smith have made some remarks which I think liberals should never make. The remarks came in response to a commencement speech by Tom Sargent which has attracted a surprising amount of attention from online commentators. In the speech, Sargent outlined 12 principles of economics that he felt college graduates should know. The principles were pretty uncontroversial and probably a good thing for Berkeley undergraduates to hear before they go out into the real world. However, one of Sargent’s principles struck a nerve:

There are tradeoffs between equality and efficiency.

Paul Krugman took issue with this remark, saying that reducing income inequality might actually increase economic growth and (for an economy at the zero lower bound) that government spending “more than pays for itself.” He closes his remarks by saying that those people commenting favorably on Sargent’s commencement address are simply advancing “anti-Keynesian propaganda, cloaked in the form of a widely respected and liked economist uttering what sound like eternal truths.” [1]

Noah Smith similarly found fault with the equity-efficiency remark. He writes that it’s not generally true that there is such a tradeoff and he cites the Second Welfare Theorem as justification. As he goes through the list, Noah remarks that he can “start to see a policy implication emerge from the list” and in the end it all shapes up to be “one big caution against well-meaning government intervention in the economy by do-gooding liberals concerned about promoting equality and helping the poor.”

I understand the reactions by Noah and Paul. I just wish they wouldn’t succumb to the temptation to write this stuff. The truth is that most of the principles of standard economics carry with them a fairly conservative / neo-liberal message. I know there are exceptions but they are just that – exceptions. The truth is that if we want to really attack the problem of income inequality (promote equality and help the poor) then we are going to have to take stuff away from richer people and channel it to poorer people. This kind of action will most likely have consequences for markets and these consequences will be unsavory. Paul and Noah could argue (quite strongly I suspect) that these costs might be relatively small but they should not act as though they really think there are no costs. (In case you’re wondering, the Second Welfare Theorem says that if we can costlessly transfer resources across individuals then any efficient outcome can be supported as a market equilibrium…)

If you are a liberal, let me give you a sense of why this is a costly statement to make. Suppose one of Sargent’s principles included a public finance tradeoff: there is a tradeoff between low tax rates and high tax revenue. A conservative might take issue with this claim and write ‘you know this isn’t necessarily true. According to some models, lower tax rates actually lead to more revenue because they encourage economic activity.’ The conservative would of course be “correct” in a very narrow sense (this is a theoretical possibility) but he or she would be offering a very tempting fiction to their audience and to the public – the idea that you can have everything you want at no cost. Neither liberals nor conservatives should make remarks like this. When they do, it invariably costs them credibility and serves to drive a further wedge between voters who are already too polarized.

Talking in an echo chamber can be fun, but public intellectuals like Paul and Noah have a greater responsibility to self-censor than most because they have large audiences. They have a responsibility to the public and also a responsibility to their liberal readers who take their statements to heart. The conservative echo chamber is probably worse than the liberal echo chamber (you can cut tax rates and raise tax revenue, cutting spending will stimulate the economy, the Affordable Care Act is going to cripple the economy, … ) and conservatives have paid a hefty price as a consequence. They have boxed themselves into an intellectual corner which is going to be very difficult to escape from, largely because they have adopted a narrative filled with soothing fictions.

Paul’s second post is the correct path to take. Liberals can simply say “Yes, there are costs to redistribution, but it is in society’s interest to bear these costs to fix the problem of inequality.” Similarly conservatives should say “Yes there are problems with income inequality, but we have to be smart and careful about how we design transfers so as to avoid too much interference with normal market functions.”

Is that so hard?

[1] In the earlier post I misinterpreted Paul Krugman’s statement to be commenting directly on Sargent’s speech when in fact he was referring to people who were themselves commenting favorably on Sargent’s speech.  Hopefully the above correction makes clear Krugman’s actual intention. Thanks to those in the comments section who noted my mistake. CH

Fixing Terrible Economics Presentations

Most economics seminar presentations are terrible. Basically, we suck at presenting our work.  

There are many guides to improving presentations that you can find online. There’s a recent set of suggestions by Jesse Shapiro on Greg Mankiw’s blog. These guides provide reasonable, uncontroversial advice and for the most part I agree with their suggestions (big font, lead with a question, have a bottom line, etc). Here I’m going to make some suggestions that I don’t often hear others making and which some of you might find controversial.

These are suggestions for presentations. I’ll have a post directed at advice for seminar participants soon.

Before I get going let me acknowledge that I am not the best presenter in the world myself. However, I have completed step 1 on the path to recovery – I have recognized that I have a problem. In fact I have completed step 2 – I am trying to do something to fix the problem.  OK, here’s my list of suggestions: 

1. FINISH ON TIME. So many other problems flow from this one simple blunder that it’s hard to overstate how costly it is. By this, I don’t mean that you start panicking with 5 minutes left and then skip a dozen slides to get to your crappy conclusion slide. I also don’t mean talking really fast and blowing by each and every one of your 50 slides to make it to the end on time. What’s even worse is that we usually have more time than we really need. Seriously, how many papers require more than half an hour to present adequately? Finish on time – plan to comfortably cover a reasonable amount of material in the time you have. It’s best if you are worried about the possibility of finishing ahead of time (it will probably never happen). 

2. Be as clear and simple as possible. Most graduate students actually entertain the idea that what they have done is so simple that the audience might figure it out during the seminar. This is almost never true. Presenters have a completely distorted view of their own work. They have been doing detailed meticulous work “under the hood” for so long that they have managed to convince themselves that everything they’ve done is trivial. It’s not true. The audience is completely unprepared and it is the speaker’s obligation to explain the material clearly. 

Examples (with numbers?) are an excellent tool for clarity. Don’t present the general theorem, present the example and mention the theorem.

3. Use fewer slides. I’ve seen some recommendations that you can cover at most one slide per minute. That would be a ridiculous pace. I suggest no more than one slide every 3 to 5 minutes. That means that for a 90 minute presentation you get between 18 and 30 slides. Closer to 18 would be best …

4. What should you cut out? Let’s start with the “Preview of Results” slide. This sorry excuse for a presentation tactic is becoming more and more common but it’s really just a signal that you haven’t fixed problems #1 and #2. If you are going to cover the material clearly and on time you don’t need the preview slide. [Think of how this would play in the movie The Sixth Sense. Bruce Willis meets Haley Joel Osment and then we get to the “Preview of the Results: Bruce Willis’ character is a ghost.”]

You can also cut out the “Related Literature” slide. It’s important for a researcher to be aware of the related work. This doesn’t mean that you should put up a reference list in your presentation.

5. Slide Content: Put as few words as possible on your slides. Don’t write sentences (an exception would be if you are going to put up a quote). Your slides are not your presentation notes! The stuff on the slides should complement what you are saying. It isn’t a substitute for what you are saying.

Don’t use bullet points or lists if you can avoid them.

Use equations and math sparingly. Be choosy about the equations you are going to present and plan on re-stating the meaning of the variables for each equation. Presenting the “boilerplate” of the model is typically not a good use of time. Focus on the stuff that is specific to your work. It’s almost a foregone conclusion that the audience won’t be able to remember the notation (even if you have a notation slide). Choose notation carefully (C is consumption not capital, labor is N or L, marginal cost is MC, … ). I’ve had presentations that were disasters simply because of my poor notation choices.

The best things for slides are pictures, graphs, scatter-plots …

6. You are giving a research seminar – Don’t be “cute.”

7. Software: I know “everyone” uses Beamer now. Beamer is another really bad feature of economics presentations. It’s as though they went to PowerPoint, found the most drab, depressing format and then mandated that every economics talk had to be given in that style. The only thing Beamer has going for it is that if you write in LaTeX then you can enter the math easily. And this might not actually be a good thing (see # 5 on Math).

There’s a lot of stuff in Beamer that should simply be dropped. The drop-shadow behind the math panels, the little sphere bullet points, the little triangle bullet points, the terrible purple haze color palette that everyone uses, the slide counter (I think this is so the audience knows how many more terrible slides they will have to endure before the end), the ridiculous bubbles at the top which get filled in when you cover a section of the talk … it’s all hideous.

In fairness, the other options aren’t much better. PowerPoint is ok but you have to avoid the pre-packaged formats or you will end up running into the same problems that Beamer has. There are other options that will hopefully get better over time: I’ve never tried Keynote but it might be worth a shot. There’s Prezi and similar web-based presentation software, … whatever. Heck, even chalk or a whiteboard. [1]

8. Speaking: Please don’t read your slides! (Remember, your slides are not your notes!). 

Try to eliminate “filler sounds” – uh, um, you know, like, right?, I mean, what do I want to say here?, what I’m trying to say is, sorta, kinda, … Silence is better than this kind of stammering. Have the opening sentences of the talk essentially memorized.

One trick for refining your speaking ability is to record yourself giving a talk. Count the filler words (you’ll be horrified). This will give you a sense of whether you have a problem.

Another trick is to try to emulate good speakers. Here are two clips of Sam Harris [clip1 clip2].  TRIGGER WARNING: Harris is a “new atheist” who frequently makes disparaging remarks about major religions; he’s particularly critical of Islam. If you are offended by this type of discussion then you might want to find a different example. I’m including him here because he’s an excellent public speaker.

You’ll notice that he speaks in complete sentences with very few filler sounds. His eyes are always on the audience (most of the time he doesn’t even have slides). He speaks very slowly with deliberate pauses. The sentences and his delivery are carefully orchestrated (even though they sound improvised to an extent). A scientific presentation doesn’t have to be this smooth – Harris is a public / pop intellectual who is trying to get big picture ideas across to a general audience. This isn’t typically what we are doing in a research seminar but the techniques he uses still carry over to some extent.

 

[1] I was at a conference once where Randy Wright got up to give his discussion and he didn’t have any overheads prepared (this was back when people used overhead projectors). Instead, he decided to simply write his slides as he talked. It was the best discussion at the conference.

Are the Micro-foundations of Money Demand Important?

EC607 is rapidly coming to a close. I’ve finished the RBC model and now I am on to discussing nominal rigidities and New Keynesian Economics. This transition is always somewhat awkward because I have to say something about the demand for money.

Prior to the crisis, money demand had nearly disappeared from mainstream macroeconomics. This might seem strange since so much of macroeconomics involves money, but it’s true. In Advanced Macroeconomics, when money is introduced, Romer simply adds real money balances to the utility function and then moves on to tackle the more important problems of macroeconomics (price rigidities in this case). That’s basically it for money demand: one paragraph and an ad hoc addition to the utility function which is basically never mentioned again.

I think the reason for the marginalization of money demand is two-fold. First, getting money into a neoclassical economic model is really tough. Fiat money (money that isn’t backed by anything with actual value) is simply not valued by market participants in a Walrasian setting. The Walrasian value of something that is intrinsically worthless … is zero. The fictitious Walrasian auctioneer is simply too nimble, too efficient to permit an equilibrium with valued fiat money. To get money into these models (with any micro-foundations at all) requires that we create some kind of a “gap” in markets to create some room for an unbacked currency.

There are models that do the trick. Often researchers working on money demand use frameworks that are descendants of the Kiyotaki-Wright (1993) matching model. This model imagines that all transactions take place through random matching between individual traders. Because the probability of a “double coincidence of wants” in the Kiyotaki-Wright model is low (it’s unlikely that you will bump into someone who wants what you have and has what you want), a fiat currency can circulate in equilibrium. More modern versions use an extension suggested by Lagos-Wright (2005) in which traders interact in two sub-periods: in one, traders match bilaterally in an anonymous, decentralized trade stage; in the other, they trade in a centralized (Walrasian) market.
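To get a feel for why the double coincidence of wants is such a weak reed, here is a back-of-the-envelope Monte Carlo (this is only an illustration of the counting problem, not the actual Kiyotaki-Wright model, and the trader setup below is my own simplification): each trader holds one good and wants a different one, both drawn at random from K good types.

```python
import random

def double_coincidence_rate(num_goods, trials=100_000, seed=0):
    """Estimate the chance that two randomly matched traders can barter
    directly: each holds one good and wants a different one, drawn
    uniformly at random from num_goods types."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        h1, h2 = rng.randrange(num_goods), rng.randrange(num_goods)
        w1 = rng.choice([g for g in range(num_goods) if g != h1])
        w2 = rng.choice([g for g in range(num_goods) if g != h2])
        # double coincidence: each wants exactly what the other holds
        hits += (w1 == h2) and (w2 == h1)
    return hits / trials

# Analytically this is 1/(K*(K-1)), so about 1/90 for K = 10 goods.
print(double_coincidence_rate(10))
```

With even a modest number of goods, direct barter opportunities are rare, which is exactly the “gap” that lets an intrinsically worthless currency circulate.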

These are elegant models and they do capture elements of the motives behind holding money. However, most macroeconomists rarely use them, often regarding these models as simply too abstract to be useful. Their abstract nature also makes empirical analysis of these models extremely difficult. (Incidentally, if you are a graduate student looking for a research topic, I would encourage you to look outside of this area. It’s a very difficult area and it doesn’t sell very well on the academic job market. Search and matching in general is very hot right now but “money-search” is not.)

The second reason why money demand has been largely relegated to the sidelines is that there are moderately persuasive arguments that we don’t actually need to understand it to study the macroeconomy – even to study monetary economics itself. The argument goes something like this: The Federal Reserve conducts monetary policy in terms of a nominal interest rate target. Once it decides on the setting for the funds rate it adjusts the money supply to enforce its target. The New Keynesian model is an excellent example of this approach. The simplest NK model has an equation governing the demand for goods and services, an equation governing inflation, and an equation describing the Fed’s operating rule. No mention is made of money supply or demand and many (most?) macroeconomists are perfectly happy with this state of affairs. The possibility that we could avoid the issue of money demand is very attractive – particularly given the difficulties of successfully modeling money demand.
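For concreteness, a textbook statement of those three equations (roughly following Galí’s notation, with output gap $x_t$, inflation $\pi_t$, and nominal rate $i_t$; exact specifications vary across papers) looks like:

```latex
\begin{align*}
x_t   &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\,\big(i_t - \mathbb{E}_t \pi_{t+1} - r_t^n\big)
        && \text{(demand: the ``IS'' curve)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t
        && \text{(inflation: the NK Phillips curve)} \\
i_t   &= \phi_\pi\, \pi_t + \phi_x\, x_t
        && \text{(policy: a Taylor-type rule)}
\end{align*}
```

Notice that the money supply and money demand appear nowhere in the system; the interest rate rule stands in for everything the central bank does.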

Money demand may be making a comeback though. During the crisis, a lot of concern centered on malfunctioning markets for money-substitutes. Recent work by Arvind Krishnamurthy and Annette Vissing-Jorgensen, Stefan Nagel, and Adi Sunderam emphasizes the liquidity aspects of many assets that are not traditionally considered “money.” Treasury bills, commercial paper, and highly rated securitized assets all have important liquidity components to their market values. In addition, many people think that the demand for liquid, low-risk securities encouraged the creation of more and more securitized subprime loans. Not having a suitable model for money (or money substitutes) seems like a particular shortcoming given recent history.

In 1978, there was an amazing conference at the Federal Reserve Bank of Minneapolis devoted explicitly to the study of micro-foundations of money demand. The papers at the conference were later collected in Models of Monetary Economies.[1] In it, there is an interesting discussion by James Tobin who writes in part 

Why does fiat money … have value? What determines its value? This conference [is] based on two premises. One is that the two questions have not been satisfactorily and rigorously answered. The other is that the answer to the second question […] can be achieved if and only if [we have] a precise answer to the first question […]. I am dubious of both premises.

In hindsight, I think it’s clear that Tobin’s first suggestion – that we already had a satisfactory and rigorous understanding of why people hold money – was at best not entirely correct. His second suggestion – that we might not need to rigorously understand why people hold money – might be right, though my faith in his argument has definitely been shaken by recent events.

[1] This volume is available online here. The 1978 conference lineup was amazingly good and the manuscript includes, among other things, Lucas’ “Pure Currency” model, Townsend’s “Turnpike” model, and an excellent paper by Neil Wallace on money demand in the overlapping generations model. While it doesn’t have any of the modern matching models, it is still an impressive and insightful volume and should be required reading for anyone interested in the pure theory of money.

What the Heck is “Calibration” Anyway?

Every year I teach EC607 I arrive at the Real Business Cycle model and run into a problem. No, it’s not struggling to answer “why are you teaching the RBC model if you don’t think it is useful for understanding business cycles?”  No, the problem occurs when I get to the subject of calibration.  I would like to tell my students exactly what we mean when we say that we calibrate parameters.  I can’t tell them however since I don’t really know myself.

In my own work, I do things that I would describe as calibration. I even have an intuitive sense of what I mean when I say that some parameter has been calibrated.  However, I do not have a precise notion of what it means to calibrate a model. In fact, I am not sure anyone has a precise statement of what it means.[1]

Calibration is a way of assigning values to the parameters which determine how our models function. Unlike estimation, calibration does not assign parameter values to make the model fit the data. Some descriptions of calibration suggest that the parameter values should come from separate data sources – separate, that is, from the data that you are analyzing with the model.  In Advanced Macroeconomics, David Romer describes calibration as follows (emphasis added):

The basic idea of calibration is to choose parameter values on the basis of microeconomic evidence and then to compare the model’s predictions [with the data].  

This is a fairly reasonable description of what many people mean when they use the term ‘calibration’ but it is problematic for at least a couple of reasons.  First, economic data typically don’t come with useful labels like “Microeconomic Data: Approved for use with calibrating models” or “Macroeconomic Data: WARNING – DO NOT USE FOR CALIBRATION!” You might think that it’s obvious which is which but it’s not. Certainly panel data like the PSID sounds like data an applied microeconomist might use. What about price (inflation) data? Is that aggregate “macro data”? What about unemployment rate data? What about data on type-specific investment? Is that micro data?

Second, many of the calibrations used in practice seem to come from macro datasets anyway. Take for instance the calibration of the labor’s share parameter in the production function. This calibration is typically justified by calculating the average ratio of total employee compensation to total income – figures which both come from the NIPA.

Romer also says that we should choose the parameter values before comparing the model with the data.  I hear sentiments like this a lot though again it doesn’t really hold up when we look at standard practice. The labor’s share parameter is again a case in point. We are setting that parameter based on fitting a single moment of the data (we are going to match the model average labor share with the observed labor share). Another example concerns a standard calibration of investment adjustment costs in business cycle models. These parameters are sometimes calibrated to match the model’s predicted investment volatility with observed investment volatility. These examples make calibration sound suspiciously like estimation. (Hopefully, calibration isn’t just estimation without bothering to report a standard error.)
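A minimal sketch of what the labor-share “calibration” amounts to in practice. The figures below are made-up, NIPA-style numbers (not actual NIPA data), but the mechanics are the real ones: pick the parameter to match a single moment, the average compensation-to-income ratio.

```python
# Hypothetical annual series (trillions of dollars); NOT actual NIPA data.
compensation = [8.0, 8.3, 8.6, 9.0, 9.3]       # total employee compensation
income       = [13.0, 13.4, 14.0, 14.5, 15.1]  # total income

# "Calibrate" labor's share by matching one moment of the data:
# the average compensation-to-income ratio.
labor_share = sum(c / y for c, y in zip(compensation, income)) / len(compensation)
capital_share = 1.0 - labor_share  # Cobb-Douglas: Y = K^(1-a) * N^a

print(f"calibrated labor share a = {labor_share:.3f}")
```

Phrased this way, the procedure is indistinguishable from a one-moment method-of-moments estimator, which is exactly the suspicion raised above.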

Nevertheless, even though I don’t really have a precise definition of what I mean by ‘calibration’, I believe that it may indeed have an important role to play in economic analysis. In particular, calibration might work quite well in situations in which we believe the model is wrong. (I know what you’re thinking – we always think the model is wrong! True. This means that calibration may indeed be very valuable.)

Let’s take a specific example. Suppose we have data on wages and employment and we have a labor supply / labor demand model which we propose to explain the observations. Suppose further that all of the changes in employment are driven by shifts to labor demand.  The only thing missing is the labor supply elasticity parameter.  An estimation based approach would do the following: we would invoke the null hypothesis that the model is correct and then estimate the missing labor supply elasticity from the observed data (just run OLS for instance). A calibration approach would not assume that the model is correct. Instead, a calibrated model would (somehow) obtain a parameter value from elsewhere, plug it into the model and compare the model output with the observed data. Let’s assume that the analyst calibrates the labor supply elasticity at roughly 0.5.

Suppose that (unfortunately for the econometrician) the model is mis-specified. In fact, the wage is stuck above the market clearing wage and there are many workers who are involuntarily unemployed. Every labor demand shift is resolved by simply absorbing available workers at the fixed wage.  The econometrician estimates the model and finds that the labor supply elasticity is very high indeed (near infinity in fact). The analyst using the calibrated model finds that his model predicts virtually no changes in employment.  Notice that it seems that the analyst using the calibrated model is actually on to something. There is a tension between his calibrated labor model and the observables. Moreover, this tension seems to provide an important clue as to how the model needs to be modified.[2] The econometrician on the other hand is happy with his estimates and will go about his business content in the belief that all is well with the model.
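The parable can be put in a few lines of code. Everything below is hypothetical: the “true” world has a rigid wage, so demand shifts move employment a lot but barely move the wage. The econometrician runs OLS assuming the supply curve holds; the calibrated analyst plugs in an outside elasticity of 0.5 and compares the model’s predictions with the data.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
demand = rng.normal(0.0, 0.10, T)   # log labor-demand shocks

# Misspecified truth: the wage is (almost) stuck above market clearing,
# so demand shifts are absorbed by employment at a nearly fixed wage.
log_wage = 0.01 * demand            # the wage barely responds
log_emp  = 1.00 * demand            # employment absorbs the shift

# Econometrician: assume the labor supply curve holds, estimate by OLS.
ols_elasticity = np.polyfit(log_wage, log_emp, 1)[0]

# Calibrated analyst: take an elasticity of 0.5 from elsewhere and
# see what employment changes the model predicts.
predicted_emp = 0.5 * log_wage

print(f"OLS elasticity estimate:    {ols_elasticity:.1f}")   # huge
print(f"observed employment std:    {log_emp.std():.3f}")
print(f"predicted employment std:   {predicted_emp.std():.4f}")  # near zero
```

The OLS fit is seamless and the estimate is enormous; the calibrated model fails loudly, predicting almost no employment variation. The loud failure is the informative outcome.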

Naturally, the missing link in this narrative is the source of this outside information that the calibrated model draws on.  Where does this initial parameterization come from? Perhaps there were some earlier studies that provide some information on the labor supply elasticity? Perhaps the analyst just arrived at the number through sheer introspection. (If I were offered a wage increase, how would I respond?)  In a sense calibration shares a common thread with Bayesian estimation which requires a prior to guide the estimates (like calibration, the exact source of the prior is somewhat mysterious). In fact, many prominent researchers who advocate the use of Bayesian techniques come from backgrounds that embrace calibration (Jesus Fernandez-Villaverde was trained at Minnesota for instance).

One other thing which strikes me is that the researchers who use calibration are often much more interested in the performance and insights generated by the models and much less interested in the parameter values themselves.  Estimation it seems tends naturally to place much more emphasis on the point estimates themselves rather than their consequences.

In any case, calibration will likely continue to be used as an important analytical technique, even if no one knows what it actually is …

 

[1] My coauthor Jing Zhang assures me that calibration does indeed have a specific meaning though she has never articulated what this meaning is.  (Actually, when I asked her what she meant by ‘calibration’ her first reaction was to laugh at me after which she told me that I didn’t have proper training …).

[2] Paul Krugman seems to arrive at a similar conclusion in a past blog post (though you will have to put up with the obligatory “fresh water bashing” before the end of the post).

Technodox Economics and ‘Deepak Chopra Mode’

Both Noah Smith and Mark Buchanan have posted replies to my earlier post on the tendency for physicists to be attracted to economics and both make good points.

After posting my original entry I thought about whether there might be areas of economics in which physicists would be able to contribute quickly and it occurred to me that perhaps finance – particularly high-frequency trading – might be such an area. Here lots of trades and market patterns might be governed by the technology of trading and could present an environment in which the skills of physicists are particularly valuable. So I was pleased to see that Mark seems to also focus on finance as an area where physicists have gained a foothold.

He mentions one particular area – distributions of random variables that have “fat tails” – which has specific value in modern economics. Several researchers are using such distributions to analyze economic phenomena. Xavier Gabaix has used power laws to study “granular” business cycles – the idea that random shocks which affect large players might have substantial consequences for the aggregate economy. Robert Barro, Francois Gourio, and others consider the possibility that such distributions place weight on extreme outcomes which in turn influence asset pricing (asset pricing implications of rare disasters). Trade theories, models of urban (spatial) formation, etc. use power laws. If physicists are actually the source of some of the background on fat-tailed distributions then kudos – that’s a point for physicists in economics.

Mark’s post also presents an example of what you might call ‘Deepak Chopra’ mode. This occurs when a writer (or speaker) talks about an issue which is complicated but then takes the opportunity to start hurling around complicated-sounding ideas and jargon with at best tangential relevance to the topic and at worst no relevance to the topic whatsoever. (Deepak Chopra is known to start talking about quantum healing and other new-age concepts that have the word “quantum” attached to them …). Here is Mark veering dangerously close to Deepak Chopra territory:

What the physicists DO believe, however, is that markets and economies are great examples of what scientists have come to call “complex systems” — systems of many elements (people, firms, etc.) with strong interactions between those elements which create webs of non-linear feedback. The elements learn and adapt, their interactions create “emergent” coherent structures and fluctuations at the collective level, and these structures then act back downwards to influence the behavior of the elements.

He quotes a similar sentiment from the essay by Brian Arthur:

It is this recursive loop that connects with complexity. Complexity is not a theory but a movement in the sciences that studies how the interacting elements in a system create overall patterns, and how these overall patterns in turn cause the interacting elements to change or adapt. … Complexity is about formation—the formation of structures—and how this formation affects the objects causing it.

(… and this is why we need physicists?)

OK, the economy is “complex”. There are many players on the stage, and together their actions contribute to the aggregate behavior of the economic system. In turn, aggregate outcomes influence individual behavior. This simultaneity is indeed one feature that makes analyzing the economy so difficult. The thing is, this type of two-way feedback is what economists are doing already. We often reduce environments like this to “fixed point problems” – the players take market conditions as given when they make their decisions, and in turn these decisions generate the perceived market conditions. In fact, we often prove that equilibria exist by appealing to mathematical “fixed point theorems” for exactly this reason. You might think this rules out learning and adaptation to an environment. It doesn’t. Economists have been analyzing environments with learning for decades. Learning does make things a bit more difficult: market participants start with subjective beliefs upon which they base their actions; these actions lead to market outcomes, and beliefs are refined in response. The interplay between beliefs, actions and outcomes is analyzed simultaneously (and it plays out over time). A bit more difficult, yes. But not an insurmountable obstacle for the field.
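The fixed-point-plus-learning logic can be made concrete with a toy cobweb-style market: suppliers act on an expected price, the market clears given their output, and adaptive learners nudge their belief toward the last realized price. The belief converges to the rational-expectations equilibrium, which is exactly the fixed point of the market-clearing map. All functional forms and parameter values below are invented purely for illustration:

```python
def clearing_price(expected_price, a=10.0, b=1.0, c=0.0, d=1.0):
    """Price that clears the market given suppliers' expected price.
    Demand: a - b*p.  Supply (committed before p is known): c + d*expected_price.
    Setting demand = supply and solving for the realized price p."""
    return (a - c - d * expected_price) / b

def adaptive_learning(p0=1.0, gamma=0.3, steps=200):
    """Each period, suppliers move their belief a fraction gamma of the way
    toward the price that actually cleared the market."""
    belief = p0
    for _ in range(steps):
        realized = clearing_price(belief)
        belief += gamma * (realized - belief)
    return belief

# At the fixed point, realized price = expected price: p* = (a - c)/(b + d) = 5.
p_star = adaptive_learning()
print(round(p_star, 6))  # → 5.0
```

The equilibrium is the fixed point of the map belief → clearing price; the learning dynamics are one (simple, stable) way of computing it, and richer learning models in the literature play out the same interplay of beliefs, actions and outcomes over time.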

If economists already do this stuff, why do smart outside observers like Mark Buchanan think we don’t? Why are we being lobbied by physicists like Eric Weinstein to adopt techniques of such speculative value? My guess is that techno-heterodox ideas (Technodox economics?) like gauge theory, agent-based models, and chaos (or complexity) theory are advocated because they (1) provide outsiders with some authority even though those outsiders don’t really know much about the field, and (2) force most practicing economists to admit (if they are honest) that they know essentially nothing about these techniques – an admission which could deprive them of some authority even though they do know quite a bit about the economy.

Incidentally, the jargon that comes along with many of these Technodox ideas lets even those who don’t understand the speculative techniques masquerade as though they do. Jargon is a problematic part of academia generally: it is often imprecise, and it allows the speaker to try to “pull rank” on the audience.

As an aside, Eric Weinstein commented on my earlier post, arguing that gauge theory was actually valuable. I told him that I was willing to be convinced, but I asked him to convey his insights without referring to the mathematics. He hasn’t gotten back to me yet. (Eric also left a lengthy comment on Noah’s post full of mysterious mathematical notation. I doubt he is going to convince people by taking this route.)

Why Are Physicists Drawn to Economics?

Even before the financial crisis, there was always a surprising number of ex-physicists finding their way to graduate study in economics. It could be that many of these math-physics people have simply concluded that they no longer like physics and are interested in economics instead. (Moreover, the job market for economics Ph.D.’s is much better than the job market for physics Ph.D.’s.) I suspect, however, that some of them are here because they have incorrect perceptions about the field. A student with a mathematical-physics background could easily convince himself that he has superior mathematical abilities to those of typical economists, and superior statistical and computational skills as well.[1] He might go on to conclude that, as a consequence, he should be able to enter economics and start contributing quickly and easily. He might also anticipate that he could easily adapt established models or techniques from physics to study economic phenomena and impress the profession.

If you are one of these people, let me try to disabuse you of these notions. Your mathematical abilities are actually not that much better than most economists (if they are better at all). You will have to spend a lot of time acclimating to the subject and the path to actually making contributions will be long and difficult. In all likelihood, there are very few (perhaps zero) off-the-shelf models or techniques in physics (or engineering, or chemistry, …) that will produce meaningful economic results. High-tech methods and approaches will be valued only if they can be described in simple, direct ways.

Economists are not held back because of a deficiency of mathematical tools and techniques. As soon as I hear a physicist (or a mathematician or whoever) start talking about the need for economists to use “the right mathematical techniques” I immediately think that the person has absolutely no idea what the main problems and questions in economics actually are.

Eric Weinstein for instance has somehow managed to convince himself that the instability of preferences is a huge problem for economics (it’s not) and that the application of Gauge Theory to economics will improve things. Now, I don’t know anything about Gauge Theory but I would be willing to bet that it has virtually nothing to add to economics. If Eric Weinstein has some insight that he wants to share then fine – send it my way and I’ll listen but I’m not going to listen just because the math is difficult. The fact that it’s difficult to understand something does not mean that it is important to pay attention to it. Eric Weinstein might be perfectly well-intentioned but if he thinks that because he knows some fancy mathematics, economists are obligated to grant credence to his work, he is sorely mistaken.

As another example, take Mark Buchanan, an ex-physicist who writes the blog The Physics of Finance. Mark seems genuinely interested in economics, particularly macro, but it doesn’t sound like he has a good grasp of the field or of the problems in the field. In a recent column in Bloomberg, he calls for a new age of pluralism in economics:

[M]acroeconomists should learn to speak the languages of other fields, including sociology and psychology, as well as neuroscience and engineering.

An appeal to pluralism like this is usually a sign that the appellant’s ideas are probably not particularly helpful. If you have a good idea, it won’t need affirmative action to get a hearing. True, new ideas and techniques are often met with hostility from the establishment but the reason to listen isn’t that the ideas come from outside the field. The reason to listen (if there is one) is that the ideas are good.

Lee Smolin is yet another physicist who has (at least in the past) waded into economic waters. Smolin, like all physicists, is clearly very smart and he is asking good questions, but they are the questions of a smart undergraduate. According to Smolin, the “well known fault” of neoclassical economics is the possibility of multiple equilibria and the possibility that equilibria may be “path dependent.” Like Mark Buchanan, Professor Smolin thinks that researchers from other fields (he mentions complex systems as one) are needed to help push economics forward. Also, despite apparently having no direct familiarity with finance, Smolin is prepared to offer his diagnosis of the current state of economics as it relates to the financial crisis:

[T]he whole thing is a disaster if I can say that as an outsider. And it [was one of the reasons] why regulations were lifted on markets and trading through the decades, but when people were making arguments to Congress, to the President’s office that the economy would be better off without regulation, this was the “scientific rationale for it” and led to the very unstable situation of the last economic crisis.

Mathematical fads often pop up in economics, but they won’t last unless they have concrete value. Chaos theory was tried briefly, but it produced essentially nothing of value in economics. Neural networks, agent-based modeling, path-dependent equilibria, knot theory … the list of techniques that sound impressive is long, but the list of accomplishments associated with these techniques is decidedly short.

Naturally, there are counter-examples. The introduction of calculus into economics represented a huge leap forward. Dynamic Programming, originally developed by the mathematician Richard Bellman in the 1950s, is one of the most widely used mathematical tools in economics today. The famous mathematician John von Neumann [2] was instrumental in the early development of Game Theory, and so on. However, guys like Alfred Marshall, Richard Bellman and John von Neumann don’t come along all that often.

If you are a physicist and you want to work in economics, you had better strap yourself in and prepare for a long, challenging path – one that is only worth following if you are really interested in the subject itself. There isn’t very much low-hanging fruit left (as Lones Smith once said, ‘there certainly isn’t any low-hanging fruit left in the middle of the yard!’). Don’t think that after watching Inside Job you can jump into economics and save the day just because you understand the Navier–Stokes equations.

[1] For brevity I will use ‘he’ for this post when the gender is unknown.

[2] I’m not sure I should necessarily classify von Neumann as a mathematician. It’s a testament to his genius that he is routinely “claimed” by so many fields. He’s a physicist! No, wait, he’s a mathematician! No, wait, he’s a computer scientist! No, wait, he’s an economist! No wait …

Inequality Redux

A reader sent me the message below. It makes a very good point and I felt it deserved to have its own post. The reader has asked to remain anonymous.

[As someone born in a poor foreign country] I dream of a world where Indians and Africans can confront obesity, involuntary leisure via partially insured unemployment, and dull service-sector drone work as their chief nemeses.

The inequality that therefore most concerns me, and that I would strongly suggest should most concern you and your readers, is emphatically not inequality between Americans, but inequality between Americans and their counterparts pretty much everywhere but Western Europe and the few other “rich” nations.

As many of us know, the gaps in any standard of well-being between these groups are simply astonishing. The conditions in India, Africa, and elsewhere dwarf the complaints of almost anyone I can find among the American poor. The overall lack of agency people have over their lives, their daily struggle for drinking water and for space to even relieve themselves privately (for women in particular), would make inter-American concerns a dreamlike problem for several billion people on the planet.

You might argue that this is a false choice: that one can assist both Americans and the world’s poor. But as an economist, you know that, to a large extent, the answer is ‘either/or’, not ‘both.’

One might also argue that the problems outside our borders are more intractable, and lie beyond our reach, as they involve institutional problems that would defeat any effort we might make. Yet essentially every argument leveled against retaining a broader perspective on inequality and deprivation can be leveled against efforts to equalize within U.S. borders. For example, U.S. inequality has proven very intractable, to say the least. Intergenerational mobility is stubbornly low within rich countries as well. Institutional racism and sexism have been plausibly advanced as serious impediments to achieving equality of outcomes and opportunities alike.

Personally, I hope American movements to end inequality will approach the issue more globally, and that they will become less insistent in their promotion of what is, ironically for US-based progressives, a version of American exceptionalism: that our poverty must be seen as more urgent than that afflicting nameless strangers in foreign lands.

Why not allow people refundable credits to donate to globally oriented charities, up to the total amount of social-insurance transfers that currently occur within our borders? Or offer people a choice: either pay taxes (perhaps after paying a minimum to fund defense), or transfer an equivalent amount to the Gates Foundation or some similar cross-national entity that is decisively worldwide in its mission and reach?

On a more positive note, maybe there is room for dealing with both American and global deprivation simultaneously: what I can offer the poor here is my leisure time, while what I can offer Africa is my money. We might exhort people to work hard, earn as much as they can, use their spare time to mentor/volunteer here, and give their money away to the third world.

Americans, at present, are strongly encouraged to transfer to other Americans. While I can see why I must contribute to national defense, I see less obvious social justice in preferring transfers to these other strangers who, by all accounts, are themselves rich in comparison to the huge mass of people who lie beyond our borders. (Two clear exceptions to this, for me personally, are African-American and Native American populations, both of whom I regard as having a hugely legitimate beef with their lot in our society, and whose conditions – especially taking into account the daily risk of violence they face – can be legitimately viewed as Third World.)

People in the U.S. would benefit from more direct encounters with conditions prevailing in much of the world, to see firsthand just how deprived people in the developing world are. It would inevitably help them decide how to prioritize any assistance they elect to give. If widely screened, The Real Housewives of Somalia would embarrass everyone in the U.S.: its stars wouldn’t have taps in the house, or even a toilet.

In the end, the focus on American inequality and poverty is, to my taste, like urging prompt action because among the ten people in business class, one is super rich. What about the 200, not even in coach, but hanging off the wings?

My comments:

The reader makes an excellent point. The differences in relative well-being between typical members of the richest nations (and the U.S. is one of the richest) and their counterparts in the poorest nations are staggering. Growth economists have in the past talked about a difference on the order of a factor of 30 or 40 – meaning that per capita income in the U.S. is roughly 30 or 40 times greater than average per capita income in, say, Ghana, Chad or the Sudan. While it is true that some of this difference can be attributed to mismeasured output (e.g., home farming or other types of home production), the true difference in well-being must be on the order of 15 or 20 even after such adjustments. These differences are simply unimaginable for most Americans. Speaking only for myself, I have never been to a developing country, so I haven’t seen firsthand the kind of poverty and desperation the reader alludes to. I am aware of it as an academic issue, but that’s it.

Let me make two comments on a somewhat hopeful note:

First, while income inequality across nations is unbelievably extreme, and income inequality within countries has been increasing (for some countries quite dramatically), global income inequality has been falling overall due, essentially, to dramatic increases in well-being in China and India (though both China and India remain far behind U.S. living standards).

Second, the reader says:

You might argue that this is a false choice: that one can assist both Americans and the world’s poor. But as an economist, you know that, to a large extent, the answer is ‘either/or’, not ‘both.’

I actually don’t agree with this entirely. My guess is that many of the problems which cause extreme poverty in the world are actually closely connected to “obvious” problems which could be solved without entailing substantial welfare losses for developed countries. The most severe problem facing many of these nations is civil war. Now it may be quite difficult to put an end to such conflicts but if it could be done we would greatly alleviate world suffering without transferring goods from the West. Socialism also presents a huge problem for many nations. If we could get North Korea to abandon its system of government and instead try to emulate South Korea, we could again achieve tremendous improvements without transfers.

This last point obviously also bears on the limitations of Western influence. The West has no direct authority in any developing country.  We can make suggestions and we can make donations but making progress on some of the most important problems requires structural change within these countries. If we want to improve the lives of Somalis we would need to take over the country and remove the warlords who are running things. Similarly if we want to improve the lives of North Koreans, we would have to remove the Communist government. The right path is clear and there are large gains to be had but taking this path is difficult.