Sunday, August 14, 2016

Olivier Blanchard on DSGE

PIIE - Olivier Blanchard: do DSGE models have a future? (pdf). Some quotes:

There are many reasons to dislike current DSGE models.

First:

They are based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms.

Go back to the benchmark New Keynesian model, from which DSGEs derive their bone structure. The model is composed of three equations: an equation describing aggregate demand; an equation describing price adjustment; and an equation describing the monetary policy rule. At least the first two are badly flawed descriptions of reality: Aggregate demand is derived as consumption demand by infinitely lived and foresighted consumers. Its implications, with respect to both the degree of foresight and the role of interest rates in twisting the path of consumption, are strongly at odds with the empirical evidence. Price adjustment is characterized by a forward-looking inflation equation, which does not capture the fundamental inertia of inflation.
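To put equations to that description, here is a minimal sketch of the benchmark three-equation model in its standard log-linearized form (my illustration in textbook notation, not Blanchard's own):

\[ x_t = E_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - r_t^n\right) \qquad \text{(aggregate demand)} \]
\[ \pi_t = \beta E_t \pi_{t+1} + \kappa x_t \qquad \text{(price adjustment)} \]
\[ i_t = \phi_\pi \pi_t + \phi_x x_t + v_t \qquad \text{(monetary policy rule)} \]

Here $x_t$ is the output gap, $\pi_t$ inflation, $i_t$ the policy rate, and $r_t^n$ the natural rate. Both flaws Blanchard names are visible on the page: the demand equation is purely forward looking, with consumption tied to the whole expected path of interest rates, and the Phillips curve has no lagged inflation term, hence no inertia.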

Current DSGE models extend the New Keynesian model in many ways, allowing for investment and capital accumulation, financial intermediation, interactions with other countries, and so on. The aggregate demand and price adjustment equations remain central, however, although they are modified to better fit the data. In the first case, by allowing, for example, a proportion of consumers to be “hand to mouth” consumers, who simply consume their income. In the second case, by introducing backward-looking price indexation, which, nearly by assumption, generates inflation inertia. Both, however, are repairs rather than convincing characterizations of the behavior of consumers or of the behavior of price and wage setters.
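To make the two repairs concrete (again my notation, a hedged sketch rather than any particular paper's specification): hand-to-mouth behavior is usually introduced by assuming a fraction $\lambda$ of households consume their current income,

\[ C_t = (1-\lambda)\, C_t^{o} + \lambda\, Y_t, \]

with $C_t^{o}$ the consumption of the optimizing households, while indexation turns the Phillips curve into a hybrid of the form

\[ \pi_t = \gamma_b \pi_{t-1} + \gamma_f E_t \pi_{t+1} + \kappa x_t. \]

The backward-looking term $\gamma_b \pi_{t-1}$ is what delivers the inertia, and, as Blanchard says, it does so nearly by assumption.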

Second:

Their standard method of estimation, which is a mix of calibration and Bayesian estimation, is unconvincing.

The models are estimated as a system, rather than equation by equation as in previous macroeconometric models. They come, however, with a very large number of parameters to estimate, so that classical estimation of the full set is unfeasible. Thus, a number of parameters are set a priori, through “calibration.” This approach would be reasonable if these parameters were well established empirically or theoretically. For example, under the assumption that the production function is Cobb-Douglas, using the share of labor as the exponent on labor in the production function may be reasonable. But the list of parameters chosen through calibration is typically much larger, and the evidence often much fuzzier. For example, in the face of substantial differences in the behavior of inflation across countries, use of the same “standard Calvo parameters” (the parameters determining the effect of unemployment on inflation) in different countries is highly suspicious. In many cases, the choice to rely on a “standard set of parameters” is simply a way of shifting blame for the choice of parameters to previous researchers.
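The Cobb-Douglas case is worth spelling out, since it shows what defensible calibration looks like. If

\[ Y = A\, K^{\alpha} L^{1-\alpha}, \]

then with competitive factor markets the labor share of income equals $1-\alpha$ exactly: one parameter pinned down by one well-measured moment (the share is roughly two-thirds in US data). A Calvo stickiness parameter carried unchanged across countries with visibly different inflation dynamics has no comparable anchor.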

The remaining parameters are estimated through Bayesian estimation of the full model. The problems are twofold. One is standard in any system estimation. Misspecification of part of the model affects estimation of the parameters in other parts of the model. For example, misspecification of aggregate demand may lead to incorrect estimates of price and wage adjustment, and so on. And it does so in ways that are opaque to the reader. The other problem comes from the complexity of mapping from parameters to data. Classical estimation is de facto unfeasible, the likelihood function being too flat along many dimensions. Bayesian estimation would then seem to be the way to proceed, if indeed we had justifiably tight priors for the coefficients. But, in many cases, the justification for the tight prior is weak at best, and what is estimated reflects more the prior of the researcher than the likelihood function.
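The prior-dominance point is easy to demonstrate numerically. A minimal sketch in Python, with made-up numbers of my own rather than anything from Blanchard's paper: for a single parameter with a normal prior and an approximately normal likelihood, the posterior mean is a precision-weighted average of the prior mean and the maximum-likelihood estimate, so a flat likelihood hands the result to the prior.

def posterior_mean(prior_mean, prior_var, mle, like_var):
    """Posterior mean under a normal prior and (approx.) normal likelihood."""
    w = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / like_var)
    return w * prior_mean + (1.0 - w) * mle

prior_mean, prior_var = 0.75, 0.01   # a tight "standard" prior
mle = 0.30                           # what the data alone would suggest

for like_var in (0.01, 1.0, 100.0):  # flatter and flatter likelihood
    print(like_var, round(posterior_mean(prior_mean, prior_var, mle, like_var), 3))

# As like_var grows, the posterior mean converges to 0.75:
# the reported estimate is the researcher's prior, not the data.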

Third:

While the models can formally be used for normative purposes, normative implications are not convincing.

A major potential strength of DSGE models is that, to the extent that they are derived from microfoundations, they can be used not only for descriptive but also for normative purposes. Indeed, the single focus on GDP or GDP growth in many policy discussions is misleading: Distribution effects, or distortions that affect the composition rather than the size of output, or effects of current policies on future rather than current output, may be as important for welfare as effects on current GDP. Witness the importance of discussions about increasing inequality in the United States, or about the composition of output between investment and consumption in China.

The problem in practice is that the derivation of welfare effects depends on the way distortions are introduced in the model. And, often, for reasons of practicality, these distortions are introduced in ways that are analytically convenient but have unconvincing welfare implications. To take a concrete example, the adverse effects of inflation on welfare in these models depend mostly on their effects on the distribution of relative prices, as not all firms adjust nominal prices at the same time. Research on the benefits and costs of inflation suggests, however, a much wider array of effects of inflation on activity and in turn on welfare. Having looked in a recent paper (Blanchard, Erceg, and Linde 2016) at welfare implications of various policies through both an ad hoc welfare function reflecting deviations of output from potential and inflation from target and the welfare function implied by the model, I drew two conclusions. First, the exercise of deriving the internally consistent welfare function was useful in showing potential welfare effects I had not thought about but concluded ex post were probably relevant. Second, between the two, I still had more confidence in the conclusions of the ad hoc welfare function.
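For concreteness, the ad hoc welfare function Blanchard refers to is the familiar quadratic loss (my notation; the exact model-implied function in Blanchard, Erceg, and Linde 2016 will differ in detail):

\[ L = E_0 \sum_{t=0}^{\infty} \beta^{t}\left[\pi_t^{2} + \lambda\,(y_t - y_t^{n})^{2}\right], \]

inflation deviations from target plus output deviations from potential, with a weight $\lambda$ chosen by the analyst. The model-consistent alternative is derived as a second-order approximation to household utility, and in a Calvo world the inflation term then stands in for the welfare cost of relative-price dispersion across firms, which is precisely why it inherits the model's debatable assumptions about price setting.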

And for further commentary on the topic:

Paul Krugman - the state of macro is sad. Score one for Hicks:

General relativity got its big boost when light did, in fact, bend as predicted. The theory of a natural rate of unemployment got a big boost when the Phillips curve turned into clockwise spirals, as predicted, during the stagflation of the 1970s.

So has there been anything like that in recent years? Yes: economists who knew and still took seriously good old-fashioned Hicksian IS-LM type analysis made some strong predictions after the financial crisis that were very much at odds with what lay commentators, and quite a few economists, were saying. They – OK, we – declared that with interest rates near zero massive increases in the monetary base would not cause high inflation, that large budget deficits would not drive interest rates up or crowd out private investment, and that fiscal multipliers would be positive, in fact more than one, and would be considerably larger than estimates based on non-liquidity-trap episodes suggested.

And all of that came to pass. Those of us who knew our Hicks, directly or indirectly, seem to have had a real advantage over those who didn’t.

Can you say anything comparable about DSGE? Were there any interesting predictions from DSGE models that were validated by events? If there were, I’m not aware of it.
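The logic behind the Hicksian predictions Krugman lists is worth a quick sketch (my gloss, not his algebra). At the zero lower bound the opportunity cost of holding money vanishes, so money demand is satiated and the LM condition holds with slack:

\[ i_t = 0, \qquad \frac{M_t}{P_t} \ge L(Y_t, 0). \]

Additions to the monetary base are simply absorbed as idle balances rather than bidding up prices, and a fiscal expansion shifts IS without pushing the interest rate up, so there is no crowding out and the multiplier is not dampened by rising rates.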

But wait! There's more:

Simon Wren-Lewis - Blanchard on DSGE. His points:

One of Blanchard’s recommendations is that DSGE “has to become less imperialistic. Or, perhaps more fairly, the profession (and again, this is a note to the editors of the major journals) must realize that different model types are needed for different tasks.” The most important part of that sentence is the bit in brackets. He talks about a distinction between fully microfounded models and ‘policy models’. The latter used to be called Structural Econometric Models (SEMs), and they are the type of model that Lucas and Sargent famously attacked.

These SEMs have survived as the core model used in many important policy institutions (except for the Bank of England) for good reason, but DSGE-trained academics have followed Lucas and Sargent in viewing these as not ‘proper macroeconomics’. Their reasoning is simply wrong, as I discuss here. As Blanchard notes, it is the editors of top journals that need to realise this, and stop insisting that all aggregate models have to be microfounded. The moment they allow space for eclecticism, academics will be able to choose which methods they use.

Blanchard has one other ‘note for editors’ remark, and it also gets to the heart of the problem with today’s macroeconomics. He writes “Not every discussion of a new mechanism should be required to come with a complete general equilibrium closure.” The example he discusses, and which I have also used in this context, is consumption. DSGE modellers have of course often departed from the simple Euler equation, but I suspect the ways they have done this (rule of thumb consumers, habits) reflect analytical convenience rather than realism.
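The consumption example can be made concrete. The simple Euler equation in question is

\[ u'(c_t) = \beta\,(1 + r_t)\, E_t\, u'(c_{t+1}), \]

and the two stock departures Wren-Lewis lists amount to one-line changes: rule-of-thumb consumers are a fraction of households who consume current income each period, and habits replace $u'(c_t)$ with $u'(c_t - h c_{t-1})$ so that utility depends on consumption relative to its own recent past. Neither brings credit constraints or income risk into the model in a serious way, which is arguably his point about analytical convenience.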

What sometimes seems to be missing in macro nowadays is a connection between people working on partial equilibrium analysis (like consumption) and general equilibrium modellers. Top journal editors’ preference for the latter means that the former are less highly valued. In my view this has already had important costs. I argue that the failure to take seriously the strong evidence about the importance of changes in credit availability for consumption played an important part in the inability of macroeconomics to adequately model the response to the financial crisis (for more discussion see here and here). Even if you do not accept that, the failure of most DSGE models to include any kind of precautionary saving behaviour does not seem right when DSGE has a monopoly on ‘proper modelling’.
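The precautionary-saving omission has a clean technical explanation. Precautionary saving requires convex marginal utility ($u''' > 0$): by Jensen's inequality, income uncertainty then raises expected marginal utility,

\[ E_t\, u'(c_{t+1}) > u'(E_t\, c_{t+1}), \]

tilting the Euler equation toward more saving today. First-order linearization, the standard DSGE solution method, imposes certainty equivalence and so shuts this channel down by construction; it is absent from most estimated models rather than rejected by them.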

