The Empirical Failure of DSGE Models According to a DSGE Economist

Noah Smith points out advances in neoclassical DSGE models; Lars Syll does not agree. But what do DSGE economists themselves think about their models? The failure of the DSGE paradigm is clearly shown by the utter lack of independent measurement, or even conceptual description, of the core variables of these models, like ‘social indifference curves’, which, of course, leads to a proliferation of models: each economist his own rational expectations, consistent with his and only his model. Proper science, however, does not consist of theory alone but also of the measurement (and discovery!) of variables. DSGE economics fails this test, as shown by Frank Schorfheide, a DSGE economist from the University of Pennsylvania. He does what every DSGE economist should be doing: at the beginning of his paper he does not invoke the usual bland canonical incantations that this is a non-trivial, micro-founded and sound model, but actually investigates whether these models really are non-trivial.

And he finds, as predicted above, ad-hoc chaos.

He investigates, for instance, the specification of the Phillips curve, the relation between inflation and employment:

Challenge 1: Fragile Parameter Estimates. The NKPC … [Phillips curve, M.K.] appears in many DSGE models. In Schorfheide (2008) I compiled a table of 42 DSGE model-based estimates of [this relation] that had been published in academic journals. The large number of estimates is testament to a widespread use of the estimation techniques that have been developed in recent years. The estimates range from essentially zero to about four. A value near zero implies that monetary policy changes have a large effect on output but very little effect on inflation. A value of four, on the other hand, means that prices are essentially flexible and that output does not react to monetary policy changes. This remarkable range is due to differences in model specification, choice of observables and sample period, data definitions, and detrending. Unfortunately, the measures of uncertainty reported in the individual studies give no indication about the fragility of the results from a meta perspective. To illustrate this point, Figure 3 depicts a 90% credible set for b and [the slope coefficient] in (15) based on the estimation of the DSGE model described in Section 2.1 as well as the 42 parameter estimates surveyed in Schorfheide (2008). It is apparent that the posterior uncertainty conditional on a specific model and data choice is dwarfed by the variation across model specifications and data sets. The fragility of parameter estimates potentially translates into other objects of interest such as inference about the sources of business cycle fluctuations, forecasts, as well as policy prescriptions.
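For readers who do not know it by heart: the object at stake here can be sketched, in its canonical form, as below. This is a generic stand-in, not Schorfheide’s exact equation (15), which also contains the parameter b he mentions and may differ in lags and notation.

```latex
% Canonical New Keynesian Phillips curve, a sketch only; Schorfheide's
% equation (15) may differ in specification and notation.
\[
  \pi_t = \beta \, E_t[\pi_{t+1}] + \kappa \, x_t
\]
% pi_t : inflation; E_t[pi_{t+1}] : expected inflation next period;
% x_t  : real activity (output gap or real marginal cost);
% beta : discount factor; kappa : the slope whose 42 published
% estimates range from roughly 0 to 4. kappa near 0: a flat curve,
% monetary policy moves output but hardly inflation; kappa near 4:
% prices are essentially flexible and output hardly reacts.
```

The ‘value near zero’ and ‘value of four’ in the quote are statements about this slope.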

The answer to this chaos, indicative of the lack of empirical discipline of these models, should be: independent micro-measurement of this variable and aggregation of the micro data in a credible, non-trivial way (i.e. the procedure which the ‘national accounts’ statisticians have been following for decades…). That, and only that, would yield a really sound, really micro-founded model. But that is not what is happening. They just carry on. Shall we call these ‘calibrated’ models ‘Diederik Stapel General Equilibrium models’?
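How a tight error band within each study can coexist with this kind of chaos across studies is easy to reproduce in a toy simulation. Everything below is an illustrative assumption, not Schorfheide’s data: 42 made-up point estimates spread over the published range of roughly 0 to 4, and a within-study posterior standard deviation of 0.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions, not Schorfheide's data: 42 studies whose
# point estimates of the NKPC slope are scattered over the published
# range of roughly 0 to 4, each reporting a tight posterior.
n_studies = 42
point_estimates = rng.uniform(0.0, 4.0, size=n_studies)
within_study_sd = 0.1  # stand-in for a typical reported posterior s.d.

# Width of a single study's 90% credible interval (normal approximation).
within_width = 2 * 1.645 * within_study_sd

# Width of the interval covering the central 90% of the 42 estimates.
lo, hi = np.quantile(point_estimates, [0.05, 0.95])
across_width = hi - lo

print(f"typical within-study 90% interval width: {within_width:.2f}")
print(f"cross-study 90% spread of the estimates: {across_width:.2f}")
# The first number is a small fraction of the second: per-model error
# bands understate the fragility of the estimate across models.
```

The within-study interval comes out an order of magnitude narrower than the cross-study spread, which is exactly the sense in which per-model uncertainty bands say nothing about fragility from a meta perspective.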

Comparable remarks can be made about the use of inflation metrics: an ad-hoc conceptual and empirical mess.
