[nerdy] Reply to Hendry and Mizon: we have DSGE models with time-varying parameters and variances

Hendry and Mizon summarised a recent paper of theirs on VoxEU explaining that DSGE models break down in crises because these events involve shifts in the distribution of observables that fixed-parameter, fixed-variance DSGE models can't articulate. They tell the story in a way that might lead a lay reader to conclude that this is catastrophic for microfounded, dynamic macroeconomics and/or rational expectations.

But it isn’t.

Several papers have taken steps to articulate models that have time-varying propagation parameters, and time-varying variances.  And there is a literature connecting these models to empirical macro models that estimate time-varying econometric counterparts.  None of these papers make it into the citations of the VoxEU post, or the original academic paper.

Part of the discussion is about how the equilibrium laws of motion of the economy, obtained by invoking the law of iterated expectations, in some cases can't be derived by the same means when parameters vary over time. This is well known. But DSGE modellers who use time-varying parameters, or time-varying variances, know how to solve such models, at least in cases where there aren't too many things moving around all at once. Finding expectations that, when used, generate laws of motion whose expectations equal what you started with is neat and easy with time-invariant parameters. It is harder when parameters vary, but the process of searching for this 'fixed point', as it's called, is conceptually the same, and often achievable.
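To make the 'fixed point' idea concrete, here is a minimal sketch in Python (my own toy example, not taken from any of the papers mentioned here; the model, names and numbers are all invented). It takes a scalar forward-looking model x_t = beta(s_t)*E_t[x_{t+1}] + z_t, lets the coefficient beta switch with a two-state Markov chain, guesses a regime-dependent rule x_t = a(s_t)*z_t, and iterates on the undetermined coefficients until the guess reproduces itself under expectations:

```python
import numpy as np

# Toy model: x_t = beta(s_t) * E_t[x_{t+1}] + z_t,  z_t = rho * z_{t-1} + eps_t,
# where s_t is a two-state Markov chain with transition matrix P.
# Guess a regime-dependent rule x_t = a(s_t) * z_t; the fixed point satisfies
#   a(s) = 1 + beta(s) * rho * sum_j P[s, j] * a(j)

beta = np.array([0.95, 0.5])            # forward-looking coefficient in each regime (illustrative)
rho = 0.8                               # persistence of the exogenous driver
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])            # regime transition probabilities

a = np.ones(2)                          # initial guess for the solution coefficients
for _ in range(1000):
    a_new = 1.0 + beta * rho * (P @ a)  # expectations average over next period's regime
    if np.max(np.abs(a_new - a)) < 1e-12:
        break
    a = a_new

print("regime-dependent solution coefficients:", a)
```

With fixed parameters the same loop collapses to the familiar scalar fixed point a = 1/(1 - beta*rho); regime switching just makes the fixed point a small vector rather than a single number.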

Some examples of time-varying DSGE models: (i) models with 'stochastic volatility' [variances of shocks that move around in continuous and random small steps over time], including Caldara et al and Fernandez-Villaverde and Rubio-Ramirez; (ii) Markov-regime-switching models [models in which parameters like price-stickiness, or policy parameters, move around randomly through a small set of possibilities], including Foerster et al.
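To give a purely illustrative picture of what those two kinds of time-variation look like (the parameter values are invented, not taken from the cited papers), here is a short simulation of each process:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200

# (i) Stochastic volatility: the log variance of a shock follows its own AR(1),
# so the shock's spread drifts around in small random steps.
rho_v, sigma_eta = 0.95, 0.1            # persistence and step size of log volatility (illustrative)
log_var = np.zeros(T)
shock = np.zeros(T)
for t in range(1, T):
    log_var[t] = rho_v * log_var[t - 1] + sigma_eta * rng.normal()
    shock[t] = np.exp(0.5 * log_var[t]) * rng.normal()

# (ii) Markov switching: a structural parameter (say, a policy response coefficient)
# jumps between a small set of values with fixed transition probabilities.
phi_values = np.array([1.5, 0.9])       # two hypothetical policy regimes
P = np.array([[0.97, 0.03],
              [0.05, 0.95]])
state = 0
phi_path = np.empty(T)
for t in range(T):
    phi_path[t] = phi_values[state]
    state = rng.choice(2, p=P[state])

print(shock[:5])
print(phi_path[:5])
```

The first block generates smooth, continuous drift in a variance; the second generates occasional discrete jumps in a parameter. Both are the kinds of time-variation the cited DSGE papers build in.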

All these models rest on a degree of time-invariance: in a stochastic volatility model, the shocks to the variances are themselves drawn from a distribution with a fixed variance; in a Markov-switching model, the switches occur with fixed probabilities. But in principle we could push the time-variation one step further if we really wanted to. [In fact, in the case of Markov-switching I believe there are examples that do this, though I can't lay my hands on one now.]

The article dwells on the notion that, from the perspective of agents, the mean and the variance of relevant distributions won't be known ahead of time. But expectations can still be calculated, provided that the distributions from which these means and variances are drawn are known.
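A tiny numerical check of that point, with invented numbers: suppose this period's shock variance is itself random, drawn from a known two-point distribution. The agent doesn't know the realised variance in advance, but the law of iterated expectations still pins down the unconditional moments, E[eps] = 0 and Var[eps] = E[sigma^2]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Each period the variance is drawn from a known two-point distribution...
sigma2 = rng.choice([0.5, 2.0], p=[0.7, 0.3], size=n)   # illustrative values
# ...and the shock is then drawn with that realised variance.
eps = rng.normal(0.0, np.sqrt(sigma2))

# Law of iterated expectations: Var[eps] = E[sigma2] = 0.7*0.5 + 0.3*2.0 = 0.95
print(eps.mean(), eps.var(), 0.7 * 0.5 + 0.3 * 2.0)
```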

At any rate, the critique is somewhat academic, because many authors have pushed the boundaries of DSGE models by dropping the notion of rational expectations. Sargent and coauthors have worked out the equilibria for agents who are Bayesian learners, yet doubt the distributions for relevant concepts implied by their models. Ilut has figured out a simple DSGE model in which agents respond to changes in the degree of ambiguity about the distribution of technology over time. Models with learning can be simulated recursively, so there is no problem at all in shoving through changes in policy or economic parameters, or changes in variances. I can't find an example that does this, but that's because it's so easy that no one would get any credit for telling anyone else that it was possible on their computer!
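Since the claim is that this kind of simulation is easy, here is a minimal sketch of what is meant (not drawn from Sargent's or Ilut's work; the model and numbers are made up). Agents forecast an AR(1) state by constant-gain recursive least squares, and halfway through the sample the true persistence parameter is changed; the learning recursion just keeps updating and chases the new value:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
rho_true = np.where(np.arange(T) < T // 2, 0.9, 0.5)   # true persistence shifts mid-sample

x = np.zeros(T)
belief = np.zeros(T)        # agents' recursive estimate of the persistence parameter
b, R = 0.0, 1.0             # initial belief and second-moment estimate
gain = 0.02                 # constant gain: recent data weighted more heavily

for t in range(1, T):
    x[t] = rho_true[t] * x[t - 1] + rng.normal()
    # constant-gain recursive least squares update of the belief
    R = R + gain * (x[t - 1] ** 2 - R)
    b = b + gain * x[t - 1] * (x[t] - b * x[t - 1]) / R
    belief[t] = b

print("belief just before the break:", belief[T // 2 - 1])
print("belief at the end of the sample:", belief[-1])
```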

A further point to make is that we will rarely be able to say decisively that the distribution has changed. A common theme in the stochastic volatility literature is that it is hard to distinguish a high-probability draw from a new distribution with a larger variance from a low-probability draw from the old distribution with a small variance. Perhaps the post-Great Moderation era indicates a shift to a new distribution of macro variables. Perhaps it just reveals that the time-invariant distribution involved a higher probability of disasters than we thought before the crisis. The failure of our pre-crisis DSGE models doesn't necessarily indicate we need time-varying ones, just models that generate low-probability extreme outcomes (like a crash). We should guard against jumping too quickly from atheoretic econometric analysis that appears to show distribution shifts to concluding that time-varying-distribution DSGE models are necessary.
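To see how weak the evidence from a handful of extreme observations can be, here is a small illustrative calculation (the numbers are invented): take a single draw three 'old' standard deviations from the mean and compare its likelihood under the old distribution with its likelihood under a hypothetical new one with double the standard deviation:

```python
from math import exp, pi, sqrt

def normal_pdf(x, sigma):
    # density of a mean-zero normal with standard deviation sigma
    return exp(-0.5 * (x / sigma) ** 2) / (sigma * sqrt(2 * pi))

x = 3.0                # an observation three old-standard-deviations from the mean
old, new = 1.0, 2.0    # old and hypothetical new standard deviations

lr = normal_pdf(x, new) / normal_pdf(x, old)
print(f"likelihood ratio, new vs old distribution, for a single draw: {lr:.1f}")
```

The ratio comes out at roughly 15 to 1 in favour of the high-variance story, which sounds large but is easily offset by a prior that regime shifts are rare; it takes a run of such observations before the data speak decisively.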

Hendry and Mizon make a number of scathing references to the fact that central banks like the Bank of England operate with these fixed-parameter DSGE models, apparently oblivious to the fact that distributions are changing all around them. In the BoE's defence, its modelling staff know the time-varying DSGE and empirical macro literatures well, some of them have published in these fields, and many of the contributors have presented in the Bank's seminar series. Further, the staff and the MPC don't follow the models slavishly, or necessarily believe literally in the assumption of rational expectations which Hendry and Mizon think (mistakenly, in my view) is so problematic.

Strictly speaking, the post criticises 'standard' macro models, leaving open the possibility that the authors accept there are 'non-standard' varieties immune from their critique, in which case there is no dispute. But I think this other work deserves a mention. It illustrates that time-variation in variances isn't catastrophic for rational expectations or DSGE models. And anyway, who cares, given the strange and wonderful new work on more realistic, non-rational expectations, which most central banks would, I surmise, subscribe to.

[Update:  this post, and the Hendry and Mizon paper, sparked a discussion on econjobrumors.  One of the contributors makes a great point that I hadn’t thought of, which is that many DSGE models generate multiple equilibria, which is another class of model that would produce data that might appear to an econometrician to manifest distribution changes, even though the Data Generating Process had not changed at all.]


5 Responses to [nerdy] Reply to Hendry and Mizon: we have DSGE models with time-varying parameters and variances

  1. James Reade says:

    Prompted by our mini-twitter dialogue, I'll, in the short time I have, write up a few issues I have with your response.

    The first, which was driving my original tweet, is this. You appear to suggest that the Hendry-Mizon (HM) critique is baseless because there exist some DSGE models with time-varying parameters. Underlying HM are issues with the estimation of DSGE models, and so I wanted to understand what estimation methods are employed for these kinds of models. In particular, what I really want to know is: what post-estimation model validation goes on? Is fit assessed? If so, how? Are informative Bayesian priors used, and how much are parameters allowed to change based on the actual data?

    Then a bigger issue, again underlying HM, is predictability. If these distributions are changing, how do we make forecasts? Simply allowing parameters to change, while interesting, doesn’t really aid forecasting at all – what will these parameters change to in the future?

    • Tony Yates says:

      Thanks for this.
      Three methods. Bayesian estimation for DSGEs with stochastic time-varying coefficients: Bayesian because even fixed-coefficient models are not that well identified. I see where your cynicism about Bayesian methods comes from, but I think you overdo it. Some of it is totally innocent; we have tons of data, for example, from micro, revealing the discount rate, so it's pointless to burden our time series estimation with trying to find that out again.
      ML used for some Markov-switching examples, BML for others.
      Mixture of non-parametric classical methods [my work] and Bayesian methods with relatively flat priors [work of Cogley, Sargent and others] used to estimate tvp-empirical macro models connected to tvp-DSGE models.
      Lots of discussion of fit where it’s useful.
      If parameters are changing, then allowing them to change will improve forecasting. But I would not want to claim any of these models does a particularly good job on that score.

      • James Reade says:

        Ha, re-reading, I guess I did leave a little residual cynicism, though my tweets were much more laced with it.

        I don't doubt what DSGE folk do is well intentioned – and another side of the problem is that those of us taking a more econometric line don't necessarily engage so well with macro theorists – it is damn hard to find models that fit the data.

        My rough sketch of how things go is the following: the original DSGE models were criticised for not fitting the data, so they became more complicated, making estimating the models much more difficult – likelihoods that only converge if very specific combinations of initial values are used, for example. This kind of thing breeds cynicism amongst us econometricians – but we're hardly solving the problem by carping from the sidelines.

        What I would like to see though is proper testing of these models – we do really need to know whether predictions of DSGE theory are upheld. That links back to the discussion of fit – could you point me to some papers that do that? My perception from brief reads of papers is that there’s essentially no reference to this, usually just a few impulse responses plotted and nothing else at all about the model. But I am a cynic…

      • Tony Yates says:

        I don’t know what you mean by ‘likelihoods that only converge…’. Do you mean algorithms for finding a maximum ‘that only converge….’?
        OK, papers that take model fit seriously. I think the original Smets-Wouters papers do that [2003 JEEA, 2007 AER]. Also the Christiano, Eichenbaum and Evans minimum distance estimation is all about impulse response 'fit'. My paper in the JEDC with Cogley and others lines up Bayesian estimates of several models in the process of arriving at a policy that does best over a weighted average of them. Del Negro and Schorfheide's work on DSGE-VARs is all about the relative 'fit' of the DSGE and the VAR.

  2. Costas Milas says:

    Nice piece Tony. My only objection to the BoE's COMPASS model is that it uses data information only up to 2007 because, in their own words, they want to "avoid this episode [the recent financial crisis] having a disproportionate effect on the properties of the data". Now then, if I had to referee the BoE's paper for an academic journal I wouldn't buy their argument.
    Best wishes,
    Costas
