Simon Wren-Lewis on ‘mainly microfounded macro’

Simon Wren-Lewis responded to my post on microfoundations in macro, wondering what was wrong with mainly micro-founded macro [is this what his blog name really refers to?] if small ad-hoc interventions helped the model fit the data better.

The presumption in this question must be that there isn’t a modification to the microfoundations that would also help the model describe the data better (than the solely microfounded model you started with).  I’m not sure about Simon’s specific example, but this is a legitimate enough question.

If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR – a system of equations where everything is allowed to be a function of everything else lagged.
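
To make that concrete, here is a minimal sketch of what ‘everything regressed on everything else lagged’ amounts to.  The data and variable choices are invented stand-ins of mine, not anything from Simon’s post or mine:

```python
# Minimal reduced-form VAR sketch: invented data, illustrative lag length.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Stand-ins for, say, output growth, inflation and an interest rate.
data = rng.standard_normal((200, 3))

# Each of the three variables is regressed on four lags of all three.
results = VAR(data).fit(4)
print(results.summary())

# Forecasting the data out of sample falls straight out of the system.
print(results.forecast(data[-results.k_ar:], steps=8))
```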

In fact, if you want to take this microfoundations-plus-ad-hoc-modification model to the data, and construct what Sims (he cites Haavelmo in his Nobel lecture for this project) calls a ‘probability model’, then what is wrong with Simon’s proposed model is that it incorporates theoretically unsubstantiated – in Sims’ language ‘incredible’ – identification restrictions.

To re-emphasise a point in my previous post, you can use this model for policy experiments.  In other words, a Sims-like model is an attractive option even if you aren’t solely trying to explain the data in sample, or forecast it out of sample.  Provided, that is, that you convince yourself of the following:  that the experiment does not constitute so great a departure from the past that the statistical laws of motion you estimate will move around in response to the policy by enough to invalidate the inference you drew from the original model about what you should do.  This approach resembles in part the one Simon toys with, weighing the costs of Lucas-Critique problems against the possible benefits of describing the data better.  Only here we are talking about an entirely statistically founded, rather than microfounded, model.  A topical example of this approach in action is the literature using VARs to measure the fiscal multiplier, including work by Romer and Romer, Cloyne, Blanchard and Perotti, Caldara and others.

Another way to put Sims’ point would be this [partly inspired by a Nick Rowe post].  Once you make that modification, you don’t know what you have any more.  You might hope that you have a system of equations that describes what consumers and firms do, and one that does it in a more data-congruent way than before.  But in fact your hopes might be dashed.  Really all you have is a system of equations linking numbers that the statistics agency collects.  And no good reason to have pared down that system of equations by eliminating variables that could appear in them.

Commenting on Simon’s post, Noah Smith writes ‘YES YES A THOUSAND TIMES YES’ at the suggestion that you could add a little statistical realism to the microfounded model.      Noah is extremely sceptical of microfoundations.  So much so that he requests a post to explain why they might have any merit at all.  So, he should be saying:  NO NO GET RID OF ALL THE MOTHER&&&&&&G MICROFOUNDATIONS WHILE YOU ARE AT IT.

As I said in my previous post, ad-hoc modifications seem attractive if they are a guess at what a microfounded model would look like, and you are a policymaker who can’t wait, and you find a way to assess the Lucas-Critique errors you might be making.  Or you are a generous researcher who wants to try to help someone cleverer or more persistent to confirm your guess.  Or you want to convince someone of the same ilk that it’s worth trying to confirm the guess, because you demonstrate that (conditional on the guess proving correct) there is some great prize at stake, some significant revision of our diagnosis of past events, or policy prescription for responding to some future event.


Why microfoundations have merit.

This post is prompted by a twitter exchange some time ago between Adam Posen, Noah Smith and myself over the ‘merit’ of microfoundations.  [Here’s a storify recap].  And that in turn was prompted by the fall-out from the events at the Federal Reserve Bank of Minneapolis, where Narayana Kocherlakota [former microfounding whizz] has begun to push out senior advisors and research economists, including Pat Kehoe and Ellen McGrattan.  But this debate will be familiar to those following econ blogs for longer than that.  It’s tangentially related to the controversy in the UK over how economics should be taught, fuelled by the student campaign group at Manchester University, and by the initiative led by Wendy Carlin at UCL to reform the curriculum.  And it probably surfaces whenever a few economists have a beer and talk shop.

In this twitter exchange, Adam Posen said ‘microfoundations are without merit’. Noah challenged me to substantiate my claim that they do have merit.

The merit in any economic thinking or knowledge must lie in it at some point producing an insight, a prediction, or a prediction of the consequence of a policy action, that helps someone, or a government, or a society to make their lives better.

Microfounded models are models which tell an explicit story about what the people, firms, and large agents in a model do, and why.  What do they want to achieve, what constraints do they face in going about it?  My own position is that these are the ONLY models that have anything genuinely economic to say about anything.  It’s contestable whether they have any merit or not.

The early microfoundations project was about pointing out the unreliability of pre-microfoundations models, sometimes known as ‘Cowles Commission’ models, after one of the research centres that sponsored such models.  These were long lists of equations for economic aggregates, built out of stories economists told linking some of these aggregates together [like ‘people tend to consume something, plus something else times disposable income’ – the consumption function].  The contribution [crystallised in Lucas’ 1976 ‘Critique’] was to note that if policy was based on statistical estimates of these guessed-at relationships, those relationships might change when the policy changed, invalidating the original policy choice.  So the contribution was negative.  It was about warning that another way of doing economics did not have as much merit as first thought, and might in fact entail substantial economic costs.

It seems likely to me that this early contribution ‘had merit’.  To me, it seems highly probable that major policy mistakes, informed, for example, by the belief that permanently higher inflation might buy permanently lower unemployment, were avoided.  I can’t prove it.  But there are dozens of empirical papers exploring this same point, in the light of what Lucas said.  The evidence there is not entirely on one side.  How could it be?  But I would say it was decisively tilted in favour of Lucas/Phelps/Friedman’s warning that higher inflation doesn’t get you lower unemployment forever.  Here is an example of recent empirical work on this by Luca Benati that concludes that high inflation doesn’t buy permanently lower unemployment.
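
For concreteness, the Lucas/Phelps/Friedman point is usually written as an expectations-augmented Phillips curve.  This formalisation is mine, not something taken from Benati’s paper:

\[
u_t = u^* - \alpha\,(\pi_t - \mathbb{E}_{t-1}\pi_t), \qquad \alpha > 0,
\]

where only inflation surprises push unemployment away from its natural rate \(u^*\).  Once expectations catch up with a permanently higher inflation rate, the surprise vanishes and unemployment returns to \(u^*\).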

Pointing at the Lucas Critique doesn’t establish without doubt that microfounded modelling leads to a better world.  It suggests that it might.  Revealing that there is some probability that high inflation policies will not work, when previously this probability was discounted, has merit.  Lucas showed that if the world behaved according to the postulates of a particular micro-founded model, and you didn’t bother with microfoundations, then you would mistakenly infer that you had found causal and stable statistical connections between your instrument and your goal variable, when these would mutate once you used them to inform policy.  You might doubt that a microfounded laboratory is a legitimate tool to discover anything.  If you don’t accept microfounded models as an interesting laboratory to test out any thought experiment, then you might not care about the Lucas Critique.
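
A stripped-down version of that mistaken inference, my illustration in the spirit of Lucas’ examples rather than taken from them:  suppose policy follows \(m_t = g\,m_{t-1} + \varepsilon_t\) with \(|g|<1\), and only surprises matter for output, so \(y_t = \theta(m_t - \mathbb{E}_{t-1} m_t) = \theta\varepsilon_t\).  A regression of \(y_t\) on \(m_t\) then recovers

\[
\frac{\operatorname{cov}(y_t, m_t)}{\operatorname{var}(m_t)}
 = \frac{\theta\,\sigma_\varepsilon^2}{\sigma_\varepsilon^2/(1-g^2)}
 = \theta\,(1-g^2),
\]

a seemingly stable statistical connection that mutates the moment the policymaker changes the rule parameter \(g\) to exploit it.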

Caring about it requires that you accept that something that is true in a possible world (the model) may be informative about the actual one.  I think it’s impossible to refute this, unless you know how the real world works.  In which case you would of course not bother with the model.  To refute it, you’d need to represent the real world, somehow, and show that acting on something that was true in the false world was useless in the real one.

The discussion about how to do macro often neglects that there are serious people trying to work out the details of how to do policy when you don’t understand how the world works.  Or how the world looks when it’s packed full of agents who doubt their own representations of how the world works.  Tom Sargent and Lars Hansen, now both Nobel laureates, have spent the last 15 years exploring these topics, sharpening discussions that used to go on, and still go on, inside central banks.

I would not try to claim that microfounded modelling is the ONLY way of doing macro that has merit.  Sims explained one of the other ways.  Form systems of statistical equations linking variables you care about, where you regress everything on everything else lagged.  Then you can forecast things if you like.  To get further, you have to start using economics.  Further meaning trying to measure the effects of a policy, for example, so that you can work out what good policy would look like.  Identifying policy shocks need not use microfounded economic logic.  For example, you could study the runes of history and declare that it was self-evident that some policy change was exogenous and not related to the things the policymaker cared about (despite the obvious quandary, discussed in the last blog post, that policy changes should surely only be motivated by care about something).  An example of this approach in the past is the use of military spending changes to measure fiscal policy shocks.  These being a result of decisions to go to war, or ideological shifts in government, or both, not business cycle policy.
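
As a sketch of how the military spending idea gets used in practice – my own illustration with invented series, loosely in the spirit of Ramey-style narrative work, not a reproduction of any particular paper:

```python
# Local-projection sketch: regress output growth h periods ahead on a
# 'military spending shock' series treated as exogenous. Data invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 240
mil_shock = rng.standard_normal(T)                 # narrative shock series
output_growth = 0.3 * mil_shock + rng.standard_normal(T)

for h in range(5):                                 # horizons 0..4
    y = output_growth[h:]                          # output h periods ahead
    x = sm.add_constant(mil_shock[:T - h])
    beta = sm.OLS(y, x).fit().params[1]
    print(f"horizon {h}: estimated response {beta:.2f}")
```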

Another kind of non-microfounded thinking I would accept is thinking that has been shown in a microfounded context to work.  So, for example, Sims and Zha talked of ‘modest policy interventions’.  These are small enough changes to the way policymakers have typically gone about things for it to be conjectured that the Lucas Critique won’t matter.  If you don’t mess around with policy too much, the statistical relationships you have estimated won’t change much either.  This isn’t a cast-iron guarantee, however.  Because to get even this far you need to write down a possible economic world (a microfounded one), and show that the statistical relationships it generates don’t move much for a given type of policy change that you want to contemplate.  And in writing down that world you are going to assume some things that are false.  And have to keep in the back of your mind that anything you deduce using it might be false too.

Another kind of non-micro-founded thinking I think is valuable is knowledge-building exercises like solving non-micro-founded rational expectations models.  [The Bank of England had a model like this until just a couple of years ago].  Without going through the motions of solving a model like this, you won’t know how to solve a micro-founded one.  And some insight can be gained into how forward-looking behaviour manifests itself in time series.  Of course, the assumption of rational expectations may often be very unrealistic.  But if there are circumstances when it’s useful, and you need to figure out how to go about building a microfounded RE model, then building a non-micro-founded RE model might be a useful learning step.

A final possibility is that there is no alternative but to proceed in a non-micro-founded way.  Yet some business has to be done – some policy decision, or some investment based on a forecast.  In these circumstances, it’s ok to take a stab at what the decision rules or laws of motion for aggregates in an economy might look like if you could micro-found what you are concerned with, and move on.  Perhaps doing so will shed light on how to do it properly.  Or at least give you some insight into how to set policy.  Actually, many so-called microfounded models probably only have this status:  guesses at what something would look like if only you could do it properly.

Adam Posen dismissed my many examples of microfounded economic thinking that I thought had changed the face of economics and policy.  [See the storify collection for some examples].  He said that the merit in examples of microfounded models lies in the ‘intuition’ behind the ‘one line idea’.  In my tweets I explained my position on this.  The statement is highly perplexing to me.  Economic ideas are claims about what people and firms and governments do, and why, and what unfolds as a consequence.  The models are the ideas.  ‘Intuition’, the verbal counterpart to the models, is not a separate thing, the origin of the models.  Intuitions are utterances to ourselves that arise from us comprehending the logical object of the model, in the same way that our account to ourselves of an equation arises from the equation.  At best, I think, one could argue for the separateness of ‘intuition’ by classifying it in some cases as a conjecture about what a possible economic world [a microfounded model] would look like.  Intuition as story-telling to oneself can sometimes be a good check on whether what we have done is nonsense.  But not always.  Lots of results are not immediately intuitive.  That’s not a reason to dismiss them.  (Just like most of modern physics is not intuitive.)  Just a reason to have another think and read through your code carefully.

Let’s google Adam and take an example from his written or spoken work.  I agree with almost all that Adam says about everything.  I think he’s super smart and almost polymathic in his grip on all kinds of different economic issues and problems.  But I vehemently disagree with his claim about microfounded models, and I also think he himself is the great improvising microfounder of our times.

Adam said or was reported to have said [and I agree with him 100% on the substance in this example]:

‘If they [Germany] had resolved and done more transfers to southern Europe in the form of writing off more of the bad loans they gave to southern Europe. If they had pushed for a monetary policy that was more expansionary instead of blocking expansionary monetary policy, and if they had invested at home and in their own public and in their own people, these global balances and the imbalances in Europe would be reduced.’

As someone who thinks microfounded models have merit, but knowing that Adam claims to think they don’t, this is a tricky paragraph to grasp as one that still ‘has merit’.  What on earth is he saying?  [And marvel at the confidence with which it is said].  I read it as a good account of what would happen in a two-country sticky-price RBC model with rational expectations, and possibly other partially forward-looking models for expectations, of the sort you can find in Obstfeld and Rogoff’s textbook from 1995, modified to encode issues of sovereign default and financial frictions.  Or rather, since neither of us (and perhaps nobody, I haven’t checked) has actually worked out a model with all these details, I should say that it reads like a conjecture about what such a model would say, based on having seen previous ones like it.

There are other ways to read it [and as something that nevertheless ‘has merit’].  For example it could be read as having meant ‘based on past historical correlations, I predict that if debt had been forgiven, and monetary policy looser, imbalances would have been reduced, and the world would have been a better place.’  But to make such a claim, Adam would have had to develop a sharp econometric procedure to isolate a kind of natural experiment from the historical time series that enables him to replay it now in his head.  Or to be referring to others who had.  Without isolating these experiments, you can’t confidently claim from past episodes what was due to policy, and what was due to things that were perhaps prompting a particular policy.  Worse still, although in principle there are ways to identify these shocks without using microfounded economic theory, they are pretty controversial.  The runes of history or military spending approaches are contestable.  [How can we know what was in the heads of the policymakers?]  And for monetary policy these approaches are very tricky.  You can’t help but rely in part on identification schemes that rely on microfounded models.  Such schemes might include the following:  ‘for any microfounded model I can think of, a monetary policy contraction surely does not boost output, surely raises interest rates, and surely lowers inflation’.  Or ‘in the long run monetary policy should be neutral on real variables.’

Aside from these two ways of characterising Adam’s statement, there are no other scientific ways to put it.  You might read those words, agree with them, and think to yourself:  ‘surely all he’s saying is that looser fiscal policy would put money in German consumers’ pockets, and the looser monetary policy would lower real rates and make German firms want to invest more, and they would buy more foreign goods, and this would provide the missing demand for Greek exports….’.  The trouble is, this claim is empty and baseless without it making reference to one or both of the literatures that I have pointed to above.  It can’t be a claim about the real world.  On what basis would Adam claim to know about that?  [We are excluding econometrics now, remember].  Perhaps if we had surveyed all the households in Germany and outside, and asked them what they would do in certain situations (and also asked them how happy they would be if certain things happened, since there is an implicit welfare claim in Adam’s statement), then we could interpret Adam’s statement as being useful.  [‘With merit’].  Then we could claim that we really knew something about what the world looked like.  But we know that no such survey has been done.  So this leaves us basing Adam’s utterance either in a conjecture about what a possible microfounded but false economic world would look like (and hedging our statements about it more cautiously than Adam did in that soundbite, a slightly unfair point since it’s possible that the hedging was stripped out of the interview report, or was implicit), or in a statement about the consequences of identified fiscal shocks.  If we ground it in the latter, however, we can’t say anything about whether such an outcome would be better.  To do that we need a model in which we compute how people feel about stuff and in which such a shock can be replicated, to see if they feel better when Adam’s suggested loosening is tried out.  In other words, we are stuck with the microfounded but false way of thinking again.

This is not to say that all microfounded thinking is flawless.  Much of it is highly dubious.  The freshwater lot groan to see all the rigidities built into New Keynesian models, partly because they suspect that the microfoundations for them are baseless;  that they are put in because they make the models fit aggregate time series better.  The particular mechanism for price stickiness widely used is a case in point.  One of the most popular devices is Calvo’s:  this assumes that firms roll a dice each period and get to change their prices with some probability.  And with some probability they never get to change prices again until the end of time.  This makes the model behave very oddly in some circumstances.  And it is responsible for generating welfare costs of inflation instability that dwarf everything else in New Keynesian models.  Calvo price stickiness makes the model tractable and fits certain aspects of the data better than the model without it.  [For example, monetary policy shocks have effects on the real economy].  No-one thinks this is anything like what happens in real firms.  But it is an approximation that might well be helpful in lots of circumstances.  Example:  in such models, rules like Taylor Rules (in which interest rates respond more than one for one to inflation, but also to the output gap) do a pretty good job.  And during periods when central banks actually followed them, performance was usually pretty good.  I deduce from this that with some probability the false microfounded model has taught us something useful about good monetary policy design.  [Almost all policymakers in central banks accept this too, as you can tell from the minutes of their meetings, or the documents explaining their monetary frameworks].
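
Two bits of arithmetic behind that paragraph, sketched with standard illustrative calibrations of my own choosing rather than anything from the post:

```python
# Under Calvo pricing a firm gets to reset its price each period with
# probability 1 - theta, so the expected spell between resets is 1/(1-theta).
theta = 0.75                                   # common quarterly calibration
print(f"expected price duration: {1 / (1 - theta):.0f} quarters")

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Textbook Taylor (1993) coefficients: the nominal rate moves more
    than one for one with inflation and responds to the output gap."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

print(taylor_rule(inflation=3.0, output_gap=-1.0))   # -> 5.0
```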

There’s something irksome about defending micro-founded macro from the attack that it is ‘without merit’.  A voice inside me says:  if they aren’t doing macro, by which I mean, generating new empirical or theoretical work themselves, who are they to go about proclaiming whether something has merit or not, or how macro should be done?  [I’m not singling out Adam here.  Lots are at it.]  And why should anyone care what they say?

Three concerns make it hard to resist attempting a defence, however.  First, there is the concern that the macro project at large gets tarred with the same brush as the microfounders who insist almost religiously that prices are flexible and markets are efficient.  So it’s worth trying to disentangle the defence of the project as a whole from defending those substantive positions.  Second, there’s the concern that someone with some influence might take Adam or others who say these things seriously.  Third, there’s the concern that if people inside macro don’t respond to challenge, no matter how high-handed and ill thought through, there’s the risk that we come to seem like a cult bent on disengaging, concerned to interact with those outside the cult only so far as is necessary to squeeze them for the money we need to continue playing with our toys.  [Perhaps that day has already come?!]

In my time in central banks one definitely encountered a breed of policymaker that behaved as if they were above actually doing macro, yet seemed to know all the answers for sure, and know how macro should be done [of course by someone else, not them].  It seemed to many of us who observed them as though they had fallen victim to the illusion that since they had done so well in life, their gut feelings about stuff must really be valuable, and that perhaps that’s where macroeconomic truth lay, in what they as great individuals felt and said.  Many can tell stories of attempting to advise them, and being met with the condescending twinkle in the eye that translates as ‘Ah, so that’s what’s true in your silly little toy world, is it, tee hee, how quaint that you think such things worth repeating, well, I can only hope that one day you glimpse the real source of truth, namely, the instinctive knowledge of the chosen’.  If the meme that microfounded macro has ‘no merit’ were to gain any more traction, I assert that great danger would lie ahead:  theorising that is incomplete and ‘accidental’ [in the sense meant by Krugman];  policy promises that are unverifiable;  discretion untamable;  and a search for new economic knowledge that is empty and futile (since the truth is already felt by the great policymakers, and the only way to divine it is to draw the few charts they ask us to plot, and sit around and wait until the charts work their inner magic and they are kind enough to write it down in speeches for us).



How should we empirically verify whether QE increased or decreased inflation?

Matt O’Brien tweeted round an interesting piece by David Beckworth, which estimated a VAR to try to resolve a debate between Steve Williamson on the one hand, and Brad DeLong, Paul Krugman, Noah Smith and others on the other, about whether QE was deflationary or inflationary.  I responded:  surely you need to identify QE shocks to recover this effect, and Matt asked ‘how would you do it?’.  I’m going to teach stuff like this next term, so, if I can’t explain this in plain English, I have a problem.

First, going over some stuff for the benefit of those not brainwashed into thinking the way empirical macro people tend to think about these things.  We have to look for ‘QE shocks’, which are changes in QE not prompted by changes in the economy, in order to measure the effects of QE on the economy.  Why?  Because if we don’t, we might conflate the effect on the economy of what policymakers are responding to (the terror at the great contraction turning into a depression) with the effects of QE itself.  This same problem crops up, naturally, when people try to figure out what the effects of changes in conventional policy rates and fiscal instruments are.  Hang on, you might say, why on earth should a sensible policymaker change an instrument that is supposed to be used for smoothing the business cycle in a way that is unrelated to the business cycle?  That would be mad, wouldn’t it?  A well-functioning policymaker would never execute any policy ‘shocks’, and so we would never be able to estimate the effects of the instrument for someone doing their job well.  Well, this is right, basically.  But actually history might provide us with many shocks nonetheless.  Changes in personnel at the top.  Revisions to data that the authorities use to decide how to move their instrument.  So policy typically has an unavoidably trembling hand, and researchers can use this to find out useful things.  Related to this there’s an old debate about whether policy should deliberately experiment to generate the required noise to work out what should be done with the instrument.  Alan Blinder had stern words to say about this in his book ‘Central Banking’, and most senior central bankers will tut-tut at you in the same way if you mention this argument.

The basic problem with trying to recover QE shocks is that QE hasn’t been going on for long enough to make a credible stab at disentangling QE shocks from QE prompted by the Fed following through on how it thinks it should be doing its job.  The time series is simply too short.

If we had enough data, then there would be a few ways of going about it.

One would be to make assumptions about the signs of the effects a QE shock would have on some things, invoking knowledge we are confident of from theory.  For example, when people identify interest rate policy shocks, they often assume that a contraction reduces output and inflation and raises rates.  But we can’t do that very well here.  The whole point is to try to investigate whether QE reduces or increases inflation.  And there isn’t any theory we are confident enough in to use to pin down the rest of the more questionable theory.  It’s all up for grabs.
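
For what the sign-restriction approach involves mechanically, here is a sketch of mine in the spirit of Uhlig-style schemes, with purely illustrative restrictions and numbers.  The snag flagged above is precisely that for QE you can’t impose a sign on the inflation response, because that sign is what is in dispute:

```python
# Sign-restriction sketch: draw random rotations of the Cholesky factor of
# the reduced-form residual covariance; keep draws whose impact responses
# carry the assumed signs. Restrictions here are illustrative only.
import numpy as np

def sign_identified_draws(sigma_u, n_draws=1000, seed=0):
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(sigma_u)
    kept = []
    for _ in range(n_draws):
        q, r = np.linalg.qr(rng.standard_normal(sigma_u.shape))
        q = q @ np.diag(np.sign(np.diag(r)))    # fix the sign convention
        impact = chol @ q                       # candidate impact matrix
        # Keep the draw if shock 0 moves variable 0 down and variable 1 up
        # on impact -- stand-ins for whatever theory you actually trust.
        if impact[0, 0] < 0 and impact[1, 0] > 0:
            kept.append(impact)
    return kept

sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
print(len(sign_identified_draws(sigma_u)), "of 1000 draws satisfy the signs")
```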

Another would be to use what’s known in the field as a ‘recursive’ identification method.  This is what David did.  Here you assume that inflation doesn’t respond within the quarter to changes in QE.  By contrast, QE responds to inflation data as it comes in.  However, two problems.  First, who is to say that inflation over a quarter [or month] won’t respond to QE undertaken at the beginning of the quarter [or month]?  Perhaps if prices are sticky enough, this might be ok, but perhaps it won’t be.  Second, and more worrisome, doing this with only 2 variables is very tricky indeed.  You hope that you have a policy shock because you have a movement in QE that wasn’t caused by a movement in inflation.  However, because you don’t have other stuff in the VAR, you don’t know that what you think is an unwarranted QE mistake isn’t actually entirely warranted in response to something else you aren’t measuring [like unemployment, or stock prices, or surveys, or whatever].  One of the set-piece debates in the academic literature on interest rate shocks involves Chris Sims (recent Nobel laureate) explaining that if you try to measure the effects of interest rates without incorporating enough variables to capture what the Fed are responding to, you can find that a contraction in monetary policy raises inflation rather than reducing it.  This pathology became known as the ‘price puzzle’.  It comes about because there is something outside your model the Fed thinks is going to push up inflation in the future [Sims conjectured commodity prices] and so the Fed raises rates to combat it;  the policy response is inevitably not entirely successful at choking off the inflationary threat, and inflation rises, and this looks, through the lens of the small model, like the Fed increasing inflation with a rise in rates.
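
A sketch of the recursive scheme David used, with invented data; the ordering and names are illustrative.  Putting inflation first imposes that it cannot react to QE within the period, while QE may react to inflation contemporaneously:

```python
# Recursive (Cholesky) identification sketch with two made-up series.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 120
inflation = rng.standard_normal(T)
qe = 0.5 * inflation + rng.standard_normal(T)  # QE reacts within the period
data = np.column_stack([inflation, qe])        # ordering carries the restriction

results = VAR(data).fit(2)
irf = results.irf(12)                          # Cholesky-orthogonalised IRFs
# On impact the matrix is lower triangular: the response of inflation to
# the QE shock is zero by construction -- that IS the identifying assumption.
print(irf.orth_irfs[0])
```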

To cut a long story short then, you can’t hope to do this at all well with a two variable VAR.  People often found three or four variable VARs were inadequate when they tried to identify interest rate shocks.  The fancy way to say this is that with a small VAR, the shocks to your VAR equations can’t hope to span the underlying economic shocks that you are trying to discover, so that you can compute their effects.

There are other ways to do it, using ‘long run restrictions’, but these would be completely incredible in these circumstances – with such a small time series – so I won’t bother trying to explain them.  [Though if you want a great explanation, go to Karl Whelan’s website and look at his lecture notes].

So, Matt asked ‘how would you do it?’ and my response is bleak:  you can’t, not yet.

Are we stuck?

Not entirely.  For two reasons.  First, if you are willing to look just at the Treasuries-buying part of QE, you can lengthen the time series, because then we are really just talking about debt management, and that’s been going on for a very long time.  You have to go through the careful thought processes above, but this is doable.  Second, if the Williamson story were fleshed out completely, it would probably have implications for other data, not just inflation and QE itself.  Perhaps there is some prediction for the comovement of a spread and the business cycle, and that could be verified, without confining oneself to the period during which QE was actually practised.


Why don’t economic research functions exist within Finance Ministries?

This question was posed in conversation with the Norges Bank’s Oistein Roisland and Gisle Natvik last night.

Why indeed?  The task of figuring out optimal fiscal policy is no less important than optimal monetary policy.  And arguably a lot more so.  In the UK, both the Bank of England and the Treasury are filled with talented civil servants.  But in the Treasury those ranks are populated with staff with fewer years of economics training, and there are far fewer doing anything that would resemble modern economic research.  [Many might think that a good thing of course!]  This pattern is replicated, as far as I can tell, in the US, Norway, Sweden, France, Italy, Belgium, Spain, Canada, Chile, Mexico, New Zealand, Australia, South Africa, South Korea, Japan, Israel, Iceland [guesswork].  I don’t list other countries as I have zero information about them.  You can even find the same pattern in supranational bodies.  The ECB has a huge research department pouring out deluges of top-quality research by great people.  The Commission doesn’t.

Oistein wondered whether it was because central banks are close to their own source of finance – they make money by printing it (it’s called seigniorage).  That way they get to spend it on the luxury of research.   Gisle Natvik conjectured that research departments might not thrive in finance ministries because they are more closely tied into the political process, and this makes it harder to publish independent and critical research, and that in turn dissuades researchers from working in these institutions, and this prevents research nodes getting going.  Perhaps the political cycle also makes the institution more short-termist,  so the horizons over which research delivers fruit (if it delivers any) are too long for it to be worthwhile.

Another possible reason:  there are market failures in academia in research relating to central bank functions [monetary policy, financial stability], hence they employ researchers to fill the gap.  In topics relevant to finance ministry functions, the academic market for research works fine.  Nope, that doesn’t work.  You can make a convincing case that there are market failures in academic research, but not that these failures are greater for monetary policy than fiscal policy research.  You can see that central banks think there is a market failure.  Often senior central bank speeches are peppered with derogatory references to the academic literature, and its failure to provide practical advice.  And central banks set up their own journal for economic research [The International Journal of Central Banking] for this reason.  But the failures, if they are failures, don’t seem any more acute in fiscal policy.  Senior finance ministry employees don’t give speeches about the academic fiscal policy literature because they haven’t read it, not because they are content with what the academics are doing.

Maybe what I’m describing is a mirage.  Research is happening in finance ministries, it’s just that they don’t have their own departments, they contract it out.  I don’t see that happening.

In the UK people talk about a heyday of the Treasury, when the best economists worked there, in the 1970s and the 1980s.  The arrival of the Thatcher regime either led to an exodus or a cull, depending on which rumour you believe.  I’d be interested to understand just what happened, as this was a period which seems to have bucked the global trend for central banks being the place which attracted the greatest density of specialist and technically oriented economists.  I know that this cohort included Simon Wren-Lewis and Peter Westaway, so if they are reading, what do they have to say about it?  At some point, well before independence in 1997, the balance tilted decisively towards the Bank, but it wasn’t always that way.

The oddest manifestation of this trend is (or was?) the nodes of freshwater economics in the regional Fed system, eg in the Fed of Minneapolis.  There you have a bunch of (brilliant) economists [not everyone thinks so but I do] preaching that central banks are basically irrelevant [because they believe in flexible prices, and in models with flexible prices central banks can’t stabilise booms and busts with monetary policy], but feeding their families by working for… a central bank.  [Aside:  for me their brilliance wasn’t in this substantive message about monetary policy, which I don’t buy, but in explaining to us how we should do macroeconomics generally;  how we should build models and verify them.]

Whatever the cause, and whether you think technical economic research is useful or not [lots on the blogosphere about that at the moment, from both sides], this uneven distribution of research activity doesn’t seem to make sense.  Either it’s useful, and should be taking place across government functions, not just inside central banks, or it’s not, and it shouldn’t happen anywhere.


Bitcoin Bitpuzzling

The Economist carried a nice piece on Bitcoin that many may already have read.  They conclude that the explosion in the price of Bitcoins (recently breaking the $1000 barrier) looks bubbly.  I agree.

Most people with things to sell don’t accept Bitcoin in exchange.  So most choosing to hold Bitcoin do so knowing that they are holding something they will one day exchange for cash, before exchanging that cash for something they really want [I’m assuming we can forget about currency hobbyists here, though you can find a lot of those around Charing Cross market stalls on a weekend].  In this sense, if Bitcoin is a bubble, it’s a bubble resting on another bubble.  Cash is also worthless intrinsically.  But we accept it because we know everyone else will.  And we know they will because they are making the same calculation.  It’s a bubble that can burst if you try hard enough.  Witness countries that ‘dollarized’ in the face of decades of sporadic hyperinflation.  Although the cash bubble can sometimes prove surprisingly resilient.  For example, in this speech, Mervyn King describes how in Saddam Hussein-controlled Iraq, when the Kurdish region was protected by the no-fly zone, pre-no-fly-zone Saddam-issued banknotes continued to circulate, even though Saddam himself first debased them [partly to plug gaps in his own finances] and then renounced them.  Users no doubt calculated that at some point in the future Saddam would be deposed, and the deposers would recognise these notes and exchange them [for another intrinsically worthless bit of paper], and they were proved right.

You might often hear the argument that money is accepted because the law states that creditors must accept cash as final settlement [in this sense cash is ‘legal tender’].  But this has always struck me and no doubt others as superficial.  Laws are costly to enforce.  If conditions changed such that accepting cash was not wise, people would not accept it. [Short of changing the penalties for breaking it:  Sargent and Velde describe how the Jacobins made it a capital offence to use wine and cheese to store wealth, as they debased their Assignats, claims exchangeable at future auctions of expropriated church lands].   So cash is a bubble, which might or might not burst under different pressures.  And Bitcoin is a bubble blowing on a bubble, and holders are calculating that they might or might not be able to exchange for cash at some point, and might or might not (probably will) be able to exchange the cash they get for their Bitcoins for something they really want.

Bitcoin seems like a solution to a problem that doesn’t exist, or, if it does exist, is slowly being solved over time.  One problem with currency is that it’s inconvenient to cart it around.  But electronic payments are slowly taking over.  Except [as Paul Flowers’ secretly filmed consumer transactions recently reminded UK TV viewers] for illegal goods.  Another is that people can steal it from you.  But Bitcoin also seems exposed to theft.  Another is that it gets eroded by inflation.  But societies around the world have slowly got on top of that problem too.  Looking far out into the future, I recall that people used to speculate about a world in which we could engage in real-time wealth exchange.  Perhaps swapping claims on indexed mutual funds.  This world advances too, with the odd hiccup in financial markets.  [OK, quite a big cough].

Bernanke reportedly said that Bitcoin may hold ‘long term promise’.  But, if I’m wrong about it, and people do switch to using it, then it will cause a problem for central banks, depriving them of a lever.  Those who think that using monetary policy for the purposes of trying to iron out booms and busts does more harm than good may think that losing this lever would be a good thing.  Personally, I would worry about it.  Imagine the world economy without the loosening of the Fed, ECB and BoJ.  Not even John Taylor would advocate policy that tight.

Could Bitcoin be a kind of hedge against other monies?  I doubt it.  A world where all public monies are being debased feels more like a world that is facing catastrophe, and it doesn’t seem right to conjecture that virtual computer-based monies would feel safe.  If all the cash bubbles burst, surely the Bitcoin bubble would too.  A world in which just one public money was being debased would be one in which investors could protect themselves by holding multiple currencies.  So Bitcoin doesn’t seem to help here either.

Bitcoin’s hard-wired money supply growth is currently set to flatten out at zero.  Not good either.  Presuming that global output continues to grow, and velocity stabilises [in this hypothetical world where we settle on Bitcoin as the medium of exchange], that will mean deflation.  Deflation, for anyone who reads Paul Krugman’s blog, or anyone who knows sticky-price models, is a bad thing.  If you plot GDP in the major economies over the life of the gold and other metallic standards, the episodes of deflation following inflations are not usually remembered as happy periods.
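
The deflation arithmetic here is just the quantity theory in growth rates (my formalisation, not in the original):

\[
\frac{\dot M}{M} + \frac{\dot V}{V} = \frac{\dot P}{P} + \frac{\dot Y}{Y}
\;\Longrightarrow\;
\frac{\dot P}{P} = -\frac{\dot Y}{Y}
\quad \text{once } \dot M = \dot V = 0.
\]

With Bitcoin issuance flattening out and velocity assumed stable, prices quoted in Bitcoin would have to fall at roughly the rate at which real output grows.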

Private monies like this may have a beneficial effect.  Hayek imagined a world where private monies would compete and discipline issuers not to debase their currency.  Perhaps Bitcoin and other competitors that emerge might do the same.  But then, as just mentioned above, this could have its downsides if it forces central banks to generate inflation rates that are too low, or to avoid using monetary policy to stabilise booms and busts.

The Economist report that the German finance ministry recognise Bitcoin as a ‘unit of account’.  But that doesn’t amount to much.  I’m pleased to announce that anyone who wants to deal with me can count whatever they like in olives or marshmallows, provided they don’t expect me to take them as final settlement.  [I don’t like either].


Price level targeting: response to NY Fed blog

In a nice post on the New York Fed’s blog, Liberty Street Economics, Marc Giannoni and Hannah Herman write on price level targeting.  In a nutshell, they observe that, whatever the FRB said it was doing ex ante, ex post it has brought about a trend-stationary price level, which is what you would get if you were ‘price level targeting’.  The question is begged:  so why not consolidate this into a formal Price Level Target to lock in the benefits?
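
To pin down the difference (my notation, not the bloggers’):  under inflation targeting bygones are bygones, while under PLT misses must be unwound,

\[
p^{\mathit{IT}}_{t+1} = p_t + \pi^*, \qquad
p^{\mathit{PLT}}_{t+1} = p_0 + \pi^*\,(t+1),
\]

in logs.  So if the price level ends up \(x\) per cent above the PLT path, inflation of roughly \(\pi^* - x\) is required next period (or the correction spread over several), which is where the credibility worries below come in.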

A few comments.

1.  First, suppose, as most of the analytical work assumes, that expectations are rational.  In this case it seems unlikely that a price level target would be credible.  Imagine a few boom years where inflation has got away from the central bank a little and the price level has risen above target.  Is the central bank really going to be believed when it says it will deliberately engineer a recession to make up for it?  I doubt it.  In which case expectations won’t respond as the NY Fed bloggers describe, and an even more draconian tightening would be required to bring about the price level correction.  In the case of the US, maybe one could argue that since the price level has been trend-stationary over the recent past, people will continue to think that it will be over the future, so a PLT will be credible.  But on the other hand, you could argue that people might think ‘The Fed have all this time been pursuing a secret PLT, deceiving us.  So they say they are doing PLT now, but perhaps they have just switched to a new secret target.’

2.  Second, perhaps expectations are not rational after all!!  [I can hear the sniggers of non-practitioners in macro.]  In which case the benefits from having a lever over forward expectations don’t materialise.  This was one of the reasons that the Bank of Canada concluded that it would be unwise to do PLT in its public research review of the topic.

3.  There is a third reason why I don’t like PLT, and it will sound a bit unscientific.  [That is, more so than other macro-style arguments which, as a newcomer to the blog and twittersphere, I realise are rejected as unscientific anyway].  The benefits of price level certainty are clear in RE models with the very arbitrary way we have of modelling sticky prices and the value people place on (surely actually worthless) paper money.  The fine details of optimal monetary policy depend quite a bit on these arbitrary choices.  Both are completely question-begging.  Why the hell are prices sticky if it’s so costly?  Where do the costs come from?  The QJE and AER are stuffed full of hard papers on this.  And why do people hold money, and how will the value they place on it fluctuate?  Likewise, the top journals have been peppered with great work on this [including by Steve Williamson, author of the blog ‘new monetarist economics’].  It seems extremely odd to me to fine-tune the monetary regime in favour of conclusions reached in a particular New Keynesian model of sticky prices and money, when these two crucial features of the model are so question-begging.

4.  To believe that the Fed should switch to PLT seems to me to be unwise.  [To be fair, Giannoni and Herman don’t actually recommend this, but they might be interpreted as doing so].  It requires us to think:  the Fed should not actually do anything different at all, because it has been doing what it needs to do to generate a trend-stationary price level;  everyone saw and understood exactly what the Fed was doing, since they have rational expectations, and they perceived the Fed to be committed to carrying on doing it;  following the change in communication, nothing else will happen.  There will be no misunderstanding or incomplete credibility surrounding the new policy.  In a sense, a formal switch would be a contradiction.  No need to do anything different, and, since expectations are rational, no need to say anything different [or anything at all].  Working backwards, since central banks think communication is very important, and take great care over it [witness the tip-toeing about quantifying the inflation target over many years, or broaching the idea of ‘tapering’ asset purchases], one might infer that expectations are quite different from those that guarantee the benefits of PLT, in which case, better leave well alone.

5.  Ultra-nerdy point and a bit of blatant self-publicity.  In this piece of work coauthored with Andy Blake and Tatiana Kirsanova, we explain how much of the past work studying the benefits of price level targeting neglects the fact that you can have multiple rational expectations outcomes.  Without a closer look, you can’t tell whether moving from an inflation target to a price level target is going to improve things or not.  In fact PLT doesn’t come out too badly.


More on the Scottish Currency skullduggery

A post-script to my previous post on this, thanks to another friend who can’t be named and tweeting by Monique Ebell and Angus Armstrong at NIESR.

1.  First off, on the insidious association with the notion of Sterling as an ‘asset’ that the Scots have a right to and could trade for a share of the national debt.  Oh, the cunning.  Of course, given that everyone accepts Sterling in the UK, if I have some in my pocket, that is an asset to me.  I expect to be able to swap it for something useful.  But the SNP are mixing up that concept with the idea of not ‘some’ Sterling, but ‘the Sterling currency area’.  That’s a very different animal.  In a sense, it is also an asset.  It is a system of bookkeeping, recording who has done what in the past and who owes what to whom.  And other things too.  But it is not something that can be divvied up, with individual groups of society doing with it what they will.  Scots can take the wealth they have accumulated if they go their own way after the referendum.  Some of that will be stocks and shares, denominated in Sterling, and, at some point, they will be able to sell them, and convert them into whatever currency they need, and buy goods and services.  They will also be able to take notes and coins with them.  Could be Sterling, or it could be these exchanged for Scottish notes, or euros, or whatever.  But they can’t take a share of ‘the benefits conferred by being a member of a currency union’ with them to do with as they wish.  What they want to do is set independent fiscal policy.  But in so doing they will undo the benefits that the Sterling currency area confers on everyone else.  The sneaky thing about the language here is the slippery connection of the Sterling ‘asset’ with the national ‘debt’.  And it’s obviously not a coincidence.  Otherwise, why not say simply that if the Rest of the UK [RUK] refuse to allow them to remain in the Sterling area, Scotland will want compensation across a range of other issues?  Eg they could trade this for a delay in the repatriation of the nuclear Trident submarines.  Instead, they make this corrosive association with debts and assets.

2.  An analogy to try to bring to life why one group doesn’t have a right to impose an externality on another.  Think of a city playground.  There’s one a hundred metres from my house in Kentish Town.  Imagine a nirvana where dogs were not allowed into that playground.  But then a group of residents bordering one corner of it decided that their use of it would involve dogs running around and crapping all over the grass, because they had decided to be ‘independent’ and adopt their own rules for dogs.  Would we agree that the dog libertarian residents had a ‘right’ to do what they wanted with their access to the playground?  Of course not.

3.  Suppose the SNP decide to try to blackmail the RUK into letting them remain in the Sterling area with independent fiscal policy.  They are now explicitly threatening to refuse to honour their share of the national debt.  My friend guesses that this would be worth about 8 percentage points on the RUK debt-to-GDP ratio [a back-of-the-envelope version of this calculation is sketched at the end of this post].  Quite nasty if the RUK feel forced to honour it to avoid a default.

4.  But what kind of a threat is it?  For two reasons, it might not be such a good one.  First, it would be equivalent to a default.  That is likely to raise the cost of funding debt the Scots issue independently, and force them to run tighter fiscal policy for a very long time to come.  At a time when the shape and behavior of the new state is uncertain, and in these fragile times where risk tolerance for investors in suspicious sovereigns is low, the funding premium could turn out to be very large.  Second, suppose the RUK agree to be bullied, and the Scots then take their share of the debt.  Next period, the Scots are done for, because there is nothing binding future RUK parliaments and electorates, ensuring that they honour the commitment to allow the Scots to continue to be a member of the Sterling union.  (Or, pedantically, nothing to prevent them setting up their own currency union which they refuse to admit Scotland into).  Once the debt is handed over, there won’t be much the Scots could do about it.  As with many issues, the SNP haven’t given any details of just how they will try to bully the RUK, so we don’t know whether they have a way to get around this.
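
As promised above, a back-of-the-envelope check of the ‘about 8 percentage points’ guess, using assumed round numbers of my own for the debt and GDP shares, not my friend’s actual figures:

```python
# If Scotland repudiates its share and RUK honours the whole debt stock,
# the debt is unchanged but the GDP denominator shrinks. Numbers assumed.
uk_debt_to_gdp = 0.90        # assumed UK general government debt / GDP
scotland_gdp_share = 0.085   # assumed Scottish share of UK GDP

ruk_debt_to_gdp = uk_debt_to_gdp / (1 - scotland_gdp_share)
print(f"RUK ratio: {uk_debt_to_gdp:.1%} -> {ruk_debt_to_gdp:.1%}")
# roughly 90.0% -> 98.4%, i.e. about 8 percentage points
```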
