Published instrument plans: the Financial Policy Committee do what the MPC dare not

The Bank of England has published its Financial Stability Report and, along with it, the decision of its Financial Policy Committee to raise the counter-cyclical capital buffer from 0 to 0.5%.  Interestingly, there is also an announced plan to raise this further, to 1%, in November, on the presumption that the economy, and along with it financial stability risks, evolve as the FPC expects.  So the FPC has agreed and disclosed an instrument plan for its macro-prudential instrument.

This seems to be beyond the Bank’s Monetary Policy Committee.  For monetary policy, we have decisions about today’s instrument settings [interest rates and QE] and a published forecast conditioned on an estimate of what markets are guessing for those settings over the future.  And we have speeches, coded to differing degrees, which refer to the chance of a hike in the near term and the likely destination of rates in the longer run.

Two arguments made against publishing interest rate and QE plans were these.

‘How could a committee of 9 people, who find it hard enough to agree decisions for today’s instruments, possibly have a manageable discussion about a whole sequence?’

And:  ‘If we publish plans, they will be misunderstood as promises to set instruments this way come what may, and when actual settings deviate, people will think we have cheated and our credibility will suffer.’

These objections, in principle, would seem to hold also for the FPC, but it has found a way.

 


Balance sheet shrinkage: so soon?

The original plan for balance sheet shrinkage, articulated by Bernanke in the States and Mervyn King in the UK, was organised around the idea of spending as little time as possible using QE as the marginal tool of adjustment for monetary policy.  This in turn flowed from the idea that the effects of QE are ill understood, so best not to rely on it if you don’t have to.  [I think this argument has flaws, which I’ll write about in another post.]

The corollary of this idea was that you should wait to shrink QE stocks until you were sure that there was little chance of having to reverse course and start QE again [which would mean using QE as the marginal tool of adjustment].  And that in turn meant waiting until the economy had recovered to the point where enough interest rate hikes were warranted that there was plenty of room above the zero bound to respond to the next recessionary shock.

If you take FRB St Louis President James Bullard at face value, we may get shrinkage before another hike in rates.  This seems like a change of plan, since interest rates are only at 1.25%, leaving very little room to cope with future shocks, given some plausible guess at what the distribution of those shocks looks like.  To give a rather extreme comparison:  central banks were hoping for something like -8% interest rates in the darkest days of the Great Financial Crisis, after starting at 5-6%.  So if they could only have smashed through the zero bound, they would have liked a 13-14% rate cut.  1.25% therefore gives the Fed room to respond to a shock about 1/10th the size of the GFC without reversing course on QE.
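
To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch using only the numbers guessed at above:

```python
# Back-of-the-envelope headroom arithmetic, using the guesses in the text.
start_rate = 5.5       # mid-point of the 5-6% pre-crisis starting rates
desired_rate = -8.0    # the roughly -8% central banks were hoping for
gfc_cut = start_rate - desired_rate  # ~13.5pp: the cut the GFC 'warranted'

current_rate = 1.25    # Fed funds target at the time of writing
headroom_ratio = current_rate / gfc_cut

print(f"GFC-sized desired cut: {gfc_cut:.1f}pp")
print(f"1.25% covers a shock roughly {headroom_ratio:.0%} the size of the GFC")
# -> about 1/10th, before the Fed has to reverse course on QE
```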

There are two ways of rescuing the idea that the plan is being stuck to.

One is that the average maturity of QE holdings in the US is short, so the shrinkage can be achieved by simply letting assets mature rather than selling them outright.  So if the balance sheet has to grow again, new purchases won’t be following sales.  If you believe that only actual sales or purchases matter [that it’s the flows, not the stocks, that matter], then this is not a reversal.  It’s purchases following natural wastage, not purchases following sales.  However, this is pretty much the opposite of what the event study analysis [including work by Fed staff] says:  i.e. it’s stocks, not flows, that matter.  Natural wastage shrinks the balance sheet;  purchases grow it.  So natural wastage followed by purchases is a reversal of the stocks.

A second way to rescue the ‘no change to plan’ view is that there is reason to be very optimistic about the future distribution of shocks.  You’d have to be a brave person to hold that view.  It is also not consistent with the chatter about raising the inflation target, which suggests that the Fed is conscious of the worry that the distribution of shocks is much less favourable than when the old 2 per cent target was designed [more room for responding therefore needed, which raising the target provides].

If this is a change of plan, why has it happened?  Is it because the FOMC are worried about extra hostility towards QE from Congress and the White House, which has tilted rightward since Bernanke’s time?


Yellen on raising the inflation target

It was very surprising to hear Janet Yellen hint in public that there was a good case for raising the inflation target.

The economic logic of doing just that is very sound:  if we think that equilibrium real interest rates are likely to be low for the foreseeable future, then the corollary is that the resting place for central bank rates is low.  That means less room to cut rates in response to a future recession, leading to worse outcomes for the real economy [and inflation].

On the assumption that the Fed – or the combined might of the Fed and the Treasury – has the instruments to hit a higher target, raising the target will, in the long run, raise the resting point for nominal rates one for one, albeit after a period where rates will stay lower than they otherwise would have been, to generate the stimulus necessary to hit the new target.
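
The long-run arithmetic here is just the Fisher relation; stated minimally:

```latex
i^* = r^* + \pi^*
```

With the equilibrium real rate r* pinned down by real forces, a given rise in the inflation target π* eventually raises the nominal resting point i* one for one, and with it the room to cut.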

This said, the Fed Chair herself suggesting that the target might be changed amounts to the Fed moving the goalposts against which it is itself judged.

The current 2 per cent target was declared by the Fed as a way of interpreting the price stability part of its mandate, which is set ultimately by Congress.  The historical precedent was laid down, therefore, that Congress tolerates the Fed interpreting the otherwise vague mandate in this way.  So perhaps raising this definition of ‘price stability’ stays within this precedent of a tolerated corridor of Fed independence.

Defining 4% – the number most often floated – to be ‘price stability’ would seem to be a stretch, at face value.  Prices that are ‘stable’ surely do not go up by 4%.

Perhaps one could make the legalistic argument that if the rate of price increase is stable, this amounts to a kind of ‘price stability’.  Or, in similarly technocratic terms, it might be pointed out that without the rise in the inflation target, which departs from ‘price stability’, there is less hope of meeting the other part of the dual mandate, full employment, since a lower target means more time at the zero floor for interest rates, with fewer means to counter the business cycle.

After all, this was undoubtedly part of the calculation that led to Bernanke going for 2 per cent, and not a lower number.  Even though the equilibrium real rate was much higher then, it was still recognised that the zero bound could be hit [there was the case of Japan] and that downward nominal rigidities in the labour market meant that a bit of inflation was needed to grease the wheels [and thus achieve full employment].

The broader context in which Yellen’s remarks were made is less encouraging.  The Fed’s interventions with conventional and unconventional monetary policy, not to mention the bailouts, attracted the ire of the right in Congress.

A Fed-initiated move to raise the inflation target – noting that conservatives often view inflation as expropriation – to permit more active Fed policy would seem to be a hard sell to that part of US politics, which currently has the upper hand.  Trump himself accused the Fed of conspiring to try to help the Democrats with loose monetary policy.  Yellen’s remarks, seen in this context, might be predicted to re-ignite Congressional efforts to tame the Fed [remember the #AudittheFed campaign] and reduce its powers further.

Stepping back from the specifics of the US political and legislative context, it always seems to me unwise when central bank officials speak about choices that are essentially political, since by intervening in this way they raise the chance of future incumbents being chosen using political criteria, to achieve political, and not necessarily economically beneficial, ends.

But in her defence, Yellen may have calculated that she was not likely to be reappointed anyway, being seen as a legacy of the Obama era, and that the next appointment was inevitably going to be a highly political one, in which case the battle to retain that job for technocrats was effectively over, and the best one could do for the cause of better future Fed policy was to champion it explicitly.

 

 


Disaster economics

The UK is rightly transfixed with the unfolding story of the catastrophic fire at Grenfell Tower in Kensington, London, which at the time of writing had led to 30 confirmed deaths.  This follows – as the Queen pointed out in her birthday message – terrorist attacks on Westminster and London Bridges, and in Manchester.

‘Disaster economics’ seems like an inappropriately technocratic topic at a time like this.  But disasters often have their root in the inherent challenges of disaster economics [or rather disaster economics and statistics].  And failing to rise to those challenges can lead to more disasters than necessary.

One of the main challenges is figuring out the frequency of disaster-events of different severity, when such things are relatively rare.  In small samples of a few years, you will have many observations of rainfall around the most common quantity, but you will have very few – perhaps even no – occurrences of huge floods.  Estimating the probability of a huge flood or a catastrophic drought is therefore a more hazardous business than guessing the probabilities of milder events.
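
A small simulation makes the point; purely for illustration, it assumes annual rainfall maxima follow a known Gumbel distribution and asks how well 30 years of data pin down a 1-in-100-year event:

```python
# Sketch: how noisy is an estimate of a 1-in-100-year event from 30 years of data?
# Purely illustrative: assumes annual maxima are Gumbel(0,1) distributed.
import numpy as np

rng = np.random.default_rng(0)
true_threshold = -np.log(-np.log(1 - 0.01))  # the true 1-in-100-year level

estimates = []
for _ in range(10_000):
    sample = rng.gumbel(loc=0.0, scale=1.0, size=30)  # 30 years of data
    # naive empirical estimate of P(exceed the 1-in-100 level)
    estimates.append((sample > true_threshold).mean())

print("true probability: 0.010")
print(f"estimates from 30-year samples: median={np.median(estimates):.3f}, "
      f"90% range=({np.quantile(estimates, 0.05):.3f}, "
      f"{np.quantile(estimates, 0.95):.3f})")
# With 30 observations the estimate is mostly 0 or ~0.033: far too coarse
# to distinguish a 1-in-100 event from a 1-in-30 one.
```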

A key part of this problem is figuring out not just the frequency of very bad things, but how that frequency changes as policy changes.  How high would a sea wall have to be to reduce the flooding of a seaside town to 1 year in 100?  How much would need to be spent on surveillance of potential terrorists to reduce the frequency of London Bridge style attacks to one every ten years?

A focus of the literature on policymaking when the chance of very bad things happening is poorly estimated is the idea of ‘robustness’.  I came across applications of this idea to monetary policymaking by Tom Sargent and others, but the idea was borrowed from engineering and control work done in the 1960s and 1970s.  The idea is to set up the policy problem so that one chooses the policy that does best in the event that things turn out as bad as they could be, relative to the benchmark understanding of the problem.  To translate:  imagine we start out with a guess that the sea wall height needed to get the frequency of a flood down to 1 in 100 years is 3 metres.  We then ask:  what is the worst this required height could actually be without it being apparent in the data we have?  Suppose the answer is 5m.  We then fund a sea wall to 5m.
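
A toy version of that minimax recipe, with invented numbers for building costs, flood losses, and the set of models consistent with the data:

```python
# Toy minimax sea-wall choice.  All numbers invented for illustration.
# Benchmark model says 3m achieves the 1-in-100 flood frequency; the worst
# model consistent with the data says 5m is needed.
import numpy as np

heights = np.arange(2.0, 7.0, 0.5)    # candidate wall heights (m)
plausible_required = [3.0, 4.0, 5.0]  # heights the 'true' model might require

def loss(height, required, build_cost_per_m=1.0, flood_loss=20.0):
    # pay for the wall; suffer the flood loss if the wall is too low
    return build_cost_per_m * height + (flood_loss if height < required else 0.0)

# robust choice: minimise the worst-case loss across plausible models
worst_case = {h: max(loss(h, r) for r in plausible_required) for h in heights}
robust_height = min(worst_case, key=worst_case.get)
print(f"robust wall height: {robust_height}m")  # -> 5.0m, funding the worst case
```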

Two difficulties follow from this.  The first is that it is not often easy to put a boundary on the ‘worst case scenario’.  If our time series on floods is short, or patchy, or not that accurate, we might not have a good idea where that boundary lies.

A second difficulty arises from the fact that scarce public funds have to deal pre-emptively with multiple sets of disasters of unknown probability.  If the only thing we had to spend money on was terrorist attacks, we could simply set the worst-case estimate of the amount needed to get attacks down to 1 in 10 years equal to the total feasible tax take.

But in reality governments have to deal with the risk of tower block fires, hospital epidemics, terrorist attacks, wars, floods, road pile-ups, corruption, cyber attacks, financial crises, climate change, prison riots, and much more.

An overly cautious approach to avoiding one kind of catastrophe drains the funds available to prevent others, and will lead to more catastrophes of those kinds.
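
The textbook answer is to spread the budget so that the marginal catastrophe averted per £ is equalised across categories.  A toy sketch, with all categories and numbers invented:

```python
# Toy allocation of a fixed budget across disaster categories.
# Invented assumption: expected annual deaths fall with spend s as d0 * exp(-s/k).
import numpy as np

risks = {  # (baseline expected deaths per year, £m scale of diminishing returns)
    "fire safety":   (50, 100.0),
    "terrorism":     (30, 300.0),
    "flood defence": (20, 150.0),
}
budget, step = 500.0, 1.0  # £m total, allocated greedily in £1m steps

spend = {name: 0.0 for name in risks}

def deaths(name, s):
    d0, k = risks[name]
    return d0 * np.exp(-s / k)

for _ in range(int(budget / step)):
    # put the next £1m where it saves the most expected lives at the margin
    best = max(risks, key=lambda n: deaths(n, spend[n]) - deaths(n, spend[n] + step))
    spend[best] += step

for name in risks:
    print(f"{name:14s} £{spend[name]:.0f}m  expected deaths "
          f"{risks[name][0]} -> {deaths(name, spend[name]):.1f}")
# Over-spending on one category starves the others: the greedy allocation
# ends up equalising marginal lives saved per £ across categories.
```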

The problem of disaster policymaking gets harder when we place it in the context of a real life democracy with real voters.   Several issues arise.

First is gaining acceptance that – particularly given the multi-dimensional and competing nature of disasters that we face – it is impossible to eliminate risk entirely.

A second problem, deriving from this, is the demand for ‘something to be done’ in response to a disaster.  I say this derives from the first difficulty because, even with optimal disaster policy, there are going to be disasters, so it may be that nothing needs to be done at all.  I make this point not to pretend that this is the position we are in at the moment.  There are plenty of persuasive arguments emerging out of the coverage of the Grenfell Tower fire and recent terrorist attacks that might lead us to think that things have to be done.

Third, given the news cycle, short memories, and the limited horizons of politicians in a competitive democracy, there is pressure for something to be done quickly enough for the incumbents to salvage credit for responding appropriately and quickly to the disaster.  A better something – one that did not drain money from effective disaster prevention elsewhere, and that emerged out of a time-consuming investigation – can’t always be waited for.

Fourth, policy is made through the prism of voters’ psychological responses to different kinds of risk.  These responses are not always rational, as research in behavioural economics and related fields has shown.

A famous recent example is the response of US citizens to the 2001 terrorist attacks involving hijacked planes used as bombs.  The thought of being caught up in such a horrible event, however unlikely, was sufficient to cause so many people to use road transport instead that far more were killed on the roads, due to the mundane, but less awful to contemplate, risk of crashes [than would have died given a plausible estimate of the chance of repeat hijackings].

Making this point is rather distasteful given what Grenfell Tower residents went through, and how that disaster might well have been averted with safer construction or better evacuation advice.  I hope you take me to mean not that the reactions so far are misplaced, just that the good that comes out of this tragedy should not be confined to fire safety, but should extend to an appreciation of disaster economics and policy as a whole.

Another feature of the disaster policy problem is that it can be easier to muster political support and momentum to respond to events that are fresh in the mind than to risks that appear, at least to some local constituency, to be latent – that is, risks that have a certain probability of happening but have not yet happened.  This is perhaps what dogs climate change mitigation, where the connection between our individual choices and the problem is hard to detect.

Climate change’s most dramatic effects to date seem to be far from the UK:  at the poles, in glacial areas, or in low-lying, poorer economies that most of us have not visited.  And the connection between the policy choice and the event is far removed in time [in this case, discussion revolves around temperature changes over 100 years].

This is not to push back against the likely response to the Grenfell Tower fire.  Far from it.  The point is that, winding back time to before the fire, policy choices up to that point might later be seen to have been tainted by a failure to respond appropriately to risks that were at that time latent, yet to crystallise.

A final aspect of the disaster policy problem relates to general difficulties that people, the media and policymakers tend to have in dealing with statistics and policy analysis.

These difficulties surface all the time, and cropped up in the much more mundane and less tragic context of the debate around Brexit.

For example:  framing the analysis of the cost of Brexit as the assertion that people will, with certainty, be x amount poorer [which led to the counter under the banner of ‘project fear’];  the deduction by Brexiteers that the counterfactual analysis HMT and others did could be dismissed as a simple ‘forecast’;  the observation by Brexiteers that pre-referendum forecasts of the UK economy turned out to be ‘wrong’.  And many more.

There are pockets of wisdom in public policy thinking that relate to this disaster economics issue.  For example, in health, there is the ‘QALY’ [quality-adjusted life year], a way of figuring out how many units of good life a given amount of spending on different treatments buys, and thus of allocating money between them to preserve the maximum amount of life per £.  And in defence analysis there was a tradition of the exact opposite:  working out how much it costs to kill the enemy using different weapons, and therefore maximising the number killed per £ of expenditure.
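
A stylised sketch of the QALY logic – rank treatments by QALYs per £ and fund down the ranking – with all treatments and numbers invented:

```python
# Stylised QALY-based allocation: fund treatments in order of cost-effectiveness.
treatments = [
    # (name, cost per patient £, QALYs gained per patient, eligible patients)
    ("hip replacement",  8_000, 5.0,  1_000),
    ("new cancer drug", 60_000, 1.5,    500),
    ("statins",            300, 0.5, 20_000),
]
budget = 20_000_000  # £20m

# rank by QALYs per £ (the core of the QALY logic)
ranked = sorted(treatments, key=lambda t: t[2] / t[1], reverse=True)

total_qalys = 0.0
for name, cost, qalys, patients in ranked:
    n = min(patients, int(budget // cost))  # fund as many as budget allows
    budget -= n * cost
    total_qalys += n * qalys
    print(f"{name:16s} fund {n:6d} patients "
          f"({qalys / cost * 1000:.2f} QALYs per £1k)")
print(f"total QALYs bought: {total_qalys:,.0f}")
```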

But this kind of analysis tends to be kept under wraps for fear of causing revulsion and a collapse in support for state activities.

Reading this draft back, there is a risk that some are going to take it as a tactless and dry response to the sickening events of the last few months.

But the intention is to point out that there is a need not just to get the Grenfell Tower response right, but to take a look at the government’s approach to disaster economics as a whole.  Is the tax take reserved for such things large enough?  And is it divided up in the right way?  Are all regulations – not just fire regulations – striking the right balance between liberty and disaster prevention, and a balance determined properly, not by the kinds of dysfunction described above?

 

 


There is a capitalist logic to requisitioning empty property near a disaster

‘Requisition houses?  Communism!’

That is the impulse of some in response to Jeremy Corbyn’s offhand suggestion that empty properties of ‘the rich’ be requisitioned to rehouse those made homeless by disasters like the fire at Grenfell Tower.

But there is a perfectly sound logic to it consistent with the way governments treat all our property rights.

For starters, as Jonathan Portes notes in his book ‘Capitalism’, property rights are not absolute.  Planning regulations restrict what we can do with land.  Driving and parking regulations restrict what we can do with our cars.

These restrictions are in the name of a collective, greater good [less ugly towns and no chaos on the roads].  So inviolable rights to dispose of property as we like in all circumstances are rare – because to grant them would ultimately cause harm elsewhere.

So too, perhaps, with the right to have an empty flat located just round the corner from a disaster zone that has made many homeless.

In this spirit, one could imagine temporary requisitioning to be like a congestion charge for cars.  Housing contiguous to the disaster site is the scarce resource, like roads through a busy city.  The equivalent of ‘rush hour’ on the roads is the time immediately after the disaster, when the need for accommodation nearby is extreme and the inviolable right to hold flats empty causes greatest harm.

Relatedly, one could envisage empty-property taxes that rose in the immediate area around a disaster zone like Grenfell Tower, which would either help fund temporary rehousing, or could be discharged in kind by the owners handing over the keys for a while.
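
Purely to illustrate the congestion-charge analogy, here is one hypothetical surcharge schedule; the functional form and every parameter are invented:

```python
# Hypothetical surcharge on empty property near a disaster site, mimicking
# a congestion charge: highest when and where scarcity bites hardest.
def empty_property_surcharge(base_tax: float, km_from_site: float,
                             months_since_disaster: float) -> float:
    """Scale the base empty-property tax by a factor that decays with
    distance from the disaster and with time elapsed since it."""
    proximity = max(0.0, 1.0 - km_from_site / 5.0)           # zero beyond 5km
    urgency = max(0.0, 1.0 - months_since_disaster / 12.0)   # fades over a year
    return base_tax * (1.0 + 10.0 * proximity * urgency)

# A flat half a kilometre away, a week after the disaster: a steep surcharge.
print(empty_property_surcharge(base_tax=1_000, km_from_site=0.5,
                               months_since_disaster=0.25))  # ~£9,800
```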

Having such rules in place ex ante would encourage better – more socially desirable – use of scarce land.


Brief history of time spent inflation targeting

This pulls together tweets that I sent on the history of inflation targeting, having read this article in the New York Times.  Warning:  this is a highly subjective, UK-centric, stream-of-consciousness, and still very brief ‘history’.

A key thing to recall about inflation targeting is that it meant targeting the only thing that central banks had not yet tried, and failed, to target.  It came after broken promises to target the relative price of money and gold;  the exchange rate;  and the growth rate of monetary aggregates;  even after periods when, confused about the difference between an instrument and a target, central banks ‘targeted’ the interest rate.

That statement leaves out nominal GDP targeting.  Despite elegant work by Meade and others, this never seemed to be a runner at the time.

Another feature of inflation targeting, remembering the haste with which it was embarked on, particularly in the UK, was ‘we have to target something, inflation is something, so we have to target inflation’.

At the outset, what seemed scary about doing it was the notion that you can just promise to target the thing that policy really cares about, rather than specifying a value for an intermediate target like money or the exchange rate.

One way of interpreting this new ‘promise’ was:  ‘we won’t tie our hands, because we’ve tried that before, and found that we always have to untie them again;  so instead we will just do it.’  That borrows from Bennett McCallum’s use of the Nike advertising slogan back in the day:  instead of making a commitment, just do good policy.

Naturally, there were sceptics.  Why would markets believe a promise simply to get inflation down, when promises to keep intermediate targets had been broken?  The answer was that the lack of resolve that had led to those past failures had its ultimate cause in the variable link between intermediate targets and the ultimate goal, meaning that there were times when, with respect to the ultimate goal, the intermediate target was better broken.  There was no better demonstration of that than the UK’s exit from the ERM in 1992, which precipitated inflation targeting.

These flaws with intermediate targeting were known at the time, but the dominant view was that the credibility benefits of sticking to a verifiable intermediate target trumped the credibility costs of promising to do something that was occasionally harmful.

I wonder too if part of the reason for pre-inflation targeting strategies was the lack of understanding about what caused inflation.  Money growth and exchange rates were monetary policy phenomena, but inflation was to do with costs, trade unions, oil prices, and lots of other stuff that wasn’t monetary policy.  So how could a promise be framed in terms of a variable so little under central bank control?

Subsequently, inflation targeting central bankers enjoying the good times of macro stability would crow about the optimality of their regime, but the benefits of final goal targeting were not at all universally subscribed to at the outset.

Another aspect of the history not mentioned in that NYT piece is the shift from lexical mandates that stressed the primacy of the inflation goal, using ‘subject to’ texts to mention other goals associated with the real economy, towards mandates that emphasised the trade-offs that existed between inflation and other goals.

In my opinion the ‘subject to’ language was always nonsense.  In responding to shocks that did not generate a trade-off, the ‘subject to’ clause was redundant.  Stabilising inflation would stabilise other goals automatically.  In circumstances when there was a trade-off, the ‘subject to’ clause was simply wrong, directing policy, essentially, to ignore the trade-off to the detriment of the economy.

But ‘subject to’ probably seemed necessary at the outset given the worries about our monetary policy misbehaviour in the past, the sense that simply promising to hit your final goal sounded a bit like magic, and the need to sound like you had engaged conservative central bankers, and not lily-livered ones worried about unemployment.  Who, in the aftermath of our ejection from the ERM, could imagine a Chancellor declaring that ‘we will henceforth target a weighted sum of the variance of inflation and resource utilisation’?  Though that is precisely where the logic of giving up on intermediate targets leads, ultimately.
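
For the record, the end-point of that logic is the standard quadratic loss of the later literature:

```latex
L = \mathrm{Var}(\pi) + \lambda \, \mathrm{Var}(\tilde{y})
```

where π is inflation, ỹ is resource utilisation, and λ is the relative weight that the ‘subject to’ wording was trying not to admit exists.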

Central bankers did slowly become bolder and less confusing about the trade-off language.  Two reasons:  1) They were emboldened by the apparent taming of inflation, and perhaps felt that there was a credibility dividend that could be spent.  2) There was a torrent of applied monetary policy work evaluating regimes, articulating why even in the simplest models dual goals were warranted [culminating in Woodford’s Interest and Prices].

But initially – and still occasionally – multiple goal pursuit was hidden in tricksy language about there being only one target, just variations in the horizon at which it was optimal to meet it.

The most recent chapter in inflation targeting history has been the Great Financial Crisis, which has had many consequences.

The first is that it retroactively boosted a line of thought that held that inflation targeting had led to a neglect of asset prices, and that this had brought about financial instability.  The second is that it thrust interest rates to the zero bound, essentially handing back to fiscal authorities the job of hitting the inflation target.  The third was the bursting of a bubble of thought that had clearly greatly exaggerated the contribution of inflation targeting and central bank independence to macroeconomic stability.  A fourth is the unprecedented level of political controversy generated by persistently low interest rates and quantitative easing, both of which are resented in some quarters as conspiracies to aid the rich, borrowers, or both.

The criticism that inflation targeting ignored asset prices – the BIS were its most persistent advocates – was always wrong.  It was sometimes based on a misunderstanding of the capability and inclination of inflation targeters to respond to multiple goals, to things not defined in their headline quantified index.  It also exaggerated the power of monetary policy to do anything about what was, in essence, a ‘real’ and not a ‘monetary policy’ phenomenon, whose root cause lay in inadequate regulation, not loose monetary policy.  The critique therefore misconstrued symptoms as causes.  The focus on fine-tuning the details of inflation targeting practice, and the lack of focus on regulation, were both caused by a failure to see the risks building up in the system.

The accidental return of the job of hitting the inflation target to the fiscal authorities [who had delegated it in the first place] was unfortunate, and came at a time when those authorities faced their own credibility issues attempting it and when fiscal policy was so politicised that fiscal branding took precedence over confronting the technical problem of assisting monetary policy.

A highlight of the bubble bursting on the contribution of monetary policy frameworks to stability was of course the Bank of England conference on the ‘Great Stability’, which took place in September 2007, and was punctuated by senior attendees leaving to follow up the breaking news they had seen on their Blackberries.

This broad brush history misses out some of the details that transfixed central bank economists.  The developments in communication towards increased, but still incomplete transparency.  The use and abuse of measures of ‘core inflation’.  Discussion of point versus range targets;  of price level versus inflation targets.  Developments in the method of estimating price increases and biases that remained.  The spread of DSGE models in central banks.  The curious phenomenon of the ECB and Fed inflation targeting quietly while saying that they weren’t.

It also raises questions about the future:  how inflation targeting should be reformed to better weather future macroeconomic storms.  Much ink has been spilled on that, so I won’t repeat it here.

 

 

 


More on Bitcoin and the conditions for a takeover of fiat money

Something I did not stress about the likelihood of a crypto-currency takeover in my Alphaville post, and which cropped up in a Twitter exchange with Joe Weisenthal, is that in theory, and even in history, the unit of account and the medium of exchange can differ, and have differed.

So, here, the question I started with was the low likelihood that Bitcoin or similar might take over soon, given the small value of Bitcoin in circulation relative to the value of paper US dollars [100bn compared to 1.4trn].

In this 2000 paper, Woodford explains how the central bank could retain control of monetary policy, even if people stopped using central bank money as a medium of exchange or store of value, simply by virtue of central bank money remaining the unit of account.

Analogy:  if central banks were given the power to define the metre in a textile-based economy, then even without being the monopoly supplier of money they could pump up the business cycle by lengthening the metre.  Textile suppliers would have temporarily fixed prices per metre of cloth.  [These amount to the quantities of goods they would accept, directly or indirectly, in exchange for a metre of cloth.]

The lengthening of the metre would pump up demand for cloth [which would now be cheaper in terms of goods per old metre!  Still with this?] in the same way that an increase in the money supply reduces the real price of fixed-price goods in a conventional model economy.  We don’t yet broaden out central bank empires to defining the metre, but we could contemplate it one day:  we’d have to include the kilo, the litre, and presumably also allow central banks to define the unit of time, so that the otherwise weightless, dimensionless service economy could be controlled.
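
A worked number or two, as a toy sketch of the metre story [the 10 apples and the 10% stretch are invented]:

```python
# Toy arithmetic for Woodford's unit-of-account point.
# A weaver posts a sticky price: 10 apples per metre of cloth.
sticky_price_apples_per_metre = 10.0

# The 'central bank' redefines the metre to be 10% longer.
metre_stretch = 1.10

# The posted price doesn't change, so measured in OLD metres cloth now costs:
real_price_per_old_metre = sticky_price_apples_per_metre / metre_stretch
print(f"{real_price_per_old_metre:.2f} apples per old metre")  # 9.09: cheaper

# Same mechanics as a money-supply increase with fixed goods prices: a sticky
# nominal price plus a redefined unit delivers a real price cut, and demand
# for cloth rises.
```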

In Chile, policy actively sought to disentangle the unit of account from the medium of exchange, with the creation of the Unidad de Fomento.

This was to try to avoid the costs of endemic inflation in terms of the Chilean Peso.  Exchange rates between UFs [which had no material form – you could not buy UFs] and Pesos were published daily in the newspapers;  these were simply ways of presenting changes in the CPI.  [See Shiller (2002) and also this nice blog post by JP Koning.]
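
A stylised sketch of the mechanism – the actual UF rule spreads the previous month’s CPI change geometrically over the days of the month, with conventions I gloss over here:

```python
# Stylised daily update of an indexed unit of account like Chile's UF.
# Assumption: the unit appreciates against the peso by last month's CPI
# inflation, spread geometrically over the days of the current month.
def daily_uf_path(uf_start: float, monthly_cpi_inflation: float, days: int = 30):
    daily_factor = (1.0 + monthly_cpi_inflation) ** (1.0 / days)
    path = [uf_start]
    for _ in range(days):
        path.append(path[-1] * daily_factor)
    return path

path = daily_uf_path(uf_start=100.0, monthly_cpi_inflation=0.02)  # 2% monthly CPI
print(f"day 0: {path[0]:.2f} pesos, day 30: {path[-1]:.2f} pesos")  # 100 -> 102
# Contracts written in UF are thus automatically CPI-indexed, even though
# no UF 'money' exists to hold or spend.
```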

A similar indexed unit of account, the ‘Unidad Reajustable’, exists in Uruguay.  And there are other examples too.

In the case of crypto-currencies, we are contemplating a switch that from the perspective of the authorities is involuntary, not voluntary as it was in Chile.  But the Chilean experiment shows that the unit of account/medium of exchange separation possible in theory is also possible in practice.  If central authorities can will this separation, perhaps markets can coordinate on it too.

That was a long-winded and somewhat contorted way of explaining that the low take-up of Bitcoin and similar as media of exchange is, at best, a poor guide to the probability that they become units of account.

Control of monetary policy may escape central banks even if Bitcoin never takes over as a medium of exchange;  conversely, if central banks can retain the right to define the unit of account, they might be relaxed – a small seigniorage loss aside – about losing the role of monopoly issuer of the currency.

 
