Moving the Bank of England to Birmingham won’t help monetary or financial policy

A report in the FT today reveals that Labour commissioned a review from two consulting firms – GFC and Clearpoint – which is said to conclude that the BoE’s London location “leads to the regions being underweighted in policy decisions.”  The recommendation is to move the Bank of England to Birmingham.

This is disheartening, for two reasons.  First, there are actual things that could be fixed in the Bank of England:  the relationship between monetary and fiscal policy at the zero bound;  transparency in the models and forecasts;  the lack of an action plan for unconventional policies in the event of another crisis;  the vagueness of the financial stability mandate;  the preponderance of internal members on the Monetary Policy Committee;  the refusal of the MPC to present clear forecasts of what they intend to do;  the lack of clarity about how they trade off real and nominal variables… and much more.  Second, the headline is instead about a report on something that does not need fixing.

A few points.

How would monetary or financial policy settings ideally have been different?  Tighter or looser?  Why?

The Bank of England’s mandates do not mention the regions explicitly as part of monetary or financial stability targets.

The BoE’s monetary policy remit does mention the regions:

“The Committee’s performance and procedures will be reviewed by the Bank of England’s Court on an ongoing basis (with particular regard to ensuring the Bank is collecting proper regional and sectoral information).”
But this is interpreted by the BoE, and by a reasonable reader, as meaning that in figuring out the appropriate policy in pursuit of its aggregate objectives, it should collect the right information.  Part of the clue is in the mention of ‘sectoral’ as well as ‘regional’.  The point is not to give the BoE two extra goals – adjudicating on the regional and sectoral distribution of activity and inflation – but to instruct it about dimensions of the distribution of activity that might matter for its aggregate goals.
I think the text is arguable, and bendable in the direction of regionalists and sectoralists.  But this is certainly how the MPC interpret it – as evidenced many times, including at a recent Treasury Committee hearing – and how it should be interpreted.
In pursuit of stability in aggregate inflation and activity [and financial stability], distributions are not irrelevant:  the concentration of debt in a small group of highly indebted individuals, and of exposures in others, was one of the drivers of the financial crisis.  Had the debt or the exposures been shared out more equally, there would not have been the defaults and fire-sales.  Mismatch between where jobs and workers are – in geographic and occupational space – reduces effective labour supply.  Because rich people have lower marginal propensities to consume than do the poor, inequality affects aggregate consumption.  But these effects matter for the BoE only because they affect the aggregates it seeks to control.  And they often come with important aggregate signals – like the spread on risky assets, aggregate consumption, or aggregate unemployment and vacancies.  And, financial effects aside, they are often neither particularly time-varying nor large.
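
A minimal sketch of the consumption point, with invented numbers (the MPCs and incomes below are purely illustrative, not estimates of anything):

```python
# Illustrative only: how the distribution of income across households with
# different marginal propensities to consume (MPCs) moves aggregate consumption.
# The MPCs and incomes are invented numbers, not estimates.

def aggregate_consumption(incomes, mpcs):
    """Aggregate consumption out of current income, given household MPCs."""
    return sum(y * m for y, m in zip(incomes, mpcs))

mpcs = [0.9, 0.6, 0.3]            # poor, middle, rich: poorer households spend more of a marginal pound

equal_incomes   = [100, 100, 100]
unequal_incomes = [50, 100, 150]  # same total income, skewed towards the low-MPC rich

print(aggregate_consumption(equal_incomes, mpcs))    # 180.0
print(aggregate_consumption(unequal_incomes, mpcs))  # 150.0: same aggregate income, less aggregate spending
```

The point of the sketch is only that the aggregate the MPC cares about depends on the distribution; it says nothing about whether that dependence is large or time-varying.
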
But where is the evidence that the BoE gives insufficient weight to these considerations?  How did monetary or financial policy suffer?
Even if the BoE were given regional goals, it would not be able to achieve them.
The BoE’s main tools are aggregate tools, not capable of policing a regional distribution even if it wanted to.
Imagine trying to set regional central bank interest rates.  Counterparties would turn up wherever the deposit rate was highest and the lending rate lowest, and then lend on to others confronting the less favourable rates being set to achieve some regional goal.  Without policing the regional destination of onward lending – in fact, without instituting separate regional currencies – you could not sustain regionally divergent risk-free interest rates.
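
A toy illustration of that arbitrage, with invented rates (nothing here is a real BoE facility or rate):

```python
# Toy arbitrage against regionally differentiated risk-free rates.
# The rates are invented for illustration; they are not actual policy rates.

regional_deposit_rates = {"Birmingham": 0.02, "London": 0.005}  # what the central bank pays on deposits
regional_lending_rates = {"Birmingham": 0.03, "London": 0.01}   # what the central bank charges to lend

cheapest_borrowing = min(regional_lending_rates, key=regional_lending_rates.get)  # London, 1%
best_deposit       = max(regional_deposit_rates, key=regional_deposit_rates.get)  # Birmingham, 2%

notional = 1_000_000  # borrow in the cheap region, deposit (or on-lend) in the dear one
profit = notional * (regional_deposit_rates[best_deposit]
                     - regional_lending_rates[cheapest_borrowing])

print(f"Borrow in {cheapest_borrowing}, deposit in {best_deposit}: "
      f"riskless profit of about £{profit:,.0f} per year")

# Scaled up without limit, this forces the two sets of rates back together,
# unless onward lending is policed or the regions have separate currencies.
```
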
Regional macro-pru might be possible, but again it would be hard to police intermediation arbitrage without essentially going as far as instituting regional financial authorities.
Regional purchases of local government bonds would be possible, but most of the time would not make much difference [assuming that they were ultimately reversed, and not regionally differentiated monetary finance].  Regional purchases of private sector bonds would be possible;  but the market is not large, and is currently concentrated among London and South East issuers, so such a policy could not make that much difference… and we could go on.
Regional policy is for the government to sort out;  it has the legitimacy to undertake the necessary deliberate redistribution.  And it has the tools best suited to do it.
The Bank has a regional network of 12 Agencies, which, it advertises, conduct 9,000 visits to business contacts per year and host 60 visits by ‘policymakers’.   My feeling is that it spends too much effort, and with insufficient science, collecting its own regional information.  We don’t ask the BoE to collect inflation or GDP data.  Partly because we ask it to specialise in monetary and financial economics, not statistics [ok, so they do collect monetary and bank balance sheet data…].  And partly because it does not look good to collect the data against which you will subsequently be judged.  There is enough trouble with inflation and GDP truthers as it is.  The Agency set-up is an unwieldy mix between a PR/accountability function – the BoE has to be seen to be listening, seeing the real activities of the real people and firms its policies affect, and taking its case to those constituents – and a dubious data-gathering function.
If there is a failure – which I don’t see – how is the accountability system monitoring the Bank giving rise to it?  And why could this not be addressed simply by giving regional concerns more weight in the hearings of MPC and FPC members?  Moving BoE functions to Birmingham would presumably mean replacing a bunch of visits to Birmingham with a bunch of visits to London.  How would policy be improved by that?
Despite all this, I am not particularly against moving the BoE.
As part of an orchestrated move to dismantle the success of London’s economy, and try to recreate it for a part of the country that had so far missed out, there is at least a case to argue.  But we should not kid ourselves that it would help any actual policy that the BoE conducts.
I would propose something different.  Rather than uprooting and dismantling successful institutions and local economies, squishing the London tax surplus in the process in the hope that it reappears somewhere else, preserve and spend that surplus on better transport facilities and universities in neglected areas.
Oh, and rename the Bank of England ‘The Central Bank of the United Kingdom’ and rotate MPC and FPC meetings through towns like Penrith, Bangor, Peterborough, Thurso and similar.

Time for an opportunistic inflation

The title of this post is a play on a paper by Orphanides [ex Governor of the central bank of Cyprus] and Wilcox, ‘The Opportunistic Approach to Disinflation’.  That paper described a policymaker who would not seek to engineer low inflation through deliberate monetary policy, but would wait for the good fortune of disinflationary shocks to do it for them, locking in low inflation afterwards.

What is the relevance of this now?

Post the Great Financial Crisis, the UK and other economies are stuck with very low real rates for the foreseeable future.  Other things equal, this means lower central bank rates.  The resting point for rates is perhaps as low as 2-3%.  This means that there is little room to respond to the next crisis:  recall that interest rates started out at 5.5% in the Summer of 2008.  A way round this is to raise the inflation target, which would, once the new target was met, tend to raise the resting point for central bank rates.  There are arguments against:  the risk of getting a reputation for moving the monetary goalposts;  and the traditional case about the costs of inflation, which underpins the inflation target in the first place.
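
The arithmetic behind the resting point, on illustrative numbers (the 0–1% equilibrium real rate is my assumption, chosen to be consistent with the 2–3% figure above; i*, r* and π* denote the nominal resting rate, the equilibrium real rate and the inflation target):

```latex
% Resting point for the policy rate, via the Fisher relation (illustrative numbers):
i^{*} \;\approx\; r^{*} + \pi^{*}
\qquad r^{*}\approx 0\text{--}1\%,\; \pi^{*}=2\% \;\Rightarrow\; i^{*}\approx 2\text{--}3\%
\qquad r^{*}\approx 0\text{--}1\%,\; \pi^{*}=4\% \;\Rightarrow\; i^{*}\approx 4\text{--}5\%
```

On those numbers, a higher target roughly doubles the room to cut before the floor on rates binds.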

Another – and a clincher for me at the time – was the fact that central banks were having a lot of trouble hitting the old target of 2%.  Raising the target was just setting up the central bank for failure.

But here comes the relevance of the opportunistic approach.  In the UK, the unfortunate decision by UK voting turkeys to vote for their economic Christmas caused Sterling to fall, and has led to a protracted period of inflation greater than 3%.  In some ways this is the perfect time to raise the target.  There is a good chance of achieving it, with the right mix of monetary and fiscal policy, and a good chance of a promise to hit a higher target being believed, with inflation itself already high.

Some would cry foul, and assume that the authorities were never serious about the target, and complain that the higher target would be a step along a slippery slope.  But despite those risks it would be worth it to regain monetary potency in time for the next recession.

If you think QE is a perfect substitute for conventional monetary policy, or you are happy with responsibility for hitting the inflation target passing de facto back to the Treasury, which wields the remaining fiscal instruments, or you are ok with reforming monetary institutions to allow for very negative interest rates, then you won’t see raising the target as necessary or desirable.  But if you are not wholly in any of those camps, you should.

The implied position of the Government and in particular Mr Hammond, given the recent renewal of the inflation target remit, is that everything is fine as it is.  With inflation now fortuitously high, time to look at this again.

The Labour Party seem to have been doing some thinking about shaking up the Bank of England.  But it is distressing that with important matters of policy substance that could be addressed, like the level of the target, they chose to focus instead on the case for moving it to Birmingham.


The Superintelligence

Warning: amateur, off-topic blogging coming.  Offered in the spirit of pre-Christmas cheer.

If you haven’t watched or read Nick Bostrom on the ‘Superintelligence’, you are not a self-respecting cultural omnivore.

The ‘superintelligence’ is a hypothetical extreme risk to humanity posed by artificial intelligence [AI].  The scenario is that computer capabilities increase to the point where machines become as good as, or slightly better than, humans at general purpose thinking, including applying themselves to the task of designing improvements to themselves.

At that point capabilities head rapidly towards an intelligence ‘explosion’, as each new modification designs another one.  The superintelligent entity has capabilities far exceeding any individual human, and even the whole of humanity, and, unless it can be harnessed to our needs, may either deliberately or inadvertently annihilate us.  This is a formalisation of a pretty familiar anxiety that has permeated science fiction for ages, through films like the Terminator franchise, Transcendence, Wall-E, and 2001: A Space Odyssey [“I’m sorry Dave, I’m afraid I can’t do that.”]

Benedict Evans’ newsletter included a link to a blog by Francois Chollet on the ‘Impossibility of the Superintelligence’.  I think it goes wrong for a few reasons.

Chollet writes:

“there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.”

The no free lunch theorem is a red herring.  The Superintelligence worriers are concerned about the emergence of a capability that is designed to do as well as it needs to across the range of possible challenges facing it.  Confronted with this objection from Chollet they would probably argue that the superintelligence would design itself to be in charge of multiple specialised units optimized for each individual problem it faces.  Or that it would hone simple algorithms to work on multiple problems.
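
For reference, the ‘no free lunch’ result Chollet leans on is – in the Wolpert–Macready formulation, as I recall it – a statement about performance averaged over *all* possible objective functions:

```latex
% No-free-lunch theorem for search/optimisation (Wolpert & Macready, 1997), paraphrased:
% for any two algorithms a_1, a_2 and any number of evaluations m,
\sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_{1}\right) \;=\; \sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_{2}\right)
% where the sum runs over every possible objective function f, and d^{y}_{m} is the
% sequence of objective values the algorithm has observed after m evaluations.
```

The sum over every conceivable f is what makes it a red herring here:  the hypothetical capability only has to cope with the challenges it actually faces, not with all mathematically possible ones.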

The final sentence – ‘the intelligence of a human is specialized in the problem of being human’ – should not be any comfort.  There are bad humans, thwarted only by their own slowness, forgetfulness and lack of access to resources.  The malign superintelligence under consideration is just like one of those, minus the constraints.

Chollet next argues that a superintelligence would not be possible because….  our own intelligence arises out of a slow process of learning.  He writes, for example:

“Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.”

So what, the Superintelligence worriers retort.   The first general intelligence unit has the internet.  And subsequent units can get to work training themselves at super-fast speed.  Next.

Chollet then argues by analogy, citing evidence that super-high-IQ humans are usually not exceptionally capable.

“In Terman’s landmark “Genetic Studies of Genius”, he notes that most of his exceptionally gifted subjects would pursue occupations “as humble as those of policeman, seaman, typist and filing clerk”. There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news.”

Then he notes the reverse:  that many of the most capable humans have had only moderate IQs:

“Hitler was a high-school dropout, who failed to get into the Vienna Academy of Art — twice….   ……many of the most impactful scientists tend to have had IQs in the 120s or 130s — Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists.”

I don’t find it comforting – with respect to the likelihood of a super AI taking over – that great achievements required only medium IQs.  It may be that the non-IQ facets of high achieving humans are not reproducible in machines, but merely stating that those facets exist does not bear on whether this is possible or not.  Maybe the AI would get one of its copies to track down life stories of failed geniuses or successful dullards to maximise its own chance of success.

The next argument is that our capabilities are not limited by our IQ but by the environment:

“All evidence points to the fact that our current environment, much like past environments over the previous 200,000 years of human history and prehistory, does not allow high-intelligence individuals to fully develop and utilize their cognitive potential.”

The idea that the environment inhibits the optimization of intelligence sounds right.  Example:  today’s machine learning algorithms can be degraded by depriving them of data.

But:  1) the intermediate machines that precede a superintelligence are going to have a *lot* of data, including the data generated by their own existence and, eventually, the entirety of human and AI-generated knowledge;  2) we can see how actual individual lifetimes have limited individual human brains, but not how the sum total of all knowledge would limit successively improved AIs.  We don’t know enough to jump from such limits in the past to state that a Superintelligence is an ‘impossibility’.

Chollet next argues:

“our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programing.”

This is not an argument against a Superintelligence:  AIs will have access to all these things too.  They will be able to program.  They will have computing power.  They will be able to connect to the internet and search on Google.  They will have access to all books written, the outputs of past people.  And they will have access to other people, and other people’s online outputs.

Chollet tries to allay our fears about a superintelligence with this:

“It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence.”

This is true of the first AI that equals or surpasses an individual human.  It will have been the output of a huge amount of prior human history and knowledge, and will stand on the shoulders of many giants.  But this doesn’t make a sound prediction about what happens in the future.  Once the AI gets to work, unless something restricts it, its new thinking – or the thinking of its many copies and simulations – will constitute a new, artificial and highly purposed civilization or ‘cognitive prosthetic’.

“Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself. What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”

This is not comforting, for two reasons.  First, the new just-better-than-us AIs can be reproduced, and can work together to improve themselves:  it would be a mistake to presume that they will be as limited as past individual humans.  Second, the final sentence [“What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”] takes us no further than ‘it hasn’t happened before, so it won’t happen in the future’.  The former is true, but the latter does not follow from it.  Nor are the Superintelligence worriers all ‘answering yes’:  they are stating a hypothetical risk, and urging that we think carefully now, while we have the time and opportunity, through collective action, to make a Superintelligence next to impossible.

The same comforting extrapolation from the past is deployed again by Chollet:

“Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science …  Yet, modern scientific progress is measurably linear. …. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. And this is despite us investing exponential efforts into science — the headcount of researchers doubles roughly once every 15 to 20 years, and these researchers are using exponentially faster computers to improve their productivity.”

Maybe this would characterise the recursive self-improvement of computers using copies of themselves to develop improved versions of themselves;  but maybe it would not!  How about we still devote some thinking time to planning in advance for the case where it does not.

Chollet cites two bottlenecks in human-conducted science today that are supposed to dog AI self-improvement in the future:

“Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications….. As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.”

Yet if we are prepared to contemplate that the bottlenecks of human science would not prevent the construction of an AI equivalent to or better than a human, these subsequent problems are not as relevant.  The AI copies itself and devises its own strategies for cooperating with its sub-units.

For me, Chollet fails to substantiate his claim that a Superintelligence is an ‘impossibility’.  What probability it has I have no idea.  Nick Bostrom seems to be convinced that it is a certainty:  a matter of when, not if.  Perhaps the truth lies between these two authors.  It would be nice if there were relatively cheap and reliable ways of heading off the risk, so that even if the probability were low, we could justify putting resources aside to pursue them.  But reading Bostrom I was convinced that this wasn’t likely.  The most compelling scenario for the emergence of an uncontrolled self-improving Superintelligence is via state actors competing for military advantage, or non-state companies competing in secret for overwhelming commercial advantage.  Policies to head off a Superintelligence would have to be agreed cooperatively, something that seems beyond a hostile multi-polar world.


Death and austerity

Simon Wren-Lewis looks at a recent research paper in the BMJ conjecturing that the Coalition ‘austerity’ programme led to a flattening off of the previous downward trend in mortality (and upward trend in life expectancy), and thus induced deaths that would counterfactually have been avoided with more spending on local authority social care.

This seems a highly plausible thesis.  Large, high-frequency changes in life expectancy around prevailing trends – in the absence of major disease outbreaks – ought not to be expected.  Reduced public spending is unlikely to be substituted for quickly, if at all, by private spending for those at the bottom of the income and wealth distribution.  Those already at the end of their working lives are unlikely to be able to change their plans and work longer to make up for a withdrawal of provision by the government.  Even within stable life expectancy figures, we know that there are large inequalities, and that these are associated with income.  A sudden change in policy reducing the ‘social income’ of the poor – ie their access to health-preserving social care – could push individuals down a health gradient that is already laid bare by the past experience of the population, even in more stable periods of government funding.

Simon is rightly perplexed, in my opinion, that the BBC chose not to cover and debate the report.  The reason given was that the analysis was considered ‘highly speculative’.

[Screenshot of the BBC’s response omitted.]

The decision is odd, in retrospect, because the BBC website had already run a story on the stalling of life-expectancy improvements, combining charts of the data with comments from Professor Sir Michael Marmot of University College London.  This coincided with the release of the ‘Marmot Indicators 2017’, a collection of charts on life expectancy, inequality, and development.  In that story, the BBC reports the fall in life expectancy, and Marmot making the conjectural link with spending on social care.  The text of the BBC article reads:

“[Marmot] said it was ‘entirely possible’ austerity was to blame and said the issue needed looking at urgently”, the story argues, making no bones about quoting a speculation, even if it was a reasonable one.

Later on, when researchers make a decent econometric fist of testing the hypothesis, the BBC decide to back off.  It seems that a paper trying to put the causal connection between life expectancy and austerity on firmer foundations, rather than just speculating about one, was a step too far.

To provide some context, recall the decision to cover analysis by Economists for Free Trade – analysis that would be rejected by an overwhelming majority of economists not just as ‘speculative’ but as plain wrong – and the choice not to cover the BMJ paper looks like a regrettable one.


More bits on Bitcoin

Jean Tirole opines in the FT about the social costs of crypto-currencies, prompted no doubt by the continued surge in the relative price of Bitcoin, depicted below [y axis in £].

[Chart of the Bitcoin price in £ omitted.]

Tirole writes:

Bitcoin’s social value is rather elusive to me. Consider seigniorage: an expansion in the money supply traditionally provides the government with extra resources. As it should, the proceeds of issuance should go to the community. In the case of bitcoin, the first minted coins went into private hands. Newly minted coins create the equivalent of a wasteful arm’s race. “Mining pools” compete to obtain bitcoins by investing in computing power and spending on electricity. There goes the seigniorage.

Bitcoin – and to emphasise, we are not just talking about Bitcoin, as there are now 100s of competitors – does have a questionable social value.  But that it displaces seigniorage is not high on the list of its downsides.   Governments down the ages have very often abused the power to raise finance by issuing money, and devising monetary and fiscal institutions that can prevent this has been a priority, and a qualified success.

Tirole continues:

“Bitcoin may be a libertarian dream, but it is a real headache for anyone who views public policy as a necessary complement to market economies. It is still too often used for tax evasion or money laundering. And how would central banks run countercyclical policies in a world of private cryptocurrencies?”

These are fair points.  However:  cash, one of the assets Bitcoin and similar seems to compete with, is also used for illicit and illegal activity.  [Sufficiently so in India that it prompted the government to withdraw and redenominate 85% of the note issue].  That any technology can be used in a way that is detrimental to the public good is not enough for us to think of eliminating or curtailing it.  The question is whether the costs outweigh the benefits.

The final remark in the paragraph above, about the impossibility of running countercyclical policy with current crypto-currency protocols, also merits comment.

Would central banks lose control over monetary policy if something like Bitcoin took over? Roger Farmer tweeted the same thought:

[Screenshot of Roger Farmer’s tweet omitted.]

In principle, central banks can retain control over the economy so long as they retain the ability to define the unit of account.

Imagine an economy that was just textile manufacturing.  Central banks could adjust the definition of a metre to control the business cycle.  Lengthening the definition of a metre would lower the real price of textiles [you would get more cloth for the posted price of a ‘metre’] and, provided prices were sticky and posted as currency units per metre, boost demand.

The notion of having a different unit of account from the medium of exchange surfaced in the context of solutions to the problem of the bound imposed on central bank rates by the property that cash returns zero interest.  Buiter and Kimball are associated with the idea that the central bank might manage the unit of account so that the medium of exchange [cash] depreciates in value against it, thus yielding a negative interest rate, and permitting negative rates to emerge on market instruments.
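
A stylised sketch of that mechanism, with invented numbers (the 3% crawl rate is made up; this is the idea as I understand it, not the parameters of any actual proposal):

```python
# Stylised Buiter/Kimball-type scheme: the unit of account is electronic central
# bank money; paper cash trades against it at an exchange rate the central bank
# crawls over time. All numbers are invented for illustration.

crawl_rate = -0.03   # cash depreciates 3% per year against the unit of account
years = 1

cash_value_start = 1.00                      # 1 unit of cash = 1 unit of account today
cash_value_end = cash_value_start * (1 + crawl_rate) ** years

effective_cash_rate = cash_value_end / cash_value_start - 1
print(f"Effective nominal return on holding cash: {effective_cash_rate:.1%}")  # -3.0%

# With cash yielding -3% in terms of the unit of account, deposit and market
# rates can be set below zero (down to roughly -3%) without triggering a flight
# into paper currency.
```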

So even if everyone shifted to Bitcoin, a central bank might still have its monetary policy lever.  Personally I think Roger/Jean’s concerns about a take-over are real.

Collective, private decisions to ditch the official medium of exchange have typically involved ditching the unit of account too.

In history this was because monetary policy wrecked both the unit of account function and the store of value/medium of exchange function of money.  We are contemplating here a world in which the unit of account function has not been wrecked by central banks.  So it’s conceivable that only the medium of exchange would shift.  But the (recent) historical precedent that the two tend to go together, and even the conceptual difficulty of disentangling them, make a Bitcoin takeover that disempowers central banks at least as probable as that.  (I say recent because if we go back to medieval times, in continental Europe say, it was pretty common to observe units of account different from the multiple media of exchange circulating.)

The aspect of crypto-currencies that concerns people – the protocol of essentially fixed supply – may be precisely what will limit their spread and preserve central bank leverage over the economy.

As David Andolfatto [or David Blockchain as he prefers now to be known] and others have pointed out, the fixed supply means that fluctuations in Bitcoin money demand are not accommodated and are felt in the price, making the price inherently more volatile [than that of central bank money].
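
A crude way to see the point – an exchange-equation style illustration with made-up numbers, not a model of Bitcoin:

```python
# Crude quantity-theory style illustration: with a fixed money supply, a shift in
# money demand shows up entirely in the price / exchange value of the money.
# All numbers are made up.

def price_level(money_supply, money_demand_real):
    """Price level when the money market clears: M / P = L  =>  P = M / L."""
    return money_supply / money_demand_real

M_fixed = 21_000_000                         # Bitcoin-like hard cap on supply
for L in (1_000_000, 1_500_000, 500_000):    # swings in real money demand
    print(f"real demand {L:>9,} -> price level {price_level(M_fixed, L):.2f}")

# A central bank targeting a stable price level would instead vary M to
# accommodate the swings in L, leaving P (and the currency's value) steady.
```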

The desirability of a currency’s protocol often – though not always – dictates the extent to which it is used in the future.  Repeating an example already cited here, those countries that dollarised were the ones with the worst local protocols [for managing their own currency].

Moreover, just as the protocols that govern central bank note issue have changed, mostly for the better [the recent Indian demonetization and Venezuela being two recent exceptions], so the protocols governing crypto-currencies might evolve for the better too.   One reading of monetary history – not too Panglossian a one – is of a slow process of discovering what works for the common good.  Anecdotally, I know from interacting with some of them that cryptocurrency developers understand the fixed-supply ‘problem’, and it is not beyond the bounds of possibility that a better algorithmic protocol, or even one run by human committees, emerges.

Which leads us to remember that central banks – nothing but committee-driven money protocols, defined by inflation targets, interest rate setting procedures, and so on – could step in and provide their own digital currency.  Indeed, the Fed, the BoE, the Norges Bank and perhaps others have openly contemplated this idea.  [Detail:  they already do provide digital currency to financial intermediary counterparties.  The question is whether they provide it to all of us.]


Brexit impact studies: the clothes of the emperor

The Government is struggling to avoid releasing the complete set of 58 ‘Brexit impact studies’.

David Davis mentioned that these existed ‘in excruciating detail’.  It was later claimed, in a manner that the government must have realised would be taken to be an attempt to avoid publication, that the studies did not in fact exist in the form in which they were requested.  Subsequently, we are to understand, the studies were compiled in the form in which they were requested, but with redactions.

In part this is an exercise in principle, rather than a substantive one in exposing the impact of different kinds of Brexit.  The principle being:  If the government has figured out something of import that affects us, and how we judge their actions, then we have a right to know it, unless by knowing it we somehow harm our collective selves.

But those impacts are, in aggregate, known in so far as they could ever be known [nothing is certain of course, in conditional forecasting:  I don’t know how much weight I will lose if I give up pizza].

We have estimates from NIESR, IMF, OECD, Oxford Economics, Economists for Brexit, the LSE’s CEP, the Treasury, and others.  And we know what we need to know about these different groups’ competencies to decide what weight to put on them.

If that were all we needed to know, there would be nothing in the DExEU studies to find out.

However, this is not all that will be in those studies.  There probably is not another study that looks at the impact of Brexit at such a level of disaggregation – at how it hits each industry and sub-industry.  If all we wanted to do with that disaggregation was add it up, we would not be much further on than we were, except perhaps in having corroboration and refinement of the estimates already out there.

But these studies will highlight those hardest hit;  and also those with the most to gain, bucking the aggregate loss.

Political economy is full of stories of policies thwarted or imposed because the costs or benefits are felt highly unevenly, prompting those who have most to lose or gain to organise and get the best outcome for themselves.

Those hardest hit can use their number as leverage and as a rallying cry.  Those who have lobbied for Brexit and can be tallied in the ‘most to gain at the nation’s expense’ column can have their advice discredited.

I am sure it is partly for this reason that the government doesn’t want to release those impacts.

Even this aspect of the calculus – the distribution of Brexit impacts – is not impossible to guess in advance.  And it is ultimately knowable with, probably, as much accuracy as the analysis done by civil servants, if others outside the government were tasked with coming up with their own figures.  Commercially sensitive material is said to have been redacted from the impact studies.  But I am sceptical that it would be of any great use for the economics.

But part of the motivation to force the government to publish, and part of the reason why it is resisting, relates to the disparaging of experts’ views on Brexit.

Although outsiders had told us all that Brexit was going to cost us, the government – except the Remainer Treasury, easily tarred as partisan – had not had to put its name to any numbers, and had thus far eased through with optimistic platitudes.  However, the real business of Brexit and government depends on numbers.  Impacts on government expenditure and revenue via the multifarious automatic stabilisers;  priorities for industrial policy and logistical government intervention:  all these require lots of numbers and government experts to produce them.

Once this material is out in the public domain, we can observe the government’s own experts confirming what we already know, roughly speaking.  And this isn’t just for show.

The final shape of Brexit is still up for grabs;  as is how far the government can push the strategy of contemplating no deal in preference to a status quo transition.  Demonstrating to moderate, pro-Brexit forces how bad different Brexits will be, where the pain will be felt hardest, and the risks run by no-deal brinkmanship could weaken resolve, make a smoother transition more likely, and moderate the deal struck at the endpoint. ‘We didn’t vote for this!’  ‘You knew this for months and had to be forced to tell us?!’


The budget, the OBR, and futurology

The OBR has followed the Bank of England and downgraded its medium/long-term forecast of the growth in the productive potential of the economy.  It now thinks only 1.6% is likely.  This comes after 10 years in which zero productivity growth has disappointed forecasts that forever projected a resumption of the old 2.5% growth rate – forecasts that at the time even had a note of pessimism to them, since early in the crisis it was reasonable to speculate that the productivity level extrapolated out from the old trend [let alone the growth rate] might one day be recovered.
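
A back-of-envelope calculation of the gap that has built up, using the 2.5% and ten-years-of-roughly-zero figures quoted above (the base year and index level are arbitrary):

```python
# Back-of-envelope: level of productivity implied by the old 2.5% trend versus
# ten years of roughly zero growth, both starting from an index of 100.
# Purely illustrative arithmetic on the figures quoted in the post.

old_trend_growth = 0.025
years = 10

level_on_old_trend = 100 * (1 + old_trend_growth) ** years   # ~128
level_actual = 100 * (1 + 0.0) ** years                      # 100

gap = level_on_old_trend / level_actual - 1
print(f"Old-trend level after {years} years: {level_on_old_trend:.0f}")
print(f"Shortfall relative to the old trend: {gap:.0%}")      # ~28%
```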

But this is not itself really news.  It is merely the revelation that one group of experts – who incidentally have no special insight into future productivity – has fallen into line with another.  The depressing productivity data were there before budget day.  We now know what the OBR made of them.

Another reaction that might be tempting from the Remainer tribe is:  ‘You see!  Brexit!  I told you so.’  See, for example, these tweets:

[Screenshots of tweets by Alastair Campbell (@campbellclaret) and Polly Toynbee (@pollytoynbee) omitted.]

But the forecast gloom was not really about Brexit.  The OBR’s economic and fiscal outlook explains its Brexit assumptions on page 96:

[Screenshot of the OBR’s November 2017 Economic and fiscal outlook, page 96, omitted.]

This is – unless I am mistaken – a smooth transition to a Brexit that affects nothing.  On neither count is that a Brexit likely to happen.  We know from previous work [for example by the CEP / OECD / IMF / BoE] that anything but continued membership of the single market and customs union, transited to smoothly, will have significant negative consequences for GDP/head over the next 10 years.  So if we layered on top of this forecast a probability-weighted sum of possible Brexits, the outlook would be substantially worse:  perhaps by something of the order of 0.5pp on growth each year until the transition to our new, dislocated trading state is complete, and for longer if the worse surmises about how openness affects growth prove correct.
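
To see how a number like 0.5pp could arise, here is a purely illustrative probability-weighted calculation.  The scenario probabilities and per-scenario drags on annual growth are invented for the arithmetic; they are not the OBR’s, nor anyone else’s, estimates:

```python
# Illustrative expected growth drag from layering Brexit scenarios onto a
# forecast that assumes a smooth, trade-neutral exit. Probabilities and
# per-scenario annual growth drags (in percentage points) are invented.

scenarios = {
    "smooth, single-market-like outcome": (0.25, 0.0),
    "standard free trade agreement":      (0.50, 0.5),
    "no deal / WTO terms":                (0.25, 1.0),
}

expected_drag = sum(prob * drag for prob, drag in scenarios.values())
print(f"Probability-weighted drag on annual growth: {expected_drag:.2f}pp")  # 0.50pp
```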

Brexit gloom is not entirely absent.  The Brexit story is in the depreciation of Sterling following the referendum;  the subsequent predictable rise in inflation to 3% and the protracted downward adjustment of real wages this means;  and the slowing of growth relative to our trading partners.  And this impulse will have made itself felt somewhat in the early part of the forecast.

Another thing to take away from the day is that the institution of the OBR is doing its job.  At least, it is producing a forecast that seems plausible and unaffected by the tendency for Brexiters to taint comments about the future with unwarranted optimism.  And there have been no personal attacks – unlike in the immediate aftermath of some of the Bank of England’s interventions.  Recall, for example, these tweets:

[Screenshot of the tweets omitted.]

Prior to the OBR, there was ample scope for fiddling the uncertain science of estimating potential output, and of forecasting its expansion into the future, to make fiscal policy look more prudent than it really was.  This time around, it is easy to imagine a world without the OBR in which Brexit Jacobins put pressure on Philip Hammond to forecast that the future will be rosy, rather than simply to make relatively anodyne comments about Brexit presenting ‘opportunities’.  One wonders what fiscal policy would have been like in that world;  contrasting it with the one we inhabit would provide a measure of the added value of the OBR.

For this to work the independence of the OBR has to be credible.

But not just that.  We have to be convinced that they are not overstepping their remit.

One finance chat I am part of on WhatsApp [every self-respecting economist has to WhatsApp-drop these days] included a caricature that it’s the OBR that actually sets fiscal policy.  One can see the spirit in which this was meant.  Imagine the government had finally figured out a scheme for setting fiscal policy [as it is often urged to do in these pages and those of other commentators].  In that case the OBR would come along with new forecasts, and the Treasury would simply crank the handle and set fiscal instruments appropriately.

Lurking there is the danger that the OBR might taint the forecasts itself – or might be thought to – in order to bring about a particular fiscal policy when their consequences are mechanically followed through.  Against that, we can observe that in practice there are watchdogs of the fiscal watchdog:  the Institute for Fiscal Studies, the Bank of England, and other external forecasters.

Another reaction to the forecast gloom was from Iain Martin.  ‘Futurology is futile’ he tweeted above his Times piece.  Unfortunately, futurology is essential.  Why?  Because there are long lags between deciding to do something with fiscal policy [or monetary policy, for example] and those decisions having their full effect on the things that we care about [like growth, debt, inflation, cost of finance].

In the case of fiscal policy it can take quite some time between making a decision to spend more and having any effect on anything whatsoever [viz the myth of ‘shovel ready projects’].  So, in order to make sure that your fiscal instrument settings are doing the right things to what you care about you have to line up candidate alternative fiscal plans against what you forecast will happen to what you care about.

The OBR forecasts will no doubt prove to be wrong ex post.  All forecasts are.  But that won’t invalidate them as forecasts now.  [Tired example:  when I roll a six-sided die 10 times and get a total score of 60, my forecast of 35 is not proved wrong.]  If you are a technological pessimist, you might plausibly be tempted to extrapolate flat productivity from the last 10 years very far out into the future, and get forecasts that are much more gloomy than the OBR’s.  If you are an optimist and think that the data are missing digital miracles, or that a stimulative fiscal policy could unleash a return to the old trend line, or at least its old slope, you would have a much more optimistic perspective.  The OBR’s forecasts are a finger in the air, but a reasonable one at that, and necessary.
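
The tired example, made concrete (a quick simulation, nothing more):

```python
# The forecast of 35 is the expected total of ten rolls of a fair six-sided die
# (10 * 3.5). Any single realisation -- even an extreme 60 -- does not make 35
# a bad forecast ex ante.

import random

def total_of_ten_rolls():
    return sum(random.randint(1, 6) for _ in range(10))

expected = 10 * 3.5
simulated_mean = sum(total_of_ten_rolls() for _ in range(100_000)) / 100_000

print(f"Expected total: {expected}")
print(f"Simulated average over 100,000 trials: {simulated_mean:.1f}")  # ~35
```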
