The death of the Bitcoin coder

One of the few ideas in literary theory that made its way across the chasm that separates me from it is the notion of the ‘death of the author’.  By analogy with the original idea, with no more than a cursory link to the Wikipedia entry and disregarding Barthes and subsequent scholarship as irrelevant, I will take this to refer to the observation that whatever the intentions of the author in writing a text, the effective meaning and substance of it is in how it presents itself to the mass of readers.

We could say the same about Bitcoin and other cryptocurrencies.  ‘What Bitcoin is for’ is a frequent topic of discussion.  This is a difficult question to address conclusively.

We could ask ourselves what Satoshi Nakamoto thought Bitcoin was for, and consult as evidence his white paper on Bitcoin.  But once Bitcoin code was written and operationalised, and the protocols evolved, what he thought it was for is less relevant.  Nakamoto cannot control what Bitcoin is for now, any more than the workers at Los Alamos could dictate what nuclear weapons were ‘for’ once the recipe was known more widely.

From the perspective of miners and many holders, it may be nothing more than an opportunity to make money out of those willing to part with something of real worth.

For some users it is no doubt an instrument to facilitate crime.  For other participants Bitcoin may be a political or intellectual hobby taking a kind of material form.  The subject in my experience tends to draw to it futurologists, techno-optimists and anti-state libertarians like moths to a lamp.  What Bitcoin is for is in that sense subjective, and will differ from person to person and over time.

By the same token, [excuse the pun], the fact that Bitcoin has not fulfilled many of the purposes projected onto it by its original enthusiasts does not negate the fact that those intentions were sincerely held, and that they remain latent possibilities.

Bitcoin may yet turn into a currency, or tame the characteristics of existing currencies;  its distributed ledger technology may yet disintermediate some of the things its fans hope it will.  The tens of millions being spent by existing large financial intermediaries – an irony probably not foreseen by Nakamoto or the originalists – may yet unearth something that Bitcoin indeed will be for.

Posted in Uncategorized | Leave a comment

Bitcoin and the underpinning of illicit fundamentals

One approach to pricing Bitcoin [and similar] has been to try to ask oneself what the fundamental value might be in terms of its enduring use and appeal to the community wishing to store value or make payments outside the reach of regulatory and tax authorities.

John Cochrane’s recent post has an element of this, attempting to distinguish between “speculative” and “fundamental” values for Bitcoin.

Bitcoin’s current high cost of individual payments makes it unappealing for small and legal payments.  For small payments, the fee is a large fraction of – or a multiple of – the payment itself, and so highly inefficient.  For legal payments, there are cheaper alternatives and, as yet, no obvious benefit to using Bitcoin.  However, for those seeking to keep their wealth hidden, and to move it around without the knowledge of the authorities, or without being taxed, Bitcoin may still be useful.

However, there are no laws enforcing the use of crypto-currency in the illicit communities imagined here.  No law of legal tender.  The value of a medium of exchange in that community depends just as much on trust as it does for conventional monies in the overground communities.  That is, whether I accept Bitcoin for the drugs or guns that I sell illegally is going to depend on what I think others will think about taking Bitcoin from me when I try to turn the proceeds of my arms and drugs dealing into something I need myself.  [Readers will note that I am talking hypothetically at all points here].  And that, as always, will involve me thinking about what a putative future holder of Bitcoin will imagine that the next holder might think of it.  And so on.
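The regress of beliefs described above can be sketched as a toy best-response iteration.  This is entirely my own construction, with an invented acceptance threshold: each trader accepts Bitcoin only if they believe the next holder is likely enough to accept it.

```python
# Toy acceptance game (my construction, not from the post): each trader
# accepts Bitcoin only if they believe the *next* trader is likely enough
# to accept it too.  Iterating that best response reveals two
# self-fulfilling outcomes: universal acceptance or universal rejection.

THRESHOLD = 0.5   # accept if believed future acceptance exceeds this (assumed)

def best_response(believed_acceptance: float) -> float:
    """Share of traders accepting, given beliefs about future acceptance."""
    return 1.0 if believed_acceptance > THRESHOLD else 0.0

def iterate_beliefs(initial_belief: float, rounds: int = 20) -> float:
    """Follow the chain of 'what will the next holder think?' judgements."""
    belief = initial_belief
    for _ in range(rounds):
        belief = best_response(belief)
    return belief

# Beliefs just above vs just below the threshold tip the whole community.
print(iterate_beliefs(0.51))  # -> 1.0 (everyone accepts)
print(iterate_beliefs(0.49))  # -> 0.0 (nobody accepts)
```

The point of the sketch is that both outcomes are self-fulfilling:  nothing ‘fundamental’ about the asset picks between them, only beliefs about beliefs.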

This point – that there is no real fundamental value for the currency – is all the more important because the illicit and legal communities are intertwined.  For many of the illicit activities, of course, the whole point is to extract resources from the legal community and they are entirely parasitical in that respect.  For others, we can think of illicits trading contraband for legal goods.  And of course for that to happen, the reverse goes on:  those working in the legal economy trade the legally gotten gains of their labour for something under the counter.  Therefore, amongst the potential future holders’ beliefs that illicit agents have to factor in when they are assessing what the future value of Bitcoin might be are those of the legal agents [agents who earn legally].

Just as the quantity equation for conventional money does not provide useful guidance about its price when real central bank money demand is fluctuating a lot, so here the size of the illicit economy is not sufficient to say much about the floor to Bitcoin’s price.  The complex judgements about how others think others think others think… the exchangeability of Bitcoin will proceed have to be made by Bitcoin users, and changes in these judgements will cause gyrations in its value.  There may be a fundamental demand for illicit goods, but that need not generate much restraint on the price of Bitcoin either way.
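The quantity-equation point can be illustrated numerically.  The identity MV = PY is standard; the figures below are invented solely to show that when velocity (the inverse of money demand) swings around, a given money stock pins down almost nothing about the price level.

```python
# The quantity equation MV = PY, rearranged as P = M*V/Y (a standard
# identity; the numbers below are illustrative).  With stable velocity,
# the money stock pins down the price level; with volatile velocity,
# the same money stock is consistent with very different price levels.

def price_level(money: float, velocity: float, real_output: float) -> float:
    return money * velocity / real_output

M, Y = 1000.0, 500.0
stable = [price_level(M, v, Y) for v in (2.0, 2.1, 1.9)]
volatile = [price_level(M, v, Y) for v in (0.5, 2.0, 8.0)]
print(stable)    # narrow range around 4
print(volatile)  # anywhere from 1 to 16
```

The analogy in the post is that the size of the illicit economy plays the role of M:  knowing it tells you little about Bitcoin’s price while beliefs about exchangeability gyrate.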

Misbehaviour by governments and central banks has in the past prompted rapid coordination by private agents on the rejection of the local currency in favour of the dollar, or commodities.  Although Bitcoin itself cannot be over-issued in the same way, it is easy to think of other events – further forks, sudden regulatory interventions, exchange fraud – that could lead the illicit community to coordinate on rejecting Bitcoin [and similar] too.

A related phenomenon is the history of technological standards that emerged without necessarily being optimal.  The econ-101 example is the Sony Betamax video recording format, which lost out to VHS despite being, by many accounts, technologically superior.  The relevance is that the ‘fundamental’ characteristics of the thing mattered less than whether others used it and were expected to use it in the future.
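A minimal payoff sketch of that standards story, with invented numbers and a made-up network weight:  utility from a standard is intrinsic quality plus a benefit that scales with the share of others using it.

```python
# Minimal network-effects payoff (all numbers illustrative, mine):
# utility = intrinsic quality + network weight * share of other users.
# A technically better standard loses once the rival's installed base
# is large enough -- the 'fundamental' quality matters less than
# expectations about what others will use.

NETWORK_WEIGHT = 10.0   # assumed strength of the network effect

def utility(quality: float, user_share: float) -> float:
    return quality + NETWORK_WEIGHT * user_share

betamax = utility(quality=6.0, user_share=0.2)  # better tech, few users
vhs     = utility(quality=5.0, user_share=0.8)  # worse tech, many users
print(betamax, vhs)  # 8.0 13.0 -> VHS wins despite lower quality
```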

I think it’s a mistake, therefore, to think of Bitcoin’s value as underpinned by a reliable long-term demand from badly behaved or private people;  that community may be as fickle in coordinating on a currency as ours, and as responsive to what they think we will do as we are to them.



Moving the Bank of England to Birmingham won’t help monetary or financial policy

A report in the FT today reveals that Labour commissioned two consulting firms – GFC and Clearpoint – whose report is said to conclude that the BoE’s London location “leads to the regions being underweighted in policy decisions.”  The recommendation is to move the Bank of England to Birmingham.

This is disheartening.  Two reasons.  First, there are actual things that could be fixed in the Bank of England.  Like the relationship between monetary and fiscal policy at the zero bound;  transparency in the models and forecasts;  the lack of an action plan for unconventional policies in the event of another crisis;  the vagueness of the financial stability mandate;  the preponderance of internal members on the Monetary Policy Committee;  the refusal of the MPC to present clear forecasts of what they intend to do;  the lack of clarity about how they trade-off real and nominal variables… and much more.  Instead the headline is about a report on something that does not need fixing.

A few points.

How would monetary or financial policy settings ideally have been different?  Tighter or looser?  Why?

The Bank of England’s mandates do not mention the regions explicitly as part of monetary or financial stability targets.

The BoE’s monetary policy remit does mention the regions:

“The Committee’s performance and procedures will be reviewed by the Bank of England’s Court on an ongoing basis (with particular regard to ensuring the Bank is collecting proper regional and sectoral information).”
But this is interpreted by the BoE – and by a reasonable reader – as meaning that, in figuring out the appropriate policy in pursuit of its aggregate policy objectives, it should collect the right information.  Part of the clue is the mention of sectoral, as well as regional, information.  That is not to give the BoE two extra goals – adjudicating on the regional and sectoral distribution of activity and inflation – but to instruct it about dimensions of the distribution of activity that might be important for its aggregate goals.
I think the text is arguable, and bendable in the direction of regionalists and sectoralists.  But this is certainly how the MPC interprets it – as evidenced many times, including at a recent Treasury Committee hearing – and how it should be interpreted.
In pursuit of stability in aggregate inflation and activity [and financial stability], distributions are not irrelevant;  the prevalence of a small group of highly indebted individuals, and of others who were highly exposed to them, was one of the drivers of the financial crisis.  Had the debt or the exposures been shared out more equally, there would not have been the defaults and fire-sales.  Mismatch between where jobs and workers are – in geographical and occupational space – reduces effective labour supply.  Because rich people have lower marginal propensities to consume than the poor, inequality affects aggregate consumption.  But these effects are important for the BoE only because they affect the aggregates that it seeks to control.  And they often come with important aggregate signals – like the spread on risky assets, aggregate consumption, or aggregate unemployment and vacancies.  And, financial effects aside, they are often not particularly large or time-varying.
But where is the evidence that the BoE gives insufficient weight to these considerations?  How did monetary or financial policy suffer?
Even if the BoE were given regional goals, it would not be able to achieve them.
The BoE’s main tools are aggregate tools, not capable of policing a regional distribution even if it wanted to.
Imagine trying to set regional central bank interest rates.  Counterparties would turn up wherever they could borrow at the lowest rate and deposit at the highest, and then lend on to others confronting the less favourable rates set to achieve some regional goal.  Without policing the regional destination of onward lending – in effect instituting separate regional currencies – you could not sustain regionally divergent risk-free interest rates.
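A sketch of the arbitrage being described, with illustrative rates (the 1% and 3% figures, and the regional labels, are mine, not the Bank’s):

```python
# Sketch of the arbitrage that would undo regionally divergent risk-free
# rates (rates are illustrative assumptions).  A trader borrows where the
# central bank rate is low and lends on where it is high, pocketing the
# spread until the differential is forced shut.

def arbitrage_profit(notional: float, borrow_rate: float, lend_rate: float) -> float:
    """One-period profit from borrowing cheap in one region, lending dear in another."""
    return notional * (lend_rate - borrow_rate)

# Suppose a 'Birmingham' policy rate of 1% and a 'London' rate of 3%.
profit = arbitrage_profit(notional=1_000_000, borrow_rate=0.01, lend_rate=0.03)
print(profit)  # about 20,000 per period, risk-free, so the gap cannot persist
```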
Regional macro-pru might be possible, but again it would be hard to police intermediation arbitrage without essentially going as far as instituting regional financial authorities.
Regional purchases of local government bonds would be possible, but most of the time would not make much difference [assuming that they were ultimately reversed, and not regionally differentiated monetary finance].  Regional purchases of private sector bonds would be possible;  but the market is not large, and currently concentrated with London and South East issuers, so such a policy could not make that much difference….  and we could go on.
Regional policy is for the government to sort out;  it has the legitimacy to undertake the necessary deliberate redistribution.  And it has the tools best suited to do it.
The Bank has a regional network of 12 Agencies, which it advertises as conducting 9,000 visits with business contacts per year, and hosting 60 visits with ‘policymakers’.  My feeling is that it spends too much effort, and with insufficient science, collecting its own regional information.  We don’t ask the BoE to collect inflation or GDP data.  Partly because we ask it to specialise in monetary and financial economics, not statistics [ok, so they do collect monetary and bank balance sheet data…].  And partly because it does not look good to collect the data against which you will subsequently be judged.  There is enough trouble with inflation and GDP truthers as it is.  The Agency set-up is an unwieldy mix of a PR/accountability function – the BoE has to be seen to be listening, seeing the real activities of the real people and firms its policies affect, and taking its case to those constituents – and a dubious data-gathering function.
If there is a failure – which I don’t see – how is the accountability system monitoring the Bank giving rise to it?  And why could this not be addressed simply by giving regional concerns more weight in the hearings of MPC and FPC members?  Moving BoE functions to Birmingham would presumably mean replacing a bunch of visits to Birmingham with a bunch of visits to London.  How would policy be improved by that?
Despite all this, I am not particularly against moving the BoE.
As part of an orchestrated move to dismantle the success of London’s economy, and try to recreate it for a part of the country that had so far missed out, there is at least a case to argue.  But we should not kid ourselves that it would help any actual policy that the BoE conducts.
I would propose something different.  Rather than uprooting and dismantling successful institutions and local economies, squishing the London tax surplus in the process in the hope that it reappears somewhere else, preserve and spend that surplus on better transport facilities and universities in neglected areas.
Oh, and rename the Bank of England ‘The Central Bank of the United Kingdom’ and rotate MPC and FPC meetings through towns like Penrith, Bangor, Peterborough, Thurso and similar.

Time for an opportunistic inflation

The title of this post is a play on a paper by Orphanides [ex-Governor of the Central Bank of Cyprus] and Wilcox, ‘The Opportunistic Approach to Disinflation’.  That paper described a policymaker who would not seek to engineer low inflation through deliberate monetary policy, but would wait for the good fortune of disinflationary shocks to do it for them, locking in the lower inflation afterwards.

What is the relevance of this now?

Post the Great Financial Crisis, the UK and other economies are stuck with very low real rates for the foreseeable future.  Other things equal, this means lower central bank rates.  The resting point for rates is perhaps as low as 2-3%.  This means that there is little room to respond to the next crisis.  Recall that interest rates started out at 5.5% in the Summer of 2008.  A way round this is to raise the inflation target, which would, once met, tend to raise the resting point for central bank rates.  There are arguments against:  getting a reputation for moving the monetary goalposts, and the traditional case about the costs of inflation which underpins the inflation target in the first place.
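The arithmetic behind the ‘little room to respond’ point can be set out in a few lines (the neutral-rate and target figures below are illustrative, not forecasts):

```python
# Back-of-envelope for the argument above (all numbers are illustrative
# assumptions): the resting nominal policy rate is roughly the neutral
# real rate plus the inflation target, and the 'room to cut' in a
# recession is that resting rate minus the effective lower bound.

def resting_rate(neutral_real: float, inflation_target: float) -> float:
    return neutral_real + inflation_target

def room_to_cut(neutral_real: float, inflation_target: float,
                lower_bound: float = 0.0) -> float:
    return resting_rate(neutral_real, inflation_target) - lower_bound

print(room_to_cut(neutral_real=0.5, inflation_target=2.0))  # 2.5pp of space
print(room_to_cut(neutral_real=0.5, inflation_target=4.0))  # 4.5pp with a higher target
```

Raising the target is, in this arithmetic, the only term the authorities can choose.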

Another – and a clincher for me at the time – was the fact that central banks were having a lot of trouble hitting the old target of 2%.  Raising the target was just setting up the central bank for failure.

But here comes the relevance of the opportunistic approach.  In the UK, the unfortunate decision by UK voting turkeys to vote for their economic Christmas caused Sterling to fall, and has led to a protracted period of inflation greater than 3%.  In some ways this is the perfect time to raise the target.  There is a good chance of achieving it, with the right mix of monetary and fiscal policy, and a good chance of a promise to hit a higher target being believed, with inflation itself already high.

Some would cry foul, and assume that the authorities were never serious about the target, and complain that the higher target would be a step along a slippery slope.  But despite those risks it would be worth it to regain monetary potency in time for the next recession.

If you think QE is a perfect substitute for conventional monetary policy, or you are happy with the de facto return of managing the inflation target to the Treasury, which wields the remaining fiscal instruments, or you are ok with reforming monetary institutions to allow for very negative interest rates, then you won’t see raising the target as necessary or desirable.  But if you are not wholly in any of those camps, you should.

The implied position of the Government, and in particular Mr Hammond, given the recent renewal of the inflation target remit, is that everything is fine as it is.  With inflation now fortuitously high, it is time to look at this again.

The Labour Party seem to have been doing some thinking about shaking up the Bank of England.  But it is distressing that with important matters of policy substance that could be addressed, like the level of the target, they chose to focus instead on the case for moving it to Birmingham.


The Superintelligence

Warning : amateur, off-topic blogging coming.  Offered in the spirit of pre-Christmas cheer.

If you haven’t watched or read Nick Bostrom on the ‘Superintelligence’, you are not a self-respecting cultural omnivore.

The ‘superintelligence’ is a hypothetical extreme risk to humanity posed by artificial intelligence [AI].  The scenario is that computer capabilities increase to the point where they become as good or slightly better at general purpose thinking, including applying themselves to the task of designing improvements to themselves.

At that point capabilities head rapidly towards an intelligence ‘explosion’, as each new modification designs another.  The superintelligent entity has capabilities far exceeding any individual human, and even the whole of humanity, and, unless it can be harnessed to our needs, may either deliberately or inadvertently annihilate us.  This is a formalisation of a pretty familiar anxiety that has permeated science fiction for ages, through films like the Terminator franchise, Transcendence, Wall-E, or 2001: A Space Odyssey [“I’m sorry Dave, I’m afraid I can’t do that.“]
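The ‘explosion’ scenario rests on a simple recursion, which can be caricatured in a few lines (a toy of mine, not Bostrom’s model):  if each generation of the system improves on itself at a proportional rate, capability compounds.

```python
# Toy recursion behind the 'intelligence explosion' scenario (mine, not
# Bostrom's): each generation designs a successor that is a fixed
# proportion better than itself, so capability compounds geometrically.

def explode(capability: float, gain: float, generations: int) -> float:
    """Capability after repeated self-improvement at proportional rate `gain`."""
    for _ in range(generations):
        capability *= (1 + gain)
    return capability

print(explode(1.0, gain=0.1, generations=10))   # ~2.6x in 10 generations
print(explode(1.0, gain=0.1, generations=100))  # ~13,780x in 100
```

The worriers’ further point is that `gain` itself might rise with capability, making growth faster than this geometric baseline.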

Benedict Evans’ newsletter included a link to a blog by Francois Chollet on the ‘Impossibility of the Superintelligence‘.  I think it goes wrong for a few reasons.

Chollet writes:

“there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.”

The no free lunch theorem is a red herring.  The Superintelligence worriers are concerned about the emergence of a capability that is designed to do as well as it needs to across the range of possible challenges facing it.  Confronted with this objection from Chollet they would probably argue that the superintelligence would design itself to be in charge of multiple specialised units optimized for each individual problem it faces.  Or that it would hone simple algorithms to work on multiple problems.

The final sentence – ‘the intelligence of a human is specialized in the problem of being human’ – should not be any comfort.  There are bad humans, thwarted only by their own slowness, forgetfulness and lack of access to resources.  The malign superintelligence under consideration is just like one of those, only without those constraints.

Chollet next argues that a superintelligence would not be possible because our own intelligence arises only out of a slow process of learning.  He writes, for example:

“Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.”

So what, the Superintelligence worriers retort.   The first general intelligence unit has the internet.  And subsequent units can get to work training themselves at super-fast speed.  Next.

Chollet argues by analogy, citing evidence that super-high-IQ humans are usually not especially capable:

“In Terman’s landmark “Genetic Studies of Genius”, he notes that most of his exceptionally gifted subjects would pursue occupations “as humble as those of policeman, seaman, typist and filing clerk”. There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news.”

Then he notes the reverse:  that many of the most capable humans have had only moderate IQs:

“Hitler was a high-school dropout, who failed to get into the Vienna Academy of Art — twice….   ……many of the most impactful scientists tend to have had IQs in the 120s or 130s — Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists.”

I don’t find it comforting – with respect to the likelihood of a super AI taking over – that great achievements required only medium IQs.  It may be that the non-IQ facets of high achieving humans are not reproducible in machines, but merely stating that those facets exist does not bear on whether this is possible or not.  Maybe the AI would get one of its copies to track down life stories of failed geniuses or successful dullards to maximise its own chance of success.

The next argument is that our capabilities are not limited by our IQ but by the environment:

“All evidence points to the fact that our current environment, much like past environments over the previous 200,000 years of human history and prehistory, does not allow high-intelligence individuals to fully develop and utilize their cognitive potential.”

The idea that the environment inhibits the optimisation of intelligence sounds right.  Example:  today’s machine learning algorithms can be degraded by depriving them of data.

But:  1) the intermediate machines that precede a superintelligence are going to have a *lot* of data, including the data generated by their own existence and, eventually, the entirety of human and AI-generated knowledge;  2) we can see how actual individual lifetimes have limited individual human brains, but not how the sum total of all knowledge would limit successively improved AIs.  We don’t know enough to jump from such limits in the past to the statement that a Superintelligence is an ‘impossibility’.

Chollet next argues:

“our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools your were gifted in school. Books. Other people. Mathematical notation. Programing.”

This is not an argument against a Superintelligence:  AIs will have access to all these things too.  They will be able to program.  They will have computing power.  They will be able to connect to the internet and search on Google.  They will have access to all books written, the outputs of past people.  And they will have access to other people, and other people’s online outputs.

Chollet tries to allay our fears about a superintelligence with this:

“It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence.”

This is true of the first AI that equals or surpasses an individual human. It will have been the output of a huge amount of prior human history and knowledge, and will stand on the shoulders of many giants.  But this doesn’t make a sound prediction about what happens in the future.  Once the AI gets to work, unless something restricts it, its new thinking, or the thinking of its many copies and simulations will constitute a new artificial, and highly purposed civilization or ‘cognitive prosthetic’.

“Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself. What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”

This is not comforting, for two reasons.  First, the new just-better-than-us AIs can be reproduced, and work together to improve themselves:  it would be a mistake to presume that they will be as limited as past individual humans.  Second, the final sentence [“What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”] takes us no further than ‘it hasn’t happened before, so it won’t happen in the future’.  The former is true, but the latter does not follow from it.  I think the Superintelligence worriers are also not all ‘answering yes’.  They are stating a hypothetical risk, and urging that we think carefully now, while we have the time and opportunity, about collective action to make a Superintelligence next to impossible.

The same comforting extrapolation from the past is deployed again by Chollet:

“Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science …  Yet, modern scientific progress is measurably linear. …. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. And this is despite us investing exponential efforts into science — the headcount of researchers doubles roughly once every 15 to 20 years, and these researchers are using exponentially faster computers to improve their productivity.”

Maybe this would characterise the recursive self-improvement of computers using copies of themselves to develop improved versions of themselves;  but maybe it would not.  We should still devote thinking time to planning in advance for the ‘not’.
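Chollet’s linear-progress observation can be restated numerically (the doubling time and the linear output schedule below are stylised, not measured):  if output grows linearly while researcher headcount doubles every 17 or so years, output per researcher must collapse.

```python
# Chollet's observation restated with stylised figures of my own: linear
# scientific output plus an exponentially growing researcher headcount
# implies rapidly falling output *per researcher*.

def researchers(years: float, doubling_time: float = 17.0) -> float:
    """Headcount relative to year zero, doubling every `doubling_time` years."""
    return 2 ** (years / doubling_time)

def output(years: float) -> float:
    """Stylised linear progress, normalised to 1 at year zero."""
    return 1.0 + years / 50.0

for t in (0, 50, 100):
    per_head = output(t) / researchers(t)
    print(t, round(per_head, 3))
# Per-researcher output falls from 1.0 to about 0.05 over the century.
```

Whether that diminishing-returns pattern would also bind a self-copying AI is exactly the open question.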

Chollet cites two bottlenecks in human-conducted science that are supposed also to dog AI self-improvement in the future:

“Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications….. As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.”



Yet if we are prepared to contemplate that human science bottlenecks would not prevent an AI equivalent to or better than a human being constructed, these subsequent problems are not as relevant.  The AI copies itself and devises its own strategies for cooperating with its sub-units.

For me, Chollet fails to substantiate his claim that a Superintelligence is an ‘impossibility’.  What probability it has I have no idea.  Nick Bostrom seems to be convinced that it is a certainty:  a matter of when, not if.  Perhaps the truth lies between these two authors.  It would be nice if there were relatively cheap and reliable ways of heading off the risk, so that even if the probability were low, we could justify putting resources aside to pursue them.  But reading Bostrom I was convinced that this wasn’t likely.  The most compelling scenario for the emergence of an uncontrolled self-improving Superintelligence is via state actors competing for military advantage, or companies competing in secret for overwhelming commercial advantage.  Policies to head off a Superintelligence would have to be agreed cooperatively, something that seems beyond a hostile multi-polar world.


Death and austerity

Simon Wren-Lewis looks at a recent research paper in the BMJ conjecturing that the Coalition ‘austerity’ programme led to a flattening-off of the previous downward trend in mortality (and upward trend in life expectancy), and thus induced deaths that would counterfactually have been avoided with more spending on local authority social care.

This seems a highly plausible thesis.  Large high-frequency changes in life expectancy around prevailing trends – in the absence of major disease outbreaks – ought not to be expected.  Reduced public spending is unlikely to be substituted for quickly, if at all, by private spending among those at the bottom of the income and wealth distribution.  Those already at the end of their working lives are unlikely to be able to change their plans and work longer to make up for a withdrawal of provision by the government.  Even within stable life expectancy figures, we know that there are large inequalities, and that these are associated with income.  A sudden change in policy reducing the ‘social income’ of the poor – ie their access to health-preserving social care – could push individuals down a health distribution whose gradient is already laid bare by the past experience of the population, even in more stable periods of government funding.

Simon is rightly perplexed, in my opinion, that the BBC chose not to cover and debate the report.  The reason given was that the analysis was considered ‘highly speculative’.

The decision is odd, in retrospect, because the BBC website had already run a story on the stalling of life-expectancy improvements, combining charts of the data with comments from Professor Sir Michael Marmot of University College London.  This coincided with the release of the ‘Marmot Indicators 2017‘, a collection of charts on life expectancy, inequality, and development.  In that story, the BBC reported the stalling of life expectancy, and Marmot making the conjectural link with spending on social care.  The text of the BBC article reads:

“[Marmot] said it was “entirely possible” austerity was to blame and said the issue needed looking at urgently” – the story making no bones about quoting a speculation, even if it was a reasonable one.

Later on, when researchers make a decent econometric fist of testing the hypothesis, the BBC decide to back off.  It seems that a paper trying to put the causal connection between life expectancy and austerity on firmer foundations, rather than just speculating about one, was a step too far.

To provide some context, recall the decision to cover analysis by Economists for Free Trade – analysis that would be rejected by an overwhelming majority of economists not just as ‘speculative’ but as plain wrong.  Set against that, this looks like a regrettable decision.




More bits on Bitcoin

Jean Tirole opines in the FT about the social costs of crypto-currencies, prompted no doubt by the continued surge in the relative price of Bitcoin.

Tirole writes:

Bitcoin’s social value is rather elusive to me. Consider seigniorage: an expansion in the money supply traditionally provides the government with extra resources. As it should, the proceeds of issuance should go to the community. In the case of bitcoin, the first minted coins went into private hands. Newly minted coins create the equivalent of a wasteful arm’s race. “Mining pools” compete to obtain bitcoins by investing in computing power and spending on electricity. There goes the seigniorage.

Bitcoin – and to emphasise, we are not just talking about Bitcoin, as there are now hundreds of competitors – does have questionable social value.  But displacing seigniorage is not high on the list of its downsides.  Governments down the ages have very often abused the power to raise finance by issuing money, and devising monetary and fiscal institutions that can prevent this has been a priority, and a qualified success.

Tirole continues:

Bitcoin may be a libertarian dream, but it is a real headache for anyone who views public policy as a necessary complement to market economies. It is still too often used for tax evasion or money laundering. And how would central banks run countercyclical policies in a world of private cryptocurrencies?

These are fair points.  However: cash, one of the assets that Bitcoin and its like compete with, is also used for illicit and illegal activity.  [Sufficiently so in India that it prompted the government to withdraw and redenominate 85% of the note issue.]  That any technology can be used in ways detrimental to the public good is not enough reason to eliminate or curtail it.  The question is whether the costs outweigh the benefits.

The final remark in the paragraph above, about the difficulty of running countercyclical policy under current crypto-currency protocols, also merits comment.

Would central banks lose control over monetary policy if something like Bitcoin took over? Roger Farmer tweeted the same thought:

[Tweet by Roger Farmer]

In principle, central banks can retain control over the economy so long as they retain the ability to define the unit of account.

Imagine an economy that was just textile manufacturing.  The central bank could adjust the definition of a metre to control the business cycle.  Lengthening the definition of a metre would lower the real price of textiles [you would get more cloth for the posted price per ‘metre’] and, provided prices were sticky and posted as currency units per metre, boost demand.
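To make the arithmetic concrete, here is a toy sketch – my own illustration with made-up numbers, not anything from the literature – of how redefining the unit of measurement changes the real price when the posted price is sticky:

```python
# Toy illustration: cloth sells at a sticky posted price in currency units per
# official 'metre'.  If the central bank lengthens the official 'metre', each
# currency unit buys more physical cloth, so the real price falls and (with
# sticky posted prices) demand should rise.

def real_price_per_cm(posted_price, metre_in_cm):
    """Currency units per physical centimetre of cloth."""
    return posted_price / metre_in_cm

posted_price = 10.0   # currency units per official 'metre' (sticky)

before = real_price_per_cm(posted_price, 100.0)   # metre defined as 100 cm
after = real_price_per_cm(posted_price, 125.0)    # 'metre' lengthened to 125 cm

# Lengthening the metre by 25% cuts the real price per centimetre by 20%.
print(before, after, after < before)
```

The point of the sketch is only that the authority controlling the unit's definition controls the real value of sticky posted prices, which is the lever the post describes.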

The notion of having a different unit of account from the medium of exchange surfaced in the context of solutions to the lower bound imposed on central bank rates by the fact that cash pays zero interest.  Buiter and Kimball are associated with the idea that the central bank might manage the unit of account so that the medium of exchange [cash] depreciates in value against it, thus yielding a negative interest rate on cash, and permitting negative rates to emerge on market instruments.
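A stylised sketch of that mechanism, with illustrative numbers of my own rather than anything taken from Buiter or Kimball, might look like this:

```python
# Sketch of the depreciating-cash idea: the central bank sets an exchange rate
# between cash and the unit of account and depreciates cash over time.  Holding
# cash then earns a negative return in unit-of-account terms, so market rates
# can go below zero before cash becomes the dominant asset.

def cash_return_in_unit_of_account(depreciation_rate, years=1):
    """Gross return on one unit of cash, measured in the unit of account."""
    return (1 - depreciation_rate) ** years

# Suppose cash is depreciated by 3% a year against the unit of account:
r_cash = cash_return_in_unit_of_account(0.03) - 1   # roughly -3% a year

# A deposit paying -2% now beats holding cash at -3%, so a -2% policy rate
# is feasible: the zero lower bound has effectively been moved down.
r_deposit = -0.02
print(r_cash, r_deposit > r_cash)
```

The depreciation rate in effect becomes an extra policy parameter: the floor on market rates sits near minus the chosen rate of cash depreciation, rather than at zero.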

So even if everyone shifted to Bitcoin, a central bank might still have its monetary policy lever.  Personally I think Roger/Jean’s concerns about a take-over are real.

Collective, private decisions to ditch the official medium of exchange have typically involved ditching the unit of account too.

In history this was because monetary policy wrecked both the unit of account function and the store of value/medium of exchange functions of money.  We are contemplating here a world in which the unit of account function has not been wrecked by central banks, so it is conceivable that only the medium of exchange would shift.  But the (recent) historical precedent that the two tend to go together, and even the conceptual difficulty of disentangling them, make a Bitcoin takeover that disempowers central banks at least as probable as one that does not.  (I say recent because if we go back to medieval times, in continental Europe say, it was pretty common to observe units of account different from the multiple media of exchange circulating.)

The aspect of crypto-currencies that concerns people, the protocol of essentially fixed supply, may be precisely what limits their spread and preserves central bank leverage over the economy.

As David Andolfatto [or David Blockchain as he prefers now to be known] and others have pointed out, the fixed supply means that fluctuations in Bitcoin money demand are not accommodated and are felt in the price, making the price inherently more volatile [than that of central bank money].
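A stylised quantity-theory sketch – a toy model of my own, not Andolfatto's – makes the point.  With random demand shocks, a fixed coin supply forces all adjustment onto the price, while a supply that accommodates demand keeps the price steady:

```python
import random

random.seed(1)

# Toy model: the 'price' of a currency clears money demand against supply,
# price = demand / supply.  Demand fluctuates randomly; compare a fixed supply
# (Bitcoin-style protocol) with one that accommodates demand one-for-one
# (central-bank-style elastic supply).

demand_shocks = [100 * random.uniform(0.8, 1.2) for _ in range(1000)]

fixed_supply = 100.0
prices_fixed = [d / fixed_supply for d in demand_shocks]

# Accommodating protocol: supply moves one-for-one with demand, so the price
# is pinned at 1 whatever the shock.
prices_elastic = [d / d for d in demand_shocks]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Fixed supply transmits every demand shock into the price; elastic supply
# absorbs them all.
print(variance(prices_fixed), variance(prices_elastic))
```

Nothing here depends on the particular shock distribution: so long as demand moves at all, a fixed-supply protocol makes the price strictly more volatile than a fully accommodating one.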

The desirability of a currency protocol often – though not always – dictates the extent to which it is used in the future.  Repeating an example already cited here, the countries that dollarised were the ones with the worst local protocols [for managing their own currency].

Moreover, just as the protocols that govern central bank-note issue have changed, mostly for the better [the recent Indian demonetisation and Venezuela being two recent exceptions], so the protocols governing crypto-currencies might evolve for the better too.   One reading of monetary history – not too Panglossian – is of a slow process of discovering what works for the common good.  Anecdotally, I know from interacting with some of them that cryptocurrency developers understand the fixed supply ‘problem’, and it is not beyond the bounds of possibility that a better algorithmic protocol, or even one run by human committees, emerges.

Which leads us to remember that central banks – nothing but committee-driven money protocols defined by inflation targets, interest-rate-setting procedures and the like – could step in and provide their own.  Indeed, the Fed, the BoE, the Norges Bank and perhaps others have openly contemplated this idea.  [Detail: they already provide digital currency to financial intermediary counterparties.  The question is whether they provide it to all of us.]




