Believe those who are seeking the truth. Doubt those who find it. -- André Gide


Tuesday, November 29, 2016

The failure to inflate Japan

On January 22, 2013, the Government and the Bank of Japan issued a rare joint statement on overcoming deflation and achieving sustainable economic growth. The purpose of the statement was to introduce a two percent inflation target. It was issued jointly to emphasize that the monetary and fiscal authorities could be expected to coordinate for the purpose of achieving their shared goal--a clear attempt to enhance the credibility of the new inflation target.

On April 4, 2013, the BOJ explained how it intended to achieve the inflation target: Quantitative and Qualitative Easing. QQE is (more or less) standard monetary policy, except on a larger than normal scale. That is, the policy entails the creation of bank reserves (money) which are then used to purchase securities--primarily government bonds (JGBs).

At the time, I was skeptical that the policy would work as intended (see here). My skepticism has not abated since then. This post is about explaining why. In a nutshell, my argument is that while the BOJ seems willing to increase inflation, it is largely unable to--and while the government is able to increase inflation, it seems unwilling to. In short, the necessary policy coordination appears to be absent.

Let's begin with some basics. First, note that a JGB is basically an interest-bearing claim to (possibly) interest-bearing BOJ money. The total nominal government debt is the sum of BOJ money and JGBs. The fiscal authority controls the total supply of debt. The monetary authority determines its composition (between money and bonds). Quantitative easing increases the supply of money and reduces the supply of bonds held in the wealth portfolios of private agents. That is, it changes the composition of government debt without changing its level.

Because bonds are normally discounted (that is, they generally earn a higher yield than money), an open-market operation that alters the composition of government debt will generally have real and nominal consequences. But in present circumstances, the yield and risk characteristics of Japanese money and bonds are very similar. In the limiting case where money and bonds are perfect substitutes (we're not quite there yet), altering the composition of government debt (without affecting its level) is inconsequential. It's like swapping one hundred dollars worth of $10 bills for one hundred $1 bills. Such an operation--even if it is permanent--is not likely to have any measurable effect on the economy, including the price-level. Why should it? Empirically, it didn't seem to have any measurable impact on inflation the first time Japan tried QE from 2002-2006 (see also my 2003 paper here, section VI).
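The neutrality of such a swap can be made concrete in a few lines of code. This is a stylized sketch with hypothetical balance-sheet numbers (the function name `qe_swap` is mine, purely illustrative): the central bank creates money to buy bonds, changing the composition of government debt but not its total.

```python
# Stylized QE: swap newly created money for bonds held by the private sector.
# Composition of government debt changes; total debt does not.

def qe_swap(money, bonds, purchase):
    """Exchange `purchase` worth of bonds for newly created money."""
    return money + purchase, bonds - purchase

money, bonds = 100.0, 900.0           # hypothetical privately held money and JGBs
total_before = money + bonds

money, bonds = qe_swap(money, bonds, purchase=300.0)
total_after = money + bonds

print(money, bonds)                   # 400.0 600.0
print(total_before == total_after)    # True: the level of debt is unchanged
```

When money and bonds are perfect substitutes, private portfolios are indifferent between the two columns, which is why the operation is inconsequential for the price level.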


For the rate of inflation to rise, one of two things must happen: [1] the growth rate in the supply of nominal government debt must rise; or [2] the growth rate in the demand for government debt must fall.
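As a back-of-the-envelope way to see this (my own stylized accounting, not a structural model from the post), think of inflation as the gap between the growth rate of the nominal supply of government debt and the growth rate of the real demand for it:

```python
# Stylized sketch: if nominal debt supply grows at g_supply and real demand
# for it grows at g_demand, the price level adjusts so real balances clear:
#   inflation ~= g_supply - g_demand

def implied_inflation(g_supply, g_demand):
    return g_supply - g_demand

# Supply growing 3% while safe-asset demand grows 4% => deflationary pressure.
print(round(implied_inflation(0.03, 0.04), 4))  # -0.01
```

Conditions [1] and [2] in the text correspond to raising `g_supply` or lowering `g_demand`, respectively.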

One interpretation of what has happened in Japan (and elsewhere) is that a persistently bearish sentiment has led to elevated growth in the demand for safe securities, like JGBs (at the expense of private investment). The effect of this force is to drive down bond yields and create deflationary pressure (deflation is a market mechanism for increasing the growth rate of the real quantity of a nominal object when it is in short supply). While the supply of nominal debt has been rising, ultra-low bond yields and lowflation suggest that the demand for debt has been rising even more rapidly.

According to the joint statement mentioned above, the government's commitment to helping the BOJ achieve the 2% inflation target amounts to reducing the demand for government debt by implementing reforms intended to create a bullish investment climate designed to stimulate real economic growth (the third of Abe's three arrows). While this is fine as far as it goes, what's the contingency plan in case the third arrow cannot be released or misses its mark?

In my view, the appropriate contingency plan would involve a promise to use nominal debt to finance (say) social security payments or tax cuts as long as inflation remains below target. This is essentially "helicopter money." The "money" in this case is government debt (whether the BOJ monetizes new debt or not is irrelevant if the two objects are perfect substitutes). Importantly (and as far as I understand), the BOJ has no authority to engage in helicopter money. Only the government can do this. And in present circumstances, my view is that only a commitment on the part of the government to adjust money/debt-finance expenditures to meet the inflation target can render it credible. The question is whether the government has expressed any willingness to support the inflation target in this manner. All the evidence I can find suggests that the answer is no.

To begin, the Japanese government appears to be very concerned with the size (and growth) of its public debt. From the joint statement above:
In addition, in strengthening coordination between the Government and the Bank of Japan, the Government will steadily promote measures aimed at establishing a sustainable fiscal structure with a view to ensuring the credibility of fiscal management.
Now don't get me wrong--everyone agrees that a "sustainable fiscal structure" is a good thing. The question is in determining what is sustainable. Of course, the debt-to-GDP ratio cannot rise forever. But it may certainly rise to a much higher level, even from its current elevated position, especially in light of how low interest rates presently are.

The government of Japan, however, appears almost obsessively concerned with deficit reduction. Publications from the Ministry of Finance seem to go out of their way in raising debt-sustainability alarm bells. Consider the contents of this Japanese Public Finance Fact Sheet, for example. Most of the document stresses the need for "fiscal consolidation" (deficit reduction) and includes lessons to be drawn from the European debt crisis. The graph of total government expenditure on page 4 strangely includes spending on the repayment of debt. And on page 3, there is the familiar and misleading "here is what a family's balance sheet would look like if it behaved like the government" exercise. This is a great way to promote the government's seriousness about stabilizing the debt-to-GDP ratio. But it is not, in my view, a policy that is consistent with helping the BOJ achieve its 2% inflation target.

And by the way, just how serious is the government debt problem in Japan? Japan's debt-to-GDP ratio is presently 250%, or so we are told. As it turns out, this figure overstates the level of public debt (see here, section 3.1). The 250% figure represents gross debt, which includes government loans and certain intragovernmental transfers, all of which should be netted out. Once this is done, the net debt-to-GDP ratio is closer to 150%.


Moreover, if one further accounts for the sizable quantity of government assets, the ratio falls to 100% (see the balance sheet of the central government here on page 51). And finally, if one takes into account that 40% of government bonds are held by the BOJ and are likely to remain monetized, the ratio falls further still. In my view, the very low yield on JGBs reflects the market's assessment that public finances in Japan are nowhere near being out of order (a caveat to this view here).
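The arithmetic behind these figures can be laid out explicitly. The two deductions below are simply the amounts implied by the 250% -> 150% -> 100% ratios quoted above, not independent data:

```python
# Back-of-the-envelope walk through the debt ratios cited above
# (percent of GDP; deductions are implied by the 250 -> 150 -> 100 figures).

gross_debt = 250                      # gross debt-to-GDP, as commonly reported
intragov_and_loans = 100              # government loans + intragovernmental holdings
govt_assets = 50                      # assets on the government's balance sheet

net_debt = gross_debt - intragov_and_loans
print(net_debt)                       # 150

net_of_assets = net_debt - govt_assets
print(net_of_assets)                  # 100
```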

So, relative to the market demand for its product, the government of Japan appears to be in "austerity" mode--it is bent on limiting the supply of highly-valued JGBs. In the meantime, the BOJ is aggressively purchasing the limited supply of JGBs to the point where it is now worried that the supply of bonds available for purchase will soon be exhausted (story here).



How can the BOJ credibly promise to continue with its bond purchases until its inflation target is met? It can't. Not without the proper support from the government, which appears not to be coming anytime soon. And so, after a transitory blip in inflation following the austerity-induced VAT hike, headline CPI is back near zero territory.


Partly out of a concern over running out of eligible securities to purchase, the BOJ recently announced a negative interest rate policy (NIRP), followed by yield curve control (YCC); see here. These interventions appear to have had little impact on inflation expectations (inflation and inflation expectations today are similar to the early 2000s, prior to the financial crisis).


Let me conclude. First, this post is not meant as an argument in favor of the 2% inflation target. Second, it should not be construed as an argument against the Japanese government's debt management strategy. Nor is it an argument against the BOJ's asset purchase program. I will discuss these issues in a subsequent post.

The point of this post is as follows. IF the monetary and fiscal authorities wish to implement a 2% inflation target, THEN success of the policy (in present circumstances) requires a sufficiently accommodative fiscal policy (deficit financed expenditures and/or tax cuts) when inflation and inflation expectations are running below target. Absent this commitment on the part of the fiscal authority, the endeavor is ultimately doomed (if an overall bearish outlook persists) and--as a consequence--the credibility of a monetary authority that keeps promising an inflation it cannot deliver may at some point be jeopardized.

Additional readings:
[1] Understanding lowflation.
[2] A model of U.S. monetary policy before and after the great recession.

Thursday, November 10, 2016

U.S. postwar growth and the pop in epop

Here's the picture of real per capita GDP growth in the postwar United States.

While this is an impressive record of economic development, the recent trajectory away from (log-linear) trend has many people concerned. I share this concern. But I sometimes wonder whether the assumption of (log-linear) trend does not distort our view a little bit. In particular, one might alternatively view the pattern of economic development as "naturally" alternating between episodes of more or less rapid growth, kind of like this...


This representation of "trend and cycle" is a little disconcerting in that it suggests that there is no obvious reason to expect "mean-reverting growth" any time soon. On the other hand, perhaps there is some comfort to be drawn as well. In particular, we've been there before and we somehow managed not only to survive, but also to recover. (Related post: Secular stagnation then and now).

In today's post, I want to look a little more closely at that recovery phase. While I think that a growth recovery is in the cards at some point, I'm not sure we should be expecting it to be as robust as what we experienced in the immediate postwar period or in the 1980-90s. The latter growth episode in particular was driven at least in part by a demographic force that appears to have largely dissipated. The pop in epop has popped, so to speak:


Whatever drives secular growth, it obviously cannot rely on an ever-rising employment-to-population ratio (EPOP). But EPOP can nevertheless rise for decades, as it did in the 1975-2000 period, giving the impression of secular (rather than transitory) growth.

How much did the transitory increase in EPOP contribute to GDP growth? To get a rough answer to this question, suppose that EPOP had remained fixed at 58% throughout the entire sample period and subtract the amount (EPOP(t) - 0.58)*GDP(t) from actual real per capita GDP at date t. Here's what we get:
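The adjustment just described is a one-liner. Here is a sketch with made-up numbers (the actual exercise uses the real per capita GDP and EPOP series, which I am not reproducing here):

```python
# Counterfactual: hold EPOP fixed at 58% and subtract (EPOP(t) - 0.58)*GDP(t)
# from real per capita GDP at each date t. Toy numbers, purely illustrative.

BASE_EPOP = 0.58

def adjusted_gdp(gdp, epop):
    return [g - (e - BASE_EPOP) * g for g, e in zip(gdp, epop)]

gdp  = [100.0, 110.0, 120.0]          # hypothetical real per capita GDP
epop = [0.58, 0.62, 0.64]             # hypothetical employment-to-population ratio

print([round(x, 2) for x in adjusted_gdp(gdp, epop)])  # [100.0, 105.6, 112.8]
```

When EPOP sits above its base level, part of measured GDP is attributed to the (transitory) rise in employment and netted out, which is exactly what flattens the 80-90s boom in the chart.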


That is, unlike the economic boom of the immediate postwar period, the more recent 80-90s boom was driven in part by a pop in EPOP. For those who prefer a log scale:


Viewed from this perspective, the growth spurt beginning in the early 1980s does not look as impressive, though it's still pretty good. And unless there's reason to believe that a similar pop in EPOP is in store in the near future, it might be prudent to scale back our forecasts for longer-term economic growth accordingly.

Thursday, September 29, 2016

Beveridge curves

The Beveridge Curve refers to the relationship between job vacancies and unemployment or, more generally, between business sector recruiting activity and household sector job search activity.

Theoretically, the Beveridge Curve should be negatively-sloped in V-U space. When economic prospects look promising, firms wanting to expand capacity begin to post more vacancies. For a given level of unemployment, there is an increase in labor market tightness (V/U) which makes finding a job easier for unemployed workers. The unemployment rate declines as the vacancy rate rises. The reverse holds true when economic prospects are diminished.
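One standard way to formalize this mechanism is with a textbook Cobb-Douglas matching function from the Diamond-Mortensen-Pissarides tradition. This is my own illustrative sketch (the post itself does not specify a matching function, and the parameter values are made up):

```python
# With matches m = A * U**(1-a) * V**a, the job-finding rate m/U depends only
# on labor-market tightness V/U: more vacancies per searcher => easier to find a job.

A, a = 0.5, 0.5                       # hypothetical matching efficiency and elasticity

def job_finding_rate(u, v):
    matches = A * u**(1 - a) * v**a
    return matches / u                # equals A * (v/u)**a

print(round(job_finding_rate(0.06, 0.03), 3))  # tightness 0.5
print(round(job_finding_rate(0.06, 0.06), 3))  # tightness 1.0 -> higher finding rate
```

Holding unemployment fixed, raising vacancies raises tightness and the job-finding rate, which then pulls unemployment down: the negatively-sloped locus in V-U space.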

Empirical Beveridge Curves don't always have the clean shape suggested by theory. Sometimes, the Beveridge Curve appears to "shift." Beginning with Lilien (1982), there's been an inclination to interpret shifts in the Beveridge Curve as reflecting the effects of "structural" shocks as opposed to the "cyclical" shocks that drive the normal U-V dynamic. For some recent work in this area, see my interview with Gianluca Violante here: "What Shifts the Beveridge Curve? Recruitment Effort and Financial Shocks."

I'm not going to provide much in the way of analysis in what follows. The primary purpose of this post is just to share some data that may or may not stimulate some hypotheses. Let me begin with the BC using the JOLTS data.


Here you see the familiar cyclical pattern driven by the Great Recession and recovery. Except that the BC appears to have shifted outward. In other words, given present levels of recruiting intensity, we would have expected (based on historical experience) the unemployment rate to be significantly lower. The pattern is similar if we instead use an alternative measure of job vacancies from the HWOL (the Conference Board Help Wanted Online series).


Because worker flows between employment and out-of-the-labor-force are as large as the flows between employment and unemployment, I sometimes like to use a broader measure of job search (available to work), like nonemployment (you may prefer one of the alternative measures listed here).




This representation of the data suggests that the U.S. labor market looks a lot different today than it did prior to the Great Recession.

One of the benefits of the HWOL data is that measurements are available at the MSA level. (I also have the benefit of a great research assistant, Andrew Spewak, who did all the leg work for us.) Here are a few examples.


 Or, in terms of nonemployment rates...






So some MSAs display relatively stable BCs in V-U and V-N space, whereas others do not.

To get some additional sense of the heterogeneity existing at the MSA level, consider the following data, which plots the percentage-point change in vacancies and unemployment over the recession (2007-09) and the recovery (2009-16) for a set of selected MSAs (most of the largest ones).


Not surprisingly, the unemployment rate shot up across all the MSAs in this sample and the vacancy rate declined, though not by very much in many jurisdictions. Here is how the same set of MSAs behaved during the recovery.


 Again, not a very surprising pattern, apart from the extent of the heterogeneity. If we repeat the exercise above replacing the unemployment rate with the nonemployment rate, during the recession we see,


And during the recovery,


That is, recruiting intensity in the recovery appears to be up across the board. One would expect the employment rate to be up across the board as well. But it is not. MSAs like Seattle, Denver, and Phoenix, for example, have experienced declines in the employment rate despite marked increases in their respective job vacancy rates. These differences are interesting and could have implications for (say) the relative merit of policies targeted at the aggregate vs. sectoral/regional level.

Friday, August 26, 2016

Jackson Hole and Fed Communication

Fed chair Janet Yellen gave what I considered to be a good speech at this year's Jackson Hole conference (see here).  Not everyone seems impressed, however. The Fed has no credibility, it seems. For example, it keeps saying it's going to do things, like raise its policy interest rate, only to repeatedly back off. I mean, what the heck? Don't they even know what they're doing?

At some level, this degree of frustration is understandable. (I am less sympathetic, however, when it comes to informed journalists and market traders, who should know better.) Let me try to help ease your frustration.

The first thing to keep in mind is that monetary policy is not a precise science. Much remains to be discovered, especially since the environment (technology in particular) continues to evolve. Keep in mind that most central banks employ the services of research divisions. As Einstein is purported to have said: "If we knew what it was we were doing, it would not be called research, would it?"

That's not to say that monetary policy makers are completely clueless. Evidence. Theory. Discussion. Debate. Experience. Wisdom. They all have a role to play in the process of formulating monetary policy. There is considerable consensus along some dimensions (e.g., keeping inflation low and stable). There is outright disagreement along other dimensions. That's just the way it is. And it's likely to remain this way for the foreseeable future. But in the meantime, if you live in the U.S., try to take some solace in this:

Annual Inflation Rates
Now, in terms of Yellen's Jackson Hole speech, what are people complaining about? Well, consider this WSJ article: Yellen Cries Wolf, with the subtitle: Fed chairwoman tries to convince market that a rate rise is coming but investors aren't listening. Of course, digging deeper into the article, the author clarifies that Yellen did not actually say that, only that she came "close" to saying it. Sigh.

The main issue here, I think, is what people expect in the way of Fed communication in terms of its economic outlook and its description/explanation of its policy rule. These are two conceptually distinct objects and are often confused.

My own personal view is that a central bank should make its policy rule clear, but that it should refrain from providing an economic outlook. So, for example, the Fed should want to make it clear that a sharp uptick in inflation would be met with a correspondingly sharp increase in its policy rate (assuming that this is an appropriate policy response). But what would be the use in having the Fed provide an outlook (a probability assessment) over future inflation? All that people need to know, really, is that the Fed is committed to keeping inflation in check. The credibility of this belief is ultimately based on reputation (see diagram above). As for forecasting the contingencies that would trigger this or that policy response, let the private forecasters do their job.
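For concreteness, here is what a transparent policy rule of that kind might look like: a simple Taylor-type rule with illustrative coefficients. This is a textbook sketch, not the Fed's actual reaction function, and the numbers are my own:

```python
# A simple Taylor-type rule (illustrative coefficients):
#   i = r_star + pi + 0.5*(pi - pi_target) + 0.5*output_gap
# A sharp uptick in inflation is met with a more-than-one-for-one rate increase.

def policy_rate(pi, output_gap, r_star=2.0, pi_target=2.0):
    return r_star + pi + 0.5 * (pi - pi_target) + 0.5 * output_gap

print(policy_rate(pi=2.0, output_gap=0.0))  # 4.0: inflation on target, neutral stance
print(policy_rate(pi=4.0, output_gap=0.0))  # 7.0: inflation up 2 pts -> rate up 3 pts
```

A rule like this communicates the *contingency* (how the rate responds to inflation) without committing the central bank to any particular forecast of inflation itself.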

But some people want more from the Fed. They want the Fed to tell them how the economy is going to evolve in the foreseeable future (and in some cases, beyond). As if the Fed, or anyone for that matter, can actually know.

Now, if people generally appreciated the inherent difficulty in offering forecasts of this sort, I'd say that it would do no harm for a central bank to offer its economic outlook--a prognosis that would find its way in a portfolio of outlooks generated by other agencies. Market participants could then combine the information in these outlooks and, together with the Fed's clearly stated policy rule, make their own forecast of (say) the future path of short-term interest rates.

But perhaps I'm being naive. If a central bank was to just state its policy rule and refrain from offering its outlook, it would surely be criticized for not providing the market with enough "guidance." It is the demand for this "guidance" that compels central bankers to offer an economic outlook. Here is the outlook provided by JY (emphasized phrases my own):

Looking ahead, the FOMC expects moderate growth in real gross domestic product (GDP), additional strengthening in the labor market, and inflation rising to 2 percent over the next few years. Based on this economic outlook, the FOMC continues to anticipate that gradual increases in the federal funds rate will be appropriate over time to achieve and sustain employment and inflation near our statutory objectives. Indeed, in light of the continued solid performance of the labor market and our outlook for economic activity and inflation, I believe the case for an increase in the federal funds rate has strengthened in recent months. Of course, our decisions always depend on the degree to which incoming data continues to confirm the Committee's outlook.
And, as ever, the economic outlook is uncertain, and so monetary policy is not on a preset course. Our ability to predict how the federal funds rate will evolve over time is quite limited because monetary policy will need to respond to whatever disturbances may buffet the economy. In addition, the level of short-term interest rates consistent with the dual mandate varies over time in response to shifts in underlying economic conditions that are often evident only in hindsight. For these reasons, the range of reasonably likely outcomes for the federal funds rate is quite wide--a point illustrated by figure 1 in your handout...The reason for the wide range is that the economy is frequently buffeted by shocks and thus rarely evolves as predicted.

And so, there you have it. Evidently, the Fed plans to raise its policy rate soon. And if it doesn't, its credibility will be diminished. Or if it does raise rates even though conditions do not warrant it, its credibility will again be diminished. Or, as the fan chart above demonstrates, the Fed evidently has no idea where interest rates will go. There's no winning this game. Go back and look at the first diagram again and give it a rest.


Tuesday, June 14, 2016

DSGE Theory

This post is for my students, and whoever else is interested in what DSGE theory is and why I find it useful.

Dynamic Stochastic General Equilibrium (DSGE) theory refers to a methodology employed by macroeconomists to build DSGE models -- mathematical representations of the macroeconomy. DSGE models, like all models, are used for a variety of purposes. They are used to help organize thinking. They are used to interpret data. They are used to help make conditional forecasts. They are used to predict and evaluate the possible consequences of government policies (especially useful for policies that have never been tried before). They are used to help make policy recommendations.

The use of DSGE theory is often criticized in ways that reflect what I view as a deep misunderstanding of the research program, how it fits in with the evolution of macroeconomic theory over time, and how it is actually applied by (say) central bank policy makers. This is, I think, to some extent the fault of DSGE practitioners who, accustomed to speaking in their specialized trade language, find it difficult to translate core ideas and findings in the vernacular. (This is an issue with most trade associations, of course, but is especially acute in economics because so many non-specialists take an interest in the subject.)

Let me first provide some context for my views. We are all scientists trying to understand the world around us. We use our eyes, ears and other senses to collect data, both qualitative and quantitative. We need some way to interpret/explain this data and, for this purpose, we construct theories (or hypotheses, or models, or whatever term you prefer). Mostly, these theories exist in our brains as informal "half-baked" constructs. This is not meant to be a criticism (as long as we recognize the half-baked nature of our ideas and why some humility is always in order). Often it seems we are not even aware of the implicit assumptions that are necessary to render our views valid. Ideally, we may possess a degree of higher-order awareness--e.g., as when we're aware that we may not be aware of all the assumptions we are making. It's a tricky business. Things are not always as simple as they seem. And to help organize our thinking, it is often useful to construct mathematical representations of our theories--not as a substitute, but as a complement to the other tools in our tool kit (like basic intuition). This is a useful exercise if for no other reason than it forces us to make our assumptions explicit, at least, for a particular thought experiment. We want to make the theory transparent (at least, for those who speak the trade language) and therefore easy to criticize. Constructive criticism is the fuel that fires the furnace of new ideas in academia. [ End of philosophical rant :) ]

Now let me turn back to DSGE theory. I think it will be useful to break the acronym into its parts and discuss each component separately.

The "D" stands for dynamic--as in--the phenomena in question involve a time element. The opposite of dynamic is static. While static models have their uses, who's going to argue that a dynamic element isn't desirable? Almost all decisions--consumption and saving, deficit finance, human capital investment--have a time dimension to them. No controversy here, I hope.

The "S" stands for stochastic--as in--societies appear subject to random events, like unforeseen technological breakthroughs, unexpected changes in government policy regimes, or just random acts of nature. Again, I don't think there's much controversy with this idea. Note, however, that many DSGE models do not have the S, in which case we might instead employ the acronym DGE. (For a history of the evolution of these acronyms, see here.)

The "G" stands for general--as in--well, it's not entirely clear. There is a traditional distinction in economics between partial and general equilibrium theory. The partial equilibrium approach (associated with Alfred Marshall) refers to the supply-demand curve analysis that most people are familiar with. The analysis is "partial" in the sense that it typically restricts attention to a particular market--like the market for motor vehicles, taking the price of other goods as given. In contrast, the general equilibrium approach (associated with Leon Walras) strives to model the economy as a closed system, paying particular attention to how markets interact with each other and how prices are determined jointly. Importantly, the "G" insists on giving an explicit account of the government budget constraint (i.e., a government is not to be modeled as Jesus feeding the multitude.) Another way to think about "G" is that it means to capture the possibility of "feedback effects." The notion of feedback effects in macroeconomic systems is not, I think, controversial.

This leaves us with the "E," which stands for equilibrium. Here lies the controversy. But why? For all sorts of reasons, some of which are based on legitimate concerns, and some of which are based on simple misunderstanding.

Let me first address the misunderstanding. The concept of "equilibrium" in economics has evolved to mean something quite specific and something quite different from the notion of a "system at rest" (which is closer to what economists label a steady-state). Technically, an equilibrium is simply a set of conditions imposed by the theorist to help determine the outcome of an hypothetical social interaction. In this sense, an equilibrium is probably better thought of as a solution concept. There is no unique way to specify an equilibrium solution concept. In game theory, there is a plethora of alternatives, beginning with the Nash equilibrium. The classical theory of Walras uses the concept of a competitive equilibrium. In my own view (probably not representative), I even think of general disequilibrium as just another type of equilibrium concept. Every theorist has to have a solution concept in mind when deducing the likely outcome of an hypothetical social interaction. There is no right or wrong way to specify an equilibrium concept--there are just more or less useful ways of doing so.
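To make "equilibrium as a solution concept" concrete, here is a toy check (my own example, using prisoner's-dilemma payoffs) of which strategy profiles of a 2x2 game survive the Nash criterion--profiles where neither player gains by deviating unilaterally:

```python
# Payoff matrices: payoffs[player][row_action][col_action].
# Action 0 = cooperate, action 1 = defect (prisoner's-dilemma structure).
payoffs = {
    0: [[3, 0], [5, 1]],   # row player
    1: [[3, 5], [0, 1]],   # column player
}

def is_nash(r, c):
    """A profile (r, c) is Nash if no unilateral deviation raises a payoff."""
    row_ok = all(payoffs[0][r][c] >= payoffs[0][r2][c] for r2 in (0, 1))
    col_ok = all(payoffs[1][r][c] >= payoffs[1][r][c2] for c2 in (0, 1))
    return row_ok and col_ok

print([(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c)])  # [(1, 1)]
```

The "equilibrium" here is just the set of profiles selected by the consistency conditions the theorist imposes--nothing about a system at rest, and nothing that requires market clearing.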

Another misunderstanding is that insisting on equilibrium analysis necessarily implies that one assumes markets always "clear" in the sense prices adjust to ensure supply equals demand at all times. This is understandable because many DSGE models (especially the RBC variety) do in fact make this assumption. But, of course, there's a large class of DSGE models that do not (e.g., the NK variety). More to the point, it's important to understand that the concept of equilibrium is not wedded to the concept of competitive market-clearing models. In DSGE models that replace centralized Walrasian markets with decentralized search markets, conventional "supply and demand" curves do not even exist. In search models, prices are determined through bilateral negotiations and the "clearing" mechanism operates through quantity variables, like labor-market tightness (the ratio of vacancies to unemployment).

A more legitimate concern relates to the equilibrium concept of "rational expectations." Because of the "D" element, the theorist must take a stand on how expectations are formed and updated over time. Macroeconomic theorists have grappled with this question for over a century, if not longer (see Laidler, 1999). There is little controversy that people are forward-looking. But exactly how are they forward-looking? John Muth (1961) suggested that, in the context of a model, we might begin by assuming that our modeled agents (somehow) form model-consistent expectations (i.e., "rational" expectations). Intuitively, the idea is that we should not model people as forming expectations that are wildly at odds with the reality unfolding around them and that, as a limiting case, we might even begin by assuming that expectations are formed in a manner that is perfectly consistent with the surrounding reality. Among other things, model agents are assumed to possess common knowledge (see Geanakoplos, 1992).

Now, if all of this sounds like a bit of a stretch, it no doubt is. The relevant criticism and response is recorded in section 6.4 Stationary Models and the Neglect of Learning in Lucas and Sargent (1979). I'm not going to get into it here, but suffice it to say that there's been a large and vibrant literature on non-rational-expectations "learning" models since Lucas and Sargent wrote that piece. And you'd be very wrong to think it hasn't had any influence in the way policymakers, central bankers in particular, think about policy and its effects. St. Louis Fed president James Bullard, for example, is among those who have made significant academic contributions in this area (you can view his works here).

In terms of their use in policy making, DSGE models are no different than their predecessors. Some applications entail large scale quantitative models to make conditional forecasts. But their main value is the manner in which they (along with other models) are used to organize thinking in policy deliberations. I think I disagree with Narayana Kocherlakota here when he suggests that DSGE models are built purposely not to be useful for day-to-day policy making--for example, in helping to answer the question of whether the interest rate should be changed in the upcoming FOMC meeting. Instead, he views DSGE models as useful for thinking about policy rules (which I agree with). But his view here seems inconsistent with a view he has expressed elsewhere, namely, that isolated changes in the policy rate are largely irrelevant--that what is important is how the path of interest rates is expected to evolve over time (I agree with this too). I think that the decision of whether to move rates today has to be made in the context of what the policymaker views as wise policy principles based on some combination of theory, evidence, and experience. These principles should no doubt make allowances for the necessity of discretionary and ad hoc policy actions. But this allowance does not mean that reference to a DSGE model (or any other model) cannot be useful for thinking through the likely consequences of a contemporaneous policy action. [Note: I may have misunderstood the point NK was trying to make.]

In terms of a defense of the use of DSGE theory for policy, I can do no better than Chris Sims here (video, highly recommended). See also this interview with Tom Sargent, who defends modern macro theory. Finally, I have my own related post: In Defense of Modern Macro Theory.

Sunday, May 29, 2016

Some questions concerning equity-financed banking

John Cochrane has another fun and provocative post making his pitch for equity-financed banking. He makes a lot of great points. But I'm still left feeling a little uneasy. In particular, I wonder whether some of his sweeping claims have any firm theoretical backing. It could be I just haven't thought hard enough or long enough about it. In any case, in the spirit of promoting discussion, let me describe some of the things that bother me.

Actually, before I start, I should preface my concerns with a couple of observations. Policies directed toward stabilizing the banking sector target both the asset and liability side of bank balance sheets. The "narrow banking" proposal of 100% reserves, for example, is a policy designed to make bank assets safe. The "100% equity-financed banking" proposal on the other hand is a policy designed to render bank liabilities safe (run-proof). According to Cochrane, "...bank assets aren’t risky! A diversified, mostly marketable portfolio of loans and mortgage backed securities is far safer than the profit stream of any company." The problem evidently lies on the liability side. Moreover, the issue here is not simply one of ascertaining whether banks are "over-levered" (I'm willing to agree that they probably are). The issue is whether debt (fixed-value promises), especially demandable debt, has a role to play in the business of banking at all.
 
The main question I have is: where's the theory? In the benchmark neoclassical model, some version of the Modigliani-Miller theorem typically holds. The theorem states that under a very specific set of assumptions, the liability structure of a firm does not matter. We know that these assumptions (e.g., symmetric information) do not literally hold in reality. When information is asymmetric, debt can be a superior way to fund assets relative to equity (see here and here, for example). Demandable debt may have socially desirable properties when liquidity demands are private information; see here. In a world where exchange media (including collateral assets) are valued, it could matter very much how the "pizza" is sliced into tranches designed to serve special uses.

What explains the widespread use of the debt contract and the prevalence of fractional reserve banking? The explanation is unlikely (in my view) to be "government distortions" or "greedy bankers." It seems to me that asymmetric information in financial markets is a pertinent real-world friction. Could it be that debt represents a sort of "second-best" solution to the problem of efficient (low-cost) financing in a world of asymmetric information? Might the same not be true of demandable debt? Implicitly, Cochrane must be thinking that these benefits are quantitatively small. Maybe so, but senior liability tranches do seem rather highly valued in the marketplace, especially as exchange media. Private monetary instruments have always been in the form of debt, not equity. Why has this been the case?

Cochrane begins his post with the statement "My premise is that, at its core, our financial crisis was a systemic run. The mechanism is familiar from Diamond and Dybvig."  The Diamond and Dybvig (1983) model can indeed be interpreted as a theory of bank sector fragility. But we should keep in mind it's also a theory that explains the benefits of an illiquid bank sector (a point stressed by Wallace, 1996). Moreover, it's also a theory that explains the desirability of demandable debt. We want demandable liabilities (according to this theory) because, well, imagine going to an ATM wanting to withdraw cash and then having the blasted thing make you fill out an insurance claim attempting to verify whether you do indeed have a pressing need for liquidity. (In fact, banks were known to do this during the banking panics of the 20th century.)

Now, I suspect that Cochrane may reply that while demandable debt in its traditional form once had a useful role to play, markets and communication technologies are now developed to the point that renders the demand deposit liability superfluous. Call me a hopeful skeptic. Cochrane claims that "unlevered bank equity would have 1/10 less the volatility it has today, so we're talking about 2% volatility on an annual basis." I'm not sure where he gets these numbers, though I do agree with him qualitatively. (On the other hand, how do we know that banks will not hold riskier assets if they are equity financed?)


In any case, just how much volatility is "too much" for depositors wanting transactions balances with a steady value to ensure that payment obligations can be met in all circumstances on a timely basis? Evidently, depositors place considerable value on the fact that they can redeem their bank money at par for cash. What Cochrane advocates is the replacement of our present ATMs with one-armed bandits spitting out random returns whenever we want to redeem our bank equity shares for cash. That sounds like a lot of fun, but it may not be very practical. Perhaps the volatility of these returns will not be so great (how do we know?). Maybe we'll be able to withdraw only 98 cents on the dollar more often than not "by chance" (as the Gorton-Pennacchi insiders skim us outsiders while claiming "bad equity returns" as the culprit). Again, how do we know?

I want to be clear here. I am not suggesting that Cochrane's proposals are a bad idea when all considerations are factored in. I'm just questioning whether it's the open-and-shut case he makes it out to be. If it is such a great idea, I wonder why banks have not offered the product on their own? (I am sure there is no shortage of explanations here, but still, it's worth having them spelled out.)
 
There is one final thing I want to touch on before I sign off here. The main reason Cochrane prefers equity over debt is that equity is evidently "run-proof." I don't know, he may have his own special definition of "run." There are macroeconomic models of multiple equilibria (see Roger Farmer) where shareholders might be compelled to "run" on equity, driving its price lower, leading to all sorts of negative pecuniary externalities and self-fulfilling crises. Getting rid of debt will not necessarily get rid of financial crises.

Of course, getting rid of debt will get rid of bankruptcy. But I am sometimes led to question whether bankruptcy is quantitatively relevant for causing or exacerbating recessions. As Cochrane points out:
Our crisis and recession were not the result of specific business operations failing. Failure is failure to pay creditors, not a black hole where there once was a business. Operations keep going in bankruptcy. The ATMs did not go dark.
Absolutely. I can recall several times when an airline went bankrupt with no noticeable side-effects (passengers were treated terribly, but that was normal even outside of bankruptcy). Bruce Smith (2002) reports evidence suggesting that bank panics are not always associated with output losses. If so, then what's the big deal? As Cochrane explains, when equity takes a plunge, we all pull out our hair, but the firm is under no obligation to do anything on our behalf. But with debt--demandable debt in particular--we can demand--demand--our money back. And the firm has to ... has to what? I'm not really sure. The firm can just continue to operate as usual and restructure its debt, no? After all, bankruptcy is just a rearrangement of claims against a firm's assets (well, I suppose in some cases senior management gets the ax, but not always). In the old days, banks were permitted to temporarily suspend withdrawals without legal repercussion. As well, bank clearinghouses might issue currency substitutes in lieu of specie, etc.

These considerations lead me to wonder whether interventions on the asset side of bank balance sheets might not be a better way to promote a run-free banking system. Alternatively, as Cochrane suggests, we might consider opening up the central bank's balance sheet to the public. As I've mentioned before, the U.S. treasury does permit the public to hold online UST accounts at www.treasurydirect.gov. While the system is not set up for making payments, there is no reason why, in principle, it could not be. I'm not suggesting this as a panacea, of course. But I think the idea, or some variant of it, deserves serious consideration.

Thursday, May 5, 2016

Why the Blockchain should be familiar to you

From L2R: Michael Casey (MIT Media Lab), David Andolfatto (FRB SL), Simon Johnson (MIT) and John Schindler (FSB)
I'm freshly returned from Consensus 2016: Making Blockchain Real where I participated in a panel on "Digital Cash for Central Bankers." Michael Casey did a stellar job in crafting the session. It was fun and informative to have Simon Johnson and John Schindler as co-panelists. As we didn't get booed off the stage, I think maybe the audience enjoyed what we had to say as well. (I left the session with almost a kilogram of business cards--odd that paper is still so widely used in this capacity. By the way, some of what I had to say can be found in my blog post here.)

Today's post is more about marketing the idea of blockchain. The word sounds intimidating to many people. That's probably because attempts to explain it often make use of a highly technical trade language that few people understand. My goal here is to think of ways to communicate the idea of blockchain in a manner that will make people feel like the concept is familiar to them. Indeed, I believe that the broad conceptual idea of blockchain should be familiar to us all.

Renowned Bitcoin expert Andreas Antonopoulos writes here:
It will take time for the idea of decentralized trust through computation to become a part of mainstream consciousness, and until then, the idea creates cognitive dissonance for those accustomed to centralized trust systems. With thousands of years of practical use, centralized systems of trust are accepted unconditionally and without much thought as the only model of trust.
It's an excellent article and I highly recommend you read it. What I want to do here is push back a little on the notion that decentralized trust systems should necessarily create cognitive dissonance. In particular, I should like to point out that we've had tens of thousands of years of experience with decentralized trust systems. Alright, so let's get started.

Consider the following scenario. You are attending a cocktail party with dozens of people present and you are asked by your hostess to deliver a short speech. Now suppose you utter something outrageous, e.g., "I think the Fed should buy the existing stock of bitcoin and store it as a foreign currency reserve!" The audience will stare at you, mouths agape (especially if you're a central banker, or a renowned Bitcoin enthusiast). You wake up the next day and regret your rash public remark. You wish you could take back what you said, but how? The only way this could be done is if you could somehow persuade the group to forget what you said. But just think about how difficult it would be to do that. Especially if the number of people in attendance was large.

What has just been demonstrated (I hope) is the power of a distributed database validated through a communal consensus algorithm. The database here is your silly statement above together with the time you made it (a timestamp). The information in this database is shared on a distributed network of brains (what you said and when you said it is imprinted forever in the memories of all who witnessed the event). The consensus algorithm here is "let's all agree to remember what was actually said (as opposed to some alternative, fabricated statement)."

A database in this form is extremely secure. It will survive intact even if some brains holding the database are destroyed. The database can be communicated to other brains (who can confirm the validity of the statement by seeing how it squares with the memories of others). If one or more people tried to fabricate an alternative history, the attempt would almost surely fail (we cannot rule out the possibility entirely, however). If your remark instead lived only as an electronic recording in a central databank, the task of re-writing history would be much easier.

Now imagine living in a primitive village. Relevant elements of the database would include observations like: [1] John had his wound tended to by Bob at date t, [2] John killed a wild pig and shared it with the village at date t-1, etc. The database in this case can be organized in a sequence of time-dated blocks X(t) = {x(t), x(t-1),...}, where x(t) is the database (block) at date t, and X(t) is the "blockchain." So, the blockchain is just a communal databank recording some relevant aspects of villagers' activities. In village economies, this communal memory typically exists in a virtual state (written records are a much more modern invention).
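For readers who like to see ideas in code, the village databank above can be sketched as a toy blockchain in a few lines of Python. This is purely illustrative (the record contents, function names, and hashing scheme are my own inventions, not any real protocol): each block x(t) bundles time-dated records with a hash link to the previous block, so that fabricating an alternative history breaks the chain.

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Bundle time-dated records x(t) with a link to the previous block."""
    block = {
        "timestamp": time.time(),
        "records": records,
        "prev_hash": prev_hash,
    }
    # Hash the block's contents so any later tampering is detectable.
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("timestamp", "records", "prev_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    return block

# Build a two-block chain X(t) = {x(t), x(t-1)} from the village examples.
genesis = make_block(["John killed a wild pig and shared it with the village"], "0" * 64)
latest = make_block(["John had his wound tended to by Bob"], genesis["hash"])
chain = [genesis, latest]

def chain_is_valid(chain):
    """Re-hash every block and check the links; a fabricated history breaks them."""
    for prev, curr in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({k: prev[k] for k in ("timestamp", "records", "prev_hash")},
                       sort_keys=True).encode()
        ).hexdigest()
        if prev["hash"] != recomputed or curr["prev_hash"] != recomputed:
            return False
    return True
```

Try editing the records in `genesis` after the chain is built: `chain_is_valid` immediately returns False, just as the village would reject a fabricated account of what John did.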

Notice how the blockchain described above could serve a very useful economic purpose. In particular, note that in consuming medical services in [1], John is effectively using [2] as currency: his recorded gift to the village pays for Bob's care. At least, this is how things work in what anthropologists describe as "gift-giving societies." And if you think about it for a while, you'll notice that the same principle is at work in the various groups you interact with on a daily basis (your friends, your family, coworkers, etc.). Much, quite possibly most, economic exchange occurs via such localized trust networks.

The problem with this ancient blockchain technology is that it doesn't scale very well. There's only so much data we can fit in our brains.  So as populations grew and as people started forming large communities, a new type of record-keeping system was needed. The model that came to dominate is one in which databases are collected and maintained by trusted third parties. Much effort is expended in keeping these private databases secure (not always successfully). It is often difficult for these agencies to communicate and reconcile their databases (as in when you try to send money from your bank account to your friend's foreign bank account overseas).

And so enter the "new" technology, blockchain. I hope I have convinced you what is new here is not the principle of the blockchain. The new technological developments are: [1] bigger brains (increased capacity for data storage and processing via computers); [2] better communications (the Internet); and [3] computer-based algorithms to serve as communal consensus mechanisms (e.g., proof-of-work).
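Item [3] deserves a small illustration. The essence of proof-of-work is that writing to the communal databank is made deliberately costly, while checking someone else's work is cheap. Here is a minimal Python sketch of that asymmetry (difficulty level and record text are illustrative choices, and this omits everything else a real consensus protocol involves):

```python
import hashlib

def proof_of_work(block_data, difficulty=2):
    """Search for a nonce whose hash begins with `difficulty` zero hex digits.

    Finding the nonce takes many hash attempts on average (the "work");
    verifying a proposed nonce takes just one.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("John had his wound tended to by Bob")

# Anyone in the network can verify the work with a single hash:
check = hashlib.sha256(f"John had his wound tended to by Bob{nonce}".encode()).hexdigest()
assert check == digest
```

Raising `difficulty` by one multiplies the expected search effort by sixteen, which is how such systems make rewriting a long history prohibitively expensive for any one participant.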

These innovations will permit a revolution in the truest sense of the word: we are traveling back to where we began--but with planet earth as our village.

===============================================================

PS. Please let me know if this was helpful or how it could be improved. After writing this post, I came across this short video: Blockchain for Dummies. Some of the comments are critical of it, but I thought it communicated the idea in a nice way.