Believe those who are seeking the truth. Doubt those who find it. Andre Gide

Tuesday, November 30, 2021

On the Necessity and Desirability of CBDC

Remarks prepared for P2PFISY Panel Discussion, December 1, 2021

At a conceptual level, CBDC is a compelling idea. It envisions everyone having an account with the central bank consisting of a direct claim against digital fiat currency that can be used as a safe and efficient form of payment. Since all debiting and crediting of accounts occurs on the central bank’s balance sheet, all the costs and counterparty risks associated with intermediated payments are eliminated. All individuals and businesses would have access to secure, low-cost real-time payment services. Moreover, concerns over data privacy and ownership can be dealt with directly and in a manner consistent with societal preferences.

I have nothing against a retail CBDC per se. Indeed, there may even be some merit to the idea as a basic public option. But is it really something that is essential? What existing problems is a CBDC supposed to solve that cannot be solved through a wholesale-CBDC with supporting legislation?

A wholesale-CBDC is an old idea. It is basically a proposal to permit free entry into the business of narrow banking. Let Novi, Square, PayPal and other reputable firms have Fed accounts. Let them issue “stablecoin” liabilities fully backed by interest-bearing reserves. Consider adopting the U.K.’s open banking legislation. Let the private sector work its magic. What else needs to be done?

Do we really think that consumers would flock to CBDC for reasons of safety? Bank deposits are close to fully-insured for most people, and all deposits would effectively be fully-insured in a narrow bank. Do we think that big banks overcharge for basic payment services? I see many online banks offering free checking accounts and I see service fees generally declining over time—something that would be spurred on with a wholesale-CBDC. I do see interchange fees in the U.S. remaining stubbornly high. But I diagnose this as a by-product of Americans’ love affair with the cash-back and rewards programs offered by credit card issuers. I do not see how a CBDC is supposed to discourage consumers from using cards that effectively pay them to spend money. (This seems to be less of a problem outside of North America.)

But more importantly, do we want to rely on the government sector to deliver high-performance customer service at the retail level and to keep up with technological advances in the space? A well-functioning government is essential for a well-functioning private sector (and vice-versa), but these two sectors should probably stick to their knitting. Let the central bank handle monetary policy, bank supervision, lender of last resort operations, and wholesale payments. Let the private sector handle servicing the vast, demanding and rapidly-evolving retail sector. It’s a model that has proven to work best, in my view.

As for financial inclusion, one should keep in mind that the most significant progress along this dimension in recent years has been the outcome of private initiatives, not state initiatives. Consider, for example, the hundreds of millions of people who now have access to digital payments thanks to M-Pesa, WeChat and AliPay. Contrast this to the many developing countries that already have CBDC issued by their state banks. If their state banks have not been able to deliver on this score, what makes us think that retail-CBDC is essential?

Thursday, November 11, 2021

Run-Proof Stablecoins

A stablecoin (SC) is a financial structure that attempts to peg the value of its liabilities (or a tranched subset of its liabilities) to an object outside its control, like the USD. To do this, the SC must effectively convince its liability holders that SC liabilities can be redeemed on demand (or on short notice) for USD at par (or some fixed exchange rate). 

The purpose of this structure is to render SC liabilities more attractive as a payment instrument. Pegging to the USD is attractive to people living in the U.S. because the USD is the unit of account. Non-U.S. holders may be attracted to the product because the USD is the world's reserve currency. This structure serves to increase the demand for a SC pegged to the USD. 

To a macroeconomist, an SC looks like a unilateral fixed exchange rate regime or a currency board. The structure also resembles a money market fund that pegs the price of its liabilities to the USD at par (presently, only government money funds in the United States do this). It also looks like a bank without deposit insurance (bank deposit liabilities are pegged at par value against cash). 

The history of unilateral fixed exchange rate regimes is mixed. Hong Kong has successfully pegged its currency to the USD for decades. But the experience for many countries seems closer to that of Argentina. Unless a USD-based SC is backed fully by USD reserves (it needs an account at the Fed for this) or by USD bills (maximum denomination is $100, so unlikely), it may be prone to a bank run. Any other security (including USTs, as the events of March 2020 demonstrated) is subject to liquidity risk -- i.e., a risk that the market for the security suddenly freezes, or demand for the security vanishes as investors seek safer havens. If a SC cannot dispose of its assets at "fair" or "normal" prices, it will fail to raise the money it needs to meet its par redemption promise. The SC will turn out to be not so stable. 

The theory of bank runs suggests that SCs might be rendered run-proof if their liabilities are properly designed. The famous Diamond and Dybvig (JPE 1983) model of bank runs is, in fact, a paper that demonstrates how banks can be rendered run-proof. In the first part of their paper, they explain how a credible promise to suspend redemptions when redemption activity is abnormally high can serve to discourage runs (redemptions based on a belief of failure, rather than a need for liquidity) altogether. There is no need for deposit insurance (the second part of their paper is devoted to explaining why deposit insurance may nevertheless be needed, but their argument is not entirely satisfactory). 

In reality, we do see attempts to render run-prone structures less prone to runs. The Dodd-Frank Act, for example, prevented institutional money funds from pricing their liabilities at par with the USD (only government funds can now do this). In addition, the Act required that fund managers implement liquidity fees and redemption gates in the event of heavy redemption activity. These provisions have not been entirely successful. Even banks that suspended redemptions in the old days did not manage to prevent mass redemption events. The theory suggests that what is needed is a *credible* policy. Evidently, when push comes to shove, people cannot always be expected to follow through on their promises. 

Back in my teaching days, I used a "crowded movie theatre" as a metaphor to explain the phenomenon. Imagine a movie theatre that seats 500 people. If someone were to yell "fire!" (for legitimate or illegitimate reasons), people can be expected to rush for the exits. Invariably, some people are likely to be trampled and even killed. If people would instead react to the alarm by rising calmly from their seats and proceeding sequentially to the exits, then only the very last few people in the queue are destined not to make it. Losing one or two people out of (say) 500 is terrible, but it's preferable to losing 50 people in a mad rush for the exits. 

An economist might detect a missing market here. Why not sell tickets with queue positions (in the event of fire)? The tickets with the last few queue positions are likely to sell at a discount (one that would depend on the likelihood of the event). This way, if someone yells "fire!", customers will simply show their assigned queue position to the ushers and proceed calmly out of the exits. If the fire does exist, the last few people know they are doomed and accept their fate with stoic resignation, knowing that they are dying so that many others may live. (And if it turns out there is no fire, everyone is spared the deadly rush to the exits that would otherwise have occurred.) 

Except we know that's not what is likely to happen. People cannot be expected to commit in this manner. And there's no obvious way to enforce such contractual stipulations. 

But this is where SCs may have an advantage over conventional institutional structures. In particular, their use of "smart contracts" means that commitment is not an issue. The terms of such contracts are executed under the specified contingencies whether you like it or not. You may not like it ex post, but such commitment can be valuable ex ante. In the context of SCs, the credible threat of suspending redemptions in the event of abnormal redemption activity may actually prevent any runs from occurring in the first place.
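
As a concrete sketch, the suspension logic might be encoded along the following lines. This is a minimal Python illustration rather than an actual smart-contract language, and the class, the 20% per-period gate, and all names are my own assumptions, not a description of any existing stablecoin.

```python
# Minimal sketch of an automated redemption gate, in the spirit of
# Diamond-Dybvig suspension. All names and thresholds are illustrative.

class GatedStablecoin:
    def __init__(self, reserves, supply, gate_threshold=0.20):
        self.reserves = reserves              # USD reserves backing the coin
        self.supply = supply                  # coins outstanding
        self.redeemed_today = 0.0
        self.gate_threshold = gate_threshold  # max fraction redeemable per period
        self.suspended = False

    def redeem(self, amount):
        """Redeem coins at par unless the gate has been triggered."""
        if self.suspended:
            raise RuntimeError("redemptions suspended until next period")
        if (self.redeemed_today + amount) / self.supply > self.gate_threshold:
            # Abnormal redemption activity: suspension executes mechanically.
            self.suspended = True
            raise RuntimeError("gate triggered: redemptions suspended")
        self.redeemed_today += amount
        self.reserves -= amount
        self.supply -= amount
        return amount  # USD paid out at par

    def new_period(self):
        self.redeemed_today = 0.0
        self.suspended = False
```

Because the rule executes automatically under the specified contingency, holders without a genuine liquidity need have nothing to gain from joining a run—which is precisely the commitment the theory calls for.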

There are, of course, limits to what smart contracts can achieve. They wouldn't, for example, solve the movie theatre problem I just described. This is because people do not live "on-chain." (See also my blog post Smart Contracts and Asset Tokenization.) To some extent, the same issue exists for USD SCs because USDs and USTs exist "off-chain." Nevertheless, money accounts are different from people, so I think the principle described above can apply to financial products. 

Related work: Preventing Bank Runs (w/ Nosal and Sultanum, TE 2017).

Wednesday, September 29, 2021

EEA-ESEM Panel: Macroeconomic Consequences of the Pandemic

I was recently asked whether I'd like to share my thoughts on monetary policy in a post-pandemic world. Sure, why not? Thanks to Jan Eckhout for thinking of me. The panel was hosted by the European Economic Association last month and moderated by Diane Coyle. I was honored to speak alongside Ricardo Reis and Beata Javorcik, both of whom provided riveting presentations. For what it's worth, I thought I'd provide a transcript of my remarks here. 

Lunch Panel EEA-ESEM Copenhagen
August 25, 2021

I want to focus my discussion on the U.S. economy and from the perspective of a Fed official concerned with the challenges the Fed may face in fulfilling its Congressional mandates in a post-pandemic world.

First, to provide some context, let me offer a bit of history on policy and, in particular, on what I think were some policy mistakes. Let me begin with the 2008-09 financial crisis, which is something I think most people would agree should never have happened. Whether a sufficiently aggressive Fed lender-of-last-resort operation would have averted the crisis remains an open question. Even if it had been successful, such an operation would have had costs. It may, for example, have elicited an even greater political backlash than we saw at the time--and who knows how this may have manifested itself as undesirable changes to the FRA. As well, such an intervention may have just pushed mounting structural problems down the road. In particular, while it’s now clear that some private sector lending practices needed to change, it’s not clear where the incentive to do so would have come from absent a crisis. In any case, the crisis happened. How was it managed?

The ensuing recession was deep and the recovery dynamic very slow. The prime-age employment-to-population ratio did not reach its pre-recession level until 2019, a full decade later. Nevertheless, on the whole, I think the Fed followed an appropriate interest rate policy. There were one or two times the FOMC exhibited a little too much enthusiasm for “normalizing” policy, but I think the slow recovery dynamic had more to do with insufficient fiscal stimulus—especially at the state and local level—than with inappropriate monetary policy. The evidence for this can also be seen in the fact that inflation remained below the Fed’s 2% target for most of the time the policy rate was close to its ELB. The Fed has interpreted this low inflation episode as partly a monetary policy mistake, something its new AIT regime is designed to address. But my own view is that persistently low inflation—and the low money market yields that go along with it—has more to do with the supply and demand for U.S. Treasury securities. This is something the Fed does not have very much direct control over.

I know many people are skeptical of fiscal theories of the price-level, but in virtually every economic model I know, a fiscal anchor is necessary to pin down the long-run rate of inflation. Monetary policy—specifically, interest rate policy—can, of course, influence the price-level, so monetary policy can influence inflation dynamics. But it can do so only in the “short run.” Interest rate policy alone cannot, in my view, determine the long-run rate of inflation, at least, not without appropriate fiscal support.

Now, I know many of you may be asking how I can think fiscal policy has very much to do with inflation given how rapidly the debt has risen since the financial crisis and again with the C-19 crisis, all with little apparent pressure on long-run inflation expectations and on long-term bond yields. We should, however, keep in mind that an observed change in the quantity of an object may entail both supply and demand considerations. And one can easily point to several forces that have contributed to increases in the global demand for UST securities in recent decades. For example, the growing use of USTs as collateral in repo and credit derivatives markets beginning in the 1970s and accelerating through the 1980s. The growing demand for USTs as a safe store of value from EMEs. The evaporation of private-label safe assets during the financial crisis that left a gaping hole for USTs to fill. Next, we had a large increase in the regulatory demand for USTs coming out of Dodd-Frank and Basel III. The Fed’s SRF and FIMA facility should further enhance the demand for USTs. On top of all this, we’ve witnessed an emergent class of money funds called “stablecoins” that are further contributing to the demand for USTs. These forces have been disinflationary, leading bond investors to revise down their expectation of the future path of policy interest rates. It is interesting to ponder a counterfactual here. In particular, think of what may have transpired absent an accommodating U.S. fiscal policy. We may very well have experienced the mother of all deflations. If this is correct, then an elevated debt-to-GDP ratio, given a relatively stable inflation and interest rate structure, reflects an elevated real demand for outside assets. The problem is not that the debt-to-GDP ratio is going up. The problem is what disruptions might occur if it goes down owing to a sudden and unexpected inflation.

The recent rise in inflation is concentrated in durable goods, and I think is mostly attributable to ongoing supply-chain issues associated with the pandemic. This effect is likely to reverse itself, the way lumber prices recently have. Some of what I think is temporarily high inflation may not reverse itself, leading to a permanently higher price-level. In this case, households will worry whether their wages will keep pace with the higher cost of living. There is even the possibility—though I think less likely—that the rate of inflation itself will remain elevated and that inflation expectations will rise well above the Fed’s 2% target. This may happen, for example, if the traditional bipartisan support for fiscal anchoring in the new generation of Congressional representatives is perceived to wane, or if the global demand for safe assets slows. If either or both of these things happen and are persistent, then the Fed may find itself faced with what Sargent and Wallace called “unpleasant monetarist arithmetic.” That paper, which was published exactly 40 years ago, warned how tightening monetary policy without fiscal support might actually make inflation go higher rather than lower.

The implications for U.S. monetary policy are quite interesting should an event like this unfold. A determined Fed may try to fight inflation by raising its policy rate. The result is likely to be a temporary disinflation and recession.  Should fiscal policy remain unaltered, the logic provided by Sargent and Wallace implies that inflation will return even higher than before as the deficit must increase to finance a larger interest expense on the debt. The best the Fed can do in this case is to lower its policy rate, announce a temporarily higher inflation target, and hope that the fiscal authority gets its house in order. The notion that a Volcker-like policy would lower the long-run rate of inflation depends on fiscal capitulation. This capitulation to some extent did happen under Volcker, although keep in mind he had considerable Congressional support from both sides of the aisle. I do not think this type of political support is something one can count on, especially given today’s political climate. So, you may want to buckle up, as we may be in for some interesting times ahead.
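
To see the arithmetic at work, here is a back-of-the-envelope sketch of the consolidated budget constraint. The quantity-theory mapping from seigniorage to inflation is a deliberate simplification, and every parameter value below is an illustrative assumption of mine, not an estimate.

```python
# Back-of-the-envelope "unpleasant monetarist arithmetic."
# In a steady state, seigniorage must cover real interest on the debt net
# of the primary surplus: seigniorage = (r - g)*b - s. Inflation is then
# approximated by money growth, seigniorage / m. All numbers illustrative.

def long_run_inflation(r, g, b, s, m):
    """Inflation needed to finance the budget via money growth = seigniorage/m."""
    seigniorage = (r - g) * b - s
    return seigniorage / m

b, s, m, g = 1.0, 0.01, 0.10, 0.02   # debt/GDP, surplus/GDP, money/GDP, growth

pi_low  = long_run_inflation(r=0.03, g=g, b=b, s=s, m=m)   # modest real rate
pi_high = long_run_inflation(r=0.05, g=g, b=b, s=s, m=m)   # "determined Fed"

# With fiscal policy (b, s) held fixed, a higher policy rate ultimately
# requires more seigniorage -- that is, higher, not lower, long-run inflation.
assert pi_high > pi_low
```

The point of the exercise is only qualitative: holding the fiscal variables fixed, a permanently higher interest rate raises the interest expense that must ultimately be financed by money creation.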

Related Readings:

Is it Time for Some Unpleasant Monetarist Arithmetic? Link to blog post. Link to paper.

Monday, April 26, 2021

Labor Force Participation Gaps Between the U.S. and Canada

Am hearing some talk about whether the U.S. labor market can fully recover to its pre-Covid19 levels. Is it possible that a sizeable number of workers with marginal attachment to the workforce decide to remain out of the labor force? For example, this crisis, unlike the one that preceded it, has been associated with large increases in personal wealth. Workers on the cusp of retiring may now choose to do so earlier. (This is just one of the many stories I hear.)

For what it's worth, I thought I'd update my labor force participation blog post from 2013 which compared participation rates across Canada and the U.S., for males and females, and across different age categories. You can find my old post here along with some links to related posts. Below I report the updated data.

Remember, an individual is counted "in the labor force" if they are either employed (working) or unemployed (having actively searched for work in the 4 weeks prior to being surveyed about their labor market status). 

First up, prime-age women. 

The increase in female labor participation is well-known. Less well-known is how U.S. female participation rates have diverged from other countries (I think Canada is more similar to Europe, last time I checked).  Let's take a look at prime-age men.

For the U.S., a secular decline throughout the sample, with evidence of a short-lived rebound from 2016-2019. The Canada-U.S. gap emerged during the Great Recession and is now wider than it's ever been. This does suggest there's considerable room for an employment rebound for this demographic. But what accounts for the cross-country gap? 

I provide the other diagrams without commentary. 

Friday, April 23, 2021

On the Role and Future of Cryptocurrencies

My former colleague Howard Wall asked me to join Lawrence White yesterday evening to discuss the role and future of cryptocurrencies at an event hosted by the Hammond Institute for Free Enterprise. It was a great honor to share the stage with Larry. 

I've been thinking about cryptocurrencies for a long time; many of my writings and talks on the subject can be found here. My thoughts on the subject are evolving as I learn more about the phenomenon. For what it's worth, I thought I'd share my opening remarks with interested readers below. As usual, any feedback is welcome. 

The Role and Future of Cryptocurrencies

A money and payments system is about managing databases containing the money accounts of individuals and organizations. Any database management system must necessarily define read and write protocols. Read privileges specify who can view what on the database. Write privileges specify who gets to write what to the database.

So, for example, we can read what’s in our bank account. So can the bank and its regulators. But other people cannot see how much money is held in any account apart from their own. While we cannot write directly to our account, we can send our bank instructions to do so on our behalf. The bank can also write directly to our account. It may, for example, credit our account with interest, or debit it for service fees.
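
The read/write distinction can be made concrete with a toy ledger. The account names and permission rules below are illustrative choices of mine, not a description of any actual banking system.

```python
# Toy ledger illustrating read and write privileges (names are illustrative).

class Ledger:
    def __init__(self):
        self._balances = {}   # account -> balance

    def read(self, reader, account):
        """Owners, the bank, and regulators may view an account."""
        if reader in (account, "bank", "regulator"):
            return self._balances.get(account, 0.0)
        raise PermissionError(f"{reader} cannot read {account}'s account")

    def write(self, writer, account, amount):
        """Only the bank writes directly; customers send it instructions."""
        if writer != "bank":
            raise PermissionError("customers cannot write directly")
        self._balances[account] = self._balances.get(account, 0.0) + amount

    def instruct_transfer(self, sender, recipient, amount):
        """A customer instruction that the bank executes on the sender's behalf."""
        if self._balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient funds")   # blocks double-spending
        self.write("bank", sender, -amount)
        self.write("bank", recipient, amount)
```

Notice that the balance check inside `instruct_transfer` is exactly where a trusted third party solves the double-spend problem discussed below.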

Note that bank accounts are digital. Moreover, the messages we send to our bank over the Internet are secured with the aid of cryptography. In this sense, one could say that bank deposits are a form of cryptocurrency.

Bank deposits, however, are not typically viewed as cryptocurrencies. Well then, what are the distinguishing characteristics of a cryptocurrency? It’s not so clear-cut, but two things come to my mind. First, the database for a cryptocurrency is typically associated with an open-read privilege. This implies that the database can, in principle at least, be subject to audits from any person, or any agency, at any time, all the time. This property offers a degree of financial transparency that is unheard of in conventional money services businesses. Second, the task of managing the database is typically decentralized in some manner to “validator nodes,” or what one might label more mundanely as “accountants” in the non-crypto world. What is interesting here is how these validators are potentially recruited and compensated. For Bitcoin, anyone can potentially become a validator and compensation arrives in the form of a stochastic reward.

When it comes to keeping track of money balances, an open-write privilege is problematic. This is known as the double-spend problem. In conventional payment systems, the double-spend problem is solved by delegating database management to a trusted third party. A cryptocurrency like Bitcoin or Ethereum must instead rely on a consensus mechanism that somehow ensures that a dispersed write-privilege does not result in garbage being written to the database. To date, the most popular mechanisms are based on PoW (Proof of Work) and PoS (Proof of Stake). But there are others as well, and one should expect innovation along this dimension since, as far as I know, no existing consensus mechanism has yet proven to be entirely satisfactory.

Of course, the same can be said of conventional database management systems. To young eyes, the current system seems a hopelessly tangled mess of databases that have trouble communicating with each other. Moreover, they appear not to be very secure at times. But despite the problems we all encounter with the modern banking system, one should, in fairness, acknowledge the tremendous achievements that have taken place over the last fifty years. For example, we are now able to travel to foreign countries with just a credit card. This is not the way things worked until relatively recently. Anyone who has had the experience of needing traveler’s checks can fill you in on what it was like to travel in the old days.

Well, if there’s been so much progress in money and payments, what accounts for the emergence and proliferation of cryptocurrencies?

As is so often the case, I think the fundamental cause of this development is rapid technological change moving against a relatively slow-moving incumbency that includes banks, money services businesses, and especially their regulators. In saying this, I do not mean to assign blame; the inertial properties of existing institutional arrangements likely have some merit. Institutional inertia can be stabilizing, for example. But to benefit the communities they serve, institutions also have to evolve to meet the challenges of new technologies. And I think this is happening today in the sphere of money and payments.

What new technologies are we talking about? Innovations in communications, like the Internet, have been transformational. As well, there have been advances in data storage and cryptography that have played a critical role. All these innovations are, however, within the grasp of incumbent banks and money service businesses. And indeed, incumbents have made use of these technologies. Internet banking and PayPal are real things, after all. I think the important innovation as far as cryptocurrencies are concerned is the development of database management protocols that permit a degree of decentralization for managing large databases. I say “large” databases because we already have decentralized database management systems for small communities, like gift exchange or the exchange of favors among friends (see: Why the Blockchain Should be Familiar to You). Advances in data storage and communications have, in effect, permitted this ancient form of communal record-keeping to scale.

The decentralized or communal aspect of managing a database is, of course, very much at odds with the notion of delegating the responsibility to a privileged set of institutions. Some people believe that these developments will lead to a revolution—an overthrow of existing institutions—a triumph in democracy over a privileged class. What is much more likely is an evolution of existing institutions to accommodate the threat posed by the potential usurpers in a manner that serves the broader community. In short, what we are likely to witness is the usual pattern of economic development in relatively well-functioning societies.

What do cryptocurrencies offer individuals and society? What are the concerns of regulators and policymakers?

To answer these questions, we need to recognize that there are different classes of cryptocurrencies, each of which cater to a specific constituency. Broadly, they can be categorized as belonging to one of two groups distinguished by their respective exchange rate regimes and governance structures.

In one group, we have the decentralized autonomous organizations, like Bitcoin. From the perspective of domestic policymakers, Bitcoin can be viewed as foreign currency operating under a floating exchange rate regime. Except that there’s no negotiating with Bitcoin (there's no negotiating with some countries either). The intermediaries that deal or broker BTC transactions can, however, be regulated.

In the other group, we have the so-called stablecoins, like USD Coin (sponsored by Circle and Coinbase) and Diem (sponsored by Facebook). To domestic policymakers, stablecoins can be viewed as checkable mutual funds operating under a unilateral fixed exchange rate regime utilizing various forms of collateral. The major innovation here has less to do with technological innovation and more to do with the willingness and ability to process USD payments outside the commercial banking sector.

Viewed in this light, cryptocurrencies do not look so unfamiliar. As a foreign currency operating under a floating exchange rate, they’ll likely never displace the domestic unit of account. They may, however, serve as store-of-value or portfolio hedge. And they may facilitate certain kinds of payments, typically on-chain and large-value. As a stablecoin offering a par exchange rate, they suffer from all the usual problems of uninsured fractional reserve banking—unless they promise to back their currencies fully with USD cash.

The question here is whether these products are offering something fundamentally more cost-effective when it comes to making payments, or whether they owe their existence primarily to regulatory arbitrage. I do not know the answer to this question, but I suspect that much of what they have to offer comes from the latter. Diem, for example, can bypass banking regulations by not becoming a bank. It can leverage Facebook’s huge social network as a payment system connecting 2B+ users around the world. It can potentially offer money-transmitting services for “free” or, rather, in exchange for personal data. My guess is that banks (or even PayPal) are not permitted to operate in this manner. Regulatory advantage: Diem. 

Regulators need to keep a close eye on these structures since it is politically impossible to commit to the doctrine of caveat emptor when it comes to money and banking. The temptation, as always, will be to replace “cash” with higher-yielding “cash equivalents” on the balance sheet. The structure then slowly evolves into an uninsured fractional reserve bank, but in the shadow bank sector. If something goes wrong, depositors will seek compensation, first from the firm and then from the government. After all, how could a government knowingly permit such an unstable structure to exist in the first place?

To sum up, I think the future of cryptocurrencies like Bitcoin is to serve as an alternative asset class for investors. I doubt that it will ever become a dominant medium of exchange in any large economy. Fractional reserve banks using BTC as reserves are not likely to be tolerated.

The future of stablecoins seems more interesting to me. In the first instance, they seem capable of filling the gaps that remain apparent in modern day payment systems (think correspondent banking here). But the main effect here is likely to spur conventional banks and their regulators to fill these gaps at a faster pace. There is a possibility that a project like Diem might one day abandon its peg to the USD and offer itself as a stand-alone currency. Policymakers would in that case be concerned about a country maintaining monetary policy sovereignty. One manifestation of this concern could be a pre-emptive action on the part of the government, for example, by offering its own universally-accessible CBDC.


Wednesday, March 31, 2021

A Natural Rate of Interest

This post was motivated by a conversation with Eric Lonergan. It began with a simple question: what should be the interest rate paid on reserves? I answered that according to theories I'm familiar with, reserves should earn the "natural" rate of interest, which I defined as the sum of population and productivity growth. So, assuming 2% "real" growth and 2% inflation, reserves (and government debt more generally) should be yielding around 4%. 

I think it's fair to say most people did not find my answer very satisfying. So I thought I'd take a moment to explain how I arrived at it. I want to do so in the context of a model economy. Let me describe the model first. We can discuss its limitations and possible extensions later on. 

Consider an economy where people live for two periods; they are "young" and then become "old." Let N(t) denote the population of young at date t. Assume that the population grows at (gross) rate n; that is, N(t) = nN(t-1). In this "overlapping generations" model, the population at date t is given by N(t)+N(t-1). 

Individuals in this economy generate y units of perishable output (goods and services) when they are young. I'm going to treat y as fixed over time. This implies that the RGDP at date t is given by N(t)y and that the RGDP grows at rate n over time (there is no productivity growth). In what follows, I label n the "natural" rate of interest. 

Suppose that people only value consumption when they are old. This poses an interesting economic problem. The young can produce goods that the old value, but the old have no way of paying for these goods. Private credit markets don't work here. 

The cooperative solution is very simple: the young should "gift" their goods y to the old. If everyone followed this cooperative protocol, then the young of generation t would consume (in their old age) c(t+1) = N(t+1)y/N(t) = ny. 

That is, by following this protocol, it's as if the young "deposit" their income y in a savings account that generates a (gross) real yield equal to n, the "natural" rate of interest. 

Since private competitive markets cannot be expected to implement this socially-desirable outcome, what other mechanisms might be employed? In small communities, reciprocal gift exchange seems to work quite well. In the present context, the young look after their parents, expecting their children to return the favor, and so on. 

Larger communities need to rely on other mechanisms. In the present context, a PAYGO social security system that taxes the young y and pays the old ny would do the trick. 

The same outcome could be achieved through monetary exchange. Suppose the government lets all individuals open a central bank money account. The government creates (out of thin air) M dollars and credits the accounts of the "initial old" with M/N(0) dollars. Assume that M is kept constant over time. The old are expected to spend these dollars on a competitive spot market, where goods exchange for dollars at price p(t). 

The equilibrium price-level is easy to derive in this example. At any date t, we have N(t-1) old people collectively holding M dollars. These M dollars will be spent (the old have no reason not to) on the goods available for sale, N(t)y. The market-clearing condition here is M = p(t)N(t)y at every date t. Because both M and y are constant, and because population N(t) is growing at rate n, it follows that the equilibrium price-level p(t) must be falling at rate n. 
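
These mechanics are easy to check numerically. Here is a minimal sketch of the equilibrium (the parameter values are my own, chosen purely for illustration):

```python
# Monetary OLG equilibrium sketched above: a fixed money stock M,
# young cohorts growing at gross rate n, and market clearing
# M = p(t) * N(t) * y at every date. Illustrative parameters.
M, y, n = 1000.0, 1.0, 1.02
N = 100.0                        # N(0), initial cohort of young

prices = []
for t in range(5):
    prices.append(M / (N * y))   # market-clearing price level p(t)
    N *= n                       # next cohort is n times larger

# p(t) falls at rate n, so the gross real return on a dollar held
# from t to t+1 is p(t)/p(t+1) = n: money yields the natural rate.
real_returns = [prices[t] / prices[t + 1] for t in range(4)]
```

Every element of `real_returns` equals n = 1.02, which is the sense in which a constant money stock delivers the natural real rate of return through deflation.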

So, if we interpret M as "reserves" in this model economy, then reserves yielding the natural rate of interest would be consistent with economic efficiency. If reserves yield a zero nominal interest rate, then efficiency requires deflation at rate n. But the same outcome is possible if reserves were to yield a nominal interest rate of n in a zero-inflation regime. 

This result continues to hold for more general preferences. Suppose that people care about consumption when young and old. Then the young will only want to consume a fraction of their income. That fraction will depend on (among other things) the real rate of return they expect on their retirement savings. As it turns out, the so-called "Golden Rule" allocation requires that money/bonds yield the natural rate of interest. 

Is this a good place to start thinking about Eric's question? It may be a good place to start, but we don't want to stop here. The analytical framework above is "bare bones." Among other things, my analysis implicitly assumes that there is no difference between reserves and government treasury securities. Does this matter? If it does, the reasons need to be spelled out. What modifications to the simple model above would imply that, to meet a given social objective, it is desirable to have reserves yield less than treasury securities? In reality there is duration risk. But why does the Treasury issue bonds with different maturities in the first place? Moreover, why are these bonds purposefully rendered illiquid (for example, by discouraging the Fed from monetizing the entire bond issue or, at least, from setting up standing purchase facilities)? The answers to these questions are not immediately obvious to me. But they may be to you! 

Thursday, March 4, 2021

Is it time for some unpleasant monetarist arithmetic?

The title of this post alludes to a paper written by Tom Sargent and Neil Wallace 40 years ago "Some Unpleasant Monetarist Arithmetic." The startling conclusion of this paper is that a central bank (limited to interest rate policy and/or open market operations) does not have unilateral control over the long-run rate of inflation. The result is made all the more powerful by the fact that it relies mostly on arithmetic and only minimally on theory.
So, what's the basic idea? First, begin with the fact that monetary and fiscal policy are inextricably linked via a consolidated government budget constraint. This implies that monetary policy will have fiscal consequences. In particular, interest-rate policy affects the interest expense associated with rolling over any given amount of government debt. The question is how the fiscal authority intends to finance this interest expense. There are two basic ways it can do this: (1) use the primary surplus (increase taxes and/or cut spending); or (2) issue debt. The first option is associated with what economists call a Ricardian fiscal policy; the second option is associated with a Non-Ricardian fiscal policy. 
A central bank has no control over (1) or (2); these are determined by the tax and spend decisions made by the fiscal authority. Suppose the fiscal authority chooses (2). If so, then what is accomplished by increasing the policy rate? Ceteris paribus, it increases the rate at which nominal debt is issued. This is arithmetic. If nominal debt is money (and it is), then this must lead to higher inflation, not lower inflation. 
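
The arithmetic can be made concrete with a deliberately stark sketch (my own stylization, not Sargent and Wallace's model): suppose interest expense is always rolled into new debt, and the real demand for government debt grows with real GDP.

```python
# Under a Non-Ricardian policy, nominal debt grows at the gross policy
# rate (1 + i) because interest expense is financed by issuing more
# debt. If the real demand for that debt grows with real GDP at gross
# rate (1 + g), the price level must grow at (1 + i) / (1 + g):
# raising the policy rate raises long-run inflation. Numbers are
# illustrative.
def long_run_inflation(i, g):
    """Gross inflation implied by fully debt-financed interest expense."""
    return (1 + i) / (1 + g)

g = 0.02                                  # real growth rate
for i in (0.02, 0.04, 0.06):              # alternative policy rates
    pi = long_run_inflation(i, g) - 1.0
    print(f"policy rate {i:.0%} -> long-run inflation {pi:.2%}")
```

In this stark setting, a policy rate equal to the real growth rate delivers price stability, and each additional point on the policy rate adds (roughly) a point of long-run inflation.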
The corollary here is that a central bank has no unilateral control over the long-run rate of inflation. A central bank may be permitted to choose a long-run inflation target, but only with the blessing of the fiscal authority. Canadians know this: the Bank of Canada and the Government of Canada meet every five years to review their joint inflation-control agreement. In other jurisdictions, the central bank simply assumes that fiscal policy will be conducted in a "responsible" manner (i.e., a manner that will not un-anchor long-run inflation expectations). 
If the primary deficit is managed in a manner that anchors long-run inflation, then the central bank is left free to use interest rate policy for the purpose of stabilizing shocks to aggregate demand. To stabilize the inflation rate around target requires, in this context, that the central bank raise its policy rate aggressively against above-target inflation. (Hopefully, even the threat of such a response keeps inflation close to target. This is the so-called Taylor principle.) 
The question monetary policymakers might want to mull over these days is whether this standard policy prescription is appropriate in an environment where the traditional fiscal support for inflation-targeting seems to be waning. I want to be clear here: I am not saying this is happening today or that it will happen in the future. I'm saying that it might happen and that, if it does, monetary policymakers should have a contingency plan in place. What should this contingency plan look like? 
So, to take a concrete example, suppose that the tax and spend decisions coming from Congress imply an elevated primary deficit for the foreseeable future. Perhaps there's been a "regime change" in thinking that transcends political parties so, no matter who controls Congress, the expectation is for elevated primary deficits for as long as we can see. (I am not suggesting this is good or bad, I'm just saying suppose.)
Next, suppose the economy is humming along at or near what anyone would call "full employment." And then suppose inflation rises to 3, 4, 5% or higher and stays there with no sign of ever wanting to return to the Fed's official 2% long-run inflation target. What should the FOMC do in this hypothetical scenario? (Again, consider this as the type of thought-experiment that is necessary to form a contingency plan -- I do not mean to suggest that this scenario is likely, only that it is possible--and not in a Dumb and Dumber way). 
The monetary policy advice coming from a model like Sargent and Wallace (1981) might suggest something like this: For as long as Congress remains in a regime of high primary deficits

[1] Keep policy rate low, or even lower it, if possible; and 
[2] Announce a temporarily higher inflation target (consistent with the new fiscal regime).
Recommendation [1] comes from Unpleasant Monetarist Arithmetic. Increasing the interest rate in this fiscal regime will only lead to higher inflation. Lowering the interest rate has the opposite effect. If recommendation [2] is not adopted, the monetary authority would have to explain (after every meeting) why it is missing its 2% inflation target. They might, of course, just say it's "temporary," but this would wear thin after a few years. 

This advice is based on the assumption that everyone knows there's been a change in fiscal regime and that it will be persistent. What if no one is really sure there's been a regime change or, if there has been, how long it might last? 
The prudent thing to do in this more realistic case is to hedge your bets. In terms of [1], one might recommend raising the policy rate, but not by as much as would normally be done given the observed inflationary pressure. In terms of [2], one could probably get away with maintaining the long-run inflation target at 2% and legitimately explaining away deviations from target as "transitory." 
Probably the last thing monetary policy should do under these circumstances is to raise the policy rate aggressively against inflationary pressure (as recommended by the Taylor principle). The Taylor rule works fine under a Ricardian fiscal policy. But it may backfire under a Non-Ricardian fiscal policy--this was the whole point of Sargent and Wallace (1981). 
Tightening monetary policy might have the effect of bringing inflation down temporarily (this is consistent with the Sargent and Wallace model). But in reality (and in some models, like here and here), this would come at the cost of economic recession. 
I can see no rationale for creating a recession to bring inflation down temporarily. But there may be a political-economy rationale for the threat of such a policy. That is, a Congress that does not trust future Congresses may want to create an independent (but accountable) central bank to pursue a low-inflation mandate and to do whatever it can with interest rate policy to achieve that mandate, even at the cost of recession. Future Congresses are in this manner obliged to behave in a Ricardian manner (so only temporary deficits permitted), which has the effect of anchoring long-run inflation. 

Well, maybe. But I can't help but think of Dr. Strangelove here. 
PS. I recently gave a talk on this that you can view here beginning at the 3 hour 12 minute mark.

Sunday, February 21, 2021

A Journey in Macroeconomic Thinking

I've been thinking a bit lately about theories of the business cycle (a lot of time for reflection in these days of COVID-19). At least, the way some of these theories have evolved over my lifetime and from the perspective of my own training in the field. From my (admittedly narrow) perspective as a researcher and advisor at a central bank, the journey beginning c. 1960 seems like it's taken the following steps: (1) the Phillips Curve and the Natural Rate Hypothesis; (2) Real Business Cycle (RBC) theory; (3) New Keynesian theory. It seems like we might be ready to take the next step. I'll offer some thoughts on this at the end, for whatever they're worth. 

There's no easy way to summarize the state of macroeconomic thinking, of course. But it seems clear that, at any given time, some voices and ways of thinking are more dominant than others. By the time the 1960s rolled around, there seemed to be a consensus that monetary and fiscal policy should be used to stabilize the business cycle. The main issue, in this regard, revolved over which set of instruments was better suited for the job. (See, for example, this classic debate between Milton Friedman and Walter Heller). 

Central to macroeconomic thinking at the time was a concept called the Phillips Curve (PC). There is a subtle, but important, distinction to make here between the PC as a statistical correlation and the PC as a theory of that statistical relationship. In 1958, Phillips noticed an interesting pattern in the data: nominal wage growth seemed negatively correlated with the unemployment rate in the U.K. over the period 1913-48 (see diagram to the right). How to interpret this correlation? One theory is that when the unemployment rate is high, workers are easy to find and their bargaining position is weak, leading to small nominal wage gains. Conversely, when unemployment is low, available workers are scarce and their bargaining position is strong, leading to large nominal wage gains. 

Then, in 1960, Paul Samuelson and Robert Solow wrote their classic piece "Problem of Achieving and Maintaining a Stable Price-Level: Analytical Aspects of Anti-Inflation Policy." Then, as now, the authors lamented the lack of consensus on a theory of price inflation. Various cost-push and demand-pull hypotheses were reviewed, problems of identification noted, and calls for micro-data to help settle the issue were made. They also mentioned Phillips' article and noted how the same diagram for the U.S. looked like a shot-gun blast (little correlation, except for some sub-samples). They then restated the Phillips curve in terms of price inflation instead of wage inflation. No data was sacrificed in this exercise; their "theory" was summarized with the diagram to the left. 

I put "theory" in quotes in the passage above because the theory (explanation) was never clear to me. In particular, while I could see how an increase in the rate of unemployment might depress the level wage, I could not grasp how it could influence the rate of growth of wages for any prolonged period of time. This logical inconsistency was solved by the Phelps-Friedman natural rate hypothesis; see Farmer (2013) for a summary and critique. 

The TL;DR version of this hypothesis is that the PC is negatively sloped only in the short-run, but vertical in the long-run. So, while monetary policy (increasing the rate of inflation) could lower the unemployment rate below its natural rate, it could only do so temporarily. Eventually, the unemployment rate would move back to its natural rate at the higher rate of inflation. This hypothesis seemed to provide a compelling interpretation of the stagflation (high inflation and high unemployment) experienced in the 1970s. It also seemed to explain the success of Volcker's disinflation policy in the 1980s. Nevertheless, uneasiness in the state of the theory remained and a new (well, nothing is ever completely new) way of theorizing was on the horizon.
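
The hypothesis can be illustrated with a toy expectations-augmented Phillips curve under adaptive expectations (the parameterization here is mine, purely for illustration):

```python
# Inverted short-run Phillips curve with adaptive expectations:
#   u(t)      = u_star - b * (pi(t) - pi_e(t))
#   pi_e(t+1) = pi(t)
# A permanent increase in inflation lowers unemployment only while
# expectations lag behind; once they catch up, u returns to u_star.
u_star, b = 0.05, 0.5     # natural rate and slope (illustrative)
pi, pi_e = 0.06, 0.02     # policy holds inflation at 6%; pi_e lags

path = []
for t in range(6):
    u = u_star - b * (pi - pi_e)   # short-run trade-off
    path.append(round(u, 4))
    pi_e = pi                      # expectations adapt
print(path)  # [0.03, 0.05, 0.05, 0.05, 0.05, 0.05]
```

Unemployment dips below the natural rate for exactly one period and then returns to it at the permanently higher inflation rate: the short-run curve is negatively sloped, the long-run curve vertical.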

By the time I got to grad school in the late 1980s, "real business cycle theory" was in vogue; see Charles Plosser's summary here and Bob King's lecture notes here

There was a lot going on with this program. A central thesis of RBC theory is that the phenomena of economic growth and business cycles are inextricably linked. This is, of course, an old idea in economics, going back at least to Dennis Robertson (see this review by Charles Goodhart) and explored extensively by a number of Austrian economists, like Joseph Schumpeter. 

The idea that "the business cycle" is to some extent a byproduct of the process of economic development is an attractive hypothesis. Economic growth is driven by technological innovation and diffusion, and perhaps regulatory policies. There is no a priori reason to expect these "real" processes to evolve in a "smooth" manner. In fact, these changes appear to arrive randomly and with little or no mean-reverting properties. It would truly be a marvel if the business cycle did not exist. 

The notion of "no mean-reverting properties" is important. It basically means that technology/policy shocks are largely permanent (or at least, highly persistent). If macroeconomic variables like the GDP inherit this property, then a "cycle"--the tendency for a variable to return to some long-run trend--does not even exist (and if you think you see it, it's only a figment of your imagination). For this reason, early proponents of RBC theory preferred the label "fluctuations" over "cycle." This view was supported by the fact that econometricians had a hard time rejecting the hypothesis that the real GDP followed a random walk (with drift). For example, here is Canadian GDP plotted against two realizations of a random walk with drift:

This perspective fermented at a time when the cost of computing power was falling dramatically. This permitted economists to study models that were too complicated to analyze with conventional "pencil and paper" methods. Inspiration was provided by Lucas (1980), who wrote:

Our task, as I see it…is to write a FORTRAN program that will accept specific economic policy rules as ‘input’ and will generate as ‘output’ statistics describing the operating characteristics of time series we care about, which are predicted to result from these policies.

And so that's what people did. But what sort of statistics were model economies supposed to reproduce? Once again, it was Lucas (1976) who provided the needed guidance. The empirical business cycle regularities emphasized by Lucas were "co-movements" between different aggregate time-series. Employment, for example, is "pro-cyclical" (tends to move in the same direction as GDP) around "trend." These types of regularities can be captured by statistics like correlations. But these correlations (and standard deviations) only make sense for stationary time-series, and the data is mostly non-stationary. So, what to do? 

Transforming the data through first-differencing (i.e., looking at growth rates instead of levels) is one way to render (much of) the data stationary. Another approach was made popular by Prescott (1986), who advocated a method that most people employ: draw a smooth line through the data, label it "trend," and then examine the behavior of "deviations from trend." Something like this, 

It's important to note that Prescott viewed the trend line in the figure above as "statistical trend," not an "economic trend." To him, there was no deterministic trend, since the data was being generated by a random walk (so, the actual trend is stochastic). Nevertheless, drawing a smooth trend line was a useful way to render the data stationary. The idea was to apply the same de-trending procedure to actual data and simulated data, and then compare statistical properties across model and data.

The point of mentioning this is that no one involved in this program was conditioned to interpret the economy as "overheating" or in "depression." Growing economies exhibited fluctuations--sometimes big and persistent fluctuations. The question was how much of these observed fluctuations could be attributed purely to the process of economic development (technological change), without reference to monetary or financial factors? I think it's fair to say that the answer turned out to be "not much, at least, not at business cycle frequencies." The important action seemed to occur at lower frequencies. Lucas (1988) once again provided the lead when he remarked "Once one starts to think about growth, it is hard to think about anything else." And so, the narrow RBC approach turned its attention to low-frequency dynamics; e.g., see my interview with Lee Ohanian here

Of course, many economists never bought into the idea that monetary and financial factors were unimportant for understanding business cycles. Allen and Gale, for example, schooled us on financial fragility; see here. But this branch of the literature never really made much headway in mainstream macro, at least, not before 2008. Financial crises were something that happened in history, or in other parts of the world. Instead, macroeconomists looked back to the field's roots in the 1960s and embedded a version of the PC into an RBC model to produce what is now known as the New Keynesian framework. Short-run money non-neutrality was achieved by assuming that nominal price-setting behavior was subject to frictions, rendering nominal prices "sticky." In this environment, shocks to the economy are not absorbed efficiently, at least, not in the absence of an appropriate monetary policy. And so, drawing inspiration from John Taylor and Michael Woodford, the framework added an interest rate policy rule now known as the Taylor rule. Today, the basic NK model consists of these three core elements:

[1] An IS curve: Relates aggregate demand to the real interest rate and shocks.
[2] A Phillips Curve: Relates the rate of inflation (around trend) to the output gap.
[3] A Taylor Rule: Describes how interest rate policy reacts to output and inflation gaps.
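
As a purely pedagogical sketch, here is a backward-looking caricature of those three equations (the textbook NK model is forward-looking, and these parameter values are mine):

```python
# [1] IS:     x(t)    = -sigma * (i(t) - pi(t) - r_star)
# [2] PC:     pi(t+1) = pi(t) + kappa * x(t)
# [3] Taylor: i(t)    = r_star + pi(t) + phi * (pi(t) - pi_star)
# With phi > 0 the real rate rises when inflation is above target
# (the Taylor principle), and inflation converges back to pi_star.
sigma, kappa, phi = 1.0, 0.3, 0.5
r_star, pi_star = 0.02, 0.02

pi = 0.05                                    # start above target
for t in range(40):
    i = r_star + pi + phi * (pi - pi_star)   # Taylor rule
    x = -sigma * (i - pi - r_star)           # IS curve: output gap
    pi = pi + kappa * x                      # Phillips curve
```

In this toy economy the inflation gap decays by a factor (1 − σκφ) = 0.85 each period, so after 40 periods `pi` is essentially back at the 2% target; reverse the sign of φ and the gap explodes instead.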

I have to be honest with you. I never took a liking to the NK model. I'm more of an Old Keynesian, similar to Roger Farmer (we share the same supervisor, so perhaps this is no accident). In any case, the NK framework became (and continues to be) a core thought-organizing principle for central bank economists around the world. It has become a sort of lingua franca in academic macro circles. And if you don't know how to speak its language, you're going to have a hard time communicating with the orthodoxy. 

Of the three basic elements of the NK model, I think the NK Phillips Curve (which embeds the natural rate hypothesis) has resulted in the most mischief; at least, from the perspective of advising the conduct of monetary policy. The concept is firmly embedded in the minds of many macroeconomists and policymakers. Consider, for example, Greg Mankiw's recent piece "Yes, There is a Trade-Off Between Inflation and Unemployment."

Today, most economists believe there is a trade-off between inflation and unemployment in the sense that actions taken by a central bank push these variables in opposite directions. As a corollary, they also believe there must be a minimum level of unemployment that the economy can sustain without inflation rising too high. But for various reasons, that level fluctuates and is difficult to determine.

 The Fed’s job is to balance the competing risks of rising unemployment and rising inflation. Striking just the right balance is never easy. The first step, however, is to recognize that the Phillips curve is always out there lurking.

The Phillips curve is always lurking. The message for a central banker is "sure, inflation and unemployment may be low for now, but if we keep monetary policy where it is and permit the unemployment rate to fall further, we will risk higher inflation in the future." I'm not sure if economists who write in this manner are aware that they're making it sound like workers are somehow responsible for inflation. Central banker to workers: "I'm sorry, but we need to keep some of you unemployed to contain the inflation, you see." 

There is evidence that this line of thinking influenced the FOMC in 2015 in its decision to "lift off" and return the policy rate to some historically normal level; see my post here explaining the pros and cons in the lift-off debate. By the start of 2014, there was considerable pressure on the Fed to begin "normalizing" its policy rate. By mid 2014, the expectation of "lift off" likely contributed to significant USD appreciation and the economic weakness that followed. If I recall correctly, Vice Chair Stan Fischer started off the year by announcing that four rate hikes for 2015 were in order (as it turned out, the Fed only raised rates once--in December). To some observers, this all seemed very strange. After all, the unemployment rate was still above its estimated "natural" rate (5%) and inflation continued to undershoot its 2% target. What was going on?

What was going on was the Phillips curve. Here is Chair Yellen at the March 17-18, 2015 FOMC meeting (transcript available here):

If we adopt alternative B, one criterion for an initial tightening is that we need to be reasonably confident that inflation will move back to 2 percent over the medium term. For the remainder of this year, my guess is that it will be hard to point to data demonstrating that inflation is actually moving up toward our objective. Measured on a 12-month basis, both core and headline inflation will very likely be running below 1½ percent all year. That means that if we decide to start tightening later this year, a development that I think is likely, we will have to justify our inflation forecasts using indirect evidence, historical experience, and economic theory.
The argument from history and economic theory seems straightforward. Experience here and abroad teaches us that, as resource utilization tightens, eventually inflation will begin to rise. To me, this seems like a simple matter of demand and supply. So the more labor and product markets tighten, the more confident I’ll become in the inflation outlook. Because of the lags in monetary policy, the current high degree of monetary accommodation, and the speed at which the unemployment rate is coming down, it would, to my mind, be imprudent to wait until inflation is much closer to 2 percent to begin to normalize policy. I consider this a strong argument for an initial tightening with inflation still at low levels, and it’s one that I plan to make. But I also recognize and am concerned that, at least in recent years, the empirical relationship between slack and inflation has been quite weak.
Now, I don't want to make too much of this particular episode. Personally, I don't think it had a major impact on the recovery dynamic. But I do think it had an impact; in particular, the pace of improvement in labor market conditions temporarily slowed. It was an unforced error (as I think other members of the Committee sensed as well). 
I think the lift-off episode has contributed to a general re-thinking of the Phillips curve and the natural rate hypothesis. The notion of an economy operating at "excess capacity" has always seemed a bit strange to me, let alone the idea of excess capacity as a cause of inflation (as opposed to a force operating on the price-level). Perhaps it is time to re-visit Milton Friedman's "plucking model." Instead of drawing a smooth line through the center of a time-series, Friedman drew a line that defined a ceiling (a capacity constraint). Shocks to the economy manifest themselves as "downward plucks" (as if plucking on an elastic band). 

The plucking model is consistent with the observed cyclical asymmetry in unemployment rate fluctuations. And labor market search models are a natural way to model that asymmetry. In case you're interested, I develop a super-simple (and dare I say, elegant) search model here to demonstrate (and test) the idea: "Evidence and Theory on the Cyclical Asymmetry in Unemployment Rate Fluctuations" (CJE, 1997). See also my blog post here as well as some recent work by Ferraro (RED, 2018) and Dupraz, Nakamura and Steinsson (2019). I like where this is going! 
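
A caricature of the plucking mechanism is easy to simulate (all numbers here are mine and purely illustrative):

```python
# Friedman's plucking model as a caricature: unemployment sits near a
# floor (the "ceiling" on output), is occasionally plucked sharply
# upward by an adverse shock, and then glides slowly back down. The
# result is the cyclical asymmetry discussed above: increases are
# large and sudden, decreases small and gradual.
import random

random.seed(7)
u_min, recovery = 0.04, 0.005           # floor and per-period recovery
u = u_min
path = []
for t in range(200):
    if random.random() < 0.05:          # occasional adverse "pluck"
        u += random.uniform(0.02, 0.05) # sharp spike in unemployment
    else:
        u = max(u_min, u - recovery)    # slow glide back toward floor
    path.append(u)

ups = [b - a for a, b in zip(path, path[1:]) if b > a]
downs = [a - b for a, b in zip(path, path[1:]) if b < a]
```

By construction, every increase in `ups` is at least 0.02 while every decrease in `downs` is at most 0.005, which is the spike-up/glide-down shape we see in unemployment data.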

One attractive feature of search models, in my view, is that they model relationship formation. Relationships provide a very different mechanism for coordinating economic activity relative to the canonical economic view of anonymous spot exchange in centralized markets. In a relationship, spot prices do not matter as much as the dynamic path of these prices (and other important aspects) over the course of a relationship (see my critique of the sticky price hypothesis here). The observation that retailers, in the early days of C-19, voluntarily rationed goods instead of raising prices makes little sense in anonymous spot exchange, but makes perfect sense for a merchant concerned with maintaining a good relationship with his or her customers. And merchant-supplier relationships can handle shortages without price signals (we're out of toilet paper--please send more!). In financial markets too, the amount of time that is spent forming and maintaining credit relationships is hugely underappreciated in economic modeling. Search theory turns out to be useful for interpreting the way money and bond markets work too. These markets are not like the centralized markets we see modeled in textbooks--they operate as decentralized over-the-counter (OTC) markets, where relationships are key. One reason why economies sometimes take so long to recover after a shock is because the shock has destroyed an existing set of relationships. And it takes time to rebuild relationship capital.

Notions of "overheating" in this context probably do not apply to labor market variables, although there is still the possibility of an overaccumulation of certain types of physical capital in a boom (what the Austrians label "malinvestment"). Any "overheating" is likely to manifest itself primarily in asset prices. And sudden crashes in asset prices (whether driven by fundamentals or not), can have significant consequences on real economic activity if asset valuations are used to support lines of credit. 

Finally, we need a good theory of inflation. The NKPC theory of inflation is not, in my view, a completely satisfactory theory in this regard. To begin, it simply assumes that the central bank can target a long-run rate of inflation (implicitly, with the support of a Ricardian fiscal policy, though this is rarely, if ever, mentioned). At best, it is a theory of how inflation can temporarily depart from its long-run target and how interest rate policy can be used to influence transition dynamics. But the really interesting questions, in my view, have to do with monetary and fiscal policy coordination and what this entails for the ability of an "independent" central bank even to determine the long-run rate of inflation (Sargent and Wallace, 1981).  

I know what I've described only scratches the surface of this amazingly deep and broad field. Most of you have no doubt lived through your own process of discovery and contemplation in the world of macroeconomic theorizing. Feel free to share your thoughts below.