Believe those who are seeking the truth. Doubt those who find it. -- André Gide
Wednesday, September 29, 2021
I was recently asked whether I'd like to share my thoughts on monetary policy in a post-pandemic world. Sure, why not? Thanks to Jan Eeckhout for thinking of me. The panel was hosted by the European Economic Association last month and moderated by Diane Coyle. I was honored to speak alongside Ricardo Reis and Beata Javorcik, both of whom provided riveting presentations. For what it's worth, I thought I'd provide a transcript of my remarks here.
Lunch Panel EEA-ESEM Copenhagen
August 25, 2021
I want to focus my discussion on the U.S. economy and from the perspective of a Fed official concerned with the challenges the Fed may face in fulfilling its Congressional mandates in a post-pandemic world.
First, to provide a bit of context, let me offer a bit of history on policy and, in particular, on what I think were some policy mistakes. Let me begin with the 2008-09 financial crisis, which is something I think most people would agree should never have happened. Whether a sufficiently aggressive Fed lender-of-last-resort operation would have averted the crisis remains an open question. Even if it had been successful, such an operation would have had costs. It may, for example, have elicited an even greater political backlash than we saw at the time--and who knows how this may have manifested itself as undesirable changes to the Federal Reserve Act (FRA). As well, such an intervention may have just pushed mounting structural problems down the road. In particular, while it’s now clear that some private sector lending practices needed to change, it’s not clear where the incentive to do so would have come from absent a crisis. In any case, the crisis happened. How was it managed?
The ensuing recession was deep and the recovery very slow. The prime-age employment-to-population ratio did not reach its pre-recession level until 2019, a full decade later. Nevertheless, on the whole, I think the Fed followed an appropriate interest rate policy. There were one or two times the FOMC exhibited a little too much enthusiasm for “normalizing” policy, but I think the slow recovery dynamic had more to do with insufficient fiscal stimulus—especially at the state and local level—than with inappropriate monetary policy. The evidence for this can also be seen in the fact that inflation remained below the Fed’s 2% target for most of the time the policy rate was close to its ELB. The Fed has interpreted this low inflation episode as partly a monetary policy mistake, something its new AIT regime is designed to address. But my own view is that persistently low inflation—and the low money market yields that go along with it—has more to do with the supply of and demand for U.S. Treasury securities. This is something the Fed does not have very much direct control over.
I know many people are skeptical of fiscal theories of the price-level, but in virtually every economic model I know, a fiscal anchor is necessary to pin down the long-run rate of inflation. Monetary policy—specifically, interest rate policy—can, of course, influence the price-level, so monetary policy can influence inflation dynamics. But it can do so only in the “short run.” Interest rate policy alone cannot, in my view, determine the long-run rate of inflation, at least, not without appropriate fiscal support.
Now, I know many of you may be asking how I can think fiscal policy has very much to do with inflation given how rapidly the debt has risen since the financial crisis and again with the C-19 crisis, all with little apparent pressure on long-run inflation expectations and on long-term bond yields. We should, however, keep in mind that an observed change in the quantity of an object may entail both supply and demand considerations. And one can easily point to several forces that have contributed to increases in the global demand for UST securities in recent decades. For example, the growing use of USTs as collateral in repo and credit derivatives markets beginning in the 1970s and accelerating through the 1980s. The growing demand for USTs as a safe store of value from EMEs. The evaporation of private-label safe assets during the financial crisis that left a gaping hole for USTs to fill. Next, we had a large increase in the regulatory demand for USTs coming out of Dodd-Frank and Basel III. The Fed’s SRF and FIMA facility should further enhance the demand for USTs. On top of all this, we’ve witnessed an emergent class of money funds called “stablecoins” that are further contributing to the demand for USTs. These forces have been disinflationary, leading bond investors to revise down their expectation of the future path of policy interest rates. It is interesting to ponder a counterfactual here. In particular, think of what may have transpired absent an accommodating U.S. fiscal policy. We may very well have experienced the mother of all deflations. If this is correct, then an elevated debt-to-GDP ratio, given a relatively stable inflation and interest rate structure, reflects an elevated real demand for outside assets. The problem is not that the debt-to-GDP ratio is going up. The problem is what disruptions might occur if it goes down owing to a sudden and unexpected inflation.
The recent rise in inflation is concentrated in durable goods, and I think is mostly attributable to ongoing supply-chain issues associated with the pandemic. This effect is likely to reverse itself, the way lumber prices recently have. Some of what I think is temporarily high inflation may not reverse itself, however, leading to a permanently higher price-level. In this case, households will worry whether their wages will keep pace with the higher cost of living. There is even the possibility—though I think it less likely—that the rate of inflation itself will remain elevated and that inflation expectations will rise well above the Fed’s 2% target. This may happen, for example, if the traditional bipartisan support for fiscal anchoring among the new generation of Congressional representatives is perceived to wane, or if the global demand for safe assets slows. If either or both of these things happen and are persistent, then the Fed may find itself faced with what Sargent and Wallace termed an “unpleasant monetarist arithmetic.” That paper, which was published exactly 40 years ago, warned how tightening monetary policy without fiscal support might actually make inflation go higher rather than lower.
The implications for U.S. monetary policy are quite interesting should an event like this unfold. A determined Fed may try to fight inflation by raising its policy rate. The result is likely to be a temporary disinflation and recession. Should fiscal policy remain unaltered, the logic provided by Sargent and Wallace implies that inflation will return even higher than before, as the deficit must increase to finance a larger interest expense on the debt. The best the Fed can do in this case is to lower its policy rate, announce a temporarily higher inflation target, and hope that the fiscal authority gets its house in order. The notion that a Volcker-like policy would lower the long-run rate of inflation depends on fiscal capitulation. This capitulation did, to some extent, happen under Volcker, although keep in mind he had considerable Congressional support from both sides of the aisle. I do not think this type of political support is something one can count on, especially given today’s political climate. So, you may want to buckle up, as we may be in for some interesting times ahead.
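The unpleasant-arithmetic logic can be made concrete with a back-of-the-envelope calculation. This is my own stylized sketch, not anything from the Sargent-Wallace paper: with the primary deficit held fixed, a higher policy rate raises interest expense on the debt, and if the fiscal authority does not adjust, the extra expense must eventually be financed by money creation, i.e., higher long-run inflation. All numbers below are illustrative.

```python
# Stylized steady-state government budget arithmetic (illustrative numbers).
# Quantities are expressed as shares of GDP.

def long_run_inflation(policy_rate, debt_to_gdp, primary_deficit,
                       money_to_gdp, growth=0.02):
    # Steady state: seigniorage must cover the primary deficit plus
    # real debt service (r - g) * debt.
    seigniorage = primary_deficit + (policy_rate - growth) * debt_to_gdp
    # Seigniorage is approximately inflation times real money balances.
    return seigniorage / money_to_gdp

low  = long_run_inflation(policy_rate=0.03, debt_to_gdp=1.0,
                          primary_deficit=0.03, money_to_gdp=0.5)
high = long_run_inflation(policy_rate=0.06, debt_to_gdp=1.0,
                          primary_deficit=0.03, money_to_gdp=0.5)

# Absent fiscal support, tightening raises the long-run inflation rate.
assert high > low
```

The point of the sketch is only the comparative static: holding the primary deficit fixed, a higher policy rate requires more seigniorage in steady state, not less.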
Monday, April 26, 2021
Am hearing some talk about whether the U.S. labor market can fully recover to its pre-Covid19 levels. Is it possible that a sizeable number of workers with marginal attachment to the workforce decide to remain out of the labor force? For example, this crisis, unlike the one that preceded it, has been associated with large increases in personal wealth. Might workers on the cusp of retiring now choose to do so earlier? (This is just one of the many stories I hear.)
For what it's worth, I thought I'd update my labor market participation blog post from 2013, which compared participation rates across Canada and the U.S., for males and females, and across different age categories. You can find my old post here along with some links to related posts. Below I report the updated data.
Remember, an individual is counted "in the labor force" if they are either employed (working) or unemployed (not working, but actively searched for work in the 4 weeks prior to the labor force survey).
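The classification above can be sketched in a few lines of code. This is a simplification of the survey logic, not the full set of criteria actual labor force surveys use:

```python
# Simplified labor-force classification and participation rate,
# following the definition in the text (illustrative only).

def labor_force_status(is_employed, searched_past_4_weeks):
    if is_employed:
        return "employed"
    if searched_past_4_weeks:
        return "unemployed"            # jobless but actively searching
    return "not in labor force"        # jobless and not searching

def participation_rate(statuses):
    # Participation = (employed + unemployed) / population
    in_lf = sum(s in ("employed", "unemployed") for s in statuses)
    return in_lf / len(statuses)

# Four hypothetical survey respondents:
people = [(True, False), (False, True), (False, False), (True, True)]
statuses = [labor_force_status(e, s) for e, s in people]
assert participation_rate(statuses) == 0.75  # 3 of 4 are in the labor force
```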
First up, prime-age women.
The increase in female labor participation is well-known. Less well-known is how U.S. female participation rates have diverged from other countries (I think Canada is more similar to Europe, last time I checked). Let's take a look at prime-age men.
For the U.S., a secular decline throughout the sample, with evidence of a short-lived rebound from 2016-2019. The Canada-U.S. gap emerged during the Great Recession and is now wider than it's ever been. This does suggest there's considerable room for an employment rebound for this demographic. But what accounts for the cross-country gap?
I provide the other diagrams without commentary.
Friday, April 23, 2021
I've been thinking about cryptocurrencies for a long time; many of my writings and talks on the subject can be found here. My thoughts on the subject are evolving as I learn more about the phenomenon. For what it's worth, I thought I'd share my opening remarks with interested readers below. As usual, any feedback is welcome.
The Role and Future of Cryptocurrencies
A money and payments system is about managing databases containing the money accounts of individuals and organizations. Any database management system must necessarily define read and write protocols. Read privileges specify who can view what on the database. Write privileges specify who gets to write what to the database.
So, for example, we can read what’s in our bank account. So can the bank and its regulators. But other people cannot see how much money is held in any account apart from their own. While we cannot write directly to our account, we can send our bank instructions to do so on our behalf. The bank can also write directly to our account. It may, for example, credit our account with interest, or debit it for service fees.
Note that bank accounts are digital. Moreover, the messages we send to our bank over the Internet are secured with the aid of cryptography. In this sense, one could say that bank deposits are a form of cryptocurrency.
Bank deposits, however, are not typically viewed as cryptocurrencies. Well then, what are the distinguishing characteristics of a cryptocurrency? It’s not so clear-cut, but two things come to my mind. First, the database for a cryptocurrency is typically associated with an open-read privilege. This implies that the database can, in principle at least, be subject to audits from any person, or any agency, at any time, all the time. This property offers a degree of financial transparency that is unheard of in conventional money services businesses. Second, the task of managing the database is typically decentralized in some manner to “validator nodes” or, as one might label them more mundanely in the non-crypto world, “accountants.” What is interesting here is how these validators are potentially recruited and compensated. For Bitcoin, anyone can potentially become a validator and compensation arrives in the form of a stochastic reward.
When it comes to keeping track of money balances, an open-write privilege is problematic. This is known as the double-spend problem. In conventional payment systems, the double-spend problem is solved by delegating database management to a trusted third party. A cryptocurrency like Bitcoin or Ethereum must instead rely on a consensus mechanism that somehow ensures that a dispersed write-privilege does not result in garbage being written to the database. To date, the most popular mechanisms are based on PoW (Proof of Work) and PoS (Proof of Stake). But there are others as well, and one should expect innovation along this dimension since, as far as I know, no existing consensus mechanism has yet proven to be entirely satisfactory.
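The double-spend problem can be made concrete with a toy account-balance ledger. The structure and names here are purely illustrative; the point is only that some validation step must reject writes that would spend the same funds twice:

```python
# A toy ledger illustrating why an open write-privilege needs validation.

class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        # The validator's job: reject a write that would double-spend.
        if self.balances.get(sender, 0) < amount:
            return False  # invalid write rejected
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

ledger = Ledger({"alice": 10, "bob": 0})
assert ledger.transfer("alice", "bob", 10)      # first spend succeeds
assert not ledger.transfer("alice", "bob", 10)  # second spend of the same funds fails
```

In a conventional system, a trusted third party (the bank) runs this check; in a cryptocurrency, the consensus mechanism must ensure that dispersed validators all apply it consistently.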
Of course, the same can be said of conventional database management systems. To young eyes, the current system seems a hopelessly tangled mess of databases that have trouble communicating with each other. Moreover, they appear not to be very secure at times. But despite the problems we all encounter with the modern banking system, one should, in fairness, acknowledge the tremendous achievements that have taken place over the last fifty years. For example, we are now able to travel to foreign countries with just a credit card. This is not the way things worked until relatively recently. Anyone who has had the experience of needing traveler’s checks can fill you in on what it was like to travel in the old days.
Well, if there’s been so much progress in money and payments, what accounts for the emergence and proliferation of cryptocurrencies?
As is so often the case, I think the fundamental cause of this development is rapid technological change moving against a relatively slow-moving incumbency that includes banks, money services businesses, and especially their regulators. In saying this, I do not mean to assign blame; the inertial properties of existing institutional arrangements likely have some merit. Institutional inertia can be stabilizing, for example. But to benefit the communities they serve, institutions also have to evolve to meet the challenges of new technologies. And I think this is happening today in the sphere of money and payments.
What new technologies are we talking about? Innovations in communications, like the Internet, have been transformational. As well, there have been advances in data storage and cryptography that have played a critical role. All these innovations are, however, within the grasp of incumbent banks and money service businesses. And indeed, incumbents have made use of these technologies. Internet banking and PayPal are real things, after all. I think the important innovation as far as cryptocurrencies are concerned is the development of database management protocols that permit a degree of decentralization for managing large databases. I say “large” databases because we already have decentralized database management systems for small communities, like gift exchange or the exchange of favors among friends (see: Why the Blockchain Should be Familiar to You). Advances in data storage and communications have, in effect, permitted this ancient form of communal record-keeping to scale.
The decentralized or communal aspect of managing a database is, of course, very much at odds with the notion of delegating the responsibility to a privileged set of institutions. Some people believe that these developments will lead to a revolution—an overthrow of existing institutions—a triumph in democracy over a privileged class. What is much more likely is an evolution of existing institutions to accommodate the threat posed by the potential usurpers in a manner that serves the broader community. In short, what we are likely to witness is the usual pattern of economic development in relatively well-functioning societies.
What do cryptocurrencies offer individuals and society? What are the concerns of regulators and policymakers?
To answer these questions, we need to recognize that there are different classes of cryptocurrencies, each of which caters to a specific constituency. Broadly, they can be categorized as belonging to one of two groups distinguished by their respective exchange rate regimes and governance structures.
In one group, we have the decentralized autonomous organizations, like Bitcoin. From the perspective of domestic policymakers, Bitcoin can be viewed as foreign currency operating under a floating exchange rate regime. Except that there’s no negotiating with Bitcoin (there's no negotiating with some countries either). The intermediaries that deal or broker BTC transactions can, however, be regulated.
In the other group, we have the so-called stablecoins, like USD Coin (sponsored by Circle and Coinbase) and Diem (sponsored by Facebook). To domestic policymakers, stablecoins can be viewed as checkable mutual funds operating under a unilateral fixed exchange rate regime utilizing various forms of collateral. The major innovation here has less to do with technological innovation and more to do with the willingness and ability to process USD payments outside the commercial banking sector.
Viewed in this light, cryptocurrencies do not look so unfamiliar. As foreign currency operating under a floating exchange rate, cryptocurrencies like Bitcoin will likely never displace the domestic unit of account. They may, however, serve as a store of value or portfolio hedge. And they may facilitate certain kinds of payments, typically on-chain and large-value. As stablecoins offering a par exchange rate, they suffer from all the usual problems of uninsured fractional reserve banking—unless they promise to back their currencies fully with USD cash.
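The fractional-reserve fragility just mentioned is easy to illustrate with arithmetic. The numbers below are purely hypothetical; the point is that an issuer holding only a fraction of its liabilities in cash cannot honor a large redemption wave at par:

```python
# Toy illustration of why full backing matters for a par-value stablecoin.
# All figures are hypothetical.

def can_redeem_at_par(cash_reserves, redemption_requests):
    return cash_reserves >= redemption_requests

liabilities = 100.0                        # coins outstanding, at $1 each
fully_backed = 1.0 * liabilities           # 100% cash reserves
fractionally_backed = 0.2 * liabilities    # 20% cash, rest "cash equivalents"

run = 0.5 * liabilities                    # half of holders redeem at once

assert can_redeem_at_par(fully_backed, run)            # par survives the run
assert not can_redeem_at_par(fractionally_backed, run) # par breaks; fire sales or suspension
```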
The question here is whether these products are offering something fundamentally more cost-effective when it comes to making payments, or whether they owe their existence primarily to regulatory arbitrage. I do not know the answer to this question, but I suspect that much of what they have to offer comes from the latter. Diem, for example, can bypass banking regulations by not becoming a bank. It can leverage Facebook’s huge social network as a payment system connecting 2B+ users around the world. It can potentially offer money-transmitting services for “free” or, rather, in exchange for personal data. My guess is that banks (or even PayPal) are not permitted to operate in this manner. Regulatory advantage: Diem.
Regulators need to keep a close eye on these structures since it is politically impossible to commit to the doctrine of caveat emptor when it comes to money and banking. The temptation, as always, will be to replace “cash” with higher-yielding “cash equivalents” on the balance sheet. The structure slowly evolves into an uninsured fractional reserve bank, but in the shadow bank sector. If something goes wrong, depositors will seek compensation, first from the firm and then from the government. After all, how could a government knowingly permit such an unstable structure to exist in the first place?
To sum up, I think the future of cryptocurrencies like Bitcoin is to serve as an alternative asset class for investors. I doubt that it will ever become a dominant medium of exchange in any large economy. Fractional reserve banks using BTC as reserves are not likely to be tolerated.
The future of stablecoins seems more interesting to me. In the first instance, they seem capable of filling the gaps that remain apparent in modern day payment systems (think correspondent banking here). But their main effect is likely to be spurring conventional banks and their regulators to fill these gaps at a faster pace. There is a possibility that a project like Diem might one day abandon its peg to the USD and offer itself as a stand-alone currency. Policymakers would in that case be concerned about maintaining monetary policy sovereignty. One manifestation of this concern could be pre-emptive action on the part of the government, for example, offering its own universally-accessible CBDC.
Wednesday, March 31, 2021
This post was motivated by a conversation with Eric Lonergan. It began with a simple question: what should be the interest rate paid on reserves? I answered that according to theories I'm familiar with, reserves should earn the "natural" rate of interest, which I defined as the sum of population and productivity growth. So, assuming 2% "real" growth and 2% inflation, reserves (and government debt more generally) should be yielding around 4%.
I think it's fair to say most people did not find my answer very satisfying. So I thought I'd take a moment to explain how I arrived at it. I want to do so in the context of a model economy. Let me describe the model first. We can discuss its limitations and possible extensions later on.
Consider an economy where people live for two periods; they are "young" and then become "old." Let N(t) denote the population of young at date t. Assume that the population grows at (gross) rate n; that is, N(t) = nN(t-1). In this "overlapping generations" model, the population at date t is given by N(t)+N(t-1).
Individuals in this economy generate y units of perishable output (goods and services) when they are young. I'm going to treat y as fixed over time. This implies that the RGDP at date t is given by N(t)y and that the RGDP grows at rate n over time (there is no productivity growth). In what follows, I label n the "natural" rate of interest.
Suppose that people only value consumption when they are old. This poses an interesting economic problem. The young can produce goods that the old value, but the old have no way of paying for these goods. Private credit markets don't work here.
The cooperative solution is very simple: the young should "gift" their goods y to the old. If everyone followed this cooperative protocol, then the young of generation t would consume (in their old age) c(t+1) = N(t+1)y/N(t) = ny.
That is, by following this protocol, it's as if the young "deposit" their income y in a savings account that generates a (gross) real yield equal to n, the "natural" rate of interest.
Since private competitive markets cannot be expected to implement this socially-desirable outcome, what other mechanisms might be employed? In small communities, reciprocal gift exchange seems to work quite well. In the present context, the young look after their parents, expecting their children to return the favor, and so on.
Larger communities need to rely on other mechanisms. In the present context, a PAYGO social security system that taxes the young y and pays the old ny would do the trick.
The same outcome could be achieved through monetary exchange. Suppose the government lets all individuals open a central bank money account. The government creates (out of thin air) M dollars and credits the accounts of the "initial old" with M/N(0) dollars. Assume that M is kept constant over time. The old are expected to spend these dollars on a competitive spot market, where goods exchange for dollars at price p(t).
The equilibrium price-level is easy to derive in this example. At any date t, we have N(t-1) old people collectively holding M dollars. These M dollars will be spent (the old have no reason not to) on the goods available for sale, N(t)y. The market-clearing condition here is M = p(t)N(t)y at every date t. Because both M and y are constant, and because population N(t) is growing at rate n, it follows that the equilibrium price-level p(t) must be falling at rate n.
So, if we interpret M as "reserves" in this model economy, then reserves yielding the natural rate of interest would be consistent with economic efficiency. If reserves yield a zero nominal interest rate, then efficiency requires some deflation. But the same outcome is possible if reserves yield a nominal interest rate of n in a zero-inflation regime.
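The market-clearing condition M = p(t)N(t)y from the derivation above is easy to check numerically. The parameter values below are illustrative:

```python
# Check the OLG market-clearing condition M = p(t) * N(t) * y:
# constant M, population growing at gross rate n, so the price-level
# falls at gross rate 1/n (deflation at the natural rate of interest).

M, y, n = 100.0, 1.0, 1.02   # money stock, per-capita output, gross pop. growth
N = 50.0                      # initial young population N(0)

prices = []
for t in range(5):
    prices.append(M / (N * y))   # p(t) solves M = p(t) * N(t) * y
    N *= n                       # N(t+1) = n * N(t)

# Gross inflation p(t+1)/p(t) = 1/n each period, so the real return on
# money, p(t)/p(t+1), equals n: money yields the natural rate.
for t in range(4):
    assert abs(prices[t + 1] / prices[t] - 1 / n) < 1e-12
```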
This result continues to hold for more general preferences. Suppose that people care about consumption when young and old. Then the young will only want to consume a fraction of their income. That fraction will depend on (among other things) the real rate of return they expect on their retirement savings. As it turns out, the so-called "Golden Rule" allocation requires that money/bonds yield the natural rate of interest.
Is this a good place to start thinking about Eric's question? It may be a good place to start, but we don't want to stop here. The analytical framework above is "bare bones." Among other things, my analysis implicitly assumes that there is no difference between reserves and government treasury securities. Does this matter? If it does, the reasons need to be spelled out. What modifications to the simple model above would imply that to meet a given social objective, it is desirable to have reserves yield less than treasury securities? In reality there is duration risk. But why does the Treasury issue bonds with different maturities in the first place? Moreover, why are these bonds purposefully rendered illiquid (for example, by discouraging the Fed from monetizing the entire bond issue or, at least, from setting up standing purchase facilities)? The answers to these questions are not immediately obvious to me. But they may be to you!
Sunday, February 21, 2021
I've been thinking a bit lately about theories of the business cycle (a lot of time for reflection in these days of COVID-19). At least, the way some of these theories have evolved over my lifetime and from the perspective of my own training in the field. From my (admittedly narrow) perspective as a researcher and advisor at a central bank, the journey beginning c. 1960 seems like it's taken the following steps: (1) Phillips Curve and some Natural Rate Hypothesis; (2) Real Business Cycle (RBC) theory; (3) New Keynesian theory. It seems like we might be ready to take the next step. I'll offer some thoughts on this at the end, for whatever they're worth.
There's no easy way to summarize the state of macroeconomic thinking, of course. But it seems clear that, at any given time, some voices and ways of thinking are more dominant than others. By the time the 1960s rolled around, there seemed to be a consensus that monetary and fiscal policy should be used to stabilize the business cycle. The main issue, in this regard, revolved around which set of instruments was better suited for the job. (See, for example, this classic debate between Milton Friedman and Walter Heller.)
In 1960, Samuelson and Solow published "The Problem of Achieving and Maintaining a Stable Price-Level: Analytical Aspects of Anti-Inflation Policy." Then, as now, the authors lamented the lack of consensus on a theory of price inflation. Various cost-push and demand-pull hypotheses were reviewed, problems of identification noted, and calls for micro-data to help settle the issue were made. They also mentioned Phillips' article and noted how the same diagram for the U.S. looked like a shot-gun blast (little correlation, except for some sub-samples). Then they translated the Phillips curve using price inflation instead of wage inflation. No data was sacrificed in this exercise; their "theory" was summarized with the diagram to the left.
I put "theory" in quotes in the passage above because the theory (explanation) was never clear to me. In particular, while I could see how an increase in the rate of unemployment might depress the level of wages, I could not grasp how it could influence the rate of growth of wages for any prolonged period of time. This logical inconsistency was solved by the Phelps-Friedman natural rate hypothesis; see Farmer (2013) for a summary and critique.
The TL;DR version of this hypothesis is that the PC is negatively sloped only in the short-run, but vertical in the long-run. So, while monetary policy (increasing the rate of inflation) could lower the unemployment rate below its natural rate, it could only do so temporarily. Eventually, the unemployment rate would move back to its natural rate at the higher rate of inflation. This hypothesis seemed to provide a compelling interpretation of the stagflation (high inflation and high unemployment) experienced in the 1970s. It also seemed to explain the success of Volcker's disinflation policy in the 1980s. Nevertheless, uneasiness in the state of the theory remained and a new (well, nothing is ever completely new) way of theorizing was on the horizon.
There was a lot going on with this program. A central thesis of RBC theory is that the phenomena of economic growth and business cycles are inextricably linked. This is, of course, an old idea in economics going back at least to Dennis Robertson (see this review by Charles Goodhart) and explored extensively by a number of Austrian economists, like Joseph Schumpeter.
The idea that "the business cycle" is to some extent a byproduct of the process of economic development is an attractive hypothesis. Economic growth is driven by technological innovation and diffusion, and perhaps regulatory policies. There is no a priori reason to expect these "real" processes to evolve in a "smooth" manner. In fact, these changes appear to arrive randomly and with little or no mean-reverting properties. It would truly be a marvel if the business cycle did not exist.
The notion of "no mean-reverting properties" is important. It basically means that technology/policy shocks are largely permanent (or at least, highly persistent). If macroeconomic variables like the GDP inherit this property, then a "cycle"--the tendency for a variable to return to some long-run trend--does not even exist (and if you think you see it, it's only a figment of your imagination). For this reason, early proponents of RBC theory preferred the label "fluctuations" over "cycle." This view was supported by the fact that econometricians had a hard time rejecting the hypothesis that the real GDP followed a random walk (with drift). For example, here is Canadian GDP plotted against two realizations of a random walk with drift:
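A comparison like the one just described is easy to generate. Here is a minimal sketch; the drift and volatility parameters are illustrative, not estimated from Canadian data:

```python
# Simulate log real GDP as a random walk with drift:
# y(t) = y(t-1) + drift + e(t), e(t) ~ N(0, sigma^2).
import random

def random_walk_with_drift(T, drift=0.005, sigma=0.01, seed=0):
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(T - 1):
        path.append(path[-1] + drift + rng.gauss(0.0, sigma))
    return path

series = random_walk_with_drift(200)
# Shocks are permanent: after a negative shock, the series does not
# revert to any deterministic trend line. Plotting a few realizations
# alongside actual log GDP makes the resemblance apparent.
assert len(series) == 200
```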
This perspective fermented at a time when the cost of computing power was falling dramatically. This permitted economists to study models that were too complicated to analyze with conventional "pencil and paper" methods. Inspiration was provided by Lucas (1980), who wrote:
“Our task, as I see it…is to write a FORTRAN program that will accept specific economic policy rules as ‘input’ and will generate as ‘output’ statistics describing the operating characteristics of time series we care about, which are predicted to result from these policies.”
And so that's what people did. But what sort of statistics were model economies supposed to reproduce? Once again, it was Lucas (1976) who provided the needed guidance. The empirical business cycle regularities emphasized by Lucas were "co-movements" between different aggregate time-series. Employment, for example, is "pro-cyclical" (tends to move in the same direction as GDP) around "trend." These types of regularities can be captured by statistics like correlations. But these correlations (and standard deviations) only make sense for stationary time-series, and the data is mostly non-stationary. So, what to do?
Transforming the data through first-differencing (i.e., looking at growth rates instead of levels) is one way to render (much of) the data stationary. Another approach was made popular by Prescott (1986), who advocated a method that most people employ: draw a smooth line through the data, label it "trend," and then examine the behavior of "deviations from trend." Something like this,
It's important to note that Prescott viewed the trend line in the figure above as "statistical trend," not an "economic trend." To him, there was no deterministic trend, since the data was being generated by a random walk (so, the actual trend is stochastic). Nevertheless, drawing a smooth trend line was a useful way to render the data stationary. The idea was to apply the same de-trending procedure to actual data and simulated data, and then compare statistical properties across model and data.
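The detrending procedure can be sketched in a few lines. For simplicity I fit a linear trend here (Prescott's actual proposal was a smoother filter, now known as the Hodrick-Prescott filter); the data points are made up:

```python
# Fit a "statistical trend" by least squares and study deviations from it.

def linear_trend(series):
    # Ordinary least squares fit of y on a time index.
    T = len(series)
    t_mean = (T - 1) / 2
    y_mean = sum(series) / T
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
             / sum((t - t_mean) ** 2 for t in range(T)))
    intercept = y_mean - slope * t_mean
    return [intercept + slope * t for t in range(T)]

data = [1.0, 1.3, 1.1, 1.6, 1.4, 1.9, 1.7, 2.2]   # hypothetical log GDP
trend = linear_trend(data)
deviations = [y - f for y, f in zip(data, trend)]

# Deviations from trend have mean zero by construction; the same
# procedure is applied to model-simulated data and actual data, and the
# second moments (correlations, volatilities) are then compared.
assert abs(sum(deviations)) < 1e-9
```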
The point of mentioning this is that no one involved in this program was conditioned to interpret the economy as "overheating" or in "depression." Growing economies exhibited fluctuations--sometimes big and persistent fluctuations. The question was how much of these observed fluctuations could be attributed purely to the process of economic development (technological change), without reference to monetary or financial factors? I think it's fair to say that the answer turned out to be "not much, at least, not at business cycle frequencies." The important action seemed to occur at lower frequencies. Lucas (1988) once again provided the lead when he remarked "Once one starts to think about growth, it is hard to think about anything else." And so, the narrow RBC approach turned its attention to low-frequency dynamics; e.g., see my interview with Lee Ohanian here.
Of course, many economists never bought into the idea that monetary and financial factors were unimportant for understanding business cycles. Allen and Gale, for example, schooled us on financial fragility; see here. But this branch of the literature never really made much headway in mainstream macro, at least not before 2008. Financial crises were something that happened in history, or in other parts of the world. Instead, macroeconomists returned to the framework's roots in the 1960s and embedded a version of the Phillips curve into an RBC model to produce what is now known as the New Keynesian (NK) framework. Short-run money non-neutrality was achieved by assuming that nominal price-setting behavior is subject to frictions, rendering nominal prices "sticky." In this environment, shocks to the economy are not absorbed efficiently, at least not in the absence of an appropriate monetary policy. And so, drawing inspiration from John Taylor and Michael Woodford, the framework added an interest rate policy rule now known as the Taylor rule. Today, the basic NK model consists of three core elements:
An IS curve: relates aggregate demand to the real interest rate and shocks.
A Phillips curve: relates the rate of inflation (around trend) to the output gap.
A Taylor rule: describes how interest rate policy reacts to output and inflation gaps.
Not everyone embraced this framework, of course; one notable dissenter is Roger Farmer (we share the same supervisor, so perhaps this is no accident). In any case, the NK framework became (and continues to be) a core thought-organizing principle for central bank economists around the world. It has become a sort of lingua franca in academic macro circles. And if you don't know how to speak its language, you're going to have a hard time communicating with the orthodoxy.
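The three elements above can be sketched as a static linear system. This is a deliberately stripped-down version: expected inflation is set to zero, and the parameter values are illustrative rather than calibrated, chosen only to show how a demand shock propagates through the three equations.

```python
import numpy as np

# Illustrative (not calibrated) parameters.
sigma, kappa = 1.0, 0.3      # IS slope, Phillips curve slope
phi_pi, phi_x = 1.5, 0.5     # Taylor rule coefficients
r_star = 2.0                 # natural real rate (percent)
u = 1.0                      # demand shock (percent)

# Static NK system with expected inflation set to zero:
#   IS curve:       x  = -(1/sigma) * (i - r_star) + u
#   Phillips curve: pi = kappa * x
#   Taylor rule:    i  = r_star + phi_pi * pi + phi_x * x
# Stack as A @ [x, pi, i] = b and solve.
A = np.array([
    [1.0,     0.0,     1.0 / sigma],
    [-kappa,  1.0,     0.0],
    [-phi_x, -phi_pi,  1.0],
])
b = np.array([u + r_star / sigma, 0.0, r_star])
x, pi, i = np.linalg.solve(A, b)
print(f"output gap = {x:.2f}, inflation = {pi:.2f}, policy rate = {i:.2f}")
```

A positive demand shock opens a positive output gap, which raises inflation through the Phillips curve, which in turn prompts the Taylor rule to lift the policy rate above the natural rate. With phi_pi > 1 (the "Taylor principle"), the nominal rate responds more than one-for-one to inflation, which is what stabilizes the system.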
Of the three basic elements of the NK model, I think the NK Phillips curve (which embeds the natural rate hypothesis) has caused the most mischief, at least from the perspective of guiding the conduct of monetary policy. The concept is firmly embedded in the minds of many macroeconomists and policymakers. Consider, for example, Greg Mankiw's recent piece "Yes, There is a Trade-Off Between Inflation and Unemployment."
Today, most economists believe there is a trade-off between inflation and unemployment in the sense that actions taken by a central bank push these variables in opposite directions. As a corollary, they also believe there must be a minimum level of unemployment that the economy can sustain without inflation rising too high. But for various reasons, that level fluctuates and is difficult to determine.
The Fed’s job is to balance the competing risks of rising unemployment and rising inflation. Striking just the right balance is never easy. The first step, however, is to recognize that the Phillips curve is always out there lurking.
The Phillips curve is always lurking. The message for a central banker is "sure, inflation and unemployment may be low for now, but if we keep monetary policy where it is and permit the unemployment rate to fall further, we will risk higher inflation in the future." I'm not sure if economists who write in this manner are aware that they're making it sound like workers are somehow responsible for inflation. Central banker to workers: "I'm sorry, but we need to keep some of you unemployed...it's the inflation, you see."
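The logic being invoked here is the textbook expectations-augmented Phillips curve. A minimal sketch, with hypothetical parameter values chosen only to expose the mechanism: under adaptive expectations, holding unemployment below its "natural" rate makes inflation ratchet up period after period rather than settle down.

```python
# Expectations-augmented Phillips curve with adaptive expectations:
#   pi_t = pi_{t-1} - beta * (u - u_star)
# All parameter values below are hypothetical.
beta = 0.5        # slope: inflation response to the unemployment gap
u_star = 5.0      # assumed "natural" rate of unemployment (percent)
u = 4.0           # actual unemployment held below u_star
pi = 2.0          # initial inflation (percent)

path = []
for year in range(5):
    # With u < u_star, inflation rises every period: last period's
    # inflation gets built into expectations, and the gap adds more.
    pi = pi - beta * (u - u_star)
    path.append(round(pi, 1))
print(path)  # inflation accelerates: [2.5, 3.0, 3.5, 4.0, 4.5]
```

This is the formal content of "the Phillips curve is always lurking": in this framework there is no resting point with unemployment held below u_star, only ever-accelerating inflation. The policy conclusion then follows mechanically, which is precisely why the framework matters so much in practice.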
There is evidence that this line of thinking influenced the FOMC in 2015 in its decision to "lift off" and return the policy rate to some historically normal level; see my post here explaining the pros and cons in the lift-off debate. By the start of 2014, there was considerable pressure on the Fed to begin "normalizing" its policy rate. By mid-2014, the expectation of "lift off" likely contributed to significant USD appreciation and the economic weakness that followed. If I recall correctly, Vice Chair Stan Fischer started off the year by announcing that four rate hikes for 2015 were in order (as it turned out, the Fed raised rates only once--in December). To some observers, this all seemed very strange. After all, the unemployment rate was still above its estimated "natural" rate (5%) and inflation continued to undershoot its 2% target. What was going on?
What was going on was the Phillips curve. Here is Chair Yellen at the March 17-18, 2015 FOMC meeting (transcript available here):
If we adopt alternative B, one criterion for an initial tightening is that we need to be reasonably confident that inflation will move back to 2 percent over the medium term. For the remainder of this year, my guess is that it will be hard to point to data demonstrating that inflation is actually moving up toward our objective. Measured on a 12-month basis, both core and headline inflation will very likely be running below 1½ percent all year. That means that if we decide to start tightening later this year, a development that I think is likely, we will have to justify our inflation forecasts using indirect evidence, historical experience, and economic theory.
The argument from history and economic theory seems straightforward. Experience here and abroad teaches us that, as resource utilization tightens, eventually inflation will begin to rise. To me, this seems like a simple matter of demand and supply. So the more labor and product markets tighten, the more confident I’ll become in the inflation outlook. Because of the lags in monetary policy, the current high degree of monetary accommodation, and the speed at which the unemployment rate is coming down, it would, to my mind, be imprudent to wait until inflation is much closer to 2 percent to begin to normalize policy. I consider this a strong argument for an initial tightening with inflation still at low levels, and it’s one that I plan to make. But I also recognize and am concerned that, at least in recent years, the empirical relationship between slack and inflation has been quite weak.
The plucking model is consistent with the observed cyclical asymmetry in unemployment rate fluctuations. And labor market search models are a natural way to model that asymmetry. In case you're interested, I develop a super-simple (and, dare I say, elegant) search model here to demonstrate (and test) the idea: "Evidence and Theory on the Cyclical Asymmetry in Unemployment Rate Fluctuations" (CJE, 1997). See also my blog post here, as well as some recent work by Ferraro (RED, 2018) and Dupraz, Nakamura and Steinsson (2019). I like where this is going!
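The asymmetry is easy to see in a bare-bones unemployment flow equation of the kind search models build on. This is only a sketch, not the CJE model, and the separation and job-finding rates below are hypothetical: a one-period spike in the separation rate makes unemployment jump up sharply, after which it glides back down slowly at a pace governed by the job-finding rate. That is the "plucking" shape.

```python
# Unemployment flow dynamics: u' = u + s*(1 - u) - f*u,
# where s is the separation rate and f the job-finding rate.
# Parameter values are hypothetical, chosen for illustration.
s_normal, s_shock, f = 0.02, 0.20, 0.30
u = s_normal / (s_normal + f)   # start at the steady state (6.25%)

path = [u]
for t in range(25):
    s = s_shock if t == 0 else s_normal   # one-period separation spike
    u = u + s * (1 - u) - f * u
    path.append(u)

jump = path[1] - path[0]       # rise in a single period after the shock
recovery = path[1] - path[6]   # decline over the next five periods
print(f"jump up in 1 period: {jump:.3f}, fall over next 5: {recovery:.3f}")
```

Unemployment spikes up in one period by more than it falls back over the following five: recessions are fast, recoveries are slow, with no symmetric "boom" below the steady state.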
One attractive feature of search models, in my view, is that they model relationship formation. Relationships provide a very different mechanism for coordinating economic activity relative to the canonical economic view of anonymous spot exchange in centralized markets. In a relationship, spot prices do not matter as much as the dynamic path of these prices (and other important aspects) over the course of a relationship (see my critique of the sticky price hypothesis here). The observation that retailers, in the early days of C-19, voluntarily rationed goods instead of raising prices makes little sense in anonymous spot exchange, but makes perfect sense for a merchant concerned with maintaining a good relationship with his or her customers. And merchant-supplier relationships can handle shortages without price signals (we're out of toilet paper--please send more!). In financial markets too, the amount of time that is spent forming and maintaining credit relationships is hugely underappreciated in economic modeling. Search theory turns out to be useful for interpreting the way money and bond markets work too. These markets are not like the centralized markets we see modeled in textbooks--they operate as decentralized over-the-counter (OTC) markets, where relationships are key. One reason why economies sometimes take so long to recover after a shock is because the shock has destroyed an existing set of relationships. And it takes time to rebuild relationship capital.
Notions of "overheating" in this context probably do not apply to labor market variables, although there is still the possibility of an overaccumulation of certain types of physical capital in a boom (what the Austrians label "malinvestment"). Any "overheating" is likely to manifest itself primarily in asset prices. And sudden crashes in asset prices (whether driven by fundamentals or not) can have significant consequences for real economic activity if asset valuations are used to support lines of credit.
Finally, we need a good theory of inflation. The NKPC theory of inflation is not, in my view, a completely satisfactory theory in this regard. To begin, it simply assumes that the central bank can target a long-run rate of inflation (implicitly, with the support of a Ricardian fiscal policy, though this is rarely, if ever, mentioned). At best, it is a theory of how inflation can temporarily depart from its long-run target and how interest rate policy can be used to influence transition dynamics. But the really interesting questions, in my view, have to do with monetary and fiscal policy coordination and what this entails for the ability of an "independent" central bank even to determine the long-run rate of inflation (Sargent and Wallace, 1981).
I know what I've described only scratches the surface of this amazingly deep and broad field. Most of you have no doubt lived through your own process of discovery and contemplation in the world of macroeconomic theorizing. Feel free to share your thoughts below.