Believe those who are seeking the truth. Doubt those who find it. — André Gide


Sunday, April 12, 2015

In defense of modern macro theory

The first economist
The 2008 financial crisis was a traumatic event. Like all social trauma, it invoked a variety of emotional responses, including the natural (if unbecoming) human desire to find someone or something to blame. Some of the blame has been directed at segments of the economic profession. It is the nature of some of these criticisms that I'd like to talk about today.

One of the first questions macroeconomists get asked is: How could you possibly not have predicted the crisis? We all remember when the Queen of England asked this (supposedly embarrassing) question. Put on the spot, I might have replied that the same question could have been asked of her predecessor(*), King Charles I, whose death in 1649 also came about under rather unexpected circumstances. Or, I might have replied that many economists did in fact predict this crisis...along with the many other crises that failed to materialize (recall the old joke about the economist who successfully predicted 10 out of the past 2 recessions).

But seriously, the delivery of precise time-dated forecasts of events is a mug's game. If this is your goal, then you probably can't beat theory-free statistical forecasting techniques. But this is not what economics is about. The goal, instead, is to develop theories that can be used to organize our thinking about various aspects of the way an economy functions. Most of these theories are "partial" in nature, designed to address a specific set of phenomena (there is no "grand unifying theory" so many theories coexist). These theories can also be used to make conditional forecasts: IF a set of circumstances hold, THEN a number of events are likely to follow. The models based on these theories can be used as laboratories to test and measure the effect, and desirability, of alternative hypothetical policy interventions (something not possible with purely statistical forecasting models).

There is a sense in which making predictions is very easy. Here's one for you: Mt. Vesuvius will experience another major eruption on the scale of AD 79, when it buried the city of Pompeii, tragically killing thousands of people (among them, the famous naturalist Pliny the Elder). While volcanologists are getting progressively better at predicting eruptions, it remains very difficult to forecast their size. So when an event like this arrives, it always comes as a bit of a shock. In any case, like I said, making predictions (unconditional forecasts) that will eventually come true is easy. There are thousands of people predicting that the world will end in 2015, 2016, 2017, etc. Some of these prognosticators will one day be proven correct. Those making predictions that fail to come true hide in the shadows for a while, but then re-emerge bolder than ever. I don't blame these soothsayers: there seems to be an insatiable demand for the likes of Nostradamus, and this is clearly a case of demand creating its own supply. In this spirit then, permit me to deliver my own forecast (remember, you heard it here first): there will be another major financial crisis on the scale experienced in 2008.

While we can't predict when the next major crisis will occur (I hope the Queen can forgive us), it is reasonable to expect experts to make good conditional forecasts. IF Vesuvius blows, THEN a lot of people are going to die. This type of conditional forecast should lead policymakers to think of ways in which the potential death toll can be avoided or reduced. Perhaps citizens should be prohibited from inhabiting dangerous areas. At the very least, an emergency evacuation procedure should be put in place. The same is true for financial crises. Perhaps restrictions should be placed on the exchange of some types of financial products. At the very least, an emergency response strategy should be put in place. Actually, there is an emergency response strategy--the Fed's emergency lending facility--which essentially worked according to plan in 2008-09. Now, maybe you don't like various aspects of the Fed's liquidity facility and that's fine (even if it did make a healthy profit for the U.S. taxpayer). But you can't say that economists hadn't predicted the possible need for such a facility. Indeed, the Fed was set up on the premise that financial crises would continue to afflict modern economies (by the way, financial crises were a common part of the economic landscape well before the founding of the Fed in 1913, so think carefully before you accuse the Fed of being the source of market instability).

Alright, so much for blaming economists and their less-than-crystal balls (hmm, a part of me says I should have edited that last sentence.) What else? Well, I notice a lot of blame also being heaped on modern macroeconomic theory and the professors of such theory. "What's Wrong With Macro?" the headlines wail (roll eyes here). Things have become so bad that we now see students telling professors how macro should be taught. Next we'll have teenagers telling their parents how to raise children. Well, we already have that of course. But the point is that while parents patiently hear out these protestations (having been young for much longer than the youth in question), they do not generally capitulate to them. I'm sorry, but you're only 16, I love you, and no, you can't have the keys to the car!

And yet, amazingly, we have to read things like this (source):
Wendy Carlin, professor of economics at University College London, who is directing a project at the Institute for New Economics Thinking, a think-tank set up by billionaire financier George Soros, said at a conference last year that students had become “disenchanted” and lecturers “embarrassed” by the way economics is taught.
Lecturers at UCL are "embarrassed" by the way economics is taught? What does this mean? Are they embarrassed about the way they personally teach their economics classes? Then they should be fired for incompetence. Are they embarrassed by the current state of macroeconomic theory? Then they should be fired and sent back to grad school (or the Russian front, if you're a Hogan's Heroes fan).

The dynamic general equilibrium (DGE) approach is the dominant methodology in macro today. I think this is so because of its power to organize thinking in a logically consistent manner, its ability to generate reasonable conditional forecasts, as well as its great flexibility--a property that permits economists of all political persuasions to make use of the apparatus.

For the uninitiated, let me describe in words what the DGE approach entails. First, it provides an explicit description of what motivates and constrains individual actors. This property of the model reflects a belief that individuals are incentivized--in particular, they are likely to respond in more or less predictable ways to changes in the economic environment to protect or further their interests. Second, it provides an explicit description of government policy. While this latter property sounds straightforward, it is in fact a rather delicate and important exercise. To begin, in a dynamic model, a "policy" does not correspond to a given action at a point in time. Rather, it corresponds to a full specification of (possibly state-contingent) actions over time. Moreover, there is no logical way in which to separate (say) "monetary" policy from "fiscal" policy. The policies of different government agencies are inextricably linked through a consolidated government budget constraint (see A Dirty Little Secret).  Thus, any statement concerning (say) the conduct of monetary policy must explicitly (or implicitly) contain a statement stipulating a consistent fiscal policy. The exercise is delicate in the sense that model predictions can depend sensitively on the exact details of how policies are designed and how they interact with each other. The exercise is important because the aforementioned sensitivity is quite likely present in real-world policy environments. Finally, the DGE approach insists that the policies adopted by private and public sector actors are in some sense "consistent" with each other. Notions of consistency are imposed through the use of solution concepts, like competitive equilibrium, Nash equilibrium, search and bargaining equilibrium, etc. Among other things, consistency requires that economic outcomes respect resource feasibility and budget constraints.

Now, what part of the above manifesto do you not like? The idea that people respond to incentives? Fine, go ahead and toss that assumption away. What do you replace it with? People behave like robots? Fine, go ahead and build your theory. What else? Are you going to argue against having to describe the exact nature of government policy? Do you want to do away with consistency requirements, like the respect for resource feasibility? Sure, go ahead. Maybe your theory explains some things a lot better than mine when you dispense with resource constraints. But do you really want to hang your hat on that interpretation of the world? An internally inconsistent theory that happens to be consistent with some properties of the data is not what I would call deep understanding. (Nor is an internally consistent theory inconsistent with the data something to be happy about, but that's the purpose of continued research.)

The point I want to make here is not that the DGE approach is the only way to go. I am not saying this at all. In fact, I personally believe in the coexistence of many different methodologies. The science of economics is not settled, after all. The point I am trying to make is that the DGE approach is not insensible (despite the claims of many critics who, I think, are sometimes driven by non-scientific concerns).

I should make clear too that by "the DGE approach," I do not limit the phrase to New Keynesian DSGE models or RBC models. The approach is much more general. While one might legitimately observe that these latter sets of models largely downplay the role of financial frictions and that practitioners should therefore not have relied so heavily on them, it would not be correct to say that DGE theory cannot account for financial crises. If you don't believe me, go read this (free) book by Franklin Allen and Douglas Gale: Understanding Financial Crises. While this book was published in 2007, it reflects a lifetime of work on the part of the authors. And if you take a look at the references, you'll discover a large and lively literature on financial crises well before 2007. In my view, this constitutes evidence that "mainstream" economists were thinking about episodes like 2008-09. If central bank economists were not paying too much attention to that branch of the literature, it is at most an indictment on them and not on the body of tools that were available to address the questions that needed to be answered. (In any case, as I mentioned above, I think the Fed did act according to the way theory generally suggests during the crisis.)

Once again (lest I be misunderstood, which I'm afraid seems unavoidable these days) I am not claiming that DGE is the be-all and end-all of macroeconomic theory. There is still a lot we do not know and I think it would be a good thing to draw on the insights offered by alternative approaches. I do not, however, buy into the accusation that there is "too much math" in modern theory. Math is just a language. Most people do not understand this language and so they have a natural distrust of arguments written in it. Different languages can be used and abused. But this goes as much, if not more, for the vernacular as it does for specialized languages. Complaining that there is "too much math" in a particular theoretical exposition is like complaining that there is too much Hiragana in a haiku poem. Before criticizing, either learn the language or appeal to reliable translations (in the case of haiku poetry, you would not want to rely solely on translations hostile to Japanese culture...would you?).

As for the teaching of macroeconomics, if the crisis has led more professors to pay more attention to financial market frictions, then this is a welcome development. I also fall in the camp that stresses the desirability of teaching more economic history and placing greater emphasis on matching theory with data. However, it's often very hard, if not impossible, to fit everything into a one-semester course. Invariably, a professor must pick and choose. But while a particular course is necessarily limited in what can be presented, the constraint binds less tightly for a program as a whole. Thus, one could reasonably expect a curriculum to be modified to include more history, history of thought, heterodox approaches, etc. But this is a far cry from calling for the abandonment of DGE theory. Do not blame the tools for how they were (or were not) used.


(*) My colleague Doug Allen points out that Elizabeth II did not descend from Charles I. The Stuart line died out with Queen Anne, at which point George I was brought over from Germany. EII is a member of the house of Hanover/Windsor, and not a Stuart. Many also think that EII is a descendant of Elizabeth I, but of course she had no children, and ended the Tudor line.

Monday, March 23, 2015

The "Audit" the Fed Crowd

Alex Pollock says that It's High Time to "Audit" the Federal Reserve. I'm glad to see that this headline puts the word "audit" in quotes. It suggests that what Mr. Pollock means by "audit" differs from how most people understand the word: a complete and careful examination of the financial records of a business or person (Merriam-Webster).

You see, just the other day, Senator Rand Paul, a leader of the "Audit-the-Fed" movement (a significant step down from his father's "End-the-Fed" movement) was making statements like this one:
“[An] audit of the Fed will finally allow the American people to know exactly how their money is being spent by Washington.”
Of course, the Fed does not control how money is being spent by Washington. The Fed prints money to buy government securities. It sometimes extends loans against high-grade collateral. Everything you want to know about these purchases and loans is publicly available. So Paul's initial line of attack has fizzled out, something that is implicitly acknowledged by Pollock when he writes:
The calls in Washington to “audit” the Federal Reserve are not for a narrow, bean-counting review of the institution’s financial statements.
Good. Progress has been made: the Fed's books are in order. Nothing scandalous to report (sigh). How else then to "audit" the Fed?

Let's be honest here. There is nothing new to discover in further auditing. This movement is motivated by what its members perceive to be bad monetary policy. It doesn't even make sense to say we want to "audit" the Fed's policy because the policy is already transparent (which is, after all, what permits critics to label it "bad" in the first place!).

There is, of course, nothing wrong with critiquing Fed policy. Indeed, there are many economists working inside the Fed that critique various aspects of Fed policy all the time. And, as we all know, members of the FOMC can hold very different opinions ("hawks" and "doves"). Thoughtful critiques of policy should be welcomed. Policymakers and researchers at the Fed do welcome them.

Moreover, I'm all for full accountability. The Fed should be accountable to the American people--it is, after all, a creation of the American people through their representatives in Congress. But as I have said, the issue here is not about accountability. It is about a group of individuals who want to see their preferred monetary policy adopted. That's fair enough. I just ask that they be honest about their motives. It has nothing to do with audits or accountability.

Sunday, March 8, 2015

Involuntary Labor Market Choices?

My pal Roger Farmer has a lot of good ideas, but he doesn't always use the best language to express them. In a recent post, for example, Roger asserts the following.
Participation is a voluntary choice.  Unemployment is not. 
The idea that unemployment is voluntary is classical nonsense.
I do not like this language. But before I explain why I feel this way, let me first describe what I think Roger is trying to say. I think he means to say that recessions are socially inefficient outcomes, manifesting themselves primarily in the form of elevated levels of unemployment and not in low participation rates. The unemployed are people without good-paying jobs, but looking for good-paying jobs. Good-paying jobs are relatively scarce in a recession (especially for individuals with lower skill sets--the young, those without advanced education, etc.) If you were to interview the unemployed during a deep recession and ask them how they're feeling, most of them would likely reply that they are not doing well relative to when they were employed. Economists (classical or otherwise) would say that recessions are welfare-reducing events for most people. The "classical" idea that there is little a government can or should do to help society in a deep recession is nonsense.

I think this probably captures Roger's view fairly well. Notice, however, that nowhere did I employ the adjectives "voluntary" or "involuntary" to describe labor market outcomes. I did not because these labels are not useful (which is why we do not see these terms used in the labor literature). Indeed, I want to go a step further and argue that the use of these labels might be worse than useless. Now let me explain why I feel this way.

Let's start with some things I think we can all agree on. First, people are endowed with some time, T. Second, there are competing uses for this time. Let me assume, for simplicity, that there are three uses of time: work (e), search (u), and leisure (n). Think of "work" as time devoted toward producing marketable goods and services, "unemployment" as searching for work, and "leisure" as producing non-marketable goods and services. Third, we can all agree that we face a time constraint: e + u + n = T.

Now, suppose for simplicity that T is indivisible: it must be allocated to one and only one of the three available time-use categories (the allocation can, however, change over calendar time). In this case, a standard labor force survey (LFS) will record e = T as employment, u = T as unemployment, and n = T as nonparticipation (or not-in-the-labor-force, NILF). [Note: the LFS never asks people whether they are unemployed or not. It asks whether they have done any paid work in the previous 4 weeks and if they have not, it then asks a series of questions relating to job search activities. If they report no job search activity, they are then classified as NILF.]
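The survey logic just described can be sketched as a simple classifier. This is a hypothetical minimal version (function and label names are my own; real LFS questionnaires are far more elaborate), but it makes the point that "unemployed" is an inferred status, never a question put directly to the respondent:

```python
def lfs_status(worked_for_pay: bool, searched_for_work: bool) -> str:
    """Classify a respondent the way a stylized labor force survey would.

    The survey never asks "are you unemployed?" -- status is inferred
    from reported work and job-search activity, in that order.
    """
    if worked_for_pay:
        return "employed"      # e = T
    if searched_for_work:
        return "unemployed"    # u = T: no paid work, but actively searching
    return "NILF"              # n = T: no work, no search -> not in labor force

# The three states are mutually exclusive and exhaust the time endowment T:
print(lfs_status(worked_for_pay=False, searched_for_work=True))  # unemployed
```

Note that someone who reports both work and search is still counted as employed, mirroring the sequencing of the actual survey questions.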

Now, Roger seems to be saying that people have a choice to make when it comes to allocating their time to either work (e = T) or leisure (n = T), but that they have no choice in determining time spent unemployed (u = T). Moreover, the idea that people may choose u = T constitutes "classical nonsense." But is this really what he means to say?

Let's start with a basic neoclassical model. In this abstraction, individuals and firms meet in a centralized market place and individuals are assumed to know where to find the best price for their labor. Put another way, there is absolutely no reason to devote precious time to searching for work. To put it yet another way, the neoclassical model was never designed to explain unemployment--it was designed to explain employment (and non-employment). And so, in the neoclassical model, where search is not necessary, individuals rationally choose u = 0.

Now, you may think this is a silly abstraction and that you want to impose (involuntarily) the state u = T on some individuals. But why? Unemployment is not idleness. Unemployment (at least the way the LFS defines it) constitutes the activity of searching for work--it is a form of investment (that hopefully pays off in a better job opportunity in a world where finding jobs is costly). Individuals not working and not searching are counted as out of the labor force (and even these people may not be "idle" because they might be doing housework or schoolwork, etc.).

So back to our neoclassical model. Since there is no unemployment, the time-allocation problem boils down to choosing between work and leisure. Depending on idiosyncratic considerations (the price of one's specific labor, wealth position, the opportunities for home production, schooling, etc.), some individuals choose work and others choose leisure. In the neoclassical model, these idiosyncratic "shocks" are largely beyond an individual's control. If the demand for your labor declines, it will cause the market price of your labor to fall. You will not like that. The shock is involuntary. BUT, you still get to choose whether to work at that (or some other) lower wage, or exit the labor force. To take another example, suppose that a source of non-labor income suddenly vanishes (involuntary). You may now be compelled to take that lousy paying job. Should we label this outcome "involuntary employment?" If so, then what next? Involuntary saving? (oops). Are all choices to be considered "involuntary?"

This is not the way we (as economists) want to go, in my opinion. In my view, it makes more sense to view choices as voluntary and responsive to the incentives imposed on individuals by the economic environment. If we want to view anything as "involuntary," it would be exogenous changes to the environment that reduce material living standards.  If circumstances change for the better, welfare increases. If they change for the worse, welfare declines. In either case, people can be expected to allocate their scarce time toward the activities that promise the highest expected payoff. What room is there left for the "voluntary/involuntary" distinction? None, in my view.

Let's stick with the neoclassical model for a bit longer, but tweak it the way I did here to permit multiple equilibria. Now, this is right up Roger's alley. All individual choices here are rational and "voluntary."  But this doesn't mean that the economy operates perfectly all the time. Indeed, the economy might get stuck in a bad equilibrium, where employment is low, non-employment is high (and unemployment is still zero). What would Roger suggest here in the way of labels? Is this a model of involuntary leisure?  How does this label help us understand anything? I argue that it does not.

Alright, so I don't find the "involuntary leisure" label useful. So what? Well, I don't want to make too much of this, but I think such labels can lead to muddled thinking. The label "involuntary" suggests that individuals may not respond to incentives (after all, they evidently have no choice in the matter). I think it's better, from the perspective of designing a proper intervention, to view the individual's circumstances as beyond their control, but to respect the fact that they are likely to respond to altered incentives. We are economists, after all -- why would we not interpret the world this way? People demonstrably do respond to incentives! 

I could go on and talk at length about abandoning the neoclassical assumption of centralized labor markets and replacing this construct with a decentralized search market. There is a big literature on labor market search and I'm not about to review it here. If you're interested, read my Palgrave Dictionary entry on the subject here. Suffice it to say that I find no value in interpreting an individual's state of unemployment as "involuntary" either. There are all sorts of jobs out there and I think people rationally turn "ill-suited" job opportunities down to search for better matches (the way I did, when I lost my construction job in the 1981 recession). Sometimes, people get "discouraged" and exit the labor force. These are all choices that people make relative to the circumstances they find themselves in. If we want to design programs to help the unfortunate (some of whom are employed or out of the labor force), then we want to design a system that respects incentives. 

What's that you say? You don't believe that incentives matter? Not for the unemployed? This is what I call nonsense. Consider, for example, the well-known "spike" in unemployment exit rates at the point of unemployment benefit exhaustion (see David Card here: "In Austria, the exit rate from registered unemployment rises by over 200% at the expiration of benefits..."). We see clear evidence that the unemployed do respond to incentives--they do have choices, especially in an economy with so many competing uses for time. Interpreting unemployment as "voluntary" does not mean that we are to have no compassion for the unemployed. We feel bad for anyone (employed or out of the labor force too) who faces terrible circumstances beyond their control. What it means is that we should measure economic welfare based on consumption (material living standards), not time allocation choices. It means that we understand and respect the fact that people make choices based on the incentives they face. It means that a well-designed policy should respect these incentives.

Let me sum up here. Commentators attach the label "involuntary" to unemployment to emphasize the fact that the unemployed are not typically happy with their circumstances. Fine. But then can the same not be said of many people who find themselves "involuntarily" employed (the working poor, for example) or "involuntarily" out of the labor force (looking after a sick relative, for example)? If so, then how can one unequivocally proclaim that "participation is a voluntary choice, unemployment is not?" It makes no sense to me. I want to ask Roger to stop using bad language. 

Thursday, March 5, 2015

Lifting Off...Sooner or Later

From Barron's yesterday we have this lovely headline: Two Fed Presidents Contradict Each Other on Same Day.
From the dovish corner, Charles Evans, president of the Chicago Fed, suggested that the Fed should be patient about raising rates and not act until 2016. He said: 
Given uncomfortably low inflation and an uncertain global environment, there are few benefits and significant risks to increasing interest rates prematurely. Let's be confident that we will achieve both dual mandate goals within a reasonable period of time before taking actions that could undermine the very progress we seek.
Weighing in for the Fed hawks, Kansas City Fed president Esther George said she thought the Fed should raise rates mid-year. Her take: 
This balanced approach framework supports taking steps to remove the extraordinary amount of monetary accommodation currently in place. The next phase in this process is to move the federal funds rate off its near-zero setting. While the FOMC has made no decisions about the timing of this action, I continue to support liftoff towards the middle of this year due to improvement in the labor market, expectations of firmer inflation, and the balance of risks over the medium and longer run.
I want to evaluate these two views in the context of a Taylor rule. The Taylor rule is simply a mathematical representation of how the Fed should (or will) set its policy rate in relation to the current state of the economy as measured by inflation gaps (inflation minus target inflation) and output gaps (output minus potential output). Every FOMC member presumably has a Taylor rule in mind if for no other reason than the existence of the Fed's dual mandate (the Congressional mandate that the Fed strive to stabilize inflation and employment around some long-run targets). 
A simple version of the Taylor rule can be written in this way:
i(t) = r* + p* + A[p(t) - p*] + B[y(t) - y*]
where i(t) is the nominal interest rate (IOER) at date t, p(t) is the inflation rate at date t, and y(t) is the (logged) real GDP at date t. The starred variables are long-run values associated with the real interest rate (r*), the inflation target (p*) and the level of "potential" GDP (y*). The parameters A and B govern how strongly the Fed reacts to the inflation gap [p(t) - p*] and the output gap [y(t) - y*].
Let me start with the hawkish view (see also this presentation by Jim Bullard). According to this view, y(t) is below, but very close to y*. So, let's just say that the output gap is zero. PCE inflation is presently around p(t) = 1%. We all know that p* = 2%, so the inflation gap is -1%. Now, we have some leeway here with respect to the parameter A, but let's assume that the Fed responds aggressively to the inflation gap (consistent with the Taylor principle) so that A = 2.
Now, if we think of the long-run real rate of interest as r* = 2%, then our Taylor rule delivers i(t) = 2%. Presently, the Fed's policy rate is i(t) = 0.25%. So, if you're OK with these calculations, the Fed should be "lifting off" (raising its policy rate) right now. Oh, and don't call it a "tightening." Instead, call it a "normalization." After all, even with i(t) = 2%, the Fed is still maintaining an accommodative stance on monetary policy because 2% is lower than the long-run target policy rate of r* + p* = 4%. 
What about the doves? Because doves like to emphasize the unemployment rate, the argument of a large negative output gap is now harder for them to make (see also here). But one could reasonably make the case that the output gap--as measured, say, by the employment rate of prime-age males--is still negative, let's say [y(t) - y*] = -1%. Let's be generous and also assume B=1. 
Now, if we continue to assume r*+p* = 4%, our dovish Taylor rule tells us that the policy rate should presently be set at  i(t) = 4% - 2% - 1% = 1%. So the recommended policy rate is lower than the hawkish case, but still significantly above 25 basis points. 
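The two calibrations above amount to a few lines of arithmetic. Here is a sketch that plugs the illustrative numbers from the text into the rule (parameter values are the ones assumed above, nothing more):

```python
def taylor_rate(r_star, p_star, p, y_gap, A, B):
    """Policy rate implied by the simple Taylor rule:
    i = r* + p* + A*(p - p*) + B*(y - y*)."""
    return r_star + p_star + A * (p - p_star) + B * y_gap

# Hawkish calibration: zero output gap, aggressive response to inflation.
hawk = taylor_rate(r_star=2.0, p_star=2.0, p=1.0, y_gap=0.0, A=2.0, B=1.0)
print(hawk)  # 2.0 -- versus an actual policy rate of 0.25%

# Dovish calibration: same inflation gap, but a -1% output gap.
dove = taylor_rate(r_star=2.0, p_star=2.0, p=1.0, y_gap=-1.0, A=2.0, B=1.0)
print(dove)  # 1.0 -- lower than the hawkish rate, but still well above 0.25%
```

Lowering r* to 1%, as in the rescue of the dovish position discussed below, shaves a further percentage point off both numbers.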
Thus, if we take the historical Taylor rule as a decent policy rule (in the sense that historically, it was associated with good outcomes), then one might say that the hawks have a stronger case than the doves. Both camps should be arguing for lift-off--the only question is how much and how fast. 
On the other hand, something does not seem quite right with the hawk view that things are presently close to normal and that the Fed should therefore normalize its policy rate. All we have to do is look around and observe all sorts of strange things happening. The real interest rate on U.S. treasuries is significantly negative, for example. Indeed, the nominal interest rate on some sovereigns is significantly negative. This does not look "normal" to a lot of people (including me). And so, maybe this is one way to rescue the dovish position. For example, one might claim that the real interest rate is now lower than it normally was, e.g., r* = 1% (see this post by James Hamilton). If so, then this might be used to justify delaying liftoff.

Regardless of positions, everyone seems to assume that liftoff will occur sooner or later. But as Jim Bullard observed here in 2010, the promise of low rates off into the indefinite future may mean low rates (and deflation) forever. Few people seem to take this argument seriously except that, gosh, the prediction seems to be playing out (see Noah Smith's post here). For those who hold this position, the question of liftoff becomes more like now or never, rather than sooner or later.
To conclude, we see that the contradictory views expressed by Evans and George might spring from something as basic as a disagreement on what constitutes the "natural" rate of interest r*. Further disagreement might be based on the appropriate measure of "potential" y* and on the appropriate size of the parameters A and B. There are also other concerns (like "financial stability") that are not captured in the Taylor rule above that might lead Fed presidents to adopt different views on policy. 
In the immortal words of Buffalo Springfield: "There's something happening here, What it is ain't exactly clear." What this something is, its root cause, and what might be done about it seem rather elusive at the moment. And I mean elusive not in the sense that nobody knows. I mean in the sense that everyone seems to have an opinion, most of which are mutually inconsistent. It makes for interesting times, at least. 

Saturday, February 14, 2015

A Simple Model of Multiple Equilibrium Business Cycles

Noah Smith has a nice piece here on Roger Farmer's view of the business cycle.

The basic idea is that, absent intervention, economic slumps (as measured, say, by an elevated rate of unemployment) can persist for a very long time owing to a self-reinforcing feedback effect. The economy can get stuck in what game theorists would label a "bad equilibrium." This interpretation seems to me to be highly consistent with Keynes' (1936) own view on the matter as expressed in this passage:
[I]t is an outstanding characteristic of the economic system in which we live that, whilst it is subject to severe fluctuations in respect of output and employment, it is not violently unstable. Indeed it seems capable of remaining in a chronic condition of subnormal activity for a considerable period without any marked tendency either towards recovery or towards complete collapse.
Now, there is more than one way to explain how an economy can get stuck in a rut. A favorite argument on the right is that recessions are naturally self-correcting if the market is left to its own devices and that prolonged slumps are attributable primarily to the misguided, clumsy and uninformed attempts on the part of government policymakers to "fix" the problem (see here).

But there is another view. This view begins with an observation from game theory: most structures that govern social interaction permit many possible outcomes--outcomes that have nothing to do with the existence of any fundamental uncertainty. If we think of the macroeconomy as a collection of individuals interacting in a large "market game," then the same principle holds--we shouldn't be surprised to discover that many equilibrium outcomes are possible. This idea forms the basis of Roger's pioneering book: The Macroeconomics of Self-Fulfilling Prophecies.

According to Noah, "[Farmer's] approach is mathematically sophisticated, and uses the complex modern techniques that are ubiquitous in academic literature." While this is certainly true, I think there is an easy way to teach the basic idea using standard undergraduate teaching tools. In what follows, I assume that the reader has some knowledge of indifference curves, budget sets, and production possibilities frontiers.

The framework is the basic static "income-leisure" model. A representative household has a fixed unit of time that can be devoted to one of two uses: market work or home work. The household values the consumption of two goods: a market-produced good and a home-produced good. An individual household takes the return to market work as exogenous. If the (expected) return to market work fluctuates randomly over time owing to (say) productivity shocks, tax shocks, news shocks, etc., then the choices that households make can be depicted with the following diagram:


In the diagram above, the x-axis measures time devoted to home work (so that the distance n* is a measure of employment) and the y-axis measures output (real income). The straight lines correspond to the household's budget set (which corresponds to the production possibilities frontier for a linear technology). The curved lines represent indifference curves--how the household values the market and home goods. This is, in essence, the RBC theory of the business cycle: as the returns to economic activities vary over time, people rationally substitute into higher return activities and out of lower return activities. If these shocks are correlated across households, then in the aggregate we observe cyclical fluctuations in output and employment.

Is it possible to model Roger's world view using the same apparatus? Yes, it is. One way to do this is to imagine a fixed production possibilities frontier that exhibits increasing returns to scale. The basic idea is that the return to labor (more generally, any economic activity) is higher when everyone is working hard and vice-versa. The following diagram formalizes this idea.


The RBC view is that there are two separate production functions shifting up and down (with the y-intercept moving between z_H and z_L). But suppose that the production function is in fact stable and that it takes the shape traced by the solid kinked line connecting z_H to 1.0. The kink occurs at some critical level of employment labeled n_C. The individual's return to labor is expected to be high IF he expects aggregate employment to exceed n_C. Conversely, the individual's expected return to labor is low IF he expects aggregate employment to fall short of n_C.

Given this setup, whether the economy ends up at point A (the high-level equilibrium) or at point B (the low-level equilibrium) depends entirely on "animal spirits." That is, if the community as a whole expects B then it is individually rational to choose B which, if done en masse, confirms the initial expectation. Likewise for point A. The allocations and prices associated with points A and B constitute self-fulfilling prophecies.
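For readers who like to see the logic in code, here is a minimal sketch of the coordination story. The threshold n_C and the labor-supply values are invented for illustration; this is a caricature of the mechanism, not Farmer's model:

```python
# Toy coordination model: each household supplies a lot of labor if it
# expects the return to work to be high, which happens exactly when
# aggregate employment exceeds the critical threshold n_C.

N_C = 0.5  # critical employment threshold (hypothetical)

def individual_labor(expected_N):
    """Best response: work hard iff the expected return to work is high."""
    return 1.0 if expected_N >= N_C else 0.2

def equilibrium(initial_belief, rounds=20):
    """Iterate beliefs: next period's expectation is this period's outcome."""
    N = initial_belief
    for _ in range(rounds):
        N = individual_labor(N)  # everyone best-responds to the same belief
    return N

print(equilibrium(0.9))  # optimistic beliefs -> high equilibrium: 1.0
print(equilibrium(0.1))  # pessimistic beliefs -> low equilibrium: 0.2
```

Both outcomes are rest points of the same economy: which one obtains is pinned down by beliefs alone, which is the sense in which the prophecy is self-fulfilling.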

It is interesting to note that these two very different hypotheses can generate output and employment fluctuations that are observationally equivalent. How would the poor econometrician, uninformed of the true structure of the economy, distinguish between these two competing hypotheses? They both generate procyclical employment, productivity and wages. And if a slump lasts for an unusually long time, well, RBC theory can claim that's just because people are rationally expecting a large future penalty (tax) on their employment activities (or, in the context of a search model, their recruiting investments). And if the economy oscillates randomly between A and B at high frequency, the Keynesian theory can claim that this behavior is part of a "sunspot" equilibrium where fluctuations are driven by "animal spirits."

This observational equivalence problem is unfortunate because the two hypotheses have very different policy implications. The first interpretation more or less supports a laissez-faire approach, while the second interpretation suggests a fruitful role for a well-designed fiscal policy (in this model, even the credible threat of employing idle workers can keep the economy at point A without any actual intervention).

Isn't macroeconomics fun?

*****

PS. I lifted these diagrams from my free online macro lecture notes, available here. (Warning: the notes are in desperate need of correction and updating. I'll get to it one day.)

Tuesday, February 3, 2015

Fedcoin: On the Desirability of a Government Cryptocurrency


It was J.P. Koning's blog post on Fedcoin that first got me thinking seriously of the potential societal benefits of government-sponsored cryptocurrency. When I was invited to speak at the International Workshop on P2P Financial Systems 2015, I thought that a talk on Fedcoin would be an interesting and provocative way to start the conference. You can view my presentation here, but what I'd like to do in this post is clarify some of the arguments I made there.

As I described in this earlier post, I view a payment system as a protocol (a set of rules) for debiting and crediting accounts, I view money as a widely agreed-upon record-keeping device, and I view monetary policy as a protocol designed to manage the supply of money over time.

The cryptocurrency Bitcoin is a payment system with monetary objects called bitcoin and a monetary policy prescribed as a deterministic path for the supply of bitcoin converging to a finite upper limit. I view Bitcoin as a potentially promising payment system, saddled with a less-than-ideal money and monetary policy. As the protocol currently stands, bitcoins are potentially a better long-run store of value than non-interest-bearing USD. But if a long-run store of value is what you are looking for, we already have a set of income-generating assets that do a pretty good job at that (stocks, bonds, real estate, etc.). [For a comparison of the rates of return on stocks vs. gold, look here.]
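That deterministic supply path is easy to compute: the block reward started at 50 BTC and halves every 210,000 blocks, so cumulative issuance converges to (just under) 21 million coins. A quick sketch, ignoring the protocol's satoshi-level rounding:

```python
# Bitcoin's money supply rule: the block reward starts at 50 BTC and
# halves every 210,000 blocks. Ignoring satoshi rounding, cumulative
# issuance converges to 21 million BTC.

BLOCKS_PER_ERA = 210_000
INITIAL_REWARD = 50.0

def total_supply(eras):
    """Cumulative BTC issued after `eras` halving eras."""
    return sum(BLOCKS_PER_ERA * INITIAL_REWARD / 2**i for i in range(eras))

print(total_supply(1))               # 10,500,000 BTC after the first era
print(round(total_supply(33)))       # approaches 21,000,000 in the limit
```

Notice that half of all bitcoin that will ever exist was issued in the very first era--this front-loaded schedule is part of what I mean by a less-than-ideal monetary policy.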

Let's set aside Bitcoin's monetary policy for now and concentrate on the bitcoin monetary object. What is the main problem with bitcoin as a monetary instrument in an economy like the U.S.? It is the same problem we face using any foreign currency in domestic transactions--the exchange rate is volatile and unpredictable. (And our experience with floating exchange rates tells us that this volatility will never go away.) Bill Gates hits the nail on the head in his Reddit AMA:
Bitcoin is an exciting new technology. For our Foundation work we are doing digital currency to help the poor get banking services. We don't use bitcoin specifically for two reasons. One is that the poor shouldn't have a currency whose value goes up and down a lot compared to their local currency. 
For better or worse, like it or not, the USD is the U.S. economy's unit of account--the numeraire--the common benchmark relative to which the value of various goods and services are measured and contractual terms stipulated. With a floating exchange rate, managing cash flow becomes problematic when (say) revenue is in BTC and obligations are in USD. Intermediaries like Bitreserve can mitigate some of this risk but, of course, at an added expense. Hedging foreign exchange risk is costly--a cost that is absent when the exchange rate is fixed.

And so, here is where the idea of Fedcoin comes in. Imagine that the Fed, as the core developer, makes available an open-source Bitcoin-like protocol (suitably modified) called Fedcoin. The key point is this: the Fed is in the unique position to credibly fix the exchange rate between Fedcoin and the USD (the exchange rate could be anything, but let's assume par).

What justifies my claim that the Fed has a comparative advantage over some private enterprise that issues (say) BTC backed by USD at a fixed exchange rate? The problem with such an enterprise is precisely the problem faced by countries that try to peg their currency unilaterally to some other currency. Unilateral fixed exchange rate systems are inherently unstable because the agency fixing the BTC/USD exchange rate cannot credibly commit not to run out of USD reserves to meet redemption waves of all possible sizes. In fact, the structure invites a speculative attack.
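The asymmetry can be caricatured in a few lines of code. This is a toy, not a formal model of speculative attacks--the reserve levels and redemption sizes are invented:

```python
# Toy contrast: a private issuer pegging its tokens to USD holds finite
# reserves, so a large enough redemption wave breaks the peg. An issuer
# of both sides of the peg (the Fed) faces no reserve constraint.

def private_peg(reserves, redemptions):
    """Honor redemptions until USD reserves are exhausted."""
    for r in redemptions:
        if r > reserves:
            return "peg broken"
        reserves -= r
    return "peg holds"

def fed_peg(redemptions):
    """The Fed can always issue the USD it owes: no reserve constraint."""
    return "peg holds"

wave = [40, 40, 40]  # a speculative attack: 120 in total redemptions
print(private_peg(reserves=100, redemptions=wave))  # peg broken
print(fed_peg(wave))                                # peg holds
```

Because speculators know the private issuer's reserves are finite, a redemption wave large enough to break the peg can be profitable to launch--which is precisely why the structure invites attack.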

In contrast, the issue of running out of USD or Fedcoin to maintain a fixed exchange rate poses absolutely no problem for the Fed because it can issue as many of these two objects as is needed to defend the peg (this would obviously call for a modification in the Bitcoin protocol in terms of what parameters govern the issuance of Fedcoin). Ask yourself this: what supports the fixed exchange rate system among the existing denominations of U.S. currency?


Do you ever worry that your Lincoln might trade at a discount relative to (say) Washingtons? If someone ever offered you only 4 Washingtons for your 1 Lincoln, you have the option of approaching the Fed and asking for a 5:1 exchange rate--the exchange rate you are used to. Understanding this, people will generally not try to violate the prevailing fixed exchange rate system. The system is credible because the Fed issues each of these "currencies." Now, just think of Fedcoin as another denomination (with an exchange rate fixed at par).

Now, I'm not sure if Fedcoin should be a variant of Bitcoin or some other protocol (like Ripple). In particular, I have some serious reservations about the efficiency of proof-of-work mechanisms. But let's set these concerns aside for the moment and ask how this program might be implemented in general terms.

First, the Fedcoin protocol could be made open source, primarily for the purpose of transparency. The Fed should only honor the fixed exchange rate for the version of the software it prefers. People can download free wallet applications, just as they do now for Bitcoin. Banks or ATMs can serve as exchanges where people can load up their Fedcoin wallets in exchange for USD cash or bank deposits. There is a question of how much to reward miners and whether the Fed itself should contribute hashing power for the purpose of mining. These are details. The point is that it could be done.

Of course, just because Fedcoin is feasible does not mean it is desirable. First, from the perspective of the Fed, because Fedcoin can be viewed as just another denomination of currency, its existence in no way inhibits the conduct of monetary policy (which is concerned with managing the total supply of money and not its composition). In fact, Fedcoin gives the Fed an added tool: the ability to conveniently pay interest on currency. In addition, Koning argues that Fedcoin is likely to displace paper money and, to the extent it does, will lower the cost of maintaining a paper money supply as part of the payment system.

What about consumers and businesses? They will have all the benefits of Bitcoin--low cost, P2P transactions to anyone in the world with the appropriate wallet software and access to the internet. Moreover, domestics will be spared exchange rate volatility. Because Fedcoin wallets, like cash wallets, are permissionless and free, even people without proper ID can utilize the product without subjecting themselves to an onerous application process. Finally, because Fedcoin, like cash, is a "push" (rather than "pull") payment system, it affords greater security against fraud (as when someone hacks into your account and pulls money out without your knowledge).

In short, Fedcoin is essentially just like digital cash. Except in one important respect. Physical cash is still a superior technology for those who demand anonymity (see A Theory of Transactions Privacy). Cash does not leave a paper trail, but Fedcoin (and Bitcoin) do leave digital trails. In fact, this is an excellent reason why Fedcoin should be spared any KYC restrictions. First, the government seems able to live with not imposing KYC on physical cash transactions--why should it insist on KYC for digital cash transactions? And second, digital cash leaves a digital trail, making it easier for law enforcement to track illicit trades. Understanding this, it is unlikely that Fedcoin will be the preferred vehicle to finance illegal activities.

Finally, the proposal for Fedcoin should in no way be construed as a backdoor attempt to legislate competing cryptocurrencies out of existence. The purpose of Fedcoin is to compete with other cryptocurrencies--to provide a property that no other cryptocurrency can offer (guaranteed exchange rate stability with the USD). Adopting Fedcoin means accepting the monetary policy that supports it. To the extent that people are uncomfortable with Fed monetary policy, they may want to trust their money (if not their wealth) with alternative protocols. People should be (and are) free to do so.

Postscript, February 06, 2015.

A number of people have asked me why we would need a distributed/decentralized consensus architecture to support a Fedcoin. In the talk I gave in Frankfurt, I actually made two proposals. The first proposal was called "Fedwire for All." This is basically digital cash maintained on a closed centralized ledger, like Fedwire. It would be extremely cheap and efficient, far more efficient than Bitcoin. But of course, it does not quite replicate the properties of physical cash in two respects. First, as with TreasuryDirect, the Fedwire accounts would not be permissionless. People would have to present IDs, go through an application procedure, etc. Second, the Fed is unlikely to look the other way (as it does with cash) in terms of KYC restrictions. So, to the extent that these two latter properties are desirable, I thought (at the time I wrote this piece) that we needed to move beyond Fedwire-for-All to Fedcoin. There may, of course, be other ways to implement these properties. I'm all ears!

Sunday, February 1, 2015

Money and Payments, or How we Move Marbles.

I'm writing this to serve as background for my next post on Fedcoin. If you haven't thought much about the money and payments system, I hope you'll find this a useful primer explaining some basic principles.

I view the payments system as a protocol (a set of rules) for debiting and crediting accounts. I view money as an object that is used to debit/credit accounts in a payments system. I view monetary policy as a protocol to manage the supply of money over time. Collectively, these objects form a money and payments system.

One way to visualize the money and payments system is as a compartmentalized box of marbles, displayed to the right. The marbles represent agreed-upon monetary tokens--record-keeping devices (see also the discussion here). The compartments represent individual accounts. Paying for a good or service corresponds to moving marbles from one account to another.
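The metaphor translates directly into code. Here is a toy ledger (the names and balances are invented) in which a payment is nothing more than moving marbles between compartments:

```python
# The marble metaphor as code: accounts are compartments holding tokens,
# and a payment just debits one compartment and credits another.

ledger = {"alice": 10, "starbucks": 3}

def pay(payer, payee, marbles):
    """Debit the payer and credit the payee by the same amount."""
    if ledger[payer] < marbles:
        raise ValueError("insufficient marbles")
    ledger[payer] -= marbles
    ledger[payee] += marbles

pay("alice", "starbucks", 4)  # Alice buys her latte
print(ledger)  # {'alice': 6, 'starbucks': 7}
```

Every money and payments system--cash, bank deposits, Bitcoin--is at heart some version of this: the interesting differences lie in who keeps the ledger, how it is secured, and how new marbles get created.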

What makes a good marble? What is the best way to manage the supply of marbles over time? And what is the best way to move marbles around from account to account? There are books devoted to addressing these questions.

A good marble should have easily recognizable and understandable properties. This is one reason why complicated securities make poor money. Fiat money and senior claims to fiat money make good money along this dimension because everyone knows that fiat money is a claim to nothing (so there is no asymmetric information, a property emphasized by Gorton and Pennacchi, 1990). Gold, even if it is coined, is not especially good along this dimension because it is heterogeneous in quality (and it's not costless to have it assayed, see here). Plus, precious-metal coins can be shaved (although, there is no motivation to shave token coins).

A good marble should also be durable, divisible, and difficult to counterfeit. Paper money issued in different denominations can have these properties. And while virtual money can easily be made durable and divisible, it is extremely easy to counterfeit. For this reason, trusted intermediaries are needed to create and manage a virtual money supply (at least, up to the invention of Bitcoin). Gold (and other precious metals) have these desired properties. But to the extent that these metals have competing uses, it is inefficient to have them serve as accounting marbles. Unless you don't trust the intermediaries that manage the fiat-marble supply, that is. (Unfortunately, there have been enough failed experiments along this dimension to warrant some skepticism.)

How should the supply of marbles be managed over time? Advocates of the gold standard want the supply to be determined by the market sector (through mining). This protocol means that the supply of money is essentially fixed over short periods of time, and grows relatively slowly over long periods of time (although, big new discoveries have often led to inflationary episodes). If the demand for money increases suddenly and dramatically (as it is prone to do during a financial crisis), then the consequence of a fixed short-run supply of money is a sudden and unanticipated deflation. Because nominal debt obligations are not typically indexed to the price-level, the effect of this protocol is to make a recession larger than it otherwise might be. The idea behind a central bank as lender-of-last-resort is to have an agency that can temporarily increase the supply of money (in exchange for "excessively" discounted private paper) to meet the elevated demand for money so as to stabilize the price-level. In effect, such a policy, if executed correctly, can replace the missing state-contingency in nominal debt contracts. Whether a central bank can be trusted to manage such a policy in a responsible and competent manner is, of course, another question. Let's just say that there are costs and benefits to either approach and that reasonable people can reasonably disagree.

Apart from cyclical adjustments to the money supply, there is the question of whether money-printing should ever be used to finance operating expenditure (seigniorage). Generally, the answer is "yes"--at least, once again, if it is done responsibly. It is of some interest to note that the Bitcoin protocol uses seigniorage to finance payment processors (miners). The idea here, I suppose, is that the protocol--which is a computer program and not a politician--can be trusted to manage the inflation-tax optimally. That is, at least for a limited amount of time--the long-run supply of bitcoin is presently capped at 21M units.

Alright, so how about the payments system? What are the different ways of rearranging marbles in a ledger?

The most basic method of payment is physical cash exchanged in a P2P meeting. When I buy my Starbucks latte, I debit my wallet of cash and Starbucks credits its cash register by the same amount. The ledger that describes the distribution of physical cash holdings (and the histories of how each unit of cash has moved across accounts over time) is hidden from all of us. This is why cash transactions are associated with a degree of anonymity.

Another popular way to make a payment is via a debit card. In this case, Starbucks and I have accounts in a ledger that is managed by the banking system. These accounts are stocked with virtual book-entry objects. When I pay for my latte with a debit card, I send a message to the banking system asking it to debit my account and credit the merchant's account. In this protocol, the banking system verifies that I have sufficient account balances and executes the funds transfer. The protocol obviously relies on the use of trusted intermediaries to manage the ledger and keep it secure. Also, because bank accounts are associated with individual identities and because centralized ledger transactions can be recorded, there is no anonymity associated with the use of this payments protocol.

The Bitcoin protocol is an amazing invention--I'm on record as describing it as a stroke of genius. The amazing part of it is not its monetary policy (which I think is flawed). Its main contribution is to permit P2P payments in digital cash without the use of a centralized ledger managed by a trusted intermediary. (In fact, the economic implications of this invention extend far beyond payments; see Ethereum, for example).

What makes digital cash without an intermediary so difficult? Think of digital cash as a computer file that reads "one dollar, SN 24030283." Suppose I want to email this digital file to you in payment for services rendered. When I take a dollar bill out of my pocket and hand it to the merchant, there is no question of that dollar bill leaving my pocket. For the same thing to be true of my digital dollar, I would be required to destroy my computer file "one dollar, SN 24030283" after sending it to the merchant. The problem is that people are likely to make endless copies of their digital money files. In other words, digital money can be costlessly counterfeited. And this is why we make use of intermediaries to handle payments in a virtual ledger. (We don't expect the intermediaries to counterfeit our balances...our main complaint with them is that they charge too much for their accounting services!)
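The point is easy to demonstrate: a money "file" is just data, and data can be copied at zero cost. A toy illustration of the resulting double spend:

```python
# Naive digital cash: the dollar is a string of data, and nothing in the
# data itself stops me from "sending" the same note to two merchants.

note = "one dollar, SN 24030283"

merchant_a_wallet = []
merchant_b_wallet = []

merchant_a_wallet.append(note)  # spend the note once...
merchant_b_wallet.append(note)  # ...and spend it again: a double spend

print(merchant_a_wallet[0] == merchant_b_wallet[0])  # True
```

Solving this without appointing a trusted bookkeeper to maintain the ledger is precisely the problem the Bitcoin protocol cracked.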

There is no need to get too far into the details of how the Bitcoin protocol manages this feat. If you are interested, you can consult this book by the inspirational Andreas Antonopoulos. The main idea behind the protocol is a distributed public ledger (called the block chain) that is updated and made secure through the collective efforts of decentralized payment processors (called miners). I find it interesting how the Bitcoin consensus mechanism resembles, in spirit at least, the communal record-keeping practices of ancient gift-giving societies. In a gift-giving society, who contributes what to the collective good is recorded on a distributed network of brains. This is easy to do in small societies because there's not much to keep track of and verbal communication is sufficient to keep all nodes updated.

I want to end with a couple of notes. First, isn't it interesting to note the coexistence of so many different monies and payments systems? Even today, a great deal of economic activity among small social networks (family, close friends, etc.) continues to be supported by gift-giving principles (including the threat of ostracism for bad behavior). This coexistence is likely to remain going forward and I think that open competition is probably the best way for society to determine the optimal mix.

It is also interesting to note that almost every money and payments system requires some degree of trust. This is also true of Bitcoin. In particular, the vast majority of Bitcoin users cannot read C++ and even for those that can, most are not about to go and check all 30MB (or so) of the Bitcoin source code. Nor will most people know what to do with a 30GB (and growing) block chain. Core developers? Mining coalitions? Who are these agents and why should they be trusted? The protocol cannot be changed...really? It won't be changed...really? It's just software, my friend. There's no guarantee that a consensus will not form in the future to alter the program in a materially significant way that some users will not desire. The same holds true for any consensus protocol, including the Federal Reserve Act of 1913 and the U.S. Constitution.

In my view, people will come to trust Bitcoin (or not) depending on its historical performance as a money and payments system. This is perfectly natural. It is not necessary, for example, that a person learns precisely how an internal combustion engine works before operating a motor vehicle. Most people drive cars because our experiences and observations tell us we can trust them to work. And so it is with money and payments systems.

Update April 11, 2015.

For an excellent explanation of the modern payment system, see here: A Simple Explanation of How Money Moves Around the Banking System, by Richard Gendal Brown.