Believe those who are seeking the truth. Doubt those who find it. (André Gide)

Friday, January 11, 2019

When is more competition bad?

Contrary to popular belief, standard economic theory does not provide a theoretical foundation for the notion that "competition is everywhere and always good." It turns out that legislation that promotes competition among producers may improve consumer welfare. Or it may not. As with so many things in economics (and in life), it all depends.

I recently came across an interesting paper demonstrating this idea by Ben Lester, Ali Shourideh, Venky Venkateswaran, and Ariel Zetlin-Jones with the title "Screening and Adverse Selection in Frictional Markets," forthcoming in the Journal of Political Economy. The paper is written in the standard trade language. Like any trade language, it's difficult to understand if you're not in the trade! But I thought the idea sufficiently important that I asked Ben to translate the basic results and findings for a lay audience. I'm glad to say he was very happy to oblige.

And so, without further ado, today's guest post by Ben Lester, my colleague at the Philadelphia Fed.
You can follow Ben on Twitter: @benjamminlester



Competition in Markets with Asymmetric Information
By Benjamin Lester

Background
In many basic economic models, competition is good – it increases welfare.  As a result, policy makers often introduce reforms that they hope will reduce barriers or “frictions” in order to increase competition.  For example, the Dodd-Frank Act contains regulations aimed at promoting more competition in certain financial markets, such as derivatives and swaps, while the Affordable Care Act contained provisions that were intended to promote competition across health insurance providers.

In a recent paper with Ali Shourideh, Venky Venkateswaran, and Ariel Zetlin-Jones, we re-examine the question of whether more competition is welfare-improving in markets with a particular feature – what economists call “asymmetric information.”  These are markets where one side has information that is relevant for a potential trade, but the other side can’t see it. Classic examples include insurance markets, where an individual knows more about his own health than an insurer; loan markets, where a borrower knows more about her ability to repay than a lender; and financial markets, where the owner of an asset (like a mortgage-backed security) may know more about the value of the underlying assets than a potential buyer.

Unfortunately, understanding the effects of more or less competition in markets with asymmetric information has been constrained by a shortage of appropriate theoretical frameworks.  As Chiappori et al. (2006) put it, there is a “crying need for [a model] devoted to the interaction between imperfect competition and adverse selection.”

What we do
We develop a mathematical model of a market – to fix ideas, let’s call it an insurance market – that has three key ingredients.  The first ingredient is adverse selection: one side of the market (consumers) knows more about their health than the other side of the market (insurers).  Second, we allow the two sides of the market to trade sophisticated contracts: as in the real world, insurers can offer consumers a rich set of options to choose from, consisting of different levels of coverage that can be purchased at different prices.  Last, we introduce imperfect competition by assuming that consumers don’t always have access to multiple insurers: in particular, each consumer will get offers from multiple insurers with some probability, but there is also a chance of receiving only one offer.[1]  Hence, our model allows us to capture the case of perfect competition (where all consumers get multiple offers), monopoly (where all consumers get only one offer), and everything in between.


What we find

One of our main results is that increasing competition can actually make people worse off.[2]  To understand why, it’s important to understand the types of contracts that our model predicts will be offered by insurers.  Let’s say that there are two types of consumers: those who are likely to require large medical expenses (“sick” consumers), and those who are not (“healthy” consumers).  Then insurers will often find it optimal to offer two different plans: one that is expensive but provides more coverage, and one that is cheaper but provides less coverage.[3]  Designed correctly, these two options will induce consumers to self-select into the plan intended for them, so that sick consumers will pay a higher price for more coverage and healthy consumers will pay a lower price for less coverage.

An important property of these contracts is that they fully insure sick consumers, but they under-insure healthy consumers.  Ideally, insurers would like to offer healthy consumers more coverage, but they can’t: given the lower price, sick consumers would choose this new plan, making it no longer profitable for insurers to offer it.  This theoretical result – that separating the sick from the healthy requires under-insuring healthy consumers – is a fundamental result in markets where asymmetric information is present.  The relevant question for us is: how does the amount of competition determine the extent to which healthy consumers are under-insured? The answer we find is that some competition can induce insurers to provide healthy consumers with more insurance, but too much competition can have the opposite effect. 

The intuition is as follows.  When consumers are more likely to receive multiple offers, insurers respond by making more attractive offers to consumers, as they try to retain market share.  The key question turns out to be: does increasing competition make them sweeten the deal more for sick consumers, or for healthy consumers? On the one hand, as the offer intended for sick consumers gets better, they have less incentive to take the offer intended for healthy consumers – in the parlance of economics, their “incentive constraint” loosens.  Hence, as insurers sweeten the offer intended for sick consumers, they are able to offer healthy consumers more coverage, and welfare rises.[4]  On the other hand, however, as the offer intended for healthy consumers becomes more attractive, sick consumers are more tempted to take it – their incentive constraint tightens – and the only way to keep the two separate is to reduce the amount of coverage being offered to healthy consumers, causing welfare to decline.
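To make this incentive-constraint logic concrete, here is a small Python sketch of my own (the numbers are hypothetical and not taken from the paper). The sick type is fully insured; the healthy type's coverage is capped by the requirement that the sick type prefer his own contract. Lowering the sick type's premium loosens that constraint and permits more coverage for healthy types.

```python
import math

# Toy separating menu with two risk types (hypothetical numbers, not from
# the paper): wealth W, possible loss L, loss probabilities p_sick, p_healthy.
W, L = 100.0, 50.0
p_sick, p_healthy = 0.75, 0.25
u = math.sqrt  # risk-averse utility

def mimic_utility(q):
    """Sick type's expected utility from taking the healthy contract:
    coverage q at the healthy type's actuarially fair premium."""
    prem = p_healthy * q
    return p_sick * u(W - L + q - prem) + (1 - p_sick) * u(W - prem)

def max_healthy_coverage(sick_premium):
    """Largest coverage q that can be offered to healthy types while a
    fully insured sick type (paying sick_premium) still prefers his own
    contract.  Found by bisection on the binding incentive constraint."""
    own = u(W - sick_premium)  # full insurance: same consumption in both states
    lo, hi = 0.0, L
    for _ in range(60):
        mid = (lo + hi) / 2
        if mimic_utility(mid) <= own:
            lo = mid  # constraint slack: coverage can rise
        else:
            hi = mid  # sick type would mimic: coverage must fall
    return lo

q_fair  = max_healthy_coverage(p_sick * L)  # sick pay their fair premium
q_sweet = max_healthy_coverage(30.0)        # a sweeter deal for sick types
print(q_fair, q_sweet)  # the second is larger: a looser incentive constraint
```

With these (made-up) parameters, the cap on healthy coverage rises severalfold when the sick type's premium falls, while full coverage for healthy types remains unattainable.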

In the paper, we show that the former, positive effect dominates in markets where insurers have a lot of market power, while the latter, negative effect dominates when the market is relatively competitive. Hence, in markets with asymmetric information, welfare is maximized at some interior point, where there is some competition, but not too much!


Other results and future research
In the paper, we also show that increasing transparency has ambiguous effects on welfare.  In particular, we study the effects of a noisy signal about a consumer’s type – in the insurance example, this could be a blood test or information about an individual’s pre-existing conditions.  We show that increasing transparency is typically beneficial when insurers have a lot of market power, but it can be detrimental to welfare in highly competitive environments.

More generally, our model provides a tractable framework to confront a variety of theoretical questions regarding markets that suffer from asymmetric information, and offers a number of insights into existing empirical studies, too.[5]  For example, there is a large literature that tests for the presence of asymmetric information by studying the quantitative relationship between, e.g., the amount of insurance that consumers buy and their tendency to get sick.[6]  However, according to our analysis, insurers find it optimal to offer menus that separate consumers only when markets are sufficiently competitive, and when there is a sufficiently large number of sick consumers in the population.  Otherwise, they find it best to offer a single insurance plan.  This finding implies that, when insurers have sufficient market power, there will be no relationship between the quantity of insurance a consumer buys and his health status.  In other words, one can’t empirically test for asymmetric information without controlling for the market structure.  This is just one of many positive predictions of our model that we plan to test in the data.

References:
Burdett, K., and K. L. Judd (1983) “Equilibrium Price Dispersion,” Econometrica, 51, pages 955–69.
Chiappori, P.-A., B. Jullien, B. Salanié, and F. Salanié (2006) “Asymmetric Information in Insurance: General Testable Implications,” RAND Journal of Economics, 37, pages 783–98.
Chiappori, P.-A., and B. Salanié (2000) “Testing for Asymmetric Information in Insurance Markets,” Journal of Political Economy, 108, pages 56–78.
Lester, B., A. Shourideh, V. Venkateswaran, and A. Zetlin-Jones (2018) “Screening and Adverse Selection in Frictional Markets,” Journal of Political Economy, forthcoming.





[1] We borrow this modeling device from the paper by Burdett and Judd (1983).
[2] At a high level, the idea that reducing frictions can sometimes make people worse off is not unique to our paper; these types of results are known from the theory of the second best. What distinguishes our result is the context in which it arises, and our ability to characterize precisely when and why reducing frictions (or increasing competition) is harmful.
[3] The negative relationship between price and coverage should be familiar to most readers; see, e.g., the metal tiers (platinum, gold, silver, bronze) offered at https://www.healthcare.gov/choose-a-plan/plans-categories/.
[4] Since sick consumers are always fully insured, consumers’ welfare always rises when healthy consumers are offered more insurance.  On a more technical level, all of our statements about welfare are based on a measure of ex ante, utilitarian welfare.
[5] As a technical aside, unlike many models of asymmetric information and screening, we find that an equilibrium always exists in our environment, that the equilibrium is unique, and that the equilibrium does not rely on any assumptions regarding “off-path beliefs.”
[6] See the seminal paper by Chiappori and Salanié (2000).



Disclaimer

The views expressed here are those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.



Tuesday, December 25, 2018

Racial Diversity in the Supply of U.S. Econ PhDs

This post is motivated by Eshe Nelson's column "The Dismal Cost of Economics' Lack of Racial Diversity." I was especially struck by this data -- out of the 539 economics doctorates awarded to U.S. citizens and permanent residents (by U.S. institutions), only 18 of the recipients were African-American.
I thought it would be of some interest to see what the data looks like more broadly over other groups and over a longer period of time. I thank my research assistant, Andrew Spewak, for gathering this data (from the National Science Foundation). 

Let's start with the raw numbers. The data is aggregated into 5-year bins running from 1965 through 2014. The orange bars represent the number of econ PhDs awarded to U.S. citizens and permanent residents (by U.S. institutions) over a given 5-year period. The blue bars represent the total number of doctorates awarded.


It seems that the number of econ PhDs awarded to U.S. citizens is on the decline and that this decline has been partially made up by the number of PhDs awarded to foreign students.

Now, let's stick with citizens for the moment and decompose the data across various "racial" categories. The following figure reports the share of econ doctorates earned by various groups.


The most dramatic pattern is the relative decline of PhDs awarded to Whites and the increasing share of degrees awarded to Asians (there is also a noticeable uptick in the "Other" category which includes groups like Native Americans). Blacks and Hispanics have made some gains since the early years, but have since stabilized to about a 5% share.

I now reproduce the picture above, but this time looking at total PhDs awarded.


The relative decline of Whites here is even more evident, as is the increasing share of Asians. It is interesting to note that while the share of Hispanics has increased noticeably by including foreign (non-PR) recipients, the same is not true for Blacks. One possibility here is that English-speaking foreign black students may be more likely to target the U.K. over the U.S. and that French-speaking black students may be more likely to target French-speaking institutions in France or in former French colonies, like Quebec. (It would be interesting to examine these statistics for Canadian universities.)

Finally, let's take a look at how the share of PhDs across groups lines up with their share of the total population. Here is what the data looks like for the period 2010-2014. 


While White citizens are over-represented, Whites as a whole are under-represented (relative to the domestic U.S. population). Blacks are significantly under-represented both as citizens and including foreigners. Asians, on the other hand, are significantly over-represented--both as citizens and especially if one includes foreigners. Only the "Other" category seems to be roughly representative of the population.

To conclude, there are some clear racial imbalances here. I think most people would agree that increasing Black and Hispanic representation in the U.S. economics profession is a good idea (for many of the reasons highlighted in Eshe's column). Future research into this matter should be informed by the fact that not all minority groups have fared in the same way. It would also be interesting to see how these patterns have evolved in other countries. 

Sunday, December 23, 2018

Does the Fed have a symmetric inflation target?

It's well-known that the Fed has been undershooting its 2% inflation target every year since 2012 (ironically, the year it formally adopted the target). This has led some to speculate that 2% is being viewed more as a ceiling than as a target, as is the case with the ECB. The Fed, however, continues to insist that not only is 2% a target, it is a symmetric target. But what does this mean, exactly? And how can we judge whether the Fed has a symmetric inflation target or not?

These questions came to me while listening to Jay Powell's recent press conference following the FOMC's decision to follow through with a widely anticipated rate hike. At the 16:15 mark, reporter Binyamin Appelbaum (NY Times) asked Powell the following question:
BA: You're about to undershoot your inflation target for the seventh straight year and you forecast that you're going to undershoot it for the eighth straight year...Can you help us to understand why people would be advocating restrictive monetary policy at a time of persistent inflation undershoots? 
Here is how Powell responded:
JP: Well, we as a committee do not desire inflation undershoots and you're right -- inflation has continued to surprise to the downside -- not by a lot though -- I think we're very close to 2% and, you know, we do believe it's a symmetric goal for us -- symmetric around 2% -- and that's how we're going to look at it. We're not trying to be under 2% -- we're trying to be symmetrically around 2% -- and, you know, I've never said that I feel like we've achieved that goal yet. The only way to achieve inflation symmetrically around 2% is to have inflation symmetrically around 2% -- and we've been close to that but we haven't gotten there yet and we haven't declared victory on that yet. So, that remains to be accomplished. 
While this answer sounded reasonable on some level, it did not satisfy the very next inquisitor, Jeanna Smialek (Bloomberg):
JS: Just following up on Binya's question...I guess if you haven't achieved 2% and you don't see an overshoot -- which would sort of be implied by a symmetrical target -- what's the point of raising rates at all? 
Powell replied to this by making reference to the strength of the economy -- growth well above trend, unemployment falling, inflation moving up to 2%, and a positive forecast. In this context, the rate hike seemed appropriate. Again, a sensible sounding answer -- but did it answer the question actually posed?

As I reflected on this exchange, I felt something amiss. And then it occurred to me that people might be mixing up the notion of a symmetric inflation target with a price-level target.

In her question above, Jeanna suggested that if the Fed has a symmetric inflation target, then we should be expecting an overshoot of inflation. But the intentional overshooting of inflation is not inflation targeting -- it is price-level targeting. With an inflation target, one should be expecting inflation to return to the target -- not to move beyond it.

This would have been a fine answer to Jeanna's question, but isn't it inconsistent with the earlier reply to Binyamin? In that response, Powell left us with the impression that the FOMC has failed to achieve its symmetric inflation goal -- that success along this dimension would consist of actually observing inflation vary symmetrically around 2%. I'm not sure this is entirely correct.

To my way of thinking, an inflation target means getting people to expect that inflation will eventually return to target (from below, if inflation is presently undershooting, and from above, if inflation is presently overshooting). A symmetric inflation target simply means that the rate at which inflation is expected to return to target is the same whether inflation is presently above or below target. To put it another way, symmetry implies that the FOMC should feel equally bad about inflation being 50bp above or below target. Along the same line, persistent inflation undershoots and overshoots should be equally tolerated (given appropriate conditions).
 
Should a successful symmetric inflation targeting regime generate inflation rates that average around target? It's hard to see how it would not, at least in the long run and provided the shocks hitting the economy are themselves symmetric (not so obviously a given, but let me set it aside for now). Does missing the inflation target from below for roughly a decade imply that the FOMC has failed to implement a symmetric inflation targeting regime? Powell's mea culpa above suggests yes. But again, I am not so sure.
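The distinction between the two regimes can be put in a few lines of Python (my own toy sketch; the reversion speed is an arbitrary assumption). Under inflation targeting, the expected path returns to 2% from whichever side it starts on and never crosses it, so even a long string of undershoots involves no planned overshoot; under price-level targeting, an undershoot must be repaid with inflation above 2%.

```python
TARGET = 2.0  # percent

def it_path(pi0, rho=0.5, T=6):
    """Inflation targeting: expected inflation reverts toward the 2%
    target at (assumed) rate rho -- no overshoot along the way."""
    path, pi = [], pi0
    for _ in range(T):
        pi = TARGET + rho * (pi - TARGET)
        path.append(pi)
    return path

def plt_next(price_gap, horizon=3):
    """Price-level targeting: the price level sits price_gap percent
    below its 2%-growth path after an undershoot; closing the gap over
    `horizon` years requires inflation ABOVE 2% -- a deliberate overshoot."""
    return TARGET + price_gap / horizon

print(it_path(1.0))   # climbs toward 2 from below, never exceeds it
print(plt_next(1.0))  # about 2.33: makes up the lost ground
```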

As I said above, the success of an inflation targeting regime should be measured by how well inflation expectations are anchored around target. By this measure, the FOMC has managed, in my mind, a reasonable level of success (2015-16 looks weak). The following diagram plots the PCE inflation rate (blue) against expected inflation (TIPS breakevens) five years (red) and ten years (green) out.


In my view, the fact that realized inflation has persistently remained below target does not necessarily imply the absence of a symmetric inflation target. Let's take a look at the FOMC's official view on the matter, originally made public on January 24, 2012 in its Statement of Longer-Run Goals and Monetary Policy Strategy. Let me quote the relevant passage and highlight the key phrases:
The Committee reaffirms its judgment that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate. The Committee would be concerned if inflation were running persistently above or below this objective. Communicating this symmetric inflation goal clearly to the public helps keep longer-term inflation expectations firmly anchored, thereby fostering price stability...
It seems clear enough that the real goal here is to keep longer-term inflation expectations anchored at 2%.  The idea is that if inflation expectations are anchored in this manner, then the actual inflation rate today shouldn't matter that much for longer-term plans (like investment decisions). If inflation turns out to be low, you should be expecting it to rise. If it turns out to be high, you should be expecting it to fall. Nowhere does the statement suggest we should be expecting under or over shooting -- a characteristic we would associate with a price-level target. As for the phenomenon of persistent under or over shoots, the statement makes clear that the Committee would be equally (symmetrically) concerned in either case.

If one accepts my definition of symmetric inflation target then, unfortunately, we do not yet have enough data to judge whether the Fed's inflation target is symmetric. The policy was only formally implemented in 2012. Since then we've only observed a persistent undershoot and the conditions leading to these persistent downward surprises. Would the FOMC be equally tolerant of letting inflation surprise to the upside for several years should economic conditions warrant? It seems that we'll have to wait and see.


Thursday, December 6, 2018

Working More for Less

I had an interesting chat with a colleague of mine the other day about the labor market. In the course of conversation, he mentioned that he used to teach a class in labor economics. Naturally, an important lesson included the theory of labor supply. Pretty much the first question asked is how the supply of labor can be expected to change in response to a change in the return to labor (the real wage). 

My colleague said that for years he would preface the theoretical discussion with a poll. He would turn to the class and ask them to imagine themselves employed at some job. Then imagine having your wage doubled for a short period of time. How many of you would work more? (The majority of the class would raise their hands.) How many of you would not change your hours worked? (A minority of hands). How many of you would work less? (A sprinkling of hands). At the end of the polling, he'd start teaching a standard theory of labor supply and using it to interpret the poll results (substitution vs. wealth effects).

My colleague administered this poll for over a decade. The results were always the same. (How satisfying.)
 
Then, one day, for no apparent reason, he decided to mix it up a little bit. Instead of asking the class to imagine an increase in the wage rate, he asked his students to consider a decrease in their wage rate. He was expecting a symmetrically opposite response. To his shock, a majority of the class responded that they would work more. Only a minority replied that they would work less or not change their hours.

Surely, this was an anomaly? But when he repeated the experiment with another class, he got the same result. He mentioned it to a colleague of his, who then ran the same experiment with his class and he too confirmed the result. What was going on here? If true, then employers can apparently get more labor out of their workers by lowering their wages?!
 
The phenomenon here seems related to the evidence of "income targeting" among some groups of workers; see, for example, the classic study of New York taxi drivers by Camerer, Babcock, Loewenstein and Thaler (QJE, May 1997). Evidently, inexperienced taxi drivers tend to work less when the return to working is high, and work more when the return to working is low. This behavior doesn't quite square with the phenomenon reported by my colleague. The effect there appeared to be asymmetric: students reported being willing to work more at a lower wage, but also being willing to work more at a higher wage. In both cases, however, it seems that the existence of some fixed obligation (e.g., monthly food and rent payments) plus no ready access to credit could explain why workers might be willing to work longer hours when the return to work declines.
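The fixed-obligation story is easy to caricature in a few lines of Python (entirely my own illustration, with made-up numbers): if a worker without access to credit simply works until the monthly bills are covered, hours move inversely with the wage.

```python
def hours_income_targeter(wage, fixed_obligation, max_hours=16 * 30):
    """Hours an income-targeting worker supplies per month: just enough
    to cover a fixed obligation, capped by available (waking) hours."""
    return min(fixed_obligation / wage, max_hours)

# Made-up numbers: $2,000/month in rent and food, no access to credit.
print(hours_income_targeter(20.0, 2000.0))  # 100 hours at $20/hr
print(hours_income_targeter(10.0, 2000.0))  # 200 hours when the wage halves
```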

I'm not sure if these findings shed any light on the state of the labor market today. But it is interesting to speculate. Conventional supply/demand analysis isn't always the best guide. 

Monday, December 3, 2018

Does the Floor System Discourage Bank Lending?

David Beckworth has a new post up suggesting that the Fed's floor system has discouraged bank lending by making interest-bearing reserves a relatively more attractive investment; see here. I've been hearing this story a lot lately, but I can't say it makes a whole lot of sense to me.

Here's how I think about it. Consider the pre-2008 "corridor" system where the Fed targeted the federal funds rate. The effective federal funds rate (FFR) traded between the upper and lower bounds of the corridor--the upper bound given by the discount rate and the lower bound given by the zero interest-on-reserves (IOR) rate. The Fed achieved its target FFR by managing the supply of reserves through open-market operations involving short-term treasury debt.

Consider a given target interest rate equal to (say) 4%. Since the Fed is financing its asset holdings (USTs yielding 4%) with 0% reserves, it is making a profit on the spread, which it remits to the treasury. Another way of looking at this is that the treasury has saved a 4% interest expense on that part of its debt purchased by the Fed (the treasury would have had to find some additional funds to pay for that interest expense had it not been purchased by the Fed).

Now, suppose that the Fed wants to achieve its target interest rate by paying 4% on reserves. The supply of reserves need not change. The yield on USTs need not change. Bank lending need not change. The only thing that changes is that the Fed now incurs an interest expense of 4% on reserves. The Fed's profit in this case goes to zero and the remittances to the treasury are reduced accordingly. From the treasury's perspective, it may as well have sold the treasuries bought by the Fed to the private sector instead.
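The remittance arithmetic in the last two paragraphs fits in a few lines (a deliberately stylized sketch of my own: the Fed's portfolio is set equal to reserves and every other income item is ignored):

```python
def fed_net_income(asset_yield, ior, reserves):
    """Stylized Fed net income remitted to the Treasury: what the bond
    portfolio earns minus what is paid on the reserves financing it."""
    return (asset_yield - ior) * reserves

RESERVES = 100.0  # stylized balance sheet (say, billions of dollars)

corridor = fed_net_income(0.04, 0.00, RESERVES)  # 0% IOR: spread is remitted
floor    = fed_net_income(0.04, 0.04, RESERVES)  # 4% IOR: spread vanishes
print(corridor, floor)  # 4.0 0.0 -- the Treasury bears the difference
```

Nothing in this calculation involves the quantity of bank lending, which is the point: moving from a corridor to a floor changes who earns the spread, not the incentive to lend at a given policy rate.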

But the question here is why one would think that moving from a corridor system to a floor system with interest-bearing reserves inherently discourages bank lending. It is true that bank lending is discouraged by raising the IOR rate. But is it not discouraged in exactly the same way by an equivalent increase in the FFR? If I am reading the critics correctly (and I may not be), the complaint seems to be more with where the policy rate is set, as opposed to anything inherent in the operating system. If the complaint is that the IOR has been set too high, I'm willing to agree. But I would have had the same complaint had the FFR been set too high under the old corridor system.

Alright, now let's take a look at some of the data presented by David. Here, I replicate his Panel A depicting the evolution of the composition of bank assets.
David wants to direct our attention to the period after 2008 when the Fed flooded the banking system with reserves and started paying a positive IOR rate. The large rise in the orange line since 2008 was due almost entirely to reserves and not other safe assets. This suggests that banks were motivated to hold interest-bearing reserves instead of private-sector interest-bearing assets (loans). He writes:
Something big happened in 2008 that continues to the present that caused banks to allocate more of their portfolios to cash assets and less to loans. While the financial crisis surely was a part of the initial rebalancing, it is hard to attribute what appears to be 10-year structural change to the crisis alone. Instead, it seems more consistent with the critics view that the floor system itself has fundamentally changed bank portfolios allocation.
I think the diagram above is rather misleading, since all it shows is portfolio composition and not the level of bank lending. Here's what the picture looks like when we take the same data and deflate it by GDP instead of by bank assets:

According to this picture, bank lending is close to 50% of GDP, not far off its historical average and considerably higher than in the decade following the S&L crisis (1986-1995). Here's what commercial and industrial loans as a ratio of GDP look like:
It's no surprise that bank lending contracted during and shortly after the crisis. One could even make the argument that paying positive IOR contributed to the contraction. But as I mentioned above, one could have made the same argument had the FFR been kept at 25bp. Again, this criticism has less to do with the operating system than it does with where the policy rate was set. In any case, note that commercial and industrial loans are presently above their pre-crisis levels (as a ratio of GDP). 

To sum up, I do not believe that a floor system inherently discourages bank lending as some critics appear to be arguing. Now that the Fed is paying IOR, reserves are essentially viewed by banks as an alternative form of interest-bearing government debt. New regulations since the crisis have induced banks to load up on safe government assets. But as the following figure shows, this has not come at the expense of private lending.
Banks are lending about as much as they have over the past 50 years (relative to GDP). Bank lending as a ratio of bank assets may be low, but this is because banks are loaded up on safe assets--not because they've cut back on their lending activity.


Thursday, November 8, 2018

Smart Contracts and Asset Tokenization

Book of Smart Contracts 1959
In his 1959 classic Theory of Value, Gerard Debreu takes a deep dive into general (Walrasian) equilibrium theory. (Yes, I know, but please try to stay awake for at least a few more paragraphs.)

He studies a very stark hypothetical scenario where people are imagined to gather at the beginning of time and formulate trading plans for a given vector of market prices (called out by some mysterious auctioneer). Commodities can take the form of different goods, like apples and oranges. But they can also be made time-contingent and state-contingent. An apple delivered tomorrow is a different commodity than an apple delivered today. An orange delivered tomorrow in the event of rain is a different commodity than an orange delivered tomorrow in the event of sunshine. And so on.

For any given vector of relative prices (there is no money), individuals offer to sell claims against the commodities they own in order to acquire claims against the commodities they desire. A market-clearing price vector is one that makes everyone's desired trades consistent with each other. How this equilibrium price vector is achieved is not studied; Debreu is mainly concerned with the less interesting, but still important, question of whether any such price vector might even be expected to exist in the first place.

The theory imagines all relevant trading activity to take place once-and-for-all at the beginning of time. Once trading positions are agreed to, all subsequent good and service flows across individuals over time and under different contingencies are dictated by the terms of promises made at the initial auction. Suppose I had earlier acquired the right for the delivery of oranges next month in the event of rain. Suppose it rains next month. Then the delivery of oranges is made by the orange producer who issued the promissory note now in my possession. In short, contracts look very "smart" in the sense that they can be tailored in any way we want and, moreover, they are assumed to be "self-executing." It's almost as if contractual terms have been spelled out mathematically and enforced by self-executing computer code. Indeed, this is essentially what Debreu assumes.
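For what it's worth, the "self-executing" flavor of Debreu's contracts is easy to mimic in code. Here is a toy sketch of my own, indexing commodities by good, date, and state, with `None` marking a delivery promised in every state:

```python
# A contingent commodity is a (good, date, state) triple; a Debreu-style
# contract is a set of promised deliveries that "self-execute" once the
# date arrives and the state is realized.  (Toy illustration only.)
my_claims = {
    ("oranges", "next_month", "rain"): 12,
    ("oranges", "next_month", "sun"):  0,
    ("apples",  "today",      None):   3,  # None: delivered in every state
}

def execute(contract, date, state):
    """Deliver exactly what was promised for the realized date and state."""
    return {good: qty
            for (good, d, s), qty in contract.items()
            if d == date and s in (state, None) and qty > 0}

print(execute(my_claims, "next_month", "rain"))  # {'oranges': 12}
print(execute(my_claims, "next_month", "sun"))   # {}
```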

The Debreu model (also associated with Ken Arrow and Lionel McKenzie) is often viewed as a sort of benchmark of what one might expect if auction markets are "complete" and work perfectly (no financial market frictions like asymmetric information, limited commitment, limited communication, etc.). There is no role for money as a medium of exchange in such a frictionless world. As such, it should come as no surprise to learn that monetary theory is devoted to studying economies where these frictions play a prominent role. Financial institutions (governance structures in general, including "the government") can to a large extent be understood as collective arrangements that are designed (or have evolved) to mitigate these frictions for the economic benefit of a given set of constituents (either general or special interests, depending on the distribution of political power).

A recurring theme of the "blockchain" movement is how this new record-keeping technology may one day permit us to decentralize all economic activity. No more (government) money. No more banks. No more intermediaries of any sort. This seems to be, at least in part, what "asset tokenization" is about; see, for example, here: How Tokenization Is Putting Real-World Assets on Blockchains. According to this article,
Tokenization is the process of converting rights to an asset into a digital token on a blockchain. 
This sounds fancy, but as the article soon makes clear, it's basically a variation on an old theme:
There are many proposed methods for taking real-world assets and "putting them on a blockchain." The goal is to achieve the security, speed and ease of transfer of Bitcoin, combined with real-world assets. This is a new form of an old concept: "securitization" (turning a set of assets into a security), and in some cases the tokenization is of securitized assets.
Here's how the innovation is supposed to help small investors (source):
Imagine that you have some property — say an apartment. You need cash quickly. The apartment is valued at $150,000 but you just need $10,000. Can you do this quickly without much friction? To my best knowledge, this is next to impossible.
I often use a similar example in my monetary theory classes. How to liquidate a fraction of one's illiquid wealth? One way is to use a bank (say, to open up a credit line secured by your property). But what he means, I think, is that it's basically impossible to issue a personal IOU representing a claim against the property (and ultimately, against the income that is generated by that property). Well, it's possible, but any such security is not likely to be marketable at any reasonable price. The author has stumbled across the concept of an "illiquid" asset. We use institutions called banks to monetize illiquid assets (banks transform illiquid assets into liquid deposit liabilities). But why do we need banks? Why are most assets illiquid? Economic theory answers: because of the frictions associated with asymmetric information and limited commitment (or lack of trust). O.K., but is there any way to get around these frictions without the use of banks? The same article continues:
Enter tokenization. Tokenization is a method that converts rights to an asset into a digital token. Suppose there is a $200,000 apartment. Tokenization can transform this apartment into 200,000 tokens (the number is totally arbitrary, we could have issued 2 million tokens). Thus, each token represents a 0.0005% share of the underlying asset. Finally, we issue the token on some sort of a platform supporting smart contracts, for example on Ethereum, so that the tokens can be freely bought and sold on different exchanges. When you buy one token, you actually buy 0.0005% of the ownership in the asset. Buy 100,000 tokens and you own 50% of the assets. Buy all 200,000 tokens and you are 100% owner of the asset. Obviously, you are not becoming a legal owner of the property. However, because Blockchain is a public ledger that is immutable, it ensures that once you buy tokens, nobody can “erase” your ownership even if it is not registered in a government-run registry. It should be clear now why Blockchain enables this type of services.
Well, no, to be honest it is not at all clear how "blockchain" solves any of the fundamental problems associated with transforming an illiquid asset into a payment instrument.

We have to keep in mind that "blockchain" is nothing more than a consensus-based database management system (where the data is organized and secured in a particular way). Moreover, any useful innovation found in a blockchain-based database management system (recording data as a Merkle tree, for example) could likely be applied in a non-consensus-based database management system. It's one thing to transfer tokens (or information) across accounts in a database. It's quite another thing to exert your own effort to evict the non-compliant tenant of your 0.0005% share of the apartment you own, especially if other owners are not on board.
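The Merkle-tree point above can be made concrete. A Merkle root is a single hash that summarizes an entire set of records, so that any tampering with any record is detectable. Nothing about this data structure requires a consensus-based (or blockchain-based) database; here is a minimal sketch using Python's standard hashlib:

```python
# Minimal Merkle root: hash the leaves, then pairwise-hash each level
# until a single hash remains. The same structure can live in any
# database, blockchain or not.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:  # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [sha256(a + b) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

records = [b"alice owns 100 tokens", b"bob owns 50 tokens"]
root = merkle_root(records)
# Changing any record changes the root, making tampering evident.
assert merkle_root([b"alice owns 999 tokens", b"bob owns 50 tokens"]) != root
```

The root proves the integrity of the records to anyone who holds it, but it says nothing about whether the records correspond to enforceable claims in the real world--which is precisely the eviction problem described above.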

It may be that technology will one day eliminate financial market "frictions" and permit widespread asset tokenization (including our human capital), all of which will be traded using smart contracts on an Internet-based auction. If or when that day comes, the people of that world can refer to Debreu (1959) as an economic model applicable to that future world. 

Monday, September 24, 2018

What is the yield curve forecasting?

It's well-known that in the United States, recessions are often preceded by an inversion of the yield curve. Is there any economic rationale for why this should be the case? 

Most yield curve analysis makes reference to nominal interest rates. Economic theory, however, stresses the relevance of real (inflation-adjusted) interest rates. (The distinction does not matter much for the U.S. in recent decades, as inflation has remained low and stable.) According to standard asset-pricing theory (which, unfortunately for present purposes, abstracts from liquidity premia), the real interest rate measures the rate at which consumption (a broad measure of material living standards) is expected to grow over a given horizon. A high 1-year yield signals that growth is expected to be high over a one-year horizon. A high 10-year yield signals that annual growth is expected, on average, to be high over a ten-year horizon. If the difference between the 10-year and 1-year yields is positive, then growth is expected to accelerate. If the difference is negative--i.e., if the real yield curve inverts--then growth is expected to decelerate.
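The link between real yields and expected consumption growth comes from the standard consumption Euler equation. As a sketch (CRRA preferences with risk-aversion parameter $\sigma$ and discount factor $\beta$, abstracting from risk and liquidity premia), the $n$-year real yield satisfies approximately:

```latex
% Euler equation for an n-period real bond:
%   \beta^n \, \mathbb{E}_t\!\left[(c_t/c_{t+n})^{\sigma}\right](1+r_t^{(n)})^n = 1
\[
  r_t^{(n)} \;\approx\; \rho \;+\; \frac{\sigma}{n}\,
  \mathbb{E}_t\!\left[\ln c_{t+n} - \ln c_t\right],
  \qquad \rho \equiv -\ln\beta ,
\]
\[
  r_t^{(10)} - r_t^{(1)} \;\approx\; \sigma\left(g_t^{(10)} - g_t^{(1)}\right),
\]
```

where $g_t^{(n)}$ denotes expected average annual consumption growth over the next $n$ years. The 10-1 spread is negative exactly when growth is expected to decelerate, which is the sense in which an inverted real yield curve forecasts a slowdown.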

What is the economic intuition for these claims? One way to think about this is in terms of Friedman’s Permanent Income Hypothesis, which states that an individual’s desired consumption expenditure today should depend not only on current income, but the likely path of his/her income over the foreseeable future. The logic of this argument follows from the assumption that people are willing and able to smooth their consumption over time, given their expectations over how their incomes are likely to evolve over time. For example, if people expect their income to be higher in the future, then they will want to consume more today in order to smooth out their consumption. They can attempt to do so by saving less (or borrowing more). If a community is collectively “bullish” in this sense, desired consumer spending should rise in the aggregate, and desired saving should fall, leading to upward pressure on the real interest rate.
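The smoothing logic can be illustrated with a simple two-period calculation (the numbers and function name are purely illustrative): an individual who smooths perfectly consumes the annuity value of lifetime income, so higher expected future income means less desired saving today.

```python
# Two-period consumption smoothing (illustrative): higher expected future
# income raises desired consumption today and lowers desired saving.
def desired_saving(income_today: float, income_future: float, r: float) -> float:
    lifetime_wealth = income_today + income_future / (1 + r)
    # Perfect smoothing with no discounting: equal consumption in both
    # periods implies c * (1 + 1/(1+r)) = lifetime_wealth.
    consumption_today = lifetime_wealth / (1 + 1 / (1 + r))
    return income_today - consumption_today

# Flat income path: no saving needed.
print(desired_saving(100, 100, 0.02))   # ~0
# Bullish expectations (future income 120): borrow against future income.
print(desired_saving(100, 120, 0.02))   # negative
```

If the whole community turns bullish in this way, desired saving falls in the aggregate, and the real interest rate must rise to clear the market--which is the mechanism described in the paragraph above.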

Alternatively, suppose that firms suddenly turn bullish on the likely returns to capital spending. Then the resulting increase in the demand for investment financing should drive real interest rates upward. In this case as well, a higher real interest rate signals the expectation of a higher rate of economic growth. If individual expectations over future prospects are correct more often than they are incorrect, then higher real interest rates today should be correlated with higher future growth rates.


So, in theory at least, an inverted yield curve does not forecast recessions--it forecasts growth slowdowns. Nevertheless, there is a sense in which an inverted (or even flat) yield curve can, in some circumstances, suggest that recession is more likely. Here's the basic idea.

Consider an economy that grows over time, but where growth occurs unevenly (i.e., the economy alternates between high- and low-growth regimes). Imagine, as well, that the economy is occasionally buffeted by negative “shocks”—adverse events that occur at unpredictable moments in time (an oil price spike, a stock market collapse, etc.). It seems clear enough that in such an economy, recessions are more likely to occur when a shock of a given size occurs in a low-growth state as opposed to a high-growth state.
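The regime-plus-shocks argument can be illustrated with a small simulation (all parameters here are hypothetical, chosen only to make the point): the same distribution of shocks produces negative growth far more often when trend growth is low.

```python
# Illustrative simulation: recessions (negative realized growth) caused by
# identical shocks are more likely in a low-growth regime than a high-growth one.
import random

def recession_frequency(trend_growth: float, shock_sd: float = 2.0,
                        n: int = 100_000, seed: int = 0) -> float:
    """Fraction of draws in which trend growth plus a Gaussian shock is negative."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if trend_growth + rng.gauss(0.0, shock_sd) < 0.0)
    return hits / n

high = recession_frequency(trend_growth=3.0)  # high-growth regime (3% trend)
low = recession_frequency(trend_growth=1.0)   # low-growth regime (1% trend)
print(f"P(recession | high growth) = {high:.3f}")
print(f"P(recession | low growth)  = {low:.3f}")
```

Nothing about the shock process differs across regimes; only the buffer of trend growth does, which is why a growth slowdown raises recession risk without "causing" the recession.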

Now suppose, as explained above, that an inverted yield curve forecasts a deceleration in growth. The deceleration entails moving from a higher-growth state to a lower-growth state. Suppose this lower-growth state is near zero. In this state, growth is more likely to turn negative in the event of a shock. In this way, an inverted yield curve does not forecast recession; instead, it forecasts the economic conditions that make recession more likely.

How does this idea match up with the data? Here is a plot of the 10-1 real yield spread in the United States from 1985-2018 (blue line) along with the year-over-year growth rate of real per capita consumption--nondurables plus services (orange line).
Well, it's not perfect, but as is clear from the figure, the real yield curve flattened and inverted prior to each of the past three recessions. Consistent with the theory, note that consumption growth tends to decelerate as the yield curve flattens. This is true even in non-recessionary episodes. In particular, the consumption growth decelerations of 1985-86, 1988-89, and 2006-07 were each associated with, or preceded by, a flattening or inverted yield curve. Each of the three recessions occurred when consumption was growing at a moderate to low pace.

Were the recessions that occurred following yield curve inversion forecastable? The proximate cause of the 1990 recession was the Iraqi invasion of Kuwait and the associated spike in world oil prices. Is it plausible to believe that bond market participants knew of Saddam Hussein's invasion plans well in advance? The proximate causes of the two most recent recessions were asset price collapses (some blame Fed tightening, but theoretically gradual tightening should only cause a slowdown in growth, not a sharp collapse). While many people may have had a sense of impending doom, it seems unlikely that anyone knew years in advance the exact date of collapse. According to the interpretation here, the severity of the Great Recession was in part due to the fact that the economy was already growing slowly at the time of the collapse in real estate prices. While consumption growth did slow prior to the collapse of the "dot com" asset price bubble in 2000, it grew at a moderate rate entering that recession. Accordingly, the recession of 2000-2001 is one of the mildest ever recorded.

Does the recent flattening of the yield curve portend recession? Not necessarily. The flattening of the real yield curve may simply reflect the fact that real consumption growth is not expected to accelerate or decelerate from the present growth rate of about 1% per annum. On the other hand, a 1% growth rate is substantially lower than the historical average of 2% in the United States. Because of this, the risk that a negative shock (of comparable magnitude to past shocks) sends the economy into technical recession is increased. While the exact date at which the shock arrives is itself unpredictable, the likelihood of recession is higher than it would be in a high real interest rate, high-growth economy.