Comment on Guerrieri & Lorenzoni (2017)

This comment is intended as a criticism of the literature's standard practice of not reporting details of how shock processes are discretized, including the choice of hyperparameters in the various methods, and not as a criticism of the paper itself. I simply use this paper as a convenient punching bag because it clearly illustrates the importance of the decision.

Guerrieri & Lorenzoni (2017), Credit Crises, Precautionary Savings, and the Liquidity Trap, contains an AR(1) process for (the log of) efficiency-units-of-labour. To solve the model this shock is discretized onto just 12 states using the Tauchen method (pg. 1438), with a value of 2.1 for Tauchen's q (the hyperparameter of the method). This value of 2.1 is not mentioned anywhere in the paper or technical appendix, despite being unusually small: the Tauchen method is widely used and typical values of Tauchen's q are 3 or 4. With such a low value of q the discretized shock process has min/max values of just -/+ one standard deviation, and this turns out to be key to getting such a large recession in response to the tightening of the credit constraint (with q=3 the implied recession is much smaller). The value q=2.1 most likely comes from using the Tauchen-Hussey method (essentially the Tauchen method, but with q additionally chosen to match the variance) rather than being a deliberate decision of the authors. It seems likely that neither the authors, the editor, nor the referees were aware of its importance: q=2.1 leaves the economy largely riskless, so the paper's exercise of checking that results are robust to changing the risk-aversion parameter is largely pointless.

This comment has not been written to blame the authors or anyone else for following the literature standard. Instead it is written to highlight the too-often-overlooked importance of how quantitative economists discretize shock processes (namely, often with far too few grid points), and to advocate that the Tauchen-Hussey method not be used, both because it hides this very important decision and because better methods exist, with implementations easily available for many programming languages. Easy alternatives include Tauchen, Rouwenhorst, Farmer-Toda, etc. For just how bad Tauchen-Hussey is, see Toda (2020) – Data-based Automatic Discretization of Nonparametric Distributions.
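For readers unfamiliar with the hyperparameter in question: in the standard Tauchen method the grid is placed on [-q, +q] times the unconditional standard deviation of the process, so q directly controls the largest and smallest shock values agents can ever receive. Below is a minimal sketch of the method in Python; the function name and the exact normalization of q are mine (the GL2017 codes are in MATLAB and may normalize q slightly differently), so treat it as illustrative rather than as their implementation.

```python
import numpy as np
from scipy.stats import norm

def tauchen(rho, sigma, n, q):
    """Discretize z' = rho*z + eps, eps ~ N(0, sigma^2), onto n points.
    q sets the grid endpoints at +/- q unconditional standard deviations of z."""
    sigma_z = sigma / np.sqrt(1 - rho**2)        # unconditional std of z
    z = np.linspace(-q * sigma_z, q * sigma_z, n)
    w = z[1] - z[0]                              # grid spacing
    P = np.empty((n, n))
    for i in range(n):
        # interior columns: probability the innovation lands in the bin around z[j]
        P[i, 1:-1] = (norm.cdf((z[1:-1] - rho * z[i] + w / 2) / sigma)
                      - norm.cdf((z[1:-1] - rho * z[i] - w / 2) / sigma))
        # endpoint columns absorb the tails
        P[i, 0] = norm.cdf((z[0] - rho * z[i] + w / 2) / sigma)
        P[i, -1] = 1 - norm.cdf((z[-1] - rho * z[i] - w / 2) / sigma)
    return z, P
```

The point of showing this is simply that q appears nowhere except in setting the grid endpoints: halving q roughly halves the largest income shock the discretized economy can ever deliver.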

The choice of Tauchen's q as 2.1 is not mentioned anywhere in the paper (or appendices) of Guerrieri & Lorenzoni (2017), where it is just stated that the Tauchen method with 12 states is used. Nor is it obvious in the codes provided by Guerrieri & Lorenzoni (2017), where a file (inc_process.mat) containing the result of the Tauchen approximation is provided instead. I discovered the value of 2.1 by reverse engineering: finding the Tauchen q that produces (a close approximation of) the contents of inc_process.mat. Email communication with Lorenzoni (he's a nice guy!) later revealed that the Tauchen hyperparameters were chosen to match both the volatility and the autocorrelation; not exactly Tauchen-Hussey, but very similar.
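For anyone who wants to repeat the exercise, the reverse engineering is just a brute-force search; a sketch is below. It reuses the tauchen function from the sketch above; the variable name read out of inc_process.mat and the values of rho and sigma are placeholders for illustration, not GL2017's actual contents or calibration.

```python
# Sketch: search over candidate q values for the one whose Tauchen grid best
# matches the grid saved in inc_process.mat. 'logtheta_grid' is a placeholder
# variable name; rho and sigma are illustrative stand-ins for the calibration.
import numpy as np
from scipy.io import loadmat

mat = loadmat('inc_process.mat')
z_saved = np.sort(mat['logtheta_grid'].ravel())   # placeholder name

best_q, best_err = None, np.inf
for q in np.arange(1.5, 4.0 + 1e-9, 0.01):
    z, _ = tauchen(rho=0.967, sigma=0.13, n=12, q=q)  # illustrative parameters
    err = np.max(np.abs(z - z_saved))
    if err < best_err:
        best_q, best_err = q, err

print(f"best-fitting q = {best_q:.2f} (max abs grid error {best_err:.2e})")
```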

This omission matters because of the crucial role played by the choice of Tauchen q=2.1. Its role can be seen in the following plot, which shows the stationary distribution of agents in general equilibrium for the Tauchen approximation based on the saved values of GL2017 (inc_process.mat), and for Tauchen q=2.1, q=3, and q=4. It also shows an 'accurate' stationary distribution, which not only uses Tauchen q=4 but also increases the number of points used for the approximation (n_theta) to 51, rather than the 13 used elsewhere (12 plus the unemployment state); this illustrates what the stationary distribution 'should' look like if it were actually the AR(1), rather than the product of the Tauchen approximation.

The first thing to observe is that the GL2017 inc_process appears to get close to the 'accurate' cdf. But it does this via the backdoor: it targets the variance and achieves it by largely eliminating risk (by choosing a low q, and so a small magnitude for the min/max shocks). Notice where the cdfs for q=3 and q=4 lie; these use the same number of grid points (13) as the GL2017 inc_process but include more 'realistically' sized shocks.
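The same point can be made without solving the full model by looking at the stationary distribution of the discretized shock process itself (rather than the general-equilibrium distribution of agents in the figure). A sketch, again reusing the tauchen function and the same illustrative (not GL2017's actual) parameter values, and adding a finer 51-point grid as a stand-in for the 'accurate' case:

```python
# Stationary distribution of the discretized shock for different q, and the
# standard deviation of log income it implies. Reuses tauchen() from above;
# rho and sigma remain illustrative.
import numpy as np

def stationary_dist(P, tol=1e-12, max_iter=100_000):
    """Iterate pi <- pi @ P until convergence to the stationary distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        pi_new = pi @ P
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi

for n, q in [(12, 2.1), (12, 3.0), (12, 4.0), (51, 4.0)]:
    z, P = tauchen(rho=0.967, sigma=0.13, n=n, q=q)
    pi = stationary_dist(P)
    sd = np.sqrt(pi @ z**2 - (pi @ z)**2)
    print(f"n={n:2d}, q={q}: grid endpoints +/-{z[-1]:.3f}, stationary std {sd:.3f}")
```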

The reason this choice matters for the size of the recession resulting from a credit crisis is that with q=2.1 the highest and lowest values of the (log of the) process on efficiency-units-of-labour are only +/- one standard deviation. This makes income volatility much lower than is empirically plausible. With so little income risk the economy has higher general equilibrium interest rates, so during the credit crisis the zero lower bound does not bind for long (it binds for fewer periods than it would in a 'riskier' economy with q=4), and the credit crisis therefore drives a much smaller fall in interest rates and output. With a more normal value of Tauchen q like 3, more in line with the empirical estimates of the (log) income process to which the model is being calibrated, the same sized credit crisis results in a 'permanent' recession with the zero lower bound binding for decades. (If you want to see this, run the Guerrieri & Lorenzoni (2017) example codes with Tauchen q=3.)

It is possible to justify the choice of q=2.1 as the value which, when using a 12-state approximation with the Tauchen method, delivers a Markov process with the same variance as the AR(1) process being approximated (that is, it reproduces var(z) for z_t = rho*z_{t-1} + epsilon_t). But, as described above, this has the effect of removing most of the risk from the economy, and it is both an unusually low value for Tauchen's q and key to the results of the paper. It is so unusual that it took me a few days to realise why I was completely unable to replicate the paper at the first attempt (I had gone with q=3, which tends to be the standard if not otherwise mentioned, and while I tried q=4 the thought of something as low as q=2 simply did not occur to me, based on my experience replicating many models of this kind).
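If this variance-matching interpretation is right, then under GL2017's calibration the stationary variance of the 12-point chain should come closest to the AR(1)'s unconditional variance, sigma^2/(1-rho^2), at q near 2.1. A hedged check of this logic, reusing the functions above and the same illustrative (not GL2017's actual) parameter values:

```python
# Compare the stationary variance of the 12-point chain with the AR(1)'s
# unconditional variance across q. Reuses tauchen() and stationary_dist() from
# the sketches above; rho and sigma are illustrative stand-ins.
rho, sigma = 0.967, 0.13
target_var = sigma**2 / (1 - rho**2)
print(f"AR(1) unconditional variance: {target_var:.4f}")
for q in (2.1, 3.0, 4.0):
    z, P = tauchen(rho=rho, sigma=sigma, n=12, q=q)
    pi = stationary_dist(P)
    chain_var = pi @ z**2 - (pi @ z)**2
    print(f"q={q}: chain variance {chain_var:.4f}")
```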

Again, the point of this comment is not so much to criticise the decision to set Tauchen q equal to 2.1; there are possible justifications, such as the variance argument above, or wanting the calibration to deliver a substantial fraction of the population near the borrowing constraint (empirically assessing the correct value for this fraction is very difficult).

The point of this comment is that such a crucial and unusual decision was taken without mention or discussion in the paper. This likely occurred because the Tauchen-Hussey method with only a few grid points was used. It is likely that neither the editor, referees, nor authors comprehended what was going on with Tauchen q, as they kept a robustness test for a 'high risk aversion (phi=6)' calibration in the paper as an important check. This is rendered pointless by Tauchen q equal to 2.1: what is the point of testing robustness to high risk aversion in an economy from which most of the risk has been removed?

PS. Other than this one issue the paper was actually a pleasure to replicate compared to most. It provides clear and precise descriptions of the experiments being undertaken, reports alternative calibrations, and the code (while only reproducing part of the paper) is clean, well documented, and carefully calibrated. The paper itself is also a good and insightful read into how these models operate 🙂

Link to code related to this comment, in particular finding q=2.1 and creating the figure.
