Behavioural Life-Cycle Models

Impatience, Temptation and Self-Control, Loss Aversion and Ambiguity Aversion can now all be handled in life-cycle models. The Intro to Life-Cycle Models includes examples of each of these.

Impatience, modeled as Quasi-Hyperbolic Discounting, is demonstrated in Life-Cycle Model 36. Temptation and Self-Control, modeled as Gul-Pesendorfer preferences, is demonstrated in Life-Cycle Model 37. Loss Aversion, modeled as Prospect Theory, is demonstrated in Life-Cycle Model 38. Ambiguity Aversion, modeled as maximin over multiple priors, is demonstrated in Life-Cycle Model 39.

Behavioural economics became part of mainstream economics a decade or two ago, but remains only occasionally seen in structural models and Macroeconomics. This is presumably largely because it requires relearning how to solve models for each different setting; hopefully their implementation in VFI Toolkit will help these preferences become more widely used.

All of these behavioural aspects can also be used in OLG models. When using behavioural preferences in an OLG model the only part of the model that changes is the life-cycle model. The behavioural aspects determine the optimal policy functions, but once we have the policy nothing else about solving an OLG model changes.

The Appendix to the Intro to Life-Cycle Models contains explanations of how these preferences work, and how they are implemented in the codes. These behavioural life-cycle models tend to run marginally slower than a standard life-cycle model, but almost always only one to two times slower, so they are easily usable.


If you are unfamiliar with these behavioural life-cycle models, you might find these related lecture slides useful.

VFI Toolkit also handles Epstein-Zin preferences.

Portfolio-Choice with Epstein-Zin preferences

Portfolio-choice models have households choosing both savings, and the division of savings between safe and risky assets. Next period's assets depend on these two decisions, as well as on a stochastic return to the risky asset. Version 2.1 of VFI Toolkit introduces riskyasset specifically for these problems, in which the next period endogenous state, aprime(d,u), depends on decision variable(s), d, and an i.i.d. shock, u, that occurs between this period and next period.
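To make the aprime(d,u) notation concrete, here is a minimal sketch of what such a function looks like in a portfolio-choice setting. The names (savings, riskyshare, r_safe) and the argument order are illustrative assumptions, not the toolkit's required signature; see the Intro to Life-Cycle Models for the actual setup.

```matlab
% Illustrative sketch only: next period's assets as a function of the two
% decisions (savings level and risky share) and the i.i.d. return shock u
% that is realized between this period and next period.
aprimeFn = @(savings, riskyshare, u, r_safe) ...
    savings*((1-riskyshare)*(1+r_safe) + riskyshare*(1+u));
```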

There are four examples in the Intro To Life-Cycle Models, and an implementation of the baseline model of Cocco, Gomes & Maenhout (2005) – Consumption and Portfolio Choice over the Life Cycle. The four life-cycle models build up the various aspects of Portfolio-Choice models. Life-Cycle Model 31 introduces Portfolio-Choice in an otherwise standard life-cycle model, showing how to set up a riskyasset to solve these. Life-Cycle Model 32 adds Epstein-Zin preferences, Life-Cycle Model 33 shows how to do Warm-Glow of Bequests, which are more complicated with Epstein-Zin preferences, and Life-Cycle Model 34 adds endogenous labor.

Epstein-Zin preferences are important in the Portfolio-Choice literature as they allow separating risk aversion (which is important to the division of savings between safe and risky assets) from the elasticity of intertemporal substitution (which is important to how much to save for retirement); both of these are controlled by the same parameter in standard (von Neumann-Morgenstern) preferences. As part of Version 2.1, Epstein-Zin preferences have undergone a (breaking) overhaul. This added two additional aspects to Epstein-Zin: (i) an option to choose between Epstein-Zin preferences in utility-units or the more traditional consumption-units, (ii) handling of survival probabilities (a.k.a. mortality risk) and warm-glow of bequests. For more about Epstein-Zin preferences and how to use them see the Appendix of the Intro to Life-Cycle Models.
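For reference, a common consumption-units formulation of Epstein-Zin preferences (generic notation, not necessarily exactly the toolkit's) is

```latex
V_t = \left[ (1-\beta)\, c_t^{1-1/\psi} + \beta \left( E_t\!\left[ V_{t+1}^{1-\gamma} \right] \right)^{\frac{1-1/\psi}{1-\gamma}} \right]^{\frac{1}{1-1/\psi}}
```

where gamma controls risk aversion and psi the elasticity of intertemporal substitution; standard expected utility with CRRA utility is (up to a monotone transformation) the special case gamma = 1/psi.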

There is code implementing an example of the baseline model of Cocco, Gomes & Maenhout (2005) – Consumption and Portfolio Choice over the Life Cycle. This example uses portfolio-choice, Epstein-Zin preferences, and permanent types all together and shows how these models can be handled.


————————————-

Cocco, Gomes & Maenhout (2005) have permanent shocks in their model. The correct way to handle permanent shocks is a renormalization of the model that makes them disappear from the state-space. Instead, the example codes keep them as a state; this means that computationally the example code is actually solving a much more difficult problem. It also means that you can easily switch to a more modern earnings process (my reading of Gomes (2020) is that he views the recent evidence as clearly against using permanent shocks).
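For the curious, the renormalization works roughly as follows (a sketch assuming CRRA utility; see the original paper for details). With a permanent component P_t of earnings evolving as P_{t+1} = G_{t+1} P_t, homogeneity of the problem means the value function satisfies

```latex
V_t(a_t, P_t) = P_t^{1-\gamma}\, \hat{V}_t\!\left( a_t / P_t \right)
```

so after dividing all variables by P_t the permanent component disappears from the state-space, leaving only the smaller normalized problem in a_t / P_t.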

1000 Downloads!!! The wrong way :P

All downloads are good downloads, but only some downloads count.

If you visit vfitoolkit.com you will be directed to GitHub to download a copy of VFI Toolkit. GitHub does not count downloads, and so the number of times this has been done is unknown. But there is another way — the wrong way 😛 — to download, and that is via Matlab’s website. The number of downloads there just ticked past 1000!!!

If you are one of those thousand, thanks for using VFI Toolkit! And if you are one of the uncounted GitHub downloaders, thanks to you too even though you aren’t counted. I have no idea how many people downloaded via GitHub, but guessing that for every ‘wrong’ download there are between one and nine ‘right’ downloads, it would be something between 2000 and 10,000!

Anyway, just happy to see people find VFI Toolkit useful 😀 Thanks for using! As ever, if you have any questions, or feature requests, come visit the forum, discourse.vfitoolkit.com, or email me.


There is of course no wrong way to download. You can download however you like 🙂

An Introduction to OLG Models

pdf: An Introduction to OLG Models
Codes for all the models can be found at: https://github.com/vfitoolkit/IntroToOLGModels

(or just use this link to download as a zip)

Overlapping-Generations (OLG) models are a workhorse model of Macroeconomics, containing many households and many firms, and are widely used to understand the importance of progressive taxation, demographic aging, and much more. We show how to easily build and solve OLG models over a series of examples, adding a feature each time. We begin with a deterministic OLG model and show how to add pensions, demographics, and government. We then switch to stochastic OLG models, introducing idiosyncratic shocks that help generate more realistic life-cycles and inequality. By the end we are solving OLG models with married-couple households, single male households, single female households, and heterogeneous firms. The intention is that you can go through the models one-by-one, first reading the pdf explanation of a given model and then running the codes and seeing how to implement it.

These OLG models can be easily used; all you need is Matlab and a GPU (preferably with 8+ GB of GPU memory). Households in an OLG model are based on life-cycle models; if you are unfamiliar with them it may be worth first looking at the Introduction to Life-Cycle Models, but it is possible to skip straight to OLGs.

These codes take advantage of VFI Toolkit, all of which leaves you free to just get on with the economics and solving OLG models.

If you have any questions about the material, or spot a typo in the codes, or would just like to ask a clarifying question, etc., please use the forum: discourse.vfitoolkit.com
If you think there is anything important relating to OLG models that is not covered please let me know and I will think about adding another example.


Replication: Webinar Series and an opportunity to get your hands dirty

ReplicationWiki is organising a series of online seminars about replication in Economics. There will be nine webinars that you can select from, taking place from September 8th onwards. You can find the full list here, but I will highlight two in particular: the first is on “Why replication? How is it done? Where to find replication material?” and will be run by The ReplicationWiki on Sept 8th; the other is “Replication in Quantitative Macroeconomics” and will be run by Robert Kirkby, the lead developer of VFI Toolkit, on Sept 29th.

Announcement on INET.YSI is here, full information is here. The webinars will be run as a ‘flipped classroom’, meaning a video will be made available prior and then the actual webinar session will be used for discussion. I want to highlight one aspect which is that we encourage you to undertake your own replication, giving you feedback via mutual peer review and support from experts to submit completed replications to academic journals.

If you are interested specifically in replicating a paper using VFI Toolkit, likely something in heterogeneous agent incomplete markets macroeconomics, please feel free to contact me directly, robertdkirkby@gmail.com. I will take a look at the paper, let you know if VFI Toolkit is capable of solving that model, and give you an idea of what kind of hardware and run-times are likely to be required. If you want to get involved but don’t have a paper in mind, perhaps one of the following might interest you: Ventura (1999) – Flat tax reform: A quantitative exploration, or Attanasio, Low & Sanchez-Marcos (2008) – Explaining changes in female labor supply in a life-cycle model.

OLG Transition Paths: Example based on Conesa & Krueger (1999)

New example based on model of Conesa & Krueger (1999) – Social Security Reform with Heterogeneous Agents. This example illustrates how to solve general equilibrium transition paths in OLG models. The model itself evaluates the economic impacts of a variety of possible reforms to the US Social Security (pension) system. Transitions are done for both a reform that happens immediately, and a reform announced now but which will take place in the future.

This example shows how the VFI Toolkit can be used to easily compute a general equilibrium transition path for OLG models in response to a path for parameters (the ‘TransitionPath_Case1_FHorz()’ command calculates the transition relating to the ‘ParamPath’ in codes). It also demonstrates tools to analyse outputs along a specific transition path, such as ‘EvalFnOnTransPath_AggVars_Case1_FHorz()’, or to calculate the value function over the resulting price path with ‘ValueFnOnTransPath_Case1_FHorz()’ and use this for welfare analysis.

For full details of the model see the original paper. Code for example.

Have also uploaded a replication of Conesa & Krueger (1999).


Main post ends here. The rest is extra background.

If you use transpathoptions.fastOLG=1, the codes will (additionally) parallelize over age j. This is much faster, but requires a large amount of GPU memory (GDDR memory) and so will only work on more powerful GPUs (within a few years this will no longer be relevant). In practice it is often a good idea to use fastOLG=1 to solve a version with smaller asset grids, and then use this as the initial guess for larger grids with fastOLG=0.

The transition path is solved for using ‘shooting algorithms’. Essentially, you guess (a path for) prices, solve the model, generate new prices, and then iterate on this until you get convergence in the prices. The codes explain how this is done in terms of the general equilibrium conditions, and allow for different update weights for the different prices. This is the easiest approach I have been able to come up with.
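As a rough sketch of the idea (illustrative pseudo-code only, not toolkit internals; SolveModelGivenPrices is a hypothetical stand-in for solving the value functions backward along the path, simulating the agent distribution forward, and computing the prices implied by the general equilibrium conditions):

```matlab
PricePath = PricePathGuess;   % initial guess for the path of prices
updateweight = 0.5;           % damping weight used when updating prices
for iter = 1:maxiter
    % Solve the model along the guessed path and get the implied prices
    NewPricePath = SolveModelGivenPrices(PricePath);
    if max(abs(NewPricePath(:) - PricePath(:))) < tol
        break  % prices have converged: this is the transition path
    end
    % Update the guess as a weighted average of old and implied prices
    PricePath = updateweight*NewPricePath + (1-updateweight)*PricePath;
end
```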

The model of Conesa & Krueger (1999) actually allows for a closed form expression for the labor supply in terms of the other state variables (including next period assets), and this could be implemented by placing that expression into the return function (and would be faster). This is not done here so as to make the codes easier to modify for other purposes.

Disclaimer: If you are willing to assume that models are linear in the aggregate you can use these transition paths as a way to solve and simulate models with aggregate shocks, see Boppart, Krusell & Mitman (2018). There are ways to further exploit this linearity assumption to massively speed up solutions, see Auclert, Bardóczy, Rognlie & Straub (2021), but since VFI Toolkit is about global non-linear solution methods there is no plan to implement these approaches. The BKM method in particular is very easy once you have solved the transition path and so you can implement it easily by building on the toolkit results.
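To illustrate why the BKM method is easy once you have a transition path: under the linearity assumption, the deviation of an aggregate from steady state is just the superposition of scaled, time-shifted impulse responses. A sketch (names illustrative; IRF is the deviation path produced by a unit-shock transition, shocks are simulated aggregate shock draws):

```matlab
T = length(IRF);            % length of the impulse response
Xdev = zeros(Tsim,1);       % deviation of aggregate X from steady state
for t = 1:Tsim
    for s = 0:min(t-1, T-1)
        % add the response, s periods on, to the shock that hit at t-s
        Xdev(t) = Xdev(t) + shocks(t-s)*IRF(s+1);
    end
end
% simulated series: X = Xss + Xdev
```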

Version 2 of VFI Toolkit

You can now just refer to parameters by name, and likewise for aggregate variables. All the examples are updated to Version 2 so you can see it in action. Makes larger models much easier to keep track of.

Sick of writing ‘ParamNames’? Good news, version 2 does away with them. You no longer create ReturnFnParamNames at all, it is simply figured out internally. Likewise for FnsToEvaluate.

Even better, FnsToEvaluate is now created as a structure, where the field names are the names of the variables. The input arguments are the decision, endogenous state, and exogenous state variables in order, followed by any parameters. For example, in the Aiyagari (1994) model, we would want aggregate capital K, so we set
FnsToEvaluate.K=@(kprime,k,z) k
If we needed some parameter, say we tax capital (wealth) at rate tau, then the tax revenue would be calculated as
FnsToEvaluate.TaxRevenue=@(kprime,k,z,tau) tau*k
You can see that this makes using parameters easy (VFI Toolkit will look for tau in the parameters structure, called Params in example codes).

But it gets better. Now imagine you want to use K in your general equilibrium condition. Again, let’s consider the Aiyagari (1994) model where the general equilibrium condition is that the interest rate is equal to the marginal product of capital. So we would just set this up as
GeneralEqmEqns.CapitalMarket=@(r,K,alpha,delta) r-(alpha*K^(alpha-1)-delta);
where the inputs can be parameters (alpha, delta), general equilibrium prices (r), and even the aggregates of the FnsToEvaluate (K). Everything is just by name and in any order.

Better still, when you run commands to solve the general equilibrium you get easy-to-understand feedback. At each iteration (while finding the general equilibrium prices) you will be told the current prices (r), aggregate variables (K), and general equilibrium conditions (CapitalMarket). Because everything is by name it is easy to follow what is happening and so see where anything goes wrong.

Of course there are still more improvements. Because FnsToEvaluate contains the names of the variables, the output of all commands using them now uses these names. So for example calculating the aggregate capital in the Aiyagari (1994) model would be done as,
AggVars=EvalFnOnAgentDist_AggVars_Case1(StationaryDist, Policy, FnsToEvaluate, Params, [], n_d, n_a, n_z, d_grid, a_grid, z_grid);
and the output AggVars contains the aggregate values of the ‘FnsToEvaluate’ which are now referred to by name, so for example
AggVars.K.Mean
would be the aggregate value of K. This is also true of the command for things like the median, standard deviation, lorenz curve, etc.; they contain all results by name. The big advantage, other than ease of reading the code, is that you can add or remove FnsToEvaluate without breaking code as nothing depends on the number of functions to evaluate nor on their order.

Lastly, there is one thing that is broken by the update to version 2, and that is transition paths (both infinite and finite horizon transition path commands). This was a deliberate decision, as being able to refer to everything by name in version 2 turns this from difficult into something easy enough to use. There are currently three examples available of how to compute transitions: two infinite-horizon models, one of which extends the Aiyagari (1994) model and the other of which is based on Guerrieri & Lorenzoni (2017), and one OLG transition based on Conesa & Krueger (1999).

All up Version 2 should make it both much easier to write codes, and much easier to read and understand them. In my experience it also makes it much easier to debug and correct them; since everything is by name it is possible at a glance to see from the output where a model is going wrong and therefore how to correct it. The update is especially helpful for models with lots of parameters, functions to evaluate, and general equilibrium conditions, since it becomes easy to keep track of everything and trivial to add or remove aspects.

As always, any questions or comments please use the forum: discourse.vfitoolkit.com/ (or you can email me directly at robertdkirkby@gmail.com)

—————–
Comment: The permanent type ‘PType’ commands have also been updated to only work with Version 2, but since these are not yet used in the example codes it is not as noteworthy.

Comment: General equilibrium in the Aiyagari (1994) model is often described as being about getting K to match K; this is equivalent to the above, where we get r to equal the marginal product of capital. I personally find it much more intuitive to think about the equilibrium in prices, but of course it is mathematically equivalent to consider it in prices or in quantities.

Comment: If you don’t want all that feedback on your general equilibrium, you can just use heteroagentoptions.verbose=0 to turn it off. verbose=0 is part of all the options so you can also set it for transitionpathoptions, etc.

Comment: Currently transition paths require a powerful GPU so may not be ‘available’ to everyone. But given two or three years they should become something just about anyone can easily do.

Disclaimer: If you are willing to assume that models are linear in the aggregate you can use these transition paths as a way to solve and simulate models with aggregate shocks, see Boppart, Krusell & Mitman (2018). There are ways to further exploit this linearity assumption to massively speed up solutions, see Auclert, Bardóczy, Rognlie & Straub (2021), who also provide a Python toolkit for this purpose. I just want to let people know that these much faster methods exist for those willing to assume linearity of the model in the aggregates.

An Introduction to Life-Cycle Models

pdf: An Introduction to Life-Cycle Models.
Codes for all the models can be found at: https://github.com/vfitoolkit/IntroToLifeCycleModels

(or just use this link to download as a zip)

Want to solve life-cycle models easily? Good news! Here are a series of life-cycle models that gradually build up to look at income, hours worked, consumption, and assets over the life-cycle. We will start with a deterministic life-cycle model in which people live for J periods and make decisions on how much to work. Our second model will then add a decision about how much to save (assets). Our third model will just use this model to draw a life-cycle profile. We will then step-by-step make additions to the model to understand how these help us create more realistic life-cycle profiles including idiosyncratic shocks. The intention is that you can go through the models one-by-one, first reading the pdf explanation of a given model and then running the codes and seeing how to implement it.

By the end we will have a life-cycle model in which people make consumption-savings and consumption-leisure choices, which has working age and retirement, in which earnings are hump-shaped over age (peaking around ages 45-55), the variance of both income and consumption increase with age, incomes grow in line with deterministic economic growth of the economy as a whole, people have some assets left when they die, people face the risk of substantial medical costs when old, and where borrowing constraints and precautionary savings play an important role. And we will be able to use these to plot life-cycles profiles, including the mean, variance, and Gini coefficient of a variable conditional on age, and even on 5 year age-bins. We will also be easily able to simulate panel data sets from the model on which we could run regressions. There are also some models illustrating important concepts like the role of borrowing constraints and precautionary savings.

These life-cycle models can be used easily, requiring very little knowledge of numerical methods; all you need is Matlab and a GPU.

These codes take advantage of what will become version 2 of VFI Toolkit. You just refer to parameters by name, and VFI Toolkit handles the rest. You create a life-cycle profile of ‘earnings’, and then just refer to it by name. When parameters depend on age this is handled automatically. All of which leaves you free to just get on with the economics and solving life-cycle models.

If you have any questions about the material, or spot a typo in the codes, or would just like to ask a clarifying question, etc., please use the forum: discourse.vfitoolkit.com
If you think there is anything important relating to life-cycle models that is not covered please let me know and I will think about adding another example.

Video about the Introduction to Life-Cycle models (24mins): vimeo.com/750251629 (slides)

Exotic Preferences: Epstein-Zin & Quasi-Hyperbolic

New example based on model of Imrohoroglu, Imrohoroglu & Joines (1995) – A Life-Cycle Analysis of Social Security. This example solves the general equilibrium for an OLG model with standard expected utility preferences.

VFI Toolkit allows you to switch to ‘exotic’ preferences like Epstein-Zin and Quasi-Hyperbolic discounting with just a few lines of code. Here are examples that solve the exact same model again but this time using Epstein-Zin preferences and Quasi-Hyperbolic discounting preferences respectively. The only differences in the codes are in the first few lines, everything after that is identical demonstrating how easy it is to switch preferences. Two further examples show how to add endogenous labor, and how to use endogenous labor with Epstein-Zin preferences.

These examples demonstrate new features in VFI Toolkit for solving models with exotic preferences. These features are simply implemented as an option in the standard value function commands. Note that from the perspective of simulating agent distributions there is no difference (hence you must set the appropriate vfoptions, but there is no change to simoptions). General equilibrium commands automatically handle the exotic preferences.

Epstein-Zin preferences are useful as they separate ‘intertemporal substitution’ from ‘risk aversion’, both of which are determined by the same parameter in, e.g., a CES utility function with (standard) von Neumann-Morgenstern expected utility preferences. Quasi-Hyperbolic discounting captures ‘impatience’: you take actions today that are in your present interest, but are not in the longer-term interest of your future self. Both are explained in more detail in this pdf detailing the exact models that the Epstein-Zin and Quasi-Hyperbolic discounting examples are solving, as well as an explanation of their purpose. It also includes pseudo-code for the algorithms used by the VFI Toolkit. Note that there are two types of Quasi-Hyperbolic discounting, naive and sophisticated; both are implemented and can be set using vfoptions as in the example above, and naive is used by default if you do not specify.
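In generic notation (not necessarily exactly the formulation in the pdf), quasi-hyperbolic discounting replaces the discount sequence 1, delta, delta^2, ... with 1, beta*delta, beta*delta^2, ... for beta < 1. The naive discounter solves

```latex
\tilde{V}_t(a) = \max_{c}\; u(c) + \beta\delta\, E\!\left[ V_{t+1}(a') \right]
```

where V_{t+1} is the standard exponential-discounting value function (the naive agent wrongly believes future selves will act exponentially). The sophisticated discounter correctly anticipates that future selves are also quasi-hyperbolic:

```latex
V_t(a) = \max_{c}\; u(c) + \beta\delta\, E\!\left[ W_{t+1}(a') \right], \qquad
W_t(a) = u\!\left(c_t^*(a)\right) + \delta\, E\!\left[ W_{t+1}(a'^*) \right]
```

with c_t^* the policy chosen in the first problem; see the aforementioned pdf for the exact formulation the toolkit uses.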

Have also uploaded a replication of Imrohoroglu, Imrohoroglu & Joines (1995).

One paper that uses Quasi-Hyperbolic discounting is Imrohoroglu, Imrohoroglu & Joines (2003). The model is similar to, but different from, their 1995 paper, and I link it mostly to give you a better understanding of how and where Quasi-Hyperbolic discounting might matter in terms of the Economics; beware, there is a typo in their formulation of the sophisticated quasi-hyperbolic discounter’s value function problem.

I have also uploaded some examples based on the infinite-horizon Aiyagari model. Example solving the original Aiyagari model is already available. I have added a version with Epstein-Zin preferences, a version with Quasi-Hyperbolic discounting, a version with endogenous labor, and a version with both endogenous labor and Epstein-Zin preferences. All of these models are explained in the aforementioned pdf.


All of the codes implementing the Aiyagari model and the IIJ1995 model, as well as the variations using Epstein-Zin preferences, Quasi-hyperbolic discounting, Endogenous labor, and Endogenous labor with Epstein-Zin preferences, as well as the pdf explaining them can be found at: https://github.com/vfitoolkit/VFItoolkit-matlab-examples/tree/master/Exotic%20Preferences

Version 1.5 of Toolkit (Warning: Not backwards compatible!)

Three major but basic changes to make VFI Toolkit much easier to use, but which are not backward compatible, and so will break all your existing codes. Hopefully they will make VFI Toolkit much easier to use going forward and be worth the change! The changes are all about making it easier to use VFI Toolkit on different hardware, and reducing the number of options you have to declare.

The first major change is that VFI Toolkit now detects whether you have a GPU (graphics card) and sets defaults accordingly. This means that the same code runs on computers with a GPU, and on computers without a GPU (albeit very slowly). A further advantage of this is that users will need to do way less setting up of technical options and can focus on their Economic models. Have mainly done this so that users can write codes on a laptop with no GPU (with small grids), and then run the same codes (with bigger grids) on a desktop or server with a GPU. Note that this does not mean all codes can run with just CPUs; some remain GPU only (specifically almost anything with a value function in finite horizon or OLG). It remains possible to set specific parallelization options exactly as before, e.g., vfoptions.parallel=2.

Second major change is that an initial guess for the value function is no longer required by ‘ValueFnIter‘ commands. All existing codes must therefore change any calls to these commands to remove the initial guess; change, e.g., ValueFnIter_Case1(V0, n_d,…) to ValueFnIter_Case1(n_d,…). This was done as initial guesses are not widely used, and it makes switching between GPU and CPU implementations much easier. You can still set an initial guess using the new vfoptions.V0 (the default is an initial guess of zeros). This also means that the ‘HeteroAgentStationaryEqm‘ commands no longer require an initial guess for the value function.

Third major change is that for ‘HeteroAgentStationaryEqm‘ commands the inputs and outputs for equilibrium prices are now structures rather than vectors. This makes them easier to use, as you can add/remove conditions/prices without having to worry about causing errors elsewhere in the codes due to reordering. Likewise, for all ‘TransitionPath‘ commands the input and output price (and parameter) paths are now structures.

These changes are reflected in all examples. I will be rolling them out to all replications over the coming weeks.

As this version is anyway breaking backwards compatibility I have taken the opportunity to remove the SSvalues commands. Their functionality was earlier replaced with the EvaluateFnOnAgentDist commands. I had intended to keep them around longer as legacy code to avoid breaking backward compatibility but since version 1.5 breaks most backward compatibility why not just break everything? 🙂

Because this does break backwards compatibility an archived copy of the last v1.4 is available as a zip-file.

As ever, if you find something that does not work, or there is a feature you think would really help improve the VFI Toolkit, please don’t hesitate to either send me an email or post on the forum.
