We introduce a numerical algorithm for solving dynamic economic models that merges stochastic simulation and projection approaches: we use simulation to approximate the ergodic measure of the solution, we cover the support of the constructed ergodic measure with a fixed grid, and we use projection techniques to accurately solve the model on that grid. The construction of the grid is the key novel piece of our analysis: we replace a large cloud of simulated points with a small set of "representative" points. We present three alternative techniques for constructing representative points: a clustering method, an epsilon-distinguishable set method, and a locally-adaptive variant of the epsilon-distinguishable set method. As an illustration, we solve one- and multi-agent neoclassical growth models and a large-scale new Keynesian model with a zero lower bound on nominal interest rates. The proposed solution algorithm is tractable in problems with high dimensionality (hundreds ...
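The clustering step described in this abstract (replacing a simulated cloud with a small set of representative points) can be sketched as follows. This is an illustrative k-means reduction in plain NumPy, not the authors' code; the function name, the toy two-dimensional cloud, and all parameter values are hypothetical.

```python
import numpy as np

def cluster_grid(points, n_centers, n_iter=50, seed=0):
    """Replace a cloud of simulated points with k-means cluster centers
    that serve as a fixed grid covering the ergodic set (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_centers, replace=False)]
    for _ in range(n_iter):
        # assign each simulated point to its nearest center
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # move each center to the mean of its assigned points
        for j in range(n_centers):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

# toy "simulated" ergodic cloud in 2 state variables
rng = np.random.default_rng(1)
cloud = rng.normal(size=(10_000, 2))
grid = cluster_grid(cloud, n_centers=25)
print(grid.shape)  # (25, 2): 25 representative points replace 10,000 simulated ones
```

A projection method would then solve the model's equilibrium conditions only at these 25 grid points instead of the full cloud.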
JEL No. C63, C68. We develop numerically stable stochastic simulation approaches for solving dynamic economic models. We rely on standard simulation procedures to simultaneously compute an ergodic distribution of state variables, its support and the associated decision rules. We differ from existing methods, however, in how we use simulation data to approximate decision rules. Instead of the usual least-squares approximation methods, we examine a variety of alternatives, including the least-squares method using SVD, Tikhonov regularization, least-absolute deviation methods, and the principal components regression method, all of which are numerically stable and can handle ill-conditioned problems. These new methods enable us to compute high-order polynomial approximations without encountering numerical problems. Our approaches
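Two of the stabilizers listed in this abstract, least squares via SVD and Tikhonov regularization, can be illustrated on an ill-conditioned polynomial regression. This is a generic NumPy sketch under assumed toy data (the function names, tolerances, and the synthetic "decision rule" are ours, not the paper's):

```python
import numpy as np

def ls_svd(X, y, rcond=1e-10):
    """Least squares via SVD, truncating near-zero singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)  # drop tiny singular values
    return Vt.T @ (s_inv * (U.T @ y))

def ls_tikhonov(X, y, eta=1e-6):
    """Tikhonov (ridge) regularization: solve (X'X + eta*I) b = X'y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + eta * np.eye(n), X.T @ y)

# ill-conditioned design: high-order ordinary polynomial basis on a narrow interval
k = np.linspace(0.9, 1.1, 200)            # e.g. simulated capital near steady state
X = np.vander(k, 8, increasing=True)      # columns 1, k, ..., k^7 are nearly collinear
y = np.log(k) + 0.01 * np.sin(25 * k)     # a smooth synthetic "decision rule"

b_svd, b_tik = ls_svd(X, y), ls_tikhonov(X, y)
print(np.max(np.abs(X @ b_svd - y)), np.max(np.abs(X @ b_tik - y)))
```

Both routines return small residuals here even though `X.T @ X` is nearly singular, which is the point: ordinary normal equations would be numerically fragile for this basis.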
IVIE working papers offer in advance the results of economic research under way in order to encourage a discussion process before sending them to scientific journals for their final publication.
Abstract. Euler-equation methods for solving nonlinear dynamic models involve parameterizing some policy functions. We argue that in the typical macroeconomic model with valuable leisure, the labor function is particularly convenient to parameterize. This is because under the labor-function parameterization, the intratemporal first-order condition admits a closed-form solution, while under other parameterizations, it must be solved numerically. In the context of a simulation-based parameterized expectations algorithm, we find that using the labor-function parameterization instead of the standard consumption-function parameterization reduces computational time by more than a factor of 10.
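The asymmetry this abstract describes can be seen in a standard example with log utility and Cobb-Douglas technology (our notation and functional forms, chosen for illustration; they are not necessarily the paper's):

```latex
% Preferences u(c,n) = \ln c + B \ln(1-n); wage w = (1-\alpha) A k^{\alpha} n^{-\alpha}.
% Intratemporal first-order condition:
\frac{B\,c}{1-n} \;=\; (1-\alpha)\,A\,k^{\alpha} n^{-\alpha}.
% If labor n is parameterized, consumption follows in closed form:
c \;=\; \frac{(1-\alpha)\,A\,k^{\alpha} n^{-\alpha}\,(1-n)}{B}.
% If consumption c is parameterized instead, n enters both n^{-\alpha} and (1-n),
% so the condition must be solved numerically for n at every grid point.
```

The closed-form branch avoids an inner nonlinear solve at every simulated point, which is where the reported factor-of-10 speedup plausibly comes from.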
referee for useful comments. We thank Ben Malin and Paul Pichler for providing us with the Smolyak-polynomial terms and log-linear solutions, respectively. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research. NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications.
Artificial intelligence (AI) has impressive applications in many fields (speech recognition, computer vision, etc.). This paper demonstrates that AI can also be used to analyze complex and high-dimensional dynamic economic models. We show how to convert three fundamental objects of economic dynamics -- lifetime reward, Bellman equation and Euler equation -- into objective functions suitable for deep learning (DL). We introduce an all-in-one integration technique that makes the stochastic gradient unbiased for the constructed objective functions. We show how to use neural networks to deal with multicollinearity and perform model reduction in Krusell and Smith's (1998) model in which decision functions depend on thousands of state variables -- we literally feed distributions into neural networks! In our examples, the DL method was reliable, accurate and linearly scalable. Our ubiquitous Python code, built with Dolo and Google TensorFlow platforms, is designed to accommodate a variety...
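The bias problem that an all-in-one integration technique addresses can be illustrated numerically. For an objective of the form (E[f])^2, the naive Monte Carlo estimate E[f^2] is biased upward by the variance of f, whereas the product of two independent draws is unbiased for (E[f])^2. The toy residual below is ours, not the paper's model; it is a minimal sketch of the two-independent-draws idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, eps):
    # toy "Euler residual": zero in expectation at the candidate solution x = 0
    return x + eps

x = 0.0
eps1 = rng.normal(size=500_000)
eps2 = rng.normal(size=500_000)  # independent second draw

naive = np.mean(f(x, eps1) ** 2)            # estimates E[f^2] = 1, biased for (E[f])^2 = 0
two_draws = np.mean(f(x, eps1) * f(x, eps2))  # unbiased for (E[f])^2
print(naive, two_draws)  # roughly 1.0 vs roughly 0.0
```

Because the product estimator is unbiased for the squared expectation, its stochastic gradient points toward the true zero of the expected residual rather than toward a variance-minimizing point.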
We introduce a deep learning classification (DLC) method for analyzing equilibrium in discrete-continuous choice dynamic models. As an illustration, we apply the DLC method to solve a version of Krusell and Smith's (1998) heterogeneous-agent model with incomplete markets, borrowing constraint and indivisible labor choice. The novel feature of our analysis is that we construct discontinuous decision functions that tell us when the agent switches from one employment state to another, conditional on the economy's state. We use deep learning not only to characterize the discrete indivisible choice but also to perform model reduction and to deal with multicollinearity. Our TensorFlow-based implementation of DLC is tractable in models with thousands of state variables.
Appendix A: Nonlinear regression model and nonlinear approximation methods. In this section, we extend the approximation approaches that we developed in Sections 4.2 and 4.3 to the case of the nonlinear regression model

  y = Ψ(k, a; b) + ε,   (A.1)

where b ∈ R^(n+1), k ≡ (k_0, ..., k_(T−1)) ∈ R^T, a ≡ (a_0, ..., a_(T−1)) ∈ R^T, and Ψ(k, a; β) ≡ (Ψ(k_0, a_0; β), ..., Ψ(k_(T−1), a_(T−1); β)) ∈ R^T.¹ We first consider a nonlinear LS (NLLS) problem and then formulate the corresponding LAD problem. The NLLS problem is

  min_b ‖y − Ψ(k, a; b)‖₂² = min_b [y − Ψ(k, a; b)]′ [y − Ψ(k, a; b)].   (A.2)

The typical NLLS estimation method linearizes (A.2) around a given initial guess b by using a first-order Taylor expansion of Ψ(k, a; b) and makes a step Δb toward a solution,

  b̂ ≈ b + Δb.   (A.3)

Using the linearity of the differential operator, we can derive an explicit expression for the step Δb. This step is given by a solution to the system of normal equations

  J′J Δb = J′Δy,   (A.4)

where J is the Jacobian of Ψ evaluated at b and Δy ≡ y − Ψ(k, a; b).

¹ The regression model with the exponentiate...
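The iteration implied by steps (A.3)-(A.4) is the Gauss-Newton method. A minimal NumPy sketch, under an assumed toy exponential regression (the model Ψ, the data, and the starting guess are ours, chosen only to exercise the iteration):

```python
import numpy as np

def gauss_newton(psi, jac, b0, y, n_iter=20):
    """Iterate the NLLS normal equations J'J db = J'(y - psi(b)), step b <- b + db."""
    b = np.asarray(b0, dtype=float)
    for _ in range(n_iter):
        r = y - psi(b)                       # residual Delta-y in (A.4)
        J = jac(b)                           # Jacobian of psi at current b
        db = np.linalg.solve(J.T @ J, J.T @ r)
        b = b + db                           # step (A.3)
    return b

# toy model psi(k; b) = b0 * exp(b1 * k) with noiseless data
k = np.linspace(0.0, 1.0, 50)
b_true = np.array([2.0, -1.5])
y = b_true[0] * np.exp(b_true[1] * k)

psi = lambda b: b[0] * np.exp(b[1] * k)
jac = lambda b: np.column_stack([np.exp(b[1] * k),
                                 b[0] * k * np.exp(b[1] * k)])
b_hat = gauss_newton(psi, jac, [1.8, -1.2], y)
print(b_hat)  # close to [2.0, -1.5]
```

In practice the normal equations in the inner solve inherit the ill-conditioning issues discussed in the main text, so the SVD and Tikhonov variants apply to this linearized step as well.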
The neoclassical growth model with quasi-geometric discounting is shown by Krusell and Smith (2000) to have multiple solutions. As a result, value-iterative methods fail to converge. The set of equilibria is however reduced if we restrict our attention to the interior (satisfying the Euler equation) solution. We study the performance of the grid-based and the simulation-based Euler-equation methods in the given context. We find that both methods converge to an interior solution in a wide range of parameter values, not only in the ''test'' model with the closed-form solution but also in more general settings, including those with uncertainty.
The paper proposes a theory of the wage arrears phenomenon in transition economies. We build on the standard one-sector neoclassical growth model. The neoclassical firms in transition make losses and use wage arrears as a survival strategy. At the agents' level, the randomness in the timing and extent of wage payments acts as an idiosyncratic shock to earnings. We calibrate the model to reproduce evidence from the Ukrainian data and assess its quantitative implications. We find that wage arrears imply substantial social costs, such as a consumption loss of 8%-16% and a welfare loss from idiosyncratic uncertainty equivalent to an additional consumption loss of 1%-6%.
We show that the standard two-agent new Keynesian (TANK) model without capital is able to produce monetary policy amplification and no forward guidance puzzle. This finding contrasts with the previous literature, which argued that both results cannot be achieved simultaneously. Our key ingredient is output stabilization in the Taylor rule, and we do not rely on the presence of idiosyncratic uncertainty. For both deterministic and stochastic versions, we derive novel closed-form solutions for all conceivable cases of eigenvalues and analyze the effects of redistribution between participants and non-participants in asset markets. A stochastic version of the model predicts that positive productivity shocks always worsen consumption inequality, which is counterfactual. Finally, we build a version of the model with capital which realistically predicts that consumption inequality can either increase or decrease in response to shocks. Moreover, the forward guidance puzzle is resolved even w...
A central bank's announcements about future monetary policy cause economic agents to react before the announced policy takes place. We evaluate the anticipation effects of such announcements in the context of a realistic dynamic economic model of central banking. In our experiments, we consider temporary and permanent anticipated changes in policy rules, including changes in the inflation target, the natural rate of interest and Taylor-rule coefficients, as well as anticipated switches from inflation targeting to price-level targeting and average inflation targeting. We show that the studied nonrecurrent news shocks about future policies have sizable anticipation effects on the economy. Our methodological contribution is to develop a novel perturbation-based framework for constructing nonstationary solutions to economic models with nonrecurrent news shocks. JEL Classification: C61, C63, C68, E31, E52. Keywords: news shocks, turnpike theorem, time-dependent models, nonstationary models, Unbal...
This paper studies how the EU Eastern enlargement can affect the economies of the old and the new EU members and the non-acceded countries in the context of a multi-country neoclassical growth model where Foreign Direct Investment (FDI) is subject to border costs. We assume that at the moment of the EU enlargement, border costs are eliminated between the old and the new EU member states but remain unchanged between the old EU member states and the non-acceded countries. In a calibrated version of the model, the short-run effects of the EU enlargement prove to be relatively small for all the economies considered. The long-run effects are however significant: in the acceded countries, investors from the old EU member states become permanent owners of about 3/4 of capital, while in the non-acceded countries, they are forced out of business by local producers.
We develop a cluster-grid algorithm (CGA) that solves dynamic economic models on their ergodic sets and is tractable in problems with high dimensionality (hundreds of state variables) on a desktop computer. The key new feature is the use of methods from cluster analysis to approximate an ergodic set. CGA guesses a solution, simulates the model, partitions the simulated data into clusters and uses the centers of the clusters as a grid for solving the model. Thus, CGA avoids the costs of finding the solution in areas of the state space that are never visited in equilibrium. In one example, we use CGA to solve a large-scale new Keynesian model that includes a Taylor rule with a zero lower bound on nominal interest rates. JEL Classification: C61, C63, C68, E31, E52. Keywords: ergodic set; clusters; large-scale economy; new Keynesian model; ZLB; projection method; numerical method; stochastic simulation. *This paper is a substantially revised version of an earlier version that circulated as...
In this paper, we describe how to solve Model A (finite number of countries - complete markets) of the JEDC project by using a simulation-based Parameterized Expectations Algorithm (PEA).
This paper studies the properties of the social utility function defined by the planner's problem of Constantinides (1982). We show one set of restrictions on the optimal planner's policy rule, which is sufficient for constructing the social utility function analytically. For such well-known classes of utility functions as the HARA and the CES, our construction is equivalent to Gorman's (1953) aggregation. However, we can also construct the social utility function analytically in some cases when Gorman's (1953) representative consumer does not exist; in such cases, the social utility function depends on "heterogeneity" parameters. Our results can be used for simplifying the analysis of equilibrium in dynamic heterogeneous-agent models.
We demonstrate that the standard two-agent new Keynesian (TANK) model without capital is able to simultaneously produce monetary policy amplification and reasonable responses to forward guidance if a central bank stabilizes output. For both deterministic and stochastic versions, we derive novel closed-form solutions for all conceivable cases of eigenvalues and analyze the effects of redistribution between participants and non-participants in asset markets. How the redistributed resources are funded plays a key role in equilibrium determinacy, as well as in monetary-policy effects at the individual and aggregate levels. A stochastic version of the model counterfactually predicts that positive productivity shocks always worsen consumption inequality. The latter negative implication can be overcome in a version of the model with capital adjustment costs that realistically predicts countercyclical consumption inequality. JEL Classification: C61, C63, C68, E31, E52
A seminal work of Krusell, Ohanian, Ríos-Rull and Violante (2000) demonstrated that the capital-skill-complementarity mechanism is capable of explaining the U-shaped skill premium pattern over the 1963-1992 period in the US economy. However, the world has experienced an unprecedented technological change since then. In this paper, we ask how the findings of their article change if we consider more recent data. First, we find that over the 1992-2017 period, the skill premium pattern changed dramatically, from U-shaped to monotonically increasing; however, the capital-skill complementarity framework remains remarkably successful in explaining the data. Second, we use this framework to construct a projection, and we conclude that the skill premium will continue to grow in the US economy.
How wrong could policymakers be when using linearized solutions to their macroeconomic models instead of nonlinear global solutions? This question became of much practical interest during the Great Recession and the recent zero lower bound crisis. We assess the importance of nonlinearities in a scaled-down version of the Terms of Trade Economic Model (ToTEM), the main projection and policy analysis model of the Bank of Canada. In a meticulously calibrated “baby” ToTEM model with 21 state variables, we find that local and global solutions have similar qualitative implications in the context of the recent episode of the effective lower bound on nominal interest rates in Canada. We conclude that the Bank of Canada’s analysis would not improve significantly by using global nonlinear methods instead of a simple linearization method augmented to include occasionally binding constraints. However, we also find that even minor modifications in the model's assumptions, such as a variation...
We construct a general-equilibrium version of Krusell, Ohanian, Rios-Rull and Violante's (2000) model with capital-skill complementarity. To account for growth patterns observed in the data, we assume several sources of growth simultaneously, specifically, exogenous growth of skilled and unskilled labor, equipment-specific technological progress, skilled and unskilled labor-augmenting technological progress and Hicks-neutral technological progress. We derive restrictions that make our model consistent with steady-state growth. A calibrated version of our model is able to account for the key growth patterns in the U.S. data, including those for capital equipment and structures, skilled and unskilled labor and output, but it fails to explain the long-run behavior of the skill premium.
This paper modifies the standard one-sector growth model with uninsurable idiosyncratic risk and liquidity constraints to include multiple types of quasi-geometric consumers. For a calibrated version of the model, we show that a modest difference between the quasi-geometric discounting parameters of types can lead to large differences in their marginal propensities to consume. Unlike the standard one-sector growth model, the model with heterogeneous quasi-geometric consumers can generate realistic degrees of wealth inequality.
In this paper, we study household consumption-saving and portfolio choices in a heterogeneous-agent economy with sticky prices and time-varying total factor productivity and idiosyncratic stochastic volatility. Agents can save through liquid bonds and illiquid capital and shares. With rich heterogeneity at the household level, we are able to quantify the impact of uncertainty across the income and wealth distribution. Our results help us identify who wins and who loses during periods of heightened individual and aggregate uncertainty. To study the importance of heterogeneity in understanding the transmission of economic shocks, we use a deep learning algorithm. Our method preserves non-linearities, which is essential for understanding the pricing decisions for illiquid assets. JEL Classification: N/A. Keywords: machine learning, deep learning, neural network, HANK, heterogeneous agents. Yuriy Gorodnichenko ygorodni@econ.berkeley.edu UC Berkeley and CEPR Lilia Maliar lmaliar@...
This paper examines the empirical relevance of an intertemporal model of consumption with dynamically inconsistent decision makers. The model has testable implications concerning the relation between the consumers' degrees of short-run patience (self-control) and their consumption-saving decisions. Using Spanish panel data on household expenditure, we estimate the Euler equation derived from the model. We find evidence in favor of consumers' preferences being time inconsistent. Moreover, our results indicate that there are significant differences in the degrees of short-run patience across households.
During the recent economic crisis, when nominal interest rates were at their effective lower bounds, central banks used forward guidance announcements about future policy rates to conduct their monetary policy. Many policymakers believe that forward guidance will remain in use after the end of the crisis; however, there is uncertainty about its effectiveness. In this paper, we study the impact of forward guidance in a stylized new Keynesian economy away from the effective lower bound on nominal interest rates. Using closed-form solutions, we show that the impact of forward guidance on the economy depends critically on the specific monetary policy rule, ranging from nonexistent to immediate and unrealistically large, the so-called forward guidance puzzle. We show that the size of the smallest root (or eigenvalue) captures model dynamics better than the underlying parameters. We argue that the puzzle occurs under very special, empirically implausible and socially sub-optimal monetary policy rules, whereas empirically relevant Taylor rules lead to sensible implications.
The thesis studies quantitative implications of real business cycle models with heterogeneous agents. The questions I ask are: First, can the studied models not only replicate the aggregate time series facts but also the distributions of individual quantities such as consumption, hours worked, income and wealth observed in the micro data? Second, does incorporating heterogeneity enhance the aggregate performance of the representative consumer models? To simplify the characterization of the equilibria in the models, I use results from aggregation theory. The thesis is composed of three chapters, each of which can be read independently. All chapters are joint with Serguei Maliar. The first chapter analyzes the predictions of a heterogeneous agents version of the model by Kydland and Prescott (1982). I calibrate and solve the model with eight heterogeneous agents to match the aggregate quantities and the distributions of productivity and wealth in the U.S. economy. I find that the model can generate distributions of consumption and working hours which are consistent with patterns observed in the data. I show that incorporating the heterogeneity helps to improve the model's predictions with respect to labor markets. In particular, unlike a similar representative agent setup, the heterogeneous model can account for the Dunlop-Tarshis observation of weak correlation between productivity and hours worked. The second chapter constructs a heterogeneous agents version of the indivisible labor model by Hansen (1985) with search and home production. I calibrate and solve the model with five agents to replicate aggregate quantities and differences in productivity across agents in the U.S. economy. The model does reasonably well at reproducing the cyclical behavior of the macroeconomic aggregates.
At the individual level, it can account for the stylized facts that more productive individuals (i) enjoy a higher employment rate, (ii) have a lower volatility of employment, and (iii) spend less time working at home. It is important to emphasize that none of the heterogeneous models with home production existing in the literature has been able to explain fact (ii). The third chapter studies the distributive dynamics of wealth and income in a heterogeneous agents version of Kydland and Prescott's (1982) model. I show that under the assumptions of complete markets and Cobb-Douglas preferences, the evolution of wealth and income distributions in the model can be described in terms of aggregate variables and time-invariant agent-specific parameters. This allows me to characterize explicitly the behavior of such inequality measures as the coefficient of variation of wealth and income over the business cycle. I test the model's implications by using time series on the U.K. economy. The predictions are in agreement with the data.