32. Knowing the Forecasts of Others¶
In addition to what’s in Anaconda, this lecture will need the following libraries:
!pip install --upgrade quantecon
!conda install -y -c plotly plotly plotly-orca
32.1. Introduction¶
Robert E. Lucas, Jr. [REL75], Kenneth Kasa [Kas00], and Robert Townsend [Tow83] showed that giving decision makers incentives to infer persistent hidden state variables from equilibrium prices and quantities can elongate and amplify impulse responses to aggregate shocks in business cycle models.
Townsend [Tow83] noted that such incentives can naturally induce decision makers to want to forecast the forecast of others.
This theme has been pursued and extended in analyses in which decision makers’ imperfect information forces them into pursuing an infinite recursion of forming beliefs about the beliefs of others (e.g., [AMS02]).
Lucas [REL75] sidestepped having decision makers forecast the forecasts of other decision makers by assuming that they simply pool their information before forecasting.
A pooling equilibrium like Lucas’s plays a prominent role in this lecture.
Because he didn’t assume such pooling, [Tow83] confronted the problem of forecasting the forecasts of others.
To formulate the problem recursively, Townsend had to define a decision maker’s state vector.
Townsend concluded that his original model required an intractable infinite dimensional state space.
Therefore, he constructed a more manageable approximating model in which the hidden Markov component of the demand shock is revealed to all firms after a fixed and finite number of periods.
In this lecture, as yet another instance of the theme that finding the state is an art, we show how to formulate Townsend’s original model in terms of a low-dimensional state space.
By doing so, we show that Townsend’s model shares equilibrium prices and quantities with those that prevail in a pooling equilibrium.
That finding emerged from a line of research about Townsend’s model that culminated in [PS05] that built on [PCL86].
However, rather than deploying the [PCL86] machinery here, we shall rely instead on a sneaky guess-and-verify tactic.
We compute a pooling equilibrium and represent it as an instance of a linear state-space system provided by the Python class quantecon.LinearStateSpace.
Leaving the state-transition equation for the pooling equilibrium unaltered, we alter the observation vector for a firm to what it is in Townsend’s original model. So rather than directly observing the signal received by firms in the other industry, a firm sees the equilibrium price of the good produced by the other industry.
We compute a population linear least squares regression of the noisy signal that firms in the other industry receive in a pooling equilibrium on time \(t\) information that a firm receives in Townsend’s original model. The \(R^2\) in this regression equals \(1\). That verifies that a firm’s information set in Townsend’s original model equals its information set in a pooling equilibrium. Therefore, equilibrium prices and quantities in Townsend’s original model equal those in a pooling equilibrium.
32.1.1. A Sequence of Models¶
We proceed by describing a sequence of models of two industries that are linked in a single way: shocks to the demand curves for their products have a common component.
The models are simplified versions of Townsend’s [Tow83].
Townsend’s is a model of a rational expectations equilibrium in which firms confront the problem of forecasting the forecasts of others.
In Townsend’s model, firms condition their forecasts on observed endogenous variables whose equilibrium laws of motion are determined by their own forecasting functions.
We start with model components that we shall progressively assemble in ways that can help us to appreciate the structure of a pooling equilibrium that ultimately concerns us.
While keeping other aspects of the model the same, we shall study consequences of alternative assumptions about what decision makers observe.
Technically, this lecture deploys concepts and tools that appear in A First Look at the Kalman Filter and Rational Expectations Equilibrium.
32.2. The Setting¶
We cast all variables in terms of deviations from means.
Therefore, we omit constants from inverse demand curves and other functions.
Firms in each of two industries \(i=1,2\) use a single factor of production, capital \(k_t^i\), to produce output of a single good, \(y_t^i\).
Firms bear quadratic costs of adjusting their capital stocks.
A representative firm in industry \(i\) has production function \(y_t^i = f k_t^i\), \(f >0\), acts as a price taker with respect to output price \(P_t^i\), and maximizes

\[
\sum_{t=0}^\infty \beta^t \left\{ P_t^i y_t^i - .5 h (k_{t+1}^i - k_t^i)^2 \right\}, \qquad h > 0, \; \beta \in (0,1),
\]

where expectations are taken with respect to the firm’s information set in the stochastic versions of the model.
Demand in industry \(i\) is described by the inverse demand curve

\[
P_t^i = -b Y_t^i + \theta_t + \epsilon_t^i, \qquad b > 0 \tag{32.2}
\]

where \(P_t^i\) is the price of good \(i\) at \(t\), \(Y_t^i = f K_t^i\) is output in market \(i\), \(\theta_t\) is a persistent component of a demand shock that is common across the two industries, and \(\epsilon_t^i\) is an industry-specific component of the demand shock that is i.i.d. and whose time \(t\) marginal distribution is \({\mathcal N}(0, \sigma_{\epsilon}^2)\).
We assume that \(\theta_t\) is governed by

\[
\theta_{t+1} = \rho \theta_t + v_{t} \tag{32.3}
\]
where \(\{v_{t}\}\) is an i.i.d. sequence of Gaussian shocks each with mean zero and variance \(\sigma_v^2\).
To simplify notation, we’ll study a special case of the model by setting \(h=f=1\).
The presence of costs of adjusting their capital stocks imparts to firms an incentive to forecast the price of the good that they sell.
Throughout, we use the rational expectations equilibrium concept presented in this lecture Rational Expectations Equilibrium.
We let capital letters denote market wide objects and lower case letters denote objects chosen by a representative firm.
In each industry, a competitive equilibrium prevails.
To rationalize the big \(K\), little \(k\) connection, we can think of there being a continuum of each type of firm, each indexed by \(\omega \in [0,1]\) with \(K^i = \int_0^1 k^i(\omega) d \omega\).
In equilibrium, \(k_t^i = K_t^i\), but as usual we must distinguish between \(k_t^i\) and \(K_t^i\) when we pose the firm’s optimization problem.
32.3. Tactics¶
We shall compute equilibrium laws of motion for capital in industry \(i\) under a sequence of assumptions about what a representative firm observes.
Successive members of this sequence make a representative firm’s information more and more obscure.
We begin with the most information, then gradually withdraw information in a way that approaches and eventually reaches the information structure that we are ultimately interested in.
Thus, we shall compute equilibria under the following alternative information structures:
Perfect foresight: future values of \(\theta_t, \epsilon_{t}^i\) are observed in industry \(i\).
Observed but stochastic \(\theta_t\): \(\{\theta_t, \epsilon_{t}^i\}\) are realizations from a stochastic process; current and past values of each are observed at time \(t\) but future values are not.
One noise-ridden observation on \(\theta_t\): Values of \(\{\theta_t, \epsilon_{t}^i\}\) separately are never observed. However, at time \(t\), a history \(w^t\) of scalar noise-ridden observations on \(\theta_t\) is observed.
Two noise-ridden observations on \(\theta_t\): Values of \(\{\theta_t, \epsilon_{t}^i\}\) separately are never observed. However, at time \(t\), a history \(w^t\) of two noise-ridden observations on \(\theta_t\) is observed.
Successive computations build one on another.
We proceed by first finding an equilibrium under perfect foresight.
To compute an equilibrium with \(\theta_t\) observed, we use a certainty equivalence principle to justify modifying the perfect foresight equilibrium by replacing future values of \(\theta_s, \epsilon_{s}^i, s \geq t\) with mathematical expectations conditioned on \(\theta_t\).
This provides the equilibrium when \(\theta_t\) is observed at \(t\) but future \(\theta_{t+j}\) and \(\epsilon_{t+j}^i\) are not observed.
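The conditional expectations that this certainty equivalence step uses follow from the AR(1) law of motion for \(\theta_t\): \(E[\theta_{t+j} \mid \theta_t] = \rho^j \theta_t\). A minimal Monte Carlo check of that formula, using the illustrative parameter values that appear in the code later in this lecture:

```python
import numpy as np

# Under θ_{t+1} = ρ θ_t + v_{t+1}, certainty equivalence replaces
# θ_{t+j} with E[θ_{t+j} | θ_t] = ρ^j θ_t.  Parameters here are
# illustrative, matching the values used later in the lecture.
rng = np.random.default_rng(0)
ρ, σ_v, θ_0, j = 0.8, 0.5, 1.0, 5
n = 1_000_000

θ = np.full(n, θ_0)
for _ in range(j):
    θ = ρ * θ + σ_v * rng.normal(size=n)

# Sample mean of θ_{t+j} should be close to ρ^j θ_0
print(abs(θ.mean() - ρ**j * θ_0) < 0.01)
```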
To find an equilibrium when only a history \(w^t\) of single noise-ridden observations on \(\theta_t\) is observed, we again apply a certainty equivalence principle and replace future values of the random variables \(\theta_s, \epsilon_{s}^i, s \geq t\) with their mathematical expectations conditioned on \(w^t\).
To find an equilibrium when only a history \(w^t\) of two noisy signals on \(\theta_t\) is observed, we replace future values of the random variables \(\theta_s, \epsilon_{s}^i, s \geq t\) with their mathematical expectations conditioned on history \(w^t\).
We call the equilibrium with two noise-ridden observations on \(\theta_t\) a pooling equilibrium.
It corresponds to an arrangement in which at the beginning of each period firms in industries \(1\) and \(2\) somehow get together and share information about current values of their noisy signals on \(\theta\).
We want ultimately to compare outcomes in a pooling equilibrium with an equilibrium under the following alternative information structure for a firm in industry \(i\) that interested [Tow83]:
Firm \(i\)’s noise-ridden signal on \(\theta_t\) and the price in industry \(-i\): a firm in industry \(i\) observes a history \(w^t\) of one noise-ridden signal on \(\theta_t\) and a history of industry \(-i\)’s price.
With this information structure, the representative firm \(i\) sees the price as well as the aggregate state variable \(Y_t^i\) in its own industry.
That allows it to infer the total demand shock \(\theta_t + \epsilon_{t}^i\).
However, at time \(t\), the firm sees only \(P_t^{-i}\) and does not see \(Y_t^{-i}\), so that firm \(i\) does not directly observe \(\theta_t + \epsilon_t^{-i}\).
Nevertheless, it will turn out that equilibrium prices and quantities in this equilibrium equal their counterparts in a pooling equilibrium because firms in industry \(i\) are able to infer the noisy signal about the demand shock received by firms in industry \(-i\).
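The inference in a firm’s own industry is simple arithmetic: with \(P_t^i = -b K_t^i + (\theta_t + \epsilon_t^i)\) and \(K_t^i\) known, the total demand shock can be backed out directly. A tiny sketch with made-up numbers:

```python
# A firm that observes its own industry's price P and capital K can
# recover the total demand shock θ + ε from the inverse demand curve
# P = -b*K + (θ + ε).  All numbers here are made up for illustration.
b = 1.5
θ_plus_ε = 0.7   # hypothetical total demand shock
K = 2.0          # hypothetical industry capital (= output when f = 1)
P = -b * K + θ_plus_ε

recovered = P + b * K
print(recovered)
```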
We shall eventually verify this assertion by using a guess and verify tactic. 1
32.4. Equilibrium conditions¶
It is convenient to solve the firm’s problem without uncertainty by forming the Lagrangian:
where \(\{\phi_t^i\}\) is a sequence of Lagrange multipliers on the transition law for \(k_{t+1}^i\). First-order conditions for the nonstochastic problem are
Substituting the demand function (32.2) for \(P_t^i\), imposing the condition that the representative firm is representative (\(k_t^i = K_t^i\)), and using the definition below of \(g_t^i\), the Euler equation (32.4), lagged by one period, can be expressed as \(-b k_t^i + \theta_t + \epsilon_t^i + (k_{t+1}^i - k_t^i) - g_t^i = 0\) or
where we define \(g_t^i\) by

\[
g_t^i = \beta^{-1} (k_t^i - k_{t-1}^i)
\]
We can write Euler equation (32.4) as:
In addition, we have the law of motion for \(\theta_t\), (32.3), and the demand equation (32.2).
In summary, with perfect foresight, equilibrium conditions for industry \(i\) include the following system of difference equations:
Without perfect foresight, the same system prevails except that the following equation replaces the third equation of (32.8):
where \(x_{t+1,t}\) denotes the mathematical expectation of \(x_{t+1}\) conditional on information at time \(t\).
32.4.1. Equilibrium under perfect foresight¶
Our first step is to compute the equilibrium law of motion for \(k_t^i\) under perfect foresight.
Let \(L\) be the lag operator. 2
Equations (32.7) and (32.5) imply the second-order difference equation in \(k_t^i\): 3
Factor the polynomial in \(L\) on the left side as:
where \(|\tilde \lambda| < 1\) is the smaller root and \(\lambda\) is the larger root of \((\lambda-1)(\lambda-1/\beta)=b\lambda\).
Therefore, (32.9) can be expressed as
Solving the stable root backwards and the unstable root forwards gives
Recall that we have already set \(k^i = K^i\) at the appropriate point in the argument (i.e., after having derived the first-order necessary conditions for a representative firm in industry \(i\)).
Thus, under perfect foresight the equilibrium capital stock in industry \(i\) satisfies
Next, we shall investigate consequences of replacing future values of \((\epsilon_{t+j}^i + \theta_{t+j})\) in equation (32.10) with alternative forecasting schemes.
In particular, we shall compute equilibrium laws of motion for capital under alternative assumptions about the information available to decision makers in market \(i\).
32.5. Equilibrium with \(\theta_t\) stochastic but observed at \(t\)¶
If future \(\theta\)’s are unknown at \(t\), it is appropriate to replace all random variables on the right side of (32.10) with their conditional expectations based on the information available to decision makers in market \(i\).
For now, we assume that this information set is \(I_t^p = \begin{bmatrix} \theta^t & \epsilon^{it} \end{bmatrix}\), where \(z^t\) represents the infinite history of variable \(z_s\) up to time \(t\).
Later we shall give firms less information.
To obtain an appropriate counterpart to (32.10) under our current assumption about information, we apply a certainty equivalence principle.
In particular, it is appropriate to take (32.10) and replace each term \(( \epsilon_{t+j}^i+ \theta_{t+j} )\) on the right side with \(E[ (\epsilon_{t+j}^i+ \theta_{t+j}) \vert \theta^t ]\).
After using (32.3) and the i.i.d. assumption about \(\{\epsilon_t^i\}\), this gives
or
where \(\lambda \equiv (\beta \tilde \lambda)^{-1}\).
For future purposes, it is useful to represent the equilibrium \(\{k_t^i\}_t\) process recursively as
32.5.1. Filtering¶
32.5.1.1. One noisy signal¶
We get closer to a model that we ultimately want to study by now assuming that firms in market \(i\) do not observe \(\theta_t\), but instead observe a history \(w^t\) of noisy signals at time \(t\).
In particular, assume that
where \(e_t\) and \(v_t\) are mutually independent i.i.d. Gaussian shock processes with means of zero and variances \(\sigma_e^2\) and \(\sigma_v^2\), respectively.
Define
where \(w^t = [w_t, w_{t-1}, \ldots, w_0]\) denotes the history of the \(w_s\) process up to and including \(t\).
Associated with the statespace representation (32.13) is the innovations representation
where \(a_t \equiv w_t - E(w_t \mid w^{t-1})\) is the innovations process in \(w_t\) and the Kalman gain \(k\) is
and where \(p\) satisfies the Riccati equation
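In scalar form (consistent with the verification code later in this lecture), the Riccati equation is \(p = \sigma_v^2 + \rho^2 p \sigma_e^2/(p + \sigma_e^2)\) and the gain is \(k = \rho p/(p + \sigma_e^2)\). A minimal fixed-point iteration, using the lecture’s illustrative parameter values:

```python
# Iterate the scalar Riccati map p ↦ σ_v² + ρ² p σ_e² / (p + σ_e²)
# to its fixed point and form the associated Kalman gain.
# Parameter values are the illustrative ones used in this lecture.
ρ, σ_v, σ_e = 0.8, 0.5, 0.6

p = 0.0
for _ in range(500):
    p = σ_v**2 + ρ**2 * p * σ_e**2 / (p + σ_e**2)

gain = ρ * p / (p + σ_e**2)
print(p, gain)
```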
32.5.1.2. \(\theta\)-reconstruction error¶
Define the state reconstruction error \(\tilde \theta_t\) by
Then \(p = E \tilde \theta_t^2\).
Equations (32.13) and (32.14) imply
Now notice that we can express \(\hat \theta_{t+1}\) as
where the first term in braces equals \(\theta_{t+1}\) and the second term in braces equals \(\tilde \theta_{t+1}\).
We can express (32.11) as
An application of a certainty equivalence principle asserts that when only \(w^t\) is observed, the appropriate solution is found by replacing the information set \(\theta^t\) with \(w^t\) in (32.19).
Making this substitution and using (32.18) leads to
Simplifying equation (32.18), we also have
Equations (32.20), (32.21) describe the equilibrium when \(w^t\) is observed.
Relative to (32.11), the equilibrium acquires a new state variable, namely, the \(\theta\)–reconstruction error, \(\tilde \theta_t\).
For future purposes, by using (32.15), it is useful to write (32.20) as
In summary, when decision makers in market \(i\) observe a noisy signal \(w_t\) on \(\theta_t\) at \(t\), we can represent an equilibrium law of motion for \(k_t^i\) as
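For concreteness, here is a minimal simulation of this law of motion. The capital recursion \(k_{t+1}^i = \tilde \lambda k_t^i + (\lambda - \rho)^{-1} \hat \theta_{t+1}\) is our reading of the recursive representation (it is consistent with the state-space matrices encoded below), and the parameter values are the illustrative ones used later:

```python
import numpy as np

# A minimal simulation of the one-noisy-signal equilibrium:
# a steady-state Kalman filter for θ̂_t paired with the capital
# recursion k_{t+1} = λ̃ k_t + θ̂_{t+1} / (λ - ρ).  This pairing is
# our reading of the recursive representation; parameters are the
# illustrative values used later in the lecture.
rng = np.random.default_rng(0)
β, ρ, b, σ_v, σ_e = 0.9, 0.8, 1.5, 0.5, 0.6

λ_tilde, λ = np.sort(np.roots([1, -(1 + b + 1 / β), 1 / β]).real)

# Steady-state Riccati fixed point and Kalman gain
p = 0.0
for _ in range(500):
    p = σ_v**2 + ρ**2 * p * σ_e**2 / (p + σ_e**2)
κ = ρ * p / (p + σ_e**2)

θ = θ_hat = k = 0.0
path = []
for t in range(10_000):
    w = θ + σ_e * rng.normal()            # noisy signal on θ_t
    θ_hat = ρ * θ_hat + κ * (w - θ_hat)   # filter update
    k = λ_tilde * k + θ_hat / (λ - ρ)     # capital recursion
    θ = ρ * θ + σ_v * rng.normal()
    path.append(k)

print(np.std(path))
```

Because \(|\tilde \lambda| < 1\) and \(\theta_t\) is stationary, the simulated capital path is stationary too.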
32.5.2. Two noisy signals¶
We now construct a pooling equilibrium by assuming that a firm in industry \(i\) receives a vector \(w_t\) of two noisy signals on \(\theta_t\):
To justify calling what we are constructing a pooling equilibrium, we can assume that
so that a firm in industry \(i\) observes the noisy signals on \(\theta_t\) presented to firms in both industries \(i\) and \(-i\).
The appropriate innovations representation becomes
where \(a_t \equiv w_t - E [w_t \mid w^{t-1}]\) is a \((2 \times 1)\) vector of innovations in \(w_t\) and \(k\) is now a \((1 \times 2)\) vector of Kalman gains.
Formulas for the Kalman filter imply that
where \(p = E \tilde \theta_t \tilde \theta_t^T\) now satisfies the Riccati equation
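Before turning to code, it is worth previewing numerically that pooling a second signal lowers the steady-state reconstruction error variance. A sketch iterating the two scalar Riccati maps (their forms match the verification code below; parameters are the lecture’s illustrative values):

```python
# Iterate the one-signal and two-signal scalar Riccati maps to their
# fixed points and compare reconstruction error variances.
# Parameter values are the illustrative ones used in this lecture.
ρ, σ_v, σ_e = 0.8, 0.5, 0.6

p1 = p2 = 0.0
for _ in range(500):
    p1 = σ_v**2 + ρ**2 * p1 * σ_e**2 / (p1 + σ_e**2)       # one signal
    p2 = σ_v**2 + ρ**2 * p2 * σ_e**2 / (2 * p2 + σ_e**2)   # two signals

print(p2 < p1)
```

The second map is pointwise smaller for \(p > 0\), so its fixed point is smaller: more signals mean a sharper estimate of the hidden state.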
Thus, when a representative firm in industry \(i\) observes two noisy signals on \(\theta_t\), we can express the equilibrium law of motion for capital recursively as
Below, by using a guessandverify tactic, we shall show that outcomes in this pooling equilibrium equal those in an equilibrium under the alternative information structure that interested [Tow83]. 4
32.6. Guessandverify tactic¶
As a preliminary step we shall take our recursive representation (32.23) of an equilibrium in industry \(i\) with one noisy signal on \(\theta_t\) and perform the following steps:
Compute \(\lambda\) and \(\tilde{\lambda}\) by posing a root-finding problem and then solving it using numpy.roots
Compute \(p\) by forming the appropriate discrete Riccati equation and then solving it using quantecon.solve_discrete_riccati
Add a measurement equation for \(P_t^i = -b k_t^i + \theta_t + e_t\), \(\theta_t + e_t\), and \(e_t\) to system (32.23). Write the resulting system in state-space form and encode it using quantecon.LinearStateSpace
Use methods of the quantecon.LinearStateSpace class to compute impulse response functions of \(k_t^i\) with respect to shocks \(v_t, e_t\)
After analyzing the one-noisy-signal structure in this way, by making appropriate modifications we shall analyze the two-noisy-signal structure.
We proceed to analyze first the one-noisy-signal structure and then the two-noisy-signal structure.
32.7. Equilibrium with one signal on \(\theta_t\)¶
32.7.1. Step 1: Solve for \(\tilde{\lambda}\) and \(\lambda\)¶
Cast \(\left(\lambda-1\right)\left(\lambda-\frac{1}{\beta}\right)=b\lambda\) as \(p\left(\lambda\right)=0\) where \(p\) is a polynomial function of \(\lambda\).
Use numpy.roots to solve for the roots of \(p\).
Verify \(\lambda \approx \frac{1}{\beta\tilde{\lambda}}\).
Note that \(p\left(\lambda\right)=\lambda^{2}-\left(1+b+\frac{1}{\beta}\right)\lambda+\frac{1}{\beta}\).
32.7.2. Step 2: Solve for \(p\)¶
Cast \(p=\sigma_{v}^{2}+\frac{p\rho^{2}\sigma_{e}^{2}}{p+\sigma_{e}^{2}}\) as a discrete matrix Riccati equation.
Use quantecon.solve_discrete_riccati to solve for \(p\).
Verify \(p \approx\sigma_{v}^{2}+\frac{p\rho^{2}\sigma_{e}^{2}}{p+\sigma_{e}^{2}}\).
Note that:
32.7.3. Step 3: Represent the system using quantecon.LinearStateSpace¶
We use the following representation for constructing the quantecon.LinearStateSpace instance.
This representation includes extraneous variables such as \(P_{t}\) in the state vector.
We formulate things in this way because it allows us easily to compute covariances of these variables with other components of the state vector (step 5 above) by using the stationary_distributions method of the LinearStateSpace class.
import numpy as np
import quantecon as qe
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import plotly.express as px
import plotly.offline as pyo
from statsmodels.regression.linear_model import OLS
from IPython.display import display, Latex, Image
pyo.init_notebook_mode(connected=True)
β = 0.9 # Discount factor
ρ = 0.8 # Persistence parameter for the hidden state
b = 1.5 # Demand curve parameter
σ_v = 0.5 # Standard deviation of shock to θ_t
σ_e = 0.6 # Standard deviation of shocks to w_t
# Compute λ and λ_tilde
poly = np.array([1, -(1 + b + 1 / β), 1 / β])
roots_poly = np.roots(poly)
λ_tilde = roots_poly.min()
λ = roots_poly.max()
# Verify that λ = (β * λ_tilde) ** (-1)
tol = 1e-12
np.max(np.abs(λ - 1 / (β * λ_tilde))) < tol
True
A_ricc = np.array([[ρ]])
B_ricc = np.array([[1.]])
R_ricc = np.array([[σ_e ** 2]])
Q_ricc = np.array([[σ_v ** 2]])
N_ricc = np.zeros((1, 1))
p = qe.solve_discrete_riccati(A_ricc, B_ricc, Q_ricc, R_ricc, N_ricc).item()
p_one = p # Save for comparison later
# Verify that p = σ_v ** 2 + p * ρ ** 2 - (ρ * p) ** 2 / (p + σ_e ** 2)
tol = 1e-12
np.abs(p - (σ_v ** 2 + p * ρ ** 2 - (ρ * p) ** 2 / (p + σ_e ** 2))) < tol
True
κ = ρ * p / (p + σ_e ** 2)
κ_prod = κ * σ_e ** 2 / p
κ_one = κ # Save for comparison later
A_lss = np.array([[0., 0., 0., 0., 0., 0.],
                  [κ / (λ - ρ), λ_tilde, -κ_prod / (λ - ρ), 0., ρ / (λ - ρ), 0.],
                  [-κ, 0., κ_prod, 0., 0., 1.],
                  [-b * κ / (λ - ρ), -b * λ_tilde, b * κ_prod / (λ - ρ), 0., -b * ρ / (λ - ρ) + ρ, 1.],
                  [0., 0., 0., 0., ρ, 1.],
                  [0., 0., 0., 0., 0., 0.]])
C_lss = np.array([[σ_e, 0.],
                  [0., 0.],
                  [0., 0.],
                  [σ_e, 0.],
                  [0., 0.],
                  [0., σ_v]])
G_lss = np.array([[0., 0., 0., 1., 0., 0.],
                  [1., 0., 0., 0., 1., 0.],
                  [1., 0., 0., 0., 0., 0.]])
mu_0 = np.array([0., 0., 0., 0., 0., 0.])
lss = qe.LinearStateSpace(A_lss, C_lss, G_lss, mu_0=mu_0)
ts_length = 100_000
x, y = lss.simulate(ts_length, random_state=1)
# Verify that two ways of computing P_t match
np.max(np.abs(np.array([[1., -b, 0., 0., 1., 0.]]) @ x - x[3])) < 1e-12
True
32.7.4. Step 4: Compute impulse response functions¶
To compute impulse response functions of \(k_t^i\), we use the impulse_response method of the quantecon.LinearStateSpace class and plot the result.
xcoef, ycoef = lss.impulse_response(j=21)
data = np.array([xcoef])[0, :, 1, :]
fig = go.Figure(data=go.Scatter(y=data[:-1, 0], name=r'$e_{t+1}$'))
fig.add_trace(go.Scatter(y=data[1:, 1], name=r'$v_{t+1}$'))
fig.update_layout(title=r'Impulse Response Function',
                  xaxis_title='Time',
                  yaxis_title=r'$k^{i}_{t}$')
fig1 = fig
# Export to PNG file
Image(fig1.to_image(format="png"))
# fig1.show() will provide interactive plot when running
# notebook locally
32.7.5. Step 5: Compute stationary covariance matrices and population regressions¶
We compute stationary covariance matrices by calling the stationary_distributions method of the quantecon.LinearStateSpace class.
By appropriately decomposing the covariance matrix of the state vector, we obtain ingredients of some population regression coefficients.
where \(\Sigma_{11}\) is the covariance matrix of dependent variables and \(\Sigma_{22}\) is the covariance matrix of independent variables.
Regression coefficients are \(\beta=\Sigma_{21}\Sigma_{22}^{1}\).
To verify an instance of a law of large numbers computation, we construct a long simulation of the state vector and for the resulting sample compute the ordinary leastsquares estimator of \(\beta\) that we shall compare to the corresponding population regression coefficients.
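The partitioned-covariance regression formula can be illustrated on a small synthetic Gaussian system before applying it to the lecture’s state vector (all numbers below are made up for the sketch):

```python
import numpy as np

# Population regression coefficients from a partitioned covariance
# matrix, β = Σ_21 Σ_22⁻¹, checked against OLS on a long simulated
# sample.  The covariance matrix here is synthetic and illustrative.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
Σ = M @ M.T                       # a valid covariance matrix
Σ_21 = Σ[0, 1:]                   # cov(dependent variable, regressors)
Σ_22 = Σ[1:, 1:]                  # cov(regressors)
β_pop = Σ_21 @ np.linalg.inv(Σ_22)

# Law-of-large-numbers check: OLS on mean-zero simulated data
sample = rng.multivariate_normal(np.zeros(3), Σ, size=200_000)
β_ols, *_ = np.linalg.lstsq(sample[:, 1:], sample[:, 0], rcond=None)
print(np.max(np.abs(β_pop - β_ols)))
```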
_, _, Σ_x, Σ_y, Σ_yx = lss.stationary_distributions()
Σ_11 = Σ_x[0, 0]
Σ_12 = Σ_x[0, 1:4]
Σ_21 = Σ_x[1:4, 0]
Σ_22 = Σ_x[1:4, 1:4]
reg_coeffs = Σ_12 @ np.linalg.inv(Σ_22)
print('Regression coefficients (e_t on k_t, \\tilde{\\theta_t}, P_t)')
print('')
print(r'k_t:', reg_coeffs[0])
print(r'\tilde{\theta_t}:', reg_coeffs[1])
print(r'P_t:', reg_coeffs[2])
Regression coefficients (e_t on k_t, \tilde{\theta_t}, P_t)

k_t: 3.275556845219768
\tilde{\theta_t}: 0.9649461170475454
P_t: 0.9649461170475454
# Compute R squared
R_squared = reg_coeffs @ Σ_x[1:4, 1:4] @ reg_coeffs / Σ_x[0, 0]
R_squared
0.9649461170475452
# Verify that the computed coefficients are close to least squares estimates
model = OLS(x[0], x[1:4].T)
reg_res = model.fit()
np.max(np.abs(reg_coeffs - reg_res.params)) < 1e-2
True
# Verify that R_squared matches least squares estimate
np.abs(reg_res.rsquared - R_squared) < 1e-2
True
# Verify that θ_t + e_t can be recovered
model = OLS(y[1], x[1:4].T)
reg_res = model.fit()
np.abs(reg_res.rsquared - 1.) < 1e-6
True
32.8. Equilibrium with two noisy signals on \(\theta_t\)¶
Steps 1, 4, and 5 are identical to those for the one-noisy-signal structure.
Step 2 requires only a straightforward modification.
For step 3, we construct the following state-space representation so that we can get our hands on all of the random processes that we require in order to compute a regression of the noisy signal about \(\theta\) from the other industry that a firm receives directly in a pooling equilibrium on the information that a firm receives in Townsend’s original model.
For this purpose, we include equilibrium goods prices from both industries in the state vector:
A_ricc = np.array([[ρ]])
B_ricc = np.array([[np.sqrt(2)]])
R_ricc = np.array([[σ_e ** 2]])
Q_ricc = np.array([[σ_v ** 2]])
N_ricc = np.zeros((1, 1))
p = qe.solve_discrete_riccati(A_ricc, B_ricc, Q_ricc, R_ricc, N_ricc).item()
p_two = p # Save for comparison later
# Verify that p = σ_v ** 2 + p * ρ ** 2 * σ_e ** 2 / (2 * p + σ_e ** 2)
tol = 1e-12
np.abs(p - (σ_v ** 2 + p * ρ ** 2 * σ_e ** 2 / (2 * p + σ_e ** 2))) < tol
True
κ = ρ * p / (2 * p + σ_e ** 2)
κ_prod = κ * σ_e ** 2 / p
κ_two = κ # Save for comparison later
A_lss = np.array([[0., 0., 0., 0., 0., 0., 0., 0.],
                  [0., 0., 0., 0., 0., 0., 0., 0.],
                  [κ / (λ - ρ), κ / (λ - ρ), λ_tilde, -κ_prod / (λ - ρ), 0., 0., ρ / (λ - ρ), 0.],
                  [-κ, -κ, 0., κ_prod, 0., 0., 0., 1.],
                  [-b * κ / (λ - ρ), -b * κ / (λ - ρ), -b * λ_tilde, b * κ_prod / (λ - ρ), 0., 0., -b * ρ / (λ - ρ) + ρ, 1.],
                  [-b * κ / (λ - ρ), -b * κ / (λ - ρ), -b * λ_tilde, b * κ_prod / (λ - ρ), 0., 0., -b * ρ / (λ - ρ) + ρ, 1.],
                  [0., 0., 0., 0., 0., 0., ρ, 1.],
                  [0., 0., 0., 0., 0., 0., 0., 0.]])
C_lss = np.array([[σ_e, 0., 0.],
                  [0., σ_e, 0.],
                  [0., 0., 0.],
                  [0., 0., 0.],
                  [σ_e, 0., 0.],
                  [0., σ_e, 0.],
                  [0., 0., 0.],
                  [0., 0., σ_v]])
G_lss = np.array([[0., 0., 0., 0., 1., 0., 0., 0.],
                  [0., 0., 0., 0., 0., 1., 0., 0.],
                  [1., 0., 0., 0., 0., 0., 1., 0.],
                  [0., 1., 0., 0., 0., 0., 1., 0.],
                  [1., 0., 0., 0., 0., 0., 0., 0.],
                  [0., 1., 0., 0., 0., 0., 0., 0.]])
mu_0 = np.array([0., 0., 0., 0., 0., 0., 0., 0.])
lss = qe.LinearStateSpace(A_lss, C_lss, G_lss, mu_0=mu_0)
ts_length = 100_000
x, y = lss.simulate(ts_length, random_state=1)
xcoef, ycoef = lss.impulse_response(j=20)
data = np.array([xcoef])[0, :, 2, :]
fig = go.Figure(data=go.Scatter(y=data[:-1, 0], name=r'$e_{1,t+1}$'))
fig.add_trace(go.Scatter(y=data[:-1, 1], name=r'$e_{2,t+1}$'))
fig.add_trace(go.Scatter(y=data[1:, 2], name=r'$v_{t+1}$'))
fig.update_layout(title=r'Impulse Response Function',
                  xaxis_title='Time',
                  yaxis_title=r'$k^{i}_{t}$')
fig2 = fig
# Export to PNG file
Image(fig2.to_image(format="png"))
# fig2.show() will provide interactive plot when running
# notebook locally
_, _, Σ_x, Σ_y, Σ_yx = lss.stationary_distributions()
Σ_11 = Σ_x[1, 1]
Σ_12 = Σ_x[1, 2:5]
Σ_21 = Σ_x[2:5, 1]
Σ_22 = Σ_x[2:5, 2:5]
reg_coeffs = Σ_12 @ np.linalg.inv(Σ_22)
print('Regression coefficients (e_{2,t} on k_t, \\tilde{\\theta_t}, P^{1}_t)')
print('')
print(r'k_t:', reg_coeffs[0])
print(r'\tilde{\theta_t}:', reg_coeffs[1])
print(r'P^{1}_t:', reg_coeffs[2])
Regression coefficients (e_{2,t} on k_t, \tilde{\theta_t}, P^{1}_t)

k_t: 0.0
\tilde{\theta_t}: 0.0
P^{1}_t: 0.0
# Compute R squared
R_squared = reg_coeffs @ Σ_x[2:5, 2:5] @ reg_coeffs / Σ_x[1, 1]
R_squared
0.0
# Verify that the computed coefficients are close to least squares estimates
model = OLS(x[1], x[2:5].T)
reg_res = model.fit()
np.max(np.abs(reg_coeffs - reg_res.params)) < 1e-2
True
# Verify that R_squared matches least squares estimate
np.abs(reg_res.rsquared - R_squared) < 1e-2
True
_, _, Σ_x, Σ_y, Σ_yx = lss.stationary_distributions()
Σ_11 = Σ_x[1, 1]
Σ_12 = Σ_x[1, 2:6]
Σ_21 = Σ_x[2:6, 1]
Σ_22 = Σ_x[2:6, 2:6]
reg_coeffs = Σ_12 @ np.linalg.inv(Σ_22)
print('Regression coefficients (e_{2,t} on k_t, \\tilde{\\theta_t}, P^{1}_t, P^{2}_t)')
print('')
print(r'k_t:', reg_coeffs[0])
print(r'\tilde{\theta_t}:', reg_coeffs[1])
print(r'P^{1}_t:', reg_coeffs[2])
print(r'P^{2}_t:', reg_coeffs[3])
Regression coefficients (e_{2,t} on k_t, \tilde{\theta_t}, P^{1}_t, P^{2}_t)

k_t: 3.1373589171035654
\tilde{\theta_t}: 0.924234396744368
P^{1}_t: 0.037882801627815835
P^{2}_t: 0.9621171983721839
# Compute R squared
R_squared = reg_coeffs @ Σ_x[2:6, 2:6] @ reg_coeffs / Σ_x[1, 1]
R_squared
0.9621171983721838
32.9. Key step¶
Now we come to the key step of verifying that equilibrium outcomes for prices and quantities are identical in the pooling equilibrium and Townsend’s original model.
We accomplish this by computing a population linear least squares regression of the noisy signal that firms in the other industry receive in a pooling equilibrium on time \(t\) information that a firm receives in Townsend’s original model.
Let’s compute the regression and stare at the \(R^2\):
# Verify that θ_t + e^{2}_t can be recovered by regressing
# θ_t + e^{2}_t on k^{i}_t, \tilde{θ}_t, P^{1}_t, P^{2}_t
model = OLS(y[1], x[2:6].T)
reg_res = model.fit()
np.abs(reg_res.rsquared - 1.) < 1e-6
True
reg_res.rsquared
1.0
The \(R^2\) in this regression equals \(1\).
That verifies that a firm’s information set in Townsend’s original model equals its information set in a pooling equilibrium.
Therefore, equilibrium prices and quantities in Townsend’s original model equal those in a pooling equilibrium.
32.10. Comparison of the two signal structures¶
It is enlightening to plot, side by side, impulse response functions for capital in an industry under the two noisy-signal information structures.
Please remember that the two-signal structure corresponds to the pooling equilibrium and also to Townsend’s original model.
fig_comb = go.Figure(data=[*fig1.data,
*fig2.update_traces(xaxis='x2', yaxis='y2').data]).set_subplots(1, 2,
subplot_titles=("One noisysignal structure", "Two noisysignal structure"),
horizontal_spacing=0.1,
shared_yaxes=True)
# Export to PNG file
Image(fig_comb.to_image(format="png"))
# fig_comb.show() will provide interactive plot when running
# notebook locally
The graphs above show that
the response of \(k_t^i\) to shocks \(v_t\) to the hidden Markov demand state \(\theta_t\) process is larger in the two-noisy-signal structure
the response of \(k_t^i\) to idiosyncratic own-market noise shocks \(e_t\) is smaller in the two-noisy-signal structure
Taken together, these findings in turn can be shown to imply that time series correlations and coherences between outputs in the two industries are higher in the two-noisy-signals or pooling model.
The enhanced influence of the shocks \(v_t\) to the hidden Markov demand state \(\theta_t\) process that emerges from the two-noisy-signal model relative to the one-noisy-signal model is a symptom of a lower equilibrium hidden-state reconstruction error variance in the two-signal model:
display(Latex('$\\textbf{Reconstruction error variances}$'))
display(Latex(f'One-noise structure: {round(p_one, 6)}'))
display(Latex(f'Two-noise structure: {round(p_two, 6)}'))
Kalman gains for the two structures are
display(Latex('$\\textbf{Kalman Gains}$'))
display(Latex(f'One noisy-signal structure: {round(κ_one, 6)}'))
display(Latex(f'Two noisy-signals structure: {round(κ_two, 6)}'))
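The direction of these comparisons can be reproduced in a self-contained scalar Kalman filtering sketch: pooling two independent noisy signals of the same hidden state halves the effective measurement noise variance, which lowers the steady-state reconstruction error variance and raises the Kalman gain. The parameter values below are illustrative, not the lecture's calibration:

```python
import numpy as np

def steady_state_kalman(ρ, σ_v2, σ_e2, tol=1e-12):
    """Fixed point of the scalar Riccati equation for the state
    θ_{t+1} = ρ θ_t + v_t observed through y_t = θ_t + e_t."""
    p = σ_v2
    while True:
        p_new = ρ**2 * p * σ_e2 / (p + σ_e2) + σ_v2
        if abs(p_new - p) < tol:
            break
        p = p_new
    κ = p / (p + σ_e2)            # steady-state Kalman gain
    return p, κ

ρ, σ_v2, σ_e2 = 0.9, 1.0, 2.0     # illustrative parameters
p1, κ1 = steady_state_kalman(ρ, σ_v2, σ_e2)        # one signal
p2, κ2 = steady_state_kalman(ρ, σ_v2, σ_e2 / 2)    # two signals pooled
print(p2 < p1, κ2 > κ1)   # True True
```

A lower error variance together with a higher gain is the scalar analogue of the pattern reported above: better-informed firms weight new signals more heavily, so the hidden demand shocks \(v_t\) matter more and the idiosyncratic noises matter less.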
32.11. Notes on History of the Problem¶
To truncate what he saw as an intractable, infinite dimensional state space, Townsend constructed an approximating model in which the common hidden Markov demand shock is revealed to all firms after a fixed number of periods.
Thus,
Townsend wanted to assume that at time \(t\) firms in industry \(i\) observe \(k_t^i, Y_t^i, P_t^i, (P^{i})^t\), where \((P^{i})^t\) is the history of prices in the other market up to time \(t\).
Because that turned out to be too challenging, Townsend made an alternative assumption that eased his calculations: that after a large number \(S\) of periods, firms in industry \(i\) observe the hidden Markov component of the demand shock \(\theta_{t-S}\).
Townsend argued that the more manageable model could do a good job of approximating the intractable model in which the Markov component of the demand shock remains unobserved for ever.
By applying technical machinery of [PCL86], [PS05] showed that there is a recursive representation of the equilibrium of the perpetually and symmetrically uninformed model formulated but not completely solved in section 8 of [Tow83].
A reader of [PS05] will notice that their representation of the equilibrium of Townsend’s model exactly matches that of the pooling equilibrium presented here.
We have structured our notation in this lecture to facilitate comparison of the pooling equilibrium constructed here with the equilibrium of Townsend’s model reported in [PS05].
The computational method of [PS05] is recursive: it enlists the Kalman filter and invariant subspace methods for solving systems of Euler equations 5.
As [Sin87], [Kas00], and [Sar91] also found, the equilibrium is fully revealing: observed prices tell participants in industry \(i\) all of the information held by participants in market \(-i\) (\(-i\) means not \(i\)).
This means that higher-order beliefs play no role: seeing equilibrium prices in effect lets decision makers pool their information sets 6.
The disappearance of higher order beliefs means that decision makers in this model do not really face a problem of forecasting the forecasts of others.
They know those forecasts because they are the same as their own.
32.11.1. Further historical remarks¶
[Sar91] proposed a way to compute an equilibrium without making Townsend’s approximation.
Extending the reasoning of [Mut60], Sargent noticed that it is possible to summarize the relevant history with a low dimensional object, namely, a small number of current and lagged forecasting errors.
Positing an equilibrium in a space of perceived laws of motion for endogenous variables that take the form of a vector autoregressive, moving average process, Sargent described an equilibrium as a fixed point of a mapping from the perceived law of motion to the actual law of motion of that form.
Sargent worked in the time domain and had to guess and verify the appropriate orders of the autoregressive and moving average pieces of the equilibrium representation.
By working in the frequency domain [Kas00] showed how to discover the appropriate orders of the autoregressive and moving average parts, and also how to compute an equilibrium.
The [PS05] recursive computational method, which stays in the time domain, also discovered appropriate orders of the autoregressive and moving average pieces.
In addition, by displaying equilibrium representations in the form of [PCL86], [PS05] showed how the moving average piece is linked to the innovation process of the hidden persistent component of the demand shock.
That scalar innovation process is the additional state variable contributed by the problem of extracting a signal from equilibrium prices that decision makers face in Townsend’s model.
 1
[PS05] verified this assertion using a different tactic, namely, by constructing analytic formulas for an equilibrium under the incomplete information structure and confirming that they match the pooling equilibrium formulas derived here.
 2
See [Sar87], especially chapters IX and XIV, for the principles that guide solving some roots backwards and others forwards.
 3
As noted in [Sar87], this difference equation is the Euler equation for the planning problem of maximizing the discounted sum of consumer plus producer surplus.
 4
[PS05] verify the same claim by applying machinery of [PCL86].
 5
See [AHMS96] for an account of invariant subspace methods.
 6
See [AHMS96] for a discussion of the information assumptions needed to create a situation in which higher order beliefs appear in equilibrium decision rules. The way to read our findings in light of [AMS02] is that Townsend’s section 8 model has too few sources of random shocks relative to sources of signals to permit higher order beliefs to play a role.