{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Stackelberg Plans" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "- [Stackelberg Plans](#Stackelberg-Plans) \n", " - [Overview](#Overview) \n", " - [Duopoly](#Duopoly) \n", " - [The Stackelberg Problem](#The-Stackelberg-Problem) \n", " - [Stackelberg Plan](#Stackelberg-Plan) \n", " - [Recursive Representation of Stackelberg Plan](#Recursive-Representation-of-Stackelberg-Plan) \n", " - [Computing the Stackelberg Plan](#Computing-the-Stackelberg-Plan) \n", " - [Exhibiting Time Inconsistency of Stackelberg Plan](#Exhibiting-Time-Inconsistency-of-Stackelberg-Plan) \n", " - [Recursive Formulation of the Follower’s Problem](#Recursive-Formulation-of-the-Follower’s-Problem) \n", " - [Markov Perfect Equilibrium](#Markov-Perfect-Equilibrium) \n", " - [MPE vs. Stackelberg](#MPE-vs.-Stackelberg) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to what’s in Anaconda, this lecture will need the following libraries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "!pip install --upgrade quantecon" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "This notebook formulates and computes a plan that a **Stackelberg\n", "leader** uses to manipulate forward-looking decisions of a **Stackelberg\n", "follower** that depend on continuation sequences of decisions made once\n", "and for all by the Stackelberg leader at time $0$.\n", "\n", "To facilitate computation and interpretation, we formulate things in a\n", "context that allows us to apply dynamic programming for linear-quadratic models.\n", "\n", "From the beginning, we carry along a linear-quadratic model of duopoly in\n", "which firms face adjustment costs that make them want to forecast\n", "actions of other firms that 
influence future prices.\n", "\n", "Let’s start with some standard imports:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "import numpy.linalg as la\n", "import quantecon as qe\n", "from quantecon import LQ\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Duopoly\n", "\n", "Time is discrete and is indexed by $t = 0, 1, \\ldots$.\n", "\n", "Two firms produce a single good whose demand is governed by the linear\n", "inverse demand curve\n", "\n", "$$\n", "p_t = a_0 - a_1 (q_{1t}+ q_{2t} )\n", "$$\n", "\n", "where $q_{it}$ is output of firm $i$ at time $t$ and\n", "$a_0$ and $a_1$ are both positive.\n", "\n", "$q_{10}, q_{20}$ are given numbers that serve as initial\n", "conditions at time $0$.\n", "\n", "By incurring a cost of change\n", "\n", "$$\n", "\\gamma v_{it}^2\n", "$$\n", "\n", "where $\\gamma > 0$, firm $i$ can change its output according\n", "to\n", "\n", "$$\n", "q_{it+1} = q_{it} + v_{it}\n", "$$\n", "\n", "Firm $i$’s profits at time $t$ equal\n", "\n", "$$\n", "\\pi_{it} = p_t q_{it} - \\gamma v_{it}^2\n", "$$\n", "\n", "Firm $i$ wants to maximize the present value of its profits\n", "\n", "$$\n", "\\sum_{t=0}^\\infty \\beta^t \\pi_{it}\n", "$$\n", "\n", "where $\\beta \\in (0,1)$ is a time discount factor." 
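] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick numerical illustration of the objects just defined, we can evaluate the inverse demand curve and one-period profits $\\pi_{it} = p_t q_{it} - \\gamma v_{it}^2$ at some sample quantities (the quantities and adjustment rate below are arbitrary illustrations, not part of the model; the parameter values match the ones used later in this lecture):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Illustrative sample values; a0, a1, γ match the parameters used later\n",
"a0, a1, γ = 10, 2, 120\n",
"\n",
"def price(q1, q2):\n",
"    # linear inverse demand curve p = a0 - a1 (q1 + q2)\n",
"    return a0 - a1 * (q1 + q2)\n",
"\n",
"def profit(qi, q_other, vi):\n",
"    # π_i = p q_i - γ v_i^2\n",
"    return price(qi, q_other) * qi - γ * vi**2\n",
"\n",
"profit(1.0, 1.0, 0.1)  # p = 6, so profits are 6 - 1.2 = 4.8"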
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Stackelberg Leader and Follower\n", "\n", "Each firm $i=1,2$ chooses a sequence\n", "$\\vec q_i \\equiv \\{q_{it+1}\\}_{t=0}^\\infty$ once and for all at\n", "time $0$.\n", "\n", "We let firm 2 be a **Stackelberg leader** and firm 1 be a **Stackelberg\n", "follower**.\n", "\n", "The leader firm 2 goes first and chooses\n", "$\\{q_{2t+1}\\}_{t=0}^\\infty$ once and for all at time $0$.\n", "\n", "Knowing that firm 2 has chosen $\\{q_{2t+1}\\}_{t=0}^\\infty$, the\n", "follower firm 1 goes second and chooses\n", "$\\{q_{1t+1}\\}_{t=0}^\\infty$ once and for all at time $0$.\n", "\n", "In choosing $\\vec q_2$, firm 2 takes into account that firm 1 will\n", "base its choice of $\\vec q_1$ on firm 2’s choice of\n", "$\\vec q_2$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Abstract Statement of the Leader’s and Follower’s Problems\n", "\n", "We can express firm 1’s problem as\n", "\n", "$$\n", "\\max_{\\vec q_1} \\Pi_1(\\vec q_1; \\vec q_2)\n", "$$\n", "\n", "where the appearance behind the semi-colon indicates that\n", "$\\vec q_2$ is given.\n", "\n", "Firm 1’s problem induces the best response mapping\n", "\n", "$$\n", "\\vec q_1 = B(\\vec q_2)\n", "$$\n", "\n", "(Here $B$ maps a sequence into a sequence)\n", "\n", "The Stackelberg leader’s problem is\n", "\n", "$$\n", "\\max_{\\vec q_2} \\Pi_2 (B(\\vec q_2), \\vec q_2)\n", "$$\n", "\n", "whose maximizer is a sequence $\\vec q_2$ that depends on the\n", "initial conditions $q_{10}, q_{20}$ and the parameters of the\n", "model $a_0, a_1, \\gamma$.\n", "\n", "This formulation captures key features of the model\n", "\n", "- Both firms make once-and-for-all choices at time $0$. \n", "- This is true even though both firms are choosing sequences of\n", " quantities that are indexed by **time**. 
\n", "- The Stackelberg leader chooses first **within time** $0$,\n", " knowing that the Stackelberg follower will choose second **within\n", " time** $0$. \n", "\n", "\n", "While our abstract formulation reveals the timing protocol and\n", "equilibrium concept well, it obscures details that must be addressed\n", "when we want to compute and interpret a Stackelberg plan and the\n", "follower’s best response to it.\n", "\n", "To gain insights about these things, we study them in more detail." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Firms’ Problems\n", "\n", "Firm 1 acts as if firm 2’s sequence $\\{q_{2t+1}\\}_{t=0}^\\infty$ is\n", "given and beyond its control.\n", "\n", "Firm 2 knows that firm 1 chooses second and takes this into account in\n", "choosing $\\{q_{2t+1}\\}_{t=0}^\\infty$.\n", "\n", "In the spirit of *working backward*, we study firm 1’s problem first,\n", "taking $\\{q_{2t+1}\\}_{t=0}^\\infty$ as given.\n", "\n", "We can formulate firm 1’s optimum problem in terms of the Lagrangian\n", "\n", "$$\n", "L=\\sum_{t=0}^{\\infty}\\beta^{t}\\{a_{0}q_{1t}-a_{1}q_{1t}^{2}-a_{1}q_{1t}q_{2t}-\\gamma v_{1t}^{2}+\\lambda_{t}[q_{1t}+v_{1t}-q_{1t+1}]\\}\n", "$$\n", "\n", "Firm 1 seeks a maximum with respect to\n", "$\\{q_{1t+1}, v_{1t} \\}_{t=0}^\\infty$ and a minimum with respect to\n", "$\\{ \\lambda_t\\}_{t=0}^\\infty$.\n", "\n", "We approach this problem using methods described in Ljungqvist and\n", "Sargent RMT5 chapter 2, appendix A and Macroeconomic Theory, 2nd\n", "edition, chapter IX.\n", "\n", "First-order conditions for this problem are\n", "\n", "\n", "\\begin{aligned} \\frac{\\partial L}{\\partial q_{1t}} & = a_0 - 2 a_1 q_{1t} - a_1 q_{2t} + \\lambda_t - \\beta^{-1}\n", " \\lambda_{t-1} = 0 , \\quad t \\geq 1 \\cr\n", " \\frac{\\partial L}{\\partial v_{1t}} & = -2 \\gamma v_{1t} + \\lambda_t = 0 , \\quad t \\geq 0 \\cr \\end{aligned}\n", "\n", "\n", "These first-order conditions and the constraint $q_{1t+1} = q_{1t} + v_{1t}$ 
can be rearranged to take the form\n", "\n", "\n",
"\\begin{aligned} v_{1t} & = \\beta v_{1t+1} + \\frac{\\beta a_0}{2 \\gamma} - \\frac{\\beta a_1}{\\gamma} q_{1t+1} -\n",
"  \\frac{\\beta a_1}{2 \\gamma} q_{2t+1} \\cr\n",
"  q_{1t+1} & = q_{1t} + v_{1t} \\end{aligned}\n", "\n", "\n",
"We can substitute the second equation into the first equation to obtain\n", "\n",
"$$\n",
"(q_{1t+1} - q_{1t} ) = \\beta (q_{1t+2} - q_{1t+1}) + c_0 - c_1 q_{1t+1} - c_2 q_{2t+1}\n",
"$$\n", "\n",
"where\n",
"$c_0 = \\frac{\\beta a_0}{2 \\gamma}, c_1 = \\frac{\\beta a_1}{\\gamma}, c_2 = \\frac{\\beta a_1}{2 \\gamma}$.\n", "\n",
"This equation can in turn be rearranged to become the second-order\n",
"difference equation\n", "\n", "\n", "\n",
"$$\n",
"- q_{1t} + (1+\\beta + c_1) q_{1t+1} - \\beta q_{1t+2} = c_0 - c_2 q_{2t+1} \\tag{37.1}\n",
"$$\n", "\n",
"Equation [(37.1)](#equation-sstack1) is a second-order difference equation in the sequence\n",
"$\\vec q_1$ whose solution we want.\n", "\n",
"It satisfies **two boundary conditions:**\n", "\n",
"- an initial condition for $q_{10}$, which is given \n",
"- a terminal condition requiring that\n",
"  $\\lim_{T \\rightarrow + \\infty} \\beta^T q_{1T}^2 < + \\infty$ \n", "\n", "\n",
"Using the lag operators described in chapter IX of *Macroeconomic\n",
"Theory, Second edition (1987)*, difference equation\n",
"[(37.1)](#equation-sstack1) can be written as\n", "\n",
"$$\n",
"\\beta(1 - \\frac{1+\\beta + c_1}{\\beta} L + \\beta^{-1} L^2 ) q_{1t+2} = - c_0 + c_2 q_{2t+1}\n",
"$$\n", "\n",
"The polynomial in the lag operator on the left side can be **factored**\n",
"as\n", "\n", "\n", "\n",
"$$\n",
"(1 - \\frac{1+\\beta + c_1}{\\beta} L + \\beta^{-1} L^2 ) = ( 1 - \\delta_1 L ) (1 - \\delta_2 L) \\tag{37.2}\n",
"$$\n", "\n",
"where $0 < \\delta_1 < 1 < \\frac{1}{\\sqrt{\\beta}} < \\delta_2$.\n", "\n",
"Because $\\delta_2 > \\frac{1}{\\sqrt{\\beta}}$, the operator\n",
"$(1 - \\delta_2 L)$ contributes an **unstable** component if solved\n",
"**backwards** but a **stable** component if solved **forwards**.\n", "\n",
"Mechanically, write\n", "\n",
"$$\n",
"(1- \\delta_2 L) = -\\delta_{2} L (1 - \\delta_2^{-1} L^{-1} )\n",
"$$\n", "\n",
"and compute the following inverse operator\n", "\n",
"$$\n",
"\\left[-\\delta_{2} L (1 - \\delta_2^{-1} L^{-1} )\\right]^{-1} = - \\delta_2^{-1} L^{-1} (1 - \\delta_2^{-1} L^{-1} )^{-1}\n",
"$$\n", "\n",
"Operating on both sides of equation [(37.2)](#equation-sstack2) with\n",
"$\\beta^{-1}$ times this inverse operator gives the follower’s\n",
"decision rule for setting $q_{1t+1}$ in the\n",
"**feedback-feedforward** form.\n", "\n", "\n", "\n",
"$$\n",
"q_{1t+1} = \\delta_1 q_{1t} - c_0 \\delta_2^{-1} \\beta^{-1} \\frac{1}{1 -\\delta_2^{-1}} + c_2 \\delta_2^{-1} \\beta^{-1} \\sum_{j=0}^\\infty \\delta_2^{-j} q_{2t+j+1} , \\quad t \\geq 0 \\tag{37.3}\n",
"$$\n", "\n",
"The problem of the Stackelberg leader firm 2 is to choose the sequence\n",
"$\\{q_{2t+1}\\}_{t=0}^\\infty$ to maximize its discounted profits\n", "\n",
"$$\n",
"\\sum_{t=0}^\\infty \\beta^t \\{ (a_0 - a_1 (q_{1t} + q_{2t}) ) q_{2t} - \\gamma (q_{2t+1} - q_{2t})^2 \\}\n",
"$$\n", "\n",
"subject to the sequence of constraints [(37.3)](#equation-sstack3) for $t \\geq 0$.\n", "\n",
"We can put a sequence $\\{\\theta_t\\}_{t=0}^\\infty$ of Lagrange\n",
"multipliers on the sequence of equations [(37.3)](#equation-sstack3)\n",
"and formulate the following Lagrangian for the Stackelberg leader firm\n",
"2’s problem\n", "\n", "\n", "\n", "\n",
"\\begin{aligned} \\tilde L & = \\sum_{t=0}^\\infty \\beta^t\\{ (a_0 - a_1 (q_{1t} + q_{2t}) ) q_{2t} - \\gamma (q_{2t+1} - q_{2t})^2 \\} \\cr\n",
"  & + \\sum_{t=0}^\\infty \\beta^t \\theta_t \\{ \\delta_1 q_{1t} - c_0 \\delta_2^{-1} \\beta^{-1} \\frac{1}{1 -\\delta_2^{-1}} + c_2 \\delta_2^{-1} \\beta^{-1}\n",
"  \\sum_{j=0}^\\infty \\delta_2^{-j} q_{2t+j+1} - q_{1t+1} \\} \\end{aligned} \\tag{37.4}\n", "\n", "\n",
"subject to initial conditions for $q_{1t}, q_{2t}$ at $t=0$.\n", "\n",
"**Comments:** We have formulated the Stackelberg problem in a space of\n", "sequences.\n", "\n", "The max-min problem associated with Lagrangian\n", "[(37.4)](#equation-sstack4) is unpleasant because the time $t$\n", "component of firm $1$’s payoff function depends on the entire\n", "future of its choices of $\\{q_{1t+j}\\}_{j=0}^\\infty$.\n", "\n", "This renders a direct attack on the problem cumbersome.\n", "\n", "Therefore, below, we will formulate the Stackelberg leader’s problem\n", "recursively.\n", "\n", "We’ll put our little duopoly model into a broader class of models with\n", "the same conceptual structure." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Stackelberg Problem\n", "\n", "We formulate a class of linear-quadratic Stackelberg leader-follower\n", "problems of which our duopoly model is an instance.\n", "\n", "We use the optimal linear regulator (a.k.a. the linear-quadratic dynamic\n", "programming problem described in [LQ Dynamic Programming\n", "problems](https://python-intro.quantecon.org/lqcontrol.html)) to\n", "represent a Stackelberg leader’s problem recursively.\n", "\n", "Let $z_t$ be an $n_z \\times 1$ vector of **natural\n", "state variables**.\n", "\n", "Let $x_t$ be an $n_x \\times 1$ vector of endogenous\n", "forward-looking variables that are physically free to jump at $t$.\n", "\n", "In our duopoly example $x_t = v_{1t}$, the time $t$ decision\n", "of the Stackelberg **follower**.\n", "\n", "Let $u_t$ be a vector of decisions chosen by the Stackelberg leader\n", "at $t$.\n", "\n", "The $z_t$ vector is inherited physically from the past.\n", "\n", "But $x_t$ is a decision made by the Stackelberg follower at time\n", "$t$ that is the follower’s best response to the choice of an\n", "entire sequence of decisions made by the Stackelberg leader at time\n", "$t=0$.\n", "\n", "Let\n", "\n", "$$\n", "y_t = \\begin{bmatrix} z_t \\\\ x_t \\end{bmatrix}\n", "$$\n", "\n", "Represent the Stackelberg leader’s one-period 
loss function as\n", "\n", "$$\n", "r(y, u) = y' R y + u' Q u\n", "$$\n", "\n", "Subject to an initial condition for $z_0$, but not for $x_0$, the\n", "Stackelberg leader wants to maximize\n", "\n", "\n", "\n", "$$\n", "-\\sum_{t=0}^\\infty \\beta^t r(y_t, u_t) \\tag{37.5}\n", "$$\n", "\n", "The Stackelberg leader faces the model\n", "\n", "\n", "\n", "$$\n", "\\begin{bmatrix} I & 0 \\\\ G_{21} & G_{22} \\end{bmatrix}\n", "\\begin{bmatrix} z_{t+1} \\\\ x_{t+1} \\end{bmatrix}\n", "= \\begin{bmatrix} \\hat A_{11} & \\hat A_{12} \\\\ \\hat A_{21} & \\hat A_{22} \\end{bmatrix} \\begin{bmatrix} z_t \\\\ x_t \\end{bmatrix} + \\hat B u_t \\tag{37.6}\n", "$$\n", "\n", "We assume that the matrix\n", "$\\begin{bmatrix} I & 0 \\\\ G_{21} & G_{22} \\end{bmatrix}$ on the\n", "left side of equation [(37.6)](#equation-new2) is invertible, so that we\n", "can multiply both sides by its inverse to obtain\n", "\n", "\n", "\n", "$$\n", "\\begin{bmatrix} z_{t+1} \\\\ x_{t+1} \\end{bmatrix}\n", "= \\begin{bmatrix} A_{11} & A_{12} \\\\ A_{21} & A_{22} \\end{bmatrix}\n", "\\begin{bmatrix} z_t \\\\ x_t \\end{bmatrix} + B u_t \\tag{37.7}\n", "$$\n", "\n", "or\n", "\n", "\n", "\n", "$$\n", "y_{t+1} = A y_t + B u_t \\tag{37.8}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Interpretation of the Second Block of Equations\n", "\n", "The Stackelberg follower’s best response mapping is summarized by the\n", "second block of equations of [(37.7)](#equation-new3).\n", "\n", "In particular, these equations are the first-order conditions of the\n", "Stackelberg follower’s optimization problem (i.e., its Euler equations).\n", "\n", "These Euler equations summarize the forward-looking aspect of the\n", "follower’s behavior and express how its time $t$ decision depends on\n", "the leader’s actions at times $s \\geq t$.\n", "\n", "When combined with a stability condition to be imposed below, the Euler\n", "equations summarize the follower’s best response to the sequence of\n", 
"actions by the leader.\n", "\n", "The Stackelberg leader maximizes [(37.5)](#equation-maxeq) by\n", "choosing sequences $\\{u_t, x_t, z_{t+1}\\}_{t=0}^{\\infty}$\n", "subject to [(37.8)](#equation-constrainteq) and an initial condition for $z_0$.\n", "\n", "Note that we have an initial condition for $z_0$ but not for $x_0$.\n", "\n", "$x_0$ is among the variables to be chosen at time $0$ by the\n", "Stackelberg leader.\n", "\n", "The Stackelberg leader uses its understanding of the responses\n", "restricted by [(37.8)](#equation-constrainteq) to manipulate the follower’s\n", "decisions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### More Mechanical Details\n", "\n", "For any vector $a_t$, define $\\vec a_t = [a_t,\n", "a_{t+1} \\ldots ]$.\n", "\n", "Define a feasible set of $(\\vec y_1, \\vec u_0)$ sequences\n", "\n", "$$\n", "\\Omega(y_0) = \\left\\{ (\\vec y_1, \\vec u_0) : y_{t+1} = A y_t + B u_t, \\forall t \\geq 0 \\right\\}\n", "$$\n", "\n", "Please remember that the follower’s Euler equation is embedded in the\n", "system of dynamic equations $y_{t+1} = A y_t + B u_t$.\n", "\n", "Note that in the definition of $\\Omega(y_0)$, $y_0$\n", "is taken as given.\n", "\n", "Although it is taken as given in $\\Omega(y_0)$,\n", "eventually, the $x_0$ component of $y_0$ will be chosen by the\n", "Stackelberg leader." 
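] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The definition of $\\Omega(y_0)$ can be illustrated with a short simulation in which the matrices $A, B$ and the control sequence are hypothetical placeholders, not objects computed from the duopoly model: any control sequence fed through the law of motion $y_{t+1} = A y_t + B u_t$ generates a member $(\\vec y_1, \\vec u_0)$ of $\\Omega(y_0)$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Hypothetical placeholder system; only the recursive structure matters here\n",
"A_ex = np.array([[1.0, 0.1],\n",
"                 [0.0, 0.9]])\n",
"B_ex = np.array([[0.0],\n",
"                 [1.0]])\n",
"\n",
"y0_ex = np.array([1.0, 0.0])                   # y_0 is taken as given in Ω(y_0)\n",
"us = [np.array([0.1 * t]) for t in range(3)]   # an arbitrary control sequence\n",
"\n",
"ys = [y0_ex]\n",
"for u in us:\n",
"    ys.append(A_ex @ ys[-1] + B_ex.flatten() * u)\n",
"\n",
"# by construction, every transition satisfies the constraint defining Ω(y_0)\n",
"all(np.allclose(ys[t+1], A_ex @ ys[t] + B_ex.flatten() * us[t]) for t in range(3))"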
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Two Subproblems\n", "\n", "Once again we use backward induction.\n", "\n", "We express the Stackelberg problem in terms of **two subproblems**.\n", "\n", "Subproblem 1 is solved by a **continuation Stackelberg leader** at each\n", "date $t \\geq 0$.\n", "\n", "Subproblem 2 is solved by the **Stackelberg leader** at $t=0$.\n", "\n", "The two subproblems are designed\n", "\n", "- to respect the protocol in which the follower chooses\n", " $\\vec q_1$ after seeing $\\vec q_2$ chosen by the leader \n", "- to make the leader choose $\\vec q_2$ while respecting that\n", " $\\vec q_1$ will be the follower’s best response to\n", " $\\vec q_2$ \n", "- to represent the leader’s problem recursively by artfully choosing\n", " the state variables confronting and the control variables available\n", " to the leader " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Subproblem 1\n", "\n", "$$\n", "v(y_0) = \\max_{(\\vec y_1, \\vec u_0) \\in \\Omega(y_0)} - \\sum_{t=0}^\\infty \\beta^t r(y_t, u_t)\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Subproblem 2\n", "\n", "$$\n", "w(z_0) = \\max_{x_0} v(y_0)\n", "$$\n", "\n", "Subproblem 1 takes the vector of forward-looking variables $x_0$ as\n", "given.\n", "\n", "Subproblem 2 optimizes over $x_0$.\n", "\n", "The value function $w(z_0)$ tells the value of the Stackelberg plan\n", "as a function of the vector of natural state variables at time $0$,\n", "$z_0$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Two Bellman Equations\n", "\n", "We now describe Bellman equations for $v(y)$ and $w(z_0)$." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Subproblem 1\n", "\n", "The value function $v(y)$ in subproblem 1 satisfies the Bellman\n", "equation\n", "\n", "\n", "\n", "$$\n", "v(y) = \\max_{u, y^*} \\left\\{ - r(y,u) + \\beta v(y^*) \\right\\} \\tag{37.9}\n", "$$\n", "\n", "where the maximization is subject to\n", "\n", "$$\n", "y^* = A y + B u\n", "$$\n", "\n", "and $y^*$ denotes next period’s value.\n", "\n", "Substituting $v(y) = - y'P y$ into Bellman equation [(37.9)](#equation-bellman-stack) gives\n", "\n", "$$\n", "-y' P y = {\\rm max}_{ u, y^*} \\left\\{ - y' R y - u'Q u - \\beta y^{* \\prime} P y^* \\right\\}\n", "$$\n", "\n", "which as in lecture [linear regulator](https://python-intro.quantecon.org/lqcontrol.html) gives\n", "rise to the algebraic matrix Riccati equation\n", "\n", "$$\n", "P = R + \\beta A' P A - \\beta^2 A' P B ( Q + \\beta B' P B)^{-1} B' P A\n", "$$\n", "\n", "and the optimal decision rule coefficient vector\n", "\n", "$$\n", "F = \\beta( Q + \\beta B' P B)^{-1} B' P A\n", "$$\n", "\n", "where the optimal decision rule is\n", "\n", "$$\n", "u_t = - F y_t\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Subproblem 2\n", "\n", "We find an optimal $x_0$ by equating to zero the gradient of $v(y_0)$\n", "with respect to $x_0$:\n", "\n", "$$\n", "-2 P_{21} z_0 - 2 P_{22} x_0 =0,\n", "$$\n", "\n", "which implies that\n", "\n", "$$\n", "x_0 = - P_{22}^{-1} P_{21} z_0\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stackelberg Plan\n", "\n", "Now let’s map our duopoly model into the above setup.\n", "\n", "We will formulate a state space system\n", "\n", "$$\n", "y_t = \\begin{bmatrix} z_t \\cr x_t \\end{bmatrix}\n", "$$\n", "\n", "where in this instance $x_t = v_{1t}$, the time $t$ decision\n", "of the follower firm 1." 
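] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before specializing the state space to the duopoly, here is a quick numerical check of the subproblem 2 formula derived above ($P$ below is a small hypothetical positive definite matrix, not one produced by the model): with $v(y) = -y' P y$, the maximizing choice of $x_0$ given $z_0$ is $-P_{22}^{-1} P_{21} z_0$, and perturbing it can only lower $v$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Hypothetical positive definite P with n_z = 2, n_x = 1\n",
"P_ex = np.array([[2.0, 0.5, 0.3],\n",
"                 [0.5, 1.5, 0.4],\n",
"                 [0.3, 0.4, 1.2]])\n",
"\n",
"z0_ex = np.array([1.0, 2.0])\n",
"P21_ex, P22_ex = P_ex[2:, :2], P_ex[2:, 2:]\n",
"x0_ex = -la.inv(P22_ex) @ P21_ex @ z0_ex    # candidate maximizer of v(y_0)\n",
"\n",
"def v(x):\n",
"    y = np.concatenate([z0_ex, np.atleast_1d(x)])\n",
"    return -y @ P_ex @ y\n",
"\n",
"v(x0_ex) >= max(v(x0_ex + 0.1), v(x0_ex - 0.1))   # x0 attains the maximum"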
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Calculations to Prepare Duopoly Model\n", "\n",
"Now we’ll proceed to cast our duopoly model within the framework of the\n",
"more general linear-quadratic structure described above.\n", "\n",
"That will allow us to compute a Stackelberg plan simply by enlisting a\n",
"Riccati equation to solve a linear-quadratic dynamic program.\n", "\n",
"As emphasized above, firm 1 acts as if firm 2’s decisions\n",
"$\\{q_{2t+1}, v_{2t}\\}_{t=0}^\\infty$ are given and beyond its\n",
"control." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Firm 1’s Problem\n", "\n",
"We again formulate firm 1’s optimum problem in terms of the Lagrangian\n", "\n",
"$$\n",
"L=\\sum_{t=0}^{\\infty}\\beta^{t}\\{a_{0}q_{1t}-a_{1}q_{1t}^{2}-a_{1}q_{1t}q_{2t}-\\gamma v_{1t}^{2}+\\lambda_{t}[q_{1t}+v_{1t}-q_{1t+1}]\\}\n",
"$$\n", "\n",
"Firm 1 seeks a maximum with respect to\n",
"$\\{q_{1t+1}, v_{1t} \\}_{t=0}^\\infty$ and a minimum with respect to\n",
"$\\{ \\lambda_t\\}_{t=0}^\\infty$.\n", "\n",
"First-order conditions for this problem are\n", "\n", "\n",
"\\begin{aligned} \\frac{\\partial L}{\\partial q_{1t}} & = a_0 - 2 a_1 q_{1t} - a_1 q_{2t} + \\lambda_t - \\beta^{-1}\n",
"  \\lambda_{t-1} = 0 , \\quad t \\geq 1 \\cr\n",
"  \\frac{\\partial L}{\\partial v_{1t}} & = -2 \\gamma v_{1t} + \\lambda_t = 0 , \\quad t \\geq 0 \\cr \\end{aligned}\n", "\n", "\n",
"These first-order conditions and the constraint $q_{1t+1} =\n",
"q_{1t} + v_{1t}$ can be rearranged to take the form\n", "\n", "\n",
"\\begin{aligned} v_{1t} & = \\beta v_{1t+1} + \\frac{\\beta a_0}{2 \\gamma} - \\frac{\\beta a_1}{\\gamma} q_{1t+1} -\n",
"  \\frac{\\beta a_1}{2 \\gamma} q_{2t+1} \\cr\n",
"  q_{1t+1} & = q_{1t} + v_{1t} \\end{aligned}\n", "\n", "\n",
"We use these two equations as components of the following linear system\n",
"that confronts a Stackelberg continuation leader at time $t$\n", "\n",
"$$\n",
"\\begin{bmatrix} 1 & 0 & 0 & 0 \\cr\n", " 0 
& 1 & 0 & 0 \\cr\n", " 0 & 0 & 1 & 0 \\cr\n", " \\frac{\\beta a_0}{2 \\gamma} & - \\frac{\\beta a_1}{2 \\gamma} & -\\frac{\\beta a_1}{\\gamma} & \\beta \\end{bmatrix}\n", " \\begin{bmatrix} 1 \\cr q_{2t+1} \\cr q_{1t+1} \\cr v_{1t+1} \\end{bmatrix}\n", " = \\begin{bmatrix} 1 & 0 & 0 & 0 \\cr\n", " 0 & 1 & 0 & 0 \\cr\n", " 0 & 0 & 1 & 1 \\cr\n", " 0 & 0 & 0 & 1 \\end{bmatrix} \\begin{bmatrix} 1 \\cr q_{2t} \\cr q_{1t} \\cr v_{1t} \\end{bmatrix}\n", " + \\begin{bmatrix} 0 \\cr 1 \\cr 0 \\cr 0 \\end{bmatrix} v_{2t}\n", "$$\n", "\n", "Time $t$ revenues of firm 2 are\n", "$\\pi_{2t} = a_0 q_{2t} - a_1 q_{2t}^2 - a_1 q_{1t} q_{2t}$ which\n", "evidently equal\n", "\n", "$$\n", "z_t' R_1 z_t \\equiv \\begin{bmatrix} 1 \\cr q_{2t} \\cr q_{1t} \\end{bmatrix}'\n", " \\begin{bmatrix} 0 & \\frac{a_0}{2}& 0 \\cr\n", " \\frac{a_0}{2} & -a_1 & -\\frac{a_1}{2}\\cr\n", " 0 & -\\frac{a_1}{2} & 0 \\end{bmatrix}\n", "\\begin{bmatrix} 1 \\cr q_{2t} \\cr q_{1t} \\end{bmatrix}\n", "$$\n", "\n", "If we set $Q = \\gamma$, then firm 2’s period $t$ profits can\n", "then be written\n", "\n", "$$\n", "y_t' R y_t - Q v_{2t}^2\n", "$$\n", "\n", "where\n", "\n", "$$\n", "y_t = \\begin{bmatrix} z_t \\cr x_t \\end{bmatrix}\n", "$$\n", "\n", "with $x_t = v_{1t}$ and\n", "\n", "$$\n", "R =\n", "\\begin{bmatrix} R_1 & 0 \\cr 0 & 0 \\end{bmatrix}\n", "$$\n", "\n", "We’ll report results of implementing this code soon.\n", "\n", "But first, we want to represent the Stackelberg leader’s optimal choices\n", "recursively.\n", "\n", "It is important to do this for several reasons:\n", "\n", "- properly to interpret a representation of the Stackelberg leader’s\n", " choice as a sequence of history-dependent functions \n", "- to formulate a recursive version of the follower’s choice problem \n", "\n", "\n", "First, let’s get a recursive representation of the Stackelberg leader’s\n", "choice of $\\vec q_2$ for our duopoly model." 
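] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before doing so, we pause to verify numerically that the quadratic form $z_t' R_1 z_t$ defined above reproduces firm 2’s revenues $a_0 q_{2t} - a_1 q_{2t}^2 - a_1 q_{1t} q_{2t}$ (the quantities below are arbitrary sample values):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Sample quantities are arbitrary; a0 = 10, a1 = 2 as later in this lecture\n",
"a0, a1 = 10, 2\n",
"R1 = np.array([[0,      a0 / 2,  0      ],\n",
"               [a0 / 2, -a1,     -a1 / 2],\n",
"               [0,      -a1 / 2, 0      ]])\n",
"\n",
"q1, q2 = 0.7, 1.3\n",
"z = np.array([1, q2, q1])\n",
"np.isclose(z @ R1 @ z, a0 * q2 - a1 * q2**2 - a1 * q1 * q2)   # True"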
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Recursive Representation of Stackelberg Plan\n", "\n", "In order to attain an appropriate representation of the Stackelberg\n", "leader’s history-dependent plan, we will employ what amounts to a\n", "version of the **Big K, little k** device often used in\n", "macroeconomics by distinguishing $z_t$, which depends partly on\n", "decisions $x_t$ of the followers, from another vector\n", "$\\check z_t$, which does not.\n", "\n", "We will use $\\check z_t$ and its history $\\check z^t\n", "= [\\check z_t, \\check z_{t-1}, \\ldots, \\check z_0]$ to describe the\n", "sequence of the Stackelberg leader’s decisions that the Stackelberg\n", "follower takes as given.\n", "\n", "Thus, we let\n", "$\\check y_t' = \\begin{bmatrix}\\check z_t' & \\check x_t'\\end{bmatrix}$\n", "with initial condition $\\check z_0 = z_0$ given.\n", "\n", "That we distinguish $\\check z_t$ from $z_t$ is part and\n", "parcel of the **Big K, little k** device in this\n", "instance.\n", "\n", "We have demonstrated that a Stackelberg plan for\n", "$\\{u_t\\}_{t=0}^\\infty$ has a recursive representation\n", "\n", "\n", "\\begin{aligned} \\check x_0 & = - P_{22}^{-1} P_{21} z_0 \\cr\n", " u_t & = - F \\check y_t, \\quad t \\geq 0 \\cr\n", " \\check y_{t+1} & = (A - BF) \\check y_t, \\quad t \\geq 0 \\end{aligned}\n", "\n", "\n", "From this representation, we can deduce the sequence of functions\n", "$\\sigma = \\{\\sigma_t(\\check z^t)\\}_{t=0}^\\infty$ that comprise a\n", "Stackelberg plan.\n", "\n", "For convenience, let $\\check A \\equiv A - BF$ and partition\n", "$\\check A$ conformably to the partition\n", "$y_t = \\begin{bmatrix}\\check z_t \\cr \\check x_t \\end{bmatrix}$ as\n", "\n", "$$\n", "\\begin{bmatrix}\\check A_{11} & \\check A_{12} \\cr \\check A_{21} & \\check A_{22} \\end{bmatrix}\n", "$$\n", "\n", "Let $H^0_0 \\equiv - P_{22}^{-1} P_{21}$ so that\n", "$\\check x_0 = H^0_0 \\check z_0$.\n", "\n", "Then iterations on 
$\\check y_{t+1} = \\check A \\check y_t$ starting from initial\n",
"condition $\\check y_0 = \\begin{bmatrix}\\check z_0 \\cr H^0_0 \\check z_0\\end{bmatrix}$\n",
"imply that for $t \\geq 1$\n", "\n",
"$$\n",
"\\check x_t = \\sum_{j=1}^t H_j^t \\check z_{t-j}\n",
"$$\n", "\n",
"where\n", "\n", "\n",
"\\begin{aligned} H^t_1 & = \\check A_{21} \\cr\n",
"  H^t_2 & = \\check A_{22} \\check A_{21} \\cr\n",
"  \\ \\ \\vdots \\ \\ & \\ \\ \\quad \\vdots \\cr\n",
"  H^t_{t-1} & = \\check A_{22}^{t-2} \\check A_{21} \\cr\n",
"  H^t_t & = \\check A_{22}^{t-1}(\\check A_{21} + \\check A_{22} H^0_0 ) \\end{aligned}\n", "\n", "\n",
"An optimal decision rule for the Stackelberg leader’s choice of $u_t$ is\n", "\n",
"$$\n",
"u_t = - F \\check y_t \\equiv - \\begin{bmatrix} F_z & F_x \\cr \\end{bmatrix}\n",
"\\begin{bmatrix}\\check z_t \\cr \\check x_t \\cr \\end{bmatrix}\n",
"$$\n", "\n",
"or\n", "\n", "\n", "\n",
"$$\n",
"u_t = - F_z \\check z_t - F_x \\sum_{j=1}^t H^t_j \\check z_{t-j} = \\sigma_t(\\check z^t) \\tag{37.10}\n",
"$$\n", "\n",
"Representation [(37.10)](#equation-finalrule) confirms that whenever\n",
"$F_x \\neq 0$, the typical situation, the time $t$ component\n",
"$\\sigma_t$ of a Stackelberg plan is **history-dependent**, meaning\n",
"that the Stackelberg leader’s choice $u_t$ depends not just on\n",
"$\\check z_t$ but on components of $\\check z^{t-1}$."
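] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The algebra behind the coefficients $H^t_j$ can be verified with a short simulation. The partitioned matrix standing in for $\\check A$ and the matrix standing in for $H^0_0$ below are hypothetical placeholders, not objects computed from the duopoly model; the check is that iterating $\\check y_{t+1} = \\check A \\check y_t$ and rebuilding the $x$ component from the history of $\\check z$ alone give the same answer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Hypothetical partition of a matrix playing the role of A - BF,\n",
"# with n_z = 2 and a scalar x\n",
"A11_ex = np.array([[0.9, 0.0],\n",
"                   [0.1, 0.8]])\n",
"A12_ex = np.array([[0.2],\n",
"                   [0.0]])\n",
"A21_ex = np.array([[0.3, 0.1]])\n",
"A22_ex = np.array([[0.5]])\n",
"A_chk = np.block([[A11_ex, A12_ex],\n",
"                  [A21_ex, A22_ex]])\n",
"\n",
"H00_ex = np.array([[0.4, -0.2]])    # stand-in for -P22^{-1} P21\n",
"\n",
"z0_ex = np.array([1.0, 0.5])\n",
"y_ex = np.concatenate([z0_ex, H00_ex @ z0_ex])\n",
"zs = [z0_ex]\n",
"\n",
"for t in range(1, 6):\n",
"    y_ex = A_chk @ y_ex             # iterate the closed-loop system\n",
"    zs.append(y_ex[:2])\n",
"    # rebuild the x component from the history of z using H^t_j\n",
"    x_hist = sum((A22_ex**(j-1) * A21_ex @ zs[t-j]).item() for j in range(1, t))\n",
"    x_hist += (A22_ex**(t-1) * (A21_ex + A22_ex @ H00_ex) @ zs[0]).item()\n",
"    assert np.isclose(y_ex[2], x_hist)"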
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Comments and Interpretations\n", "\n", "After all, at the end of the day, it will turn out that because we set\n", "$\\check z_0 = z_0$, it will be true that $z_t = \\check z_t$\n", "for all $t \\geq 0$.\n", "\n", "Then why did we distinguish $\\check z_t$ from $z_t$?\n", "\n", "The answer is that if we want to present to the Stackelberg **follower**\n", "a history-dependent representation of the Stackelberg **leader’s**\n", "sequence $\\vec q_2$, we must use representation\n", "[(37.10)](#equation-finalrule) cast in terms of the history\n", "$\\check z^t$ and **not** a corresponding representation cast in\n", "terms of $z^t$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dynamic Programming and Time Consistency of **follower’s** Problem\n", "\n", "Given the sequence $\\vec q_2$ chosen by the Stackelberg leader in\n", "our duopoly model, it turns out that the Stackelberg **follower’s**\n", "problem is recursive in the *natural* state variables that confront a\n", "follower at any time $t \\geq 0$.\n", "\n", "This means that the follower’s plan is time consistent.\n", "\n", "To verify these claims, we’ll formulate a recursive version of a\n", "follower’s problem that builds on our recursive representation of the\n", "Stackelberg leader’s plan and our use of the **Big K, little k** idea." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Recursive Formulation of a Follower’s Problem\n", "\n",
"We now use what amounts to another “Big $K$, little $k$” trick (see\n",
"[rational expectations equilibrium](https://python-intro.quantecon.org/rational_expectations.html))\n",
"to formulate a recursive version of a follower’s problem cast in terms\n",
"of an ordinary Bellman equation.\n", "\n",
"Firm 1, the follower, faces $\\{q_{2t}\\}_{t=0}^\\infty$ as\n",
"a given quantity sequence chosen by the leader and believes that its\n",
"output price at $t$ satisfies\n", "\n",
"$$\n",
"p_t = a_0 - a_1 ( q_{1t} + q_{2t}) , \\quad t \\geq 0\n",
"$$\n", "\n",
"Our challenge is to represent $\\{q_{2t}\\}_{t=0}^\\infty$ as\n",
"a given sequence.\n", "\n",
"To do so, recall that under the Stackelberg plan, firm 2 sets output\n",
"according to the $q_{2t}$ component of\n", "\n",
"$$\n",
"y_t = \\begin{bmatrix} 1 \\cr q_{2t} \\cr q_{1t} \\cr x_t \\end{bmatrix}\n",
"$$\n", "\n",
"which is governed by\n", "\n",
"$$\n",
"y_{t+1} = (A - BF) y_t\n",
"$$\n", "\n",
"To obtain a recursive representation of a $\\{q_{2t}\\}$ sequence\n",
"that is exogenous to firm 1, we define a state $\\tilde y_t$\n", "\n",
"$$\n",
"\\tilde y_t = \\begin{bmatrix} 1 \\cr q_{2t} \\cr \\tilde q_{1t} \\cr \\tilde x_t \\end{bmatrix}\n",
"$$\n", "\n",
"that evolves according to\n", "\n",
"$$\n",
"\\tilde y_{t+1} = (A - BF) \\tilde y_t\n",
"$$\n", "\n",
"subject to the initial condition $\\tilde q_{10} = q_{10}$ and\n",
"$\\tilde x_0 = x_0$ where $x_0 = - P_{22}^{-1} P_{21} z_0$ as\n",
"stated above.\n", "\n",
"Firm 1’s state vector is\n", "\n",
"$$\n",
"X_t = \\begin{bmatrix} \\tilde y_t \\cr q_{1t} \\end{bmatrix}\n",
"$$\n", "\n",
"It follows that the follower firm 1 faces the law of motion\n", "\n", "\n", "\n",
"$$\n",
"\\begin{bmatrix} \\tilde y_{t+1} \\\\\n",
"q_{1t+1} \\end{bmatrix} = \\begin{bmatrix} A - BF & 0 \\\\\n",
"0 & 1 \\end{bmatrix} \\begin{bmatrix} \\tilde y_{t} \\\\\n",
"q_{1t} \\end{bmatrix} + \\begin{bmatrix} 0 \\cr 1 \\end{bmatrix} x_t \\tag{37.11}\n",
"$$\n", "\n",
"This specification assures that from the point of view of firm 1,\n",
"$q_{2t}$ is an exogenous process.\n", "\n",
"Here\n", "\n",
"- $\\tilde q_{1t}, \\tilde x_t$ play the role of **Big K** \n",
"- $q_{1t}, x_t$ play the role of **little k** \n", "\n", "\n",
"The time $t$ component of firm 1’s objective is\n", "\n",
"$$\n",
"X_t' \\tilde R X_t - \\tilde Q x_t^2 = \\begin{bmatrix} 1 \\cr q_{2t} \\cr \\tilde q_{1t} \\cr \\tilde x_t \\cr q_{1t} \\end{bmatrix}'\n",
"  \\begin{bmatrix} 0 & 0 & 0 & 0 & \\frac{a_0}{2} \\cr\n",
"  0 & 0 & 0 & 0 & - \\frac{a_1}{2} \\cr\n",
"  0 & 0 & 0 & 0 & 0 \\cr\n",
"  0 & 0 & 0 & 0 & 0 \\cr\n",
"  \\frac{a_0}{2} & -\\frac{a_1}{2} & 0 & 0 & - a_1 \\end{bmatrix}\n",
"  \\begin{bmatrix} 1 \\cr q_{2t} \\cr \\tilde q_{1t} \\cr \\tilde x_t \\cr q_{1t} \\end{bmatrix} - \\gamma\n",
"  x_t^2\n",
"$$\n", "\n",
"Firm 1’s optimal decision rule is\n", "\n",
"$$\n",
"x_t = - \\tilde F X_t\n",
"$$\n", "\n",
"and its state evolves according to\n", "\n",
"$$\n",
"X_{t+1} = (\\tilde A - \\tilde B \\tilde F) X_t\n",
"$$\n", "\n",
"under its optimal decision rule.\n", "\n",
"Later we shall compute $\\tilde F$ and verify that when we set\n", "\n",
"$$\n",
"X_0 = \\begin{bmatrix} 1 \\cr q_{20} \\cr q_{10} \\cr x_0 \\cr q_{10} \\end{bmatrix}\n",
"$$\n", "\n",
"we recover\n", "\n",
"$$\n",
"x_0 = - \\tilde F X_0\n",
"$$\n", "\n",
"which will verify that we have properly set up a recursive\n",
"representation of the follower’s problem facing the Stackelberg leader’s\n",
"$\\vec q_2$."
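] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The block structure of the law of motion (37.11) can be sketched at the level of shapes (the matrix standing in for $A - BF$ below is a hypothetical placeholder, not the one computed from the leader’s problem): the exogenous Big K block is stacked above the accumulation equation $q_{1t+1} = q_{1t} + x_t$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "n_y = 4                                  # dimension of the exogenous block\n",
"A_BF_ex = 0.95 * np.eye(n_y)             # hypothetical placeholder for A - BF\n",
"\n",
"A_tilde_ex = np.block([[A_BF_ex,            np.zeros((n_y, 1))],\n",
"                       [np.zeros((1, n_y)), np.eye(1)]])\n",
"B_tilde_ex = np.vstack([np.zeros((n_y, 1)), np.ones((1, 1))])\n",
"\n",
"X_ex = np.ones(n_y + 1)                  # stand-in for the state [ỹ_t; q_{1t}]\n",
"x_ex = 0.2                               # the follower's control v_{1t}\n",
"X_next = A_tilde_ex @ X_ex + B_tilde_ex.flatten() * x_ex\n",
"X_next[-1]                               # last row gives q_{1t+1} = q_{1t} + x_t"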
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Time Consistency of Follower’s Plan\n", "\n", "Since the follower can solve its problem using dynamic programming, its\n", "problem is recursive in what for it are the **natural state variables**,\n", "namely\n", "\n", "$$\n", "\\begin{bmatrix} 1 \\cr q_{2t} \\cr \\tilde q_{1t} \\cr \\tilde x_t \\end{bmatrix}\n", "$$\n", "\n", "It follows that the follower’s plan is time consistent." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computing the Stackelberg Plan\n", "\n", "Here is our code to compute a Stackelberg plan via a linear-quadratic\n", "dynamic program as outlined above" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Parameters\n", "a0 = 10\n", "a1 = 2\n", "β = 0.96\n", "γ = 120\n", "n = 300\n", "tol0 = 1e-8\n", "tol1 = 1e-16\n", "tol2 = 1e-2\n", "\n", "βs = np.ones(n)\n", "βs[1:] = β\n", "βs = βs.cumprod()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# In LQ form\n", "Alhs = np.eye(4)\n", "\n", "# Euler equation coefficients\n", "Alhs[3, :] = β * a0 / (2 * γ), -β * a1 / (2 * γ), -β * a1 / γ, β\n", "\n", "Arhs = np.eye(4)\n", "Arhs[2, 3] = 1\n", "\n", "Alhsinv = la.inv(Alhs)\n", "\n", "A = Alhsinv @ Arhs\n", "\n", "B = Alhsinv @ np.array([[0, 1, 0, 0]]).T\n", "\n", "R = np.array([[0, -a0 / 2, 0, 0],\n", " [-a0 / 2, a1, a1 / 2, 0],\n", " [0, a1 / 2, 0, 0],\n", " [0, 0, 0, 0]])\n", "\n", "Q = np.array([[γ]])\n", "\n", "# Solve using QE's LQ class\n", "# LQ solves minimization problems, which is why the signs of R and Q were changed\n", "lq = LQ(Q, R, A, B, beta=β)\n", "P, F, d = lq.stationary_values(method='doubling')\n", "\n", "P22 = P[3:, 3:]\n", "P21 = P[3:, :3]\n", "P22inv = la.inv(P22)\n", "H_0_0 = -P22inv @ P21\n", "\n", "# Simulate forward\n", "\n", "π_leader = np.zeros(n)\n", "\n", "z0 = np.array([[1, 1, 1]]).T\n", "x0 = H_0_0 @ 
z0\n", "y0 = np.vstack((z0, x0))\n", "\n", "yt, ut = lq.compute_sequence(y0, ts_length=n)[:2]\n", "\n", "π_matrix = (R + F.T @ Q @ F)\n", "\n", "for t in range(n):\n", " π_leader[t] = -(yt[:, t].T @ π_matrix @ yt[:, t])\n", "\n", "# Display policies\n", "print(\"Computed policy for Stackelberg leader\\n\")\n", "print(f\"F = {F}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implied Time Series for Price and Quantities\n", "\n", "The following code plots the price and quantities" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "q_leader = yt[1, :-1]\n", "q_follower = yt[2, :-1]\n", "q = q_leader + q_follower # Total output, Stackelberg\n", "p = a0 - a1 * q # Price, Stackelberg\n", "\n", "fig, ax = plt.subplots(figsize=(9, 5.8))\n", "ax.plot(range(n), q_leader, 'b-', lw=2, label='leader output')\n", "ax.plot(range(n), q_follower, 'r-', lw=2, label='follower output')\n", "ax.plot(range(n), p, 'g-', lw=2, label='price')\n", "ax.set_title('Output and prices, Stackelberg duopoly')\n", "ax.legend(frameon=False)\n", "ax.set_xlabel('t')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Value of Stackelberg Leader\n", "\n", "We’ll compute the present value earned by the Stackelberg leader.\n", "\n", "We’ll compute it two ways (they give identical answers – just a check\n", "on coding and thinking)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "v_leader_forward = np.sum(βs * π_leader)\n", "v_leader_direct = -yt[:, 0].T @ P @ yt[:, 0]\n", "\n", "# Display values\n", "print(\"Computed values for the Stackelberg leader at t=0:\\n\")\n", "print(f\"v_leader_forward (forward sim) = {v_leader_forward:.4f}\")\n", "print(f\"v_leader_direct (direct) = {v_leader_direct:.4f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ 
"# Manually checks whether P is approximately a fixed point\n", "P_next = (R + F.T @ Q @ F + β * (A - B @ F).T @ P @ (A - B @ F))\n", "(np.abs(P - P_next) < tol0).all()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Manually checks whether two different ways of computing the\n", "# value function give approximately the same answer\n", "v_expanded = -((y0.T @ R @ y0 + ut[:, 0].T @ Q @ ut[:, 0] +\n", " β * (y0.T @ (A - B @ F).T @ P @ (A - B @ F) @ y0)))\n", "(np.abs(v_leader_direct - v_expanded) < tol0)[0, 0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exhibiting Time Inconsistency of Stackelberg Plan\n", "\n", "In the code below we compare two values\n", "\n", "- the continuation value $- y_t' P y_t$ earned by a continuation\n", " Stackelberg leader who inherits state $y_t$ at $t$ \n", "- the value of a **reborn Stackelberg leader** who inherits state\n", " $z_t$ at $t$ and sets $x_t = - P_{22}^{-1} P_{21} z_t$ \n", "\n", "\n", "The difference between these two values is a tell-tale sign of the time\n", "inconsistency of the Stackelberg plan" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Compute value function over time with a reset at time t\n", "vt_leader = np.zeros(n)\n", "vt_reset_leader = np.empty_like(vt_leader)\n", "\n", "yt_reset = yt.copy()\n", "yt_reset[-1, :] = (H_0_0 @ yt[:3, :])\n", "\n", "for t in range(n):\n", " vt_leader[t] = -yt[:, t].T @ P @ yt[:, t]\n", " vt_reset_leader[t] = -yt_reset[:, t].T @ P @ yt_reset[:, t]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "fig, axes = plt.subplots(3, 1, figsize=(10, 7))\n", "\n", "axes[0].plot(range(n+1), (- F @ yt).flatten(), 'bo',\n", " label='Stackelberg leader', ms=2)\n", "axes[0].plot(range(n+1), (- F @ yt_reset).flatten(), 'ro',\n", " label='continuation leader at t', ms=2)\n", 
"axes[0].set(title=r'Leader control variable $u_{t}$', xlabel='t')\n", "axes[0].legend()\n", "\n", "axes[1].plot(range(n+1), yt[3, :], 'bo', ms=2)\n", "axes[1].plot(range(n+1), yt_reset[3, :], 'ro', ms=2)\n", "axes[1].set(title=r'Follower control variable $x_{t}$', xlabel='t')\n", "\n", "axes[2].plot(range(n), vt_leader, 'bo', ms=2)\n", "axes[2].plot(range(n), vt_reset_leader, 'ro', ms=2)\n", "axes[2].set(title=r'Leader value function $v(y_{t})$', xlabel='t')\n", "\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Recursive Formulation of the Follower’s Problem\n", "\n", "We now formulate and compute the recursive version of the follower’s\n", "problem.\n", "\n", "We check that the recursive **Big $K$, little $k$** formulation of the follower’s problem produces the same output path\n", "$\\vec q_1$ that we computed when we solved the Stackelberg problem" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "A_tilde = np.eye(5)\n", "A_tilde[:4, :4] = A - B @ F\n", "\n", "R_tilde = np.array([[0, 0, 0, 0, -a0 / 2],\n", " [0, 0, 0, 0, a1 / 2],\n", " [0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0],\n", " [-a0 / 2, a1 / 2, 0, 0, a1]])\n", "\n", "Q_tilde = Q\n", "B_tilde = np.array([[0, 0, 0, 0, 1]]).T\n", "\n", "lq_tilde = LQ(Q_tilde, R_tilde, A_tilde, B_tilde, beta=β)\n", "P_tilde, F_tilde, d_tilde = lq_tilde.stationary_values(method='doubling')\n", "\n", "y0_tilde = np.vstack((y0, y0[2]))\n", "yt_tilde = lq_tilde.compute_sequence(y0_tilde, ts_length=n)[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Checks that the recursive formulation of the follower's problem gives\n", "# the same solution as the original Stackelberg problem\n", "fig, ax = plt.subplots()\n", "ax.plot(yt_tilde[4], 'r', label=\"q_tilde\")\n", "ax.plot(yt_tilde[2], 'b', label=\"q\")\n", "ax.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", 
"metadata": {}, "source": [ "Note: Variables with _tilde are obtained from solving the follower’s\n", "problem – those without are from the Stackelberg problem" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Maximum absolute difference in quantities over time between\n", "# the first and second solution methods\n", "np.max(np.abs(yt_tilde[4] - yt_tilde[2]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# x0 == x0_tilde\n", "np.abs(yt[:, 0][-1] - (yt_tilde[:, 1] - yt_tilde[:, 0])[-1]) < tol0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Explanation of Alignment\n", "\n", "If we inspect the coefficients in the decision rule $- \\tilde F$,\n", "we can spot the reason that the follower chooses to set $x_t =\n", "\\tilde x_t$ when it sets $x_t = - \\tilde F X_t$ in\n", "the recursive formulation of the follower problem.\n", "\n", "Can you spot what features of $\\tilde F$ imply this?\n", "\n", "Hint: remember the components of $X_t$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Policy function in the follower's problem\n", "F_tilde.round(4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Value function in the Stackelberg problem\n", "P" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Value function in the follower's problem\n", "P_tilde" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Manually check that P is an approximate fixed point\n", "(np.abs(P - ((R + F.T @ Q @ F) + β * (A - B @ F).T @ P @ (A - B @ F))) < tol0).all()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], 
"source": [ "# Compute P_guess using F_tilde_star\n", "F_tilde_star = -np.array([[0, 0, 0, 1, 0]])\n", "P_guess = np.zeros((5, 5))\n", "\n", "for i in range(1000):\n", " P_guess = ((R_tilde + F_tilde_star.T @ Q @ F_tilde_star) +\n", " β * (A_tilde - B_tilde @ F_tilde_star).T @ P_guess\n", " @ (A_tilde - B_tilde @ F_tilde_star))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Value function in the follower's problem\n", "-(y0_tilde.T @ P_tilde @ y0_tilde)[0, 0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Value function with P_guess\n", "-(y0_tilde.T @ P_guess @ y0_tilde)[0, 0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Compute policy using policy iteration algorithm\n", "F_iter = (β * la.inv(Q + β * B_tilde.T @ P_guess @ B_tilde)\n", " @ B_tilde.T @ P_guess @ A_tilde)\n", "\n", "for i in range(100):\n", " # Compute P_iter\n", " P_iter = np.zeros((5, 5))\n", " for j in range(1000):\n", " P_iter = ((R_tilde + F_iter.T @ Q @ F_iter) + β\n", " * (A_tilde - B_tilde @ F_iter).T @ P_iter\n", " @ (A_tilde - B_tilde @ F_iter))\n", "\n", " # Update F_iter\n", " F_iter = (β * la.inv(Q + β * B_tilde.T @ P_iter @ B_tilde)\n", " @ B_tilde.T @ P_iter @ A_tilde)\n", "\n", "dist_vec = (P_iter - ((R_tilde + F_iter.T @ Q @ F_iter)\n", " + β * (A_tilde - B_tilde @ F_iter).T @ P_iter\n", " @ (A_tilde - B_tilde @ F_iter)))\n", "\n", "if np.max(np.abs(dist_vec)) < 1e-8:\n", " dist_vec2 = (F_iter - (β * la.inv(Q + β * B_tilde.T @ P_iter @ B_tilde)\n", " @ B_tilde.T @ P_iter @ A_tilde))\n", "\n", " if np.max(np.abs(dist_vec2)) < 1e-8:\n", " print(F_iter)\n", " else:\n", " print(\"The policy didn't converge: try increasing the number of \\\n", " outer loop iterations\")\n", "else:\n", " print(\"P_iter didn't converge: try increasing the number of inner \\\n", " loop 
iterations\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Simulate the system using F_tilde_star and check that it gives the\n", "# same result as the original solution\n", "\n", "yt_tilde_star = np.zeros((n, 5))\n", "yt_tilde_star[0, :] = y0_tilde.flatten()\n", "\n", "for t in range(n-1):\n", " yt_tilde_star[t+1, :] = (A_tilde - B_tilde @ F_tilde_star) \\\n", " @ yt_tilde_star[t, :]\n", "\n", "fig, ax = plt.subplots()\n", "ax.plot(yt_tilde_star[:, 4], 'r', label=\"q_tilde\")\n", "ax.plot(yt_tilde[2], 'b', label=\"q\")\n", "ax.legend()\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Maximum absolute difference\n", "np.max(np.abs(yt_tilde_star[:, 4] - yt_tilde[2, :-1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Markov Perfect Equilibrium\n", "\n", "The **state** vector is\n", "\n", "$$\n", "z_t = \\begin{bmatrix} 1 \\cr q_{2t} \\cr q_{1t} \\end{bmatrix}\n", "$$\n", "\n", "and the state transition dynamics are\n", "\n", "$$\n", "z_{t+1} = A z_t + B_1 v_{1t} + B_2 v_{2t}\n", "$$\n", "\n", "where $A$ is a $3 \\times 3$ identity matrix and\n", "\n", "$$\n", "B_1 = \\begin{bmatrix} 0 \\cr 0 \\cr 1 \\end{bmatrix} ,\n", "\\quad B_2 = \\begin{bmatrix} 0 \\cr 1 \\cr 0 \\end{bmatrix}\n", "$$\n", "\n", "The Markov perfect decision rules are\n", "\n", "$$\n", "v_{1t} = - F_1 z_t , \\quad v_{2t} = - F_2 z_t\n", "$$\n", "\n", "and in the Markov perfect equilibrium, the state evolves according to\n", "\n", "$$\n", "z_{t+1} = (A - B_1 F_1 - B_2 F_2) z_t\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# In LQ form\n", "A = np.eye(3)\n", "B1 = np.array([[0], [0], [1]])\n", "B2 = np.array([[0], [1], [0]])\n", "\n", "R1 = np.array([[0, 0, -a0 / 2],\n", " [0, 0, a1 / 2],\n", " [-a0 / 2, a1 / 2, a1]])\n", "\n", "R2 = np.array([[0, -a0 / 2, 
0],\n", " [-a0 / 2, a1, a1 / 2],\n", " [0, a1 / 2, 0]])\n", "\n", "Q1 = Q2 = γ\n", "S1 = S2 = W1 = W2 = M1 = M2 = 0.0\n", "\n", "# Solve using QE's nnash function\n", "F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1,\n", " Q2, S1, S2, W1, W2, M1,\n", " M2, beta=β, tol=tol1)\n", "\n", "# Simulate forward\n", "AF = A - B1 @ F1 - B2 @ F2\n", "z = np.empty((3, n))\n", "z[:, 0] = 1, 1, 1\n", "for t in range(n-1):\n", " z[:, t+1] = AF @ z[:, t]\n", "\n", "# Display policies\n", "print(\"Computed policies for firm 1 and firm 2:\\n\")\n", "print(f\"F1 = {F1}\")\n", "print(f\"F2 = {F2}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "q1 = z[2, :]\n", "q2 = z[1, :]\n", "q = q1 + q2 # Total output, MPE\n", "p = a0 - a1 * q # Price, MPE\n", "\n", "fig, ax = plt.subplots(figsize=(9, 5.8))\n", "ax.plot(range(n), q, 'b-', lw=2, label='total output')\n", "ax.plot(range(n), p, 'g-', lw=2, label='price')\n", "ax.set_title('Output and prices, duopoly MPE')\n", "ax.legend(frameon=False)\n", "ax.set_xlabel('t')\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Compute the maximum absolute difference between the quantities of the two firms\n", "np.max(np.abs(q1 - q2))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Compute values\n", "u1 = (- F1 @ z).flatten()\n", "u2 = (- F2 @ z).flatten()\n", "\n", "π_1 = p * q1 - γ * (u1) ** 2\n", "π_2 = p * q2 - γ * (u2) ** 2\n", "\n", "v1_forward = np.sum(βs * π_1)\n", "v2_forward = np.sum(βs * π_2)\n", "\n", "v1_direct = (- z[:, 0].T @ P1 @ z[:, 0])\n", "v2_direct = (- z[:, 0].T @ P2 @ z[:, 0])\n", "\n", "# Display values\n", "print(\"Computed values for firm 1 and firm 2:\\n\")\n", "print(f\"v1 (forward sim) = {v1_forward:.4f}; v1 (direct) = {v1_direct:.4f}\")\n", "print(f\"v2 (forward sim) = {v2_forward:.4f}; v2 
(direct) = {v2_direct:.4f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Sanity check: firm 1's value, holding firm 2's MPE policy fixed\n", "Λ1 = A - B2 @ F2\n", "lq1 = qe.LQ(Q1, R1, Λ1, B1, beta=β)\n", "P1_ih, F1_ih, d = lq1.stationary_values()\n", "\n", "v1_direct_alt = - z[:, 0].T @ lq1.P @ z[:, 0] + lq1.d\n", "\n", "(np.abs(v1_direct - v1_direct_alt) < tol2).all()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MPE vs. Stackelberg" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "vt_MPE = np.zeros(n)\n", "vt_follower = np.zeros(n)\n", "\n", "for t in range(n):\n", " vt_MPE[t] = -z[:, t].T @ P1 @ z[:, t]\n", " vt_follower[t] = -yt_tilde[:, t].T @ P_tilde @ yt_tilde[:, t]\n", "\n", "fig, ax = plt.subplots()\n", "ax.plot(vt_MPE, 'b', label='MPE')\n", "ax.plot(vt_leader, 'r', label='Stackelberg leader')\n", "ax.plot(vt_follower, 'g', label='Stackelberg follower')\n", "ax.set_title(r'MPE vs. Stackelberg Value Function')\n", "ax.set_xlabel('t')\n", "ax.legend(loc=(1.05, 0))\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Display values at time 0\n", "print(\"Computed values:\\n\")\n", "print(f\"vt_leader(y0) = {vt_leader[0]:.4f}\")\n", "print(f\"vt_follower(y0) = {vt_follower[0]:.4f}\")\n", "print(f\"vt_MPE(y0) = {vt_MPE[0]:.4f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Compute the difference in total value between the Stackelberg and the MPE\n", "vt_leader[0] + vt_follower[0] - 2 * vt_MPE[0]" ] } ], "metadata": { "date": 1628245933.5214322, "filename": "dyn_stack.md", "kernelspec": { "display_name": "Python", "language": "python3", "name": "python3" }, "title": "Stackelberg Plans" }, "nbformat": 4, "nbformat_minor": 4 }