Computational Economics 20: 57–86, 2002.
© 2002 Kluwer Academic Publishers. Printed in the Netherlands.
System Reduction and Solution Algorithms
for Singular Linear Difference Systems under
Rational Expectations
ROBERT G. KING¹ and MARK W. WATSON²
¹ Boston University, Department of Economics, 270 Bay State Road, Boston, MA 02215, U.S.A.;
² Princeton University
Abstract. A first-order linear difference system under rational expectations is

    AEy_{t+1}|I_t = By_t + C(F)Ex_t|I_t,

where y_t is a vector of endogenous variables; x_t is a vector of exogenous variables; Ey_{t+1}|I_t is the expectation of y_{t+1} given date t information; and C(F)Ex_t|I_t = C_0 x_t + C_1 Ex_{t+1}|I_t + ··· + C_n Ex_{t+n}|I_t. If the model is solvable, then y_t can be decomposed into two sets of variables: dynamic variables d_t that evolve according to Ed_{t+1}|I_t = W d_t + Ψ_d(F)Ex_t|I_t and other variables that obey the dynamic identities f_t = −K d_t − Ψ_f(F)Ex_t|I_t. We develop an algorithm for carrying out this decomposition and for constructing the implied dynamic system. We also provide algorithms for (i) computing perfect foresight solutions and Markov decision rules; and (ii) solutions to related models that involve informational subperiods.
Key words: system reduction, algorithm, models, solutions, in practice

∗ The authors are also affiliated with the National Bureau of Economic Research and the Federal Reserve Bank of Richmond. This article combines results from two of our earlier working papers (1995a, b). We have had useful support on this research project from many people. Examples of singularities in real business cycle models provided by Marianne Baxter, Mario Crucini, and Sergio Rebelo were one of the motivations for this research. Matthew Van Vlack and Pierre Sarte aided in the initial development of the MATLAB programs that tested our algorithms and Julie Thomas helped us refine them. Michael Kouparitsas and Steven Lange worked on the translations into GAUSS. Financial support was provided by the National Science Foundation through grants SES-91-22463 and SBR-94-09629.

1. Introduction

Many dynamic linear rational expectations models can be cast in the form:

    AEy_{t+1}|I_t = By_t + C_0 x_t + C_1 Ex_{t+1}|I_t + ··· + C_n Ex_{t+n}|I_t,   t = 0, 1, 2, ...   (1)

where A, B and {C_i} are matrices of constants; y_t is a column vector of endogenous variables; x_t is a column vector of exogenous variables; and E·|I_t denotes the (rational) expectation conditional on I_t. While seemingly special, this framework
is general enough to accommodate models with (i) expectations of variables more than one period into the future, (ii) lags of endogenous variables, (iii) lags of expectations of endogenous variables, and many other complications. Accordingly, (1) has proven to be a convenient representation for analyzing properties of rational expectations models, such as the existence and uniqueness of solutions. It further provides a useful starting point for the design of methods for computing such rational expectations solutions.
In general, economic models give rise to matrices A and B which are singular,
so that (1) is called a singular linear difference system under rational expectations.
The analysis of Blanchard and Kahn (1980) studied the existence and uniqueness
of stable rational expectations solutions to (1) under the assumption that A was
nonsingular and their analysis also suggested how to compute such solutions in this
case. This article describes a general system reduction algorithm that simplifies the solution and analysis of (1) by isolating a reduced-dimension, nonsingular dynamic system in a subset of variables, d_t, of the full vector of endogenous variables y_t. Once the rational expectations solution to this smaller system is obtained, it is easy to also calculate the solution for the remaining variables, f_t, as these are governed by dynamic identities. This article also discusses two algorithms for computing the rational expectations solution to the reduced system and shows as well how to solve related ‘timing’ models, i.e., variants of (1) in which there are distinct informational subperiods.
Before turning to the details, it is important to stress that system reduction has long been a standard component of the analysis of linear rational expectations models (and dynamic systems, more generally). For example, in their study of the business cycle implications of the neoclassical growth model, King, Plosser and Rebelo (1988a, b) studied a model that included output, consumption, investment, labor supply and the capital stock. Transformations of these endogenous variables, together with a Lagrange multiplier (the shadow value of a unit of capital), made up the vector y_t, and the exogenous variables x_t were real shocks, including the level of total factor productivity and government purchases. One of the first steps in their analysis was to show that output, consumption, investment and the labor supply could be solved out of the model. This allowed them to analyze a smaller dynamical system that included only two endogenous variables: the capital stock and its shadow value. In the notation of the last paragraph, their system reduction used a decomposition where f_t included output, consumption, investment and the labor supply, and d_t included the capital stock and its shadow value.
System reduction in this King–Plosser–Rebelo (KPR) model was relatively easy (a few lines of algebra), both because the number of variables in the model was small and because the workings of the model were relatively simple. However, modern quantitative macroeconomic models are much larger (sometimes involving more than a hundred equations) and much more complicated. In these models, it can become very difficult to know which of the endogenous variables belong in f_t and which belong in d_t, how to eliminate f_t from the model, and the form of the resulting dynamic system for d_t.
The plan of the article is as follows. Section 2 uses two basic real business cycle models to motivate the representation (1) and to illustrate the problems which can arise in applying the conventional approach to system reduction employed in KPR and in many other studies. We prove that if (1) has a unique solution, then there exists a reduced-dimension dynamical system for d_t that satisfies the Blanchard–Kahn (1980) conditions for solvability. Section 3 presents the computational algorithm for carrying out system reduction that we have found to be both fast and numerically stable, even in very large models. The algorithm is iterative and we show that it converges in a finite number of iterations if there exists any solution to (1). Section 4 details how to solve the reduced and complete model under rational expectations and Section 5 extends the method to ‘timing’ models. Finally, Section 6 provides a brief summary and conclusion.
2. Background
Let F denote the expectations lead operator (Sargent, 1979), defined as F^h Ew_{t+n}|I_t = Ew_{t+n+h}|I_t for any variable w_t. (Notice that the operator shifts the dating of the variable but does not change the dating of the conditioning information.) Using this operator, (1) can be written more compactly as

    AEy_{t+1}|I_t = By_t + C(F)Ex_t|I_t,   t = 0, 1, 2, ...   (2)
Throughout, we will assume that (y_t, x_t, I_{t−1}) are contained in I_t, as well as other information to be specified in more detail below (we also sometimes write expectations as E_t y_{t+1}). Some of the endogenous variables in the model are predetermined, i.e., do not respond at t to shifts in x_t: we call these variables k_t and assume that they are ordered last in the vector y_t.
2.1. APPROXIMATING RECURSIVE MODELS
The behavioral equations of many macroeconomic models, notably recursive optimizing models, can be expressed as a nonlinear system of expectational equations,

    E_t F(Y_{t+1}, Y_t, X_{t+1}, X_t) = 0.   (3)

In this expression Y_t is a column vector of endogenous variables and X_t is a column vector of exogenous variables. Models of the form (3) are typically solved by linear or loglinear approximation around a stationary point at which F(Y, Y, X, X) = 0. A linear approximation of F = 0 about (Y, X) is

    0 = F_1 (Y_{t+1} − Y) + F_2 (Y_t − Y) + F_3 (X_{t+1} − X) + F_4 (X_t − X),
where F_1 denotes the matrix of partial derivatives of the model equations with respect to the elements of Y_{t+1}, and so forth. Accordingly, any model of the generic form (3) has a related approximate linear rational expectations system of the form AE_t y_{t+1} = By_t + C_0 x_t + C_1 E_t x_{t+1}, with suitable redefinitions of variables (as deviations from stationary levels, y_t = Y_t − Y and x_t = X_t − X) and suitable redefinitions of matrices (so that A = F_1, B = −F_2, C_1 = −F_3, and C_0 = −F_4).
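This mapping can be carried out numerically. The following MATLAB lines are a minimal sketch of ours (not part of the authors' distributed code): modelF, Ybar and Xbar are hypothetical names for a user-supplied residual function and its stationary point, and the Jacobians are formed by one-sided finite differences.

    % Sketch: numerical linearization of E_t F(Y_{t+1}, Y_t, X_{t+1}, X_t) = 0
    % around a stationary point; 'modelF(Y1, Y0, X1, X0)' returns the residuals.
    function [A, B, C0, C1] = linearize(modelF, Ybar, Xbar)
      F1 = jacfd(@(v) modelF(v, Ybar, Xbar, Xbar), Ybar);   % dF / dY_{t+1}
      F2 = jacfd(@(v) modelF(Ybar, v, Xbar, Xbar), Ybar);   % dF / dY_t
      F3 = jacfd(@(v) modelF(Ybar, Ybar, v, Xbar), Xbar);   % dF / dX_{t+1}
      F4 = jacfd(@(v) modelF(Ybar, Ybar, Xbar, v), Xbar);   % dF / dX_t
      A = F1; B = -F2; C1 = -F3; C0 = -F4;                  % sign conventions in the text
    end

    function J = jacfd(f, v)
      h = 1e-6; f0 = f(v); J = zeros(numel(f0), numel(v));
      for i = 1:numel(v)
        vp = v; vp(i) = vp(i) + h;
        J(:, i) = (f(vp) - f0) / h;                         % one-sided difference
      end
    end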
A basic example of this class of models is the basic neoclassical model used
in real business cycles. Its general structure is discussed in King, Plosser and
Rebelo (1988a, b). The approximation and solution of the basic RBC model are
discussed in the KPR technical appendix published elsewhere in this volume,
which we henceforth call KPR-TA. In the next subsection, we review the model
analyzed by KPR-TA, focusing in particular on their system reduction. We then
present a generalization of this model to two locations of production and display
the complications that this introduces in the system reduction step. Complications
like this motivate the general algorithm presented in Section 3.
2.2. THE STOCHASTIC GROWTH MODEL
It is convenient to use dynamic programming to describe the KPR-TA version of the neoclassical model because this leads naturally to recursive behavioral equations of the form (3).¹ The Bellman equation is

    v(k_t, ξ_t) = max_{c_t, i_t} {u(c_t) + βE_t v(k_{t+1}, ξ_{t+1})},

where c_t is consumption, i_t is investment, and k_t is the predetermined capital stock. The value function v depends on capital and on information that is useful for forecasting the exogenous variables of the model, which we call ξ_t and take to evolve according to a Markov process. The expectation E_t v(k_{t+1}, ξ_{t+1}) is short-hand for Ev(k_{t+1}, ξ_{t+1})|(k_t, ξ_t).

This maximization is subject to two constraints. The resources available for consumption and investment are affected by a productivity shock (a_t), which is assumed for simplicity to be the sole source of uncertainty:

    c_t + i_t = a_t f(k_t).

The capital stock evolves as the result of investment less depreciation, which occurs at proportional rate δ:

    k_{t+1} − k_t = i_t − δk_t.
2.2.1. Restrictions on Macroeconomic Variables
To find restrictions on the paths of efficient consumption, investment and capital, it is convenient to form the Lagrangian

    L = u(c_t) + βE_t v(k_{t+1}, ξ_{t+1}) + p_t [a_t f(k_t) − c_t − i_t] + λ_t [(1 − δ)k_t + i_t − k_{t+1}]

and the resulting first order conditions are

    c_t:      0 = Du(c_t) − p_t
    i_t:      0 = −p_t + λ_t
    p_t:      0 = a_t f(k_t) − c_t − i_t
    k_{t+1}:  0 = −λ_t + βE_t ∂v(k_{t+1}, ξ_{t+1})/∂k_{t+1}
    λ_t:      0 = (1 − δ)k_t + i_t − k_{t+1}

The capital efficiency condition can be rewritten as λ_t = βE_t {λ_{t+1}(1 − δ) + p_{t+1} a_{t+1} Df(k_{t+1})} using the envelope theorem.
Consequently, the one sector stochastic growth model fits neatly into the general form (3), which is E_t F(Y_{t+1}, Y_t, X_{t+1}, X_t) = 0, and there are five conditions that restrict the evolution of the vector of endogenous variables Y_t = [c_t, i_t, p_t, λ_t, k_t]′ given the vector of exogenous variables X_t = [a_t].
2.2.2. A Loglinear Approximation
Using the circumflex notation of KPR-TA to denote proportionate deviations from stationary values, the first three equations are approximated as

    ĉ_t = −(1/σ) p̂_t   (4)

    p̂_t = λ̂_t   (5)

    â_t + s_k k̂_t = s_i î_t + s_c ĉ_t.   (6)

The parameters s_i and s_c are the output shares of investment and consumption; s_k is the capital income share; and (1/σ) is the intertemporal substitution elasticity in preferences.

Similarly, the last two equations are approximated as

    E_t k̂_{t+1} = (1 − δ) k̂_t + δ î_t   (7)

    γη E_t k̂_{t+1} + γ E_t â_{t+1} + γ E_t p̂_{t+1} + (1 − γ) E_t λ̂_{t+1} = λ̂_t,   (8)

where γ = 1 − β(1 − δ) and η = kD²f(k)/Df(k) is the elasticity of the marginal product of capital with respect to the capital stock.
2.2.3. Writing the System in Standard Form
Taken together, (4)–(8) can be written in the first-order form AE_t y_{t+1} = By_t + C_0 x_t + C_1 E_t x_{t+1} as:

    [ 0   0   0     0     0  ]     [ ĉ_{t+1}  ]   [  1    0    1/σ   0    0   ] [ ĉ_t  ]   [ 0 ]         [  0  ]
    [ 0   0   0     0     0  ]     [ î_{t+1}  ]   [  0    0    1    -1    0   ] [ î_t  ]   [ 0 ]         [  0  ]
    [ 0   0   0     0     0  ] E_t [ p̂_{t+1}  ] = [ -s_c -s_i  0     0    s_k ] [ p̂_t  ] + [ 1 ] E_t â_t + [  0  ] E_t â_{t+1}
    [ 0   0   0     0     1  ]     [ λ̂_{t+1}  ]   [  0    δ    0     0   1-δ  ] [ λ̂_t  ]   [ 0 ]         [  0  ]
    [ 0   0   γ   1-γ    γη  ]     [ k̂_{t+1}  ]   [  0    0    0     1    0   ] [ k̂_t  ]   [ 0 ]         [ -γ  ]
Inspecting this expression, note that the A matrix for the system is singular, as it contains rows of zeros. Thus, the model is ‘singular’. But a system reduction approach can easily isolate a reduced-dimension nonsingular system in this model. The classical system reduction method, used in KPR-TA, is to look for a vector of controls, flows, and point-in-time shadow prices. In the current context, this vector is f_t = (ĉ_t î_t p̂_t)′. The first three loglinear equations of the model ((4) through (6)) have rows of zeros in the A matrix: thus, one can use these equations to ‘solve out’ for the three f_t variables as functions of x_t and the remaining variables d_t = (λ̂_t k̂_t)′. When these solutions are substituted into the remaining equations, the result is a two-variable nonsingular system in d_t: this is the system reduction step in KPR-TA.
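For readers who wish to reproduce this example numerically, the MATLAB lines below assemble the matrices of the display above. The numerical parameter values are placeholders we chose for illustration, not values taken from KPR-TA, and η is evaluated assuming the Cobb–Douglas case f(k) = k^{s_k}.

    % Sketch: the one sector growth model in standard form, y = (c,i,p,lambda,k)'.
    sc = 0.8; si = 0.2; sk = 0.35;            % illustrative shares
    sigma = 1; delta = 0.025; beta = 0.99;    % illustrative preferences/technology
    gam = 1 - beta*(1 - delta);
    eta = sk - 1;                             % eta = k D^2 f / Df for f(k) = k^sk
    A = zeros(5, 5);
    A(4, 5) = 1;                              % equation (7)
    A(5, [3 4 5]) = [gam, 1 - gam, gam*eta];  % equation (8)
    B = [ 1    0     1/sigma  0   0;          % equation (4)
          0    0     1       -1   0;          % equation (5)
         -sc  -si    0        0   sk;         % equation (6)
          0    delta 0        0   1 - delta;  % equation (7)
          0    0     0        1   0 ];        % equation (8)
    C0 = [0; 0; 1; 0; 0];
    C1 = [0; 0; 0; 0; -gam];
    rank(A)                                   % = 2 < 5: A is singular, as noted below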
Applied to the one sector growth model, the system reduction method described in the next section will implement this approach automatically, enabling a researcher to simply specify behavioral equations in an arbitrary order without distinguishing between f_t and d_t. However, it is useful to note that there is not a unique method for system reduction or a unique ‘reduced’ representation of (2). For example, in this one sector growth model, one can select any two of c_t, p_t and λ_t to be f_t elements and the third to be an element of d_t. This indeterminacy is a familiar one, as it is reflected in the differing, but formally equivalent, analytical presentations of the global dynamics of the growth model, which alternatively use the phase plane in (c, k) space or in (λ, k) space. While the reduced systems are different, the RE solutions based on such alternative reductions will be identical.
2.3. CONVENTIONAL SYSTEM REDUCTION
As we have emphasized, system reduction involves writing the model in a way that decomposes the vector of endogenous variables y_t into a vector f_t that is easily solved out of the model and another vector d_t that summarizes the model’s intrinsic dynamics. In the one sector growth model analysis in KPR-TA, as well as in many other contexts in economics and engineering, this decomposition follows from detailed knowledge about how the dynamic system works for all parameter values. For example, the analysis in the KPR-TA example used two related ideas. First, some behavioral equations contain no elements of E_t y_{t+1}, i.e., involve rows of zeros in A. Second, from aspects of the economic problem, some elements of y_t are controls and point-in-time shadow prices and, therefore, it is natural to think about solving out for these variables using the ‘non dynamic’ subset of model equations. Using this decomposition of y_t, (2) has the form:

    [ 0     0    ]     [ f_{t+1} ]   [ B_ff  B_fd ] [ f_t ]   [ C_f(F) ]
    [ A_df  A_dd ] E_t [ d_{t+1} ] = [ B_df  B_dd ] [ d_t ] + [ C_d(F) ] E_t x_t.   (9)
Notice that this general system, like the example, has a singular A matrix due to rows of zeros: conventional system reduction assumes that there are as many such equations, n(f), as there are elements of f_t.² Notice also that there is a subset of variables, f_t, which are ordered first.
The conventional system reduction procedure solves the equations for f_t,

    f_t = −B_ff^{-1} B_fd d_t − B_ff^{-1} C_f(F) E_t x_t,

and uses this solution to obtain the reduced dynamical system:

    a E_t d_{t+1} = b d_t + Ψ_d(F) E_t x_t,   (10)

where a = [A_dd − A_df B_ff^{-1} B_fd]; b = [B_dd − B_df B_ff^{-1} B_fd]; and Ψ_d(F) = [C_d(F) − B_df B_ff^{-1} C_f(F) + A_df B_ff^{-1} C_f(F) F].
This process requires two important constraints on the model. First, B_ff must be nonsingular. Second, reduction of the dynamic system is typically completed by inverting a to get a dynamic system in the form Ed_{t+1}|I_t = a^{-1} b d_t + a^{-1} Ψ_d(F) Ex_t|I_t; this requires that a is nonsingular. Finally, note that the reduction process typically adds a lead, i.e., there is an additional power of F in Ψ_d(F) in (10).
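In code, the conventional reduction step is only a few lines. The sketch below is our illustration, assuming the partitioned blocks Adf, Add, Bff, Bfd, Bdf, Bdd already exist, that the coefficients of C_f(F) and C_d(F) are stored as 3-D arrays with Cf(:,:,j+1) = C_{f,j}, and that nd, nx and nlead (the highest power of F in C(F)) are known.

    % Sketch of conventional system reduction, including the extra lead.
    a = Add - Adf*(Bff\Bfd);
    b = Bdd - Bdf*(Bff\Bfd);
    Psid = zeros(nd, nx, nlead + 2);              % Psi_d(F) gains one lead
    for j = 0:nlead
      BinvCf = Bff\Cf(:, :, j+1);
      Psid(:, :, j+1) = Psid(:, :, j+1) + Cd(:, :, j+1) - Bdf*BinvCf;
      Psid(:, :, j+2) = Psid(:, :, j+2) + Adf*BinvCf;   % A_df B_ff^{-1} C_f(F) F term
    end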
This discussion of system reduction has stressed that some of the variables are solved out of the model. However, conventional system reduction may alternatively be viewed as the application of T(F) = GF + H to the dynamic system with

    T(F) = GF + H = [ 0               0 ] F + [ B_ff^{-1}        0 ]
                    [ A_df B_ff^{-1}  0 ]     [ -B_df B_ff^{-1}  I ].
That is, the dynamic system (AF − B)E_t y_t = C(F)E_t x_t is transformed into a new system by multiplying both sides by T(F). Notice that since |T(z)| = |B_ff^{-1}| = τ, the characteristic polynomial of the transformed system, |T(z)(Az − B)| = τ|Az − B|, has the same roots as the characteristic polynomial of the original system. It is also notable that the application of T(F) to the dynamic system [AF − B]Ey_t|I_t = C(F)Ex_t|I_t has different effects on the order of the system’s internal and exogenous dynamics. Since there is a special structure to T(F), it follows that T(F)[AF − B] = [A′F − B′], i.e., premultiplication does not change the first-order nature of the difference equation because of the nature of G and H. However, T(F)C(F) is typically of higher order than C(F), which is another way of stating that a lead is added by conventional system reduction.

The discussion below switches between these two notions of system reduction: ‘solving equations’ is generally more intuitive, but ‘multiplication by matrix polynomials in F’ can be more revealing about the consequences of these operations.
2.4. TWO LOCATIONS IN THE GROWTH MODEL
To motivate the need for a general system reduction algorithm, it is useful to discuss an extension of the growth model to two locations of economic activity. This provides a concrete example of the problems that can arise using the conventional system reduction outlined in the last subsection.³ Specifically, consider a model with two locations of production of the same final consumption/investment good with relative sizes π_1 and π_2, with π_1 + π_2 = 1. There are then two capital stocks k_{1t} and k_{2t}, two investment flows i_{1t} and i_{2t} which augment these capital stocks, two shadow values of capital λ_{1t} and λ_{2t}, and two productivity levels a_{1t} and a_{2t}. Assume that the final consumption/investment good can be costlessly moved across locations of economic activity.
The Bellman equation for optimal consumption, investment and capital accumulation is

    v(k_{1t}, k_{2t}, ξ_t) = max_{c_t, i_{1t}, i_{2t}} {u(c_t) + βE_t v(k_{1,t+1}, k_{2,t+1}, ξ_{t+1})}

subject to production and capital accumulation constraints. The Lagrangian form of the dynamic program is

    L = u(c_t) + βE_t v(k_{1,t+1}, k_{2,t+1}, ξ_{t+1})
        + p_t [π_1 a_{1t} f(k_{1t}) + π_2 a_{2t} f(k_{2t}) − c_t − π_1 i_{1t} − π_2 i_{2t}]
        + λ_{1t} π_1 [(1 − δ)k_{1t} + i_{1t} − k_{1,t+1}]
        + λ_{2t} π_2 [(1 − δ)k_{2t} + i_{2t} − k_{2,t+1}].
Since this is a direct generalization of the previous model, the full set of approximate equations that describe the model’s dynamics will not be presented. The key point is that, given this is a two location generalization of the standard growth model, a natural way of writing the vector f_t is f_t = (c_t i_{1t} i_{2t} p_t)′, i.e., using the fact that these are the controls and point-in-time shadow prices. Correspondingly, a natural way of writing d_t is d_t = (λ_{1t} λ_{2t} k_{1t} k_{2t})′, since these are the states and costates.
The four efficiency conditions corresponding to elements of f_t are

    c_t:    Du(c_t) − p_t = 0
    i_{1t}: π_1 [−p_t + λ_{1t}] = 0
    i_{2t}: π_2 [−p_t + λ_{2t}] = 0
    p_t:    [π_1 a_{1t} f(k_{1t}) + π_2 a_{2t} f(k_{2t}) − c_t − π_1 i_{1t} − π_2 i_{2t}] = 0

which take the log-linear approximate forms,

    c_t:    ĉ_t = −(1/σ) p̂_t
    i_{1t}: p̂_t − λ̂_{1t} = 0
    i_{2t}: p̂_t − λ̂_{2t} = 0
    p_t:    [π_1 (â_{1t} + s_k k̂_{1t}) + π_2 (â_{2t} + s_k k̂_{2t}) − s_c ĉ_t − π_1 s_i î_{1t} − π_2 s_i î_{2t}] = 0.

Each of these corresponds to a row of zeros in the A matrix of the 8 by 8 approximate linear system, which is consistent with the idea that f_t = (c_t i_{1t} i_{2t} p_t)′.
Conventional system reduction fails using these definitions of f and d: when the model is written in the form (9), the equations 0 = B_ff f_t + B_fd d_t + C_f(F)E_t x_t include the pair of equations p̂_t = λ̂_{1t} and p̂_t = λ̂_{2t}. Thus, B_ff is singular, violating one of the necessary constraints for conventional system reduction. (The economics behind this singularity is clear: the two investment goods are perfect substitutes from the standpoint of resource utilization, so that their shadow prices must be equated if there is a positive amount of each investment, i.e., λ_{1t} = λ_{2t}.)⁴
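The singularity is easy to verify numerically. A quick MATLAB check (with placeholder parameter values of our choosing), ordering the columns of B_ff as (c, i_1, i_2, p); the λ’s belong to d, not f:

    % Sketch: B_ff for f = (c, i1, i2, p) in the two-location model.
    sigma = 1; sc = 0.8; si = 0.2; pi1 = 0.5; pi2 = 0.5;
    Bff = [ 1    0        0        1/sigma;   % c equation
            0    0        0        1;         % i1 equation: p - lambda1 = 0
            0    0        0        1;         % i2 equation: p - lambda2 = 0
           -sc  -pi1*si  -pi2*si   0 ];       % p equation (resource constraint)
    rank(Bff)                                 % = 3 < 4: B_ff^{-1} does not exist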
2.5. RE SOLVABILITY AND SYSTEM REDUCTION
These examples have shown that, in general, macroeconomic models may imply that both the matrices A and B are singular. The next section describes a generalized system reduction algorithm that finds a reduced dynamic system,

    Ed_{t+1}|I_t = W d_t + Ψ_d(F)Ex_t|I_t   (11)

with other variables that evolve according to

    f_t = −K d_t − Ψ_f(F)Ex_t|I_t.   (12)

This representation is constructed by re-ordering the non-predetermined elements of y_t.⁵
Before presenting that algorithm, we present a result that provides a useful link between the properties of the reduced system and the original system. In particular, it answers the following question. Suppose that there is a unique solution to (2). Then, will it always be possible to find a representation like (11)–(12) and, if so, what characteristics will it share with the original model?⁶
THEOREM 1. If there exists a unique rational expectations solution to the singular linear difference equation system

    AEy_{t+1}|I_t = By_t + C(F)Ex_t|I_t

then there is an equivalent dynamic system

    Ed_{t+1}|I_t = W d_t + Ψ_d(F)Ex_t|I_t
    f_t = −K d_t − Ψ_f(F)Ex_t|I_t

that can be derived from the original model by transforming its equations. This reduced system has eigenvalues, solutions to |Iz − W| = 0, which are the same as the finite eigenvalues of the original system, which are solutions to |Az − B| = 0. Further, the solvability of the original system implies that the reduced system also satisfies the Blanchard–Kahn conditions for its solvability.

This theorem is proved in Appendix A, using results from a previous analysis of the theoretical conditions under which a singular linear difference system under rational expectations can be solved uniquely (King and Watson, 1998). It establishes that the system reduction algorithm developed in the next section is always aiming for a well-defined target, if the model is uniquely solvable. After developing the iterative steps of the algorithm, we will see that it will always find a representation (11)–(12) if any RE solution exists from general initial conditions, not just if there is a unique one.
3. The Reduction Algorithm
The reduction algorithm begins with the dynamic system (2) and produces a sequence of equivalent dynamic systems ending in the transformed system (11, 12). Before getting to the details, it is worth providing an outline of the algorithm. First, it finds rows of zeros in the matrix A or introduces rows of zeros into it: this step essentially defines identities that link together the variables of the model at a point in time. Second, it ‘solves out’ for as many nonpredetermined variables as possible from these identities, potentially reordering the vector of variables y_t for this purpose. This process continues until (11) is reached.

This section discusses (i) the concept of equivalent dynamic systems; (ii) the steps in the reduction algorithm; and (iii) a proof that any solvable model will result in a reduced dynamic system of the form (11, 12). The reduction algorithm that we construct in this section is closely related to the ‘shuffle algorithm’ developed by Luenberger (1978); Luenberger works directly with the matrices A, B, and C, applying transformations that ‘shuffle’ the rows of the matrices so as to extract an identity linking elements of y, solves the identity and then searches for additional ones.⁷ By contrast, we use the Singular Value Decomposition (SVD) to introduce multiple rows of zeros into A and use the QR factorization to solve the related identities for elements of f_t. Further, in our algorithm, we impose constraints on the admissible transformations that rule out moving any of the predetermined variables (k) into f.
3.1. EQUIVALENT DYNAMIC SYSTEMS
The various systems produced by the system reduction algorithm are equivalent in the sense that there is a one-to-one transformation connecting them. Specifically, two types of transformations are used in the algorithm. The first type involves non-singular matrices T and V designed to introduce rows of zeros into A. That is, it transforms the equations (T) and the variables (V) of the model:

    T [AF − B] V^{-1} · V Ey_t|I_t = T C(F) Ex_t|I_t,

to yield a new dynamic system of the form,

    [A′F − B′] Ey′_t|I_t = C′(F) Ex_t|I_t,

with [A′F − B′] = T[AF − B]V^{-1}, C′(F) = TC(F), and new variables Ey′_t|I_t = V Ey_t|I_t. Since T and V are non-singular, the transformation produces an equivalent system. Further, both the new system and the original system have the same roots to their determinantal polynomials, since |A′z − B′| = |T| |Az − B| |V^{-1}|. The algorithm restricts V to simply reorder the variables so that the reduced dimension system is in terms of the model’s original variables. The second set of transformations are those of conventional system reduction. Section 2.3 showed how this transformation could be summarized by multiplying the model’s equations by a specific polynomial T(F) = GF + H, with two key properties. First, |T(z)| = τ ≠ 0, so that, again, this transformation does not alter the roots of the model’s determinantal equation. Second, the matrices G and H are chosen so that the system remains first order.
Each iteration of the algorithm begins with a dynamic system with the following characteristics: some elements of f have already been identified, and comprise the vector f̄, and the remaining elements of y are d. The transformations that have been previously undertaken have not altered the fact that the predetermined variables k are the last elements of the transformed y vector, which we call ȳ; this also implies that the initial elements of d are nonpredetermined variables, which we call λ. Thus, the system is of the form:

    Ā Eȳ_{t+1}|I_t = B̄ ȳ_t + C̄(F) Ex_t|I_t,   (13)

where

    Ā = [ 0  0 ]     ȳ = [ f̄ ]     B̄ = [ I  K̄ ]     C̄(F) = [ C̄_f(F) ]
        [ 0  a ],        [ d ],         [ 0  b ],            [ C̄_d(F) ].

Technically, the vector ȳ = V y, where V is a permutation matrix determined in previous iterations, and f̄ denotes the flow variables determined in previous iterations. The vector d is partitioned as d = (λ′ k′)′, so that the predetermined variables k always appear as the final n(k) elements. The iteration then proceeds by moving some of the elements of λ into f̄. The algorithm terminates when this is no longer possible. As we show below, the resulting value of a will be non-singular if the original system satisfies the restriction |Az − B| ≠ 0 and a solution to the model exists.
3.2. THE ALGORITHM STEPS
The algorithm includes five steps:

Step 1: Initialization
Begin by setting ȳ = d = y, Ā = a = A, B̄ = b = B, C̄_d(F) = C(F) and C̄_f(F) = 0. Since f is initially absent from the model, n(f) = 0 and n(d) = n(y). If a is non-singular, proceed to Step 5; otherwise proceed to Step 2.
Step 2: Uncovering New Candidate Flows
Use the singular value decomposition to transform a so that there are some rows of zeros, i.e., so that there are some nondynamic equations in the transformed system. Let the singular value decomposition of a be a = U S V′, with U′U = I, V′V = I, and S a diagonal matrix with the (nonnegative) singular values on the diagonal; these singular values are ordered in increasing size, so that s_j ≤ s_i for j ≤ i.⁸ Supposing that r of these are nonzero (r is equivalently the number of positive singular values or the rank of a), it is possible to induce n(d) − r rows of zeros into a. More specifically, multiplying the full system (13), which includes the elements of f̄, by

    T_1 = [ I_{n(f)}  0  ]
          [ 0         U′ ],

produces a transformed version of (13), T_1 Ā Eȳ_{t+1}|I_t = T_1 B̄ ȳ_t + T_1 C̄(F) Ex_t|I_t. By construction, the dynamic component of this new system has the form:

    [ 0     0    ]     [ λ_{t+1} ]   [ b_1λ  b_1k ] [ λ_t ]   [ Ψ_1(F) ]
    [ a_2λ  a_2k ] E_t [ k_{t+1} ] = [ b_2λ  b_2k ] [ k_t ] + [ Ψ_2(F) ] E_t x_t.   (14)

Let n(f_c) = n(d) − r denote the number of equations in the blocks with a leading subscript 1 (the subscript c denotes the idea that these are candidate flows); these are the model equations with rows of zeros in (14). Now, we want to solve for some elements of λ_t as flows, which is done in the next two steps.
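In MATLAB, Step 2 amounts to a call to svd with the ordering reversed (MATLAB returns singular values in decreasing order). A sketch, with nf denoting the number of flows already found:

    % Sketch of Step 2: induce rows of zeros at the top of a.
    [U, S, ~] = svd(a);
    U = fliplr(U);                           % smallest singular values first
    s = flipud(diag(S));                     % increasing: s_j <= s_i for j <= i
    tol = max(size(a))*eps(norm(a));
    r = sum(s > tol);                        % rank of a
    T1 = blkdiag(eye(nf), U');               % first n(d) - r transformed d-rows are ~0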
SYSTEM REDUCTION AND SOLUTION ALGORITHMS 69
Step 3: Isolating New Flows
Focusing on the first block of equations of (14), b_1λ λ_t + b_1k k_t = −Ψ_1(F)E_t x_t, first determine the implied number of linearly independent restrictions on λ_t (call this n(f_n), for the number of new flows). Transform the equations and variables so as to facilitate the solution of the equations using the QR factorization of b_1λ: QR = b_1λ P.⁹ Since P is an n(λ) × n(λ) permutation matrix, this factorization implies that the elements of λ_t can be reordered by multiplying by P′. Thus, for the full vector ȳ, the reordering is produced by:

    V_1 = [ I  0   0 ]
          [ 0  P′  0 ]
          [ 0  0   I ].

That is, rewrite the dynamic system T_1 Ā Eȳ_{t+1}|I_t = T_1 B̄ ȳ_t + T_1 C̄(F)Ex_t|I_t as T_1 Ā V_1^{-1} E V_1 ȳ_{t+1}|I_t = T_1 B̄ V_1^{-1} V_1 ȳ_t + T_1 C̄(F)Ex_t|I_t. While reordering λ, V_1 preserves the ordering of f̄ and k. Also reorganize the system’s equations using the QR factorization of b_1λ:

    T_2 = [ I  0   0 ]
          [ 0  Q′  0 ]
          [ 0  0   I ].

Hence, the cumulative transformations of the system through this step are: T_2 T_1 Ā V_1^{-1} E V_1 ȳ_{t+1}|I_t = T_2 T_1 B̄ V_1^{-1} V_1 ȳ_t + T_2 T_1 C̄(F)Ex_t|I_t.
Step 4: Conventional System Reduction
It follows that there are n(f_n) = rank(b_1λ) new flows. Put alternatively, if n(f) is the number of pre-existing flows, then the (n(f) + n(f_n)) × (n(f) + n(f_n)) leading submatrix of T_2 T_1 B̄ V_1^{-1} is of the form

    B_ff = [ I  0    ]
           [ 0  R_11 ],

where R_11 is the n(f_n) by n(f_n) leading submatrix of the R matrix in the QR factorization of b_1λ. The matrix B_ff is clearly invertible (since R_11 is invertible) and, hence, we can employ the conventional system reduction procedure detailed in Section 2.3. This yields a new system in the form of (13), but with a smaller matrix a.
Step 5: Terminating the Iterative Scheme
If the resulting new value of a is singular, then the sequence of Steps 2–4 is repeated.¹⁰ Otherwise, the resulting dynamic system is a Ed_{t+1}|I_t = b d_t + C̄_d(F)Ex_t|I_t, which can be rewritten as Ed_{t+1}|I_t = W d_t + Ψ_d(F)Ex_t|I_t with W = a^{-1} b and Ψ_d(F) = a^{-1} C̄_d(F).
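Putting the steps together, the following MATLAB function sketches the full loop for the homogeneous part of the system; the bookkeeping for C(F) and for the accumulated K blocks of (12) follows the same pattern and is omitted here for brevity. This is illustrative code of ours, not the authors’ distributed MATLAB/GAUSS programs.

    % Sketch of the reduction loop (Steps 1-5) for A E_t y_{t+1} = B y_t.
    % nk = number of predetermined variables, assumed ordered last in y.
    function W = sysreduce(A, B, nk)
      a = A; b = B;                               % Step 1: initialization
      while true
        nd = size(a, 1);
        tol = nd*eps(max(norm(a), 1));
        [U, S, ~] = svd(a);                       % Step 2
        U = fliplr(U); s = flipud(diag(S));
        r = sum(s > tol);
        if r == nd, break, end                    % Step 5 test: a nonsingular
        nfc = nd - r;                             % candidate flows
        at = U'*a;  bt = U'*b;
        [Q, Rq, P] = qr(bt(1:nfc, 1:nd-nk));      % Step 3
        nfn = sum(abs(diag(Rq)) > tol);
        if nfn == 0
          error('b_1lambda = 0: no solution from general initial conditions');
        end
        Pd = blkdiag(P, eye(nk));                 % reorder the lambda block
        at = at*Pd;  bt = bt*Pd;
        at(1:nfc, :) = Q'*at(1:nfc, :);           % rotate candidate-flow equations
        bt(1:nfc, :) = Q'*bt(1:nfc, :);
        Kn = bt(1:nfn, 1:nfn) \ bt(1:nfn, nfn+1:end);   % Step 4: new flows
        keep = nfn+1:nd;
        a = at(keep, nfn+1:end) - at(keep, 1:nfn)*Kn;
        b = bt(keep, nfn+1:end) - bt(keep, 1:nfn)*Kn;
      end
      W = a\b;                                    % as in Step 5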
3.3. CONVERGENCE OF THE ALGORITHM
We now establish that this algorithm will produce a non-singular reduced system of the form (11) within p ≤ n(y) iterations, i.e., the construction and application of p transformations T(F) for equations and L (reordering transformations) for variables. This result requires two conditions: (i) that there is a non-zero determinantal polynomial, |Az − B| ≠ 0 for some z; and (ii) that for every set of initial conditions (each vector k_0) there exists a solution, i.e., there is a stochastic process {y_t} such that the equations of the original model are satisfied at each date.

The first of these conditions is readily verifiable: it can be checked prior to starting the algorithm. The second of these conditions is an assumption, which is discussed further below. The convergence result is stated formally as:
THEOREM 2. The singular linear difference equation system AE_t y_{t+1} = By_t + C(F)E_t x_t has an equivalent dynamic system representation comprised of f_t + K d_t + Ψ_f(F)E_t x_t = 0 and E_t d_{t+1} = W d_t + Ψ_d(F)E_t x_t, where f and d are vectors containing the elements of y, if: (i) the determinantal polynomial, |Az − B|, is not zero for all values of z; and (ii) there exists a solution to the singular linear difference equation system from every set of initial conditions k_0. Further, the system reduction algorithm will compute this equivalent, reduced system in a number of iterations p that is no more than the number of variables.
The theorem is proved as follows. First, note that each algorithm iteration involves application of a matrix polynomial T(F) with nonzero determinant τ to the dynamic system: this operation does not alter condition (i), since |Az − B| ≠ 0 implies |A′z − B′| = |T(z)| |Az − B| = τ|Az − B| ≠ 0. Second, if {y_t} is a solution to the original model, then it is also a solution to any transformed model obtained via such a T(F) operation. (If the original model implies AE_t y_{t+1} − By_t − C(F)E_t x_t = 0, then application of T(F) = GF + H implies that A′E_t y_{t+1} − B′y_t − T(F)C(F)E_t x_t = 0 with A′F − B′ = T(F)[AF − B], so that {y_t} is a solution to the transformed model.)
The next parts of the proof of the theorem involve details of the algorithm. As stressed above, each iteration eliminates some elements of y from d and moves them into f so long as it is possible to construct another T(F). Thus it is direct to establish an upper bound on the number of algorithm iterations: with one or more elements moved on each iteration, the algorithm must converge in less than n(y) iterations.

The construction of a T(F) at each iteration of the algorithm will be possible unless n(f_n) = 0 in Step 3, i.e., unless it is impossible to isolate one or more new flows. Since n(f_n) = rank(b_1λ), if n(f_n) = 0 then the matrix b_1λ contains only zeros. That is, an equation of the transformed model is:

    b_1λ λ_t + b_1k k_t = 0·λ_t + b_1k k_t = −Ψ_1(F)E_t x_t.   (16)
Now, suppose that, in addition, b_1k = 0: then there is a row of zeros in Āz − B̄ and, hence, 0 = |Āz − B̄| = |Az − B|. In this equality, the first equality follows from the implied rows of zeros in T_1Ā and T_1B̄, and the second follows because the roots of the determinantal equation are unaltered by any of the transformations used in the reduction algorithm. Condition (i) is therefore violated in this case; that is, condition (i) implies b_1k ≠ 0 if b_1λ = 0.¹¹

On the other hand, if b_1λ = 0 and b_1k ≠ 0, then b_1k k_t = −Ψ_1(F)E_t x_t makes it possible to solve for a subset of the predetermined variables k_t at each date t as a function of the other predetermined variables and a distributed lead of the expected values of the x_t’s. This outcome violates condition (ii), since it imposes constraints on the initial conditions for the predetermined variables. Hence, under conditions (i) and (ii), there cannot be an equation of the form (16). As a result, each successive iteration of the algorithm eliminates some new flows and ultimate convergence is guaranteed.
The theorem of this section is useful for two related purposes. First, it provides a useful interpretation of the failure of the algorithm in Step 3 in the context of a specific model: if the algorithm fails at this stage and it has been verified that |Az − B| ≠ 0, then this failure implies that no solution of the model exists from all initial conditions. Second, the theorem establishes that the algorithm can find a reduced dynamic system of the form (11, 12) even when there is not a unique, stable solution of the form studied by Blanchard and Kahn (1980) and King and Watson (1998). Since Theorem 2 only postulates a solution from a set of arbitrary initial conditions, the solution might include unstable dynamics of economic interest (for example, unit roots) or it might involve a multiplicity of RE equilibria along the lines of Farmer (1993). Hence, Theorem 2 broadens the range of models to which the system reduction algorithm can be applied.
4. Solving the Model
After application of the system reduction algorithm, the transformed dynamic system takes the form:

    f_t = −[ K_fλ  K_fk ] [ λ_t ] − Ψ_f(F)Ex_t|I_t   (17)
                          [ k_t ]

    E_t [ λ_{t+1} ]   [ W_λλ  W_λk ] [ λ_t ]   [ Ψ_λ(F) ]
        [ k_{t+1} ] = [ W_kλ  W_kk ] [ k_t ] + [ Ψ_k(F) ] E_t x_t.   (18)

This is a nonsingular dynamic system of the form studied by Blanchard and Kahn (1980) and which is at the heart of the model solution approach in KPR-TA. The solution of this model is discussed in three steps. First, the implications of stability for the initial conditions on λ_t are determined. Second, the construction of perfect foresight solutions is considered. Finally, dynamic decision rules are constructed.
4.1. INITIAL CONDITIONS ON λ_t
The treatment of the determination of the initial conditions on λ_t follows Blanchard and Kahn (1980). To begin, we let V_u be an (n(u) by n(d)) matrix that isolates the unstable roots of W.¹² That is, V_u has the property that

    V_u W = µ V_u,

where µ is a lower triangular (n(u) by n(u)) matrix with the unstable eigenvalues on its diagonal. Applying V_u to (18), V_u E_t d_{t+1} = V_u W d_t + V_u Ψ_d(F)E_t x_t = µ V_u d_t + V_u Ψ_d(F)E_t x_t.¹³ The dynamics of u_t = V_u d_t are then:

    E_t u_{t+1} = µ u_t + V_u Ψ_d(F)E_t x_t.   (19)

This expression can be used to generate formulae for (a) perfect foresight solutions; and (b) Markov decision rules. In each case, u_t is chosen so that there is a stable system despite the existence of unstable roots. Following Sargent (1979) and Blanchard and Kahn (1980), this is accomplished by unwinding unstable roots forward. Practically, in the computer code discussed below, unstable roots are defined as follows. Suppose that there is a discount factor, β. Then, a root µ_i is unstable if |βµ_i| > 1. This version of the requirement allows us to treat unit roots as stable.¹⁴

The solutions for u can be used to uniquely determine the date t behavior of the non-predetermined variables λ_t. As stressed by Blanchard–Kahn (1980), this requires that there be the same number of elements of u as there are non-predetermined variables. In terms of the condition u_t = V_u d_t = V_uλ λ_t + V_uk k_t, this requirement means that we have an equal number of equations and unknowns. However, it is also the case that a unique solution mandates that the (square) matrix V_uλ is of full rank. This condition on V_uλ is an implication of the more general rank condition that Boyd and Dotsey (1993) give for higher order linear rational expectations models. Essentially, the rank condition rules out matrices W with unstable roots associated with the predetermined variables rather than nonpredetermined variables.
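In MATLAB, a V_u with these properties can be obtained from an ordered Schur decomposition of W′. The sketch below (ours) uses the complex Schur form so that µ is strictly triangular, which is what the recursion of Section 4.3.2 assumes:

    % Sketch: isolate the unstable block, giving Vu*W = mu*Vu with mu
    % lower triangular and the unstable eigenvalues on its diagonal.
    [Qs, Ts] = schur(W', 'complex');         % W' = Qs*Ts*Qs', Ts upper triangular
    sel = abs(beta*ordeig(Ts)) > 1;          % unstable relative to the discount factor
    [Qs, Ts] = ordschur(Qs, Ts, sel);        % unstable eigenvalues to the leading block
    nUnstable = sum(sel);
    Vu = Qs(:, 1:nUnstable)';                % n(u) x n(d)
    mu = Ts(1:nUnstable, 1:nUnstable)';      % lower triangular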
4.2. PERFECT FORESIGHT (SEQUENCE) SOLUTIONS
Perfect foresight solutions, i.e., the response of the economy to a specified sequence of x, {x_s}_{s=t}^∞, are readily constructed. Expression (19) implies that:

    u_t = −[I − µ^{-1}F]^{-1} [µ^{-1} V_u Ψ_d(F)] E_t x_t.   (20)

Call the result of evaluating the right-hand side of this expression u_t = Ψ_u(F)E_t x_t. Then, u_t = V_uλ λ_t + V_uk k_t implies that the initial condition on λ_t is

    λ_t = −V_uλ^{-1} V_uk k_t + V_uλ^{-1} Ψ_u(F)E_t x_t.   (21)

With knowledge of λ_t, expressions (17) and (18) imply that:

    f_t = −[K_fk − K_fλ V_uλ^{-1} V_uk] k_t − [K_fλ V_uλ^{-1} Ψ_u(F) + Ψ_f(F)] E_t x_t   (22)

    k_{t+1} = M_kk k_t + [W_kλ V_uλ^{-1} Ψ_u(F) + Ψ_k(F)] E_t x_t,   (23)

where M_kk = (W_kk − W_kλ V_uλ^{-1} V_uk). Under perfect foresight, F^n x_t = x_{t+n}. These solutions are of interest in their own right and also as inputs into the construction of Markov decision rules, which we consider next.
4.3. MARKOV DECISION RULES
Now suppose that x_t is generated by the process

    x_t = ξ_t   (drvproc1)

and

    ξ_t = ρ ξ_{t−1} + θ ε_t,   (drvproc2)

where ε_t is a martingale difference sequence. Given the first-order character of the driving process, ξ_t is sufficient for E_t x_{t+k} for all k ≥ 0.
4.3.1. Evaluating the Driving Process Polynomials
Consider first the evaluation of Ψ_d(F)E_t x_t and Ψ_f(F)E_t x_t. Each of these is a polynomial of the form Ψ(F) = Ψ_0 + Ψ_1 F + Ψ_2 F² + ··· + Ψ_n F^n. With E_t x_{t+h} = ρ^h ξ_t, it follows that

    Ψ(F)E_t x_t = [Ψ_0 + Ψ_1 ρ + Ψ_2 ρ² + ··· + Ψ_n ρ^n] ξ_t ≡ ϕ ξ_t.   (24)

Thus, for example, it follows that V_u Ψ_d(F)E_t x_t = V_u [Ψ_d0 + Ψ_d1 ρ + Ψ_d2 ρ² + ··· + Ψ_dn ρ^n] ξ_t ≡ ϕ_u ξ_t.
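The evaluation in (24) is a short loop. A sketch, with the coefficients stored as a 3-D array Psi(:, :, j+1) = Ψ_j (the names are placeholders of ours):

    % Sketch: phi = Psi_0 + Psi_1*rho + ... + Psi_n*rho^n, as in (24).
    phi = zeros(size(Psi, 1), size(Psi, 2));
    rhoj = eye(size(rho, 1));
    for j = 0:size(Psi, 3) - 1
      phi = phi + Psi(:, :, j+1)*rhoj;       % add Psi_j * rho^j
      rhoj = rhoj*rho;
    end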
4.3.2. Computing the Initial Value of λ_t
To this point, V_u has been treated as a matrix which ‘isolates the unstable roots of W’. From a theoretical perspective, the best known such matrices are the eigenvector matrices or, in the case of repeated roots, the matrices in the Jordan form. The computational approach in KPR-TA and related programs which were designed to handle multiple state variables simply assumed distinct roots and used the eigenvalue-eigenvector decompositions in GAUSS and MATLAB. However, from the perspective of numerical analysis, a more accurate and stable approach is provided by use of the Schur form, which means that µ is not diagonal but only lower triangular, with the eigenvalues appearing on the diagonal. There is now an operational difference in our formulas, depending on whether we assume that µ is diagonal or simply lower triangular.

Eigenvalue-Eigenvector Method: In the familiar first case of a diagonal µ, we proceed as follows. Letting ϕ_ui be the i’th row of ϕ_u, it follows that the unstable canonical variables each evolve according to equations of the form:

    E_t u_{i,t+1} = µ_i u_{it} + ϕ_ui ξ_t.   (25)
Iterating this expression forward yields

    u_{it} = −Σ_{j=0}^∞ µ_i^{−j−1} ϕ_ui E_t ξ_{t+j} = −Σ_{j=0}^∞ µ_i^{−j−1} ϕ_ui ρ^j ξ_t = ϕ_ui [ρ − µ_i I]^{-1} ξ_t.

For convenience, write this solution as u_{it} = ν_i ξ_t, with ν_i = ϕ_ui [ρ − µ_i I]^{-1}, and correspondingly let u_t = ν ξ_t, with the i’th row of ν being ν_i. (Alternatively, one can use an undetermined coefficients representation, u_{it} = ν_i ξ_t in E_t u_{i,t+1} = µ_i u_{it} + ϕ_ui ξ_t, to find that ν_i ρ = µ_i ν_i + ϕ_ui and thus conclude that ν_i = ϕ_ui [ρ − µ_i I]^{-1}: this alternative derivation will be useful in discussing the general lower triangular µ case below.) Thus, we have obtained a solution for u_t, which can be used to solve for λ_t along the same lines as (21).
Schur Decomposition Method: With µ lower triangular, as is the case when the Schur decomposition is used to construct V_u and µ, let the i’th row of µ be denoted by [µ_i1 ... µ_ii 0 ... 0]. The expression analogous to (25) is:

    E_t u_{i,t+1} = Σ_{j=1}^{i} µ_ij u_{jt} + ϕ_ui ξ_t.

It follows that the first of these expressions can be solved exactly as previously: u_{1t} = ν_1 ξ_t, with ν_1 = ϕ_u1 [ρ − µ_11 I]^{-1}. Given this solution, it follows that u_{2t} = ν_2 ξ_t, with ν_2 = (ϕ_u2 + µ_21 ν_1)[ρ − µ_22 I]^{-1}. Proceeding recursively, the above expression can be written as

    E_t u_{i,t+1} = µ_ii u_{it} + Σ_{j=1}^{i−1} µ_ij u_{jt} + ϕ_ui ξ_t = µ_ii u_{it} + (Σ_{j=1}^{i−1} µ_ij ν_j + ϕ_ui) ξ_t

and then solved using the same undetermined coefficients strategy as in the previous case, to conclude that ν_i = (Σ_{j=1}^{i−1} µ_ij ν_j + ϕ_ui)[ρ − µ_ii I]^{-1}. Hence, there is an easy-to-implement way of calculating u_t = ν ξ_t under the Schur decomposition as well.
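The recursion translates directly into code. A sketch, with phiu the matrix whose rows are the ϕ_ui and mu taken from the Schur step sketched in Section 4.1:

    % Sketch: solve for nu row by row under a lower triangular mu.
    nx = size(rho, 1);
    nrow = size(phiu, 1);
    nu = zeros(nrow, nx);
    for i = 1:nrow
      lhs = phiu(i, :) + mu(i, 1:i-1)*nu(1:i-1, :);   % phi_ui + sum_{j<i} mu_ij nu_j
      nu(i, :) = lhs/(rho - mu(i, i)*eye(nx));        % right-divide by [rho - mu_ii I]
    end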
In either case, since u_t = ν ξ_t, the requirement that u_t = V_uλ λ_t + V_uk k_t implies that the initial condition on λ_t is:

    λ_t = −V_uλ^{-1} V_uk k_t + V_uλ^{-1} ν ξ_t.   (26)
4.3.3. Solutions for the Other Variables
With knowledge of this solution for λ_t, expressions (17) and (18) imply that:

    f_t = −[K_fk − K_fλ V_uλ^{-1} V_uk] k_t − [K_fλ V_uλ^{-1} ν + ϕ_f] ξ_t   (27)

    k_{t+1} = M_kk k_t + [W_kλ V_uλ^{-1} ν + ϕ_k] ξ_t,   (28)

where M_kk = (W_kk − W_kλ V_uλ^{-1} V_uk) as above, and ϕ_f and ϕ_k are the evaluations of the Ψ_f and Ψ_k polynomials (as in (24)).
4.3.4. The RE Solution in State Space Form
These solutions can be grouped together with other information to produce a state space system, i.e., a pair of specifications Z_t = Π S_t and S_{t+1} = M S_t + N ε_{t+1}. The specific form is:

    [ f_t ]   [ −[K_fk − K_fλ V_uλ^{-1} V_uk]   −[K_fλ V_uλ^{-1} ν + ϕ_f] ]
    [ λ_t ]   [ −V_uλ^{-1} V_uk                  V_uλ^{-1} ν              ] [ k_t ]
    [ k_t ] = [ I                                0                        ] [ ξ_t ]   (29)
    [ x_t ]   [ 0                                I                        ]

    [ k_{t+1} ]   [ M_kk   W_kλ V_uλ^{-1} ν + ϕ_k ] [ k_t ]   [ 0 ]
    [ ξ_{t+1} ] = [ 0      ρ                      ] [ ξ_t ] + [ θ ] ε_{t+1}.   (30)

Further, given (f_t′ d_t′)′ = L y_t, it is easy to restate this representation in terms of the original ordering of the variables.
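Given Π, M and N, standard model outputs follow from short recursions. A sketch of an impulse response calculation (variable names as in the display above):

    % Sketch: response to a unit innovation in the first element of eps.
    T = 40;
    S = zeros(size(M, 1), T);  Z = zeros(size(Pi, 1), T);
    S(:, 1) = N(:, 1);                       % impact period
    Z(:, 1) = Pi*S(:, 1);
    for t = 2:T
      S(:, t) = M*S(:, t-1);                 % no further innovations
      Z(:, t) = Pi*S(:, t);
    end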
4.4. HOW IT WORKS IN PRACTICE
The algorithms described above are available in MATLAB and GAUSS.¹⁵ In each case, the researcher specifies the dynamic system of the form (2), providing the matrices A, B, [C_0 ... C_l]; the location of the predetermined variables in the vector y_t; and the number of leads l in the C polynomial. The researcher also specifies the driving process (drvproc1, drvproc2). The programs return a solution in the state space form (29, 30), which makes it easy to compute impulse responses, spectra, population moments and stochastic simulations. This solution is also a convenient one for estimation and model evaluation.

We make it easy for the researcher to check the necessary condition for model solvability, |Az − B| ≠ 0. If the model solution package fails when this condition is satisfied, it must then be for one of two other reasons:
(1) The reduction algorithm terminates without reaching a solution. In our experience, this failure has nearly always meant that the model is misspecified, most likely as a result of a programming error, so that new flows cannot be isolated. Very occasionally, we have included a behavioral equation of the form

    b_1λ λ_t + b_1k k_t = e λ_t + b_1k k_t = −Ψ_1(F)E_t x_t,

where e is a very small number or a vector containing zeros and very small numbers. Then, we must decide whether e = 0, so that the model is not solvable, or whether to adjust tolerance values to treat this as an equation to be solved for some new flows.

(2) The model solution algorithm reports the wrong number of unstable roots relative to the number of predetermined variables. After excluding the possibility that the locations of the predetermined variables have been erroneously specified, there are two remaining possibilities. First, stability is defined in our code in a relative rather than absolute sense: an adjustment of the critical value for stability, the parameter β, may be necessary. Second, the model may actually not have an RE equilibrium or there may be a multiplicity of equilibria.¹⁶
5. Extension to Timing Models
An important class of macroeconomic models views the discrete time period t as divided into subperiods that are distinguished by differing information sets. For example, analyses of the real effects of monetary disturbances in flexible price models frequently make such assumptions, as in Svensson (1985) or Christiano and Eichenbaum (1992). An extension of the approach outlined above can be used for these models as well.¹⁷ Thus, consider a generic timing model of the form:

    A_0 Ey_{t+1}|I_0t + A_1 Ey_{t+1}|I_1t + ··· + A_J Ey_{t+1}|I_Jt
      = B_0 Ey_t|I_0t + B_1 Ey_t|I_1t + ··· + B_J Ey_t|I_Jt
      + C_0(F)Ex_t|I_0t + C_1(F)Ex_t|I_1t + ··· + C_J(F)Ex_t|I_Jt,   (31)
where I_0t is no current information, I_Jt is full current information, and the information sets are ordered as increasing in information (I_0t ⊆ I_1t ⊆ ··· ⊆ I_jt ⊆ ··· ⊆ I_Jt). In this expression, as earlier in this article, F is the lead operator defined so as to shift the dating of the variable but not of the information set, so that C_j(F)Ex_t|I_jt represents the distributed lead C_0j Ex_t|I_jt + C_1j Ex_{t+1}|I_jt + ··· + C_Jj Ex_{t+J}|I_jt.¹⁸

There is a simple two-stage strategy for solving such timing models. First, one solves a common information model that is derived by conditioning on I_0t: this solution is straightforward given the results in this paper and constitutes the harder part of the RE solution. Second, one uses that solution to solve for the influences of the particular timing model under study.
5.1. THE COMMON INFORMATION MODEL
In order to solve the general timing model (31), we begin by studying the operation of a related common information model, so as to determine the effects of anticipated driving variables. For this purpose, take expectations of (31) conditional on beginning of period t information:

    (A_0 + A_1 + ··· + A_J)Ey_{t+1}|I_0t = (B_0 + B_1 + ··· + B_J)Ey_t|I_0t
      + (C_0(F) + C_1(F) + ··· + C_J(F))Ex_t|I_0t.   (32)

This looks just like the singular linear difference system under rational expectations studied above, AEy_{t+1}|I_t = By_t + C(F)Ex_t|I_t, with two minor modifications. First, the matrices A, B, and C(F) must be defined as the appropriate sums of the matrices in the timing model.¹⁹ Second, the information set I_t, which previously corresponded to I_Jt, is replaced with I_0t. But these modifications are (i) unimportant for the mechanics of system reduction and (ii) easy to deal with in terms of the computation of Markov decision rules. In terms of the latter, simply replace ξ_t with Eξ_t|I_0t = ρξ_{t−1}. That is, the anticipated driving variable part of the system is:

    [ Ey_t|I_0t ]   [ Π_yk  Π_yξ ] [ k_t       ]
    [ Ex_t|I_0t ] = [ 0     I    ] [ Eξ_t|I_0t ]   (33)

    [ Ek_{t+1}|I_0t ]   [ M_kk  M_kξ ] [ k_t       ]
    [ Eξ_{t+1}|I_0t ] = [ 0     ρ    ] [ Eξ_t|I_0t ],   (34)

where the Π and M matrices are exactly those discussed above.
5.2. EFFECTS OF UNANTICIPATED DRIVING VARIABLES
The hard work in a timing model comes in determining the effects of ε_t, since it is these shocks about which there is differential information implied by the timing structure. Subtracting (32) from (31) yields an expression that describes the evolution of unanticipated endogenous variables:

    A_1 (Ey_{t+1}|I_1t − Ey_{t+1}|I_0t) + ··· + A_J (Ey_{t+1}|I_Jt − Ey_{t+1}|I_0t)
      = B_1 (Ey_t|I_1t − Ey_t|I_0t) + ··· + B_J (Ey_t|I_Jt − Ey_t|I_0t)
      + C_1(F)(Ex_t|I_1t − Ex_t|I_0t) + ··· + C_J(F)(Ex_t|I_Jt − Ex_t|I_0t),   (35)

where terms like (Ey_t|I_jt − Ey_t|I_0t) indicate how the receipt of information between subperiod 0 and subperiod j alters the value of the endogenous variables. This expression provides the basic restrictions that are exploited to determine the model economy’s response to shocks at date t.
It is useful if these restrictions are written in the following ‘one subperiod ahead innovation’ form:

    Ã_1 (Ey_{t+1}|I_1t − Ey_{t+1}|I_0t) + ··· + Ã_J (Ey_{t+1}|I_Jt − Ey_{t+1}|I_{J−1,t})
      = B̃_1 (Ey_t|I_1t − Ey_t|I_0t) + ··· + B̃_J (Ey_t|I_Jt − Ey_t|I_{J−1,t})
      + C̃_1(F)(Ex_t|I_1t − Ex_t|I_0t) + ··· + C̃_J(F)(Ex_t|I_Jt − Ex_t|I_{J−1,t}),   (36)

where Ã_j = (A_j + ··· + A_J); B̃_j = (B_j + ··· + B_J); and C̃_j(F) = (C_j(F) + ··· + C_J(F)).
This expression implies that we can concentrate on finding the solution to

    Ã_j (Ey_{t+1}|I_jt − Ey_{t+1}|I_{j−1,t}) = B̃_j (Ey_t|I_jt − Ey_t|I_{j−1,t}) + C̃_j(F)(Ex_t|I_jt − Ex_t|I_{j−1,t})   (37)

for each j = 1, 2, ..., J.
5.2.1. Solving for the Effects of Shocks in Subperiod j
This subsection discusses how to use (37) to determine the effects of shocks in subperiod j. It is useful to partition the vector y into parts that are nonpredetermined (λ) and parts that are predetermined (k). That is, write y = (λ′ k′)′.

Since k_t is predetermined, it does not respond to shocks within any of the subperiods j of period t. Hence, the term B̃_j (Ey_t|I_jt − Ey_t|I_{j−1,t}) in (37) simplifies to B̃_{j,λ} (Eλ_t|I_jt − Eλ_t|I_{j−1,t}), where B̃_{j,λ} involves the relevant columns of B̃_j. Further, from (33)–(34), Ey_{t+1}|I_jt = E[Ey_{t+1}|I_{0,t+1}]|I_jt = Π_yk Ek_{t+1}|I_jt + Π_yξ Eξ_{t+1}|I_jt and Eξ_{t+1}|I_jt = ρEξ_t|I_jt = ρ(ρξ_{t−1} + θEε_t|I_jt), given the driving process (drvproc2)
above. Accordingly, it follows that:

    Ã_j (Ey_{t+1}|I_jt − Ey_{t+1}|I_{j−1,t}) = Ã_j Π_yk (Ek_{t+1}|I_jt − Ek_{t+1}|I_{j−1,t})
      + Ã_j Π_yξ ρθ (Eε_t|I_jt − Eε_t|I_{j−1,t}).   (38)

Since C̃_j(F) = C̃_{j,0} + C̃_{j,1}F + ··· + C̃_{j,p}F^p, it is direct that C̃_j(F)(Ex_t|I_jt − Ex_t|I_{j−1,t}) = (C̃_{j,0} + C̃_{j,1}ρ + ··· + C̃_{j,p}ρ^p)θ(Eε_t|I_jt − Eε_t|I_{j−1,t}): call this Φ_j (Eε_t|I_jt − Eε_t|I_{j−1,t}).
Combining these results, (37) can be written as:

    [ B̃_{j,λ}   −Ã_j Π_yk ] [ Eλ_t|I_jt − Eλ_t|I_{j−1,t}         ]
                            [ Ek_{t+1}|I_jt − Ek_{t+1}|I_{j−1,t} ] = [ Ã_j Π_yξ ρθ − Φ_j ] (Eε_t|I_jt − Eε_t|I_{j−1,t}).   (39)

Solving (39) determines the variables at t as functions of shocks, taking as given aspects of the future solution (as provided by Π_yk and Π_yξ).
What is known about subperiod j that is useful in considering the solution to (39)? The specification of the economic problem shows which variables have been determined in subperiods prior to j, i.e., which elements of

    q_jt = [ Eλ_t|I_jt − Eλ_t|I_{j−1,t}         ]
           [ Ek_{t+1}|I_jt − Ek_{t+1}|I_{j−1,t} ]

are zero by the structure of the model. Further, the model also supplies the information that is revealed in the subperiod, i.e., Eε_t|I_jt − Eε_t|I_{j−1,t}.
From a mathematical point of view, (39) contains more equations (n(y)) than unknowns, since some of the elements of q_jt are determined prior to subperiod j. This is accommodated as follows. First, write (39) as M q_jt = N(Eε_t|I_jt − Eε_t|I_{j−1,t}) and use the fact that there is a permutation matrix P_q which makes the first p elements of q_jt the predetermined ones.²⁰ Second, translate the system to (M P_q′)(P_q q_jt) = N(Eε_t|I_jt − Eε_t|I_{j−1,t}). Third, partition M̃ ≡ M P_q′ as [M̃_1 M̃_2], with the first p columns of M̃ being those that are attached to the predetermined variables, q¹_jt, and the last n(y) − p columns being those attached to the variables determined in subperiod j and later, q²_jt. Thus, the system takes the form:

    M̃_2 q²_jt = N(Eε_t|I_jt − Eε_t|I_{j−1,t}).

This system is n(y) equations in n(y) − p unknowns, which are coefficients η_j that describe how the elements of q²_jt respond to the shocks Eε_t|I_jt − Eε_t|I_{j−1,t}. This system can be solved using the QR factorization, checking in the process that there is indeed a consistent solution (by determining that the last p rows of Q′N are all zeros, where Q is the lead matrix in the QR factorization of M̃_2).
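A sketch of that computation, where M2 is the n(y) × (n(y) − p) matrix M̃_2 and N is the right-hand-side loading of (39):

    % Sketch: solve M2*q2 = N*(innovation), with the consistency check.
    [Q, R] = qr(M2);
    m = size(M2, 2);  p = size(M2, 1) - m;
    if norm(Q(:, m+1:end)'*N) > 1e-8         % last p rows of Q'N must vanish
      error('inconsistent subperiod system: no solution');
    end
    etaj = R(1:m, :)\(Q(:, 1:m)'*N);         % the eta_j coefficients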
Proceeding similarly through subperiods j = 1, 2, ..., J yields a matrix η:

    [ λ_t − Eλ_t|I_0t         ]
    [ k_{t+1} − Ek_{t+1}|I_0t ] = η (ε_t − Eε_t|I_0t),

where η contains many zeros due to the subperiod structure. Partition this matrix as:

    η = [ η_λ ]
        [ η_k ]

and use these matrices in augmenting the solutions for the anticipated components. The overall solution for the observable variables is thus:

    [ y_t ]   [ Π_yk  Π_yξ ] [ k_t       ]   [ η_y ]
    [ x_t ] = [ 0     I    ] [ ρ ξ_{t−1} ] + [ θ   ] ε_t,

where η_y = (η_λ′ 0′)′, with the last rows of zeros corresponding to the elements of k. The predetermined variables and driving variables evolve according to:

    [ k_{t+1} ]   [ M_kk  M_kξ ] [ k_t       ]   [ η_k ]       [ 0 ]
    [ ξ_{t+1} ] = [ 0     ρ    ] [ ρ ξ_{t−1} ] + [ ρθ  ] ε_t + [ θ ] ε_{t+1}
These equations may easily be placed in a state space form. The observables evolve according to:

    z_t = [ y_t ] = [ Π_yk  Π_yξ ρ  η_y ] [ k_t     ]
          [ x_t ]   [ 0     ρ       θ   ] [ ξ_{t−1} ]
                                          [ ε_t     ]

and the states evolve according to:

    [ k_{t+1} ]   [ M_kk  M_kξ ρ  η_k ] [ k_t     ]   [ 0 ]
    [ ξ_t     ] = [ 0     ρ       θ   ] [ ξ_{t−1} ] + [ 0 ] ε_{t+1}.
    [ ε_{t+1} ]   [ 0     0       0   ] [ ε_t     ]   [ I ]
6. Conclusions
In this article, we described an algorithm for the reduction of singular linear difference systems under rational expectations of the form (1) and alternative algorithms for the solution of the resulting reduced dimension system. We also described an extension to ‘timing models’, i.e., to variants of (1) with distinct informational subperiods. This set of methods is flexible and easy to use.
Appendix: Proof of Theorem 1
The proof relies on results in King and Watson (1998). That paper developed the conditions under which a singular linear difference system can be uniquely solved under rational expectations. There were two necessary and sufficient conditions. First, |Az − B| ≠ 0 for some z. Second, a particular matrix V_U had full rank: this matrix requirement was necessary to link non-predetermined variables to unstable canonical variables U, where the definition of unstable canonical variables contained both finite and infinite eigenvalues. For the convenience of a reader making the transition from King and Watson (1998), the notation in this section follows that paper and not the main text.

Begin with the solvable dynamic system written in a canonical form employed in King and Watson (1998):²¹

    (A*F − B*) V E_t y_t = C* E_t x_t,

where V is a variable transformation matrix and

    [A*F − B*] = [ (NF − I)  0        ]
                 [ 0         (FI − J) ]
22
King and Watson (1998, particularly Ap-
pendix A) show that such an equivalent system can always be constructed for
SYSTEM REDUCTION AND SOLUTION ALGORITHMS 81
any model that possesses a unique, stable rational expectations solution. The
transformed variables in this canonical form are
U
t
s
t
i
t
u
t
s
t
= VE
t
y
t
with the partition defined by the magnitude of the eigenvalues. First, there are as
many elements of s as there are stable eigenvalues of J . Second, there are as many
elements of i
t
as there are columns of N. As explained in King and Watson (1998),
these are canonical variables associated with ‘infinite’ eigenvalues.
The intuition behind the proof of Theorem 1 is that the infinite canonical vari-
ables, i
t
, of King and Watson (1998) are governed by dynamic identities and,
consequently, can be related to the f
t
variables of the present article. Once this
relationship is established and exploited to eliminate the f
t
, the remaining variables
d
t
are governed by a reduced dimension dynamic system that depends on u
t
,s
t
in
the same way as the original model.
Linking $i_t$ and $f_t$

Premultiplication of the canonical dynamic system by
$$\eta(F) = \begin{bmatrix} (NF - I)^{-1} & 0 \\ 0 & I \end{bmatrix}$$
results in
$$\begin{bmatrix} I & 0 \\ 0 & (FI - J) \end{bmatrix} V E_t y_t = \eta(F) C^* E_t x_t.$$
Now, partition the transformation of variables matrix $V$ as follows
$$V = \begin{bmatrix} V_{if} & V_{id} \\ V_{\delta f} & V_{\delta d} \end{bmatrix} = \begin{bmatrix} V_{if} & V_{i\lambda} & V_{ik} \\ V_{uf} & V_{u\lambda} & V_{uk} \\ V_{sf} & V_{s\lambda} & V_{sk} \end{bmatrix},$$
using the notation $d' = [\lambda' \;\; k']$ to denote the dynamic variables and $\delta' = [u' \;\; s']$, where $\lambda$ are those elements left after the removal of $f_t$ that are thus part of the dynamic vector $d$. Let
$$V_U = \begin{bmatrix} V_{if} & V_{i\lambda} \\ V_{uf} & V_{u\lambda} \end{bmatrix}.$$
Without loss of generality, we will assume that the elements of $y$ are ordered so that $V_{if}$ is square and nonsingular. This is always possible given that $V_U$ is nonsingular.
Using the first partitions of $V$, the dynamic system can be written
$$\begin{bmatrix} V_{if} & V_{id} \\ (FI - J)V_{\delta f} & (FI - J)V_{\delta d} \end{bmatrix} E_t y_t = \eta(F) C^* E_t x_t.$$
Notice that there are no lead operators in the first n(i) = n(f ) rows of this
transformed system.
Construction of Reduced System

Now we proceed to construct a reduced dimension dynamic system. To accomplish this, we premultiply by a matrix polynomial $T(F)$ that is partitioned as
$$T(F) = \begin{bmatrix} T_{11}(F) & T_{12}(F) \\ T_{21}(F) & T_{22}(F) \end{bmatrix}$$
with components that satisfy:
$$T_{11}(F)V_{if} + T_{12}(F)(FI - J)V_{\delta f} = I$$
$$T_{11}(F)V_{id} + T_{12}(F)(FI - J)V_{\delta d} = K$$
$$T_{21}(F)V_{if} + T_{22}(F)(FI - J)V_{\delta f} = 0$$
$$T_{21}(F)V_{id} + T_{22}(F)(FI - J)V_{\delta d} = [FI - W],$$
where $K$ and $W$ are unknown matrices.
The first two of these conditions may be used to set
$$T_{11}(F) = V_{if}^{-1}, \qquad T_{12}(F) = 0,$$
which jointly imply that $K = V_{if}^{-1} V_{id}$.
Employing the third and fourth conditions together,
$$T_{22}(F)\left\{(FI - J)\left[V_{\delta d} - V_{\delta f} V_{if}^{-1} V_{id}\right]\right\} = FI - W.$$
Standard results on the determinants and inverses of partitioned matrices imply that $\hat V = [V_{\delta d} - V_{\delta f} V_{if}^{-1} V_{id}]$ is an invertible matrix. Accordingly, this condition may be satisfied with
$$T_{22}(F) = \hat V^{-1}$$
and
$$W = \hat V^{-1} J \hat V.$$
A further implication is then that
$$T_{21}(F) = -\hat V^{-1}(FI - J) V_{\delta f} V_{if}^{-1}.$$
Hence, it follows that
$$T(F) = \begin{bmatrix} V_{if}^{-1} & 0 \\ -\hat V^{-1}(FI - J)V_{\delta f} V_{if}^{-1} & \hat V^{-1} \end{bmatrix}.$$
Notice that $|T(F)| = (|V_{if}| \cdot |\hat V|)^{-1}$, which is a nonzero constant. Accordingly, construction of the reduced system involves operating on $(A^* F - B^*)V E_t y_t = C^* E_t x_t$ with the 'equation transformation' $\tilde T(F) = T(F)\eta(F)$ to produce $(A^{**}F - B^{**})E_t y_t = T(F)\eta(F)C^* E_t x_t = \Psi(F)E_t x_t$. In this expression, the transformed system matrices take the form:
$$A^{**} = \begin{bmatrix} 0 & 0 \\ 0 & I \end{bmatrix}, \qquad B^{**} = \begin{bmatrix} -I & -K \\ 0 & W \end{bmatrix},$$
which is the reduced system that we seek.
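Written out block by block, these matrices deliver the dynamic identities for $f_t$ and the reduced dimension dynamic system for $d_t$ referred to in the intuition above (the block notation $\Psi_1$, $\Psi_2$ is ours):
$$f_t = -K d_t + \Psi_1(F) E_t x_t, \qquad E_t d_{t+1} = W d_t + \Psi_2(F) E_t x_t,$$
where $\Psi_1(F)$ and $\Psi_2(F)$ denote the first and second block rows of $\Psi(F)$; the second block is the $\psi_d(F)$ appearing below.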
Properties of the Reduced System

We now demonstrate and discuss key properties of the reduced dimension nonsingular system, which also establish that it satisfies the Blanchard and Kahn (1980) conditions for solvability.

First, we indicate that the finite eigenvalues of the reduced system
$$E_t d_{t+1} = W d_t + \psi_d(F) E_t x_t$$
are identical to those of the original system. Previously, we have seen that
$$W = \hat V^{-1} J \hat V.$$
Accordingly, $\hat V$ is a left eigenvector matrix of $W$. The corresponding eigenvalues are the diagonal elements of $J$. The invariance of the eigenvalues may be viewed as an implication of the fact that $|\tilde T(z)| = |T(z)| \cdot |\eta(z)|$ is a nonzero constant and, hence, this transformation does not affect the determinantal polynomial.
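Computationally, these objects are immediate to form, and the eigenvalue invariance can be checked numerically. A minimal MATLAB sketch, assuming the blocks Vif, Vid, Vdf, Vdd of $V$ and the matrix $J$ are in hand (the names are ours):

    % System reduction matrices implied by the conditions above.
    K    = Vif\Vid;                 % K = inv(Vif)*Vid
    Vhat = Vdd - Vdf*(Vif\Vid);     % Vhat = Vdd - Vdf*inv(Vif)*Vid
    W    = Vhat\(J*Vhat);           % W = inv(Vhat)*J*Vhat

    % Invariance check: for (quasi-)triangular J, the diagonal of J
    % carries the finite eigenvalues, which W must reproduce.
    disp([sort(eig(W)), sort(diag(J))]);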
Second, we want to establish that solvability of the original dynamic system implies solvability of the reduced dynamic system. The earlier solvability condition was that $V_U$ was square and invertible. We now need to establish that this implies that $\hat V_{u\lambda}$ is square and nonsingular. To show this, we write $\hat V = [V_{\delta d} - V_{\delta f} V_{if}^{-1} V_{id}]$ in partitioned form as
$$\hat V = \begin{bmatrix} \hat V_{u\lambda} & \hat V_{uk} \\ \hat V_{s\lambda} & \hat V_{sk} \end{bmatrix}.$$
Using the second of the partitionings displayed above, we find that
$$\hat V_{u\lambda} = V_{u\lambda} - V_{uf} V_{if}^{-1} V_{i\lambda}.$$
Since
$$|V_U| = \begin{vmatrix} V_{if} & V_{i\lambda} \\ V_{uf} & V_{u\lambda} \end{vmatrix} = |V_{if}| \cdot |V_{u\lambda} - V_{uf} V_{if}^{-1} V_{i\lambda}|,$$
it follows that $|\hat V_{u\lambda}| = |V_U| / |V_{if}| \neq 0$. Thus, $\hat V_{u\lambda}$ is invertible and the reduced dimension (nonsingular) system is solvable according to the Blanchard and Kahn (1980) conditions.
Notes
1
A reader making comparisons with KPR-TA will notice that we have left two constraints on
the optimization problem rather than combining them into one and therefore added a multiplier. It
is frequently convenient to do this for simplicity in calculating and approximating the first order
conditions. One advantage of the system reduction algorithm developed in Section 3 is that it makes
it essentially costless for the researcher to do this.
Also, in this and the subsequent example, we ignore productivity trends for simplicity. KPR-
TA shows how to transform the growing economy to a stationary one with some modifications of
parameters.
2
Throughout the paper, we denote the number of elements in an arbitrary vector $w_t$ by $n(w)$. We also use the notation $n_s$ to indicate the number of variables $s$ in several contexts.
3
Crucini (1991) encountered the sort of singularity discussed in this section while constructing
an international real business cycle model with two locations and costless mobility of capital. In the
more general model of Baxter and Crucini (1993), which incorporates investment adjustment costs,
the singularity arises only in the special case when adjustment costs are zero.
4
Before leaving this example, it is worth pointing out that it is possible to solve it 'by hand', as in Crucini (1991). To do this, one needs to first be flexible about what is an element of $f_t$. The first three linear equations above can clearly be used to solve for $c_t$, $p_t$, and $\lambda_{1t}$ as functions of $\lambda_{2t}$. Solving these out and manipulating the remaining equations, it is possible to determine that there is a reduced dynamic system in $\lambda_{2t}$, $k_{1t}$, and $k_{2t}$. These manipulations involve using relationships between variables at $t+1$ (elements of the $A$ matrix) and relationships between variables at $t$ (elements of the $B$ matrix) that are carefully designed to produce the desired result. A further analysis of this model, which stresses the interplay between the model's economic structure and the nature of system reduction, is available at http://people.bu.edu/rking/kwre/irbc2.pdf
5
Another approach to solving singular linear difference systems is provided in work by Sims (1989), who uses the QZ-algorithm to transform a singular system into an equivalent system that can be solved recursively. All models that can be solved with our technology can also be solved with Sims'. The QZ-algorithm originates in Moler and Stewart (1973) and is discussed by Golub and Van Loan (1989, p. 394).
6
This question was posed to us by Kei Mu Yi and Andrew John.
7
We thank Adrian Pagan for pointing out the relationship between our work and Luenberger's work on 'descriptor systems', which are specifications of the form $Ay_{t+1} = By_t + Cx_t$. Luenberger's terminology makes clear that he has in mind systems for which equilibria are not the solution to a control problem, but rather contain some 'descriptive' dynamic equations. While this accords with our view that the analysis of 'suboptimal dynamic equilibria' is a major motivation for 'singular linear difference systems under rational expectations', it is worthwhile stressing that there are benefits to posing even standard control problems in this manner, as suggested by the two location growth model example.
8
The svd commands in MATLAB and GAUSS produce positive singular values as the initial
entries on the diagonal of S so that the results must be transformed in our programs to match the
ordering in the text.
9
The QR factorization of a $p \times m$ matrix $M$ with rank $r$ is given by
$$QR = MP, \qquad (15)$$
where $P$ is an $m$ by $m$ permutation matrix (i.e., $P$ can be constructed by reordering the rows of $I_m$); $Q$ is a $p$ by $p$ unitary matrix; and $R$ is a $p$ by $m$ matrix such that
$$R = \begin{bmatrix} R_{11} & R_{12} \\ 0 & 0 \end{bmatrix},$$
where $R_{11}$ is an $r$ by $r$ upper triangular, nonsingular matrix and $R_{12}$ is an $r$ by $(m - r)$ matrix.
The QR factorization is useful because it allows us to solve the equation system $My = N$ for $r$ of the elements of $y$ in terms of the remaining $(m - r)$ elements of $y$ and the parameters $N$. That is, we can write the equation $My = N$ as $RP'y = Q'N$ and partition $P'y = [y_1' \;\; y_2']'$. Then, the solution is $y_1 = R_{11}^{-1}(G_1 - R_{12} y_2)$, where $G_1$ is the first $r$ rows of $Q'N$. The equation system is consistent only if $G_2 = 0$, where $G_2$ is the last $(p - r)$ rows of $Q'N$.
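For concreteness, a minimal MATLAB sketch of this footnote's recipe (the names M, N, y2 and the tolerance are our own; MATLAB's pivoted QR returns M*P = Q*R):

    % Solve M*y = N for y1 given the free elements y2, via pivoted QR.
    [Q,R,P] = qr(M);            % M*P = Q*R
    r  = rank(M);
    G  = Q'*N;
    G1 = G(1:r,:);              % first r rows
    G2 = G(r+1:end,:);          % last (p - r) rows: must vanish
    assert(norm(G2,'fro') < 1e-8, 'inconsistent equation system');
    R11 = R(1:r,1:r);  R12 = R(1:r,r+1:end);
    y1 = R11\(G1 - R12*y2);     % y1 = inv(R11)*(G1 - R12*y2)
    y  = P*[y1; y2];            % undo the column permutation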
10
The matrix $a$ is not necessarily nonsingular because we may have solved for a smaller number of new flows than actual flows. Or, in some cases, the transformations that eliminate some flows may themselves induce new singularities. To see this latter point, consider the example
$$p_t = -\lambda_t$$
$$E_t p_{t+1} + E_t \lambda_{t+1} = p_t - x_t.$$
The associated $2 \times 2$ $A$ matrix has rank 1, so that the first step is to eliminate $p_t = -\lambda_t$. But when one uses this implication, the resulting $1 \times 1$ $a$ matrix is 0, so that it has rank 0. The resulting reduced system involves no dynamic variables, but is simply the pair of equations $p_t = x_t$ and $\lambda_t = -x_t$.
11
Luenberger (1978) uses an argument like this one to establish the inevitable convergence of his 'shuffle' algorithm when $|Az - B| \neq 0$ for some $z$.
12
Notice that the eigenvalue-eigenvector or Schur decomposition may involve complex numbers.
At the same time, the resulting rational expectations solutions are constrained to involve real numbers
only. As a double check, our computer programs test for the presence of non-negligible imaginary
components.
13
One interpretation of this transformation is that $u_t = V_u d_t$ is the vector of unstable canonical variables and that $V_u$ is the matrix of left eigenvectors that corresponds to the unstable eigenvalues; in this case, $\mu$ is a Jordan matrix with the entries below the main diagonal corresponding to repeated unstable roots. However, a better computational method is to use the Schur decomposition. In this case $\mu$ is a lower triangular matrix with the unstable eigenvalues on the diagonal.
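A sketch of the Schur route in MATLAB's conventions, where the real Schur form is upper quasi-triangular rather than lower triangular as in the text, so the unstable block is isolated by reordering (the variable names and the unit-circle cutoff are ours):

    % Separate the unstable dynamics of W via an ordered Schur form.
    [U,TT] = schur(W,'real');              % W = U*TT*U', TT upper quasi-triangular
    stab   = abs(ordeig(TT)) <= 1;         % logical selector for stable roots
    [U,TT] = ordschur(U,TT,stab);          % stable block leads, unstable trails
    nu = nnz(~stab);                       % number of unstable roots
    mu = TT(end-nu+1:end,end-nu+1:end);    % unstable block
    Vu = U(:,end-nu+1:end)';               % rows satisfy Vu*W = mu*Vu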
14
It is also consistent with the implications of transversality conditions in dynamic optimizing models, where the requirement is typically that $\beta^{1/2} \mu_i > 1$.
15
The core programs in GAUSS and MATLAB as well as many example applications are available
at http://people.bu.edu/rking/kwre.html.
16
Given the nature of our reduction algorithm, it is fairly straightforward to calculate the restrictions that rational expectations imposes on the solution of the reduced system, $E_t d_{t+1} = W d_t + \psi_d(F) E_t x_t$, in the case of multiple equilibria. But we have not yet written code for this purpose.
17
This approach was employed in King and Watson (1996) to study the empirical implications of
financial market frictions models in the style of Christiano and Eichenbaum (1992).
18
We make use of the fact that $I_{Jt} = I_{0,t+1}$ in some of the derivations below.
19
We assume that this common information model is solvable in the sense discussed above. This requires that $|Az - B| \neq 0$ for some $z$, which may be verified prior to the solution of the model in question, and also that additional 'rank' conditions are satisfied, which are verified during the solution process.
20
These are not the same matrices M and N which enter in the state equations.
21
For the theoretical characterization of solutions and the purposes of this appendix, the restriction to only current $x_t$ is immaterial. This is because it is the properties of $A$ and $B$ that govern the properties of the solutions and the existence of a reduced dimension dynamic subsystem.
22
A nilpotent matrix has zero elements on and below its main diagonal, with zeros and ones appearing arbitrarily above the diagonal. Accordingly, there is a number $l$ such that $N^l$ is a matrix with all zero elements.
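For instance (our example, not the paper's), $N = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ has $N^2 = 0$, so $l = 2$; in the notation of the appendix, $(NF - I)^{-1} = -(I + NF)$ then terminates after a single power of the lead operator.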
References
Baxter, Marianne and Crucini, Mario J. (1993). Explaining saving-investment correlations. American Economic Review, 83(3), 416–436.
Blanchard, Olivier J. and Kahn, Charles (1980). The solution of linear difference models under
rational expectations. Econometrica, 48(5), 1305–1311.
Boyd, John H. III and Dotsey, Michael (1993). Interest rate rules and nominal determinacy. Federal
Reserve Bank of Richmond, Working Paper, revised February 1994.
Christiano, Lawrence J. and Eichenbaum, Martin (1992). Liquidity effects and the monetary transmission mechanism. American Economic Review, 82(2), 346–353.
Crucini, Mario (1991). Transmission of business cycles in the open economy. Ph.D. dissertation,
University of Rochester.
Farmer, Roger E.A. (1993). The Macroeconomics of Self-Fulfilling Prophecies, MIT Press.
Golub, G.H. and Van Loan, C.F. (1989). Matrix Computations, 2nd edn., Johns Hopkins University
Press, Baltimore.
King, R.G., Plosser, C.I., and Rebelo, S.T. (1988a). Production, growth and business cycles, I: The
basic neoclassical model. Journal of Monetary Economics, 21(2/3), 195–232.
King, R.G., Plosser, C.I., and Rebelo, S.T. (1988b). Production, growth and business cycles, technical
appendix. University of Rochester Working Paper.
King, R.G. and Watson, M.W. (1996). Money, interest rates, prices and the business cycle. Review of
Economics and Statistics.
King, R.G. and Watson, M.W. (1998). The solution of singular linear difference systems under rational expectations. International Economic Review.
King, R.G. and Watson, M.W. (1995a). The solution of singular linear difference systems under
rational expectations. Working Paper.
King, R.G. and Watson, M.W. (1995b). Research notes on timing models.
Luenberger, David (1977). Dynamic equations in descriptor form. IEEE Transactions on Automatic
Control, AC-22(3), 312–321.
Luenberger, David, (1978). Time invariant descriptor systems. Automatica, 14(5), 473–480.
Luenberger, David (1979). Introduction to Dynamic Systems: Theory, Models and Applications, John
Wiley and Sons, New York.
McCallum, Bennett T. (1983). Non-uniqueness in rational expectations models: An attempt at
perspective. Journal of Monetary Economics, 11(2), 139–168.
Moler, Cleve B. and Stewart, G.W. (1973). An algorithm for generalized matrix eigenvalue problems. SIAM Journal on Numerical Analysis, 10(2), 241–256.
Sargent, Thomas J. (1979). Macroeconomic Theory. Academic Press, New York.
Sims, Christopher A. (1989). Solving non-linear stochastic optimization problems backwards.
Discussion Paper 15, Institute for Empirical Macroeconomics, FRB Minneapolis.
Svensson, Lars E.O. (1985). Money and asset prices in a cash in advance economy. Journal of
Political Economy, 93, 919–944.