Chapter 4 Topic 02
4.1 Statistical Model
Statistical model for each observation \(i\) (using the same \(k\) regressors across \(m\) equations),
\[\begin{align*} \underset{\left(m \times 1\right)}{y_{i}} &= \underset{\left(m\times mk\right)}{\overline{X}_{i}} \underset{\left(mk \times 1\right)}{\beta} + \underset{\left(m \times 1\right)}{e_{i}}, \\ \begin{bmatrix} y_{1i} \\ y_{2i} \\ \vdots \\ y_{mi} \end{bmatrix} &= \begin{bmatrix} x_{1i}^{'} & 0 & \cdots & 0 \\ 0 & x_{2i}^{'} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & x_{mi}^{'} \end{bmatrix} \begin{bmatrix} \beta_{1} \\ \beta_{2} \\ \vdots \\ \beta_{m} \end{bmatrix} + \begin{bmatrix} e_{1i} \\ e_{2i} \\ \vdots \\ e_{mi} \end{bmatrix}, \end{align*}\]
with,
- \(y_{ji}\) and \(e_{ji}\) are scalars for \(j=1,...,m\).
- \(x_{ji}\) are \(\left(k \times 1\right)\) vectors for \(j=1,...,m\).
- \(\beta_{j}\) are \(\left(k \times 1\right)\) vectors for \(j=1,...,m\).
Because the same \(k\) regressors appear in all \(m\) equations, this simplifies to,
\[\begin{align*} \underset{\left(m \times 1\right)}{y_{i}} &= \underset{\left(m\times mk\right)}{\left(\underset{\left(m \times m\right)}{I_{m}} \otimes \underset{\left(1\times k\right)}{x_{i}^{'}}\right)} \underset{\left(mk \times 1\right)}{\beta} + \underset{\left(m \times 1\right)}{e_{i}}. \end{align*}\]
Statistical model in matrix notation across observations \(i\) (using the same \(k\) regressors across \(m\) equations),
\[\begin{align*} \underset{\left(n \times m\right)}{Y} &= \underset{\left(n\times k\right)}{X} \underset{\left(k \times m\right)}{B} + \underset{\left(n \times m\right)}{E}. \end{align*}\]
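The two notations are linked by the vec operator: stacking the columns of \(Y\) gives \(\operatorname{vec}(Y) = \left(I_m \otimes X\right)\operatorname{vec}(B) + \operatorname{vec}(E)\), which is exactly the form estimated in Section 4.3. A minimal numeric check (the dimensions and draws here are arbitrary, chosen only for illustration):

```r
# illustrative check that vec(Y) = (I_m %x% X) vec(B) for the noise-free model
set.seed(1)
n <- 5; m <- 2; k <- 3
X <- matrix(rnorm(n*k), n, k)   # common regressors across equations
B <- matrix(rnorm(k*m), k, m)   # coefficient matrix, beta = vec(B)
Y <- X %*% B                    # noise-free regressands

lhs <- as.vector(Y)                               # vec(Y)
rhs <- as.vector((diag(m) %x% X) %*% as.vector(B))
all.equal(lhs, rhs)
```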
4.2 Simulation
4.2.1 Set up
# clear workspace
rm(list = ls(all = TRUE))

# set seed
set.seed(1234567, kind = "Mersenne-Twister")
4.2.2 Data Generating Process
\[\begin{align*} y_t B + x_t A &= u_t \\ u_t &= u_{t-1} P + v_t \\ v_t &\sim N\left(0, V_t\right) \\ V_t &= S_t S_t^{'} \\ S_t &= C + D w_t \\ x_{1t} &\sim U\left[x_{1l},x_{1u}\right] \\ x_{2t} &\sim N \left(\mu_{x_{2}},\sigma_{x_{2}}^2\right) \end{align*}\]
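Solving the first (structural) equation for \(y_t\) gives the reduced form that the simulation code actually draws from,
\[\begin{align*} y_t &= -x_t A B^{-1} + u_t B^{-1}. \end{align*}\]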
4.2.3 Simulation
# number of observations
t <- 2000

# parameters
b1 <- 0.6
b2 <- 0.2
a1 <- 0.4
a2 <- -0.5
c11 <- 1.0
c21 <- 0.5
c22 <- 2.0
d11 <- 0.5
d21 <- 0.2
d22 <- 0.2
p11 <- 0.8
p12 <- 0.1
p21 <- -0.2
p22 <- 0.6

b <- matrix(c(1, -b2,
              -b1, 1), nrow=2, byrow=T)
a <- matrix(c(-a1, 0,
              0, -a2), nrow=2, byrow=T)
c <- matrix(c(c11, 0,
              c21, c22), nrow=2, byrow=T)
d <- matrix(c(d11, 0,
              d21, d22), nrow=2, byrow=T)
# exogenous variables
x <- cbind(10*runif(t), 3*rnorm(t))  # x1 ~ U[0,10], x2 ~ N(0, 9)
w <- runif(t)
# disturbances
zeros <- array(0, c(t,2))
u <- zeros
v <- zeros
for (i in 2:t) {
  l <- c + d * w[i]            # S_t = C + D w_t
  v[i,] <- rnorm(2) %*% t(l)   # v_t ~ N(0, S_t S_t')
  u[i,1] <- p11*u[i-1,1] + p12*u[i-1,2] + v[i,1]
  u[i,2] <- p21*u[i-1,1] + p22*u[i-1,2] + v[i,2]
}
# simulate the reduced form
y <- zeros
for (i in seq(t)) {
  y[i,] <- -x[i,] %*% a %*% solve(b) + u[i,] %*% solve(b)
}
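Because the same matrices apply to every observation, the loop can be replaced by a single matrix product. A self-contained sketch (the small sample size and the toy disturbances here are illustrative only; the parameter matrices reuse the values above):

```r
set.seed(42)
t <- 10
x <- cbind(10*runif(t), 3*rnorm(t))
u <- matrix(rnorm(2*t), t, 2)
a <- matrix(c(-0.4, 0, 0, 0.5), nrow=2, byrow=TRUE)
b <- matrix(c(1, -0.2, -0.6, 1), nrow=2, byrow=TRUE)

# loop version, as in the text
y <- matrix(0, t, 2)
for (i in seq(t)) y[i,] <- -x[i,] %*% a %*% solve(b) + u[i,] %*% solve(b)

# vectorised version: one matrix product gives the same rows
y2 <- (-x %*% a + u) %*% solve(b)
all.equal(y, y2)
```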
4.3 Least-Squares Estimator
Use the model in matrix notation across observations \(i\).
# dimensions
m <- ncol(y); m
## [1] 2
t <- nrow(y); t
## [1] 2000
k <- ncol(x); k
## [1] 2

# stack regressands and regressors
Y <- as.vector(y)     # stack y over observations
length(Y)
## [1] 4000
X <- diag(m) %x% x    # stack x over observations
dim(X)
## [1] 4000    4

# estimation
lm.res <- lm(Y ~ X - 1)
lm.res$coefficients
##          X1          X2          X3          X4
##  0.44495050 -0.37735285  0.07649982 -0.60879807

# compare with reduced form parameters of the simulation
ab <- -a %*% solve(b)
as.vector(ab)
## [1]  0.45454545 -0.34090909  0.09090909 -0.56818182
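Since the stacked regressor matrix \(I_m \otimes X\) is block diagonal across equations, the stacked OLS fit coincides with running OLS equation by equation. A quick self-contained check (the data and coefficient values here are made up for illustration):

```r
set.seed(7)
n <- 100; m <- 2
x <- cbind(rnorm(n), rnorm(n))
y <- cbind(x %*% c(0.5, -0.3) + rnorm(n),
           x %*% c(0.1,  0.8) + rnorm(n))

# stacked system: vec(y) regressed on I_m %x% x
X <- diag(m) %x% x
stacked <- coef(lm(as.vector(y) ~ X - 1))

# equation-by-equation OLS
eqwise <- c(coef(lm(y[,1] ~ x - 1)), coef(lm(y[,2] ~ x - 1)))

all.equal(unname(stacked), unname(eqwise))
```

This equivalence is why single-equation `lm` recovers the reduced-form parameters above; efficiency gains from joint (GLS/SUR) estimation only arise when the regressors differ across equations.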
# expand residuals again
u <- matrix(lm.res$residuals, ncol = m)
dim(u)
## [1] 2000    2
Sig.u <- 1/t * t(u) %*% u
Sig.u
##           [,1]     [,2]
## [1,] 10.861852 7.894577
## [2,]  7.894577 9.537205

# compare with reduced form parameters of the simulation
# ???
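One way to fill in the open comparison (`# ???`): the stationary covariance implied by the DGP can be computed directly from the parameter values. In column form (matching the simulation code), \(u_t = P u_{t-1} + v_t\) with \(E\left[V_t\right] = CC^{'} + \tfrac{1}{2}\left(CD^{'} + DC^{'}\right) + \tfrac{1}{3}DD^{'}\) for \(w_t \sim U\left[0,1\right]\); the stationary \(\Sigma_u\) solves a discrete Lyapunov equation, and the reduced-form errors \(u_t B^{-1}\) then have covariance \(B^{-1'} \Sigma_u B^{-1}\). A sketch under these assumptions (the names `Sig.v`, `Sig.u.star`, and `Sig.theory` are mine, and the calculation ignores the zero initial condition of the simulated process):

```r
# implied covariance of the reduced-form disturbances, from the DGP parameters
b  <- matrix(c(1, -0.2, -0.6, 1),    nrow=2, byrow=TRUE)  # B
cc <- matrix(c(1.0, 0, 0.5, 2.0),    nrow=2, byrow=TRUE)  # C
dd <- matrix(c(0.5, 0, 0.2, 0.2),    nrow=2, byrow=TRUE)  # D
pp <- matrix(c(0.8, 0.1, -0.2, 0.6), nrow=2, byrow=TRUE)  # P, u_t = P u_{t-1} + v_t

# E[V_t] = E[S_t S_t'] with S_t = C + D w_t, w_t ~ U[0,1]: E[w]=1/2, E[w^2]=1/3
Sig.v <- cc %*% t(cc) + (cc %*% t(dd) + dd %*% t(cc))/2 + dd %*% t(dd)/3

# stationary Sig.u solves Sig.u = P Sig.u P' + Sig.v (vec form of the Lyapunov eq.)
Sig.u.star <- matrix(solve(diag(4) - pp %x% pp, as.vector(Sig.v)), 2, 2)

# reduced-form errors are u_t' B^{-1} in row form, so their covariance is
binv <- solve(b)
Sig.theory <- t(binv) %*% Sig.u.star %*% binv
Sig.theory
```

This should come out near \(\begin{bmatrix} 11.1 & 8.2 \\ 8.2 & 10.3 \end{bmatrix}\), the same ballpark as the sample `Sig.u` above; the gap reflects sampling error in a single draw of \(t = 2000\) persistent observations.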