4.5 Inference for model parameters
The assumptions on which the logistic model is constructed allow us to specify the asymptotic distribution of the random vector \(\hat{\boldsymbol{\beta}}\). Again, the distribution is derived conditionally on the sample predictors \(\mathbf{X}_1,\ldots,\mathbf{X}_n\). In other words, we assume that the randomness of \(Y\) comes only from \(Y|(X_1=x_1,\ldots,X_k=x_k)\sim\mathrm{Ber}(p(\mathbf{x}))\) and not from the predictors. To denote this, we employ lowercase for the sample predictors \(\mathbf{x}_1,\ldots,\mathbf{x}_n\).
There is an important difference between the inference results for the linear model and for logistic regression:
- In linear regression the inference is exact. This is due to the nice properties of the normal distribution, least squares estimation, and linearity. As a consequence, the distributions of the coefficients are perfectly known, provided the assumptions hold.
- In logistic regression the inference is asymptotic. This means that the distributions of the coefficients are unknown except for large sample sizes \(n\), for which we have approximations. The reason is the greater complexity of the model due to its non-linearity. This is the usual situation for the majority of regression models.
4.5.1 Distributions of the fitted coefficients
The distribution of \(\hat{\boldsymbol{\beta}}\) is given by the asymptotic theory of the MLE: \[\begin{align} \hat{\boldsymbol{\beta}}\sim\mathcal{N}_{k+1}\left(\boldsymbol{\beta},I(\boldsymbol{\beta})^{-1}\right) \tag{4.10} \end{align}\] where \(\sim\) must be understood as approximately distributed as […] when \(n\to\infty\) for the rest of this chapter. \(I(\boldsymbol{\beta})\) is known as the Fisher information matrix, and receives that name because it measures the information available in the sample for estimating \(\boldsymbol{\beta}\). Therefore, the larger the matrix is, the more precise the estimation of \(\boldsymbol{\beta}\) is, because larger information results in smaller variances in (4.10). The inverse of the Fisher information matrix is \[\begin{align} I(\boldsymbol{\beta})^{-1}=(\mathbf{X}^T\mathbf{V}\mathbf{X})^{-1}, \tag{4.11} \end{align}\] where \(\mathbf{V}\) is a diagonal matrix containing the different variances of each \(Y_i\) (remember that \(p(\mathbf{x})=1/(1+e^{-(\beta_0+\beta_1x_1+\cdots+\beta_kx_k)})\)): \[ \mathbf{V}=\begin{pmatrix} p(\mathbf{x}_1)(1-p(\mathbf{x}_1)) & & &\\ & p(\mathbf{x}_2)(1-p(\mathbf{x}_2)) & & \\ & & \ddots & \\ & & & p(\mathbf{x}_n)(1-p(\mathbf{x}_n)) \end{pmatrix} \] In multiple linear regression, \(I(\boldsymbol{\beta})^{-1}=\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}\) (see (3.6)), so the presence of \(\mathbf{V}\) here reveals the heteroskedasticity of the model.
The interpretation of (4.10) and (4.11) gives some useful insights into what affects the quality of the estimation:
Bias. The estimates are asymptotically unbiased.
Variance. It depends on:
- Sample size \(n\). Hidden inside \(\mathbf{X}^T\mathbf{V}\mathbf{X}\). As \(n\) grows, the precision of the estimators increases.
- Weighted predictor sparsity \((\mathbf{X}^T\mathbf{V}\mathbf{X})^{-1}\). The more spread out the predictors are (small \(|(\mathbf{X}^T\mathbf{V}\mathbf{X})^{-1}|\)), the more precise \(\hat{\boldsymbol{\beta}}\) is.
Similar to linear regression, the problem with (4.10) and (4.11) is that \(\mathbf{V}\) is unknown in practice because it depends on \(\boldsymbol{\beta}\). Plugging in the estimate \(\hat{\boldsymbol{\beta}}\) for \(\boldsymbol{\beta}\) in \(\mathbf{V}\) results in \(\hat{\mathbf{V}}\). We can then use \(\hat{\mathbf{V}}\) to get \[\begin{align} \frac{\hat\beta_j-\beta_j}{\hat{\mathrm{SE}}(\hat\beta_j)}\sim \mathcal{N}(0,1),\quad\hat{\mathrm{SE}}(\hat\beta_j)^2=v_j^2\tag{4.12} \end{align}\] where \[ v_j^2\text{ is the }j\text{-th element of the diagonal of }(\mathbf{X}^T\hat{\mathbf{V}}\mathbf{X})^{-1}. \] The LHS of (4.12) is the Wald statistic for \(\beta_j\), \(j=0,\ldots,k\). These statistics are employed for building confidence intervals and hypothesis tests.
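To make (4.11) and the plug-in estimate \(\hat{\mathbf{V}}\) concrete, here is a minimal sketch in R of how \((\mathbf{X}^T\hat{\mathbf{V}}\mathbf{X})^{-1}\) can be computed by hand and compared with the covariance matrix reported by R. It assumes the challenger dataset used throughout this chapter is loaded; the object name fit is arbitrary.
# Sketch: (X^T V-hat X)^{-1} computed by hand and compared with vcov()
fit <- glm(fail.field ~ temp, family = "binomial", data = challenger)
X <- model.matrix(fit)               # design matrix (first column of ones)
p_hat <- fitted(fit)                 # estimated probabilities p(x_i)
V_hat <- diag(p_hat * (1 - p_hat))   # diagonal matrix of estimated variances of Y_i
solve(t(X) %*% V_hat %*% X)          # (X^T V-hat X)^{-1}
vcov(fit)                            # essentially the same matrix, as computed by R
sqrt(diag(vcov(fit)))                # SE-hat(beta_j), the "Std. Error" column of summary(fit)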
4.5.2 Confidence intervals for the coefficients
Thanks to (4.12), we can obtain the \(100(1-\alpha)\%\) CI for the coefficient \(\beta_j\), \(j=0,\ldots,k\): \[\begin{align} \left(\hat\beta_j\pm\hat{\mathrm{SE}}(\hat\beta_j)z_{\alpha/2}\right)\tag{4.13} \end{align}\] where \(z_{\alpha/2}\) is the upper \(\alpha/2\)-quantile of the \(\mathcal{N}(0,1)\). In case we are interested in the CI for \(e^{\beta_j}\), we can simply take exponentials in the above CI. Hence the \(100(1-\alpha)\%\) CI for \(e^{\beta_j}\), \(j=0,\ldots,k\), is \[ e^{\left(\hat\beta_j\pm\hat{\mathrm{SE}}(\hat\beta_j)z_{\alpha/2}\right)}. \] Of course, this CI is not the same as \(\left(e^{\hat\beta_j}\pm e^{\hat{\mathrm{SE}}(\hat\beta_j)z_{\alpha/2}}\right)\), which is not a valid CI for \(e^{\beta_j}\).
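Before turning to R's built-in functions, here is a minimal sketch of how (4.13) and its exponentiated version can be evaluated by hand. It continues the sketch at the end of the previous subsection and reuses the fit object defined there.
# Sketch: Wald CIs (4.13) by hand; they agree with confint.default(fit), not with confint(fit)
z <- qnorm(1 - 0.05 / 2)                 # upper alpha/2 quantile, alpha = 0.05
est <- coef(fit)
se <- sqrt(diag(vcov(fit)))              # SE-hat(beta_j)
cbind(est - z * se, est + z * se)        # CIs for beta_j
exp(cbind(est - z * se, est + z * se))   # CIs for exp(beta_j)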
Let’s see how we can compute the CIs. We return to the challenger dataset, so in case you do not have it loaded, you can download it here. We analyze the CI for the coefficients of fail.field ~ temp.
# Fit model
nasa <- glm(fail.field ~ temp, family = "binomial", data = challenger)
# Confidence intervals at 95%
confint(nasa)
## Waiting for profiling to be done...
## 2.5 % 97.5 %
## (Intercept) 1.3364047 17.7834329
## temp -0.9237721 -0.1089953
# Confidence intervals at other levels
confint(nasa, level = 0.90)
## Waiting for profiling to be done...
## 5 % 95 %
## (Intercept) 2.2070301 15.7488590
## temp -0.8222858 -0.1513279
# Confidence intervals for the factors affecting the odds
exp(confint(nasa))
## Waiting for profiling to be done...
## 2.5 % 97.5 %
## (Intercept) 3.8053375 5.287456e+07
## temp 0.3970186 8.967346e-01
In this example, the 95% confidence interval for \(\beta_0\) is \((1.3364, 17.7834)\) and for \(\beta_1\) is \((-0.9238, -0.1090)\). For \(e^{\beta_0}\) and \(e^{\beta_1}\), the CIs are \((3.8053, 5.2875\times10^7)\) and \((0.3970, 0.8967)\), respectively. Therefore, we can say with 95% confidence that:
- When temp=0, the probability of fail.field=1 is significantly larger than the probability of fail.field=0 (using the CI for \(\beta_0\)). Indeed, fail.field=1 is between \(3.8053\) and \(5.2875\times10^7\) times more likely than fail.field=0 (using the CI for \(e^{\beta_0}\)).
- temp has a significantly negative effect on the probability of fail.field=1 (using the CI for \(\beta_1\)). Indeed, each unit increase in temp produces a reduction of the odds of fail.field=1 by a factor between \(0.3970\) and \(0.8967\) (using the CI for \(e^{\beta_1}\)).
Compute and interpret the CIs for the exponentiated coefficients, at level \(\alpha=0.05\), for the following regressions (challenger dataset):
- fail.field ~ temp + pres.field
- fail.nozzle ~ temp + pres.nozzle
- fail.field ~ temp + pres.nozzle
- fail.nozzle ~ temp + pres.field
4.5.3 Testing on the coefficients
The distributions in (4.12) also allow us to conduct formal hypothesis tests on the coefficients \(\beta_j\), \(j=0,\ldots,k\). For example, the test for significance: \[\begin{align*} H_0:\beta_j=0 \end{align*}\] for \(j=0,\ldots,k\). The test of \(H_0:\beta_j=0\) with \(1\leq j\leq k\) is especially interesting, since it allows us to answer whether the variable \(X_j\) has a significant effect on \(\mathbb{P}[Y=1]\). The statistic used for testing for significance is the Wald statistic \[\begin{align*} \frac{\hat\beta_j-0}{\hat{\mathrm{SE}}(\hat\beta_j)}, \end{align*}\] which is asymptotically distributed as a \(\mathcal{N}(0,1)\) under the null hypothesis. \(H_0\) is tested against the two-sided alternative hypothesis \(H_1:\beta_j\neq 0\).
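As a quick check of how these statistics relate to (4.12), here is a minimal sketch that computes the Wald statistics and their two-sided asymptotic p-values by hand for the nasa model fitted in Section 4.5.2; they match the "z value" and "Pr(>|z|)" columns of summary(nasa).
# Sketch: Wald statistics and p-values by hand for the nasa model
z_wald <- coef(nasa) / sqrt(diag(vcov(nasa)))   # Wald statistics for H0: beta_j = 0
p_val <- 2 * pnorm(-abs(z_wald))                # two-sided asymptotic p-values
cbind(z_wald, p_val)                            # compare with summary(nasa)$coefficients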
The tests for significance are built into the summary function. However, a note of caution is required when applying the following rule of thumb (a short sketch of it in R is given after the list):
Is the CI for \(\beta_j\) below (above) \(0\) at level \(\alpha\)?
- Yes \(\rightarrow\) reject \(H_0\) at level \(\alpha\).
- No \(\rightarrow\) the criterion is not conclusive.
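A minimal sketch of the rule of thumb in R, using the Wald-based CIs given by confint.default (see the discussion below) and the nasa model from Section 4.5.2:
# Sketch: rule of thumb applied with Wald-based CIs
ci <- confint.default(nasa, level = 0.95)
ci[, 1] > 0 | ci[, 2] < 0   # TRUE -> reject H0: beta_j = 0 at level alpha = 0.05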
The significances given in summary and the output of confint are slightly incoherent, and the previous rule of thumb does not apply to them. The reason is that MASS’s confint uses a more sophisticated method (profile likelihood) to build the CIs, rather than the asymptotic distribution behind the Wald statistic and the standard error \(\hat{\mathrm{SE}}(\hat\beta_j)\).
By changing confint to confint.default, R’s default method based on the asymptotic approximation (4.12), the results are completely equivalent to the significances in summary, and the rule of thumb remains valid. For the contents of this course we prefer confint.default due to its better interpretability.
To illustrate this, we consider the regression fail.field ~ temp + pres.field:
# Significances with asymptotic approximation for the standard errors
nasa2 <- glm(fail.field ~ temp + pres.field, family = "binomial",
             data = challenger)
summary(nasa2)
##
## Call:
## glm(formula = fail.field ~ temp + pres.field, family = "binomial",
## data = challenger)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.2109 -0.6081 -0.4292 0.3498 2.0913
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 6.642709 4.038547 1.645 0.1000
## temp -0.435032 0.197008 -2.208 0.0272 *
## pres.field 0.009376 0.008821 1.063 0.2878
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 28.267 on 22 degrees of freedom
## Residual deviance: 19.078 on 20 degrees of freedom
## AIC: 25.078
##
## Number of Fisher Scoring iterations: 5
# CIs with asymptotic approximation - coherent with summary
confint.default(nasa2, level = 0.90)
## 5 % 95 %
## (Intercept) -0.000110501 13.28552771
## temp -0.759081468 -0.11098301
## pres.field -0.005132393 0.02388538
confint.default(nasa2, level = 0.99)
## 0.5 % 99.5 %
## (Intercept) -3.75989977 17.04531697
## temp -0.94249107 0.07242659
## pres.field -0.01334432 0.03209731
# CIs with profile likelihood - incoherent with summary
confint(nasa2, level = 0.90) # intercept still significant
## Waiting for profiling to be done...
## 5 % 95 %
## (Intercept) 0.945372123 14.93392497
## temp -0.845250023 -0.16532086
## pres.field -0.004184814 0.02602181
confint(nasa2, level = 0.99) # temp still significant
## Waiting for profiling to be done...
## 0.5 % 99.5 %
## (Intercept) -1.86541750 21.49637422
## temp -1.17556090 -0.04317904
## pres.field -0.01164943 0.03836968
For the previous exercise, check the differences between using confint and confint.default to compute the CIs.