
Solutions

9.27
a.
The posterior density is

\begin{align*}
\pi(\lambda\vert x) &\propto \frac{1}{\lambda^n}e^{-\sum x_i/\lambda} \, \frac{1}{\lambda^{a+1}} e^{-1/(b\lambda)}\\
&= \frac{1}{\lambda^{n+a+1}} e^{-[1/b + \sum x_i]/\lambda}\\
&= \text{IG}\bigl(n+a,\, [1/b + \textstyle\sum x_i]^{-1}\bigr)
\end{align*}

The inverse gamma density is unimodal, so the HPD region is an interval $[c_1,c_2]$ with $c_1, c_2$ chosen to have equal posterior density values and to satisfy $P(Y > 1/c_1)+P(Y<1/c_2) = \alpha$, where $Y = 1/\lambda \sim \text{Gamma}(n+a, [1/b + \sum x_i]^{-1})$ under the posterior.
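The two conditions (equal endpoint densities and posterior probability $1-\alpha$ between the endpoints) can be solved numerically. The following is a minimal sketch, not part of the original solution, using SciPy with made-up values for $n$, $a$, $b$, and $\sum x_i$; note that SciPy's scale argument for invgamma is the reciprocal of the IG scale used above.

import numpy as np
from scipy import optimize, stats

n, a, b = 10, 2.0, 1.0      # hypothetical sample size and prior parameters
sum_x = 12.5                # hypothetical observed sum of the x_i
alpha = 0.05

# Posterior IG(n + a, [1/b + sum x_i]^{-1}); scipy's scale is 1/b + sum x_i.
post = stats.invgamma(n + a, scale=1.0 / b + sum_x)

def density_gap(p1):
    # Lower endpoint at lower-tail probability p1; upper endpoint chosen so the
    # interval has posterior probability 1 - alpha.  HPD requires equal densities.
    c1 = post.ppf(p1)
    c2 = post.ppf(p1 + 1 - alpha)
    return post.pdf(c1) - post.pdf(c2)

p1 = optimize.brentq(density_gap, 1e-8, alpha - 1e-8)
c1, c2 = post.ppf(p1), post.ppf(p1 + 1 - alpha)
print("HPD interval:", c1, c2)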
b.
The distribution of $S^2$ is Gamma$((n-1)/2, 2\sigma^2/(n-1))$. The resulting posterior density is therefore

\begin{align*}
\pi(\sigma^2\vert s^2) &\propto \frac{(s^2)^{(n-1)/2 - 1}}{(2\sigma^2/(n-1))^{(n-1)/2}} e^{-(n-1)s^2/(2\sigma^2)} \, \frac{1}{(\sigma^2)^{a+1}} e^{-1/(b \sigma^2)}\\
&\propto \frac{1}{(\sigma^2)^{(n-1)/2 + a + 1}} e^{-[1/b + (n-1)s^2/2]/\sigma^2}\\
&= \text{IG}\bigl((n-1)/2 + a,\, [1/b + (n-1)s^2/2]^{-1}\bigr)
\end{align*}

As in part (a), the HPD region is an interval $[c_1,c_2]$ determined by the same two conditions: equal posterior density at the endpoints and posterior probability $1-\alpha$ between them; numerically, only the posterior changes, as sketched below.
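Using the same routine sketched for part (a), one only swaps in this posterior (again with hypothetical values, here including a made-up $s^2$):

from scipy import stats

n, a, b, s2 = 10, 2.0, 1.0, 2.3   # hypothetical inputs
post = stats.invgamma((n - 1) / 2 + a, scale=1.0 / b + (n - 1) * s2 / 2)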
c.
The limiting posterior distribution (letting $a \to 0$ and $b \to \infty$) is IG$((n-1)/2, [(n-1)s^2/2]^{-1})$, i.e. $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$ a posteriori. The limiting HPD region is an interval $[c_1,c_2]$ with $c_1 = (n-1)s^2/\chi^2_{n-1,\alpha_1}$ and $c_2 = (n-1)s^2/\chi^2_{n-1,1-\alpha_2}$, where $\alpha_1+\alpha_2=\alpha$ and $c_1, c_2$ have equal posterior density values.
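The split $\alpha_1$ can be found numerically from the equal-density condition. A minimal sketch under the same assumptions as above (hypothetical $n$ and $s^2$; $\chi^2_{n-1,\alpha}$ denotes the upper $\alpha$ cutoff, scipy's isf):

import numpy as np
from scipy import optimize, stats

n, s2, alpha = 10, 2.3, 0.05                                  # hypothetical inputs
post = stats.invgamma((n - 1) / 2, scale=(n - 1) * s2 / 2)    # limiting posterior

def endpoints(alpha1):
    c1 = (n - 1) * s2 / stats.chi2.isf(alpha1, n - 1)
    c2 = (n - 1) * s2 / stats.chi2.isf(1 - (alpha - alpha1), n - 1)
    return c1, c2

def density_gap(alpha1):
    c1, c2 = endpoints(alpha1)
    return post.pdf(c1) - post.pdf(c2)

alpha1 = optimize.brentq(density_gap, 1e-8, alpha - 1e-8)
print("limiting HPD interval:", *endpoints(alpha1))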
9.33
a.
Since $ 0 \in C_{a}(x)$ for all $ a,x$,

\[ P_{\mu=0}(0 \in C_{a}(X)) = 1. \]

For $\mu < 0$ (so that $\mu \le \max\{0, X+a\}$ holds automatically),

\begin{align*}
P_{\mu}(\mu \in C_{a}(X)) &= P_{\mu}(\min\{0,X-a\} \le \mu)\\
&= P_{\mu}(X-a\le\mu) = P_{\mu}(X-\mu\le a) = 1-\alpha
\end{align*}

if $a = z_{\alpha}$, the upper $\alpha$ cutoff of the standard normal. For $\mu > 0$ (so that $\min\{0,X-a\} \le \mu$ holds automatically),

\begin{align*}
P_{\mu}(\mu \in C_{a}(X)) &= P_{\mu}(\max\{0,X+a\} \ge \mu)\\
&= P_{\mu}(X+a\ge\mu) = P_{\mu}(X-\mu\ge-a) = 1-\alpha
\end{align*}

if $ a = z_{\alpha}$.
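A quick Monte Carlo check of these coverage probabilities (an illustration, not part of the original solution; the values of $\mu$ and $\alpha$ are arbitrary):

import numpy as np
from scipy import stats

alpha = 0.1
a = stats.norm.isf(alpha)         # z_alpha, the upper alpha cutoff
rng = np.random.default_rng(0)

for mu in (-2.0, 0.0, 3.0):
    x = rng.normal(mu, 1.0, size=200_000)
    lo = np.minimum(0.0, x - a)   # lower endpoint of C_a(X)
    hi = np.maximum(0.0, x + a)   # upper endpoint of C_a(X)
    print(mu, np.mean((lo <= mu) & (mu <= hi)))   # ~1-alpha for mu != 0, 1 at mu = 0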
b.
For the flat prior $\pi(\mu) \equiv 1$, the posterior is $\mu\vert x \sim N(x, 1)$.

\begin{align*}
P(\min\{0,x-a\}\le\mu\le\max\{0,x+a\}\vert X=x) &= P(x-a \le \mu \le x+a\vert X=x)\\
&= 1-2\alpha
\end{align*}

if $ a = z_{\alpha}$ and $ -z_{\alpha}\le x \le z_{\alpha}$. For $ a = z_{\alpha}$ and $ x > z_{\alpha}$,

\begin{align*}
P(\min\{0,x-a\}\le\mu\le\max\{0,x+a\}\vert X=x) &= P(0 \le \mu \le x+a\vert X=x)\\
&= P(-x \le Z \le a) \rightarrow P(Z \le a) = 1-\alpha
\end{align*}

as $x \rightarrow \infty$, where $Z = \mu - x$ is standard normal under the posterior.
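The posterior probability of $C_a(x)$ under the flat prior can be computed directly from the $N(x,1)$ posterior. A small sketch (illustrative values only) showing the probability equal to $1-2\alpha$ for $|x| \le z_\alpha$ and increasing toward $1-\alpha$ as $x$ grows:

from scipy import stats

alpha = 0.1
a = stats.norm.isf(alpha)         # z_alpha

def posterior_prob(x):
    lo = min(0.0, x - a)
    hi = max(0.0, x + a)
    # mu | x ~ N(x, 1): standardize the endpoints around x
    return stats.norm.cdf(hi - x) - stats.norm.cdf(lo - x)

for x in (0.0, a, 2.0, 5.0, 10.0):
    print(x, posterior_prob(x))   # 1 - 2*alpha up to x = a, then -> 1 - alpha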

10.1
The mean is $\mu = \theta/3$, so the method of moments estimator is $W_n = 3 \overline{X}_n$. By the law of large numbers $\overline{X}_n \overset{P}{\rightarrow}\mu = \theta/3$, so $W_n = 3 \overline{X}_n \overset{P}{\rightarrow}\theta$; that is, $W_n$ is a consistent estimator of $\theta$.
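A simulation check of this consistency, assuming the pdf from the exercise, $f(x\vert\theta) = (1+\theta x)/2$ on $(-1,1)$, and sampling by inverting the cdf (the value of $\theta$ is arbitrary but nonzero so the closed-form inverse applies):

import numpy as np

rng = np.random.default_rng(0)
theta = 0.6                      # hypothetical true parameter, nonzero

def sample(n):
    u = rng.uniform(size=n)
    # Invert F(x) = (x + 1)/2 + theta*(x^2 - 1)/4 on (-1, 1)
    return (-1.0 + np.sqrt((1.0 - theta) ** 2 + 4.0 * theta * u)) / theta

for n in (10, 100, 10_000, 1_000_000):
    print(n, 3.0 * sample(n).mean())   # W_n = 3*Xbar_n approaches theta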


Luke Tierney 2003-05-04