
Solutions

7.6
The joint PDF of the data can be written as

$\displaystyle f(x\vert\theta) = \theta^{n}\prod x_{i}^{-2}1_{[\theta,\infty)}(x_{(1)})$    

a.
By the Factorization Theorem, $ X_{(1)}$ is sufficient.
b.
As a function of $ \theta$, the likelihood increases up to $ \theta = x_{(1)}$ and is zero thereafter. So the MLE is $ \widehat{\theta}=X_{(1)}$.
c.
The expected value of a single observation is

$\displaystyle E_{\theta}[X] = \int_{\theta}^{\infty}x\theta\frac{1}{x^{2}}dx = \theta\int_{\theta}^{\infty}\frac{1}{x}dx = \infty$    

So the (usual) method of moments estimator does not exist.
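As a quick numerical illustration (a sketch added here, not part of the original solution; NumPy and the parameter values are my own choices), we can sample from $ f(x\vert\theta) = \theta x^{-2}$ on $ [\theta,\infty)$ by inverting the CDF $ F(x) = 1 - \theta/x$ and check that $ X_{(1)}$ tracks $ \theta$ while the sample mean fails to settle down:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n = 2.0, 1000  # true parameter and sample size (arbitrary choices)

    # Inverse-CDF sampling: F(x) = 1 - theta/x, so X = theta/(1 - U), U ~ Uniform(0,1)
    x = theta / (1 - rng.uniform(size=n))

    print(x.min())   # the MLE X_(1); should be close to theta = 2
    print(x.mean())  # unstable across runs: E[X] is infinite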

7.11
a.
The likelihood and log likelihood are

$\displaystyle L(\theta\vert x)$ $\displaystyle = \theta^{n}\left(\prod x_{i}\right)^{\theta-1}$    
$\displaystyle \log L(\theta\vert x)$ $\displaystyle = n\log\theta + (\theta-1)\sum \log x_{i}$    

The derivative of the log likelihood and its unique root are

$\displaystyle \frac{d}{d\theta}\log L(\theta\vert x)$ $\displaystyle = \frac{n}{\theta} + \sum \log x_{i}$
$\displaystyle \widehat{\theta}$ $\displaystyle = -\frac{n}{\sum \log x_{i}}$    

Since $ \log L(\theta\vert x) \rightarrow -\infty$ as $ \theta \rightarrow 0$ or $ \theta \rightarrow \infty$, and the likelihood is differentiable on the parameter space, this root is a global maximum.
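As a sanity check on this closed form (a sketch added here; SciPy, the seed, and the simulated data are my own assumptions), one can compare it against a numerical maximizer of the log likelihood:

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    theta = 3.0                              # true parameter (arbitrary)
    x = rng.uniform(size=50) ** (1 / theta)  # X = U^(1/theta) has density theta*x^(theta-1)

    def neg_log_lik(t):
        return -(len(x) * np.log(t) + (t - 1) * np.log(x).sum())

    closed_form = -len(x) / np.log(x).sum()
    numeric = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded").x
    print(closed_form, numeric)  # the two should agree to several decimals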

Now $ -\log X_{i} \sim \text{Exponential}(1/\theta) = \text{Gamma}(1,1/\theta)$, so $ -\sum \log X_{i} \sim \text{Gamma}(n,1/\theta)$. Therefore

$\displaystyle E\left[-\frac{n}{\sum \log X_{i}}\right]$ $\displaystyle = n \int_{0}^{\infty} \frac{\theta^n}{x\Gamma(n)}x^{n-1}e^{-\theta x}dx$    
  $\displaystyle = n \frac{\Gamma(n-1)}{\Gamma(n)}\theta = \frac{n}{n-1}\theta$    

and

$\displaystyle E\left[\left(\frac{n}{\sum\log X_{i}}\right)^{2}\right] = n^{2}\frac{\Gamma(n-2)}{\Gamma(n)}\theta^{2} = \frac{n^{2}}{(n-1)(n-2)}\theta^{2}$    

So

$\displaystyle \text{Var}(\widehat{\theta}) = \theta^{2}\frac{n^{2}}{n-1}\left(\frac{1}{n-2}-\frac{1}{n-1}\right) = \frac{\theta^{2}n^{2}}{(n-1)^{2}(n-2)} \sim \frac{\theta^{2}}{n} \rightarrow 0$

as $ n \rightarrow \infty$.
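These exact moments are easy to confirm by simulation (a sketch under my own parameter choices, using the Gamma representation of $ -\sum \log X_{i}$ derived above):

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 3.0, 20, 200_000

    # -sum(log X_i) ~ Gamma(n, 1/theta), so simulate it directly
    t = rng.gamma(shape=n, scale=1 / theta, size=reps)
    mle = n / t

    print(mle.mean(), n * theta / (n - 1))                      # E[theta-hat] = n theta/(n-1)
    print(mle.var(), theta**2 * n**2 / ((n - 1)**2 * (n - 2)))  # exact variance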
b.
The mean of a single observation is

$\displaystyle E[X] = \int_{0}^{1}\theta x^{\theta} dx = \frac{\theta}{\theta+1}$    

So

$\displaystyle \overline{X} = \frac{\theta}{\theta+1}$    

is the method of moments equation, and

$\displaystyle \widetilde{\theta}\overline{X}+\overline{X}$ $\displaystyle = \widetilde{\theta}$ or    
$\displaystyle \widetilde{\theta}(\overline{X}-1)$ $\displaystyle = -\overline{X}$ or    
$\displaystyle \widetilde{\theta}$ $\displaystyle = \frac{\overline{X}}{1-\overline{X}}$    

We could use the delta method to find a normal approximation to the distribution of $ \widetilde{\theta}$. The variance of the approximate distribution is larger than the variance of the MLE.
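To make that comparison concrete (a worked step added here, not in the original solution): with $ g(t) = t/(1-t)$, $ \mu = \theta/(\theta+1)$, and $ \sigma^{2} = \text{Var}(X) = \theta/[(\theta+1)^{2}(\theta+2)]$, the delta method gives

$\displaystyle \text{Var}(\widetilde{\theta}) \approx \frac{g'(\mu)^{2}\sigma^{2}}{n} = \frac{(\theta+1)^{4}}{n}\cdot\frac{\theta}{(\theta+1)^{2}(\theta+2)} = \frac{\theta(\theta+1)^{2}}{n(\theta+2)}$

which exceeds the MLE's approximate variance $ \theta^{2}/n$ since $ (\theta+1)^{2} > \theta(\theta+2)$.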

7.13
The likelihood is

$\displaystyle L(\theta\vert x) = \frac{1}{2^{n}}\exp\{-\sum \vert x_{i}-\theta\vert\}$

We know that the sample median $ \widetilde{X}$ minimizes $ \sum\vert x_{i}-\theta\vert$, so $ \widehat{\theta}=\widetilde{X}$. The minimizer is unique for odd $ n$. For even $ n$ any value between the two middle order statistics is a minimizer.
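A quick numerical check of the odd-$ n$ case (my own sketch; SciPy and the simulated data are assumptions, not part of the solution):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    x = rng.laplace(loc=1.5, size=101)  # odd n; true theta = 1.5

    # -log L(theta|x) up to the constant n*log(2)
    def sum_abs_dev(t):
        return np.abs(x - t).sum()

    numeric = minimize_scalar(sum_abs_dev, bounds=(x.min(), x.max()), method="bounded").x
    print(np.median(x), numeric)  # both give the MLE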

7.14
We need the joint "density" of $ W, Z$:

$\displaystyle P(W=1,Z \in [z,z+h))$ $\displaystyle = P(X \in [z,z+h), Y \ge z+h) + o(h)$    
  $\displaystyle = h \frac{1}{\lambda}e^{-z/\lambda}e^{-z/\mu}+o(h)$    
  $\displaystyle = h \frac{1}{\lambda}e^{-z\left(\frac{1}{\lambda}+\frac{1}{\mu}\right)} + o(h)$    

and, similarly,

$\displaystyle P(W=0,Z \in [z,z+h)) = h \frac{1}{\mu}e^{-z\left(\frac{1}{\lambda}+\frac{1}{\mu}\right)} + o(h)$    

So

$\displaystyle f(w,z) = \lim_{h \downarrow 0} \frac{1}{h}P(W=w,Z \in [z,z+h)) = \frac{1}{\lambda^{w}\mu^{1-w}} e^{-z\left(\frac{1}{\lambda}+\frac{1}{\mu}\right)}$    

and therefore

$\displaystyle f({w}_{1},\ldots,{w}_{n},{z}_{1},\ldots,{z}_{n}\vert\lambda,\mu) = \frac{1}{\lambda^{\sum w_{i}}\mu^{n-\sum w_{i}}} e^{-\sum z_{i}\left(\frac{1}{\lambda}+\frac{1}{\mu}\right)}$

Since this factors,

$\displaystyle f({w}_{1},\ldots,{w}_{n},{z}_{1},\ldots,{z}_{n}\vert\lambda,\mu) = \frac{1}{\lambda^{\sum w_{i}}}e^{-\sum z_{i}/\lambda}\,\frac{1}{\mu^{n-\sum w_{i}}}e^{-\sum z_{i}/\mu}$

it is maximized by maximizing each factor separately; setting the derivative of each factor's logarithm (e.g. $ -\sum w_{i}\log\lambda - \sum z_{i}/\lambda$ for the first) to zero produces

$\displaystyle \widehat{\lambda}$ $\displaystyle = \frac{\sum z_{i}}{\sum w_{i}}$    
$\displaystyle \widehat{\mu}$ $\displaystyle = \frac{\sum z_{i}}{n-\sum w_{i}}$    

In words, if the $ X_{i}$ represent failure times, then

$\displaystyle \widehat{\lambda} = \frac{\text{total time on test}}{\text{number of observed failures}}$    
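A short simulation illustrates these formulas (a sketch with made-up parameter values; NumPy is an assumption). Here $ X$ is the failure time, $ Y$ the censoring time, and only $ Z = \min(X,Y)$ and $ W = 1_{\{X \le Y\}}$ are observed:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, mu, n = 2.0, 5.0, 100_000  # true means of X and Y (arbitrary)

    x = rng.exponential(scale=lam, size=n)
    y = rng.exponential(scale=mu, size=n)
    z = np.minimum(x, y)      # observed time on test
    w = (x <= y).astype(int)  # 1 if the failure was observed, 0 if censored

    lam_hat = z.sum() / w.sum()       # total time on test / observed failures
    mu_hat = z.sum() / (n - w.sum())  # total time on test / censoring events
    print(lam_hat, mu_hat)            # should be close to lam = 2 and mu = 5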


Luke Tierney 2003-05-04