\cl{\bf Statistics and Point Estimators - an Overview}\vskip 12 pt
Your exam will cover Chapters 8 and 10. Chapter 8 deals with statistics and Chapter 10 with point estimators.\vskip 12 pt
\cl{Some Basic Statistical Concepts and Definitions}\vskip 12 pt
\ni 1. A {\bf random sample} $X_1,\dots ,X_n$ is a collection of independent random variables all drawn from the same distribution.
\ni 2. A {\bf statistic} is a function of the random variables in a random sample. The most common, though by no means the only, statistics are the sample mean and the sample variance:
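$$\ol{X}={1\over n}\sum_{i=1}^n X_i\quad\quad\hbox{and}\quad\quad S^2={1\over n-1}\sum_{i=1}^n{(X_i-\ol{X})}^2$$
\ni (Note the divisor $n-1$ in $S^2$; the reason for this choice appears under unbiased estimators below.)\vskip 12 pt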
\ni 3. The distribution of the sample mean is described in Theorem 8.1, which states that, if $X_1,\dots ,X_n$ is a random sample from an infinite population with mean $\mu$ and variance $\sigma^2$, then
$$E(\ol{X})=\mu,\quad\quad var(\ol{X})={\sigma^2\over n},\quad\quad\hbox{and}\quad\quad {\sigma_{\ol{X}}}={\sigma\over\sqrt{n}}$$
\ni where the last quantity, $\sigma_{\ol{X}}$, is called the {\it standard error of the mean}.
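\ni For example, if a random sample of size $n=25$ is taken from a population with $\sigma=10$, then $\sigma_{\ol{X}}=10/\sqrt{25}=2$; quadrupling the sample size to $n=100$ only cuts the standard error in half, to $1$.\page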
\cl{\bf A Basic Review of Estimators}\vskip 12 pt
A distribution may have any number of different (population) parameters associated with it. For example:\vskip 4 pt
\ni i) Consider the familiar mean $\mu$ and variance $\sigma^2$.
\ni ii) Consider the binomial distribution $$b(x;n,\theta)={n\choose x}\theta^x{(1-\theta)}^{n-x}$$
\ni which gives the probability that exactly $x$ of the $n$ trials of a Bernoulli process will be successes (a quick numerical check follows these examples). We can regard both the number of trials $n$ and the probability of success $\theta$ as population parameters of the binomial distribution.
\ni iii) Consider the gamma distribution $g(x;\alpha ,\beta)$ with parameters $\alpha$ and $\beta$. You may wish to go back and look at an earlier lab, where you studied the effects of changes in $\alpha$ or $\beta$ on the shape of the gamma distribution. \vskip 12 pt
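As a quick numerical check of the binomial formula in ii), take $n=4$ trials with $\theta=0.5$; the probability of exactly $x=2$ successes is
$$b(2;4,0.5)={4\choose 2}{(0.5)}^2{(0.5)}^2={6\over 16}=0.375$$\vskip 12 pt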
Given a distribution of known type, we may take a random sample $X_1,\dots ,X_n$ and use it to derive an estimate of a population parameter $\theta$. This leads to the notion of an {\it estimator}. \vskip 12 pt
\ni{\bf Definition.} Consider a population with a given density function $f$. A statistic $\widehat{\Theta}$ is called an {\bf estimator} of a population parameter $\theta$ associated with $f$ if the value of $\widehat{\Theta}$ is used to estimate the unknown value of $\theta$.\page
Most of Chapter 10 deals with various properties we might like estimators to have, including unbiasedness, efficiency, and consistency. We will conclude Chapter 10 on Wednesday with a discussion of sufficiency of an estimator and a brief mention of robustness. For now, we review the three properties we have already studied.\vskip 12 pt
\ni{\bf Unbiased Estimators.} $\widehat{\Theta}$ is an {\bf unbiased} estimator of the population parameter $\theta$ iff $E(\widehat{\Theta})=\theta$.\vskip 12 pt
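\ni For example, for a random sample $X_1,\dots ,X_n$ from a population with mean $\mu$, the sample mean is an unbiased estimator of $\mu$, since
$$E(\ol{X})=E\left({1\over n}\sum_{i=1}^n X_i\right)={1\over n}\sum_{i=1}^n E(X_i)={1\over n}\cdot n\mu =\mu$$
\ni A similar (but longer) computation shows that $E(S^2)=\sigma^2$, which is exactly why the sample variance is defined with the divisor $n-1$ rather than $n$.\vskip 12 pt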
As is to be expected, unbiased estimators are almost always preferable to biased estimators, but are not always easy to obtain.\vskip 12 pt
\ni{\bf Efficient Estimators.} Given a choice of unbiased estimators, we generally prefer the one with the smallest variance.
\ni Note: for a biased estimator, we would measure its efficiency using the mean-square error $E[{(\widehat{\Theta}-\theta)}^2]$.
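\ni For example, for a random sample of size $n$ from a normal population, both the sample mean and the sample median are unbiased estimators of $\mu$, but $var(\ol{X})={\sigma^2\over n}$ while, for large $n$, the variance of the sample median is approximately ${\pi\sigma^2\over 2n}\approx 1.57\,{\sigma^2\over n}$; the sample mean is therefore the more efficient estimator.\page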
\ni{\bf Consistent Estimators.} We say that $\widehat{\Theta}$ converges to $\theta$ {\bf in probability} if, given any small positive constant $c$, $$\lim_{n\to\infty}P(|\widehat{\Theta}-\theta|<c)=1$$
\ni We also say that an estimator which converges in probability to $\theta$ is a {\bf consistent} estimator.
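\ni For example, Chebyshev's inequality gives $$P(|\ol{X}-\mu|<c)\geq 1-{\sigma^2\over nc^2}$$ and the right-hand side tends to $1$ as $n\to\infty$; hence the sample mean is a consistent estimator of $\mu$ whenever the population variance is finite.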