The standard deviation (or, as it is usually called, the standard error) of the sampling distribution for the sample mean, x̄, is equal to the standard deviation of the population from which the sample was selected, divided by the square root of the sample size. That is,

σ_x̄ = σ/√n
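The formula above can be checked empirically: draw many samples of size n, take the mean of each, and compare the standard deviation of those means with σ/√n. The sketch below assumes a normal population with an arbitrarily chosen μ = 50, σ = 10, and n = 25; any population with finite variance would do.

```python
import math
import random
import statistics

# Empirical check that sigma_xbar = sigma / sqrt(n).
# mu, sigma, n, and reps are illustrative choices, not from the exercise.
random.seed(1)
mu, sigma, n = 50.0, 10.0, 25
reps = 20_000

# Simulate the sampling distribution of the sample mean.
sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(reps)
]

empirical_se = statistics.stdev(sample_means)
theoretical_se = sigma / math.sqrt(n)  # 10 / sqrt(25) = 2.0
print(empirical_se, theoretical_se)
```

With 20,000 replications the empirical standard deviation of the sample means lands very close to the theoretical value of 2.0.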
a. As the sample size is increased, what happens to the standard error of x̄? Why is this property considered important?
b. Suppose a sample statistic has a standard error that is not a function of the sample size. In other words, the standard error remains constant as n changes. What would this imply about the statistic as an estimator of a population parameter?
c. Suppose another unbiased estimator (call it A) of the population mean is a sample statistic with a standard error equal to
Which of the sample statistics, x̄ or A, is preferable as an estimator of the population mean? Why?
d. Suppose that the population standard deviation σ is equal to 10 and that the sample size is 64. Calculate the standard errors of x̄ and A. Assuming that the sampling distribution of A is approximately normal, interpret the standard errors. Why is the assumption of (approximate) normality unnecessary for the sampling distribution of x̄?
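For the x̄ part of the calculation in (d), the numbers plug in directly; a minimal sketch using the given σ = 10 and n = 64 is below. The standard error of A is not computed here, since the formula for A's standard error is not reproduced in this excerpt.

```python
import math

# Values given in part (d) of the exercise.
sigma = 10.0
n = 64

# Standard error of the sample mean: sigma / sqrt(n).
se_xbar = sigma / math.sqrt(n)
print(se_xbar)  # 1.25
```

The sqrt(64) = 8 in the denominator is what makes the assumption of normality unnecessary for x̄: by the Central Limit Theorem, the sampling distribution of x̄ is approximately normal for a sample this large regardless of the population's shape.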