A student wants to estimate the mean score of all college students for a particular exam. First use the range rule of thumb to make a rough estimate of the standard deviation of those scores. Possible scores range from 500 to 2200. Use technology and the estimated standard deviation to determine the sample size corresponding to a 95​% confidence level and a margin of error of 100 points. What​ isn't quite right with this​ exercise?


Answer:

70

Step-by-step explanation:

It is given that scores range from 500 to 2200,

so the range = 2200 - 500 = 1700.

By the range rule of thumb, the estimated standard deviation is [tex]s=\frac{\text{range}}{4}=\frac{1700}{4}=425[/tex]
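As a quick sanity check, the range rule of thumb estimate can be reproduced with a short script (a minimal sketch; the variable names are just illustrative):

```python
# Range rule of thumb: the standard deviation is roughly (max - min) / 4
low, high = 500, 2200           # lowest and highest possible scores
score_range = high - low        # 1700
s_estimate = score_range / 4    # 425.0
print(s_estimate)
```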

Margin of error E = 100

Confidence level = 95% = 0.95

Significance level α = 1 - 0.95 = 0.05

[tex]z_{\frac{\alpha }{2}}=z_{\frac{0.05}{2}}=z_{0.025}=1.96[/tex] from the z table

The required sample size satisfies [tex]n\geq\left ( z_{\frac{\alpha }{2}}\times \frac{s}{E} \right )^2[/tex]

[tex]n\geq\left ( 1.96\times \frac{425}{100} \right )^2[/tex]

[tex]n\geq69.3889[/tex]

Rounding up to the next whole number, the required sample size is n = 70.
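For the "use technology" part of the question, the same calculation can be done with a short Python sketch (assuming scipy is available; the variable names are illustrative):

```python
import math
from scipy.stats import norm

s = 425            # standard deviation estimated by the range rule of thumb
E = 100            # desired margin of error
confidence = 0.95
alpha = 1 - confidence

z = norm.ppf(1 - alpha / 2)        # two-tailed critical value, about 1.96
n = math.ceil((z * s / E) ** 2)    # always round the sample size up
print(z, n)                        # 1.9599... 70
```

Whether the table value 1.96 or the exact critical value is used, the result is about 69.4, which rounds up to 70 students.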