Statistics Appendix
CHAPTER 1 PROBLEMS
1. (a) 1946
(b) There were more years in which the average number of years com-
pleted by the older group exceeded that of the younger group.
3. (a) From 1985 to 1990 sales declined.
(b) The total number of cars sold from 1985 to 1987 was 20,693,000
versus 18,120,000 from 1988 to 1990.
(c) No
5. Researchers with such knowledge may be influenced by their own biases
concerning the usefulness of the new drug.
7. (a) In 1936 automobile and telephone owners were probably not rep-
resentative of the total voter population.
(b) Yes. Automobile and telephone ownership is now more widespread
and thus more representative of the total voter population.
9. The average age of death for U.S. citizens whose obituary is listed in The
New York Times is about 82.4 years.
11. (a) No. Graduates who return the questionnaire may not be represen-
tative of the total population of graduates.
(b) If the number of questionnaires returned were very close to 200—
the number of questionnaires sent—then the approximation would
be better.
13. Graunt implicitly assumed that the parishes he surveyed were represen-
tative of the total London population.
15. Data on the ages at which people were dying can be used to determine
approximately how long on average the annuity payments will continue.
This can be used to determine how much to charge for the annuity.
17. (a) 64%
(b) 10%
(c) 48%
19. (a) Yes
(b) Yes
(c) No
(d) No
Section 2.2
(b)
(c)
3. (a) 12
(b) 1
(c) 11
(d) 3
(e) 3
5. Value Frequency
10 8
20 3
30 7
40 7
50 3
60 8
9. (a) 0.13
(b) 0.25
(c) No
(b) 0.162
(c) 0.540
Section 2.3
1. (a)
(b)
(c) The chart in part (a) seems more informative since it shows a clearer
pattern.
5. (a)
(b)
(b)
Female Relative
cholesterol Frequency frequency
170–180 1 1/46 = 0.02
180–190 5 5/46 = 0.11
190–200 13 13/46 = 0.28
200–210 15 15/46 = 0.33
210–220 9 9/46 = 0.20
220–230 3 3/46 = 0.07
Male Relative
cholesterol Frequency frequency
170–180 3 3/54 = 0.06
180–190 13 13/54 = 0.24
190–200 19 19/54 = 0.35
200–210 10 10/54 = 0.19
210–220 6 6/54 = 0.11
220–230 3 3/54 = 0.06
15. (a) It is the sum of the relative frequencies for all classes.
Section 2.4
1. (a) 11 1, 4, 5, 6, 8, 8, 9, 9, 9
12 2, 2, 2, 2, 4, 5, 5, 6, 7, 7, 7, 8, 9
13 0, 2, 2, 3, 4, 5, 5, 7, 9
14 1, 1, 4, 6, 7
(b) 11 1, 4
11 5, 6, 8, 8, 9, 9, 9
12 2, 2, 2, 2, 4
12 5, 5, 6, 7, 7, 7, 8, 9
13 0, 2, 2, 3, 4
13 5, 5, 7, 9
14 1, 1, 4
14 6, 7
3. 1 4
1 5, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9, 9, 9, 9
2 0, 0, 0, 0, 1, 2, 2, 2, 3, 4
2 5, 7, 7, 9
3 0, 1, 1, 2, 3
3
4 0, 4, 4
4 5
5 1, 3
5 5
6 1
6
7
7 9
The interval 15–20 contains 14 data points.
The interval 16–21 contains 17 data points.
5. (a) 3 2
4
5 2, 7, 8, 9
6 5, 8, 8
7 1, 4, 5, 5, 7, 8, 9
8 0, 1, 3, 3, 3, 4, 8, 8
9 0, 3, 4, 7
10 0, 4, 8
(b) Yes. The value 32 seems suspicious since it is so much smaller than
the others.
7. (a) 1 4, 6, 6, 6
2 0, 0, 1, 3, 4, 4, 6, 7, 7, 7
3 1, 2, 3, 5, 5, 8, 8, 9
4 2, 6
5 5
(b) 0 3, 6, 7, 7, 7, 7, 9
1 0, 0, 0, 0, 0, 0, 3, 4, 4, 6, 6, 7, 7, 9, 9
2 0, 1
3 1
(c) 0 1, 3, 4, 4, 4, 5, 7, 9
1 0, 0, 2, 6, 7, 7, 7, 8, 9, 9
2 1, 2, 5, 9
3 2, 6
4 5
9. (a) 6
(b) 43.75%
(c) 12.5%
Section 2.5
1. (a)
CHAPTER 2 REVIEW
(c)
3. (a) 389
(b)
(b) There are relatively few weights near the upper end of the weight
range.
11. Weight and blood pressure do not seem related.
13. Yes, high scores on one examination tend to go along with high scores
on the other.
(c)
Section 3.2
1. 1196/15 = 79.73
3. 429.03/13 = 33.00 inches; 1331/13 = 102.38 days
5. No. It also depends on the proportions of the two town populations that
are women. (For instance, suppose town A has 9 women whose aver-
age weight is 110 and 1 man whose weight is 200, while town B has 10
women whose average weight is 100 and 10 men whose average weight
is 190.)
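(With these numbers, town A's overall average weight is (9 · 110 + 200)/10 = 119, while town B's is (10 · 100 + 10 · 190)/20 = 145, so town B has the larger overall average even though each of its groups has the smaller average weight.)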
7. larger than the average of the previous values.
9. 6; 18; 11
11. 78/11
13. 15
15. (1/2)(10) + (1/6)(20) + (1/3)(30) = 18.33
17. $81,120
19. No. For a counterexample, suppose town A has one man who weighs 200 and 2 women whose average weight is 100, whereas town B has 2 men with an average weight of 175 and a single woman who weighs 90.
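(Here town A's overall average is (200 + 2 · 100)/3 ≈ 133.3, while town B's is (2 · 175 + 90)/3 ≈ 146.7, so again the town whose individual groups weigh less on average has the larger overall average.)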
Section 3.3
1. (a) 6580 yards
(b) 6545 yards
3. 23
5. (a) 22.0
(b) 8.1
(c) 23.68
(d) 9.68
7. 31.5 inches
9. (a) 99.4
(b) 14.9
(c) 204.55
11. (a) 20.74
(b) 20.5
(c) 19.74
(d) 19.5
(e) Mean = 20.21; median = 20.05
13. 0, 0
15. (a) 32.52
(b) 24.25
17. (a) 26.8
(b) 25.0
Section 3.3.1
1. (a) If the data are arranged in increasing order, then the sample 80th percentile is given by the average of the values in positions 60 and 61.
(b) If the data are arranged in increasing order, then the sample 60th percentile is given by the average of the values in positions 45 and 46.
(c) If the data are arranged in increasing order, then the sample 30th percentile is the value in position 23.
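(With a sample of size n = 75, which these positions imply, the rule is: when np is an integer, as with 0.8 × 75 = 60 and 0.6 × 75 = 45, the sample 100p percentile is the average of the values in positions np and np + 1; when np is not an integer, as with 0.3 × 75 = 22.5, it is the value in the next higher position, here position 23.)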
3. (a) 95.5
(b) 96
5. (a) 70
(b) 58
(c) 52
7. 230c
9. 74, 85, 92
11. 25
Section 3.4
1. 1B, 2C, 3A
3. (a) 126
(b) 102, 110, 114
(c) 196
5. 5, 6, 6, 6, 8, 10, 12, 14, 23 is one such data set.
7. (a) 8 loops
(b) 2 miles
9. Answer (c) is most likely, by Benford’s law.
Section 3.5
1. (a) 13; (b) 43.81818.
3. (a) 6.18
(b) 6.77
11. (a) s² = 2.5, s = 1.58
(b) s² = 2.5, s = 1.58
(c) s² = 2.5, s = 1.58
(d) s² = 10, s = 3.16
(e) s² = 250, s = 15.81
13. For the first 50 students, s² = 172.24 and x̄ = 115.80.
For the last 50 students, s² = 178.96 and x̄ = 120.98.
The values of the statistics for the two data sets are similar. This is not
surprising.
15. 78.56 thousand
17. (a) 0.805
(b) 2.77
(c) 1.22
Section 3.6
1. (a)
(b) 25.75
(c) 26.5
(d) No
5. (a) 168,045
(b) 172,500
(c)
(d) Yes, if we ignore the data value 82. No, if we use all the data.
7. 95%, 94.85%
9. Sample mean
Section 3.7
1. Let (xi , yi ), i = 1, 2, 3 be the middle set of data pairs. Then the first set is
(121xi , 360 + yi ) and the third is (xi , 12 yi ), i = 1, 2, 3.
3. (a)
(b) Almost 1
(c) 0.86
(d) There is a relatively strong linear relationship between them.
5. −0.59; the linear relationship is relatively weak.
7. −0.441202; the linear relationship is relatively weak. But there is an in-
dication that when one of the variables is high, then the other tends to
be low.
9. 0.7429
11. All data = −0.33; first seven countries = −0.046
13. All data = 0.25; first seven countries = −0.3035
15. (d) Correlation is not causation.
17. No, correlation is not causation.
Section 3.8
1. L(1/6) = 25/245; L(2/6) = 57/245; L(3/6) = 117/245; L(4/6) = 157/245;
L(5/6) = 195/245; L(1) = 245/245
3. G = .4271
7. yes
CHAPTER 3 REVIEW
1. (a) −2, −1, 1, 2
(b) −2, −1, 0, 1, 2
(c) Part (a): mean = 0, median = 0; part (b): mean = 0, median = 0
3. (a) 29.3
(b) No
(c) First quartile is 27.7; second quartile, 29.3; third quartile, 31.1.
(d) 31.7
9. No
11. No, association is not causation.
13. .1426
15. 0.99846
Section 4.2
1. (a) S = {(R, R), (R, B), (R, Y ), (B, R), (B, B), (B, Y ), (Y, R), (Y, B),
(Y, Y )}
(b) {(Y, R), (Y, B), (Y, Y )}
(c) {(R, R), (B, B), (Y, Y )}
3. (a) {(U of M, OSU), (U of M, SJSC), (RC, OSU), (RC, SJSC), (SJSC,
OSU), (SJSC, SJSC), (Yale, OSU), (Yale, SJSC), (OSU, OSU), (OSU,
SJSC)}
(b) {(SJSC, SJSC), (OSU, OSU)}
(c) {(U of M, OSU), (U of M, SJSC), (RC, OSU), (RC, SJSC), (SJSC,
OSU), (Yale, OSU), (Yale, SJSC), (OSU, SJSC)}
(d) {(RC, OSU), (OSU, OSU), (SJSC, SJSC)}
5. S = {(France, fly), (France, boat), (Canada, drive), (Canada, train), (Canada, fly)}; A = {(France, fly), (Canada, fly)}
7. (a) ∅
(b) {1, 4, 6}
(c) {1, 3, 4, 5}
(d) {2}
9. (a) {(1, g), (1, f ), (1, s), (1, c), (0, g), (0, f ), (0, s), (0, c)}
(b) {(0, s), (0, c)}
(c) {(1, g), (1, f ), (0, g), (0, f )}
(d) {(1, g), (1, f ), (1, s), (1, c)}
11. (a) Ac is the event that a rolled die lands on an odd number.
(b) (Ac )c is the event a rolled die lands on an even number.
(c) (Ac )c = A.
13.
Section 4.3
1. (a) P(E) = 0.35; P(F) = 0.65; P(G) = 0.55
(b) P (E ∪ F ) = 1
(c) P (E ∪ G) = 0.8
(d) P (F ∪ G) = 0.75
(e) P (E ∪ F ∪ G) = 1
(f) P (E ∩ F ) = 0
(g) P (F ∩ G) = 0.45
(h) P (E ∩ G) = 0.1
(i) P (E ∩ F ∩ G) = 0
3. 1/10,000
5. If they are disjoint, it is impossible. If they are not disjoint, it is possible.
7. (a) 1
(b) 0.8
(c) 0.5
(d) 0.1
9. (a) 0.95
(b) 0.80
(c) 0.20
11. 0.7
13. 0.31%
15. 0.6
17. (a) A ∩ B c
(b) A ∩ B
(c) B ∩ Ac
(d) P (I) + P (II) + P (III)
(e) P (I) + P (II)
(f) P (II) + P (III)
(g) P (II)
Section 4.4
1. 88/216 ≈ 0.41
3. (a) 4/52 ≈ 0.08
(b) 48/52 ≈ 0.92
(c) 13/52 ≈ 0.25
(d) 1/52 ≈ 0.02
5. 2/3
7. (a) 0.56
(b) 0.1
9. (a) 0.4
(b) 0.1
11. 56
13. 1/19
15. (a) 0.1
(b) 0.1
17. (a) 10/31
(b) 9/31
(c) 1/3
(d) 11/31
(e) 7/31
Section 4.5
1. (a) 0.02/0.3 ≈ 0.067
(b) 0.02/0.03 ≈ 0.667
3. (a) 0.245
(b) 0.293
5. (a) 0.145
(b) 0.176
(c) 0.215
(d) 0.152
7. (a) 0.46
(b) 0.65
9. (a) 262/682
(b) 262/682
(c) 350/682
(d) 602/682
(e) 598/682
(f) 519/682
11. 1/169 ≈ 0.006
13. 0.6960
15. (a) 19/34 ≈ 0.56
(b) 1 − 19/34 ≈ 0.44
(c) 1/17 ≈ 0.06
17. Since P(B|A) > P(B), we have P(A ∩ B) = P(B|A)P(A) > P(B)P(A). Hence,
P(A|B) = P(A ∩ B)/P(B) > P(B)P(A)/P(B) = P(A)
19. 0.24
21. 0.68
23. (a) 7/12 ≈ 0.58
(b) 50
(c) 13/119 ≈ 0.11
(d) 35/204 ≈ 0.17
(e) 0.338
25. (a) 0.79; 0.21
(b) 0.81; 0.27
27. (a) 1/2
(b) 3/8
(c) 2/3
29. 1/16
31. No; the friends do not know each other.
33. P (A) = 1/13; P (B) = 1/4; P (A ∩ B) = 1/52; thus P (A ∩ B) = P (A)P (B).
35. 1/365
37. (a) 0.64
(b) 0.96
(c) 0.8704
39. Yes, P (A)P (B) = P (A ∩ B).
41. (a) 32/4805 ≈ 0.0067
(b) 729/1922 ≈ 0.38
(c) 0.060
(d) 0.045
(e) 0.006
(f) 0.111
CHAPTER 4 REVIEW
1. (a) 3/4
(b) 3/4
(c) 6/11
(d) 1/22
(e) 9/22
3. (a) 0.68
(b) 0.06
(c) 0.12
5. (a) 11/24
(b) 13/23
7. (a) 1/64
(b) 1/64
(c) 1/64
9. (a) S = {(chicken, rice, melon), (chicken, rice, ice cream), (chicken,
rice, gelatin), (chicken, potatoes, melon), (chicken, potatoes, ice
cream), (chicken, potatoes, gelatin), (roast beef, rice, melon), (roast
beef, rice, ice cream), (roast beef, rice, gelatin), (roast beef, pota-
toes, melon), (roast beef, potatoes, ice cream), (roast beef, potatoes,
gelatin)}
(b) {(chicken, potatoes, ice cream), (chicken, potatoes, gelatin), (roast
beef, potatoes, ice cream), (roast beef, potatoes, gelatin)}
(c) 1/3
(d) 1/12
11. (a) 1/3
(b) 1/3
(c) 1/3
(d) 1/2
13. 14/33 ≈ 0.424
5. i P {Y = i}
1 11/36
2 1/4
3 7/36
4 5/36
5 1/12
6 1/36
7. i P {X = i}
2 0.58
3 0.42
9. i P {X = i}
0 1199/1428
1 55/357
2 3/476
11. i P {X = i}
0 0.075
1 0.325
2 0.6
Section 5.3
1. (a) 2
(b) 5/3
(c) 7/3
3. $8.40
5. 1.9
7. (a) 2.53
(b) 4.47
9. $880
P (Y = 2) = p(1, 2) = 1/36
P (Y = 3) = p(1, 3) + p(2, 3) = 2/36
P (Y = 4) = p(1, 4) + p(2, 4) + p(3, 4) = 3/36
5. 35/12
7. 0.7071
Section 5.6
1. (a) 24
(b) 120
(c) 5040
3. 3,628,800
5. (a) 0.278692
(b) 0.123863
(c) 0.00786432
7. (a) 0.468559
(b) 0.885735
9. (a) 3 or more
(b) 0.00856
11. 0.144531
13. (a) 0.517747
(b) 0.385802
(c) 0.131944
15. (a) 0.421875
(b) 0.421875
(c) 0.140625
(d) 0.015625
17. (a) 10/3
(b) 20/3
(c) 10
(d) 50/3
19. (a) 0.430467
(b) 0.382638
(c) 7.2
(d) 0.72
21. (a) 0.037481
(b) 0.098345
(c) 0.592571
(d) 1.76
(e) 0.992774
23. (a) 0.00604662
(b) 0
25. (a) 50; 5
(b) 40; 4.89898
(c) 60; 4.89898
CHAPTER 5 REVIEW
1. (a) 0.4
(b) 0.6
3. (a) 1, 2, 3, 4
(b) i P (X = i)
1 0.3
2 0.21
3 0.147
4 0.343
(c) 0.7599
(d) 2.53
(e) 1.53
5. (a) 0.723
(b) No, because if she wins then she will win $1, whereas if she loses
then she will lose $3.
(c) −0.108
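(The value in (c) follows from the probability in (a): the expected gain is 0.723(1) − 0.277(3) = 0.723 − 0.831 = −0.108 dollars.)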
7. (a) i P (X = i)
0 0.7
4000 0.15
6000 0.15
(b) 1500
(c) 5,550,000
(d) 2,355.84
9. The low bid will maximize their expected profit.
11. (a) 1/3
(b) 1/4
(c) 7/24
(d) 1/12
(e) 1/24
(f) $625
(g) $125
13. (a) 0
(b) −68,750
(c) −68,750
(b) X
(c) X and Y are equally likely to exceed 100.
19. (a) No
(b) No
(c) No
(d) Yes
Section 6.4
1. (a) 0.9861
(b) 0.1357
(c) 0.4772
(d) 0.7007
(e) 0.975
(f) 0.2358
(g) 0.899
(h) 0.2302
(i) 0.8710
3. 3
7. (a) 1.65
(b) 1.96
(c) 2.58
(d) 0
(e) 0.41
(f) 2.58
(g) 1.15
(h) 0.13
(i) 0.67
Section 6.6
1. Since x > a, x − μ > a − μ. It follows that (x − μ)/σ > (a − μ)/σ since σ is positive.
3. 0.3085
5. (a) 0.6179
(b) 0.8289
(c) 0.4468
7. 0.008
9. (a) 0.1587
(b) 0.2514
(c) 0.4772
11. 0.8664
13. 6.31
15. (a) 0.2660
(b) 0.9890
(c) 0.7230
(d) 0.9991
(e) 0.0384
17. (a) 0.6915
(b) 0.24
Section 6.7
1. (a) 1.48
(b) 1.17
(c) 0.52
(d) 1.88
(e) −0.39
(f) 0
(g) −1.64
(h) 2.41
3. (a) 50
(b) 57.68
(c) 61.76
(d) 40.16
(e) 57.02
5. 464.22
7. 525.6
9. 746
11. (a) True
(b) True
13. 99.28
CHAPTER 6 REVIEW
1. (a) 0.9236
(b) 0.8515
(c) 0.0324
(d) 0.9676
(e) 0.1423
(f) 0.0007
(g) 75.524
(h) 73.592
(i) 68.3
3. 4.969
5. (a) 0.1587
(b) 0.1587
(c) 0.1886
(d) 576.8
7. (a) 0.881
(b) 0.881
(c) 0.762
9. (a) 0.4483
(b) 0.201
(c) 0.4247
11. (a) 0.6915
(b) 0.3859
(c) 0.1587
13. (a) 1/4
(b) 0.28965
15. (a) 0.8413
(b) 0.042
(c) independence
Section 7.3
1. (a) SD(X̄) = (1/2)/√3 ≈ 0.29
(b) SD(X̄) = (1/2)/√4 = 0.25
3. (a) 2
(b) √(2/3) ≈ 0.82
(c) i P {X = i}
1 1/9
1.5 2/9
2 3/9
2.5 2/9
3 1/9
(d) E(X̄) = 2, SD(X̄) = 1/√3 ≈ 0.58
(e) Yes
5. (a) E(X̄) = 2.4, SD(X̄) = 0.2/√36 ≈ 0.033
(b) E(X̄) = 2.4, SD(X̄) = 0.2/√64 ≈ 0.025
(c) E(X̄) = 2.4, SD(X̄) = 0.2/√100 ≈ 0.02
(d) E(X̄) = 2.4, SD(X̄) = 0.2/√900 ≈ 0.007
7. Expected value = 15,500, standard deviation = 2800
Section 7.4
1. (a) 0.5468
(b) 0.7888
(c) 0.9876
3. 0.7888
5. (a) 0.0062
(b) 0.7888
7. 0.9713
9. 0.1416
11. (a) 0.905
(b) 0.5704
13. (a) 0
(b) 0
15. (a) 0.6826
(b) 0.9544
(c) 1
(d) 1
(e) 1
Section 7.5
1. (a) E(X) = 0.6, SD(X) = 0.15
(b) E(X) = 0.6, SD(X) = 0.049
(c) E(X) = 0.6, SD(X) = 0.015
(d) E(X) = 0.6, SD(X) = 0.0049
3. (a) 0.0122
(b) 0.119
(c) 0.5222
9. (a) 0.0125
(b) 0.8508
11. 0.1949
13. 0.4602
CHAPTER 7 REVIEW
1. (a) 0.8413
(b) 0.5
(c) 0.0228
(d) 0.0005
3. E(X) = 3; SD(X) = 1/√2 ≈ 0.71
5. (a) Mean = 12, standard deviation = 3.25
(b) 0.5588
7. (a) 300
(b) 7√20 ≈ 31.3
(c) 0.5
9. 0.1003
11. (a) 0.3669
(b) 0.9918
(c) 0.9128
15. (a) .5785 without the continuity correction; .6540 with the correction.
(b) Yes
(c) It is the event that X + Y ≥ 10 − X + Z
(d) .9369 without using the continuity correction; .9502 with the cor-
rection.
Section 8.2
1. 145.5
5. 165.6 hours
7. 12
9. 3.23
11. (a)
Section 8.3
1. 0.3849
3. 0.65; 0.107
5. 0.412; 0.05
7. (a) 0.122
(b) 0.01
9. (a) 0.0233
(b) 0.0375
(c) 0.0867
11. (a) 0.245
(b) 0.022
13. (c); it is the most accurate in the sense of having the lowest standard error.
Section 8.3.1
1. 0.28
3. (b) 3.32; 1.73; 1.45
Section 8.4
1. 18.36
3. 799.7; 193.12
5. 21.27
7. 30.5
9. 12.64
11. 1.35
13. 0.0474; 0.2386
Section 8.5
1. (a) (3.06, 3.24)
(b) (3.03, 3.27)
3. (11.43, 11.53)
5. (a) (8852.87, 9147.13)
(b) (8824.69, 9175.31)
7. (72.53, 76.67)
9. (a) (1337.35, 1362.65)
(b) (1334.92, 1365.08)
(c) (1330.18, 1369.82)
11. 13.716
13. 3176
15. (a) 72.99
(b) 72.53
(c) 76.67
(d) 77.53
17. No
Section 8.6
1. (a) (5.15, 5.25)
(b) (5.13, 5.27)
1. (0.548, 0.660)
3. (a) (0.502, 0.519)
(b) (0.498, 0.523)
5. (0.546, 0.734)
7. (0.359, 0.411)
9. (0, 0.306)
11. (0.801, 0.874)
13. (0, 0.45)
15. (a) (0.060, 0.108)
(b) (0.020, 0.052)
(c) (0.448, 0.536)
17. (a) A 95% confidence interval is given by 0.75 ± 0.0346.
(b) Rather than using p̂ to estimate p in the standard error term they
used the upper bound p(1 − p) ≤ 1/4.
19. (a) 1692
(b) Less than 0.04 but greater than 0.02
(c) (0.213, 0.247)
21. 6147
23. 0.868
25. (a) 0.139
(b) 0.101
27. (a) No
(b) No
CHAPTER 8 REVIEW
1. (a)
3. (22.35, 26.45)
5. (316.82, 323.18)
7. (a) (44.84, 54.36)
(b) (45.66, 53.54)
9. (1527.47, 2152.53)
11. (a) 88.56
(b) (83.05, 94.06)
13. (a) (34.02, 35.98)
(b) (33.04, 36.96)
(c) (31.08, 38.92)
15. (0.487, 0.549)
17. 0.004
19. (a) (0.373, 0.419)
(b) (0.353, 0.427)
21. Upper
Section 9.2
1. (a) Hypothesis B
3. (d) is most accurate; (b) is more accurate than not.
Section 9.3
1. TS = 1.55; zα/2 = 1.96; do not reject H0 .
3. (a) 0.0026
(b) 0.1336
(c) 0.3174
At the 5% level of significance we reject H0 in (a). At the 1% level of
significance we reject H0 in (a).
5. Yes
7. (a) No
(b) 0
9. The data do not support a mean of 13,500 miles.
11. Yes; Yes
13. The p value is 0.281. Thus we reject this hypothesis at a level of signifi-
cance of 0.281 or greater.
15. (a) 0.2616
(b) 0.2616
(c) 0.7549
Section 9.3.1
1. (a) No
(b) No
(c) 0.091
3. (a) 0
(b) 0
(c) 0.0085
5. (a) Yes
(b) No, because the reduction in cavities is so small.
7. Yes, but increase the sample size.
9. The mean amount dispensed is less than 6 ounces; H0: μ ≥ 6; H1: μ < 6;
p value = 0.
Section 9.4
1. The evidence is not strong enough to discredit the manufacturer’s claim
at the 5% level of significance.
3. (a) Yes
(b) No
5. (a) No
(b) No
(c) No
(d) The p value is 0.108.
7. Yes
11. H0: μ ≥ 23 versus H1: μ < 23. The judge should rule for the bakery.
13. (a) H0: μ ≥ 31
(b) H1: μ < 31
(c) No
(d) No
15. No, the p value is 0.0068.
17. No; no
Section 9.5
1. p value = 0.0365; normal approximation is 0.0416
3. No
5. (a) H0: p ≤ 0.5; H1: p > 0.5
(b) 0.1356
(c) 0.0519
(d) 0.0042
As n increases, the p value decreases, because we have more confidence
in the estimate for larger n.
7. (a) No
(b) No
(c) No
(d) Yes
9. No; no
11. No
13. (a) Yes
(b) No
(c) 0.2005
CHAPTER 9 REVIEW
1. (b)
5. (a) No
(b) Yes
(c) Yes
7. There is insufficient evidence to support the claim at the 5% level of
significance.
9. One would probably rule against Caputo since the p value of the test H0: p = 1/2 against H1: p ≠ 1/2 is 0.000016.
15. (a) 20
(b) 0.4526
Section 10.2
1. (a) No
(b) 0
3. (a) There is evidence to support the hypothesis that the mean lengths
of their cuttings are equal.
(b) 0.8336
5. It suffices to relabel the data sets and use the given test.
7. No
Section 10.3
1. Yes; H0: μx = μy; H1: μx ≠ μy; p value = 0.0206
3. p value = 0.5664
5. Yes; 0
7. No; H0: μx ≤ μy ; H1: μx > μy , where x corresponds to rural students and
y corresponds to urban students.
9. H0: μB ≤ μA ; H1: μB > μA ; p value = 0.0838. At the 5% level of signifi-
cance supplier B should be used.
11. (a) H0: μm ≤ μf ; H1: μf < μm
(c) It indicates that the female average wage is less than the male aver-
age wage.
13. (a) The null hypothesis should be rejected for α = 0.01.
(b) 0.0066
(c) Reduction in mean score
Section 10.4
1. No; yes
3. (a) No
(b) No
5. Yes
7. Reject H0: μx = μy for α = 0.05; p value = 0.0028.
9. (a) Reject H0: μx = μy
(b) Reject H0: μx = μy
(c) Do not reject H0: μx = μy
Section 10.5
1. (a) Reject the hypothesis at α = 0.05.
(b) p value = 0.0015
3. Do not reject H0 .
5. (a) Do not reject the hypothesis.
(b) There is not evidence to reject the hypothesis at the 5% level of
significance.
7. Reject the hypothesis at α = 0.05.
9. (a) H0: μbefore ≤ μafter ; H1 : μbefore > μafter
(b) No
11. The null hypothesis is not rejected.
Section 10.6
1. (a) No
(b) No
3. (a) Yes
(b) 0.0178
5. (a) No
(b) 0.0856
7. Reject the hypothesis that the proportions were the same in 1983 and
1990; p value = 0.0017.
9. Reject the hypothesis for α = 0.05; p value = 0.
11. (a) Yes
(b) 0
13. No
15. Yes; H0: pplacebo ≤ paspirin (where p denotes the proportion that suffered heart attacks); p value = 0.
CHAPTER 10 REVIEW
1. (a) Reject H0: μx = μy
(b) 0
3. (a) Do not reject the hypothesis that the probabilities are the same.
(b) 0.5222
(c) No
(d) α ≥ 0.2611
5. (a) Reject H0: μx = μy
(b) 0.0497
7. Do not reject the hypothesis that the probabilities are the same.
9. Do not reject the hypothesis (p value = 0.79).
11. Do not reject the hypothesis that the proportions are the same in both
sports.
Section 11.2
1. (a) X1 = 8, X2 = 14, X3 = 11
(b) X = 11
3. Yes
5. No
7. Do not reject the hypothesis for α = 0.05.
9. Reject the hypothesis that death rates do not depend on season for α = 0.05.
11. No
Section 11.3
1. μ̂ = 68.8, α̂1 = 14.2, α̂2 = 6.53, α̂3 = −3.47, α̂4 = −3.47, α̂5 = −13.8, β̂1 = 0.8, β̂2 = −2.4, β̂3 = 1.6
3. μ̂ = 28.33, α̂1 = 1, α̂2 = −2, α̂3 = 1, β̂1 = 3.67, β̂2 = −0.67, β̂3 = −3
7. μ̂ = 9.585, α̂1 = −1.74, α̂2 = −1.96, α̂3 = 4.915, α̂4 = −1.36, α̂5 = −3.335, β̂1 = 0.495, β̂2 = −0.405, β̂3 = 0.795, β̂4 = −0.885
9. (a) 44
(b) 48
(c) 52
(d) 144
Section 11.4
1. (a) Yes
(b) No
3. (a) No
(b) No
5. (a) No (Reject H0 )
(b) Yes (Do not reject H0 )
7. The p-value in both cases is less than 0.0001.
9. (a) Reject the hypothesis for α = 0.05.
(b) Do not reject the hypothesis for α = 0.05.
CHAPTER 11 REVIEW
1. Reject the hypothesis for α = 0.05.
3. Yes for α = 0.05.
5. Do not reject the hypothesis for α = 0.05.
7. (a) Do not reject the hypothesis for α = 0.05.
(b) 30.6
(c) Reject the hypothesis for α = 0.05.
Section 12.2
1. (a)
(b) Yes
3. (a) Density; speed
(b)
(c) Yes
5. (a)
(b) No
Section 12.3
1. (a)
(b)
3. (a)
Section 12.4
1. 2.32
3. (a) 6
(b) 6
(c) 76
5. 0.000156
7. 6970.21
Section 12.5
1. Do not reject H0: β = 0.
3. Reject the hypothesis.
5. (a)
Section 12.6
1. (a) α = 10.48, β = 0.325
(b) Yes
7. Not as well as the heights
Section 12.7
1. (a) 12.6
(b) (6.4, 18.8)
3. (a) y = 44.818 − 0.3138x
(b) 28.814
(c) (25.083, 32.545)
(d) (6026.89, 9520.09)
5. (a) 2.501
(b) (2.493, 2.510)
7. (a) $33,266
(b) (27,263, 39,268)
(c) $42,074; (35,608, 48,541)
Section 12.8
1. (a)
Section 12.9
1. (a) 0.9796; 0.9897
(b) 0.9796; 0.9897
This indicates that the value of the sample correlation coefficient does
not depend on which variable is considered the independent variable.
3. (a) 0.8
(b) 0.8
(c) −0.8
(d) −0.8
5. (a) y = −3.16 + 1.24x
(b) y = 7.25 + 0.66x
(c) 0.818; 0.904
(d) 0.818; 0.904
Section 12.11
3. y = −153.51 + 51.75x1 + 0.077x2 + 20.92x3 + 13.10x4 ; 183.62
5. 69.99
CHAPTER 12 REVIEW
1. (a)
CHAPTER 13 REVIEW
1. Do not reject the hypothesis.
5. Reject the hypothesis.
7. Do not reject the hypothesis for α = 0.05.
9. Yes
11. No; no
13. (a) Do not reject the hypothesis.
(b) 0.208
15. Do not reject the hypothesis; do not reject the hypothesis.
Section 14.2
1. (a) p value = 0.057. Reject the null hypothesis at any significance level
greater than or equal to 0.057.
(b) p value ≈ 0. Reject the null hypothesis at any significance level.
(c) p value ≈ 0. Reject the null hypothesis at any significance level.
3. We cannot reject the null hypothesis that the two guns are equally effec-
tive.
5. Since n is small we use the binomial distribution to calculate the p value
= 0.291. Thus we cannot reject the hypothesis that the median score will
be at least 72.
7. Yes, this discredits the hypothesis. p value = 0.0028.
Section 14.3
1. (a) TS = 39
(b) TS = 42
(c) TS = 20
3. (a) p value = 0.2460
(b) p value = 0.8336
(c) p value = 0.1470
5. (a) Yes, how the paper is presented had an effect on the score given.
(b) p value = 0.0102
7. (a) The null hypothesis is rejected at any significance level greater than
or equal to 0.1250.
(b) The null hypothesis is rejected at any significance level greater than
or equal to 0.0348.
9. No, we cannot reject the null hypothesis. Painting does not affect an
aircraft’s cruising speed.
Section 14.4
1. (a) 94
(b) 77
3. p value = 0.8572
5. Since the p value = 0.2112, we cannot reject the null hypothesis that the starting salary distributions for MBAs from the two schools are the same.
7. p value = 0.4357
Section 14.5
1. (a) 41
(b) 2
3. Since the p value = 0.0648, we cannot reject the hypothesis that the data
constitutes a random sample.
5. Since the p value = 0.0548, we cannot reject the null hypothesis that the
interviewer interviewed them in a randomly chosen order.
7. (a) Median = 163.5
(b) Seven runs
(c) Since the p value = 0.0016, we must reject the null hypothesis at any significance level greater than or equal to 0.0016. The sequence of values does not constitute a random sample.
Section 14.7
1. The data strongly support the hypothesis that the student improved as
the semester progressed.
CHAPTER 14 REVIEW
1. Using the rank-sum test with TS = 113, we obtain a p value of 0.0348.
So we cannot reject the null hypothesis at the 1% level of significance,
but we must reject the null hypothesis at the 5% level.
3. Since the p value ≈ 0, we reject the null hypothesis; the median net worth has decreased.
5. We do a runs test, with median = 145, n = m = 20, and r = 21. Since r equals its expected value of 21, the p value is 1.0.
9. Using the signed-rank test, TS = 0 and the p value is 0.0444. Thus we reject the null hypothesis that there is no difference in the shoe sales at any significance level of 4.44 percent or higher.
11. Since the p value = 0.5620, we cannot reject the null hypothesis.
Section 15.2
1. (a) LCL = 85, UCL = 115
(b) LCL = 86.58, UCL = 113.42
(c) LCL = 87.75, UCL = 112.25
(d) LCL = 90.51, UCL = 109.49
3. LCL = 66.58, UCL = 93.42. Since subgroup 9 falls outside this range, the
process would have been declared out of control at that point.
5. LCL = −0.00671, UCL = 0.00671. Since all the subgroups are within
these control limits, the process is in control.
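The limits in these problems are of the form μ ± 3σ/√n, where n is the subgroup size. As an illustration only (not the text's data), the following short Python sketch computes such limits and flags subgroups that fall outside them; the parameter values are hypothetical ones chosen to reproduce the limits 66.58 and 93.42 quoted in Problem 3, and the subgroup means are made up.

    # X-bar control chart: limits are mu +/- 3*sigma/sqrt(n).
    # All numerical values here are illustrative, not taken from the problems.
    from math import sqrt

    mu, sigma, n = 80.0, 10.0, 5            # assumed in-control mean, SD, and subgroup size
    lcl = mu - 3 * sigma / sqrt(n)          # 66.58
    ucl = mu + 3 * sigma / sqrt(n)          # 93.42

    subgroup_means = [81.2, 78.9, 95.1]     # hypothetical subgroup averages
    for i, xbar in enumerate(subgroup_means, start=1):
        if not (lcl <= xbar <= ucl):
            print("subgroup", i, "falls outside the control limits; process out of control")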
Section 15.3
1. LCL = 0, UCL = 13.23. Since all the subgroups are within the control
limits, the process is in control.
3. (a) Since all the subgroups are within the control limits, the process is
in control.
(b) LCL = 0, UCL = 9.88
CHAPTER 15 REVIEW
1. LCL = 1.4985, UCL = 1.5015.
3. LCL = 0, UCL = 13.23. Since all the subgroups are within these control
limits, the process is in control.
APPENDIX A
A Data Set
Student Weight Cholesterol Pressure Gender Student Weight Cholesterol Pressure Gender
1 147 213 127 F 30 129 194 114 M
2 156 174 116 M 31 111 184 104 F
3 112 193 110 F 32 156 191 118 M
4 127 196 110 F 33 155 221 107 F
5 144 220 130 F 34 104 212 111 F
6 140 183 99 M 35 217 221 156 M
7 119 194 112 F 36 132 204 117 F
8 139 200 102 F 37 103 204 121 F
9 161 192 121 M 38 171 191 105 M
10 146 200 125 F 39 135 183 110 F
11 190 200 125 M 40 249 227 137 M
12 126 199 133 F 41 185 188 119 M
13 164 178 130 M 42 194 200 109 M
14 176 183 136 M 43 165 197 123 M
15 131 188 112 F 44 121 208 100 F
16 107 193 113 F 45 124 218 102 F
17 116 187 112 F 46 113 194 119 F
18 157 181 129 M 47 110 212 119 F
19 186 193 137 M 48 136 207 99 F
20 189 205 113 M 49 221 219 149 M
21 147 196 113 M 50 151 201 109 F
22 112 211 110 F 51 182 208 130 M
23 209 202 97 M 52 151 192 107 M
24 135 213 103 F 53 182 192 136 M
25 168 216 95 M 54 149 191 124 M
26 209 206 107 M 55 162 196 132 M
27 102 195 102 F 56 168 193 92 M
28 166 191 111 M 57 185 185 123 M
29 132 171 112 M 58 191 201 118 M
59 173 185 114 M 102 184 192 129 M
60 186 203 114 M 103 179 202 129 M
61 161 177 119 M 104 105 211 109 F
62 149 213 124 F 105 157 179 109 M
63 103 192 104 F 106 202 210 124 M
64 126 193 99 F 107 140 188 112 F
65 181 212 141 M 108 165 203 114 F
66 190 188 124 M 109 184 199 151 M
67 124 201 114 F 110 132 195 129 F
68 175 219 125 M 111 119 202 117 F
69 161 189 120 M 112 158 195 112 M
70 160 203 108 F 113 138 217 101 F
71 171 186 111 M 114 177 194 136 M
72 176 186 114 M 115 99 204 129 F
73 156 196 99 M 116 177 198 126 M
74 126 195 123 F 117 134 195 111 F
75 138 205 113 F 118 133 168 98 M
76 136 223 131 F 119 194 201 120 M
77 192 195 125 M 120 140 211 132 F
78 122 205 110 F 121 104 195 106 F
79 176 198 96 M 122 191 180 130 M
80 195 215 143 M 123 184 205 116 M
81 126 202 102 F 124 155 189 117 M
82 138 196 124 F 125 126 196 112 F
83 166 196 103 M 126 190 195 124 M
84 86 190 106 F 127 132 218 120 F
85 90 185 110 F 128 133 194 121 F
86 177 188 109 M 129 174 203 128 M
87 136 197 129 F 130 168 190 120 M
88 103 196 95 F 131 190 196 132 M
89 190 227 134 M 132 176 194 107 M
90 130 211 119 F 133 121 210 118 F
91 205 219 130 M 134 131 167 105 M
92 127 202 121 F 135 174 203 88 M
93 182 204 129 M 136 112 183 94 F
94 122 213 116 F 137 121 203 116 F
95 139 202 102 F 138 132 194 104 F
96 189 205 102 M 139 155 188 111 M
97 147 184 114 M 140 127 189 106 F
98 180 198 123 M 141 151 193 120 M
99 130 180 94 M 142 189 221 126 M
100 130 204 118 F 143 123 194 129 F
101 150 197 110 F 144 137 196 113 F
145 122 201 113 F 187 108 185 96 F
146 126 212 121 F 188 126 194 122 F
147 136 210 120 F 189 175 201 138 M
148 145 168 115 M 190 168 182 118 M
149 202 202 122 M 191 115 194 122 F
150 151 206 108 F 192 129 193 90 F
151 137 178 128 M 193 131 209 119 F
152 90 178 100 F 194 187 182 134 M
153 177 220 123 M 195 185 200 127 M
154 139 214 120 F 196 114 196 113 F
155 172 191 117 M 197 206 216 124 M
156 107 179 106 F 198 151 212 113 F
157 186 209 129 M 199 128 204 110 F
158 198 196 140 M 200 128 204 115 F
159 113 184 110 F 201 183 190 136 M
160 143 209 105 F 202 104 192 93 F
161 205 198 137 M 203 99 209 110 F
162 186 206 111 M 204 201 208 120 M
163 174 189 129 M 205 129 204 100 F
164 171 197 132 M 206 149 193 117 F
165 209 202 128 M 207 123 200 120 F
166 126 203 134 F 208 179 191 122 M
167 160 185 109 M 209 150 216 128 F
168 127 212 124 F 210 133 193 110 F
169 112 193 115 F 211 112 190 107 F
170 155 184 112 M 212 175 188 113 M
171 111 181 111 F 213 120 182 126 F
172 151 196 129 M 214 126 207 110 F
173 110 181 113 F 215 170 201 101 M
174 159 192 115 M 216 175 211 115 M
175 173 196 131 M 217 134 219 129 F
176 148 191 101 M 218 118 211 113 F
177 141 216 110 F 219 118 178 109 F
178 161 186 123 M 220 164 196 107 M
179 125 209 113 F 221 186 190 134 M
180 114 200 109 F 222 172 189 134 M
181 125 206 135 F 223 173 207 101 M
182 129 214 100 F 224 185 206 128 M
183 115 207 115 F 225 190 198 117 M
184 142 197 118 F 226 146 200 112 F
185 183 202 114 M 227 103 179 100 F
186 181 212 118 M 228 124 215 124 F
229 186 213 124 M 271 195 195 148 M
230 166 166 129 M 272 199 201 125 M
231 138 201 120 F 273 148 202 120 F
232 175 198 118 M 274 164 190 113 M
233 104 194 100 F 275 137 196 107 F
234 213 206 130 M 276 133 173 121 M
235 171 182 118 M 277 104 214 112 F
236 180 213 119 M 278 126 194 116 F
237 187 197 128 M 279 120 220 116 F
238 117 194 106 F 280 148 204 131 F
239 108 185 105 F 281 100 206 89 F
240 128 202 105 F 282 178 190 125 M
241 170 196 118 M 283 149 188 108 F
242 183 176 126 M 284 157 194 124 M
243 143 190 101 M 285 99 203 95 F
244 160 205 120 F 286 192 208 127 M
245 185 184 113 M 287 175 181 145 M
246 122 193 142 F 288 208 193 123 M
247 225 218 142 M 289 201 208 138 M
248 139 191 99 F 290 174 199 111 M
249 123 207 116 F 291 188 189 119 M
250 129 176 108 F 292 151 205 133 F
251 142 220 137 F 293 202 220 126 M
252 146 191 116 M 294 125 198 106 F
253 129 201 100 F 295 176 190 116 M
254 163 171 119 M 296 183 188 96 M
255 177 206 134 M 297 118 198 130 F
256 183 190 116 M 298 125 204 111 F
257 120 201 104 F 299 237 209 127 M
258 188 214 115 M 300 124 186 127 F
259 140 182 119 M 301 98 194 104 F
260 166 197 113 M 302 182 199 108 M
261 122 199 107 F 303 184 206 149 M
262 177 207 124 M 304 137 189 113 F
263 184 204 122 M 305 126 177 111 F
264 113 198 121 F 306 202 198 130 M
265 214 221 142 M 307 225 212 142 M
266 144 205 111 M 308 181 200 122 M
267 188 188 132 M 309 178 187 121 M
268 114 204 127 F 310 132 221 110 F
269 158 213 111 F 311 164 201 134 M
270 146 196 116 M 312 163 191 138 M
APPENDIX B
Mathematical Preliminaries
For instance, if s = x1 + x2 + x3 + x4 is the sum of four numbers x1, x2, x3, x4, we can write s = Σ_{i=1}^{4} xi, which means that s is equal to the sum of the xi values as i ranges from 1 to 4.
The summation notation is quite useful when we want to sum a large number
of quantities. For instance, suppose that we were given 100 numbers, desig-
nated as x1 , x2 , and so on, up to x100 . We could then compactly express s, the
sum of these numbers, as
s = Σ_{i=1}^{100} xi
If we want the sum to include only the 60 numbers starting at x20 and ending
at x79 , then we could express this sum by the notation
Σ_{i=20}^{79} xi
That is, Σ_{i=20}^{79} xi is the sum of the xi values as i ranges from 20 to 79.
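As a concrete illustration, such sums can be written directly in a language like Python, with a list x holding x1 through x100 in order:

    # Summation notation as sums over list elements: x[0] holds x_1, ..., x[99] holds x_100.
    x = list(range(1, 101))       # any 100 numbers would do; here x_i = i
    s = sum(x)                    # the sum of x_i for i = 1, ..., 100
    partial = sum(x[19:79])       # the sum of x_i for i = 20, ..., 79 (60 terms)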
The absolute value of a nonnegative number is the number itself, whereas the absolute value of a negative number is its negative. We use the symbol |x| to denote the absolute value of the number x. Thus,
|x| = x if x ≥ 0, and |x| = −x if x < 0
If we represent each real number by a point on a straight line, then |x| is the
distance from point x to the origin 0. This is illustrated by Fig. B.1.
If x and y are any two numbers, then |x − y| is equal to the distance between x
and y. For instance, if x = 5 and y = 2, then |x − y| = |5 − 2| = |3| = 3. On the
other hand, if x = 5 and y = −2, then |x − y| = |5 − (−2)| = |5 + 2| = 7. That
is, the distance between 5 and 2 is 3, whereas the distance between 5 and −2
is 7.
APPENDIX C
How to Choose a Random Sample
The next element of the random sample is then equally likely to be any of the elements in the first 6 positions. Thus we select a value that is equally
likely to be 1, 2, 3, 4, 5, or 6; the element in that position will become part
of the random sample. And to indicate this and to leave the first 5 positions
for the elements that have not yet been chosen, we interchange the element in
the position chosen with the element in position 6. For instance, if the value
chosen was 4, then the element in position 4 (that is, element number 7)
becomes part of the random sample, and the new list ordering is
1, 2, 3, 6, 5, 7, 4
The final element of the random sample is equally likely to be any of the el-
ements in positions 1 through 5, so we select a value that is equally likely to
be 1, 2, 3, 4, or 5 and interchange the element in that position with the one in
position 5. For instance, if the value is 2, then the new ordering is
1, 5, 3, 6, 2, 7, 4
Since there are now three elements in the random sample, namely, 2, 7, and 4,
the process is complete.
To implement the foregoing algorithm for generating a random sample, we
need to know how to generate the value of a random quantity that is equally
likely to be any of the numbers 1, 2, 3, . . . , k. The key to doing this is to make
use of random numbers that are the values of random variables that are uni-
formly distributed over the interval (0, 1). Most computers have a built-in
random number generator that allows one to call for the value of such a quan-
tity. If U designates a random number—that is, U is uniformly distributed over
the interval (0, 1)—then it can be shown that
I = Int (kU ) + 1
will be equally likely to be any of the values 1, 2, . . . , k, where Int (x) stands for
the integer part of x. For instance,
Int (4.3) = 4
Int (12.9) = 12
and so on.
Program A-1 uses these to generate a random sample of size n from the set
of numbers 1, 2, . . . , N . When running this program, you will be asked first to
enter the values of n and N and then to enter any four-digit number. For this
last request, just type and enter any number that comes to mind. The output
from this program is the subset of size n that constitutes the random sample.
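In outline, the procedure can be implemented in a few lines of Python (the sketch below is an illustration, not the text's Program A-1): it repeatedly generates I = Int(kU) + 1, records the element in position I, and interchanges it with the element in position k.

    # Sketch of the random-sampling procedure: after the loop, the last n
    # positions of the list hold a random sample of size n from 1, 2, ..., N.
    import random

    def random_sample(n, N):
        items = list(range(1, N + 1))              # the population, listed in positions 1, ..., N
        for k in range(N, N - n, -1):              # k = N, N - 1, ..., N - n + 1
            u = random.random()                    # U, uniformly distributed over (0, 1)
            i = int(k * u) + 1                     # I = Int(kU) + 1 is equally likely to be 1, ..., k
            items[i - 1], items[k - 1] = items[k - 1], items[i - 1]
        return items[N - n:]

    # For instance, random_sample(12, 200) returns 12 of the numbers 1 through 200.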
■ Example C.1
Suppose we want to choose a random sample of size 12 from a popula-
tion of 200 members. To do so, we start by arbitrarily numbering the 200
members of the population so that they now have numbers 1 to 200. We
run Program A-1 to obtain the 12 members of the population that are to
constitute the random sample.
APPENDIX D
Tables
Table D.4 Percentiles of F Distributions
95th Percentiles of Fn,m Distributions
Degrees of freedom for the numerator n
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
Degrees of freedom for the denominator m
1 161.4 199.5 215.7 224.6 230.2 234.0 236.8 238.9 240.5 241.9 243.9 245.9 248.0 249.1 250.1 251.1 252.2 253.3 254.3
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.41 19.43 19.45 19.45 19.46 19.47 19.48 19.49 19.50
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66 8.64 8.62 8.59 8.57 8.55 8.53
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80 5.77 5.75 5.72 5.69 5.66 5.63
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56 4.53 4.50 4.46 4.43 4.40 4.36
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87 3.84 3.81 3.77 3.74 3.70 3.67
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44 3.41 3.38 3.34 3.30 3.27 3.23
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15 3.12 3.08 3.04 3.01 2.97 2.93
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94 2.90 2.86 2.83 2.79 2.75 2.71
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85 2.77 2.74 2.70 2.66 2.62 2.58 2.54
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72 2.65 2.61 2.57 2.53 2.49 2.45 2.40
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54 2.51 2.47 2.43 2.38 2.34 2.30
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53 2.46 2.42 2.38 2.34 2.30 2.25 2.21
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46 2.39 2.35 2.31 2.27 2.22 2.18 2.13
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33 2.29 2.25 2.20 2.16 2.11 2.07
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.35 2.28 2.24 2.19 2.15 2.11 2.06 2.01
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.31 2.23 2.19 2.15 2.10 2.06 2.01 1.96
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.27 2.19 2.15 2.11 2.06 2.02 1.97 1.92
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.23 2.16 2.11 2.07 2.03 1.98 1.93 1.88
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12 2.08 2.04 1.99 1.95 1.90 1.84
21 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 2.32 2.25 2.18 2.10 2.05 2.01 1.96 1.92 1.87 1.81
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.23 2.15 2.07 2.03 1.98 1.94 1.89 1.84 1.78
23 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 2.27 2.20 2.13 2.05 2.01 1.96 1.91 1.86 1.81 1.76
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.11 2.03 1.98 1.94 1.89 1.84 1.79 1.73
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.16 2.09 2.01 1.96 1.92 1.87 1.82 1.77 1.71
26 4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27 2.22 2.15 2.07 1.99 1.95 1.90 1.85 1.80 1.75 1.69
27 4.21 3.35 2.96 2.73 2.57 2.46 2.37 2.31 2.25 2.20 2.13 2.06 1.97 1.93 1.88 1.84 1.79 1.73 1.67
28 4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24 2.19 2.12 2.04 1.96 1.91 1.87 1.82 1.77 1.71 1.65
29 4.18 3.33 2.93 2.70 2.55 2.43 2.35 2.28 2.22 2.18 2.10 2.03 1.94 1.90 1.85 1.81 1.75 1.70 1.64
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93 1.89 1.84 1.79 1.74 1.68 1.62
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84 1.79 1.74 1.69 1.64 1.58 1.51
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75 1.70 1.65 1.59 1.53 1.47 1.39
120 3.92 3.07 2.68 2.45 2.29 2.17 2.09 2.02 1.96 1.91 1.83 1.75 1.66 1.61 1.55 1.50 1.43 1.35 1.25
∞ 3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.67 1.57 1.52 1.46 1.39 1.32 1.22 1.00
90th Percentiles of F Distributions
Degrees of freedom for the numerator n
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
1 39.86 49.50 53.59 55.83 57.24 58.20 58.91 59.44 59.86 60.19 60.71 61.22 61.74 62.00 62.26 62.53 62.79 63.06 63.33
Degrees of freedom for the denominator m
2 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38 9.39 9.41 9.42 9.44 9.45 9.46 9.47 9.47 9.48 9.49
3 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24 5.23 5.22 5.20 5.18 5.18 5.17 5.16 5.15 5.14 5.13
4 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94 3.92 3.90 3.87 3.84 3.83 3.82 3.80 3.79 3.78 3.76
5 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32 3.30 3.27 3.24 3.21 3.19 3.17 3.16 3.14 3.12 3.10
6 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96 2.94 2.90 2.87 2.84 2.82 2.80 2.78 2.76 2.74 2.72
7 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72 2.70 2.67 2.63 2.59 2.58 2.56 2.54 2.51 2.49 2.47
8 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56 2.54 2.50 2.46 2.42 2.40 2.38 2.36 2.34 2.32 2.29
9 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 2.42 2.38 2.34 2.30 2.28 2.25 2.23 2.21 2.18 2.16
10 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35 2.32 2.28 2.24 2.20 2.18 2.16 2.13 2.11 2.08 2.06
11 3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27 2.25 2.21 2.17 2.12 2.10 2.08 2.05 2.03 2.00 1.97
12 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21 2.19 2.15 2.10 2.06 2.04 2.01 1.99 1.96 1.93 1.90
13 3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16 2.14 2.10 2.05 2.01 1.98 1.96 1.93 1.90 1.88 1.85
14 3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12 2.10 2.05 2.01 1.96 1.94 1.91 1.89 1.86 1.83 1.80
15 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09 2.06 2.02 1.97 1.92 1.90 1.87 1.85 1.82 1.79 1.76
16 3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06 2.03 1.99 1.94 1.89 1.87 1.84 1.81 1.78 1.75 1.72
17 3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03 2.00 1.96 1.91 1.86 1.84 1.81 1.78 1.75 1.72 1.69
18 3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00 1.98 1.93 1.89 1.84 1.81 1.78 1.75 1.72 1.69 1.66
19 2.99 2.61 2.40 2.27 2.18 2.11 2.06 2.02 1.98 1.96 1.91 1.86 1.81 1.79 1.76 1.73 1.70 1.67 1.63
20 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96 1.94 1.89 1.84 1.79 1.77 1.74 1.71 1.68 1.64 1.61
21 2.96 2.57 2.36 2.23 2.14 2.08 2.02 1.98 1.95 1.92 1.87 1.83 1.78 1.75 1.72 1.69 1.66 1.62 1.59
22 2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93 1.90 1.86 1.81 1.76 1.73 1.70 1.67 1.64 1.60 1.57
23 2.94 2.55 2.34 2.21 2.11 2.05 1.99 1.95 1.92 1.89 1.84 1.80 1.74 1.72 1.69 1.66 1.62 1.59 1.55
24 2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91 1.88 1.83 1.78 1.73 1.70 1.67 1.64 1.61 1.57 1.53
25 2.92 2.53 2.32 2.18 2.09 2.02 1.97 1.93 1.89 1.87 1.82 1.77 1.72 1.69 1.66 1.63 1.59 1.56 1.52
26 2.91 2.52 2.31 2.17 2.08 2.01 1.96 1.92 1.88 1.86 1.81 1.76 1.71 1.68 1.65 1.61 1.58 1.54 1.50
27 2.90 2.51 2.30 2.17 2.07 2.00 1.95 1.91 1.87 1.85 1.80 1.75 1.70 1.67 1.64 1.60 1.57 1.53 1.49
28 2.89 2.50 2.29 2.16 2.06 2.00 1.94 1.90 1.87 1.84 1.79 1.74 1.69 1.66 1.63 1.59 1.56 1.52 1.48
29 2.89 2.50 2.28 2.15 2.06 1.99 1.93 1.89 1.86 1.83 1.78 1.73 1.68 1.65 1.62 1.58 1.55 1.51 1.47
30 2.88 2.49 2.28 2.14 2.03 1.98 1.93 1.88 1.85 1.82 1.77 1.72 1.67 1.64 1.61 1.57 1.54 1.50 1.46
40 2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79 1.76 1.71 1.66 1.61 1.57 1.54 1.51 1.47 1.42 1.38
60 2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74 1.71 1.66 1.60 1.54 1.51 1.48 1.44 1.40 1.35 1.29
120 2.75 2.35 2.13 1.99 1.90 1.82 1.77 1.72 1.68 1.65 1.60 1.55 1.48 1.45 1.41 1.37 1.32 1.26 1.19
∞ 2.71 2.30 2.08 1.94 1.85 1.77 1.72 1.67 1.63 1.60 1.55 1.49 1.42 1.38 1.34 1.30 1.24 1.17 1.00
99th Percentiles of F Distributions
Degrees of freedom for the numerator n
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
Degrees of freedom for the denominator m
1 4052 4999.5 5403 5625 5764 5859 5928 5982 6022 6056 6106 6157 6209 6235 6261 6287 6313 6339 6366
2 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39 99.40 99.42 99.43 99.45 99.46 99.47 99.47 99.48 99.49 99.50
3 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35 27.23 27.05 26.87 26.69 26.60 26.50 26.41 26.32 26.22 26.13
4 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66 14.55 14.37 14.20 14.02 13.93 13.84 13.75 13.65 13.56 13.46
5 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16 10.05 9.89 9.72 9.55 9.47 9.38 9.29 9.20 9.11 9.02
6 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.72 7.56 7.40 7.31 7.23 7.14 7.06 6.97 6.88
7 12.25 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.47 6.31 6.16 6.07 5.99 5.91 5.82 5.74 5.65
8 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.67 5.52 5.36 5.28 5.20 5.12 5.03 4.95 4.86
9 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.11 4.96 4.81 4.73 4.65 4.57 4.48 4.40 4.31
10 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.71 4.56 4.41 4.33 4.25 4.17 4.08 4.00 3.91
11 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 4.54 4.40 4.25 4.10 4.02 3.94 3.86 3.78 3.69 3.60
12 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.16 4.01 3.86 3.78 3.70 3.62 3.54 3.45 3.36
13 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 4.10 3.96 3.82 3.66 3.59 3.51 3.43 3.34 3.25 3.17
14 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 3.94 3.80 3.66 3.51 3.43 3.35 3.27 3.18 3.09 3.00
15 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.67 3.52 3.37 3.29 3.21 3.13 3.05 2.96 2.87
16 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 3.69 3.55 3.41 3.26 3.18 3.10 3.02 2.93 2.84 2.75
17 8.40 6.11 5.18 4.67 4.34 4.10 3.93 3.79 3.68 3.59 3.46 3.31 3.16 3.08 3.00 2.92 2.83 2.75 2.65
18 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 3.51 3.37 3.23 3.08 3.00 2.92 2.84 2.75 2.66 2.57
19 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52 3.43 3.30 3.15 3.00 2.92 2.84 2.76 2.67 2.58 2.49
20 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.23 3.09 2.94 2.86 2.78 2.69 2.61 2.52 2.42
21 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40 3.31 3.17 3.03 2.88 2.80 2.72 2.64 2.55 2.46 2.36
22 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 3.26 3.12 2.98 2.83 2.75 2.67 2.58 2.50 2.40 2.31
23 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30 3.21 3.07 2.93 2.78 2.70 2.62 2.54 2.45 2.35 2.26
24 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.17 3.03 2.89 2.74 2.66 2.58 2.49 2.40 2.31 2.21
25 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22 3.13 2.99 2.85 2.70 2.62 2.54 2.45 2.36 2.27 2.17
26 7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18 3.09 2.96 2.81 2.66 2.58 2.50 2.42 2.33 2.23 2.13
27 7.68 5.49 4.60 4.11 3.78 3.56 3.39 3.26 3.15 3.06 2.93 2.78 2.63 2.55 2.47 2.38 2.29 2.20 2.10
28 7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12 3.03 2.90 2.75 2.60 2.52 2.44 2.35 2.26 2.17 2.06
29 7.60 5.42 4.54 4.04 3.73 3.50 3.33 3.20 3.09 3.00 2.87 2.73 2.57 2.49 2.41 2.33 2.23 2.14 2.03
30 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.84 2.70 2.55 2.47 2.39 2.30 2.21 2.11 2.01
40 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.66 2.52 2.37 2.29 2.20 2.11 2.02 1.92 1.80
60 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.50 2.35 2.20 2.12 2.03 1.94 1.84 1.73 1.60
120 6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.34 2.19 2.03 1.95 1.86 1.76 1.66 1.53 1.38
∞ 6.63 4.61 3.78 3.32 3.02 2.80 2.64 2.51 2.41 2.32 2.18 2.04 1.88 1.79 1.70 1.59 1.47 1.32 1.00
Programs
Index
Central limit theorem, 303, 305, 307, problems for, 169 Course of Experimental Agriculture
309, 324 Confidence (Young), 446
approximately normal data sets, 90, 95, and 99 percent of, 346, Critical region, definition of, 424
99, 100 347 Cumulative relative frequency table,
definition of, 303 centered intervals, 349 40
error measurement, 304 definition of, 344 Cumulative sum control charts, 668
for various populations, 309 interval estimators, 346, 349, 351 problems for, 672
historical perspective on, 306 level percentiles of, 346
normal curves, 305 Confidence bounds, 351, 360, 369 D
problems for, 310 for interval estimation of Data
sample mean distribution and, population proportions, 369 approximately symmetric, 20, 25,
309 upper and lower, 360, 369 37
sample sizes for, 309 Confidence intervals, lengths of, 377 collection, 3
Chi-squared density function, 322 Constants and properties of detected by histograms, 34
Chi-squared distributions, degrees of expected value, 214 manipulation of, and scientific
freedom of, 323 sample variance, 92 fraud, 585
Chi-squared goodness-of-fit tests Contingency table modern approach to, 2
concepts, 588 definition of, 600 paired, 47, 49
definition of, 656 independence testing, 599 symmetric, 20, 25
independence in contingency with fixed marginal totals, 608 Data mining, 400
tables with fixed marginal problems for independence Data sets
totals, 608 testing, 611 approximately normal, 99, 100,
independence of two Continuity correction, 316 102, 105, 106
characteristics, 604 bimodal, 101, 102
Continuous distributions, 263
contingency tables, 608 biological, 520, 548
Continuous random variables, 264
degrees of freedom, 601 central limit theorem, 548
definition of, 264
summary table, 604 central tendencies of, 66, 76, 90
probability density function in,
introduction to, 585 comparison rankings, 655, 657
263–265, 267, 268
key terms for, 614 constructing histograms from, 32
problems for, 265
p value of, 591 finding sample variance for, 90
Control charts, 676, 681, 685, 689
review problems, 617 frequency tables and graphs, 18,
cumulative sum, 689
summary of, 614 19, 21, 23
EWMA, 686–688
summary table, 594 grouped data and histograms, 31,
testing null hypothesis in, 589 for fraction defective, 681
33, 35
Chi-squared percentiles, 590, 595 problems for, 672
histograms of, 99
Chi-squared random variables, 321, Control/control group, 4 introduction to, 17
322, 505, 506, 534, 590, 614 Arthur Young (historical key terms for, 54
Class boundaries, 31, 32 perspective), 446 normal, 99, 101, 103
Class intervals, 32, 34 hypothesis testing, 443, 472 paired data, 47, 49
Coefficient of determination, 557, normal distributions and, 671 review problems, 58
576 samples as, 445 skewed, 100
definition of, 264, 557 Correlations, see also Sample stem-and-leaf plots, 41, 102
problems for, 558 correlation coefficient summarizing, see Statistics
Column factors, in ANOVA, 489, 500 associations measured by, 109 De Mere, Chevaller, on probability,
Combinations, 188 negative, 109 150
Comparison rankings, 655, 657 positive, 108 De Moivre, Abraham, normal
Complement, 142, 143, 145, 156, Counting arguments, 187 distribution and, 264, 265,
169, 172, 191, 193, 279 Counting principles, 181, 183, 185, 292
Complements, 142 187 Degrees of freedom
Conditional probability, 159, 161, basic, 182, 183 definition of, 321
163, 165, 167, 191 generalized basic, 183, 184 error random variables, 533, 535
Bayes’ theorem and, 177 notation of, 185 of chi-squared distributions, 322
definition of, 159 problems for, 188 of random variables, 355
one-factor ANOVA, 489, 495 review problems, 377 point estimator of population
remarks on F random variable, summary of, 324 proportion, 333
492–495 Doll, R., 65 problems for, 332
values of F , 492, 493 Doll-Hill study, 66 review problems, 377
Densities Double-summation notation, 502 summary, 375
of sample means, 300 Dummy variables for categorical Estimators
symmetric about zero, 355, 630, data, 567 case studies for, 367
632 definition of, 330
Density curves, 65 in one-factor ANOVA, 491
Density curves, see also Probability
E in two-factor ANOVA, 504
Empirical frequencies, bell-shaped
density function least-square, 521
curves, 304
of uniform random variables, 266 point, of population mean, 330
Empirical rule, 99–101, 103
Density percentile, in interval population variance, 450
approximation rule, 269
estimations, 356 unbiased, definition of, 330
definition of, 100
Dependent variables, 520 Events, 141, 153, 166, 167, 191
Descartes, René, 53 historical perspective on, 104
complements, 143
Descriptive statistics, 10 normal data sets and, 99
definition of, 141
definition of, 4 problems for, 104
disjoint or mutually exclusive, 142
Deviations Equality testing independent, 166, 167
definition of, 71 of equality of means intersection/union, 141
historical perspective on, 71 known variances, 435 null, 142, 191
ith deviation, 71, 130 small-sample, with equal EWMA control charts, see
Dice, fair, 167 unknown variances, 450 Exponentially weighted
Discrete random variables unknown variances and large moving-average control charts
binomial, 243–245, 248–250, sample size, 442 Expectation, 228
252, 255, 257, 258 of multiple probability definition of, 212
concepts, 205, 207 distributions, 467, 469, 471, Expected value, 211, 213, 215, 217
definition of, 206 473 center of gravity analogy, 214
expected value, 203, 212, 215, 216 of population proportions, 467 definition of, 228
hypergeometric, 249, 250 Equally likely outcomes, 155 frequency interpretation of
hypergeometric random variables, random selection, 190 probabilities, 212
249, 250 Error random variable, 533, 535 of binomial random variables,
key terms for, 256 multiple linear regression model, 245
Poisson, 251–255, 258 519, 521 of chi-squared random variables,
Poisson random variables, 252 problems for, 536 322
probability distribution, 207, 222 Error sum of squares, in two-factor of hypergeometric random
problems for, random variables, ANOVA tests, 506, 515 variables, 249, 250
232–234, 236 Estimated regression line of Poisson random variables, 251
review problems, 258 definition of, 526 of sample means, 299
summary of, 256 scatter diagram of, 528 of sums, 215–217, 226, 653
variance of, 225, 226, 228, 245, Estimation of sums/products using constants,
249, 250, 258 interval estimators of mean, 343, 214
Disjoint events, 142, 143, 145–147, 345, 347, 349, 351 of zero, 224
150, 151, 165 interval estimators of population population means and, 300
Distributions proportion, 365 problems for, 218
central limit theorem and sample introduction to, 369 properties of, 214
mean, 306 key terms for, 374 Experiment, 140, 141
chi-squared, 321, 322 of population variance, 339 definition of, 140
continuous, 263 of probability of sensitive events, equally likely outcomes in, 154
key terms for, 290 337 problems for, 143, 149
of sample variance of normal point estimator for population Exponential bell-shaped curve, 292
populations, 321 mean, 330 Exponential distribution, 309
two population tests introduction, randomizing sets in, 471 case studies for, 367
433 review problems, 428 confidence and, 366
analysis of variance, see Analysis review problems for, concerning length of confidence interval, 377
of variance (ANOVA) two populations, 484 problems for, 377
defined, 452 significance levels and, 383 IQ, stem-and-leaf plots for, 47
errors, 384 summary of, 425
first published (historical concerning two populations, 480
perspective), 420 t tests, 401
K
for non-normal distributions, 398 Kruskal-Wallis test, 654
types of (summary tables), 398,
goodness of fit, see Chi-squared 408, 419, 439, 447, 455, 474
goodness-of-fit tests Z test, 389, 424 L
introduction to, 381 Laplace, Pierre Simon, 9
key terms for, 424 Law of frequency of error, 306
concerning two populations, 480
I Least squares method, 519, 565–567,
In control process, 667
linear regression and β equals 575, 576, 579
Independent events, 168
zero, 537 Least-square estimators, 525
any number of, and probability,
mean of normal populations, 355 of regression parameters, 526
168
misinterpreting rejections, 474 Left-end inclusion convention, 31
definition of, 166
nonparametric, see Legendre, Adrien Marie, 566, 567
problems for, 169
Nonparametric hypotheses Levels of significance, see Significance
tests testing in contingency tables, 608
testing in two characteristics of levels
observational studies and, 471 Life table, 152
of binomial parameters, 414 populations, 599
Independent random variables, 226 Line graphs, 19
of population proportion, 413, example of, 20
415, 417 Independent trials
Bernoulli, Jacques, and, 246 summary of, 55
one-sided Linear regression
defined, 397 for binomial random variables,
241 biological data sets, 548
of median, 627 coefficient of determination, 559,
one-sided tests, 397 Independent variables, 520
Inferential statistics, overview of, 4 576
p values, 391
Input variables dummy variables for categorical
of population proportion, 415
definition of, 520 data, 567
pictorial depiction of, 389, 396,
in simple linear regression, 521 estimated regression line, 528,
403, 406
Interquartile range, 131 530, 545
point estimators for, 385
Intersection of events, 191 estimating regression parameters,
population proportion equality
Interval estimators of mean, 343 550
tests, 467, 469, 471, 473
90, 95, 99 percent confidence, Galton and, 543
population proportions, 413, 415,
346, 347 introduction, 520
417
problems for confidence and, 344, 345 key terms for, 575
normal population with known confidence bounds in, 351, 360, linear equation, 574
variance, 379 369 prediction intervals for future
one-sided tests, 349 definition of, 344 responses, 551
paired-sample t tests, 463 for population means, 357 problems for
population proportions, 419, 475 introduction to, 330 β equals zero, 556
small-sample, with unknown of normal populations with analysis of residuals, 562
population variances, 455 known variance, 344, 345 coefficient of determination, 558
t tests, 409 of normal populations with error random variable, 533
two normal populations with unknown variance, 355 estimating parameters, 533
known variances, 439 problems for, 361, 371 multiple, 562
two-factor ANOVA, 511 sample size for, 349 multiple regression model, 569
unknown variance and large t random variable, 355 prediction intervals, 569
sample sizes, 428 Interval estimators of population regression to mean, 580
proposed case studies for, 428 proportion, 365 sample correlation coefficient, 561
simple model, 523 Multiple linear regression model, Normal approximation, binomial
regression to the mean, 543, 544, 563, 565, 567 distribution, 315
546, 548, 550, 576, 578 definition of, 564 Normal curves
residuals, analysis of, 579 problems for, 569 approximate areas under, 270
review problems, 579 Multiplication rule of probability, approximation rule and, 269, 279
simple model for, 521 163, 165, 192 central limit theorem and, 297
summary of, 576 Gauss (historical perspective), 292
Mutually exclusive events, 142
testing β equals zero hypothesis, standard, 269, 291
556 Normal data sets, 99, 101, 103, 133
Linear relationships N definition of, 292
equation for, 520 Naive Bayes, empirical rule and, 99–101, 103
sample correlation coefficient, 131 Natural Inheritance (Galton), 306 historical perspective on, 104
Long-run relative frequency, 146, Negative correlations, in sample problems for, 105
148, 160 correlation coefficients, 107 summary of, 131
Lorenz curve, Normal distribution
Newton, Isaac, 9
Lower confidence bounds biological data sets, 549
Neyman, Jerzy, 10, 386 control and, 671
for interval estimation of means,
350, 373 Nightingale, Florence (historical historical perspective on, 265
perspective), 605 introduction to, 263
for interval estimation of
population proportions, 369 Noise, 345 normal random variables, 267
Lower control limit (LCL), 668 Noise, random, 390 Normal populations
Nonparametric hypotheses tests interval estimators of
with known variance, 343
M comparison rankings, 655, 657
with unknown variance, 355
Mann-Whitney test, 641 definition of, 622
one-sided tests of, 395
Margin of error, 377 equality of multiple probability
problems for distribution in, 323
Marlowe, Christopher, 643 distributions, 652
problems for mean tests of, 393
Mathematical preliminaries, Freedman test, 657 problems for testing two, with
summation of, 516 key terms for, 662 known variances, 439
Mean, see also Population mean; Kruskal-Wallis test, 654 sample variance distribution in,
Sample mean normal distribution tests 321
Mean, 301 compared to, 643 tests, see Hypothesis testing
definition of, 212 permutation tests, 658 Normal probabilities, finding, 277
detecting shifts in, 668 Normal random variable, 267, 273,
problems for
regression to, 543 275, 291
X control charts for detecting equality of multiple probability
distributions, 652 approximation rule for, 269
shifts in, 668 key terms for, 290
unknown mean and variance, 674 permutation tests, 661
percentiles of, 284
Men of Mathematics (Bell), 292 rank-sum test, 662 probabilities associated with, 273,
Mendel, Gregor, 585, 593 runs test for randomness, 646 275
data manipulation, 593 sign test, 662 problems for
Mendenhall, Thomas (historical signed-rank test, 662 additive property, 263
perspective), 642 rank-sum test, 662–664 continuous random variables, 264
Method of least squares, 565–567, review problems, 664 percentiles of, 292
575, 576, 579 probabilities associated with, 276
runs test for randomness, 646
Method of maximum likelihood, 386 review problems, 293
Method of moments, 386 sign test, 626, 630, 631, 662
standard, 273, 275, 291
Modal values, 87 null hypothesis, 626, 629–631, standard deviation of, 277
Mortality tables, 8 662 standardizing, 277
history of, 9 p value, 629 summary of, 291
Mosteller, Frederic, 643 signed-rank test, 630, 631, 636, Null event, 142, 191
Moving average, see Exponentially 637, 662 Null hypothesis
weighted moving average summary of, 662 appropriate, 407
classical testing procedure for, 384 Pearson, Egon, 386 graphs of, 252
definition of, 383 Pearson, Karl parameters, 250
not rejecting, 389 chi-squared goodness-of-fit test probabilities, 252
rejection of, 383 and, 9, 587, 595 problems for, 254
significance levels necessary for histograms used by, 54 Pooled estimator
rejection of, 397 on De Moivre, 265 testing equality of means, 451
statistical test of, 425 on Nightingale, 605 testing population proportions,
testing, in chi-square product-moment correlation 468
goodness-of-fit tests, 590 coefficient of, 115 Population distributions
Numerical science, 9 regression to the mean and, 544 introduction to, 297
role, in history of statistics, 9, 292, probability distributions of
386
O Percent confidence interval estimator,
sample mean, 301
Observational studies, for hypothesis sign test of, 625
344 Population mean
tests, 472 Percentiles
One-factor analysis of variance, 491 confidence bounds for, 351
by conversion to standard normal, definition of, 299
definition of, 490 277
problems for, 496 hypothesis tests for normal, 387
chi-squared, 590, 595 interval estimators for, 357
summary of, 513 confidence levels, 346
summary table, 495 obtaining, 300
definition of, 292
One-sided tests, 425, 461 one-sided tests concerning two,
of normal random variables, 285,
definition of, 397 455
287
problems for, 398 point estimator of, 331
problems for, of normal random
sign tests, 627 problems for point estimation of,
variables, 288
two population tests, 435, 458, 332
sample, 81
461 sample means and, 331
Permutation tests, 658, 659
Outcomes t test for, 401
problems for, 661
definition of, 140 Permutations, 183 Population proportions
equally likely, 154, 155 Philosophical Transactions of the Royal case studies for, 367
problems for equally likely, 157 Society (Arbuthnot), 420 hypothesis tests concerning, 413
sample space and events of Piazzi, Giuseppe, 566 interval estimators of, 365
experiment, 140 Pie charts, 24 one-sided hypothesis and, 472
Outliers, 39, 63 Placebo effect, 3, 404, 434, 444 point estimators for, 333
Placebos, 3 pooled estimators in, 470
Plague, 7, 13 problems for hypothesis tests of,
P Plague and history of statistics, 7, 13 419
P value problems for interval estimators
Playfair, William, pie charts used by,
hypothesis testing, 391 of, 371
53
in population proportion problems for point estimation of,
Poincaré, Henri, 310
hypothesis tests, 414, 468 335
Point estimator
in sign tests, 623 for population proportions, 333 testing equality of, 467
of chi-squared test, 592 hypothesis testing, 385 Population size, sample size and, 314
of signed-rank tests, 632 introduction to, 330 Population standard deviation,
summary of, 425 of population mean, 331, 333 estimating, 302
two-sided tests of, 416 problems for, 335 Population variance
types of tests (summary tables), standard errors and, 331 definition of, 299
398, 408, 419, 439, 447, 455, Poisson, Simeon, 136 hypothesis tests for small sample
474 Poisson random variables, 251, 252 with equal unknown, 450
Paired data, 47, 49 binomial random variables and, interval estimators of normal
paired sample t test, 458 252 populations with known, 343
problems for, 50, 463 definition of, 251 interval estimators of normal
Parallel circuit, 175 expected value and variance of, populations with unknown,
Pascal, 150 245 355
Sample 50th percentile, 82, 84, 86, Signed-rank test, 637, 662 t test
131, 132 p value, 632 historical perspective on, 409
Sample correlation coefficient, 131, Significance levels, 383 paired-sample, 458, 459, 461
576 hypothesis tests (summary Test statistic, definition of, 398
Pearson’s product-moment, 115 tables), 398, 408, 419, 439, Tukey, John, stem-and-leaf plots used
Sample mean, 67, 69, 71, 88, 299, 447, 455, 474 by, 54
301 α test, 388
central tendencies described by, Two-factor analysis of variance, 499,
Simon, Pierre, central limit theorem 513
76 and, 309
compared to sample median, 80 estimators, 505
Simple linear regression model,
frequency tables and, 70 scatter diagram and, 522 problems for testing hypotheses
standard deviation of, 302 and, 511
Skewed data, 131
variance of, 303 Standard deviation summary table, 513
Sample median, 75, 77, 88, 130 definition of, 227 Two-sample rank-sum test, 641
as percentile (50th), 82, 132 of binomial random variables, Two-sided tests
central tendencies described by, 249 of p value, 416
76
of normal random variables, 277 significance level of t tests, 403
compared to sample mean, 80
of random variables, 256 two population tests, 439, 455
definition of, 75
sample, 91, 93 Type I error, 384
historical perspective on, 77
Standard normal, 263, 273, 275, 277, Type II error, 384
problems for, 84
291, 356
summary of, 132
conversion to, 277
Sample mode, 87, 88, 131
summary of, 132 Standardized residuals, 562 U
scatter diagrams of, 562, 563 Union of events, 141
Sample percentiles, 81
Sample proportions, probabilities Standardizing, normal random Upper confidence bounds, for
and, 315 variables, 277 interval estimation of means,
Sample size, 349, 449, 450, 486 Statistics 351, 360, 369
hypothesis tests for large, changing definition of, 11 Upper control limit (UCL), 668
appropriate sizes for, 443 history of, 7, 9
population size and, 314 key terms for, 10, 130 V
Sample space, 141, 191 sampling
Variance
Sample standard deviation, 91, 93, distribution of sample variance,
computational formula for, 224
131, 450 321
Sample variance, 91, 93 problems for, 323 definition of, 224
distribution of, 321, 323 sample standard deviation, 90 hypothesis tests for unknown, 401
Samples sample variance, 90 measuring, in response values,
definition of, 5 Statistics, Central limit theorem, 303, 556
random, 448, 476, 612, 613 306, 310, 316, 324 of hypergeometric random
Sampling proportions, for finite Stem-and-leaf plots, 43 variables, 249
populations, 313, 315, 317 examples of, 42 of independent random variable
Scatter diagram for salaries, 47 sums, 226
regression to the mean, 543, 547 Stratified random sampling, of random variables, 223, 225,
standardized residuals and, 562, definition of, 6 227, 229
563 Symmetry of sample means, 303
Second quartile, 86, 131 approximately symmetric, 20 population mean tests with
Sensitive events, estimating
bar graphs and, 21 known, 387
probability of, 337
histograms, 34 properties of, 214
Series circuit, 175
Shewhart, Walter, 671 two population tests of equality of
Shewhart control chart, 671 T means, known variances, 435
Sign test, 623, 625, 627, 637, 662 t distribution, 356 Variation, chance, 667
one-sided, 627 t random variable, 355 Venn diagrams, 142, 143, 146
4 PROBABILITY

0 ≤ P(A) ≤ 1
P(S) = 1, where S is the set of all possible outcomes
P(A ∪ B) = P(A) + P(B), when A and B are disjoint
Probability of the complement: P(Aᶜ) = 1 − P(A)
Addition rule: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Conditional probability: P(B|A) = P(A ∩ B)/P(A)
Multiplication rule: P(A ∩ B) = P(A)P(B|A)
Independent events: P(A ∩ B) = P(A)P(B)
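As a quick illustration of the rules above, the following Python snippet (not part of the original text) checks the addition rule, the multiplication rule, and independence on a toy sample space of two fair dice; the events A and B are made up for the example.

# Toy check of the probability rules, using two fair dice (illustration only).
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # sample space S: 36 equally likely outcomes

def prob(event):
    """P(event) with equally likely outcomes: (# outcomes in event) / (# outcomes in S)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] + o[1] == 7     # sum of the dice is 7
B = lambda o: o[0] == 3            # first die shows 3

p_A, p_B = prob(A), prob(B)
p_A_and_B = prob(lambda o: A(o) and B(o))
p_A_or_B = prob(lambda o: A(o) or B(o))

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
assert p_A_or_B == p_A + p_B - p_A_and_B
# Conditional probability and the multiplication rule
p_B_given_A = p_A_and_B / p_A
assert p_A_and_B == p_A * p_B_given_A
# A and B happen to be independent here: P(A and B) = P(A)P(B)
assert p_A_and_B == p_A * p_B
print(p_A, p_B, p_A_and_B, p_B_given_A)   # 1/6 1/6 1/36 1/6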
8 ESTIMATION

X̄ is the estimator of the population mean μ.
p̂, the proportion of the sample that has a certain property, estimates p, the population proportion having this property.
S² estimates σ², and S estimates σ.

100(1 − α) percent confidence interval estimator for μ:
    data normal or n large, σ known:   X̄ ± z_{α/2} σ/√n
    data normal, σ unknown:            X̄ ± t_{n−1,α/2} S/√n

100(1 − α) percent confidence interval for p:   p̂ ± z_{α/2} √(p̂(1 − p̂)/n)

Pooled estimator of a common variance σ²_x = σ²_y:
    S²_p = ((n − 1)S²_x + (m − 1)S²_y)/(n + m − 2)
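The interval estimators above translate directly into a few lines of code. The Python sketch below uses hypothetical numbers (xbar = 74.2, sigma = 11.3, and so on are not data from the text) and the table value z_{0.025} = 1.96; it is an illustration of the formulas, not a procedure taken from the book.

# Sketch of the confidence-interval formulas, with made-up inputs.
from math import sqrt

def mean_ci_known_sigma(xbar, sigma, n, z=1.96):
    """100(1 - alpha)% CI for mu when sigma is known (or n is large): xbar +/- z*sigma/sqrt(n)."""
    half = z * sigma / sqrt(n)
    return xbar - half, xbar + half

def proportion_ci(phat, n, z=1.96):
    """100(1 - alpha)% CI for p: phat +/- z*sqrt(phat*(1-phat)/n)."""
    half = z * sqrt(phat * (1 - phat) / n)
    return phat - half, phat + half

print(mean_ci_known_sigma(xbar=74.2, sigma=11.3, n=50))   # roughly (71.07, 77.33)
print(proportion_ci(phat=0.62, n=400))                    # roughly (0.572, 0.668)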
Hypothesis Tests Concerning p
(the proportion of a large population that has a certain characteristic)

Test statistic for comparing two proportions:
    TS = (p̂1 − p̂2) / √((1/n1 + 1/n2) p̂(1 − p̂))
where p̂ is the proportion of the two samples combined that has the characteristic.

    H0: p1 = p2 vs. H1: p1 ≠ p2    Significance-level-α test: reject H0 if |TS| ≥ z_{α/2}    p value if TS = v: 2P{Z ≥ |v|}
    H0: p1 = p2 vs. H1: p1 > p2    Significance-level-α test: reject H0 if TS ≥ z_α          p value if TS = v: P{Z ≥ v}
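A minimal sketch of the two-proportion test follows; the counts x1, n1, x2, n2 are made up for the example (not from the text), and the standard normal cdf is computed from the error function.

# Two-proportion test: TS = (p1hat - p2hat)/sqrt((1/n1 + 1/n2)*phat*(1 - phat)), phat pooled.
from math import sqrt, erf

def phi(x):
    """Standard normal cdf, P(Z <= x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_test(x1, n1, x2, n2):
    """Return the test statistic and the two-sided p value 2P{Z >= |TS|}."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # proportion of the combined samples
    ts = (p1 - p2) / sqrt((1 / n1 + 1 / n2) * pooled * (1 - pooled))
    return ts, 2 * (1 - phi(abs(ts)))

ts, p = two_proportion_test(x1=62, n1=200, x2=41, n2=180)
print(round(ts, 3), round(p, 4))   # reject H0 at level alpha if |TS| >= z_{alpha/2}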
Two-Factor ANOVA Table

μ is the grand mean, αi is the deviation from the grand mean due to row i, and βj is the deviation from the grand mean due to column j. Their estimators are
    μ̂ = X..,    α̂i = Xi. − X..,    β̂j = X.j − X..

Sum of squares                                                          Degrees of freedom
    Row:     SSr = n Σ_{i=1}^{m} (Xi. − X..)²                           m − 1
    Column:  SSc = m Σ_{j=1}^{n} (X.j − X..)²                           n − 1
    Error:   SSe = Σ_{i=1}^{m} Σ_{j=1}^{n} (Xij − Xi. − X.j + X..)²     N = (n − 1)(m − 1)

Null hypothesis                    Test statistic TS           Significance-level-α test      p value if TS = v
    No row effect (all αi = 0)     [SSr/(m − 1)] / [SSe/N]     Reject if TS ≥ F_{m−1,N,α}     P{F_{m−1,N} ≥ v}
    No column effect (all βj = 0)  [SSc/(n − 1)] / [SSe/N]     Reject if TS ≥ F_{n−1,N,α}     P{F_{n−1,N} ≥ v}
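The ANOVA sums of squares and F ratios above can be computed directly from the data array. The sketch below uses a small made-up m × n table of responses (not data from the text) and leaves the comparison with F_{m−1,N,α} to a table of the F distribution.

# Two-factor ANOVA sums of squares for an m x n array X[i][j] (row i, column j).
X = [
    [12.0, 15.0, 14.0, 11.0],
    [16.0, 18.0, 17.0, 15.0],
    [10.0, 12.0, 13.0,  9.0],
]
m, n = len(X), len(X[0])                                              # m rows, n columns
row_mean = [sum(r) / n for r in X]                                    # Xi.
col_mean = [sum(X[i][j] for i in range(m)) / m for j in range(n)]     # X.j
grand = sum(map(sum, X)) / (m * n)                                    # X..

SSr = n * sum((xi - grand) ** 2 for xi in row_mean)
SSc = m * sum((xj - grand) ** 2 for xj in col_mean)
SSe = sum((X[i][j] - row_mean[i] - col_mean[j] + grand) ** 2
          for i in range(m) for j in range(n))
N = (n - 1) * (m - 1)                                                 # error degrees of freedom

F_row = (SSr / (m - 1)) / (SSe / N)     # compare with F_{m-1, N, alpha}
F_col = (SSc / (n - 1)) / (SSe / N)     # compare with F_{n-1, N, alpha}
print(round(SSr, 2), round(SSc, 2), round(SSe, 2), round(F_row, 2), round(F_col, 2))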
13 CHI-SQUARED GOODNESS OF FIT TESTS

Pi is the proportion of the population with value i, i = 1, . . . , k. To test H0: Pi = pi, i = 1, . . . , k, take a sample of size n. Let Ni be the number equal to i, and let ei = npi. The test statistic is
    TS = Σ_{i=1}^{k} (Ni − ei)²/ei
The significance-level-α test rejects H0 if TS ≥ χ²_{k−1,α}. If TS = v, then p value = P{χ²_{k−1} ≥ v}.

Suppose each member of a population has an X and a Y characteristic. Assume r possible X and s possible Y characteristics. To test for independence of the characteristics of a randomly chosen member, choose a sample of size n.
    Nij = number with X characteristic i and Y characteristic j
    Ni = number with X characteristic i
    Mj = number with Y characteristic j
    êij = Ni Mj / n
If Σi Σj (Nij − êij)²/êij ≥ χ²_{(r−1)(s−1),α}, then the hypothesis of independence is rejected at significance level α.
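A short sketch of the goodness-of-fit statistic, using hypothetical die-roll counts that are not from the text; the critical value χ²_{5,0.05} = 11.07 is the usual table value. The independence statistic is computed the same way, with êij = Ni Mj/n in place of ei.

# Chi-squared goodness-of-fit statistic for H0: fair die (P_i = 1/6), made-up counts.
observed = [22, 17, 21, 13, 16, 31]           # N_i, counts of faces 1..6 in 120 rolls
n = sum(observed)
p0 = [1 / 6] * 6                              # hypothesized proportions p_i
expected = [n * p for p in p0]                # e_i = n * p_i
TS = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(TS, 2))   # reject H0 at level alpha if TS >= chi^2_{5, alpha} (11.07 at alpha = 0.05)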
12 LINEAR REGRESSION

Estimated regression line: y = α̂ + β̂x

The error term e is normal with mean 0 and variance σ². The estimator of σ² is SSR/(n − 2), where
    SSR = Σi (Yi − α̂ − β̂xi)² = (Sxx SYY − S²xY)/Sxx

To test H0: β = 0, use TS = √((n − 2)Sxx/SSR) β̂. The significance-level-γ test rejects H0 if |TS| ≥ t_{n−2,γ/2}. If TS = v, the p value = 2P{T_{n−2} ≥ |v|}.

100(1 − γ) percent prediction interval for the response at input x0:
    α̂ + β̂x0 ± t_{n−2,γ/2} √(1 + 1/n + (x0 − x̄)²/Sxx) √(SSR/(n − 2))

Coefficient of determination: R² = 1 − SSR/SYY is the proportion of the variation in the response variables that is explained by the different input values. Its square root is the absolute value of the sample correlation coefficient.

Multiple linear regression model: Y = β0 + β1x1 + · · · + βkxk + e
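The least-squares quantities above reduce to a few sums of products. The following sketch uses made-up (x, y) pairs, not data from the text, and computes α̂, β̂, SSR, R², and the statistic for testing β = 0.

# Simple linear regression by least squares, on made-up data.
from math import sqrt

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
SYY = sum((yi - ybar) ** 2 for yi in y)
SxY = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

beta_hat = SxY / Sxx                          # least-squares slope
alpha_hat = ybar - beta_hat * xbar            # least-squares intercept
SSR = (Sxx * SYY - SxY ** 2) / Sxx            # residual sum of squares
R2 = 1 - SSR / SYY                            # coefficient of determination
TS = sqrt((n - 2) * Sxx / SSR) * beta_hat     # test statistic for H0: beta = 0
print(round(alpha_hat, 3), round(beta_hat, 3), round(R2, 4), round(TS, 2))
# Reject H0: beta = 0 at level gamma if |TS| >= t_{n-2, gamma/2} (from a t table).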
14 NONPARAMETRIC HYPOTHESIS TESTS

Sign test: the p value is computed from N, where N is a binomial (n, 1/2) random variable.

The signed-rank test is used to test the hypothesis that a population distribution is symmetric about 0. It ranks the data in terms of absolute value; TS is the sum of the ranks of the negative values. If TS = t, then
    p value = 2 Min(P{TS ≤ t}, P{TS ≥ t})
TS is approximately normal with mean n(n + 1)/4 and variance n(n + 1)(2n + 1)/24.

To test equality of two population distributions, draw random samples of sizes n and m and rank the n + m data values. The rank-sum test uses TS = sum of the ranks of the first sample. It rejects H0 if TS is either significantly large or significantly small. If TS = t, then
    p value = 2 Min(P{TS ≤ t}, P{TS ≥ t})
TS is approximately normal with mean n(n + m + 1)/2 and variance nm(n + m + 1)/12.
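A sketch of the rank-sum test's normal approximation follows, with two small made-up samples (no ties are assumed, and no continuity correction is applied; this is not data from the text).

# Rank-sum test via the normal approximation stated above.
from math import sqrt, erf

def phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

first = [10.2, 11.5, 9.8, 12.1, 10.9]                  # sample of size n
second = [12.8, 13.4, 11.9, 14.2, 12.5, 13.0]          # sample of size m
n, m = len(first), len(second)

pooled = sorted(first + second)
rank = {v: i + 1 for i, v in enumerate(pooled)}        # rank 1 = smallest (assumes no ties)
TS = sum(rank[v] for v in first)                       # sum of ranks of the first sample

mean = n * (n + m + 1) / 2
var = n * m * (n + m + 1) / 12
z = (TS - mean) / sqrt(var)
p_value = 2 * min(phi(z), 1 - phi(z))                  # approximates 2 Min(P{TS <= t}, P{TS >= t})
print(TS, round(z, 2), round(p_value, 4))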
15 QUALITY CONTROL

Control chart limits: μ ± 3σ/√n, where n is the subgroup size.
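A minimal sketch of X̄ control-chart limits, assuming hypothetical in-control values μ = 50 and σ = 2 and subgroups of size 4 (none of these numbers come from the text).

# X-bar chart limits mu +/- 3*sigma/sqrt(n) and a check of some subgroup means.
from math import sqrt

mu, sigma, n = 50.0, 2.0, 4                    # in-control mean, standard deviation, subgroup size
LCL = mu - 3 * sigma / sqrt(n)                 # lower control limit
UCL = mu + 3 * sigma / sqrt(n)                 # upper control limit

subgroup_means = [49.6, 50.8, 51.2, 48.9, 53.4, 50.1]
out_of_control = [x for x in subgroup_means if not (LCL <= x <= UCL)]
print(LCL, UCL, out_of_control)                # 47.0 53.0 [53.4]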
Area under the Standard Normal Curve to the Left of x
x .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9279 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998
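The entries in the table can be reproduced numerically, since Φ(x) = ½(1 + erf(x/√2)). The short Python check below is only an illustration; the x values chosen are arbitrary.

# Reproduce a few standard normal cdf values from the table above.
from math import erf, sqrt

def phi(x):
    """P(Z <= x) for a standard normal Z."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

for x in (0.00, 1.00, 1.64, 1.96, 2.33, 3.49):
    print(f"{x:4.2f}  {phi(x):.4f}")
# e.g. phi(1.96) is about 0.9750, matching the 1.9 row, .06 column of the table.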