
Answers to Odd-Numbered Problems

CHAPTER 1 PROBLEMS
1. (a) 1946
(b) There were more years in which the average number of years com-
pleted by the older group exceeded that of the younger group.
3. (a) From 1985 to 1990 sales declined.
(b) The total number of cars sold from 1985 to 1987 was 20,693,000
versus 18,120,000 from 1988 to 1990.
(c) No
5. Researchers with such knowledge may be influenced by their own biases
concerning the usefulness of the new drug.
7. (a) In 1936 automobile and telephone owners were probably not rep-
resentative of the total voter population.
(b) Yes. Automobile and telephone ownership is now more widespread
and thus more representative of the total voter population.
9. The average age of death for U.S. citizens whose obituary is listed in The
New York Times is about 82.4 years.
11. (a) No. Graduates who return the questionnaire may not be represen-
tative of the total population of graduates.
(b) If the number of questionnaires returned were very close to 200—
the number of questionnaires sent—then the approximation would
be better.
13. Graunt implicitly assumed that the parishes he surveyed were represen-
tative of the total London population.
15. Data on the ages at which people were dying can be used to determine
approximately how long on average the annuity payments will continue.
This can be used to determine how much to charge for the annuity.
17. (a) 64%
(b) 10%
(c) 48%
19. (a) Yes
(b) Yes
(c) No
(d) No

Section 2.2

1. (a) Family size Frequency


4 1
5 1
6 3
7 5
8 5
9 3
10 5
11 2
12 3
13 1
14 0
15 1

(b)

(c)

3. (a) 12
(b) 1
(c) 11
(d) 3
(e) 3
5. Value Frequency
10 8
20 3
30 7
40 7
50 3
60 8

7. Family size Frequency Relative frequency


4 1 0.03
5 1 0.03
6 3 0.10
7 5 0.17
8 5 0.17
9 3 0.10
10 5 0.17
11 2 0.07
12 3 0.10
13 1 0.03
14 0 0.00
15 1 0.03

9. (a) 0.13

(b) 0.25

(c) No

11. (a) 0.649

(b) 0.162

(c) 0.540

13. Average number


of rainy days in
Nov. or Dec. Frequency
7 1
9 1
10 1
11 1
16 1
17 3
18 1
20 1
23 1
40 1

Section 2.3
1. (a)

(b) Class intervals 100–110 and 110–120


(c) No
(d) No
3. (a)

(b)

(c) The chart in part (a) seems more informative since it shows a clearer
pattern.

5. (a)

(b)

(c) The chart in part (b) seems more informative.


7. (a)

(b)

Female Relative
cholesterol Frequency frequency
170–180 1 1/46 = 0.02
180–190 5 5/46 = 0.11
190–200 13 13/46 = 0.28
200–210 15 15/46 = 0.33
210–220 9 9/46 = 0.20
220–230 3 3/46 = 0.07

Male Relative
cholesterol Frequency frequency
170–180 3 3/54 = 0.06
180–190 13 13/54 = 0.24
190–200 19 19/54 = 0.35
200–210 10 10/54 = 0.19
210–220 6 6/54 = 0.11
220–230 3 3/54 = 0.06

Female students appear to have higher cholesterol levels.


13.

15. (a) It is the sum of the relative frequencies for all classes.

(b) Percentage of workers


Blood pressure less than Ages 30–40 Ages 50–60
90 0.12 0.14
100 0.79 0.41
110 5.43 3.56
120 23.54 11.35
130 53.78 28.04
140 80.35 48.43
150 92.64 71.27
160 97.36 81.26
170 99.13 89.74
180 99.84 94.53
190 99.96 97.26
200 100.00 98.50
210 100.00 98.91
220 100.00 99.59
230 100.00 99.86
240 100.00 100.00
(c) Ages 30 to 40 tend to have smaller values.
(d)

Section 2.4
1. (a) 11 1, 4, 5, 6, 8, 8, 9, 9, 9
12 2, 2, 2, 2, 4, 5, 5, 6, 7, 7, 7, 8, 9
13 0, 2, 2, 3, 4, 5, 5, 7, 9
14 1, 1, 4, 6, 7
(b) 11 1, 4
11 5, 6, 8, 8, 9, 9, 9
12 2, 2, 2, 2, 4
12 5, 5, 6, 7, 7, 7, 8, 9
13 0, 2, 2, 3, 4
13 5, 5, 7, 9
14 1, 1, 4
14 6, 7

3. 1 4
1 5, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9, 9, 9, 9
2 0, 0, 0, 0, 1, 2, 2, 2, 3, 4
2 5, 7, 7, 9
3 0, 1, 1, 2, 3
3
4 0, 4, 4
4 5
5 1, 3
5 5
6 1
6
7
7 9
The interval 15–20 contains 14 data points.
The interval 16–21 contains 17 data points.
5. (a) 3 2
4
5 2, 7, 8, 9
6 5, 8, 8
7 1, 4, 5, 5, 7, 8, 9
8 0, 1, 3, 3, 3, 4, 8, 8
9 0, 3, 4, 7
10 0, 4, 8
(b) Yes. The value 32 seems suspicious since it is so much smaller than
the others.
7. (a) 1 4, 6, 6, 6
2 0, 0, 1, 3, 4, 4, 6, 7, 7, 7
3 1, 2, 3, 5, 5, 8, 8, 9
4 2, 6
5 5
(b) 0 3, 6, 7, 7, 7, 7, 9
1 0, 0, 0, 0, 0, 0, 3, 4, 4, 6, 6, 7, 7, 9, 9
2 0, 1
3 1
(c) 0 1, 3, 4, 4, 4, 5, 7, 9
1 0, 0, 2, 6, 7, 7, 7, 8, 9, 9
2 1, 2, 5, 9
3 2, 6
4 5
9. (a) 6
(b) 43.75%
(c) 12.5%

Section 2.5

1. (a)

(b) The number of defective parts tends to increase as the temperature increases.
(c) About 23 or 24
5. (a)

(b) Attention span and IQ are not related.

CHAPTER 2 REVIEW

1. (a) Blood type Frequency


A 19
B 8
O 19
AB 4

(b) Blood type Relative frequency


A 0.38
B 0.16
O 0.38
AB 0.08

(c)

3. (a) 389
(b)

5. (a) Value Frequency


1 2
2 1
3 4
4 1
5 2

(b) Value Frequency


1 2
2 3
3 3
4 2
(c) 3, 2.5

7. (a) about 46 percent


(b) about 3 percent
9. (a)

(b) There are relatively few weights near the upper end of the weight
range.
11. Weight and blood pressure do not seem related.

13. Yes, high scores on one examination tend to go along with high scores
on the other.

15. (a) 0 0.27, 0.78, 0.93


1 0.19, 0.31, 0.49, 0.53, 0.81
2 0.30, 0.92, 0.93
3 0.07, 0.21, 0.32,0.39, 0.66, 0.68, 0.81
4 0.02, 0.11, 0.43, 0.50
5 0.35, 0.41
(b)

(c)

Section 3.2
1. 1196/15 = 79.73
3. 429.03/13 = 33.00 inches; 1331/13 = 102.38 days
5. No. It also depends on the proportions of the two town populations that
are women. (For instance, suppose town A has 9 women whose aver-
age weight is 110 and 1 man whose weight is 200, while town B has 10
women whose average weight is 100 and 10 men whose average weight
is 190.)
7. larger than the average of the previous values.
9. 6; 18; 11
11. 78/11
13. 15
15. (1/2)(10) + (1/6)(20) + (1/3)(30) = 18.33
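The weighted average in Problem 15 can be checked directly. The weights 1/2, 1/6, and 1/3 are read off from the answer (they sum to 1); this is a quick sketch of the computation rather than the textbook's own working:

```python
# Weighted average: sum of w_i * x_i, where the weights w_i sum to 1.
weights = [1/2, 1/6, 1/3]
values = [10, 20, 30]

assert abs(sum(weights) - 1) < 1e-12  # the weights must sum to 1

weighted_avg = sum(w * x for w, x in zip(weights, values))
print(round(weighted_avg, 2))  # 18.33
```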
17. $81,120

19. No. For a counterexample suppose town A has one man who weighs 200
and 2 women whose average weight is 100, whereas town B has 2 men
with an average weight of 175 and a single woman who weighs 90.
Section 3.3
1. (a) 6580 yards
(b) 6545 yards
3. 23
5. (a) 22.0
(b) 8.1
(c) 23.68
(d) 9.68
7. 31.5 inches
9. (a) 99.4
(b) 14.9
(c) 204.55
11. (a) 20.74
(b) 20.5
(c) 19.74
(d) 19.5
(e) Mean = 20.21; median = 20.05
13. 0, 0
15. (a) 32.52
(b) 24.25
17. (a) 26.8
(b) 25.0
Section 3.3.1
1. (a) If the data are arranged in increasing order, then the sample 80 per-
centile is given by the average of the values in positions 60 and 61.
(b) If the data are arranged in increasing order, then the sample 60 per-
centile is given by the average of the values in positions 45 and 46.
(c) If the data are arranged in increasing order, then the sample 30 per-
centile is the value in position 23.
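The position rule in Problem 1 can be sketched in code. The sample size n = 75 is inferred from the stated positions (0.8 × 75 = 60, an integer, so positions 60 and 61 are averaged; 0.3 × 75 = 22.5, so it rounds up to position 23); treat that n as an assumption:

```python
import math

def percentile_positions(n, p):
    """Return the 1-based data position(s) that hold the sample 100p percentile.

    If n*p is an integer k, the percentile is the average of the values in
    positions k and k+1; otherwise it is the value in position ceil(n*p).
    """
    np_ = n * p
    if abs(np_ - round(np_)) < 1e-9:   # n*p is (numerically) an integer
        k = int(round(np_))
        return (k, k + 1)
    return (math.ceil(np_),)

# With n = 75 (assumed), these reproduce the positions quoted above.
print(percentile_positions(75, 0.80))  # (60, 61)
print(percentile_positions(75, 0.60))  # (45, 46)
print(percentile_positions(75, 0.30))  # (23,)
```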
3. (a) 95.5
(b) 96
5. (a) 70
(b) 58
(c) 52
7. 230c
9. 74, 85, 92
11. 25
Section 3.4
1. 1B, 2C, 3A

3. (a) 126
(b) 102, 110, 114
(c) 196
5. 5, 6, 6, 6, 8, 10, 12, 14, 23 is one such data set.
7. (a) 8 loops
(b) 2 miles
9. Answer (c) is most likely, by Benford’s law.
Section 3.5
1. (a) 13; (b) 43.81818.
3. (a) 6.18
(b) 6.77
11. (a) s 2 = 2.5, s = 1.58
(b) s 2 = 2.5, s = 1.58
(c) s 2 = 2.5, s = 1.58
(d) s 2 = 10, s = 3.16
(e) s 2 = 250, s = 15.81
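Problem 11's pattern, identical variances for shifted data and variance scaling with the square of a multiplier, can be illustrated with any base data set. The set {1, 2, 3, 4, 5} is a hypothetical choice that happens to have sample variance 2.5, matching parts (a) through (c):

```python
import statistics

base = [1, 2, 3, 4, 5]            # hypothetical data with s^2 = 2.5
shifted = [x + 7 for x in base]   # adding a constant leaves s^2 unchanged
doubled = [2 * x for x in base]   # multiplying by c multiplies s^2 by c^2
times10 = [10 * x for x in base]

print(statistics.variance(base))     # 2.5
print(statistics.variance(shifted))  # 2.5
print(statistics.variance(doubled))  # 10.0   (2.5 * 2^2)
print(statistics.variance(times10))  # 250.0  (2.5 * 10^2)
```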
13. For the first 50 students, s 2 = 172.24 and x = 115.80.
For the last 50 students, s 2 = 178.96 and x = 120.98.
The values of the statistics for the two data sets are similar. This is not
surprising.
15. 78.56 thousand
17. (a) 0.805
(b) 2.77
(c) 1.22
Section 3.6
1. (a)

(b) 25.75
(c) 26.5
(d) No
5. (a) 168,045
(b) 172,500

(c)

(d) Yes, if we ignore the data value 82. No, if we use all the data.
7. 95%, 94.85%
9. Sample mean
Section 3.7

1. Let (xi , yi ), i = 1, 2, 3, be the middle set of data pairs. Then the first set is
((1/2)xi , 360 + yi ) and the third is (xi , (1/2)yi ), i = 1, 2, 3.
3. (a)

(b) Almost 1
(c) 0.86
(d) There is a relatively strong linear relationship between them.
5. −0.59; the linear relationship is relatively weak.
7. −0.441202; the linear relationship is relatively weak. But there is an in-
dication that when one of the variables is high, then the other tends to
be low.
9. 0.7429
11. All data = −0.33; first seven countries = −0.046
13. All data = 0.25; first seven countries = −0.3035
15. (d) Correlation is not causation.
17. No, correlation is not causation.

Section 3.8
1. L(1/6) = 25/245; L(2/6) = 57/245; L(3/6) = 117/245; L(4/6) = 157/245;
L(5/6) = 195/245; L(1) = 245/245
3. G = .4271
7. yes

CHAPTER 3 REVIEW
1. (a) −2, −1, 1, 2
(b) −2, −1, 0, 1, 2
(c) Part (a): mean = 0, median = 0; part (b): mean = 0, median = 0
3. (a) 29.3
(b) No
(c) First quartile is 27.7; second quartile, 29.3; third quartile, 31.1.
(d) 31.7
9. No
11. No, association is not causation.
13. .1426
15. 0.99846
Section 4.2
1. (a) S = {(R, R), (R, B), (R, Y ), (B, R), (B, B), (B, Y ), (Y, R), (Y, B),
(Y, Y )}
(b) {(Y, R), (Y, B), (Y, Y )}
(c) {(R, R), (B, B), (Y, Y )}
3. (a) {(U of M, OSU), (U of M, SJSC), (RC, OSU), (RC, SJSC), (SJSC,
OSU), (SJSC, SJSC), (Yale, OSU), (Yale, SJSC), (OSU, OSU), (OSU,
SJSC)}
(b) {(SJSC, SJSC), (OSU, OSU)}
(c) {(U of M, OSU), (U of M, SJSC), (RC, OSU), (RC, SJSC), (SJSC,
OSU), (Yale, OSU), (Yale, SJSC), (OSU, SJSC)}
(d) {(RC, OSU), (OSU, OSU), (SJSC, SJSC)}
5. S = {(France, fly), (France, boat), (Canada, drive), (Canada, train),
(Canada, fly)}; A = {(France, fly), (Canada, fly)}
7. (a) ∅
(b) {1, 4, 6}
(c) {1, 3, 4, 5}
(d) {2}
9. (a) {(1, g), (1, f ), (1, s), (1, c), (0, g), (0, f ), (0, s), (0, c)}
(b) {(0, s), (0, c)}
(c) {(1, g), (1, f ), (0, g), (0, f )}
(d) {(1, g), (1, f ), (1, s), (1, c)}

11. (a) Ac is the event that a rolled die lands on an odd number.
(b) (Ac )c is the event a rolled die lands on an even number.
(c) (Ac )c = A.
13.

Section 4.3
1. (a) P (E) = 0.35; P (E) = 0.65; P (G) = 0.55
(b) P (E ∪ F ) = 1
(c) P (E ∪ G) = 0.8
(d) P (F ∪ G) = 0.75
(e) P (E ∪ F ∪ G) = 1
(f) P (E ∩ F ) = 0
(g) P (F ∩ G) = 0.45
(h) P (E ∩ G) = 0.1
(i) P (E ∩ F ∩ G) = 0
3. 1/10,000
5. If they are disjoint, it is impossible. If they are not disjoint, it is possible.
7. (a) 1
(b) 0.8

(c) 0.5
(d) 0.1
9. (a) 0.95
(b) 0.80
(c) 0.20
11. 0.7
13. 0.31%
15. 0.6
17. (a) A ∩ B c
(b) A ∩ B
(c) B ∩ Ac
(d) P (I) + P (II) + P (III)
(e) P (I) + P (II)
(f) P (II) + P (III)
(g) P (II)
Section 4.4
1. 88/216 ≈ 0.41
3. (a) 4/52 ≈ 0.08
(b) 48/52 ≈ 0.92
(c) 13/52 ≈ 0.25
(d) 1/52 ≈ 0.02
5. 2/3
7. (a) 0.56
(b) 0.1
9. (a) 0.4
(b) 0.1
11. 56
13. 1/19
15. (a) 0.1
(b) 0.1
17. (a) 10/31
(b) 9/31
(c) 1/3
(d) 11/31
(e) 7/31
Section 4.5
1. (a) 0.02/0.3 ≈ 0.067
(b) 0.02/0.03 ≈ 0.667
3. (a) 0.245
(b) 0.293
5. (a) 0.145
(b) 0.176
(c) 0.215

(d) 0.152
7. (a) 0.46
(b) 0.65
9. (a) 262/682
(b) 262/682
(c) 350/682
(d) 602/682
(e) 598/682
(f) 519/682
11. 1/169 ≈ 0.006
13. 0.6960
15. (a) 19/34 ≈ 0.56
(b) 1 − 19/34 ≈ 0.44
(c) 1/17 ≈ 0.06
17. Since P (B|A) > P (B), we have P (A ∩ B) > P (B)P (A). Hence,
P (A|B) = P (A ∩ B)/P (B) > P (B)P (A)/P (B) = P (A)
19. 0.24
21. 0.68
23. (a) 7/12 ≈ 0.58
(b) 50
(c) 13/119 ≈ 0.11
(d) 35/204 ≈ 0.17
(e) 0.338
25. (a) 0.79; 0.21
(b) 0.81; 0.27
27. (a) 1/2
(b) 3/8
(c) 2/3
29. 1/16
31. No; the friends do not know each other.
33. P (A) = 1/13; P (B) = 1/4; P (A ∩ B) = 1/52; thus P (A ∩ B) = P (A)P (B).
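The independence check in Problem 33 can be confirmed by enumerating a standard 52-card deck; here A is "the card is an ace" and B is "the card is a spade" (spelled "S"), which is one natural reading of the events behind these probabilities:

```python
from fractions import Fraction
from itertools import product

ranks = "A23456789TJQK"
suits = "SHDC"
deck = list(product(ranks, suits))  # 52 equally likely cards

is_ace = lambda card: card[0] == "A"
is_spade = lambda card: card[1] == "S"

p_a = Fraction(sum(map(is_ace, deck)), len(deck))     # 4/52 = 1/13
p_b = Fraction(sum(map(is_spade, deck)), len(deck))   # 13/52 = 1/4
p_ab = Fraction(sum(1 for c in deck if is_ace(c) and is_spade(c)), len(deck))

print(p_a, p_b, p_ab)      # 1/13 1/4 1/52
print(p_ab == p_a * p_b)   # True: the events are independent
```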
35. 1/365
37. (a) 0.64
(b) 0.96
(c) 0.8704
39. Yes, P (A)P (B) = P (A ∩ B).
41. (a) 32/4805 ≈ 0.0067
(b) 729/1922 ≈ 0.38
(c) 0.060
(d) 0.045
(e) 0.006
(f) 0.111

43. (a) 1/4


(b) 2/3
Section 4.6
1. (a) 0.55
(b) 5/9
3. (a) 0.672
(b) 0.893
5. 0.398
7. (a) 0.534
(b) 0.402
9. (a) 0.0103
(b) 0.3046

CHAPTER 4 REVIEW
1. (a) 3/4
(b) 3/4
(c) 6/11
(d) 1/22
(e) 9/22
3. (a) 0.68
(b) 0.06
(c) 0.12
5. (a) 11/24
(b) 13/23
7. (a) 1/64
(b) 1/64
(c) 1/64
9. (a) S = {(chicken, rice, melon), (chicken, rice, ice cream), (chicken,
rice, gelatin), (chicken, potatoes, melon), (chicken, potatoes, ice
cream), (chicken, potatoes, gelatin), (roast beef, rice, melon), (roast
beef, rice, ice cream), (roast beef, rice, gelatin), (roast beef, pota-
toes, melon), (roast beef, potatoes, ice cream), (roast beef, potatoes,
gelatin)}
(b) {(chicken, potatoes, ice cream), (chicken, potatoes, gelatin), (roast
beef, potatoes, ice cream), (roast beef, potatoes, gelatin)}
(c) 1/3
(d) 1/12
11. (a) 1/3
(b) 1/3
(c) 1/3
(d) 1/2
13. 14/33 ≈ 0.424

15. (a) 1/52


(b) 1/52
(c) equally
(d) 1/52
17. (a) 0.42
(b) 0.18
(c) 0.24
(d) 0.58
(e) 0.724
19. No
21. (a) 0.496
(b) 54/252
(c) 36/248
(d) No
23. (a) 4
(b) (i) 4/86
(b) (ii) 1/2
(b) (iii) No
25. (a) 0.077
(b) 0.0494
(c) 0.0285
27. (a) 0.64
(b) 0.06, assuming independence
29. (a) 0.5
(b) 0.44
(c) 0.024
(d) 0
31. 2/3
33. (a) same chance, because in order for it to end in 6 games one of the
teams must be ahead 3 to 2 after the first 5 games and the leading
team must win the next game; in order for it to end in 7 games
one of the teams must be ahead 3 to 2 after the first 5 games and
the team that is behind must win the next game. Because we are
assuming that each team is equally likely to win game 6, the two
probabilities are the same.
(b) 10/32
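The argument in Problem 33 can be confirmed by brute force: enumerate all sequences of game winners, with each team equally likely to win each game, and record the game at which the series (first team to 4 wins) ends. This verifies both the equal 6-game and 7-game probabilities in part (a) and the 10/32 in part (b):

```python
from fractions import Fraction
from itertools import product

# All 2^7 equally likely win patterns for up to 7 games; note the game at
# which one team reaches 4 wins.
length_prob = {n: Fraction(0) for n in range(4, 8)}
for outcome in product("AB", repeat=7):
    a = b = 0
    for game, winner in enumerate(outcome, start=1):
        if winner == "A":
            a += 1
        else:
            b += 1
        if a == 4 or b == 4:
            length_prob[game] += Fraction(1, 2 ** 7)
            break

print(length_prob[6])                    # 10/32
print(length_prob[6] == length_prob[7])  # True: 6 and 7 games equally likely
```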
Section 5.2
1. P {Y = 0} = 1/4
P {Y = 1} = 3/4
3. (a) 5/12
(b) 5/12
(c) 0
(d) 1/4

5. i P {Y = i}
1 11/36
2 1/4
3 7/36
4 5/36
5 1/12
6 1/36

7. i P {X = i}
2 0.58
3 0.42

9. i P {X = i}
0 1199/1428
1 55/357
2 3/476

11. i P {X = i}
0 0.075
1 0.325
2 0.6

13. No; P (4) is negative.


15. i P {X = i}
0 38/223
1 82/223
2 57/223
3 34/223
4 10/223
5 2/223
17. (a) 0.1
(b) 0.5
19. i P {X = i}
0 0.30
1 0.35
2 0.20
3 0.15

Section 5.3
1. (a) 2
(b) 5/3
(c) 7/3
3. $8.40
5. 1.9
7. (a) 2.53
(b) 4.47
9. $880

11. (a) 2/3


(b) 4/3
(c) 2
13. (a) Second location
(b) First location
15. −$5
17. (a) No
(b) No
(c) Yes
(d) 4/95 ≈ 0.042
19. −$0.40
21. 2.5
23. $150
25. 0
27. (a) $16,800
(b) $18,000
(c) $18,000
29. 3
31. (a) 7
(b) 7
33. 12
35. 3.6
Section 5.4
1. Var(U ) = 0, Var(V ) = 1, Var(W ) = 100
3. 0
5. 0.49
7. 0.25
9. (b) 0.8
(c) 0.6
11. (a) 0.5
(b) 0.5
13. (a) 0
(b) $3666
15. (a) 4.06
(b) 1.08
17. 3 SD(X) = 6
19. (a) 2
(b) 2
Section 5.5
1. For i and j being any of the integers 1, 2, 3, 4, 5, 6,
P (X = i, Y = i + j ) = 1/36
3. With p(i, k) = P (X = i, Y = k)
P (Y = 1) = 0

P (Y = 2) = p(1, 2) = 1/36
P (Y = 3) = p(1, 3) + p(2, 3) = 2/36
P (Y = 4) = p(1, 4) + p(2, 4) + p(3, 4) = 3/36
5. 35/12
7. 0.7071
Section 5.6
1. (a) 24
(b) 120
(c) 5040
3. 3,628,800
5. (a) 0.278692
(b) 0.123863
(c) 0.00786432
7. (a) 0.468559
(b) 0.885735
9. (a) 3 or more
(b) 0.00856
11. 0.144531
13. (a) 0.517747
(b) 0.385802
(c) 0.131944
15. (a) 0.421875
(b) 0.421875
(c) 0.140625
(d) 0.015625
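The four probabilities in Problem 15 are consistent with a binomial pmf with n = 3 and p = 3/4 (an inference from the answers, not from the problem statement): P(3) = P(2) = 27/64, P(1) = 9/64, P(0) = 1/64. A minimal sketch:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Parameters inferred from the answers above (an assumption).
n, p = 3, 0.75
for k in range(n, -1, -1):
    print(k, binom_pmf(k, n, p))
# 3 0.421875
# 2 0.421875
# 1 0.140625
# 0 0.015625
```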
17. (a) 10/3
(b) 20/3
(c) 10
(d) 50/3
19. (a) 0.430467
(b) 0.382638
(c) 7.2
(d) 0.72
21. (a) 0.037481
(b) 0.098345
(c) 0.592571
(d) 1.76
(e) 0.992774
23. (a) 0.00604662
(b) 0
25. (a) 50; 5
(b) 40; 4.89898
(c) 60; 4.89898

(d) 25; 3.53553


(e) 75; 6.12372
(f) 50; 6.12372
Section 5.7
1. Hypergeometric, n = 20, N = 200, p = 0.09
3. Hypergeometric, n = 6, N = 54, p = 6/54
5. Hypergeometric, n = 20, N = 100, p = 0.05
7. Binomial, n = 10, p = 1/13

CHAPTER 5 REVIEW
1. (a) 0.4
(b) 0.6
3. (a) 1, 2, 3, 4
(b) i P (X = i)
1 0.3
2 0.21
3 0.147
4 0.343
(c) 0.7599
(d) 2.53
(e) 1.53
5. (a) 0.723
(b) No, because if she wins then she will win $1, whereas if she loses
then she will lose $3.
(c) −0.108
7. (a) i P (X = i)
0 0.7
4000 0.15
6000 0.15
(b) 1500
(c) 5,550,000
(d) 2,355.84
9. The low bid will maximize their expected profit.
11. (a) 1/3
(b) 1/4
(c) 7/24
(d) 1/12
(e) 1/24
(f) $625
(g) $125
13. (a) 0
(b) −68,750
(c) −68,750

17. (a) 0.6


(b) 0.648
(c) 0.68256
(d) 0.710208
(e) 0.733432
(f) 0.813908
19. (a) 0.064
(b) 0.432
(c) 0.820026
21. It is more likely that it does not.
23. (a) E[X] = 1.94, E[Y ] = 2.22
(b) Var(X) = 0.6964, Var(Y ) = 0.6516
(c) Cov(X, Y ) = 0.0932
(d) Corr(X, Y ) = 0.1384
25. No
Section 6.2
1. (a) 0.29
(b) 0.56
(c) 0.33
(d) 0.27
3. (a) 2/3
(b) 0.7
(c) 0.6
(d) 0.6
5. (a) 2/3
(b) 1/6
(c) 1/3
7. (a) 1/2
(b) 0
(c) 3/4
(d) 3/8
Section 6.3
1. (a) 108.8 to 148
(b) 89.2 to 167.6
(c) 69.6 to 187.2
3. (b)
5. (d)
7. (c)
9. (a)
11. (b)
13. (d)
15. (b)
17. (a) Y

(b) X
(c) X and Y are equally likely to exceed 100.
19. (a) No
(b) No
(c) No
(d) Yes
Section 6.4
1. (a) 0.9861
(b) 0.1357
(c) 0.4772
(d) 0.7007
(e) 0.975
(f) 0.2358
(g) 0.899
(h) 0.2302
(i) 0.8710
3. 3
7. (a) 1.65
(b) 1.96
(c) 2.58
(d) 0
(e) 0.41
(f) 2.58
(g) 1.15
(h) 0.13
(i) 0.67
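The z values in Problem 7 are standard normal percentiles, which can be reproduced with the standard library's NormalDist (a sketch; the answers above quote the conventional table roundings 1.65, 1.96, and 2.58):

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: mean 0, standard deviation 1

# z_a is the point with right-tail area a: P(Z > z_a) = a.
for a in (0.05, 0.025, 0.005):
    print(a, round(Z.inv_cdf(1 - a), 3))
# 0.05 1.645
# 0.025 1.96
# 0.005 2.576
```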
Section 6.6
1. Since x > a, x − μ > a − μ. It follows that (x − μ)/σ > (a − μ)/σ, since σ is
positive.
3. 0.3085
5. (a) 0.6179
(b) 0.8289
(c) 0.4468
7. 0.008
9. (a) 0.1587
(b) 0.2514
(c) 0.4772
11. 0.8664
13. 6.31
15. (a) 0.2660
(b) 0.9890
(c) 0.7230
(d) 0.9991

(e) 0.0384
17. (a) 0.6915
(b) 0.24
Section 6.7
1. (a) 1.48
(b) 1.17
(c) 0.52
(d) 1.88
(e) −0.39
(f) 0
(g) −1.64
(h) 2.41
3. (a) 50
(b) 57.68
(c) 61.76
(d) 40.16
(e) 57.02
5. 464.22
7. 525.6
9. 746
11. (a) True
(b) True
13. 99.28

CHAPTER 6 REVIEW
1. (a) 0.9236
(b) 0.8515
(c) 0.0324
(d) 0.9676
(e) 0.1423
(f) 0.0007
(g) 75.524
(h) 73.592
(i) 68.3
3. 4.969
5. (a) 0.1587
(b) 0.1587
(c) 0.1886
(d) 576.8
7. (a) 0.881
(b) 0.881
(c) 0.762

9. (a) 0.4483
(b) 0.201
(c) 0.4247
11. (a) 0.6915
(b) 0.3859
(c) 0.1587
13. (a) 1/4
(b) 0.28965
15. (a) 0.8413
(b) 0.042
(c) independence

Section 7.3

1. (a) SD(X̄) = (1/2)/√3 ≈ 0.29
(b) SD(X̄) = (1/2)/√4 = 0.25
3. (a) 2
(b) √(2/3) ≈ 0.82

(c) i P {X = i}
1 1/9
1.5 2/9
2 3/9
2.5 2/9
3 1/9

(d) E(X̄) = 2, SD(X̄) = 1/√3 ≈ 0.58
(e) Yes
5. (a) E(X̄) = 2.4, SD(X̄) = 0.2/√36 ≈ 0.033
(b) E(X̄) = 2.4, SD(X̄) = 0.2/√64 = 0.025
(c) E(X̄) = 2.4, SD(X̄) = 0.2/√100 = 0.02
(d) E(X̄) = 2.4, SD(X̄) = 0.2/√900 ≈ 0.007
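Problem 5 applies the rule SD(X̄) = σ/√n with σ = 0.2; the four answers follow mechanically:

```python
import math

def sd_of_mean(sigma, n):
    """Standard deviation of the sample mean of n independent observations."""
    return sigma / math.sqrt(n)

sigma = 0.2  # population standard deviation from Problem 5
for n in (36, 64, 100, 900):
    print(n, round(sd_of_mean(sigma, n), 3))
# 36 0.033
# 64 0.025
# 100 0.02
# 900 0.007
```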
7. Expected value = 15,500, standard deviation = 2800
Section 7.4
1. (a) 0.5468
(b) 0.7888
(c) 0.9876
3. 0.7888
5. (a) 0.0062
(b) 0.7888
7. 0.9713
9. 0.1416
11. (a) 0.905
(b) 0.5704
13. (a) 0
(b) 0
15. (a) 0.6826
(b) 0.9544
(c) 1
(d) 1
(e) 1
Section 7.5
1. (a) E(X) = 0.6, SD(X) = 0.15
(b) E(X) = 0.6, SD(X) = 0.049
(c) E(X) = 0.6, SD(X) = 0.015
(d) E(X) = 0.6, SD(X) = 0.0049
3. (a) 0.0122
(b) 0.119
(c) 0.5222
9. (a) 0.0125
(b) 0.8508
11. 0.1949
13. 0.4602

15. (a) 0.9147


(b) 0.0043
(c) 0.5188
17. (a) 0.9599
(b) 0.3121
19. (a) 0.9974
(b) 0.0268
Section 7.6
1. (a) 5.7; 4 degrees of freedom
(b) 0.018; 5 degrees of freedom
(c) 1.13; 2 degrees of freedom

CHAPTER 7 REVIEW
1. (a) 0.8413
(b) 0.5
(c) 0.0228
(d) 0.0005
3. E(X̄) = 3; SD(X̄) = 1/√2 ≈ 0.71
5. (a) Mean = 12, standard deviation = 3.25
(b) 0.5588
7. (a) 300
(b) 7√20 ≈ 31.3
(c) 0.5
9. 0.1003
11. (a) 0.3669
(b) 0.9918
(c) 0.9128
15. (a) .5785 without the continuity correction; .6540 with the correction.
(b) Yes
(c) It is the event that X + Y ≥ 10 − X + Z
(d) .9369 without using the continuity correction; .9502 with the cor-
rection.
Section 8.2
1. 145.5
5. 165.6 hours
7. 12
9. 3.23
11. (a)
Section 8.3
1. 0.3849
3. 0.65; 0.107
5. 0.412; 0.05

7. (a) 0.122
(b) 0.01
9. (a) 0.0233
(b) 0.0375
(c) 0.0867
11. (a) 0.245
(b) 0.022
13. (c); it is the most accurate in the sense of having the lowest standard error.
Section 8.3.1
1. 0.28
3. (b) 3.32; 1.73; 1.45
Section 8.4
1. 18.36
3. 799.7; 193.12
5. 21.27
7. 30.5
9. 12.64
11. 1.35
13. 0.0474; 0.2386
Section 8.5
1. (a) (3.06, 3.24)
(b) (3.03, 3.27)
3. (11.43, 11.53)
5. (a) (8852.87, 9147.13)
(b) (8824.69, 9175.31)
7. (72.53, 76.67)
9. (a) (1337.35, 1362.65)
(b) (1334.92, 1365.08)
(c) (1330.18, 1369.82)
11. 13.716
13. 3176
15. (a) 72.99
(b) 72.53
(c) 76.67
(d) 77.53
17. No
Section 8.6
1. (a) (5.15, 5.25)
(b) (5.13, 5.27)

3. (a) (73.82, 93.91)


(b) (71.63, 96.10)
(c) (66.89, 100.84)
5. (a) (127.71, 163.29)
(b) (119.18, 171.82)
7. (446.28, 482.01)
9. (280.04, 284.96)
11. (1849.4, 2550.6)
13. (a) (4.60, 4.80)
(b) (4.58, 4.82)
15. (1124.95, 1315.05)
17. No
19. (a) (27.59, 38.64)
(b) No
21. 68.897, 98.836
23. The average daily receipts exceed $2857.
Section 8.7

1. (0.548, 0.660)
3. (a) (0.502, 0.519)
(b) (0.498, 0.523)
5. (0.546, 0.734)
7. (0.359, 0.411)
9. (0, 0.306)
11. (0.801, 0.874)
13. (0, 0.45)
15. (a) (0.060, 0.108)
(b) (0.020, 0.052)
(c) (0.448, 0.536)
17. (a) A 95% confidence interval is given by 0.75 ± 0.0346.
(b) Rather than using p̂ to estimate p in the standard error term, they
used the upper bound p(1 − p) ≤ 1/4.
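Problem 17(b)'s point is that the margin of error used the conservative bound p(1 − p) ≤ 1/4 instead of plugging in p̂. With a hypothetical sample size of n = 800 (the problem's actual n is not reproduced here), the conservative 95% margin comes out near the 0.0346 quoted above:

```python
import math

def conservative_margin(n, z=1.96):
    """95% margin of error using the worst case p(1 - p) = 1/4."""
    return z * math.sqrt(0.25 / n)

def estimated_margin(p_hat, n, z=1.96):
    """Margin of error using the estimate p_hat in the standard error."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

n = 800  # hypothetical sample size, not taken from the problem
print(round(conservative_margin(n), 4))     # 0.0346
print(round(estimated_margin(0.75, n), 4))  # 0.03 (smaller: 0.75*0.25 < 1/4)
```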
19. (a) 1692
(b) Less than 0.04 but greater than 0.02
(c) (0.213, 0.247)
21. 6147
23. 0.868
25. (a) 0.139
(b) 0.101
27. (a) No
(b) No

CHAPTER 8 REVIEW
1. (a)
3. (22.35, 26.45)
5. (316.82, 323.18)
7. (a) (44.84, 54.36)
(b) (45.66, 53.54)
9. (1527.47, 2152.53)
11. (a) 88.56
(b) (83.05, 94.06)
13. (a) (34.02, 35.98)
(b) (33.04, 36.96)
(c) (31.08, 38.92)
15. (0.487, 0.549)
17. 0.004
19. (a) (0.373, 0.419)
(b) (0.353, 0.427)
21. Upper
Section 9.2
1. (a) Hypothesis B
3. (d) is most accurate; (b) is more accurate than not.
Section 9.3
1. TS = 1.55; zα/2 = 1.96; do not reject H0 .
3. (a) 0.0026
(b) 0.1336
(c) 0.3174
At the 5% level of significance we reject H0 in (a). At the 1% level of
significance we reject H0 in (a).
5. Yes
7. (a) No
(b) 0
9. The data do not support a mean of 13,500 miles.
11. Yes; Yes
13. The p value is 0.281. Thus we reject this hypothesis at a level of signifi-
cance of 0.281 or greater.
15. (a) 0.2616
(b) 0.2616
(c) 0.7549
Section 9.3.1
1. (a) No
(b) No
(c) 0.091

3. (a) 0
(b) 0
(c) 0.0085
5. (a) Yes
(b) No, because the reduction in cavities is so small.
7. Yes, but increase the sample size.
9. The mean amount dispensed is less than 6 ounces; H0: μ ≥ 6; H1: μ < 6;
p value = 0.
Section 9.4
1. The evidence is not strong enough to discredit the manufacturer’s claim
at the 5% level of significance.
3. (a) Yes
(b) No
5. (a) No
(b) No
(c) No
(d) The p value is 0.108.
7. Yes
11. H0: μ ≥ 23 versus H1: μ < 23. The judge should rule for the bakery.
13. (a) H0: μ ≥ 31
(b) H1: μ < 31
(c) No
(d) No
15. No, the p value is 0.0068.
17. No; no
Section 9.5
1. p value = 0.0365; normal approximation is 0.0416
3. No
5. (a) H0: p ≤ 0.5; H1: p > 0.5
(b) 0.1356
(c) 0.0519
(d) 0.0042
As n increases, the p value decreases, because we have more confidence
in the estimate for larger n.
7. (a) No
(b) No
(c) No
(d) Yes
9. No; no
11. No
13. (a) Yes
(b) No
(c) 0.2005

CHAPTER 9 REVIEW
1. (b)
5. (a) No
(b) Yes
(c) Yes
7. There is insufficient evidence to support the claim at the 5% level of
significance.
9. One would probably rule against Caputo, since the p value of the test of
H0 : p = 1/2 against H1: p ≠ 1/2 is 0.000016.
15. (a) 20
(b) 0.4526
Section 10.2
1. (a) No
(b) 0
3. (a) There is evidence to support the hypothesis that the mean lengths
of their cuttings are equal.
(b) 0.8336
5. It suffices to relabel the data sets and use the given test.
7. No
Section 10.3
1. Yes; H0: μx = μy ; H1: μx ≠ μy ; p value = 0.0206
3. p value = 0.5664
5. Yes; 0
7. No; H0: μx ≤ μy ; H1: μx > μy , where x corresponds to rural students and
y corresponds to urban students.
9. H0: μB ≤ μA ; H1: μB > μA ; p value = 0.0838. At the 5% level of signifi-
cance supplier B should be used.
11. (a) H0: μm ≤ μf ; H1: μf < μm
(c) It indicates that the female average wage is less than the male aver-
age wage.
13. (a) The null hypothesis should be rejected for α = 0.01.
(b) 0.0066
(c) Reduction in mean score
Section 10.4
1. No; yes
3. (a) No
(b) No
5. Yes
7. Reject H0: μx = μy for α = 0.05; p value = 0.0028.
9. (a) Reject H0: μx = μy
(b) Reject H0: μx = μy
(c) Do not reject H0: μx = μy

Section 10.5
1. (a) Reject the hypothesis at α = 0.05.
(b) p value = 0.0015
3. Do not reject H0 .
5. (a) Do not reject the hypothesis.
(b) There is not evidence to reject the hypothesis at the 5% level of
significance.
7. Reject the hypothesis at α = 0.05.
9. (a) H0: μbefore ≤ μafter ; H1 : μbefore > μafter
(b) No
11. The null hypothesis is not rejected.
Section 10.6
1. (a) No
(b) No
3. (a) Yes
(b) 0.0178
5. (a) No
(b) 0.0856
7. Reject the hypothesis that the proportions were the same in 1983 and
1990; p value = 0.0017.
9. Reject the hypothesis for α = 0.05; p value = 0.
11. (a) Yes
(b) 0
13. No
15. Yes; H0: pplacebo ≤ paspirin (where p is the proportion that suffered heart
attacks); p value = 0.

CHAPTER 10 REVIEW
1. (a) Reject H0: μx = μy
(b) 0
3. (a) Do not reject the hypothesis that the probabilities are the same.
(b) 0.5222
(c) No
(d) α ≥ 0.2611
5. (a) Reject H0: μx = μy
(b) 0.0497
7. Do not reject the hypothesis that the probabilities are the same.
9. Do not reject the hypothesis (p value = 0.79).
11. Do not reject the hypothesis that the proportions are the same in both
sports.

Section 11.2
1. (a) X1 = 8, X2 = 14, X3 = 11
(b) X = 11
3. Yes
5. No
7. Do not reject the hypothesis for α = 0.05.
9. Reject the hypothesis that death rates do not depend on season for α =
0.05.
11. No
Section 11.3
1. α̂ = 68.8, α̂1 = 14.2, α̂2 = 6.53, α̂3 = −3.47, α̂4 = −3.47, α̂5 = −13.8,
β̂1 = 0.8, β̂2 = −2.4, β̂3 = 1.6
3. α̂ = 28.33, α̂1 = 1, α̂2 = −2, α̂3 = 1, β̂1 = 3.67, β̂2 = −0.67, β̂3 = −3
7. α̂ = 9.585, α̂1 = −1.74, α̂2 = −1.96, α̂3 = 4.915, α̂4 = −1.36,
α̂5 = −3.335, β̂1 = 0.495, β̂2 = −0.405,
β̂3 = 0.795, β̂4 = −0.885
9. (a) 44
(b) 48
(c) 52
(d) 144
Section 11.4
1. (a) Yes
(b) No
3. (a) No
(b) No
5. (a) No (Reject H0 )
(b) Yes (Do not reject H0 )
7. The p value in both cases is less than 0.0001.
9. (a) Reject the hypothesis for α = 0.05.
(b) Do not reject the hypothesis for α = 0.05.

CHAPTER 11 REVIEW
1. Reject the hypothesis for α = 0.05.
3. Yes for α = 0.05.
5. Do not reject the hypothesis for α = 0.05.
7. (a) Do not reject the hypothesis for α = 0.05.
(b) 30.6
(c) Reject the hypothesis for α = 0.05.

9. (a) Do not reject the hypothesis for α = 0.05.


(b) Reject the hypothesis for α = 0.05.

Section 12.2

1. (a)

(b) Yes
3. (a) Density; speed
(b)

(c) Yes
5. (a)

(b) No

Section 12.3

1. (a)

(b)

3. (a)

(c) y = 14.79 + 2.43x


7. (a) y = −8.31 + 0.27x
(b) 31.66
(c) y = 31.66 + 3.61x
(d) 147.12
9. At random
11. (a) y = 67.56 + 0.23x
(b) 204.62
(c) 261.73
(d) 296.00
13. 121.85
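The fitted lines in this section all come from the same least-squares formulas, b = Σ(x − x̄)(y − ȳ)/Σ(x − x̄)² and a = ȳ − b x̄. A minimal sketch on made-up data (not from the textbook problems):

```python
def least_squares(xs, ys):
    """Return intercept a and slope b of the least-squares line y = a + b*x."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    b = sxy / sxx
    return y_bar - b * x_bar, b

# Hypothetical data for illustration only.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.1, 5.9, 8.1, 9.8]
a, b = least_squares(xs, ys)
print(round(a, 2), round(b, 2))  # 0.18 1.94
```

A fitted value at, say, x = 4 is then a + b*4, exactly as in Problems 7 and 11 above.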

Section 12.4
1. 2.32
3. (a) 6
(b) 6
(c) 76
5. 0.000156
7. 6970.21
Section 12.5
1. Do not reject H0: β = 0.
3. Reject the hypothesis.
5. (a)

(b) y = 0.75 + 0.0013x


(c) Reject the hypothesis.
(d) Reject the hypothesis.
7. (a)

(b) y = 2.12 + 0.0003x


(c) Do not reject the hypothesis.
(d) Do not reject the hypothesis.
9. Reject the hypothesis.

Section 12.6
1. (a) α = 10.48, β = 0.325
(b) Yes
7. Not as well as the heights
Section 12.7
1. (a) 12.6
(b) (6.4, 18.8)
3. (a) y = 44.818 − 0.3138x
(b) 28.814
(c) (25.083, 32.545)
(d) (6026.89, 9520.09)
5. (a) 2.501
(b) (2.493, 2.510)
7. (a) $33,266
(b) (27,263, 39,268)
(c) $42,074; (35,608, 48,541)
Section 12.8
1. (a)

(b) y = 8.885 + 56.32x


(c) 97%
(d) (144,628, 165,929)
3. (a) 0.9996
(b) Yes
(c) 41.975
(d) (40.692, 43.258)
5. 0.149
7. 0.059

Section 12.9
1. (a) 0.9796; 0.9897
(b) 0.9796; 0.9897
This indicates that the value of the sample correlation coefficient does
not depend on which variable is considered the independent variable.
3. (a) 0.8
(b) 0.8
(c) −0.8
(d) −0.8
5. (a) y = −3.16 + 1.24x
(b) y = 7.25 + 0.66x
(c) 0.818; 0.904
(d) 0.818; 0.904
Section 12.11
3. y = −153.51 + 51.75x1 + 0.077x2 + 20.92x3 + 13.10x4 ; 183.62
5. 69.99

CHAPTER 12 REVIEW
1. (a)

(b) y = 177.93 + 6.89x


(c) 522.61
(d) (480.53, 564.68)
3. (a) α = 94.13; β = 0.155
(b) (93.17, 132.34)
(c) 100%
5. Not necessarily; doing well (or poorly) might just be a chance phe-
nomenon that will tend to regress toward the mean on the next attempt.
9. (a) 34.9
(b) (4.34, 23.40)

11. (a) y = 177.41 + 1.07x1 + 11.7x2


(b) 241.90
15. Alcohol consumption, which is associated with both cigarette consump-
tion and bladder cancer incidence, might be the primary cause. A multi-
ple linear regression would be useful.
Section 13.2
1. (a) 15.086
(b) 11.070
(c) 23.209
(d) 18.307
(e) 31.410
3. H0: P1 = 0.52, P2 = 0.32, P3 = 0.16. No, H0 is not rejected.
5. Yes, the null hypothesis is rejected.
7. No, p value = 0.0002.
9. Yes; yes
11. Yes
13. Do not reject the hypothesis.
15. Reject the hypothesis.
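The tests in this section all compare observed counts with expected counts via the statistic Σ(O − E)²/E. A sketch for a null hypothesis like Problem 3's; the observed counts here are made up for illustration, not taken from the problem:

```python
def chi_square_stat(observed, null_probs):
    """Goodness-of-fit statistic sum of (O - E)^2 / E under the given null."""
    n = sum(observed)
    expected = [n * p for p in null_probs]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

null_probs = [0.52, 0.32, 0.16]  # the null from Problem 3
observed = [54, 30, 16]          # hypothetical counts, n = 100
ts = chi_square_stat(observed, null_probs)
print(round(ts, 3))  # 0.202
```

With 3 categories there are 2 degrees of freedom, so a statistic this small falls far below the 5% critical value 5.991, consistent with not rejecting H0.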
Section 13.3
1. (a) 7.08
(b) Yes
(c) No
3. Reject the hypothesis.
5. Do not reject that the characteristics are independent.
7. Reject the hypothesis.
9. No
11. Reject the hypothesis; reject the hypothesis.
13. Do not reject the hypothesis.
Section 13.4
1. No, we cannot conclude that smoking causes lung cancer, but we can
conclude that the per capita lung cancer rate is higher for smokers than
for nonsmokers.
3. Do not reject the hypothesis.
5. No
7. Yes; no
9. Do not reject in each case.

CHAPTER 13 REVIEW
1. Do not reject the hypothesis.
5. Reject the hypothesis.
7. Do not reject the hypothesis for α = 0.05.
9. Yes

11. No; no
13. (a) Do not reject the hypothesis.
(b) 0.208
15. Do not reject the hypothesis; do not reject the hypothesis.
Section 14.2
1. (a) p value = 0.057. Reject the null hypothesis at any significance level
greater than or equal to 0.057.
(b) p value ≈ 0. Reject the null hypothesis at any significance level.
(c) p value ≈ 0. Reject the null hypothesis at any significance level.
3. We cannot reject the null hypothesis that the two guns are equally effec-
tive.
5. Since n is small, we use the binomial distribution to calculate the p value,
which is 0.291. Thus we cannot reject the hypothesis that the median score
is at least 72.
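The binomial sign-test calculation in Problem 5 can be sketched as follows. The quoted p value is consistent with n = 13 observations, of which 5 fall below the hypothesized median (an inference from 0.291; the actual counts are not shown here):

```python
from math import comb

def sign_test_p_value(n, k):
    """One-sided sign-test p value: P(X <= k) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

# Assumed counts inferred from the quoted p value.
print(round(sign_test_p_value(13, 5), 3))  # 0.291
```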
7. Yes, this discredits the hypothesis. p value = 0.0028.
Section 14.3
1. (a) TS = 39
(b) TS = 42
(c) TS = 20
3. (a) p value = 0.2460
(b) p value = 0.8336
(c) p value = 0.1470
5. (a) Yes; how the paper was presented had an effect on the score given.
(b) p value = 0.0102
7. (a) The null hypothesis is rejected at any significance level greater than
or equal to 0.1250.
(b) The null hypothesis is rejected at any significance level greater than
or equal to 0.0348.
9. No, we cannot reject the null hypothesis. Painting does not affect an
aircraft’s cruising speed.
Section 14.4
1. (a) 94
(b) 77
3. p value = 0.8572
5. Since the p value = 0.2112, we cannot reject the null hypothesis that the
starting salary distributions for MBAs from the two schools are the same.
7. p value = 0.4357
Section 14.5
1. (a) 41
(b) 2
3. Since the p value = 0.0648, we cannot reject the hypothesis that the data
constitutes a random sample.

5. Since the p value = 0.0548, we cannot reject the null hypothesis that the
interviewer interviewed them in a randomly chosen order.
7. (a) Median = 163.5
(b) Seven runs
(c) Since the p value = 0.0016, we must reject the null hypothesis at
any significance level greater than or equal to 0.0016. The sequence
of values does not constitute a random sample.
Section 14.7
1. The data strongly support the hypothesis that the student improved as
the semester progressed.

CHAPTER 14 REVIEW
1. Using the rank-sum test with TS = 113, we obtain a p value of 0.0348.
So we cannot reject the null hypothesis at the 1% level of significance,
but we must reject the null hypothesis at the 5% level.
3. Since the p value ≈ 0, we reject the null hypothesis; the median net worth
has decreased.
5. We do a runs test, with median = 145, n = m = 20, and r = 21 runs. Since
the observed number of runs equals its expected value of 21, the p value is 1.0.
9. Using the signed-rank test with TS = 0, we obtain a p value of 0.0444. Thus we
reject the null hypothesis that there is no difference in the shoe sales at
any level of significance above 4.44%.
11. Since the p value = 0.5620, we cannot reject the null hypothesis.
Section 15.2
1. (a) LCL = 85, UCL = 115
(b) LCL = 86.58, UCL = 113.42
(c) LCL = 87.75, UCL = 112.25
(d) LCL = 90.51, UCL = 109.49
3. LCL = 66.58, UCL = 93.42. Since subgroup 9 falls outside this range, the
process would have been declared out of control at that point.
5. LCL = −0.00671, UCL = 0.00671. Since all the subgroups are within
these control limits, the process is in control.
Section 15.3
1. LCL = 0, UCL = 13.23. Since all the subgroups are within the control
limits, the process is in control.
3. (a) Since all the subgroups are within the control limits, the process is
in control.
(b) LCL = 0, UCL = 9.88

CHAPTER 15 REVIEW
1. LCL = 1.4985, UCL = 1.5015.
3. LCL = 0, UCL = 13.23. Since all the subgroups are within these control
limits, the process is in control.
APPENDIX A

A Data Set

Student Weight Cholesterol Pressure Gender Student Weight Cholesterol Pressure Gender
1 147 213 127 F 30 129 194 114 M
2 156 174 116 M 31 111 184 104 F
3 112 193 110 F 32 156 191 118 M
4 127 196 110 F 33 155 221 107 F
5 144 220 130 F 34 104 212 111 F
6 140 183 99 M 35 217 221 156 M
7 119 194 112 F 36 132 204 117 F
8 139 200 102 F 37 103 204 121 F
9 161 192 121 M 38 171 191 105 M
10 146 200 125 F 39 135 183 110 F
11 190 200 125 M 40 249 227 137 M
12 126 199 133 F 41 185 188 119 M
13 164 178 130 M 42 194 200 109 M
14 176 183 136 M 43 165 197 123 M
15 131 188 112 F 44 121 208 100 F
16 107 193 113 F 45 124 218 102 F
17 116 187 112 F 46 113 194 119 F
18 157 181 129 M 47 110 212 119 F
19 186 193 137 M 48 136 207 99 F
20 189 205 113 M 49 221 219 149 M
21 147 196 113 M 50 151 201 109 F
22 112 211 110 F 51 182 208 130 M
23 209 202 97 M 52 151 192 107 M
24 135 213 103 F 53 182 192 136 M
25 168 216 95 M 54 149 191 124 M
26 209 206 107 M 55 162 196 132 M
27 102 195 102 F 56 168 193 92 M
28 166 191 111 M 57 185 185 123 M
29 132 171 112 M 58 191 201 118 M
Introductory Statistics. DOI:10.1016/B978-0-12-804317-2.00024-2
Copyright © 2017 Elsevier Inc. All rights reserved.
59 173 185 114 M 102 184 192 129 M
60 186 203 114 M 103 179 202 129 M
61 161 177 119 M 104 105 211 109 F
62 149 213 124 F 105 157 179 109 M
63 103 192 104 F 106 202 210 124 M
64 126 193 99 F 107 140 188 112 F
65 181 212 141 M 108 165 203 114 F
66 190 188 124 M 109 184 199 151 M
67 124 201 114 F 110 132 195 129 F
68 175 219 125 M 111 119 202 117 F
69 161 189 120 M 112 158 195 112 M
70 160 203 108 F 113 138 217 101 F
71 171 186 111 M 114 177 194 136 M
72 176 186 114 M 115 99 204 129 F
73 156 196 99 M 116 177 198 126 M
74 126 195 123 F 117 134 195 111 F
75 138 205 113 F 118 133 168 98 M
76 136 223 131 F 119 194 201 120 M
77 192 195 125 M 120 140 211 132 F
78 122 205 110 F 121 104 195 106 F
79 176 198 96 M 122 191 180 130 M
80 195 215 143 M 123 184 205 116 M
81 126 202 102 F 124 155 189 117 M
82 138 196 124 F 125 126 196 112 F
83 166 196 103 M 126 190 195 124 M
84 86 190 106 F 127 132 218 120 F
85 90 185 110 F 128 133 194 121 F
86 177 188 109 M 129 174 203 128 M
87 136 197 129 F 130 168 190 120 M
88 103 196 95 F 131 190 196 132 M
89 190 227 134 M 132 176 194 107 M
90 130 211 119 F 133 121 210 118 F
91 205 219 130 M 134 131 167 105 M
92 127 202 121 F 135 174 203 88 M
93 182 204 129 M 136 112 183 94 F
94 122 213 116 F 137 121 203 116 F
95 139 202 102 F 138 132 194 104 F
96 189 205 102 M 139 155 188 111 M
97 147 184 114 M 140 127 189 106 F
98 180 198 123 M 141 151 193 120 M
99 130 180 94 M 142 189 221 126 M
100 130 204 118 F 143 123 194 129 F
101 150 197 110 F 144 137 196 113 F
145 122 201 113 F 187 108 185 96 F
146 126 212 121 F 188 126 194 122 F
147 136 210 120 F 189 175 201 138 M
148 145 168 115 M 190 168 182 118 M
149 202 202 122 M 191 115 194 122 F
150 151 206 108 F 192 129 193 90 F
151 137 178 128 M 193 131 209 119 F
152 90 178 100 F 194 187 182 134 M
153 177 220 123 M 195 185 200 127 M
154 139 214 120 F 196 114 196 113 F
155 172 191 117 M 197 206 216 124 M
156 107 179 106 F 198 151 212 113 F
157 186 209 129 M 199 128 204 110 F
158 198 196 140 M 200 128 204 115 F
159 113 184 110 F 201 183 190 136 M
160 143 209 105 F 202 104 192 93 F
161 205 198 137 M 203 99 209 110 F
162 186 206 111 M 204 201 208 120 M
163 174 189 129 M 205 129 204 100 F
164 171 197 132 M 206 149 193 117 F
165 209 202 128 M 207 123 200 120 F
166 126 203 134 F 208 179 191 122 M
167 160 185 109 M 209 150 216 128 F
168 127 212 124 F 210 133 193 110 F
169 112 193 115 F 211 112 190 107 F
170 155 184 112 M 212 175 188 113 M
171 111 181 111 F 213 120 182 126 F
172 151 196 129 M 214 126 207 110 F
173 110 181 113 F 215 170 201 101 M
174 159 192 115 M 216 175 211 115 M
175 173 196 131 M 217 134 219 129 F
176 148 191 101 M 218 118 211 113 F
177 141 216 110 F 219 118 178 109 F
178 161 186 123 M 220 164 196 107 M
179 125 209 113 F 221 186 190 134 M
180 114 200 109 F 222 172 189 134 M
181 125 206 135 F 223 173 207 101 M
182 129 214 100 F 224 185 206 128 M
183 115 207 115 F 225 190 198 117 M
184 142 197 118 F 226 146 200 112 F
185 183 202 114 M 227 103 179 100 F
186 181 212 118 M 228 124 215 124 F
229 186 213 124 M 271 195 195 148 M
230 166 166 129 M 272 199 201 125 M
231 138 201 120 F 273 148 202 120 F
232 175 198 118 M 274 164 190 113 M
233 104 194 100 F 275 137 196 107 F
234 213 206 130 M 276 133 173 121 M
235 171 182 118 M 277 104 214 112 F
236 180 213 119 M 278 126 194 116 F
237 187 197 128 M 279 120 220 116 F
238 117 194 106 F 280 148 204 131 F
239 108 185 105 F 281 100 206 89 F
240 128 202 105 F 282 178 190 125 M
241 170 196 118 M 283 149 188 108 F
242 183 176 126 M 284 157 194 124 M
243 143 190 101 M 285 99 203 95 F
244 160 205 120 F 286 192 208 127 M
245 185 184 113 M 287 175 181 145 M
246 122 193 142 F 288 208 193 123 M
247 225 218 142 M 289 201 208 138 M
248 139 191 99 F 290 174 199 111 M
249 123 207 116 F 291 188 189 119 M
250 129 176 108 F 292 151 205 133 F
251 142 220 137 F 293 202 220 126 M
252 146 191 116 M 294 125 198 106 F
253 129 201 100 F 295 176 190 116 M
254 163 171 119 M 296 183 188 96 M
255 177 206 134 M 297 118 198 130 F
256 183 190 116 M 298 125 204 111 F
257 120 201 104 F 299 237 209 127 M
258 188 214 115 M 300 124 186 127 F
259 140 182 119 M 301 98 194 104 F
260 166 197 113 M 302 182 199 108 M
261 122 199 107 F 303 184 206 149 M
262 177 207 124 M 304 137 189 113 F
263 184 204 122 M 305 126 177 111 F
264 113 198 121 F 306 202 198 130 M
265 214 221 142 M 307 225 212 142 M
266 144 205 111 M 308 181 200 122 M
267 188 188 132 M 309 178 187 121 M
268 114 204 127 F 310 132 221 110 F
269 158 213 111 F 311 164 201 134 M
270 146 196 116 M 312 163 191 138 M
APPENDIX B

Mathematical Preliminaries

B.1 SUMMATION

Consider four numbers that we will call x1, x2, x3, and x4. If s is equal to the
sum of these numbers, then we can express this fact either by writing

s = x1 + x2 + x3 + x4

or by using the summation notation Σ. In this latter situation we write

s = Σ_{i=1}^{4} xi

which means that s is equal to the sum of the xi values as i ranges from 1 to 4.
The summation notation is quite useful when we want to sum a large number
of quantities. For instance, suppose that we were given 100 numbers, desig-
nated as x1 , x2 , and so on, up to x100 . We could then compactly express s, the
sum of these numbers, as

s = Σ_{i=1}^{100} xi

If we want the sum to include only the 60 numbers starting at x20 and ending
at x79 , then we could express this sum by the notation

Σ_{i=20}^{79} xi

That is, Σ_{i=20}^{79} xi is the sum of the xi values as i ranges from 20 to 79.
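The two summations above can be checked numerically. Python is used here purely for illustration; the list x and the concrete choice xi = i are our own, not part of the text:

```python
# Illustration only: take xi = i for i = 1, ..., 100.
# Index 0 is left unused so that x[i] matches the xi notation in the text.
x = [0] + list(range(1, 101))

# s = the sum of x1 through x100
s = sum(x[i] for i in range(1, 101))

# The sum of x20 through x79 (i runs from 20 to 79 inclusive)
partial = sum(x[i] for i in range(20, 80))

print(s)        # 5050 for this choice of xi
print(partial)  # 2970 for this choice of xi
```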

B.2 ABSOLUTE VALUE


The absolute value of a number is its magnitude regardless of its sign. For
instance, the absolute value of 4 is 4, whereas the absolute value of −5 is 5. In
general, the absolute value of a positive number is that number, whereas the

FIGURE B.1 Distance from −2 to 0 is |−2| = 2.

absolute value of a negative number is its negative. We use the symbol |x| to
denote the absolute value of the number x. Thus,

|x| =  x   if x ≥ 0
|x| = −x   if x < 0
If we represent each real number by a point on a straight line, then |x| is the
distance from point x to the origin 0. This is illustrated by Fig. B.1.
If x and y are any two numbers, then |x − y| is equal to the distance between x
and y. For instance, if x = 5 and y = 2, then |x − y| = |5 − 2| = |3| = 3. On the
other hand, if x = 5 and y = −2, then |x − y| = |5 − (−2)| = |5 + 2| = 7. That
is, the distance between 5 and 2 is 3, whereas the distance between 5 and −2
is 7.
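Python's built-in abs function computes |x|, so the examples in this section can be checked directly (a sketch using the same numbers as the text):

```python
# |4| = 4 and |-5| = 5: the magnitude regardless of sign
assert abs(4) == 4
assert abs(-5) == 5

# |x - y| is the distance between x and y on the number line
x, y = 5, 2
print(abs(x - y))  # 3: the distance between 5 and 2

x, y = 5, -2
print(abs(x - y))  # 7: the distance between 5 and -2
```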

B.3 SET NOTATION


Consider a collection of numbers, for instance, all the real numbers. Some-
times we are interested in the subcollection of these numbers that satisfies a
particular property. Let A designate a certain property; for instance, A could
be the property that the number is positive or that it is an even integer or that
it is a prime integer. We express the numbers in the collection that have the
property A by the notation
{x: x has property A}
which is read as “the set of all the values x in the collection that have the
property A.” For instance,
{x: x is an even integer between 1 and 7}
is just the set consisting of the three values 2, 4, and 6. That is,
{x: x is an even integer between 1 and 7} = {2, 4, 6}
We are sometimes interested in the set of all numbers that are within some
fixed distance of a specified number. For instance, consider the set of all num-
bers that are within 2 of the number 5. This set can be expressed as
{x: |x − 5| ≤ 2}
Because a number will be within 2 of the number 5 if and only if that number
lies between 3 and 7, we have
{x: |x − 5| ≤ 2} = {x: 3 ≤ x ≤ 7}
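Set-builder notation translates almost word for word into Python set comprehensions. The two sets discussed above can be built as follows (an illustrative sketch; the variable names are ours, and the second set is restricted to integers for display):

```python
# {x : x is an even integer between 1 and 7}
evens = {x for x in range(1, 8) if x % 2 == 0}
print(evens)  # {2, 4, 6}

# {x : |x - 5| <= 2}, restricted to integers for this illustration;
# these are exactly the integers x with 3 <= x <= 7
near_five = {x for x in range(0, 11) if abs(x - 5) <= 2}
print(near_five)  # {3, 4, 5, 6, 7}
```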
APPENDIX C

How to Choose a Random Sample

As we have seen in this book, it is extremely important to be able to choose a


random sample. Suppose that we want to choose a random sample of size n
from a population of size N . How can we accomplish this?
The first step is to number the population from 1 to N in any arbitrary man-
ner. Then we will choose a random sample by designating n elements of the
population that are to be in the sample. To do this, we start by letting the first
element of the sample be equally likely to be any of the N elements. The next
element is then chosen so that it is equally likely to be any of the remaining
N − 1 elements, the next so that it is equally likely to be any of the remaining
N − 2 elements, and so on, until we have amassed a total of n elements, which
constitute the random sample.
To implement this scheme, it seems that we would always have to keep track
of which elements had already been selected. However, by a neat trick, it turns
out that this is not necessary. Indeed, we can arrange the N elements in an
ordered list and then randomly choose not the elements themselves but rather
the positions of the elements that are to be put in the random sample. Let us
see how it works when N = 7 and n = 3. We start by numbering each of the 7
elements in the population and then arranging them in a list. Say the initial
order is
1, 2, 3, 4, 5, 6, 7
We now choose a number that is equally likely to be 1, 2, 3, 4, 5, 6, or 7; say
4 is chosen. This means that the element in position 4 (element number 4 in
this case) is put in the random sample. To indicate that this element is in the
random sample and to make certain that this element will not be chosen again,
we interchange in the list the element in position 4 with the one in position 7.
This results in the new list ordering
1, 2, 3, 7, 5, 6, 4
where we have underlined the element that is in the random sample. The next
element to be put in the random sample should be equally likely to be any

of the elements in the first 6 positions. Thus we select a value that is equally
likely to be 1, 2, 3, 4, 5, or 6; the element in that position will become part
of the random sample. And to indicate this and to leave the first 5 positions
for the elements that have not yet been chosen, we interchange the element in
the position chosen with the element in position 6. For instance, if the value
chosen was 4, then the element in position 4 (that is, element number 7)
becomes part of the random sample, and the new list ordering is
1, 2, 3, 6, 5, 7, 4
The final element of the random sample is equally likely to be any of the el-
ements in positions 1 through 5, so we select a value that is equally likely to
be 1, 2, 3, 4, or 5 and interchange the element in that position with the one in
position 5. For instance, if the value is 2, then the new ordering is
1, 5, 3, 6, 2, 7, 4
Since there are now three elements in the random sample, namely, 2, 7, and 4,
the process is complete.
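The position-swapping scheme just described can be sketched in Python. The function name random_sample is ours, and random.randint plays the role of the equally likely choice discussed below; this is an illustrative sketch, not the book's Program A-1:

```python
import random

def random_sample(n, N):
    """Return a random sample of n of the integers 1 through N,
    using the position-swapping scheme described above."""
    positions = list(range(1, N + 1))  # the list ordering 1, 2, ..., N
    for j in range(N, N - n, -1):
        # choose a position equally likely to be any of 1, ..., j
        k = random.randint(1, j)
        # swap the chosen element into position j; positions j, ..., N
        # now hold the elements already selected for the sample
        positions[k - 1], positions[j - 1] = positions[j - 1], positions[k - 1]
    return positions[N - n:]

print(random_sample(3, 7))  # e.g. [2, 7, 4], as in the worked example
```

Because selected elements are swapped to the back of the list, no separate record of which elements have already been chosen is needed, which is exactly the "neat trick" described above.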
To implement the foregoing algorithm for generating a random sample, we
need to know how to generate the value of a random quantity that is equally
likely to be any of the numbers 1, 2, 3, . . . , k. The key to doing this is to make
use of random numbers that are the values of random variables that are uni-
formly distributed over the interval (0, 1). Most computers have a built-in
random number generator that allows one to call for the value of such a quan-
tity. If U designates a random number—that is, U is uniformly distributed over
the interval (0, 1)—then it can be shown that
I = Int (kU ) + 1
will be equally likely to be any of the values 1, 2, . . . , k, where Int (x) stands for
the integer part of x. For instance,
Int (4.3) = 4
Int (12.9) = 12
and so on.
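A minimal sketch of this construction in Python, where random.random() supplies the uniform (0, 1) value U and int() takes the integer part (the function name is ours):

```python
import random

def random_integer(k):
    """Equally likely to be any of the values 1, 2, ..., k."""
    U = random.random()     # uniformly distributed over (0, 1)
    return int(k * U) + 1   # I = Int(kU) + 1

# Int(x) is the integer part of x
assert int(4.3) == 4
assert int(12.9) == 12

print(random_integer(7))  # some value from 1 to 7
```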
Program A-1 uses these ideas to generate a random sample of size n from the set
of numbers 1, 2, . . . , N . When running this program, you will be asked first to
enter the values of n and N and then to enter any four-digit number. For this
last request, just type and enter any number that comes to mind. The output
from this program is the subset of size n that constitutes the random sample.
■ Example C.1
Suppose we want to choose a random sample of size 12 from a popula-
tion of 200 members. To do so, we start by arbitrarily numbering the 200
members of the population so that they now have numbers 1 to 200. We
run Program A-1 to obtain the 12 members of the population that are to
constitute the random sample.

THIS PROGRAM GENERATES A RANDOM SAMPLE OF K
OF THE INTEGERS 1 THRU N
ENTER THE VALUE OF N
? 200
ENTER THE VALUE OF K
? 12
Random Number Seed (-32,768 to 32,767)? 355
THE RANDOM SAMPLE CONSISTS OF THE FOLLOWING 12 ELEMENTS
90 89 82 162 21 81 182 45 38 195 64 1 ■
APPENDIX D

Tables

Table D.1 Standard Normal Probabilities


Table entries give P {Z ≤ x}.
x 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974

2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1 0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2 0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3 0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9997
3.4 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998
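Assuming SciPy is available, entries in these tables can be reproduced with the standard scipy.stats distributions (shown here as a spot check, not a replacement for the tables):

```python
from scipy.stats import norm, t

# Table D.1: P{Z <= 1.96} -- row 1.9, column 0.06
print(round(norm.cdf(1.96), 4))        # 0.975, matching the entry 0.9750

# Table D.2: t_{10, 0.025}, the point with upper-tail area 0.025
# under a t distribution with 10 degrees of freedom
print(round(t.ppf(1 - 0.025, 10), 3))  # 2.228
```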

Table D.2 Percentiles t_{n,α} of t Distributions


α
n 0.40 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
1 0.325 1.000 3.078 6.314 12.706 31.821 63.657 127.32 318.31 636.62
2 0.289 0.816 1.886 2.920 4.303 6.965 9.925 14.089 23.326 31.598
3 0.277 0.765 1.638 2.353 3.182 4.541 5.841 7.453 10.213 12.924
4 0.271 0.741 1.533 2.132 2.776 3.747 4.604 5.598 7.173 8.610
5 0.267 0.727 1.476 2.015 2.571 3.365 4.032 4.773 5.893 6.869
6 0.265 0.718 1.440 1.943 2.447 3.143 3.707 4.317 5.208 5.959
7 0.263 0.711 1.415 1.895 2.365 2.998 3.499 4.029 4.785 5.408
8 0.262 0.706 1.397 1.860 2.306 2.896 3.355 3.833 4.501 5.041
9 0.261 0.703 1.383 1.833 2.262 2.821 3.250 3.690 4.297 4.781
10 0.260 0.700 1.372 1.812 2.228 2.764 3.169 3.581 4.144 4.587
11 0.260 0.697 1.363 1.796 2.201 2.718 3.106 3.497 4.025 4.437
12 0.259 0.695 1.356 1.782 2.179 2.681 3.055 3.428 3.930 4.318
13 0.259 0.694 1.350 1.771 2.160 2.650 3.012 3.372 3.852 4.221
14 0.258 0.692 1.345 1.761 2.145 2.624 2.977 3.326 3.787 4.140
15 0.258 0.691 1.341 1.753 2.131 2.602 2.947 3.286 3.733 4.073
16 0.258 0.690 1.337 1.746 2.120 2.583 2.921 3.252 3.686 4.015
17 0.257 0.689 1.333 1.740 2.110 2.567 2.898 3.222 3.646 3.965
18 0.257 0.688 1.330 1.734 2.101 2.552 2.878 3.197 3.610 3.922
19 0.257 0.688 1.328 1.729 2.093 2.539 2.861 3.174 3.579 3.883
20 0.257 0.687 1.325 1.725 2.086 2.528 2.845 3.153 3.552 3.850
21 0.257 0.686 1.323 1.721 2.080 2.518 2.831 3.135 3.527 3.819
22 0.256 0.686 1.321 1.717 2.074 2.508 2.819 3.119 3.505 3.792
23 0.256 0.685 1.319 1.714 2.069 2.500 2.807 3.104 3.485 3.767
24 0.256 0.685 1.318 1.711 2.064 2.492 2.797 3.091 3.467 3.745
25 0.256 0.684 1.316 1.708 2.060 2.485 2.787 3.078 3.450 3.725
26 0.256 0.684 1.315 1.706 2.056 2.479 2.779 3.067 3.435 3.707
27 0.256 0.684 1.314 1.703 2.052 2.473 2.771 3.057 3.421 3.690
28 0.256 0.683 1.313 1.701 2.048 2.467 2.763 3.047 3.408 3.674
29 0.256 0.683 1.311 1.699 2.045 2.462 2.756 3.038 3.396 3.659
30 0.256 0.683 1.310 1.697 2.042 2.457 2.750 3.030 3.385 3.646
40 0.255 0.681 1.303 1.684 2.021 2.423 2.704 2.971 3.307 3.551
60 0.254 0.679 1.296 1.671 2.000 2.390 2.660 2.915 3.232 3.460
120 0.254 0.677 1.289 1.658 1.980 2.358 2.617 2.860 3.160 3.373
∞ 0.253 0.674 1.282 1.645 1.960 2.326 2.576 2.807 3.090 3.291

n = degrees of freedom.

Table D.3 Percentiles χ²_{n,α} of the Chi-Squared Distributions
α
n 0.995 0.990 0.975 0.950 0.900 0.500 0.100 0.050 0.025 0.010 0.005
1 0.00+ 0.00+ 0.00+ 0.00+ 0.02 0.45 2.71 3.84 5.02 6.63 7.88
2 0.01 0.02 0.05 0.10 0.21 1.39 4.61 5.99 7.38 9.21 10.60
3 0.07 0.11 0.22 0.35 0.58 2.37 6.25 7.81 9.35 11.34 12.84
4 0.21 0.30 0.48 0.71 1.06 3.36 7.78 9.49 11.14 13.28 14.86
5 0.41 0.55 0.83 1.15 1.61 4.35 9.24 11.07 12.83 15.09 16.75
6 0.68 0.87 1.24 1.64 2.20 5.35 10.65 12.59 14.45 16.81 18.55
7 0.99 1.24 1.69 2.17 2.83 6.35 12.02 14.07 16.01 18.48 20.28
8 1.34 1.65 2.18 2.73 3.49 7.34 13.36 15.51 17.53 20.09 21.96
9 1.73 2.09 2.70 3.33 4.17 8.34 14.68 16.92 19.02 21.67 23.59
10 2.16 2.56 3.25 3.94 4.87 9.34 15.99 18.31 20.48 23.21 25.19
11 2.60 3.05 3.82 4.57 5.58 10.34 17.28 19.68 21.92 24.72 26.76
12 3.07 3.57 4.40 5.23 6.30 11.34 18.55 21.03 23.34 26.22 28.30
13 3.57 4.11 5.01 5.89 7.04 12.34 19.81 22.36 24.74 27.69 29.82
14 4.07 4.66 5.63 6.57 7.79 13.34 21.06 23.68 26.12 29.14 31.32
15 4.60 5.23 6.27 7.26 8.55 14.34 22.31 25.00 27.49 30.58 32.80

16 5.14 5.81 6.91 7.96 9.31 15.34 23.54 26.30 28.85 32.00 34.27
17 5.70 6.41 7.56 8.67 10.09 16.34 24.77 27.59 30.19 33.41 35.72
18 6.26 7.01 8.23 9.39 10.87 17.34 25.99 28.87 31.53 34.81 37.16
19 6.84 7.63 8.91 10.12 11.65 18.34 27.20 30.14 32.85 36.19 38.58
20 7.43 8.26 9.59 10.85 12.44 19.34 28.41 31.41 34.17 37.57 40.00
21 8.03 8.90 10.28 11.59 13.24 20.34 29.62 32.67 35.48 38.93 41.40
22 8.64 9.54 10.98 12.34 14.04 21.34 30.81 33.92 36.78 40.29 42.80
23 9.26 10.20 11.69 13.09 14.85 22.34 32.01 35.17 38.08 41.64 44.18
24 9.89 10.86 12.40 13.85 15.66 23.34 33.20 36.42 39.36 42.98 45.56
25 10.52 11.52 13.12 14.61 16.47 24.34 34.28 37.65 40.65 44.31 46.93
26 11.16 12.20 13.84 15.38 17.29 25.34 35.56 38.89 41.92 45.64 48.29
27 11.81 12.88 14.57 16.15 18.11 26.34 36.74 40.11 43.19 46.96 49.65
28 12.46 13.57 15.31 16.93 18.94 27.34 37.92 41.34 44.46 48.28 50.99
29 13.12 14.26 16.05 17.71 19.77 28.34 39.09 42.56 45.72 49.59 52.34
30 13.79 14.95 16.79 18.49 20.60 29.34 40.26 43.77 46.98 50.89 53.67
40 20.71 22.16 24.43 26.51 29.05 39.34 51.81 55.76 59.34 63.69 66.77
50 27.99 29.71 32.36 34.76 37.69 49.33 63.17 67.50 71.42 76.15 79.49
60 35.53 37.48 40.48 43.19 46.46 59.33 74.40 79.08 83.30 88.38 91.95
70 43.28 45.44 48.76 51.74 55.33 69.33 85.53 90.53 95.02 100.42 104.22
80 51.17 53.54 57.15 60.39 64.28 79.33 96.58 101.88 106.63 112.33 116.32
90 59.20 61.75 65.65 69.13 73.29 89.33 107.57 113.14 118.14 124.12 128.30
100 67.33 70.06 74.22 77.93 82.36 99.33 118.50 124.34 129.56 135.81 140.17

n = degrees of freedom.
Table D.4 Percentiles of F Distributions
95th Percentiles of Fn,m Distributions
Degrees of freedom for the numerator n
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
Degrees of freedom for the denominator m

1 161.4 199.5 215.7 224.6 230.2 234.0 236.8 238.9 240.5 241.9 243.9 245.9 248.0 249.1 250.1 251.1 252.2 253.3 254.3
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.41 19.43 19.45 19.45 19.46 19.47 19.48 19.49 19.50
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66 8.64 8.62 8.59 8.57 8.55 8.53
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80 5.77 5.75 5.72 5.69 5.66 5.63
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56 4.53 4.50 4.46 4.43 4.40 4.36
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87 3.84 3.81 3.77 3.74 3.70 3.67
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44 3.41 3.38 3.34 3.30 3.27 3.23
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15 3.12 3.08 3.04 3.01 2.97 2.93
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94 2.90 2.86 2.83 2.79 2.75 2.71
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85 2.77 2.74 2.70 2.66 2.62 2.58 2.54
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72 2.65 2.61 2.57 2.53 2.49 2.45 2.40
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54 2.51 2.47 2.43 2.38 2.34 2.30
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53 2.46 2.42 2.38 2.34 2.30 2.25 2.21
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46 2.39 2.35 2.31 2.27 2.22 2.18 2.13
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33 2.29 2.25 2.20 2.16 2.11 2.07
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.35 2.28 2.24 2.19 2.15 2.11 2.06 2.01
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.31 2.23 2.19 2.15 2.10 2.06 2.01 1.96
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.27 2.19 2.15 2.11 2.06 2.02 1.97 1.92

19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.23 2.16 2.11 2.07 2.03 1.98 1.93 1.88
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12 2.08 2.04 1.99 1.95 1.90 1.84
21 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 2.32 2.25 2.18 2.10 2.05 2.01 1.96 1.92 1.87 1.81
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.23 2.15 2.07 2.03 1.98 1.94 1.89 1.84 1.78
23 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 2.27 2.20 2.13 2.05 2.01 1.96 1.91 1.86 1.81 1.76
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.11 2.03 1.98 1.94 1.89 1.84 1.79 1.73
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.16 2.09 2.01 1.96 1.92 1.87 1.82 1.77 1.71
26 4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27 2.22 2.15 2.07 1.99 1.95 1.90 1.85 1.80 1.75 1.69
27 4.21 3.35 2.96 2.73 2.57 2.46 2.37 2.31 2.25 2.20 2.13 2.06 1.97 1.93 1.88 1.84 1.79 1.73 1.67
28 4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24 2.19 2.12 2.04 1.96 1.91 1.87 1.82 1.77 1.71 1.65
29 4.18 3.33 2.93 2.70 2.55 2.43 2.35 2.28 2.22 2.18 2.10 2.03 1.94 1.90 1.85 1.81 1.75 1.70 1.64
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93 1.89 1.84 1.79 1.74 1.68 1.62
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84 1.79 1.74 1.69 1.64 1.58 1.51
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75 1.70 1.65 1.59 1.53 1.47 1.39
120 3.92 3.07 2.68 2.45 2.29 2.17 2.09 2.02 1.96 1.91 1.83 1.75 1.66 1.61 1.55 1.50 1.43 1.35 1.25
∞ 3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.67 1.57 1.52 1.46 1.39 1.32 1.22 1.00
Table D.4 (Continued )
90th Percentiles of F Distributions
Degrees of freedom for the numerator n
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
Degrees of freedom for the denominator m
1 39.86 49.50 53.59 55.83 57.24 58.20 58.91 59.44 59.86 60.19 60.71 61.22 61.74 62.00 62.26 62.53 62.79 63.06 63.33

2 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38 9.39 9.41 9.42 9.44 9.45 9.46 9.47 9.47 9.48 9.49
3 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24 5.23 5.22 5.20 5.18 5.18 5.17 5.16 5.15 5.14 5.13
4 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94 3.92 3.90 3.87 3.84 3.83 3.82 3.80 3.79 3.78 3.76
5 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32 3.30 3.27 3.24 3.21 3.19 3.17 3.16 3.14 3.12 3.10
6 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96 2.94 2.90 2.87 2.84 2.82 2.80 2.78 2.76 2.74 2.72
7 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72 2.70 2.67 2.63 2.59 2.58 2.56 2.54 2.51 2.49 2.47
8 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56 2.54 2.50 2.46 2.42 2.40 2.38 2.36 2.34 2.32 2.29
9 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 2.42 2.38 2.34 2.30 2.28 2.25 2.23 2.21 2.18 2.16
10 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35 2.32 2.28 2.24 2.20 2.18 2.16 2.13 2.11 2.08 2.06
11 3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27 2.25 2.21 2.17 2.12 2.10 2.08 2.05 2.03 2.00 1.97
12 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21 2.19 2.15 2.10 2.06 2.04 2.01 1.99 1.96 1.93 1.90
13 3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16 2.14 2.10 2.05 2.01 1.98 1.96 1.93 1.90 1.88 1.85
14 3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12 2.10 2.05 2.01 1.96 1.94 1.91 1.89 1.86 1.83 1.80
15 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09 2.06 2.02 1.97 1.92 1.90 1.87 1.85 1.82 1.79 1.76
16 3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06 2.03 1.99 1.94 1.89 1.87 1.84 1.81 1.78 1.75 1.72
17 3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03 2.00 1.96 1.91 1.86 1.84 1.81 1.78 1.75 1.72 1.69

18 3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00 1.98 1.93 1.89 1.84 1.81 1.78 1.75 1.72 1.69 1.66
19 2.99 2.61 2.40 2.27 2.18 2.11 2.06 2.02 1.98 1.96 1.91 1.86 1.81 1.79 1.76 1.73 1.70 1.67 1.63
20 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96 1.94 1.89 1.84 1.79 1.77 1.74 1.71 1.68 1.64 1.61
21 2.96 2.57 2.36 2.23 2.14 2.08 2.02 1.98 1.95 1.92 1.87 1.83 1.78 1.75 1.72 1.69 1.66 1.62 1.59
22 2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93 1.90 1.86 1.81 1.76 1.73 1.70 1.67 1.64 1.60 1.57
23 2.94 2.55 2.34 2.21 2.11 2.05 1.99 1.95 1.92 1.89 1.84 1.80 1.74 1.72 1.69 1.66 1.62 1.59 1.55
24 2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91 1.88 1.83 1.78 1.73 1.70 1.67 1.64 1.61 1.57 1.53
25 2.92 2.53 2.32 2.18 2.09 2.02 1.97 1.93 1.89 1.87 1.82 1.77 1.72 1.69 1.66 1.63 1.59 1.56 1.52
26 2.91 2.52 2.31 2.17 2.08 2.01 1.96 1.92 1.88 1.86 1.81 1.76 1.71 1.68 1.65 1.61 1.58 1.54 1.50
27 2.90 2.51 2.30 2.17 2.07 2.00 1.95 1.91 1.87 1.85 1.80 1.75 1.70 1.67 1.64 1.60 1.57 1.53 1.49
28 2.89 2.50 2.29 2.16 2.06 2.00 1.94 1.90 1.87 1.84 1.79 1.74 1.69 1.66 1.63 1.59 1.56 1.52 1.48
29 2.89 2.50 2.28 2.15 2.06 1.99 1.93 1.89 1.86 1.83 1.78 1.73 1.68 1.65 1.62 1.58 1.55 1.51 1.47
30 2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85 1.82 1.77 1.72 1.67 1.64 1.61 1.57 1.54 1.50 1.46
40 2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79 1.76 1.71 1.66 1.61 1.57 1.54 1.51 1.47 1.42 1.38
60 2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74 1.71 1.66 1.60 1.54 1.51 1.48 1.44 1.40 1.35 1.29
120 2.75 2.35 2.13 1.99 1.90 1.82 1.77 1.72 1.68 1.65 1.60 1.55 1.48 1.45 1.41 1.37 1.32 1.26 1.19
∞ 2.71 2.30 2.08 1.94 1.85 1.77 1.72 1.67 1.63 1.60 1.55 1.49 1.42 1.38 1.34 1.30 1.24 1.17 1.00
Table D.4 (Continued )
99th Percentiles of F Distributions
Degrees of freedom for the numerator n
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
Degrees of freedom for the denominator m

1 4052 4999.5 5403 5625 5764 5859 5928 5982 6022 6056 6106 6157 6209 6235 6261 6287 6313 6339 6366
2 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39 99.40 99.42 99.43 99.45 99.46 99.47 99.47 99.48 99.49 99.50
3 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35 27.23 27.05 26.87 26.69 26.60 26.50 26.41 26.32 26.22 26.13
4 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66 14.55 14.37 14.20 14.02 13.93 13.84 13.75 13.65 13.56 13.46
5 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16 10.05 9.89 9.72 9.55 9.47 9.38 9.29 9.20 9.11 9.02
6 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.72 7.56 7.40 7.31 7.23 7.14 7.06 6.97 6.88
7 12.25 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.47 6.31 6.16 6.07 5.99 5.91 5.82 5.74 5.65
8 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.67 5.52 5.36 5.28 5.20 5.12 5.03 4.95 4.86
9 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.11 4.96 4.81 4.73 4.65 4.57 4.48 4.40 4.31
10 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.71 4.56 4.41 4.33 4.25 4.17 4.08 4.00 3.91
11 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 4.54 4.40 4.25 4.10 4.02 3.94 3.86 3.78 3.69 3.60
12 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.16 4.01 3.86 3.78 3.70 3.62 3.54 3.45 3.36
13 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 4.10 3.96 3.82 3.66 3.59 3.51 3.43 3.34 3.25 3.17
14 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 3.94 3.80 3.66 3.51 3.43 3.35 3.27 3.18 3.09 3.00
15 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.67 3.52 3.37 3.29 3.21 3.13 3.05 2.96 2.87
16 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 3.69 3.55 3.41 3.26 3.18 3.10 3.02 2.93 2.84 2.75

17 8.40 6.11 5.18 4.67 4.34 4.10 3.93 3.79 3.68 3.59 3.46 3.31 3.16 3.08 3.00 2.92 2.83 2.75 2.65
18 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 3.51 3.37 3.23 3.08 3.00 2.92 2.84 2.75 2.66 2.57
19 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52 3.43 3.30 3.15 3.00 2.92 2.84 2.76 2.67 2.58 2.49
20 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.23 3.09 2.94 2.86 2.78 2.69 2.61 2.52 2.42
21 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40 3.31 3.17 3.03 2.88 2.80 2.72 2.64 2.55 2.46 2.36
22 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 3.26 3.12 2.98 2.83 2.75 2.67 2.58 2.50 2.40 2.31
23 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30 3.21 3.07 2.93 2.78 2.70 2.62 2.54 2.45 2.35 2.26
24 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.17 3.03 2.89 2.74 2.66 2.58 2.49 2.40 2.31 2.21
25 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22 3.13 2.99 2.85 2.70 2.62 2.54 2.45 2.36 2.27 2.17
26 7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18 3.09 2.96 2.81 2.66 2.58 2.50 2.42 2.33 2.23 2.13
27 7.68 5.49 4.60 4.11 3.78 3.56 3.39 3.26 3.15 3.06 2.93 2.78 2.63 2.55 2.47 2.38 2.29 2.20 2.10
28 7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12 3.03 2.90 2.75 2.60 2.52 2.44 2.35 2.26 2.17 2.06
29 7.60 5.42 4.54 4.04 3.73 3.50 3.33 3.20 3.09 3.00 2.87 2.73 2.57 2.49 2.41 2.33 2.23 2.14 2.03
30 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.84 2.70 2.55 2.47 2.39 2.30 2.21 2.11 2.01
40 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.66 2.52 2.37 2.29 2.20 2.11 2.02 1.92 1.80
60 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.50 2.35 2.20 2.12 2.03 1.94 1.84 1.73 1.60
120 6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.34 2.19 2.03 1.95 1.86 1.76 1.66 1.53 1.38
∞ 6.63 4.61 3.78 3.32 3.02 2.80 2.64 2.51 2.41 2.32 2.18 2.04 1.88 1.79 1.70 1.59 1.47 1.32 1.00
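The percentiles tabulated above can be checked numerically. The sketch below is an illustration added here, not part of the book, and the function names are mine: it evaluates the F distribution function through the regularized incomplete beta function (computed with composite Simpson's rule, adequate when both degrees of freedom are at least 2) and inverts it by bisection.

```python
import math

def f_cdf(x, dfn, dfd):
    """P(F <= x) for an F random variable with dfn numerator and dfd
    denominator degrees of freedom, via the regularized incomplete
    beta function I_t(dfn/2, dfd/2), where t = dfn*x/(dfn*x + dfd).
    Simpson's rule is used for the integral; assumes dfn, dfd >= 2."""
    if x <= 0.0:
        return 0.0
    a, b = dfn / 2.0, dfd / 2.0
    t = dfn * x / (dfn * x + dfd)
    n = 2000                               # even number of subintervals
    h = t / n

    def g(u):
        return u ** (a - 1.0) * (1.0 - u) ** (b - 1.0)

    s = g(0.0) + g(t)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * g(k * h)
    integral = s * h / 3.0
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return integral / beta

def f_ppf(q, dfn, dfd):
    """100q-th percentile of the F distribution, found by bisection
    on the monotone distribution function above."""
    hi = 1.0
    while f_cdf(hi, dfn, dfd) < q:         # bracket the percentile
        hi *= 2.0
    lo = 0.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if f_cdf(mid, dfn, dfd) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For instance, f_ppf(0.99, 5, 10) returns approximately 5.636, matching the table entry 5.64 in the row m = 10, column n = 5.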
Tables 731

Table D.5 Binomial Distribution Function


Data in the table are the values of P {Bin(n, p) ≤ i}, where Bin(n, p) is a binomial random variable with parameters
n and p. For values of p > 0.5, use the identity P {Bin(n, p) ≤ i} = 1 − P {Bin(n, 1 − p) ≤ n − i − 1}.
p
n i 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50
2 0 0.9025 0.8100 0.7225 0.6400 0.5625 0.4900 0.4225 0.3600 0.3025 0.2500
1 0.9975 0.9900 0.9775 0.9600 0.9375 0.9100 0.8775 0.8400 0.7975 0.7500
3 0 0.8574 0.7290 0.6141 0.5120 0.4219 0.3430 0.2746 0.2160 0.1664 0.1250
1 0.9928 0.9720 0.9392 0.8960 0.8438 0.7840 0.7182 0.6480 0.5748 0.5000
2 0.9999 0.9990 0.9966 0.9920 0.9844 0.9730 0.9571 0.9360 0.9089 0.8750
4 0 0.8145 0.6561 0.5220 0.4096 0.3164 0.2401 0.1785 0.1296 0.0915 0.0625
1 0.9860 0.9477 0.8905 0.8192 0.7383 0.6517 0.5630 0.4752 0.3910 0.3125
2 0.9995 0.9963 0.9880 0.9728 0.9492 0.9163 0.8735 0.8208 0.7585 0.6875
3 1.0000 0.9999 0.9995 0.9984 0.9961 0.9919 0.9850 0.9744 0.9590 0.9375
5 0 0.7738 0.5905 0.4437 0.3277 0.2373 0.1681 0.1160 0.0778 0.0503 0.0312
1 0.9774 0.9185 0.8352 0.7373 0.6328 0.5282 0.4284 0.3370 0.2562 0.1875
2 0.9988 0.9914 0.9734 0.9421 0.8965 0.8369 0.7648 0.6826 0.5931 0.5000
3 1.0000 0.9995 0.9978 0.9933 0.9844 0.9692 0.9460 0.9130 0.8688 0.8125
4 1.0000 1.0000 0.9999 0.9997 0.9990 0.9976 0.9947 0.9898 0.9815 0.9688
6 0 0.7351 0.5314 0.3771 0.2621 0.1780 0.1176 0.0754 0.0467 0.0277 0.0156
1 0.9672 0.8857 0.7765 0.6554 0.5339 0.4202 0.3191 0.2333 0.1636 0.1094
2 0.9978 0.9842 0.9527 0.9011 0.8306 0.7443 0.6471 0.5443 0.4415 0.3438
3 0.9999 0.9987 0.9941 0.9830 0.9624 0.9295 0.8826 0.8208 0.7447 0.6562
4 1.0000 0.9999 0.9996 0.9984 0.9954 0.9891 0.9777 0.9590 0.9308 0.8906
5 1.0000 1.0000 1.0000 0.9999 0.9998 0.9993 0.9982 0.9959 0.9917 0.9844
7 0 0.6983 0.4783 0.3206 0.2097 0.1335 0.0824 0.0490 0.0280 0.0152 0.0078
1 0.9556 0.8503 0.7166 0.5767 0.4449 0.3294 0.2338 0.1586 0.1024 0.0625
2 0.9962 0.9743 0.9262 0.8520 0.7564 0.6471 0.5323 0.4199 0.3164 0.2266
3 0.9998 0.9973 0.9879 0.9667 0.9294 0.8740 0.8002 0.7102 0.6083 0.5000
4 1.0000 0.9998 0.9988 0.9953 0.9871 0.9712 0.9444 0.9037 0.8471 0.7734
5 1.0000 1.0000 0.9999 0.9996 0.9987 0.9962 0.9910 0.9812 0.9643 0.9375
6 1.0000 1.0000 1.0000 1.0000 0.9999 0.9998 0.9994 0.9984 0.9963 0.9922
8 0 0.6634 0.4305 0.2725 0.1678 0.1001 0.0576 0.0319 0.0168 0.0084 0.0039
1 0.9428 0.8131 0.6572 0.5033 0.3671 0.2553 0.1691 0.1064 0.0632 0.0352
2 0.9942 0.9619 0.8948 0.7969 0.6785 0.5518 0.4278 0.3154 0.2201 0.1445
3 0.9996 0.9950 0.9786 0.9437 0.8862 0.8059 0.7064 0.5941 0.4770 0.3633
4 1.0000 0.9996 0.9971 0.9896 0.9727 0.9420 0.8939 0.8263 0.7396 0.6367
5 1.0000 1.0000 0.9998 0.9988 0.9958 0.9887 0.9747 0.9502 0.9115 0.8555
6 1.0000 1.0000 1.0000 0.9999 0.9996 0.9987 0.9964 0.9915 0.9819 0.9648

7 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9998 0.9993 0.9983 0.9961
9 0 0.6302 0.3874 0.2316 0.1342 0.0751 0.0404 0.0207 0.0101 0.0046 0.0020
1 0.9288 0.7748 0.5995 0.4362 0.3003 0.1960 0.1211 0.0705 0.0385 0.0195
2 0.9916 0.9470 0.8591 0.7382 0.6007 0.4628 0.3373 0.2318 0.1495 0.0898
3 0.9994 0.9917 0.9661 0.9144 0.8343 0.7297 0.6089 0.4826 0.3614 0.2539
4 1.0000 0.9991 0.9944 0.9804 0.9511 0.9012 0.8283 0.7334 0.6214 0.5000
5 1.0000 0.9999 0.9994 0.9969 0.9900 0.9747 0.9464 0.9006 0.8342 0.7461
6 1.0000 1.0000 1.0000 0.9997 0.9987 0.9957 0.9888 0.9750 0.9502 0.9102
7 1.0000 1.0000 1.0000 1.0000 0.9999 0.9996 0.9986 0.9962 0.9909 0.9805
8 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9992 0.9980
10 0 0.5987 0.3487 0.1969 0.1074 0.0563 0.0282 0.0135 0.0060 0.0025 0.0010
1 0.9139 0.7361 0.5443 0.3758 0.2440 0.1493 0.0860 0.0464 0.0232 0.0107
2 0.9885 0.9298 0.8202 0.6778 0.5256 0.3828 0.2616 0.1673 0.0996 0.0547
3 0.9990 0.9872 0.9500 0.8791 0.7759 0.6496 0.5138 0.3823 0.2660 0.1719
4 0.9999 0.9984 0.9901 0.9672 0.9219 0.8497 0.7515 0.6331 0.5044 0.3770
5 1.0000 0.9999 0.9986 0.9936 0.9803 0.9527 0.9051 0.8338 0.7384 0.6230
6 1.0000 1.0000 0.9999 0.9991 0.9965 0.9894 0.9740 0.9452 0.8980 0.8281
7 1.0000 1.0000 1.0000 0.9999 0.9996 0.9984 0.9952 0.9877 0.9726 0.9453
8 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9983 0.9955 0.9893
9 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9990
11 0 0.5688 0.3138 0.1673 0.0859 0.0422 0.0198 0.0088 0.0036 0.0014 0.0005
1 0.8981 0.6974 0.4922 0.3221 0.1971 0.1130 0.0606 0.0302 0.0139 0.0059
2 0.9848 0.9104 0.7788 0.6174 0.4552 0.3127 0.2001 0.1189 0.0652 0.0327
3 0.9984 0.9815 0.9306 0.8389 0.7133 0.5696 0.4256 0.2963 0.1911 0.1133
4 0.9999 0.9972 0.9841 0.9496 0.8854 0.7897 0.6683 0.5328 0.3971 0.2744
5 1.0000 0.9997 0.9973 0.9883 0.9657 0.9218 0.8513 0.7535 0.6331 0.5000
6 1.0000 1.0000 0.9997 0.9980 0.9924 0.9784 0.9499 0.9006 0.8262 0.7256
7 1.0000 1.0000 1.0000 0.9998 0.9988 0.9957 0.9878 0.9707 0.9390 0.8867
8 1.0000 1.0000 1.0000 1.0000 0.9999 0.9994 0.9980 0.9941 0.9852 0.9673
9 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998 0.9993 0.9978 0.9941
10 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998 0.9995
12 0 0.5404 0.2824 0.1422 0.0687 0.0317 0.0138 0.0057 0.0022 0.0008 0.0002
1 0.8816 0.6590 0.4435 0.2749 0.1584 0.0850 0.0424 0.0196 0.0083 0.0032
2 0.9804 0.8891 0.7358 0.5583 0.3907 0.2528 0.1513 0.0834 0.0421 0.0193
3 0.9978 0.9744 0.9078 0.7946 0.6488 0.4925 0.3467 0.2253 0.1345 0.0730
4 0.9998 0.9957 0.9761 0.9274 0.8424 0.7237 0.5833 0.4382 0.3044 0.1938
5 1.0000 0.9995 0.9954 0.9806 0.9456 0.8822 0.7873 0.6652 0.5269 0.3872
6 1.0000 0.9999 0.9993 0.9961 0.9857 0.9614 0.9154 0.8418 0.7393 0.6128
7 1.0000 1.0000 0.9999 0.9994 0.9972 0.9905 0.9745 0.9427 0.8883 0.8062
8 1.0000 1.0000 1.0000 0.9999 0.9996 0.9983 0.9944 0.9847 0.9644 0.9270
9 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998 0.9992 0.9972 0.9921 0.9807
10 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9989 0.9968
11 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9998
13 0 0.5133 0.2542 0.1209 0.0550 0.0238 0.0097 0.0037 0.0013 0.0004 0.0001
1 0.8646 0.6213 0.3983 0.2336 0.1267 0.0637 0.0296 0.0126 0.0049 0.0017
2 0.9755 0.8661 0.6920 0.5017 0.3326 0.2025 0.1132 0.0579 0.0269 0.0112
3 0.9969 0.9658 0.8820 0.7437 0.5843 0.4206 0.2783 0.1686 0.0929 0.0461
4 0.9997 0.9935 0.9658 0.9009 0.7940 0.6543 0.5005 0.3530 0.2279 0.1334
5 1.0000 0.9991 0.9925 0.9700 0.9198 0.8346 0.7159 0.5744 0.4268 0.2905
6 1.0000 0.9999 0.9987 0.9930 0.9757 0.9376 0.8705 0.7712 0.6437 0.5000
7 1.0000 1.0000 0.9998 0.9988 0.9944 0.9818 0.9538 0.9023 0.8212 0.7095
8 1.0000 1.0000 1.0000 0.9998 0.9990 0.9960 0.9874 0.9679 0.9302 0.8666
9 1.0000 1.0000 1.0000 1.0000 0.9999 0.9993 0.9975 0.9922 0.9797 0.9539
10 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9987 0.9959 0.9888
11 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9983
12 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999
14 0 0.4877 0.2288 0.1028 0.0440 0.0178 0.0068 0.0024 0.0008 0.0002 0.0001
1 0.8470 0.5846 0.3567 0.1979 0.1010 0.0475 0.0205 0.0081 0.0029 0.0009
2 0.9699 0.8416 0.6479 0.4481 0.2811 0.1608 0.0839 0.0398 0.0170 0.0065
3 0.9958 0.9559 0.8535 0.6982 0.5213 0.3552 0.2205 0.1243 0.0632 0.0287
4 0.9996 0.9908 0.9533 0.8702 0.7415 0.5842 0.4227 0.2793 0.1672 0.0898
5 1.0000 0.9985 0.9885 0.9561 0.8883 0.7805 0.6405 0.4859 0.3373 0.2120
6 1.0000 0.9998 0.9978 0.9884 0.9617 0.9067 0.8164 0.6925 0.5461 0.3953
7 1.0000 1.0000 0.9997 0.9976 0.9897 0.9685 0.9247 0.8499 0.7414 0.6074
8 1.0000 1.0000 1.0000 0.9996 0.9978 0.9917 0.9757 0.9417 0.8811 0.7880
9 1.0000 1.0000 1.0000 1.0000 0.9997 0.9983 0.9940 0.9825 0.9574 0.9102
10 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998 0.9989 0.9961 0.9886 0.9713
11 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9994 0.9978 0.9935
12 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9991
13 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999
15 0 0.4633 0.2059 0.0874 0.0352 0.0134 0.0047 0.0016 0.0005 0.0001 0.0000
1 0.8290 0.5490 0.3186 0.1671 0.0802 0.0353 0.0142 0.0052 0.0017 0.0005
2 0.9638 0.8159 0.6042 0.3980 0.2361 0.1268 0.0617 0.0271 0.0107 0.0037
3 0.9945 0.9444 0.8227 0.6482 0.4613 0.2969 0.1727 0.0905 0.0424 0.0176
4 0.9994 0.9873 0.9383 0.8358 0.6865 0.5155 0.3519 0.2173 0.1204 0.0592

5 0.9999 0.9978 0.9832 0.9389 0.8516 0.7216 0.5643 0.4032 0.2608 0.1509
6 1.0000 0.9997 0.9964 0.9819 0.9434 0.8689 0.7548 0.6098 0.4522 0.3036
7 1.0000 1.0000 0.9996 0.9958 0.9827 0.9500 0.8868 0.7869 0.6535 0.5000
8 1.0000 1.0000 0.9999 0.9992 0.9958 0.9848 0.9578 0.9050 0.8182 0.6964
9 1.0000 1.0000 1.0000 0.9999 0.9992 0.9963 0.9876 0.9662 0.9231 0.8491
10 1.0000 1.0000 1.0000 1.0000 0.9999 0.9993 0.9972 0.9907 0.9745 0.9408
11 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9981 0.9937 0.9824
12 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9989 0.9963
13 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
16 0 0.4401 0.1853 0.0743 0.0281 0.0100 0.0033 0.0010 0.0003 0.0001 0.0000
1 0.8108 0.5147 0.2839 0.1407 0.0635 0.0261 0.0098 0.0033 0.0010 0.0003
2 0.9571 0.7892 0.5614 0.3518 0.1971 0.0994 0.0451 0.0183 0.0066 0.0021
3 0.9930 0.9316 0.7899 0.5981 0.4050 0.2459 0.1339 0.0651 0.0281 0.0106
4 0.9991 0.9830 0.9209 0.7982 0.6302 0.4499 0.2892 0.1666 0.0853 0.0384
5 0.9999 0.9967 0.9765 0.9183 0.8103 0.6598 0.4900 0.3288 0.1976 0.1051
6 1.0000 0.9995 0.9944 0.9733 0.9204 0.8247 0.6881 0.5272 0.3660 0.2272
7 1.0000 0.9999 0.9989 0.9930 0.9729 0.9256 0.8406 0.7161 0.5629 0.4018
8 1.0000 1.0000 0.9998 0.9985 0.9925 0.9743 0.9329 0.8577 0.7441 0.5982
9 1.0000 1.0000 1.0000 0.9998 0.9984 0.9929 0.9771 0.9417 0.8759 0.7728
10 1.0000 1.0000 1.0000 1.0000 0.9997 0.9984 0.9938 0.9809 0.9514 0.8949
11 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9987 0.9951 0.9851 0.9616
12 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998 0.9991 0.9965 0.9894
13 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9994 0.9979
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997
15 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
17 0 0.4181 0.1668 0.0631 0.0225 0.0075 0.0023 0.0007 0.0002 0.0000 0.0000
1 0.7922 0.4818 0.2525 0.1182 0.0501 0.0193 0.0067 0.0021 0.0006 0.0001
2 0.9497 0.7618 0.5198 0.3096 0.1637 0.0774 0.0327 0.0123 0.0041 0.0012
3 0.9912 0.9174 0.7556 0.5489 0.3530 0.2019 0.1028 0.0464 0.0184 0.0063
4 0.9988 0.9779 0.9013 0.7582 0.5739 0.3887 0.2348 0.1260 0.0596 0.0245
5 0.9999 0.9953 0.9681 0.8943 0.7653 0.5968 0.4197 0.2639 0.1471 0.0717
6 1.0000 0.9992 0.9917 0.9623 0.8929 0.7752 0.6188 0.4478 0.2902 0.1662
7 1.0000 0.9999 0.9983 0.9891 0.9598 0.8954 0.7872 0.6405 0.4743 0.3145
8 1.0000 1.0000 0.9997 0.9974 0.9876 0.9597 0.9006 0.8011 0.6626 0.5000
9 1.0000 1.0000 1.0000 0.9995 0.9969 0.9873 0.9617 0.9081 0.8166 0.6855
10 1.0000 1.0000 1.0000 0.9999 0.9994 0.9968 0.9880 0.9652 0.9174 0.8338
11 1.0000 1.0000 1.0000 1.0000 0.9999 0.9993 0.9970 0.9894 0.9699 0.9283
12 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9994 0.9975 0.9914 0.9755
13 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9981 0.9936
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9988
15 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999
16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
18 0 0.3972 0.1501 0.0536 0.0180 0.0056 0.0016 0.0004 0.0001 0.0000 0.0000
1 0.7735 0.4503 0.2241 0.0991 0.0395 0.0142 0.0046 0.0013 0.0003 0.0001
2 0.9419 0.7338 0.4797 0.2713 0.1353 0.0600 0.0236 0.0082 0.0025 0.0007
3 0.9891 0.9018 0.7202 0.5010 0.3057 0.1646 0.0783 0.0328 0.0120 0.0038
4 0.9985 0.9718 0.8794 0.7164 0.5187 0.3327 0.1886 0.0942 0.0411 0.0154
5 0.9998 0.9936 0.9581 0.8671 0.7175 0.5344 0.3550 0.2088 0.1077 0.0481
6 1.0000 0.9988 0.9882 0.9487 0.8610 0.7217 0.5491 0.3743 0.2258 0.1189
7 1.0000 0.9998 0.9973 0.9837 0.9431 0.8593 0.7283 0.5634 0.3915 0.2403
8 1.0000 1.0000 0.9995 0.9957 0.9807 0.9404 0.8609 0.7368 0.5778 0.4073
9 1.0000 1.0000 0.9999 0.9991 0.9946 0.9790 0.9403 0.8653 0.7473 0.5927
10 1.0000 1.0000 1.0000 0.9998 0.9988 0.9939 0.9788 0.9424 0.8720 0.7597
11 1.0000 1.0000 1.0000 1.0000 0.9998 0.9986 0.9938 0.9797 0.9463 0.8811
12 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9986 0.9942 0.9817 0.9519
13 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9987 0.9951 0.9846
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998 0.9990 0.9962
15 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9993
16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999
19 0 0.3774 0.1351 0.0456 0.0144 0.0042 0.0011 0.0003 0.0001 0.0000 0.0000
1 0.7547 0.4203 0.1985 0.0829 0.0310 0.0104 0.0031 0.0008 0.0002 0.0000
2 0.9335 0.7054 0.4413 0.2369 0.1113 0.0462 0.0170 0.0055 0.0015 0.0004
3 0.9868 0.8850 0.6841 0.4551 0.2630 0.1332 0.0591 0.0230 0.0077 0.0022
4 0.9980 0.9648 0.8556 0.6733 0.4654 0.2822 0.1500 0.0696 0.0280 0.0096
5 0.9998 0.9914 0.9463 0.8369 0.6678 0.4739 0.2968 0.1629 0.0777 0.0318
6 1.0000 0.9983 0.9837 0.9324 0.8251 0.6655 0.4812 0.3081 0.1727 0.0835
7 1.0000 0.9997 0.9959 0.9767 0.9225 0.8180 0.6656 0.4878 0.3169 0.1796
8 1.0000 1.0000 0.9992 0.9933 0.9713 0.9161 0.8145 0.6675 0.4940 0.3238
9 1.0000 1.0000 0.9999 0.9984 0.9911 0.9674 0.9125 0.8139 0.6710 0.5000
10 1.0000 1.0000 1.0000 0.9997 0.9977 0.9895 0.9653 0.9115 0.8159 0.6762
11 1.0000 1.0000 1.0000 1.0000 0.9995 0.9972 0.9886 0.9648 0.9129 0.8204
12 1.0000 1.0000 1.0000 1.0000 0.9999 0.9994 0.9969 0.9884 0.9658 0.9165
13 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9993 0.9969 0.9891 0.9682
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9994 0.9972 0.9904

15 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9978
16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9996
17 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
20 0 0.3585 0.1216 0.0388 0.0115 0.0032 0.0008 0.0002 0.0000 0.0000 0.0000
1 0.7358 0.3917 0.1756 0.0692 0.0243 0.0076 0.0021 0.0005 0.0001 0.0000
2 0.9245 0.6769 0.4049 0.2061 0.0913 0.0355 0.0121 0.0036 0.0009 0.0002
3 0.9841 0.8670 0.6477 0.4114 0.2252 0.1071 0.0444 0.0160 0.0049 0.0013
4 0.9974 0.9568 0.8298 0.6296 0.4148 0.2375 0.1182 0.0510 0.0189 0.0059
5 0.9997 0.9887 0.9327 0.8042 0.6172 0.4164 0.2454 0.1256 0.0553 0.0207
6 1.0000 0.9976 0.9781 0.9133 0.7858 0.6080 0.4166 0.2500 0.1299 0.0577
7 1.0000 0.9996 0.9941 0.9679 0.8982 0.7723 0.6010 0.4159 0.2520 0.1316
8 1.0000 0.9999 0.9987 0.9900 0.9591 0.8867 0.7624 0.5956 0.4143 0.2517
9 1.0000 1.0000 0.9998 0.9974 0.9861 0.9520 0.8782 0.7553 0.5914 0.4119
10 1.0000 1.0000 1.0000 0.9994 0.9961 0.9829 0.9468 0.8725 0.7507 0.5881
11 1.0000 1.0000 1.0000 0.9999 0.9991 0.9949 0.9804 0.9435 0.8692 0.7483
12 1.0000 1.0000 1.0000 1.0000 0.9998 0.9987 0.9940 0.9790 0.9420 0.8684
13 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9985 0.9935 0.9786 0.9423
14 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9984 0.9936 0.9793
15 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9985 0.9941
16 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9997 0.9987
17 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9998
18 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
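The entries of Table D.5 — and the identity quoted in its caption for handling p > 1/2 — come straight from the binomial probability formula, so they are easy to reproduce. The sketch below is an illustration added here (the function names are mine), using only Python's standard library.

```python
from math import comb

def binom_cdf(i, n, p):
    """P{Bin(n, p) <= i}: sum the binomial probabilities
    C(n, k) p^k (1 - p)^(n - k) over k = 0, 1, ..., i."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i + 1))

def binom_cdf_any_p(i, n, p):
    """Extend the table beyond p = 0.50 with the identity
    P{Bin(n, p) <= i} = 1 - P{Bin(n, 1 - p) <= n - i - 1}."""
    if p <= 0.5:
        return binom_cdf(i, n, p)
    return 1.0 - binom_cdf(n - i - 1, n, 1.0 - p)

print(round(binom_cdf(2, 5, 0.25), 4))   # 0.8965, the n = 5, i = 2, p = 0.25 entry
```

Any tabulated value can be recovered this way; values for p above one-half, which the table omits, come from binom_cdf_any_p.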
APPENDIX E

Programs

Program What It Computes


3-1 Sample mean, sample variance, sample standard deviation
3-2 Sample correlation coefficient
5-1 Binomial probabilities
6-1 Standard normal probability distribution
6-2 Percentiles of the standard normal distribution
8-1 Percentiles of t distributions
8-2 The t distribution probabilities
8-3 Confidence interval estimates and bounds for a mean
9-1 The p value for the t test
10-1 The p value in the two-sample t test
11-1 The p value in one-factor ANOVA
11-2 The p value in two-factor ANOVA
12-1 Statistics in simple linear regression model
12-2 Least-squares estimators in multiple linear regression
13-1 The p value in chi-squared goodness-of-fit test
13-2 The p value in test of independence in a contingency table
14-1 The p value in the signed-rank test
14-2 The p value in the rank-sum test
14-3 The p value in the runs test
A-1 A random subset
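For instance, the quantities produced by Program 3-1 amount to a few lines of arithmetic; the sketch below is an illustrative stand-in (not the book's actual program, whose source is not reproduced here), using the n − 1 divisor for the sample variance as in the text.

```python
from math import sqrt

def sample_statistics(data):
    """Sample mean, sample variance (n - 1 divisor), and sample
    standard deviation of a list of numbers."""
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return mean, variance, sqrt(variance)

mean, var, sd = sample_statistics([1, 2, 3, 4, 5])
# mean = 3.0, variance = 2.5
```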

737
Introductory Statistics. DOI:10.1016/B978-0-12-804317-2.00028-X
Copyright © 2017 Elsevier Inc. All rights reserved.
Index

A
Absolute values of sample correlation coefficients, 576
Addition rule of probabilities, 147
  definition of, 147
  example of, 148
Additive property of normal random variables, 279
  problems for, 281, 283, 288, 293
α test of significance levels
  definition of, 384
  types of tests (summary tables), 398, 408, 439, 447, 455, 474
Alternative hypothesis, 424
  definition of, 424
  establishing, 385
  types of tests (summary tables), 398, 408, 419, 439, 447, 455, 474
Analogous hypothesis, in two-factor ANOVA tests, 489
Analysis of variance (ANOVA)
  column factors in, 489, 500
  double-summation notation, 502
  Fisher, invented by, 490, 513
  grand mean, 500
  introduction to, 489
  invented by, 490
  key terms for, 513
  one-factor, 489, 495, 513
  problems for
    one-factor tests, 495
    review problems, 516, 517
  summary of, 513, 515, 516
  testing hypotheses, 509
  two-factor, 499, 501, 504, 505, 507, 509, 512–514, 518
Analytical Theory of Probability (Simon), 309
Approximation rule
  empirical rule and, 269
  for normal random variables, 267
  normal curves and, 269, 270
Arbuthnot, John, 420
Ars Conjectandi (Bernoulli), 246
Averages, weighted, 69

B
Bacon, Francis, 445, 621, 643
Bar graphs, 19, 21
  and symmetry, 21
  example of, 19
  first use of, by Guerry, 54
  relative frequency, 20
  summary of, 55
Bayes’ theorem, 176, 177, 179
  conditional probability, 180
  definition of, 177
  false-positive results, 178
  problems for, 180
Bell, E.T., 292
Bell-shaped curves, see also Normal curves
Bell-shaped curves, 291, 292
  density curves, 268
  empirical frequencies, 304
Bernoulli, Jacob, 9, 246
Bernoulli, Jacques, 246
  independent trials and, 246
Bernoulli, Jean, 246
Bernoulli, Nicholas, 246
Bimodal data sets, 131
  histograms of, 102
Binomial distributions
  normal approximation to, 315, 365
  of hypergeometric random variables, 249
Binomial parameters, hypothesis test of, 414
Binomial probabilities, 243, 474
  hypothesis tests, 474
Binomial random variables, 240–243, 245
  definition of, 240
  independent trials for, 241
  Poisson random variables, 251–254, 258
  probabilities, with various parameters, 243
  probability formula, 241
  Probability mass functions of, 317
  problems for, 250
  standard deviation of, 315
  with parameters n and p, 257, 258
Biological data sets, 548
  normal distribution of, 549
Biometrics, 477
Birth rates, 35
  histograms for, 36
  history of mortality tables, 9
Blood cholesterol levels, 31, 32
  frequency tables of, 32
Box plot, definition of, 94
Braudel, F., 54
Burt, Cyril, 585

C
Categorical data, 567
Causation, correlation, 115
Center of gravity, 214
Central limit theorem, 303, 305, 307, 309, 324
  approximately normal data sets, 99, 100
  definition of, 303
  error measurement, 304
  for various populations, 309
  historical perspective on, 306
  normal curves, 305
  problems for, 310
  sample mean distribution and, 309
  sample sizes for, 309
Chi-squared density function, 322
Chi-squared distributions, degrees of freedom of, 323
Chi-squared goodness-of-fit tests
  concepts, 588
  definition of, 656
  independence in contingency tables with fixed marginal totals, 608
  independence of two characteristics, 604
    contingency tables, 608
    degrees of freedom, 601
    summary table, 604
  introduction to, 585
  key terms for, 614
  p value of, 591
  review problems, 617
  summary of, 614
  summary table, 594
  testing null hypothesis in, 589
Chi-squared percentiles, 590, 595
Chi-squared random variables, 321, 322, 505, 506, 534, 590, 614
Class boundaries, 31, 32
Class intervals, 32, 34
Coefficient of determination, 557, 576
  definition of, 264, 557
  problems for, 558
Column factors, in ANOVA, 489, 500
Combinations, 188
Comparison rankings, 655, 657
Complement, 142, 143, 145, 156, 169, 172, 191, 193, 279
Complements, 142
Conditional probability, 159, 161, 163, 165, 167, 191
  Bayes’ theorem and, 177
  definition of, 159
  problems for, 169
Confidence
  90, 95, and 99 percent of, 346, 347
  centered intervals, 349
  definition of, 344
  interval estimators, 346, 349, 351
  level percentiles of, 346
Confidence bounds, 351, 360, 369
  for interval estimation of population proportions, 369
  upper and lower, 360, 369
Confidence intervals, lengths of, 377
Constants and properties of
  expected value, 214
  sample variance, 92
Contingency table
  definition of, 600
  independence testing, 599
    with fixed marginal totals, 608
  problems for independence testing, 611
Continuity correction, 316
Continuous distributions, 263
Continuous random variables, 264
  definition of, 264
  probability density function in, 263–265, 267, 268
  problems for, 265
Control charts, 676, 681, 685, 689
  cumulative sum, 689
  EWMA, 686–688
  for fraction defective, 681
  problems for, 672
Control/control group, 4
  Arthur Young (historical perspective), 446
  hypothesis testing, 443, 472
  normal distributions and, 671
  samples as, 445
Correlations, see also Sample correlation coefficient
  associations measured by, 109
  negative, 109
  positive, 108
Counting arguments, 187
Counting principles, 181, 183, 185, 187
  basic, 182, 183
  generalized basic, 183, 184
  notation of, 185
  problems for, 188
Course of Experimental Agriculture (Young), 446
Critical region, definition of, 424
Cumulative relative frequency table, 40
Cumulative sum control charts, 668
  problems for, 672

D
Data
  approximately symmetric, 20, 25, 37
  collection, 3
  detected by histograms, 34
  manipulation of, and scientific fraud, 585
  modern approach to, 2
  paired, 47, 49
  symmetric, 20, 25
Data mining, 400
Data sets
  approximately normal, 99, 100, 102, 105, 106
  bimodal, 101, 102
  biological, 520, 548
  central limit theorem, 548
  central tendencies of, 66, 76, 90
  comparison rankings, 655, 657
  constructing histograms from, 32
  finding sample variance for, 90
  frequency tables and graphs, 18, 19, 21, 23
  grouped data and histograms, 31, 33, 35
  histograms of, 99
  introduction to, 17
  key terms for, 54
  normal, 99, 101, 103
  paired data, 47, 49
  review problems, 58
  skewed, 100
  stem-and-leaf plots, 41, 102
  summarizing, see Statistics
De Mere, Chevaller, on probability, 150
De Moivre, Abraham, normal distribution and, 264, 265, 292
Degrees of freedom
  definition of, 321
  error random variables, 533, 535
  of chi-squared distributions, 322
  of random variables, 355
  one-factor ANOVA, 489, 495
  remarks on F random variable, 492–495
  values of F, 492, 493
Densities
  of sample means, 300
  symmetric about zero, 355, 630, 632
Density curves, 65
Density curves, see also Probability density function
  of uniform random variables, 266
Density percentile, in interval estimations, 356
Dependent variables, 520
Descartes, René, 53
Descriptive statistics, 10
  definition of, 4
Deviations
  definition of, 71
  historical perspective on, 71
  ith deviation, 71, 130
Dice, fair, 167
Discrete random variables
  binomial, 243–245, 248–250, 252, 255, 257, 258
  concepts, 205, 207
  definition of, 206
  expected value, 203, 212, 215, 216
  hypergeometric, 249, 250
  hypergeometric random variables, 249, 250
  key terms for, 256
  Poisson, 251–255, 258
  Poisson random variables, 252
  probability distribution, 207, 222
  problems for, random variables, 232–234, 236
  review problems, 258
  summary of, 256
  variance of, 225, 226, 228, 245, 249, 250, 258
Disjoint events, 142, 143, 145–147, 150, 151, 165
Distributions
  central limit theorem and sample mean, 306
  chi-squared, 321, 322
  continuous, 263
  key terms for, 290
  of sample variance of normal populations, 321
  review problems, 377
  summary of, 324
Doll, R., 65
Doll-Hill study, 66
Double-summation notation, 502
Dummy variables for categorical data, 567

E
Empirical frequencies, bell-shaped curves, 304
Empirical rule, 99–101, 103
  approximation rule, 269
  definition of, 100
  historical perspective on, 104
  normal data sets and, 99
  problems for, 104
Equality testing
  of equality of means
    known variances, 435
    small-sample, with equal unknown variances, 450
    unknown variances and large sample size, 442
  of multiple probability distributions, 467, 469, 471, 473
  of population proportions, 467
Equally likely outcomes, 155
  random selection, 190
Error random variable, 533, 535
  multiple linear regression model, 519, 521
  problems for, 536
Error sum of squares, in two-factor ANOVA tests, 506, 515
Estimated regression line
  definition of, 526
  scatter diagram of, 528
Estimation
  interval estimators of mean, 343, 345, 347, 349, 351
  interval estimators of population proportion, 365
  introduction to, 369
  key terms for, 374
  of population variance, 339
  of probability of sensitive events, 337
  point estimator for population mean, 330
  point estimator of population proportion, 333
  problems for, 332
  review problems, 377
  summary, 375
Estimators
  case studies for, 367
  definition of, 330
  in one-factor ANOVA, 491
  in two-factor ANOVA, 504
  least-square, 521
  point, of population mean, 330
  population variance, 450
  unbiased, definition of, 330
Events, 141, 153, 166, 167, 191
  complements, 143
  definition of, 141
  disjoint or mutually exclusive, 142
  independent, 166, 167
  intersection/union, 141
  null, 142, 191
EWMA control charts, see Exponentially weighted moving-average control charts
Expectation, 228
  definition of, 212
Expected value, 211, 213, 215, 217
  center of gravity analogy, 214
  definition of, 228
  frequency interpretation of probabilities, 212
  of binomial random variables, 245
  of chi-squared random variables, 322
  of hypergeometric random variables, 249, 250
  of Poisson random variables, 251
  of sample means, 299
  of sums, 215–217, 226, 653
  of sums/products using constants, 214
  of zero, 224
  population means and, 300
  problems for, 218
  properties of, 214
Experiment, 140, 141
  definition of, 140
  equally likely outcomes in, 154
  problems for, 143, 149
Exponential bell-shaped curve, 292
Exponential distribution, 309
Exponential random variables, density of average of, 309
Exponentially weighted moving-average control charts, 685
  definition of, 685
  example of, 687
  problems for, 687
  standard, definition of, 685
Exponentially weighted moving-average (EWMA), 685

F
F distribution
  definition of, 492
  degrees of freedom, 492
  example of, 492
Factorial notation (!), 183
Fair dice, 167
False-positive results, Bayes’ theorem, 178
Fermat, Pierre, 150
Finite populations
  problems for, 318
  random variables in, 313
  sampling proportions for, 312, 313, 315, 317
First quartile, 86, 130
  definition of, 84
First-generation hybrids
  crossing, 587
  described, 586
Fisher, Ronald A.
  analysis of Mendel’s data, 588
  ANOVA and, 490, 513
  on sample correlation coefficients, 115
  role, in history of statistics, 10
  significance levels and, 386
  t test and, 409
Fixed margins, 608
Fraction defective
  control charts for, 681
  problems for control charts for, 683
Fraud and data manipulation, 585
Frequency, relative, 149, 160
Frequency, relative, see also Probabilities
Frequency histograms, 32
  example of, 33
  summary of, 55
Frequency interpretation, of expected value, 212
Frequency polygons, 19, 54
  example of, 20
  relative, 20, 22, 36
  summary of, 55
Frequency tables, 17, 19, 21, 23
  constructing, 21
  definition of, 18
  of blood cholesterol levels, 32
  of sick leave, 21
  of symmetric data, 18
  problems for, 25
  sample means and, 68
  summary of, 55
Frequency tables and graphs, 18, 19, 21, 23
Future responses, prediction intervals for, 551

G
Galileo, 150
Galton, Francis
  on frequency of error, 306
  on heredity, 9, 11, 104, 114, 115, 306, 485, 519, 520, 543, 544, 549
  regression and, 9, 104, 519, 520, 543, 544
  role, in history of statistics, 9
Gauss, Karl Friedrich, 9, 292, 566
Gauss, Karl Friedrich and normal curves, 292
Gaussian distribution, 292
Genes, 585
Geometrically weighted moving average, 685
Gini index
Goodness-of-fit tests, 591, 593
Goodness-of-fit tests, see also Chi-squared goodness-of-fit tests
Gosset, W.S.
  on sample correlation coefficients, 115
  role, in history of statistics, 10
  t statistic distributions and, 409
Grand mean, in ANOVA, 500
Graphical plotting, of Edmund Halley, 53
Graphs, 19–21, 23
  bar, 19–21
  frequency polygons, 19
  line, 19
  problems for, 25
  relative frequency, 20, 23
Graunt, John
  life table, 152
  mortality table of, 8
  role, in history of statistics, 7–9
Grouped data, 31, 33, 35
  problems for, 37
Guerry, A.M., bar graphs used by, 54

H
Halley, Edmund, 9
  graphical plotting and, 53
Halley, graphical plotting, 53
Hawthorne effect, 444
  historical perspective, 446
Heredity, Galton, Francis, on, 115
Hill, A.B., 65
Histograms, 31, 33, 35
  constructing, 33
  definition of, 32
  importance of, 33
  of approximately normal data sets, 100
  of bimodal data sets, 102
  of birth rates, 36
  of normal data sets, 99
  of skewed data sets, 100
  Pearson and, 54
  problems for, 37
  summary of, 55
Huyghens, Christian, 77
Huyghens, Ludwig, 77, 150
Hybrid genes, 242
Hypergeometric random variables, 249
  binomial distributions of, 250
  definition of, 249
  expected value and variance of, 245
  problems for, 250
Hypothesis
  alternative, 382
  establishing, 385
  null, see Null hypothesis
  proving, 385, 397
  statistical, 420
Hypothesis testing
  Three Mile Island case, 399, 400
  two binomial probabilities, 474
Index 791

two population tests introduction, randomizing sets in, 471 case studies for, 367
433 review problems, 428 confidence and, 366
analysis of variance, see Analysis review problems for, concerning length of confidence interval, 377
of variance (ANOVA) two populations, 484 problems for, 377
defined, 452 significance levels and, 383 IQ, stem-and-leaf plots for, 47
errors, 384 summary of, 425
first published (historical concerning two populations, 480
perspective), 420 t tests, 401
K
for non-normal distributions, 398 Kruskal-Wallis test, 654
types of (summary tables), 398,
goodness of fit, see Chi-squared 408, 419, 439, 447, 455, 474
goodness-of-fit tests Z test, 389, 424 L
introduction to, 381 Laplace, Pierre Simon, 9
key terms for, 424 Law of frequency of error, 306
concerning two populations, 480
I Least squares method, 519, 565–567,
In control process, 667
linear regression and β equals 575, 576, 579
Independent events, 168
zero, 537 Least-square estimators, 525
any number of, and probability,
mean of normal populations, 355 of regression parameters, 526
168
misinterpreting rejections, 474 Left-end inclusion convention, 31
definition of, 166
nonparametric, see Legendre, Adrien Marie, 566, 567
problems for, 169
Nonparametric hypotheses Levels of significance, see Significance
tests testing in contingency tables, 608
testing in two characteristics of levels
observational studies and, 471 Life table, 152
of binomial parameters, 414 populations, 599
Independent random variables, 226 Line graphs, 19
of population proportion, 413, example of, 20
415, 417 Independent trials
Bernoulli, Jacques, and, 246 summary of, 55
one-sided Linear regression
defined, 397 for binomial random variables,
241 biological data sets, 548
of median, 627 coefficient of determination, 559,
one-sided tests, 397 Independent variables, 520
Inferential statistics, overview of, 4 576
p values, 391
Input variables dummy variables for categorical
of population proportion, 415
definition of, 520 data, 567
pictorial depiction of, 389, 396,
in simple linear regression, 521 estimated regression line, 528,
403, 406
Interquartile range, 131 530, 545
point estimators for, 385
Intersection of events, 191 estimating regression parameters,
population proportion equality
Interval estimators of mean, 343 550
tests, 467, 469, 471, 473
90, 95, 99 percent confidence, Galton and, 543
population proportions, 413, 415,
346, 347 introduction, 520
417
problems for confidence and, 344, 345 key terms for, 575
normal population with known confidence bounds in, 351, 360, linear equation, 574
variance, 379 369 prediction intervals for future
one-sided tests, 349 definition of, 344 responses, 551
paired-sample t tests, 463 for population means, 357 problems for
population proportions, 419, 475 introduction to, 330 β equals zero, 556
small-sample, with unknown of normal populations with analysis of residuals, 562
population variances, 455 known variance, 344, 345 coefficient of determination, 558
t tests, 409 of normal populations with error random variable, 533
two normal populations with unknown variance, 355 estimating parameters, 533
known variances, 439 problems for, 361, 371 multiple, 562
two-factor ANOVA, 511 sample size for, 349 multiple regression model, 569
unknown variance and large t random variable, 355 prediction intervals, 569
sample sizes, 428 Interval estimators of population regression to mean, 580
proposed case studies for, 428 proportion, 365 sample correlation coefficient, 561
simple model, 523 Multiple linear regression model, Normal approximation, binomial
regression to the mean, 543, 544, 563, 565, 567 distribution, 315
546, 548, 550, 576, 578 definition of, 564 Normal curves
residuals, analysis of, 579 problems for, 569 approximate areas under, 270
review problems, 579 Multiplication rule of probability, approximation rule and, 269, 279
simple model for, 521 163, 165, 192 central limit theorem and, 297
summary of, 576 Gauss (historical perspective), 292
Mutually exclusive events, 142
testing β equals zero hypothesis, standard, 269, 291
556 Normal data sets, 99, 101, 103, 133
Linear relationships N definition of, 292
equation for, 520 Naive Bayes, empirical rule and, 99–101, 103
sample correlation coefficient, 131 Natural Inheritance (Galton), 306 historical perspective on, 104
Long-run relative frequency, 146, Negative correlations, in sample problems for, 105
148, 160 correlation coefficients, 107 summary of, 131
Lorenz curve, Normal distribution
Newton, Isaac, 9
Lower confidence bounds biological data sets, 549
Neyman, Jerzy, 10, 386 control and, 671
for interval estimation of means,
350, 373 Nightingale, Florence (historical historical perspective on, 265
perspective), 605 introduction to, 263
for interval estimation of
population proportions, 369 Noise, 345 normal random variables, 267
Lower control limit (LCL), 668 Noise, random, 390 Normal populations
Nonparametric hypotheses tests interval estimators of
with known variance, 343
M comparison rankings, 655, 657
with unknown variance, 355
Mann-Whitney test, 641 definition of, 622
one-sided tests of, 395
Margin of error, 377 equality of multiple probability
problems for distribution in, 323
Marlowe, Christopher, 643 distributions, 652
problems for mean tests of, 393
Mathematical preliminaries, Freedman test, 657 problems for testing two, with
summation of, 516 key terms for, 662 known variances, 439
Mean, see also Population mean; Kruskal-Wallis test, 654 sample variance distribution in,
Sample mean normal distribution tests 321
Mean, 301 compared to, 643 tests, see Hypothesis testing
definition of, 212 permutation tests, 658 Normal probabilities, finding, 277
detecting shifts in, 668 Normal random variable, 267, 273,
problems for
regression to, 543 275, 291
X control charts for detecting equality of multiple probability
distributions, 652 approximation rule for, 269
shifts in, 668 key terms for, 290
unknown mean and variance, 674 permutation tests, 661
percentiles of, 284
Men of Mathematics (Bell), 292 rank-sum test, 662 probabilities associated with, 273,
Mendel, Gregor, 585, 593 runs test for randomness, 646 275
data manipulation, 593 sign test, 662 problems for
Mendenhall, Thomas (historical signed-rank test, 662 additive property, 263
perspective), 642 rank-sum test, 662–664 continuous random variables, 264
Method of least squares, 565–567, review problems, 664 percentiles of, 292
575, 576, 579 probabilities associated with, 276
runs test for randomness, 646
Method of maximum likelihood, 386 review problems, 293
Method of moments, 386 sign test, 626, 630, 631, 662
standard, 273, 275, 291
Modal values, 87 null hypothesis, 626, 629–631, standard deviation of, 277
Mortality tables, 8 662 standardizing, 277
history of, 9 p value, 629 summary of, 291
Mosteller, Frederic, 643 signed-rank test, 630, 631, 636, Null event, 142, 191
Moving average, see Exponentially 637, 662 Null hypothesis
weighted moving average summary of, 662 appropriate, 407
classical testing procedure for, 384 Pearson, Egon, 386 graphs of, 252
definition of, 383 Pearson, Karl parameters, 250
not rejecting, 389 chi-squared goodness-of-fit test probabilities, 252
rejection of, 383 and, 9, 587, 595 problems for, 254
significance levels necessary for histograms used by, 54 Pooled estimator
rejection of, 397 on De Moivre, 265 testing equality of means, 451
statistical test of, 425 on Nightingale, 605 testing population proportions,
testing, in chi-square product-moment correlation 468
goodness-of-fit tests, 590 coefficient of, 115 Population distributions
Numerical science, 9 regression to the mean and, 544 introduction to, 297
role, in history of statistics, 9, 292, probability distributions of
386
O Percent confidence interval estimator,
sample mean, 301
Observational studies, for hypothesis sign test of, 625
344 Population mean
tests, 472 Percentiles
One-factor analysis of variance, 491 confidence bounds for, 351
by conversion to standard normal, definition of, 299
definition of, 490 277
problems for, 496 hypothesis tests for normal, 387
chi-squared, 590, 595 interval estimators for, 357
summary of, 513 confidence levels, 346
summary table, 495 obtaining, 300
definition of, 292
One-sided tests, 425, 461 one-sided tests concerning two,
of normal random variables, 285,
definition of, 397 455
287
problems for, 398 point estimator of, 331
problems for, of normal random
sign tests, 627 problems for point estimation of,
variables, 288
two population tests, 435, 458, 332
sample, 81
461 sample means and, 331
Permutation tests, 658, 659
Outcomes t test for, 401
problems for, 661
definition of, 140 Permutations, 183 Population proportions
equally likely, 154, 155 Philosophical Transactions of the Royal case studies for, 367
problems for equally likely, 157 Society (Arbuthnot), 420 hypothesis tests concerning, 413
sample space and events of Piazzi, Giuseppe, 566 interval estimators of, 365
experiment, 140 Pie charts, 24 one-sided hypothesis and, 472
Outliers, 39, 63 Placebo effect, 3, 404, 434, 444 point estimators for, 333
Placebos, 3 pooled estimators in, 470
Plague, 7, 13 problems for hypothesis tests of,
P Plague and history of statistics, 7, 13 419
P value problems for interval estimators
Playfair, William, pie charts used by,
hypothesis testing, 391 of, 371
53
in population proportion problems for point estimation of,
Poincaré, Henri, 310
hypothesis tests, 414, 468 335
Point estimator
in sign tests, 623 for population proportions, 333 testing equality of, 467
of chi-squared test, 592 hypothesis testing, 385 Population size, sample size and, 314
of signed-rank tests, 632 introduction to, 330 Population standard deviation,
summary of, 425 of population mean, 331, 333 estimating, 302
two-sided tests of, 416 problems for, 335 Population variance
types of tests (summary tables), standard errors and, 331 definition of, 299
398, 408, 419, 439, 447, 455, Poisson, Simeon, 136 hypothesis tests for small sample
474 Poisson random variables, 251, 252 with equal unknown, 450
Paired data, 47, 49 binomial random variables and, interval estimators of normal
paired sample t test, 458 252 populations with known, 343
problems for, 50, 463 definition of, 251 interval estimators of normal
Parallel circuit, 175 expected value and variance of, populations with unknown,
Pascal, 150 245 355
obtaining, 300 definition of, 264 R


pooled estimators, 452 density curves, 266 Random error, 521
problems for estimation of, 341 of sample means from standard Random noise, 390
standard normal distributions of normal populations, 300 Random numbers, 708
equal, 451 of t random variables, 356 Random samples, 448, 476, 612, 613
Populations Probability distribution, discrete Random selection, equally likely
central limit theorem for various, random variables, 206 outcomes, 155
303 Probability mass functions Random variables, 205
definition of, 1 graph of, 301 binomial, 245
densities of sample means of, 300 of binomial random variables, chi-squared, 321
finite, 312 317 continuous, 263
normal, 321 Probability models, overview of, 5 definition of, 204
numerical values associated with, Problem of the points, 150 expected value, 212
298 hypergeometric, 249, 250
Programs
sampling from correctly, 318 independent, 204, 226, 280
Program 10-1, 453, 454
Positive correlations, 108 normal, 267–269
Program 11-1, 494, 514
in sample correlation coefficients, Poisson, 205, 251–255, 258
109 Program 11-2, 508, 510, 516
standard normal, 268
Prediction intervals, 551 Program 12-1, 527, 535, 545,
546, 552, 553, 557, 558, 578 uniform, 265, 266
definition of, 521 variance of, 204, 223, 225, 227,
for future responses, 551 Program 14-1, 632–636, 663
229
problems for, 523 Program 14-2, 642, 643, 646, 664
Random walk model, 431
Probabilities Program 14-3, 647–651, 664
geometric, 363
addition rule for, 147 Program 3-1, 133, 446
linear, 343
as long-run relative frequencies, Program 5-1, 244, 317, 413–415, Randomness, runs test for, 646, 647
146 417, 419, 428, 624, 627, 662
Range, 131
Bayes’ theorem, 176 Program 6-1, 276, 278, 291, 307, Rankings, comparison, 655
binomial, 255, 474 316
Regression fallacy, regression to the
counting principles, 181 Program 6-2, 285, 286, 292 mean and, 575, 576
definition of, 5, 140 Program 8-1, 357, 461 Regression parameters
finding normal, 277 Program 8-2, 357, 404, 407, 408, errors and, 526
for negative x, 274, 291 538 estimating, 525, 527
historical perspective on, 150 Program A-1, 313 Regression to the mean, 543, 545,
key terms for, 191 Proportions, in finite populations, 547, 548, 575
multiplication rule, 163, 165, 192 313 regression fallacy, 548, 576
of binomial random variables, Pure dominance, 242 scatter diagram, 546, 547
240 Pure recessiveness, 242 Relative frequency graphs, 20
of equally likely outcomes, 154 Relative frequency histograms, 32
of sensitive events, 337
of standard normal random
Q Relative frequency polygons, 37
Quality control methods, 489 Residuals, 534
variables, 272
cumulative sum control charts, analysis of, 561
problems for, 180
688 standardized, 562
of sensitive events, 337
exponentially weighted Row factors, in ANOVA, 500
properties of, 146
moving-average control charts, Row sum of squares
review problems, 293
684 definition of, 507
sample proportions, 315
summary of, 691 two-factor ANOVA tests, 507
sample space and events of
X control charts, 682 Runs test for randomness, 647, 649
experiment, 140
standard normal, 273, 275 Shewhart control chart, 671
summary of, 192 Quartiles, interquartile range, 93 S
Probability density function, 264, Quetelet, Adolphe, 54, 605 S control charts, 676
266, 291 on normal data sets, 104 Salaries, stem-and-leaf plots for, 48
Sample 50th percentile, 82, 84, 86, Signed-rank test, 637, 662 t test
131, 132 p value, 632 historical perspective on, 409
Sample correlation coefficient, 131, Significance levels, 383 paired-sample, 458, 459, 461
576 hypothesis tests (summary Test statistic, definition of, 398
Pearson’s product-moment, 115 tables), 398, 408, 419, 439, Tukey, John, stem-and-leaf plots used
Sample mean, 67, 69, 71, 88, 299, 447, 455, 474 by, 54
301 α test, 388
central tendencies described by, Two-factor analysis of variance, 499,
Simon, Pierre, central limit theorem 513
76 and, 309
compared to sample median, 80 estimators, 505
Simple linear regression model,
frequency tables and, 70 scatter diagram and, 522 problems for testing hypotheses
standard deviation of, 302 and, 511
Skewed data, 131
variance of, 303 Standard deviation summary table, 513
Sample median, 75, 77, 88, 130 definition of, 227 Two-sample rank-sum test, 641
as percentile (50th), 82, 132 of binomial random variables, Two-sided tests
central tendencies described by, 249 of p value, 416
76
of normal random variables, 277 significance level of t tests, 403
compared to sample mean, 80
of random variables, 256 two population tests, 439, 455
definition of, 75
sample, 91, 93 Type I error, 384
historical perspective on, 77
Standard normal, 263, 273, 275, 277, Type II error, 384
problems for, 84
291, 356
summary of, 132
conversion to, 277
Sample mode, 87, 88, 131
summary of, 132 Standardized residuals, 562 U
scatter diagrams of, 562, 563 Union of events, 141
Sample percentiles, 81
Sample proportions, probabilities Standardizing, normal random Upper confidence bounds, for
and, 315 variables, 277 interval estimation of means,
Sample size, 349, 449, 450, 486 Statistics 351, 360, 369
hypothesis tests for large, changing definition of, 11 Upper control limit (UCL), 668
appropriate sizes for, 443 history of, 7, 9
population size and, 314 key terms for, 10, 130 V
Sample space, 141, 191 sampling
Variance
Sample standard deviation, 91, 93, distribution of sample variance,
computational formula for, 224
131, 450 321
Sample variance, 91, 93 problems for, 323 definition of, 224
distribution of, 321, 323 sample standard deviation, 90 hypothesis tests for unknown, 401
Samples sample variance, 90 measuring, in response values,
definition of, 5 Statistics, Central limit theorem, 303, 556
random, 448, 476, 612, 613 306, 310, 316, 324 of hypergeometric random
Sampling proportions, for finite Stem-and-leaf plots, 43 variables, 249
populations, 313, 315, 317 examples of, 42 of independent random variable
Scatter diagram for salaries, 47 sums, 226
regression to the mean, 543, 547 Stratified random sampling, of random variables, 223, 225,
standardized residuals and, 562, definition of, 6 227, 229
563 Symmetry of sample means, 303
Second quartile, 86, 131 approximately symmetric, 20 population mean tests with
Sensitive events, estimating
bar graphs and, 21 known, 387
probability of, 337
histograms, 34 properties of, 214
Series circuit, 175
Shewhart, Walter, 671 two population tests of equality of
Shewhart control chart, 671 T means, known variances, 435
Sign test, 623, 625, 627, 637, 662 t distribution, 356 Variation, chance, 667
one-sided, 627 t random variable, 355 Venn diagrams, 142, 143, 146
W X problems for, detecting shifts in


Wallace, David, 643 X control chart mean, 672
Weighted averages, 69 examples of, 670, 679
for detecting shifts in mean,
Wilcoxon sum-of-ranks test, 641 unknown mean and variance,
Z
Wright, Sewell, 386 674 Z test, 389, 424
INTRODUCTORY STATISTICS
Sheldon M. Ross

1 INTRODUCTION TO STATISTICS

Statistics: the art of learning from data
Descriptive statistics: describes and summarizes data
Inferential statistics: draws conclusions from data
Population: collection of elements of interest
Sample: the part of the population from which data is obtained

2 DESCRIBING DATA SETS

Frequency and relative frequency tables and graphs
Histograms
Stem-and-leaf plots
Scatter plots for paired data

3 USING STATISTICS TO SUMMARIZE DATA SETS

Sample mean: x̄ = Σ_{i=1}^n x_i / n
Sample median: the middle value
Sample variance: s² = Σ_{i=1}^n (x_i − x̄)² / (n − 1)
Sample standard deviation: s = √(s²)
Algebraic identity: Σ_{i=1}^n (x_i − x̄)² = Σ_{i=1}^n x_i² − n x̄²
Empirical rule for normal data sets:
  approximately 68% of the data lies within x̄ ± s
  approximately 95% of the data lies within x̄ ± 2s
  approximately 99.7% of the data lies within x̄ ± 3s
Sample correlation coefficient: r = Σ_{i=1}^n (x_i − x̄)(y_i − ȳ) / [(n − 1) s_x s_y]

4 PROBABILITY

0 ≤ P(A) ≤ 1
P(S) = 1, where S is the set of all possible values
P(A ∪ B) = P(A) + P(B) when A and B are disjoint
Probability of the complement: P(Aᶜ) = 1 − P(A)
Addition rule: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Conditional probability: P(B|A) = P(A ∩ B)/P(A)
Multiplication rule: P(A ∩ B) = P(A)P(B|A)
Independent events: P(A ∩ B) = P(A)P(B)

5 DISCRETE RANDOM VARIABLES

Expected value (or mean): E[X] = Σ_i x_i P{X = x_i}
E[X + Y] = E[X] + E[Y]
Variance: Var(X) = E[(X − E[X])²] = E[X²] − (E[X])²
Standard deviation: SD(X) = √Var(X)
Var(X + Y) = Var(X) + Var(Y) if X and Y are independent
Binomial random variable:
  P{X = i} = [n!/(i!(n − i)!)] pⁱ(1 − p)^(n−i), i = 0, . . . , n
  E[X] = np, Var(X) = np(1 − p)

6 NORMAL RANDOM VARIABLES

Normal random variable X: characterized by μ = E[X], σ = SD(X)
Standard normal random variable Z: normal with μ = 0, σ = 1
P{|Z| > x} = 2P{Z > x}, x > 0
P{Z < −x} = P{Z > x}
zα is such that P{Z > zα} = α
If X is normal, then Z = (X − μ)/σ is standard normal.
Additive property: If X and Y are independent normals, then X + Y is normal with mean μx + μy and variance σx² + σy².

7 DISTRIBUTIONS OF SAMPLING STATISTICS

X1, . . . , Xn is a sample from a population: E[Xi] = μ, Var(Xi) = σ²
E[X̄] = μ
Var(X̄) = σ²/n
Central limit theorem: Σ_{i=1}^n Xi is, for large n, approximately normal with mean nμ and standard deviation σ√n; equivalently, √n(X̄ − μ)/σ is approximately standard normal.
Normal approximation to the binomial: If np ≥ 5 and n(1 − p) ≥ 5, then [Bin(n, p) − np]/√(np(1 − p)) is approximately standard normal.

8 ESTIMATION

X̄ is the estimator of the population mean μ.
p̂, the proportion of the sample that has a certain property, estimates p, the population proportion having this property.
S² estimates σ², and S estimates σ.
100(1 − α) percent confidence interval estimator for μ:
  data normal or n large, σ known: X̄ ± zα/2 σ/√n
  data normal, σ unknown: X̄ ± tn−1,α/2 S/√n
100(1 − α) percent confidence interval for p: p̂ ± zα/2 √(p̂(1 − p̂)/n)

9 TESTING STATISTICAL HYPOTHESES

H0 = null hypothesis: the hypothesis that is to be tested
Significance level α: the (largest possible) probability of rejecting H0 when it is true
p value: the smallest significance level at which H0 would be rejected

Hypothesis Tests Concerning the Mean μ of a Population
Assumption: Either the distribution is normal or the sample size n is large.
  H0: μ = μ0 vs. H1: μ ≠ μ0 (σ known): TS = √n(X̄ − μ0)/σ; reject H0 at level α if |TS| ≥ zα/2; p value if TS = v is 2P{Z ≥ |v|}
  H0: μ ≤ μ0 vs. H1: μ > μ0 (σ known): TS = √n(X̄ − μ0)/σ; reject H0 at level α if TS ≥ zα; p value if TS = v is P{Z ≥ v}
  H0: μ = μ0 vs. H1: μ ≠ μ0 (σ unknown): TS = √n(X̄ − μ0)/S; reject H0 at level α if |TS| ≥ tn−1,α/2; p value if TS = v is 2P{Tn−1 ≥ |v|}
  H0: μ ≤ μ0 vs. H1: μ > μ0 (σ unknown): TS = √n(X̄ − μ0)/S; reject H0 at level α if TS ≥ tn−1,α; p value if TS = v is P{Tn−1 ≥ v}
Note: To test H0: μ ≥ μ0, multiply the data by −1 and use the above.

10 HYPOTHESES TESTS CONCERNING TWO POPULATIONS

Tests Concerning the Means of Two Populations When Samples Are Independent
The X sample of size n and the Y sample of size m are independent.
  H0: μx = μy vs. H1: μx ≠ μy (n, m large): TS = (X̄ − Ȳ)/√(Sx²/n + Sy²/m); reject if |TS| ≥ zα/2; p value if TS = v is 2P{Z ≥ |v|}
  H0: μx ≤ μy vs. H1: μx > μy (n, m large): TS = (X̄ − Ȳ)/√(Sx²/n + Sy²/m); reject if TS ≥ zα; p value if TS = v is P{Z ≥ v}
  H0: μx = μy vs. H1: μx ≠ μy (normal populations, σx = σy): TS = (X̄ − Ȳ)/√(Sp²(1/n + 1/m)); reject if |TS| ≥ tn+m−2,α/2; p value if TS = v is 2P{Tn+m−2 ≥ |v|}
  H0: μx ≤ μy vs. H1: μx > μy (normal populations, σx = σy): TS = (X̄ − Ȳ)/√(Sp²(1/n + 1/m)); reject if TS ≥ tn+m−2,α; p value if TS = v is P{Tn+m−2 ≥ v}
Sp² = [(n − 1)Sx² + (m − 1)Sy²]/(n + m − 2) is the pooled estimator of σx² = σy².

Hypothesis Tests Concerning p (the proportion of a large population that has a certain characteristic)
X is the number of population members in a sample of size n that have the characteristic. B is a binomial random variable with parameters n and p0.
  H0: p ≤ p0 vs. H1: p > p0: TS = X; p value if TS = x is P{B ≥ x}
  H0: p = p0 vs. H1: p ≠ p0: TS = X; p value if TS = x is 2 Min{P{B ≤ x}, P{B ≥ x}}

Tests Concerning Two Population Proportions
p1 and p2 are the proportions of the members of two populations that have a certain characteristic. A random sample of size n1 is chosen from the first population, and an independent random sample of size n2 is chosen from the second. p̂1 and p̂2 are the proportions of the samples that have the characteristic, and p̂ is the proportion of the combined samples that has it.
  H0: p1 = p2 vs. H1: p1 ≠ p2: TS = (p̂1 − p̂2)/√((1/n1 + 1/n2) p̂(1 − p̂)); reject H0 if |TS| ≥ zα/2; p value if TS = v is 2P{Z ≥ |v|}
  H0: p1 = p2 vs. H1: p1 > p2: TS = (p̂1 − p̂2)/√((1/n1 + 1/n2) p̂(1 − p̂)); reject H0 if TS ≥ zα; p value if TS = v is P{Z ≥ v}

11 ANALYSIS OF VARIANCE

One-Factor ANOVA Table
X̄i and Si², i = 1, . . . , m, are the sample means and sample variances of independent samples of size n from normal populations having means μi and a common variance σ².
  Between-samples estimator of σ²: nS̄², where S̄² is the sample variance of the sample means X̄1, . . . , X̄m
  Within-samples estimator of σ²: Σ_{i=1}^m Si²/m
  Test statistic: TS = nS̄² / (Σ_{i=1}^m Si²/m)
Significance level α test of H0: all μi are equal: reject H0 if TS ≥ Fm−1,m(n−1),α; do not reject otherwise.
If TS = v, then p value = P{Fm−1,m(n−1) ≥ v}, where Fm−1,m(n−1) is an F random variable with m − 1 numerator and m(n − 1) denominator degrees of freedom.

Two-factor ANOVA model: For i = 1, . . . , m, j = 1, . . . , n,
  E[Xij] = μ + αi + βj, with Σ_{i=1}^m αi = Σ_{j=1}^n βj = 0
μ is the grand mean, αi is the deviation from the grand mean due to row i, and βj is the deviation from the grand mean due to column j. Their estimators are
  μ̂ = X.. , α̂i = Xi. − X.. , β̂j = X.j − X..

Two-Factor ANOVA Table
  Row sum of squares: SSr = n Σ_{i=1}^m (Xi. − X..)², with m − 1 degrees of freedom
  Column sum of squares: SSc = m Σ_{j=1}^n (X.j − X..)², with n − 1 degrees of freedom
  Error sum of squares: SSe = Σ_{i=1}^m Σ_{j=1}^n (Xij − Xi. − X.j + X..)², with N = (n − 1)(m − 1) degrees of freedom
  Null hypothesis of no row effect (all αi = 0): TS = [SSr/(m − 1)]/[SSe/N]; reject at level α if TS ≥ Fm−1,N,α; p value if TS = v is P{Fm−1,N ≥ v}
  Null hypothesis of no column effect (all βj = 0): TS = [SSc/(n − 1)]/[SSe/N]; reject at level α if TS ≥ Fn−1,N,α; p value if TS = v is P{Fn−1,N ≥ v}

12 LINEAR REGRESSION

Simple linear regression model: Y = α + βx + e
Least squares estimators: β̂ = SxY/Sxx, α̂ = Ȳ − β̂x̄
  SxY = Σ_{i=1}^n (xi − x̄)(Yi − Ȳ) = Σ_{i=1}^n xiYi − n x̄ Ȳ
  Sxx = Σ_{i=1}^n (xi − x̄)² = Σ_{i=1}^n xi² − n x̄²
Estimated regression line: y = α̂ + β̂x
The error term e is normal with mean 0 and variance σ². The estimator of σ² is SSR/(n − 2), where
  SSR = Σ_i (Yi − α̂ − β̂xi)² = (Sxx SYY − SxY²)/Sxx
To test H0: β = 0, use TS = √((n − 2)Sxx/SSR) β̂. The significance level γ test rejects H0 if |TS| ≥ tn−2,γ/2. If TS = v, p value = 2P{Tn−2 ≥ |v|}.
100(1 − γ) percent prediction interval for the response at input x0:
  α̂ + β̂x0 ± tn−2,γ/2 √(1 + 1/n + (x0 − x̄)²/Sxx) √(SSR/(n − 2))
Coefficient of determination: R² = 1 − SSR/SYY is the proportion of the variation in the response variables that is explained by the different input values. Its square root is the absolute value of the sample correlation coefficient.
Multiple linear regression model: Y = β0 + β1x1 + · · · + βk xk + e

13 CHI-SQUARED GOODNESS OF FIT TESTS

Pi is the proportion of the population having value i, i = 1, . . . , k. To test H0: Pi = pi, i = 1, . . . , k, take a sample of size n. Let Ni be the number of sample values equal to i, let ei = npi, and let
  TS = Σ_{i=1}^k (Ni − ei)²/ei
The significance level α test rejects H0 if TS ≥ χ²k−1,α. If TS = v, then p value = P{χ²k−1 ≥ v}.

Suppose each member of a population has an X and a Y characteristic. Assume r possible X and s possible Y characteristics. To test for independence of the characteristics of a randomly chosen member, choose a sample of size n.
  Nij = number with X characteristic i and Y characteristic j
  Ni = number with X characteristic i
  Mj = number with Y characteristic j
  êij = Ni Mj / n
If Σ_i Σ_j (Nij − êij)²/êij ≥ χ²(r−1)(s−1),α, then the hypothesis of independence is rejected at significance level α.

14 NONPARAMETRIC HYPOTHESES TESTS

Let η = the median of the population. The sign test of
  H0: η = m against H1: η ≠ m
takes a sample of size n. If i of the values are less than m, then
  p value = 2 Min(P{N ≤ i}, P{N ≥ i})
where N is a binomial (n, 1/2) random variable.

The signed-rank test is used to test the hypothesis that a population distribution is symmetric about 0. It ranks the data in terms of absolute value. TS is the sum of the ranks of the negative values. If TS = t, then
  p value = 2 Min(P{TS ≤ t}, P{TS ≥ t})
TS is approximately normal with mean n(n + 1)/4 and variance n(n + 1)(2n + 1)/24.

To test the equality of two population distributions, draw random samples of sizes n and m and rank the combined n + m data values. The rank-sum test uses TS = the sum of the ranks of the first sample, and rejects H0 if TS is either significantly large or significantly small. If TS = t, then
  p value = 2 Min(P{TS ≤ t}, P{TS ≥ t})
TS is approximately normal with mean n(n + m + 1)/2 and variance nm(n + m + 1)/12.
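As an illustration (not from the text), the sign test p value above can be computed exactly from the binomial (n, 1/2) distribution. The sketch below is ours: the name `sign_test_p_value` is hypothetical, and it assumes no observation equals the hypothesized median m.

```python
from math import comb

def sign_test_p_value(data, m):
    """Two-sided sign test of H0: median = m.

    Assumes no data value equals m exactly; under H0 the number N of
    values below m is binomial(n, 1/2)."""
    n = len(data)
    i = sum(1 for x in data if x < m)

    def binom_cdf(k):
        # P{N <= k} for N ~ binomial(n, 1/2)
        return sum(comb(n, j) for j in range(k + 1)) / 2 ** n

    p_le = binom_cdf(i)            # P{N <= i}
    p_ge = 1 - binom_cdf(i - 1)    # P{N >= i}
    return min(1.0, 2 * min(p_le, p_ge))  # 2*Min(...), capped at 1
```

For example, if all 10 sample values fall below m, the p value is 2(1/2)^10 ≈ 0.002, strong evidence against H0: η = m.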

To test the hypothesis that a sequence of 0s and 1s is random, use the runs test by counting R, the number of runs. Reject randomness when R is either too small or too large to be explained by chance. Use the result that when H0 is true, R is approximately normal with mean 1 + 2nm/(n + m) and variance
  2nm(2nm − n − m) / [(n + m)²(n + m − 1)]
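A rough implementation of the runs test, using the normal approximation for R stated above. This sketch is ours (the text's Program 14-3 plays a similar role); it assumes the data arrive as a list of 0s and 1s with at least one of each.

```python
from math import erf, sqrt

def runs_test_p_value(bits):
    """Two-sided p value for randomness of a 0-1 sequence,
    via the normal approximation to R, the number of runs."""
    n, m = bits.count(0), bits.count(1)
    # A new run starts at every position where the value changes.
    r = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mean = 1 + 2 * n * m / (n + m)
    var = 2 * n * m * (2 * n * m - n - m) / ((n + m) ** 2 * (n + m - 1))
    z = (r - mean) / sqrt(var)
    cdf = (1 + erf(z / sqrt(2))) / 2  # standard normal CDF at z
    return 2 * min(cdf, 1 - cdf)
```

A perfectly alternating sequence such as 0, 1, 0, 1, . . . produces far too many runs and a p value near 0, so randomness is rejected.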

15 QUALITY CONTROL

Control chart limits: μ ± 3σ/√n, where n = subgroup size
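The control limits above can be sketched as follows; the function name is ours, not the text's.

```python
from math import sqrt

def control_limits(mu, sigma, n):
    """Lower and upper control limits, mu ± 3*sigma/sqrt(n),
    for the means of subgroups of size n."""
    half_width = 3 * sigma / sqrt(n)
    return mu - half_width, mu + half_width

# Subgroup means falling outside (LCL, UCL) signal an out-of-control process.
lcl, ucl = control_limits(10, 2, 4)  # → (7.0, 13.0)
```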
Area under the Standard Normal Curve to the Left of x
x .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621

1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9279 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817

2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990

3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998
