ST3236_Note3
Somabha Mukherjee
Outline
1 Random Variables
What are Random Variables?
The outcomes of a random experiment are often quantified or summarized by
random variables. A random variable assigns a value to each outcome of a
random experiment. For example, suppose you roll two fair dice and observe
the faces, so that the sample space is
Ω := {(i, j) : 1 ≤ i ≤ 6, 1 ≤ j ≤ 6} .
Now, define X := the sum of the two faces that you observe. This is a random
variable, since
X (i, j) = i + j .
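The dice example above can be made concrete with a short Python sketch (not part of the notes): we list the sample space Ω, tabulate the random variable X(i, j) = i + j, and read off its probability mass function.

```python
from collections import Counter
from fractions import Fraction

# Sample space: all ordered pairs (i, j) of two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# The random variable X assigns to each outcome the sum of the faces.
X = {(i, j): i + j for (i, j) in omega}

# Distribution of X: P(X = k) = #{(i, j) : i + j = k} / 36.
counts = Counter(X.values())
pmf = {k: Fraction(c, 36) for k, c in counts.items()}

print(pmf[7])   # 1/6: six of the 36 outcomes sum to 7
```

Using exact fractions avoids floating-point noise, so the probabilities sum to exactly 1.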
Distribution Functions
The distribution function of a random variable X is defined as
FX (t) = P(X ≤ t) .
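As an illustration (my own, assuming the two-dice sum X from the previous slide), the distribution function FX(t) = P(X ≤ t) can be computed directly from the pmf:

```python
from fractions import Fraction

# PMF of the sum of two fair dice: P(X = k) = (6 - |k - 7|) / 36.
pmf = {k: Fraction(6 - abs(k - 7), 36) for k in range(2, 13)}

def F(t):
    """Distribution function F_X(t) = P(X <= t)."""
    return sum(p for k, p in pmf.items() if k <= t)

print(F(1))    # 0: X is never below 2
print(F(7))    # 7/12: P(X <= 7) = 21/36
print(F(12))   # 1: X is always at most 12
```

Note that F is non-decreasing, tends to 0 on the left and to 1 on the right, matching the properties listed on the next slide.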
Properties of Distribution Functions
Discrete Distributions
A random variable X is said to have a discrete distribution, if there exists a
countable set C such that P(X ∈ C ) = 1.
Number of independent trials needed to get the r-th success (with success
probability p).
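This "number of trials until the r-th success" variable (the negative binomial distribution) is easy to simulate; the sketch below is my own illustration, using r = 3 and p = 0.5, and checks the simulated mean against the known value r/p.

```python
import random

def trials_until_rth_success(r, p, rng):
    """Simulate independent Bernoulli(p) trials; return how many
    trials are needed to observe the r-th success."""
    successes = trials = 0
    while successes < r:
        trials += 1
        if rng.random() < p:
            successes += 1
    return trials

rng = random.Random(0)
samples = [trials_until_rth_success(3, 0.5, rng) for _ in range(100_000)]

# The mean of this distribution is r / p = 6; the simulation average
# should be close to that.
print(sum(samples) / len(samples))
```

The simulated values are supported on {r, r + 1, ...}, a countable set, so this is indeed a discrete distribution in the sense defined above.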
Continuous Distributions
A Word of Caution!
There are continuous distributions that are not absolutely continuous. However,
such examples are somewhat involved and beyond the scope of this module. That
is why some textbooks simply use the name continuous distributions for
absolutely continuous distributions, which is technically incorrect.
Examples of Absolutely Continuous Distributions
1 Uniform Distribution: f (t) = 1/(b − a) (a < t < b).
Assigns equal probability to sets of equal size.
2 Normal Distribution: f (t) = (2πσ²)^(−1/2) e^(−(t−µ)²/(2σ²)) (t ∈ R).
Used to model the weights and heights of individuals in a population.
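A defining feature of an absolutely continuous distribution is that its density integrates to 1 over its support. The sketch below (my own illustration, using a crude midpoint rule) verifies this numerically for the two densities above.

```python
import math

def uniform_pdf(t, a, b):
    """Uniform density: 1/(b - a) on (a, b), zero elsewhere."""
    return 1.0 / (b - a) if a < t < b else 0.0

def normal_pdf(t, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    return math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def integrate(f, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# Both densities integrate to (essentially) 1.
print(round(integrate(lambda t: uniform_pdf(t, 0, 2), 0, 2), 6))
print(round(integrate(lambda t: normal_pdf(t, 0, 1), -10, 10), 6))
```

For the normal density the integration range [-10, 10] is wide enough that the truncated tails are negligible.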
Joint Distribution of Random Variables
EXERCISE!!
Expected Value of a Random Variable
The expected value of a random variable is a measure of the (weighted)
average of all the values it can take.
Note that E(X ) depends only on the distribution of X and not on the exact
function X .
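The point that E(X) depends only on the distribution of X can be checked concretely (my own sketch, reusing the two-dice sum): computing the expectation outcome-by-outcome on Ω and computing it from the pmf alone give the same answer.

```python
from fractions import Fraction

# E(X) computed directly on the sample space Omega ...
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
e_on_omega = sum(Fraction(i + j, 36) for (i, j) in omega)

# ... and via the distribution of X alone: E(X) = sum_k k * P(X = k).
pmf = {k: Fraction(6 - abs(k - 7), 36) for k in range(2, 13)}
e_on_pmf = sum(k * p for k, p in pmf.items())

print(e_on_omega, e_on_pmf)   # both equal 7
```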
Some Properties of Expectation
Indicator Functions and Examples
For an event A, we define the indicator of the event A as:
1A (ω) = 1 if ω ∈ A, and 1A (ω) = 0 if ω ∉ A.
For each 1 ≤ i ≤ 100, let Ai denote the event that the letters at the i-th place
match. Then, P(Ai ) = 26/26² = 1/26.
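A quick simulation (my own; it assumes the letter-matching example compares two independent uniformly random 100-letter strings) confirms the indicator calculation: by linearity of expectation, the expected number of matching places is 100 · (1/26) ≈ 3.846.

```python
import random
import string

rng = random.Random(1)
n_letters, n_sims = 100, 20_000

total_matches = 0
for _ in range(n_sims):
    s = [rng.choice(string.ascii_lowercase) for _ in range(n_letters)]
    t = [rng.choice(string.ascii_lowercase) for _ in range(n_letters)]
    # Sum of indicators 1_{A_i}: count positions where the letters match.
    total_matches += sum(1 for a, b in zip(s, t) if a == b)

# Simulated average should be close to 100 / 26 ~ 3.846.
print(total_matches / n_sims)
```

Note that linearity of expectation needs no independence between the events Ai; the indicators are simply summed.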
Variance and Higher Moments
Covariance
Var(X ) = Cov(X , X ).
Some Properties of Variance and Covariance
Var(aX + bY ) = a²Var(X ) + b²Var(Y ) + 2ab Cov(X , Y ).
Remark:
Even for random variables X1 , . . . , Xn which are not independent, but satisfy
Cov(Xi , Xj ) = 0 for all i ̸= j, we have:
Var(X1 + · · · + Xn ) = Var(X1 ) + · · · + Var(Xn ).
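These identities can be verified exactly for a small discrete joint distribution. The sketch below (my own, using two independent fair dice, for which Cov(X, Y) = 0) checks the formula for Var(aX + bY) with a = 2, b = 3.

```python
from fractions import Fraction

# Joint distribution of two independent fair dice (so Cov(X, Y) = 0).
joint = {(i, j): Fraction(1, 36) for i in range(1, 7) for j in range(1, 7)}

def E(g):
    """Expectation of g(X, Y) under the joint distribution."""
    return sum(g(i, j) * p for (i, j), p in joint.items())

def var(g):
    """Var(g) = E[(g - E g)^2]."""
    m = E(g)
    return E(lambda i, j: (g(i, j) - m) ** 2)

def cov(g, h):
    """Cov(g, h) = E[g h] - E[g] E[h]."""
    return E(lambda i, j: g(i, j) * h(i, j)) - E(g) * E(h)

X = lambda i, j: i
Y = lambda i, j: j
a, b = 2, 3

lhs = var(lambda i, j: a * X(i, j) + b * Y(i, j))
rhs = a**2 * var(X) + b**2 * var(Y) + 2 * a * b * cov(X, Y)
print(lhs == rhs, cov(X, Y))   # True 0
```

Because the dice are uncorrelated, the covariance term vanishes and the variances simply add with the squared coefficients.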