CS70 Cheatsheet

This document is a comprehensive cheatsheet covering various topics in computer science, including propositional logic, proofs, sets, countability, graph theory, polynomials, probability theory, and distributions. It outlines key concepts, theorems, and formulas relevant to each topic, such as the Chinese Remainder Theorem, RSA encryption, and the Central Limit Theorem. The cheatsheet serves as a quick reference for students studying these subjects.

Uploaded by Patrick Kwon

CS 70 Cheatsheet - Alec Li

Note 1 (Propositional Logic)
• P ⟹ Q ≡ ¬P ∨ Q
• P ⟹ Q ≡ ¬Q ⟹ ¬P (contrapositive)
• ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
• ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
• P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
• P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
• ¬(∀x P(x)) ≡ ∃x ¬P(x)
• ¬(∃x P(x)) ≡ ∀x ¬P(x)

Note 2/3 (Proofs)
• Direct proof
• Proof by contraposition
• Proof by cases
• Proof by induction
  – Base case (prove the smallest case is true)
  – Inductive hypothesis (assume n = k is true for weak induction; assume n ≤ k is true for strong induction)
  – Inductive step (prove n = k + 1 is true)
• Pigeonhole principle
  – Putting n + m balls in n bins ⟹ ≥ 1 bin has ≥ 2 balls

Note 4 (Sets)
• P(S) = powerset = set of all subsets; if |S| = k, then |P(S)| = 2^k
• One-to-one (injection): f(x) = f(y) ⟹ x = y
• Onto (surjection): ∀y ∃x (f(x) = y); "hits" all of the range
• Bijection: both injective and surjective

Note 5 (Countability & Computability)
• Countable if there is a bijection to ℕ
• Cantor-Schröder-Bernstein Theorem: there is a bijection between A and B if there exist injections f : A → B and g : B → A
• Cantor diagonalization: to prove uncountability, list out the possibilities and construct a new possibility that differs from each listed one in at least one place (e.g. reals in (0, 1), infinite binary strings, etc.)
• A ⊆ B and B countable ⟹ A countable
• A ⊆ B and A uncountable ⟹ B uncountable
• An infinite Cartesian product is sometimes countable (∅ × ∅ × ···), sometimes uncountable ({0, 1}^∞)
• Halting Problem: no program can determine, for every program, whether it halts (uncomputable)
• Reduction of TestHalt(P, x) to some task (here, TestTask):
  – define an inner function that does the task if and only if P(x) halts
  – call TestTask on the inner function and return the result in TestHalt

Note 6 (Graph Theory)
• K_n has n(n−1)/2 edges
• Handshaking lemma: total degree = 2e
• Trees (all equivalent):
  – connected & no cycles
  – connected & has n − 1 edges (n = |V|)
  – connected & removing any edge disconnects the graph
  – acyclic & adding any edge creates a cycle
• Hypercubes:
  – vertices are n-length bit strings, connected by an edge iff they differ in exactly 1 bit
  – the n-dimensional hypercube has n·2^(n−1) edges and is bipartite (even- vs odd-parity bitstrings)
• Eulerian walk: visits each edge exactly once; possible iff connected and (all vertices have even degree, or exactly 2 have odd degree)
• Eulerian tour: an Eulerian walk that starts & ends at the same vertex; possible iff connected and all degrees are even
• Planar graphs:
  – v + f = e + 2 (Euler's formula)
  – Σ_{i=1}^{f} s_i = 2e, where s_i = number of sides of face i
  – e ≤ 3v − 6 if planar (because s_i ≥ 3)
  – e ≤ 2v − 4 if planar and bipartite (because s_i ≥ 4)
  – nonplanar if and only if the graph contains (a subdivision of) K_5 or K_{3,3}
  – every planar graph can be colored with ≤ 4 colors

Note 7 (Modular Arithmetic)
• x⁻¹ (modular inverse) exists mod m if and only if gcd(x, m) = 1
• Extended Euclidean Algorithm (example table for x = 35, y = 12):

    x    y   ⌊x/y⌋    a    b
    35   12    2     −1    3    (answer)
    12   11    1      1   −1
    11    1   11      0    1
     1    0    –      1    0    (start; here x = gcd)

  – new a = old b
  – new b = a − b⌊x/y⌋
  – if gcd(x, y) = 1, then a = x⁻¹ mod y and b = y⁻¹ mod x
• Chinese Remainder Theorem:
  – find bases b_i that are ≡ 1 mod m_i and ≡ 0 mod m_j for j ≠ i
  – b_i = c_i (c_i⁻¹ mod m_i), where c_i = Π_{j≠i} m_j
  – x ≡ Σ_i a_i b_i (mod Π_i m_i)
  – the solution is unique mod Π_i m_i
  – the m_i must be pairwise relatively prime in order to use CRT

Note 8 (RSA)
• Scheme: for primes p, q, find e coprime to (p − 1)(q − 1)
  – public key: N = pq and e
  – private key: d = e⁻¹ mod (p − 1)(q − 1)
  – encryption of message m: y = m^e (mod N)
  – decryption of encrypted message y: y^d (mod N) = m
• Fermat's Little Theorem (FLT): x^p ≡ x (mod p); equivalently x^(p−1) ≡ 1 (mod p) if x is coprime to p
• Prime Number Theorem: π(n) ≥ n/ln n for n ≥ 17, where π(n) = # of primes ≤ n
• Breaking RSA if we know d:
  – we know de − 1 = k(p − 1)(q − 1) for some k ≤ e, because d < (p − 1)(q − 1)
  – so (de − 1)/k = pq − p − q + 1; since pq = N, we can find p, q because we know d, e, k

Note 9 (Polynomials)
• Property 1: a nonzero polynomial of degree d has at most d roots
• Property 2: d + 1 points (with distinct x_i) uniquely define a polynomial of degree at most d
• Lagrange interpolation:
  – Δ_i(x) = Π_{j≠i} (x − x_j)/(x_i − x_j)
  – P(x) = Σ_i y_i Δ_i(x)
• Secret sharing (normally over GF(p)):
  – P(0) = secret; P(1), ..., P(n) given to the n people
  – P(x) = polynomial of degree k − 1, where k people are needed to recover the secret
• Rational Root Theorem: for P(x) = a_n x^n + ··· + a_0, any root of the form p/q must have p | a_0 and q | a_n

Note 10 (Error Correcting Codes)
• Erasure errors: k packets lost, message length n; need to send n + k packets, because P(x) of degree n − 1 needs n points to define it
• General errors: k packets corrupted, message length n; send n + 2k packets
• Berlekamp-Welch:
  – P(x) encodes the message (degree n − 1)
  – E(x) is constructed so that its roots are the error locations (degree k, coefficients unknown)
  – Q(x) = P(x)E(x) (degree n + k − 1)
  – substitute each received point (x_i, r_i) into Q(x_i) = r_i E(x_i) to get a system of equations
  – solve for the coefficients; then P(x) = Q(x)/E(x)
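The extended-Euclid table and the CRT recipe of Note 7 can be sketched in code; a minimal Python sketch (the function names `egcd`, `inverse`, and `crt` are my own, not from the cheatsheet):

```python
def egcd(x, y):
    """Return (g, a, b) with a*x + b*y == g == gcd(x, y)."""
    if y == 0:
        return x, 1, 0          # base row of the table: gcd, a = 1, b = 0
    g, a, b = egcd(y, x % y)
    # new a = old b; new b = a - b * floor(x/y), exactly as in the table
    return g, b, a - b * (x // y)

def inverse(x, m):
    """Modular inverse of x mod m; exists iff gcd(x, m) == 1."""
    g, a, _ = egcd(x, m)
    if g != 1:
        raise ValueError("no inverse")
    return a % m

def crt(residues, moduli):
    """Solve x = a_i (mod m_i) for pairwise coprime moduli m_i."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for a, m in zip(residues, moduli):
        c = M // m                     # c_i = product of the other m_j
        x += a * c * inverse(c, m)     # b_i = c_i * (c_i^-1 mod m_i)
    return x % M
```

For the table's example, `egcd(35, 12)` returns `(1, -1, 3)`, matching the answer row.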

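The RSA scheme of Note 8 can be exercised end to end; a toy sketch with deliberately tiny, made-up primes (far too small for real use):

```python
# Toy RSA roundtrip following Note 8; p, q, e are illustrative values only.
p, q = 61, 53
N = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # e must be coprime to this
e = 17                         # public exponent
d = pow(e, -1, phi)            # private key d = e^-1 mod (p-1)(q-1)  (Python 3.8+)

m = 1234                       # message; must satisfy m < N
y = pow(m, e, N)               # encryption: y = m^e mod N
assert pow(y, d, N) == m       # decryption: y^d mod N recovers m
```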

Note 11 (Counting)
• 1st rule of counting: multiply the # of ways for each choice
• 2nd rule of counting: count ordered arrangements, then divide by the # of ways to order to get unordered
• C(n, k) = n! / (k!(n−k)!) = # of ways to select k from n
• Stars and bars: n objects, k groups → n stars, k − 1 bars → C(n+k−1, k−1) = C(n+k−1, n)
• Zeroth rule of counting: if there is a bijection between A and B, then |A| = |B|
• Binomial theorem: (a + b)^n = Σ_{k=0}^{n} C(n, k) a^k b^(n−k)
• Hockey-stick identity: C(n, k+1) = C(n−1, k) + C(n−2, k) + ··· + C(k, k)
• Derangements: D_n = (n − 1)(D_{n−1} + D_{n−2}) = n! Σ_{k=0}^{n} (−1)^k / k!
• Principle of Inclusion-Exclusion: |A_1 ∪ A_2| = |A_1| + |A_2| − |A_1 ∩ A_2|; more generally, alternately add/subtract all intersections
• Stirling's approximation: n! ≈ √(2πn) (n/e)^n

Note 12 (Probability Theory)
• Sample points = outcomes
• Sample space Ω = all possible outcomes
• Probability space: (Ω, P(ω)) = (sample space, probability function)
• 0 ≤ P(ω) ≤ 1 for all ω ∈ Ω; Σ_{ω∈Ω} P(ω) = 1
• Uniform probability: P(ω) = 1/|Ω| for all ω ∈ Ω
• P(A) = Σ_{ω∈A} P(ω), where A is an event
• If uniform: P(A) = (# sample points in A)/(# sample points in Ω) = |A|/|Ω|
• P(Ā) = 1 − P(A)

Note 13 (Conditional Probability)
• P(ω | B) = P(ω)/P(B) for ω ∈ B
• P(A | B) = P(A ∩ B)/P(B), so P(A ∩ B) = P(A | B) P(B)
• Bayes' Rule:
  P(A | B) = P(B | A) P(A) / P(B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | Ā) P(Ā)]
• Total Probability Rule (the denominator of Bayes' Rule):
  P(B) = Σ_{i=1}^{n} P(B ∩ A_i) = Σ_{i=1}^{n} P(B | A_i) P(A_i), for A_i partitioning Ω
• Independence: P(A ∩ B) = P(A) P(B), or equivalently P(A | B) = P(A)
• Union bound: P(∪_{i=1}^{n} A_i) ≤ Σ_{i=1}^{n} P(A_i)

Note 14 (Random Variables)
• Bernoulli distribution: used as an indicator RV
• Binomial distribution: P(X = i) = probability of i successes in n trials with success probability p
  – If X ~ Bin(n, p) and Y ~ Bin(m, p) are independent, then X + Y ~ Bin(n + m, p)
• Hypergeometric distribution: P(X = k) = probability of k successes in n draws without replacement from a size-N population containing B successes
• Joint distribution: P(X = a, Y = b)
• Marginal distribution: P(X = a) = Σ_b P(X = a, Y = b)
• Independence: P(X = a, Y = b) = P(X = a) P(Y = b)
• Expectation: E[X] = Σ_x x P(X = x)
• LOTUS: E[g(X)] = Σ_x g(x) P(X = x)
• Linearity of expectation: E[aX + bY] = a E[X] + b E[Y]
• X, Y independent: E[XY] = E[X] E[Y]

Note 15 (Variance/Covariance)
• Variance: Var(X) = E[(X − μ)²] = E[X²] − E[X]²
  – Var(cX) = c² Var(X); Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)
  – if independent: Var(X + Y) = Var(X) + Var(Y)
• Covariance: Cov(X, Y) = E[XY] − E[X] E[Y]
• Correlation: Corr(X, Y) = Cov(X, Y)/(σ_X σ_Y), always in [−1, 1]
• Independence implies uncorrelated (Cov = 0), but not the other way around. Example:
  X = 1 with prob. 0.5 and X = −1 with prob. 0.5;
  given X = −1, Y = 1 or −1 each with prob. 0.5; given X = 1, Y = 0.
  Then Cov(X, Y) = 0, but X and Y are not independent.

Note 16 (Geometric/Poisson Distributions)
• Geometric distribution: P(X = i) = probability of exactly i trials until the first success, with success probability p; use X − 1 for failures before the first success
  – Memoryless property: P(X > a + b | X > a) = P(X > b); i.e. waiting > b more units has the same probability no matter where we start
• Poisson distribution: λ = average # of successes in a unit of time
  – X ~ Pois(λ), Y ~ Pois(μ) independent ⟹ X + Y ~ Pois(λ + μ)
  – If X ~ Bin(n, λ/n) with λ > 0 constant, then X → Pois(λ) as n → ∞

Note 20 (Continuous Distributions)
• Probability density function:
  – f_X(x) ≥ 0 for all x ∈ ℝ
  – ∫_{−∞}^{∞} f_X(x) dx = 1
• Cumulative distribution function: F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(t) dt; f_X(x) = d/dx F_X(x)
• Expectation: E[X] = ∫_{−∞}^{∞} x f_X(x) dx
• LOTUS: E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx
• Joint distribution: P(a ≤ X ≤ b, c ≤ Y ≤ d)
  – f_XY(x, y) ≥ 0 for all x, y ∈ ℝ
  – ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_XY(x, y) dx dy = 1
• Marginal distribution: f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy (integrate over all y)
• Independence: f_XY(x, y) = f_X(x) f_Y(y)
• Conditional probability: f_{X|A}(x) = f_X(x)/P(A), f_{X|Y}(x | y) = f_XY(x, y)/f_Y(y)
• Exponential distribution: continuous analog of the geometric distribution
  – Memoryless property: P(X > t + s | X > t) = P(X > s)
  – Additionally, P(X < Y | min(X, Y) > t) = P(X < Y)
  – If X ~ Exp(λ_X) and Y ~ Exp(λ_Y) are independent, then min(X, Y) ~ Exp(λ_X + λ_Y) and P(X ≤ Y) = λ_X/(λ_X + λ_Y)
• Normal (Gaussian) distribution:
  – If X ~ N(μ_X, σ_X²) and Y ~ N(μ_Y, σ_Y²) are independent:
    Z = aX + bY ~ N(aμ_X + bμ_Y, a²σ_X² + b²σ_Y²)
• Central Limit Theorem: if S_n = Σ_{i=1}^{n} X_i with all X_i iid with mean μ and variance σ²:
  S_n/n ≈ N(μ, σ²/n);  (S_n − nμ)/(σ√n) ≈ N(0, 1)

Note 17 (Concentration Inequalities, LLN)
• Markov's Inequality: P(X ≥ c) ≤ E[X]/c, if X is nonnegative and c > 0
• Generalized Markov: P(|Y| ≥ c) ≤ E[|Y|^r]/c^r for c, r > 0
• Chebyshev's Inequality: P(|X − μ| ≥ c) ≤ Var(X)/c², for μ = E[X], c > 0
  – Corollary: P(|X − μ| ≥ kσ) ≤ 1/k², for σ = √Var(X), k > 0
• Confidence intervals:
  – For proportions: P(|p̂ − p| ≥ ε) ≤ Var(p̂)/ε² ≤ δ, where 1 − δ is the confidence level (95% interval → δ = 0.05)
  – p̂ = proportion of successes in n trials, Var(p̂) = p(1−p)/n
  – ⟹ n ≥ 1/(4ε²δ) suffices (using p(1−p) ≤ 1/4)
  – For means: P(|S_n/n − μ| ≥ ε) ≤ σ²/(nε²) = δ, where S_n = Σ_{i=1}^{n} X_i with all X_i iid with mean μ and variance σ²
  – ⟹ ε = σ/√(nδ); interval = S_n/n ± σ/√(nδ)
  – With the CLT:
    P(|A_n − μ| ≤ ε) = P(|(A_n − μ)√n / σ| ≤ ε√n/σ) ≈ 1 − 2Φ(−ε√n/σ) = 1 − δ.
    Here A_n = S_n/n and the CLT gives A_n ≈ N(μ, σ²/n); use the inverse cdf to get ε
• Law of large numbers: as n → ∞, the sample average of iid X_1, ..., X_n tends to the population mean
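The uncorrelated-but-dependent example at the end of Note 15 can be verified by direct enumeration; a small sketch (the outcome table encodes the distribution described above):

```python
# Each tuple is (x, y, probability): X = ±1 each w.p. 1/2; if X = -1 then
# Y = ±1 each w.p. 1/2; if X = 1 then Y = 0.
outcomes = [(-1, 1, 0.25), (-1, -1, 0.25), (1, 0, 0.5)]

EX = sum(pr * x for x, y, pr in outcomes)
EY = sum(pr * y for x, y, pr in outcomes)
EXY = sum(pr * x * y for x, y, pr in outcomes)
cov = EXY - EX * EY            # exactly 0, so X and Y are uncorrelated

# ...yet not independent: P(X=1, Y=0) = 0.5, while P(X=1) * P(Y=0) = 0.25
p_x1 = sum(pr for x, y, pr in outcomes if x == 1)
p_y0 = sum(pr for x, y, pr in outcomes if y == 0)
assert cov == 0 and p_x1 * p_y0 != 0.5
```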


Note 24 (Markov Chains)
• Markov chain = sequence of states
• X_n = state at time step n
• 𝒳 = state space (finite)
• π_n = distribution of states at time step n
• Markov property (memoryless): P(X_{n+1} = i | X_n, X_{n−1}, ..., X_0) = P(X_{n+1} = i | X_n)
• Transition matrix P: transition probabilities, from row to column
• π_n = π_0 P^n
• Invariant distribution: π = πP; solve the balance equations together with Σ_i π(i) = 1
• Irreducible Markov chain: any state can be reached from any other
• Every irreducible Markov chain has a unique invariant distribution:
  π(i) = lim_{n→∞} (1/n) Σ_{m=0}^{n−1} 1{X_m = i}.
  That is, the long-run average fraction of time spent in state i equals π(i)
• Periodicity: let d(i) = gcd{n > 0 : P(X_n = i | X_0 = i) > 0}
  – If d(i) > 1: periodic with period d(i)
  – If d(i) = 1: aperiodic
  – If irreducible, all states have the same d(i)
• Aperiodic irreducible Markov chains always converge to the invariant distribution
• Hitting time: # of steps before first reaching state i
  – Let β(j) denote this value starting from state j; set up a system of equations and solve
• P(A before B): similarly, let α(j) denote this probability starting from state j; set up a system of equations and solve (use the Total Probability Rule)
• Similar problems: let f(i) denote the # of steps or the probability, starting from state i; set up equations and solve
• 2-state Markov chain, with P(1 → 2) = a, P(2 → 1) = b, and self-loop probabilities 1 − a, 1 − b:

  P = [[1−a, a], [b, 1−b]]    and    π = [b/(a+b), a/(a+b)]

Note: Do not blindly copy down the content on this cheatsheet (this is one of the reasons why I typed this in LaTeX: you cannot just print it out and use it). My goal in making my cheatsheet publicly available is to ease your studying and to provide a general guideline for the kinds of things that you could put on your own cheatsheet. The most helpful part of a cheatsheet is the process of making it: you synthesize and review the course material yourself and pick out what you think is important. I guarantee that there are concepts in the notes/lectures/discussions not on my cheatsheet that you should know, and I also guarantee that there are things on my cheatsheet that may be of no use to you. This is a collection of items that helped me when I took the class, and only you know what benefits you the most. My advice is to make your own cheatsheet first, without referencing mine, and then go over my cheatsheet afterward to add anything you may have missed. Good luck on your exams!

Other
• In CS 70, the naturals and the whole numbers both include 0
• Sum of a finite geometric series: a(1 − r^n)/(1 − r), where r is the ratio, a is the first term, and n is the number of terms
• Memoryless property and independence: if X, Y are both memoryless (i.e. geometric or exponential), then min(X, Y) and max(X, Y) − min(X, Y) are independent
• If S_n = Σ_{i=1}^{n} X_i, then (useful for the variance of indicators):
  E[S_n²] = Σ_{i=1}^{n} E[X_i²] + Σ_{i≠j} E[X_i X_j] = Σ_{i=1}^{n} E[X_i²] + 2 Σ_{i<j} E[X_i X_j]
• Coupon Collector Problem:
  – n distinct items, each equally likely; X_i = # of tries to get the i-th new item, given i − 1 already collected
  – S_n = Σ X_i = total tries to collect all items
  – X_i ~ Geom((n − i + 1)/n), because with i − 1 old items there are n − i + 1 new items
  – Hence E[S_n] = Σ_{i=1}^{n} E[X_i] = Σ_{i=1}^{n} n/(n − i + 1) = n Σ_{i=1}^{n} 1/i ≈ n(ln n + 0.5772)
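The closed form for the 2-state chain in Note 24 can be checked numerically; a short sketch (the values of a and b are arbitrary examples):

```python
# Verify that pi = [b/(a+b), a/(a+b)] satisfies pi P = pi and sums to 1
# for the transition matrix P = [[1-a, a], [b, 1-b]].
a, b = 0.3, 0.1
P = [[1 - a, a], [b, 1 - b]]
pi = [b / (a + b), a / (a + b)]

# compute pi P by hand (row vector times matrix)
pi_P = [pi[0] * P[0][0] + pi[1] * P[1][0],
        pi[0] * P[0][1] + pi[1] * P[1][1]]

assert all(abs(u - v) < 1e-12 for u, v in zip(pi_P, pi))  # balance equations
assert abs(sum(pi) - 1) < 1e-12                           # normalization
```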

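The coupon-collector expectation from the Other section can be computed both ways; a small sketch (the helper names are my own):

```python
import math

def expected_tries(n):
    """E[S_n] = n * (1/1 + 1/2 + ... + 1/n), the harmonic-sum form."""
    return n * sum(1 / i for i in range(1, n + 1))

def expected_tries_geom(n):
    """Same expectation via the X_i ~ Geom((n-i+1)/n) decomposition."""
    return sum(n / (n - i + 1) for i in range(1, n + 1))

# the two forms agree, and n(ln n + 0.5772) is a close approximation
assert abs(expected_tries(100) - expected_tries_geom(100)) < 1e-9
assert abs(expected_tries(100) - 100 * (math.log(100) + 0.5772)) < 1
```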
Table 1: Common Discrete Distributions

Distribution   | Parameters              | PMF P(X = k)                 | CDF P(X ≤ k)    | E[X]    | Var(X)                   | Support
Uniform        | Uniform(a, b)           | 1/(b−a+1)                    | (k−a+1)/(b−a+1) | (a+b)/2 | ((b−a+1)² − 1)/12        | {a, ..., b}
Bernoulli      | Bernoulli(p)            | p if k = 1; 1−p if k = 0     | —               | p       | p(1−p)                   | {0, 1}
Binomial       | Bin(n, p)               | C(n,k) p^k (1−p)^(n−k)       | —               | np      | np(1−p)                  | {0, 1, ..., n}
Geometric      | Geom(p)                 | p(1−p)^(k−1)                 | 1 − (1−p)^k     | 1/p     | (1−p)/p²                 | {1, 2, 3, ...}
Poisson        | Pois(λ)                 | λ^k e^(−λ) / k!              | —               | λ       | λ                        | {0, 1, 2, ...}
Hypergeometric | Hypergeometric(N, K, n) | C(K,k) C(N−K, n−k) / C(N,n)  | —               | nK/N    | nK(N−K)(N−n) / (N²(N−1)) | {0, 1, ..., n}

Table 2: Common Continuous Distributions

Distribution    | Parameters    | PDF f_X(x)                     | CDF F_X(x) = P(X ≤ x) | E[X]    | Var(X)    | Support
Uniform         | Uniform(a, b) | 1/(b−a)                        | (x−a)/(b−a)           | (a+b)/2 | (b−a)²/12 | [a, b]
Exponential     | Exp(λ)        | λ e^(−λx)                      | 1 − e^(−λx)           | 1/λ     | 1/λ²      | [0, ∞)
Normal/Gaussian | N(μ, σ²)      | (1/√(2πσ²)) exp(−(x−μ)²/(2σ²)) | Φ((x−μ)/σ)            | μ       | σ²        | ℝ
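The Geom(p) row of Table 1 can be spot-checked numerically; a short sketch (p = 0.3 is an arbitrary choice):

```python
# Check that the tabulated CDF matches partial sums of the PMF, and that
# truncated sums converge to the tabulated mean 1/p and variance (1-p)/p^2.
p = 0.3

def pmf(k):
    return p * (1 - p) ** (k - 1)

def cdf(k):
    return 1 - (1 - p) ** k

assert all(abs(cdf(k) - sum(pmf(j) for j in range(1, k + 1))) < 1e-12
           for k in range(1, 50))

mean = sum(k * pmf(k) for k in range(1, 2000))            # truncated E[X]
var = sum(k * k * pmf(k) for k in range(1, 2000)) - mean ** 2
assert abs(mean - 1 / p) < 1e-6
assert abs(var - (1 - p) / p ** 2) < 1e-4
```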
