Linear Algebra: Vectors, Matrices, and Transformations

Linear Algebra in Digital Image Processing

Lecture # 03

Linear Algebra
Outline

● Vectors
○ Operations
● Matrix
○ Operations
● Transformations
○ Scaling
○ Rotation
○ Translation
● Singular value decomposition
Vector
● Scalar: 𝑥 ∈ R
● Vector: 𝒙 ∈ Rn
○ Row vector v ∈ R1×n

𝒙 = [𝑥1 𝑥2 … 𝑥n]

● Column vector v ∈ Rn×1: the same entries written vertically

● Transpose: turns a row vector into a column vector and vice versa, (𝒙T)T = 𝒙
Vectors – use

● Store data in memory


○ Feature vectors
○ Pixel values
○ Any other data for processing

● Any point in a coordinate system


○ Can be n dimensional

● Difference between two points 𝒙 and 𝒚

[𝑥1 − 𝑦1 𝑥2 − 𝑦2 𝑥3 − 𝑦3]
Vector operations
● Norm – the size of a vector

● P-norm: ‖𝒙‖p = (|𝑥1|p + |𝑥2|p + … + |𝑥n|p)1/p

● Euclidean norm (p = 2): ‖𝒙‖2 = √(𝑥1² + 𝑥2² + … + 𝑥n²)

● L1-norm (p = 1): ‖𝒙‖1 = |𝑥1| + |𝑥2| + … + |𝑥n|
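The norms above can be checked with a short NumPy sketch (NumPy is an assumption here; the slides name no library):

```python
import numpy as np

x = np.array([3.0, -4.0])

# p-norm: ||x||_p = (sum_i |x_i|^p)^(1/p)
def p_norm(x, p):
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

euclidean = p_norm(x, 2)   # same as np.linalg.norm(x)
l1 = p_norm(x, 1)          # same as np.linalg.norm(x, 1)
```

For x = [3, −4] the Euclidean norm is 5 and the L1-norm is 7.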
Cont’d

● Inner product (dot product)


○ Scalar number
○ Multiply corresponding entries and add the results
Cont’d

● Inner product (dot product)

● A·B also equals |A||B| cos θ, where θ is the angle between A and B

● If B is a unit vector, A·B gives the (scalar) projection of A onto B
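A minimal NumPy illustration of both dot-product facts (the vectors here are made up for the example):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])   # a unit vector along the x-axis

dot = float(a @ b)         # multiply corresponding entries and add
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# since b is a unit vector, a . b is the projection of a onto b:
# here, the x-component of a
projection = dot
```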


Cont’d

● Outer product: a column vector (n×1) times a row vector (1×m) produces an n×m matrix
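A quick sketch of the outer product with NumPy (example vectors chosen for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # treated as a column vector (3x1)
v = np.array([4.0, 5.0])        # treated as a row vector (1x2)

outer = np.outer(u, v)          # 3x2 matrix: outer[i, j] = u[i] * v[j]
```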
Outline

● Vectors
○ Operations
● Matrix
○ Operations
● Transformations
○ Scaling
○ Rotation
○ Translation
● Singular value decomposition
Matrix

● Array 𝑨 ∈ R𝑚×𝑛 of numbers with shape m by n


○ m rows and n columns

● A row vector is a matrix with a single row


● A column vector is a matrix with a single column
Matrix — Use

● Image representation – grayscale


○ One number per pixel
○ Stored as n x m matrix
Cont’d

● Image representation – RGB


○ 3 numbers per pixel
○ Stored as an n x m x 3 array (one n x m matrix per color channel)
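The two storage layouts can be sketched with NumPy arrays (the pixel values below are invented for the example):

```python
import numpy as np

# a tiny 2x3 grayscale "image": one intensity (0-255) per pixel
gray = np.array([[  0, 128, 255],
                 [ 64,  32,  16]], dtype=np.uint8)

# the RGB version stores 3 numbers per pixel: shape n x m x 3
rgb = np.zeros((2, 3, 3), dtype=np.uint8)
rgb[..., 0] = gray   # e.g. copy the intensities into the red channel
```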
Matrix Operations
● Addition

● Both matrices should have the same shape, except when one operand is a scalar (applied to every entry)

● The same applies to subtraction


Cont’d

● Scaling

● Hadamard product – element-wise multiplication of two same-shaped matrices
Matrix Operations

● Matrix Multiplication
○ Multiplying an m x n matrix by an n x p matrix
○ Results in an m x p matrix
○ Complexity: O(mnp) multiplications for the naive algorithm
Cont’d

● Matrix multiplication
● Let 𝒂𝑖 denote the i-th column of the matrix 𝑨, and
● 𝒃j denote the j-th column of the matrix B

● The product is then defined column by column: the j-th column of AB is 𝑨𝒃j

Matrix Operations

● Matrix Multiplication – another interpretation (very intuitive)


● The first column of AB
○ A linear combination of all the columns in A

[𝒂1𝑏11 + 𝒂2𝑏21 + ... + 𝒂n𝑏n1]

● Similarly we can get the other columns...


Cont’d

● How about a linear combination of all the rows of B?


○ Each row of C = AB is a linear combination of the rows of B
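Both interpretations can be verified numerically with a small NumPy sketch (the matrices are example values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
C = A @ B   # (2x2)(2x2) -> 2x2

# first column of C: columns of A weighted by the first column of B
col0 = B[0, 0] * A[:, 0] + B[1, 0] * A[:, 1]

# first row of C: rows of B weighted by the first row of A
row0 = A[0, 0] * B[0, :] + A[0, 1] * B[1, :]
```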
Matrix Operations

● Transpose
Cont’d

● Determinant
○ A scalar

● For A = [[a, b], [c, d]], det(A) = ad − bc

● Its absolute value is the area of the parallelogram described by the vectors in the rows of the
matrix
Cont’d
● Determinant – Laplace expansion along row i:

det(𝑨) = 𝑎i1𝐶i1 + 𝑎i2𝐶i2 + ⋯ + 𝑎in𝐶in

where 𝐶𝑖𝑗 is the cofactor of 𝑎𝑖𝑗 defined by


𝐶𝑖𝑗 = (-1)i+j |Mij|, and
Mij is the minor of matrix 𝑨 formed by eliminating row i and column j of 𝑨
● Some Properties
○ |AB| = |A||B|
○ |AB| = |BA|
○ |AT| = |A|
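The listed properties can be spot-checked with NumPy on small example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])

det_A = np.linalg.det(A)       # ad - bc = 1*4 - 2*3 = -2
det_AB = np.linalg.det(A @ B)  # equals det(A) * det(B)
det_AT = np.linalg.det(A.T)    # equals det(A)
```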
Matrix Operations
● Trace
○ Tr(A) = Sum of diagonal elements

● Properties
○ tr(AB) = tr(BA)
○ tr(A+B) = tr(A) + tr(B)
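A short NumPy check of the trace properties (example matrices chosen arbitrarily):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])

tr_A = np.trace(A)         # sum of diagonal elements: 1 + 4 = 5
tr_AB = np.trace(A @ B)    # equals np.trace(B @ A)
tr_sum = np.trace(A + B)   # equals np.trace(A) + np.trace(B)
```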
Cont’d

● Inverse
○ Given a matrix A, its inverse A-1 is a matrix such that

AA-1 = A-1A = I

● Inverse does not always exist


○ Singular (det = 0, no inverse exists) vs. non-singular (invertible)

● Properties
○ (A-1)-1 = A
○ (AB)-1 = B-1A-1
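A minimal NumPy sketch of the inverse and its properties (non-singular example matrices assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # non-singular: det = 1
A_inv = np.linalg.inv(A)        # satisfies A A^-1 = A^-1 A = I

B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
AB_inv = np.linalg.inv(A @ B)   # equals B^-1 A^-1, in that order
```

For a singular matrix (det = 0), `np.linalg.inv` raises a `LinAlgError` instead.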
Special matrices

● Symmetric matrix
AT = A

● Skew-symmetric matrix

AT = -A
Cont’d

● Diagonal matrix
○ Used for row scaling

● Identity matrix
○ A special diagonal matrix
○ 1s along the diagonal, 0s elsewhere

IA = AI = A
Transformation - scaling
● Matrices are useful for vector transformations
● Matrix multiplication
X’ = AX

● Linear combination of columns


Transformation

● Rotation
○ Matrix multiplication to rotate a vector

● Rotation matrix
Transformation - rotation

● Rotating the axes first


● Vector v = [x y]
○ x – projection of v on the x-axis
○ y – projection of v on the y-axis

● Remember vector dot product?


Transformation - rotation

● Now we need the new x and y axes


○ x’ = v dot new x-axis
○ y’ = v dot new y-axis

● For rotation of 𝞱
○ New x-axis = [cos𝞱 , sin𝞱]
○ New y-axis = [-sin𝞱, cos𝞱]
● We can form a matrix using the new axes as its rows
Cont’d

● When we rotate the axes to the left


● We are, equivalently, rotating the vector to the right
● We can use a rotation matrix to rotate the vector itself
● We need the new x and y axis coordinates
○ Obtained when we rotate the axes to the right

● Updated rotation matrix: R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]]
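Both conventions can be sketched in NumPy: expressing a vector in rotated axes, versus rotating the vector itself (θ = 90° is an example value):

```python
import numpy as np

theta = np.pi / 2   # 90 degrees

# rows are the new axes [cos, sin] and [-sin, cos]: this expresses v in
# axes rotated to the left, so the vector appears rotated to the right
R_axes = np.array([[ np.cos(theta), np.sin(theta)],
                   [-np.sin(theta), np.cos(theta)]])

# the updated matrix rotates the vector itself to the left by theta
R_vec = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
v_in_rotated_axes = R_axes @ v   # approximately [0, -1]
v_rotated = R_vec @ v            # approximately [0, 1]
```

Note that the two matrices are transposes (inverses) of each other.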


Transformation

● Linear combination

● Sufficient for
○ Scaling
○ Rotating and
○ Skew transformations
● But not shifting (translation)
Cont’d
● Linear combination

● Still sufficient for:


○ Scaling
○ Rotating and
○ Skew transformations
Transformation

● Homogeneous system
○ Append a 1 to the vector as an extra coordinate: v = [x y 1]T

● We can also perform shifting (translation) now


● These are called homogeneous coordinates
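A minimal sketch of translation via homogeneous coordinates (the point and shift values are examples):

```python
import numpy as np

v = np.array([2.0, 3.0, 1.0])   # the point (2, 3) with a 1 appended

tx, ty = 5.0, -1.0              # desired shift
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

v_shifted = T @ v               # translation as a matrix multiplication
```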
Scaling + rotation + translation

● Careful about the order


○ V’ = (TRS)V — the scale S is applied first, then the rotation R, then the translation T
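A NumPy sketch of the TRS composition in homogeneous coordinates (the scale, angle, and shift are example values):

```python
import numpy as np

theta = np.pi / 2
S = np.diag([2.0, 2.0, 1.0])                        # scale x and y by 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([[1.0, 0.0, 3.0],                      # shift x by 3
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

v = np.array([1.0, 0.0, 1.0])

# (TRS)v applies S first, then R, then T (right to left)
v_trs = T @ R @ S @ v   # scale -> (2,0), rotate -> (0,2), shift -> (3,2)
v_srt = S @ R @ T @ v   # a different order gives a different result
```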
Linear independence

● Linear Independence
○ {𝒙1, 𝒙2, ⋯ , 𝒙𝑀} is a set of linearly independent vectors provided

𝑎1𝒙1 + 𝑎2𝒙2 + ⋯ + 𝑎𝑀𝒙𝑀 = 𝟎 ⇒ 𝑎1 = 𝑎2 = ⋯ = 𝑎𝑀 = 0

● In other words, none of the vectors can be expressed as a linear combination of
the other vectors
○ Mutually perpendicular vectors are always linearly independent, but independent vectors need not be perpendicular
○ For example, the axes of the Cartesian coordinate system
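Independence can be checked numerically via the matrix rank; a NumPy sketch with made-up example vectors:

```python
import numpy as np

# columns are linearly independent iff the rank equals the number of columns
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # independent, though not perpendicular
B = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])   # second column = 2 * first: dependent

rank_A = np.linalg.matrix_rank(A)   # 2
rank_B = np.linalg.matrix_rank(B)   # 1
```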
Intuition

● In terms of features
○ Person recognition – [height, hair color, weight, specs, eye color, etc.]

(Figure: example feature vectors, each labeled linearly dependent or linearly independent)
Matrix factorization
● Singular value decomposition (SVD)
A = UΣVT
● If A is an m x n matrix, then
○ U will be m x m,
○ Σ will be m x n, and
○ VT will be n x n

● U and V are orthogonal (unitary) matrices


○ Each column is a unit vector, and the columns are mutually orthogonal

● Σ is a diagonal matrix of singular values
Singular value decomposition

● Interpretation
A = UΣVT

● Columns of U are scaled by the values in Σ


● The resultant columns are linearly combined using the rows of VT
● A is formed as a linear combination of the columns of U
● If we use all columns, we recover the original A
● If we use just a few columns of U, we get an approximation
○ We call these columns principal components
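The decomposition and its low-rank approximation can be sketched in NumPy (a random example matrix stands in for real data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))    # example 6x4 matrix

U, s, Vt = np.linalg.svd(A)        # U: 6x6, s: singular values, Vt: 4x4
Sigma = np.zeros((6, 4))
np.fill_diagonal(Sigma, s)

reconstructed = U @ Sigma @ Vt     # using all components recovers A

k = 2                              # keep only k principal components
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation of A
```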
Application
