
Advanced Engineering Mathematics (Spring 2025)-SNU CEE Hyoseob Noh (hyoddubi1@snu.ac.kr)

F31.201 Advanced Engineering Mathematics


In-Class Material: 1
Ch. 7 in the textbook

Chapter Objectives: Matrices and Vectors

By the end of this chapter, students should be able to:

1. Perform Matrix Operations: Execute matrix addition, multiplication, and transposition, and understand the significance of these operations in civil engineering contexts (e.g. combining stiffness matrices, transforming coordinate systems).

2. Recognize and Use Special Matrices: Identify common matrix types (diagonal, symmetric, orthogonal, etc.) and know their properties and relevance in structural analysis, finite element methods, and other applications.

3. Define and Manipulate Vectors: Understand the concepts of vector addition, scalar multiplication, and the geometric interpretation of vectors in R^n. Be able to represent forces, displacements, or other engineering quantities as vectors.

4. Solve Systems of Linear Equations: Set up engineering problems in the form Ax = b, and apply row-reduction or other methods (e.g. matrix inverses, LU decomposition) to find unknowns such as forces, flows, or displacements.


Example: Matrix analysis in structural mechanics

 
\[
\begin{pmatrix} p_i \\ q_i \\ M_i \\ p_j \\ q_j \\ M_j \end{pmatrix}
=
\underbrace{\begin{pmatrix}
\frac{EA}{L} & 0 & 0 & -\frac{EA}{L} & 0 & 0 \\
0 & \frac{12EI}{L^3} & \frac{6EI}{L^2} & 0 & -\frac{12EI}{L^3} & \frac{6EI}{L^2} \\
0 & \frac{6EI}{L^2} & \frac{4EI}{L} & 0 & -\frac{6EI}{L^2} & \frac{2EI}{L} \\
-\frac{EA}{L} & 0 & 0 & \frac{EA}{L} & 0 & 0 \\
0 & -\frac{12EI}{L^3} & -\frac{6EI}{L^2} & 0 & \frac{12EI}{L^3} & -\frac{6EI}{L^2} \\
0 & \frac{6EI}{L^2} & \frac{2EI}{L} & 0 & -\frac{6EI}{L^2} & \frac{4EI}{L}
\end{pmatrix}}_{K_{\text{element}}}
\begin{pmatrix} u_i \\ v_i \\ \theta_i \\ u_j \\ v_j \\ \theta_j \end{pmatrix}.
\]

Example: water distribution or pipe flow network

\[
\underbrace{-(K_{12} + K_{23})}_{\text{coefficient of } h_2} h_2 + \underbrace{K_{23}}_{\text{coefficient of } h_3} h_3 = D_2 - K_{12} H,
\qquad
\underbrace{K_{23}}_{\text{coefficient of } h_2} h_2 - \underbrace{K_{23}}_{\text{coefficient of } h_3} h_3 = D_3.
\]
\[
\begin{pmatrix} -(K_{12} + K_{23}) & K_{23} \\ K_{23} & -K_{23} \end{pmatrix}
\begin{pmatrix} h_2 \\ h_3 \end{pmatrix}
=
\begin{pmatrix} D_2 - K_{12} H \\ D_3 \end{pmatrix}.
\]
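A minimal NumPy sketch (not part of the notes) of how such a small network system can be assembled and solved numerically; the conductances K12, K23, the fixed head H, and the demands D2, D3 below are made-up illustrative values.

```python
# Assemble and solve the 2x2 pipe-network system above with NumPy.
import numpy as np

K12, K23 = 2.0, 1.5      # hypothetical pipe conductances
H = 100.0                # hypothetical fixed head at node 1
D2, D3 = 4.0, -1.0       # hypothetical nodal demands

A = np.array([[-(K12 + K23), K23],
              [K23, -K23]])
b = np.array([D2 - K12 * H, D3])

h = np.linalg.solve(A, b)   # unknown heads h2, h3
print(h)
```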

Chapter 7

Linear Algebra – Matrices, Vectors,

Determinants, and Linear Systems

7.1 Matrices, Vectors: Addition and Scalar Multiplication

7.1.1 Basic Definitions

• Matrix: A rectangular array of numbers (or functions) enclosed in brackets.

– Entry (or element): A number (or function) in the matrix.

– Row: A horizontal line of entries.

– Column: A vertical line of entries.

• Notation and General Concepts:

– Matrices are denoted by capital boldface letters (or by specifying the general entry in brackets).

– An m × n matrix is a matrix with m rows and n columns.

– Each entry is typically denoted by a double subscript, e.g., aij where i is the row number and j is the

column number.


A general m × n matrix is written as

 
\[
A_{m \times n} =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}.
\]

7.1.2 Vectors

• A vector is a matrix with only one row or one column.

• Components: The entries of the vector.

• Vectors are denoted by lowercase boldface letters or by listing their components.

• Row Vector: A 1 × n matrix, e.g.,
\[
\mathbf{a} = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}.
\]

• Column Vector: An n × 1 matrix, e.g.,
\[
\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.
\]

7.1.3 Matrix Addition and Scalar Multiplication


Definition: Addition of Matrices

Two matrices of the same size are added by adding their corresponding entries:
\[
A + B =
\begin{pmatrix}
a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\
a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn}
\end{pmatrix}.
\]


Definition: Scalar Multiplication

The product of a matrix A and a scalar k is obtained by multiplying each entry of A by k:
\[
kA =
\begin{pmatrix}
ka_{11} & ka_{12} & \cdots & ka_{1n} \\
ka_{21} & ka_{22} & \cdots & ka_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
ka_{m1} & ka_{m2} & \cdots & ka_{mn}
\end{pmatrix}.
\]

Definition: Equality of Matrices

Two matrices A and B are equal if they have the same dimensions and all corresponding entries are equal.
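A quick NumPy illustration (not part of the notes) of entrywise addition, scalar multiplication, and equality of matrices of the same size.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)                         # entrywise sum
print(3 * A)                         # every entry multiplied by the scalar 3
print(np.array_equal(A, A.copy()))   # equality: same shape and same entries
```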

7.2 Matrix Multiplication

7.2.1 Definition
Definition: Matrix Multiplication

• Let A be an m × n matrix and B be an n × p matrix. Their product is an m × p matrix C = AB with entries
\[
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.
\]

• Note that in general, matrix multiplication is not commutative; i.e., AB ̸= BA.

Matrix multiplication is defined as follows:
\[
A_{m \times n} \cdot B_{n \times p} = C_{m \times p}.
\]
The entry c_{jk} is obtained by multiplying each entry in the jth row of A by the corresponding entry in the kth column of B and summing the products. This is called multiplication of rows into columns.

For n = 3, this is illustrated as follows:


   
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \end{pmatrix},
\qquad
B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{pmatrix},
\qquad
C = AB = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \\ c_{31} & c_{32} \\ c_{41} & c_{42} \end{pmatrix}.
\]

Example 1: Matrix Multiplication
\[
AB =
\begin{pmatrix} 3 & 5 & -1 \\ 4 & 0 & 2 \\ -6 & -3 & 2 \end{pmatrix}
\begin{pmatrix} 2 & -2 & 3 & 1 \\ 5 & 0 & 7 & 8 \\ 9 & -4 & 1 & 1 \end{pmatrix}
=
\begin{pmatrix} 22 & -2 & 43 & 42 \\ 26 & -16 & 14 & 6 \\ -9 & 4 & -37 & -28 \end{pmatrix}
\]

Example 2: Multiplication of a Matrix and a Vector
\[
\begin{pmatrix} 4 & 2 \\ 1 & 8 \end{pmatrix}
\begin{pmatrix} 3 \\ 5 \end{pmatrix}
=
\begin{pmatrix} 4 \cdot 3 + 2 \cdot 5 \\ 1 \cdot 3 + 8 \cdot 5 \end{pmatrix}
=
\begin{pmatrix} 22 \\ 43 \end{pmatrix}
\]

Example 3: Products of Row and Column Vectors
\[
\begin{pmatrix} 3 & 6 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}
= \begin{pmatrix} 19 \end{pmatrix},
\qquad
\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}
\begin{pmatrix} 3 & 6 & 1 \end{pmatrix}
=
\begin{pmatrix} 3 & 6 & 1 \\ 6 & 12 & 2 \\ 12 & 24 & 4 \end{pmatrix}
\]

Example 4: Matrix Multiplication is Not Commutative

In general, AB ≠ BA:
\[
\begin{pmatrix} 1 & 1 \\ 100 & 100 \end{pmatrix}
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
=
\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
\]
but
\[
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 100 & 100 \end{pmatrix}
=
\begin{pmatrix} 99 & 99 \\ -99 & -99 \end{pmatrix}.
\]


Thus, AB = 0 does not necessarily imply BA = 0 or A = 0 or B = 0.
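A short NumPy check (illustrative, not from the notes) of Example 4: AB is the zero matrix even though neither factor is zero, and BA is a different, nonzero matrix.

```python
import numpy as np

A = np.array([[1, 1], [100, 100]])
B = np.array([[-1, 1], [1, -1]])

print(A @ B)   # [[0 0], [0 0]]
print(B @ A)   # [[99 99], [-99 -99]]
```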

7.2.2 Properties

• Associative Law: (AB)C = A(BC).

• Distributive Law: A(B + C) = AB + AC and (A + B)C = AC + BC.

• Scalar Multiplication: (kA)B = k(AB) = A(kB).

7.2.3 Transposition
Definition: Matrix Transposition

• The transpose of a matrix A, denoted by A^T, is obtained by interchanging the rows and columns:
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\quad \Longrightarrow \quad
A^T = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix}.
\]

• Properties of Transposition:

– (A^T)^T = A.

– (cA)^T = cA^T.

– (A + B)^T = A^T + B^T.

– (AB)^T = B^T A^T.
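A brief NumPy check (illustrative, not part of the notes) of the last two transposition rules.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(2, 3))
C = rng.integers(-5, 5, size=(3, 2))

print(np.array_equal((A + B).T, A.T + B.T))   # True
print(np.array_equal((A @ C).T, C.T @ A.T))   # True
```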

7.2.4 Special Matrices

• Square Matrix: A matrix with the same number of rows and columns.

 
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\]


• Main Diagonal: The diagonal containing the entries a11 , a22 , . . . , ann .

Main Diagonal of A : {a11 , a22 , . . . , ann }.

• Symmetric Matrix: A square matrix satisfying AT = A.

 
\[
A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{12} & a_{22} & a_{23} \\
a_{13} & a_{23} & a_{33}
\end{pmatrix}.
\]

• Skew-symmetric Matrix: A square matrix satisfying AT = −A.

 
\[
A = \begin{pmatrix}
0 & a_{12} & a_{13} \\
-a_{12} & 0 & a_{23} \\
-a_{13} & -a_{23} & 0
\end{pmatrix}.
\]

• Triangular Matrices:

– Upper Triangular Matrix: All entries below the main diagonal are zero.
\[
A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
0 & a_{22} & a_{23} \\
0 & 0 & a_{33}
\end{pmatrix}.
\]

– Lower Triangular Matrix: All entries above the main diagonal are zero.
\[
A = \begin{pmatrix}
a_{11} & 0 & 0 \\
a_{21} & a_{22} & 0 \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}.
\]


• Diagonal Matrix: A square matrix in which all off-diagonal entries are zero.
\[
A = \begin{pmatrix}
a_{11} & 0 & 0 \\
0 & a_{22} & 0 \\
0 & 0 & a_{33}
\end{pmatrix}.
\]

• Scalar Matrix: A diagonal matrix with all diagonal entries equal.
\[
A = \begin{pmatrix}
\lambda & 0 & 0 \\
0 & \lambda & 0 \\
0 & 0 & \lambda
\end{pmatrix}.
\]

• Identity Matrix (Unit Matrix): A scalar matrix with diagonal entries all equal to 1.
\[
I = \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}.
\]

7.3 Linear Systems of Equations: Gauss Elimination

7.3.1 Matrix Representation

• A linear system of m equations in n unknowns can be written in matrix form as

Ax = b,

where A is the coefficient matrix, x the solution vector, and b the constant vector.

• A homogeneous system is one where b = 0.

• A nonhomogeneous system has at least one nonzero entry in b.


A linear system of m equations in n unknowns:
\[
\begin{cases}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1, \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2, \\
\quad \vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m.
\end{cases}
\]

Matrix Form of the Linear System: Ax = b
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix},
\qquad
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},
\qquad
b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}.
\]

Augmented matrix:
\[
[A \mid b] =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{pmatrix}
\]

Homogeneous System: all b_j = 0.

Nonhomogeneous System: at least one b_j ≠ 0.

Example: Consider the system of equations
\[
\begin{cases}
2x + 3y = 5, \\
4x - y = 7.
\end{cases}
\]
This system can be expressed in matrix form as:
\[
\begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} 5 \\ 7 \end{pmatrix}.
\]


7.3.2 Gauss Elimination Process (3×3 Example)

Consider the system of equations:
\[
\begin{cases}
x + 2y - z = 3, \\
2x + y + z = 1, \\
3x + 4y + 2z = 7.
\end{cases}
\]

This system can be written in augmented matrix form as:
\[
\begin{pmatrix} 1 & 2 & -1 & 3 \\ 2 & 1 & 1 & 1 \\ 3 & 4 & 2 & 7 \end{pmatrix}.
\]

1. Forward Elimination: Eliminate variables to obtain an upper triangular (row echelon) form.

Step 1: Eliminate x from the second and third equations using the first row.

Row 2: Replace with Row 2 − 2 × Row 1:
\[
\begin{pmatrix} 2 - 2(1) & 1 - 2(2) & 1 - 2(-1) & 1 - 2(3) \end{pmatrix}
= \begin{pmatrix} 0 & -3 & 3 & -5 \end{pmatrix}
\]

Row 3: Replace with Row 3 − 3 × Row 1:
\[
\begin{pmatrix} 3 - 3(1) & 4 - 3(2) & 2 - 3(-1) & 7 - 3(3) \end{pmatrix}
= \begin{pmatrix} 0 & -2 & 5 & -2 \end{pmatrix}
\]

The augmented matrix now becomes:
\[
\begin{pmatrix} 1 & 2 & -1 & 3 \\ 0 & -3 & 3 & -5 \\ 0 & -2 & 5 & -2 \end{pmatrix}.
\]


Step 2: Eliminate y from Row 3 using Row 2. Compute the factor:
\[
\text{Factor} = \frac{-2}{-3} = \frac{2}{3}.
\]

Replace Row 3 with Row 3 − (2/3) × Row 2:
\[
\begin{pmatrix} 0 & -2 - \tfrac{2}{3}(-3) & 5 - \tfrac{2}{3}(3) & -2 - \tfrac{2}{3}(-5) \end{pmatrix}
= \begin{pmatrix} 0 & -2 + 2 & 5 - 2 & -2 + \tfrac{10}{3} \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 3 & \tfrac{4}{3} \end{pmatrix}.
\]

The matrix in row echelon form is:
\[
\begin{pmatrix} 1 & 2 & -1 & 3 \\ 0 & -3 & 3 & -5 \\ 0 & 0 & 3 & \tfrac{4}{3} \end{pmatrix}.
\]

2. Back Substitution: Solve for the variables starting from the last equation.

Step 1: From the third row:
\[
3z = \frac{4}{3} \quad \Longrightarrow \quad z = \frac{4}{9}.
\]

Step 2: Substitute z = 4/9 into the second row:
\[
-3y + 3z = -5 \quad \Longrightarrow \quad -3y + 3\left(\frac{4}{9}\right) = -5.
\]
Simplify:
\[
-3y + \frac{4}{3} = -5 \quad \Longrightarrow \quad -3y = -5 - \frac{4}{3} = -\frac{15}{3} - \frac{4}{3} = -\frac{19}{3},
\qquad
y = \frac{19}{9}.
\]


Step 3: Substitute y = 19/9 and z = 4/9 into the first row:
\[
x + 2y - z = 3 \quad \Longrightarrow \quad x + 2\left(\frac{19}{9}\right) - \frac{4}{9} = 3.
\]
Simplify:
\[
x + \frac{38}{9} - \frac{4}{9} = 3 \quad \Longrightarrow \quad x + \frac{34}{9} = 3,
\qquad
x = 3 - \frac{34}{9} = \frac{27}{9} - \frac{34}{9} = -\frac{7}{9}.
\]

Thus, the solution is:
\[
x = -\frac{7}{9}, \qquad y = \frac{19}{9}, \qquad z = \frac{4}{9}.
\]
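A quick NumPy cross-check (illustrative, not part of the notes) of the Gauss-elimination example above; np.linalg.solve should reproduce x = −7/9, y = 19/9, z = 4/9.

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 1.0,  1.0],
              [3.0, 4.0,  2.0]])
b = np.array([3.0, 1.0, 7.0])

x = np.linalg.solve(A, b)
print(x)        # approximately [-0.7778  2.1111  0.4444]
print(x * 9)    # [-7. 19.  4.] up to rounding
```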

Comparison of Elementary Operations for Equations and Matrices

Elementary Operations for Equations:
- Interchange of two equations
- Addition of a constant multiple of one equation to another equation
- Multiplication of an equation by a nonzero constant

Elementary Row Operations for Matrices:
- Interchange of two rows
- Addition of a constant multiple of one row to another row
- Multiplication of a row by a nonzero constant

Note: These operations apply to rows (not columns) and preserve the solution set.

7.3.3 Row-Equivalent Systems


Theorem 1: Row-Equivalent Systems

Row-equivalent linear systems have the same set of solutions.

• A linear system S1 is row-equivalent to a linear system S2 if S2 can be obtained from S1 by a sequence of

row operations.

• Two linear systems are row-equivalent if one can be obtained from the other by a sequence of elementary

row operations.


Example: Row-Equivalent Systems

Consider the linear system S1:
\[
\begin{cases}
x + y = 3, \\
2x + 3y = 8.
\end{cases}
\]

We perform an elementary row operation on the second equation:

(Row 2) ←− (Row 2) − 2 × (Row 1).

Concretely,

2x + 3y − 2(x + y) = 8 − 2 × 3,

which simplifies to

2x + 3y − 2x − 2y = 8 − 6 =⇒ y = 2.

Hence, the transformed system S2 is:
\[
\begin{cases}
x + y = 3, \\
y = 2.
\end{cases}
\]

Since S2 is obtained from S1 by an elementary row operation, the two systems are row-equivalent and share

the same solution set.

7.3.4 Types of Solutions

A system of linear equations may have:

• A unique solution (r = n, where r is the rank of the coefficient matrix).

• Infinitely many solutions (r < n).

• No solution (if the augmented matrix has a row that leads to an inconsistency, e.g., 0 = c, with c ̸= 0).


Figure 7.1: Three cases of linear system solutions

Figure 7.2: Three equations in three unknowns interpreted as planes in space

7.3.5 Examples

• Example 1: Gauss elimination with a unique solution.

• Example 2: Gauss elimination leading to infinitely many solutions.

• Example 3: Gauss elimination showing that no solution exists.


Example 2: Gauss Elimination Leading to Infinitely Many Solutions

Ex.3 Solve the following linear system of three equations in four unknowns whose augmented matrix is:

 
 3.0 2.0 2.0 −5.0 8.0 
 
 
.
 0.6 1.5 1.5 5.4 2.7 

 
 
1.2 0.3 0.3 2.4 2.1

This corresponds to the system
\[
\begin{cases}
3.0\, x_1 + 2.0\, x_2 + 2.0\, x_3 - 5.0\, x_4 = 8.0, \\
0.6\, x_1 + 1.5\, x_2 + 1.5\, x_3 - 5.4\, x_4 = 2.7, \\
1.2\, x_1 - 0.3\, x_2 - 0.3\, x_3 + 2.4\, x_4 = 2.1.
\end{cases}
\]

Step 1 Elimination of x1:

• Multiply the first equation by −0.6/3.0 = −0.2 and add it to the second equation to eliminate x1 there.

• Multiply the first equation by −1.2/3.0 = −0.4 and add it to the third equation to eliminate x1 there.

After these operations, the new augmented matrix becomes:
\[
\begin{pmatrix}
3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\
0 & 1.1 & 1.1 & -4.4 & 1.1 \\
0 & -1.1 & -1.1 & 4.4 & -1.1
\end{pmatrix}.
\]

Step 2 Elimination of x2:

• Use the second row to eliminate x2 from the third row.

• Here the required factor is −(−1.1)/1.1 = 1, i.e., simply add Row 2 to Row 3.

• Perform the row operation on Row 3.


You obtain:
\[
\begin{pmatrix}
3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\
0 & 1.1 & 1.1 & -4.4 & 1.1 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix},
\]
which indicates that one entire row reduces to zeros on the left (and also zero on the right), so there is no contradiction.

Step 3 Substitution:

From the second equation (in its simplified form), you find
\[
x_2 = 1 - x_3 + 4 x_4.
\]
Then using x_2 in the first equation yields
\[
x_1 = 2 - x_4.
\]
Because x_3 and x_4 do not get pinned down, they are free variables, and we thus have infinitely many solutions. Concretely,
\[
\begin{cases}
x_1 = 2 - x_4, \\
x_2 = 1 - x_3 + 4 x_4, \\
x_3 = x_3, \\
x_4 = x_4,
\end{cases}
\]
where x_3, x_4 are arbitrary real parameters.

Conclusion: Since two variables remain free, the system has infinitely many solutions.

Example 3: Gauss Elimination Showing No Solution

Ex.4: Consider the system
\[
\begin{cases}
3x_1 + 2x_2 + x_3 = 3, \\
2x_1 + x_2 + x_3 = 0, \\
6x_1 + 2x_2 + 4x_3 = 6.
\end{cases}
\]


In augmented form:
\[
\begin{pmatrix} 3 & 2 & 1 & 3 \\ 2 & 1 & 1 & 0 \\ 6 & 2 & 4 & 6 \end{pmatrix}.
\]

Step 1 Elimination of x1:
\[
\text{Row 2} \leftarrow \text{Row 2} - \tfrac{2}{3}\,\text{Row 1},
\qquad
\text{Row 3} \leftarrow \text{Row 3} - 2\,\text{Row 1}.
\]
(This removes x1 from Rows 2 and 3.)

Step 2 Elimination of x2 : Use the new Row 2 to eliminate x2 from Row 3. If at some point you obtain an

equation that simplifies to

0 = 12,

this is a contradiction.

Conclusion: Because 0 = 12 is false, the system is inconsistent and has no solution.

7.3.6 Row Echelon Form and Information From It

Row Echelon Form:

At the end of the Gauss elimination process, the coefficient matrix (and, if applicable, the augmented matrix) is

transformed into a row echelon form. In this form, each leading (nonzero) entry in a row is to the right of the

leading entry in the row above it, and any all-zero rows (if present) appear at the bottom.

   
\[
\underbrace{\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1r} & \cdots & a_{1n} \\
0 & a_{22} & \cdots & a_{2r} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
0 & 0 & \cdots & a_{rr} & \cdots & a_{rn} \\
\vdots & \vdots & & \vdots & & \vdots \\
0 & 0 & \cdots & 0 & \cdots & 0
\end{pmatrix}}_{\text{Row echelon form of } A}
\qquad
\underbrace{\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_r \\ \vdots \\ b_m \end{pmatrix}}_{\text{Constants}}
\]

If A is the coefficient matrix of an m × n system, its rank is the number of nonzero rows in this echelon form

(denoted r). Let b be the vector of constants in the augmented matrix [A | b]. Then:


• Exactly one solution: If r = n and the extra rows (if any) in the augmented matrix have zero entries on

the left and also zero on the right (i.e., br+1 , . . . , bm = 0), the system has a unique solution.

• Infinitely many solutions: If r < n and the extra rows (if any) still have zero entries on the left and zero

on the right, then the system has infinitely many solutions (due to the presence of free variables).

• No solution: If r < m but one of the corresponding constants br+1 , . . . , bm is not zero, then the system is

inconsistent, hence no solution.

These three cases cover all possible outcomes once the system is in row echelon form.
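An illustrative helper (not from the notes) that classifies Ax = b by comparing rank(A), rank([A | b]), and the number of unknowns n, mirroring the three cases above.

```python
import numpy as np

def classify_system(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.hstack([A, b]))
    if r != r_aug:
        return "no solution"
    return "unique solution" if r == n else "infinitely many solutions"

print(classify_system([[1, 2], [2, 4]], [4, 8]))   # infinitely many solutions
print(classify_system([[1, 2], [2, 4]], [4, 9]))   # no solution
print(classify_system([[2, 3], [4, -1]], [5, 7]))  # unique solution
```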

7.4 Linear Independence, Rank of a Matrix, and Vector Spaces

7.4.1 Linear Independence

A set of vectors {v1 , v2 , . . . , vm } is said to be linearly independent if the vector equation

c1 v1 + c2 v2 + · · · + cm vm = 0

has only the trivial solution c1 = c2 = · · · = cm = 0. Otherwise, the vectors are linearly dependent.

Geometric Interpretation

In R2 , two vectors are linearly independent if they are not collinear. In R3 , three vectors are linearly

independent if they do not lie in the same plane.

Example: Determine if the following vectors are linearly independent:
\[
v_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix},
\qquad
v_2 = \begin{pmatrix} 3 \\ 4 \end{pmatrix}.
\]
Solve for c1 and c2 in c1 v1 + c2 v2 = 0:
\[
\begin{cases}
c_1 + 3c_2 = 0, \\
2c_1 + 4c_2 = 0.
\end{cases}
\]

Only the trivial solution exists, so they are linearly independent.


7.4.2 Rank of a Matrix


Definition: Rank
The rank of a matrix is the maximum number of linearly independent row or column vectors.

Theorem 1: Row-Equivalent Matrix

Row-equivalent matrices have the same rank.

Theorem 2: Linear Independence and Dependence of Vectors

p vectors with n components each are linearly independent if the matrix with these vectors as row vectors

has rank p, but they are linearly dependent if that rank is less than p.

Theorem 3: Rank in Terms of Column Vectors


The rank r equals the maximum number of linearly independent column vectors. Hence the matrix and

its transpose have the same rank.

Theorem 4: Linear Dependence of Vectors

p vectors with n < p components are always linearly dependent.

Example: Find the rank of
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & -1 & 0 \end{pmatrix}.
\]

Row operations yield two nonzero rows, so rank(A) = 2.
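A short NumPy check (illustrative, not from the notes): the ranks of A and of its transpose agree, consistent with Theorem 3 above.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, -1, 0]])

print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(A.T))  # 2
```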

7.4.3 Vector Spaces

A vector space V over R is a set with two operations: vector addition and scalar multiplication, satisfying:

• Closure under addition and scalar multiplication.

• Existence of zero vector and additive inverses.

• Associativity and commutativity of addition.

• Distributive properties.


Definition: Basis, Dimension, Span, and Subspace

• Dimension: The number of vectors in a basis of V.

• Basis: A set of linearly independent vectors that span V.

• Span: The set of all linear combinations of given vectors (with the same number of components).

• Subspace: A nonempty subset that itself forms a vector space with respect to the two algebraic operations (addition and scalar multiplication) defined for the vectors.

Example: Span. For vectors v1, . . . , vk,
\[
\operatorname{span}\{v_1, \ldots, v_k\} = \{\, c_1 v_1 + \cdots + c_k v_k \mid c_1, \ldots, c_k \in \mathbb{R} \,\}.
\]

Example: Basis. The standard basis for R^3 is
\[
\{ e_1 = (1, 0, 0),\; e_2 = (0, 1, 0),\; e_3 = (0, 0, 1) \}.
\]

Theorem 1 Let {v1 , . . . , vk } be any finite set of vectors in a vector space V . Then span{v1 , . . . , vk } is a subspace

of V .

Proof ) We must check the three subspace properties for W = span{v1 , . . . , vk }:

• Zero vector in W : If we take all scalars ci = 0, then 0 v1 + · · · + 0 vk = 0. Thus 0 ∈ W .

• Closed under addition: Let w1 , w2 ∈ W . Then w1 = a1 v1 + · · · + ak vk and w2 = b1 v1 + · · · + bk vk

for some scalars ai , bi . Hence

w1 + w2 = (a1 + b1 ) v1 + · · · + (ak + bk ) vk ,

which is again a linear combination of the same vectors v1 , . . . , vk . Therefore w1 + w2 ∈ W .

• Closed under scalar multiplication: Let w ∈ W and let α be any scalar. Then w = c1 v1 + · · · + ck vk for some scalars ci. Thus
\[
\alpha w = \alpha (c_1 v_1 + \cdots + c_k v_k) = (\alpha c_1) v_1 + \cdots + (\alpha c_k) v_k,
\]


which is again in W .

Hence W satisfies all three conditions and is a subspace.

Theorem 5: Vector Space Rn

The vector space Rn consisting of all vectors with n components ( n real numbers ) has dimension n.

Theorem 6: Row Space and Column Space

The row space and the column space of a matrix A have the same dimension, equal to rank A.

• Null Space of A: the solution set of the homogeneous system Ax = 0.

• Nullity of A: the dimension of the null space. Rank–nullity relation: rank A + nullity A = number of columns of A.

Important Properties

• If rank(A) = n for an n × n matrix, A is invertible.

• If rank(A) < n, the system Ax = b has either no solution or infinitely many solutions.

7.5 Solutions of Linear Systems: Existence and Uniqueness


Theorem 1: Fundamental Theorem for Linear Systems

• Existence: A linear system of m equations in n unknowns is consistent, that is, has solutions, if and

only if the coefficient matrix and the augmented matrix have the same rank.

• Uniqueness: The linear system has precisely one solution if and only if this common rank r of the

coefficient matrix and the augmented matrix equals n.

• Infinitely Many Solutions: If this common rank r is less than n, the system has infinitely many

solutions. All of these solutions are obtained by determining r suitable unknowns (whose submatrix

of coefficients must have rank r ) in terms of the remaining n–r unknowns, to which arbitrary values

can be assigned.

• Gauss Elimination: If solutions exist, they can all be obtained by the Gauss elimination. (This

method will automatically reveal whether or not solutions exist)


7.5.1 Existence and Uniqueness of Solutions

A fundamental question when dealing with a linear system Ax = b is whether solutions exist, and if they

do, whether they are unique. This is determined by examining the ranks of the coefficient matrix A and the

augmented matrix [A | b].

• A solution exists if and only if:

rank(A) = rank([A | b]).

• If a solution exists, it is unique if:

rank(A) = rank([A | b]) = n,

where n is the number of unknowns.

• If rank(A) = rank([A | b]) < n, the system has infinitely many solutions.

• If rank(A) ̸= rank([A | b]), the system has no solution and is inconsistent.

7.5.2 Geometric Interpretation

For systems of two or three equations, geometric interpretations help illustrate the existence and uniqueness of

solutions:

• Two linear equations in two unknowns represent lines in a plane:

– One solution: The lines intersect at a single point.

– Infinite solutions: The lines coincide.

– No solution: The lines are parallel and distinct.

• Three linear equations in three unknowns represent planes in space:

– One solution: The planes intersect at a single point.

– Infinite solutions: The planes intersect along a line or coincide entirely.

– No solution: The planes do not share any common point.


7.5.3 Homogeneous Systems


Theorem 2: Homogeneous Linear System

• A homogeneous linear system of m equations in n unknowns always has the trivial solution.

• Nontrivial solutions exist if and only if rank A < n.

• If rank A = r < n, these solutions, together with x = 0, form a vector space of dimension n–r,

called the solution space.

• Linear combination of two solution vectors of the homogeneous linear system is a solution vector.

Theorem 3: Homogeneous Linear System with Fewer Equations Than Unknowns

A homogeneous linear system with fewer equations than unknowns always has nontrivial solutions.

A homogeneous system has the form:

Ax = 0.

• The system is always consistent, with at least the trivial solution x = 0.

• Nontrivial solutions (solutions other than the trivial one) exist if and only if:

rank(A) < n.

• The set of all solutions to a homogeneous system forms a vector space known as the null space of A.

Example: Consider the system:
\[
\begin{cases}
x + y + z = 0, \\
2x + 3y + 4z = 0, \\
3x + 4y + 5z = 0.
\end{cases}
\]
The coefficient matrix is:
\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}.
\]

Performing row operations, if the rank is less than 3, nontrivial solutions exist.
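An illustrative NumPy computation (not from the notes): here rank(A) = 2 < 3, so nontrivial solutions of Ax = 0 exist, and a null-space basis can be read off from the right singular vectors that correspond to (numerically) zero singular values.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 3.0, 4.0],
              [3.0, 4.0, 5.0]])

print(np.linalg.matrix_rank(A))   # 2

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_vectors = Vt[rank:]          # rows spanning the null space
x = null_vectors[0]
print(x)                          # proportional to (1, -2, 1)
print(A @ x)                      # approximately the zero vector
```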


7.5.4 Nonhomogeneous Systems


Theorem 4: Nonhomogeneous Linear System

If a nonhomogeneous linear system is consistent, then all of its solutions are obtained as x = x0 + xh

where x0 is any solution of the nonhomogeneous linear system and xh runs through all the solutions of

the corresponding homogeneous system.

A nonhomogeneous system has the form:

Ax = b, b ̸= 0.

• The system has a solution if and only if rank(A) = rank([A | b]).

• If the rank equals the number of unknowns, the solution is unique.

• If the rank is less than the number of unknowns, there are infinitely many solutions.

Example: Solve the system:
\[
\begin{cases}
x + 2y = 4, \\
2x + 4y = 8.
\end{cases}
\]
The augmented matrix is:
\[
\begin{pmatrix} 1 & 2 & 4 \\ 2 & 4 & 8 \end{pmatrix}.
\]

Here, rank(A) = rank([A | b]) = 1 but n = 2, so the system has infinitely many solutions.

7.5.5 Summary of Solution Conditions


Solution Conditions

• rank(A) = rank([A | b]) = n: Unique solution.

• rank(A) = rank([A | b]) < n: Infinitely many solutions.

• rank(A) ̸= rank([A | b]): No solution.


7.6 Second- and Third-Order Determinants and Cramer’s Rule

7.6.1 Determinants of Second Order

The determinant of a 2 × 2 matrix
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\]
is defined as:
\[
\det(A) = a_{11} a_{22} - a_{12} a_{21}.
\]
Geometrically, its absolute value is the area of the parallelogram spanned by the row (or column) vectors of the matrix.

Example: For
\[
A = \begin{pmatrix} 3 & 2 \\ 5 & 4 \end{pmatrix},
\qquad
\det(A) = 3 \times 4 - 2 \times 5 = 12 - 10 = 2.
\]

7.6.2 Determinants of Third Order

For a 3 × 3 matrix
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},
\]
the determinant is computed by expansion along the first row:
\[
\det(A) = a_{11} \det\begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix}
- a_{12} \det\begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix}
+ a_{13} \det\begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}.
\]
This method, known as cofactor expansion, can also be applied along any row or column.

Example: Let
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 6 \end{pmatrix}.
\]


Then:
\[
\det(A) = 1 \det\begin{pmatrix} 4 & 5 \\ 0 & 6 \end{pmatrix}
- 2 \det\begin{pmatrix} 0 & 5 \\ 1 & 6 \end{pmatrix}
+ 3 \det\begin{pmatrix} 0 & 4 \\ 1 & 0 \end{pmatrix}
= 1(24 - 0) - 2(0 - 5) + 3(0 - 4) = 24 + 10 - 12 = 22.
\]

7.6.3 Cramer’s Rule

Given a system of linear equations Ax = b where A is an n × n matrix with det(A) ≠ 0, the solution for the kth unknown is:
\[
x_k = \frac{\det(A_k)}{\det(A)},
\]
where A_k is the matrix obtained by replacing the kth column of A with the vector b.

Example: Solve the system:
\[
\begin{cases}
2x + 3y = 5, \\
4x - y = 7.
\end{cases}
\]
Here,
\[
A = \begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix},
\qquad
b = \begin{pmatrix} 5 \\ 7 \end{pmatrix}.
\]
Compute determinants:
\[
\det(A) = 2(-1) - 3(4) = -2 - 12 = -14.
\]
Replace the first column with b:
\[
A_1 = \begin{pmatrix} 5 & 3 \\ 7 & -1 \end{pmatrix},
\qquad
\det(A_1) = 5(-1) - 3(7) = -5 - 21 = -26.
\]
Thus,
\[
x = \frac{-26}{-14} = \frac{13}{7}.
\]
Similarly, for y:
\[
A_2 = \begin{pmatrix} 2 & 5 \\ 4 & 7 \end{pmatrix},
\qquad
\det(A_2) = 2(7) - 5(4) = 14 - 20 = -6.
\]


\[
y = \frac{-6}{-14} = \frac{3}{7}.
\]

Solution: (x, y) = \left(\dfrac{13}{7}, \dfrac{3}{7}\right).
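A minimal Cramer's-rule sketch in NumPy (illustrative, not from the notes): replace column k of A with b and take the ratio of determinants. This is fine for small demonstrations; Gauss elimination is preferred for larger systems.

```python
import numpy as np

def cramer_solve(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    if abs(detA) < 1e-12:
        raise ValueError("det(A) is (numerically) zero; Cramer's rule does not apply")
    x = np.empty(A.shape[1])
    for k in range(A.shape[1]):
        Ak = A.copy()
        Ak[:, k] = b                      # replace the kth column by b
        x[k] = np.linalg.det(Ak) / detA
    return x

print(cramer_solve([[2, 3], [4, -1]], [5, 7]))   # [1.857..., 0.4285...] = (13/7, 3/7)
```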

Example: Solving a 3 × 3 System Using Cramer’s Rule

Consider the system of equations:
\[
\begin{cases}
x + y + z = 6, \\
2x - y + z = 3, \\
x + 2y - z = 3.
\end{cases}
\]
Its coefficient matrix and constant vector are:
\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & -1 & 1 \\ 1 & 2 & -1 \end{pmatrix}
\qquad \text{and} \qquad
b = \begin{pmatrix} 6 \\ 3 \\ 3 \end{pmatrix}.
\]

Step 1: Compute det A
\[
\det(A) = 1 \begin{vmatrix} -1 & 1 \\ 2 & -1 \end{vmatrix}
- 1 \begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix}
+ 1 \begin{vmatrix} 2 & -1 \\ 1 & 2 \end{vmatrix}.
\]
Compute each minor:
\[
\begin{vmatrix} -1 & 1 \\ 2 & -1 \end{vmatrix} = (-1)(-1) - (1)(2) = 1 - 2 = -1,
\qquad
\begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix} = (2)(-1) - (1)(1) = -2 - 1 = -3,
\qquad
\begin{vmatrix} 2 & -1 \\ 1 & 2 \end{vmatrix} = (2)(2) - (-1)(1) = 4 + 1 = 5.
\]
Thus:
\[
\det(A) = (1)(-1) - (1)(-3) + (1)(5) = -1 + 3 + 5 = 7.
\]

Step 2: Compute det A1, det A2, and det A3 by replacing each column of A with b:


     
6 1 1 1 6 1 1 1 6
     
     
A1 = 
3 −1 ,
1 A2 = 
2 3 1, A3 = 
2 −1 .
3
     
     
3 2 −1 1 3 −1 1 2 3

Compute det A1 :
−1 1 3 1 3 −1
det(A1 ) = 6 −1 +1 .
2 −1 3 −1 3 2

Calculate each minor:

−1 1 3 1 3 −1
= −1, = (3)(−1) − (1)(3) = −6, = (3)(2) − (−1)(3) = 9.
2 −1 3 −1 3 2

So:

det(A1 ) = 6(−1) − 1(−6) + 1(9) = −6 + 6 + 9 = 9.

Compute det A2 :
3 1 2 1 2 3
det(A2 ) = 1 −6 +1 .
3 −1 1 −1 1 3

Minors:
3 1 2 1 2 3
= −6, = −3, = 3.
3 −1 1 −1 1 3

Thus:

det(A2 ) = 1(−6) − 6(−3) + 1(3) = −6 + 18 + 3 = 15.

Compute det A3 :
−1 3 2 3 2 −1
det(A3 ) = 1 −1 +6 .
2 3 1 3 1 2

Minors:
−1 3 2 3 2 −1
= −9, = 3, = 5.
2 3 1 3 1 2

Hence:

det(A3 ) = 1(−9) − 1(3) + 6(5) = −9 − 3 + 30 = 18.


Step 3: Compute the solution using Cramer’s Rule
\[
x = \frac{\det(A_1)}{\det(A)} = \frac{9}{7},
\qquad
y = \frac{\det(A_2)}{\det(A)} = \frac{15}{7},
\qquad
z = \frac{\det(A_3)}{\det(A)} = \frac{18}{7}.
\]

Final Answer:
\[
x = \frac{9}{7}, \qquad y = \frac{15}{7}, \qquad z = \frac{18}{7}.
\]


7.7 Determinants and Cramer’s Rule

7.7.1 Definition of the Determinant

For a square matrix A of order n, the determinant is a scalar value associated with the matrix and is denoted

by det(A) or |A|. The determinant provides important information about the matrix, such as whether it is

invertible and the volume scaling factor of the linear transformation represented by A.

• Determinant of Order n:
\[
D = \det(A) =
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix}
\]

• If n = 1 then D = a_{11}.

• If n ≥ 2 then
\[
D = a_{j1} C_{j1} + a_{j2} C_{j2} + \cdots + a_{jn} C_{jn} \qquad (j = 1, 2, \ldots, n)
\]
or
\[
D = a_{1k} C_{1k} + a_{2k} C_{2k} + \cdots + a_{nk} C_{nk} \qquad (k = 1, 2, \ldots, n).
\]

• The minor M_{jk} of a_{jk} in D: the determinant of order n − 1 obtained from D by deleting the row and column of a_{jk}.


• The cofactor C_{jk} of a_{jk} in D:
\[
C_{jk} = (-1)^{j+k} M_{jk}.
\]

7.7.2 Determinants of Second and Third Order

Second-Order Determinant

For a 2 × 2 matrix:  
a11 a12 
A=

,
 det(A) = a11 a22 − a12 a21 .
a21 a22

Example: Evaluate the determinant of  


3 4
 .
 
2 5

Solution:  
3 4
det   = (3)(5) − (4)(2) = 15 − 8 = 7.
 
2 5

Third-Order Determinant

For a 3 × 3 matrix:
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},
\]
its determinant is given by the cofactor expansion along the first row:
\[
\det(A) = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
+ a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.
\]


Example: Compute the determinant of
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & -1 & 4 \\ 5 & 0 & 2 \end{pmatrix}.
\]
Solution:
\[
\det(A) = 1 \begin{vmatrix} -1 & 4 \\ 0 & 2 \end{vmatrix}
- 2 \begin{vmatrix} 0 & 4 \\ 5 & 2 \end{vmatrix}
+ 3 \begin{vmatrix} 0 & -1 \\ 5 & 0 \end{vmatrix}.
\]
Compute each minor:
\[
\begin{vmatrix} -1 & 4 \\ 0 & 2 \end{vmatrix} = (-1)(2) - (4)(0) = -2,
\qquad
\begin{vmatrix} 0 & 4 \\ 5 & 2 \end{vmatrix} = (0)(2) - (4)(5) = -20,
\qquad
\begin{vmatrix} 0 & -1 \\ 5 & 0 \end{vmatrix} = (0)(0) - (-1)(5) = 5.
\]
Substitute back:
\[
\det(A) = 1(-2) - 2(-20) + 3(5) = -2 + 40 + 15 = 53.
\]
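An illustrative recursive cofactor expansion along the first row (not from the notes), compared against NumPy's determinant for the example above.

```python
import numpy as np

def det_cofactor(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)  # delete row 1 and column k+1
        total += (-1) ** k * A[0, k] * det_cofactor(minor)      # sign (-1)^(1+(k+1)) = (-1)^k
    return total

A = np.array([[1, 2, 3], [0, -1, 4], [5, 0, 2]])
print(det_cofactor(A))     # 53.0
print(np.linalg.det(A))    # 53.0 up to floating-point rounding
```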


7.7.3 Properties of Determinants


Theorem 1: Behavior of an nth-Order Determinant under Elementary Row Operations

• Swapping two rows (or columns) changes the sign of the determinant. If matrix B is

obtained by swapping two rows of A, then:

det(B) = − det(A).

• Multiplying a row (or column) by a scalar c multiplies the determinant by c. If B is

obtained from A by multiplying a row by c, then:

det(B) = c · det(A).

• Adding a multiple of one row to another does not change the determinant. If B is obtained

from A by replacing a row with itself plus k times another row, then:

det(B) = det(A).

• det(A) = det(A^T). The determinant remains the same when the matrix is transposed:
\[
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc = \det\begin{pmatrix} a & c \\ b & d \end{pmatrix}.
\]

• If a matrix has a row (or column) of zeros, its determinant is zero.
\[
\det\begin{pmatrix} 0 & 0 \\ c & d \end{pmatrix} = 0,
\qquad
\det\begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix} = 0.
\]

Example: Swapping rows
\[
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc
\quad \Longrightarrow \quad
\det\begin{pmatrix} c & d \\ a & b \end{pmatrix} = -(ad - bc).
\]

Example: Multiplying a row by a scalar c
\[
\det\begin{pmatrix} 2a & 2b \\ c & d \end{pmatrix} = 2 \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = 2(ad - bc).
\]

Example: Adding a multiple of a row. Replace Row 2 with (Row 2 + k · Row 1):
\[
\det\begin{pmatrix} a & b \\ c + ka & d + kb \end{pmatrix} = ad - bc.
\]

Theorem 2: Further Properties of nth-Order Determinants

• Interchange of two columns multiplies the value of the determinant by -1.

• Addition of a multiple of a column to another column does not alter the value of the determinant.

• Multiplication of a column by a nonzero constant c multiplies the value of the determinant by c.

• Transposition leaves the value of a determinant unaltered.

• A zero row or column renders the value of a determinant zero.

• Proportional rows or columns render the value of a determinant zero.

Theorem 3: Rank in Terms of Determinants

Consider an m × n matrix A = [ajk ]:

• A has rank r ≥ 1 if and only if A has an r × r submatrix with nonzero determinant.

• The determinant of any square submatrix with more than r rows, contained in A has a value equal

to zero.

• An n × n square matrix A has rank n if and only if detA ̸= 0.


7.7.4 Cramer’s Rule


Theorem 4: Cramer’s Theorem (Solution of Linear Systems by Determinants)

(a) If a linear system of n equations in the same number of unknowns x1, x2, . . . , xn,
\[
\begin{cases}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1, \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2, \\
\quad \vdots \\
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n,
\end{cases}
\]
has a nonzero coefficient determinant D = det(A), the system has precisely one solution. This solution is given by the formulas
\[
x_1 = \frac{D_1}{D}, \qquad x_2 = \frac{D_2}{D}, \qquad \ldots, \qquad x_n = \frac{D_n}{D}
\]
(Cramer’s rule), where D_k is the determinant obtained from D by replacing in A the kth column by the column with the entries b1, b2, . . . , bn.

(b) Hence if the system is homogeneous and D ≠ 0, it has only the trivial solution x1 = 0, x2 = 0, . . . , xn = 0. If D = 0, the homogeneous system also has nontrivial solutions.

Cramer’s Rule provides a method to solve a system of linear equations:

Ax = b,

where A is an invertible n × n matrix. The solution is given by:
\[
x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, 2, \ldots, n,
\]
where A_i is obtained by replacing the i-th column of A with the vector b.


Example: Solving a 3 × 3 System

Consider the system:
\[
\begin{cases}
x + y + z = 6, \\
2x - y + z = 3, \\
x + 2y - z = 3.
\end{cases}
\]
The coefficient matrix and constant vector are:
\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & -1 & 1 \\ 1 & 2 & -1 \end{pmatrix},
\qquad
b = \begin{pmatrix} 6 \\ 3 \\ 3 \end{pmatrix}.
\]

Step 1: Compute det(A) (as shown in previous examples):
\[
\det(A) = 7.
\]

Step 2: Compute det(A1), det(A2), and det(A3):
\[
A_1 = \begin{pmatrix} 6 & 1 & 1 \\ 3 & -1 & 1 \\ 3 & 2 & -1 \end{pmatrix},
\qquad
A_2 = \begin{pmatrix} 1 & 6 & 1 \\ 2 & 3 & 1 \\ 1 & 3 & -1 \end{pmatrix},
\qquad
A_3 = \begin{pmatrix} 1 & 1 & 6 \\ 2 & -1 & 3 \\ 1 & 2 & 3 \end{pmatrix}.
\]
Calculate each determinant:
\[
\det(A_1) = 9, \qquad \det(A_2) = 15, \qquad \det(A_3) = 18.
\]

Step 3: Solve using Cramer’s Rule:
\[
x = \frac{9}{7}, \qquad y = \frac{15}{7}, \qquad z = \frac{18}{7}.
\]

Final Answer:
\[
x = \frac{9}{7}, \qquad y = \frac{15}{7}, \qquad z = \frac{18}{7}.
\]


7.8 Inverse of a Matrix. Gauss–Jordan Elimination

7.9 Inverse of a Matrix and Related Theorems

7.9.1 Inverse Matrix

• A−1 : The inverse of an n × n matrix A = [ajk ] is defined by the property:

AA−1 = A−1 A = I,

where I is the identity matrix.

• Nonsingular Matrix: A square matrix whose determinant is not equal to zero.

• Singular Matrix: A square matrix whose determinant is zero.

• If a matrix has an inverse, the inverse is unique.

Theorem 1: Existence of the Inverse

• The inverse of a matrix A exists if and only if rank(A) = n, which is equivalent to det(A) ̸= 0.

• A is nonsingular if rank(A) = n.

• A is singular if rank(A) < n.

7.9.2 Determination of the Inverse by the Gauss–Jordan Method

To find A^{-1}, we form the augmented matrix Ã = [A | I] and apply Gauss–Jordan elimination to obtain [I | K], where K = A^{-1}:
\[
\tilde{A} = [A \mid I] \;\xrightarrow{\text{Gauss--Jordan elimination}}\; [I \mid A^{-1}].
\]
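A compact Gauss–Jordan inversion sketch (illustrative, not from the notes): row-reduce [A | I] to [I | A^{-1}] with partial pivoting, applied here to the 3 × 3 matrix used in the worked example later in this section.

```python
import numpy as np

def gauss_jordan_inverse(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                       # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("matrix is singular (or nearly so)")
        M[[col, pivot]] = M[[pivot, col]]               # swap rows
        M[col] /= M[col, col]                           # normalize the pivot row
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]          # eliminate the column elsewhere
    return M[:, n:]                                     # right half is A^{-1}

A = np.array([[-1.0, 1.0, 2.0],
              [3.0, -1.0, 1.0],
              [-1.0, 3.0, 4.0]])
print(gauss_jordan_inverse(A))
```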


Theorem 2: Inverse of a Matrix

Let A = [a_{jk}] be a nonsingular n × n matrix. The inverse of A is given by:
\[
A^{-1} = \frac{1}{\det(A)}
\begin{pmatrix}
C_{11} & C_{21} & \cdots & C_{n1} \\
C_{12} & C_{22} & \cdots & C_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
C_{1n} & C_{2n} & \cdots & C_{nn}
\end{pmatrix},
\]
where C_{jk} is the cofactor of the entry a_{jk} in A; note that C_{jk} occupies the (k, j) position, i.e., the cofactor matrix appears transposed.

In particular, for a 2 × 2 matrix:
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},
\qquad
A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.
\]

Example: Find the Inverse of a 3 × 3 Matrix

Consider the matrix:
\[
A = \begin{pmatrix} -1 & 1 & 2 \\ 3 & -1 & 1 \\ -1 & 3 & 4 \end{pmatrix}.
\]
First, compute the determinant (expanding along the first row, with minors M11 = −7, M12 = 13, M13 = 8):
\[
\det(A) = -1(-7) - 1(13) + 2(8) = 10.
\]
Next, find all cofactors:
\[
C_{11} = \begin{vmatrix} -1 & 1 \\ 3 & 4 \end{vmatrix} = -7,
\qquad
C_{12} = -\begin{vmatrix} 3 & 1 \\ -1 & 4 \end{vmatrix} = -13,
\qquad
C_{13} = \begin{vmatrix} 3 & -1 \\ -1 & 3 \end{vmatrix} = 8,
\]
\[
C_{21} = -\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = 2,
\qquad
C_{22} = \begin{vmatrix} -1 & 2 \\ -1 & 4 \end{vmatrix} = -2,
\qquad
C_{23} = -\begin{vmatrix} -1 & 1 \\ -1 & 3 \end{vmatrix} = 2,
\]
\[
C_{31} = \begin{vmatrix} 1 & 2 \\ -1 & 1 \end{vmatrix} = 3,
\qquad
C_{32} = -\begin{vmatrix} -1 & 2 \\ 3 & 1 \end{vmatrix} = 7,
\qquad
C_{33} = \begin{vmatrix} -1 & 1 \\ 3 & -1 \end{vmatrix} = -2.
\]


The adjoint of A is:
\[
\operatorname{adj}(A) = \begin{pmatrix} -7 & 2 & 3 \\ -13 & -2 & 7 \\ 8 & 2 & -2 \end{pmatrix}.
\]

Finally, the inverse is:
\[
A^{-1} = \frac{1}{10} \begin{pmatrix} -7 & 2 & 3 \\ -13 & -2 & 7 \\ 8 & 2 & -2 \end{pmatrix}
= \begin{pmatrix} -0.7 & 0.2 & 0.3 \\ -1.3 & -0.2 & 0.7 \\ 0.8 & 0.2 & -0.2 \end{pmatrix}.
\]
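A quick NumPy verification (illustrative, not part of the notes) of the inverse computed above.

```python
import numpy as np

A = np.array([[-1.0, 1.0, 2.0],
              [3.0, -1.0, 1.0],
              [-1.0, 3.0, 4.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                                  # [[-0.7 0.2 0.3], [-1.3 -0.2 0.7], [0.8 0.2 -0.2]]
print(np.allclose(A @ A_inv, np.eye(3)))      # True
```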

7.9.3 Inverse of Diagonal Matrices

If A = [a_{jk}] is diagonal with a_{jj} ≠ 0 for all j, then A^{-1} is also diagonal with entries:
\[
A^{-1} = \begin{pmatrix}
1/a_{11} & 0 & \cdots & 0 \\
0 & 1/a_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1/a_{nn}
\end{pmatrix}
\]

Example: Inverse of a Diagonal Matrix
\[
A = \begin{pmatrix} -0.5 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
A^{-1} = \begin{pmatrix} -2 & 0 & 0 \\ 0 & 0.25 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]

7.9.4 Properties of Inverses

• (AC)−1 = C −1 A−1 .

• (AC · · · P Q)−1 = Q−1 P −1 · · · C −1 A−1 .

• (A−1 )−1 = A.

7.9.5 Unusual Properties of Matrix Multiplication and Cancellation Laws

Matrix multiplication is not generally commutative; that is, AB ̸= BA.


Example: Non-Commutativity
\[
\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
=
\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}
\quad \text{but} \quad
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}
\neq
\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]

Theorem 3: Cancellation Laws


Let A, B, C be n × n matrices. Then:

• If rank(A) = n and AB = AC, then B = C.

• If rank(A) = n and AB = 0, then B = 0.

• If AB = 0 with A ̸= 0 and B ̸= 0, then rank(A) < n and rank(B) < n.

• If A is singular, then so are BA and AB.

Theorem 4: Determinant of Product of Matrices


For any n × n matrices A and B:

det(AB) = det(BA) = det(A) det(B).


7.10 Vector Spaces, Inner Product Spaces, and Linear Transformations

7.10.1 Real Vector Space


Definition: Real Vector Space

A real vector space is a set V equipped with two operations: vector addition and scalar multiplication,

satisfying the following axioms:

• Vector Addition: a + b

– Commutativity: a + b = b + a

– Associativity: (u + v) + w = u + (v + w)

– Zero Vector: a + 0 = a

– Additive Inverse: a + (−a) = 0

• Scalar Multiplication: ka

– Distributivity over Vector Addition: c(a + b) = ca + cb

– Distributivity over Scalar Addition: (c + k)a = ca + ka

– Associativity: c(ka) = (ck)a

– Identity: 1a = a


7.10.2 Real Inner Product Space


Definition: Real Inner Product Space

If a and b are vectors in Rn , regarded as column vectors, we can form the product aT b. This is a 1 × 1

matrix, which we can identify with its single entry, that is, with a number.

This product is called the inner product or dot product of a and b. Other notations for it are (a, b)

and a · b. Thus,

 
\[
a^T b = (a, b) = a \cdot b =
\begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix}
\begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}
= \sum_{i=1}^{n} a_i b_i = a_1 b_1 + \cdots + a_n b_n.
\]

Properties

An inner product space is a vector space V with an inner product ⟨a, b⟩ satisfying:

• Linearity: ⟨qa + kb, c⟩ = q⟨a, c⟩ + k⟨b, c⟩

• Symmetry: ⟨a, b⟩ = ⟨b, a⟩

• Positive-Definiteness: ⟨a, a⟩ ≥ 0 with equality if and only if a = 0

Inner Product and Angle Relation

For vectors a and b, the inner product is related to the angle θ between them by:

⟨a, b⟩ = ∥a∥∥b∥ cos θ.

Related Concepts:

• Orthogonality: Two vectors are orthogonal if ⟨a, b⟩ = 0.

• Norm: ∥a∥ = √⟨a, a⟩.

• Unit Vector: A vector with norm 1.


Cf. Generalized p-Norms

The p-norm for a vector a = (a1, a2, . . . , an) is defined as:
\[
\|a\|_p = \left( \sum_{i=1}^{n} |a_i|^p \right)^{1/p}, \qquad p \ge 1.
\]

Common cases:

• p = 1: Manhattan norm (ℓ1 norm)

• p = 2: Euclidean norm (ℓ2 norm)

• p = ∞: Maximum norm (ℓ∞ norm)
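A brief NumPy illustration (not from the notes) of the inner product, the angle relation, and the common p-norms via the `ord` argument of np.linalg.norm.

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])

dot = a @ b
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
print(dot, np.degrees(np.arccos(cos_theta)))   # 3.0 and the angle between a and b (about 53.13 degrees)

print(np.linalg.norm(a, ord=1))       # 7.0  (Manhattan norm)
print(np.linalg.norm(a, ord=2))       # 5.0  (Euclidean norm)
print(np.linalg.norm(a, ord=np.inf))  # 4.0  (maximum norm)
```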

7.10.3 Basic Inequalities

• Cauchy–Schwarz Inequality: |⟨a, b⟩| ≤ ∥a∥∥b∥.

• Triangle Inequality: ∥a + b∥ ≤ ∥a∥ + ∥b∥.

• Parallelogram Law: ∥a + b∥2 + ∥a − b∥2 = 2(∥a∥2 + ∥b∥2 ).

7.10.4 Linear Transformations

Definition of a Linear Transformation

A linear transformation (or linear mapping) T from a vector space V into a vector space W is a rule that

assigns to each vector v ∈ V a unique vector T (v) ∈ W , satisfying the following two properties for all u, v ∈ V

and scalars c:

• Additivity (Linearity of Addition):

T (u + v) = T (u) + T (v).

• Homogeneity (Linearity of Scalar Multiplication):

T (cv) = cT (v).


Key Idea

A transformation is linear if and only if it preserves vector addition and scalar multiplication.

Example 1: Matrix Transformation. Let
\[
A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}.
\]
Define T : R^2 → R^2 by T(x) = Ax. For x = (x1, x2)^T,
\[
T(x) = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} x_1 + 2x_2 \\ x_2 \end{pmatrix}.
\]
This is a linear transformation since it satisfies both additivity and homogeneity.

Linear Transformation from Rn to Rm

Any linear transformation T : Rn → Rm can be represented by an m × n matrix A such that T (x) = Ax.

Theorem: Standard Matrix of a Linear Transformation


If T is a linear transformation from Rn to Rm , then there exists a unique matrix A such that

T (x) = Ax ∀x ∈ Rn .

The columns of A are given by T (e1 ), T (e2 ), . . . , T (en ) where ei are the standard basis vectors of Rn .

Example 2: Reflection Transformation. Let T : R^2 → R^2 be the reflection about the x-axis. The standard matrix is:
\[
A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\]
Then, for x = (x1, x2)^T,
\[
T(x) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} x_1 \\ -x_2 \end{pmatrix}.
\]
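An illustrative construction (not from the notes) of the standard matrix of a linear transformation: its columns are T(e1), T(e2), . . . , T(en), shown here for the reflection about the x-axis in R^2.

```python
import numpy as np

def reflect_x(v):
    """Reflection of a 2D vector about the x-axis."""
    return np.array([v[0], -v[1]])

e1, e2 = np.eye(2)
A = np.column_stack([reflect_x(e1), reflect_x(e2)])  # columns are T(e1), T(e2)
print(A)                           # [[1, 0], [0, -1]]
print(A @ np.array([3.0, 2.0]))    # [3, -2], the reflected vector
```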


Properties of Linear Transformations

• T (0) = 0.

• T (−v) = −T (v).

• T preserves linear combinations:

T (au + bv) = aT (u) + bT (v).

• ker(T ) = {x ∈ Rn : T (x) = 0} is the kernel of T .

• Im(T ) = {y ∈ Rm : ∃x ∈ Rn , T (x) = y} is the image of T .

Invertibility of Linear Transformations

A linear transformation T : Rn → Rn is invertible if there exists T −1 : Rn → Rn such that:

T −1 (T (x)) = x and T (T −1 (y)) = y.

Theorem: Invertibility Criterion

A linear transformation T is invertible if and only if its standard matrix A is invertible. In that case,

T −1 (x) = A−1 x.

Example 3: Inverse Transformation. Let
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}.
\]
Then,
\[
A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}
= \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}.
\]
For x = (x1, x2)^T,
\[
T^{-1}(x) = A^{-1} x = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.
\]


7.10.5 Summary
Summary of Linear Transformations

• A linear transformation is fully determined by its action on a basis.

• Every linear transformation from Rn to Rm corresponds to a unique matrix.

• Invertibility of T is equivalent to invertibility of its standard matrix.

Summary of Main Contents: Matrices and Vectors

• Matrix Operations:

– Addition, scalar multiplication, and matrix multiplication

– Properties of matrix multiplication (associativity, distributivity)

– Transposition and its uses

• Matrix Inverses and Special Matrices:

– Definition of the inverse of a square matrix

– Invertibility criteria (e.g. det(A) ̸= 0)

– Special types: diagonal, symmetric, orthogonal, etc.

• Systems of Linear Equations:

– Representing systems in matrix-vector form Ax = b

– Row-reduction (Gaussian elimination) to solve for unknowns

– Interpreting solutions in geometric terms (lines, planes, etc.)

• Vectors in Rn :

– Definition and notation of vectors

– Geometric interpretation (direction and magnitude)

– Operations: addition, scalar multiplication, dot product, and (in R3 ) cross product
