Contents
- 1 What is Linear algebra?
- 2 System of linear equations
- 3 Linear independence
- 4 Linear Transformation
- 5 Invertible
- 6 Subspace
- 7 Dimension and Rank
- 8 Determinant
- 9 Terms of Matrices
- 9.1 Matrix and vector
- 9.2 square matrix
- 9.3 identity matrix
- 9.4 transpose of a matrix or a vector
- 9.5 determinant
- 9.6 trace
- 9.7 diagonal matrix
- 9.8 eigenvalue, eigenvector
- 9.9 eigen decomposition
- 9.10 characteristic equation
- 9.11 Cayley-Hamilton theorem
- 9.12 matrices with specific condition
- 9.13 SVD(Singular value decomposition)
I think I need to study linear algebra from the beginning: I don't have enough background in proving matrix equations to understand machine learning algorithms that are based on linear algebra. Understanding the process and logic of those algorithms also requires it, since most of them are derived from linear algebra.
So I'll share theorems and definitions of linear algebra here.
I'm studying from the book 1
What is Linear algebra?
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces, but is also concerned with properties common to all vector spaces. - Wikipedia
A linear equation in the variables $x_1, \dots, x_n$ is an equation that can be written in the form $a_1x_1 + a_2x_2 + \dots + a_nx_n = b$.
Fields using linear algebra
- analytic geometry
- engineering, physics
- natural sciences
- computer science
- computer animation
- advanced facial recognition algorithms and the social sciences (particularly in economics)
Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models.
System of linear equations
A system of linear equations has exactly one of the three properties below.
- no solution
- exactly one solution
- infinitely many solutions
- A system of linear equations is said to be consistent if it has either one solution or infinitely many solutions.
- A system is inconsistent if it has no solution.
The basic form of a linear equation
$$a_1x_1 + a_2x_2 + \dots + a_nx_n = b$$
where $b$ and the coefficients $a_1, \dots, a_n$ are real or complex numbers.
Matrix and vector
$$A = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$
where the $a_i$ are column vectors. The linear equation can be denoted by the product of $A$ and $X$: $AX = b$.
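As a quick sketch of $AX = b$ in code (the 2×2 system here is an assumed example, not from the text):

```python
import numpy as np

# Assumed example system:
#    x1 + 2*x2 = 5
#  3*x1 + 4*x2 = 11
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

# Solve AX = b for X (A is square and invertible here)
x = np.linalg.solve(A, b)
```

Multiplying back, `A @ x` reproduces `b`, which is exactly the statement $AX = b$.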
Homogeneous equation
A system of linear equations is said to be homogeneous if it can be written in the form $AX = 0$, where $A$ is an $m \times n$ matrix and $0$ is the zero vector in $\mathbb{R}^m$.
Such a system $AX = 0$ always has at least one solution, $X = 0$ (the zero vector in $\mathbb{R}^n$). This zero solution is called the trivial solution.
The homogeneous equation $AX = 0$ has a nontrivial solution if and only if the equation has at least one free variable.
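A free variable exists exactly when the rank of $A$ is less than the number of columns $n$, which can be checked numerically (example matrices are assumed):

```python
import numpy as np

# AX = 0 has a nontrivial solution iff rank(A) < n (a free variable exists).
A_full = np.array([[1.0, 0.0],
                   [0.0, 1.0]])       # rank 2 = n: only the trivial solution
A_deficient = np.array([[1.0, 2.0],
                        [2.0, 4.0]])  # rank 1 < n: nontrivial solutions exist

def has_nontrivial_solution(A):
    # A free variable exists when rank(A) is less than the number of columns
    return np.linalg.matrix_rank(A) < A.shape[1]
```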
Non-homogeneous
A non-homogeneous equation is denoted by $AX = b$. If it is consistent for some given $b$, let $p$ be a solution of the equation. Then the solution set of $AX = b$ is the set of all vectors of the form $w = p + v_n$, where $v_n$ is any solution of the homogeneous equation $AX = 0$.
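The $w = p + v_n$ structure can be verified numerically (the rank-deficient system below is an assumed example):

```python
import numpy as np

# Assumed consistent system AX = b whose homogeneous equation
# AX = 0 has nontrivial solutions (A is rank-deficient).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

p = np.array([3.0, 0.0])     # one particular solution of AX = b
v_n = np.array([-2.0, 1.0])  # a solution of the homogeneous equation AX = 0

# p plus any multiple of v_n is still a solution of AX = b
w = p + 5.0 * v_n
```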
Linear independence
Linear independence is important in linear algebra because it means that the vectors have no relation to each other: no vector in the set can be written in terms of the others.
Linearly independent
A set of vectors $\{v_1, \dots, v_p\}$ is linearly independent if
$$x_1v_1 + x_2v_2 + \dots + x_pv_p = 0$$
has only the trivial solution. In matrix form, the columns of $A$ are linearly independent if and only if the equation $Ax = 0$ has only the trivial solution.
The trivial solution means $x_1, x_2, \dots, x_p$ are all zero.
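Since the columns are independent exactly when $Ax = 0$ has only the trivial solution, independence reduces to a rank check (example vectors are assumed):

```python
import numpy as np

# Columns are linearly independent iff rank(A) equals the number of columns.
independent = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])  # neither column is a multiple of the other
dependent = np.array([[1.0, 2.0],
                      [2.0, 4.0],
                      [3.0, 6.0]])    # second column = 2 * first column

def columns_independent(A):
    return np.linalg.matrix_rank(A) == A.shape[1]
```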
Linearly dependent
A set of vectors is linearly dependent if there exist weights $c_1, c_2, \dots, c_p$, not all zero, such that $c_1v_1 + c_2v_2 + \dots + c_pv_p = 0$. It can be expressed with a matrix:
$$\begin{bmatrix} v_1 & v_2 & \cdots & v_p \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_p \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
Linear Transformation
Generally, we can transform a vector using a matrix. We call the matrix a transformation (function or mapping) $T$ from $\mathbb{R}^n \to \mathbb{R}^m$.
For $T: \mathbb{R}^n \to \mathbb{R}^m$, $\mathbb{R}^n$ is the domain of $T$ and $\mathbb{R}^m$ is the codomain of $T$. For $x$ in $\mathbb{R}^n$, the vector $T(x)$ in $\mathbb{R}^m$ is called the image of $x$. The set of all images $T(x)$ is called the range of $T$.
A transformation $\mathbb{R}^n \to \mathbb{R}^m$ with $n = m$ that makes the image tilt is called a shear transformation.
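A minimal 2-D shear looks like this (the shear factor $k = 0.5$ is an assumed example):

```python
import numpy as np

# Horizontal shear in R^2: leaves the x-axis fixed and tilts everything else.
k = 0.5
S = np.array([[1.0, k],
              [0.0, 1.0]])

corner = np.array([0.0, 1.0])  # top-left corner of the unit square
sheared = S @ corner           # moves sideways by k: the square tilts
```

Points on the x-axis (e.g. `[1, 0]`) are unchanged, which is why the image appears tilted rather than rotated.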
Transformation equation
A transformation (or mapping) $T$ is linear if:
- $T(u + v) = T(u) + T(v)$ for all $u, v$ in the domain of $T$
- $T(cu) = cT(u)$ for all scalars $c$
From these two properties it follows that:
- $T(0) = 0$
- $T(cu + dv) = cT(u) + dT(v)$
Let $T: \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. Then there exists a unique matrix $A$ such that
$$T(x) = Ax \text{ for all } x \text{ in } \mathbb{R}^n$$
Let $T: \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation and let $A$ be the standard matrix for $T$. Then:
- $T$ maps $\mathbb{R}^n$ onto $\mathbb{R}^m$ if and only if the columns of $A$ span $\mathbb{R}^m$
- $T$ is one-to-one if and only if the columns of $A$ are linearly independent
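Both conditions reduce to rank comparisons on the standard matrix (the 3×2 matrix below is an assumed example):

```python
import numpy as np

# For the standard matrix A of T: R^n -> R^m:
#   T is onto       iff the columns of A span R^m       iff rank(A) == m
#   T is one-to-one iff the columns are independent     iff rank(A) == n
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # maps R^2 -> R^3
m, n = A.shape
rank = np.linalg.matrix_rank(A)

is_onto = (rank == m)        # False: 2 columns cannot span R^3
is_one_to_one = (rank == n)  # True: the columns are independent
```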
Invertible
If a square matrix $A$ has the property $\det(A) \neq 0$, then $A$ is invertible.
The inverse matrix is denoted by $A^{-1}$:
$$A^{-1}A = I \text{ and } AA^{-1} = I$$
Properties of Invertible matrix
- $(A^{-1})^{-1} = A$
- $(AB)^{-1} = B^{-1}A^{-1}$
- $(A^T)^{-1} = (A^{-1})^T$
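These properties can be checked numerically on small examples (the matrices below are assumed for illustration):

```python
import numpy as np

# Two invertible 2x2 example matrices (both have nonzero determinant)
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
I = np.eye(2)

A_inv = np.linalg.inv(A)

# A^-1 A = I and A A^-1 = I
assert np.allclose(A_inv @ A, I) and np.allclose(A @ A_inv, I)
# (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
# (A^T)^-1 = (A^-1)^T
assert np.allclose(np.linalg.inv(A.T), A_inv.T)
```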
Subspace
A subspace of $\mathbb{R}^n$ is any set $H$ in $\mathbb{R}^n$ that has the following three properties.
- The zero vector is in $H$.
- For each $u$ and $v$ in $H$, the sum $u + v$ is in $H$.
- For each $u$ in $H$ and each scalar $c$, the vector $cu$ is in $H$.
The column space of a matrix $A$ is the set $\operatorname{Col} A$ of all linear combinations of the columns of $A$.
The null space of a matrix $A$ is the set $\operatorname{Nul} A$ of all solutions to the homogeneous equation $AX = 0$.
A basis for a subspace $H$ of $\mathbb{R}^n$ is a linearly independent set in $H$ that spans $H$. The pivot columns of a matrix $A$ form a basis for the column space of $A$.
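SymPy can find the pivot columns and the null space directly (the example matrix, whose second column is twice the first, is assumed):

```python
from sympy import Matrix

# Assumed example: column 2 = 2 * column 1, so only the
# pivot columns (1st and 3rd) form a basis of Col A.
A = Matrix([[1, 2, 0],
            [2, 4, 1]])

rref_form, pivot_cols = A.rref()    # pivot columns index a basis of Col A
col_basis = [A.col(j) for j in pivot_cols]
null_basis = A.nullspace()          # basis of all solutions of AX = 0
```

Here one free variable remains, so the null space has exactly one basis vector, matching the free-variable criterion for nontrivial solutions.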
Dimension and Rank
Determinant
Terms of Matrices
Matrix and vector
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$
square matrix
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}$$
identity matrix
$$A = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$
transpose of a matrix or a vector
- $(A^T)^T = A$
- $(A + B)^T = A^T + B^T$
- For any scalar $r$, $(rA)^T = rA^T$
- $(AB)^T = B^TA^T$
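A quick numerical check of all four transpose rules (example matrices and the scalar are assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
r = 3.0

check_double = np.allclose(A.T.T, A)               # (A^T)^T = A
check_sum = np.allclose((A + B).T, A.T + B.T)      # (A+B)^T = A^T + B^T
check_scalar = np.allclose((r * A).T, r * A.T)     # (rA)^T = r A^T
check_product = np.allclose((A @ B).T, B.T @ A.T)  # (AB)^T = B^T A^T
```

Note the order reversal in the product rule: transposing swaps the factors.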
determinant
trace
diagonal matrix
$$A = \operatorname{diag}(a_1, a_2, \cdots, a_n) = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix}$$
eigenvalue, eigenvector
eigen decomposition
characteristic equation
Cayley-Hamilton theorem
matrices with specific condition
Orthogonal matrix
$$AA^T = A^TA = E$$
Symmetric matrix
$$A^T = A$$
Unitary matrix
Hermitian matrix
SVD(Singular value decomposition)
$$A = U\Sigma V^T, \quad A^TA = V(\Sigma^T\Sigma)V^T, \quad AA^T = U(\Sigma\Sigma^T)U^T$$
$$\Sigma = \begin{bmatrix} \sigma & 0 & \cdots & 0 \\ 0 & \sigma & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma \\ 0 & 0 & \cdots & 0 \end{bmatrix} \text{ or } \begin{bmatrix} \sigma & 0 & \cdots & 0 & 0 \\ 0 & \sigma & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \sigma & 0 \end{bmatrix}$$
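The factorization $A = U\Sigma V^T$ can be computed and verified with NumPy (the 3×2 matrix is an assumed example):

```python
import numpy as np

# SVD of an assumed example matrix: A = U @ Sigma @ V^T
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)    # s holds the singular values, descending
Sigma = np.zeros_like(A)       # Sigma has the same m x n shape as A,
Sigma[:len(s), :len(s)] = np.diag(s)  # with the singular values on its diagonal

reconstructed = U @ Sigma @ Vt
```

Multiplying the three factors back together reproduces $A$, and `s` contains the singular values in descending order.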
- Linear Algebra and Its Applications, Third Edition, David C. Lay, 2003. ↩