Linear Algebra
LINEAR ALGEBRA FOR
TEST AND ANALYSIS
IMAC - XIX
Daniel C. Kammer
Department of Engineering Physics
University of Wisconsin
MOTIVATION
The use of matrix and vector algebra is an absolute requirement
for the efficient manipulation of the large sets of data that are
fundamental to applications in structural dynamics, both test
and analysis.
Primary problems to be solved:
[ A]{x} = {b}
[M]{ẍ} + [C]{ẋ} + [K]{x} = {F(t)}
LECTURE AGENDA
Common Nomenclature and Definitions
Solution of Determined Sets of Equations
Solution of Overdetermined Sets of Equations
Solution of Underdetermined Sets of Equations
Example Applications
NOMENCLATURE
[A]       matrix
{x}       vector
n         number of rows (equations)
m         number of columns (unknowns)
[A]^T     matrix transpose
[A]^H     matrix Hermitian transpose
[A]^-1    matrix inverse
[A]^+     matrix generalized inverse
*         complex conjugate
MATRIX EQUATIONS,
THREE CASES CAN OCCUR
[A]n×m {x}m×1 = {b}n×1
1. Underdetermined: n < m
Optimization
FEM updating
Projection of data onto subspaces
2. Determined: n = m
Analytical structural dynamics using
finite element models
3. Overdetermined: n > m
Time and frequency domain parameter
estimation
Least squares applications
Sensor placement algorithms
BASIC DEFINITIONS
Matrix:
A matrix is an array of numbers. Entries can be
referred to by their row and column location.
      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]
      [ a41  a42  a43 ]
Vector:
A vector is a special case of a matrix with either one row or one column.
      { b1 }
{b} = { b2 }
      { b3 }
      { b4 }
Column subscript is dropped.
CAN TRANSFORM A SET OF
ALGEBRAIC EQUATIONS TO A SINGLE
MATRIX EQUATION
3x1 - 2x3 = 2
2x1 + 5x2 - 4x3 = 1
7x1 - 3x3 = 3
4x1 + 6x2 + 2x3 = 5
Equivalent to:
[ 3  0  -2 ]          { 2 }
[ 2  5  -4 ] { x1 }   { 1 }
[ 7  0  -3 ] { x2 } = { 3 }
[ 4  6   2 ] { x3 }   { 5 }
No Exact Solution
RULES FOR MATRIX OPERATIONS
Multiplication by a scalar:
       [ ka11  ka12  ka13 ]
k[A] = [ ka21  ka22  ka23 ]
       [ ka31  ka32  ka33 ]
Multiplication of a matrix by a matrix:
[A][B] ≠ [B][A]        NOT commutative in general
([A][B])[C] = [A]([B][C])        Associative
([A] + [B])([C] + [D]) = [A][C] + [A][D] + [B][C] + [B][D]
Matrix cancellation:
[A][B] = [0] implies one of the following:
[A] = [0]
[B] = [0]
[A] and [B] singular
MATRIX MULTIPLICATION
[A][B] = [C]
[ a11  a12  a13  a14 ]   [ b11  b12 ]   [ c11  c12 ]
[ a21  a22  a23  a24 ]   [ b21  b22 ] = [ c21  c22 ]
[ a31  a32  a33  a34 ]   [ b31  b32 ]   [ c31  c32 ]
                         [ b41  b42 ]
c22 = a21 b12 + a22 b22 + a23 b32 + a24 b42
cij = Σk aik bkj
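A minimal Python/NumPy sketch (not part of the original slides) that checks the summation rule element by element:

import numpy as np

A = np.arange(1, 13).reshape(3, 4)          # a 3 x 4 matrix
B = np.arange(1, 9).reshape(4, 2)           # a 4 x 2 matrix

C = A @ B                                   # 3 x 2 product
C_check = np.einsum('ik,kj->ij', A, B)      # c_ij = sum over k of a_ik * b_kj
assert np.allclose(C, C_check)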
SPECIAL MATRICES
Identity Matrix:
1s on the diagonal, zeros everywhere else.
1 0 0
[ I ] = 0 1 0
0 0 1
Zero Matrix:
Zeros in all locations.
0 0 0
[0] = 0 0 0
0 0 0
MATRIX TRANSPOSES
Transpose:
Interchange rows and columns.
      [ 1  2  3 ]          [ 1  4  7 ]
[A] = [ 4  5  6 ]   [A]T = [ 2  5  8 ]
      [ 7  8  9 ]          [ 3  6  9 ]
Hermitian Transpose:
Interchange rows and columns and then take the complex conjugate of each element.
      [ 1+i    2     3i ]          [ 1-i   4+4i   7    ]
[A] = [ 4-4i   5     6  ]   [A]H = [ 2     5      8-8i ]
      [ 7      8+8i  9  ]          [ -3i   6      9    ]
SPECIAL MATRIX FORMS
Symmetric:
[ A] = [ A]T
1 2 3
[ A] = 2 4 5
3 5 6
Hermitian:   [A] = [A]H
      [ 1      4+3i   -5i ]
[A] = [ 4-3i   2      2+i ]
      [ 5i     2-i    0   ]
All diagonal terms real
Skew Symmetric:   [A] = -[A]T
      [  0   2   3 ]
[A] = [ -2   0   5 ]
      [ -3  -5   0 ]
All diagonal terms 0
Skew Hermitian:   [A] = -[A]H
      [  i        4+4i     7    ]
[A] = [ -(4-4i)   5i       8-8i ]
      [ -7        -(8+8i)  0    ]
All diagonal terms imaginary or 0
SPECIAL MATRIX FORMS
Orthogonal
[ A][ A]T = [ A]T [ A] = [ I ]
Unitary
[ A][ A]H = [ A]H [ A] = [ I ]
Idempotent
[A]^m = [A]   for any integer m
Nilpotent
[A]^k = [0]   for some integer k
SPECIAL MATRIX FORMS
Diagonal:
      [ 1  0  0 ]
[A] = [ 0  2  0 ]
      [ 0  0  3 ]
Triangular:
      [ 1  4  5 ]
[A] = [ 0  2  6 ]   Upper triangular
      [ 0  0  3 ]
TOEPLITZ MATRIX
All elements on any superdiagonal and subdiagonal are equal.
      [ t1  t6  t7  t8  t9 ]
      [ t2  t1  t6  t7  t8 ]
[T] = [ t3  t2  t1  t6  t7 ]
      [ t4  t3  t2  t1  t6 ]
      [ t5  t4  t3  t2  t1 ]
Does not have to be square.
TOEPLITZ MATRIX EXAMPLE
Discrete Time Invariant SISO System:
{x(k+1)} = [A]{x(k)} + [B]u(k)
y(k) = [C]{x(k)} + D u(k)
Combine to produce the convolution equation:
y(k) = Σ(i=0 to k) H(i) u(k-i)
Can write the convolution equation in matrix form:
[ H0  H1  H2  H3  H4 ] { u4 }   { y4 }
[ 0   H0  H1  H2  H3 ] { u3 }   { y3 }
[ 0   0   H0  H1  H2 ] { u2 } = { y2 }
[ 0   0   0   H0  H1 ] { u1 }   { y1 }
[ 0   0   0   0   H0 ] { u0 }   { y0 }
For MIMO systems, the Hi are matrices, producing a block Toeplitz matrix.
Time domain identification techniques
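A Python/SciPy sketch of the idea with hypothetical Markov parameters and inputs (the names H, u, T are illustrative assumptions, not from the slides):

import numpy as np
from scipy.linalg import toeplitz

# hypothetical SISO Markov parameters H_0..H_4 and input samples u_0..u_4
H = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
u = np.array([1.0, 0.0, -1.0, 0.0, 1.0])

# lower-triangular Toeplitz matrix: first column holds the Markov parameters
T = toeplitz(H, np.r_[H[0], np.zeros(len(H) - 1)])
y = T @ u                                   # y_k = sum over i of H_i * u_(k-i)
assert np.allclose(y, np.convolve(H, u)[:len(u)])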
HANKEL MATRIX
All elements on any superdiagonal and subdiagonal perpendicular
to the main diagonal are equal.
      [ h1  h2  h3  h4  h5 ]
      [ h2  h3  h4  h5  h6 ]
[H] = [ h3  h4  h5  h6  h7 ]
      [ h4  h5  h6  h7  h8 ]
      [ h5  h6  h7  h8  h9 ]
Does not have to be square.
HANKEL MATRIX EXAMPLE
Modal parameter estimation using the Eigensystem Realization Algorithm (ERA).
For a SISO system, form the Hankel matrix:
         [ h1  h2  h3  h4  ... ]
         [ h2  h3  h4  h5  ... ]
[H(0)] = [ h3  h4  h5  h6  ... ]
         [ h4  h5  h6  h7  ... ]
         [ ...               ...]
hi is the ith Markov parameter derived from measured FRF or free-decay data.
MIMO results in a block Hankel matrix.
A singular value decomposition (SVD) is performed that then leads
to estimates of the modal parameters.
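A hedged Python sketch of forming a Hankel matrix from hypothetical Markov parameters and inspecting its singular values (only the matrix-building step, not the full ERA):

import numpy as np
from scipy.linalg import hankel

# hypothetical Markov parameters h_1, h_2, ... from a measured pulse response
k = np.arange(1, 21)
h = np.exp(-0.05 * k) * np.sin(0.3 * k)

H0 = hankel(h[:10], h[9:19])                # 10 x 10 Hankel matrix, H0[i, j] = h_(i+j+1)
s = np.linalg.svd(H0, compute_uv=False)
print(s / s[0])                             # a sharp drop marks the effective model order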
VANDERMONDE MATRIX
First column is 1s with successive columns being the second column
with its elements raised to increasing integer powers.
      [ 1  v1  v1²  v1³ ]
      [ 1  v2  v2²  v2³ ]
[V] = [ 1  v3  v3²  v3³ ]
      [ 1  v4  v4²  v4³ ]
      [ 1  v5  v5²  v5³ ]
Nonsingular iff all vi are distinct.
Occurs in curve fitting and some frequency domain
parameter estimation methods.
VANDERMONDE MATRIX EXAMPLE
Fit a polynomial to a set of data points, either test or analytical.
Find an expression for y as a function of the variable x:
y = p(x) = a0 + a1 x + a2 x² + a3 x³,   yi = p(xi)
Generate a matrix equation using the x-y pairs {y1 y2 y3 y4 y5}:
y1 = a0 + a1 x1 + a2 x1² + a3 x1³
y2 = a0 + a1 x2 + a2 x2² + a3 x2³
y3 = a0 + a1 x3 + a2 x3² + a3 x3³
y4 = a0 + a1 x4 + a2 x4² + a3 x4³
y5 = a0 + a1 x5 + a2 x5² + a3 x5³
[ 1  x1  x1²  x1³ ]          { y1 }
[ 1  x2  x2²  x2³ ] { a0 }   { y2 }
[ 1  x3  x3²  x3³ ] { a1 } = { y3 }
[ 1  x4  x4²  x4³ ] { a2 }   { y4 }
[ 1  x5  x5²  x5³ ] { a3 }   { y5 }
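A Python/NumPy sketch with hypothetical data points (np.vander builds the same matrix; five equations in four unknowns, so a least-squares fit):

import numpy as np

# hypothetical x-y data pairs to be fit by p(x) = a0 + a1 x + a2 x^2 + a3 x^3
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.0, 1.8, 3.1, 5.2, 8.9])

V = np.vander(x, N=4, increasing=True)      # columns 1, x, x^2, x^3
a, *_ = np.linalg.lstsq(V, y, rcond=None)   # overdetermined: least-squares coefficients
y_fit = V @ a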
RESPONSE SURFACES CAN BE SYNTHESIZED
TO ACT AS SURROGATES FOR COMPLEX
NUMERICAL SIMULATIONS
SOME MATRIX MEASURES
Determinant:
Exists only for square matrices.
      [ a  b ]
[A] = [ c  d ]        |[A]| = ad - cb
      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]
|[A]| = a11 |a22 a23; a32 a33| - a12 |a21 a23; a31 a33| + a13 |a21 a22; a31 a32|   Etc.
Properties:
|[A][B]| = |[A]| |[B]|
|[A]| = |[A]T|
|[A]*| = |[A]H|
|k[A]| = k^n |[A]|
Determinant is zero for singular matrices.
Trace:
Sum of the diagonal elements of a square matrix:  tr([A]) = Σi aii
VECTOR SPACE
The set of all n-dimensional vectors makes up an n-dimensional vector space, Rn.
n vectors {e}i in Rn are said to be linearly independent if the equation
{0} = a1{e}1 + a2{e}2 + a3{e}3 + ... + an{e}n
only has the solution in which the constants ai are all zero. The n vectors {e}i
are then said to span the vector space.
Any n-dimensional vector {x} can be expressed uniquely as a linear
combination of the n linearly independent vectors {e}i:
{x} = b1{e}1 + b2{e}2 + b3{e}3 + ... + bn{e}n
VECTOR SPACE THEORY APPLIED TO
MATRICES
[A]n×m {x}m×1 = {b}n×1
Rank of a matrix is the number of linearly independent columns or rows:  rk([A]) = r
rk([A]) = m            [A] is full column rank.
rk([A]) = r < m, n     [A] is rank deficient. If [A] is square, it is singular and [A]^-1 does not exist.
Column Space R([A]) of [A] is the vector space spanned by its columns.
Row Space of [A] is the vector space spanned by its rows.
Null Space N([A]) of [A] is the set of vectors {x} such that:  [A]{x} = {0}
EXAMPLE
      [ 1   0   7 ]
[A] = [ 3   3  13 ]
      [ 8   4   6 ]
      [ 1   2   1 ]
rk([A]) = 2
|[A]T [A]| = 0, but the determinant of a 2-by-2 submatrix is nonzero.
         [ 0.12  0.37 ]               [ 0.82 ]
R([A]) = [ 0.35  0.18 ]      N([A]) = [ 0.41 ]
         [ 0.92  0.55 ]               [ 0.41 ]
         [ 0.12  0.73 ]
FOUR SUBSETS OF MATRIX ALGEBRA
[A]n×m {x}m×1 = {b}n×1        [A] has rank r
(Diagram: [A] maps Rm to Rn, and [A]T maps Rn back to Rm. Rm splits into the row space R([A]T), of dimension r, and the null space N([A]), of dimension m-r. Rn splits into the column space R([A]), of dimension r, and the null space N([A]T), of dimension n-r.)
SPECTRAL DECOMPOSITION
In most cases, a square n-by-n matrix [A] can be decomposed into a product
of three matrices
[A] = [Φ][Λ][Φ]^-1
where [Λ] is diagonal with entries λi called eigenvalues and [Φ] is the
modal matrix containing columns {φ}i called eigenvectors.
This implies that [A] is diagonalizable:   [Φ]^-1 [A][Φ] = [Λ]
Eigenvalues and eigenvectors satisfy the eigenproblem:   ([A] - λi[I]){φ}i = {0}
Eigenvalues satisfy the characteristic equation:   |[A] - λi[I]| = 0
Note: the eigenvectors {φ}i are in the null space of the matrix ([A] - λi[I]).
|[A]| = λ1 λ2 ... λn
SPECTRAL DECOMPOSITION
Matrix [A] is diagonalizable iff it possesses n linearly independent
eigenvectors.
If [A] has distinct eigenvalues, it is diagonalizable.
[ φ11  φ12 ]^-1 [ a11  a12 ] [ φ11  φ12 ]   [ λ1  0  ]
[ φ21  φ22 ]    [ a21  a22 ] [ φ21  φ22 ] = [ 0   λ2 ]
      [ 1  2  3 ]
[A] = [ 4  5  6 ]
      [ 7  8  9 ]
      [ 0.23   0.79   0.41 ]          [ 16.12   0      0 ]
[Φ] = [ 0.53   0.09  -0.82 ]    [Λ] = [ 0      -1.12   0 ]
      [ 0.82  -0.61   0.41 ]          [ 0       0      0 ]
Note: Real nonsymmetric matrices can have complex eigenvalues and
eigenvectors. If so, they occur in complex conjugate pairs.
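A short Python check on the example above (assuming NumPy; the matrix has distinct eigenvalues, so it is diagonalizable):

import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])              # the example matrix above

lam, Phi = np.linalg.eig(A)               # eigenvalues and modal matrix
Lam = np.diag(lam)
assert np.allclose(Phi @ Lam @ np.linalg.inv(Phi), A)   # [A] = [Phi][Lam][Phi]^-1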
SPECIAL CASE
REAL SYMMETRIC MATRICES
Always are diagonalizable.
Always possess real eigenvalues and eigenvectors.
Modal matrix is orthogonal.
[Φ]^T = [Φ]^-1
[Φ]^T [A][Φ] = [Λ]
STRUCTURAL DYNAMICS
[M]{ẍ} + [K]{x} = {0}
Assume: {x} = {φ}e^(iωt)
[M] real symmetric positive definite. [K] real symmetric positive semidefinite.
Multiplying through by [M]^-1 and substituting leads to the eigenvalue problem:
([M]^-1 [K] - λi[I]){φ}i = {0}
The modal matrix simultaneously diagonalizes [M] and [K]:
[Φ]^T [M][Φ] = [I]
[Φ]^T [K][Φ] = [Λ]
Decoupled system in modal coordinates:
{q̈} + [Λ]{q} = [Φ]^T {F}
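A Python/SciPy sketch with hypothetical 2-dof [M] and [K] matrices illustrating the same diagonalization (not from the slides):

import numpy as np
from scipy.linalg import eigh

# hypothetical 2-dof mass and stiffness matrices ([M] positive definite, [K] symmetric)
M = np.array([[2.0, 0.0], [0.0, 1.0]])
K = np.array([[6.0, -2.0], [-2.0, 4.0]])

lam, Phi = eigh(K, M)                       # generalized eigenproblem K phi = lambda M phi
omega = np.sqrt(lam)                        # natural frequencies in rad/s
assert np.allclose(Phi.T @ M @ Phi, np.eye(2))      # mass-normalized modes
assert np.allclose(Phi.T @ K @ Phi, np.diag(lam))   # diagonalized stiffness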
SINGULAR VALUE DECOMPOSITION
Any n × m matrix [A] of rank r can be decomposed into a product of three matrices:
[A] = [U][S][V]^H        [U](n×n), [S](n×m), [V](m×m)
[U]^H [U] = [I],  [V]^H [V] = [I]        [U] and [V] are Unitary
Matrices can be partitioned as:
[U] = [ [U1](n×r)  [U2](n×(n-r)) ],   [V] = [ [V1](m×r)  [V2](m×(m-r)) ]
[A] = [ [U1]  [U2] ] [ [Σ]  [0] ] [ [V1]  [V2] ]^H
                     [ [0]  [0] ]
or
[A] = [U1][Σ][V1]^H
[A] = Σ(i=1 to r) {u1}i σi {v1}i^H
{u1}i is the ith column of [U1]
SINGULAR VALUE DECOMPOSITION
[Σ](r×r) = diag(σ1, σ2, ..., σr)
Singular values are real and satisfy:  σ1 ≥ σ2 ≥ ... ≥ σr > 0
σi², {u1}i  - are eigenvalues and eigenvectors of [A][A]T.
{u2}i       - are eigenvectors of [A][A]T with zero eigenvalues.
σi², {v1}i  - are eigenvalues and eigenvectors of [A]T[A].
{v2}i       - are eigenvectors of [A]T[A] with zero eigenvalues.
[U1] spans the column space of [A].
[U2] spans the column null space of [A].
[V1]T spans the row space of [A].
[V2]T spans the row null space of [A].
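A Python sketch with a hypothetical rank-deficient matrix, showing the economy partition [A] = [U1][S][V1]^H:

import numpy as np

# hypothetical 4 x 3 matrix of rank 2 (third column = first column + second column)
A = np.array([[1., 0., 1.],
              [3., 7., 10.],
              [8., 3., 11.],
              [1., 4., 5.]])

U, s, Vh = np.linalg.svd(A)                                       # [A] = [U][S][V]^H
r = int(np.sum(s > max(A.shape) * np.finfo(float).eps * s[0]))    # numerical rank
U1, S1, V1h = U[:, :r], np.diag(s[:r]), Vh[:r, :]
assert np.allclose(U1 @ S1 @ V1h, A)                              # [A] = [U1][S][V1]^H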
SVD CAN BE USED TO DETERMINE
RANK OF A MATRIX
It is obvious to the casual observer that matrix [A] has a rank of 11.
(Plot: singular value magnitude on a log scale, 10^2 down to 10^-16, versus singular value number; the values drop sharply after the 11th.)
UNFORTUNATELY, IT IS NOT ALWAYS
SO EASY
Singular values for a block Hankel matrix with 640 rows and 1380 columns.
What's the rank?
(Plot: singular value magnitude, roughly 10^1 down to 10^-2, versus singular value number; the values decay gradually with no clear drop-off.)
MAXIMUM SINGULAR VALUE OF FRF
MATRIX VS. FREQUENCY
Gives measure of
participation of
modes in system
response.
Use to rank dynamic
importance of modes.
SOLUTION OF DETERMINED
EQUATIONS
Have as many independent
equations as unknowns.
[A] is square, full rank,
and invertible.
{b} is always in range
space of [A].
[A]n×n {x}n×1 = {b}n×1
{x} = [A]^-1 {b}
A unique solution always
exists.
MATRIX INVERSE
The inverse of a nonsingular matrix [A]
is a matrix [A]-1 that when multiplied by
[A] is the identity matrix.
[A][A]^-1 = [A]^-1[A] = [I]
Properties:
([A][B])^-1 = [B]^-1[A]^-1
([A]^T)^-1 = ([A]^-1)^T
([A]^H)^-1 = ([A]^-1)^H
(k[A])^-1 = (1/k)[A]^-1
Application:
Solve for static deflection:  [K]{x} = {F},  {x} = [K]^-1{F}
A METHOD TO COMPUTE INVERSE
[A]^-1 = Adjoint([A]) / |[A]|
The adjoint of [A] is the matrix with elements equal to the cofactors of [A], transposed.
Let [Mij] be the submatrix of [A] obtained by deleting the ith row and jth column.
The determinant of [Mij] is called a minor of [A].
Cofactor([A])ij = cij = (-1)^(i+j) |Mij|
      [ a  b  c ]
[A] = [ d  e  f ]
      [ g  h  i ]
                     [ +|e f; h i|   -|b c; h i|   +|b c; e f| ]
[A]^-1 = (1/|[A]|)   [ -|d f; g i|   +|a c; g i|   -|a c; d f| ]
                     [ +|d e; g h|   -|a b; g h|   +|a b; d e| ]
There are better methods!
LU DECOMPOSITION
Any nonsingular square matrix [A] can be factored into the product of two matrices:
[A] = [L][U]
[L] is a lower triangular matrix and [U] is an upper triangular matrix.
      [  6   2   4   4 ]         [ 2  0  0  0 ]         [ 3  1  2  2 ]
[A] = [  3   3   6   1 ]   [L] = [ 1  1  0  0 ]   [U] = [ 0  2  4  1 ]
      [ 12   8  21   8 ]         [ 4  2  1  0 ]         [ 0  0  5  2 ]
      [  6   0  10   7 ]         [ 2  1  2  2 ]         [ 0  0  0  4 ]
LU DECOMPOSITION
WHAT'S IT GOOD FOR?
Inverse:   [A]^-1 = ([L][U])^-1 = [U]^-1 [L]^-1
Determinant:   |[A]| = |[L]| |[U]|
Solving a determined set of equations:
[A]{x} = {b}
[L][U]{x} = {b}
Can be written as two sets of equations:
[L]{z} = {b}        Solve using forward substitution
[U]{x} = {z}        Solve using backward substitution
BACKWARD SUBSTITUTION
[U]{x} = {z}
[ 3  1  2  2 ] { x1 }   { 1 }
[ 0  2  4  1 ] { x2 }   { 5 }
[ 0  0  5  2 ] { x3 } = { 2 }
[ 0  0  0  4 ] { x4 }   { 6 }
The last equation implies:  x4 = 6/4
Substitute into the third equation, etc.
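A Python/SciPy sketch of an LU-based solve on a hypothetical determined system (lu_factor/lu_solve perform the factorization and the forward/back substitutions):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# hypothetical determined system [A]{x} = {b}
A = np.array([[4., 3., 2., 1.],
              [2., 4., 3., 2.],
              [1., 2., 4., 3.],
              [1., 1., 2., 4.]])
b = np.array([1., 5., 2., 6.])

lu, piv = lu_factor(A)                      # LU factorization with row pivoting
x = lu_solve((lu, piv), b)                  # forward substitution, then back substitution
assert np.allclose(A @ x, b)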
TEST-ANALYSIS-MODEL DEVELOPMENT
USING STATIC REDUCTION
[M]{ẍ} + [C]{ẋ} + [K]{x} = {F(t)}        n dof in FEM
Objective:
Reduce the mass matrix to the test sensor locations for test-analysis
correlation and still maintain accurate modal properties.
Partition the static equation into sensor dof (a-set) and dof to be reduced out (o-set):
[ [Koo]  [Koa] ] { {xo} }   { {0} }
[ [Kao]  [Kaa] ] { {xa} } = { {F} }
Solve the first equation for {xo}:
{xo} = -[Koo]^-1 [Koa] {xa}
      { xo }   [ -[Koo]^-1 [Koa] ]
{x} = { xa } = [       [I]       ] {xa} = [T]{xa}
[T] is a transformation from {xa} to {x}.
[M]TAM = [T]^T [M][T]
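A Python sketch of the static (Guyan) reduction with hypothetical partitioned matrices (the names Koo, Koa, M are illustrative assumptions):

import numpy as np

# hypothetical partitioned stiffness matrix; `o` = dof reduced out, `a` = sensor dof
Koo = np.array([[4., -1.], [-1., 3.]])
Koa = np.array([[-1., 0.], [0., -2.]])
M   = np.eye(4)                             # full mass matrix, ordered [o-set, a-set]

T = np.vstack([-np.linalg.solve(Koo, Koa),  # {x_o} = -[Koo]^-1 [Koa] {x_a}
               np.eye(2)])                  # {x_a} stays
M_TAM = T.T @ M @ T                         # reduced test-analysis mass matrix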
STATIC REDUCTION OFTEN GIVES
GOOD RESULTS
300 Degrees of Freedom
90 Shell and Bar Elements
17 Modes with Frequencies below 70.0 Hz
Want TAM to be able to Predict
all 17 Modes Under 70.0 Hz.
- denotes master
dof location
20 DOF STATIC TAM CAN ACCURATELY
REPRESENT GPSC FEM MODAL PARAMETERS
IN 0.0 - 70.0 Hz. FREQUENCY RANGE
(Bar chart: absolute frequency error (%), on a 0.0-2.0% scale, versus FEM mode number for the 17 target modes.)
X-33 REPRESENTS A MORE
CHALLENGING PROBLEM
100,000 Dof
833 modes
below 55.0 Hz.
17 TARGET MODES BELOW 25 Hz.
X-33 FEM/TAM FREQUENCY ERROR
1200 dof static TAMs accurately predict most of the target modes below 25.0 Hz., but two are not predicted, and none of the 10 target modes above 25.0 Hz. are predicted accurately.
(Bar chart: frequency error (%) versus target mode number for the TAM5 and MSFC TAMs.)
SOLUTION OF UNDERDETERMINED
EQUATIONS
Have fewer independent
equations than unknowns.
[A] is rectangular with more
columns than rows and is
assumed to be full row rank.
A solution always exists,
but there are infinitely many.
Dim(N([A])) = m - r = m - n
[A]n×m {x}m×1 = {b}n×1,   n < m
{b} is always in column
space of [A].
Suppose {x}1 is a solution,
then if {x}n is any vector in
N([A]),
{x}s = {x}1 + {x}n
is also a solution.
BRUTE FORCE SOLUTION PROCEDURE
FOR UNDERDETERMINED SYSTEMS
[A]n×m {x}m×1 = {b}n×1,   n < m
Define a new vector:  {x} = [A]^T {z}
The original matrix equation becomes:  [A][A]^T {z} = {b}
[A] is full row rank, therefore [A][A]^T is nonsingular.
Solve for {z}:  {z} = ([A][A]^T)^-1 {b}
Substitute:  {x}m = [A]^T {z} = [A]^T ([A][A]^T)^-1 {b}
[A]^+ = [A]^T ([A][A]^T)^-1        Right Generalized Inverse of [A]
Out of the infinitely many solutions, which one is {x}m?
For an arbitrary solution {x}s, {b} = [A]{x}s, so
{x}m = [A]^T ([A][A]^T)^-1 [A]{x}s = [P]{x}s
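A Python sketch of the minimum-norm solution for a hypothetical underdetermined system:

import numpy as np

# hypothetical underdetermined system: 2 equations, 4 unknowns, full row rank
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 3.]])
b = np.array([1., 2.])

x_min = A.T @ np.linalg.solve(A @ A.T, b)   # right generalized inverse solution
assert np.allclose(A @ x_min, b)
assert np.allclose(x_min, np.linalg.pinv(A) @ b)   # same minimum-norm answer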
PROJECTORS
{x}m = [P]{x}s
[P]m×m = [A]^T ([A][A]^T)^-1 [A]
[P]{x}s is the orthogonal projection of the general solution {x}s onto the row space of [A], or column space of [A]^T.
{x}m is the minimum norm (length) solution.
[P] is an orthogonal projector.
[P] is idempotent:  [P][P] = [P]
tr([P]) = rk([P]) = r = n
r eigenvalues of magnitude 1.0, m-r eigenvalues of magnitude 0.0
(Diagram: {x}s decomposes into {x}m, which lies in R([A]T), and a component in N([P]).)
APPLICATION TO TIME DOMAIN
SYSTEM IDENTIFICATION
Identify MIR space station
modal parameters using
response from excitation due
to Space Shuttle docking.
IDENTIFY SYSTEM MARKOV
PARAMETERS USING ONE DATA SET
Measure input and output and form an underdetermined matrix equation:
[Ys][H] = [Yd]
[ ys0        0          0        ...  0 ] { H0 }   { yd0 }
[ ys1        ys0        0        ...  0 ] { H1 }   { yd1 }
[ ys2        ys1        ys0      ...  0 ] { .. } = { ..  }
[ ...        ...        ...      ...    ] { HN }   { yd,nt-1 }
[ ys,nt-1    ys,nt-2    ...      ...    ]
The solution process projects [H] onto the row space of the data set.
[Ys] has 650 rows and 1950 columns in the application.
(Figure: MIR model showing the accelerometer locations used.)
PULSE RESPONSE AT 104014z
PROJECTED ONTO STS-81 DATA
(Plot: pulse response amplitude versus time, 0-10 seconds; FEM prediction overlaid on the docking data.)
SOLUTION OF OVERDETERMINED
EQUATIONS
Have more independent
equations than unknowns.
[A] is rectangular with more
rows than columns and is
assumed to be full column rank.
[A]n×m {x}m×1 = {b}n×1,   n > m
{b} may or may not be
in column space of [A].
If {b} is in R([A]):
A unique solution exists.
If {b} is not in R([A]):
No exact solution exists. But
can find an approximate one.
BRUTE FORCE SOLUTION PROCEDURE
FOR OVERDETERMINED SYSTEMS
[A]n×m {x}m×1 = {b}n×1,   n > m
Premultiply by [A]^T:
([A]^T [A]){x} = [A]^T {b}
([A]^T [A]) is nonsingular, therefore the equation can be solved using previously discussed techniques:
{x} = ([A]^T [A])^-1 [A]^T {b}
[A]^+ = ([A]^T [A])^-1 [A]^T        Left Generalized Inverse of [A]
Plug this solution into the original equation:
[A]{x} = [A]([A]^T [A])^-1 [A]^T {b} = [P]{b} = {b̂}
[P] is an orthogonal projector onto R([A]).
{b̂} is the orthogonal projection of {b} onto R([A]).
BRUTE FORCE SOLUTION PROCEDURE
FOR OVERDETERMINED SYSTEMS
Case I:  {b} ∈ R([A])
{x} = ([A]^T [A])^-1 [A]^T {b}
[P]{b} = {b}
Unique solution to [A]{x} = {b}.
Case II:  {b} ∉ R([A])
{x̂} = ([A]^T [A])^-1 [A]^T {b}
[P]{b} = {b̂}
Unique solution to [A]{x} = {b̂}.
{x̂} minimizes the Euclidean norm (length) of the error vector {e}:
{e} = {b} - [A]{x}
Least-Squares Solution
Works well in the presence of noise.
CURVE FITTING
(Figure: least-squares fit of a curve through scattered data points.)
PREVIOUS BRUTE FORCE TECHNIQUE
FOR LEAST-SQUARES USES THE
NORMAL FORM
([A]^T [A]){x} = [A]^T {b}
Direct inversion of ([A]^T [A]) is costly and inaccurate:  O(n^3)
LU decomposition of ([A]^T [A]) and backward/forward substitution is faster but can still be inaccurate:  O(n^2)
In general, it's best not to use the Normal Form of the equations to obtain the Least-Squares solution.
SOLUTION OF LEAST-SQUARES
PROBLEM USING QR DECOMPOSITION
Operates directly on the matrix equation for a general full column rank matrix:
[A]n×m {x}m×1 = {b}n×1,   n > m
[A] can be uniquely factored in the form:
[A]n×m = [Q]n×n [R]n×m        [Q]^T [Q] = [Q][Q]^T = [I],  [R] upper triangular
[Q][R]{x} = {b}
Premultiply by [Q]^T:
[Q]^T [Q][R]{x} = [R]{x} = [Q]^T {b}
Partition:
      [ [U] ]                              [ [U] ]       [ [Q1]^T ]
[R] = [ [0] ],   [Q] = [ [Q1]  [Q2] ]  so  [ [0] ] {x} = [ [Q2]^T ] {b}
[U]{x} = [Q1]^T {b}        Solve by backsubstitution.
Preferred technique for L.S.:  O(n^2), fast and accurate.
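A Python sketch of the QR route on a hypothetical full-column-rank system:

import numpy as np
from scipy.linalg import solve_triangular

# hypothetical overdetermined system: 6 equations, 3 unknowns, full column rank
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))
b = rng.normal(size=6)

Q, R = np.linalg.qr(A)                      # economy QR: Q is 6 x 3, R is 3 x 3 upper triangular
x = solve_triangular(R, Q.T @ b)            # back substitution for [U]{x} = [Q1]^T {b}
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])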
EXAMPLE - POLYNOMIAL CURVE
FITTING (JUNKINS)
The matrix has 20 rows.
[ 1  t1  t1^2  ...  t1^(n-1) ] { x1 }   { y1 }
[ 1  t2  t2^2  ...  t2^(n-1) ] { x2 }   { y2 }
[ ..  ..   ..  ...    ..     ] { .. } = { .. }
[ 1  tm  tm^2  ...  tm^(n-1) ] { xn }   { ym }
{t1  t2  ...  tm} = {0  1  2  ...  m-1}
{y1  y2  ...  ym} = {1  1  1  ...  1}
(Plot: fractional error versus the number of columns in the 20-row data matrix, for direct inversion of the normal equations, LU decomposition, and QR; the QR solution gives the smallest error.)
MOORE-PENROSE GENERALIZED INVERSE
For EVERY matrix [A]n×m of rank r, an m×n matrix [A]^+ is called a
Generalized Inverse if it satisfies:
[A][A]^+ [A] = [A]
[A]^+ [A][A]^+ = [A]^+
There are infinitely many generalized inverses.
If, in addition, [A]^+ satisfies:
[A]^+ [A] = ([A]^+ [A])^H
[A][A]^+ = ([A][A]^+)^H
then the inverse is UNIQUE and is called the Moore-Penrose Generalized Inverse.
M-P INVERSE MOST ACCURATELY
CALCULATED USING SVD
[A]n×m {x}m×1 = {b}n×1,   [A] has rank r
SVD:
[A] = [ [U1]  [U2] ] [ [Σ]  [0] ] [ [V1]  [V2] ]^H = [U1][Σ][V1]^H
                     [ [0]  [0] ]
The Moore-Penrose inverse is given by:
[A]^+ = [V1][Σ]^-1 [U1]^H
[Σ]^-1 (r×r) = diag(1/σ1, 1/σ2, ..., 1/σr)
SVD is costly but very accurate and stable.
The trick is to determine the correct value of r.
How small is small, when it comes to singular values?
MATLAB truncates singular values at:  tol = max(n, m) · σ1 · 2.22×10^-16
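A Python sketch of the truncated-SVD pseudo-inverse with a hypothetical rank-deficient matrix and the truncation rule quoted above:

import numpy as np

# hypothetical rank-deficient matrix (second row = 2 x first row)
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
b = np.array([1., 2., 3.])

U, s, Vh = np.linalg.svd(A)
tol = max(A.shape) * s[0] * 2.22e-16        # the truncation rule quoted above
r = int(np.sum(s > tol))                    # numerical rank
A_plus = Vh[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # [A]+ = [V1][S]^-1[U1]^H
x_hat = A_plus @ b                          # minimum-norm least-squares solution
assert np.allclose(A @ A_plus @ A, A)       # first Penrose condition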
MOORE-PENROSE GENERALIZED INVERSE
CAN SOLVE MATRIX EQUATIONS FOR
ANY CASE
[A]n×m {x}m×1 = {b}n×1
{x̂} = [A]^+ {b}
The following equivalencies can be derived:
Case I: Minimum norm: n < m and r = rank([A]) = n.
[A]^+ = [A]^T ([A][A]^T)^-1
Case II: Least squares: n > m and r = rank([A]) = m.
[A]^+ = ([A]^T [A])^-1 [A]^T
Case III: General or rank deficient case: r = rank([A]) ≤ min(n, m).
[A]^+ = [V1][Σ]^-1 [U1]^H
RANK DEFICIENT EXAMPLE
      [ 1  2  3  0  0 ]          { 1 }          {  6 }
[A] = [ 0  0  4  5  6 ]    {x} = { 1 }    {b} = { 15 }
      [ 0  0  0  0  7 ]          { 1 }          {  7 }
      [ 1  2  7  5  6 ]          { 1 }          { 21 }
                                 { 1 }
                   { 0.3209 }
                   { 0.6419 }
{x̂} = [A]^+ {b} =  { 1.4651 }
                   { 0.6279 }
                   { 1.0000 }
U =
  1.3032e-01  -4.3184e-01   6.8059e-01  -5.7735e-01
  5.9082e-01   6.8625e-02  -5.5937e-01  -5.7735e-01
  3.3749e-01   8.2273e-01   4.5740e-01  -3.5691e-18
  7.2114e-01  -3.6321e-01   1.2122e-01   5.7735e-01
S =
  1.4570e+01   0            0            0            0
  0            5.8276e+00   0            0            0
  0            0            2.9611e+00   0            0
  0            0            0            3.7220e-16   0
V =
  5.8442e-02  -1.3643e-01   2.7078e-01  -6.9696e-01   6.4722e-01
  1.1688e-01  -2.7286e-01   5.4156e-01  -3.7060e-01  -6.9373e-01
  5.3552e-01  -6.1149e-01   2.2047e-01   4.7939e-01   2.4674e-01
  4.5024e-01  -2.5275e-01  -7.3983e-01  -3.8351e-01  -1.9740e-01
  7.0244e-01   6.8494e-01   1.9349e-01  -1.8459e-16  -1.1620e-16
tol = 1.62e-14
Minimum norm:  ||{x}|| = 2.236,  ||{x̂}|| = 2.014
MATRIX CONDITIONING
[A]n×m {x}m×1 = {b}n×1
Matrix [A] may be considered full rank, but it can still be ill-conditioned.
Ill-conditioning of [A] implies that small changes in the elements of [A] and {b} can cause very large changes in the computed solution {x}.
Condition Number:
κ([A]) = σmax / σmin        1.0 ≤ κ([A]) < ∞   (1.0 is good, large is bad)
The relative error in {x} can be κ times the relative error in the data.
If 1/κ([A]) approaches the computer's floating point precision, the matrix is ill-conditioned.
Regularization can help:
Look for the solution of a well-posed problem in the neighborhood of the ill-posed problem.
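A small Python illustration of ill-conditioning with a hypothetical nearly singular matrix:

import numpy as np

# hypothetical nearly singular matrix
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
b = np.array([2.0, 2.0])

kappa = np.linalg.cond(A)                   # sigma_max / sigma_min, about 4e10 here
x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, b + np.array([0.0, 1e-6]))   # tiny perturbation of the data
print(kappa, np.linalg.norm(x2 - x1))       # the solution changes by a very large amount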
GENERAL PURPOSE SPACECRAFT
ESTIMATE LOADS AT INTERFACE WITH LAUNCH VEHICLE
USING MEASURED RESPONSE AND MARKOV PARAMETERS
6 Inputs - 6 Outputs
Spacecraft - 0.0-300.0 Hz.
Inputs - 0.0-150.0 Hz.
The inverse problem is ill-posed, leading to an ill-conditioned matrix convolution equation:
[H]{U} = {Y}
[H] has more rows than columns and quickly becomes ill-conditioned as more data is used.
(Figure: spacecraft model with interface response locations 45(z), 44(xyz), 49(z), and 48(x).)
COMPUTE LEAST SQUARES SOLUTION
Ill-conditioning of
data matrix [H] results
in unstable computation
of input forces.
COMPUTATION CAN BE REGULARIZED
[H]{U} = {Y}        Matrix Convolution Equation
      [ [h0]        0           ...   0    ]
      [ [h1]        [h0]        ...   0    ]
[H] = [ [h2]        [h1]        ...   ...  ]
      [ ...         ...         ...   0    ]
      [ [h_nt-1]    [h_nt-2]    ...   [h0] ]
{U} = { {u(0)}  {u(1)}  ...  {u(nt-1)} },   {Y} = { {y(0)}  {y(1)}  ...  {y(nt-1)} }
Replace the ill-posed problem with a closely related well-posed problem.
Regularized solution:
([H]^T [H] + λ[I]){U} = [H]^T {Y}
{U} = ([H]^T [H] + λ[I])^-1 [H]^T {Y}
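A Python sketch of this regularized solution using a hypothetical convolution matrix (the data and λ value are illustrative only):

import numpy as np
from scipy.linalg import toeplitz

# hypothetical convolution matrix [H] built from decaying pulse-response samples
n = 200
k = np.arange(n)
h = np.exp(-0.01 * k) * np.cos(0.2 * k)
H = toeplitz(h, np.r_[h[0], np.zeros(n - 1)])   # lower-triangular block of samples

u_true = np.sin(0.05 * k)                       # input to be recovered
Y = H @ u_true + 1e-4 * np.random.default_rng(0).normal(size=n)   # noisy response

lam = 5e-5                                      # regularization parameter (cf. 0.00005 below)
U = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ Y)   # regularized solution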
REGULARIZED FORCE COMPUTATION
(Plot: exact versus predicted interface force time histories, λ = 0.00005.)
MIR SYSTEM IDENTIFICATION
(Figure: MIR configuration showing the Priroda, Kvant-1, Kvant-2, Core, Spektr, Soyuz, Krystall, and Progress modules and the docked Shuttle.)
Identify MIR Markov parameters from the response due to Shuttle docking.
ACCELERATION MEASURED ON
KRISTALL - STS-81
Response data filtered to 5.0 Hz. by NASA.
(Plot: acceleration (g), roughly -0.01 to 0.015, versus time, 10-30 seconds.)
SOLVE FOR MARKOV PARAMETERS
[H][Yr] = [Ye]
    [ [yr0]   [yr1]   [yr2]   ...   [yr,nt-1] ]
    [ 0       [yr0]   [yr1]   ...   [yr,nt-2] ]
[H] [ 0       0       [yr0]   ...   ...       ] = [ [ye0]  [ye1]  ...  [ye,nt-1] ]
    [ ...     ...     ...     ...   [yr0]     ]
[H] has 1,950 rows and 3,300 columns.
Compute the Moore-Penrose inverse of [Yr] using the SVD.
Due to ill-conditioning, the SVD algorithm FAILS TO CONVERGE.
Regularize the data:
Adjust the SVD singular value tolerance.
Add a little artificial noise.
ADD 1.0% RMS NOISE, COMPUTE
PSEUDO-INVERSE
Compute Markov parameters and filter to 5.0 Hz.
(Plot: reconstructed pulse response versus time, 0-10 seconds; FEM prediction overlaid on the result identified from the docking data.)
SUMMARY
We have only scratched the surface of what there is to know
about matrix algebra.
As an experimental or analytical structural dynamicist, you
CANNOT do your work without using matrix analysis.
There is a veritable galaxy of neat applications, many of which
have not been thought of yet.