Partial Differential Equations: Graduate Level Problems
and Solutions
Igor Yanovsky
Disclaimer: This handbook is intended to assist graduate students with qualifying
examination preparation. Please be aware, however, that the handbook might contain,
and almost certainly does contain, typos as well as incorrect or inaccurate solutions. I
cannot be held responsible for any inaccuracies contained in this handbook.
Contents
1 Trigonometric Identities 6
2 Simple Eigenvalue Problem 8
3 Separation of Variables: Quick Guide 9
4 Eigenvalues of the Laplacian: Quick Guide 9
5 First-Order Equations 10
5.1 Quasilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.2 Weak Solutions for Quasilinear Equations . . . . . . . . . . . . . . . . . 12
5.2.1 Conservation Laws and Jump Conditions . . . . . . . . . . . . . 12
5.2.2 Fans and Rarefaction Waves . . . . . . . . . . . . . . . . . . . . . 12
5.3 General Nonlinear Equations . . . . . . . . . . . . . . . . . . . . . . . . 13
5.3.1 Two Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . . 13
5.3.2 Three Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . 13
6 Second-Order Equations 14
6.1 Classification by Characteristics . . . . . . . . . . . . . . . . . . . . . . . 14
6.2 Canonical Forms and General Solutions . . . . . . . . . . . . . . . . . . 14
6.3 Well-Posedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
7 Wave Equation 23
7.1 The Initial Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7.3 Initial/Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . 24
7.4 Duhamel’s Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7.5 The Nonhomogeneous Equation . . . . . . . . . . . . . . . . . . . . . . . 24
7.6 Higher Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
7.6.1 Spherical Means . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
7.6.2 Application to the Cauchy Problem . . . . . . . . . . . . . . . . 26
7.6.3 Three-Dimensional Wave Equation . . . . . . . . . . . . . . . . . 27
7.6.4 Two-Dimensional Wave Equation . . . . . . . . . . . . . . . . . . 28
7.6.5 Huygens’ Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.7 Energy Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.8 Contraction Mapping Principle . . . . . . . . . . . . . . . . . . . . . . . 30
8 Laplace Equation 31
8.1 Green’s Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
8.2 Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
8.3 Polar Laplacian in R2 for Radial Functions . . . . . . . . . . . . . . . . 32
8.4 Spherical Laplacian in R3 and Rn for Radial Functions . . . . . . . . . . 32
8.5 Cylindrical Laplacian in R3 for Radial Functions . . . . . . . . . . . . . 33
8.6 Mean Value Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.7 Maximum Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.8 The Fundamental Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8.9 Representation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
8.10 Green’s Function and the Poisson Kernel . . . . . . . . . . . . . . . . . . 42
8.11 Properties of Harmonic Functions . . . . . . . . . . . . . . . . . . . . . . 44
8.12 Eigenvalues of the Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . 44
9 Heat Equation 45
9.1 The Pure Initial Value Problem . . . . . . . . . . . . . . . . . . . . . . . 45
9.1.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9.1.2 Multi-Index Notation . . . . . . . . . . . . . . . . . . . . . . . . 45
9.1.3 Solution of the Pure Initial Value Problem . . . . . . . . . . . . . 49
9.1.4 Nonhomogeneous Equation . . . . . . . . . . . . . . . . . . . . . 50
9.1.5 Nonhomogeneous Equation with Nonhomogeneous Initial Conditions . . . . 50
9.1.6 The Fundamental Solution . . . . . . . . . . . . . . . . . . . . . 50
10 Schrödinger Equation 52
11 Problems: Quasilinear Equations 54
12 Problems: Shocks 75
13 Problems: General Nonlinear Equations 86
13.1 Two Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.2 Three Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . 93
14 Problems: First-Order Systems 102
15 Problems: Gas Dynamics Systems 127
15.1 Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
15.2 Stationary Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
15.3 Periodic Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
15.4 Energy Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
16 Problems: Wave Equation 139
16.1 The Initial Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 139
16.2 Initial/Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . 141
16.3 Similarity Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
16.4 Traveling Wave Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 156
16.5 Dispersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
16.6 Energy Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
16.7 Wave Equation in 2D and 3D . . . . . . . . . . . . . . . . . . . . . . . . 187
17 Problems: Laplace Equation 196
17.1 Green’s Function and the Poisson Kernel . . . . . . . . . . . . . . . . . . 196
17.2 The Fundamental Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 205
17.3 Radial Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
17.4 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
17.5 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
17.6 Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
17.7 Spherical Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
17.8 Harmonic Extensions, Subharmonic Functions . . . . . . . . . . . . . . . 249
18 Problems: Heat Equation 255
18.1 Heat Equation with Lower Order Terms . . . . . . . . . . . . . . . . . . 263
18.1.1 Heat Equation Energy Estimates . . . . . . . . . . . . . . . . . . 264
19 Contraction Mapping and Uniqueness - Wave 271
20 Contraction Mapping and Uniqueness - Heat 273
21 Problems: Maximum Principle - Laplace and Heat 279
21.1 Heat Equation - Maximum Principle and Uniqueness . . . . . . . . . . . 279
21.2 Laplace Equation - Maximum Principle . . . . . . . . . . . . . . . . . . 281
22 Problems: Separation of Variables - Laplace Equation 282
23 Problems: Separation of Variables - Poisson Equation 302
24 Problems: Separation of Variables - Wave Equation 305
25 Problems: Separation of Variables - Heat Equation 309
26 Problems: Eigenvalues of the Laplacian - Laplace 323
27 Problems: Eigenvalues of the Laplacian - Poisson 333
28 Problems: Eigenvalues of the Laplacian - Wave 338
29 Problems: Eigenvalues of the Laplacian - Heat 346
29.1 Heat Equation with Periodic Boundary Conditions in 2D (with extra terms) . . 360
30 Problems: Fourier Transform 365
31 Laplace Transform 385
32 Linear Functional Analysis 393
32.1 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
32.2 Banach and Hilbert Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 393
32.3 Cauchy-Schwarz Inequality . . . . . . . . . . . . . . . . . . . . . . . . . 393
32.4 Hölder Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
32.5 Minkowski Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
32.6 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
1 Trigonometric Identities
$\cos(a+b) = \cos a\cos b - \sin a\sin b$
$\cos(a-b) = \cos a\cos b + \sin a\sin b$
$\sin(a+b) = \sin a\cos b + \cos a\sin b$
$\sin(a-b) = \sin a\cos b - \cos a\sin b$

$\cos a\cos b = \dfrac{\cos(a+b)+\cos(a-b)}{2}$
$\sin a\cos b = \dfrac{\sin(a+b)+\sin(a-b)}{2}$
$\sin a\sin b = \dfrac{\cos(a-b)-\cos(a+b)}{2}$

$\cos 2t = \cos^2 t - \sin^2 t$, $\quad \sin 2t = 2\sin t\cos t$
$\cos^2\frac{t}{2} = \dfrac{1+\cos t}{2}$, $\quad \sin^2\frac{t}{2} = \dfrac{1-\cos t}{2}$
$1+\tan^2 t = \sec^2 t$, $\quad \cot^2 t + 1 = \csc^2 t$

$\cos x = \dfrac{e^{ix}+e^{-ix}}{2}$, $\quad \sin x = \dfrac{e^{ix}-e^{-ix}}{2i}$
$\cosh x = \dfrac{e^{x}+e^{-x}}{2}$, $\quad \sinh x = \dfrac{e^{x}-e^{-x}}{2}$
$\dfrac{d}{dx}\cosh x = \sinh x$, $\quad \dfrac{d}{dx}\sinh x = \cosh x$, $\quad \cosh^2 x - \sinh^2 x = 1$

$\displaystyle\int \frac{du}{a^2+u^2} = \frac{1}{a}\tan^{-1}\frac{u}{a} + C$, $\qquad \displaystyle\int \frac{du}{\sqrt{a^2-u^2}} = \sin^{-1}\frac{u}{a} + C$

$\displaystyle\int_{-L}^{L} \cos\frac{n\pi x}{L}\cos\frac{m\pi x}{L}\,dx = \begin{cases} 0 & n\ne m\\ L & n=m \end{cases}$
$\displaystyle\int_{-L}^{L} \sin\frac{n\pi x}{L}\sin\frac{m\pi x}{L}\,dx = \begin{cases} 0 & n\ne m\\ L & n=m \end{cases}$
$\displaystyle\int_{-L}^{L} \sin\frac{n\pi x}{L}\cos\frac{m\pi x}{L}\,dx = 0$
$\displaystyle\int_{0}^{L} \cos\frac{n\pi x}{L}\cos\frac{m\pi x}{L}\,dx = \begin{cases} 0 & n\ne m\\ \frac{L}{2} & n=m \end{cases}$
$\displaystyle\int_{0}^{L} \sin\frac{n\pi x}{L}\sin\frac{m\pi x}{L}\,dx = \begin{cases} 0 & n\ne m\\ \frac{L}{2} & n=m \end{cases}$
$\displaystyle\int_{0}^{L} e^{inx}\,\overline{e^{imx}}\,dx = \begin{cases} 0 & n\ne m\\ L & n=m \end{cases}$
$\displaystyle\int_{0}^{L} e^{inx}\,dx = \begin{cases} 0 & n\ne 0\\ L & n=0 \end{cases}$

$\displaystyle\int \sin^2 x\,dx = \frac{x}{2} - \frac{\sin x\cos x}{2}$, $\qquad \displaystyle\int \cos^2 x\,dx = \frac{x}{2} + \frac{\sin x\cos x}{2}$
$\displaystyle\int \tan^2 x\,dx = \tan x - x$, $\qquad \displaystyle\int \sin x\cos x\,dx = -\frac{\cos^2 x}{2}$

$\ln(xy) = \ln x + \ln y$, $\quad \ln\dfrac{x}{y} = \ln x - \ln y$, $\quad \ln x^r = r\ln x$
$\displaystyle\int \ln x\,dx = x\ln x - x$, $\qquad \displaystyle\int x\ln x\,dx = \frac{x^2}{2}\ln x - \frac{x^2}{4}$

$\displaystyle\int_{\mathbb{R}} e^{-z^2}\,dz = \sqrt{\pi}$, $\qquad \displaystyle\int_{\mathbb{R}} e^{-\frac{z^2}{2}}\,dz = \sqrt{2\pi}$

$A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}, \qquad A^{-1} = \dfrac{1}{\det(A)}\begin{pmatrix} d & -b\\ -c & a \end{pmatrix}$
2 Simple Eigenvalue Problem
$X'' + \lambda X = 0$

Boundary conditions | Eigenvalues $\lambda_n$ | Eigenfunctions $X_n$
$X(0)=X(L)=0$ | $\left(\frac{n\pi}{L}\right)^2$ | $\sin\frac{n\pi}{L}x$, $\ n=1,2,\dots$
$X(0)=X'(L)=0$ | $\left(\frac{(n-\frac12)\pi}{L}\right)^2$ | $\sin\frac{(n-\frac12)\pi}{L}x$, $\ n=1,2,\dots$
$X'(0)=X(L)=0$ | $\left(\frac{(n-\frac12)\pi}{L}\right)^2$ | $\cos\frac{(n-\frac12)\pi}{L}x$, $\ n=1,2,\dots$
$X'(0)=X'(L)=0$ | $\left(\frac{n\pi}{L}\right)^2$ | $\cos\frac{n\pi}{L}x$, $\ n=0,1,2,\dots$
$X(0)=X(L),\ X'(0)=X'(L)$ | $\left(\frac{2n\pi}{L}\right)^2$ | $\sin\frac{2n\pi}{L}x$, $\ n=1,2,\dots$; $\ \cos\frac{2n\pi}{L}x$, $\ n=0,1,2,\dots$
$X(-L)=X(L),\ X'(-L)=X'(L)$ | $\left(\frac{n\pi}{L}\right)^2$ | $\sin\frac{n\pi}{L}x$, $\ n=1,2,\dots$; $\ \cos\frac{n\pi}{L}x$, $\ n=0,1,2,\dots$

$X'''' - \lambda X = 0$

Boundary conditions | Eigenvalues $\lambda_n$ | Eigenfunctions $X_n$
$X(0)=X(L)=0,\ X''(0)=X''(L)=0$ | $\left(\frac{n\pi}{L}\right)^4$ | $\sin\frac{n\pi}{L}x$, $\ n=1,2,\dots$
$X'(0)=X'(L)=0,\ X'''(0)=X'''(L)=0$ | $\left(\frac{n\pi}{L}\right)^4$ | $\cos\frac{n\pi}{L}x$, $\ n=0,1,2,\dots$
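As a quick sanity check (my addition, not part of the handbook), the sketch below uses SymPy to confirm that one tabulated eigenpair for $X'' + \lambda X = 0$ satisfies the ODE and the stated boundary conditions; the mode number $n = 3$ is an arbitrary illustrative choice.

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.Integer(3)  # arbitrary mode number for the check

# Case X(0) = X'(L) = 0: lambda_n = ((n - 1/2)*pi/L)^2, X_n = sin((n - 1/2)*pi*x/L)
lam = ((n - sp.Rational(1, 2)) * sp.pi / L) ** 2
X = sp.sin((n - sp.Rational(1, 2)) * sp.pi * x / L)

ode_residual = sp.simplify(sp.diff(X, x, 2) + lam * X)   # should be 0
bc_left = X.subs(x, 0)                                    # X(0) = 0
bc_right = sp.simplify(sp.diff(X, x).subs(x, L))          # X'(L) = 0

print(ode_residual, bc_left, bc_right)   # expect: 0 0 0
```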
3 Separation of Variables: Quick Guide

Laplace Equation: $\Delta u = 0$.
$$\frac{X''(x)}{X(x)} = -\frac{Y''(y)}{Y(y)} = -\lambda, \qquad X'' + \lambda X = 0.$$
$$\frac{X''(t)}{X(t)} = -\frac{Y''(\theta)}{Y(\theta)} = \lambda, \qquad Y''(\theta) + \lambda Y(\theta) = 0.$$

Wave Equation: $u_{tt} - u_{xx} = 0$.
$$\frac{X''(x)}{X(x)} = \frac{T''(t)}{T(t)} = -\lambda, \qquad X'' + \lambda X = 0.$$

$u_{tt} + 3u_t + u = u_{xx}$.
$$\frac{T''}{T} + 3\frac{T'}{T} + 1 = \frac{X''}{X} = -\lambda, \qquad X'' + \lambda X = 0.$$

$u_{tt} - u_{xx} + u = 0$.
$$\frac{T''}{T} + 1 = \frac{X''}{X} = -\lambda, \qquad X'' + \lambda X = 0.$$

$u_{tt} + \mu u_t = c^2 u_{xx} + \beta u_{xxt}$, $(\beta > 0)$.
$$\frac{X''}{X} = -\lambda, \qquad \frac{1}{c^2}\frac{T''}{T} + \frac{\mu}{c^2}\frac{T'}{T} = \Big(1 + \frac{\beta}{c^2}\frac{T'}{T}\Big)\frac{X''}{X}.$$

4th Order: $u_{tt} = -k\,u_{xxxx}$.
$$-\frac{X''''}{X} = \frac{1}{k}\frac{T''}{T} = -\lambda, \qquad X'''' - \lambda X = 0.$$

Heat Equation: $u_t = ku_{xx}$.
$$\frac{T'}{T} = k\frac{X''}{X} = -\lambda, \qquad X'' + \frac{\lambda}{k}X = 0.$$

4th Order: $u_t = -u_{xxxx}$.
$$\frac{T'}{T} = -\frac{X''''}{X} = -\lambda, \qquad X'''' - \lambda X = 0.$$

4 Eigenvalues of the Laplacian: Quick Guide

Laplace Equation: $u_{xx} + u_{yy} + \lambda u = 0$.
$$\frac{X''}{X} + \frac{Y''}{Y} + \lambda = 0 \quad (\lambda = \mu^2 + \nu^2), \qquad X'' + \mu^2 X = 0, \quad Y'' + \nu^2 Y = 0.$$

$u_{xx} + u_{yy} + k^2 u = 0$.
$$-\frac{X''}{X} = \frac{Y''}{Y} + k^2 = c^2, \qquad X'' + c^2 X = 0, \quad Y'' + (k^2 - c^2)Y = 0.$$

$u_{xx} + u_{yy} + k^2 u = 0$.
$$-\frac{Y''}{Y} = \frac{X''}{X} + k^2 = c^2, \qquad Y'' + c^2 Y = 0, \quad X'' + (k^2 - c^2)X = 0.$$
5 First-Order Equations
5.1 Quasilinear Equations
Consider the Cauchy problem for the quasilinear equation in two variables
a(x, y, u)ux + b(x, y, u)uy = c(x, y, u),
with Γ parameterized by (f(s), g(s), h(s)). The characteristic equations are
$$\frac{dx}{dt} = a(x,y,z), \qquad \frac{dy}{dt} = b(x,y,z), \qquad \frac{dz}{dt} = c(x,y,z),$$
with initial conditions
x(s, 0) = f(s), y(s, 0) = g(s), z(s, 0) = h(s).
In a quasilinear case, the characteristic equations for $dx/dt$ and $dy/dt$ need not decouple from
the $dz/dt$ equation; this means that we must take the $z$ values into account even to find
the projected characteristic curves in the $xy$-plane. In particular, this allows for the
possibility that the projected characteristics may cross each other.
The condition for solving for $s$ and $t$ in terms of $x$ and $y$ requires that the Jacobian matrix be nonsingular:
$$J \equiv \begin{vmatrix} x_s & y_s \\ x_t & y_t \end{vmatrix} = x_sy_t - y_sx_t \ne 0.$$
In particular, at $t = 0$ we obtain the condition
$$f'(s)\cdot b(f(s), g(s), h(s)) - g'(s)\cdot a(f(s), g(s), h(s)) \ne 0.$$
Burgers’ Equation. Solve the Cauchy problem
$$u_t + uu_x = 0, \qquad u(x,0) = h(x). \qquad (5.1)$$
The characteristic equations are
$$\frac{dx}{dt} = z, \qquad \frac{dy}{dt} = 1, \qquad \frac{dz}{dt} = 0,$$
and $\Gamma$ may be parametrized by $(s, 0, h(s))$:
$$x = h(s)t + s, \qquad y = t, \qquad z = h(s).$$
u(x, y) = h(x − uy) (5.2)
The characteristic projection in the $xt$-plane ($y$ and $t$ are interchanged here) passing through the point $(s, 0)$ is the line
$$x = h(s)t + s,$$
along which $u$ has the constant value $u = h(s)$. Two characteristics $x = h(s_1)t + s_1$ and $x = h(s_2)t + s_2$ intersect at a point $(x, t)$ with
$$t = -\frac{s_2 - s_1}{h(s_2) - h(s_1)}.$$
From (5.2), we have
$$u_x = h'(s)(1 - u_xt) \quad\Rightarrow\quad u_x = \frac{h'(s)}{1 + h'(s)t}.$$
Hence for $h'(s) < 0$, $u_x$ becomes infinite at the positive time
$$t = \frac{-1}{h'(s)}.$$
The smallest $t$ for which this happens corresponds to the value $s = s_0$ at which $h'(s)$ has a minimum (i.e. $-h'(s)$ has a maximum). At time $T = -1/h'(s_0)$ the solution $u$ experiences a “gradient catastrophe”.
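To make the breaking time concrete, here is a small numerical sketch (my addition, not from the handbook) that evaluates $T = -1/\min_s h'(s)$ for a sample profile $h(s) = e^{-s^2}$; the grid and the profile are illustrative choices only.

```python
import numpy as np

# Sample initial profile for Burgers' equation u_t + u u_x = 0 (illustrative choice).
h = lambda s: np.exp(-s**2)

s = np.linspace(-5.0, 5.0, 20001)
hp = np.gradient(h(s), s)          # numerical h'(s)

i0 = np.argmin(hp)                 # s_0: where h'(s) is most negative
T = -1.0 / hp[i0]                  # first time characteristics cross

print(f"s0 ~ {s[i0]:.4f}, h'(s0) ~ {hp[i0]:.4f}, breaking time T ~ {T:.4f}")
# analytic check: h'(s) = -2 s exp(-s^2) has its minimum at s = 1/sqrt(2),
# so T = sqrt(2) * exp(1/2) / 2 ~ 1.166
```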
5.2 Weak Solutions for Quasilinear Equations
5.2.1 Conservation Laws and Jump Conditions
Consider shocks for an equation
ut + f(u)x = 0, (5.3)
where f is a smooth function of u. If we integrate (5.3) with respect to x for a ≤ x ≤ b,
we obtain
$$\frac{d}{dt}\int_a^b u(x,t)\,dx + f(u(b,t)) - f(u(a,t)) = 0. \qquad (5.4)$$
This is an example of a conservation law. Notice that (5.4) implies (5.3) if u is C1, but
(5.4) makes sense for more general u.
Consider a solution of (5.4) that, for fixed t, has a jump discontinuity at x = ξ(t).
We assume that u, ux, and ut are continuous up to ξ. Also, we assume that ξ(t) is C1
in t.
Taking $a < \xi(t) < b$ in (5.4), we obtain
$$\frac{d}{dt}\Big(\int_a^{\xi} u\,dx + \int_{\xi}^{b} u\,dx\Big) + f(u(b,t)) - f(u(a,t))$$
$$= \xi'(t)u_l(\xi(t),t) - \xi'(t)u_r(\xi(t),t) + \int_a^{\xi} u_t(x,t)\,dx + \int_{\xi}^{b} u_t(x,t)\,dx + f(u(b,t)) - f(u(a,t)) = 0,$$
where $u_l$ and $u_r$ denote the limiting values of $u$ from the left and right sides of the shock.
Letting $a \uparrow \xi(t)$ and $b \downarrow \xi(t)$, we get the Rankine-Hugoniot jump condition:
$$\xi'(t)(u_l - u_r) + f(u_r) - f(u_l) = 0, \qquad \xi'(t) = \frac{f(u_r) - f(u_l)}{u_r - u_l}.$$
5.2.2 Fans and Rarefaction Waves
For Burgers’ equation
$$u_t + \Big(\frac{1}{2}u^2\Big)_x = 0,$$
we have $f'(u) = u$, so $f'\big(\tilde u(\tfrac{x}{t})\big) = \tfrac{x}{t}$ gives $\tilde u\big(\tfrac{x}{t}\big) = \tfrac{x}{t}$.
For a rarefaction fan emanating from $(s, 0)$ in the $xt$-plane, we have:
$$u(x,t) = \begin{cases} u_l, & \frac{x-s}{t} \le f'(u_l) = u_l,\\ \frac{x-s}{t}, & u_l \le \frac{x-s}{t} \le u_r,\\ u_r, & \frac{x-s}{t} \ge f'(u_r) = u_r. \end{cases}$$
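A minimal sketch (my addition, not from the handbook) of the entropy solution of the Riemann problem for Burgers’ equation, combining the jump condition above with the rarefaction fan; the values of $u_l$, $u_r$, the shock location $s_0$, and the evaluation points are illustrative.

```python
import numpy as np

def burgers_riemann(x, t, ul, ur, s0=0.0):
    """Entropy solution of u_t + (u^2/2)_x = 0 with u = ul for x < s0, ur for x > s0."""
    xi = (x - s0) / t
    if ul > ur:                      # shock; Rankine-Hugoniot speed
        speed = 0.5 * (ul + ur)      # (f(ur) - f(ul)) / (ur - ul) for f(u) = u^2/2
        return np.where(xi < speed, ul, ur)
    # rarefaction fan between the characteristics of speed ul and ur
    return np.where(xi <= ul, ul, np.where(xi >= ur, ur, xi))

x = np.linspace(-2, 2, 9)
print(burgers_riemann(x, t=1.0, ul=1.0, ur=0.0))   # shock moving with speed 1/2
print(burgers_riemann(x, t=1.0, ul=0.0, ur=1.0))   # rarefaction fan
```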
5.3 General Nonlinear Equations
5.3.1 Two Spatial Dimensions
Write a general nonlinear equation F(x, y, u, ux, uy) = 0 as
F(x, y, z, p, q) = 0.
$\Gamma$ is parameterized by
$$\Gamma:\quad \big(\underbrace{f(s)}_{x(s,0)},\ \underbrace{g(s)}_{y(s,0)},\ \underbrace{h(s)}_{z(s,0)},\ \underbrace{\phi(s)}_{p(s,0)},\ \underbrace{\psi(s)}_{q(s,0)}\big).$$
We need to complete $\Gamma$ to a strip. Find $\phi(s)$ and $\psi(s)$, the initial conditions for $p(s,t)$ and $q(s,t)$, respectively:
• $F(f(s), g(s), h(s), \phi(s), \psi(s)) = 0$
• $h'(s) = \phi(s)f'(s) + \psi(s)g'(s)$
The characteristic equations are
$$\frac{dx}{dt} = F_p, \qquad \frac{dy}{dt} = F_q, \qquad \frac{dz}{dt} = pF_p + qF_q, \qquad \frac{dp}{dt} = -F_x - F_zp, \qquad \frac{dq}{dt} = -F_y - F_zq.$$
We also need the Jacobian condition. That is, in order to solve the Cauchy problem in a neighborhood of $\Gamma$, the following condition must be satisfied:
$$f'(s)\cdot F_q[f, g, h, \phi, \psi](s) - g'(s)\cdot F_p[f, g, h, \phi, \psi](s) \ne 0.$$
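The characteristic system above can also be integrated numerically; the sketch below (my illustration, not part of the handbook) uses SciPy to march one characteristic strip for the sample equation $F = p^2 + q^2 - 1$ (the eikonal equation $u_x^2 + u_y^2 = 1$) with data $u = 0$ on the $x$-axis. The choice of $F$, the initial strip, and the solver tolerances are all assumptions made for the illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic ODEs for F(x, y, z, p, q) = p^2 + q^2 - 1 (eikonal equation),
# with data u = 0 on the x-axis.
def rhs(t, w):
    x, y, z, p, q = w
    Fp, Fq = 2 * p, 2 * q
    Fx = Fy = Fz = 0.0
    return [Fp, Fq, p * Fp + q * Fq, -Fx - Fz * p, -Fy - Fz * q]

s = 0.3                                   # label of one characteristic
# initial strip: x = s, y = 0, z = 0; phi = 0, psi = 1 solve F = 0 and h' = phi f' + psi g'
w0 = [s, 0.0, 0.0, 0.0, 1.0]

sol = solve_ivp(rhs, (0.0, 1.0), w0, dense_output=True, rtol=1e-9)
x, y, z, p, q = sol.y[:, -1]
print(f"x={x:.3f}, y={y:.3f}, u={z:.3f}")   # expect u = y (distance from the x-axis)
```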
5.3.2 Three Spatial Dimensions
Write a general nonlinear equation F(x1, x2, x3, u, ux1, ux2, ux3 ) = 0 as
F(x1, x2, x3, z, p1, p2, p3) = 0.
$\Gamma$ is parameterized by
$$\Gamma:\quad \big(\underbrace{f_1(s_1,s_2)}_{x_1(s_1,s_2,0)},\ \underbrace{f_2(s_1,s_2)}_{x_2(s_1,s_2,0)},\ \underbrace{f_3(s_1,s_2)}_{x_3(s_1,s_2,0)},\ \underbrace{h(s_1,s_2)}_{z(s_1,s_2,0)},\ \underbrace{\phi_1(s_1,s_2)}_{p_1(s_1,s_2,0)},\ \underbrace{\phi_2(s_1,s_2)}_{p_2(s_1,s_2,0)},\ \underbrace{\phi_3(s_1,s_2)}_{p_3(s_1,s_2,0)}\big).$$
We need to complete $\Gamma$ to a strip. Find $\phi_1(s_1,s_2)$, $\phi_2(s_1,s_2)$, and $\phi_3(s_1,s_2)$, the initial conditions for $p_1(s_1,s_2,t)$, $p_2(s_1,s_2,t)$, and $p_3(s_1,s_2,t)$, respectively:
• $F\big(f_1(s_1,s_2), f_2(s_1,s_2), f_3(s_1,s_2), h(s_1,s_2), \phi_1, \phi_2, \phi_3\big) = 0$
• $\dfrac{\partial h}{\partial s_1} = \phi_1\dfrac{\partial f_1}{\partial s_1} + \phi_2\dfrac{\partial f_2}{\partial s_1} + \phi_3\dfrac{\partial f_3}{\partial s_1}$
• $\dfrac{\partial h}{\partial s_2} = \phi_1\dfrac{\partial f_1}{\partial s_2} + \phi_2\dfrac{\partial f_2}{\partial s_2} + \phi_3\dfrac{\partial f_3}{\partial s_2}$
The characteristic equations are
$$\frac{dx_1}{dt} = F_{p_1}, \qquad \frac{dx_2}{dt} = F_{p_2}, \qquad \frac{dx_3}{dt} = F_{p_3}, \qquad \frac{dz}{dt} = p_1F_{p_1} + p_2F_{p_2} + p_3F_{p_3},$$
$$\frac{dp_1}{dt} = -F_{x_1} - p_1F_z, \qquad \frac{dp_2}{dt} = -F_{x_2} - p_2F_z, \qquad \frac{dp_3}{dt} = -F_{x_3} - p_3F_z.$$
6 Second-Order Equations
6.1 Classification by Characteristics
Consider the second-order equation in which the derivatives of second-order all occur
linearly, with coefficients only depending on the independent variables:
a(x, y)uxx + b(x, y)uxy + c(x, y)uyy = d(x, y, u, ux, uy). (6.1)
The characteristic equation is
$$\frac{dy}{dx} = \frac{b \pm \sqrt{b^2 - 4ac}}{2a}.$$
• $b^2 - 4ac > 0$ ⇒ two characteristics, and (6.1) is called hyperbolic;
• $b^2 - 4ac = 0$ ⇒ one characteristic, and (6.1) is called parabolic;
• $b^2 - 4ac < 0$ ⇒ no characteristics, and (6.1) is called elliptic.
These definitions are all taken at a point $x_0 \in \mathbb{R}^2$; unless $a$, $b$, and $c$ are all constant, the type may change with the point $x_0$.
6.2 Canonical Forms and General Solutions
➀ uxx − uyy = 0 is hyperbolic (one-dimensional wave equation).
➁ uxx − uy = 0 is parabolic (one-dimensional heat equation).
➂ uxx + uyy = 0 is elliptic (two-dimensional Laplace equation).
By the introduction of new coordinates $\mu$ and $\eta$ in place of $x$ and $y$, the equation (6.1) may be transformed so that its principal part takes the form ➀, ➁, or ➂.
If (6.1) is hyperbolic, parabolic, or elliptic, there exists a change of variables $\mu(x,y)$ and $\eta(x,y)$ under which (6.1) becomes, respectively,
$$u_{\mu\eta} = \tilde d(\mu, \eta, u, u_\mu, u_\eta) \quad\Leftrightarrow\quad u_{\bar x\bar x} - u_{\bar y\bar y} = \bar d(\bar x, \bar y, u, u_{\bar x}, u_{\bar y}),$$
$$u_{\mu\mu} = \tilde d(\mu, \eta, u, u_\mu, u_\eta),$$
$$u_{\mu\mu} + u_{\eta\eta} = \tilde d(\mu, \eta, u, u_\mu, u_\eta).$$
Example 1. Reduce to canonical form and find the general solution:
uxx + 5uxy + 6uyy = 0. (6.2)
Proof. $a = 1$, $b = 5$, $c = 6$ ⇒ $b^2 - 4ac = 1 > 0$ ⇒ hyperbolic ⇒ two characteristics.
The characteristics are found by solving
$$\frac{dy}{dx} = \frac{5 \pm 1}{2} = 3,\ 2$$
to find $y = 3x + c_1$ and $y = 2x + c_2$.
Let $\mu(x,y) = 3x - y$, $\eta(x,y) = 2x - y$. Then
$$\mu_x = 3, \quad \eta_x = 2, \quad \mu_y = -1, \quad \eta_y = -1.$$
With $u = u(\mu(x,y), \eta(x,y))$:
$$u_x = u_\mu\mu_x + u_\eta\eta_x = 3u_\mu + 2u_\eta,$$
$$u_y = u_\mu\mu_y + u_\eta\eta_y = -u_\mu - u_\eta,$$
$$u_{xx} = (3u_\mu + 2u_\eta)_x = 3(u_{\mu\mu}\mu_x + u_{\mu\eta}\eta_x) + 2(u_{\eta\mu}\mu_x + u_{\eta\eta}\eta_x) = 9u_{\mu\mu} + 12u_{\mu\eta} + 4u_{\eta\eta},$$
$$u_{xy} = (3u_\mu + 2u_\eta)_y = 3(u_{\mu\mu}\mu_y + u_{\mu\eta}\eta_y) + 2(u_{\eta\mu}\mu_y + u_{\eta\eta}\eta_y) = -3u_{\mu\mu} - 5u_{\mu\eta} - 2u_{\eta\eta},$$
$$u_{yy} = -(u_\mu + u_\eta)_y = -(u_{\mu\mu}\mu_y + u_{\mu\eta}\eta_y + u_{\eta\mu}\mu_y + u_{\eta\eta}\eta_y) = u_{\mu\mu} + 2u_{\mu\eta} + u_{\eta\eta}.$$
Inserting these expressions into (6.2) and simplifying, we obtain
$$u_{\mu\eta} = 0, \qquad \text{the canonical form},$$
$$u_\mu = f(\mu), \qquad u = F(\mu) + G(\eta),$$
$$u(x,y) = F(3x - y) + G(2x - y), \qquad \text{the general solution.}$$
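As a quick symbolic check (my addition), one can verify that $u = F(3x-y) + G(2x-y)$ satisfies $u_{xx} + 5u_{xy} + 6u_{yy} = 0$ for arbitrary smooth $F$ and $G$:

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

u = F(3*x - y) + G(2*x - y)
residual = sp.diff(u, x, 2) + 5*sp.diff(u, x, y) + 6*sp.diff(u, y, 2)
print(sp.simplify(residual))   # expect 0
```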
Example 2. Reduce to canonical form and find the general solution:
$$y^2u_{xx} - 2yu_{xy} + u_{yy} = u_x + 6y. \qquad (6.3)$$
Proof. $a = y^2$, $b = -2y$, $c = 1$ ⇒ $b^2 - 4ac = 0$ ⇒ parabolic ⇒ one characteristic.
The characteristic is found by solving
$$\frac{dy}{dx} = \frac{-2y}{2y^2} = -\frac{1}{y}$$
to find $-\frac{y^2}{2} + c = x$.
Let $\mu = \frac{y^2}{2} + x$. We must choose a second function $\eta(x,y)$ so that $\eta$ is not parallel to $\mu$. Choose $\eta(x,y) = y$. Then
$$\mu_x = 1, \quad \eta_x = 0, \quad \mu_y = y, \quad \eta_y = 1.$$
With $u = u(\mu(x,y), \eta(x,y))$:
$$u_x = u_\mu\mu_x + u_\eta\eta_x = u_\mu,$$
$$u_y = u_\mu\mu_y + u_\eta\eta_y = yu_\mu + u_\eta,$$
$$u_{xx} = (u_\mu)_x = u_{\mu\mu}\mu_x + u_{\mu\eta}\eta_x = u_{\mu\mu},$$
$$u_{xy} = (u_\mu)_y = u_{\mu\mu}\mu_y + u_{\mu\eta}\eta_y = yu_{\mu\mu} + u_{\mu\eta},$$
$$u_{yy} = (yu_\mu + u_\eta)_y = u_\mu + y(u_{\mu\mu}\mu_y + u_{\mu\eta}\eta_y) + (u_{\eta\mu}\mu_y + u_{\eta\eta}\eta_y) = u_\mu + y^2u_{\mu\mu} + 2yu_{\mu\eta} + u_{\eta\eta}.$$
Inserting these expressions into (6.3) and simplifying, we obtain
$$u_{\eta\eta} = 6y,$$
$$u_{\eta\eta} = 6\eta, \qquad \text{the canonical form},$$
$$u_\eta = 3\eta^2 + f(\mu), \qquad u = \eta^3 + \eta f(\mu) + g(\mu),$$
$$u(x,y) = y^3 + y\,f\Big(\frac{y^2}{2} + x\Big) + g\Big(\frac{y^2}{2} + x\Big), \qquad \text{the general solution.}$$
Problem (F’03, #4). Find the characteristics of the partial differential equation
$$xu_{xx} + (x - y)u_{xy} - yu_{yy} = 0, \qquad x > 0,\ y > 0, \qquad (6.4)$$
and then show that it can be transformed into the canonical form
$$(\xi^2 + 4\eta)u_{\xi\eta} + \xi u_\eta = 0,$$
where $\xi$ and $\eta$ are suitably chosen canonical coordinates. Use this to obtain the general solution in the form
$$u(\xi, \eta) = f(\xi) + \int^\eta \frac{g(\eta')\,d\eta'}{(\xi^2 + 4\eta')^{\frac12}},$$
where $f$ and $g$ are arbitrary functions of $\xi$ and $\eta$.
Proof. $a = x$, $b = x - y$, $c = -y$ ⇒ $b^2 - 4ac = (x-y)^2 + 4xy > 0$ for $x > 0$, $y > 0$ ⇒ hyperbolic ⇒ two characteristics.
➀ The characteristics are found by solving
$$\frac{dy}{dx} = \frac{b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{x - y \pm \sqrt{(x-y)^2 + 4xy}}{2x} = \frac{x - y \pm (x+y)}{2x} = \begin{cases} \dfrac{2x}{2x} = 1,\\[4pt] \dfrac{-2y}{2x} = -\dfrac{y}{x}, \end{cases}$$
which gives
$$y = x + c_1; \qquad \frac{dy}{y} = -\frac{dx}{x}, \quad \ln y = \ln x^{-1} + \tilde c_2, \quad y = \frac{c_2}{x}.$$
➁ Let $\mu = x - y$ and $\eta = xy$. Then
$$\mu_x = 1, \quad \eta_x = y, \quad \mu_y = -1, \quad \eta_y = x.$$
With $u = u(\mu(x,y), \eta(x,y))$:
$$u_x = u_\mu\mu_x + u_\eta\eta_x = u_\mu + yu_\eta,$$
$$u_y = u_\mu\mu_y + u_\eta\eta_y = -u_\mu + xu_\eta,$$
$$u_{xx} = (u_\mu + yu_\eta)_x = u_{\mu\mu}\mu_x + u_{\mu\eta}\eta_x + y(u_{\eta\mu}\mu_x + u_{\eta\eta}\eta_x) = u_{\mu\mu} + 2yu_{\mu\eta} + y^2u_{\eta\eta},$$
$$u_{xy} = (u_\mu + yu_\eta)_y = u_{\mu\mu}\mu_y + u_{\mu\eta}\eta_y + u_\eta + y(u_{\eta\mu}\mu_y + u_{\eta\eta}\eta_y) = -u_{\mu\mu} + xu_{\mu\eta} + u_\eta - yu_{\eta\mu} + xyu_{\eta\eta},$$
$$u_{yy} = (-u_\mu + xu_\eta)_y = -u_{\mu\mu}\mu_y - u_{\mu\eta}\eta_y + x(u_{\eta\mu}\mu_y + u_{\eta\eta}\eta_y) = u_{\mu\mu} - 2xu_{\mu\eta} + x^2u_{\eta\eta}.$$
Inserting these expressions into (6.4), we obtain
$$x(u_{\mu\mu} + 2yu_{\mu\eta} + y^2u_{\eta\eta}) + (x-y)(-u_{\mu\mu} + xu_{\mu\eta} + u_\eta - yu_{\eta\mu} + xyu_{\eta\eta}) - y(u_{\mu\mu} - 2xu_{\mu\eta} + x^2u_{\eta\eta}) = 0,$$
$$(x^2 + 2xy + y^2)u_{\mu\eta} + (x-y)u_\eta = 0,$$
$$\big((x-y)^2 + 4xy\big)u_{\mu\eta} + (x-y)u_\eta = 0,$$
$$(\mu^2 + 4\eta)u_{\mu\eta} + \mu u_\eta = 0, \qquad \text{the canonical form.}$$
➂ We need to integrate twice to get the general solution:
$$(\mu^2 + 4\eta)(u_\eta)_\mu + \mu u_\eta = 0,$$
$$\frac{(u_\eta)_\mu}{u_\eta}\,d\mu = -\frac{\mu}{\mu^2 + 4\eta}\,d\mu,$$
$$\ln u_\eta = -\frac{1}{2}\ln(\mu^2 + 4\eta) + \tilde g(\eta),$$
$$\ln u_\eta = \ln(\mu^2 + 4\eta)^{-\frac12} + \tilde g(\eta),$$
$$u_\eta = \frac{g(\eta)}{(\mu^2 + 4\eta)^{\frac12}},$$
$$u(\mu, \eta) = f(\mu) + \int \frac{g(\eta)\,d\eta}{(\mu^2 + 4\eta)^{\frac12}}, \qquad \text{the general solution.}$$
6.3 Well-Posedness
Problem (S’99, #2). In R2 consider the unit square Ω defined by 0 ≤ x, y ≤ 1.
Consider
a) ux + uyy = 0;
b) uxx + uyy = 0;
c) uxx − uyy = 0.
Prescribe data for each problem separately on the boundary of Ω so that each of these
problems is well-posed. Justify your answers.
Proof. • The initial/boundary value problem for the HEAT EQUATION is well-posed:
$$\begin{cases} u_t = \Delta u & x \in \Omega,\ t > 0,\\ u(x,0) = g(x) & x \in \Omega,\\ u(x,t) = 0 & x \in \partial\Omega,\ t > 0. \end{cases}$$
Existence: by eigenfunction expansion.
Uniqueness and continuous dependence on the data: by the maximum principle.
The method of eigenfunction expansion and the maximum principle give well-posedness for more general problems:
$$\begin{cases} u_t = \Delta u + f(x,t) & x \in \Omega,\ t > 0,\\ u(x,0) = g(x) & x \in \Omega,\\ u(x,t) = h(x,t) & x \in \partial\Omega,\ t > 0. \end{cases}$$
It is also possible to replace the Dirichlet boundary condition $u(x,t) = h(x,t)$ by a Neumann or Robin condition, provided we replace $\lambda_n$, $\phi_n$ by the eigenvalues and eigenfunctions for the appropriate boundary value problem.
a) • Relabel the variables (x → t, y → x).
We have the BACKWARDS HEAT EQUATION:
ut + uxx = 0.
Need to define initial conditions u(x, 1) = g(x), and
either Dirichlet, Neumann, or Robin boundary conditions.
b) • The solution to the LAPLACE EQUATION
$$\Delta u = 0 \ \text{in } \Omega, \qquad u = g \ \text{on } \partial\Omega$$
exists if $g$ is continuous on $\partial\Omega$, by Perron’s method. The maximum principle gives uniqueness.
To show continuous dependence on the data, assume
$$\Delta u_1 = 0 \ \text{in } \Omega, \quad u_1 = g_1 \ \text{on } \partial\Omega; \qquad \Delta u_2 = 0 \ \text{in } \Omega, \quad u_2 = g_2 \ \text{on } \partial\Omega.$$
Then $\Delta(u_1 - u_2) = 0$ in $\Omega$. The maximum principle gives
$$\max_{\overline\Omega}(u_1 - u_2) = \max_{\partial\Omega}(g_1 - g_2), \qquad \text{and thus} \qquad \max_{\overline\Omega}|u_1 - u_2| = \max_{\partial\Omega}|g_1 - g_2|.$$
Thus $|u_1 - u_2|$ is bounded by $|g_1 - g_2|$, i.e. we have continuous dependence on the data.
• Perron’s method gives existence of the solution to the POISSON EQUATION
$$\Delta u = f \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = h \ \text{on } \partial\Omega$$
for $f \in C^\infty(\overline\Omega)$ and $h \in C^\infty(\partial\Omega)$, satisfying the compatibility condition $\int_{\partial\Omega} h\,dS = \int_\Omega f\,dx$. It is unique up to an additive constant.
c) • Relabel the variables ($y \to t$).
The solution to the WAVE EQUATION
$$u_{tt} - u_{xx} = 0$$
is of the form $u(x,t) = F(x+t) + G(x-t)$.
The existence of the solution to the initial/boundary value problem
$$\begin{cases} u_{tt} - u_{xx} = 0 & 0 < x < 1,\ t > 0,\\ u(x,0) = g(x),\ u_t(x,0) = h(x) & 0 < x < 1,\\ u(0,t) = \alpha(t),\ u(1,t) = \beta(t) & t \ge 0, \end{cases}$$
is given by the method of separation of variables (expansion in eigenfunctions) and by the parallelogram rule.
Uniqueness is given by the energy method.
We need initial conditions $u(x,0)$, $u_t(x,0)$, and we prescribe $u$ or $u_x$ on each of the two boundaries.
Problem (F’95, #7). Let a, b be real numbers. The PDE
uy + auxx + buyy = 0
is to be solved in the box Ω = [0, 1]2.
Find data, given on an appropriate part of ∂Ω, that will make this a well-posed prob-
lem.
Cover all cases according to the possible values of a and b. Justify your statements.
Proof.
➀ $ab < 0$ ⇒ two sets of characteristics ⇒ hyperbolic.
Relabeling the variables ($y \to t$), we have
$$u_{tt} + \frac{a}{b}u_{xx} = -\frac{1}{b}u_t.$$
The solution of the equation is of the form
$$u(x,t) = F\Big(x + \sqrt{-\tfrac{a}{b}}\,t\Big) + G\Big(x - \sqrt{-\tfrac{a}{b}}\,t\Big).$$
Existence of the solution to the initial/boundary value problem is given by the method of separation of variables (expansion in eigenfunctions) and by the parallelogram rule.
Uniqueness is given by the energy method.
We need initial conditions $u(x,0)$, $u_t(x,0)$, and we prescribe $u$ or $u_x$ on each of the two boundaries.
➁ $ab > 0$ ⇒ no characteristics ⇒ elliptic.
The solution to the Laplace equation with boundary condition $u = g$ on $\partial\Omega$ exists if $g$ is continuous on $\partial\Omega$, by Perron’s method.
To show uniqueness, we use the maximum principle. Assume there are two solutions $u_1$ and $u_2$ with $u_1 = g(x)$, $u_2 = g(x)$ on $\partial\Omega$. By the maximum principle
$$\max_{\overline\Omega}(u_1 - u_2) = \max_{\partial\Omega}(g(x) - g(x)) = 0. \qquad \text{Thus } u_1 = u_2.$$
➂ $ab = 0$ ⇒ one set of characteristics ⇒ parabolic.
• $a = b = 0$. We have $u_y = 0$, a first-order ODE. $u$ must be specified on $y = 0$, i.e. the $x$-axis.
• $a = 0$, $b \ne 0$. We have $u_y + bu_{yy} = 0$, a second-order ODE. $u$ and $u_y$ must be specified on $y = 0$, i.e. the $x$-axis.
• $a > 0$, $b = 0$. We have a Backwards Heat Equation,
$$u_t = -au_{xx}.$$
We need to prescribe the initial condition $u(x,1) = g(x)$, and either Dirichlet, Neumann, or Robin boundary conditions.
• $a < 0$, $b = 0$. We have a Heat Equation,
$$u_t = -au_{xx}.$$
The initial/boundary value problem for the heat equation is well-posed:
$$\begin{cases} u_t = \Delta u & x \in \Omega,\ t > 0,\\ u(x,0) = g(x) & x \in \Omega,\\ u(x,t) = 0 & x \in \partial\Omega,\ t > 0. \end{cases}$$
Existence: by eigenfunction expansion.
Uniqueness and continuous dependence on the data: by the maximum principle.
7 Wave Equation
The one-dimensional wave equation is
$$u_{tt} - c^2u_{xx} = 0. \qquad (7.1)$$
The characteristic equation with $a = -c^2$, $b = 0$, $c = 1$ would be
$$\frac{dt}{dx} = \frac{b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{\pm\sqrt{4c^2}}{-2c^2} = \mp\frac{1}{c},$$
and thus
$$t = -\frac{1}{c}x + c_1 \quad\text{and}\quad t = \frac{1}{c}x + c_2,$$
$$\mu = x + ct, \qquad \eta = x - ct,$$
which transforms (7.1) to
uμη = 0. (7.2)
The general solution of (7.2) is u(μ, η) = F(μ)+G(η), where F and G are C1
functions.
Returning to the variables x, t we find that
u(x, t) = F(x + ct) + G(x − ct) (7.3)
solves (7.1). Moreover, u is C2 provided that F and G are C2.
If F ≡ 0, then u has constant values along the lines x−ct = const, so may be described
as a wave moving in the positive x-direction with speed dx/dt = c; if G ≡ 0, then u is
a wave moving in the negative x-direction with speed c.
7.1 The Initial Value Problem
For an initial value problem, consider the Cauchy problem
$$u_{tt} - c^2u_{xx} = 0, \qquad u(x,0) = g(x), \quad u_t(x,0) = h(x). \qquad (7.4)$$
Using (7.3) and (7.4), we find that $F$ and $G$ satisfy
$$F(x) + G(x) = g(x), \qquad cF'(x) - cG'(x) = h(x). \qquad (7.5)$$
If we integrate the second equation in (7.5), we get $cF(x) - cG(x) = \int_0^x h(\xi)\,d\xi + C$. Combining this with the first equation in (7.5), we can solve for $F$ and $G$ to find
$$F(x) = \tfrac12 g(x) + \tfrac{1}{2c}\int_0^x h(\xi)\,d\xi + C_1, \qquad G(x) = \tfrac12 g(x) - \tfrac{1}{2c}\int_0^x h(\xi)\,d\xi - C_1.$$
Using these expressions in (7.3), we obtain d’Alembert’s Formula for the solution of the initial value problem (7.4):
$$u(x,t) = \frac12\big(g(x+ct) + g(x-ct)\big) + \frac{1}{2c}\int_{x-ct}^{x+ct} h(\xi)\,d\xi.$$
If $g \in C^2$ and $h \in C^1$, then d’Alembert’s Formula defines a $C^2$ solution of (7.4).
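A direct numerical transcription of d’Alembert’s formula (a sketch I am adding, not part of the original text); the data $g$, $h$, the wave speed, and the quadrature resolution are arbitrary illustrative choices.

```python
import numpy as np

def dalembert(x, t, g, h, c=1.0, nq=200):
    """u(x,t) = (g(x+ct)+g(x-ct))/2 + (1/(2c)) * integral_{x-ct}^{x+ct} h(xi) dxi."""
    xi = np.linspace(x - c * t, x + c * t, nq)
    # trapezoidal rule for the integral of h over [x - ct, x + ct]
    integral = np.sum(0.5 * (h(xi[1:]) + h(xi[:-1])) * np.diff(xi))
    return 0.5 * (g(x + c * t) + g(x - c * t)) + integral / (2.0 * c)

# illustrative data: g(x) = sin(x), h = 0 gives u = sin(x) cos(ct) exactly
g = np.sin
h = lambda xi: np.zeros_like(xi)

x, t, c = 0.7, 0.4, 1.0
print(dalembert(x, t, g, h, c), np.sin(x) * np.cos(c * t))   # should agree
```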
7.2 Weak Solutions
Equation (7.3) defines a weak solution of (7.1) when F and G are not C2 functions.
Consider the parallelogram with sides that are
segments of characteristics. Since
u(x, t) = F(x + ct) + G(x − ct), we have
$$u(A) + u(C) = F(k_1) + G(k_3) + F(k_2) + G(k_4) = u(B) + u(D),$$
which is the parallelogram rule.
7.3 Initial/Boundary Value Problem
$$\begin{cases} u_{tt} - c^2u_{xx} = 0 & 0 < x < L,\ t > 0,\\ u(x,0) = g(x),\ u_t(x,0) = h(x) & 0 < x < L,\\ u(0,t) = \alpha(t),\ u(L,t) = \beta(t) & t \ge 0. \end{cases} \qquad (7.6)$$
Use separation of variables to obtain an expansion in eigenfunctions. Find $u(x,t)$ in the form
$$u(x,t) = \frac{a_0(t)}{2} + \sum_{n=1}^{\infty}\Big(a_n(t)\cos\frac{n\pi x}{L} + b_n(t)\sin\frac{n\pi x}{L}\Big).$$
7.4 Duhamel’s Principle
$$\begin{cases} u_{tt} - c^2u_{xx} = f(x,t)\\ u(x,0) = 0\\ u_t(x,0) = 0 \end{cases} \quad\Rightarrow\quad \begin{cases} U_{tt} - c^2U_{xx} = 0\\ U(x,0,s) = 0\\ U_t(x,0,s) = f(x,s) \end{cases} \qquad u(x,t) = \int_0^t U(x, t-s, s)\,ds.$$
$$\begin{cases} a_n'' + \lambda_na_n = f_n(t)\\ a_n(0) = 0\\ a_n'(0) = 0 \end{cases} \quad\Rightarrow\quad \begin{cases} \tilde a_n'' + \lambda_n\tilde a_n = 0\\ \tilde a_n(0,s) = 0\\ \tilde a_n'(0,s) = f_n(s) \end{cases} \qquad a_n(t) = \int_0^t \tilde a_n(t-s, s)\,ds.$$
7.5 The Nonhomogeneous Equation
Consider the nonhomogeneous wave equation with homogeneous initial conditions:
$$u_{tt} - c^2u_{xx} = f(x,t), \qquad u(x,0) = 0, \quad u_t(x,0) = 0. \qquad (7.7)$$
Duhamel’s principle provides the solution of (7.7):
$$u(x,t) = \frac{1}{2c}\int_0^t\!\!\int_{x-c(t-s)}^{x+c(t-s)} f(\xi, s)\,d\xi\,ds.$$
If $f(x,t)$ is $C^1$ in $x$ and $C^0$ in $t$, then Duhamel’s principle provides a $C^2$ solution of (7.7).
We can solve (7.7) with nonhomogeneous initial conditions,
$$u_{tt} - c^2u_{xx} = f(x,t), \qquad u(x,0) = g(x), \quad u_t(x,0) = h(x), \qquad (7.8)$$
by adding d’Alembert’s formula and Duhamel’s principle:
$$u(x,t) = \frac12\big(g(x+ct) + g(x-ct)\big) + \frac{1}{2c}\int_{x-ct}^{x+ct} h(\xi)\,d\xi + \frac{1}{2c}\int_0^t\!\!\int_{x-c(t-s)}^{x+c(t-s)} f(\xi, s)\,d\xi\,ds.$$
7.6 Higher Dimensions
7.6.1 Spherical Means
For a continuous function $u(x)$ on $\mathbb{R}^n$, its spherical mean or average on a sphere of radius $r$ and center $x$ is
$$M_u(x,r) = \frac{1}{\omega_n}\int_{|\xi|=1} u(x + r\xi)\,dS_\xi,$$
where $\omega_n$ is the area of the unit sphere $S^{n-1} = \{\xi \in \mathbb{R}^n : |\xi| = 1\}$ and $dS_\xi$ is surface measure. Since $u$ is continuous in $x$, $M_u(x,r)$ is continuous in $x$ and $r$, so
$$M_u(x,0) = u(x).$$
Using the chain rule, we find
$$\frac{\partial}{\partial r}M_u(x,r) = \frac{1}{\omega_n}\int_{|\xi|=1}\sum_{i=1}^n u_{x_i}(x + r\xi)\,\xi_i\,dS_\xi.$$
To compute the right-hand side, we apply the divergence theorem in $\Omega = \{\xi \in \mathbb{R}^n : |\xi| < 1\}$, which has boundary $\partial\Omega = S^{n-1}$ and exterior unit normal $n(\xi) = \xi$. The integrand is $V\cdot n$ where $V(\xi) = r^{-1}\nabla_\xi u(x + r\xi) = \nabla_xu(x + r\xi)$. Computing the divergence of $V$, we obtain
$$\operatorname{div}V(\xi) = r\sum_{i=1}^n u_{x_ix_i}(x + r\xi) = r\,\Delta_xu(x + r\xi),$$
so
$$\frac{\partial}{\partial r}M_u(x,r) = \frac{1}{\omega_n}\int_{|\xi|<1} r\,\Delta_xu(x + r\xi)\,d\xi = \frac{r}{\omega_n}\Delta_x\int_{|\xi|<1} u(x + r\xi)\,d\xi \qquad (\xi' = r\xi)$$
$$= \frac{r}{\omega_n}\frac{1}{r^n}\Delta_x\int_{|\xi'|<r} u(x + \xi')\,d\xi' \qquad \text{(spherical coordinates)}$$
$$= \frac{1}{\omega_nr^{n-1}}\Delta_x\int_0^r\rho^{n-1}\int_{|\xi|=1} u(x + \rho\xi)\,dS_\xi\,d\rho = \frac{1}{\omega_nr^{n-1}}\,\omega_n\,\Delta_x\int_0^r\rho^{n-1}M_u(x,\rho)\,d\rho = \frac{1}{r^{n-1}}\Delta_x\int_0^r\rho^{n-1}M_u(x,\rho)\,d\rho.$$
If we multiply by $r^{n-1}$, differentiate with respect to $r$, and then divide by $r^{n-1}$, we obtain the Darboux equation:
$$\Big(\frac{\partial^2}{\partial r^2} + \frac{n-1}{r}\frac{\partial}{\partial r}\Big)M_u(x,r) = \Delta_xM_u(x,r).$$
Note that for a radial function $u = u(r)$, we have $M_u = u$, so the equation provides the Laplacian of $u$ in spherical coordinates.
7.6.2 Application to the Cauchy Problem
We want to solve the equation
$$u_{tt} = c^2\Delta u, \qquad x \in \mathbb{R}^n,\ t > 0, \qquad (7.9)$$
$$u(x,0) = g(x), \quad u_t(x,0) = h(x), \qquad x \in \mathbb{R}^n.$$
We use Poisson’s method of spherical means to reduce this problem to a partial differ-
ential equation in the two variables r and t.
Suppose that $u(x,t)$ solves (7.9). We can view $t$ as a parameter and take the spherical mean to obtain $M_u(x,r,t)$, which satisfies
$$\frac{\partial^2}{\partial t^2}M_u(x,r,t) = \frac{1}{\omega_n}\int_{|\xi|=1} u_{tt}(x + r\xi, t)\,dS_\xi = \frac{1}{\omega_n}\int_{|\xi|=1} c^2\Delta u(x + r\xi, t)\,dS_\xi = c^2\Delta M_u(x,r,t).$$
Invoking the Darboux equation, we obtain the Euler-Poisson-Darboux equation:
$$\frac{\partial^2}{\partial t^2}M_u(x,r,t) = c^2\Big(\frac{\partial^2}{\partial r^2} + \frac{n-1}{r}\frac{\partial}{\partial r}\Big)M_u(x,r,t).$$
The initial conditions are obtained by taking the spherical means:
$$M_u(x,r,0) = M_g(x,r), \qquad \frac{\partial M_u}{\partial t}(x,r,0) = M_h(x,r).$$
If we find $M_u(x,r,t)$, we can then recover $u(x,t)$ by
$$u(x,t) = \lim_{r\to 0} M_u(x,r,t).$$
7.6.3 Three-Dimensional Wave Equation
When $n = 3$, we can write the Euler-Poisson-Darboux equation as (this is seen by expanding the equation below)
$$\frac{\partial^2}{\partial t^2}\big(rM_u(x,r,t)\big) = c^2\frac{\partial^2}{\partial r^2}\big(rM_u(x,r,t)\big).$$
For each fixed $x$, consider $V^x(r,t) = rM_u(x,r,t)$ as a solution of the one-dimensional wave equation in $r$, $t > 0$:
$$\frac{\partial^2}{\partial t^2}V^x(r,t) = c^2\frac{\partial^2}{\partial r^2}V^x(r,t),$$
$$V^x(r,0) = rM_g(x,r) \equiv G^x(r), \qquad \text{(IC)}$$
$$V^x_t(r,0) = rM_h(x,r) \equiv H^x(r), \qquad \text{(IC)}$$
$$V^x(0,t) = \lim_{r\to 0} rM_u(x,r,t) = 0\cdot u(x,t) = 0, \qquad \text{(BC)}$$
$$G^x(0) = H^x(0) = 0.$$
We may extend $G^x$ and $H^x$ as odd functions of $r$ and use d’Alembert’s formula for $V^x(r,t)$:
$$V^x(r,t) = \frac12\big(G^x(r+ct) + G^x(r-ct)\big) + \frac{1}{2c}\int_{r-ct}^{r+ct} H^x(\rho)\,d\rho.$$
Since $G^x$ and $H^x$ are odd functions, we have for $r < ct$:
$$G^x(r-ct) = -G^x(ct-r) \qquad \text{and} \qquad \int_{r-ct}^{r+ct} H^x(\rho)\,d\rho = \int_{ct-r}^{ct+r} H^x(\rho)\,d\rho.$$
After some more manipulations, we find that the solution of (7.9) is given by Kirchhoff’s formula:
$$u(x,t) = \frac{1}{4\pi}\frac{\partial}{\partial t}\Big(t\int_{|\xi|=1} g(x + ct\xi)\,dS_\xi\Big) + \frac{t}{4\pi}\int_{|\xi|=1} h(x + ct\xi)\,dS_\xi.$$
If $g \in C^3(\mathbb{R}^3)$ and $h \in C^2(\mathbb{R}^3)$, then Kirchhoff’s formula defines a $C^2$ solution of (7.9).
7.6.4 Two-Dimensional Wave Equation
This problem is solved by Hadamard’s method of descent, namely, view (7.9) as a special case of a three-dimensional problem with initial conditions independent of $x_3$.
We need to convert surface integrals in $\mathbb{R}^3$ to domain integrals in $\mathbb{R}^2$:
$$u(x_1,x_2,t) = \frac{1}{4\pi}\frac{\partial}{\partial t}\Big(2t\int_{\xi_1^2+\xi_2^2<1}\frac{g(x_1+ct\xi_1,\,x_2+ct\xi_2)}{\sqrt{1-\xi_1^2-\xi_2^2}}\,d\xi_1d\xi_2\Big) + \frac{t}{4\pi}\cdot 2\int_{\xi_1^2+\xi_2^2<1}\frac{h(x_1+ct\xi_1,\,x_2+ct\xi_2)}{\sqrt{1-\xi_1^2-\xi_2^2}}\,d\xi_1d\xi_2.$$
If $g \in C^3(\mathbb{R}^2)$ and $h \in C^2(\mathbb{R}^2)$, then this formula defines a $C^2$ solution of (7.9).
7.6.5 Huygens’ Principle
Notice that $u(x,t)$ depends only on the Cauchy data $g$, $h$ on the surface of the hypersphere $\{x + ct\xi : |\xi| = 1\}$ in $\mathbb{R}^n$, $n = 2k+1$; in other words we have sharp signals. If we use the method of descent to obtain the solution for $n = 2k$, the hypersurface integrals become domain integrals. This means that there are no sharp signals.
The fact that sharp signals exist only for odd dimensions $n \ge 3$ is known as Huygens’ principle.
Note: for $x \in \mathbb{R}^n$,
$$\frac{\partial}{\partial t}\int_{|\xi|=1} f(x + t\xi)\,dS_\xi = \frac{1}{t^{n-1}}\int_{|y|\le t}\Delta f(x+y)\,dy, \qquad \frac{\partial}{\partial t}\int_{|y|\le t} f(x+y)\,dy = t^{n-1}\int_{|\xi|=1} f(x + t\xi)\,dS_\xi.$$
7.7 Energy Methods
Suppose $u \in C^2(\mathbb{R}^n \times (0,\infty))$ solves
$$u_{tt} = c^2\Delta u, \qquad x \in \mathbb{R}^n,\ t > 0, \qquad u(x,0) = g(x), \quad u_t(x,0) = h(x), \qquad x \in \mathbb{R}^n, \qquad (7.10)$$
where $g$ and $h$ have compact support.
Define the energy for a function $u(x,t)$ at time $t$ by
$$E(t) = \frac12\int_{\mathbb{R}^n}\big(u_t^2 + c^2|\nabla u|^2\big)\,dx.$$
If we differentiate this energy function, we obtain
$$\frac{dE}{dt} = \frac{d}{dt}\,\frac12\int_{\mathbb{R}^n}\Big(u_t^2 + c^2\sum_{i=1}^n u_{x_i}^2\Big)dx = \int_{\mathbb{R}^n}\Big(u_tu_{tt} + c^2\sum_{i=1}^n u_{x_i}u_{x_it}\Big)dx$$
$$= \int_{\mathbb{R}^n} u_tu_{tt}\,dx + c^2\sum_{i=1}^n\Big[u_{x_i}u_t\Big]_{\partial\mathbb{R}^n} - \int_{\mathbb{R}^n} c^2\sum_{i=1}^n u_{x_ix_i}u_t\,dx = \int_{\mathbb{R}^n} u_t\big(u_{tt} - c^2\Delta u\big)\,dx = 0,$$
or, in gradient notation,
$$\frac{dE}{dt} = \int_{\mathbb{R}^n}\big(u_tu_{tt} + c^2\nabla u\cdot\nabla u_t\big)dx = \int_{\mathbb{R}^n} u_tu_{tt}\,dx + c^2\Big(\int_{\partial\mathbb{R}^n} u_t\frac{\partial u}{\partial n}\,ds - \int_{\mathbb{R}^n} u_t\,\Delta u\,dx\Big) = \int_{\mathbb{R}^n} u_t\big(u_{tt} - c^2\Delta u\big)\,dx = 0.$$
(The boundary terms vanish because $g$ and $h$ have compact support.)
Hence, E(t) is constant, or E(t) ≡ E(0).
In particular, if u1 and u2 are two solutions of (7.10), then w = u1 −u2 has zero Cauchy
data and hence Ew(0) = 0. By discussion above, Ew(t) ≡ 0, which implies w(x, t) ≡
const. But w(x, 0) = 0 then implies w(x, t) ≡ 0, so the solution is unique.
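A crude numerical illustration of energy conservation (my addition, not from the handbook): a leapfrog discretization of $u_{tt} = c^2u_{xx}$ on a periodic domain, with $E(t) = \frac12\int(u_t^2 + c^2u_x^2)\,dx$ tracked discretely. The grid, time step, and initial data are arbitrary choices, so the energy is only conserved up to discretization error.

```python
import numpy as np

c, L, N, dt, steps = 1.0, 2 * np.pi, 256, 1e-3, 2000
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]

u_prev = np.sin(x)                       # u(x, 0); u_t(x, 0) = 0
# one Taylor step using u_tt = c^2 u_xx at t = 0
lap = (np.roll(u_prev, -1) - 2 * u_prev + np.roll(u_prev, 1)) / dx**2
u = u_prev + 0.5 * dt**2 * c**2 * lap

def energy(u_new, u_old):
    ut = (u_new - u_old) / dt
    ux = (np.roll(u_new, -1) - np.roll(u_new, 1)) / (2 * dx)
    return 0.5 * np.sum(ut**2 + c**2 * ux**2) * dx

E0 = energy(u, u_prev)
for _ in range(steps):                   # leapfrog: u^{n+1} = 2u^n - u^{n-1} + dt^2 c^2 u_xx^n
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u, u_prev = 2 * u - u_prev + dt**2 * c**2 * lap, u

print(E0, energy(u, u_prev))             # the two values should be close
```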
7.8 Contraction Mapping Principle
Suppose X is a complete metric space with distance function represented by d(·, ·).
A mapping T : X → X is a strict contraction if there exists 0 < α < 1 such that
d(Tx, Ty) ≤ α d(x, y) ∀ x, y ∈ X.
An obvious example on X = Rn
is Tx = αx, which shrinks all of Rn
, leaving 0 fixed.
The Contraction Mapping Principle. If X is a complete metric space and T :
X → X is a strict contraction, then T has a unique fixed point.
The process of replacing a differential equation by an integral equation occurs in
time-evolution partial differential equations.
The Contraction Mapping Principle is used to establish the local existence and unique-
ness of solutions to various nonlinear equations.
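A tiny illustration of the principle (my addition): iterating a strict contraction converges to its unique fixed point. Here $T$ is the Picard map for the ODE $y' = -y$, $y(0) = 1$ on $[0, \tfrac12]$, which is a contraction in the sup norm with constant $\tfrac12$; the grid and tolerance are illustrative choices.

```python
import numpy as np

# Picard iteration for y' = -y, y(0) = 1 on [0, 1/2]:
# (T y)(t) = 1 - int_0^t y(s) ds; the fixed point is y(t) = exp(-t).
t = np.linspace(0.0, 0.5, 501)
dt = t[1] - t[0]

def T(y):
    integrand = -y
    # cumulative trapezoidal integral of -y from 0 to t
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    return 1.0 + cum

y = np.zeros_like(t)                     # arbitrary starting guess
for k in range(30):
    y_new = T(y)
    if np.max(np.abs(y_new - y)) < 1e-12:
        break
    y = y_new

print(k, np.max(np.abs(y - np.exp(-t))))  # small: the iterates approximate exp(-t)
```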
8 Laplace Equation
Consider the Laplace equation
$$\Delta u = 0 \quad \text{in } \Omega \subset \mathbb{R}^n \qquad (8.1)$$
and the Poisson equation
$$\Delta u = f \quad \text{in } \Omega \subset \mathbb{R}^n. \qquad (8.2)$$
Solutions of (8.1) are called harmonic functions in $\Omega$.
Cauchy problems for (8.1) and (8.2) are not well posed. We use separation of variables for some special domains $\Omega$ to find boundary conditions that are appropriate for (8.1), (8.2).
Dirichlet problem: $u(x) = g(x)$, $x \in \partial\Omega$.
Neumann problem: $\dfrac{\partial u(x)}{\partial n} = h(x)$, $x \in \partial\Omega$.
Robin problem: $\dfrac{\partial u}{\partial n} + \alpha u = \beta$, $x \in \partial\Omega$.
8.1 Green’s Formulas
$$\int_\Omega \nabla u\cdot\nabla v\,dx = \int_{\partial\Omega} v\frac{\partial u}{\partial n}\,ds - \int_\Omega v\,\Delta u\,dx \qquad (8.3)$$
$$\int_{\partial\Omega}\Big(v\frac{\partial u}{\partial n} - u\frac{\partial v}{\partial n}\Big)ds = \int_\Omega\big(v\,\Delta u - u\,\Delta v\big)\,dx$$
$$\int_{\partial\Omega}\frac{\partial u}{\partial n}\,ds = \int_\Omega \Delta u\,dx \qquad (v = 1 \text{ in } (8.3))$$
$$\int_\Omega |\nabla u|^2\,dx = \int_{\partial\Omega} u\frac{\partial u}{\partial n}\,ds - \int_\Omega u\,\Delta u\,dx \qquad (u = v \text{ in } (8.3))$$
$$\int_\Omega u_xv_x\,dxdy = \int_{\partial\Omega} vu_xn_1\,ds - \int_\Omega vu_{xx}\,dxdy, \qquad n = (n_1, n_2) \in \mathbb{R}^2$$
$$\int_\Omega u_{x_k}v\,dx = \int_{\partial\Omega} uvn_k\,ds - \int_\Omega uv_{x_k}\,dx, \qquad n = (n_1, \dots, n_n) \in \mathbb{R}^n$$
$$\int_\Omega u\,\Delta^2v\,dx = \int_{\partial\Omega} u\frac{\partial\Delta v}{\partial n}\,ds - \int_{\partial\Omega}\Delta v\frac{\partial u}{\partial n}\,ds + \int_\Omega \Delta u\,\Delta v\,dx$$
$$\int_\Omega\big(u\,\Delta^2v - v\,\Delta^2u\big)dx = \int_{\partial\Omega}\Big(u\frac{\partial\Delta v}{\partial n} - v\frac{\partial\Delta u}{\partial n}\Big)ds + \int_{\partial\Omega}\Big(\Delta u\frac{\partial v}{\partial n} - \Delta v\frac{\partial u}{\partial n}\Big)ds.$$
8.2 Polar Coordinates
Polar Coordinates. Let $f : \mathbb{R}^n \to \mathbb{R}$ be continuous. Then
$$\int_{\mathbb{R}^n} f\,dx = \int_0^\infty\Big(\int_{\partial B_r(x_0)} f\,dS\Big)dr$$
for each $x_0 \in \mathbb{R}^n$. In particular,
$$\frac{d}{dr}\int_{B_r(x_0)} f\,dx = \int_{\partial B_r(x_0)} f\,dS$$
for each $r > 0$.
For $u = u(x(r,\theta), y(r,\theta))$ with $x(r,\theta) = r\cos\theta$, $y(r,\theta) = r\sin\theta$:
$$u_r = u_xx_r + u_yy_r = u_x\cos\theta + u_y\sin\theta,$$
$$u_\theta = u_xx_\theta + u_yy_\theta = -u_xr\sin\theta + u_yr\cos\theta,$$
$$u_{rr} = (u_x\cos\theta + u_y\sin\theta)_r = (u_{xx}x_r + u_{xy}y_r)\cos\theta + (u_{yx}x_r + u_{yy}y_r)\sin\theta = u_{xx}\cos^2\theta + 2u_{xy}\cos\theta\sin\theta + u_{yy}\sin^2\theta,$$
$$u_{\theta\theta} = (-u_xr\sin\theta + u_yr\cos\theta)_\theta = (-u_{xx}x_\theta - u_{xy}y_\theta)r\sin\theta - u_xr\cos\theta + (u_{yx}x_\theta + u_{yy}y_\theta)r\cos\theta - u_yr\sin\theta$$
$$= (u_{xx}r\sin\theta - u_{xy}r\cos\theta)r\sin\theta - u_xr\cos\theta + (-u_{yx}r\sin\theta + u_{yy}r\cos\theta)r\cos\theta - u_yr\sin\theta$$
$$= r^2\big(u_{xx}\sin^2\theta - 2u_{xy}\cos\theta\sin\theta + u_{yy}\cos^2\theta\big) - r\big(u_x\cos\theta + u_y\sin\theta\big).$$
Therefore
$$u_{rr} + \frac{1}{r^2}u_{\theta\theta} = u_{xx}\cos^2\theta + 2u_{xy}\cos\theta\sin\theta + u_{yy}\sin^2\theta + u_{xx}\sin^2\theta - 2u_{xy}\cos\theta\sin\theta + u_{yy}\cos^2\theta - \frac{1}{r}\big(u_x\cos\theta + u_y\sin\theta\big) = u_{xx} + u_{yy} - \frac{1}{r}u_r,$$
so
$$u_{xx} + u_{yy} = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}, \qquad \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2}.$$
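The identity $u_{xx} + u_{yy} = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}$ can be confirmed symbolically on a concrete test function; this short check is my addition, and the polynomial test function is an arbitrary choice.

```python
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)

# concrete test function (arbitrary choice)
u_xy = x**3 * y + x**2 * y**2
lap_cart = sp.diff(u_xy, x, 2) + sp.diff(u_xy, y, 2)

# the same function written in polar coordinates
u_pol = u_xy.subs({x: r * sp.cos(th), y: r * sp.sin(th)})
lap_polar = sp.diff(u_pol, r, 2) + sp.diff(u_pol, r) / r + sp.diff(u_pol, th, 2) / r**2

# compare after mapping the Cartesian result to polar variables
diff = sp.simplify(lap_cart.subs({x: r * sp.cos(th), y: r * sp.sin(th)}) - lap_polar)
print(diff)   # expect 0
```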
8.3 Polar Laplacian in $\mathbb{R}^2$ for Radial Functions
$$\Delta u = \frac{1}{r}\big(ru_r\big)_r = \Big(\frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r}\Big)u.$$
8.4 Spherical Laplacian in $\mathbb{R}^3$ and $\mathbb{R}^n$ for Radial Functions
$$\Delta u = \Big(\frac{\partial^2}{\partial r^2} + \frac{n-1}{r}\frac{\partial}{\partial r}\Big)u.$$
In $\mathbb{R}^3$ (these formulas are taken from S. Farlow, p. 411):
$$\Delta u = \frac{1}{r^2}\big(r^2u_r\big)_r = \frac{1}{r}\big(ru\big)_{rr} = \Big(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}\Big)u.$$
8.5 Cylindrical Laplacian in $\mathbb{R}^3$ for Radial Functions
$$\Delta u = \frac{1}{r}\big(ru_r\big)_r = \Big(\frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r}\Big)u.$$
8.6 Mean Value Theorem
Gauss Mean Value Theorem. If $u \in C^2(\Omega)$ is harmonic in $\Omega$, let $\xi \in \Omega$ and pick $r > 0$ so that $\overline{B_r(\xi)} = \{x : |x - \xi| \le r\} \subset \Omega$. Then
$$u(\xi) = M_u(\xi, r) \equiv \frac{1}{\omega_n}\int_{|x|=1} u(\xi + rx)\,dS_x,$$
where $\omega_n$ is the measure of the $(n-1)$-dimensional sphere in $\mathbb{R}^n$.
8.7 Maximum Principle
Maximum Principle. If $u \in C^2(\Omega)$ satisfies $\Delta u \ge 0$ in $\Omega$, then either $u$ is a constant, or
$$u(\xi) < \sup_{x\in\Omega} u(x)$$
for all $\xi \in \Omega$.
Proof. We may assume $A = \sup_{x\in\Omega} u(x) < \infty$, so by continuity of $u$ we know that $\{x \in \Omega : u(x) = A\}$ is relatively closed in $\Omega$. But since
$$u(\xi) \le \frac{n}{\omega_n}\int_{|x|\le 1} u(\xi + rx)\,dx,$$
if $u(\xi) = A$ at an interior point $\xi$, then $u(x) = A$ for all $x$ in a ball about $\xi$, so $\{x \in \Omega : u(x) = A\}$ is open. The connectedness of $\Omega$ implies $u(\xi) < A$ for all $\xi \in \Omega$ or $u(\xi) \equiv A$ for all $\xi \in \Omega$.
The maximum principle shows that $u \in C^2(\Omega)$ with $\Delta u \ge 0$ can attain an interior maximum only if $u$ is constant. In particular, if $\overline\Omega$ is compact, and $u \in C^2(\Omega)\cap C(\overline\Omega)$ satisfies $\Delta u \ge 0$ in $\Omega$, we have the weak maximum principle:
$$\max_{x\in\overline\Omega} u(x) = \max_{x\in\partial\Omega} u(x).$$
8.8 The Fundamental Solution
A fundamental solution $K(x)$ for the Laplace operator is a distribution satisfying
$$\Delta K(x) = \delta(x), \qquad (8.4)$$
where $\delta$ is the delta distribution supported at $x = 0$. In order to solve (8.4), we first observe that $\Delta$ is symmetric in the variables $x_1, \dots, x_n$, and $\delta(x)$ is also radially symmetric (i.e., its value only depends on $r = |x|$). Thus, we try to solve (8.4) with a radially symmetric function $K(x)$. Since $\delta(x) = 0$ for $x \ne 0$, we see that (8.4) requires $K$ to be harmonic for $r > 0$. For the radially symmetric function $K = K(r)$, the Laplace equation becomes
$$\frac{\partial^2K}{\partial r^2} + \frac{n-1}{r}\frac{\partial K}{\partial r} = 0. \qquad (8.5)$$
The general solution of (8.5) is
$$K(r) = \begin{cases} c_1 + c_2\log r & \text{if } n = 2,\\ c_1 + c_2r^{2-n} & \text{if } n \ge 3. \end{cases} \qquad (8.6)$$
After we determine $c_2$, we find the fundamental solution for the Laplace operator:
$$K(x) = \begin{cases} \frac{1}{2\pi}\log r & \text{if } n = 2,\\ \frac{1}{(2-n)\omega_n}r^{2-n} & \text{if } n \ge 3. \end{cases}$$
• We can derive (8.6) for any given $n$. For instance, when $n = 3$, we have
$$K'' + \frac{2}{r}K' = 0.$$
Let
$$K = \frac{1}{r}w(r), \qquad K' = \frac{1}{r}w' - \frac{1}{r^2}w, \qquad K'' = \frac{1}{r}w'' - \frac{2}{r^2}w' + \frac{2}{r^3}w.$$
Plugging these into the equation, we obtain
$$\frac{1}{r}w'' = 0, \qquad \text{or} \qquad w'' = 0.$$
Thus
$$w = c_1r + c_2, \qquad K = \frac{1}{r}w(r) = c_1 + \frac{c_2}{r}.$$
See the similar problem, F’99 #2, where the fundamental solution for $(\Delta - I)$ is found in the process.
Find the fundamental solution of the Laplace operator for $n = 3$.
We found that, starting with the Laplacian in $\mathbb{R}^3$ for a radially symmetric function $K$,
$$K'' + \frac{2}{r}K' = 0,$$
and letting $K = \frac{1}{r}w(r)$, we obtained the equation $w'' = 0$, which implied
$$K = c_1 + \frac{c_2}{r}.$$
We now find the constant $c_2$ that ensures that for $v \in C_0^\infty(\mathbb{R}^3)$ we have
$$\int_{\mathbb{R}^3} K(|x|)\,\Delta v(x)\,dx = v(0).$$
Suppose $v(x) \equiv 0$ for $|x| \ge R$ and let $\Omega = B_R(0)$; for small $\epsilon > 0$ let
$$\Omega_\epsilon = \Omega - B_\epsilon(0).$$
$K(|x|)$ is harmonic ($\Delta K(|x|) = 0$) in $\Omega_\epsilon$. Consider Green’s identity ($\partial\Omega_\epsilon = \partial\Omega \cup \partial B_\epsilon(0)$):
$$\int_{\Omega_\epsilon} K(|x|)\,\Delta v\,dx = \underbrace{\int_{\partial\Omega}\Big(K(|x|)\frac{\partial v}{\partial n} - v\frac{\partial K(|x|)}{\partial n}\Big)dS}_{=0,\ \text{since } v\equiv 0 \text{ for } |x|\ge R} + \int_{\partial B_\epsilon(0)}\Big(K(|x|)\frac{\partial v}{\partial n} - v\frac{\partial K(|x|)}{\partial n}\Big)dS.$$
Also,
$$\lim_{\epsilon\to 0}\int_{\Omega_\epsilon} K(|x|)\,\Delta v\,dx = \int_\Omega K(|x|)\,\Delta v\,dx, \qquad \text{since } K(r) = c_1 + \frac{c_2}{r} \text{ is integrable at } x = 0.$$
On $\partial B_\epsilon(0)$, $K(|x|) = K(\epsilon)$ (see the note below). Thus
$$\Big|\int_{\partial B_\epsilon(0)} K(|x|)\frac{\partial v}{\partial n}\,dS\Big| = \Big|K(\epsilon)\int_{\partial B_\epsilon(0)}\frac{\partial v}{\partial n}\,dS\Big| \le \Big|c_1 + \frac{c_2}{\epsilon}\Big|\,4\pi\epsilon^2\max|\nabla v| \to 0 \quad \text{as } \epsilon \to 0,$$
$$\int_{\partial B_\epsilon(0)} v(x)\frac{\partial K(|x|)}{\partial n}\,dS = \int_{\partial B_\epsilon(0)}\frac{c_2}{\epsilon^2}v(x)\,dS = \int_{\partial B_\epsilon(0)}\frac{c_2}{\epsilon^2}v(0)\,dS + \int_{\partial B_\epsilon(0)}\frac{c_2}{\epsilon^2}\big[v(x)-v(0)\big]dS = \frac{c_2}{\epsilon^2}v(0)\,4\pi\epsilon^2 + \frac{c_2}{\epsilon^2}\int_{\partial B_\epsilon(0)}\big[v(x)-v(0)\big]dS \longrightarrow 4\pi c_2\,v(0),$$
since the last term is bounded by $4\pi|c_2|\max_{x\in\partial B_\epsilon(0)}|v(x)-v(0)| \to 0$ ($v$ is continuous). We need $-\int_{\partial B_\epsilon(0)} v\,\partial K/\partial n\,dS \to v(0)$, i.e. $4\pi c_2\,v(0) = -v(0)$. Thus, taking $4\pi c_2 = -1$, i.e. $c_2 = -\frac{1}{4\pi}$, we obtain
$$\int_\Omega K(|x|)\,\Delta v\,dx = \lim_{\epsilon\to 0}\int_{\Omega_\epsilon} K(|x|)\,\Delta v\,dx = v(0),$$
that is, $K(r) = -\frac{1}{4\pi r}$ is the fundamental solution of $\Delta$.
Note: in $\mathbb{R}^3$, for $|x| = \epsilon$,
$$K(|x|) = K(\epsilon) = c_1 + \frac{c_2}{\epsilon}, \qquad \frac{\partial K(|x|)}{\partial n} = -\frac{\partial K(\epsilon)}{\partial r} = \frac{c_2}{\epsilon^2},$$
since $n$ points inwards, toward $0$ on the sphere $|x| = \epsilon$ (i.e., $n = -x/|x|$).
Show that the fundamental solution of the Laplace operator is given by
$$K(x) = \begin{cases} \frac{1}{2\pi}\log r & \text{if } n = 2,\\ \frac{1}{(2-n)\omega_n}r^{2-n} & \text{if } n \ge 3. \end{cases} \qquad (8.7)$$
Proof. For $v \in C_0^\infty(\mathbb{R}^n)$, we want to show
$$\int_{\mathbb{R}^n} K(|x|)\,\Delta v(x)\,dx = v(0).$$
Suppose $v(x) \equiv 0$ for $|x| \ge R$ and let $\Omega = B_R(0)$; for small $\epsilon > 0$ let
$$\Omega_\epsilon = \Omega - B_\epsilon(0).$$
$K(|x|)$ is harmonic ($\Delta K(|x|) = 0$) in $\Omega_\epsilon$. Consider Green’s identity ($\partial\Omega_\epsilon = \partial\Omega \cup \partial B_\epsilon(0)$):
$$\int_{\Omega_\epsilon} K(|x|)\,\Delta v\,dx = \underbrace{\int_{\partial\Omega}\Big(K(|x|)\frac{\partial v}{\partial n} - v\frac{\partial K(|x|)}{\partial n}\Big)dS}_{=0,\ \text{since } v\equiv 0 \text{ for } |x|\ge R} + \int_{\partial B_\epsilon(0)}\Big(K(|x|)\frac{\partial v}{\partial n} - v\frac{\partial K(|x|)}{\partial n}\Big)dS.$$
Also,
$$\lim_{\epsilon\to 0}\int_{\Omega_\epsilon} K(|x|)\,\Delta v\,dx = \int_\Omega K(|x|)\,\Delta v\,dx, \qquad \text{since } K(r) \text{ is integrable at } x = 0.$$
On $\partial B_\epsilon(0)$, $K(|x|) = K(\epsilon)$ (see the note below). Thus
$$\Big|\int_{\partial B_\epsilon(0)} K(|x|)\frac{\partial v}{\partial n}\,dS\Big| = \Big|K(\epsilon)\int_{\partial B_\epsilon(0)}\frac{\partial v}{\partial n}\,dS\Big| \le |K(\epsilon)|\,\omega_n\epsilon^{n-1}\max|\nabla v| \to 0 \quad \text{as } \epsilon \to 0,$$
$$\int_{\partial B_\epsilon(0)} v(x)\frac{\partial K(|x|)}{\partial n}\,dS = \int_{\partial B_\epsilon(0)}\Big(-\frac{1}{\omega_n\epsilon^{n-1}}\Big)v(x)\,dS = \int_{\partial B_\epsilon(0)}\Big(-\frac{1}{\omega_n\epsilon^{n-1}}\Big)v(0)\,dS + \int_{\partial B_\epsilon(0)}\Big(-\frac{1}{\omega_n\epsilon^{n-1}}\Big)\big[v(x)-v(0)\big]dS$$
$$= -\frac{1}{\omega_n\epsilon^{n-1}}v(0)\,\omega_n\epsilon^{n-1} - \frac{1}{\omega_n\epsilon^{n-1}}\int_{\partial B_\epsilon(0)}\big[v(x)-v(0)\big]dS \longrightarrow -v(0),$$
since the last term is bounded by $\max_{x\in\partial B_\epsilon(0)}|v(x)-v(0)| \to 0$ ($v$ is continuous). Thus,
$$\int_\Omega K(|x|)\,\Delta v\,dx = \lim_{\epsilon\to 0}\int_{\Omega_\epsilon} K(|x|)\,\Delta v\,dx = v(0).$$
Note that for $|x| = \epsilon$,
$$K(|x|) = K(\epsilon) = \begin{cases} \frac{1}{2\pi}\log\epsilon & \text{if } n = 2,\\ \frac{1}{(2-n)\omega_n}\epsilon^{2-n} & \text{if } n \ge 3, \end{cases} \qquad \frac{\partial K(|x|)}{\partial n} = -\frac{\partial K(\epsilon)}{\partial r} = -\frac{1}{\omega_n\epsilon^{n-1}},$$
since $n$ points inwards, toward $0$ on the sphere $|x| = \epsilon$ (i.e., $n = -x/|x|$).
8.9 Representation Theorem
Representation Theorem, $n = 3$.
Let $\Omega$ be a bounded domain in $\mathbb{R}^3$ and let $n$ be the unit exterior normal to $\partial\Omega$. Let $u \in C^2(\overline\Omega)$. Then the value of $u$ at any point $x \in \Omega$ is given by the formula
$$u(x) = \frac{1}{4\pi}\int_{\partial\Omega}\Big(\frac{1}{|x-y|}\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\Big)dS - \frac{1}{4\pi}\int_\Omega\frac{\Delta u(y)}{|x-y|}\,dy. \qquad (8.8)$$
Proof. Consider Green’s identity
$$\int_\Omega\big(u\,\Delta w - w\,\Delta u\big)\,dy = \int_{\partial\Omega}\Big(u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n}\Big)dS,$$
where $w$ is the harmonic function
$$w(y) = \frac{1}{|x-y|},$$
which is singular at $y = x \in \Omega$. In order to be able to apply Green’s identity, we consider a new domain $\Omega_\epsilon$:
$$\Omega_\epsilon = \Omega - B_\epsilon(x).$$
Since $u, w \in C^2(\overline\Omega_\epsilon)$, Green’s identity can be applied. Since $w$ is harmonic ($\Delta w = 0$) in $\Omega_\epsilon$ and since $\partial\Omega_\epsilon = \partial\Omega \cup \partial B_\epsilon(x)$, we have
$$-\int_{\Omega_\epsilon}\frac{\Delta u(y)}{|x-y|}\,dy = \int_{\partial\Omega}\Big(u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\Big)dS \qquad (8.9)$$
$$\qquad\qquad\qquad + \int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\Big)dS. \qquad (8.10)$$
We will show that formula (8.8) is obtained by letting $\epsilon \to 0$. First,
$$\lim_{\epsilon\to 0}\Big(-\int_{\Omega_\epsilon}\frac{\Delta u(y)}{|x-y|}\,dy\Big) = -\int_\Omega\frac{\Delta u(y)}{|x-y|}\,dy, \qquad \text{since } \frac{1}{|x-y|} \text{ is integrable at } x = y.$$
The first integral on the right of (8.9)–(8.10) does not depend on $\epsilon$. Hence, the limit as $\epsilon \to 0$ of the second integral on the right of (8.10) exists, and in order to obtain (8.8) we need
$$\lim_{\epsilon\to 0}\int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\Big)dS = 4\pi u(x).$$
Using that for points $y$ on $\partial B_\epsilon(x)$, $\frac{1}{|x-y|} = \frac{1}{\epsilon}$ and $\frac{\partial}{\partial n}\frac{1}{|x-y|} = \frac{1}{\epsilon^2}$, we compute
$$\int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\Big)dS = \int_{\partial B_\epsilon(x)}\Big(\frac{1}{\epsilon^2}u(y) - \frac{1}{\epsilon}\frac{\partial u(y)}{\partial n}\Big)dS$$
$$= \int_{\partial B_\epsilon(x)}\frac{1}{\epsilon^2}u(x)\,dS + \int_{\partial B_\epsilon(x)}\Big(\frac{1}{\epsilon^2}\big[u(y)-u(x)\big] - \frac{1}{\epsilon}\frac{\partial u(y)}{\partial n}\Big)dS = 4\pi u(x) + \int_{\partial B_\epsilon(x)}\Big(\frac{1}{\epsilon^2}\big[u(y)-u(x)\big] - \frac{1}{\epsilon}\frac{\partial u(y)}{\partial n}\Big)dS.$$
The last integral tends to $0$ as $\epsilon \to 0$:
$$\Big|\int_{\partial B_\epsilon(x)}\Big(\frac{1}{\epsilon^2}\big[u(y)-u(x)\big] - \frac{1}{\epsilon}\frac{\partial u(y)}{\partial n}\Big)dS\Big| \le \frac{1}{\epsilon^2}\int_{\partial B_\epsilon(x)}\big|u(y)-u(x)\big|\,dS + \frac{1}{\epsilon}\int_{\partial B_\epsilon(x)}\Big|\frac{\partial u(y)}{\partial n}\Big|\,dS$$
$$\le \underbrace{4\pi\max_{y\in\partial B_\epsilon(x)}\big|u(y)-u(x)\big|}_{\to 0,\ (u \text{ continuous in } \Omega)} + \underbrace{4\pi\epsilon\max_{y\in\overline\Omega}\big|\nabla u(y)\big|}_{\to 0,\ (|\nabla u| \text{ is finite})}.$$
Representation Theorem, $n = 2$.
Let $\Omega$ be a bounded domain in $\mathbb{R}^2$ and let $n$ be the unit exterior normal to $\partial\Omega$. Let $u \in C^2(\overline\Omega)$. Then the value of $u$ at any point $x \in \Omega$ is given by the formula
$$u(x) = \frac{1}{2\pi}\int_\Omega\Delta u(y)\log|x-y|\,dy + \frac{1}{2\pi}\int_{\partial\Omega}\Big(u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\Big)dS. \qquad (8.11)$$
Proof. Consider Green’s identity
$$\int_\Omega\big(u\,\Delta w - w\,\Delta u\big)\,dy = \int_{\partial\Omega}\Big(u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n}\Big)dS,$$
where $w$ is the harmonic function
$$w(y) = \log|x-y|,$$
which is singular at $y = x \in \Omega$. In order to be able to apply Green’s identity, we consider a new domain $\Omega_\epsilon = \Omega - B_\epsilon(x)$.
Since $u, w \in C^2(\overline\Omega_\epsilon)$, Green’s identity can be applied. Since $w$ is harmonic ($\Delta w = 0$) in $\Omega_\epsilon$ and since $\partial\Omega_\epsilon = \partial\Omega \cup \partial B_\epsilon(x)$, we have
$$-\int_{\Omega_\epsilon}\Delta u(y)\log|x-y|\,dy = \int_{\partial\Omega}\Big(u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\Big)dS \qquad (8.12)$$
$$\qquad\qquad\qquad + \int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\Big)dS.$$
We will show that formula (8.11) is obtained by letting $\epsilon \to 0$. First,
$$\lim_{\epsilon\to 0}\Big(-\int_{\Omega_\epsilon}\Delta u(y)\log|x-y|\,dy\Big) = -\int_\Omega\Delta u(y)\log|x-y|\,dy, \qquad \text{since } \log|x-y| \text{ is integrable at } x = y.$$
The first integral on the right of (8.12) does not depend on $\epsilon$. Hence, the limit as $\epsilon \to 0$ of the second integral on the right of (8.12) exists, and in order to obtain (8.11) we need
$$\lim_{\epsilon\to 0}\int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\Big)dS = -2\pi u(x).$$
Using that for points $y$ on $\partial B_\epsilon(x)$, $\log|x-y| = \log\epsilon$ and $\frac{\partial}{\partial n}\log|x-y| = -\frac{1}{\epsilon}$ (the normal points toward $x$), we compute
$$\int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\Big)dS = \int_{\partial B_\epsilon(x)}\Big(-\frac{1}{\epsilon}u(y) - \log\epsilon\,\frac{\partial u(y)}{\partial n}\Big)dS$$
$$= -\int_{\partial B_\epsilon(x)}\frac{1}{\epsilon}u(x)\,dS + \int_{\partial B_\epsilon(x)}\Big(-\frac{1}{\epsilon}\big[u(y)-u(x)\big] - \log\epsilon\,\frac{\partial u(y)}{\partial n}\Big)dS = -2\pi u(x) + \int_{\partial B_\epsilon(x)}\Big(-\frac{1}{\epsilon}\big[u(y)-u(x)\big] - \log\epsilon\,\frac{\partial u(y)}{\partial n}\Big)dS.$$
The last integral tends to $0$ as $\epsilon \to 0$:
$$\Big|\int_{\partial B_\epsilon(x)}\Big(-\frac{1}{\epsilon}\big[u(y)-u(x)\big] - \log\epsilon\,\frac{\partial u(y)}{\partial n}\Big)dS\Big| \le \frac{1}{\epsilon}\int_{\partial B_\epsilon(x)}\big|u(y)-u(x)\big|\,dS + |\log\epsilon|\int_{\partial B_\epsilon(x)}\Big|\frac{\partial u(y)}{\partial n}\Big|\,dS$$
$$\le \underbrace{2\pi\max_{y\in\partial B_\epsilon(x)}\big|u(y)-u(x)\big|}_{\to 0,\ (u \text{ continuous in } \Omega)} + \underbrace{2\pi\epsilon|\log\epsilon|\max_{y\in\overline\Omega}\big|\nabla u(y)\big|}_{\to 0,\ (|\nabla u| \text{ is finite})}.$$
Representation theorems for $n > 3$ can be obtained in the same way. We use Green’s identity with
$$w(y) = \frac{1}{|x-y|^{n-2}},$$
which is a harmonic function in $\mathbb{R}^n$ with a singularity at $x$.
The fundamental solution for the Laplace operator is ($r = |x|$):
$$K(x) = \begin{cases} \frac{1}{2\pi}\log r & \text{if } n = 2,\\ \frac{1}{(2-n)\omega_n}r^{2-n} & \text{if } n \ge 3. \end{cases}$$
Representation Theorem. If $\Omega \subset \mathbb{R}^n$ is bounded, $u \in C^2(\overline\Omega)$, and $x \in \Omega$, then
$$u(x) = \int_\Omega K(x-y)\,\Delta u(y)\,dy + \int_{\partial\Omega}\Big(u(y)\frac{\partial K(x-y)}{\partial n} - K(x-y)\frac{\partial u(y)}{\partial n}\Big)dS. \qquad (8.13)$$
Proof. Consider Green’s identity
$$\int_\Omega\big(u\,\Delta w - w\,\Delta u\big)\,dy = \int_{\partial\Omega}\Big(u\frac{\partial w}{\partial n} - w\frac{\partial u}{\partial n}\Big)dS,$$
where $w$ is the harmonic function $w(y) = K(x-y)$, which is singular at $y = x$. In order to be able to apply Green’s identity, we consider a new domain $\Omega_\epsilon = \Omega - B_\epsilon(x)$.
Since $u,\ K(x-y) \in C^2(\overline\Omega_\epsilon)$, Green’s identity can be applied. Since $K(x-y)$ is harmonic ($\Delta K(x-y) = 0$) in $\Omega_\epsilon$ and since $\partial\Omega_\epsilon = \partial\Omega \cup \partial B_\epsilon(x)$, we have
$$-\int_{\Omega_\epsilon} K(x-y)\,\Delta u(y)\,dy = \int_{\partial\Omega}\Big(u(y)\frac{\partial K(x-y)}{\partial n} - K(x-y)\frac{\partial u(y)}{\partial n}\Big)dS \qquad (8.14)$$
$$\qquad\qquad\qquad + \int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial K(x-y)}{\partial n} - K(x-y)\frac{\partial u(y)}{\partial n}\Big)dS. \qquad (8.15)$$
We will show that formula (8.13) is obtained by letting $\epsilon \to 0$. First,
$$\lim_{\epsilon\to 0}\Big(-\int_{\Omega_\epsilon} K(x-y)\,\Delta u(y)\,dy\Big) = -\int_\Omega K(x-y)\,\Delta u(y)\,dy, \qquad \text{since } K(x-y) \text{ is integrable at } x = y.$$
The first integral on the right of (8.14) does not depend on $\epsilon$. Hence, the limit as $\epsilon \to 0$ of the second integral on the right of (8.15) exists, and in order to obtain (8.13) we need
$$\lim_{\epsilon\to 0}\int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial K(x-y)}{\partial n} - K(x-y)\frac{\partial u(y)}{\partial n}\Big)dS = -u(x).$$
On $\partial B_\epsilon(x)$ we have ($n$ points inwards, $n = -x/|x|$ relative to the sphere)
$$K(x-y) = K(\epsilon) = \begin{cases} \frac{1}{2\pi}\log\epsilon & \text{if } n = 2,\\ \frac{1}{(2-n)\omega_n}\epsilon^{2-n} & \text{if } n \ge 3, \end{cases} \qquad \frac{\partial K(x-y)}{\partial n} = -\frac{\partial K(\epsilon)}{\partial r} = -\frac{1}{\omega_n\epsilon^{n-1}}.$$
Thus
$$\int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial K(x-y)}{\partial n} - K(x-y)\frac{\partial u(y)}{\partial n}\Big)dS = \int_{\partial B_\epsilon(x)}\Big(u(y)\frac{\partial K(\epsilon)}{\partial n} - K(\epsilon)\frac{\partial u(y)}{\partial n}\Big)dS$$
$$= \int_{\partial B_\epsilon(x)} u(x)\frac{\partial K(\epsilon)}{\partial n}\,dS + \int_{\partial B_\epsilon(x)}\Big(\frac{\partial K(\epsilon)}{\partial n}\big[u(y)-u(x)\big] - K(\epsilon)\frac{\partial u(y)}{\partial n}\Big)dS$$
$$= \underbrace{-\frac{1}{\omega_n\epsilon^{n-1}}\,u(x)\,\omega_n\epsilon^{n-1}}_{-u(x)} - \frac{1}{\omega_n\epsilon^{n-1}}\int_{\partial B_\epsilon(x)}\big[u(y)-u(x)\big]dS - \int_{\partial B_\epsilon(x)} K(\epsilon)\frac{\partial u(y)}{\partial n}\,dS.$$
The last two integrals tend to $0$ as $\epsilon \to 0$:
$$\Big|-\frac{1}{\omega_n\epsilon^{n-1}}\int_{\partial B_\epsilon(x)}\big[u(y)-u(x)\big]dS - \int_{\partial B_\epsilon(x)} K(\epsilon)\frac{\partial u(y)}{\partial n}\,dS\Big| \le \underbrace{\frac{1}{\omega_n\epsilon^{n-1}}\max_{y\in\partial B_\epsilon(x)}\big|u(y)-u(x)\big|\,\omega_n\epsilon^{n-1}}_{\to 0,\ (u \text{ continuous in } \Omega)} + \underbrace{|K(\epsilon)|\max_{y\in\overline\Omega}\big|\nabla u(y)\big|\,\omega_n\epsilon^{n-1}}_{\to 0,\ (|\nabla u| \text{ is finite})}.$$
8.10 Green’s Function and the Poisson Kernel
With a slight change in notation, the Representation Theorem has the following special
case.
Theorem. If $\Omega \subset \mathbb{R}^n$ is bounded, $u \in C^2(\Omega)\cap C^1(\overline\Omega)$ is harmonic, and $\xi \in \Omega$, then
$$u(\xi) = \int_{\partial\Omega}\Big(u(x)\frac{\partial K(x-\xi)}{\partial n} - K(x-\xi)\frac{\partial u(x)}{\partial n}\Big)dS. \qquad (8.16)$$
Let $\omega(x)$ be any harmonic function in $\Omega$, and for $x, \xi \in \Omega$ consider
$$G(x,\xi) = K(x-\xi) + \omega(x).$$
If we use Green’s identity (with $\Delta u = 0$ and $\Delta\omega = 0$), we get
$$0 = \int_{\partial\Omega}\Big(u\frac{\partial\omega}{\partial n} - \omega\frac{\partial u}{\partial n}\Big)ds. \qquad (8.17)$$
Adding (8.16) and (8.17), we obtain
$$u(\xi) = \int_{\partial\Omega}\Big(u(x)\frac{\partial G(x,\xi)}{\partial n} - G(x,\xi)\frac{\partial u(x)}{\partial n}\Big)dS. \qquad (8.18)$$
Suppose that for each $\xi \in \Omega$ we can find a function $\omega_\xi(x)$ that is harmonic in $\Omega$ and satisfies $\omega_\xi(x) = -K(x-\xi)$ for all $x \in \partial\Omega$. Then $G(x,\xi) = K(x-\xi) + \omega_\xi(x)$ is a fundamental solution such that
$$G(x,\xi) = 0, \qquad x \in \partial\Omega.$$
$G$ is called the Green’s function and is useful in satisfying Dirichlet boundary conditions. The Green’s function is difficult to construct for a general domain $\Omega$ since it requires solving the Dirichlet problem $\Delta\omega_\xi = 0$ in $\Omega$, $\omega_\xi(x) = -K(x-\xi)$ for $x \in \partial\Omega$, for each $\xi \in \Omega$.
From (8.18) we find
$$u(\xi) = \int_{\partial\Omega} u(x)\frac{\partial G(x,\xi)}{\partial n}\,dS,$$
the Poisson integral formula. Thus if we know that the Dirichlet problem has a solution $u \in C^2(\overline\Omega)$, then we can calculate $u$ from the Poisson integral formula (provided of course that we can compute $G(x,\xi)$).
Note: if we did not assume $\Delta u = 0$ in our derivation, we would have (8.13) instead of (8.16), and an extra term in (8.17), which would give us the more general expression
$$u(\xi) = \int_\Omega G(x,\xi)\,\Delta u\,dx + \int_{\partial\Omega} u(x)\frac{\partial G(x,\xi)}{\partial n}\,dS.$$
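For the unit disk the Poisson integral can be checked numerically. The classical Poisson kernel for the unit disk, $P(r,\varphi) = \frac{1-r^2}{2\pi(1 - 2r\cos\varphi + r^2)}$, is assumed here (it is not derived in this subsection), and the boundary data $g(\theta) = \cos\theta$, whose harmonic extension is $r\cos\theta$, is an illustrative choice; this sketch is my addition.

```python
import numpy as np

def poisson_disk(r, theta, g, n=4000):
    """Poisson integral u(r, theta) = int_0^{2pi} P(r, theta - t) g(t) dt for the unit disk."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    P = (1 - r**2) / (2 * np.pi * (1 - 2 * r * np.cos(theta - t) + r**2))
    return np.sum(P * g(t)) * (2 * np.pi / n)

g = np.cos                      # boundary data; its harmonic extension is u = r*cos(theta)
r, theta = 0.6, 1.1
print(poisson_disk(r, theta, g), r * np.cos(theta))   # the two values should agree
```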
8.11 Properties of Harmonic Functions
Liouville’s Theorem. A bounded harmonic function defined on all of Rn must be a
constant.
8.12 Eigenvalues of the Laplacian
Consider the equation
$$\begin{cases} \Delta u + \lambda u = 0 & \text{in } \Omega,\\ u = 0 & \text{on } \partial\Omega, \end{cases} \qquad (8.19)$$
where $\Omega$ is a bounded domain and $\lambda$ is a (complex) number. The values of $\lambda$ for which (8.19) admits a nontrivial solution $u$ are called the eigenvalues of $\Delta$ in $\Omega$, and the solution $u$ is an eigenfunction associated to the eigenvalue $\lambda$. (The convention $\Delta u + \lambda u = 0$ is chosen so that all eigenvalues $\lambda$ will be positive.)
Properties of the Eigenvalues and Eigenfunctions for (8.19):
1. The eigenvalues of (8.19) form a countable set $\{\lambda_n\}_{n=1}^{\infty}$ of positive numbers with $\lambda_n \to \infty$ as $n \to \infty$.
2. For each eigenvalue λn there is a finite number (called the multiplicity of λn) of
linearly independent eigenfunctions un.
3. The first (or principal) eigenvalue, λ1, is simple and u1 does not change sign in Ω.
4. Eigenfunctions corresponding to distinct eigenvalues are orthogonal.
5. The eigenfunctions may be used to expand certain functions on Ω in an infinite
series.
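A finite-difference sketch (my addition, not from the handbook) illustrating property 1: the Dirichlet eigenvalues of $-\Delta$ on the unit square, approximated with the standard 5-point Laplacian, approach $\lambda_{mn} = (m^2 + n^2)\pi^2$; the grid size is an arbitrary choice.

```python
import numpy as np

# Dirichlet eigenvalues of -Laplacian on the unit square via the 5-point stencil.
N = 40                                       # interior grid points per direction
h = 1.0 / (N + 1)
T = 2.0 * np.eye(N) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
I = np.eye(N)
A = (np.kron(I, T) + np.kron(T, I)) / h**2   # -Laplacian, so eigenvalues are positive

evals = np.sort(np.linalg.eigvalsh(A))[:4]
exact = np.sort([(m**2 + n**2) * np.pi**2 for m in (1, 2) for n in (1, 2)])
print(np.round(evals, 2))
print(np.round(exact, 2))                    # 2*pi^2, 5*pi^2, 5*pi^2, 8*pi^2
```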
9 Heat Equation
The heat equation is
$$u_t = k\,\Delta u \qquad \text{for } x \in \Omega,\ t > 0, \qquad (9.1)$$
with initial and boundary conditions.
9.1 The Pure Initial Value Problem
9.1.1 Fourier Transform
If $u \in C_0^\infty(\mathbb{R}^n)$, define its Fourier transform $\hat u$ by
$$\hat u(\xi) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{-ix\cdot\xi}u(x)\,dx \qquad \text{for } \xi \in \mathbb{R}^n.$$
We can differentiate $\hat u$:
$$\frac{\partial}{\partial\xi_j}\hat u(\xi) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{-ix\cdot\xi}(-ix_j)u(x)\,dx = \widehat{\big((-ix_j)u\big)}(\xi).$$
Iterating this computation, we obtain
$$\Big(\frac{\partial}{\partial\xi_j}\Big)^k\hat u(\xi) = \widehat{\big((-ix_j)^ku\big)}(\xi). \qquad (9.2)$$
Similarly, integrating by parts shows
$$\widehat{\frac{\partial u}{\partial x_j}}(\xi) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{-ix\cdot\xi}\frac{\partial u}{\partial x_j}(x)\,dx = -\frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n}\frac{\partial}{\partial x_j}\big(e^{-ix\cdot\xi}\big)u(x)\,dx = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n}(i\xi_j)e^{-ix\cdot\xi}u(x)\,dx = (i\xi_j)\hat u(\xi).$$
Iterating this computation, we obtain
$$\widehat{\frac{\partial^ku}{\partial x_j^k}}(\xi) = (i\xi_j)^k\hat u(\xi). \qquad (9.3)$$
Formulas (9.2) and (9.3) express the fact that the Fourier transform interchanges differentiation and multiplication by the coordinate function.
9.1.2 Multi-Index Notation
A multi-index is a vector $\alpha = (\alpha_1, \dots, \alpha_n)$ where each $\alpha_i$ is a nonnegative integer. The order of the multi-index is $|\alpha| = \alpha_1 + \dots + \alpha_n$. Given a multi-index $\alpha$, define
$$D^\alpha u = \frac{\partial^{|\alpha|}u}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}} = \partial_{x_1}^{\alpha_1}\cdots\partial_{x_n}^{\alpha_n}u.$$
We can generalize (9.3) in multi-index notation:
$$\widehat{D^\alpha u}(\xi) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{-ix\cdot\xi}D^\alpha u(x)\,dx = \frac{(-1)^{|\alpha|}}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} D^\alpha_x\big(e^{-ix\cdot\xi}\big)u(x)\,dx = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n}(i\xi)^\alpha e^{-ix\cdot\xi}u(x)\,dx = (i\xi)^\alpha\hat u(\xi),$$
where $(i\xi)^\alpha = (i\xi_1)^{\alpha_1}\cdots(i\xi_n)^{\alpha_n}$.
Parseval’s theorem (Plancherel’s theorem). Assume $u \in L^1(\mathbb{R}^n)\cap L^2(\mathbb{R}^n)$. Then $\hat u, u^\vee \in L^2(\mathbb{R}^n)$ and
$$||\hat u||_{L^2(\mathbb{R}^n)} = ||u^\vee||_{L^2(\mathbb{R}^n)} = ||u||_{L^2(\mathbb{R}^n)}, \qquad \text{or} \qquad \int_{-\infty}^{\infty}|u(x)|^2\,dx = \int_{-\infty}^{\infty}|\hat u(\xi)|^2\,d\xi.$$
Also,
$$\int_{-\infty}^{\infty} u(x)\,\overline{v(x)}\,dx = \int_{-\infty}^{\infty}\hat u(\xi)\,\overline{\hat v(\xi)}\,d\xi.$$
The properties (9.2) and (9.3) make it very natural to consider the Fourier transform on a subspace of $L^1(\mathbb{R}^n)$ called the Schwartz class of functions, $\mathcal{S}$, which consists of the smooth functions whose derivatives of all orders decay faster than any polynomial, i.e.
$$\mathcal{S} = \{u \in C^\infty(\mathbb{R}^n) : \text{for every } k \in \mathbb{N} \text{ and } \alpha \in \mathbb{N}^n,\ |x|^k|D^\alpha u(x)| \text{ is bounded on } \mathbb{R}^n\}.$$
For $u \in \mathcal{S}$, the Fourier transform $\hat u$ exists since $u$ decays rapidly at $\infty$.
Lemma. (i) If $u \in L^1(\mathbb{R}^n)$, then $\hat u$ is bounded. (ii) If $u \in \mathcal{S}$, then $\hat u \in \mathcal{S}$.
Define the inverse Fourier transform for $u \in L^1(\mathbb{R}^n)$:
$$u^\vee(\xi) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{ix\cdot\xi}u(x)\,dx \quad \text{for } \xi \in \mathbb{R}^n, \qquad \text{or} \qquad u(x) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{ix\cdot\xi}\hat u(\xi)\,d\xi \quad \text{for } x \in \mathbb{R}^n.$$
Fourier Inversion Theorem (McOwen). If $u \in \mathcal{S}$, then $(\hat u)^\vee = u$; that is,
$$u(x) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{ix\cdot\xi}\hat u(\xi)\,d\xi = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^{2n}} e^{i(x-y)\cdot\xi}u(y)\,dy\,d\xi = (\hat u)^\vee(x).$$
Fourier Inversion Theorem (Evans). Assume $u \in L^2(\mathbb{R}^n)$. Then $u = (\hat u)^\vee$.
Shift. Let $u(x-a) = v(x)$, and determine $\hat v(\xi)$:
$$\widehat{u(x-a)}(\xi) = \hat v(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}v(x)\,dx = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-i(y+a)\xi}u(y)\,dy = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-iy\xi}e^{-ia\xi}u(y)\,dy = e^{-ia\xi}\hat u(\xi).$$
$$\widehat{u(x-a)}(\xi) = e^{-ia\xi}\hat u(\xi).$$
Delta function:
$$\widehat{\delta(x)}(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}\delta(x)\,dx = \frac{1}{\sqrt{2\pi}}, \qquad \text{since } u(x) = \int_{\mathbb{R}}\delta(x-y)\,u(y)\,dy.$$
$$\widehat{\delta(x-a)}(\xi) = e^{-ia\xi}\hat\delta(\xi) = \frac{1}{\sqrt{2\pi}}e^{-ia\xi} \qquad \text{(using the result for the shift).}$$
Convolution:
$$(f*g)(x) = \int_{\mathbb{R}^n} f(x-y)g(y)\,dy,$$
$$\widehat{(f*g)}(\xi) = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{-ix\cdot\xi}\int_{\mathbb{R}^n} f(x-y)g(y)\,dy\,dx = \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n}\Big(\int_{\mathbb{R}^n} e^{-i(x-y)\cdot\xi}f(x-y)\,dx\Big)e^{-iy\cdot\xi}g(y)\,dy$$
$$= \frac{1}{(2\pi)^{\frac n2}}\int_{\mathbb{R}^n} e^{-iz\cdot\xi}f(z)\,dz\cdot\int_{\mathbb{R}^n} e^{-iy\cdot\xi}g(y)\,dy = (2\pi)^{\frac n2}\hat f(\xi)\hat g(\xi).$$
$$\widehat{(f*g)}(\xi) = (2\pi)^{\frac n2}\hat f(\xi)\,\hat g(\xi).$$
Gaussian (completing the square):
$$\widehat{e^{-\frac{x^2}{2}}}(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}e^{-\frac{x^2}{2}}\,dx = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-\frac{x^2+2ix\xi}{2}}\,dx = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-\frac{x^2+2ix\xi-\xi^2}{2}}\,dx\;e^{-\frac{\xi^2}{2}}$$
$$= \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-\frac{(x+i\xi)^2}{2}}\,dx\;e^{-\frac{\xi^2}{2}} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-\frac{y^2}{2}}\,dy\;e^{-\frac{\xi^2}{2}} = \frac{1}{\sqrt{2\pi}}\sqrt{2\pi}\,e^{-\frac{\xi^2}{2}} = e^{-\frac{\xi^2}{2}}.$$
$$\widehat{e^{-\frac{x^2}{2}}}(\xi) = e^{-\frac{\xi^2}{2}}.$$
Multiplication by $x$:
$$\widehat{(-ixu)}(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}(-ix)u(x)\,dx = \frac{d}{d\xi}\hat u(\xi), \qquad \text{so} \qquad \widehat{xu(x)}(\xi) = i\frac{d}{d\xi}\hat u(\xi).$$
Multiplication of $u_x$ by $x$ (using the above result):
$$\widehat{xu_x(x)}(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}xu_x(x)\,dx = \underbrace{\frac{1}{\sqrt{2\pi}}\Big[e^{-ix\xi}xu\Big]_{-\infty}^{\infty}}_{=0} - \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\big((-i\xi)e^{-ix\xi}x + e^{-ix\xi}\big)u\,dx$$
$$= \frac{1}{\sqrt{2\pi}}\,i\xi\int_{\mathbb{R}} e^{-ix\xi}xu\,dx - \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}u\,dx = i\xi\,\widehat{xu(x)}(\xi) - \hat u(\xi) = i\xi\,i\frac{d}{d\xi}\hat u(\xi) - \hat u(\xi) = -\xi\frac{d}{d\xi}\hat u(\xi) - \hat u(\xi).$$
$$\widehat{xu_x(x)}(\xi) = -\xi\frac{d}{d\xi}\hat u(\xi) - \hat u(\xi).$$
Table of Fourier transforms (results marked with ⋆ were taken from W. Strauss, where the definition of the Fourier transform is different; an extra factor of $\frac{1}{\sqrt{2\pi}}$ was added to each of these results):
$$\widehat{e^{-\frac{ax^2}{2}}}(\xi) = \frac{1}{\sqrt{a}}\,e^{-\frac{\xi^2}{2a}} \qquad \text{(Gaussian)}$$
$$\widehat{e^{ibx}f(ax)}(\xi) = \frac{1}{a}\hat f\Big(\frac{\xi-b}{a}\Big)$$
$$f(x) = \begin{cases} 1, & |x| \le L\\ 0, & |x| > L, \end{cases} \qquad \hat f(\xi) = \frac{1}{\sqrt{2\pi}}\,\frac{2\sin(\xi L)}{\xi}$$
$$\widehat{e^{-a|x|}}(\xi) = \frac{1}{\sqrt{2\pi}}\,\frac{2a}{a^2+\xi^2}, \qquad (a > 0)$$
$$\widehat{\frac{1}{a^2+x^2}}(\xi) = \frac{\sqrt{2\pi}}{2a}\,e^{-a|\xi|}, \qquad (a > 0)$$
$$\widehat{H(a-|x|)}(\xi) = \sqrt{\frac{2}{\pi}}\,\frac{1}{\xi}\sin a\xi$$
$$\widehat{H(x)}(\xi) = \frac{1}{\sqrt{2\pi}}\Big(\pi\delta(\xi) + \frac{1}{i\xi}\Big)$$
$$\widehat{\big(H(x) - H(-x)\big)}(\xi) = \sqrt{\frac{2}{\pi}}\,\frac{1}{i\xi} \qquad \text{(sign)}$$
$$\hat 1(\xi) = \sqrt{2\pi}\,\delta(\xi).$$
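The table entries can be spot-checked numerically; the sketch below (my addition) approximates $\hat u(\xi) = (2\pi)^{-1/2}\int e^{-ix\xi}u(x)\,dx$ by a simple Riemann sum for the Gaussian $u(x) = e^{-x^2/2}$ and compares with $e^{-\xi^2/2}$. The grid width and sample frequencies are arbitrary choices.

```python
import numpy as np

def fourier_transform(u, xi, x):
    """Approximate (2*pi)^(-1/2) * int e^{-i x xi} u(x) dx by a Riemann sum."""
    dx = x[1] - x[0]
    vals = np.exp(-1j * np.outer(xi, x)) * u(x)
    return np.sum(vals, axis=1) * dx / np.sqrt(2 * np.pi)

x = np.linspace(-20.0, 20.0, 4001)         # grid wide enough that u has decayed
xi = np.array([0.0, 0.5, 1.0, 2.0])

u = lambda s: np.exp(-s**2 / 2)
print(np.real(fourier_transform(u, xi, x)))  # numerical transform
print(np.exp(-xi**2 / 2))                    # table entry: e^{-xi^2/2}
```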
9.1.3 Solution of the Pure Initial Value Problem

Consider the pure initial value problem

  u_t = Δu   for t > 0, x ∈ ℝⁿ,
  u(x, 0) = g(x)   for x ∈ ℝⁿ.        (9.4)

We take the Fourier transform of the heat equation in the x-variables:

  (u_t)^∧(ξ, t) = (1/(2π)^{n/2}) ∫_{ℝⁿ} e^{−ix·ξ} u_t(x, t) dx = (∂/∂t) û(ξ, t),
  (Δu)^∧(ξ, t) = Σ_{j=1}^{n} (iξ_j)² û(ξ, t) = −|ξ|² û(ξ, t).

The heat equation therefore becomes

  (∂/∂t) û(ξ, t) = −|ξ|² û(ξ, t),

which is an ordinary differential equation in t, with the solution û(ξ, t) = C e^{−|ξ|²t}.
The initial condition û(ξ, 0) = ĝ(ξ) gives

  û(ξ, t) = ĝ(ξ) e^{−|ξ|²t},

  u(x, t) = (ĝ(ξ) e^{−|ξ|²t})^∨ = (1/(2π)^{n/2}) g ∗ (e^{−|ξ|²t})^∨
         = (1/(2π)^{n/2}) g ∗ [ (1/(2π)^{n/2}) ∫_{ℝⁿ} e^{−|ξ|²t} e^{ix·ξ} dξ ]
         = (1/(4π²)^{n/2}) g ∗ ∫_{ℝⁿ} e^{ix·ξ−|ξ|²t} dξ = (1/(4π²)^{n/2}) g ∗ [ e^{−|x|²/(4t)} (π/t)^{n/2} ]
         = (1/(4πt)^{n/2}) g ∗ e^{−|x|²/(4t)} = (1/(4πt)^{n/2}) ∫_{ℝⁿ} e^{−|x−y|²/(4t)} g(y) dy.

Thus,¹² the solution of the initial value problem (9.4) is

  u(x, t) = ∫_{ℝⁿ} K(x, y, t) g(y) dy = (1/(4πt)^{n/2}) ∫_{ℝⁿ} e^{−|x−y|²/(4t)} g(y) dy.

Uniqueness of solutions for the pure initial value problem fails: there are nontrivial solutions of (9.4) with g = 0.¹³ Thus, the pure initial value problem for the heat equation is not well-posed, in contrast to the wave equation. However, the nontrivial solutions are unbounded as functions of x when t > 0 is fixed; uniqueness can be regained by adding a boundedness condition on the solution.

¹² Identity (Evans, p. 187): ∫_{ℝⁿ} e^{ix·ξ−|ξ|²t} dξ = e^{−|x|²/(4t)} (π/t)^{n/2}.
¹³ The following function u satisfies u_t = u_xx for t > 0 with u(x, 0) = 0:
   u(x, t) = Σ_{k=0}^{∞} (x^{2k}/(2k)!) (dᵏ/dtᵏ) e^{−1/t²}.
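As a concrete check of this solution formula, the sketch below (assuming NumPy) evaluates the kernel integral numerically for Gaussian data g(x) = e^{−x²/2}, for which the convolution can also be computed in closed form as u(x, t) = (1 + 2t)^{−1/2} e^{−x²/(2(1+2t))}; the evaluation point (x, t) = (0.5, 0.3) is an arbitrary choice.

```python
import numpy as np

def heat_solution(x, t, g, y_max=40.0, n=200001):
    """u(x,t) = (4*pi*t)^(-1/2) * integral of exp(-(x-y)^2/(4t)) g(y) dy  (case n = 1)."""
    y = np.linspace(-y_max, y_max, n)
    dy = y[1] - y[0]
    kernel = np.exp(-(x - y)**2 / (4*t)) / np.sqrt(4*np.pi*t)
    return np.sum(kernel * g(y)) * dy

g = lambda y: np.exp(-y**2 / 2)
x, t = 0.5, 0.3
exact = np.exp(-x**2 / (2*(1 + 2*t))) / np.sqrt(1 + 2*t)   # closed form for this particular g
print(heat_solution(x, t, g), exact)                        # the two values should agree
```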
9.1.4 Nonhomogeneous Equation

Consider the pure initial value problem with homogeneous initial condition:

  u_t = Δu + f(x, t)   for t > 0, x ∈ ℝⁿ,
  u(x, 0) = 0   for x ∈ ℝⁿ.        (9.5)

Duhamel's principle gives the solution:

  u(x, t) = ∫₀ᵗ ∫_{ℝⁿ} K̃(x − y, t − s) f(y, s) dy ds.

9.1.5 Nonhomogeneous Equation with Nonhomogeneous Initial Conditions

Combining the two solutions above, we find that the solution of the initial value problem

  u_t = Δu + f(x, t)   for t > 0, x ∈ ℝⁿ,
  u(x, 0) = g(x)   for x ∈ ℝⁿ,        (9.6)

is given by

  u(x, t) = ∫_{ℝⁿ} K̃(x − y, t) g(y) dy + ∫₀ᵗ ∫_{ℝⁿ} K̃(x − y, t − s) f(y, s) dy ds.
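A quick sanity check of Duhamel's formula, sketched below assuming SymPy: the kernel has unit mass in y, so for the forcing f ≡ 1 (and zero initial data) the formula collapses to u(x, t) = ∫₀ᵗ 1 ds = t, which indeed solves u_t = Δu + 1 with u(x, 0) = 0.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
tau = sp.symbols('tau', positive=True)                      # tau stands for t - s > 0

K = sp.exp(-(x - y)**2 / (4*tau)) / sp.sqrt(4*sp.pi*tau)    # 1-D heat kernel
print(sp.simplify(sp.integrate(K, (y, -sp.oo, sp.oo))))     # expected: 1 (unit mass)

# Hence, for f = 1, Duhamel gives u(x,t) = Integral_0^t 1 ds = t; check the PDE:
u = t
print(sp.diff(u, t) - sp.diff(u, x, 2) - 1)                 # expected: 0
```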
9.1.6 The Fundamental Solution

Suppose we want to solve the Cauchy problem

  u_t = Lu,   x ∈ ℝⁿ, t > 0,
  u(x, 0) = g(x),   x ∈ ℝⁿ,        (9.7)

where L is a differential operator in ℝⁿ with constant coefficients. Suppose K(x, t) is a distribution in ℝⁿ for each value of t ≥ 0, K is C¹ in t and satisfies

  K_t − LK = 0,
  K(x, 0) = δ(x).        (9.8)

We call K a fundamental solution for the initial value problem. The solution of (9.7) is then given by convolution in the space variables:

  u(x, t) = ∫_{ℝⁿ} K(x − y, t) g(y) dy.
For operators of the form ∂_t − L, the fundamental solution of the initial value problem, K(x, t) as defined in (9.8), coincides with the "free space" fundamental solution, which satisfies

  (∂_t − L) K(x, t) = δ(x, t),

provided we extend K(x, t) by zero to t < 0. For the heat equation, consider

  K̃(x, t) = (1/(4πt)^{n/2}) e^{−|x|²/(4t)}   for t > 0,        K̃(x, t) = 0   for t ≤ 0.        (9.9)

Notice that K̃ is smooth for (x, t) ≠ (0, 0).

K̃, defined as in (9.9), is the fundamental solution of the "free space" heat equation.

Proof. We need to show:

  (∂_t − Δ) K̃(x, t) = δ(x, t).        (9.10)

To verify (9.10) as distributions, we must show that for any v ∈ C₀^∞(ℝ^{n+1}):¹⁴

  ∫_{ℝ^{n+1}} K̃(x, t) (−∂_t − Δ) v dx dt = ∫_{ℝ^{n+1}} δ(x, t) v(x, t) dx dt ≡ v(0, 0).

To do this, let us take ε > 0 and define

  K̃_ε(x, t) = (1/(4πt)^{n/2}) e^{−|x|²/(4t)}   for t > ε,        K̃_ε(x, t) = 0   for t ≤ ε.

Then K̃_ε → K̃ as distributions, so it suffices to show that (∂_t − Δ) K̃_ε → δ as distributions. Now

  ∫ K̃_ε (−∂_t − Δ) v dx dt = ∫_ε^∞ ∫_{ℝⁿ} K̃(x, t) (−∂_t − Δ) v(x, t) dx dt
    = − ∫_ε^∞ ∫_{ℝⁿ} K̃(x, t) ∂_t v(x, t) dx dt − ∫_ε^∞ ∫_{ℝⁿ} K̃(x, t) Δv(x, t) dx dt
    = − [ ∫_{ℝⁿ} K̃(x, t) v(x, t) dx ]_{t=ε}^{t=∞} + ∫_ε^∞ ∫_{ℝⁿ} ∂_t K̃(x, t) v(x, t) dx dt − ∫_ε^∞ ∫_{ℝⁿ} ΔK̃(x, t) v(x, t) dx dt
    = ∫_ε^∞ ∫_{ℝⁿ} (∂_t − Δ) K̃(x, t) v(x, t) dx dt + ∫_{ℝⁿ} K̃(x, ε) v(x, ε) dx.

But for t > ε, (∂_t − Δ) K̃(x, t) = 0; moreover, since lim_{t→0⁺} K̃(x, t) = δ₀(x) = δ(x), we have K̃(x, ε) → δ₀(x) as ε → 0, so the last integral tends to v(0, 0).

¹⁴ Note, for the operator L = ∂/∂t, the adjoint operator is L* = −∂/∂t.
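The pointwise identity (∂_t − Δ)K̃ = 0 for t > 0, used in the proof above, can be confirmed symbolically. A sketch assuming SymPy, in one space dimension:

```python
import sympy as sp

x = sp.symbols('x', real=True)
t = sp.symbols('t', positive=True)

K = sp.exp(-x**2 / (4*t)) / sp.sqrt(4*sp.pi*t)          # heat kernel, n = 1
print(sp.simplify(sp.diff(K, t) - sp.diff(K, x, 2)))    # expected: 0, i.e. K_t = K_xx for t > 0
```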
10 Schrödinger Equation

Problem (F'96, #5). The Gauss kernel

  G(t, x, y) = (1/(4πt)^{1/2}) e^{−(x−y)²/(4t)}

is the fundamental solution of the heat equation, solving

  G_t = G_xx,   G(0, x, y) = δ(x − y).

By analogy with the heat equation, find the fundamental solution H(t, x, y) of the Schrödinger equation

  H_t = iH_xx,   H(0, x, y) = δ(x − y).

Show that your expression for H is indeed the fundamental solution for the Schrödinger equation. You may use the following special integral:

  ∫_{−∞}^{∞} e^{−ix²/4} dx = √(−4πi).

Proof. • Remark: Consider the initial value problem for the Schrödinger equation

  u_t = iΔu,   x ∈ ℝⁿ, t > 0,
  u(x, 0) = g(x),   x ∈ ℝⁿ.

If we formally replace t by it in the heat kernel, we obtain the Fundamental Solution of the Schrödinger Equation:¹⁵

  H(x, t) = (1/(4πit)^{n/2}) e^{−|x|²/(4it)}   (x ∈ ℝⁿ, t ≠ 0),

  u(x, t) = (1/(4πit)^{n/2}) ∫_{ℝⁿ} e^{−|x−y|²/(4it)} g(y) dy.

In particular, the Schrödinger equation is reversible in time, whereas the heat equation is not.

• Solution: We have already found the fundamental solution for the heat equation using the Fourier transform. For the Schrödinger equation in one dimension, we have

  (∂/∂t) û(ξ, t) = −iξ² û(ξ, t),

which is an ordinary differential equation in t, with the solution û(ξ, t) = C e^{−iξ²t}.
The initial condition û(ξ, 0) = ĝ(ξ) gives

  û(ξ, t) = ĝ(ξ) e^{−iξ²t},

  u(x, t) = (ĝ(ξ) e^{−iξ²t})^∨ = (1/√(2π)) g ∗ (e^{−iξ²t})^∨ = (1/√(2π)) g ∗ [ (1/√(2π)) ∫_ℝ e^{−iξ²t} e^{ix·ξ} dξ ]
         = (1/(2π)) g ∗ ∫_ℝ e^{ix·ξ−iξ²t} dξ = (needs some work)
         = (1/√(4πit)) g ∗ e^{−|x|²/(4it)} = (1/√(4πit)) ∫_ℝ e^{−|x−y|²/(4it)} g(y) dy.

¹⁵ Evans, p. 188, Example 3.
• For the Schrödinger equation, consider

  Ψ̃(x, t) = (1/(4πit)^{n/2}) e^{−|x|²/(4it)}   for t > 0,        Ψ̃(x, t) = 0   for t ≤ 0.        (10.1)

Notice that Ψ̃ is smooth for (x, t) ≠ (0, 0).

Ψ̃, defined as in (10.1), is the fundamental solution of the Schrödinger equation. We need to show:

  (∂_t − iΔ) Ψ̃(x, t) = δ(x, t).        (10.2)

To verify (10.2) as distributions, we must show that for any v ∈ C₀^∞(ℝ^{n+1}):¹⁶

  ∫_{ℝ^{n+1}} Ψ̃(x, t) (−∂_t − iΔ) v dx dt = ∫_{ℝ^{n+1}} δ(x, t) v(x, t) dx dt ≡ v(0, 0).

To do this, let us take ε > 0 and define

  Ψ̃_ε(x, t) = (1/(4πit)^{n/2}) e^{−|x|²/(4it)}   for t > ε,        Ψ̃_ε(x, t) = 0   for t ≤ ε.

Then Ψ̃_ε → Ψ̃ as distributions, so it suffices to show that (∂_t − iΔ) Ψ̃_ε → δ as distributions. Now

  ∫ Ψ̃_ε (−∂_t − iΔ) v dx dt = ∫_ε^∞ ∫_{ℝⁿ} Ψ̃(x, t) (−∂_t − iΔ) v(x, t) dx dt
    = ∫_ε^∞ ∫_{ℝⁿ} (∂_t − iΔ) Ψ̃(x, t) v(x, t) dx dt + ∫_{ℝⁿ} Ψ̃(x, ε) v(x, ε) dx.

But for t > ε, (∂_t − iΔ) Ψ̃(x, t) = 0; moreover, since lim_{t→0⁺} Ψ̃(x, t) = δ₀(x) = δ(x), we have Ψ̃(x, ε) → δ₀(x) as ε → 0, so the last integral tends to v(0, 0).

¹⁶ Note, for the operator L = ∂/∂t, the adjoint operator is L* = −∂/∂t.
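The analogous pointwise identity (∂_t − iΔ)Ψ̃ = 0 for t > 0 can also be checked symbolically. Below is a sketch assuming SymPy, in one dimension; the constant prefactor (4πi)^{−1/2} is dropped, which is harmless since the equation is linear.

```python
import sympy as sp

x = sp.symbols('x', real=True)
t = sp.symbols('t', positive=True)

W = sp.exp(-x**2 / (4*sp.I*t)) / sp.sqrt(t)                 # Schrodinger kernel up to a constant factor
print(sp.simplify(sp.diff(W, t) - sp.I*sp.diff(W, x, 2)))   # expected: 0, i.e. W_t = i W_xx
```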
11 Problems: Quasilinear Equations

Problem (F'90, #7). Use the method of characteristics to find the solution of the first order partial differential equation

  x² u_x + xy u_y = u²

which passes through the curve u = 1, x = y². Determine where this solution becomes singular.

Proof. We have a condition u(x = y²) = 1. Γ is parametrized by Γ : (s², s, 1).

  dx/dt = x²  ⇒  x = 1/(−t − c₁(s))  ⇒  x(0, s) = 1/(−c₁(s)) = s²  ⇒  x = 1/(−t + 1/s²) = s²/(1 − ts²),
  dy/dt = xy  ⇒  dy/dt = s²y/(1 − ts²)  ⇒  y = c₂(s)/(1 − ts²)  ⇒  y(s, 0) = c₂(s) = s  ⇒  y = s/(1 − ts²),
  dz/dt = z²  ⇒  z = 1/(−t − c₃(s))  ⇒  z(0, s) = 1/(−c₃(s)) = 1  ⇒  z = 1/(1 − t).

Thus,

  x/y = s  ⇒  y = (x/y) / (1 − t x²/y²)  ⇒  t = y²/x² − 1/x.

  ⇒  u(x, y) = 1 / (1 − y²/x² + 1/x) = x²/(x² + x − y²).

The solution becomes singular when y² = x² + x.
It can be checked that the solution satisfies the PDE and u(x = y²) = y⁴/(y⁴ + y² − y²) = 1.
Problem (S'91, #7). Solve the first order PDE

  f_x + x²y f_y + f = 0,
  f(x = 0, y) = y²

using the method of characteristics.

Proof. Rewrite the equation as

  u_x + x²y u_y = −u,
  u(0, y) = y².

Γ is parameterized by Γ : (0, s, s²).

  dx/dt = 1  ⇒  x = t,
  dy/dt = x²y  ⇒  dy/dt = t²y  ⇒  y = s e^{t³/3},
  dz/dt = −z  ⇒  z = s² e^{−t}.

Thus, x = t and s = y e^{−t³/3} = y e^{−x³/3}, and

  u(x, y) = (y e^{−x³/3})² e^{−x} = y² e^{−(2/3)x³ − x}.

The solution satisfies both the PDE and the initial conditions.
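The characteristic system for this problem can also be integrated numerically and compared with the closed-form answer. A sketch assuming SciPy and NumPy; the parameter value s = 1.5 and the final time t = 1 are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, w):
    x, y, z = w
    return [1.0, x**2 * y, -z]              # dx/dt = 1, dy/dt = x^2 y, dz/dt = -z

s = 1.5                                      # point on the initial curve Gamma: (0, s, s^2)
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s, s**2], rtol=1e-10, atol=1e-12)

x_end, y_end, z_end = sol.y[:, -1]
u_formula = y_end**2 * np.exp(-2*x_end**3/3 - x_end)   # closed-form u(x, y) derived above
print(z_end, u_formula)                                 # the two values should agree
```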
Problem (S’92, #1). Consider the Cauchy problem
ut = xux − u + 1 − ∞ < x < ∞, t ≥ 0
u(x, 0) = sinx − ∞ < x < ∞
and solve it by the method of characteristics. Discuss the properties of the solution; in
particular investigate the behavior of |ux(·, t)|∞ for t → ∞.
Proof. Γ is parametrized by Γ : (s, 0, sins). We have
dx
dt
= −x ⇒ x = se−t
,
dy
dt
= 1 ⇒ y = t,
dz
dt
= 1 − z ⇒ z = 1 −
1 − sins
et
.
Thus, t = y, s = xey, and
u(x, y) = 1 −
1
ey
+
sin(xey)
ey
.
It can be checked that the solution satisfies the PDE and the initial condition.
As t → ∞, u(x, t) → 1. Also,
  |ux(·, y)|∞ = |cos(xe^y)|∞ = 1.
ux(x, y) oscillates between −1 and 1. If x = 0, ux = 1.
Problem (W’02, #6). Solve the Cauchy problem
ut + u2
ux = 0, t > 0,
u(0, x) = 2 + x.
Proof. Solved
Problem (S’97, #1). Find the solution of the Burgers’ equation
ut + uux = −x, t ≥ 0
u(x, 0) = f(x), −∞ < x < ∞.
Proof. Γ is parameterized by Γ : (s, 0, f(s)).
dx
dt
= z,
dy
dt
= 1 ⇒ y = t,
dz
dt
= −x.
Note that we have a coupled system:
˙x = z,
˙z = −x,
which can be written as a second order ODE:
¨x + x = 0, x(s, 0) = s, ˙x(s, 0) = z(0) = f(s).
Solving the equation, we get
x(s, t) = s cos t + f(s) sint, and thus,
z(s, t) = ˙x(t) = −s sint + f(s) cos t.
x = s cos y + f(s) siny,
u = −s sin y + f(s) cos y.
⇒
x cos y = s cos2
y + f(s) siny cos y,
u siny = −s sin2
y + f(s) cos y siny.
⇒ x cos y − u siny = s(cos2
y + sin2
y) = s.
⇒ u(x, y) = f(x cosy − u siny) cosy − (x cos y − u siny) siny.
Problem (F’98, #2). Solve the partial differential equation
uy − u2
ux = 3u, u(x, 0) = f(x)
using method of characteristics. (Hint: find a parametric representation of the solu-
tion.)
Proof. Γ is parameterized by Γ : (s, 0, f(s)).
dx
dt
= −z2
⇒
dx
dt
= −f2
(s)e6t
⇒ x = −
1
6
f2
(s)e6t
+
1
6
f2
(s) + s,
dy
dt
= 1 ⇒ y = t,
dz
dt
= 3z ⇒ z = f(s)e3t
.
Thus,
x = −1
6 f2(s)e6y + 1
6 f2(s) + s,
f(s) = z
e3y
⇒ x = −
1
6
z2
e6y
e6y
+
1
6
z2
e6y
+ s =
z2
6e6y
−
z2
6
+ s,
⇒ s = x −
z2
6e6y
+
z2
6
.
⇒ z = f x −
z2
6e6y
+
z2
6
e3y
.
⇒ u(x, y) = f x −
u2
6e6y
+
u2
6
e3y
.
Problem (S’99, #1) Modified Problem. a) Solve
ut +
u3
3 x
= 0 (11.1)
for t > 0, −∞ < x < ∞ with initial data
u(x, 0) = h(x) =
−a(1 − ex), x < 0
−a(1 − e−x), x > 0
where a > 0 is constant. Solve until the first appearance of discontinuous derivative
and determine that critical time.
b) Consider the equation
ut +
u3
3 x
= −cu. (11.2)
How large does the constant c > 0 have to be so that a smooth solution (with no discontinuities) exists for all t > 0? Explain.
Proof. a) Characteristic form: ut + u2ux = 0. Γ : (s, 0, h(s)).
dx
dt
= z2
,
dy
dt
= 1,
dz
dt
= 0.
x = h(s)2
t + s, y = t, z = h(s).
u(x, y) = h(x − u2
y) (11.3)
The characteristic projection in the xt-plane17 passing through the point (s, 0) is the
line
x = h(s)2
t + s
along which u has the constant value u = h(s).
The derivative of the initial data is discontinuous, and that leads to a
rarefaction-like behavior at t = 0. However, if the question meant to ask to
determine the first time when a shock forms, we proceed as follows.
Two characteristics x = h(s1)2t + s1 and x = h(s2)2t + s2 intersect at a point (x, t)
with
t = −
s2 − s1
h(s2)2 − h(s1)2
.
From (11.3), we have
ux = h (s)(1 − 2uuxt) ⇒ ux =
h (s)
1 + 2h(s)h (s)t
Hence for 2h(s)h (s) < 0, ux becomes infinite at the positive time
t =
−1
2h(s)h (s)
.
The smallest t for which this happens corresponds to the value s = s0 at which h(s)h (s)
has a minimum (i.e.−h(s)h (s) has a maximum). At time T = −1/(2h(s0)h (s0)) the
17
y and t are interchanged here
solution u experiences a “gradient catastrophe”.
Therefore, need to find a minimum of
f(x) = 2h(x)h (x) =
−2a(1 − ex
) · aex
−2a(1 − e−x) · (−ae−x)
=
−2a2
ex
(1 − ex
), x < 0
2a2e−x(1 − e−x), x > 0
f (x) =
−2a2
ex
(1 − 2ex
), x < 0
−2a2e−x(1 − 2e−x), x > 0
= 0 ⇒
x = ln(1
2) = − ln(2), x < 0
x = ln(2), x > 0
⇒
f(ln(1
2)) = −2a2
eln( 1
2
)
(1 − eln( 1
2
)
) = −2a2
(1
2 )(1
2) = −a2
2 , x < 0
f(ln(2)) = 2a2(1
2 )(1 − 1
2 ) = a2
2 , x > 0
⇒ t = −
1
min{2h(s)h (s)}
=
2
a2
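The breaking time found in part (a) can be cross-checked by minimizing 2h(s)h'(s) on a grid. A sketch assuming NumPy; the value a = 2 and the grid limits are arbitrary choices.

```python
import numpy as np

a = 2.0
s = np.linspace(-10.0, -1e-6, 200001)   # the minimum occurs for s < 0 (at s = -ln 2)
h  = -a*(1 - np.exp(s))                  # initial data h(s) for s < 0
hp =  a*np.exp(s)                        # h'(s) for s < 0

m = np.min(2*h*hp)
print(m, -a**2/2)                        # expect min{2 h h'} = -a^2/2
print(-1.0/m, 2/a**2)                    # expect breaking time T = 2/a^2
```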
Proof. b) Characteristic form: ut + u2ux = −cu. Γ : (s, 0, h(s)).
dx
dt
= z2
= h(s)2
e−2ct
⇒ x = s +
1
2c
h(s)2
(1 − e−2ct
),
dy
dt
= 1 ⇒ y = t,
dz
dt
= −cz ⇒ z = h(s)e−ct
(⇒ h(s) = uecy
).
Solving for s and t in terms of x and y, we get:
t = y, s = x −
1
2c
h(s)2
(1 − e−2cy
).
Thus,
u(x, y) = h x −
1
2c
u2
e2cy
(1 − e−2cy
) · e−cy
.
ux = h (s)e−cy
· (1 −
1
c
uuxe2cy
(1 − e−2cy
)),
ux =
h (s)e−cy
1 + 1
c h (s)ecyu · (1 − e−2cy)
=
h (s)e−cy
1 + 1
c h (s)h(s)(1 − e−2cy)
.
Thus, c > 0 that would allow a smooth solution to exist for all t > 0 should satisfy
1 +
1
c
h (s)h(s)(1 − e−2cy
) = 0.
We can perform further calculations taking into account the result from part (a):
min{2h(s)h (s)} = −
a2
2
.
Problem (S’99, #1). Original Problem. a). Solve
ut +
u3
x
3
= 0 (11.4)
for t > 0, −∞ < x < ∞ with initial data
u(x, 0) = h(x) =
−a(1 − ex
), x < 0
−a(1 − e−x), x > 0
where a > 0 is constant.
Proof. Rewrite the equation as
F(x, y, u, ux, uy) =
u3
x
3
+ uy = 0,
F(x, y, z, p, q) =
p3
3
+ q = 0.
Γ is parameterized by Γ : (s, 0, h(s), φ(s), ψ(s)).
We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t)
and q(s, t), respectively:
• F(f(s), g(s), h(s), φ(s), ψ(s)) = 0,
φ(s)3
3
+ ψ(s) = 0,
ψ(s) = −
φ(s)3
3
.
• h (s) = φ(s)f (s) + ψ(s)g (s)
aes = φ(s), x < 0
−ae−s
= φ(s), x > 0
⇒
ψ(s) = −a3e3s
3 , x < 0
ψ(s) = a3e−3s
3 , x > 0
Therefore, now Γ is parametrized by
Γ : (s, 0, −a(1 − es
), aes
, −a3e3s
3 ), x < 0
Γ : (s, 0, −a(1 − e−s
), −ae−s
, a3e−3s
3 ), x > 0
dx
dt
= Fp = p2
=
a2
e2s
a2
e−2s
⇒ x(s, t) =
a2
e2s
t + c4(s)
a2
e−2s
t + c5(s)
⇒ x =
a2
e2s
t + s
a2
e−2s
t + s
dy
dt
= Fq = 1 ⇒ y(s, t) = t + c1(s) ⇒ y = t
dz
dt
= pFp + qFq = p3
+ q =
a3
e3s
− a3e3s
3 = 2
3 a3
e3s
, x < 0
−a3e−3s + a3e−3s
3 = −2
3 a3e−3s, x > 0
⇒ z(s, t) =
2
3 a3
e3s
t + c6(s), x < 0
−2
3 a3e−3st + c7(s), x > 0
⇒ z =
2
3 a3
e3s
t − a(1 − es
), x < 0
−2
3 a3e−3st − a(1 − e−s), x > 0
dp
dt
= −Fx − Fzp = 0 ⇒ p(s, t) = c2(s) ⇒ p =
aes
, x < 0
−ae−s, x > 0
dq
dt
= −Fy − Fzq = 0 ⇒ q(s, t) = c3(s) ⇒ q =
−a3e3s
3 , x < 0
a3e−3s
3 , x > 0
Thus,
u(x, y) =
2
3a3e3sy − a(1 − es), x < 0
−2
3 a3
e−3s
y − a(1 − e−s
), x > 0
where s is defined as
x =
a2e2sy + s, x < 0
a2
e−2s
y + s, x > 0.
b). Solve the equation
ut +
u3
x
3
= −cu. (11.5)
Proof. Rewrite the equation as
F(x, y, u, ux, uy) =
u3
x
3
+ uy + cu = 0,
F(x, y, z, p, q) =
p3
3
+ q + cz = 0.
Γ is parameterized by Γ : (s, 0, h(s), φ(s), ψ(s)).
We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t)
and q(s, t), respectively:
• F(f(s), g(s), h(s), φ(s), ψ(s)) = 0,
φ(s)3
3
+ ψ(s) + ch(s) = 0,
ψ(s) = −
φ(s)3
3
− ch(s) =
−
φ(s)3
3 + ca(1 − ex
), x < 0
−φ(s)3
3 + ca(1 − e−x), x > 0
• h (s) = φ(s)f (s) + ψ(s)g (s)
aes
= φ(s), x < 0
−ae−s = φ(s), x > 0
⇒
ψ(s) = −a3e3s
3 + ca(1 − ex
), x < 0
ψ(s) = a3e−3s
3 + ca(1 − e−x), x > 0
Therefore, now Γ is parametrized by
Γ : (s, 0, −a(1 − es), aes, −a3e3s
3 + ca(1 − ex), x < 0
Γ : (s, 0, −a(1 − e−s), −ae−s, a3e−3s
3 + ca(1 − e−x), x > 0
dx
dt
= Fp = p2
dy
dt
= Fq = 1
dz
dt
= pFp + qFq = p3
+ q
dp
dt
= −Fx − Fzp = −cp
dq
dt
= −Fy − Fzq = −cq
We can proceed solving the characteristic equations with initial conditions above.
Problem (S’95, #7). a) Solve the following equation, using characteristics,
ut + u3
ux = 0,
u(x, 0) =
a(1 − ex
), for x < 0
−a(1 − e−x), for x > 0
where a > 0 is a constant. Determine the first time when a shock forms.
Proof. a) Γ is parameterized by Γ : (s, 0, h(s)).
dx
dt
= z3
,
dy
dt
= 1,
dz
dt
= 0.
x = h(s)3
t + s, y = t, z = h(s).
u(x, y) = h(x − u3
y) (11.6)
The characteristic projection in the xt-plane18
passing through the point (s, 0) is the line
x = h(s)3
t + s
along which u has a constant value u = h(s).
Characteristics x = h(s1)3t+s1 and x = h(s2)3t+s2 intersect at a point (x, t) with
t = −
s2 − s1
h(s2)3 − h(s1)3
.
From (11.6), we have
ux = h (s)(1 − 3u2
uxt) ⇒ ux =
h (s)
1 + 3h(s)2h (s)t
Hence for 3h(s)2h (s) < 0, ux becomes infinite at the positive time
t =
−1
3h(s)2h (s)
.
The smallest t for which this happens corresponds to the value s = s0 at which
h(s)2h (s) has a minimum (i.e.−h(s)2h (s) has a maximum). At time T = −1/(3h(s0)2h (s0))
the solution u experiences a “gradient catastrophe”.
Therefore, need to find a minimum of
f(x) = 3h(x)2
h (x) =
−3a2
(1 − ex
)2
aex
= −3a3
ex
(1 − ex
)2
, x < 0
−3a2
(1 − e−x
)2
ae−x
= −3a3
e−x
(1 − e−x
)2
, x > 0
f (x) =
−3a3
ex
(1 − ex
)2
− ex
2(1 − ex
)ex
= −3a3
ex
(1 − ex
)(1 − 3ex
), x < 0
−3a3
− e−x
(1 − e−x
)2
+ e−x
2(1 − e−x
)e−x
= −3a3
e−x
(1 − e−x
)(−1 + 3e−x
), x > 0
= 0
The zeros of f (x) are
x = 0, x = − ln 3, x < 0,
x = 0, x = ln 3, x > 0.
We check which ones give the minimum of f(x) :
⇒
f(0) = −3a3
, f(− ln3) = −3a3 1
3 (1 − 1
3)2
= −4a3
9 , x < 0
f(0) = −3a3, f(ln3) = −3a3 1
3 (1 − 1
3)2 = −4a3
9 , x > 0
18
y and t are interchanged here
⇒ t = −
1
min{3h(s)2h (s)}
= −
1
min f(s)
=
1
3a3
.
b) Now consider
ut + u3
ux + cu = 0
with the same initial data and a positive constant c. How large does c need to be in
order to prevent shock formation?
b) Characteristic form: ut + u3ux = −cu. Γ : (s, 0, h(s)).
dx
dt
= z3
= h(s)3
e−3ct
⇒ x = s +
1
3c
h(s)3
(1 − e−3ct
),
dy
dt
= 1 ⇒ y = t,
dz
dt
= −cz ⇒ z = h(s)e−ct
(⇒ h(s) = uecy
).
⇒ z(s, t) = h x −
1
3c
h(s)3
(1 − e−3ct
) e−ct
,
⇒ u(x, y) = h x −
1
3c
u3
e3cy
(1 − e−3cy
) e−cy
.
ux = h (s) · e−cy
· 1 −
1
c
u2
uxe3cy
(1 − e−3cy
) ,
ux =
h (s)e−cy
1 + 1
c h (s)u2e2cy(1 − e−3cy)
=
h (s)e−cy
1 + 1
c h (s)h(s)2(1 − e−3cy)
.
Thus, we need
1 +
1
c
h (s)h(s)2
(1 − e−3cy
) = 0.
We can perform further calculations taking into account the result from part (a):
min{3h(s)2
h (s)} = −3a3
.
Problem (F’99, #4). Consider the Cauchy problem
uy + a(x)ux = 0,
u(x, 0) = h(x).
Give an example of an (unbounded) smooth a(x) for which the solution of the Cauchy
problem is not unique.
Proof. Γ is parameterized by Γ : (s, 0, h(s)).
dx
dt
= a(x) ⇒ x(t) − x(0) =
t
0
a(x)dt ⇒ x =
t
0
a(x)dt + s,
dy
dt
= 1 ⇒ y(s, t) = t + c1(s) ⇒ y = t,
dz
dt
= 0 ⇒ z(s, t) = c2(s) ⇒ z = h(s).
Thus,
u(x, t) = h x −
y
0
a(x)dy
Problem (F’97, #7). a) Solve the Cauchy problem
ut − xuux = 0 − ∞ < x < ∞, t ≥ 0,
u(x, 0) = f(x) − ∞ < x < ∞.
b) Find a class of initial data such that this problem has a global solution for all t.
Compute the critical time for the existence of a smooth solution for initial data, f,
which is not in the above class.
Proof. a) Γ is parameterized by Γ : (s, 0, f(s)).
dx
dt
= −xz ⇒
dx
dt
= −xf(s) ⇒ x = se−f(s)t
,
dy
dt
= 1 ⇒ y = t,
dz
dt
= 0 ⇒ z = f(s).
⇒ z = f xef(s)t
,
⇒ u(x, y) = f xeuy
.
Check:
ux = f (s) · (euy
+ xeuy
uxy)
uy = f (s) · xeuy(uyy + u)
⇒
ux − f (s)xeuy
uxy = f (s)euy
uy − f (s)xeuyuyy = f (s)xeuyu
⇒
ux = f (s)euy
1−f (s)xyeuy
uy =
f (s)euyxu
1−f (s)xyeuy
⇒ uy − xuux =
f (s)euyxu
1 − f (s)xyeuy
− xu
f (s)euy
1 − f (s)xyeuy
= 0.
u(x, 0) = f(x).
b) The characteristics would intersect when 1 − f (s)xyeuy
= 0. Thus,
tc =
1
f (s)xeutc
.
Problem (F’96, #6). Find an implicit formula for the solution u of the initial-value
problem
ut = (2x − 1)tux + sin(πx) − t,
u(x, t = 0) = 0.
Evaluate u explicitly at the point (x = 0.5, t = 2).
Proof. Rewrite the equation as
uy + (1 − 2x)yux = sin(πx) − y.
Γ is parameterized by Γ : (s, 0, 0).
dx
dt
= (1 − 2x)y = (1 − 2x)t ⇒ x =
1
2
(2s − 1)e−t2
+
1
2
, ⇒ s = (x −
1
2
)et2
+
1
2
,
dy
dt
= 1 ⇒ y = t,
dz
dt
= sin(πx) − y = sin
π
2
(2s − 1)e−t2
+
π
2
− t.
⇒ z(s, t) =
t
0
sin
π
2
(2s − 1)e−t2
+
π
2
− t dt + z(s, 0),
z(s, t) =
t
0
sin
π
2
(2s − 1)e−t2
+
π
2
− t dt.
⇒ u(x, y) =
y
0
sin
π
2
(2s − 1)e−y2
+
π
2
− y dy
=
y
0
sin
π
2
(2x − 1)ey2
e−y2
+
π
2
− y dy
=
y
0
sin
π
2
(2x − 1) +
π
2
− y dy =
y
0
sin(πx) − y dy,
⇒ u(x, y) = y sin(πx) −
y2
2
.
Note: This solution does not satisfy the PDE.
Problem (S’90, #8). Consider the Cauchy problem
ut = xux − u, −∞ < x < ∞, t ≥ 0,
u(x, 0) = f(x), f(x) ∈ C∞
.
Assume that f ≡ 0 for |x| ≥ 1.
Solve the equation by the method of characteristics and discuss the behavior of the
solution.
Proof. Rewrite the equation as
uy − xux = −u,
Γ is parameterized by Γ : (s, 0, f(s)).
dx
dt
= −x ⇒ x = se−t
,
dy
dt
= 1 ⇒ y = t,
dz
dt
= −z ⇒ z = f(s)e−t
.
⇒ u(x, y) = f(xey
)e−y
.
The solution satisfies the PDE and initial conditions.
As y → +∞, u → 0. u = 0 for |xey| ≥ 1 ⇒ u = 0 for |x| ≥ 1
ey .
Problem (F’02, #4). Consider the nonlinear hyperbolic equation
uy + uux = 0 − ∞ < x < ∞.
a) Find a smooth solution to this equation for initial condition u(x, 0) = x.
b) Describe the breakdown of smoothness for the solution if u(x, 0) = −x.
Proof. a) Γ is parameterized by Γ : (s, 0, s).
dx
dt
= z = s ⇒ x = st + s ⇒ s =
x
t + 1
=
x
y + 1
.
dy
dt
= 1 ⇒ y = t,
dz
dt
= 0 ⇒ z = s.
⇒ u(x, y) =
x
y + 1
; solution is smooth for all positive time y.
b) Γ is parameterized by Γ : (s, 0, −s).
dx
dt
= z = −s ⇒ x = −st + s ⇒ s =
x
1 − t
=
x
1 − y
.
dy
dt
= 1 ⇒ y = t,
dz
dt
= 0 ⇒ z = −s.
⇒ u(x, y) =
x
y − 1
; solution blows up at time y = 1.
Problem (F’97, #4). Solve the initial-boundary value problem
ut + (x + 1)2
ux = x for x > 0, t > 0
u(x, 0) = f(x) 0 < x < +∞
u(0, t) = g(t) 0 < t < +∞.
Proof. Rewrite the equation as
uy + (x + 1)2
ux = x for x > 0, y > 0
u(x, 0) = f(x) 0 < x < +∞
u(0, y) = g(y) 0 < y < +∞.
• For region I, we solve the following characteristic equations with Γ is parameterized
19 by Γ : (s, 0, f(s)).
dx
dt
= (x + 1)2
⇒ x = −
s + 1
(s + 1)t − 1
− 1,
dy
dt
= 1 ⇒ y = t,
dz
dt
= x = −
s + 1
(s + 1)t − 1
− 1,
⇒ z = −ln|(s + 1)t − 1| − t + c1(s),
⇒ z = −ln|(s + 1)t − 1| − t + f(s).
In region I, characteristics are of the form
x = −
s + 1
(s + 1)y − 1
− 1.
Thus, region I is bounded above by the line
x = −
1
y − 1
− 1, or y =
x
x + 1
.
Since t = y, s = x−xy−y
xy+y+1 , we have
u(x, y) = −ln
x − xy − y
xy + y + 1
+ 1 y − 1 − y + f
x − xy − y
xy + y + 1
,
⇒ u(x, y) = −ln
−1
xy + y + 1
− y + f
x − xy − y
xy + y + 1
.
• For region II, Γ is parameterized by Γ : (0, s, g(s)).
dx
dt
= (x + 1)2
⇒ x = −
1
t − 1
− 1,
dy
dt
= 1 ⇒ y = t + s,
dz
dt
= x = −
1
t − 1
− 1,
⇒ z = −ln|t − 1| − t + c2(s),
⇒ z = −ln|t − 1| − t + g(s).
19
Variable t as a third coordinate of u and variable t used to parametrize characteristic equations
are two different entities.
Since t = x
x+1 , s = y − x
x+1 , we have
u(x, y) = −ln
x
x + 1
− 1 −
x
x + 1
+ g y −
x
x + 1
.
Note that on y = x
x+1 , both solutions are equal if f(0) = g(0).
Problem (S’93, #3). Solve the following equation
ut + ux + yuy = sint
for 0 ≤ t, 0 ≤ x, −∞ < y < ∞ and with
u = x + y for t = 0, x ≥ 0 and
u = t2
+ y for x = 0, t ≥ 0.
Proof. Rewrite the equation as (x ↔ x1, y ↔ x2, t ↔ x3):
ux3 + ux1 + x2ux2 = sinx3 for 0 ≤ x3, 0 ≤ x1, −∞ < x2 < ∞,
u(x1, x2, 0) = x1 + x2,
u(0, x2, x3) = x2
3 + x2.
• For region I, we solve the following characteristic equations with Γ is parameterized
20
by Γ : (s1, s2, 0, s1 + s2).
dx1
dt
= 1 ⇒ x1 = t + s1,
dx2
dt
= x2 ⇒ x2 = s2et
,
dx3
dt
= 1 ⇒ x3 = t,
dz
dt
= sin x3 = sin t
⇒ z = − cos t + s1 + s2 + 1.
Since in region I, in x1x3-plane, characteristics are of the form x1 = x3 + s1, region
I is bounded above by the line x1 = x3. Since t = x3, s1 = x1 − x3, s2 = x2e−x3 , we
have
u(x1, x2, x3) = − cos x3 + x1 − x3 + x2e−x3
+ 1, or
u(x, y, t) = − cos t + x − t + ye−t
+ 1, x ≥ t.
• For region II, we solve the following characteristic equations with Γ is parameterized
by Γ : (0, s2, s3, s2 + s2
3).
dx1
dt
= 1 ⇒ x1 = t,
dx2
dt
= x2 ⇒ x2 = s2et
,
dx3
dt
= 1 ⇒ x3 = t + s3,
dz
dt
= sin x3 = sin(t + s3) ⇒ z = − cos(t + s3) + cos s3 + s2 + s2
3.
Since t = x1, s3 = x3 − x1, s2 = x2e−x3 , we have
u(x1, x2, x3) = − cos x3 + cos(x3 − x1) + x2e−x3
+ (x3 − x1)2
, or
u(x, y, t) = − cos t + cos(t − x) + ye−t
+ (t − x)2
, x ≤ t.
Note that on x = t, both solutions are u(x = t, y) = − cos x + ye−x
+ 1.
20
Variable t as a third coordinate of u and variable t used to parametrize characteristic equations
are two different entities.
Problem (W’03, #5). Find a solution to
xux + (x + y)uy = 1
which satisfies u(1, y) = y for 0 ≤ y ≤ 1. Find a region in {x ≥ 0, y ≥ 0} where u is
uniquely determined by these conditions.
Proof. Γ is parameterized by Γ : (1, s, s).
dx
dt
= x ⇒ x = et
.
dy
dt
= x + y ⇒ y − y = et
.
dz
dt
= 1 ⇒ z = t + s.
The homogeneous solution for the second equation is yh(s, t) = c1(s)et
. Since the
right hand side and yh are linearly dependent, our guess for the particular solution is
yp(s, t) = c2(s)tet
. Plugging in yp into the differential equation, we get
c2(s)tet
+ c2(s)et
− c2(s)tet
= et
⇒ c2(s) = 1.
Thus, yp(s, t) = tet and
y(s, t) = yh + yp = c1(s)et
+ tet
.
Since y(s, 0) = s = c1(s), we get
y = set
+ tet
.
With and , we can solve for s and t in terms of x and y to get
t = ln x,
y = sx + x ln x ⇒ s =
y − x ln x
x
.
u(x, y) = t + s = ln x +
y − x ln x
x
.
u(x, y) =
y
x
.
We have found that the characteristics in the xy-plane are of the form
y = sx + x ln x,
where s is such that 0 ≤ s ≤ 1. Also, the characteristics originate from Γ.
Thus, u is uniquely determined in the region between the graphs:
y = x ln x,
y = x + x ln x.
12 Problems: Shocks
Example 1. Determine the exact solution to Burgers’ equation
ut +
1
2
u2
x
= 0, t > 0
with initial data
u(x, 0) = h(x) =
⎧
⎪⎨
⎪⎩
1 if x < −1,
0 if − 1 < x < 1,
−1 if x > 1.
Proof. Characteristic form: ut + uux = 0.
The characteristic projection in xt-plane passing through the point (s, 0) is the line
x = h(s)t + s.
• Rankine-Hugoniot shock condition at s = −1:
shock speed: ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
0 − 1
2
0 − 1
=
1
2
.
The “1/slope” of the shock curve = 1/2. Thus,
x = ξ(t) =
1
2
t + s,
and since the jump occurs at (−1, 0), ξ(0) = −1 = s. Therefore,
x =
1
2
t − 1.
• Rankine-Hugoniot shock condition at s = 1:
shock speed: ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
1
2 − 0
−1 − 0
= −
1
2
.
The “1/slope” of the shock curve = −1/2. Thus,
x = ξ(t) = −
1
2
t + s,
and since the jump occurs at (1, 0), ξ(0) = 1 = s. Therefore,
x = −
1
2
t + 1.
• At t = 2, Rankine-Hugoniot shock condition at s = 0:
shock speed: ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
1
2 − 1
2
−1 − 1
= 0.
The “1/slope” of the shock curve = 0. Thus,
x = ξ(t) = s,
and since the jump occurs at (x, t) = (0, 2), ξ(2) = 0 = s. Therefore,
x = 0.
➡ For t < 2, u(x, t) =
⎧
⎪⎨
⎪⎩
1 if x < 1
2 t − 1,
0 if 1
2 t − 1 < x < −1
2 t + 1,
−1 if x > −1
2 t + 1.
➡ and for t > 2, u(x, t) =
1 if x < 0,
−1 if x > 0.
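The shock speeds used above all come from the Rankine-Hugoniot condition with the Burgers flux F(u) = u²/2. A small sketch in plain Python that reproduces the three speeds 1/2, −1/2 and 0:

```python
def shock_speed(ul, ur, F=lambda u: 0.5*u**2):
    """Rankine-Hugoniot speed xi'(t) = (F(ur) - F(ul)) / (ur - ul) for a jump from ul to ur."""
    return (F(ur) - F(ul)) / (ur - ul)

print(shock_speed(1.0, 0.0))    # jump at s = -1: expect  0.5
print(shock_speed(0.0, -1.0))   # jump at s = +1: expect -0.5
print(shock_speed(1.0, -1.0))   # merged shock for t > 2: expect 0.0
```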
Example 2. Determine the exact solution to Burgers’ equation
ut +
1
2
u2
x
= 0, t > 0
with initial data
u(x, 0) = h(x) =
⎧
⎪⎨
⎪⎩
−1 if x < −1,
0 if − 1 < x < 1,
1 if x > 1.
Proof. Characteristic form: ut + uux = 0.
The characteristic projection in xt-plane passing through the point (s, 0) is the line
x = h(s)t + s.
For Burgers’ equation, for a rarefaction fan emanating from (s, 0) on xt-plane, we have:
u(x, t) =
⎧
⎪⎨
⎪⎩
ul, x−s
t ≤ ul,
x−s
t , ul ≤ x−s
t ≤ ur,
ur, x−s
t ≥ ur.
➡ u(x, t) =
⎧
⎪⎪⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎪⎪⎩
−1, x < −t − 1,
x+1
t , −t − 1 < x < −1, i.e. − 1 < x+1
t < 0
0, −1 < x < 1,
x−1
t , 1 < x < t + 1, i.e. 0 < x−1
t < 1
1, x > t + 1.
Example 3. Determine the exact solution to Burgers’ equation
ut +
1
2
u2
x
= 0, t > 0
with initial data
u(x, 0) = h(x) =
2 if 0 < x < 1,
0 if otherwise.
Proof. Characteristic form: ut + uux = 0.
The characteristic projection in xt-plane passing through the point (s, 0) is the line
x = h(s)t + s.
• Shock: Rankine-Hugoniot shock condition at s = 1:
shock speed: ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
0 − 2
0 − 2
= 1.
The “1/slope” of the shock curve = 1. Thus,
x = ξ(t) = t + s,
and since the jump occurs at (1, 0), ξ(0) = 1 = s. Therefore,
x = t + 1.
• Rarefaction: A rarefaction emanates from (0, 0) on xt-plane.
➡ For 0 < t < 1, u(x, t) =
⎧
⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎩
0 if x < 0,
x
t if 0 < x < 2t,
2 if 2t < x < t + 1.
0 if x > t + 1.
Rarefaction catches up to shock at t = 1.
• Shock: At (x, t) = (2, 1), ul = x/t, ur = 0. Rankine-Hugoniot shock condition:
ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
0 − 1
2 (x
t )2
0 − x
t
=
1
2
x
t
,
dxs
dt
=
x
2t
,
x = c
√
t,
and since the jump occurs at (x, t) = (2, 1), x(1) = 2 = c. Therefore, x = 2
√
t.
➡ For t > 1, u(x, t) =
⎧
⎪⎨
⎪⎩
0 if x < 0,
x
t if 0 < x < 2
√
t,
0 if x > 2
√
t.
Example 4. Determine the exact solution to Burgers’ equation
ut +
1
2
u2
x
= 0, t > 0
with initial data
u(x, 0) = h(x) =
1 + x if x < 0,
0 if x > 0.
Proof. Characteristic form: ut + uux = 0.
The characteristic projection in xt-plane passing through the point (s, 0) is the line
x = h(s)t + s.
➀ For s > 0, the characteristics are x = s.
➁ For s < 0, the characteristics are x = (1 + s)t + s.
• There are two ways to look for the solution on the left half-plane. One is to notice
that the characteristic at s = 0− is x = t and characteristic at s = −1 is x = −1 and
that characteristics between s = −∞ and s = 0−
are intersecting at (x, t) = (−1, −1).
Also, for a fixed t, u is a linear function of x, i.e. for t = 0, u = 1 + x, allowing
a continuous change of u with x. Thus, the solution may be viewed as an ‘implicit’
rarefaction, originating at (−1, −1), thus giving rise to the solution
u(x, t) =
x + 1
t + 1
.
Another way to find a solution on the left half-plane is to solve ➁ for s to find
s =
x − t
1 + t
. Thus, u(x, t) = h(s) = 1 + s = 1 +
x − t
1 + t
=
x + 1
t + 1
.
• Shock: At (x, t) = (0, 0), ul = x+1
t+1 , ur = 0. Rankine-Hugoniot shock condition:
ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
0 − 1
2 (x+1
t+1 )2
0 − x+1
t+1
=
1
2
x + 1
t + 1
,
dxs
dt
=
1
2
x + 1
t + 1
,
x = c
√
t + 1 − 1,
and since the jump occurs at (x, t) = (0, 0), x(0) = 0 = c − 1, or c = 1. Therefore,
the shock curve is x =
√
t + 1 − 1.
➡ u(x, t) =
x+1
t+1 if x <
√
t + 1 − 1,
0 if x >
√
t + 1 − 1.
Example 5. Determine the exact solution to Burgers’ equation
ut +
1
2
u2
x
= 0, t > 0
with initial data
u(x, 0) = h(x) =
⎧
⎪⎨
⎪⎩
u0 if x < 0,
u0 · (1 − x) if 0 < x < 1,
0 if x ≥ 1,
where u0 > 0.
Proof. Characteristic form: ut + uux = 0.
The characteristic projection in xt-plane passing through the point (s, 0) is the line
x = h(s)t + s.
➀ For s > 1, the characteristics are x = s.
➁ For 0 < s < 1, the characteristics are x = u0(1 − s)t + s.
➂ For s < 0, the characteristics are x = u0t + s.
The characteristics emanating from (s, 0), 0 < s < 1 on xt-plane intersect at (1, 1
u0
).
Also, we can check that the characteristics do not intersect before t = 1
u0
for this
problem:
tc = min
−1
h (s)
=
1
u0
.
• To find solution in a triangular domain between x = u0t and x = 1, we note that
characteristics there are x = u0 · (1 − s)t + s. Solving for s we get
s =
x − u0t
1 − u0t
. Thus, u(x, t) = h(s) = u0 · (1 − s) = u0 · 1 −
x − u0t
1 − u0t
=
u0 · (1 − x)
1 − u0t
.
We can also find a solution in the triangular domain as follows. Note, that the charac-
teristics are the straight lines
dx
dt
= u = const.
Integrating the equation above, we obtain
x = ut + c
Since all characteristics in the triangular domain meet at (1, 1
u0
), we have c = 1 − u
u0
,
and
x = ut + 1 −
u
u0
or u =
u0 · (1 − x)
1 − u0t
.
➡ For 0 < t <
1
u0
, u(x, t) =
⎧
⎪⎨
⎪⎩
u0 if x < u0t,
u0·(1−x)
1−u0t if u0t < x < 1,
0 if x > 1.
• Shock: At (x, t) = (1, 1
u0
), Rankine-Hugoniot shock condition:
ξ (t) =
F(ur) − F(ul)
ur − ul
=
1
2 u2
r − 1
2 u2
l
ur − ul
=
0 − 1
2 u2
0
0 − u0
=
1
2
u0,
ξ(t) =
1
2
u0t + c,
and since the jump occurs at (x, t) = (1, 1
u0
), x 1
u0
= 1 = 1
2 +c, or c = 1
2. Therefore,
the shock curve is x = u0t+1
2 .
➡ For t >
1
u0
, u(x, t) =
u0 if x < u0t+1
2 ,
0 if x > u0t+1
2 .
Problem. Show that for u = f(x/t) to be a nonconstant solution of ut + a(u)ux = 0,
f must be the inverse of the function a.
Proof. If u = f(x/t),
ut = −f
x
t
·
x
t2
and ux = f
x
t
·
1
t
.
Hence, ut + a(u)ux = 0 implies that
−f
x
t
·
x
t2
+ a f
x
t
f
x
t
·
1
t
= 0
or, assuming f is not identically 0 to rule out the constant solution, that
a f
x
t
=
x
t
.
This shows the functions a and f to be inverses of each other.
13 Problems: General Nonlinear Equations
13.1 Two Spatial Dimensions
Problem (S’01, #3). Solve the initial value problem
1
2
u2
x − uy = −
x2
2
,
u(x, 0) = x.
You will find that the solution blows up in finite time. Explain this in terms of the
characteristics for this equation.
Proof. Rewrite the equation as
F(x, y, z, p, q) =
p2
2
− q +
x2
2
= 0.
Γ is parameterized by Γ : (s, 0, s, φ(s), ψ(s)).
We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t)
and q(s, t), respectively:
• F(f(s), g(s), h(s), φ(s), ψ(s)) = 0,
F(s, 0, s, φ(s), ψ(s)) = 0,
φ(s)2
2
− ψ(s) +
s2
2
= 0,
ψ(s) =
φ(s)2
+ s2
2
.
• h (s) = φ(s)f (s) + ψ(s)g (s),
1 = φ(s).
⇒ ψ(s) =
s2
+ 1
2
.
Therefore, now Γ is parametrized by Γ : (s, 0, s, 1, s2+1
2 ).
dx
dt
= Fp = p,
dy
dt
= Fq = −1 ⇒ y(s, t) = −t + c1(s) ⇒ y = −t,
dz
dt
= pFp + qFq = p2
− q,
dp
dt
= −Fx − Fzp = −x,
dq
dt
= −Fy − Fzq = 0 ⇒ q(s, t) = c2(s) ⇒ q =
s2
+ 1
2
.
Thus, we found y and q in terms of s and t. Note that we have a coupled system:
x = p,
p = −x,
which can be written as two second order ODEs:
x + x = 0, x(s, 0) = s, x (s, 0) = p(s, 0) = 1,
p + p = 0, p(s, 0) = 1, p (s, 0) = −x(s, 0) = −s.
Solving the two equations separately, we get
x(s, t) = s · cos t + sint,
p(s, t) = cos t − s · sint.
From this, we get
dz
dt
= p2
− q = cos t − s · sint
2
−
s2 + 1
2
= cos2
t − 2s cos t sint + s2
sin2
t −
s2 + 1
2
.
z(s, t) =
t
0
cos2
t − 2s cos t sint + s2
sin2
t −
s2
+ 1
2
dt + z(s, 0),
z(s, t) =
t
2
+
sint cos t
2
+ s cos2
t +
s2t
2
−
s2 sint cos t
2
−
t(s2 + 1)
2
t
0
+ s,
=
sin t cos t
2
+ s cos2
t −
s2 sint cos t
2
t
0
+ s,
=
sint cos t
2
+ s cos2
t −
s2 sin t cos t
2
− s + s =
=
sint cos t
2
+ s cos2
t −
s2
sin t cos t
2
.
Plugging in x and y found earlier for s and t, we get
u(x, y) =
sin(−y) cos(−y)
2
+
x − sin(−y)
cos(−y)
cos2
(−y) −
(x − sin(−y))2
cos2(−y)
·
sin(−y) cos(−y)
2
= −
sin y cos y
2
+
x + siny
cos y
cos2
y +
(x + sin y)2
cos2 y
·
siny cos y
2
= −
sin y cos y
2
+ (x + sin y) cosy +
(x + siny)2 sin y
2 cosy
= x cos y +
sin y cos y
2
+
(x + sin y)2 siny
2 cos y
.
Problem (S’98, #3). Find the solution of
ut +
u2
x
2
=
−x2
2
, t ≥ 0, −∞ < x < ∞
u(x, 0) = h(x), −∞ < x < ∞,
where h(x) is smooth function which vanishes for |x| large enough.
Proof. Rewrite the equation as
F(x, y, z, p, q) =
p2
2
+ q +
x2
2
= 0.
Γ is parameterized by Γ : (s, 0, h(s), φ(s), ψ(s)).
We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t)
and q(s, t), respectively:
• F(f(s), g(s), h(s), φ(s), ψ(s)) = 0,
F(s, 0, h(s), φ(s), ψ(s)) = 0,
φ(s)2
2
+ ψ(s) +
s2
2
= 0,
ψ(s) = −
φ(s)2
+ s2
2
.
• h (s) = φ(s)f (s) + ψ(s)g (s),
h (s) = φ(s).
⇒ ψ(s) = −
h (s)2
+ s2
2
.
Therefore, now Γ is parametrized by Γ : (s, 0, s, h (s), −h (s)2+s2
2 ).
dx
dt
= Fp = p,
dy
dt
= Fq = 1 ⇒ y(s, t) = t + c1(s) ⇒ y = t,
dz
dt
= pFp + qFq = p2
+ q,
dp
dt
= −Fx − Fzp = −x,
dq
dt
= −Fy − Fzq = 0 ⇒ q(s, t) = c2(s) ⇒ q = −
h (s)2 + s2
2
.
Thus, we found y and q in terms of s and t. Note that we have a coupled system:
x = p,
p = −x,
which can be written as a second order ODE:
x + x = 0, x(s, 0) = s, x (s, 0) = p(s, 0) = h (s).
Solving the equation, we get
x(s, t) = s cos t + h (s) sint,
p(s, t) = x (s, t) = h (s) cos t − s sin t.
From this, we get
dz
dt
= p2
+ q = h (s) cost − s sint
2
−
h (s)2 + s2
2
= h (s)2
cos2
t − 2sh (s) cost sin t + s2
sin2
t −
h (s)2
+ s2
2
.
z(s, t) =
t
0
h (s)2
cos2
t − 2sh (s) cos t sint + s2
sin2
t −
h (s)2
+ s2
2
dt + z(s, 0)
=
t
0
h (s)2
cos2
t − 2sh (s) cos t sint + s2
sin2
t −
h (s)2
+ s2
2
dt + h(s).
We integrate the above expression similar to S 01#3 to get an expression for z(s, t).
Plugging in x and y found earlier for s and t, we get u(x, y).
Problem (S’97, #4).
Describe the method of the bicharacteristics for solving the initial value problem
∂
∂x
u(x, y)
2
+
∂
∂y
u(x, y)
2
= 2 + y,
u(x, 0) = u0(x) = x.
Assume that | ∂
∂xu0(x)| < 2 and consider the solution such that ∂u
∂y > 0.
Apply all general computations for the particular case u0(x) = x.
Proof. We have
u2
x + u2
y = 2 + y
u(x, 0) = u0(x) = x.
Rewrite the equation as
F(x, y, z, p, q) = p2
+ q2
− y − 2 = 0.
Γ is parameterized by Γ : (s, 0, s, φ(s), ψ(s)).
We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t)
and q(s, t), respectively:
• F(f(s), g(s), h(s), φ(s), ψ(s)) = 0,
F(s, 0, s, φ(s), ψ(s)) = 0,
φ(s)2
+ ψ(s)2
− 2 = 0,
φ(s)2
+ ψ(s)2
= 2.
• h (s) = φ(s)f (s) + ψ(s)g (s),
1 = φ(s).
⇒ ψ(s) = ±1.
Since we have a condition that q(s, t) > 0, we choose q(s, 0) = ψ(s) = 1.
Therefore, now Γ is parametrized by Γ : (s, 0, s, 1, 1).
dx
dt
= Fp = 2p ⇒
dx
dt
= 2 ⇒ x = 2t + s,
dy
dt
= Fq = 2q ⇒
dy
dt
= 2t + 2 ⇒ y = t2
+ 2t,
dz
dt
= pFp + qFq = 2p2
+ 2q2
= 2y + 4 ⇒
dz
dt
= 2t2
+ 4t + 4,
⇒ z =
2
3
t3
+ 2t2
+ 4t + s =
2
3
t3
+ 2t2
+ 4t + x − 2t =
2
3
t3
+ 2t2
+ 2t + x,
dp
dt
= −Fx − Fzp = 0 ⇒ p = 1,
dq
dt
= −Fy − Fzq = 1 ⇒ q = t + 1.
We solve y = t2
+ 2t, a quadratic equation in t, t2
+ 2t − y = 0, for t in terms of y to
get:
t = −1 ± 1 + y.
⇒ u(x, y) =
2
3
(−1 ± 1 + y)3
+ 2(−1 ± 1 + y)2
+ 2(−1 ± 1 + y) + x.
Both u± satisfy the PDE. ux = 1, uy = ±
√
y + 1 ⇒ u2
x + u2
y = y + 2
u+ satisfies u+(x, 0) = x . However, u− does not satisfy IC, i.e. u−(x, 0) = x−4
3.
Problem (S’02, #6). Consider the equation
ux + uxuy = 1,
u(x, 0) = f(x).
Assuming that f is differentiable, what conditions on f ensure that the problem is
noncharacteristic? If f satisfies those conditions, show that the solution is
u(x, y) = f(r) − y +
2y
f (r)
,
where r must satisfy y = (f (r))2
(x − r).
Finally, show that one can solve the equation for (x, y) in a sufficiently small neighbor-
hood of (x0, 0) with r(x0, 0) = x0.
Proof. Solved.
In order to solve the Cauchy problem in a neighborhood of Γ, need:
f (s) · Fq[f, g, h, φ, ψ](s) − g (s) · Fp[f, g, h, φ, ψ](s) = 0,
1 · h (s) − 0 · 1 +
1 − h (s)
h (s)
= 0,
h (s) = 0.
Thus, h (s) = 0 ensures that the problem is noncharacteristic.
To show that one can solve y = (f (s))2(x − s) for (x, y) in a sufficiently small
neighborhood of (x0, 0) with s(x0, 0) = x0, let
G(x, y, s) = (f (s))2
(x − s) − y = 0,
G(x0, 0, x0) = 0,
Gr(x0, 0, x0) = −(f (s))2
.
Hence, if f (s) = 0, ∀s, then Gs(x0, 0, x0) = 0 and we can use the implicit function
theorem in a neighborhood of (x0, 0, x0) to get
G(x, y, h(x, y)) = 0
and solve the equation in terms of x and y.
Problem (S’00, #1). Find the solutions of
(ux)2
+ (uy)2
= 1
in a neighborhood of the curve y = x2
2 satisfying the conditions
u x,
x2
2
= 0 and uy x,
x2
2
> 0.
Leave your answer in parametric form.
Proof. Rewrite the equation as
F(x, y, z, p, q) = p2
+ q2
− 1 = 0.
Γ is parameterized by Γ : (s, s2
2 , 0, φ(s), ψ(s)).
We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t)
and q(s, t), respectively:
• F(f(s), g(s), h(s), φ(s), ψ(s)) = 0,
F s,
s2
2
, 0, φ(s), ψ(s) = 0,
φ(s)2
+ ψ(s)2
= 1.
• h (s) = φ(s)f (s) + ψ(s)g (s),
0 = φ(s) + sψ(s),
φ(s) = −sψ(s).
Thus, s2
ψ(s)2
+ ψ(s)2
= 1 ⇒ ψ(s)2
=
1
s2 + 1
.
Since, by assumption, ψ(s) > 0, we have ψ(s) = 1√
s2+1
.
Therefore, now Γ is parametrized by Γ : s, s2
2 , 0, −s√
s2+1
, 1√
s2+1
.
dx
dt
= Fp = 2p =
−2s
√
s2 + 1
⇒ x =
−2st
√
s2 + 1
+ s,
dy
dt
= Fq = 2q =
2
√
s2 + 1
⇒ y =
2t
√
s2 + 1
+
s2
2
,
dz
dt
= pFp + qFq = 2p2
+ 2q2
= 2 ⇒ z = 2t,
dp
dt
= −Fx − Fzp = 0 ⇒ p =
−s
√
s2 + 1
,
dq
dt
= −Fy − Fzq = 0 ⇒ q =
1
√
s2 + 1
.
Thus, in parametric form,
z(s, t) = 2t,
x(s, t) =
−2st
√
s2 + 1
+ s,
y(s, t) =
2t
√
s2 + 1
+
s2
2
.
13.2 Three Spatial Dimensions
Problem (S’96, #2). Solve the following Cauchy problem21:
ux + u2
y + u2
z = 1,
u(0, y, z) = y · z.
Proof. Rewrite the equation as
ux1 + u2
x2
+ u2
x3
= 1,
u(0, x2, x3) = x2 · x3.
Write a general nonlinear equation
F(x1, x2, x3, z, p1, p2, p3) = p1 + p2
2 + p2
3 − 1 = 0.
Γ is parameterized by
Γ : 0
x1(s1,s2,0)
, s1
x2(s1,s2,0)
, s2
x3(s1,s2,0)
, s1s2
z(s1,s2,0)
, φ1(s1, s2)
p1(s1,s2,0)
, φ2(s1, s2)
p2(s1,s2,0)
, φ3(s1, s2)
p3(s1,s2,0)
We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial
conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively:
• F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0,
F 0, s1, s2, s1s2, φ1, φ2, φ3 = φ1 + φ2
2 + φ2
3 − 1 = 0,
⇒ φ1 + φ2
2 + φ2
3 = 1.
•
∂h
∂s1
= φ1
∂f1
∂s1
+ φ2
∂f2
∂s1
+ φ3
∂f3
∂s1
,
⇒ s2 = φ2.
•
∂h
∂s2
= φ1
∂f1
∂s2
+ φ2
∂f2
∂s2
+ φ3
∂f3
∂s2
,
⇒ s1 = φ3.
Thus, we have: φ2 = s2, φ3 = s1, φ1 = −s2
1 − s2
2 + 1.
Γ : 0
x1(s1,s2,0)
, s1
x2(s1,s2,0)
, s2
x3(s1,s2,0)
, s1s2
z(s1,s2,0)
, −s2
1 − s2
2 + 1
p1(s1,s2,0)
, s2
p2(s1,s2,0)
, s1
p3(s1,s2,0)
21
This problem is very similar to an already hand-written solved problem F’95 #2.
The characteristic equations are
dx1
dt
= Fp1 = 1 ⇒ x1 = t,
dx2
dt
= Fp2 = 2p2 ⇒
dx2
dt
= 2s2 ⇒ x2 = 2s2t + s1,
dx3
dt
= Fp3 = 2p3 ⇒
dx3
dt
= 2s1 ⇒ x3 = 2s1t + s2,
dz
dt
= p1Fp1 + p2Fp2 + p3Fp3 = p1 + 2p2
2 + 2p2
3 = −s2
1 − s2
2 + 1 + 2s2
2 + 2s2
1
= s2
1 + s2
2 + 1 ⇒ z = (s2
1 + s2
2 + 1)t + s1s2,
dp1
dt
= −Fx1 − p1Fz = 0 ⇒ p1 = −s2
1 − s2
2 + 1,
dp2
dt
= −Fx2 − p2Fz = 0 ⇒ p2 = s2,
dp3
dt
= −Fx3 − p3Fz = 0 ⇒ p3 = s1.
Thus, we have
⎧
⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎩
x1 = t
x2 = 2s2t + s1
x3 = 2s1t + s2
z = (s2
1 + s2
2 + 1)t + s1s2
⇒
⎧
⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎩
t = x1
s1 = x2 − 2s2t
s2 = x3 − 2s1t
z = (s2
1 + s2
2 + 1)t + s1s2
⇒
⎧
⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎩
t = x1
s1 = x2−2x1x3
1−4x2
1
s2 = x3−2x1x2
1−4x2
1
z = (s2
1 + s2
2 + 1)t + s1s2
⇒ u(x1, x2, x3) =
x2 − 2x1x3
1 − 4x2
1
2
+
x3 − 2x1x2
1 − 4x2
1
2
+ 1 x1 +
x2 − 2x1x3
1 − 4x2
1
x3 − 2x1x2
1 − 4x2
1
.
Problem (F’95, #2). Solve the following Cauchy problem
ux + uy + u3
z = x + y + z,
u(x, y, 0) = xy.
Proof. Solved
Problem (S’94, #1). Solve the following PDE for f(x, y, t):
ft + xfx + 3t2
fy = 0
f(x, y, 0) = x2
+ y2
.
Proof. Rewrite the equation as (x → x1, y → x2, t → x3, f → u):
x1ux1 + 3x2
3ux2 + ux3 = 0,
u(x1, x2, 0) = x2
1 + x2
2.
F(x1, x2, x3, z, p1, p2, p3) = x1p1 + 3x2
3p2 + p3 = 0.
Γ is parameterized by
Γ : s1
x1(s1,s2,0)
, s2
x2(s1,s2,0)
, 0
x3(s1,s2,0)
, s2
1 + s2
2
z(s1,s2,0)
, φ1(s1, s2)
p1(s1,s2,0)
, φ2(s1, s2)
p2(s1,s2,0)
, φ3(s1, s2)
p3(s1,s2,0)
We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial
conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively:
• F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0,
F s1, s2, 0, s2
1 + s2
2, φ1, φ2, φ3 = s1φ1 + φ3 = 0,
⇒ φ3 = s1φ1.
•
∂h
∂s1
= φ1
∂f1
∂s1
+ φ2
∂f2
∂s1
+ φ3
∂f3
∂s1
,
⇒ 2s1 = φ1.
•
∂h
∂s2
= φ1
∂f1
∂s2
+ φ2
∂f2
∂s2
+ φ3
∂f3
∂s2
,
⇒ 2s2 = φ2.
Thus, we have: φ1 = 2s1, φ2 = 2s2, φ3 = 2s2
1.
Γ : s1
x1(s1,s2,0)
, s2
x2(s1,s2,0)
, 0
x3(s1,s2,0)
, s2
1 + s2
2
z(s1,s2,0)
, 2s1
p1(s1,s2,0)
, 2s2
p2(s1,s2,0)
, 2s2
1
p3(s1,s2,0)
The characteristic equations are
dx1
dt
= Fp1 = x1 ⇒ x1 = s1et
,
dx2
dt
= Fp2 = 3x2
3 ⇒
dx2
dt
= 3t2
⇒ x2 = t3
+ s2,
dx3
dt
= Fp3 = 1 ⇒ x3 = t,
dz
dt
= p1Fp1 + p2Fp2 + p3Fp3 = p1x1 + p23x2
3 + p3 = 0 ⇒ z = s2
1 + s2
2,
dp1
dt
= −Fx1 − p1Fz = −p1 ⇒ p1 = 2s1e−t
,
dp2
dt
= −Fx2 − p2Fz = 0 ⇒ p2 = 2s2,
dp3
dt
= −Fx3 − p3Fz = −6x3p2 ⇒
dp3
dt
= −12ts2 ⇒ p3 = −6t2
s2 + 2s2
1.
With t = x3, s1 = x1e−x3
, s2 = x2 − x3
3, we have
u(x1, x2, x3) = x2
1e−2x3
+ (x2 − x3
3)2
. f(x, y, t) = x2
e−2t
+ (y − t3
)2
.
The solution satisfies the PDE and initial condition.
Problem (F’93, #3). Find the solution of the following equation
ft + xfx + (x + t)fy = t3
f(x, y, 0) = xy.
Proof. Rewrite the equation as (x → x1, y → x2, t → x3, f → u):
x1ux1 + (x1 + x3)ux2 + ux3 = x3
,
u(x1, x2, 0) = x1x2.
Method I: Treat the equation as a QUASILINEAR equation.
Γ is parameterized by Γ : (s1, s2, 0, s1s2).
dx1
dt
= x1 ⇒ x1 = s1et
,
dx2
dt
= x1 + x3 ⇒
dx2
dt
= s1et
+ t ⇒ x2 = s1et
+
t2
2
+ s2 − s1,
dx3
dt
= 1 ⇒ x3 = t,
dz
dt
= x3
3 ⇒
dz
dt
= t3
⇒ z =
t4
4
+ s1s2.
Since t = x3, s1 = x1e−x3 , s2 = x2 − s1et
− t2
2 + s1 = x2 − x1 −
x2
3
2 + x1e−x3 , we have
u(x1, x2, x3) =
x4
3
4
+ x1e−x3
(x2 − x1 −
x2
3
2
+ x1e−x3
), or
f(x, y, t) =
t4
4
+ xe−t
(y − x −
t2
2
+ xe−t
).
The solution satisfies the PDE and initial condition.
Method II: Treat the equation as a fully NONLINEAR equation.
F(x1, x2, x3, z, p1, p2, p3) = x1p1 + (x1 + x3)p2 + p3 − x3
3 = 0.
Γ is parameterized by
Γ : s1
x1(s1,s2,0)
, s2
x2(s1,s2,0)
, 0
x3(s1,s2,0)
, s1s2
z(s1,s2,0)
, φ1(s1, s2)
p1(s1,s2,0)
, φ2(s1, s2)
p2(s1,s2,0)
, φ3(s1, s2)
p3(s1,s2,0)
We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial
conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively:
• F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0,
F s1, s2, 0, s1s2, φ1, φ2, φ3 = s1φ1 + s1φ2 + φ3 = 0,
⇒ φ3 = −s1(φ1 + φ2).
•
∂h
∂s1
= φ1
∂f1
∂s1
+ φ2
∂f2
∂s1
+ φ3
∂f3
∂s1
,
⇒ s2 = φ1.
•
∂h
∂s2
= φ1
∂f1
∂s2
+ φ2
∂f2
∂s2
+ φ3
∂f3
∂s2
,
⇒ s1 = φ2.
Thus, we have: φ1 = s2, φ2 = s1, φ3 = −s2
1 − s1s2.
Γ : s1
x1(s1,s2,0)
, s2
x2(s1,s2,0)
, 0
x3(s1,s2,0)
, s1s2
z(s1,s2,0)
, s2
p1(s1,s2,0)
, s1
p2(s1,s2,0)
, −s2
1 − s1s2
p3(s1,s2,0)
The characteristic equations are
dx1
dt
= Fp1 = x1 ⇒ x1 = s1et
,
dx2
dt
= Fp2 = x1 + x3 ⇒
dx2
dt
= s1et
+ t ⇒ x2 = s1et
+
t2
2
+ s2 − s1,
dx3
dt
= Fp3 = 1 ⇒ x3 = t,
dz
dt
= p1Fp1 + p2Fp2 + p3Fp3 = p1x1 + p2(x1 + x3) + p3 = x3
3 = t3
⇒ z =
t4
4
+ s1s2,
dp1
dt
= −Fx1 − p1Fz = −p1 − p2 = −p1 − s1 ⇒ p1 = 2s1e−t
− s1,
dp2
dt
= −Fx2 − p2Fz = 0 ⇒ p2 = s1,
dp3
dt
= −Fx3 − p3Fz = 3x2
3 − p2 = 3t2
− s1 ⇒ p3 = t3
− s1t − s2
1 − s1s2.
With t = x3, s1 = x1e−x3
, s2 = x2 − s1et
− t2
2 + s1 = x2 − x1 −
x2
3
2 + x1e−x3
, we have
u(x1, x2, x3) =
x4
3
4
+ x1e−x3
(x2 − x1 −
x2
3
2
+ x1e−x3
), or
f(x, y, t) =
t4
4
+ xe−t
(y − x −
t2
2
+ xe−t
).
22 The solution satisfies the PDE and initial condition.
22
Variable t in the derivatives of characteristics equations and t in the solution f(x,y, t) are different
entities.
Problem (F’92, #1). Solve the initial value problem
ut + αux + βuy + γu = 0 for t > 0
u(x, y, 0) = ϕ(x, y),
in which α, β and γ are real constants and ϕ is a smooth function.
Proof. Rewrite the equation as (x → x1, y → x2, t → x3)23:
αux1 + βux2 + ux3 = −γu,
u(x1, x2, 0) = ϕ(x1, x2).
Γ is parameterized by Γ : (s1, s2, 0, ϕ(s1, s2)).
dx1
dt
= α ⇒ x1 = αt + s1,
dx2
dt
= β ⇒ x2 = βt + s2,
dx3
dt
= 1 ⇒ x3 = t,
dz
dt
= −γz ⇒
dz
z
= −γdt ⇒ z = ϕ(s1, s2)e−γt
.
J ≡ det
∂(x1, x2, x3)
∂(s1, s2, t)
=
1 0 0
0 1 0
α β 1
= 1 = 0 ⇒ J is invertible.
Since t = x3, s1 = x1 − αx3, s2 = x2 − βx3, we have
u(x1, x2, x3) = ϕ(x1 − αx3, x2 − βx3)e−γx3
, or
u(x, y, t) = ϕ(x − αt, y − βt)e−γt
.
The solution satisfies the PDE and initial condition.24
23
Variable t as a third coordinate of u and variable t used to parametrize characteristic equations
are two different entities.
24
Chain Rule: u(x1, x2, x3) = ϕ(f(x1, x2, x3), g(x1, x2, x3)), then ux1 = ∂ϕ
∂f
∂f
∂x1
+ ∂ϕ
∂g
∂g
∂x1
.
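The chain-rule verification referred to in footnote 24 can be carried out symbolically for an arbitrary smooth ϕ. A sketch assuming SymPy:

```python
import sympy as sp

x, y, t, alpha, beta, gamma = sp.symbols('x y t alpha beta gamma')
phi = sp.Function('phi')

u = phi(x - alpha*t, y - beta*t) * sp.exp(-gamma*t)
residual = sp.diff(u, t) + alpha*sp.diff(u, x) + beta*sp.diff(u, y) + gamma*u
print(sp.simplify(residual))    # expected: 0, so the PDE is satisfied
print(u.subs(t, 0))             # expected: phi(x, y), the initial data
```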
Problem (F’94, #2). Find the solution of the Cauchy problem
ut(x, y, t) + aux(x, y, t) + buy(x, y, t) + c(x, y, t)u(x, y, t) = 0
u(x, y, 0) = u0(x, y),
where 0 < t < +∞, −∞ < x < +∞, −∞ < y < +∞,
a, b are constants, c(x, y, t) is a continuous function of (x, y, t), and u0(x, y) is a con-
tinuous function of (x, y).
Proof. Rewrite the equation as (x → x1, y → x2, t → x3):
aux1 + bux2 + ux3 = −c(x1, x2, x3)u,
u(x1, x2, 0) = u0(x1, x2).
Γ is parameterized by Γ : (s1, s2, 0, u0(s1, s2)).
dx1
dt
= a ⇒ x1 = at + s1,
dx2
dt
= b ⇒ x2 = bt + s2,
dx3
dt
= 1 ⇒ x3 = t,
dz
dt
= −c(x1, x2, x3)z ⇒
dz
dt
= −c(at + s1, bt + s2, t)z ⇒
dz
z
= −c(at + s1, bt + s2, t)dt
⇒ ln z = −
t
0
c(aξ + s1, bξ + s2, ξ)dξ + c1(s1, s2),
⇒ z(s1, s2, t) = c2(s1, s2)e−
t
0 c(aξ+s1,bξ+s2,ξ)dξ
⇒ z(s1, s2, 0) = c2(s1, s2) = u0(s2, s2),
⇒ z(s1, s2, t) = u0(s1, s2)e− t
0 c(aξ+s1,bξ+s2,ξ)dξ
.
J ≡ det
∂(x1, x2, x3)
∂(s1, s2, t)
=
1 0 0
0 1 0
a b 1
= 1 = 0 ⇒ J is invertible.
Since t = x3, s1 = x1 − ax3, s2 = x2 − bx3, we have
u(x1, x2, x3) = u0(x1 − ax3, x2 − bx3)e−
x3
0 c(aξ+x1−ax3,bξ+x2−bx3,ξ)dξ
= u0(x1 − ax3, x2 − bx3)e−
x3
0 c(x1+a(ξ−x3),x2+b(ξ−x3),ξ)dξ
, or
u(x, y, t) = u0(x − at, y − bt)e−
t
0 c(x+a(ξ−t),y+b(ξ−t),ξ)dξ
.
Problem (F’89, #4). Consider the first order partial differential equation
ut + (α + βt)ux + γet
uy = 0 (13.1)
in which α, β and γ are constants.
a) For this equation, solve the initial value problem with initial data
u(x, y, t = 0) = sin(xy) (13.2)
for all x and y and for t ≥ 0.
b) Suppose that this initial data is prescribed only for x ≥ 0 (and all y) and consider
(13.1) in the region x ≥ 0, t ≥ 0 and all y. For which values of α, β and γ is it possible
to solve the initial-boundary value problem (13.1), (13.2) with u(x = 0, y, t) given for
t ≥ 0?
For non-permissible values of α, β and γ, where can boundary values be prescribed in
order to determine a solution of (13.1) in the region (x ≥ 0, t ≥ 0, all y).
Proof. a) Rewrite the equation as (x → x1, y → x2, t → x3):
(α + βx3)ux1 + γex3
ux2 + ux3 = 0,
u(x1, x2, 0) = sin(x1x2).
Γ is parameterized by Γ : (s1, s2, 0, sin(s1s2)).
dx1
dt
= α + βx3 ⇒
dx1
dt
= α + βt ⇒ x1 =
βt2
2
+ αt + s1,
dx2
dt
= γex3
⇒
dx2
dt
= γet
⇒ x2 = γet
− γ + s2,
dx3
dt
= 1 ⇒ x3 = t,
dz
dt
= 0 ⇒ z = sin(s1s2).
J ≡ det
∂(x1, x2, x3)
∂(s1, s2, t)
=
1 0 0
0 1 0
βt + α γet
1
= 1 = 0 ⇒ J is invertible.
Since t = x3, s1 = x1 −
βx2
3
2 − αx3, s2 = x2 − γex3 + γ, we have
u(x1, x2, x3) = sin((x1 −
βx2
3
2
− αx3)(x2 − γex3
+ γ)), or
u(x, y, t) = sin((x −
βt2
2
− αt)(y − γet
+ γ)).
The solution satisfies the PDE and initial condition.
b) We need a compatibility condition between the initial and boundary values to hold
on y-axis (x = 0, t = 0):
u(x = 0, y, 0) = u(0, y, t = 0),
0 = 0.
14 Problems: First-Order Systems
Problem (S’01, #2a). Find the solution u =
u1(x, t)
u2(x, t)
, (x, t) ∈ R × R,
to the (strictly) hyperbolic equation
ut −
1 0
5 3
ux = 0,
satisfying
u1(x, 0)
u2(x, 0)
=
eixa
0
, a ∈ R.
Proof. Rewrite the equation as
Ut +
−1 0
−5 −3
Ux = 0,
U(x, 0) =
u(1)(x, 0)
u(2)
(x, 0)
=
eixa
0
.
The eigenvalues of the matrix A are λ1 = −1, λ2 = −3 and the corresponding
eigenvectors are e1 =
2
−5
, e2 =
0
1
. Thus,
Λ =
−1 0
0 −3
, Γ =
2 0
−5 1
, Γ−1
=
1
det Γ
· Γ =
1
2 0
5
2 1
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
−1 0
0 −3
Vx = 0,
V (x, 0) = Γ−1
U(x, 0) =
1
2 0
5
2 1
eixa
0
=
1
2
eixa 1
5
.
We have two initial value problems
v
(1)
t − v
(1)
x = 0,
v(1)
(x, 0) = 1
2 eixa
;
v
(2)
t − 3v
(2)
x = 0,
v(2)
(x, 0) = 5
2 eixa
,
which we solve by characteristics to get
v(1)
(x, t) =
1
2
eia(x+t)
, v(2)
(x, t) =
5
2
eia(x+3t)
.
We solve for U: U = ΓV = Γ
v(1)
v(2) =
2 0
−5 1
1
2eia(x+t)
5
2 eia(x+3t) .
Thus, U =
u(1)
(x, t)
u(2)
(x, t)
=
eia(x+t)
−5
2 eia(x+t)
+ 5
2 eia(x+3t) .
Can check that this is the correct solution by plugging it into the original equation.
Part (b) of the problem is solved in the Fourier Transform section.
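The diagonalization used above is easy to confirm numerically. A sketch assuming NumPy:

```python
import numpy as np

A = np.array([[-1.0,  0.0],
              [-5.0, -3.0]])                      # matrix in U_t + A U_x = 0 above

print(np.linalg.eigvals(A))                       # expect -1 and -3

Gamma  = np.array([[ 2.0, 0.0],
                   [-5.0, 1.0]])                  # columns are the eigenvectors e1, e2
Lambda = np.diag([-1.0, -3.0])
print(np.allclose(A @ Gamma, Gamma @ Lambda))                     # A Gamma = Gamma Lambda
print(np.allclose(np.linalg.inv(Gamma) @ A @ Gamma, Lambda))      # Gamma^{-1} A Gamma = Lambda
```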
Problem (S’96, #7). Solve the following initial-boundary value problem in the do-
main x > 0, t > 0, for the unknown vector U =
u(1)
u(2) :
Ut +
−2 3
0 1
Ux = 0. (14.1)
U(x, 0) =
sinx
0
and u(2)
(0, t) = t.
Proof. The eigenvalues of the matrix A are λ1 = −2, λ2 = 1 and the corresponding
eigenvectors are e1 =
1
0
, e2 =
1
1
. Thus,
Λ =
−2 0
0 1
, Γ =
1 1
0 1
, Γ−1
=
1
det Γ
· Γ =
1 −1
0 1
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
−2 0
0 1
Vx = 0, (14.2)
V (x, 0) = Γ−1
U(x, 0) =
1 −1
0 1
sin x
0
=
sinx
0
. (14.3)
Equation (14.2) gives traveling wave solutions of the form
v(1)
(x, t) = F(x + 2t), v(2)
(x, t) = G(x − t).
We can write U in terms of V :
U = ΓV =
1 1
0 1
v(1)
v(2) =
1 1
0 1
F(x + 2t)
G(x − t)
=
F(x + 2t) + G(x − t)
G(x − t)
.
(14.4)
• For region I, (14.2) and (14.3) give two initial value problems (since any point in
region I can be traced back along both characteristics to initial conditions):
v
(1)
t − 2v
(1)
x = 0,
v(1)
(x, 0) = sinx;
v
(2)
t + v
(2)
x = 0,
v(2)
(x, 0) = 0.
which we solve by characteristics to get traveling wave solutions:
v(1)
(x, t) = sin(x + 2t), v(2)
(x, t) = 0.
➡ Thus, for region I,
U = ΓV =
1 1
0 1
sin(x + 2t)
0
=
sin(x + 2t)
0
.
• For region II, solutions of the form F(x+2t) can be traced back to initial conditions.
Thus, v(1) is the same as in region I. Solutions of the form G(x −t) can be traced back
to the boundary. Since from (14.4), u(2)
= v(2)
, we use boundary conditions to get
u(2)
(0, t) = t = G(−t).
Hence, G(x − t) = −(x − t).
➡ Thus, for region II,
U = ΓV =
1 1
0 1
sin(x + 2t)
−(x − t)
=
sin(x + 2t) − (x − t)
−(x − t)
.
Solutions for regions I and II satisfy (14.1).
Solution for region I satisfies both initial conditions.
Solution for region II satisfies given boundary condition.
Problem (S’02, #7). Consider the system
∂
∂t
u
v
=
−1 2
2 2
∂
∂x
u
v
. (14.5)
Find an explicit solution for the following mixed problem for the system (14.5):
u(x, 0)
v(x, 0)
=
f(x)
0
for x > 0,
u(0, t) = 0 for t > 0.
You may assume that the function f is smooth and vanishes on a neighborhood of x = 0.
Proof. Rewrite the equation as
Ut +
1 −2
−2 −2
Ux = 0,
U(x, 0) =
u(1)
(x, 0)
u(2)
(x, 0)
=
f(x)
0
.
The eigenvalues of the matrix A are λ1 = −3, λ2 = 2 and the corresponding eigen-
vectors are e1 =
1
2
, e2 =
−2
1
. Thus,
Λ =
−3 0
0 2
, Γ =
1 −2
2 1
, Γ−1
=
1
det Γ
· Γ =
1
5
1 2
−2 1
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
−3 0
0 2
Vx = 0, (14.6)
V (x, 0) = Γ−1
U(x, 0) =
1
5
1 2
−2 1
f(x)
0
=
f(x)
5
1
−2
. (14.7)
Equation (14.6) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x + 3t), v(2)
(x, t) = G(x − 2t). (14.8)
We can write U in terms of V :
U = ΓV =
1 −2
2 1
v(1)
v(2) =
1 −2
2 1
F(x + 3t)
G(x − 2t)
=
F(x + 3t) − 2G(x − 2t)
2F(x + 3t) + G(x − 2t)
.
(14.9)
• For region I, (14.6) and (14.7) give two initial value problems (since value at any
point in region I can be traced back along both characteristics to initial conditions):
v
(1)
t − 3v
(1)
x = 0,
v(1)(x, 0) = 1
5 f(x);
v
(2)
t + 2v
(2)
x = 0,
v(2)(x, 0) = −2
5 f(x).
which we solve by characteristics to get traveling wave solutions:
v(1)
(x, t) =
1
5
f(x + 3t), v(2)
(x, t) = −
2
5
f(x − 2t).
➡ Thus, for region I, U = ΓV =
1 −2
2 1
1
5 f(x + 3t)
−2
5 f(x − 2t)
=
1
5 f(x + 3t) + 4
5 f(x − 2t)
2
5 f(x + 3t) − 2
5 f(x − 2t)
.
• For region II, solutions of the form F(x+3t) can be traced back to initial conditions.
Thus, v(1) is the same as in region I. Solutions of the form G(x−2t) can be traced back
to the boundary. Since from (14.9),
u(1)
= v(1)
− 2v(2)
, we have
u(1)
(x, t) = F(x + 3t) − 2G(x − 2t) =
1
5
f(x + 3t) − 2G(x − 2t).
The boundary condition gives
u(1)
(0, t) = 0 =
1
5
f(3t) − 2G(−2t),
2G(−2t) =
1
5
f(3t),
G(t) =
1
10
f −
3
2
t ,
G(x − 2t) =
1
10
f −
3
2
(x − 2t) .
➡ Thus, for region II, U = ΓV =
1 −2
2 1
1
5 f(x + 3t)
1
10 f(−3
2 (x − 2t))
=
1
5 f(x + 3t) − 1
5 f(−3
2 (x − 2t))
2
5 f(x + 3t) + 1
10f(−3
2 (x − 2t))
.
Solutions for regions I and II satisfy (14.5).
Solution for region I satisfies both initial conditions.
Solution for region II satisfies given boundary condition.
Problem (F’94, #1; S’97, #7). Solve the initial-boundary value problem
ut + 3vx = 0,
vt + ux + 2vx = 0
in the quarter plane 0 ≤ x, t < ∞, with initial conditions 25
u(x, 0) = ϕ1(x), v(x, 0) = ϕ2(x), 0 < x < +∞
and boundary condition
u(0, t) = ψ(t), t > 0.
Proof. Rewrite the equation as Ut + AUx = 0:
Ut +
0 3
1 2
Ux = 0, (14.10)
U(x, 0) =
u(1)(x, 0)
u(2)
(x, 0)
=
ϕ1(x)
ϕ2(x)
.
The eigenvalues of the matrix A are λ1 = −1, λ2 = 3 and the corresponding eigen-
vectors are e1 =
−3
1
, e2 =
1
1
. Thus,
Λ =
−1 0
0 3
, Γ =
−3 1
1 1
, Γ−1
=
1
det Γ
· Γ =
1
4
−1 1
1 3
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
−1 0
0 3
Vx = 0, (14.11)
V (x, 0) = Γ−1
U(x, 0) =
1
4
−1 1
1 3
ϕ1(x)
ϕ2(x)
=
1
4
−ϕ1(x) + ϕ2(x)
ϕ1(x) + 3ϕ2(x)
. (14.12)
Equation (14.11) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x + t), v(2)
(x, t) = G(x − 3t). (14.13)
We can write U in terms of V :
U = ΓV =
−3 1
1 1
v(1)
v(2) =
−3 1
1 1
F(x + t)
G(x − 3t)
=
−3F(x + t) + G(x − 3t)
F(x + t) + G(x − 3t)
.
(14.14)
25
In S’97, #7, the zero initial conditions are considered.
Partial Differential Equations Igor Yanovsky, 2005 109
• For region I, (14.11) and (14.12) give two initial value problems (since value at
any point in region I can be traced back along characteristics to initial conditions):
v
(1)
t − v
(1)
x = 0,
v(1)(x, 0) = −1
4 ϕ1(x) + 1
4 ϕ2(x);
v
(2)
t + 3v
(2)
x = 0,
v(2)(x, 0) = 1
4ϕ1(x) + 3
4ϕ2(x),
which we solve by characteristics to get traveling wave solutions:
v(1)
(x, t) = −
1
4
ϕ1(x + t) +
1
4
ϕ2(x + t), v(2)
(x, t) =
1
4
ϕ1(x − 3t) +
3
4
ϕ2(x − 3t).
➡ Thus, for region I,
U = ΓV =
−3 1
1 1
−1
4 ϕ1(x + t) + 1
4ϕ2(x + t)
1
4 ϕ1(x − 3t) + 3
4ϕ2(x − 3t)
=
1
4
3ϕ1(x + t) − 3ϕ2(x + t) + ϕ1(x − 3t) + 3ϕ2(x − 3t)
−ϕ1(x + t) + ϕ2(x + t) + ϕ1(x − 3t) + 3ϕ2(x − 3t)
.
• For region II, solutions of the form F(x + t) can be traced back to initial conditions.
Thus, v(1)
is the same as in region I. Solutions of the form G(x−3t) can be traced back
to the boundary. Since from (14.14),
u(1)
= −3v(1)
+ v(2)
, we have
u(1)
(x, t) =
3
4
ϕ1(x + t) −
3
4
ϕ2(x + t) + G(x − 3t).
The boundary condition gives
u(1)
(0, t) = ψ(t) =
3
4
ϕ1(t) −
3
4
ϕ2(t) + G(−3t),
G(−3t) = ψ(t) −
3
4
ϕ1(t) +
3
4
ϕ2(t),
G(t) = ψ −
t
3
−
3
4
ϕ1 −
t
3
+
3
4
ϕ2 −
t
3
,
G(x − 3t) = ψ −
x − 3t
3
−
3
4
ϕ1 −
x − 3t
3
+
3
4
ϕ2 −
x − 3t
3
.
➡ Thus, for region II,
U = ΓV =
−3 1
1 1
−1
4 ϕ1(x + t) + 1
4ϕ2(x + t)
ψ(−x−3t
3 ) − 3
4ϕ1(−x−3t
3 ) + 3
4ϕ2(−x−3t
3 )
=
3
4 ϕ1(x + t) − 3
4 ϕ2(x + t) + ψ(−x−3t
3 ) − 3
4 ϕ1(−x−3t
3 ) + 3
4 ϕ2(−x−3t
3 )
−1
4ϕ1(x + t) + 1
4ϕ2(x + t) + ψ(−x−3t
3 ) − 3
4ϕ1(−x−3t
3 ) + 3
4ϕ2(−x−3t
3 )
.
Partial Differential Equations Igor Yanovsky, 2005 110
Solutions for regions I and II satisfy (14.10).
Solution for region I satisfies both initial conditions.
Solution for region II satisfies given boundary condition.
Partial Differential Equations Igor Yanovsky, 2005 111
Problem (F’91, #1). Solve explicitly the following initial-boundary value problem for
linear 2×2 hyperbolic system
ut = ux + vx
vt = 3ux − vx,
where 0 < t < +∞, 0 < x < +∞ with initial conditions
u(x, 0) = u0(x), v(x, 0) = v0(x), 0 < x < +∞,
and the boundary condition
u(0, t) + bv(0, t) = ϕ(t), 0 < t < +∞,
where b = 1
3 is a constant.
What happens when b = 1
3 ?
Proof. Let us change the notation (u ↔ u(1)
, v ↔ u(2)
). Rewrite the equation as
Ut +
−1 −1
−3 1
Ux = 0, (14.15)
U(x, 0) =
u(1)
(x, 0)
u(2)(x, 0)
=
u
(1)
0 (x)
u
(2)
0 (x)
.
The eigenvalues of the matrix A are λ1 = −2, λ2 = 2 and the corresponding eigen-
vectors are e1 =
1
1
, e2 =
1
−3
. Thus,
Λ =
−2 0
0 2
, Γ =
1 1
1 −3
, Γ−1
=
1
4
3 1
1 −1
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
−2 0
0 2
Vx = 0, (14.16)
V (x, 0) = Γ−1
U(x, 0) =
1
4
3 1
1 −1
u(1)(x, 0)
u(2)
(x, 0)
=
1
4
3u
(1)
0 (x) + u
(2)
0 (x)
u
(1)
0 (x) − u
(2)
0 (x)
.
(14.17)
Equation (14.16) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x + 2t), v(2)
(x, t) = G(x − 2t). (14.18)
Partial Differential Equations Igor Yanovsky, 2005 112
We can write U in terms of V :
U = ΓV =
1 1
1 −3
v(1)
v(2) =
1 1
1 −3
F(x + 2t)
G(x − 2t)
=
F(x + 2t) + G(x − 2t)
F(x + 2t) − 3G(x − 2t)
.
(14.19)
• For region I, (14.16) and (14.17) give two initial value problems (since value at any
point in region I can be traced back along characteristics to initial conditions):
v
(1)
t − 2v
(1)
x = 0,
v(1)(x, 0) = 3
4 u
(1)
0 (x) + 1
4u
(2)
0 (x);
v
(2)
t + 2v
(2)
x = 0,
v(2)(x, 0) = 1
4u
(1)
0 (x) − 1
4 u
(2)
0 (x),
which we solve by characteristics to get traveling wave solutions:
v(1)
(x, t) =
3
4
u
(1)
0 (x + 2t) +
1
4
u
(2)
0 (x + 2t); v(2)
(x, t) =
1
4
u
(1)
0 (x − 2t) −
1
4
u
(2)
0 (x − 2t).
➡ Thus, for region I,
U = ΓV =
1 1
1 −3
3
4u
(1)
0 (x + 2t) + 1
4 u
(2)
0 (x + 2t)
1
4u
(1)
0 (x − 2t) − 1
4 u
(2)
0 (x − 2t)
=
3
4u
(1)
0 (x + 2t) + 1
4 u
(2)
0 (x + 2t) + 1
4u
(1)
0 (x − 2t) − 1
4 u
(2)
0 (x − 2t)
3
4u
(1)
0 (x + 2t) + 1
4 u
(2)
0 (x + 2t) − 3
4u
(1)
0 (x − 2t) + 3
4 u
(2)
0 (x − 2t)
.
• For region II, solutions of the form F(x+2t) can be traced back to initial conditions.
Thus, v(1)
is the same as in region I. Solutions of the form G(x−2t) can be traced back
to the boundary. The boundary condition gives
u(1)
(0, t) + bu(2)
(0, t) = ϕ(t).
Using (14.19),
v(1)
(0, t) + G(−2t) + bv(1)
(0, t) − 3bG(−2t) = ϕ(t),
(1 + b)v(1)
(0, t) + (1 − 3b)G(−2t) = ϕ(t),
(1 + b)
3
4
u
(1)
0 (2t) +
1
4
u
(2)
0 (2t) + (1 − 3b)G(−2t) = ϕ(t),
G(−2t) =
ϕ(t) − (1 + b) 3
4 u
(1)
0 (2t) + 1
4 u
(2)
0 (2t)
1 − 3b
,
G(t) =
ϕ(−t
2) − (1 + b) 3
4u
(1)
0 (−t) + 1
4 u
(2)
0 (−t)
1 − 3b
,
G(x − 2t) =
ϕ(−x−2t
2 ) − (1 + b) 3
4 u
(1)
0 (−(x − 2t)) + 1
4 u
(2)
0 (−(x − 2t))
1 − 3b
.
➡ Thus, for region II,
U = ΓV =
1 1
1 −3
⎛
⎝
3
4 u
(1)
0 (x + 2t) + 1
4u
(2)
0 (x + 2t)
ϕ(−x−2t
2
)−(1+b) 3
4
u
(1)
0 (−(x−2t))+1
4
u
(2)
0 (−(x−2t))
1−3b
⎞
⎠
=
⎛
⎝
3
4u
(1)
0 (x + 2t) + 1
4 u
(2)
0 (x + 2t) +
ϕ(−x−2t
2
)−(1+b) 3
4
u
(1)
0 (−(x−2t))+1
4
u
(2)
0 (−(x−2t))
1−3b
3
4 u
(1)
0 (x + 2t) + 1
4 u
(2)
0 (x + 2t) −
3ϕ(−x−2t
2
)−3(1+b) 3
4
u
(1)
0 (−(x−2t))+1
4
u
(2)
0 (−(x−2t))
1−3b
⎞
⎠ .
The following were performed, but are arithmetically complicated:
Solutions for regions I and II satisfy (14.15).
Partial Differential Equations Igor Yanovsky, 2005 113
Solution for region I satisfies both initial conditions.
Solution for region II satisfies given boundary condition.
If b = 1
3 , u(1)
(0, t) + 1
3u(2)
(0, t) = F(2t) + G(−2t) + 1
3 F(2t) − G(−2t) = 4
3 F(2t) = ϕ(t).
Thus, the solutions of the form v(2) = G(x − 2t) are not defined at x = 0, which leads
to ill-posedness.
Partial Differential Equations Igor Yanovsky, 2005 114
Problem (F’96, #8). Consider the system
ut = 3ux + 2vx
vt = −vx − v
in the region x ≥ 0, t ≥ 0. Which of the following sets of initial and boundary data
make this a well-posed problem?
a) u(x, 0) = 0, x ≥ 0
v(x, 0) = x2
, x ≥ 0
v(0, t) = t2
, t ≥ 0.
b) u(x, 0) = 0, x ≥ 0
v(x, 0) = x2
, x ≥ 0
u(0, t) = t, t ≥ 0.
c) u(x, 0) = 0, x ≥ 0
v(x, 0) = x2
, x ≥ 0
u(0, t) = t, t ≥ 0
v(0, t) = t2
, t ≥ 0.
Proof. Rewrite the equation as Ut + AUx = BU. Initial conditions are same for
(a),(b),(c):
Ut +
−3 −2
0 1
Ux =
0 0
0 −1
U,
U(x, 0) =
u(1)(x, 0)
u(2)
(x, 0)
=
0
x2 .
The eigenvalues of the matrix A are λ1 = −3, λ2 = 1, and the corresponding eigen-
vectors are e1 =
1
0
, e2 =
1
−2
. Thus,
Λ =
−3 0
0 1
, Γ =
1 1
0 −2
, Γ−1
=
1
2
2 1
0 −1
.
Let U = ΓV . Then,
Ut + AUx = BU,
ΓVt + AΓVx = BΓV,
Vt + Γ−1
AΓVx = Γ−1
BΓV,
Vt + ΛVx = Γ−1
BΓV.
Thus, the transformed problem is
Vt +
−3 0
0 1
Vx =
0 1
0 −1
V, (14.20)
V (x, 0) = Γ−1
U(x, 0) =
1
2
2 1
0 −1
0
x2 =
x2
2
1
−1
. (14.21)
Equation (14.20) gives traveling wave solutions of the form
v(1)
(x, t) = F(x + 3t), v(2)
(x, t) = G(x − t). (14.22)
Partial Differential Equations Igor Yanovsky, 2005 115
We can write U in terms of V :
U = ΓV =
1 1
0 −2
v(1)
v(2) =
1 1
0 −2
F(x + 3t)
G(x − t)
=
F(x + 3t) + G(x − t)
−2G(x − t)
.
(14.23)
• For region I, (14.20) and (14.21) give two initial value problems (since a value at any
point in region I can be traced back along both characteristics to initial conditions):
v
(1)
t − 3v
(1)
x = v(2),
v(1)(x, 0) = x2
2 ;
v
(2)
t + v
(2)
x = −v(2),
v(2)(x, 0) = −x2
2 ,
which we do not solve here. Thus, initial conditions for v(1) and v(2) have to be defined.
Since (14.23) defines u(1) and u(2) in terms of v(1) and v(2), we need to define two initial
conditions for U.
• For region II, solutions of the form F(x+3t) can be traced back to initial conditions.
Thus, v(1)
is the same as in region I. Solutions of the form G(x − t) are traced back to
the boundary at x = 0. Since from (14.23), u(2)(x, t) = −2v(2)(x, t) = −2G(x − t), i.e.
u(2)
is written in term of v(2)
only, u(2)
requires a boundary condition to be defined on
x = 0.
Thus,
a) u(2)
(0, t) = t2
, t ≥ 0. Well-posed.
b) u(1)(0, t) = t, t ≥ 0. Not well-posed.
c) u(1)
(0, t) = t, u(2)
(0, t) = t2
, t ≥ 0. Not well-posed.
Partial Differential Equations Igor Yanovsky, 2005 116
Problem (F’02, #3). Consider the first order system
ut + ux + vx = 0
vt + ux − vx = 0
on the domain 0 < t < ∞ and 0 < x < 1. Which of the following sets of initial-
boundary data are well posed for this system? Explain your answers.
a) u(x,0) = f(x), v(x,0) = g(x);
b) u(x,0) = f(x), v(x,0) = g(x), u(0,t) = h(x), v(0,t) = k(x);
c) u(x,0) = f(x), v(x,0) = g(x), u(0,t) = h(x), v(1,t) = k(x).
Proof. Rewrite the equation as Ut+AUx = 0. Initial conditions are same for (a),(b),(c):
Ut +
1 1
1 −1
Ux = 0,
U(x, 0) =
u(1)
(x, 0)
u(2)(x, 0)
=
f(x)
g(x)
.
The eigenvalues of the matrix A are λ1 =
√
2, λ2 = −
√
2 and the corresponding
eigenvectors are e1 =
1
−1 +
√
2
, e2 =
1
−1 −
√
2
. Thus,
Λ =
√
2 0
0 −
√
2
, Γ =
1 1
−1 +
√
2 −1 −
√
2
, Γ−1
=
1
2
√
2
1 +
√
2 1
−1 +
√
2 −1
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
√
2 0
0 −
√
2
Vx = 0, (14.24)
V (x, 0) = Γ−1
U(x, 0) =
1
2
√
2
1 +
√
2 1
−1 +
√
2 −1
f(x)
g(x)
=
1
2
√
2
(1 +
√
2)f(x) + g(x)
(−1 +
√
2)f(x) − g(x)
.
(14.25)
Equation (14.24) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x −
√
2t), v(2)
(x, t) = G(x +
√
2t). (14.26)
However, we can continue and obtain the solutions. We have two initial value problems
v
(1)
t +
√
2v
(1)
x = 0,
v(1)(x, 0) = (1+
√
2)
2
√
2
f(x) + 1
2
√
2
g(x);
v
(2)
t −
√
2v
(2)
x = 0,
v(2)(x, 0) = (−1+
√
2)
2
√
2
f(x) − 1
2
√
2
g(x),
which we solve by characteristics to get traveling wave solutions:
v(1)
(x, t) =
(1 +
√
2)
2
√
2
f(x −
√
2t) +
1
2
√
2
g(x −
√
2t),
v(2)
(x, t) =
(−1 +
√
2)
2
√
2
f(x +
√
2t) −
1
2
√
2
g(x +
√
2t).
Partial Differential Equations Igor Yanovsky, 2005 117
We can obtain general solution U by writing U in terms of V :
U = ΓV = Γ
v(1)
v(2) =
1 1
−1 +
√
2 −1 −
√
2
1
2
√
2
(1 +
√
2)f(x −
√
2t) + g(x −
√
2t)
(−1 +
√
2)f(x +
√
2t) − g(x +
√
2t)
.
(14.27)
• In region I, the solution is obtained by solving two initial value problems(since a
value at any point in region I can be traced back along both characteristics to initial
conditions).
• In region II, the solutions of the form v(2) = G(x+
√
2t) can be traced back to initial
conditions and those of the form v(1)
= F(x −
√
2t), to left boundary. Since by (14.27),
u(1) and u(2) are written in terms of both v(1) and v(2), one initial condition and one
boundary condition at x = 0 need to be prescribed.
• In region III, the solutions of the form v(2) = G(x +
√
2t) can be traced back to
right boundary and those of the form v(1)
= F(x −
√
2t), to initial condition. Since by
(14.27), u(1)
and u(2)
are written in terms of both v(1)
and v(2)
, one initial condition
and one boundary condition at x = 1 need to be prescribed.
• To obtain the solution for region IV, two boundary conditions, one for each bound-
ary, should be given.
Thus,
a) No boundary conditions. Not well-posed.
b) u(1)(0, t) = h(x), u(2)(0, t) = k(x). Not well-posed.
c) u(1)
(0, t) = h(x), u(2)
(1, t) = k(x). Well-posed.
Partial Differential Equations Igor Yanovsky, 2005 118
Problem (S’94, #3). Consider the system of equations
ft + gx = 0
gt + fx = 0
ht + 2hx = 0
on the set x ≥ 0, t ≥ 0, with the following initial-boundary values:
a) f, g, h prescribed on t = 0, x ≥ 0; f, h prescribed on x = 0, t ≥ 0.
b) f, g, h prescribed on t = 0, x ≥ 0; f − g, h prescribed on x = 0, t ≥ 0.
c) f + g, h prescribed on t = 0, x ≥ 0; f, g, h prescribed on x = 0, t ≥ 0.
For each of these 3 sets of data, determine whether or not the system is well-posed.
Justify your conclusions.
Proof. The third equation is decoupled from the first two and can be considered sepa-
rately. Its solution can be written in the form
h(x, t) = H(x − 2t),
and therefore, h must be prescribed on t = 0 and on x = 0, since the characteristics
propagate from both the x and t axis.
We rewrite the first two equations as (f ↔ u1, g ↔ u2):
Ut +
0 1
1 0
Ux = 0,
U(x, 0) =
u(1)
(x, 0)
u(2)(x, 0)
.
The eigenvalues of the matrix A are λ1 = −1, λ2 = 1 and the corresponding eigen-
vectors are e1 =
−1
1
, e2 =
1
1
. Thus,
Λ =
−1 0
0 1
, Γ =
−1 1
1 1
, Γ−1
=
1
2
−1 1
1 1
.
Let U = ΓV . Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
−1 0
0 1
Vx = 0, (14.28)
V (x, 0) = Γ−1
U(x, 0) =
1
2
−1 1
1 1
u(1)
(x, 0)
u(1)
(x, 0)
. (14.29)
Equation (14.28) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x + t), v(2)
(x, t) = G(x − t). (14.30)
Partial Differential Equations Igor Yanovsky, 2005 119
We can write U in terms of V :
U = ΓV =
−1 1
1 1
v(1)
v(2) =
−1 1
1 1
F(x + t)
G(x − t)
=
−F(x + t) + G(x − t)
F(x + t) + G(x − t)
.
(14.31)
• For region I, (14.28) and (14.29) give two initial value problems (since a value at any
point in region I can be traced back along both characteristics to initial conditions).
Thus, initial conditions for v(1) and v(2) have to be defined. Since (14.31) defines u(1)
and u(2)
in terms of v(1)
and v(2)
, we need to define two initial conditions for U.
• For region II, solutions of the form F(x + t) can be traced back to initial conditions.
Thus, v(1)
is the same as in region I. Solutions of the form G(x − t) are traced back
to the boundary at x = 0. Since from (14.31), u(2)(x, t) = v(1)(x, t) + v(2)(x, t) =
F(x + t) + G(x − t), i.e. u(2)
is written in terms of v(2)
= G(x − t), u(2)
requires a
boundary condition to be defined on x = 0.
a) u(1)
, u(2)
prescribed on t = 0; u(1)
prescribed on x = 0.
Since u(1)
(x, t) = −F(x + t) + G(x − t),
u(2)(x, t) = F(x + t) + G(x − t), i.e. both
u(1)
and u(2)
are written in terms of F(x + t)
and G(x − t), we need to define two initial
conditions for U (on t = 0).
A boundary condition also needs to be prescribed
on x = 0 to be able to trace back v(2)
= G(x − t).
Well-posed.
b) u(1), u(2) prescribed on t = 0; u(1) − u(2) prescribed on x = 0.
As in part (a), we need to define two initial
conditions for U.
Since u(1)
− u(2)
= −2F(x + t), its definition
on x = 0 leads to ill-posedness. On the
contrary, u(1)
+ u(2)
= 2G(x − t) should be
defined on x = 0 in order to be able to trace
back the values through characteristics.
Ill-posed.
c) u(1) + u(2) prescribed on t = 0; u(1), u(2) prescribed on x = 0.
Since u(1) + u(2) = 2G(x − t), another initial
condition should be prescribed to be able to
trace back solutions of the form v(2) = F(x + t),
without which the problem is ill-posed.
Also, two boundary conditions for both u(1)
and u(2)
define solutions of both v(1)
= G(x − t)
and v(2) = F(x + t) on the boundary. The former
boundary condition leads to ill-posedness.
Ill-posed.
Partial Differential Equations Igor Yanovsky, 2005 120
Problem (F’92, #8). Consider the system
ut + ux + avx = 0
vt + bux + vx = 0
for 0 < x < 1 with boundary and initial conditions
u = v = 0 for x = 0
u = u0, v = v0 for t = 0.
a) For which values of a and b is this a well-posed problem?
b) For this class of a, b, state conditions on u0 and v0 so that the solution u, v will be
continuous and continuously differentiable.
Proof. a) Let us change the notation (u ↔ u(1), v ↔ u(2)). Rewrite the equation as
Ut +
1 a
b 1
Ux = 0, (14.32)
U(x, 0) =
u(1)
(x, 0)
u(2)
(x, 0)
=
u
(1)
0 (x)
u
(2)
0 (x)
,
U(0, t) =
u(1)
(0, t)
u(2)(0, t)
= 0.
The eigenvalues of the matrix A are λ1 = 1 −
√
ab, λ2 = 1 +
√
ab.
Λ =
1 −
√
ab 0
0 1 +
√
ab
.
Let U = ΓV , where Γ is a matrix of eigenvectors. Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
1 −
√
ab 0
0 1 +
√
ab
Vx = 0, (14.33)
V (x, 0) = Γ−1
U(x, 0).
The equation (14.33) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x − (1 −
√
ab)t), v(2)
(x, t) = G(x − (1 +
√
ab)t). (14.34)
We also have U = ΓV , i.e. both u(1)
and u(2)
(and their initial and boundary conditions)
are combinations of v(1) and v(2).
In order for this problem to be well-posed, both sets of characteristics should emanate
from the boundary at x = 0. Thus, the eigenvalues of the system are real (ab > 0) and
λ1,2 > 0 (ab < 1). Thus,
0 < ab < 1.
b) For U to be C1, we require the compatibility condition, u
(1)
0 (0) = 0, u
(2)
0 (0) =
0.
Partial Differential Equations Igor Yanovsky, 2005 121
Problem (F’93, #2). Consider the initial-boundary value problem
ut + ux = 0
vt − (1 − cx2
)vx + ux = 0
on −1 ≤ x ≤ 1 and 0 ≤ t, with the following prescribed data:
u(x, 0), v(x, 0),
u(−1, t), v(1, t).
For which values of c is this a well-posed problem?
Proof. Let us change the notation (u ↔ u(1)
, v ↔ u(2)
).
The first equation can be solved with u(1)(x, 0) = F(x) to get a solution in the form
u(1)
(x, t) = F(x − t), which requires u(1)
(x, 0) and u(1)
(−1, t) to be defined.
With u(1) known, we can solve the second equation
u
(2)
t − (1 − cx2
)u(2)
x + F(x − t) = 0.
Solving the equation by characteristics, we obtain
the characteristics in the xt-plane are of the form
dx
dt
= cx2
− 1.
We need to determine c such that the prescribed
data u(2)(x, 0) and u(2)(1, t) makes the problem to
be well-posed. The boundary condition for u(2)
(1, t)
requires the characteristics to propagate to the
left with t increasing. Thus, x(t) is a decreasing
function, i.e.
dx
dt
< 0 ⇒ cx2
− 1 < 0 for − 1 < x < 1 ⇒ c < 1.
We could also do similar analysis we have done in other problems on first order sys-
tems involving finding eigenvalues/eigenvectors of the system and using the fact that
u(1)(x, t) is known at both boundaries (i.e. values of u(1)(1, t) can be traced back either
to initial conditions or to boundary conditions on x = −1).
Partial Differential Equations Igor Yanovsky, 2005 122
Problem (S’91, #4). Consider the first order system
ut + aux + bvx = 0
vt + cux + dvx = 0
for 0 < x < 1, with prescribed initial data:
u(x, 0) = u0(x)
v(x, 0) = v0(x).
a) Find conditions on a, b, c, d such that there is a full set of characteristics and, in
this case, find the characteristic speeds.
b) For which values of a, b, c, d can boundary data be prescribed on x = 0 and for which
values can it be prescribed on x = 1? How many pieces of data can be prescribed on
each boundary?
Proof. a) Let us change the notation (u ↔ u(1), v ↔ u(2)). Rewrite the equation as
Ut +
a b
c d
Ux = 0, (14.35)
U(x, 0) =
u(1)(x, 0)
u(2)
(x, 0)
=
u
(1)
0 (x)
u
(2)
0 (x)
.
The system is hyperbolic if for each value of u(1) and u(2) the eigenvalues are real
and the matrix is diagonalizable, i.e. there is a complete set of linearly independent
eigenvectors. The eigenvalues of the matrix A are
λ1,2 =
a + d ± (a + d)2 − 4(ad − bc)
2
=
a + d ± (a − d)2 + 4bc
2
.
We need (a − d)2 + 4bc > 0. This also makes the problem to be diagonalizable.
Let U = ΓV , where Γ is a matrix of eigenvectors. Then,
Ut + AUx = 0,
ΓVt + AΓVx = 0,
Vt + Γ−1
AΓVx = 0,
Vt + ΛVx = 0.
Thus, the transformed problem is
Vt +
λ1 0
0 λ2
Vx = 0, (14.36)
Equation (14.36) gives traveling wave solutions of the form:
v(1)
(x, t) = F(x − λ1t), v(2)
(x, t) = G(x − λ2t). (14.37)
The characteristic speeds are dx
dt = λ1, dx
dt = λ2.
b) We assume (a + d)2
− 4(ad − bc) > 0.
a + d > 0, ad − bc > 0 ⇒ λ1, λ2 > 0 ⇒ 2 B.C. on x = 0.
a + d > 0, ad − bc < 0 ⇒ λ1 < 0, λ2 > 0 ⇒ 1 B.C. on x = 0, 1 B.C. on x = 1.
a + d < 0, ad − bc > 0 ⇒ λ1, λ2 < 0 ⇒ 2 B.C. on x = 1.
Partial Differential Equations Igor Yanovsky, 2005 123
a + d < 0, ad − bc < 0 ⇒ λ1 < 0, λ2 > 0 ⇒ 1 B.C. on x = 0, 1 B.C. on x = 1.
a + d > 0, ad − bc = 0 ⇒ λ1 = 0, λ2 > 0 ⇒ 1 B.C. on x = 0.
a + d < 0, ad − bc = 0 ⇒ λ1 = 0, λ2 < 0 ⇒ 1 B.C. on x = 1.
a + d = 0, ad − bc < 0 ⇒ λ1 < 0, λ2 > 0 ⇒ 1 B.C. on x = 0, 1 B.C. on
x = 1.
Partial Differential Equations Igor Yanovsky, 2005 124
Problem (S’94, #2). Consider the differential operator
L
u
v
=
ut + 9vx − uxx
vt − ux − vxx
on 0 ≤ x ≤ 2π, t ≥ 0, in which the vector
u(x, t)
v(x, t)
consists of two functions that
are periodic in x.
a) Find the eigenfunctions and eigenvalues of the operator L.
b) Use the results of (a) to solve the initial value problem
L
u
v
= 0 for t ≥ 0,
u
v
=
eix
0
for t = 0.
Proof. a) We find the ”space” eigenvalues and eigenfunctions. We rewrite the system
as
Ut +
0 9
−1 0
Ux +
−1 0
0 −1
Uxx = 0,
and find eigenvalues
0 9
−1 0
Ux +
−1 0
0 −1
Uxx = λU. (14.38)
Set U =
u(x, t)
v(x, t)
=
n=∞
n=−∞ un(t)einx
n=∞
n=−∞ vn(t)einx . Plugging this into (14.38), we get
0 9
−1 0
inun(t)einx
invn(t)einx +
−1 0
0 −1
−n2
un(t)einx
−n2vn(t)einx = λ
un(t)einx
vn(t)einx ,
0 9
−1 0
inun(t)
invn(t)
+
−1 0
0 −1
−n2
un(t)
−n2vn(t)
= λ
un(t)
vn(t)
,
0 9in
−in 0
un(t)
vn(t)
+
n2 0
0 n2
un(t)
vn(t)
= λ
un(t)
vn(t)
,
n2
− λ 9in
−in n2 − λ
un(t)
vn(t)
= 0,
(n2
− λ)2
− 9n2
= 0,
which gives λ1 = n2+3n, λ2 = n2−3n, are eigenvalues, and v1 =
3i
1
, v2 =
3i
−1
,
are corresponding eigenvectors.
Partial Differential Equations Igor Yanovsky, 2005 125
b) We want to solve
u
v t
+ L
u
v
= 0, L
u
v
=
9vx − uxx
−ux − vxx
. We have
u
v t
= −L
u
v
= −λ
u
v
, i.e.
u
v
= e−λt
. We can write the solution as
U(x, t) =
un(t)einx
vn(t)einx =
∞
n=−∞
ane−λ1t
v1einx
+ bne−λ2t
v2einx
=
∞
n=−∞
ane−(n2+3n)t 3i
1
einx
+ bne−(n2−3n)t 3i
−1
einx
.
U(x, 0) =
∞
n=−∞
an
3i
1
einx
+ bn
3i
−1
einx
=
eix
0
,
⇒ an = bn = 0, n = 1;
a1 + b1 =
1
3i
and a1 = b1 ⇒ a1 = b1 =
1
6i
.
⇒ U(x, t) =
1
6i
e−4t 3i
1
eix
+
1
6i
e2t 3i
−1
eix
=
1
2(e−4t
+ e2t
)
1
6i(e−4t − e2t)
eix
.
26 27
26
ChiuYen’s and Sung-Ha’s solutions give similar answers.
27
Questions about this problem:
1. Needed to find eigenfunctions, not eigenvectors.
2. The notation of L was changed. The problem statement incorporates the derivatives wrt. t into L.
3. Why can we write the solution in this form above?
Partial Differential Equations Igor Yanovsky, 2005 126
Problem (W’04, #6). Consider the first order system
ut − ux = vt + vx = 0
in the diamond shaped region −1 < x + t < 1, −1 < x − t < 1. For each of
the following boundary value problems state whether this problem is well-posed. If it is
well-posed, find the solution.
a) u(x + t) = u0(x + t) on x − t = −1, v(x − t) = v0(x − t) on x + t = −1.
b) v(x + t) = v0(x + t) on x − t = −1, u(x − t) = u0(x − t) on x + t = −1.
Proof. We have
ut − ux = 0,
vt + vx = 0.
• u is constant along the characteristics: x + t = c1(s).
Thus, its solution is u(x, t) = u0(x + t).
It the initial condition is prescribed at x − t = −1,
the solution can be determined in the entire region
by tracing back through the characteristics.
• v is constant along the characteristics: x − t = c2(s).
Thus, its solution is v(x, t) = v0(x − t).
It the initial condition is prescribed at x + t = −1,
the solution can be determined in the entire region
by tracing forward through the characteristics.
Partial Differential Equations Igor Yanovsky, 2005 127
15 Problems: Gas Dynamics Systems
15.1 Perturbation
Problem (S’92, #3). 28 29
Consider the gas dynamic equations
ut + uux + (F(ρ))x = 0,
ρt + (uρ)x = 0.
Here F(ρ) is a given C∞-smooth function of ρ. At t = 0, 2π-periodic initial data
u(x, 0) = f(x), ρ(x, 0) = g(x).
a) Assume that
f(x) = U0 + εf1(x), g(x) = R0 + εg1(x)
where U0, R0 > 0 are constants and εf1(x), εg1(x) are “small” perturbations. Lin-
earize the equations and given conditions for F such that the linearized problem is
well-posed.
b) Assume that U0 > 0 and consider the above linearized equations for 0 ≤ x ≤ 1,
t ≥ 0. Construct boundary conditions such that the initial-boundary value problem is
well-posed.
Proof. a) We write the equations in characteristic form:
ut + uux + F (ρ)ρx = 0,
ρt + uxρ + uρx = 0.
Consider the special case of nearly constant initial data
u(x, 0) = u0 + εu1(x, 0),
ρ(x, 0) = ρ0 + ερ1(x, 0).
Then we can approximate nonlinear equations by linear equations. Assuming
u(x, t) = u0 + εu1(x, t),
ρ(x, t) = ρ0 + ερ1(x, t)
remain valid with u1 = O(1), ρ1 = O(1), we find that
ut = εu1t, ρt = ερ1t,
ux = εu1x, ρx = ερ1x,
F (ρ) = F (ρ0 + ερ1(x, t)) = F (ρ0) + ερ1F (ρ0) + O(ε2
).
Plugging these into , gives
εu1t + (u0 + εu1)εu1x + F (ρ0) + ερ1F (ρ0) + O(ε2
) ερ1x = 0,
ερ1t + εu1x(ρ0 + ερ1) + (u0 + εu1)ερ1x = 0.
Dividing by ε gives
u1t + u0u1x + F (ρ0)ρ1x = −εu1u1x − ερ1ρ1xF (ρ0) + O(ε2
),
ρ1t + u1xρ0 + u0ρ1x = −εu1xρ1 − εu1ρ1x.
28
See LeVeque, Second Edition, Birkh¨auser Verlag, 1992, p. 44.
29
This problem has similar notation with S’92, #4.
Partial Differential Equations Igor Yanovsky, 2005 128
For small ε, we have
u1t + u0u1x + F (ρ0)ρ1x = 0,
ρ1t + u1xρ0 + u0ρ1x = 0.
This can be written as
u1
ρ1 t
+
u0 F (ρ0)
ρ0 u0
u1
ρ1 x
=
0
0
.
u0 − λ F (ρ0)
ρ0 u0 − λ
= (u0 − λ)(u0 − λ) − ρ0F (ρ0) = 0,
λ2
− 2u0λ + u2
0 − ρ0F (ρ0) = 0,
λ1,2 = u0 ± ρ0F (ρ0), u0 > 0, ρ0 > 0.
For well-posedness, need λ1,2 ∈ R or F (ρ0) ≥ 0.
b) We have u0 > 0, and λ1 = u0 + ρ0F (ρ0), λ2 = u0 − ρ0F (ρ0).
• If u0 > ρ0F (ρ0) ⇒ λ1 > 0, λ2 > 0 ⇒ 2 BC at x = 0.
• If u0 = ρ0F (ρ0) ⇒ λ1 > 0, λ2 = 0 ⇒ 1 BC at x = 0.
• If 0 < u0 < ρ0F (ρ0) ⇒ λ1 > 0, λ2 < 0 ⇒ 1 BC at x = 0, 1 BC at x = 1.
15.2 Stationary Solutions
Problem (S’92, #4). 30 Consider
ut + uux + ρx = νuxx,
ρt + (uρ)x = 0
for t ≥ 0, −∞ < x < ∞.
Give conditions for the states U+, U−, R+, R−, such that the system has
stationary solutions (i.e. ut = ρt = 0) satisfying
lim
x→+∞
u
ρ
=
U+
R+
, lim
x→−∞
u
ρ
=
U−
R−
.
Proof. For stationary solutions, we need
ut = −
u2
2 x
− ρx + νuxx = 0,
ρt = −(uρ)x = 0.
Integrating the above equations, we obtain
−
u2
2
− ρ + νux = C1,
−uρ = C2.
30
This problem has similar notation with S’92, #3.
Partial Differential Equations Igor Yanovsky, 2005 129
Conditions give ux = 0 at x = ±∞. Thus
U2
+
2
+ R+ =
U2
−
2
+ R−,
U+R+ = U−R−.
Partial Differential Equations Igor Yanovsky, 2005 130
15.3 Periodic Solutions
Problem (F’94, #4). Let u(x, t) be a solution of the Cauchy problem
ut = −uxxxx − 2uxx, −∞ < x < +∞, 0 < t < +∞,
u(x, 0) = ϕ(x),
where u(x, t) and ϕ(x) are C∞
functions periodic in x with period 2π;
i.e. u(x + 2π, t) = u(x, t), ∀x, ∀t.
Prove that
||u(·, t)|| ≤ Ceat
||ϕ||
where ||u(·, t)|| =
2π
0 |u(x, t)|2 dx, ||ϕ|| =
2π
0 |ϕ(x)|2 dx, C, a are some constants.
Proof. METHOD I: Since u is 2π-periodic, let
u(x, t) =
∞
n=−∞
an(t)einx
.
Plugging this into the equation, we get
∞
n=−∞
an(t)einx
= −
∞
n=−∞
n4
an(t)einx
+ 2
∞
n=−∞
n2
an(t)einx
,
an(t) = (−n4
+ 2n2
)an(t),
an(t) = an(0)e(−n4+2n2)t
.
Also, initial condition gives
u(x, 0) =
∞
n=−∞
an(0)einx
= ϕ(x),
∞
n=−∞
an(0)einx
= |ϕ(x)|.
||u(x, t)||2
2 =
2π
0
u2
(x, t) dx =
2π
0
∞
n=−∞
an(t)einx
∞
m=−∞
an(t)eimx
dx
=
∞
n=−∞
a2
n(t)
2π
0
einx
e−inx
dx = 2π
∞
n=−∞
a2
n(t) = 2π
∞
n=−∞
a2
n(0)e2(−n4+2n2)t
≤ 2π
∞
n=−∞
a2
n(0)
∞
n=−∞
e2(−n4+2n2)t
= 2π
∞
n=−∞
a2
n(0)
||ϕ||2
e2t
∞
n=−∞
e−2(n2−1)2t
= C1, (convergent)
= C2e2t
||ϕ||2
.
⇒ ||u(x, t)|| ≤ Cet
||ϕ||.
Partial Differential Equations Igor Yanovsky, 2005 131
METHOD II: Multiply this equation by u and integrate
uut = −uuxxxx − 2uuxx,
1
2
d
dt
(u2
) = −uuxxxx − 2uuxx,
1
2
d
dt
2π
0
u2
dx = −
2π
0
uuxxxx dx −
2π
0
2uuxx dx,
1
2
d
dt
||u||2
2 = −uuxxx
2π
0
=0
+ uxuxx
2π
0
=0
−
2π
0
u2
xx dx −
2π
0
2uuxx dx,
1
2
d
dt
||u||2
2 = −
2π
0
u2
xx dx −
2π
0
2uuxx dx (−2ab ≤ a2
+ b2
)
≤ −
2π
0
u2
xx dx +
2π
0
(u2
+ u2
xx) dx =
2π
0
u2
dx = ||u||2
,
⇒
d
dt
||u||2
≤ 2||u||2
,
||u||2
≤ ||u(0)||2
e2t
,
||u|| ≤ ||u(0)||et
.
METHOD III: Can use Fourier transform. See ChiuYen’s solutions, that have both
Method II and III.
Partial Differential Equations Igor Yanovsky, 2005 132
Problem (S’90, #4).
Let f(x) ∈ C∞ be a 2π-periodic function, i.e., f(x) = f(x + 2π) and denote by
||f||2
=
2π
0
|f(x)|2
dx
the L2-norm of f.
a) Express ||dp
f/dxp
||2
in terms of the Fourier coefficients of f.
b) Let q > p > 0 be integers. Prove that ∀ > 0, ∃K = N(ε, p, q), constant, such that
dp
f
dxp
2
≤ ε
dq
f
dxq
2
+ K||f||2
.
c) Discuss how K depends on ε.
Proof. a) Let 31
f(x) =
∞
−∞
fneinx
,
dp
f
dxp
=
∞
−∞
fn(in)p
einx
,
dp
f
dxp
2
=
2π
0
∞
−∞
fn(in)p
einx 2
dx =
2π
0
|i2
|p
∞
−∞
fnnp
einx 2
dx
=
2π
0
∞
−∞
fnnp
einx 2
dx = 2π
∞
n=0
f2
nn2p
.
b) We have
dpf
dxp
2
≤ ε
dqf
dxq
2
+ K||f||2
,
2π
∞
n=0
f2
nn2p
≤ ε 2π
∞
n=0
f2
nn2q
+ K 2π
∞
n=0
f2
n,
n2p
− εn2q
≤ K,
n2p
(1 − εnq
)
< 0, for n large
≤ K, some q > 0.
Thus, the above inequality is true for n large enough. The statement follows.
31
Note:
L
0
einx
eimx
dx =
0 n = m
L n = m
Partial Differential Equations Igor Yanovsky, 2005 133
Problem (S’90, #5). 32
Consider the flame front equation
ut + uux + uxx + uxxxx = 0
with 2π-periodic initial data
u(x, 0) = f(x), f(x) = f(x + 2π) ∈ C∞
.
a) Determine the solution, if f(x) ≡ f0 = const.
b) Assume that
f(x) = 1 + εg(x), 0 < ε 1, |g|∞ = 1, g(x) = g(x + 2π).
Linearize the equation. Is the Cauchy problem well-posed for the linearized equation,
i.e., do its solutions v satisfy an estimate
||v(·, t)|| ≤ Keα(t−t0)
||v(·, t0)||?
c) Determine the best possible constants K, α.
Proof. a) The solution to
ut + uux + uxx + uxxxx = 0,
u(x, 0) = f0 = const,
is u(x, t) = f0 = const.
b) We consider the special case of nearly constant initial data
u(x, 0) = 1 + εu1(x, 0).
Then we can approximate the nonlinear equation by a linear equation. Assuming
u(x, t) = 1 + εu1(x, t),
remain valid with u1 = O(1), from , we find that
εu1t + (1 + εu1)εu1x + εu1xx + εu1xxxx = 0.
Dividing by ε gives
u1t + u1x + εu1u1x + u1xx + u1xxxx = 0.
For small ε, we have
u1t + u1x + u1xx + u1xxxx = 0.
Multiply this equation by u1 and integrate
u1u1t + u1u1x + u1u1xx + u1u1xxxx = 0,
d
dt
u2
1
2
+
u2
1
2 x
+ u1u1xx + u1u1xxxx = 0,
1
2
d
dt
2π
0
u2
1 dx +
u2
1
2
2π
0
=0
+
2π
0
u1u1xx dx +
2π
0
u1u1xxxx dx = 0,
1
2
d
dt
||u1||2
2 + u1u1x
2π
0
=0
−
2π
0
u2
1x dx + u1u1xxx
2π
0
=0
− u1xu1xx
2π
0
=0
+
2π
0
u2
1xx dx = 0,
1
2
d
dt
||u1||2
2 =
2π
0
u2
1x dx −
2π
0
u2
1xx dx.
32
S’90 #5, #6, #7 all have similar formulations.
Partial Differential Equations Igor Yanovsky, 2005 134
Since u1 is 2π-periodic, let
u1 =
∞
n=−∞
an(t)einx
. Then,
u1x = i
∞
n=−∞
nan(t)einx
⇒ u2
1x = −
∞
n=−∞
nan(t)einx
2
,
u1xx = −
∞
n=−∞
n2
an(t)einx
⇒ u2
1xx =
∞
n=−∞
n2
an(t)einx
2
.
Thus,
1
2
d
dt
||u1||2
2 =
2π
0
u2
1x dx −
2π
0
u2
1xx dx
= −
2π
0
nan(t)einx
2
dx −
2π
0
n2
an(t)einx
2
dx
= −2π n2
an(t)2
− 2π n4
an(t)2
= −2π an(t)2
(n2
+ n4
) ≤ 0.
⇒ ||u1(·, t)||2 ≤ ||u1(·, 0)||2,
where K = 1, α = 0.
Problem (W’03, #4). Consider the PDE
ut = ux + u4
for t > 0
u = u0 for t = 0
for 0 < x < 2π. Define the set A = {u = u(x) : ˆu(k) = 0 if k < 0}, in which
{ˆu(k, t)}∞
−∞ is the Fourier series of u in x on [0, 2π].
a) If u0 ∈ A, show that u(t) ∈ A.
b) Find differential equations for ˆu(0, t), ˆu(1, t), and ˆu(2, t).
Proof. a) Solving
ut = ux + u4
u(x, 0) = u0(x)
by the method of characteristics, we get
u(x, t) =
u0(x + t)
(1 − 3t(u0(x + t))3)
1
3
.
Since u0 ∈ A, u0k = 0 if k < 0. Thus,
u0(x) =
∞
k=0
u0k eikx
2 .
Since
uk =
1
2π
2π
0
u(x, t) e−ikx
2 dx,
Partial Differential Equations Igor Yanovsky, 2005 135
we have
u(x, t) =
∞
k=0
uk eikx
2 ,
that is, u(t) ∈ A.
Partial Differential Equations Igor Yanovsky, 2005 136
15.4 Energy Estimates
Problem (S’90, #6). Let U(x, t) ∈ C∞ be 2π-periodic in x. Consider the linear
equation
ut + Uux + uxx + uxxxx = 0,
u(x, 0) = f(x), f(x) = f(x + 2π) ∈ C∞
.
a) Derive an energy estimate for u.
b) Prove that one can estimate all derivatives ||∂p
u/∂xp
||.
c) Indicate how to prove existence of solutions. 33
Proof. a) Multiply the equation by u and integrate
uut + Uuux + uuxx + uuxxxx = 0,
1
2
d
dt
(u2
) +
1
2
U(u2
)x + uuxx + uuxxxx = 0,
1
2
d
dt
2π
0
u2
dx +
1
2
2π
0
U(u2
)x dx +
2π
0
uuxx dx +
2π
0
uuxxxx dx = 0,
1
2
d
dt
||u||2
+
1
2
Uu2
2π
0
=0
−
1
2
2π
0
Uxu2
dx + uux
2π
0
−
2π
0
u2
x dx
+uuxxx
2π
0
− uxuxx
2π
0
+
2π
0
u2
xx dx = 0,
1
2
d
dt
||u||2
−
1
2
2π
0
Uxu2
dx −
2π
0
u2
x dx +
2π
0
u2
xx dx = 0,
1
2
d
dt
||u||2
=
1
2
2π
0
Uxu2
dx +
2π
0
u2
x dx −
2π
0
u2
xx dx ≤ (from S’90, #5) ≤
≤
1
2
2π
0
Uxu2
dx ≤
1
2
max
x
Ux
2π
0
u2
dx.
⇒
d
dt
||u||2
≤ max
x
Ux||u||2
,
||u(x, t)||2
≤ ||u(x, 0)||2
e(maxx Ux)t
.
This can also been done using Fourier Transform. See ChiuYen’s solutions where the
above method and the Fourier Transform methods are used.
33
S’90 #5, #6, #7 all have similar formulations.
Partial Differential Equations Igor Yanovsky, 2005 137
Problem (S’90, #7). 34
Consider the nonlinear equation
ut + uux + uxx + uxxxx = 0,
u(x, 0) = f(x), f(x) = f(x + 2π) ∈ C∞
.
a) Derive an energy estimate for u.
b) Show that there is an interval 0 ≤ t ≤ T, T depending on f,
such that also ||∂u(·, t)/∂x|| can be bounded.
Proof. a) Multiply the above equation by u and integrate
uut + u2
ux + uuxx + uuxxxx = 0,
1
2
d
dt
(u2
) +
1
3
(u3
)x + uuxx + uuxxxx = 0,
1
2
d
dt
2π
0
u2
dx +
1
3
2π
0
(u3
)x dx +
2π
0
uuxx dx +
2π
0
uuxxxx dx = 0,
1
2
d
dt
||u||2
+
1
3
u3
2π
0
=0
−
2π
0
u2
x dx +
2π
0
u2
xx dx = 0,
1
2
d
dt
||u||2
=
2π
0
u2
x dx −
2π
0
u2
xx dx ≤ 0, (from S’90, #5)
⇒ ||u(·, t)|| ≤ ||u(·, 0)||.
b) In order to find a bound for ||ux(·, t)||, differentiate with respect to x:
utx + (uux)x + uxxx + uxxxxx = 0,
Multiply the above equation by ux and integrate:
uxutx + ux(uux)x + uxuxxx + uxuxxxxx = 0,
1
2
d
dt
2π
0
(ux)2
dx +
2π
0
ux(uux)x dx +
2π
0
uxuxxx dx +
2π
0
uxuxxxxx dx = 0.
We evaluate one of the integrals in the above expression using the periodicity:
2π
0
ux(uux)x dx = −
2π
0
uxxuux =
2π
0
ux(u2
x + uuxx) =
2π
0
u3
x +
2π
0
uuxuxx,
⇒
2π
0
uxxuux = −
1
2
2π
0
u3
x,
⇒
2π
0
ux(uux)x =
1
2
2π
0
u3
x.
We have
1
2
d
dt
||ux||2
+
2π
0
u3
x dx +
2π
0
uxuxxx dx +
2π
0
uxuxxxxx dx = 0.
34
S’90 #5, #6, #7 all have similar formulations.
Partial Differential Equations Igor Yanovsky, 2005 138
Let w = ux, then
1
2
d
dt
||w||2
= −
2π
0
w3
dx −
2π
0
wwxx dx −
2π
0
wwxxxx dx
= −
2π
0
w3
dx +
2π
0
w2
x dx −
2π
0
w2
xx dx ≤ −
2π
0
w3
dx,
⇒
d
dt
||ux||2
= −
2π
0
u3
x dx.
Partial Differential Equations Igor Yanovsky, 2005 139
16 Problems: Wave Equation
16.1 The Initial Value Problem
Example (McOwen 3.1 #1). Solve the initial value problem:
⎧
⎪⎨
⎪⎩
utt − c2uxx = 0,
u(x, 0) = x3
g(x)
, ut(x, 0) = sinx
h(x)
.
Proof. D’Alembert’s formula gives the solution:
u(x, t) =
1
2
(g(x + ct) + g(x − ct)) +
1
2c
x+ct
x−ct
h(ξ) dξ
=
1
2
(x + ct)3
+
1
2
(x − ct)3
+
1
2c
x+ct
x−ct
sin ξ dξ
= x3
+ 2xc2
t2
−
1
2c
cos(x + ct) +
1
2c
cos(x − ct) =
= x3
+ 2xc2
t2
+
1
c
sin x sinct.
Problem (S’99, #6). Solve the Cauchy problem
utt = a2uxx + cos x,
u(x, 0) = sin x, ut(x, 0) = 1 + x.
(16.1)
Proof. We have a nonhomogeneous PDE with nonhomogeneous initial conditions:
⎧
⎪⎪⎪⎨
⎪⎪⎪⎩
utt − c2uxx = cos x
f(x,t)
,
u(x, 0) = sin x
g(x)
, ut(x, 0) = 1 + x
h(x)
.
The solution is given by d’Alembert’s formula and Duhamel’s principle.35
uA
(x, t) =
1
2
(g(x + ct) + g(x − ct)) +
1
2c
x+ct
x−ct
h(ξ) dξ
=
1
2
(sin(x + ct) + sin(x − ct)) +
1
2c
x+ct
x−ct
(1 + ξ) dξ
= sinx cos ct +
1
2c
ξ +
ξ2
2
ξ=x+ct
ξ=x−ct
= sinx cos ct + xt + t.
uD
(x, t) =
1
2c
t
0
x+c(t−s)
x−c(t−s)
f(ξ, s) dξ ds =
1
2c
t
0
x+c(t−s)
x−c(t−s)
cos ξ dξ ds
=
1
2c
t
0
sin[x + c(t − s)] − sin[x − c(t − s)] ds =
1
c2
(cos x − cos x cos ct).
u(x, t) = uA
(x, t) + uD
(x, t) = sinx cos ct + xt + t +
1
c2
(cos x − cos x cos ct).
35
Note the relationship: x ↔ ξ, t ↔ s.
Partial Differential Equations Igor Yanovsky, 2005 140
We can check that the solution satisfies equation (16.1). Can also check that uA
, uD
satisfy
uA
tt − c2
uA
xx = 0,
uA(x, 0) = sinx, uA
t (x, 0) = 1 + x;
uD
tt − c2
uD
xx = cos x,
uD(x, 0) = 0, uD
t (x, 0) = 0.
Partial Differential Equations Igor Yanovsky, 2005 141
16.2 Initial/Boundary Value Problem
Problem 1. Consider the initial/boundary value problem
⎧
⎪⎨
⎪⎩
utt − c2uxx = 0 0 < x < L, t > 0
u(x, 0) = g(x), ut(x, 0) = h(x) 0 < x < L
u(0, t) = 0, u(L, t) = 0 t ≥ 0.
(16.2)
Proof. Find u(x, t) in the form
u(x, t) =
a0(t)
2
+
∞
n=1
an(t) cos
nπx
L
+ bn(t) sin
nπx
L
.
• Functions an(t) and bn(t) are determined by the boundary conditions:
0 = u(0, t) =
a0(t)
2
+
∞
n=1
an(t) ⇒ an(t) = 0. Thus,
u(x, t) =
∞
n=1
bn(t) sin
nπx
L
. (16.3)
• If we substitute (16.3) into the equation utt − c2uxx = 0, we get
∞
n=1
bn(t) sin
nπx
L
+ c2
∞
n=1
nπ
L
2
bn(t) sin
nπx
L
= 0, or
bn(t) +
nπc
L
2
bn(t) = 0,
whose general solution is
bn(t) = cn sin
nπct
L
+ dn cos
nπct
L
. (16.4)
Also, bn(t) = cn(nπc
L ) cos nπct
L − dn(nπc
L ) sin nπct
L .
• The constants cn and dn are determined by the initial conditions:
g(x) = u(x, 0) =
∞
n=1
bn(0) sin
nπx
L
=
∞
n=1
dn sin
nπx
L
,
h(x) = ut(x, 0) =
∞
n=1
bn(0) sin
nπx
L
=
∞
n=1
cn
nπc
L
sin
nπx
L
.
By orthogonality, we may multiply by sin(mπx/L) and integrate:
L
0
g(x) sin
mπx
L
dx =
L
0
∞
n=1
dn sin
nπx
L
sin
mπx
L
dx = dm
L
2
,
L
0
h(x) sin
mπx
L
dx =
L
0
∞
n=1
cn
nπc
L
sin
nπx
L
sin
mπx
L
dx = cm
mπc
L
L
2
.
Thus,
dn =
2
L
L
0
g(x) sin
nπx
L
dx, cn =
2
nπc
L
0
h(x) sin
nπx
L
dx. (16.5)
The formulas (16.3), (16.4), and (16.5) define the solution.
Partial Differential Equations Igor Yanovsky, 2005 142
Example (McOwen 3.1 #2). Consider the initial/boundary value problem
⎧
⎪⎨
⎪⎩
utt − uxx = 0 0 < x < π, t > 0
u(x, 0) = 1, ut(x, 0) = 0 0 < x < π
u(0, t) = 0, u(π, t) = 0 t ≥ 0.
(16.6)
Proof. Find u(x, t) in the form
u(x, t) =
a0(t)
2
+
∞
n=1
an(t) cos nx + bn(t) sinnx.
• Functions an(t) and bn(t) are determined by the boundary conditions:
0 = u(0, t) =
a0(t)
2
+
∞
n=1
an(t) ⇒ an(t) = 0. Thus,
u(x, t) =
∞
n=1
bn(t) sinnx. (16.7)
• If we substitute this into utt − uxx = 0, we get
∞
n=1
bn(t) sinnx +
∞
n=1
bn(t)n2
sinnx = 0, or
bn(t) + n2
bn(t) = 0,
whose general solution is
bn(t) = cn sinnt + dn cos nt. (16.8)
Also, bn(t) = ncn cos nt − ndn sinnt.
• The constants cn and dn are determined by the initial conditions:
1 = u(x, 0) =
∞
n=1
bn(0) sinnx =
∞
n=1
dn sin nx,
0 = ut(x, 0) =
∞
n=1
bn(0) sinnx =
∞
n=1
ncn sinnx.
By orthogonality, we may multiply both equations by sinmx and integrate:
π
0
sin mx dx = dm
π
2
,
π
0
0 dx = ncn
π
2
.
Thus,
dn =
2
nπ
(1 − cos nπ) =
4
nπ , n odd,
0, n even,
and cn = 0. (16.9)
Using this in (16.8) and (16.7), we get
bn(t) =
4
nπ cos nt, n odd,
0, n even,
Partial Differential Equations Igor Yanovsky, 2005 143
u(x, t) =
4
π
∞
n=0
cos(2n + 1)t sin(2n + 1)x
(2n + 1)
.
Partial Differential Equations Igor Yanovsky, 2005 144
We can sum the series in regions bouded by characteristics. We have
u(x, t) =
4
π
∞
n=0
cos(2n + 1)t sin(2n + 1)x
(2n + 1)
, or
u(x, t) =
2
π
∞
n=0
sin[(2n + 1)(x + t)]
(2n + 1)
+
2
π
∞
n=0
sin[(2n + 1)(x − t)]
(2n + 1)
. (16.10)
The initial condition may be written as
1 = u(x, 0) =
4
π
∞
n=0
sin(2n + 1)x
(2n + 1)
for 0 < x < π. (16.11)
We can use (16.11) to sum the series in (16.10).
In R1, u(x, t) =
1
2
+
1
2
= 1.
Since sin[(2n + 1)(x − t)] = − sin[(2n + 1)(−(x − t))], and 0 < −(x − t) < π in R2,
in R2, u(x, t) =
1
2
−
1
2
= 0.
Since sin[(2n + 1)(x + t)] = sin[(2n + 1)(x + t − 2π)] = − sin[(2n + 1)(2π − (x + t))],
and 0 < 2π − (x + t) < π in R3,
in R3, u(x, t) = −
1
2
+
1
2
= 0.
Since 0 < −(x − t) < π and 0 < 2π − (x + t) < π in R4,
in R4, u(x, t) = −
1
2
−
1
2
= −1.
Partial Differential Equations Igor Yanovsky, 2005 145
Problem 2. Consider the initial/boundary value problem
⎧
⎪⎨
⎪⎩
utt − c2uxx = 0 0 < x < L, t > 0
u(x, 0) = g(x), ut(x, 0) = h(x) 0 < x < L
ux(0, t) = 0, ux(L, t) = 0 t ≥ 0.
(16.12)
Proof. Find u(x, t) in the form
u(x, t) =
a0(t)
2
+
∞
n=1
an(t) cos
nπx
L
+ bn(t) sin
nπx
L
.
• Functions an(t) and bn(t) are determined by the boundary conditions:
ux(x, t) =
∞
n=1
−an(t)
nπ
L
sin
nπx
L
+ bn(t)
nπ
L
cos
nπx
L
,
0 = ux(0, t) =
∞
n=1
bn(t)
nπ
L
⇒ bn(t) = 0. Thus,
u(x, t) =
a0(t)
2
+
∞
n=1
an(t) cos
nπx
L
. (16.13)
• If we substitute (16.13) into the equation utt − c2
uxx = 0, we get
a0(t)
2
+
∞
n=1
an(t) cos
nπx
L
+ c2
∞
n=1
an(t)
nπ
L
2
cos
nπx
L
= 0,
a0(t) = 0 and an(t) +
nπc
L
2
an(t) = 0,
whose general solutions are
a0(t) = c0t + d0 and an(t) = cn sin
nπct
L
+ dn cos
nπct
L
. (16.14)
Also, a0(t) = c0 and an(t) = cn(nπc
L ) cos nπct
L − dn(nπc
L ) sin nπct
L .
• The constants cn and dn are determined by the initial conditions:
g(x) = u(x, 0) =
a0(0)
2
+
∞
n=1
an(0) cos
nπx
L
=
d0
2
+
∞
n=1
dn cos
nπx
L
,
h(x) = ut(x, 0) =
a0(0)
2
+
∞
n=1
an(0) cos
nπx
L
=
c0
2
+
∞
n=1
cn
nπc
L
cos
nπx
L
.
By orthogonality, we may multiply both equations by cos(mπx/L), including m = 0,
and integrate:
L
0
g(x) dx = d0
L
2
,
L
0
g(x) cos
mπx
L
dx = dm
L
2
,
L
0
h(x) dx = c0
L
2
,
L
0
h(x) cos
mπx
L
dx = cm
mπc
L
L
2
.
Thus,
dn =
2
L
L
0
g(x) cos
nπx
L
dx, cn =
2
nπc
L
0
h(x) cos
nπx
L
dx, c0 =
2
L
L
0
h(x) dx.
(16.15)
The formulas (16.13), (16.14), and (16.15) define the solution.
Partial Differential Equations Igor Yanovsky, 2005 146
Example (McOwen 3.1 #3). Consider the initial/boundary value problem
⎧
⎪⎨
⎪⎩
utt − uxx = 0 0 < x < π, t > 0
u(x, 0) = x, ut(x, 0) = 0 0 < x < π
ux(0, t) = 0, ux(π, t) = 0 t ≥ 0.
(16.16)
Proof. Find u(x, t) in the form
u(x, t) =
a0(t)
2
+
∞
n=1
an(t) cos nx + bn(t) sinnx.
• Functions an(t) and bn(t) are determined by the boundary conditions:
ux(x, t) =
∞
n=1
−an(t)n sinnx + bn(t)n cos nx,
0 = ux(0, t) =
∞
n=1
bn(t)n ⇒ bn(t) = 0. Thus,
u(x, t) =
a0(t)
2
+
∞
n=1
an(t) cos nx. (16.17)
• If we substitute (16.17) into the equation utt − uxx = 0, we get
a0(t)
2
+
∞
n=1
an(t) cos nx +
∞
n=1
an(t)n2
cos nx = 0,
a0(t) = 0 and an(t) + n2
an(t) = 0,
whose general solutions are
a0(t) = c0t + d0 and an(t) = cn sinnt + dn cos nt. (16.18)
Also, a0(t) = c0 and an(t) = cnn cos nt − dnn sinnt.
• The constants cn and dn are determined by the initial conditions:
x = u(x, 0) =
a0(0)
2
+
∞
n=1
an(0) cos nx =
d0
2
+
∞
n=1
dn cos nx,
0 = ut(x, 0) =
a0(0)
2
+
∞
n=1
an(0) cos nx =
c0
2
+
∞
n=1
cnn cos nx.
By orthogonality, we may multiply both equations by cos mx, including m = 0, and
integrate:
π
0
x dx = d0
π
2
,
π
0
x cos mx dx = dm
π
2
,
π
0
0 dx = c0
π
2
,
π
0
0 cos mx dx = cmm
π
2
.
Thus,
d0 = π, dn =
2
πn2
(cos nπ − 1), cn = 0. (16.19)
Using this in (16.18) and (16.17), we get
a0(t) = d0 = π, an(t) =
2
πn2
(cos nπ − 1) cosnt,
Partial Differential Equations Igor Yanovsky, 2005 147
u(x, t) =
π
2
+
2
π
∞
n=1
(cos nπ − 1) cos nt cos nx
n2
.
Partial Differential Equations Igor Yanovsky, 2005 148
We can sum the series in regions bouded by characteristics. We have
u(x, t) =
π
2
+
2
π
∞
n=1
(cos nπ − 1) cosnt cos nx
n2
, or
u(x, t) =
π
2
+
1
π
∞
n=1
(cos nπ − 1) cos[n(x − t)]
n2
+
1
π
∞
n=1
(cos nπ − 1) cos[n(x + t)]
n2
. (16.20)
The initial condition may be written as
u(x, 0) = x =
π
2
+
2
π
∞
n=1
(cos nπ − 1) cosnx
n2
for 0 < x < π,
which implies
x
2
−
π
4
=
1
π
∞
n=1
(cos nπ − 1) cosnx
n2
for 0 < x < π, (16.21)
We can use (16.21) to sum the series in (16.20).
In R1, u(x, t) =
π
2
+
x − t
2
−
π
4
+
x + t
2
−
π
4
= x.
Since cos[n(x − t)] = cos[n(−(x − t))], and 0 < −(x − t) < π in R2,
in R2, u(x, t) =
π
2
+
−(x − t)
2
−
π
4
+
x + t
2
−
π
4
= t.
Since cos[n(x+t)] = cos[n(x+t−2π)] = cos[n(2π−(x+t))], and 0 < 2π−(x+t) < π
in R3,
in R3, u(x, t) =
π
2
+
x − t
2
−
π
4
+
2π − (x + t)
2
−
π
4
= π − t.
Since 0 < −(x − t) < π and 0 < 2π − (x + t) < π in R4
in R4, u(x, t) =
π
2
+
−(x − t)
2
−
π
4
+
2π − (x + t)
2
−
π
4
= π − x.
Partial Differential Equations Igor Yanovsky, 2005 149
Example (McOwen 3.1 #4). Consider the initial boundary value problem
⎧
⎪⎨
⎪⎩
utt − c2uxx = 0 for x > 0, t > 0
u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0
u(0, t) = 0 for t ≥ 0,
(16.22)
where g(0) = 0 = h(0). If we extend g and h as odd functions on −∞ < x < ∞, show
that d’Alembert’s formula gives the solution.
Proof. Extend g and h as odd functions on −∞ < x < ∞:
˜g(x) =
g(x), x ≥ 0
−g(−x), x < 0
˜h(x) =
h(x), x ≥ 0
−h(−x), x < 0.
Then, we need to solve
˜utt − c2˜uxx = 0 for − ∞ < x < ∞, t > 0
˜u(x, 0) = ˜g(x), ˜ut(x, 0) = ˜h(x) for − ∞ < x < ∞.
(16.23)
To show that d’Alembert’s formula gives the solution to (16.23), we need to show that
the solution given by d’Alembert’s formula satisfies the boundary condition ˜u(0, t) = 0.
˜u(x, t) =
1
2
(˜g(x + ct) + ˜g(x − ct)) +
1
2c
x+ct
x−ct
˜h(ξ) dξ,
˜u(0, t) =
1
2
(˜g(ct) + ˜g(−ct)) +
1
2c
ct
−ct
˜h(ξ) dξ
=
1
2
(˜g(ct) − ˜g(ct)) +
1
2c
(H(ct) − H(−ct))
= 0 +
1
2c
(H(ct) − H(ct)) = 0,
where we used H(x) =
x
0
˜h(ξ) dξ; and since ˜h is odd, then H is even.
Example (McOwen 3.1 #5). Find in closed form (similar to d’Alembet’s formula)
the solution u(x, t) of
⎧
⎪⎨
⎪⎩
utt − c2
uxx = 0 for x, t > 0
u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0
u(0, t) = α(t) for t ≥ 0,
(16.24)
where g, h, α ∈ C2 satisfy α(0) = g(0), α (0) = h(0), and α (0) = c2g (0). Verify that
u ∈ C2
, even on the characteristic x = ct.
Proof. As in (McOwen 3.1 #4), we can extend g and h to be odd functions. We want
to transform the problem to have zero boundary conditions.
Consider the function:
U(x, t) = u(x, t) − α(t). (16.25)
Partial Differential Equations Igor Yanovsky, 2005 150
Then (16.24) transforms to:
⎧
⎪⎪⎪⎪⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎪⎪⎪⎪⎩
Utt − c2Uxx = −α (t)
fU (x,t)
U(x, 0) = g(x) − α(0)
gU (x)
, Ut(x, 0) = h(x) − α (0)
hU (x)
U(0, t) = 0
αu(t)
.
We use d’Alembert’s formula and Duhamel’s principle on U.
After getting U, we can get u from u(x, t) = U(x, t) + α(t).
Partial Differential Equations Igor Yanovsky, 2005 151
Example (Zachmanoglou, Chapter 8, Example 7.2). Find the solution of
⎧
⎪⎨
⎪⎩
utt − c2uxx = 0 for x > 0, t > 0
u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0
ux(0, t) = 0 for t > 0.
(16.26)
Proof. Extend g and h as even functions on −∞ < x < ∞:
˜g(x) =
g(x), x ≥ 0
g(−x), x < 0
˜h(x) =
h(x), x ≥ 0
h(−x), x < 0.
Then, we need to solve
˜utt − c2˜uxx = 0 for − ∞ < x < ∞, t > 0
˜u(x, 0) = ˜g(x), ˜ut(x, 0) = ˜h(x) for − ∞ < x < ∞.
(16.27)
To show that d’Alembert’s formula gives the solution to (16.27), we need to show that
the solution given by d’Alembert’s formula satisfies the boundary condition ˜ux(0, t) = 0.
˜u(x, t) =
1
2
(˜g(x + ct) + ˜g(x − ct)) +
1
2c
x+ct
x−ct
˜h(ξ) dξ.
˜ux(x, t) =
1
2
(˜g (x + ct) + ˜g (x − ct)) +
1
2c
[˜h(x + ct) − ˜h(x − ct)],
˜ux(0, t) =
1
2
(˜g (ct) + ˜g (−ct)) +
1
2c
[˜h(ct) − ˜h(−ct)] = 0.
Since ˜g is even, then g is odd.
Problem (F’89, #3). 36
Let α = c, constant. Find the solution of
⎧
⎪⎨
⎪⎩
utt − c2
uxx = 0 for x > 0, t > 0
u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0
ut(0, t) = αux(0, t) for t > 0,
(16.28)
where g, h ∈ C2 for x > 0 and vanish near x = 0.
Hint: Use the fact that a general solution of (16.28) can be written as the sum of two
traveling wave solutions.
Proof. D’Alembert’s formula is derived by plugging in the following into the above
equation and initial conditions:
u(x, t) = F(x + ct) + G(x − ct).
As in (Zachmanoglou 7.2), we can extend g and h to be even functions.
36
Similar to McOwen 3.1 #5. The notation in this problem is changed to be consistent with McOwen.
Partial Differential Equations Igor Yanovsky, 2005 152
Example (McOwen 3.1 #6). Solve the initial/boundary value problem
⎧
⎪⎨
⎪⎩
utt − uxx = 1 for 0 < x < π and t > 0
u(x, 0) = 0, ut(x, 0) = 0 for 0 < x < π
u(0, t) = 0, u(π, t) = −π2/2 for t ≥ 0.
(16.29)
Proof. If we first find a particular solution of the nonhomogeneous equation, this re-
duces the problem to a boundary value problem for the homogeneous equation ( as in
(McOwen 3.1 #2) and (McOwen 3.1 #3) ).
Hint: You should use a particular solution depending on x!
❶ First, find a particular solution. This is similar to the method of separation of
variables. Assume
up(x, t) = X(x),
which gives
−X (x) = 1,
X (x) = −1.
The solution to the above ODE is
X(x) = −
x2
2
+ ax + b.
The boundary conditions give
up(0, t) = b = 0,
up(π, t) = −
π2
2
+ aπ + b = −
π2
2
, ⇒ a = b = 0.
Thus, the particular solution is
up(x, t) = −
x2
2
.
This solution satisfies the following:
⎧
⎪⎨
⎪⎩
uptt − upxx = 1
up(x, 0) = −x2
2 , upt(x, 0) = 0
up(0, t) = 0, up(π, t) = −π2
2 .
❷ Second, we find a solution to a boundary value problem for the homogeneous equa-
tion:
⎧
⎪⎨
⎪⎩
utt − uxx = 0
u(x, 0) = x2
2 , ut(x, 0) = 0
u(0, t) = 0, u(π, t) = 0.
This is solved by the method of Separation of Variables. See Separation of Variables
subsection of “Problems: Separation of Variables: Wave Equation” McOwen 3.1 #2.
The only difference there is that u(x, 0) = 1.
We would find uh(x, t). Then,
u(x, t) = uh(x, t) + up(x, t).
Partial Differential Equations Igor Yanovsky, 2005 153
Problem (S’02, #2). a) Given a continuous function f on R which vanishes for
|x| > R, solve the initial value problem
utt − uxx = f(x) cos t,
u(x, 0) = 0, ut(x, 0) = 0, −∞ < x < ∞, 0 ≤ t < ∞
by first finding a particular solution by separation of variables and then adding the
appropriate solution of the homogeneous PDE.
b) Since the particular solution is not unique, it will not be obvious that the solution
to the initial value problem that you have found in part (a) is unique. Prove that it is
unique.
Proof. a) ❶ First, find a particular solution by separation of variables. Assume
up(x, t) = X(x) cost,
which gives
−X(x) cost − X (x) cost = f(x) cos t,
X + X = −f(x).
The solution to the above ODE is written as X = Xh +Xp. The homogeneous solution
is
Xh(x) = a cos x + b sinx.
To find a particular solution, note that since f is continuous, ∃G ∈ C2
(R), such that
G + G = −f(x).
Thus,
Xp(x) = G(x).
⇒ X(x) = Xh(x) + Xp(x) = a cos x + b sinx + G(x).
up(x, t) = a cos x + b sinx + G(x) cos t.
It can be verified that this solution satisfies the following:
uptt − upxx = f(x) cost,
up(x, 0) = a cos x + b sinx + G(x), upt(x, 0) = 0.
❷ Second, we find a solution of the homogeneous PDE:
⎧
⎪⎨
⎪⎩
utt − uxx = 0,
u(x, 0) = −a cos x − b sinx − G(x)
g(x)
, ut(x, 0) = 0
h(x)
.
The solution is given by d’Alembert’s formula (with c = 1):
uh(x, t) = uA
(x, t) =
1
2
(g(x + t) + g(x − t)) +
1
2
x+t
x−t
h(ξ) dξ
=
1
2
− a cos(x + t) − b sin(x + t) − G(x + t) + − a cos(x − t) − b sin(x − t) − G(x − t)
= −
1
2
a cos(x + t) + b sin(x + t) + G(x + t) −
1
2
a cos(x − t) + b sin(x − t) + G(x − t) .
Partial Differential Equations Igor Yanovsky, 2005 154
It can be verified that the solution satisfies the above homogeneous PDE with the
boundary conditions. Thus, the complete solution is:
u(x, t) = uh(x, t) + up(x, t).
Alternatively, we could use Duhamel’s principle to find the solution: 37
u(x, t) =
1
2
t
0
x+(t−s)
x−(t−s)
f(ξ) cos s dξ ds.
However, this is not how it was suggested to do this problem.
b) The particular solution is not unique, since any constants a, b give the solution.
However, we show that the solution to the initial value problem is unique.
Suppose u1 and u2 are two solutions. Then w = u1 − u2 satisfies:
wtt − wxx = 0,
w(x, 0) = 0, wt(x, 0) = 0.
D’Alembert’s formula gives
w(x, t) =
1
2
(g(x + t) + g(x − t)) +
1
2
x+t
x−t
h(ξ) dξ = 0.
Thus, the solution to the initial value problem is unique.
37
Note the relationship: x ↔ ξ, t ↔ s.
Partial Differential Equations Igor Yanovsky, 2005 155
16.3 Similarity Solutions
Problem (F’98, #7). Look for a similarity solution of the form
v(x, t) = tα
w(y = x/tβ
) for the differential equation
vt = vxx + (v2
)x. (16.30)
a) Find the parameters α and β.
b) Find a differential equation for w(y) and show that this ODE can be reduced to first
order.
c) Find a solution for the resulting first order ODE.
Proof. We can rewrite (16.30) as
vt = vxx + 2vvx. (16.31)
We look for a similarity solution of the form
v(x, t) = tα
w(y), y =
x
tβ
.
vt = αtα−1
w + tα
w yt = αtα−1
w + tα
−
βx
tβ+1
w = αtα−1
w − tα−1
βyw ,
vx = tα
w yx = tα
w t−β
= tα−β
w ,
vxx = (tα−β
w )x = tα−β
w yx = tα−β
w t−β
= tα−2β
w .
Plugging in the derivatives we calculated into (16.31), we obtain
αtα−1
w − tα−1
βyw = tα−2β
w + 2(tα
w)(tα−β
w ),
αw − βyw = t1−2β
w + 2tα−β+1
ww .
The parameters that would eliminate t from equation above are
β =
1
2
, α = −
1
2
.
With these parameters, we obtain the differential equation for w(y):
−
1
2
w −
1
2
yw = w + 2ww ,
w + 2ww +
1
2
yw +
1
2
w = 0.
We can write the ODE as
w + 2ww +
1
2
(yw) = 0.
Integrating it with respect to y, we obtain the first order ODE:
w + w2
+
1
2
yw = c.
Partial Differential Equations Igor Yanovsky, 2005 156
16.4 Traveling Wave Solutions
Consider the Korteweg-de Vries (KdV) equation in the form 38
ut + 6uux + uxxx = 0, −∞ < x < ∞, t > 0. (16.32)
We look for a traveling wave solution
u(x, t) = f(x − ct). (16.33)
We get the ODE
−cf + 6ff + f = 0. (16.34)
We integrate (16.34) to get
−cf + 3f2
+ f = a, (16.35)
where a is a constant. Multiplying this equality by f , we obtain
−cff + 3f2
f + f f = af .
Integrating again, we get
−
c
2
f2
+ f3
+
(f )2
2
= af + b. (16.36)
We are looking for solutions f which satisfy f(x), f (x), f (x) → 0 as x → ±∞. (In
which case the function u having the form (16.33) is called a solitary wave.) Then
(16.35) and (16.36) imply a = b = 0, so that
−
c
2
f2
+ f3
+
(f )2
2
= 0, or f = ±f c − 2f.
The solution of this ODE is
f(x) =
c
2
sech2
[
√
c
2
(x − x0)],
where x0 is the constant of integration. A solution of this form is called a soliton.
38
Evans, p. 174; Strauss, p. 367.
Partial Differential Equations Igor Yanovsky, 2005 157
Problem (S’93, #6). The generalized KdV equation is
∂u
∂t
=
1
2
(n + 1)(n + 2)un ∂u
∂x
−
∂3u
∂x3
,
where n is a positive integer. Solitary wave solutions are sought in which u = f(η),
where η = x − ct and
f, f , f → 0, as |η| → ∞;
c, the wave speed, is constant.
Show that
f 2
= fn+2
+ cf2
.
Hence show that solitary waves do not exist if n is even.
Show also that, when n = 1, all conditions of the problem are satisfied provided c > 0
and
u = −c sech2 1
2
√
c(x − ct) .
Proof. • We look for a traveling wave solution
u(x, t) = f(x − ct).
We get the ODE
−cf =
1
2
(n + 1)(n + 2)fn
f − f ,
Integrating this equation, we get
−cf =
1
2
(n + 2)fn+1
− f + a, (16.37)
where a is a constant. Multiplying this equality by f , we obtain
−cff =
1
2
(n + 2)fn+1
f − f f + af .
Integrating again, we get
−
cf2
2
=
1
2
fn+2
−
(f )2
2
+ af + b. (16.38)
We are looking for solutions f which satisfy f, f , f → 0 as x → ±∞. Then (16.37)
and (16.38) imply a = b = 0, so that
−
cf2
2
=
1
2
fn+2
−
(f )2
2
,
(f )2
= fn+2
+ cf2
.
• We show that solitary waves do not exist if n is even. We have
f = ± fn+2 + cf2 = ±|f| fn + c,
∞
−∞
f dη = ±
∞
−∞
|f| fn + c dη,
f
∞
−∞
= ±
∞
−∞
|f| fn + c dη,
0 = ±
∞
−∞
|f| fn + c dη.
Partial Differential Equations Igor Yanovsky, 2005 158
Thus, either ➀ |f| ≡ 0 ⇒ f = 0, or
➁ fn + c = 0. Since f → 0 as x → ±∞, we have c = 0 ⇒ f = 0.
Thus, solitary waves do not exist if n is even.
Partial Differential Equations Igor Yanovsky, 2005 159
• When n = 1, we have
(f )2
= f3
+ cf2
. (16.39)
We show that all conditions of the problem are satisfied provided c > 0, including
u = −c sech2 1
2
√
c(x − ct) , or
f = −c sech2 η
√
c
2
= −
c
cosh2
[η
√
c
2 ]
= −c cosh
η
√
c
2
−2
.
We have
f = 2c cosh
η
√
c
2
−3
· sinh
η
√
c
2
·
√
c
2
= c
√
c cosh
η
√
c
2
−3
· sinh
η
√
c
2
,
(f )2
=
c3
sinh2 η
√
c
2
cosh6 η
√
c
2
,
f3
= −
c3
cosh6 η
√
c
2
,
cf2
=
c3
cosh4 η
√
c
2
.
Plugging these into (16.39), we obtain: 39
c3
sinh2 η
√
c
2
cosh6 η
√
c
2
= −
c3
cosh6 η
√
c
2
+
c3
cosh4 η
√
c
2
,
c3
sinh2 η
√
c
2
cosh6 η
√
c
2
=
−c3
+ c3
cosh2 η
√
c
2
cosh6 η
√
c
2
,
c3
sinh2 η
√
c
2
cosh6 η
√
c
2
=
c3
sinh2 η
√
c
2
cosh6 η
√
c
2
.
Also, f, f , f → 0, as |η| → ∞, since
f(η) = −c sech2 η
√
c
2
= −
c
cosh2
[η
√
c
2 ]
= −c
2
e[ η
√
c
2
]
+ e−[ η
√
c
2
]
2
→ 0, as |η| → ∞.
Similarly, f , f → 0, as |η| → ∞.
39
cosh2
x − sinh2
x = 1.
cosh x =
ex
+ e−x
2
, sinh x =
ex
− e−x
2
Partial Differential Equations Igor Yanovsky, 2005 160
Problem (S’00, #5). Look for a traveling wave solution of the PDE
utt + (u2
)xx = −uxxxx
of the form u(x, t) = v(x − ct). In particular, you should find an ODE for v. Under
the assumption that v goes to a constant as |x| → ∞, describe the form of the solution.
Proof. Since (u2
)x = 2uux, and (u2
)xx = 2u2
x + 2uuxx, we have
utt + 2u2
x + 2uuxx = −uxxxx.
We look for a traveling wave solution
u(x, t) = v(x − ct).
We get the ODE
c2
v + 2(v )2
+ 2vv = −v ,
c2
v + 2((v )2
+ vv ) = −v ,
c2
v + 2(vv ) = −v , (exact differentials)
c2
v + 2vv = −v + a, s = x − ct
c2
v + v2
= −v + as + b,
v + c2
v + v2
= a(x − ct) + b.
Since v → C = const as |x| → ∞, we have v , v → 0, as |x| → ∞. Thus, implies
c2
v + v2
= as + b.
Since |x| → ∞, but v → C, we have a = 0:
v2
+ c2
v − b = 0.
v =
−c2 ±
√
c4 + 4b
2
.
Partial Differential Equations Igor Yanovsky, 2005 161
Problem (S'95, #2). Consider the KdV-Burgers equation
\[ u_t + u u_x = \varepsilon u_{xx} + \delta u_{xxx} \]
in which $\varepsilon > 0$, $\delta > 0$.
a) Find an ODE for traveling wave solutions of the form
\[ u(x, t) = \varphi(x - st) \]
with $s > 0$ and $\lim_{y \to -\infty} \varphi(y) = 0$, and analyze the stationary points of this ODE.
b) Find the possible (finite) values of $\varphi_+ = \lim_{y \to \infty} \varphi(y)$.
Proof. a) We look for a traveling wave solution
\[ u(x, t) = \varphi(x - st), \qquad y = x - st. \]
We get the ODE
\[ -s\varphi' + \varphi\varphi' = \varepsilon\varphi'' + \delta\varphi''', \]
\[ -s\varphi + \tfrac{1}{2}\varphi^2 = \varepsilon\varphi' + \delta\varphi'' + a. \]
Since $\varphi \to 0$ as $y \to -\infty$, also $\varphi', \varphi'' \to 0$ as $y \to -\infty$. Therefore, evaluating at $y = -\infty$, $a = 0$.
We found the following ODE:
\[ \varphi'' + \frac{\varepsilon}{\delta}\varphi' + \frac{s}{\delta}\varphi - \frac{1}{2\delta}\varphi^2 = 0. \]
In order to find and analyze the stationary points of the ODE above, we write it as a first-order system:
\[ \phi_1 = \varphi, \quad \phi_2 = \varphi'; \qquad
\phi_1' = \varphi' = \phi_2, \qquad
\phi_2' = \varphi'' = -\frac{\varepsilon}{\delta}\phi_2 - \frac{s}{\delta}\phi_1 + \frac{1}{2\delta}\phi_1^2. \]
Setting $\phi_1' = \phi_2 = 0$ and $\phi_2' = 0$ gives
\[ \phi_2 = 0, \qquad -\frac{1}{\delta}\phi_1\Big(s - \tfrac{1}{2}\phi_1\Big) = 0. \]
Stationary points: $(0, 0)$ and $(2s, 0)$, $s > 0$.
Write
\[ \phi_1' = \phi_2 = f(\phi_1, \phi_2), \qquad
\phi_2' = -\frac{\varepsilon}{\delta}\phi_2 - \frac{s}{\delta}\phi_1 + \frac{1}{2\delta}\phi_1^2 = g(\phi_1, \phi_2). \]
In order to classify a stationary point, we need the eigenvalues of the linearized system at that point:
\[ J(f, g) = \begin{pmatrix} \partial f/\partial\phi_1 & \partial f/\partial\phi_2 \\ \partial g/\partial\phi_1 & \partial g/\partial\phi_2 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -\frac{s}{\delta} + \frac{1}{\delta}\phi_1 & -\frac{\varepsilon}{\delta} \end{pmatrix}. \]
• For $(\phi_1, \phi_2) = (0, 0)$:
\[ \det(J|_{(0,0)} - \lambda I) = \begin{vmatrix} -\lambda & 1 \\ -\frac{s}{\delta} & -\frac{\varepsilon}{\delta} - \lambda \end{vmatrix}
= \lambda^2 + \frac{\varepsilon}{\delta}\lambda + \frac{s}{\delta} = 0,
\qquad \lambda_{\pm} = -\frac{\varepsilon}{2\delta} \pm \sqrt{\frac{\varepsilon^2}{4\delta^2} - \frac{s}{\delta}}. \]
If $\frac{\varepsilon^2}{4\delta} > s$, then $\lambda_{\pm} \in \mathbb{R}$, $\lambda_{\pm} < 0$ ⇒ $(0, 0)$ is a Stable Improper Node.
If $\frac{\varepsilon^2}{4\delta} < s$, then $\lambda_{\pm} \in \mathbb{C}$, $\mathrm{Re}(\lambda_{\pm}) < 0$ ⇒ $(0, 0)$ is a Stable Spiral Point.
• For $(\phi_1, \phi_2) = (2s, 0)$:
\[ \det(J|_{(2s,0)} - \lambda I) = \begin{vmatrix} -\lambda & 1 \\ \frac{s}{\delta} & -\frac{\varepsilon}{\delta} - \lambda \end{vmatrix}
= \lambda^2 + \frac{\varepsilon}{\delta}\lambda - \frac{s}{\delta} = 0,
\qquad \lambda_{\pm} = -\frac{\varepsilon}{2\delta} \pm \sqrt{\frac{\varepsilon^2}{4\delta^2} + \frac{s}{\delta}}
\quad\Rightarrow\quad \lambda_+ > 0,\ \lambda_- < 0. \]
⇒ $(2s, 0)$ is an Unstable Saddle Point.
b) Since
\[ \lim_{y \to -\infty} \varphi(y) = 0 = \lim_{t \to \infty} \varphi(x - st), \]
we may have
\[ \varphi_+ = \lim_{y \to +\infty} \varphi(y) = \lim_{t \to -\infty} \varphi(x - st) = 2s. \]
That is, a trajectory may start off at the unstable saddle point $(2s, 0)$ and, as $t$ increases, approach the stable point $(0, 0)$.
A phase diagram, with $(0, 0)$ a stable spiral point, is shown below.
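The classification of the two stationary points can also be double-checked numerically. The following sketch is an addition; the values ε = 0.2, δ = 0.1, s = 1 are arbitrary illustrative choices.

```python
import numpy as np

# Jacobian of phi1' = phi2, phi2' = -(eps/delta)*phi2 - (s/delta)*phi1 + phi1**2/(2*delta)
def jacobian(phi1, eps, delta, s):
    return np.array([[0.0, 1.0],
                     [-s / delta + phi1 / delta, -eps / delta]])

eps, delta, s = 0.2, 0.1, 1.0                # illustrative values only
for phi1_star in (0.0, 2.0 * s):
    lam = np.linalg.eigvals(jacobian(phi1_star, eps, delta, s))
    print(f"stationary point ({phi1_star}, 0): eigenvalues {lam}")
# Expected: both eigenvalues at (0,0) have negative real part (stable node/spiral),
# while (2s,0) has one positive and one negative real eigenvalue (saddle).
```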
Problem (F'95, #8). Consider the equation
\[ u_t + f(u)_x = \varepsilon u_{xx} \]
where $f$ is smooth and $\varepsilon > 0$. We seek traveling wave solutions to this equation, i.e., solutions of the form $u = \phi(x - st)$, under the boundary conditions
\[ u \to u_L \ \text{and}\ u_x \to 0 \ \text{as}\ x \to -\infty, \qquad
u \to u_R \ \text{and}\ u_x \to 0 \ \text{as}\ x \to +\infty. \]
Find a necessary and sufficient condition on $f$, $u_L$, $u_R$ and $s$ for such traveling waves to exist; in case this condition holds, write an equation which defines $\phi$ implicitly.
Proof. We look for traveling wave solutions
\[ u(x, t) = \phi(x - st), \qquad y = x - st. \]
The boundary conditions become
\[ \phi \to u_L \ \text{and}\ \phi' \to 0 \ \text{as}\ x \to -\infty, \qquad
\phi \to u_R \ \text{and}\ \phi' \to 0 \ \text{as}\ x \to +\infty. \]
Since $f(\phi(x - st))_x = f'(\phi)\phi'$, we get the ODE
\[ -s\phi' + f'(\phi)\phi' = \varepsilon\phi'', \]
\[ -s\phi' + (f(\phi))' = \varepsilon\phi'', \]
\[ -s\phi + f(\phi) = \varepsilon\phi' + a, \]
\[ \phi' = \frac{-s\phi + f(\phi)}{\varepsilon} + b. \]
We use the boundary conditions to determine the constant $b$:
At $x = -\infty$: $\ 0 = \phi' = \dfrac{-su_L + f(u_L)}{\varepsilon} + b \ \Rightarrow\ b = \dfrac{su_L - f(u_L)}{\varepsilon}$.
At $x = +\infty$: $\ 0 = \phi' = \dfrac{-su_R + f(u_R)}{\varepsilon} + b \ \Rightarrow\ b = \dfrac{su_R - f(u_R)}{\varepsilon}$.
Equating the two expressions for $b$ gives the necessary condition
\[ s = \frac{f(u_L) - f(u_R)}{u_L - u_R}. \] ⁴⁰
⁴⁰ For the solution of the second part of the problem, refer to Chiu-Yen's solutions.
Problem (S'02, #5; F'90, #2). Fisher's Equation. Consider
\[ u_t = u(1 - u) + u_{xx}, \qquad -\infty < x < \infty, \ t > 0. \]
The solutions of physical interest satisfy $0 \le u \le 1$, and
\[ \lim_{x \to -\infty} u(x, t) = 0, \qquad \lim_{x \to +\infty} u(x, t) = 1. \]
One class of solutions is the set of "wavefront" solutions. These have the form $u(x, t) = \phi(x + ct)$, $c \ge 0$.
Determine the ordinary differential equation and boundary conditions which $\phi$ must satisfy (to be of physical interest). Carry out a phase plane analysis of this equation, and show that physically interesting wavefront solutions are possible if $c \ge 2$, but not if $0 \le c < 2$.
Proof. We look for a traveling wave solution
\[ u(x, t) = \phi(x + ct), \qquad s = x + ct. \]
We get the ODE
\[ c\phi' = \phi(1 - \phi) + \phi'', \qquad\text{i.e.}\qquad \phi'' - c\phi' + \phi - \phi^2 = 0, \]
◦ $\phi(s) \to 0$ as $s \to -\infty$,
◦ $\phi(s) \to 1$ as $s \to +\infty$,
◦ $0 \le \phi \le 1$.
In order to find and analyze the stationary points of the ODE above, we write it as a first-order system:
\[ y_1 = \phi, \quad y_2 = \phi'; \qquad
y_1' = \phi' = y_2, \qquad
y_2' = \phi'' = c\phi' - \phi + \phi^2 = c y_2 - y_1 + y_1^2. \]
Setting $y_1' = y_2 = 0$ and $y_2' = c y_2 - y_1 + y_1^2 = 0$ gives $y_2 = 0$, $y_1(y_1 - 1) = 0$.
Stationary points: $(0, 0)$, $(1, 0)$.
Write
\[ y_1' = y_2 = f(y_1, y_2), \qquad y_2' = c y_2 - y_1 + y_1^2 = g(y_1, y_2). \]
In order to classify a stationary point, we need the eigenvalues of the linearized system at that point:
\[ J(f, g) = \begin{pmatrix} \partial f/\partial y_1 & \partial f/\partial y_2 \\ \partial g/\partial y_1 & \partial g/\partial y_2 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ 2y_1 - 1 & c \end{pmatrix}. \]
• For $(y_1, y_2) = (0, 0)$:
\[ \det(J|_{(0,0)} - \lambda I) = \begin{vmatrix} -\lambda & 1 \\ -1 & c - \lambda \end{vmatrix} = \lambda^2 - c\lambda + 1 = 0,
\qquad \lambda_{\pm} = \frac{c \pm \sqrt{c^2 - 4}}{2}. \]
If $c \ge 2$: $\lambda_{\pm} \in \mathbb{R}$, $\lambda_{\pm} > 0$ ⇒ $(0, 0)$ is an Unstable Improper ($c > 2$) / Proper ($c = 2$) Node.
If $0 \le c < 2$: $\lambda_{\pm} \in \mathbb{C}$, $\mathrm{Re}(\lambda_{\pm}) \ge 0$ ⇒ $(0, 0)$ is an Unstable Spiral Point. In this case orbits near $(0, 0)$ oscillate around the origin, so $\phi$ takes negative values as $s \to -\infty$, violating $0 \le \phi \le 1$; hence no physically interesting wavefront exists for $0 \le c < 2$.
• For $(y_1, y_2) = (1, 0)$:
\[ \det(J|_{(1,0)} - \lambda I) = \begin{vmatrix} -\lambda & 1 \\ 1 & c - \lambda \end{vmatrix} = \lambda^2 - c\lambda - 1 = 0,
\qquad \lambda_{\pm} = \frac{c \pm \sqrt{c^2 + 4}}{2}. \]
If $c \ge 0$: $\lambda_+ > 0$, $\lambda_- < 0$ ⇒ $(1, 0)$ is an Unstable Saddle Point.
Looking at the phase plot: for $c \ge 2$ a trajectory may leave the unstable node $(0, 0)$ (as $s \to -\infty$) and, as $s$ increases, approach the saddle point $(1, 0)$ along its stable manifold, remaining in $0 \le \phi \le 1$; this is the physically interesting wavefront.
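The dichotomy c ≥ 2 versus 0 ≤ c < 2 can also be observed numerically by tracing, backwards in s, the orbit that enters the saddle (1, 0) along its stable eigendirection. The script below is an added illustration (it assumes scipy is available; the perturbation size and integration length are ad hoc choices).

```python
import numpy as np
from scipy.integrate import solve_ivp

def backward_rhs(s, y, c):
    # Phase-plane system y1' = y2, y2' = c*y2 - y1 + y1**2, integrated backwards in s.
    y1, y2 = y
    return [-y2, -(c * y2 - y1 + y1**2)]

def min_phi(c, delta=1e-6, s_len=200.0):
    lam_minus = (c - np.sqrt(c**2 + 4)) / 2          # stable eigenvalue of the saddle (1, 0)
    y0 = [1.0 - delta, -delta * lam_minus]           # step off (1, 0) along (1, lam_minus)
    sol = solve_ivp(backward_rhs, (0.0, s_len), y0, args=(c,), rtol=1e-9, atol=1e-12)
    return sol.y[0].min()

for c in (1.0, 2.0, 3.0):
    print(f"c = {c}: min of phi along the connecting orbit = {min_phi(c):.4f}")
# For c >= 2 the minimum stays (essentially) nonnegative; for c < 2 the orbit spirals
# around the origin and phi becomes negative, so the wavefront is not physically admissible.
```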
Problem (F'99, #6). For the system
\[ \partial_t\rho + \partial_x u = 0, \qquad \partial_t u + \partial_x(\rho u) = \partial_x^2 u, \]
look for traveling wave solutions of the form $\rho(x, t) = \rho(y = x - st)$, $u(x, t) = u(y = x - st)$. In particular:
a) Find a first order ODE for $u$.
b) Show that this equation has solutions of the form
\[ u(y) = u_0 + u_1 \tanh(\alpha y + y_0), \]
for some constants $u_0$, $u_1$, $\alpha$, $y_0$.
Proof. a) We rewrite the system:
\[ \rho_t + u_x = 0, \qquad u_t + \rho_x u + \rho u_x = u_{xx}. \]
We look for traveling wave solutions
\[ \rho(x, t) = \rho(x - st), \quad u(x, t) = u(x - st), \quad y = x - st. \]
We get the system of ODEs
\[ -s\rho' + u' = 0, \qquad -su' + \rho' u + \rho u' = u''. \]
The first ODE gives
\[ \rho' = \frac{1}{s}u', \qquad \rho = \frac{1}{s}u + a, \]
where $a$ is a constant, and the integration was done with respect to $y$. The second ODE then gives
\[ -su' + \frac{1}{s}u'u + \Big(\frac{1}{s}u + a\Big)u' = u'', \]
\[ -su' + \frac{2}{s}uu' + au' = u''. \quad\text{Integrating, we get} \]
\[ -su + \frac{1}{s}u^2 + au = u' + b, \]
\[ u' = \frac{1}{s}u^2 + (a - s)u - b. \]
b) Note that the ODE above may be written in the form
\[ u' + Au^2 + Bu = C, \]
a nonlinear (Riccati-type) first-order equation with constant coefficients; one way to extract the tanh solution is sketched below.
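One way to finish part (b) (a sketch, assuming the quadratic on the right-hand side has two distinct real roots $r_1 < r_2$): write the ODE as
\[ u' = \frac{1}{s}(u - r_1)(u - r_2). \]
Try $u(y) = u_0 + u_1 \tanh(\alpha y + y_0)$ with $u_0 = \tfrac{r_1 + r_2}{2}$ and $u_1 = \tfrac{r_2 - r_1}{2}$. Then $u - r_1 = u_1(1 + \tanh)$ and $u - r_2 = u_1(\tanh - 1)$, so
\[ u' = u_1\alpha\,\big(1 - \tanh^2(\alpha y + y_0)\big), \qquad
\frac{1}{s}(u - r_1)(u - r_2) = \frac{u_1^2}{s}\big(\tanh^2(\alpha y + y_0) - 1\big), \]
and the two sides agree provided $\alpha = -u_1/s$. This exhibits the required constants $u_0$, $u_1$, $\alpha$, while $y_0$ is a free translation.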
Problem (S'01, #7). Consider the following system of PDEs:
\[ f_t + f_x = g^2 - f^2, \qquad g_t - g_x = f^2 - g. \]
a) Find a system of ODEs that describes traveling wave solutions of the PDE system, i.e. for solutions of the form $f(x, t) = f(x - st)$ and $g(x, t) = g(x - st)$.
b) Analyze the stationary points and draw the phase plane for this ODE system in the standing wave case $s = 0$.
Proof. a) We look for traveling wave solutions
\[ f(x, t) = f(x - st), \qquad g(x, t) = g(x - st). \]
We get the system of ODEs
\[ -sf' + f' = g^2 - f^2, \qquad -sg' - g' = f^2 - g. \]
Thus,
\[ f' = \frac{g^2 - f^2}{1 - s}, \qquad g' = \frac{f^2 - g}{-1 - s}. \]
b) If $s = 0$, the system becomes
\[ f' = g^2 - f^2, \qquad g' = g - f^2. \]
Relabel the variables $f \to y_1$, $g \to y_2$:
\[ y_1' = y_2^2 - y_1^2, \qquad y_2' = y_2 - y_1^2. \]
Setting $y_1' = y_2' = 0$ gives the stationary points $(0, 0)$, $(-1, 1)$, $(1, 1)$.
Write
\[ y_1' = y_2^2 - y_1^2 = \phi(y_1, y_2), \qquad y_2' = y_2 - y_1^2 = \psi(y_1, y_2). \]
In order to classify a stationary point, we need the eigenvalues of the linearized system at that point:
\[ J(\phi, \psi) = \begin{pmatrix} \partial\phi/\partial y_1 & \partial\phi/\partial y_2 \\ \partial\psi/\partial y_1 & \partial\psi/\partial y_2 \end{pmatrix}
= \begin{pmatrix} -2y_1 & 2y_2 \\ -2y_1 & 1 \end{pmatrix}. \]
• For $(y_1, y_2) = (0, 0)$:
\[ \det(J|_{(0,0)} - \lambda I) = \begin{vmatrix} -\lambda & 0 \\ 0 & 1 - \lambda \end{vmatrix} = -\lambda(1 - \lambda) = 0, \]
\[ \lambda_1 = 0, \ \lambda_2 = 1; \qquad \text{eigenvectors } v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \ v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
$(0, 0)$ is an Unstable (degenerate, since $\lambda_1 = 0$) Node.
• For $(y_1, y_2) = (-1, 1)$:
\[ \det(J|_{(-1,1)} - \lambda I) = \begin{vmatrix} 2 - \lambda & 2 \\ 2 & 1 - \lambda \end{vmatrix} = \lambda^2 - 3\lambda - 2 = 0,
\qquad \lambda_{\pm} = \frac{3}{2} \pm \frac{\sqrt{17}}{2}. \]
$\lambda_- < 0 < \lambda_+$, so $(-1, 1)$ is an Unstable Saddle Point.
• For $(y_1, y_2) = (1, 1)$:
\[ \det(J|_{(1,1)} - \lambda I) = \begin{vmatrix} -2 - \lambda & 2 \\ -2 & 1 - \lambda \end{vmatrix} = \lambda^2 + \lambda + 2 = 0,
\qquad \lambda_{\pm} = -\frac{1}{2} \pm i\frac{\sqrt{7}}{2}. \]
$\mathrm{Re}(\lambda_{\pm}) < 0$, so $(1, 1)$ is a Stable Spiral Point.
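The eigenvalue computations above are easy to confirm with a computer algebra system; the following added sketch rebuilds the Jacobian symbolically.

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
phi = y2**2 - y1**2        # right-hand side for y1'
psi = y2 - y1**2           # right-hand side for y2'
J = sp.Matrix([phi, psi]).jacobian(sp.Matrix([y1, y2]))

for point in [(0, 0), (-1, 1), (1, 1)]:
    Jp = J.subs({y1: point[0], y2: point[1]})
    print(point, Jp.eigenvals())
# Expected: (0,0) -> {0, 1}; (-1,1) -> (3 +/- sqrt(17))/2; (1,1) -> -1/2 +/- i*sqrt(7)/2.
```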
16.5 Dispersion
Problem (S'97, #8). Consider the following equation
\[ u_t = (f(u_x))_x - \alpha u_{xxxx}, \qquad f(v) = v^2 - v, \quad (16.40) \]
with constant $\alpha$.
a) Linearize this equation around $u = 0$ and find the principal mode solution of the form $e^{\omega t + ikx}$. For which values of $\alpha$ are there unstable modes, i.e., modes with $\omega > 0$ for real $k$? For these values, find the maximally unstable mode, i.e., the value of $k$ with the largest positive value of $\omega$.
b) Consider the steady solution of the (fully nonlinear) problem. Show that the resulting equation can be written as a second order autonomous ODE for $v = u_x$ and draw the corresponding phase plane.
Proof. a) We have
\[ u_t = (f(u_x))_x - \alpha u_{xxxx} = (u_x^2 - u_x)_x - \alpha u_{xxxx} = 2u_xu_{xx} - u_{xx} - \alpha u_{xxxx}. \]
However, we need to linearize (16.40) around $u = 0$. To do this, we linearize $f$:
\[ f(u) = f(0) + uf'(0) + \frac{u^2}{2}f''(0) + \cdots = 0 + u(0 - 1) + \cdots = -u + \cdots. \]
Thus, the linearized equation is
\[ u_t = -u_{xx} - \alpha u_{xxxx}. \]
Consider $u(x, t) = e^{\omega t + ikx}$:
\[ \omega e^{\omega t + ikx} = (k^2 - \alpha k^4)\,e^{\omega t + ikx}, \qquad \omega = k^2 - \alpha k^4. \]
Unstable modes ($\omega > 0$) occur where $k^2 - \alpha k^4 > 0$; setting $\omega = 0$ gives the neutral wavenumber $\alpha = 1/k^2$, so for $\alpha > 0$ the band $0 < |k| < 1/\sqrt{\alpha}$ is unstable (and for $\alpha \le 0$ every $k \ne 0$ is unstable).
• To find the maximally unstable mode, i.e., the value of $k$ with the largest positive value of $\omega$, consider
\[ \omega(k) = k^2 - \alpha k^4, \qquad \omega'(k) = 2k - 4\alpha k^3. \]
Setting $\omega' = 0$, the extrema are at
\[ k_1 = 0, \qquad k_{2,3} = \pm\frac{1}{\sqrt{2\alpha}}. \]
To determine whether the extrema are maxima or minima, we examine $\omega''$:
\[ \omega''(k) = 2 - 12\alpha k^2, \]
\[ \omega''(0) = 2 > 0 \ \Rightarrow\ k = 0 \text{ is a minimum}, \]
\[ \omega''\Big(\pm\frac{1}{\sqrt{2\alpha}}\Big) = -4 < 0 \ \Rightarrow\ k = \pm\frac{1}{\sqrt{2\alpha}} \text{ are the maximally unstable modes}, \]
\[ \omega\Big(\pm\frac{1}{\sqrt{2\alpha}}\Big) = \frac{1}{4\alpha}
\]
is the largest positive value of ω.
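A quick numerical check of the maximally unstable mode (an added sketch; α = 0.5 is an arbitrary illustrative value):

```python
import numpy as np

alpha = 0.5                                   # illustrative value, alpha > 0
k = np.linspace(0.0, 2.0 / np.sqrt(alpha), 400001)
omega = k**2 - alpha * k**4

print("argmax_k omega :", k[np.argmax(omega)], " vs 1/sqrt(2*alpha) =", 1 / np.sqrt(2 * alpha))
print("max omega      :", omega.max(),        " vs 1/(4*alpha)      =", 1 / (4 * alpha))
```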
b) For a steady solution $u_t = 0$; integrating $(f(u_x))_x - \alpha u_{xxxx} = 0$ once (and taking the constant of integration to be zero), we get
\[ u_x^2 - u_x - \alpha u_{xxx} = 0. \]
Let $v = u_x$. Then
\[ v^2 - v - \alpha v'' = 0, \quad\text{or}\quad v'' = \frac{v^2 - v}{\alpha}. \]
In order to find and analyze the stationary points of the ODE above, we write it as a first-order system:
\[ y_1 = v, \quad y_2 = v'; \qquad
y_1' = v' = y_2, \qquad
y_2' = v'' = \frac{v^2 - v}{\alpha} = \frac{y_1^2 - y_1}{\alpha}. \]
Setting $y_1' = y_2 = 0$ and $y_2' = (y_1^2 - y_1)/\alpha = 0$ gives $y_2 = 0$, $y_1(y_1 - 1) = 0$.
Stationary points: $(0, 0)$, $(1, 0)$.
Write
\[ y_1' = y_2 = f(y_1, y_2), \qquad y_2' = \frac{y_1^2 - y_1}{\alpha} = g(y_1, y_2). \]
In order to classify a stationary point, we need the eigenvalues of the linearized system at that point:
\[ J(f, g) = \begin{pmatrix} \partial f/\partial y_1 & \partial f/\partial y_2 \\ \partial g/\partial y_1 & \partial g/\partial y_2 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ \frac{2y_1 - 1}{\alpha} & 0 \end{pmatrix}. \]
• For $(y_1, y_2) = (0, 0)$: $\ \lambda_{\pm} = \pm\sqrt{-1/\alpha}$.
If $\alpha < 0$: $\lambda_{\pm} \in \mathbb{R}$, $\lambda_+ > 0 > \lambda_-$ ⇒ $(0, 0)$ is an Unstable Saddle Point.
If $\alpha > 0$: $\lambda_{\pm} = \pm i/\sqrt{\alpha} \in \mathbb{C}$, $\mathrm{Re}(\lambda_{\pm}) = 0$ ⇒ $(0, 0)$ is a Center.
• For $(y_1, y_2) = (1, 0)$: $\ \lambda_{\pm} = \pm\sqrt{1/\alpha}$.
If $\alpha < 0$: $\lambda_{\pm} = \pm i/\sqrt{-\alpha} \in \mathbb{C}$, $\mathrm{Re}(\lambda_{\pm}) = 0$ ⇒ $(1, 0)$ is a Center.
If $\alpha > 0$: $\lambda_{\pm} \in \mathbb{R}$, $\lambda_+ > 0 > \lambda_-$ ⇒ $(1, 0)$ is an Unstable Saddle Point.
(The purely imaginary cases are genuine centers rather than spirals: the system $v'' = (v^2 - v)/\alpha$ is conservative, with first integral $\tfrac{1}{2}(v')^2 - \tfrac{1}{\alpha}\big(\tfrac{v^3}{3} - \tfrac{v^2}{2}\big)$.)
16.6 Energy Methods
Problem (S’98, #9; S’96, #5). Consider the following initial-boundary value
problem for the multi-dimensional wave equation:
utt = u in Ω × (0, ∞),
u(x, 0) = f(x),
∂u
∂t
(x, 0) = g(x) for x ∈ Ω,
∂u
∂n
+ a(x)
∂u
∂t
= 0 on ∂Ω.
Here, Ω is a bounded domain in Rn and a(x) ≥ 0. Define the Energy integral for this
problem and use it in order to prove the uniqueness of the classical solution of the prob-
lem.
Proof.
d ˜E
dt
= 0 =
Ω
(utt − u)ut dx =
Ω
uttut dx −
∂Ω
∂u
∂n
ut ds +
Ω
∇u · ∇ut dx
=
Ω
1
2
∂
∂t
(u2
t ) dx +
Ω
1
2
∂
∂t
|∇u|2
dx +
∂Ω
a(x)u2
t ds.
Thus,
−
∂Ω
a(x)u2
t dx
≤0
=
1
2
∂
∂t Ω
u2
t + |∇u|2
dx.
Let Energy integral be
E(t) =
1
2 Ω
u2
t + |∇u|2
dx.
In order to prove that the given E(t) ≤ 0 from scratch, take its derivative with respect
to t:
dE
dt
(t) =
Ω
ututt + ∇u · ∇ut dx
=
Ω
ututt dx +
∂Ω
ut
∂u
∂n
ds −
Ω
ut u dx
=
Ω
ut(utt − u) dx
=0
−
∂Ω
a(x)u2
t dx ≤ 0.
Thus, E(t) ≤ E(0).
To prove the uniqueness of the classical solution, suppose u1 and u2 are two solutions
of the initial boundary value problem. Let w = u1 − u2. Then, w satisfies
wtt = w in Ω × (0, ∞),
w(x, 0) = 0, wt(x, 0) = 0 for x ∈ Ω,
∂w
∂n
+ a(x)
∂w
∂t
= 0 on ∂Ω.
We have
Ew(0) =
1
2 Ω
(wt(x, 0)2
+ |∇w(x, 0)|2
) dx = 0.
Ew(t) ≤ Ew(0) = 0 ⇒ Ew(t) = 0. Thus, wt = 0, wxi = 0 ⇒ w(x, t) = const = 0.
Hence, u1 = u2.
Problem (S’94, #7). Consider the wave equation
1
c2(x)
utt = u x ∈ Ω
∂u
∂t
− α(x)
∂u
∂n
= 0 on ∂Ω × R.
Assume that α(x) is of one sign for all x (i.e. α always positive or α always negative).
For the energy
E(t) =
1
2 Ω
1
c2(x)
u2
t + |∇u|2
dx,
show that the sign of dE
dt is determined by the sign of α.
Proof. We have
dE
dt
(t) =
Ω
1
c2(x)
ututt + ∇u · ∇ut dx
=
Ω
1
c2(x)
ututt dx +
∂Ω
ut
∂u
∂n
ds −
Ω
ut u dx
=
Ω
ut
1
c2(x)
utt − u dx
=0
+
∂Ω
1
α(x)
u2
t dx
=
∂Ω
1
α(x)
u2
t dx =
> 0, if α(x) > 0, ∀x ∈ Ω,
< 0, if α(x) < 0, ∀x ∈ Ω.
Problem (F’92, #2). Let Ω ∈ Rn
. Let u(x, t) be a smooth solution of the following
initial boundary value problem:
utt − u + u3
= 0 for (x, t) ∈ Ω × [0, T]
u(x, t) = 0 for (x, t) ∈ ∂Ω × [0, T].
a) Derive an energy equality for u. (Hint: Multiply by ut and integrate over Ω ×
[0, T].)
b) Show that if u|t=0 = ut|t=0 = 0 for x ∈ Ω, then u ≡ 0.
Proof. a) Multiply by ut and integrate:
0 =
Ω
(utt − u + u3
)ut dx =
Ω
uttut dx −
∂Ω
∂u
∂n
ut ds
=0
+
Ω
∇u · ∇ut dx +
Ω
u3
ut dx
=
Ω
1
2
∂
∂t
(u2
t ) dx +
Ω
1
2
∂
∂t
|∇u|2
dx +
Ω
1
4
∂
∂t
(u4
) dx =
1
2
d
dt Ω
u2
t + |∇u|2
+
1
2
u4
dx.
Thus, the Energy integral is
E(t) =
Ω
u2
t + |∇u|2
+
1
2
u4
dx = const = E(0).
b) Since u(x, 0) = 0, ut(x, 0) = 0, we have
E(0) =
Ω
ut(x, 0)2
+ |∇u(x, 0)|2
+
1
2
u(x, 0)4
dx = 0.
Since E(t) = E(0) = 0, we have
E(t) =
Ω
ut(x, t)2
+ |∇u(x, t)|2
+
1
2
u(x, t)4
dx = 0.
Thus, u ≡ 0.
Problem (F’04, #3). Consider a damped wave equation
utt − u + a(x)ut = 0, (x, t) ∈ R3 × R,
u|t=0 = u0, ut|t=0 = u1.
Here the damping coefficient a ∈ C∞
0 (R3) is a non-negative function and u0, u1 ∈
C∞
0 (R3
). Show that the energy of the solution u(x, t) at time t,
E(t) =
1
2 R3
|∇xu|2
+ |ut|2
dx
is a decreasing function of t ≥ 0.
Proof. Take the derivative of $E(t)$ with respect to $t$; the boundary term below vanishes because the data are compactly supported, so (by finite speed of propagation) $u(\cdot, t)$ has compact support for each $t$:
\[ \frac{dE}{dt}(t) = \int_{\mathbb{R}^3} \big(\nabla u \cdot \nabla u_t + u_t u_{tt}\big)\, dx
= \underbrace{\int_{\partial B_R} u_t \frac{\partial u}{\partial n}\, dS}_{=0}
- \int_{\mathbb{R}^3} u_t\, \Delta u\, dx + \int_{\mathbb{R}^3} u_t u_{tt}\, dx \]
\[ = \int_{\mathbb{R}^3} u_t\,(u_{tt} - \Delta u)\, dx
= \int_{\mathbb{R}^3} u_t\,(-a(x)\, u_t)\, dx
= -\int_{\mathbb{R}^3} a(x)\, u_t^2\, dx \le 0, \]
where $B_R$ is any ball large enough to contain the support of $u(\cdot, t)$.
Thus $\frac{dE}{dt} \le 0$, so $E(t) \le E(0)$; i.e., $E(t)$ is a decreasing (non-increasing) function of $t \ge 0$.
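Although the problem is posed in R³, the same mechanism can be observed in a one-dimensional finite-difference experiment. The script below is an added illustration only (a 1-D analogue, not the R³ problem); the grid, time step and damping profile are ad hoc choices.

```python
import numpy as np

# 1-D analogue: u_tt = u_xx - a(x) u_t with compactly supported data.
L, N = 40.0, 4000
x = np.linspace(-L / 2, L / 2, N + 1)
dx = x[1] - x[0]
dt = 0.5 * dx
a = np.exp(-x**2)                       # smooth, non-negative damping coefficient
u_prev = np.exp(-5 * x**2)              # u(x,0); u_t(x,0) = 0
u = u_prev.copy()

def energy(u_new, u_old):
    ut = (u_new - u_old) / (2 * dt)     # centered time difference
    ux = np.gradient(u_new, dx)
    return 0.5 * np.sum(ut**2 + ux**2) * dx

energies = []
for n in range(2000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    # leapfrog step; the damping term is handled by a centered difference in time
    u_next = (2 * u - u_prev + dt**2 * lap + 0.5 * a * dt * u_prev) / (1 + 0.5 * a * dt)
    energies.append(energy(u_next, u_prev))
    u_prev, u = u, u_next

print("E(first step) = %.6f,  E(last step) = %.6f" % (energies[0], energies[-1]))
print("largest increase between consecutive steps: %.2e"
      % max(e2 - e1 for e1, e2 in zip(energies, energies[1:])))
# E decays overall; any tiny positive "increase" is discretization error.
```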
Problem (W’03, #8). a) Consider the damped wave equation for high-speed waves
(0 < << 1) in a bounded region D
2
utt + ut = u
with the boundary condition u(x, t) = 0 on ∂D. Show that the energy functional
E(t) =
D
2
u2
t + |∇u|2
dx
is nonincreasing on solutions of the boundary value problem.
b) Consider the solution to the boundary value problem in part (a) with initial data
u (x, 0) = 0, ut(x, 0) = −αf(x), where f does not depend on and α < 1. Use part
(a) to show that
D
|∇u (x, t)|2
dx → 0
uniformly on 0 ≤ t ≤ T for any T as → 0.
c) Show that the result in part (b) does not hold for α = 1. To do this consider
the case where f is an eigenfunction of the Laplacian, i.e. f + λf = 0 in D and
f = 0 on ∂D, and solve for u explicitly.
Proof. a)
dE
dt
=
D
2 2
ututt dx +
D
2∇u · ∇ut dx
=
D
2 2
ututt dx +
∂D
2
∂u
∂n
ut ds
=0, (u=0 on ∂D)
−
D
2 uut dx
= 2
D
( 2
utt − u)ut dx = = −2
D
|ut|2
dx ≤ 0.
Thus, E(t) ≤ E(0), i.e. E(t) is nonincreasing.
b) From (a), we know dE
dt ≤ 0. We also have
E (0) =
D
2
(ut(x, 0))2
+ |∇u (x, 0)|2
dx
=
D
2
( −α
f(x))2
+ 0 dx =
D
2(1−α)
f(x)2
dx → 0 as → 0.
Since E (0) ≥ E (t) = D
2(ut)2 + |∇u |2 dx, then E (t) → 0 as → 0.
Thus, D |∇u |2
dx → 0 as → 0.
c) If α = 1,
E (0) =
D
2(1−α)
f(x)2
dx =
D
f(x)2
dx.
Since f is independent of , E (0) does not approach 0 as → 0. We can not conclude
that D |∇u (x, t)|2
dx → 0.
Problem (F’98, #6). Let f solve the nonlinear wave equation
ftt − fxx = −f(1 + f2
)−1
for x ∈ [0, 1], with f(x = 0, t) = f(x = 1, t) = 0 and with smooth initial data f(x, t) =
f0(x).
a) Find an energy integral E(t) which is constant in time.
b) Show that |f(x, t)| < c for all x and t, in which c is a constant.
Hint: Note that
f
1 + f2
=
1
2
d
df
log(1 + f2
).
Proof. a) Since f(0, t) = f(1, t) = 0, ∀t, we have ft(0, t) = ft(1, t) = 0. Let
dE
dt
= 0 =
1
0
ftt − fxx + f(1 + f2
)−1
ft dx
=
1
0
fttft dx −
1
0
fxxft dx +
1
0
fft
1 + f2
dx
=
1
0
fttft dx − [fx ft
=0
]1
0 +
1
0
fxftx dx +
1
0
fft
1 + f2
dx
=
1
0
1
2
∂
∂t
(f2
t ) dx +
1
0
1
2
∂
∂t
(f2
x) dx +
1
0
1
2
∂
∂t
(ln(1 + f2
)) dx
=
1
2
d
dt
1
0
f2
t + f2
x + ln(1 + f2
) dx.
Thus,
E(t) =
1
2
1
0
f2
t + f2
x + ln(1 + f2
) dx.
b) We want to show that f is bounded. For smooth f(x, 0) = f0(x), we have
E(0) =
1
2
1
0
ft(x, 0)2
+ fx(x, 0)2
+ ln(1 + f(x, 0)2
) dx < ∞.
Since E(t) is constant in time, E(t) = E(0) < ∞. Thus,
1
2
1
0
ln(1 + f2
) dx ≤
1
2
1
0
f2
t + f2
x + ln(1 + f2
) dx = E(t) < ∞.
Hence, f is bounded.
Problem (F’97, #1). Consider initial-boundary value problem
utt + a2
(x, t)ut − u(x, t) = 0 x ∈ Ω ⊂ Rn
, 0 < t < +∞
u(x) = 0 x ∈ ∂Ω
u(x, 0) = f(x), ut(x, 0) = g(x) x ∈ Ω.
Prove that L2-norm of the solution is bounded in t on (0, +∞).
Here Ω is a bounded domain, and a(x, t), f(x), g(x) are smooth functions.
Proof. Multiply the equation by ut and integrate over Ω:
ututt + a2
u2
t − ut u = 0,
Ω
ututt dx +
Ω
a2
u2
t dx −
Ω
ut u dx = 0,
1
2
d
dt Ω
u2
t dx +
Ω
a2
u2
t dx −
∂Ω
ut
∂u
∂n
ds
=0, (u=0, x∈∂Ω)
+
Ω
∇u · ∇ut dx = 0,
1
2
d
dt Ω
u2
t dx +
Ω
a2
u2
t dx +
1
2
d
dt Ω
|∇u|2
dx = 0,
1
2
d
dt Ω
u2
t + |∇u|2
dx = −
Ω
a2
u2
t dx ≤ 0.
Let Energy integral be
E(t) =
Ω
u2
t + |∇u|2
dx.
We have dE
dt ≤ 0, i.e. E(t) ≤ E(0).
E(t) ≤ E(0) =
Ω
ut(x, 0)2
+ |∇u(x, 0)|2
dx =
Ω
g(x)2
+ |∇f(x)|2
dx < ∞,
since f and g are smooth functions. Thus,
E(t) =
Ω
u2
t + |∇u|2
dx < ∞,
Ω
|∇u|2
dx < ∞,
Ω
u2
dx ≤ C
Ω
|∇u|2
dx < ∞, by Poincare inequality.
Thus, ||u||2 is bounded ∀t.
Problem (S’98, #4). a) Let u(x, y, z, t), −∞ < x, y, z < ∞ be a solution of the
equation
⎧
⎪⎨
⎪⎩
utt + ut = uxx + uyy + uzz
u(x, y, z, 0) = f(x, y, z),
ut(x, y, z, 0) = g(x, y, z).
(16.41)
Here f, g are smooth functions which vanish if x2 + y2 + z2 is large enough. Prove
that it is the unique solution for t ≥ 0.
b) Suppose we want to solve the same equation (16.41) in the region z ≥ 0, −∞ <
x, y < ∞, with the additional conditions
u(x, y, 0, t) = f(x, y, t)
uz(x, y, 0, t) = g(x, y, t)
with the same f, g as before in (16.41). What goes wrong?
Proof. a) Suppose u1 and u2 are two solutions. Let w = u1 − u2. Then,
⎧
⎪⎨
⎪⎩
wtt + wt = w,
w(x, y, z, 0) = 0,
wt(x, y, z, 0) = 0.
Multiply the equation by wt and integrate:
wtwtt + w2
t = wt w,
R3
wtwtt dx +
R3
w2
t dx =
R3
wt w dx,
1
2
d
dt R3
w2
t dx +
R3
w2
t dx =
∂R3
wt
∂w
∂n
dx
=0
−
R3
∇w · ∇wt dx,
1
2
d
dt R3
w2
t dx +
R3
w2
t dx = −
1
2
d
dt R3
|∇w|2
dx,
d
dt R3
w2
t + |∇w|2
dx
E(t)
= −2
R3
w2
t dx ≤ 0,
dE
dt
≤ 0,
E(t) ≤ E(0) =
R3
wt(x, 0)2
+ |∇w(x, 0)|2
dx = 0,
⇒ E(t) =
R3
w2
t + |∇w|2
dx = 0.
Thus, wt = 0, ∇w = 0, and w = constant. Since w(x, y, z, 0) = 0, we have w ≡ 0.
b)
Problem (F’94, #8). The one-dimensional, isothermal fluid equations with viscosity
and capillarity in Lagrangian variables are
vt − ux = 0
ut + p(v)x = εuxx − δvxxx
in which v(= 1/ρ) is specific volume, u is velocity, and p(v) is pressure. The coefficients
ε and δ are non-negative.
Find an energy integral which is non-increasing (as t increases) if ε > 0 and con-
served if ε = 0.
Hint: if δ = 0, E = u2/2 − P(v) dx where P (v) = p(v).
Proof. Multiply the second equation by u and integrate over R. We use ux = vt.
Note that the boundary integrals are 0 due to finite speed of propagation.
uut + up(v)x = εuuxx − δuvxxx,
R
uut dx +
R
up(v)x dx = ε
R
uuxx dx − δ
R
uvxxx dx,
1
2 R
∂
∂t
(u2
) dx +
∂R
up(v) ds
=0
+
R
uxp(v) dx
= ε
∂R
uux dx
=0
−ε
R
u2
x dx − δ
∂R
uvxx dx
=0
+δ
R
uxvxx dx,
1
2 R
∂
∂t
(u2
) dx +
R
vtp(v) dx = −ε
R
u2
x dx + δ
R
vtvxx dx,
1
2 R
∂
∂t
(u2
) dx +
R
∂
∂t
P(v) dx = −ε
R
u2
x dx + δ
∂R
vtvx dx
=0
−δ
R
vxtvx dx,
1
2 R
∂
∂t
(u2
) dx +
R
∂
∂t
P(v) dx +
δ
2 R
∂
∂t
(v2
x) dx = −ε
R
u2
x dx,
d
dt R
u2
2
+ P(v) +
δ
2
v2
x dx = −ε
R
u2
x dx ≤ 0.
E(t) =
R
u2
2
+ P(v) +
δ
2
v2
x dx
is nonincreasing if ε > 0, and conserved if ε = 0.
Problem (S’99, #5). Consider the equation
utt =
∂
∂x
σ(ux) (16.42)
with σ(z) a smooth function. This is to be solved for t > 0, 0 ≤ x ≤ 1, with
periodic boundary conditions and initial data u(x, 0) = u0(x) and ut(x, 0) = v0(x).
a) Multiply (16.42) by ut and get an expression of the form
d
dt
1
0
F(ut, ux) = 0
that is satisfied for an appropriate function F(y, z) with y = ut, z = ux,
where u is any smooth, periodic in space solution of (16.42).
b) Under what conditions on σ(z) is this function, F(y, z), convex in its variables?
c) What a priori inequality is satisfied for smooth solutions when F is convex?
d) Discuss the special case σ(z) = a2z3/3, with a > 0 and constant.
Proof. a) Multiply by ut and integrate:
ututt = utσ(ux)x,
1
0
ututt dx =
1
0
utσ(ux)x dx,
d
dt
1
0
u2
t
2
dx = utσ(ux)
1
0
=0, (2π-periodic)
−
1
0
utxσ(ux) dx =
Let Q (z) = σ(z), then d
dt Q(ux) = σ(ux)uxt. Thus,
= −
1
0
utxσ(ux) dx = −
d
dt
1
0
Q(ux) dx.
d
dt
1
0
u2
t
2
+ Q(ux) dx = 0.
b) We have
F(ut, ux) =
u2
t
2
+ Q(ux).
41 For F to be convex, the Hessian matrix of partial derivatives must be positive definite.
41
A function f is convex on a convex set S if it satisfies
f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y)
for all 0 ≤ α ≤ 1 and for all x, y ∈ S.
If a one-dimensional function f has two continuous derivatives, then f is convex if and only if
f (x) ≥ 0.
In the multi-dimensional case the Hessian matrix of second derivatives must be positive semi-definite,
that is, at every point x ∈ S
yT
∇2
f(x) y ≥ 0, for all y.
The Hessian matrix is the matrix with entries
[∇2
f(x)]ij ≡
∂2
f(x)
∂xi∂xj
.
For functions with continuous second derivatives, it will always be symmetric matrix: fxixj = fxj xi .
The Hessian matrix is
∇2
F(ut, ux) =
Futut Futux
Fuxut Fuxux
=
1 0
0 σ (ux)
.
yT
∇2
F(x) y = y1 y2
1 0
0 σ (ux)
y1
y2
= y2
1 + σ (ux)y2
2 ≥
need
0.
Thus, for a Hessian matrix to be positive definite, need σ (ux) ≥ 0, so that the above
inequality holds for all y.
c) We have
d
dt
1
0
F(ut, ux) dx = 0,
1
0
F(ut, ux) dx = const,
1
0
F(ut, ux) dx =
1
0
F(ut(x, 0), ux(x, 0)) dx,
1
0
u2
t
2
+ Q(ux) dx =
1
0
v2
0
2
+ Q(u0x) dx.
d) If σ(z) = a2
z3
/3, we have
F(ut, ux) =
u2
t
2
+ Q(ux) =
u2
t
2
+
a2
u4
x
12
,
d
dt
1
0
u2
t
2
+
a2
u4
x
12
dx = 0,
1
0
u2
t
2
+
a2
u4
x
12
dx = const,
1
0
u2
t
2
+
a2
u4
x
12
dx =
1
0
v0
2
2
+
a2
u0
4
x
12
dx.
Problem (S’96, #8). 42
Let u(x, t) be the solution of the Korteweg-de Vries equation
ut + uux = uxxx, 0 ≤ x ≤ 2π,
with 2π-periodic boundary conditions and prescribed initial data
u(x, t = 0) = f(x).
a) Prove that the energy integral
I1(u) =
2π
0
u2
(x, t) dx
is independent of the time t.
b) Prove that the second “energy integral”,
I2(u) =
2π
0
1
2
u2
x(x, t) +
1
6
u3
(x, t) dx
is also independent of the time t.
c) Assume the initial data are such that I1(f) + I2(f) < ∞. Use (a) + (b) to prove
that the maximum norm of the solution, |u|∞ = supx |u(x, t)|, is bounded in time.
Hint: Use the following inequalities (here, |u|p is the Lp
-norm of u(x, t) at fixed time
t):
• |u|2
∞ ≤
π
6
(|u|2
2 + |ux|2
2) (one of Sobolev’s inequalities);
• |u|3
3 ≤ |u|2
2 |u|∞ (straightforward).
Proof. a) Multiply the equation by u and integrate. Note that all boundary terms are
0 due to 2π-periodicity.
uut + u2
ux = uuxxx,
2π
0
uut dx +
2π
0
u2
ux dx =
2π
0
uuxxx dx,
1
2
d
dt
2π
0
u2
dx +
1
3
2π
0
(u3
)x dx = uuxx
2π
0
−
2π
0
uxuxx dx,
1
2
d
dt
2π
0
u2
dx +
1
3
u3 2π
0
= −
1
2
2π
0
(u2
x)x dx,
1
2
d
dt
2π
0
u2
dx = −
1
2
u2
x
2π
0
= 0.
I1(u) =
2π
0
u2
dx = C.
Thus, I1(u) =
2π
0 u2
(x, t) dx is independent of the time t.
Alternatively, we may differentiate I1(u):
dI1
dt
(u) =
d
dt
2π
0
u2
dx =
2π
0
2uut dx =
2π
0
2u(−uux + uxxx) dx
=
2π
0
−2u2
ux dx +
2π
0
2uuxxx dx =
2π
0
−
2
3
(u3
)x dx + 2uuxx
2π
0
−
2π
0
2uxuxx dx
= −
2
3
u3 2π
0
−
2π
0
(u2
x)x dx = −u2
x
2π
0
= 0.
42
Also, see S’92, #7.
b) Note that all boundary terms are 0 due to 2π-periodicity.
dI2
dt
(u) =
d
dt
2π
0
1
2
u2
x +
1
6
u3
dx =
2π
0
uxuxt +
1
2
u2
ut dx =
We differentiate the original equation with respect to x:
ut = −uux + uxxx
utx = −(uux)x + uxxxx.
=
2π
0
ux(−(uux)x + uxxxx) dx +
1
2
2π
0
u2
(−uux + uxxx) dx
=
2π
0
−ux(uux)x dx +
2π
0
uxuxxxx dx −
1
2
2π
0
u3
ux dx +
1
2
2π
0
u2
uxxx dx
= −uxuux
2π
0
+
2π
0
uxxuux dx + uxuxxx
2π
0
−
2π
0
uxxuxxx dx
−
1
2
2π
0
u4
4 x
dx +
1
2
u2
uxx
2π
0
−
1
2
2π
0
2uuxuxx dx
=
2π
0
uxxuux dx −
2π
0
uxxuxxx dx −
1
2
u4
4
2π
0
−
2π
0
uuxuxx dx
= −
2π
0
uxxuxxx dx = −u2
xx
2π
0
+
2π
0
uxxxuxx dx =
2π
0
uxxxuxx dx = 0,
since −
2π
0 uxxuxxx dx = +
2π
0 uxxuxxx dx. Thus,
I2(u) =
2π
0
1
2
u2
x(x, t) +
1
6
u3
(x, t) dx = C,
and I2(u) is independent of the time t.
c) From (a) and (b), we have
I1(u) =
2π
0
u2
dx = ||u||2
2,
I2(u) =
2π
0
1
2
u2
x +
1
6
u3
dx =
1
2
||ux||2
2 +
1
6
||u||3
3.
Using given inequalities, we have
||u||2
∞ ≤
π
6
(||u||2
2 + ||ux||2
2) ≤
π
6
I1(u) + 2I2(u) −
1
3
||u||3
3
≤
π
6
I1(u) +
π
3
I2(u) +
π
18
||u||2
2 ||u||∞ ≤
π
6
I1(u) +
π
3
I2(u) +
π
18
I1(u)||u||∞
= C + C1||u||∞.
⇒ ||u||2
∞ ≤ C + C1||u||∞,
⇒ ||u||∞ ≤ C2.
Thus, ||u||∞ is bounded in time.
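To spell out the final step: if $x = \|u\|_\infty$ satisfies $x^2 \le C + C_1 x$ with $C, C_1 \ge 0$, then $x$ lies to the left of the larger root of $x^2 - C_1 x - C = 0$, so
\[ \|u\|_\infty \le \frac{C_1 + \sqrt{C_1^2 + 4C}}{2} =: C_2, \]
a bound independent of $t$.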
Also see Energy Methods problems for higher order equations (3rd and
4th) in the section on Gas Dynamics.
16.7 Wave Equation in 2D and 3D
Problem (F’97, #8); (McOwen 3.2 #90). Solve
utt = uxx + uyy + uzz
with initial conditions
u(x, y, z, 0) = x2
+ y2
g(x)
, ut(x, y, z, 0) = 0
h(x)
.
Proof.
➀ We may use the Kirchhoff’s formula:
u(x, t) =
1
4π
∂
∂t
t
|ξ|=1
g(x + ctξ) dSξ +
t
4π |ξ|=1
h(x + ctξ) dSξ
=
1
4π
∂
∂t
t
|ξ|=1
(x1 + ctξ1)2
+ (x2 + ctξ2)2
dSξ + 0 =
➁ We may solve the problem by Hadamard’s method of descent, since initial con-
ditions are independent of x3. We need to convert surface integrals in R3
to domain
integrals in R2. Specifically, we need to express the surface measure on the upper half
of the unit sphere S2
+ in terms of the two variables ξ1 and ξ2. To do this, consider
f(ξ1, ξ2) = 1 − ξ2
1 − ξ2
2 over the unit disk ξ2
1 + ξ2
2 < 1.
dSξ = 1 + (fξ1 )2 + (fξ2 )2 dξ1dξ2 =
dξ1dξ2
1 − ξ2
1 − ξ2
2
.
u(x1, x2, t) =
1
4π
∂
∂t
2t
ξ2
1+ξ2
2<1
g(x1 + ctξ1, x2 + ctξ2) dξ1dξ2
1 − ξ2
1 − ξ2
2
+
t
4π
2
ξ2
1+ξ2
2 <1
h(x1 + ctξ1, x2 + ctξ2) dξ1dξ2
1 − ξ2
1 − ξ2
2
=
1
4π
∂
∂t
2t
ξ2
1+ξ2
2<1
(x1 + tξ1)2 + (x2 + tξ2)2
1 − ξ2
1 − ξ2
2
dξ1dξ2 + 0,
=
1
2π
∂
∂t
t
ξ2
1+ξ2
2 <1
x2
1 + 2x1tξ1 + t2ξ2
1 + x2
2 + 2x2tξ2 + t2ξ2
2
1 − ξ2
1 − ξ2
2
dξ1dξ2
=
1
2π
∂
∂t ξ2
1+ξ2
2<1
tx2
1 + 2x1t2ξ1 + t3ξ2
1 + tx2
2 + 2x2t2ξ2 + t3ξ2
2
1 − ξ2
1 − ξ2
2
dξ1dξ2
=
1
2π ξ2
1+ξ2
2<1
x2
1 + 4x1tξ1 + 3t2ξ2
1 + x2
2 + 4x2tξ2 + 3t2ξ2
2
1 − ξ2
1 − ξ2
2
dξ1dξ2
=
1
2π ξ2
1+ξ2
2<1
(x2
1 + x2
2) + 4t(x1ξ1 + x2ξ2) + 3t2(ξ2
1 + ξ2
2)
1 − ξ2
1 − ξ2
2
dξ1dξ2
=
1
2π
(x2
1 + x2
2)
ξ2
1+ξ2
2 <1
dξ1dξ2
1 − ξ2
1 − ξ2
2
❶
+
4t
2π ξ2
1+ξ2
2 <1
x1ξ1 + x2ξ2
1 − ξ2
1 − ξ2
2
dξ1dξ2
❷
+
3t2
2π ξ2
1+ξ2
2<1
ξ2
1 + ξ2
2
1 − ξ2
1 − ξ2
2
dξ1dξ2
❸
=
❶ =
1
2π
(x2
1 + x2
2)
ξ2
1+ξ2
2 <1
dξ1dξ2
1 − ξ2
1 − ξ2
2
=
1
2π
(x2
1 + x2
2)
2π
0
1
0
r dr dθ
√
1 − r2
=
1
2π
(x2
1 + x2
2)
2π
0
−2
1
0
−1
2 du dθ
u
1
2
u = 1 − r2
, du = −2r dr
=
1
2π
(x2
1 + x2
2)
2π
0
1 dθ = x2
1 + x2
2.
❷ =
4t
2π ξ2
1+ξ2
2<1
x1ξ1 + x2ξ2
1 − ξ2
1 − ξ2
2
dξ1dξ2 =
4t
2π
1
−1
√
1−ξ2
2
−
√
1−ξ2
2
x1ξ1 + x2ξ2
1 − ξ2
1 − ξ2
2
dξ1dξ2
= 0.
❸ =
3t2
2π ξ2
1+ξ2
2 <1
ξ2
1 + ξ2
2
1 − ξ2
1 − ξ2
2
dξ1dξ2 =
3t2
2π
2π
0
1
0
(r cos θ)2
+ (r sinθ)2
√
1 − r2
r drdθ
=
3t2
2π
2π
0
1
0
r3
√
1 − r2
drdθ u = 1 − r2
, du = −2r dr
=
3t2
2π
2π
0
2
3
dθ =
t2
π
2π
0
dθ = 2t2
.
⇒ u(x1, x2, t) = ❶ + ❷ + ❸ = x2
1 + x2
2 + 2t2
.
➂ We may guess what the solution is:
u(x, y, z, t) =
1
2
(x + t)2
+ (y + t)2
+ (x − t)2
+ (y − t)2
= x2
+ y2
+ 2t2
.
Check:
u(x, y, z, 0) = x2
+ y2
.
ut(x, y, z, t) = (x + t) + (y + t) − (x − t) − (y − t),
ut(x, y, z, 0) = 0.
utt(x, y, z, t) = 4,
ux(x, y, z, t) = (x + t) + (x − t),
uxx(x, y, z, t) = 2,
uy(x, y, z, t) = (y + t) + (y − t),
uyy(x, y, z, t) = 2,
uzz(x, y, z, t) = 0,
utt = uxx + uyy + uzz.
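The guessed solution can also be checked symbolically; a minimal added sketch:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
u = x**2 + y**2 + 2 * t**2

residual = sp.diff(u, t, 2) - (sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2))
print(sp.simplify(residual))        # 0  -> u solves u_tt = u_xx + u_yy + u_zz
print(u.subs(t, 0))                 # x**2 + y**2  -> matches the initial data g
print(sp.diff(u, t).subs(t, 0))     # 0            -> matches the initial data h
```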
Problem (S’98, #6).
Consider the two-dimensional wave equation wtt = a2 w, with initial data which van-
ish for x2
+y2
large enough. Prove that w(x, y, t) satisfies the decay |w(x, y, t)| ≤ C·t−1
.
(Note that the estimate is not uniform with respect to x, y since C may depend on x, y).
Proof. Suppose we have the following problem with initial data:
utt = a2
u x ∈ R2
, t > 0,
u(x, 0) = g(x), ut(x, 0) = h(x) x ∈ R2
.
The result is the consequence of the Huygens’ principle and may be proved by Hadamard’s
method of descent: 43
u(x, t) =
1
4π
∂
∂t
2t
ξ2
1+ξ2
2<1
g(x1 + ctξ1, x2 + ctξ2) dξ1dξ2
1 − ξ2
1 − ξ2
2
+
t
4π
2
ξ2
1+ξ2
2 <1
h(x1 + ctξ1, x2 + ctξ2) dξ1dξ2
1 − ξ2
1 − ξ2
2
=
1
2π |ξ|2<c2t2
th(x + ξ) + g(x + ξ)
1 −
|ξ|2
c2t2
dξ1dξ2
c2t2
+
t
2π |ξ|2<c2t2
∇g(x + ξ) · (ct, ct)
1 −
|ξ|2
c2t2
dξ1dξ2
c2t2
.
For a given x, let T(x) be so large that T > 1 and supp(h + g) ⊂ BT (x). Then for
t > 2T we have:
|u(x, t)| =
1
2π |ξ|2<c2T 2
tM + M + 2Mct
1 − c2T 2
c2T 24
dξ1dξ2
c2t2
=
πc2
T2
2π
M
3/4
1
c2t
+
M
3/4
1
c2Tt
+
2Mc
c2t
.
⇒ u(x, t) ≤ C1/t for t > 2T.
For t ≤ 2T:
|u(x, t)| =
1
2π |ξ|2<c2t2
2TM + M + 4McT
1 − |ξ|2
c2t2
dξ1dξ2
c2t2
=
1
2π
(2TM + M + 4Mct)2π
ct
0
r dr/c2
t2
1 − r2
c2t2
=
M(2T + 1 + 4cT)
2
1
0
−du
u1/2
=
M(2T + 1 + 4cT)
2
2 ≤
M(2T + 1 + 4cT)2T
t
.
Letting C = max(C1, M(2T + 1 + 4cT)2T), we have |u(x, t)| ≤ C(x)/t.
• For n = 3, suppose g, h ∈ C∞₀(R³). The solution is given by Kirchhoff's formula. There is a constant C so that u(x, t) ≤ C/t for all x ∈ R³ and t > 0. As McOwen suggests in Hints for Exercises, to prove the result, we need to estimate the
43
Nick’s solution follows.
area of intersection of the sphere of radius ct with the support of the functions g and
h.
Problem (S’95, #6). Spherical waves in 3-d are waves symmetric about the origin;
i.e. u = u(r, t) where r is the distance from the origin. The wave equation
utt = c2
u
then reduces to
1
c2
utt = urr +
2
r
ur. (16.43)
a) Find the general solutions u(r, t) by solving (16.43). Include both the incoming waves
and outgoing waves in your solutions.
b) Consider only the outgoing waves and assume the finite out-flux condition
0 < lim
r→0
r2
ur < ∞
for all t. The wavefront is defined as r = ct. How is the amplitude of the wavefront
decaying in time?
Proof. a) We want to reduce (16.43) to the 1D wave equation. Let v = ru. Then
vtt = rutt,
vr = rur + u,
vrr = rurr + 2ur.
Thus, (16.43) becomes
1
c2
1
r
vtt =
1
r
vrr,
1
c2
vtt = vrr,
vtt = c2
vrr,
which has the solution
v(r, t) = f(r + ct) + g(r − ct).
Thus,
u(r, t) =
1
r
v(r, t) =
1
r
f(r + ct)
incoming, (c>0)
+
1
r
g(r − ct)
outgoing, (c>0)
.
b) We consider u(r, t) = 1
r g(r − ct):
0 < lim
r→0
r2
ur < ∞,
0 < lim
r→0
r2 1
r
g (r − ct) −
1
r2
g(r − ct) < ∞,
0 < lim
r→0
rg (r − ct) − g(r − ct) < ∞,
0 < −g(−ct) < ∞,
0 < −g(−ct) = G(t) < ∞,
g(t) = −G
t
−c
.
The wavefront is defined as r = ct. We have
u(r, t) =
1
r
g(r − ct) = −
1
r
G
r − ct
−c
= −
1
ct
G(0).
|u(r, t)| =
1
t
−
1
c
G(0) .
The amplitude of the wavefront decays like 1
t .
Problem (S’00, #8). a) Show that for a smooth function F on the line, while
u(x, t) = F(ct + |x|)/|x| may look like a solution of the wave equation utt = c2 u
in R3
, it actually is not. Do this by showing that for any smooth function φ(x, t) with
compact support
R3×R
u(x, t)(φtt − φ) dxdt = 4π
R
φ(0, t)F(ct) dt.
Note that, setting r = |x|, for any function w which only depends on r one has
w = r−2
(r2
wr)r = wrr + 2
r wr.
b) If F(0) = F (0) = 0, what is the true solution to utt = u with initial conditions
u(x, 0) = F(|x|)/|x| and ut(x, 0) = F (|x|)/|x|?
c) (Ralston Hw) Suppose u(x, t) is a solution to the wave equation utt = c2
u in
R3 × R with u(x, t) = w(|x|, t) and u(x, 0) = 0. Show that
u(x, t) =
F(|x| + ct) − F(|x| − ct)
|x|
for a function F of one variable.
Proof. a) We have
R3 R
u (φtt − φ) dxdt = lim
→0 R
dt
|x|>
u (φtt − φ) dx
= lim
→0 R
dt
|x|>
φ (utt − u) dx +
|x|=
∂u
∂n
φ − u
∂φ
∂n
dS .
The final equality is derived by integrating by parts twice in t, and using Green’s
theorem:
Ω
(v u − u v) dx =
∂Ω
v
∂u
∂n
− u
∂v
∂n
ds.
Since dS = 2
sinφ dφ dθ and ∂
∂n = − ∂
∂r , substituting u(x, t) = F(|x| + ct)/|x|
gives:
R3 R
u (φtt − φ) dxdt =
R
4πφF(ct) dt.
Thus, u is not a weak solution to the wave equation.
b)
c) We want to show that v(|x|, t) = |x|w(|x|, t) is a solution to the wave equation in
one space dimension and hence must have the form v = F(|x| + ct) + G(|x| − ct). Then
we can argue that w will be undefined at x = 0 for some t unless F(ct) + G(−ct) = 0
for all t.
We work in spherical coordinates. Note that w and v are independent of φ and θ. We
have:
vtt(r, t) = c2
w = c2 1
r2
(r2
wr)r = c2 1
r2
(2rwr + r2
wrr),
⇒ rwtt = c2
rwrr + 2wr.
Thus we see that vtt = c2
vrr, and we can conclude that
v(r, t) = F(r + ct) + G(r − ct) and
w(r, t) =
F(r + ct) + G(r − ct)
r
.
limr→0 w(r, t) does not exist unless F(ct) + G(−ct) = 0 for all t. Hence
w(r, t) =
F(ct + r) + G(ct − r)
r
, and
u(x, t) =
F(ct + |x|) + G(ct − |x|)
|x|
.
17 Problems: Laplace Equation
A fundamental solution K(x) for the Laplace operator is a distribution satisfying 44
K(x) = δ(x)
The fundamental solution for the Laplace operator is
K(x) =
1
2π log |x| if n = 2
1
(2−n)ωn
|x|2−n if n ≥ 3.
17.1 Green’s Function and the Poisson Kernel
Green’s function is a special fundamental solution satisfying 45
G(x, ξ) = δ(x) for x ∈ Ω
G(x, ξ) = 0 for x ∈ ∂Ω,
(17.1)
To construct the Green’s function,
➀ consider wξ(x) with wξ(x) = 0 in Ω and wξ(x) = −K(x − ξ) on ∂Ω;
➁ consider G(x, ξ) = K(x − ξ) + wξ(x), which is a fundamental solution satisfying
(17.1).
Problem 1. Given a particular distribution solution to the set of Dirichlet problems
uξ(x) = δξ(x) for x ∈ Ω
uξ(x) = 0 for x ∈ ∂Ω,
how would you use this to solve
u = 0 for x ∈ Ω
u(x) = g(x) for x ∈ ∂Ω.
Proof. uξ(x) = G(x, ξ), a Green’s function. G is a fundamental solution to the Laplace
operator, G(x, ξ) = 0, x ∈ ∂Ω. In this problem, it is assumed that G(x, ξ) is known for
Ω. Then
u(ξ) =
Ω
G(x, ξ) u dx +
∂Ω
u(x)
∂G(x, ξ)
∂nx
dSx
for every u ∈ C2
(Ω). In particular, if u = 0 in Ω and u = g on ∂Ω, then we obtain
the Poisson integral formula
u(ξ) =
∂Ω
∂G(x, ξ)
∂nx
g(x) dSx,
44
We know that u(x) = Rn K(x−y)f(y)dy is a distribution solution of u = f when f is integrable
and has compact support. In particular, we have
u(x) =
Rn
K(x − y) u(y) dy whenever u ∈ C∞
0 (Rn
).
The above result is a consequence of:
u(x) =
Ω
δ(x − y)u(y) dy = ( K) ∗ u = K ∗ ( u) =
Ω
K(x − y) u(y) dy.
45
Green’s function is useful in satisfying Dirichlet boundary conditions.
where H(x, ξ) =
∂G(x,ξ)
∂nx
is the Poisson kernel.
Thus if we know that the Dirichlet problem has a solution u ∈ C2
(Ω), then we can
calculate u from the Poisson integral formula (provided of course that we can compute
G(x, ξ)).
Dirichlet Problem on a Half-Space. Solve the n-dimensional Laplace/Poisson
equation on the half-space with Dirichlet boundary conditions.
Proof. Use the method of reflection to construct Green’s function. Let Ω be an
upper half-space in Rn. If x = (x , xn), where x ∈ Rn−1, we can see
|x − ξ| = |x − ξ∗
|, and hence K(x − ξ) = K(x − ξ∗
). Thus
G(x, ξ) = K(x − ξ) − K(x − ξ∗
)
is the Green’s function on Ω. G(x, ξ) is harmonic in Ω,
and G(x, ξ) = 0 on ∂Ω.
To compute the Poisson kernel, we must differentiate G(x, ξ)
in the negative xn direction. For n ≥ 2,
∂
∂xn
K(x − ξ) =
xn − ξn
ωn
|x − ξ|−n
,
so that the Poisson kernel is given by
−
∂
∂xn
G(x, ξ) xn=0
=
2ξn
ωn
|x − ξ|−n
, for x ∈ Rn−1
.
Thus, the solution is
u(ξ) =
∂Ω
∂G(x, ξ)
∂nx
g(x) dSx =
2ξn
ωn Rn−1
g(x )
|x − ξ|n
dx .
If g(x ) is bounded and continuous for x ∈ Rn−1, then u(ξ) is C∞ and harmonic in Rn
+
and extends continuously to Rn
+ such that u(ξ ) = g(ξ ).
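For n = 2 the reflection construction is easy to verify symbolically; the added sketch below checks that G vanishes on the boundary x₂ = 0, is harmonic away from the source, and produces the stated Poisson kernel.

```python
import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2', real=True)
K = lambda a, b: sp.log(sp.sqrt(a**2 + b**2)) / (2 * sp.pi)   # 2-D fundamental solution

# G(x, xi) = K(x - xi) - K(x - xi*), with the reflected source xi* = (xi1, -xi2).
G = K(x1 - xi1, x2 - xi2) - K(x1 - xi1, x2 + xi2)

print(sp.simplify(G.subs(x2, 0)))                              # 0 on the boundary x2 = 0
print(sp.simplify(sp.diff(G, x1, 2) + sp.diff(G, x2, 2)))      # 0 away from the source
print(sp.simplify(-sp.diff(G, x2).subs(x2, 0)))                # xi2 / (pi*((x1-xi1)^2 + xi2^2))
```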
Problem (F’95, #3): Neumann Problem on a Half-Space.
a) Consider the Neumann problem in the upper half plane,
Ω = {x = (x1, x2) : −∞ < x1 < ∞, x2 > 0}:
u = ux1x1 + ux2x2 = 0 x ∈ Ω,
ux2 (x1, 0) = f(x1) − ∞ < x1 < ∞.
Find the corresponding Green’s function and conclude that
u(ξ) = u(ξ1, ξ2) =
1
2π
∞
−∞
ln [(x1 − ξ1)2
+ ξ2
2] · f(x1) dx1
is a solution of the problem.
b) Show that this solution is bounded in Ω if and only if
∞
−∞ f(x1) dx1 = 0.
Proof. a) Notation: x = (x, y), ξ = (x0, y0). Since K(x−ξ) = 1
2π log |x−ξ|, n = 2.
➀ First, we find the Green’s function. We have
K(x − ξ) =
1
2π
log (x − x0)2 + (y − y0)2.
Let G(x, ξ) = K(x − ξ) + ω(x).
Since the problem is Neumann, we need:
G(x, ξ) = δ(x − ξ),
∂G
∂y ((x, 0), ξ) = 0.
G((x, y), ξ) =
1
2π
log (x − x0)2 + (y − y0)2 + ω((x, y), ξ),
∂G
∂y
((x, y), ξ) =
1
2π
y − y0
(x − x0)2 + (y − y0)2
+ ωy((x, y), ξ),
∂G
∂y
((x, 0), ξ) = −
1
2π
y0
(x − x0)2 + y2
0
+ ωy((x, 0), ξ) = 0.
Let
ω((x, y), ξ) =
a
2π
log (x − x0)2 + (y + y0)2. Then,
∂G
∂y
((x, 0), ξ) = −
1
2π
y0
(x − x0)2 + y2
0
+
a
2π
y0
(x − x0)2 + y2
0
= 0.
Thus, a = 1.
G((x, y), ξ) =
1
2π
log (x − x0)2 + (y − y0)2 +
1
2π
log (x − x0)2 + (y + y0)2.
46
➁ Consider Green’s identity (after cutting out B (ξ) and having → 0):
Ω
(u G − G u
=0
) dx =
∂Ω
u
∂G
∂n
=0
−G
∂u
∂n
dS
46
Note that for the Dirichlet problem, we would have gotten the “-” sign instead of “+” in front of
ω.
Since ∂u
∂n = ∂u
∂(−y) = −f(x), we have
Ω
u δ(x − ξ) dx =
∞
−∞
G((x, y), ξ) f(x) dx,
u(ξ) =
∞
−∞
G((x, y), ξ) f(x) dx.
For y = 0, we have
G((x, y), ξ) =
1
2π
log (x − x0)2 + y2
0 +
1
2π
log (x − x0)2 + y2
0
=
1
2π
2 log (x − x0)2 + y2
0
=
1
2π
log (x − x0)2
+ y2
0 .
Thus,
u(ξ) =
1
2π
∞
−∞
log (x − x0)2
+ y2
0 f(x) dx.
b) Show that this solution is bounded in Ω if and only if
∞
−∞ f(x1) dx1 = 0.
Consider the Green’s identity:
Ω
u dxdy =
∂Ω
∂u
∂n
dS = −
∞
−∞
∂u
∂y
dx =
∞
−∞
f(x) dx = 0.
Note that the Green’s identity applies to bounded domains Ω.
R
−R
f dx1 +
2π
0
∂u
∂r
R dθ = 0.
???
McOwen 4.2 # 6. For n = 2, use the method of reflections to find the Green’s
function for the first quadrant Ω = {(x, y) : x, y > 0}.
Proof. For x ∈ ∂Ω,
|x − ξ(0)
| · |x − ξ(2)
| = |x − ξ(1)
| · |x − ξ(3)
|,
|x − ξ(0)
| =
|x − ξ(1)
| · |x − ξ(3)
|
|x − ξ(2)|
.
But ξ(0) = ξ, so for n = 2,
G(x, ξ) =
1
2π
log |x − ξ| −
1
2π
log
|x − ξ(1)| · |x − ξ(3)|
|x − ξ(2)|
.
G(x, ξ) = 0, x ∈ ∂Ω.
Problem. Use the method of images to solve
G = δ(x − ξ)
in the first quadrant with G = 0 on the boundary.
Proof. To solve the problem in the first quadrant
we take a reflection to the fourth quadrant
and the two are reflected to the left half.
G = δ(x − ξ(0)
) − δ(x − ξ(1)
) − δ(x − ξ(2)
) + δ(x − ξ(3)
).
G =
1
2π
log
|x − ξ(0)| |x − ξ(3)|
|x − ξ(1)| |x − ξ(2)|
=
1
2π
log
(x − x0)2 + (y − y0)2 (x + x0)2 + (y + y0)2
(x − x0)2 + (y + y0)2 (x + x0)2 + (y − y0)2
.
Note that on the axes G = 0.
Problem (S’96, #3). Construct a Green’s function for the following mixed Dirichlet-
Neumann problem in Ω = {x = (x1, x2) ∈ R2 : x1 > 0, x2 > 0}:
u =
∂2
u
∂x2
1
+
∂2
u
∂x2
2
= f, x ∈ Ω,
ux2 (x1, 0) = 0, x1 > 0,
u(0, x2) = 0, x2 > 0.
Proof. Notation: x = (x, y), ξ = (x0, y0). Since K(x − ξ) = 1
2π log |x − ξ|, n = 2.
K(x − ξ) =
1
2π
log (x − x0)2 + (y − y0)2.
Let G(x, ξ) = K(x − ξ) + ω(x).
At (0, y), y > 0,
G (0, y), ξ =
1
2π
log x2
0 + (y − y0)2 + ω(0, y) = 0.
Also,
Gy (x, y), ξ =
1
2π
1
2 · 2(y − y0)
(x − x0)2 + (y − y0)2
+ wy(x, y)
=
1
2π
y − y0
(x − x0)2 + (y − y0)2
+ wy(x, y).
At (x, 0), x > 0,
Gy (x, 0), ξ = −
1
2π
y0
(x − x0)2 + y2
0
+ wy(x, 0) = 0.
We have
ω((x, y), ξ) =
a
2π
log (x + x0)2 + (y − y0)2
+
b
2π
log (x − x0)2 + (y + y0)2
+
c
2π
log (x + x0)2 + (y + y0)2.
Using boundary conditions, we have
0 = G((0, y), ξ) =
1
2π
log x2
0 + (y − y0)2 + ω(0, y)
=
1
2π
log x2
0 + (y − y0)2 +
a
2π
log x2
0 + (y − y0)2 +
b
2π
log x2
0 + (y + y0)2 +
c
2π
log x2
0 + (y + y0)2.
Thus, a = −1, c = −b. Also,
0 = Gy((x, 0), ξ) = −
1
2π
y0
(x − x0)2 + y2
0
+ wy(x, 0)
= −
1
2π
y0
(x − x0)2 + y2
0
−
(−1)
2π
y0
(x + x0)2 + y2
0
+
b
2π
y0
(x − x0)2 + y2
0
+
(−b)
2π
y0
(x + x0)2 + y2
0
.
Thus, b = 1, and
G((x, y), ξ) =
1
2π
log (x − x0)2 + (y − y0)2 + ω(x) =
1
2π
log (x − x0)2 + (y − y0)2
− log (x + x0)2 + (y − y0)2 + log (x − x0)2 + (y + y0)2 − log (x + x0)2 + (y + y0)2 .
It can be seen that G((x, y), ξ) = 0 on x = 0, for example.
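The boundary conditions of this mixed Green's function can be confirmed symbolically; a short added check:

```python
import sympy as sp

x, y, x0, y0 = sp.symbols('x y x0 y0', real=True)
K = lambda a, b: sp.log(sp.sqrt(a**2 + b**2)) / (2 * sp.pi)

G = (K(x - x0, y - y0) - K(x + x0, y - y0)
     + K(x - x0, y + y0) - K(x + x0, y + y0))

print(sp.simplify(G.subs(x, 0)))                # 0: Dirichlet condition on x = 0
print(sp.simplify(sp.diff(G, y).subs(y, 0)))    # 0: Neumann condition on y = 0
```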
Dirichlet Problem on a Ball. Solve the n-dimensional Laplace/Poisson equation on
the ball with Dirichlet boundary conditions.
Proof. Use the method of reflection to construct Green’s function.
Let Ω = {x ∈ Rn : |x| < a}. For ξ ∈ Ω, define ξ∗ = a2ξ
|ξ|2 as its reflection in ∂Ω; note
ξ∗
/∈ Ω.
|x − ξ∗
|
|x − ξ|
=
a
|ξ|
for |x| = a. ⇒ |x − ξ| =
|ξ|
a
|x − ξ∗
|. (17.2)
From (17.2) we conclude that for x ∈ ∂Ω (i.e. |x| = a),
K(x − ξ) =
⎧
⎨
⎩
1
2π log
|ξ|
a |x − ξ∗
| if n = 2
a
|ξ|
n−2
K(x − ξ∗
) if n ≥ 3.
(17.3)
Define for x, ξ ∈ Ω:
G(x, ξ) =
⎧
⎨
⎩
K(x − ξ) − 1
2π log |ξ|
a |x − ξ∗
| if n = 2
K(x − ξ) − a
|ξ|
n−2
K(x − ξ∗
) if n ≥ 3.
Since ξ∗ is not in Ω, the second terms on the RHS are harmonic
in x ∈ Ω. Moreover, by (17.3) we have G(x, ξ) = 0 if x ∈ ∂Ω.
Thus, G is the Green’s function for Ω.
u(ξ) =
∂Ω
∂G(x, ξ)
∂nx
g(x) dSx =
a2
− |ξ|2
aωn |x|=a
g(x)
|x − ξ|n
dSx.
17.2 The Fundamental Solution
Problem (F’99, #2). ➀ Given that Ka(x − y) and Kb(x − y) are the kernels for
the operators ( − aI)−1
and ( − bI)−1
on L2
(Rn
), where 0 < a < b, show that
( − aI)( − bI) has a fundamental solution of the form c1Ka + c2Kb.
➁ Use the preceding to find a fundamental solution for 2
− , when n = 3.
Proof. METHOD ❶:
➀
( − aI)u = f ( − bI)u = f
u = Ka f u = Kb f
fundamental solution ⇔ kernel
⇒ u = Kaf u = Kbf if u ∈ L2
,
( − aI)u = (−|ξ|2
− a)u = f ( − bI)u = (−|ξ|2
− b)u = f
⇒ u = −
1
(ξ2 + a)
f(ξ) u = −
1
(ξ2 + b)
f(ξ)
⇒ Ka = −
1
ξ2 + a
Kb = −
1
ξ2 + b
( − aI)( − bI)u = f,
2
− (a + b) + abI u = f,
u =
1
(ξ2 + a)(ξ2 + b)
f(ξ) = Knewf(ξ),
Knew =
1
(ξ2 + a)(ξ2 + b)
=
1
b − a
−
1
ξ2 + b
+
1
ξ2 + a
=
1
b − a
(Kb − Ka),
Knew =
1
b − a
(Kb − Ka),
c1 =
1
b − a
, c2 = −
1
b − a
.
➁ n = 3 is not relevant (may be used to assume Ka, Kb ∈ L2
).
For 2 − , a = 0, b = 1 above, or more explicitly
( 2
− )u = f,
(ξ4
+ ξ2
)u = f,
u =
1
(ξ4 + ξ2)
f,
K =
1
(ξ4 + ξ2)
=
1
ξ2(ξ2 + 1)
= −
1
ξ2 + 1
+
1
ξ2
= K1 − K0.
METHOD ❷:
• For u ∈ C∞
0 (Rn) we have:
u(x) =
Rn
Ka(x − y) ( − aI) u(y) dy, ➀
u(x) =
Rn
Kb(x − y) ( − bI) u(y) dy. ➁
Let
u(x) = c1( − bI) φ(x), for ➀
u(x) = c2( − aI) φ(x), for ➁
for φ(x) ∈ C∞
0 (Rn
). Then,
c1( − bI)φ(x) =
Rn
Ka(x − y) ( − aI) c1( − bI)φ(y) dy,
c2( − aI)φ(x) =
Rn
Kb(x − y) ( − bI) c2( − aI)φ(y) dy.
We add two equations:
(c1 + c2) φ(x) − (c1b + c2a)φ(x) =
Rn
(c1Ka + c2Kb) ( − aI) ( − bI) φ(y) dy.
If c1 = −c2 and −(c1b + c2a) = 1, that is, c1 = 1
a−b, we have:
φ(x) =
Rn
1
a − b
(Ka − Kb) ( − aI) ( − bI) φ(y) dy,
which means that 1
a−b(Ka − Kb) is a fundamental solution of ( − aI)( − bI).
• 2 − = ( − 1) = ( − 0I)( − 1I).
( − 0I) has fundamental solution K0 = − 1
4πr in R3
.
To find K, a fundamental solution for ( − 1I), we need to solve for a radially
symmetric solution of
( − 1I)K = δ.
In spherical coordinates, in R3, the above expression may be written as:
K +
2
r
K − K = 0.
Let
K =
1
r
w(r),
K =
1
r
w −
1
r2
w,
K =
1
r
w −
2
r2
w +
2
r3
w.
Plugging these into , we obtain:
1
r
w −
1
r
w = 0, or
w − w = 0.
Thus,
w = c1er
+ c2e−r
,
K =
1
r
w(r) = c1
er
r
+ c2
e−r
r
.
Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let
Ω = Ω − B (0).
Note: ( − I)K(|x|) = 0 in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)):
Ω
K(|x|) v − v K(|x|) dx =
∂Ω
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
dS
=0, since v≡0 for x≥R
+
∂B (0)
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
dS
We add − Ω K(|x|) v dx + Ω v K(|x|) dx to LHS to get:
Ω
K(|x|)( − I)v − v ( − I)K(|x|)
= 0, in Ω
dx =
∂B (0)
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
dS.
lim
→0 Ω
K(|x|)( − I)v dx =
Ω
K(|x|)( − I)v dx. Since K(r) = c1
er
r
+ c2
e−r
r
is integrable at x = 0.
On ∂B (0), K(|x|) = K( ). Thus, 47
∂B (0)
K(|x|)
∂v
∂n
dS = K( )
∂B (0)
∂v
∂n
dS ≤ c1
e
+ c2
e−
4π 2
max ∇v → 0, as → 0.
∂B (0)
v(x)
∂K(|x|)
∂n
dS =
∂B (0)
1
− c1e + c2e−
+
1
2
c1e + c2e−
v(x) dS
=
1
− c1e + c2e−
+
1
2
c1e + c2e−
∂B (0)
v(x) dS
=
1
− c1e + c2e−
+
1
2
c1e + c2e−
∂B (0)
v(0) dS
+
1
− c1e + c2e−
+
1
2
c1e + c2e−
∂B (0)
[v(x) − v(0)] dS
→
1
2
c1e + c2e−
v(0) 4π 2
→ 4π(c1 + c2)v(0) = −v(0).
Thus, taking c1 = c2, we have c1 = c2 = − 1
8π , which gives
Ω
K(|x|)( − I)v dx = lim
→0 Ω
K(|x|)( − I)v dx = v(0),
47
In R3
, for |x| = ,
K(|x|) = K( ) = c1
e
+ c2
e−
.
∂K(|x|)
∂n
= −
∂K( )
∂r
= −c1
e
−
e
2
− c2 −
e−
−
e−
2
=
1
− c1e + c2e−
+
1
2
c1e + c2e−
,
since n points inwards. n points toward 0 on the sphere |x| = (i.e., n = −x/|x|).
that is K(r) = − 1
8π
er
r + e−r
r = − 1
4πr cosh(r) is the fundamental solution of
( − I).
By part (a), 1
a−b(Ka − Kb) is a fundamental solution of ( − aI)( − bI).
Here, the fundamental solution of ( − 0I)( − 1I) is 1
−1 (K0 − K) = − − 1
4πr +
1
4πr cosh(r) = 1
4πr 1 − cosh(r) .
Problem (F’91, #3). Prove that
−
1
4π
cos k|x|
|x|
is a fundamental solution for ( + k2
) in R3
where |x| = x2
1 + x2
2 + x2
3,
i.e. prove that for any smooth function f(x) with compact support
u(x) = −
1
4π R3
cos k|x − y|
|x − y|
f(y) dy
is a solution to ( + k2
)u = f.
Proof. For v ∈ C∞
0 (Rn
), we want to show that for K(|x|) = − 1
4π
cos k|x|
|x| ,
we have ( + k2)K = δ, i.e.
Rn
K(|x|) ( + k2
)v(x) dx = v(0).
Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let
Ω = Ω − B (0).
( + k2
)K(|x|) = 0 in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)):
Ω
K(|x|) v − v K(|x|) dx =
∂Ω
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
dS
=0, since v≡0 for x≥R
+
∂B (0)
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
dS
We add Ω k2 K(|x|) v dx − Ω v k2 K(|x|) dx to LHS to get:
Ω
K(|x|)( + k2
)v − v ( + k2
)K(|x|)
= 0, in Ω
dx =
∂B (0)
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
dS.
lim
→0 Ω
K(|x|)( + k2
)v dx =
Ω
K(|x|)( + k2
)v dx. Since K(r) = −
cos kr
4πr
is integrable at x = 0.
On ∂B (0), K(|x|) = K( ). Thus, 48
∂B (0)
K(|x|)
∂v
∂n
dS = K( )
∂B (0)
∂v
∂n
dS ≤ −
cos k
4π
4π 2
max ∇v → 0, as → 0.
48
In R3
, for |x| = ,
K(|x|) = K( ) = −
cos k
4π
.
∂K(|x|)
∂n
= −
∂K( )
∂r
=
1
4π
−
k sin k
−
cos k
2
= −
1
4π
k sin k +
cos k
,
since n points inwards. n points toward 0 on the sphere |x| = (i.e., n = −x/|x|).
∂B (0)
v(x)
∂K(|x|)
∂n
dS =
∂B (0)
−
1
4π
k sink +
cos k
v(x) dS
= −
1
4π
k sink +
cos k
∂B (0)
v(x) dS
= −
1
4π
k sink +
cos k
∂B (0)
v(0) dS −
1
4π
k sin k +
cos k
∂B (0)
[v(x) − v(0)] dS
= −
1
4π
k sink +
cos k
v(0) 4π 2
−
1
4π
k sin k +
cos k
[v(x) − v(0)] 4π 2
→0, (v is continuous)
→ − cos k v(0) → −v(0).
Thus,
Ω
K(|x|)( + k2
)v dx = lim
→0 Ω
K(|x|)( + k2
)v dx = v(0),
that is, K(r) = − 1
4π
cos kr
r is the fundamental solution of + k2
.
Problem (F’97, #2). Let u(x) be a solution of the Helmholtz equation
u + k2
u = 0 x ∈ R3
satisfying the “radiation” conditions
u = O
1
r
,
∂u
∂r
− iku = O
1
r2
, |x| = r → ∞.
Prove that u ≡ 0.
Hint: A fundamental solution to the Helmholtz equation is 1
4πr eikr.
Use the Green formula.
Proof. Denote K(|x|) = 1
4πr eikr
, a fundamental solution. Thus, ( + k2
)K = δ.
Let x0 be any point and Ω = BR(x0); for small > 0 let
Ω = Ω − B (x0).
( + k2
)K(|x|) = 0 in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (x0)):
Ω
u ( + k2
)K − K( + k2
)u dx
= 0
=
∂Ω
u
∂K
∂n
− K
∂u
∂n
dS +
∂B (x0)
u
∂K
∂n
− K
∂u
∂n
dS
→u(x0), as →0
.
(It can be shown by the method previously used that the integral over B (x0) ap-
proaches u(x0) as → 0.) Taking the limit when → 0, we obtain
−u(x0) =
∂Ω
u
∂K
∂n
− K
∂u
∂n
dS =
∂Ω
u
∂
∂r
eik|x−x0|
4π|x − x0|
−
eik|x−x0|
4π|x − x0|
∂u
∂r
dS
=
∂Ω
u
∂
∂r
eik|x−x0|
4π|x − x0|
− ik
eik|x−x0|
4π|x − x0|
= O( 1
|x|2 ); (can be shown)
−
eik|x−x0|
4π|x − x0|
∂u
∂r
− iku dS
= O
1
R
· O
1
R2
· 4πR2
− O
1
R
· O
1
R2
· 4πR2
= 0.
Taking the limit when R → ∞, we get u(x0) = 0.
Problem (S’02, #1). a) Find a radially symmetric solution, u, to the equation in
R2,
u =
1
2π
log |x|,
and show that u is a fundamental solution for 2
, i.e. show
φ(0) =
R2
u 2
φ dx
for any smooth φ which vanishes for |x| large.
b) Explain how to construct the Green’s function for the following boundary value in
a bounded domain D ⊂ R2 with smooth boundary ∂D
w = 0 and
∂w
∂n
= 0 on ∂D,
2
w = f in D.
Proof. a) Rewriting the equation in polar coordinates, we have
u =
1
r
rur r
+
1
r2
uθθ =
1
2π
log r.
For a radially symmetric solution u(r), we have uθθ = 0. Thus,
1
r
rur r
=
1
2π
log r,
rur r
=
1
2π
r log r,
rur =
1
2π
r log r dr =
r2 log r
4π
−
r2
8π
,
ur =
r log r
4π
−
r
8π
,
u =
1
4π
r log r dr −
1
8π
r dr =
1
8π
r2
log r − 1 .
u(r) =
1
8π
r2
log r − 1 .
We want to show that u defined above is a fundamental solution of 2
for n = 2. That
is
R2
u 2
v dx = v(0), v ∈ C∞
0 (Rn
).
See the next page that shows that u defined as u(r) = 1
8π r2
log r is the
Fundamental Solution of 2
. (The − 1
8π r2
term does not play any role.)
In particular, the solution of
2
ω = f(x),
is given by
ω(x) =
R2
u(x − y) 2
ω(y) dy =
1
8π R2
|x − y|2
log |x − y| − 1 f(y) dy.
b) Let
K(x − ξ) =
1
8π
|x − ξ|2
log |x − ξ| − 1 .
We use the method of images to construct the Green’s function.
Let G(x, ξ) = K(x − ξ) + ω(x). We need G(x, ξ) = 0 and ∂G
∂n (x, ξ) = 0 for x ∈ ∂Ω.
Consider wξ(x) with 2wξ(x) = 0 in Ω, wξ(x) = −K(x − ξ) and
∂wξ
∂n (x) = −∂K
∂n (x − ξ)
on ∂Ω. Note, we can find the Greens function for the upper-half plane, and then
make a conformal map onto the domain.
Problem (S’97, #6). Show that the fundamental solution of 2
in R2
is given by
V (x1, x2) =
1
8π
r2
ln(r), r = |x − ξ|,
and write the solution of
2
w = F(x1, x2).
Hint: In polar coordinates, = 1
r
∂
∂r (r ∂
∂r )+ 1
r2
∂2
∂θ2 ; for example, V = 1
2π (1+ln(r)).
Proof. Notation: x = (x1, x2). We have
V (x) =
1
8π
r2
log(r),
In polar coordinates: (here, Vθθ = 0)
V =
1
r
rVr r
=
1
r
r
1
8π
r2
log(r)
r r
=
1
8π
1
r
r 2r log(r) + r
r
=
1
8π
1
r
2r2
log(r) + r2
r
=
1
8π
1
r
4r + 4r log r
=
1
2π
(1 + log r).
The fundamental solution V (x) for 2 is the distribution satisfying: 2V (r) = δ(r).
2
V = ( V ) =
1
2π
(1 + log r) =
1
2π
(1 + log r) =
1
2π
1
r
r(1 + log r)r r
=
1
2π
1
r
r
1
r r
=
1
2π
1
r
(1)r = 0 for r = 0.
Thus, 2V (r) = δ(r) ⇒ V is the fundamental solution.
The approach above is not rigorous. See the next page that shows that
V defined above is the Fundamental Solution of 2
.
The solution of
2
ω = F(x),
is given by
ω(x) =
R2
V (x − y) 2
ω(y) dy =
1
8π R2
|x − y|2
log |x − y| F(y) dy.
Show that the Fundamental Solution of 2
in R2
is given by:
K(x) =
1
8π
r2
ln(r), r = |x − ξ|, (17.4)
Proof. For v ∈ C∞
0 (Rn), we want to show
Rn
K(|x|) 2
v(x) dx = v(0).
Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let
Ω = Ω − B (0).
K(|x|) is biharmonic ( 2
K(|x|) = 0) in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪
∂B (0)):
Ω
K(|x|) 2
v dx =
∂Ω
K(|x|)
∂ v
∂n
− v
∂ K(|x|)
∂n
ds +
∂Ω
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
ds
=0, since v≡0 for x≥R
+
∂B (0)
K(|x|)
∂ v
∂n
− v
∂ K(|x|)
∂n
ds +
∂B (0)
K(|x|)
∂v
∂n
− v
∂K(|x|)
∂n
ds.
lim
→0 Ω
K(|x|) 2
v dx =
Ω
K(|x|) v2
dx. Since K(r) is integrable at x = 0.
On ∂B (0), K(|x|) = K( ). Thus, 49
∂B (0)
K(|x|)
∂ v
∂n
dS = K( )
∂B (0)
∂ v
∂n
dS ≤ K( ) ωn
1
max
x∈Ω
∇( v)
=
1
8π
2
log( ) ωn max
x∈Ω
∇( v) → 0, as → 0.
∂B (0)
v(x)
∂ K(|x|)
∂n
dS =
∂B (0)
−
1
2π
v(x) dS
=
∂B (0)
−
1
2π
v(0) dS +
∂B (0)
−
1
2π
[v(x) − v(0)] dS
= −
1
2π
v(0) 2π − max
x∈∂B (0)
v(x) − v(0)
→0, (v is continuous)
= −v(0).
∂B (0)
K(|x|)
∂v
∂n
dS = K( )
∂B (0)
∂v
∂n
dS ≤
1
2π
(1 + log ) 2π max
x∈Ω
|∇v| → 0, as → 0.
∂B (0)
v
∂K(|x|)
∂n
dS =
∂B (0)
−
1
4π
log −
1
8π
v(x) dS
≤
4π
log +
1
2
· 2π max
x∈∂B (0)
| v| → 0, as → 0.
49
Note that for |x| = ,
K(|x|) = K( ) =
1
8π
2
log , K =
1
2π
(1 + log ),
∂K(|x|)
∂n
= −
∂K( )
∂r
= −
1
4π
log −
1
8π
,
∂ K
∂n
= −
∂ K
∂r
= −
1
2π
.
⇒
Ω
K(|x|) 2
v dx = lim
→0 Ω
K(|x|) 2
v dx = v(0).
17.3 Radial Variables
Problem (F’99, #8). Let u = u(x, t) solve the following PDE in two spatial dimen-
sions
− u = 1
for r < R(t), in which r = |x| is the radial variable, with boundary condition
u = 0
on r = R(t). In addition assume that R(t) satisfies
dR
dt
= −
∂u
∂r
(r = R)
with initial condition R(0) = R0.
a) Find the solution u(x, t).
b) Find an ODE for the outer radius R(t), and solve for R(t).
Proof. a) Rewrite the equation in polar coordinates:
− u = −
1
r
(rur)r +
1
r2
uθθ = 1.
For a radially symmetric solution u(r), we have uθθ = 0. Thus,
1
r
(rur)r = −1,
(rur)r = −r,
rur = −
r2
2
+ c1,
ur = −
r
2
+
c1
r
,
u(r, t) = −
r2
4
+ c1 log r + c2.
Since we want u to be defined for r = 0, we have c1 = 0. Thus,
u(r, t) = −
r2
4
+ c2.
Using boundary conditions, we have
u(R(t), t) = −
R(t)2
4
+ c2 = 0 ⇒ c2 =
R(t)2
4
. Thus,
u(r, t) = −
r2
4
+
R(t)2
4
.
b) We have
u(r, t) = −
r2
4
+
R(t)2
4
,
∂u
∂r
= −
r
2
,
dR
dt
= −
∂u
∂r
(r = R) =
R
2
, (from )
dR
R
=
dt
2
,
log R =
t
2
,
R(t) = c1e
t
2 , R(0) = c1 = R0. Thus,
Partial Differential Equations Igor Yanovsky, 2005 217
R(t) = R0e
t
2 .
Problem (F’01, #3). Let u = u(x, t) solve the following PDE in three spatial di-
mensions
u = 0
for R1 < r < R(t), in which r = |x| is the radial variable, with boundary conditions
u(r = R(t), t) = 0, and u(r = R1, t) = 1.
In addition assume that R(t) satisfies
dR
dt
= −
∂u
∂r
(r = R)
with initial condition R(0) = R0 in which R0 > R1.
a) Find the solution u(x, t).
b) Find an ODE for the outer radius R(t).
Proof. a) Rewrite the equation in spherical coordinates (n = 3, radial functions):
u =
∂2
∂r2
+
2
r
∂
∂r
u =
1
r2
(r2
ur)r = 0.
(r2
ur)r = 0,
r2
ur = c1,
ur =
c1
r2
,
u(r, t) = −
c1
r
+ c2.
Using boundary conditions, we have
u(R(t), t) = −
c1
R(t)
+ c2 = 0 ⇒ c2 =
c1
R(t)
,
u(R1, t) = −
c1
R1
+ c2 = 1.
This gives
c1 =
R1R(t)
R1 − R(t)
, c2 =
R1
R1 − R(t)
.
u(r, t) = −
R1R(t)
R1 − R(t)
·
1
r
+
R1
R1 − R(t)
.
b) We have
u(r, t) = −
R1R(t)
R1 − R(t)
·
1
r
+
R1
R1 − R(t)
,
∂u
∂r
=
R1R(t)
R1 − R(t)
·
1
r2
,
dR
dt
= −
∂u
∂r
(r = R) = −
R1R(t)
R1 − R(t)
·
1
R(t)2
= −
R1
(R1 − R(t)) R(t)
(from )
Thus, an ODE for the outer radius R(t) is
dR
dt = R1
(R(t)−R1) R(t),
R(0) = R0, R0 > R1.
Problem (S’02, #3). Steady viscous flow in a cylindrical pipe is described by the
equation
(u · ∇)u +
1
ρ
∇p −
η
ρ
u = 0
on the domain −∞ < x1 < ∞, x2
2 +x2
3 ≤ R2
, where u = (u1, u2, u3) = (U(x2, x3), 0, 0)
is the velocity vector, p(x1, x2, x3) is the pressure, and η and ρ are constants.
a) Show that ∂p
∂x1
is a constant c, and that U = c/η.
b) Assuming further that U is radially symmetric and U = 0 on the surface of the pipe,
determine the mass Q of fluid passing through a cross-section of pipe per unit time in
terms of c, ρ, η, and R. Note that
Q = ρ
{x2
2+x2
3≤R2}
U dx2dx3.
Proof. a) Since u = (u1, u2, u3) = (U(x2, x3), 0, 0), we have
(u · ∇)u = (u1, u2, u3) ·
∂u1
∂x1
,
∂u2
∂x2
,
∂u3
∂x3
= (U(x2, x3), 0, 0) · (0, 0, 0) = 0.
Thus,
1
ρ
∇p −
η
ρ
u = 0,
∇p = η u,
∂p
∂x1
,
∂p
∂x2
,
∂p
∂x3
= η( u1, u2, u3),
∂p
∂x1
,
∂p
∂x2
,
∂p
∂x3
= η(Ux2x2 + Ux3x3 , 0, 0).
We can make the following observations:
∂p
∂x1
= η (Ux2x2 + Ux3x3 )
indep. of x1
,
∂p
∂x2
= 0 ⇒ p = f(x1, x3),
∂p
∂x3
= 0 ⇒ p = g(x1, x2).
Thus, p = h(x1). But ∂p
∂x1
is independent of x1. Therefore, ∂p
∂x1
= c.
∂p
∂x1
= η U,
U =
1
η
∂p
∂x1
=
c
η
.
b) Cylindrical Laplacian in R3
for radial functions is
U =
1
r
rUr r
,
1
r
rUr r
=
c
η
,
rUr r
=
cr
η
,
rUr =
cr2
2η
+ c1,
Ur =
cr
2η
+
c1
r
.
For Ur to stay bounded for r = 0, we need c1 = 0. Thus,
Ur =
cr
2η
,
U =
cr2
4η
+ c2,
0 = U(R) =
cR2
4η
+ c2,
⇒ U =
cr2
4η
−
cR2
4η
=
c
4η
(r2
− R2
).
Q = ρ
{x2
2+x2
3≤R2}
U dx2dx3 =
cρ
4η
2π
0
R
0
(r2
− R2
) rdrdθ = −
cρ
4η
2π
0
R4
4
dθ
= −
cρR4π
8η
.
Note that Q < 0 here only because the sign of c = ∂p/∂x₁ was left arbitrary: for flow in the +x₁ direction the pressure decreases downstream, so c < 0 and Q = −cρπR⁴/(8η) > 0 (Poiseuille's law).
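A quick numerical sanity check of the flux formula (an added sketch; the parameter values are illustrative only):

```python
import numpy as np

c, rho, eta, R = -2.0, 1.0, 0.5, 1.0           # illustrative values; c < 0 for flow in +x1
r = np.linspace(0.0, R, 200001)
U = c / (4 * eta) * (r**2 - R**2)

Q_numeric = rho * np.trapz(2 * np.pi * r * U, r)   # cross-section integral in polar form
Q_formula = -c * rho * np.pi * R**4 / (8 * eta)
print(Q_numeric, Q_formula)                        # the two values agree
```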
17.4 Weak Solutions
Problem (S’98, #2).
A function u ∈ H2
0 (Ω) is a weak solution of the biharmonic equation
⎧
⎪⎨
⎪⎩
2u = f in Ω
u = 0 on ∂Ω
∂u
∂n = 0 on ∂Ω
provided
Ω
u v dx =
Ω
fv dx
for all test functions v ∈ H2
0 (Ω). Prove that for each f ∈ L2
(Ω), there exists a unique
weak solution for this problem. Here, H2
0 (Ω) is the closure of all smooth functions in
Ω which vanish on the boundary and with finite H2
norm: ||u||2
2 = Ω(u2
xx + u2
xy +
u2
yy) dxdy < ∞.
Hint: use Lax-Milgram lemma.
Proof. Multiply the equation by v ∈ H2
0 (Ω) and integrate over Ω:
2
u = f,
Ω
2
u v dx =
Ω
f v dx,
∂Ω
∂ u
∂n
v ds −
∂Ω
u
∂v
∂n
ds
= 0
+
Ω
u v dx =
Ω
f v dx,
Ω
u v dx
a(u,v)
=
Ω
f v dx
L(v)
.
Denote: V = H2
0 (Ω). Check the following conditions:
❶ a(·, ·) is continuous: ∃γ > 0, s.t. |a(u, v)| ≤ γ||u||V ||v||V , ∀u, v ∈ V ;
❷ a(·, ·) is V-elliptic: ∃α > 0, s.t. a(v, v) ≥ α||v||2
V , ∀v ∈ V ;
❸ L(·) is continuous: ∃Λ > 0, s.t. |L(v)| ≤ Λ||v||V , ∀v ∈ V.
We have 50
❶ |a(u, v)|2
=
Ω
u v dx
2
≤
Ω
( u)2
dx
Ω
( v)2
dx ≤ ||u||2
H2
0(Ω)||v||2
H2
0(Ω).
❷ a(v, v) =
Ω
( v)2
dx ≥ ||v||H2
0(Ω).
❸ |L(v)| =
Ω
f v dx ≤
Ω
|f| |v| dx ≤
Ω
f2
dx
1
2
Ω
v2
dx
1
2
= ||f||L2(Ω)||v||L2(Ω) ≤ ||f||L2(Ω)
Λ
||v||H2
0(Ω).
Thus, by Lax-Milgram theorem, there exists a weak solution u ∈ H2
0 (Ω).
Also, we can prove the stability result.
α||u||2
H2
0(Ω) ≤ a(u, u) = |L(u)| ≤ Λ||u||H2
0(Ω),
⇒ ||u||H2
0(Ω) ≤
Λ
α
.
Let u1, u2 be two solutions so that
a(u1, v) = L(v),
a(u2, v) = L(v)
for all v ∈ V . Subtracting these two equations, we see that:
a(u1 − u2, v) = 0 ∀v ∈ V.
Applying the stability estimate (with L ≡ 0, i.e. Λ = 0), we conclude that
||u1 − u2||H2
0 (Ω) = 0, i.e. u1 = u2.
50
Cauchy-Schwarz Inequality:
|(u, v)| ≤ ||u||||v|| in any norm, for example |uv|dx ≤ ( u2
dx)
1
2 ( v2
dx)
1
2 ;
|a(u, v)| ≤ a(u, u)
1
2 a(v, v)
1
2 ;
|v|dx = |v| · 1 dx = ( |v|2
dx)
1
2 ( 12
dx)
1
2 .
Poincare Inequality:
||v||H2(Ω) ≤ C
Ω
( v)2
dx.
Green’s formula:
Ω
( u)2
dx =
Ω
(u2
xx + u2
yy + 2uxxuyy) dxdy =
Ω
(u2
xx + u2
yy − 2uxxy uy) dxdy =
Ω
(u2
xx + u2
yy + 2|uxy|2
) dxdy ≥ ||u||2
H2
0(Ω).
Partial Differential Equations Igor Yanovsky, 2005 223
17.5 Uniqueness
Problem. The solution of the Robin problem
∂u
∂n
+ αu = β, x ∈ ∂Ω
for the Laplace equation is unique when α > 0 is a constant.
Proof. Let u1 and u2 be two solutions of the Robin problem. Let w = u1 − u2. Then
w = 0 in Ω,
∂w
∂n
+ αw = 0 on ∂Ω.
Consider Green’s formula:
Ω
∇u · ∇v dx =
∂Ω
v
∂u
∂n
ds −
Ω
v u dx.
Setting w = u = v gives
Ω
|∇w|2
dx =
∂Ω
w
∂w
∂n
ds −
Ω
w w dx
=0
.
Boundary condition gives
Ω
|∇w|2
dx
≥0
= −
∂Ω
αw2
ds
≤0
.
Thus, w ≡ 0, and u1 ≡ u2. Hence, the solution to the Robin problem is unique.
Problem. Suppose q(x) ≥ 0 for x ∈ Ω and consider solutions u ∈ C2
(Ω) ∩ C1
(Ω) of
u − q(x)u = 0 in Ω.
Establish uniqueness theorems for
a) the Dirichlet problem: u(x) = g(x), x ∈ ∂Ω;
b) the Neumann problem: ∂u/∂n = h(x), x ∈ ∂Ω.
Proof. Let u1 and u2 be two solutions of the Dirichlet or Neumann problem.
Let w = u1 − u2. Then
w − q(x)w = 0 in Ω,
w = 0 or
∂w
∂n
= 0 on ∂Ω.
Consider Green’s formula:
Ω
∇u · ∇v dx =
∂Ω
v
∂u
∂n
ds −
Ω
v u dx.
Setting w = u = v gives
Ω
|∇w|2
dx =
∂Ω
w
∂w
∂n
ds
=0, Dirichlet or Neumann
−
Ω
w w dx.
Partial Differential Equations Igor Yanovsky, 2005 224
Ω
|∇w|2
dx
≥0
= −
Ω
q(x)w2
dx
≤0
.
Thus, w ≡ 0, and u1 ≡ u2. Hence, the solution to the Dirichlet and Neumann problems
are unique.
Problem (F’02, #8; S’93, #5).
Let D be a bounded domain in R3. Show that a solution of the boundary value problem
2
u = f in D,
u = u = 0 on ∂D
is unique.
Proof. Method I: Maximum Principle. Let u1, u2 be two solutions of the boundary
value problem. Define w = u1 − u2. Then w satisfies
2
w = 0 in D,
w = w = 0 on ∂D.
So w is harmonic and thus achieves min and max on ∂D ⇒ w ≡ 0.
So w is harmonic, but w ≡ 0 on ∂D ⇒ w ≡ 0. Hence, u1 = u2.
Method II: Green’s Identities. Multiply the equation by w and integrate:
w 2
w = 0,
Ω
w 2
w dx = 0,
∂Ω
w
∂( w)
∂n
ds
=0
−
Ω
∇w∇( )w dx = 0,
−
∂Ω
∂w
∂n
w ds
=0
+
Ω
( w)2
dx = 0.
Thus, w ≡ 0. Now, multiply w = 0 by w. We get
Ω
|∇w|2
dx = 0.
Thus, ∇w = 0 and w is a constant. Since w = 0 on ∂Ω, we have w ≡ 0.
Problem (F’97, #6).
a) Let u(x) ≥ 0 be continuous in closed bounded domain Ω ⊂ Rn, u is continuous in
Ω,
u = u2
and u|∂Ω = 0.
Prove that u ≡ 0.
b) What can you say about u(x) when the condition u(x) ≥ 0 in Ω is dropped?
Partial Differential Equations Igor Yanovsky, 2005 225
Proof. a) Multiply the equation by u and integrate:
u u = u3
,
Ω
u u dx =
Ω
u3
dx,
∂Ω
u
∂u
∂n
ds
=0
−
Ω
|∇u|2
dx =
Ω
u3
dx,
Ω
u3
+ |∇u|2
dx = 0.
Since u(x) ≥ 0, we have u ≡ 0.
b) If we don’t know that u(x) ≥ 0, then u can not be nonnegative on the entire
domain Ω. That is, u(x) < 0, on some (or all) parts of Ω. If u is nonnegative on Ω,
then u ≡ 0.
Partial Differential Equations Igor Yanovsky, 2005 226
Problem (W’02, #5). Consider the boundary value problem
u +
n
k=1
αk
∂u
∂xk
− u3
= 0 in Ω,
u = 0 on ∂Ω,
where Ω is a bounded domain in Rn with smooth boundary. If the αk’s are constants,
and u(x) has continuous derivatives up to second order, prove that u must vanish
identically.
Proof. Multiply the equation by u and integrate:
u u +
n
k=1
αk
∂u
∂xk
u − u4
= 0,
Ω
u u dx +
Ω
n
k=1
αk
∂u
∂xk
u dx −
Ω
u4
dx = 0,
∂Ω
u
∂u
∂n
ds
= 0
−
Ω
|∇u|2
dx +
Ω
n
k=1
αk
∂u
∂xk
u dx
➀
−
Ω
u4
dx = 0.
We will show that ➀ = 0.
Ω
αk
∂u
∂xk
u dx =
∂Ω
αk u2
ds
= 0
−
Ω
αk u
∂u
∂xk
dx,
⇒ 2
Ω
αk
∂u
∂xk
u dx = 0,
⇒
Ω
n
k=1
αk
∂u
∂xk
u dx = 0.
Thus, we have
−
Ω
|∇u|2
dx −
Ω
u4
dx = 0,
Ω
|∇u|2
+
Ω
u4
dx = 0.
Hence, |∇u|2
= 0 and u4
= 0. Thus, u ≡ 0.
Note that
Ω
n
k=1
αk
∂u
∂xk
u dx =
Ω
α · ∇u u dx =
∂Ω
α · nu2
ds
= 0
−
Ω
α · ∇u u dx,
and thus,
Ω
α · ∇u u dx = 0.
Partial Differential Equations Igor Yanovsky, 2005 227
Problem (W’02, #9). Let D = {x ∈ R2
: x1 ≥ 0, x2 ≥ 0}, and assume that f is
continuous on D and vanishes for |x| > R.
a) Show that the boundary value problem
u = f in D,
u(x1, 0) =
∂u
∂x1
(0, x2) = 0
can have only one bounded solution.
b) Find an explicit Green’s function for this boundary value problem.
Proof. a) Let u1, u2 be two solutions of the boundary value problem. Define w =
u1 − u2. Then w satisfies
w = 0 in D,
w(x1, 0) =
∂w
∂x1
(0, x2) = 0.
Consider Green’s formula:
D
∇u · ∇v dx =
∂D
v
∂u
∂n
ds −
D
v u dx.
Setting w = u = v gives
D
|∇w|2
dx =
∂D
w
∂w
∂n
ds −
D
w w dx,
D
|∇w|2
dx =
Rx1
w
∂w
∂n
ds +
Rx2
w
∂w
∂n
ds +
|x|>R
w
∂w
∂n
ds −
D
w w dx
=
Rx1
w(x1, 0)
=0
∂w
∂x2
ds +
Rx2
w(0, x2)
∂w
∂x1
=0
ds +
|x|>R
w
=0
∂w
∂n
ds −
D
w w
=0
dx,
D
|∇w|2
dx = 0 ⇒ |∇w|2
= 0 ⇒ w = const.
Since w(x1, 0) = 0 ⇒ w ≡ 0. Thus, u1 = u2.
b) The similar problem is solved in the appropriate section (S’96, #3).
Notice whenever you are on the boundary with variable x,
|x − ξ(0)
| =
|x − ξ(1)||x − ξ(3)|
|x − ξ(2)|
.
So, G(x, ξ) =
1
2π
log |x − ξ| − log
|x − ξ(1)||x − ξ(3)|
|x − ξ(2)|
is the Green’s function.
Partial Differential Equations Igor Yanovsky, 2005 228
Problem (F’98, #4). In two dimensions x = (x, y), define the set Ωa as
Ωa = Ω+
∪ Ω−
in which
Ω+
= {|x − x0| ≤ a} ∩ {x ≥ 0}
Ω−
= {|x + x0| ≤ a} ∩ {x ≤ 0} = −Ω+
and x0 = (1, 0). Note that Ωa consists of two components when 0 < a < 1
and a single component when a > 1. Consider the Neumann problem
∇2
u = f, x ∈ Ωa
∂u/∂n = 0, x ∈ ∂Ωa
in which
Ω+
f(x) dx = 1
Ω−
f(x) dx = −1
a) Show that this problem has a solution for 1 < a, but not for 0 < a < 1.
(You do not need to construct the solution, only demonstrate solveability.)
b) Show that maxΩa |∇u| → ∞ as a → 1 from above. (Hint: Denote L to be
the line segment L = Ω+ ∩ Ω−, and note that its length |L| goes to 0 as a → 1.)
Proof. a) We use the Green’s identity. For 1 < a,
0 =
∂Ωa
∂u
∂n
ds =
Ωa
u dx =
Ωa
f(x) dx
=
Ω+
f(x) dx +
Ω−
f(x) dx = 1 − 1 = 0.
Thus, the problem has a solution for 1 < a.
For 0 < a < 1, Ω+
and Ω−
are disjoint. Consider Ω+
:
0 =
∂Ω+
∂u
∂n
ds =
Ω+
u dx =
Ω+
f(x) dx = 1,
0 =
∂Ω−
∂u
∂n
ds =
Ω−
u dx =
Ω−
f(x) dx = −1.
We get contradictions.
Thus, the solution does not exist for 0 < a < 1.
Partial Differential Equations Igor Yanovsky, 2005 229
b) Using the Green’s identity, we have: (n+ is the unit normal to Ω+)
Ω+
u dx =
∂Ω+
∂u
∂n+
ds =
L
∂u
∂n+
ds,
Ω−
u dx =
∂Ω−
∂u
∂n−
ds =
L
∂u
∂n−
ds = −
L
∂u
∂n+
ds.
Ω+
u dx −
Ω−
u dx = 2
L
∂u
∂n+
ds,
Ω+
f(x) dx −
Ω−
f(x) dx = 2
L
∂u
∂n+
ds.
2 = 2
L
∂u
∂n+
ds,
1 =
L
∂u
∂n+
ds ≤
L
∂u
∂n+
ds ≤
L
∂u
∂n+
2
+
∂u
∂τ
2
≤ |L| max
L
|∇u| ≤ |L| max
Ωa
|∇u|.
Thus,
max
Ωa
|∇u| ≥
1
|L|
.
As a → 1 (L → 0) ⇒ maxΩa |∇u| → ∞.
Partial Differential Equations Igor Yanovsky, 2005 230
Problem (F’00, #1). Consider the Dirichlet problem in a bounded domain D ⊂ Rn
with smooth boundary ∂D,
u + a(x)u = f(x) in D,
u = 0 on ∂D.
a) Assuming that |a(x)| is small enough, prove the uniqueness of the classical solution.
b) Prove the existence of the solution in the Sobolev space H1(D) assuming that f ∈
L2(D).
Note: Use Poincare inequality.
Proof. a) By Poincare Inequality, for any u ∈ C1
0 (D), we have ||u||2
2 ≤ C||∇u||2
2.
Consider two solutions of the Dirichlet problem above. Let w = u1 − u2. Then, w
satisfies
w + a(x)w = 0 in D,
w = 0 on ∂D.
w w + a(x)w2
= 0,
w w dx + a(x)w2
dx = 0,
− |∇w|2
dx + a(x)w2
dx = 0,
a(x)w2
dx = |∇w|2
dx ≥
1
C
w2
dx, (by Poincare inequality)
a(x)w2
dx −
1
C
w2
dx ≥ 0,
|a(x)| w2
dx −
1
C
w2
dx ≥ 0,
|a(x)| −
1
C
w2
dx ≥ 0.
If |a(x)| < 1
C ⇒ w ≡ 0.
b) Consider
F(v, u) = −
Ω
(v u + a(x)vu) dx = −
Ω
vf(x) dx = F(v).
F(v) is a bounded linear functional on v ∈ H1,2
(D), D = Ω.
|F(v)| ≤ ||f||2||v||2 ≤ ||f||2C||v||H1,2(D)
So by Riesz representation, there exists a solution u ∈ H1,2
0 (D) of
− < u, v >=
Ω
v u + a(x)vu dx =
Ω
vf(x) dx = F(v) ∀v ∈ H1,2
0 (D).
Partial Differential Equations Igor Yanovsky, 2005 231
Problem (S’91, #8). Define the operator
Lu = uxx + uyy − 4(r2
+ 1)u
in which r2 = x2 + y2.
a) Show that ϕ = er2
satisfies Lϕ = 0.
b) Use this to show that the equation
Lu = f in Ω
∂u
∂n
= γ on ∂Ω
has a solution only if
Ω
ϕf dx =
∂Ω
ϕγ ds(x).
Proof. a) Expressing Laplacian in polar coordinates, we obtain:
Lu =
1
r
(rur)r − 4(r2
+ 1)u,
Lϕ =
1
r
(rϕr)r − 4(r2
+ 1)ϕ =
1
r
(2r2
er2
)r − 4(r2
+ 1)er2
=
1
r
(4rer2
+ 2r2
· 2rer2
) − 4r2
er2
− 4er2
= 0.
b) We have ϕ = er2
= ex2+y2
= ex2
ey2
. From part (a),
Lϕ = 0,
∂ϕ
∂n
= ∇ϕ · n = (ϕx, ϕy) · n = (2xex2
ey2
, 2yex2
ey2
) · n = 2er2
(x, y) · (−y, x) = 0.
51
Consider two equations:
Lu = u − 4(r2
+ 1)u,
Lϕ = ϕ − 4(r2
+ 1)ϕ.
Multiply the first equation by ϕ and the second by u and subtract the two equations:
ϕLu = ϕ u − 4(r2
+ 1)uϕ,
uLϕ = u ϕ − 4(r2
+ 1)uϕ,
ϕLu − uLϕ = ϕ u − u ϕ.
Then, we start from the LHS of the equality we need to prove and end up with RHS:
Ω
ϕf dx =
Ω
ϕLu dx =
Ω
(ϕLu − uLϕ) dx =
Ω
(ϕ u − u ϕ) dx
=
Ω
(ϕ
∂u
∂n
− u
∂ϕ
∂n
) ds =
Ω
ϕ
∂u
∂n
ds =
Ω
ϕγ ds.
51
The only shortcoming in the above proof is that we assume n = (−y, x), without giving an expla-
nation why it is so.
Partial Differential Equations Igor Yanovsky, 2005 232
17.6 Self-Adjoint Operators
Consider an mth-order differential operator
Lu =
|α|≤m
aα(x)Dα
u.
The integration by parts formula
Ω
uxk
v dx =
∂Ω
uvnk ds −
Ω
uvxk
dx n = (n1, . . ., nn) ∈ Rn
,
with u or v vanishing near ∂Ω is:
Ω
uxk
v dx = −
Ω
uvxk
dx.
We can repeat the integration by parts with any combination of derivatives
Dα = (∂/∂x1)α1 · · ·(∂/∂xn)αn:
Ω
(Dα
u)v dx = (−1)m
Ω
uDα
v dx, (m = |α|).
We have
Ω
(Lu) v dx =
Ω |α|≤m
aα(x)Dα
u v dx =
|α|≤m Ω
aα(x) v Dα
u dx
=
|α|≤m
(−1)|α|
Ω
Dα
(aα(x) v) u dx =
Ω |α|≤m
(−1)|α|
Dα
(aα(x) v)
L∗(v)
u dx
=
Ω
L∗
(v) u dx,
for all u ∈ Cm
(Ω) and v ∈ C∞
0 .
The operator
L∗
(v) =
|α|≤m
(−1)|α|
Dα
(aα(x) v)
is called the adjoint of L.
The operator is self-adjoint if L∗ = L.
Also, L is self-adjoint if 52
Ω
vL(u) dx =
Ω
uL(v) dx.
52
L = L∗
⇔ (Lu|v) = (u|L∗
v) = (u|Lv).
Partial Differential Equations Igor Yanovsky, 2005 233
Problem (F’92, #6).
Consider the Laplace operator in the wedge 0 ≤ x ≤ y with boundary conditions
∂f
∂x
= 0 on x = 0
∂f
∂x
− α
∂f
∂y
= 0 on x = y.
a) For which values of α is this operator self-adjoint?
b) For such a value of α, suppose that
f = e−r2/2
cos θ
with these boundary conditions. Evaluate
CR
∂
∂r
f ds
in which CR is the circular arc of radius R connecting the boundaries x = 0 and x = y.
Proof. a) We have
Lu = u = 0
∂u
∂x
= 0 on x = 0
∂u
∂x
− α
∂u
∂y
= 0 on x = y.
The operator L is self-adjoint if:
Ω
(u Lv − v Lu) dx = 0.
Ω
(u Lv − v Lu) dx =
Ω
(u v − v u) dx =
∂Ω
u
∂v
∂n
− v
∂u
∂n
ds
=
x=0
u
∂v
∂n
− v
∂u
∂n
ds +
x=y
u
∂v
∂n
− v
∂u
∂n
ds
=
x=0
u (∇v · n) − v (∇u · n) ds +
x=y
u (∇v · n) − v (∇u · n) ds
=
x=0
u (vx, vy) · (−1, 0) − v (ux, uy) · (−1, 0) ds
+
x=y
u (vx, vy) · (1/
√
2, −1/
√
2) − v (ux, uy) · (1/
√
2, −1/
√
2) ds
=
x=0
u (0, vy) · (−1, 0) − v (0, uy) · (−1, 0) ds
= 0
+
x=y
u (αvy, vy) · (1/
√
2, −1/
√
2) − v (αuy, uy) · (1/
√
2, −1/
√
2) ds
=
x=y
uvy
√
2
(α − 1) −
vuy
√
2
(α − 1) ds =
need
0.
Partial Differential Equations Igor Yanovsky, 2005 234
Thus, we need α = 1 so that L is self-adjoint.
b) We have α = 1. Using Green’s identity and results from part (a), (∂f
∂n = 0 on
x = 0 and x = y):
Ω
f dx =
∂Ω
∂f
∂n
ds =
∂CR
∂f
∂n
ds +
x=0
∂f
∂n
=0
ds +
x=y
∂f
∂n
=0
ds =
∂CR
∂f
∂r
ds.
Thus,
∂CR
∂f
∂r
ds =
Ω
f dx =
R
0
π
2
π
4
e−r2/2
cos θ r drdθ
= 1 −
1
√
2
R
0
e−r2/2
r dr = 1 −
1
√
2
(1 − e−R2/2
).
Partial Differential Equations Igor Yanovsky, 2005 235
Problem (F’99, #1). Suppose that u = 0 in the weak sense in Rn
and that there
is a constant C such that
{|x−y|<1}
|u(y)| dy < C, ∀x ∈ Rn
.
Show that u is constant.
Proof. Consider Green’s formula:
Ω
∇u · ∇v dx =
∂Ω
v
∂u
∂n
ds −
Ω
v u dx
For v = 1, we have
∂Ω
∂u
∂n
ds =
Ω
u dx.
Let Br(x0) be a ball in Rn
. We have
0 =
Br(x0)
u dx =
∂Br(x0)
∂u
∂n
ds = rn−1
|x|=1
∂u
∂r
(x0 + rx) ds
= rn−1
ωn
∂
∂r
1
ωn |x|=1
u(x0 + rx) ds.
Thus, 1
ωn |x|=1 u(x0 + rx) ds is independent of r. Hence, it is constant.
By continuity, as r → 0, we obtain the Mean Value property:
u(x0) =
1
ωn |x|=1
u(x0 + rx) ds.
If |x−y|<1 |u(y)| dy < C ∀x ∈ Rn, we have |u(x)| < C in Rn.
Since u is harmonic and bounded in Rn, u is constant by Liouville’s theorem. 53
53
Liouville’s Theorem: A bounded harmonic function defined on Rn
is a constant.
Partial Differential Equations Igor Yanovsky, 2005 236
Problem (S’01, #1). For bodies (bounded regions B in R3
) which are not perfectly
conducting one considers the boundary value problem
0 = ∇ · γ(x)∇u =
3
j=1
∂
∂xj
γ(x)
∂u
∂xj
u = f on ∂B.
The function γ(x) is the “local conductivity” of B and u is the voltage. We define
operator Λ(f) mapping the boundary data f to the current density at the boundary by
Λ(f) = γ(x)
∂u
∂n
,
and ∂/∂n is the inward normal derivative (this formula defines the current density).
a) Show that Λ is a symmetric operator, i.e. prove
∂B
gΛ(f) dS =
∂B
fΛ(g) dS.
b) Use the positivity of γ(x) > 0 to show that Λ is negative as an operator, i.e., prove
∂B
fΛ(f) dS ≤ 0.
Proof. a) Let
∇ · γ(x)∇u = 0 on Ω,
u = f on ∂Ω.
∇ · γ(x)∇v = 0 on Ω,
v = g on ∂Ω.
Λ(f) = γ(x)
∂u
∂n
, Λ(g) = γ(x)
∂v
∂n
.
Since ∂/∂n is inward normal derivative, Green’s formula is:
−
∂Ω
v
=g
γ(x)
∂u
∂n
dS −
Ω
∇v · γ(x)∇u dx =
Ω
v∇ · γ(x)∇u dx.
We have
∂Ω
gΛ(f) dS =
∂Ω
gγ(x)
∂u
∂n
dS = −
Ω
∇v · γ(x)∇u dx −
Ω
v ∇ · γ(x)∇u
=0
dx
=
∂Ω
uγ(x)
∂v
∂n
dS +
Ω
u ∇ · γ(x)∇v
=0
dx
=
∂Ω
fγ(x)
∂v
∂n
dS =
∂Ω
fΛ(g) dS.
b) We have γ(x) > 0.
∂Ω
fΛ(f) dS =
∂Ω
uγ(x)
∂u
∂n
dS = −
Ω
u ∇ · γ(x)∇u
=0
dx −
Ω
γ(x)∇u · ∇u dx
= −
Ω
γ(x)|∇u|2
≥0
≤ 0.
Partial Differential Equations Igor Yanovsky, 2005 237
Problem (S’01, #4). The Poincare Inequality states that for any bounded domain
Ω in Rn there is a constant C such that
Ω
|u|2
dx ≤ C
Ω
|∇u|2
dx
for all smooth functions u which vanish on the boundary of Ω.
a) Find a formula for the “best” (smallest) constant for the domain Ω in terms of the
eigenvalues of the Laplacian on Ω, and
b) give the best constant for the rectangular domain in R2
Ω = {(x, y) : 0 ≤ x ≤ a, 0 ≤ y ≤ b}.
Proof. a) Consider Green’s formula:
Ω
∇u · ∇v dx =
∂Ω
v
∂u
∂n
ds −
Ω
v u dx.
Setting u = v and with u vanishing on ∂Ω, Green’s formula becomes:
Ω
|∇u|2
dx = −
Ω
u u dx.
Expanding u in the eigenfunctions of the Laplacian, u(x) = anφn(x), the formula
above gives
Ω
|∇u|2
dx = −
Ω
∞
n=1
anφn(x)
∞
m=1
−λmamφm(x) dx =
∞
m,n=1
λmanam
Ω
φnφm dx
=
∞
n=1
λn|an|2
.
Also,
Ω
|u|2
dx =
Ω
∞
n=1
anφn(x)
∞
m=1
amφm(x) =
∞
n=1
|an|2
.
Comparing and , and considering that λn increases as n → ∞, we obtain
λ1
Ω
|u|2
dx = λ1
∞
n=1
|an|2
≤
∞
n=1
λn|an|2
=
Ω
|∇u|2
dx.
Ω
|u|2
dx ≤
1
λ1 Ω
|∇u|2
dx,
with C = 1/λ1.
b) For the rectangular domain Ω = {(x, y) : 0 ≤ x ≤ a, 0 ≤ y ≤ b} ⊂ R2, the
eigenvalues of the Laplacian are
λmn = π2 m2
a2
+
n2
b2
, m, n = 1, 2, . . ..
λ1 = λ11 = π2 1
a2
+
1
b2
,
⇒ C =
1
λ11
=
1
π2
1
( 1
a2 + 1
b2 )
.
Partial Differential Equations Igor Yanovsky, 2005 238
Partial Differential Equations Igor Yanovsky, 2005 239
Problem (S’01, #6). a) Let B be a bounded region in R3
with smooth boundary ∂B.
The “conductor” potential for the body B is the solution of Laplace’s equation outside
B
V = 0 in R3
/B
subject to the boundary conditions, V = 1 on ∂B and V (x) tends to zero as |x| → ∞.
Assuming that the conductor potential exists, show that it is unique.
b) The “capacity” C(B) of B is defined to be the limit of |x|V (x) as |x| → ∞. Show
that
C(B) = −
1
4π ∂B
∂V
∂n
dS,
where ∂B is the boundary of B and n is the outer unit normal to it (i.e. the normal
pointing “toward infinity”).
c) Suppose that B ⊂ B. Show that C(B ) ≤ C(B).
Proof. a) Let V1, V2 be two solutions of the boundary value problem. Define W =
V1 − V2. Then W satisfies
⎧
⎪⎨
⎪⎩
W = 0 in R3
/B
W = 0 on ∂B
W → 0 as |x| → ∞.
Consider Green’s formula:
B
∇u · ∇v dx =
∂B
v
∂u
∂n
ds −
B
v u dx.
Setting W = u = v gives
B
|∇W|2
dx =
∂B
W
=0
∂W
∂n
ds −
B
W W
=0
dx = 0.
Thus, |∇W|2 = 0 ⇒ W = const. Since W = 0 on ∂B, W ≡ 0, and V1 = V2.
b & c) For (b)&(c), see the solutions from Ralston’s homework (a few pages
down).
Partial Differential Equations Igor Yanovsky, 2005 240
Problem (W’03, #2). Let L be the second order differential operator L = − a(x)
in which x = (x1, x2, x3) is in the three-dimensional cube C = {0 < xi < 1, i = 1, 2, 3}.
Suppose that a > 0 in C. Consider the eigenvalue problem
Lu = λu for x ∈ C
u = 0 for x ∈ ∂C.
a) Show that all eigenvalues are negative.
b) If u and v are eigenfunctions for distinct eigenvalues λ and μ, show that u and v
are orthogonal in the appropriate product.
c) If a(x) = a1(x1) + a2(x2) + a3(x3) find an expression for the eigenvalues and eigen-
vectors of L in terms of the eigenvalues and eigenvectors of a set of one-dimensional
problems.
Proof. a) We have
u − a(x)u = λu.
Multiply the equation by u and integrate:
u u − a(x)u2
= λu2
,
Ω
u u dx −
Ω
a(x)u2
dx = λ
Ω
u2
dx,
∂Ω
u
∂u
∂n
ds
=0
−
Ω
|∇u|2
dx −
Ω
a(x)u2
dx = λ
Ω
u2
dx,
λ =
− Ω(|∇u|2 + a(x)u2) dx
Ω u2 dx
< 0.
b) Let λ, μ, be the eigenvalues and u, v be the corresponding eigenfunctions. We have
u − a(x)u = λu. (17.5)
v − a(x)v = μv. (17.6)
Multiply (17.5) by v and (17.6) by u and subtract equations from each other
v u − a(x)uv = λuv,
u v − a(x)uv = μuv.
v u − u v = (λ − μ)uv.
Integrating over Ω gives
Ω
v u − u v dx = (λ − μ)
Ω
uv dx,
∂Ω
v
∂u
∂n
− u
∂v
∂n
=0
dx = (λ − μ)
Ω
uv dx.
Since λ = μ, u and v are orthogonal on Ω.
Partial Differential Equations Igor Yanovsky, 2005 241
c) The three one-dimensional eigenvalue problems are:
u1x1x1
(x1) − a(x1)u1(x1) = λ1u1(x1),
u2x2x2
(x2) − a(x2)u2(x2) = λ2u2(x2),
u3x3x3
(x3) − a(x3)u3(x3) = λ3u3(x3).
We need to derive how u1, u2, u3 and λ1, λ2, λ3 are related to u and λ.
Partial Differential Equations Igor Yanovsky, 2005 242
17.7 Spherical Means
Problem (S’95, #4). Consider the biharmonic operator in R3
2
u ≡
∂2
∂x2
+
∂2
∂y2
+
∂2
∂z2
2
u.
a) Show that 2 is self-adjoint on |x| < 1 with the following boundary conditions on
|x| = 1:
u = 0,
u = 0.
Proof. a) We have
Lu = 2
u = 0
u = 0 on |x| = 1
u = 0 on |x| = 1.
The operator L is self-adjoint if:
Ω
(u Lv − v Lu) dx = 0.
Ω
(u Lv − v Lu) dx =
Ω
(u 2
v − v 2
u) dx
=
∂Ω
u
∂ v
∂n
ds
=0
−
Ω
∇u · ∇( v) dx −
∂Ω
v
∂ u
∂n
ds
=0
+
Ω
∇v · ∇( u) dx
= −
∂Ω
v
∂u
∂n
ds
=0
+
Ω
u v dx +
∂Ω
u
∂v
∂n
ds
=0
−
Ω
v u dx = 0.
Partial Differential Equations Igor Yanovsky, 2005 243
b) Denote |x| = r and define the averages
S(r) = (4πr2
)−1
|x|=r
u(x) ds,
V (r) =
4
3
πr3
−1
|x|≤r
u(x) dx.
Show that
d
dr
S(r) =
r
3
V (r).
Hint: Rewrite S(r) as an integral over the unit sphere before differentiating; i.e.,
S(r) = (4π)−1
|x |=1
u(rx ) dx .
c) Use the result of (b) to show that if u is biharmonic, i.e. 2
u = 0, then
S(r) = u(0) +
r2
6
u(0).
Hint: Use the mean value theorem for u.
b) Let x = x/|x|. We have 54
S(r) =
1
4πr2
|x|=r
u(x) dSr =
1
4πr2
|x |=1
u(rx ) r2
dS1 =
1
4π |x |=1
u(rx ) dS1.
dS
dr
=
1
4π |x |=1
∂u
∂r
(rx ) dS1 =
1
4π |x |=1
∂u
∂n
(rx ) dS1 =
1
4πr2
|x|=r
∂u
∂n
(x) dSr
=
1
4πr2
|x|≤r
u dx.
where we have used Green’s identity in the last equality. Also
r
3
V (r) =
1
4πr2
|x|≤r
u dx.
c) Since u is biharmonic (i.e. u is harmonic), u has a mean value property. We
have
d
dr
S(r) =
r
3
V (r) =
r
3
4
3
πr3
−1
|x|≤r
u(x) dx =
r
3
u(0),
S(r) =
r2
6
u(0) + S(0) = u(0) +
r2
6
u(0).
54
Change of variables:
Surface integrals: x = rx in R3
:
|x|=r
u(x) dS =
|x |=1
u(rx ) r2
dS1.
Volume integrals: ξ = rξ in Rn
:
|ξ |<r
h(x + ξ ) dξ =
|ξ|<1
h(x + rξ) rn
dξ.
Partial Differential Equations Igor Yanovsky, 2005 244
Partial Differential Equations Igor Yanovsky, 2005 245
Problem (S’00, #7). Suppose that u = u(x) for x ∈ R3
is biharmonic;
i.e. that 2u ≡ ( u) = 0. Show that
(4πr2
)−1
|x|=r
u(x) ds(x) = u(0) + (r2
/6) u(0)
through the following steps:
a) Show that for any smooth f,
d
dr |x|≤r
f(x) dx =
|x|=r
f(x) ds(x).
b) Show that for any smooth f,
d
dr
(4πr2
)−1
|x|=r
f(x) ds(x) = (4πr2
)−1
|x|=r
n · ∇f(x, y) ds
in which n is the outward normal to the circle |x| = r.
c) Use step (b) to show that
d
dr
(4πr2
)−1
|x|=r
f(x) ds(x) = (4πr2
)−1
|x|≤r
f(x) dx.
d) Combine steps (a) and (c) to obtain the final result.
Proof. a) We can express the integral in Spherical Coordinates: 55
|x|≤R
f(x) dx =
R
0
2π
0
π
0
f(φ, θ, r) r2
sinφ dφ dθ dr.
d
dr |x|≤R
f(x) dx =
d
dr
R
0
2π
0
π
0
f(φ, θ, r) r2
sinφ dφ dθ dr = ???
=
2π
0
π
0
f(φ, θ, r) R2
sinφ dφ dθ
=
|x|=R
f(x) dS.
55
Differential Volume in spherical coordinates:
d3
ω = ω2
sin φ dφ dθ dω.
Differential Surface Area on sphere:
dS = ω2
sin φ dφ dθ.
Partial Differential Equations Igor Yanovsky, 2005 246
b&c) We have
d
dr
1
4πr2
|x|=r
f(x) dS =
d
dr
1
4πr2
|x |=1
f(rx ) r2
dS1 =
1
4π
d
dr |x |=1
f(rx ) dS1
=
1
4π |x |=1
∂f
∂r
(rx ) dS1 =
1
4π |x |=1
∂f
∂n
(rx ) dS1
=
1
4πr2
|x|=r
∂f
∂n
(x) dS =
1
4πr2
|x|=r
∇f · n dS
=
1
4πr2
|x|≤r
f dx.
Green’s formula was used in the last equality.
Alternatively,
d
dr
1
4πr2
|x|=r
f(x) dS =
d
dr
1
4πr2
2π
0
π
0
f(φ, θ, r) r2
sinφ dφ dθ
=
d
dr
1
4π
2π
0
π
0
f(φ, θ, r) sinφ dφ dθ
=
1
4π
2π
0
π
0
∂f
∂r
(φ, θ, r) sin φ dφ dθ
=
1
4π
2π
0
π
0
∇f · n sin φ dφ dθ
=
1
4πr2
2π
0
π
0
∇f · n r2
sinφ dφ dθ
=
1
4πr2
|x|=r
∇f · n dS
=
1
4πr2
|x|=r
f dx.
d) Since f is biharmonic (i.e. f is harmonic), f has a mean value property. From
(c), we have 56
d
dr
1
4πr2
|x|=r
f(x) ds(x) =
1
4πr2
|x|≤r
f(x) dx =
r
3
1
4
3πr3
|x|≤r
f(x) dx
=
r
3
f(0).
1
4πr2
|x|=r
f(x) ds(x) =
r2
6
f(0) + f(0).
56
Note that part (a) was not used. We use exactly the same derivation as we did in S’95 #4.
Partial Differential Equations Igor Yanovsky, 2005 247
Problem (F’96, #4).
Consider smooth solutions of u = k2u in dimension d = 2 with k > 0.
a) Show that u satisfies the following ‘mean value property’:
Mx (r) +
1
r
Mx(r) − k2
Mx(r) = 0,
in which Mx(r) is defined by
Mx(r) =
1
2π
2π
0
u(x + r cos θ, y + r sinθ) dθ
and the derivatives (denoted by ) are in r with x fixed.
b) For k = 1, this equation is the modified Bessel equation (of order 0)
f +
1
r
f − f = 0,
for which one solution (denoted as I0) is
I0(r) =
1
2π
2π
0
er sin θ
dθ.
Find an expression for Mx(r) in terms of I0.
Proof. a) Laplacian in polar coordinates written as:
u = urr +
1
r
ur +
1
r2
uθθ.
Thus, the equation may be written as
urr +
1
r
ur +
1
r2
uθθ = k2
u.
Mx(r) =
1
2π
2π
0
u dθ,
Mx(r) =
1
2π
2π
0
ur dθ,
Mx (r) =
1
2π
2π
0
urr dθ.
Mx (r) +
1
r
Mx(r) − k2
Mx(r) =
1
2π
2π
0
urr +
1
r
ur − k2
u dθ
= −
1
2πr2
2π
0
uθθ dθ = −
1
2πr2
uθ
2π
0
= 0.
b) Note that w = er sin θ
satisfies w = w, i.e.
w = wrr +
1
r
wr +
1
r2
wθθ
= sin2
θ er sin θ
+
1
r
sinθ er sin θ
+
1
r2
(−r sinθ er sin θ
+ r2
cos2
θ er sin θ
) = er sin θ
= w.
Thus,
Mx(r) = ey 1
2π
2π
0
er sin θ
dθ = ey
I0.
Partial Differential Equations Igor Yanovsky, 2005 248
57
57
Check with someone about the last result.
Partial Differential Equations Igor Yanovsky, 2005 249
17.8 Harmonic Extensions, Subharmonic Functions
Problem (S’94, #8). Suppose that Ω is a bounded region in R3 and that u = 1 on
∂Ω. If u = 0 in the exterior region R3
/Ω and u(x) → 0 as |x| → ∞, prove the
following:
a) u > 0 in R3/Ω;
b) if ρ(x) is a smooth function such that ρ(x) = 1 for |x| > R and ρ(x) = 0 near ∂Ω,
then for |x| > R,
u(x) = −
1
4π R3/Ω
( (ρu))(y)
|x − y|
dy.
c) lim|x|→∞ |x|u(x) exists and is non-negative.
Proof. a) Let Br(0) denote the closed ball {x : |x| ≥ r}.
Given ε > 0, we can find r large enough that Ω ∈ BR1 (0) and maxx∈∂BR1
(0) |u(x)| < ε,
since |u(x)| → 0 as |x| → ∞.
Since u is harmonic in BR1 − Ω, it takes its maximum and minimum on the boundary.
Assume
min
x∈∂BR1
(0)
u(x) = −a < 0 (where |a| < ε).
We can find an R2 such that maxx∈BR2
(0) |u(x)| < a
2 ; hence u takes a minimum inside
BR2 (0) − Ω, which is impossible; hence u ≥ 0.
Now let V = {x : u(x) = 0} and let α = minx∈V |x|. Since u cannot take a minimum
inside BR(0) (where R > α), it follows that u ≡ C and C = 0, but this contradicts
u = 1 on ∂Ω. Hence u > 0 in R3 − Ω.
b) For n = 3,
K(|x − y|) =
1
(2 − n)ωn
|x − y|2−n
= −
1
4π
1
|x − y|
.
Since ρ(x) = 1 for |x| > R, then for x /∈ BR, we have (ρu) = u = 0. Thus,
−
1
4π R3/Ω
( (ρu))(y)
|x − y|
dy
= −
1
4π BR/Ω
( (ρu))(y)
|x − y|
dy
=
1
4π BR/Ω
∇y
1
|x − y|
· ∇y(ρu) dy −
1
4π ∂(BR/Ω)
∂
∂n
ρu
1
|x − y|
dSy
= −
1
4π BR/Ω
1
|x − y|
ρu dy +
1
4π ∂(BR/Ω)
∂
∂n
1
|x − y|
ρu dSy −
1
4π ∂(BR/Ω)
∂
∂n
ρu
1
|x − y|
dSy
= ??? = u(x) −
1
4πR2
∂B
u dSy
→0, as R→∞
−
1
4πR ∂B
∂u
∂n
dSy
→0, as R→∞
= u(x).
c) See the next problem.
Partial Differential Equations Igor Yanovsky, 2005 250
Ralston Hw. a) Suppose that u is a smooth function on R3
and u = 0 for |x| > R.
If limx→∞ u(x) = 0, show that you can write u as a convolution of u with the − 1
4π|x|
and prove that limx→∞ |x|u(x) = 0 exists.
b) The “conductor potential” for Ω ⊂ R3 is the solution to the Dirichlet problem v =
0. The limit in part (a) is called the “capacity” of Ω. Show that if Ω1 ⊂ Ω2, then the
capacity of Ω2 is greater or equal the capacity of Ω1.
Proof. a) If we define
v(x) = −
1
4π R3
u(y)
|x − y|
dy,
then (u − v) = 0 in all R3
, and, since v(x) → 0 as |x| → ∞, we have lim|x|→∞(u(x)−
v(x)) = 0. Thus, u − v must be bounded, and Liouville’s theorem implies that it is
identically zero. Since we now have
|x|u(x) = −
1
4π R3
|x| u(y)
|x − y|
dy,
and |x|/|x − y| converges uniformly to 1 on {|y| ≤ R}, it follows that
lim
|x|→∞
|x|u(x) = −
1
4π R3
u(y) dy.
b) Note that part (a) implies that the limit lim|x|→∞ |x|v(x) exists, because we can
apply (a) to u(x) = φ(x)v(x), where φ is smooth and vanishes on Ω, but φ(x) = 1 for
|x| > R.
Let v1 be the conductor potential for Ω1 and v2 for Ω2. Since vi → ∞ as |x| → ∞ and
vi = 1 on ∂Ωi, the max principle says that 1 > vi(x) > 0 for x ∈ R3
− Ωi. Consider
v2 − v1. Since Ω1 ⊂ Ω2, this is defined in R3 − Ω2, positive on ∂Ω2, and has limit 0 as
|x| → ∞. Thus, it must be positive in R3
− Ω2. Thus, lim|x|→∞ |x|(v2 − v1) ≥ 0.
Problem (F’95, #4). 58
Let Ω be a simply connected open domain in R2
and u = u(x, y) be subharmonic there, i.e. u ≥ 0 in Ω. Prove that if
DR = {(x, y) : (x − x0)2
+ (y − y0)2
≤ R2
} ⊂ Ω
then
u(x0, y0) ≤
1
2π
2π
0
u(x0 + R cos θ, y0 + R sinθ) dθ.
Proof. Let
M(x0, R) =
1
2π
2π
0
u(x0 + R cos θ, y0 + R sin θ) dθ,
w(r, θ) = u(x0 + R cos θ, y0 + R sin θ).
Differentiate M(x0, R) with respect to R:
d
dr
M(x0, R) =
1
2πR
2π
0
wr(R, θ)Rdθ,
58
See McOwen, Sec.4.3, p.131, #1.
Partial Differential Equations Igor Yanovsky, 2005 251
59
59
See ChiuYen’s solutions and Sung Ha’s solutions (in two places). Nick’s solutions, as started above,
have a very simplistic approach.
Partial Differential Equations Igor Yanovsky, 2005 252
Ralston Hw (Maximum Principle).
Suppose that u ∈ C(Ω) satisfies the mean value property in the connected open set Ω.
a) Show that u satisfies the maximum principle in Ω, i.e.
either u is constant or u(x) < supΩ u for all x ∈ Ω.
b) Show that, if v is a continuous function on a closed ball Br(ξ) ⊂ Ω and has the
mean value property in Br(ξ), then u = v on ∂Br(ξ) implies u = v in Br(ξ). Does this
imply that u is harmonic in Ω?
Proof. a) If u(x) is not less than supΩ u for all x ∈ Ω, then the set
K = {x ∈ Ω : u(x) = sup
Ω
u}
is nonempty. This set is closed because u is continuous. We will show it is also open.
This implies that K = Ω because Ω is connected. Thus u is constant on Ω.
Let x0 ∈ K. Since Ω is open, ∃δ > 0, s.t. Bδ(x0) = {x ∈ Rn : |x − x0| ≤ δ} ⊂ Ω. Let
supΩ u = M. By the mean value property, for 0 ≤ r ≤ δ
M = u(x0) =
1
A(Sn−1) |ξ|=1
u(x0 + rξ)dSξ, and 0 =
1
A(Sn−1) |ξ|=1
(M − u(x0 + rξ))dSξ.
Sinse M −u(x0 +rξ) is a continuous nonnegative function on ξ, this implies M −u(x0 +
rξ) = 0 for all ξ ∈ Sn−1. Thus u = 0 on Bδ(x0).
b) Since u − v has the mean value property in the open interior of Br(ξ), by part
a) it satisfies the maximum principle. Since it is continuous on Br(ξ), its supremum
over the interior of Br(ξ) is its maximum on Br(ξ), and this maximum is assumed at a
point x0 in Br(ξ). If x0 in the interior of Br(ξ), then u −v is constant ant the constant
must be zero, since this is the value of u −v on the boundary. If x0 is on the boundary,
then u − v must be nonpositive in the interior of Br(ξ).
Applying the same argument to v − u, one finds that it is either identically zero or
nonpositive in the interior of Br(ξ). Thus, u − v ≡ 0 on Br(ξ).
Yes, it does follow that u harmonic in Ω. Take v in the preceding to be the harmonic
function in the interior of Br(ξ) which agrees with u on the boundary. Since u = v on
Br(ξ), u is harmonic in the interior of Br(ξ). Since Ω is open we can do this for every
ξ ∈ Ω. Thus u is harmonic in Ω.
Partial Differential Equations Igor Yanovsky, 2005 253
Ralston Hw. Assume Ω is a bounded open set in Rn
and the Green’s function, G(x, y),
for Ω exists. Use the strong maximum principle, i.e. either u(x) < supΩ u for all x ∈ Ω,
or u is constant, to prove that G(x, y) < 0 for x, y ∈ Ω, x = y.
Proof. G(x, y) = K(x, y) + ω(x, y). For each x ∈ Ω, f(y) = ω(x, y) is continuous on Ω,
thus, bounded. So |ω(x, y)| ≤ Mx for all y ∈ Ω. K(x − y) → −∞ as y → x. Thus,
given Mx, there is δ > 0, such that K(x − y) < −Mx when |x − y| = r and 0 < r ≤ δ.
So for 0 < r ≤ δ the Green’s function with x fixed satisfies, G(x, y) is harmonic on
Ω − Br(x), and G(x, y) ≤ 0 on the boundary of Ω − Br(x). Since we can choose r as
small as we wish, we get G(x, y) < 0 for y ∈ Ω − {x}.
Problem (W’03, #6). Assume that u is a harmonic function in the half ball
D = {(x, y, z) : x2
+y2
+z2
< 1, z ≥ 0} which is continuously differentiable, and satis-
fies u(x, y, 0) = 0. Show that u can be extended to be a harmonic function in the whole
ball. If you propose and explicit extension for u, explain why the extension is harmonic.
Proof. We can extend u to all of n-space by defining
u(x , xn) = −u(x , −xn)
for xn < 0. Define
ω(x) =
1
aωn |y|=1
a2 − |x|2
|x − y|n
v(y)dSy
ω(x) is continuous on a closed ball B, harmonic in B.
Poisson kernel is symmetric in y at xn = 0. ⇒ ω(x) = 0, (xn = 0).
ω is harmonic for x ∈ B, xn ≥ 0,with the same boundary values ω = u.
ω is harmonic ⇒ u can be extended to a harmonic function on the interior of B.
Ralston Hw. Show that a bounded solution to the Dirichlet problem in a half
space is unique. (Note that one can show that a bounded solution exists for any
given bounded continuous Dirichlet data by using the Poisson kernel for the half space.)
Proof. We have to show that a function, u, which is harmonic in the half-space, con-
tinuous, equal to 0 when xn = 0, and bounded, must be identically 0. We can extend
u to all of n-space by defining
u(x , xn) = −u(x , −xn)
for xn < 0. This extends u to a bounded harmonic function on all of n-space (by the
problem above). Liouville’s theorem says u must be constant, and since u(x , 0) = 0,
the constant is 0. So the original u must be identically 0.
Ralston Hw. Suppose u is harmonic on the ball minus the origin, B0 = {x ∈ R3
:
0 < |x| < a}. Show that u(x) can be extended to a harmonic function on the ball
B = {|x| < a} iff lim|x|→0 |x|u(x) = 0.
Proof. The condition lim|x|→0 |x|u(x) = 0 is necessary, because harmonic functions are
continuous.
To prove the converse, let v be the function which is continuous on {|x| ≤ a/2},
harmonic on {|x| < a/2}, and equals u on {|x| = a/2}. One can construct v using the
Poisson kernel. Since v is continuous, it is bounded, and we can assume that |v| ≤ M.
Since lim|x|→0 |x|u(x) = 0, given > 0, we can choose δ, 0 < δ < a/2 such that
− < |x|u(x) < when |x| < δ. Note that u, v − 2 /|x|, and v + 2 /|x| are harmonic
Partial Differential Equations Igor Yanovsky, 2005 254
on {0 < |x| < a/2}. Choose b, 0 < b < min( , a/2), so that /b > M. Then on both
{|x| = a/2} and {|x| = b} we have v − 2 /|x| < u(x) < v + 2 /|x|. Thus, by
max principle these inequalities hold on {b ≤ |x| ≤ a/2}. Pick x with 0 < |x| ≤ a/2.
u(x) = v(x). v is the extension of u on {|x| < a/2}, and u is extended on {|x| < a}.
Partial Differential Equations Igor Yanovsky, 2005 255
18 Problems: Heat Equation
McOwen 5.2 #7(a). Consider
⎧
⎪⎨
⎪⎩
ut = uxx for x > 0, t > 0
u(x, 0) = g(x) for x > 0
u(0, t) = 0 for t > 0,
where g is continuous and bounded for x ≥ 0 and g(0) = 0.
Find a formula for the solution u(x, t).
Proof. Extend g to be an odd function on all of R:
˜g(x) =
g(x), x ≥ 0
−g(−x), x < 0.
Then, we need to solve
˜ut = ˜uxx for x ∈ R, t > 0
˜u(x, 0) = ˜g(x) for x ∈ R.
The solution is given by: 60
˜u(x, t) =
R
K(x, y, t)g(y) dy =
1
√
4πt
∞
−∞
e−
(x−y)2
4t ˜g(y) dy
=
1
√
4πt
∞
0
e−(x−y)2
4t ˜g(y) dy +
0
−∞
e−(x−y)2
4t ˜g(y) dy
=
1
√
4πt
∞
0
e−
(x−y)2
4t g(y) dy −
∞
0
e−
(x+y)2
4t g(y) dy
=
1
√
4πt
∞
0
e
−x2+2xy−y2
4t − e
−x2−2xy−y2
4t g(y) dy
=
1
√
4πt
∞
0
e−
(x2+y2)
4t e
xy
2t − e−xy
2t g(y) dy.
u(x, t) =
1
√
4πt
∞
0
e−
(x2+y2)
4t 2 sinh
xy
2t
g(y) dy.
Since sinh(0) = 0, we can verify that u(0, t) = 0.
60
In calculations, we use:
0
−∞
ey
dy =
∞
0
e−y
dy, and g(−y) = −g(y).
Partial Differential Equations Igor Yanovsky, 2005 256
McOwen 5.2 #7(b). Consider
⎧
⎪⎨
⎪⎩
ut = uxx for x > 0, t > 0
u(x, 0) = g(x) for x > 0
ux(0, t) = 0 for t > 0,
where g is continuous and bounded for x ≥ 0.
Find a formula for the solution u(x, t).
Proof. Extend g to be an even function 61
on all of R:
˜g(x) =
g(x), x ≥ 0
g(−x), x < 0.
Then, we need to solve
˜ut = ˜uxx for x ∈ R, t > 0
˜u(x, 0) = ˜g(x) for x ∈ R.
The solution is given by: 62
˜u(x, t) =
R
K(x, y, t)g(y) dy =
1
√
4πt
∞
−∞
e−
(x−y)2
4t ˜g(y) dy
=
1
√
4πt
∞
0
e−(x−y)2
4t ˜g(y) dy +
0
−∞
e−(x−y)2
4t ˜g(y) dy
=
1
√
4πt
∞
0
e−
(x−y)2
4t g(y) dy +
∞
0
e−
(x+y)2
4t g(y) dy
=
1
√
4πt
∞
0
e
−x2+2xy−y2
4t + e
−x2−2xy−y2
4t g(y) dy
=
1
√
4πt
∞
0
e−
(x2+y2)
4t e
xy
2t + e−xy
2t g(y) dy.
u(x, t) =
1
√
4πt
∞
0
e−
(x2+y2)
4t 2 cosh
xy
2t
g(y) dy.
To check that the boundary condition holds, we perform the calculation:
ux(x, t) =
1
√
4πt
∞
0
d
dx
e−
(x2 +y2)
4t 2 cosh
xy
2t
g(y) dy
=
1
√
4πt
∞
0
−
2x
4t
e−(x2+y2)
4t 2 cosh
xy
2t
+ e−(x2+y2)
4t 2
y
2t
sinh
xy
2t
g(y) dy,
ux(0, t) =
1
√
4πt
∞
0
0 · e−y2
4t 2 cosh0 + e−y2
4t 2
y
2t
sinh 0 g(y) dy = 0.
61
Even extensions are always continuous. Not true for odd extensions. g odd is continuous if g(0) =
0.
62
In calculations, we use:
0
−∞
ey
dy =
∞
0
e−y
dy, and g(−y) = g(y).
Partial Differential Equations Igor Yanovsky, 2005 257
Problem (F’90, #5).
The initial value problem for the heat equation on the whole real line is
ft = fxx t ≥ 0
f(t = 0, x) = f0(x)
with f0 smooth and bounded.
a) Write down the Green’s function G(x, y, t) for this initial value problem.
b) Write the solution f(x, t) as an integral involving G and f0.
c) Show that the maximum values of |f(x, t)| and |fx(x, t)| are non-increasing
as t increases, i.e.
sup
x
|f(x, t)| ≤ sup
x
|f0(x)| sup
x
|fx(x, t)| ≤ sup
x
|f0x(x)|.
When are these inequalities actually equalities?
Proof. a) The fundamental solution
K(x, y, t) =
1
√
4πt
e−
|x−y|2
4t .
The Green’s function is: 63
G(x, t; y, s) =
1
(2π)n
π
k(t − s)
n
2
e
−
(x−y)2
4k(t−s) .
b) The solution to the one-dimensional heat equation is
u(x, t) =
R
K(x, y, t) f0(y) dy =
1
√
4πt R
e−
|x−y|2
4t f0(y) dy.
c) We have
sup
x
|u(x, t)| =
1
√
4πt R
e−
(x−y)2
4t f0(y) dy ≤
1
√
4πt R
e−
(x−y)2
4t f0(y) dy
=
1
√
4πt R
e−y2
4t f0(x − y) dy
≤ sup
x
|f0(x)|
1
√
4πt R
e−y2
4t dy z =
y
√
4t
, dz =
dy
√
4t
≤ sup
x
|f0(x)|
1
√
4πt R
e−z2 √
4t dz
= sup
x
|f0(x)|
1
√
π R
e−z2
dz
=
√
π
= sup
x
|f0(x)|.
63
The Green’s function for the heat equation on an infinite domain; derived in R. Haberman using
the Fourier transform.
Partial Differential Equations Igor Yanovsky, 2005 258
ux(x, t) =
1
√
4πt R
−
2(x − y)
4t
e−
(x−y)2
4t f0(y) dy =
1
√
4πt R
−
d
dy
e−
(x−y)2
4t f0(y) dy
=
1
√
4πt
− e−
(x−y)2
4t f0(y)
∞
−∞
= 0
+
1
√
4πt R
e−
(x−y)2
4t f0y(y) dy,
sup
x
|u(x, t)| ≤
1
√
4πt
sup
x
|f0x(x)|
R
e−(x−y)2
4t dy =
1
√
4πt
sup
x
|f0x(x)|
R
e−z2 √
4t dz
= sup
x
|f0x(x)|.
These inequalities are equalities when f0(x) and f0x(x) are constants, respectively.
Partial Differential Equations Igor Yanovsky, 2005 259
Problem (S’01, #5). a) Show that the solution of the heat equation
ut = uxx, −∞ < x < ∞
with square-integrable initial data u(x, 0) = f(x), decays in time, and there is a constant
α independent of f and t such that for all t > 0
max
x
|ux(x, t)| ≤ αt−3
4
x
|f(x)|2
dx
1
2
.
b) Consider the solution ρ of the transport equation ρt +uρx = 0 with square-integrable
initial data ρ(x, 0) = ρ0(x) and the velocity u from part (a). Show that ρ(x, t) remains
square-integrable for all finite time
R
|ρ(x, t)|2
dx ≤ eCt
1
4
R
|ρ0(x)|2
dx,
where C does not depend on ρ0.
Proof. a) The solution to the one-dimensional homogeneous heat equation is
u(x, t) =
1
√
4πt R
e−
(x−y)2
4t f(y) dy.
Take the derivative with respect to x, we get 64
ux(x, t) =
1
√
4πt R
−
2(x − y)
4t
e−
(x−y)2
4t f(y) dy = −
1
4t
3
2
√
π R
(x − y)e−
(x−y)2
4t f(y) dy.
|ux(x, t)| ≤
1
4t
3
2
√
π R
(x − y)e−(x−y)2
4t f(y) dy (Cauchy-Schwarz)
≤
1
4t
3
2
√
π R
(x − y)2
e−
(x−y)2
2t dy
1
2
||f||L2(R) z =
x − y
√
2t
, dz = −
dy
√
2t
=
1
4t
3
2
√
π R
− z2
(2t)
3
2 e−z2
dz
1
2
||f||L2(R)
=
(2t)
3
4
4t
3
2
√
π R
z2
e−z2
dz
M<∞
1
2
||f||L2(R)
= Ct−3
4 M
1
2 ||f||L2(R) = αt−3
4 ||f||L2(R).
b) Note:
max
x
|u| = max
x
1
√
4πt R
e−(x−y)2
4t f(y) dy ≤
1
√
4πt R
e−(x−y)2
2t dy
1
2
||f||L2(R)
≤
1
√
4πt R
− e−z2 √
2t dz
1
2
||f||L2(R) z =
x − y
√
2t
, dz = −
dy
√
2t
=
(2t)
1
4
2π
1
2 t
1
2 R
e−z2
dz
=
√
π
1
2
||f||L2(R) = Ct−1
4 ||f||L2(R).
65
64
Cauchy-Schwarz: |(u, v)| ≤ ||u||||v|| in any norm, for example |uv|dx ≤ ( u2
dx)
1
2 ( v2
dx)
1
2
65
See Yana’s and Alan’s solutions.
Partial Differential Equations Igor Yanovsky, 2005 260
Problem (F’04, #2).
Let u(x, t) be a bounded solution to the Cauchy problem for the heat equation
ut = a2
uxx, t > 0, x ∈ R, a > 0,
u(x, 0) = ϕ(x).
Here ϕ(x) ∈ C(R) satisfies
lim
x→+∞
ϕ(x) = b, lim
x→−∞
ϕ(x) = c.
Compute the limit of u(x, t) as t → +∞, x ∈ R. Justify your argument carefully.
Proof. For a = 1, the solution to the one-dimensional homogeneous heat equation is
u(x, t) =
1
√
4πt R
e−(x−y)2
4t ϕ(y) dy.
We want to transform the equation to vt = vxx. Make a change of variables: x = ay.
u(x, t) = u(x(y), t) = u(ay, t) = v(y, t). Then,
vy = uxxy = aux,
vyy = auxxxy = a2
uxx,
v(y, 0) = u(ay, 0) = ϕ(ay).
Thus, the new problem is:
vt = vyy, t > 0, y ∈ R,
v(y, 0) = ϕ(ay).
v(y, t) =
1
√
4πt R
e−
(y−z)2
4t ϕ(az) dz.
Since ϕ is continuous, and limx→+∞ ϕ(x) = b, limx→−∞ ϕ(x) = c, we have
|ϕ(x)| < M, ∀x ∈ R. Thus,
|v(y, t)| ≤
M
√
4πt R
e−z2
4t dz s =
z
√
4t
, ds =
dz
√
4t
=
M
√
4πt R
e−s2 √
4t ds =
M
√
π R
e−s2
ds
√
π
= M.
Integral in converges uniformly ⇒ lim = lim. For ψ = ϕ(a·):
v(y, t) =
1
√
4πt
∞
−∞
e−
(y−z)2
4t ψ(z) dz =
1
√
4πt
∞
−∞
e−z2
4t ψ(y − z) dz
=
1
√
4πt
∞
−∞
e−s2
ψ(y − s
√
4t)
√
4t ds
=
1
√
π
∞
−∞
e−s2
ψ(y − s
√
4t) ds.
Partial Differential Equations Igor Yanovsky, 2005 261
lim
t→+∞
v(y, t) =
1
√
π
∞
0
e−s2
lim
t→+∞
ψ(y − s
√
4t) ds +
1
√
π
0
−∞
e−s2
lim
t→+∞
ψ(y − s
√
4t) ds
=
1
√
π
∞
0
e−s2
c ds +
1
√
π
0
−∞
e−s2
b ds = c
1
√
π
√
π
2
+ b
1
√
π
√
π
2
=
c + b
2
.
Partial Differential Equations Igor Yanovsky, 2005 262
Problem. Consider
ut = kuxx + Q, 0 < x < 1
u(0, t) = 0,
u(1, t) = 1.
What is the steady state temperature?
Proof. Set ut = 0, and integrate with respect to x twice:
kuxx + Q = 0,
uxx = −
Q
k
,
ux = −
Q
k
x + a,
u = −
Q
k
x2
2
+ ax + b.
Boundary conditions give
u(x) = −
Q
2k
x2
+ 1 +
Q
2k
x.
Partial Differential Equations Igor Yanovsky, 2005 263
18.1 Heat Equation with Lower Order Terms
McOwen 5.2 #11. Find a formula for the solution of
ut = u − cu in Rn
× (0, ∞)
u(x, 0) = g(x) on Rn
.
(18.1)
Show that such solutions, with initial data g ∈ L2(Rn), are unique, even when c is
negative.
Proof. McOwen. Consider v(x, t) = ectu(x, t). The transformed problem is
vt = v in Rn × (0, ∞)
v(x, 0) = g(x) on Rn.
(18.2)
Since g is continuous and bounded in Rn, we have
v(x, t) =
Rn
K(x, y, t) g(y) dy =
1
(4πt)
n
2 Rn
e−
|x−y|2
4t g(y) dy,
u(x, t) = e−ct
v(x, t) =
1
(4πt)
n
2 Rn
e−
|x−y|2
4t
−ct
g(y) dy.
u(x, t) is a bounded solution since v(x, t) is.
To prove uniqueness, assume there is another solution v of (18.2). w = v − v satisfies
wt = w in Rn
× (0, ∞)
w(x, 0) = 0 on Rn.
(18.3)
Since bounded solutions of (18.3) are unique, and since w is a nontrivial solution, w is
unbounded. Thus, v is unbounded, and therefore, the bounded solution v is unique.
Partial Differential Equations Igor Yanovsky, 2005 264
18.1.1 Heat Equation Energy Estimates
Problem (F’94, #3). Let u(x, y, t) be a twice continuously differential solution of
ut = u − u3
in Ω ⊂ R2
, t ≥ 0
u(x, y, 0) = 0 in Ω
u(x, y, t) = 0 in ∂Ω, t ≥ 0.
Prove that u(x, y, t) ≡ 0 in Ω × [0, T].
Proof. Multiply the equation by u and integrate:
uut = u u − u4
,
Ω
uut dx =
Ω
u u dx −
Ω
u4
dx,
1
2
d
dt Ω
u2
dx =
∂Ω
u
∂u
∂n
ds
=0
−
Ω
|∇u|2
dx −
Ω
u4
dx,
1
2
d
dt
||u||2
2 = −
Ω
|∇u|2
dx −
Ω
u4
dx ≤ 0.
Thus,
||u(x, y, t)||2 ≤ ||u(x, y, 0)||2 = 0.
Hence, ||u(x, y, t)||2 = 0, and u ≡ 0.
Partial Differential Equations Igor Yanovsky, 2005 265
Problem (F’98, #5). Consider the heat equation
ut − u = 0
in a two dimensional region Ω. Define the mass M as
M(t) =
Ω
u(x, t) dx.
a) For a fixed domain Ω, show M is a constant in time if the boundary conditions are
∂u/∂n = 0.
b) Suppose that Ω = Ω(t) is evolving in time, with a boundary that moves at velocity
v, which may vary along the boundary. Find a modified boundary condition (in terms
of local quantities only) for u, so that M is constant.
Hint: You may use the fact that
d
dt Ω(t)
f(x, t) dx =
Ω(t)
ft(x, t) dx +
∂Ω(t)
n · v f(x, t) dl,
in which n is a unit normal vector to the boundary ∂Ω.
Proof. a) We have
ut − u = 0, on Ω
∂u
∂n = 0, on ∂Ω.
We want to show that d
dt M(t) = 0. We have 66
d
dt
M(t) =
d
dt Ω
u(x, t) dx =
Ω
ut dx =
Ω
u dx =
∂Ω
∂u
∂n
ds = 0.
b) We need d
dt M(t) = 0.
0 =
d
dt
M(t) =
d
dt Ω(t)
u(x, t) dx =
Ω(t)
ut dx +
∂Ω(t)
n · v u ds
=
Ω(t)
u dx +
∂Ω(t)
n · v u ds =
∂Ω(t)
∂u
∂n
ds +
∂Ω(t)
n · v u ds
=
∂Ω(t)
∇u · n ds +
∂Ω(t)
n · v u ds =
∂Ω(t)
n · (∇u + vu) ds.
Thus, we need:
n · (∇u + vu) ds = 0, on ∂Ω.
66
The last equality below is obtained from the Green’s formula:
Ω
u dx =
Ω
∂u
∂n
ds.
Partial Differential Equations Igor Yanovsky, 2005 266
Problem (S’95, #3). Write down an explicit formula for a function u(x, t) solving
ut + b · ∇u + cu = u in Rn
× (0, ∞)
u(x, 0) = f(x) on Rn.
(18.4)
where b ∈ Rn
and c ∈ R are constants.
Hint: First transform this to the heat equation by a linear change of the dependent
and independent variables. Then solve the heat equation using the fundamental solution.
Proof. Consider
• u(x, t) = eα·x+βt
v(x, t).
ut = βeα·x+βt
v + eα·x+βt
vt = (vt + βv)eα·x+βt
,
∇u = αeα·x+βt
v + eα·x+βt
∇v = (αv + ∇v)eα·x+βt
,
∇ · (∇u) = ∇ · (αv + ∇v)eα·x+βt
= (α · ∇v + v)eα·x+βt
+ (|α|2
v + α · ∇v)eα·x+βt
= v + 2α · ∇v + |α|2
v)eα·x+βt
.
Plugging this into (18.4), we obtain
vt + βv + b · (αv + ∇v) + cv = v + 2α · ∇v + |α|2
v,
vt + b − 2α · ∇v + β + b · α + c − |α|2
v = v.
In order to get homogeneous heat equation, we set
α =
b
2
, β = −
|b|2
4
− c,
which gives
vt = v in Rn
× (0, ∞)
v(x, 0) = e− b
2
·x
f(x) on Rn.
The above PDE has the following solution:
v(x, t) =
1
(4πt)
n
2 Rn
e−
|x−y|2
4t e− b
2
·y
f(y) dy.
Thus,
u(x, t) = e
b
2
·x−( |b|2
4
+c)t
v(x, t) =
1
(4πt)
n
2
e
b
2
·x−( |b|2
4
+c)t
Rn
e−|x−y|2
4t e− b
2
·y
f(y) dy.
Partial Differential Equations Igor Yanovsky, 2005 267
Problem (F’01, #7). Consider the parabolic problem
ut = uxx + c(x)u (18.5)
for −∞ < x < ∞, in which
c(x) = 0 for |x| > 1,
c(x) = 1 for |x| < 1.
Find solutions of the form u(x, t) = eλt
v(x) in which
∞
−∞ |u|2
dx < ∞.
Hint: Look for v to have the form
v(x) = ae−k|x|
for |x| > 1,
v(x) = b coslx for |x| < 1,
for some a, b, k, l.
Proof. Plug u(x, t) = eλtv(x) into (18.5) to get:
λeλt
v(x) = eλt
v (x) + ceλt
v(x),
λv(x) = v (x) + cv(x),
v (x) − λv(x) + cv(x) = 0.
• For |x| > 1, c = 0. We look for solutions of the form v(x) = ae−k|x|
.
v (x) − λv(x) = 0,
ak2
e−k|x|
− aλe−k|x|
= 0,
k2
− λ = 0,
k2
= λ,
k = ±
√
λ.
Thus, v(x) = c1e−
√
λx + c2e
√
λx. Since we want
∞
−∞ |u|2 dx < ∞:
u(x, t) = aeλt
e−
√
λx
.
• For |x| < 1, c = 1. We look for solutions of the form v(x) = b coslx.
v (x) − λv(x) + v(x) = 0,
−bl2
cos lx + (1 − λ)b coslx = 0,
−l2
+ (1 − λ) = 0,
l2
= 1 − λ,
l = ±
√
1 − λ.
Thus, (since cos(−x) = cos x)
u(x, t) = beλt
cos (1 − λ)x.
• We want v(x) to be continuous on R, and at x = ±1, in particular. Thus,
ae−
√
λ
= b cos (1 − λ),
a = be
√
λ
cos (1 − λ).
• Also, v(x) is symmetric:
∞
−∞
|u|2
dx = 2
∞
0
|u|2
dx = 2
1
0
|u|2
dx +
∞
1
|u|2
dx < ∞.
Partial Differential Equations Igor Yanovsky, 2005 268
Problem (F’03, #3). ❶ The function
h(X, T) = (4πT)−1
2 e−X2
4T
satisfies (you do not have to show this)
hT = hXX.
Using this result, verify that for any smooth function U
u(x, t) = e
1
3
t3−xt
∞
−∞
U(ξ) h(x − t2
− ξ, t) dξ
satisfies
ut + xu = uxx.
❷ Given that U(x) is bounded and continuous everywhere on −∞ ≤ x ≤ ∞, establish
that
lim
t→0
∞
−∞
U(ξ) h(x − ξ, t) dξ = U(x)
❸ and show that u(x, t) → U(x) as t → 0. (You may use the fact that
∞
0 e−ξ2
dξ =
1
2
√
π.)
Proof. We change the notation: h → K, U → g, ξ → y. We have
K(X, T) =
1
√
4πT
e−X2
4T
❶ We want to verify that
u(x, t) = e
1
3
t3−xt
∞
−∞
K(x − y − t2
, t) g(y) dy.
satisfies
ut + xu = uxx.
We have
ut =
∞
−∞
d
dt
e
1
3
t3−xt
K(x − y − t2
, t) g(y) dy
=
∞
−∞
(t2
− x) e
1
3
t3−xt
K + e
1
3
t3−xt
KX · (−2t) + KT g(y) dy,
xu =
∞
−∞
x e
1
3
t3−xt
K(x − y − t2
, t) g(y) dy,
ux =
∞
−∞
d
dx
e
1
3
t3−xt
K(x − y − t2
, t) g(y) dy
=
∞
−∞
− t e
1
3
t3−xt
K + e
1
3
t3−xt
KX g(y) dy,
uxx =
∞
−∞
d
dx
− t e
1
3
t3−xt
K + e
1
3
t3−xt
KX g(y) dy
=
∞
−∞
t2
e
1
3
t3−xt
K − t e
1
3
t3−xt
KX − t e
1
3
t3−xt
KX + e
1
3
t3−xt
KXX g(y) dy.
Partial Differential Equations Igor Yanovsky, 2005 269
Plugging these into , most of the terms cancel out. The remaining two terms cancel
because KT = KXX.
❷ Given that g(x) is bounded and continuous on −∞ ≤ x ≤ ∞, we establish that 67
lim
t→0
∞
−∞
K(x − y, t) g(y) dy = g(x).
Fix x0 ∈ Rn, ε > 0. Choose δ > 0 such that
|g(y) − g(x0)| < ε if |y − x0| < δ, y ∈ Rn
.
Then if |x − x0| < δ
2 , we have: ( R K(x, t) dx = 1)
R
K(x − y, t) g(y) dy − g(x0) ≤
R
K(x − y, t) [g(y) − g(x0)] dy
≤
Bδ(x0)
K(x − y, t) g(y) − g(x0) dy
≤ ε R K(x−y,t) dy = ε
+
R−Bδ(x0)
K(x − y, t) g(y) − g(x0) dy
Furthermore, if |x − x0| ≤ δ
2 and |y − x0| ≥ δ, then
|y − x0| ≤ |y − x| +
δ
2
≤ |y − x| +
1
2
|y − x0|.
Thus, |y − x| ≥ 1
2 |y − x0|. Consequently,
= ε + 2||g||L∞
R−Bδ(x0)
K(x − y, t) dy
≤ ε +
C
√
t R−Bδ(x0)
e−
|x−y|2
4t dy
≤ ε +
C
√
t R−Bδ(x0)
e−
|y−x0|2
16t dy
= ε +
C
√
t
∞
δ
e− r2
16t r dr → ε + 0 as t → 0+
.
Hence, if |x − x0| < δ
2 and t > 0 is small enough, |u(x, t) − g(x0)| < 2ε.
67
Evans, p. 47, Theorem 1 (c).
Partial Differential Equations Igor Yanovsky, 2005 270
Problem (S’93, #4). The temperature T(x, t) in a stationary medium, x ≥ 0, is
governed by the heat conduction equation
∂T
∂t
=
∂2T
∂x2
. (18.6)
Making the change of variable (x, t) → (u, t), where u = x/2
√
t, show that
4t
∂T
∂t
=
∂2T
∂u2
+ 2u
∂T
∂u
. (18.7)
Solutions of (18.7) that depend on u alone are called similarity solutions. 68
Proof. We change notation: the change of variables is (x, t) → (u, τ), where t = τ.
After the change of variables, we have T = T(u(x, t), τ(t)).
u =
x
2
√
t
⇒ ut = −
x
4t
3
2
, ux =
1
2
√
t
, uxx = 0,
τ = t ⇒ τt = 1, τx = 0.
∂T
∂t
=
∂T
∂u
∂u
∂t
+
∂T
∂τ
,
∂T
∂x
=
∂T
∂u
∂u
∂x
,
∂2
T
∂x2
=
∂
∂x
∂T
∂x
=
∂
∂x
∂T
∂u
∂u
∂x
=
∂2
T
∂u2
∂u
∂x
∂u
∂x
+
∂T
∂u
∂2
u
∂x2
=0
=
∂2
T
∂u2
∂u
∂x
2
.
Thus, (18.6) gives:
∂T
∂u
∂u
∂t
+
∂T
∂τ
=
∂2T
∂u2
∂u
∂x
2
,
∂T
∂u
−
x
4t
3
2
+
∂T
∂τ
=
∂2
T
∂u2
1
2
√
t
2
,
∂T
∂τ
=
1
4t
∂2T
∂u2
+
x
4t
3
2
∂T
∂u
,
4t
∂T
∂τ
=
∂2T
∂u2
+
x
√
t
∂T
∂u
,
4t
∂T
∂τ
=
∂2
T
∂u2
+ 2u
∂T
∂u
.
68
This is only the part of the qual problem.
Partial Differential Equations Igor Yanovsky, 2005 271
19 Contraction Mapping and Uniqueness - Wave
Recall that the solution to
utt − c2uxx = f(x, t),
u(x, 0) = g(x), ut(x, 0) = h(x),
(19.1)
is given by adding together d’Alembert’s formula and Duhamel’s principle:
u(x, t) =
1
2
(g(x + ct) + g(x − ct)) +
1
2c
x+ct
x−ct
h(ξ) dξ +
1
2c
t
0
x+c(t−s)
x−c(t−s)
f(ξ, s) dξ ds.
Problem (W’02, #8). a) Find an explicit solution of the following Cauchy problem
∂2u
∂t2 − ∂2u
∂x2 = f(t, x),
u(0, x) = 0, ∂u
∂x (0, x) = 0.
(19.2)
b) Use part (a) to prove the uniqueness of the solution of the Cauchy problem
∂2u
∂t2 − ∂2u
∂x2 + q(t, x)u = 0,
u(0, x) = 0, ∂u
∂x (0, x) = 0.
(19.3)
Here f(t, x) and q(t, x) are continuous functions.
Proof. a) It was probably meant to give the ut initially. We rewrite (19.2) as
utt − uxx = f(x, t),
u(x, 0) = 0, ut(x, 0) = 0.
(19.4)
Duhamel’s principle, with c = 1, gives the solution to (19.4):
u(x, t) =
1
2c
t
0
x+c(t−s)
x−c(t−s)
f(ξ, s) dξ ds =
1
2
t
0
x+(t−s)
x−(t−s)
f(ξ, s) dξ ds.
b) We use the Contraction Mapping Principle to prove uniqueness.
Define the operator
T(u) =
1
2
t
0
x+(t−s)
x−(t−s)
−q(ξ, s) u(ξ, s) dξ ds.
on the Banach space C2,2, || · ||∞.
We will show |Tun − Tun+1| < α||un − un+1|| where α < 1. Then {un}∞
n=1:
un+1 = T(un) converges to a unique fixed point which is the unique solution of PDE.
|Tun − Tun+1| =
1
2
t
0
x+(t−s)
x−(t−s)
−q(ξ, s) un(ξ, s) − un+1(ξ, s) dξ ds
≤
1
2
t
0
||q||∞||un − un+1||∞ 2(t − s) ds
≤ t2
||q||∞||un − un+1||∞ ≤ α||un − un+1||∞, for small t.
Thus, T is a contraction ⇒ ∃ a unique fixed point.
Since Tu = u, u is the solution to the PDE.
Partial Differential Equations Igor Yanovsky, 2005 272
Problem (F’00, #3). Consider the Goursat problem:
Find the solution of the equation
∂2u
∂t2
−
∂2u
∂x2
+ a(x, t)u = 0
in the square D, satisfying the boundary conditions
u|γ1 = ϕ, u|γ2 = ψ,
where γ1, γ2 are two adjacent sides D. Here a(x, t), ϕ and ψ are continuous functions.
Prove the uniqueness of the solution of this Goursat problem.
Proof. The change of variable μ = x + t, η = x − t
transforms the equation to
˜uμη + ˜a(μ, η)˜u = 0.
We integrate the equation:
η
0
μ
0
˜uμη(u, v) du dv = −
η
0
μ
0
˜a(μ, η) ˜udu dv,
η
0
˜uη(μ, v) − ˜uη(0, v) dv = −
η
0
μ
0
˜a(μ, η) ˜udu dv,
˜u(μ, η) = ˜u(μ, 0) + ˜u(0, η) − u(0, 0) −
η
0
μ
0
˜a(μ, η) ˜udu dv.
We change the notation. In the new notation:
f(x, y) = ϕ(x, y) −
x
0
y
0
a(u, v)f(u, v) du dv,
f = ϕ + Kf,
f = ϕ + K(ϕ + Kf),
· · ·
f = ϕ +
∞
n=1
Kn
ϕ,
f = Kf ⇒ f = 0,
max
0<x<δ
|f| ≤ δ max |a| max|f|.
For small enough δ, the operator K is a contraction. Thus, there exists a unique fixed
point of K, and f = Kf, where f is the unique solution.
Partial Differential Equations Igor Yanovsky, 2005 273
20 Contraction Mapping and Uniqueness - Heat
The solution of the initial value problem
ut = u + f(x, t) for t > 0, x ∈ Rn
u(x, 0) = g(x) for x ∈ Rn
.
(20.1)
is given by
u(x, t) =
Rn
˜K(x − y, t) g(y) dy +
t
0 Rn
˜K(x − y, t − s) f(y, s) dyds
where
˜K(x, t) =
⎧
⎨
⎩
1
(4πt)
n
2
e−
|x|2
4t for t > 0,
0 for t ≤ 0.
Problem (F’00, #2). Consider the Cauchy problem
ut − u + u2
(x, t) = f(x, t), x ∈ RN
, 0 < t < T
u(x, 0) = 0.
Prove the uniqueness of the classical bounded solution assuming that T is small
enough.
Proof. Let {un} be a sequence of approximations to the solution, such that
S(un) = un+1 =
use Duhamel s principle
t
0 Rn
K(x − y, t − s) f(y, s) − u2
n(y, s) dy ds.
We will show that S has a fixed point |S(un) − S(un+1)| ≤ α|un − un+1|, α < 1
⇔ {un} converges to a uniques solution for small enough T.
Since un, un+1 ∈ C2
(Rn
) ∩ C1
(t) ⇒ |un+1 + un| ≤ M.
|S(un) − S(un+1)| ≤
t
0 Rn
K(x − y, t − s) u2
n+1 − u2
n dy ds
=
t
0 Rn
K(x − y, t − s) un+1 − un un+1 + un dy ds
≤ M
t
0 Rn
K(x − y, t − s) un+1 − un dy ds
≤ MM1
t
0
un+1(x, s) − un(x, s) ds
≤ MM1T ||un+1 − un||∞ < ||un+1 − un||∞ for small T.
Thus, S is a contraction ⇒ ∃ a unique fixed point u ∈ C2
(Rn
) ∩ C1
(t) such that
u = limn→∞ un. u is implicitly defined as
u(x, t) =
t
0 Rn
K(x − y, t − s) f(y, s) − u2
(y, s) dy ds.
Partial Differential Equations Igor Yanovsky, 2005 274
Problem (S’97, #3). a) Let Q(x) ≥ 0 such that
∞
x=−∞ Q(x) dx = 1,
and define Q = 1
Q(x
). Show that (here ∗ denotes convolution)
||Q (x) ∗ w(x)||L∞ ≤ ||w(x)||L∞.
In particular, let Qt(x) denote the heat kernel (at time t), then
||Qt(x) ∗ w1(x) − Qt(x) ∗ w2(x)||L∞ ≤ ||w1(x) − w2(x)||L∞.
b) Consider the parabolic equation ut = uxx + u2
subject to initial conditions
u(x, 0) = f(x). Show that the solution of this equation satisfies
u(x, t) = Qt(x) ∗ f(x) +
t
0
Qt−s(x) ∗ u2
(x, s) ds. (20.2)
c) Fix t > 0. Let {un(x, t)}, n = 1, 2, . . . the fixed point iterations for the solution of
(20.2)
un+1(x, t) = Qt(x) ∗ f(x) +
t
0
Qt−s(x) ∗ u2
n(x, s) ds. (20.3)
Let Kn(t) = sup0≤m≤n ||um(x, t)||L∞. Using (a) and (b) show that
||un+1(x, t) − un(x, t)||L∞ ≤ 2 sup
0≤τ≤t
Kn(τ) ·
t
0
||un(x, s) − un−1(x, s)||L∞ ds.
Conclude that the fixed point iterations in (20.3) converge if t is sufficiently small.
Proof. a) We have
||Q (x) ∗ w(x)||L∞ =
∞
−∞
Q (x − y)w(y) dy ≤
∞
−∞
Q (x − y)w(y) dy
≤ ||w||∞
∞
−∞
Q (x − y) dy = ||w||∞
∞
−∞
1
Q
x − y
dy
= ||w||∞
∞
−∞
1
Q
y
dy z =
y
, dz =
dy
= ||w||∞
∞
−∞
Q(z) dz = ||w(x)||∞.
Partial Differential Equations Igor Yanovsky, 2005 275
Qt(x) = 1√
4πt
e−x2
4t , the heat kernel. We have 69
||Qt(x) ∗ w1(x) − Qt(x) ∗ w2(x)||L∞ =
∞
−∞
Qt(x − y)w1(y) dy −
∞
−∞
Qt(x − y)w2(y) dy
∞
=
1
√
4πt
∞
−∞
e−
(x−y)2
4t w1(y) dy −
∞
−∞
e−
(x−y)2
4t w2(y) dy
∞
≤
1
√
4πt
∞
−∞
e−
(x−y)2
4t w1(y) − w2(y) dy
≤ w1(y) − w2(y) ∞
1
√
4πt
∞
−∞
e−
(x−y)2
4t dy
z =
x − y
√
4t
, dz =
−dy
√
4t
= w1(y) − w2(y) ∞
1
√
4πt
∞
−∞
e−z2 √
4t dz
= w1(y) − w2(y) ∞
1
√
π
∞
−∞
e−z2
dz
√
π
= w1(y) − w2(y) ∞
.
69
Note:
∞
−∞
Qt(x) dx =
1
√
4πt
∞
−∞
e−
(x−y)2
4t dy =
1
√
4πt
∞
−∞
e−z2 √
4t dz =
1
√
π
∞
−∞
e−z2
dz = 1.
Partial Differential Equations Igor Yanovsky, 2005 276
b) Consider
ut = uxx + u2
,
u(x, 0) = f(x).
We will show that the solution of this equation satisfies
u(x, t) = Qt(x) ∗ f(x) +
t
0
Qt−s(x) ∗ u2
(x, s) ds.
t
0
Qt−s(x) ∗ u2
(x, s) ds =
t
0 R
Qt−s(x − y) u2
(y, s) dy ds
=
t
0 R
Qt−s(x − y) us(y, s) − uyy(y, s) dy ds
=
t
0 R
d
ds
Qt−s(x − y)u(y, s) −
d
ds
Qt−s(x − y) u(y, s) − Qt−s(x − y)uyy(y, s) dy ds
=
R
Q0(x − y)u(y, t) dy −
R
Qt(x − y)u(y, 0) dy
−
t
0 R
d
ds
Qt−s(x − y) u(y, s) +
d2
dy2
Qt−s(x − y)u(y, s)
= 0, since Qt satisfies heat equation
dy ds
= u(x, t) −
R
Qt(x − y)f(y) dy Note: lim
t→0+
Q(x, t) = δ0(x) = δ(x).
= u(x, t) − Qt(x) ∗ f(x). lim
t→0+ R Q(x − y, t)v(y) dy = v(0).
Note that we used: Dα(f ∗ g) = (Dαf) ∗ g = f ∗ (Dαg).
c) Let
un+1(x, t) = Qt(x) ∗ f(x) +
t
0
Qt−s(x) ∗ u2
n(x, s) ds.
||un+1(x, t) − un(x, t)||L∞ =
t
0
Qt−s(x) ∗ u2
n(x, s) − u2
n−1(x, s) ds
∞
≤
t
0
Qt−s(x) ∗ u2
n(x, s) − u2
n−1(x, s) ∞
ds
≤
(a)
t
0
u2
n(x, s) − u2
n−1(x, s) ∞
ds
≤
t
0
un(x, s) − un−1(x, s) ∞
un(x, s) + un−1(x, s) ∞
ds
≤ sup
0≤τ≤t
un(x, s) + un−1(x, s) ∞
t
0
un(x, s) − un−1(x, s) ∞
ds
≤ 2 sup
0≤τ≤t
Kn(τ) ·
t
0
||un(x, s) − un−1(x, s)||L∞ ds.
Also, ||un+1(x, t) − un(x, t)||L∞ ≤ 2t sup
0≤τ≤t
Kn(τ) · ||un(x, s) − un−1(x, s)||L∞.
Partial Differential Equations Igor Yanovsky, 2005 277
For t small enough, 2t sup0≤τ≤t Kn(τ) ≤ α < 1. Thus, T defined as
Tu = Qt(x) ∗ f(x) +
t
0
Qt−s(x) ∗ u2
(x, s) ds
is a contraction, and has a unique fixed point u = Tu.
Partial Differential Equations Igor Yanovsky, 2005 278
Problem (S’99, #3). Consider the system of equations
ut = uxx + f(u, v)
vt = 2vxx + g(u, v)
to be solved for t > 0, −∞ < x < ∞, and smooth initial data with compact support:
u(x, 0) = u0(x), v(x, 0) = v0(x).
If f and g are uniformly Lipschitz continuous, give a proof of existence and unique-
ness of the solution to this problem in the space of bounded continuous functions with
||u(·, t)|| = supx |u(x, t)|.
Proof. The space of continuous bounded functions forms a complete metric space so
the contraction mapping principle applies.
First, let v(x, t) = w x√
2
, t , then
ut = uxx + f(u, w)
wt = wxx + g(u, w).
These initial value problems have the following solutions (K is the heat kernel):
u(x, t) =
Rn
˜K(x − y, t) u0(y) dy +
t
0 Rn
˜K(x − y, t − s) f(u, w) dyds,
w(x, t) =
Rn
˜K(x − y, t) w0(y) dy +
t
0 Rn
˜K(x − y, t − s) g(u, w) dyds.
By the Lipshitz conditions,
|f(u, w)| ≤ M1||u||,
|g(u, w)| ≤ M2||w||.
Now we can show the mappings, as defined below, are contractions:
T1u =
Rn
˜K(x − y, t) u0(y) dy +
t
0 Rn
˜K(x − y, t − s) f(u, w) dyds,
T2w =
Rn
˜K(x − y, t) w0(y) dy +
t
0 Rn
˜K(x − y, t − s) g(u, w) dyds.
|T1(un) − T1(un+1)| ≤
t
0 Rn
˜K(x − y, t − s) f(un, w) − f(un+1, w) dy ds
≤ M1
t
0 Rn
˜K(x − y, t − s) un − un+1 dy ds
≤ M1
t
0
sup
x
un − un+1
Rn
˜K(x − y, t − s)dy ds
≤ M1
t
0
sup
x
un − un+1 ds ≤ M1t sup
x
un − un+1
< sup
x
un − un+1 for small t.
We used the Lipshitz condition and R
˜K(x − y, t − s) dy = 1.
Thus, for small t, T1 is a contraction, and has a unique fixed point. Thus, the solution
is defined as u = T1u.
Similarly, T2 is a contraction and has a unique fixed point. The solution is defined as
w = T2w.
Partial Differential Equations Igor Yanovsky, 2005 279
21 Problems: Maximum Principle - Laplace and Heat
21.1 Heat Equation - Maximum Principle and Uniqueness
Let us introduce the “cylinder” U = UT = Ω × (0, T). We know that harmonic (and
subharmonic) functions achieve their maximum on the boundary of the domain. For
the heat equation, the result is improved in that the maximum is achieved on a certain
part of the boundary, parabolic boundary:
Γ = {(x, t) ∈ U : x ∈ ∂Ω or t = 0}.
Let us also denote by C2;1
(U) functions satisfying ut, uxixj ∈ C(U).
Weak Maximum Principle. Let u ∈ C2;1
(U) ∩ C(U) satisfy u ≥ ut in U.
Then u achieves its maximum on the parabolic boundary of U:
max
U
u(x, t) = max
Γ
u(x, t). (21.1)
Proof. • First, assume u > ut in U. For 0 < τ < T consider
Uτ = Ω × (0, τ), Γτ = {(x, t) ∈ Uτ : x ∈ ∂Ω or t = 0}.
If the maximum of u on Uτ occurs at x ∈ Ω and t = τ, then ut(x, τ) ≥ 0 and
u(x, τ) ≤ 0, violating our assumption; similarly, u cannot attain an interior maximum
on Uτ . Hence (21.1) holds for Uτ : maxUτ
u = maxΓτ u. But maxΓτ u ≤ maxΓ u
and by continuity of u, maxU u = limτ→T maxUτ
u. This establishes (21.1).
• Second, we consider the general case of u ≥ ut in U. Let u = v + εt for ε > 0.
Notice that v ≤ u on U and v − vt > 0 in U. Thus we may apply (21.1) to v:
max
U
u = max
U
(v + εt) ≤ max
U
v + εT = max
Γ
v + εT ≤ max
Γ
u + εT.
Letting ε → 0 establishes (21.1) for u.
Partial Differential Equations Igor Yanovsky, 2005 280
Problem (S’98, #7). Prove that any smooth solution, u(x, y, t) in the unit box
Ω = {(x, y) | − 1 ≤ x, y ≤ 1}, of the following equation
ut = uux + uuy + u, t ≥ 0, (x, y) ∈ Ω
u(x, y, 0) = f(x, y), (x, y) ∈ Ω
satisfies the weak maximum principle,
max
Ω×[0,T ]
u(x, y, t) ≤ max{ max
0≤t≤T
u(±1, ±1, t), max
(x,y)∈Ω
f(x, y)}.
Proof. Suppose u satisfies given equation. Let u = v + εt for ε > 0. Then,
vt + ε = vvx + vvy + εt(vx + vy) + v.
Suppose v has a maximum at (x0, y0, t0) ∈ Ω × (0, T). Then
vx = vy = vt = 0 ⇒ ε = v ⇒ v > 0
⇒ v has a minimum at (x0, y0, t0), a contradiction.
Thus, the maximum of v is on the boundary of Ω × (0, T).
Suppose v has a maximum at (x0, y0, T), (x0, y0) ∈ Ω. Then
vx = vy = 0, vt ≥ 0 ⇒ ε ≤ v ⇒ v > 0
⇒ v has a minimum at (x0, y0, T), a contradiction. Thus,
max
Ω×[0,T ]
v ≤ max{ max
0≤t≤T
v(±1, ±1, t), max
(x,y)∈Ω
f(x, y)}.
Now
max
Ω×[0,T ]
u = max
Ω×[0,T ]
(v + εt) ≤ max
Ω×[0,T ]
v + εT ≤ max{ max
0≤t≤T
v(±1, ±1, t), max
(x,y)∈Ω
f(x, y)} + εT
≤ max{ max
0≤t≤T
u(±1, ±1, t), max
(x,y)∈Ω
f(x, y)} + εT.
Letting ε → 0 establishes the result.
Partial Differential Equations Igor Yanovsky, 2005 281
21.2 Laplace Equation - Maximum Principle
Problem (S’91, #6). Suppose that u satisfies
Lu = auxx + buyy + cux + duy − eu = 0
with a > 0, b > 0, e > 0, for (x, y) ∈ Ω, with Ω a bounded open set in R2.
a) Show that u cannot have a positive maximum or a negative minimum in the in-
terior of Ω.
b) Use this to show that the only function u satisfying Lu = 0 in Ω, u = 0 on ∂Ω
and u continuous on Ω is u = 0.
Proof. a) For an interior (local) maximum or minimum at an interior point (x, y), we
have
ux = 0, uy = 0.
• Suppose u has a positive maximum in the interior of Ω. Then
u > 0, uxx ≤ 0, uyy ≤ 0.
With these values, we have
auxx
≤0
+ buyy
≤0
+ cux
=0
+ duy
=0
−eu
<0
= 0,
which leads to contradiction. Thus, u can not have a positive maximum in Ω.
• Suppose u has a negative minimum in the interior of Ω. Then
u < 0, uxx ≥ 0, uyy ≥ 0.
With these values, we have
auxx
≥0
+ buyy
≥0
+ cux
=0
+ duy
=0
−eu
>0
= 0,
which leads to contradiction. Thus, u can not have a negative minimum in Ω.
b) Since u can not have positive maximum in the interior of Ω, then maxu = 0 on Ω.
Since u can not have negative minimum in the interior of Ω, then min u = 0 on Ω.
Since u is continuous, u ≡ 0 on Ω.
Partial Differential Equations Igor Yanovsky, 2005 282
22 Problems: Separation of Variables - Laplace Equation
Problem 1: The 2D LAPLACE Equation on a Square.
Let Ω = (0, π) × (0, π), and use separation of variables to solve the boundary value
problem
⎧
⎪⎨
⎪⎩
uxx + uyy = 0 0 < x, y < π
u(0, y) = 0 = u(π, y) 0 ≤ y ≤ π
u(x, 0) = 0, u(x, π) = g(x) 0 ≤ x ≤ π,
where g is a continuous function satisfying g(0) = 0 = g(π).
Proof. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y +XY =
0.
X
X
= −
Y
Y
= −λ.
• From X + λX = 0, we get Xn(x) = an cos nx + bn sin nx. Boundary conditions
give
u(0, y) = X(0)Y (y) = 0
u(π, y) = X(π)Y (y) = 0
⇒ X(0) = 0 = X(π).
Thus, Xn(0) = an = 0, and
Xn(x) = bn sin nx, n = 1, 2, . . ..
−n2
bn sinnx + λbn sinnx = 0,
λn = n2
, n = 1, 2, . . ..
• With these values of λn we solve Y − n2Y = 0 to find Yn(y) = cn cosh ny +
dn sinhny.
Boundary conditions give
u(x, 0) = X(x)Y (0) = 0 ⇒ Y (0) = 0 = cn.
Yn(x) = dn sinh ny.
• By superposition, we write
u(x, y) =
∞
n=1
˜an sin nx sinhny,
which satifies the equation and the three homogeneous boundary conditions. The
boundary condition at y = π gives
u(x, π) = g(x) =
∞
n=1
˜an sinnx sinh nπ,
π
0
g(x) sinmx dx =
∞
n=1
˜an sinhnπ
π
0
sin nx sinmx dx =
π
2
˜am sinhmπ.
Partial Differential Equations Igor Yanovsky, 2005 283
˜an sinh nπ =
2
π
π
0
g(x) sinnx dx.
Partial Differential Equations Igor Yanovsky, 2005 284
Problem 2: The 2D LAPLACE Equation on a Square. Let Ω = (0, π)×(0, π),
and use separation of variables to solve the mixed boundary value problem
⎧
⎪⎨
⎪⎩
u = 0 in Ω
ux(0, y) = 0 = ux(π, y) 0 < y < π
u(x, 0) = 0, u(x, π) = g(x) 0 < x < π.
Proof. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y +XY =
0.
X
X
= −
Y
Y
= −λ.
• Consider X + λX = 0.
If λ = 0, X0(x) = a0x + b0.
If λ > 0, Xn(x) = an cos nx + bn sin nx.
Boundary conditions give
ux(0, y) = X (0)Y (y) = 0
ux(π, y) = X (π)Y (y) = 0
⇒ X (0) = 0 = X (π).
Thus, X0(0) = a0 = 0, and Xn(0) = nbn = 0.
X0(x) = b0, Xn(x) = an cos nx, n = 1, 2, . . ..
−n2
an cos nx + λan cos nx = 0,
λn = n2
, n = 0, 1, 2, . . ..
• With these values of λn we solve Y − n2Y = 0.
If n = 0, Y0(y) = c0y + d0.
If n = 0, Yn(y) = cn coshny + dn sinhny.
Boundary conditions give
u(x, 0) = X(x)Y (0) = 0 ⇒ Y (0) = 0.
Thus, Y0(0) = d0 = 0, and Yn(0) = cn = 0.
Y0(y) = c0y, Yn(y) = dn sinh ny, n = 1, 2, . . ..
• We have
u0(x, y) = X0(x)Y0(y) = b0c0y = ˜a0y,
un(x, y) = Xn(x)Yn(y) = (an cos nx)(dn sinh ny) = ˜an cos nx sinhny.
By superposition, we write
u(x, y) = ˜a0y +
∞
n=1
˜an cos nx sinhny,
which satifies the equation and the three homogeneous boundary conditions. The fourth
boundary condition gives
u(x, π) = g(x) = ˜a0π +
∞
n=1
˜an cos nx sinh nπ,
Partial Differential Equations Igor Yanovsky, 2005 285
π
0 g(x) dx =
π
0 ˜a0π + ∞
n=1 ˜an cos nx sinh nπ dx = ˜a0π2,
π
0 g(x) cosmx dx = ∞
n=1 ˜an sinhnπ
π
0 cos nx cos mx dx = π
2 ˜am sinh mπ.
˜a0 =
1
π2
π
0
g(x) dx,
˜an sinh nπ =
2
π
π
0
g(x) cosnx dx.
Partial Differential Equations Igor Yanovsky, 2005 286
Problem (W’04, #5) The 2D LAPLACE Equation in an Upper-Half Plane.
Consider the Laplace equation
∂2u
∂x2
+
∂2u
∂y2
= 0, y > 0, −∞ < x < +∞
∂u(x, 0)
∂y
− u(x, 0) = f(x),
where f(x) ∈ C∞
0 (R1
).
Find a bounded solution u(x, y) and show that u(x, y) → 0 when |x| + y → ∞.
Proof. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y +XY =
0.
X
X
= −
Y
Y
= −λ.
• Consider X + λX = 0.
If λ = 0, X0(x) = a0x + b0.
If λ > 0, Xn(x) = an cos
√
λnx + bn sin
√
λnx.
Since we look for bounded solutions as |x| → ∞, we have a0 = 0.
• Consider Y − λnY = 0.
If λn = 0, Y0(y) = c0y + d0.
If λn > 0, Yn(y) = cne−
√
λny
+ dne
√
λny
.
Since we look for bounded solutions as y → ∞, we have c0 = 0, dn = 0. Thus,
u(x, y) = ˜a0 +
∞
n=1
e−
√
λny
˜an cos λnx + ˜bn sin λnx .
Initial condition gives:
f(x) = uy(x, 0) − u(x, 0) = −˜a0 −
∞
n=1
( λn + 1) ˜an cos λnx + ˜bn sin λnx .
f(x) ∈ C∞
0 (R1
), i.e. has compact support [−L, L], for some L > 0. Thus the coefficients
˜an, ˜bn are given by
L
−L
f(x) cos λnx dx = −( λn + 1)˜anL.
L
−L
f(x) sin λnx dx = −( λn + 1)˜bnL.
Thus, u(x, y) → 0 when |x| + y → ∞. 70
70
Note that if we change the roles of X and Y in , the solution we get will be unbounded.
Partial Differential Equations Igor Yanovsky, 2005 287
Problem 3: The 2D LAPLACE Equation on a Circle.
Let Ω be the unit disk in R2 and consider the problem
u = 0 in Ω
∂u
∂n = h on ∂Ω,
where h is a continuous function.
Proof. Use polar coordinates (r, θ)
urr + 1
r ur + 1
r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π
∂u
∂r (1, θ) = h(θ) for 0 ≤ θ < 2π.
r2
urr + rur + uθθ = 0.
Let r = e−t
, u(r(t), θ).
ut = urrt = −e−t
ur,
utt = (−e−t
ur)t = e−t
ur + e−2t
urr = rur + r2
urr.
Thus, we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ.
λn = n2
, n = 0, 1, 2, . . ..
• With these values of λn we solve X (t) − n2X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn.
• We have
u0(r, θ) = X0(r)Y0(θ) = (−c0 log r + d0)a0,
un(r, θ) = Xn(r)Yn(θ) = (cnr−n
+ dnrn
)(an cos nθ + bn sinnθ).
But u must be finite at r = 0, so cn = 0, n = 0, 1, 2, . . ..
u0(r, θ) = d0a0,
un(r, θ) = dnrn
(an cos nθ + bn sinnθ).
By superposition, we write
u(r, θ) = ˜a0 +
∞
n=1
rn
(˜an cos nθ + ˜bn sinnθ).
Boundary condition gives
ur(1, θ) =
∞
n=1
n(˜an cos nθ + ˜bn sinnθ) = h(θ).
The coefficients an, bn for n ≥ 1 are determined from the Fourier series for h(θ).
a0 is not determined by h(θ) and therefore may take an arbitrary value. Moreover,
Partial Differential Equations Igor Yanovsky, 2005 288
the constant term in the Fourier series for h(θ) must be zero [i.e.,
2π
0 h(θ)dθ = 0].
Therefore, the problem is not solvable for an arbitrary function h(θ), and when it is
solvable, the solution is not unique.
Partial Differential Equations Igor Yanovsky, 2005 289
Problem 4: The 2D LAPLACE Equation on a Circle.
Let Ω = {(x, y) ∈ R2 : x2 + y2 < 1} = {(r, θ) : 0 ≤ r < 1, 0 ≤ θ < 2π},
and use separation of variables (r, θ) to solve the Dirichlet problem
u = 0 in Ω
u(1, θ) = g(θ) for 0 ≤ θ < 2π.
Proof. Use polar coordinates (r, θ)
urr + 1
r ur + 1
r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π
u(1, θ) = g(θ) for 0 ≤ θ < 2π.
r2
urr + rur + uθθ = 0.
Let r = e−t, u(r(t), θ).
ut = urrt = −e−t
ur,
utt = (−e−t
ur)t = e−t
ur + e−2t
urr = rur + r2
urr.
Thus, we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ.
λn = n2, n = 0, 1, 2, . . ..
• With these values of λn we solve X (t) − n2
X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n = 0, Xn(t) = cnent
+ dne−nt
⇒ Xn(r) = cnr−n
+ dnrn
.
• We have
u0(r, θ) = X0(r)Y0(θ) = (−c0 log r + d0)a0,
un(r, θ) = Xn(r)Yn(θ) = (cnr−n
+ dnrn
)(an cos nθ + bn sinnθ).
But u must be finite at r = 0, so cn = 0, n = 0, 1, 2, . . ..
u0(r, θ) = d0a0,
un(r, θ) = dnrn
(an cos nθ + bn sinnθ).
By superposition, we write
u(r, θ) = ˜a0 +
∞
n=1
rn
(˜an cos nθ + ˜bn sinnθ).
Boundary condition gives
u(1, θ) = ˜a0 +
∞
n=1
(˜an cos nθ + ˜bn sin nθ) = g(θ).
Partial Differential Equations Igor Yanovsky, 2005 290
˜a0 =
1
π
π
0
g(θ) dθ,
˜an =
2
π
π
0
g(θ) cosnθ dθ,
˜bn =
2
π
π
0
g(θ) sinnθ dθ.
Partial Differential Equations Igor Yanovsky, 2005 291
Problem (F’94, #6): The 2D LAPLACE Equation on a Circle.
Find all solutions of the homogeneous equation
uxx + uyy = 0, x2
+ y2
< 1,
∂u
∂n
− u = 0, x2
+ y2
= 1.
Hint: = 1
r
∂
∂r (r ∂
∂r ) + 1
r2
∂2
∂θ2 in polar coordinates.
Proof. Use polar coordinates (r, θ):
urr + 1
r ur + 1
r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π
∂u
∂r (1, θ) − u(1, θ) = 0 for 0 ≤ θ < 2π.
Since we solve the equation on a circle, we have periodic conditions:
u(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π),
uθ(r, 0) = uθ(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π).
Also, we want the solution to be bounded. In particular, u is bounded for r = 0.
r2
urr + rur + uθθ = 0.
Let r = e−t, u(r(t), θ), we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos
√
λθ + bn sin
√
λθ.
Using periodic condition: Yn(0) = an,
Yn(2π) = an cos( λn 2π) + bn sin( λn 2π) = an ⇒ λn = n ⇒ λn = n2
.
Thus, Yn(θ) = an cos nθ + bn sinnθ.
• With these values of λn we solve X (t) − n2
X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn.
u must be finite at r = 0 ⇒ cn = 0, n = 0, 1, 2, . . ..
u(r, θ) = ˜a0 +
∞
n=1
rn
(˜an cos nθ + ˜bn sinnθ).
Boundary condition gives
0 = ur(1, θ) − u(1, θ) = −˜a0 +
∞
n=1
(n − 1)(˜an cos nθ + ˜bn sinnθ).
Calculating Fourier coefficients gives −2π˜a0 = 0 ⇒ ˜a0 = 0.
π(n − 1)an = 0 ⇒ ˜an = 0, n = 2, 3, . . ..
a1, b1 are constants. Thus,
u(r, θ) = r(˜a1 cos θ + ˜b1 sin θ).
Partial Differential Equations Igor Yanovsky, 2005 292
Problem (S’00, #4).
a) Let (r, θ) be polar coordinates on the plane,
i.e. x1 + ix2 = reiθ
. Solve the boudary value problem
u = 0 in r < 1
∂u/∂r = f(θ) on r = 1,
beginning with the Fourier series for f (you may assume that f is continuously dif-
ferentiable). Give your answer as a power series in x1 + ix2 plus a power series in
x1 − ix2. There is a necessary condition on f for this boundary value problem to be
solvable that you will find in the course of doing this.
b) Sum the series in part (a) to get a representation of u in the form
u(r, θ) =
2π
0
N(r, θ − θ )f(θ ) dθ .
Proof. a) Green’s identity gives the necessary compatibility condition on f:
2π
0
f(θ) dθ =
r=1
∂u
∂r
dθ =
∂Ω
∂u
∂n
ds =
Ω
u dx = 0.
Use polar coordinates (r, θ):
urr + 1
r ur + 1
r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π
∂u
∂r (1, θ) = f(θ) for 0 ≤ θ < 2π.
Since we solve the equation on a circle, we have periodic conditions:
u(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π),
uθ(r, 0) = uθ(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π).
Also, we want the solution to be bounded. In particular, u is bounded for r = 0.
r2
urr + rur + uθθ = 0.
Let r = e−t, u(r(t), θ), we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos
√
λθ + bn sin
√
λθ.
Using periodic condition: Yn(0) = an,
Yn(2π) = an cos( λn 2π) + bn sin( λn 2π) = an ⇒ λn = n ⇒ λn = n2
.
Thus, Yn(θ) = an cos nθ + bn sinnθ.
• With these values of λn we solve X (t) − n2X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
Partial Differential Equations Igor Yanovsky, 2005 293
If n = 0, Xn(t) = cnent
+ dne−nt
⇒ Xn(r) = cnr−n
+ dnrn
.
u must be finite at r = 0 ⇒ cn = 0, n = 0, 1, 2, . . ..
u(r, θ) = ˜a0 +
∞
n=1
rn
(˜an cos nθ + ˜bn sinnθ).
Since
ur(r, θ) =
∞
n=1
nrn−1
(˜an cos nθ + ˜bn sinnθ),
the boundary condition gives
ur(1, θ) =
∞
n=1
n (˜an cos nθ + ˜bn sinnθ) = f(θ).
˜an =
1
nπ
2π
0
f(θ) cos nθ dθ,
˜bn =
1
nπ
2π
0
f(θ) sin nθ dθ.
˜a0 is not determined by f(θ) (since
2π
0 f(θ) dθ = 0). Therefore, it may take an
arbitrary value. Moreover, the constant term in the Fourier series for f(θ) must be zero
[i.e.,
2π
0 f(θ)dθ = 0]. Therefore, the problem is not solvable for an arbitrary function
f(θ), and when it is solvable, the solution is not unique.
b) In part (a), we obtained the solution and the Fourier coefficients:
˜an =
1
nπ
2π
0
f(θ ) cos nθ dθ ,
˜bn =
1
nπ
2π
0
f(θ ) sinnθ dθ .
u(r, θ) = ˜a0 +
∞
n=1
rn
(˜an cos nθ + ˜bn sinnθ)
= ˜a0 +
∞
n=1
rn 1
nπ
2π
0
f(θ ) cos nθ dθ cos nθ +
1
nπ
2π
0
f(θ ) sin nθ dθ sinnθ
= ˜a0 +
∞
n=1
rn
nπ
2π
0
f(θ ) cos nθ cos nθ + sin nθ sinnθ dθ
= ˜a0 +
∞
n=1
rn
nπ
2π
0
f(θ ) cos n(θ − θ) dθ
= ˜a0 +
2π
0
∞
n=1
rn
nπ
cos n(θ − θ )
N(r,θ−θ )
f(θ ) dθ .
Partial Differential Equations Igor Yanovsky, 2005 294
Problem (S’92, #6). Consider the Laplace equation
uxx + uyy = 0
for x2 + y2 ≥ 1. Denoting by x = r cos θ, y = r sinθ polar coordinates, let f = f(θ) be
a given smooth function of θ. Construct a uniformly bounded solution which satisfies
boundary conditions
u = f for x2
+ y2
= 1.
What conditions has f to satisfy such that
lim
x2+y2→∞
(x2
+ y2
)u(x, y) = 0?
Proof. Use polar coordinates (r, θ):
urr + 1
r ur + 1
r2 uθθ = 0 for r ≥ 1
u(1, θ) = f(θ) for 0 ≤ θ < 2π.
Since we solve the equation on outside of a circle, we have periodic conditions:
u(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π),
uθ(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π).
Also, we want the solution to be bounded. In particular, u is bounded for r = ∞.
r2
urr + rur + uθθ = 0.
Let r = e−t, u(r(t), θ), we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos
√
λθ + bn sin
√
λθ.
Using periodic condition: Yn(0) = an,
Yn(2π) = an cos( λn 2π) + bn sin( λn 2π) = an ⇒ λn = n ⇒ λn = n2
.
Thus, Yn(θ) = an cos nθ + bn sinnθ.
• With these values of λn we solve X (t) − n2X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn.
u must be finite at r = ∞ ⇒ c0 = 0, dn = 0, n = 1, 2, . . ..
u(r, θ) = ˜a0 +
∞
n=1
r−n
(˜an cos nθ + ˜bn sin nθ).
Boundary condition gives
f(θ) = u(1, θ) = ˜a0 +
∞
n=1
(˜an cos nθ + ˜bn sinnθ).
Partial Differential Equations Igor Yanovsky, 2005 295
⎧
⎪⎨
⎪⎩
2π˜a0 =
2π
0 f(θ) dθ,
π˜an =
2π
0 f(θ) cos nθ dθ,
π˜bn =
2π
0 f(θ) sinnθ dθ.
⇒
⎧
⎪⎨
⎪⎩
f0 = ˜a0 = 1
2π
2π
0 f(θ) dθ,
fn = ˜an = 1
π
2π
0 f(θ) cos nθ dθ,
˜fn = ˜bn = 1
π
2π
0 f(θ) sinnθ dθ.
• We need to find conditions for f such that
lim
x2+y2→∞
(x2
+ y2
)u(x, y) = 0, or
lim
r→∞
r2
u(r, θ) =
need
0,
lim
r→∞
r2
f0 +
∞
n=1
r−n
(fn cos nθ + ˜fn sin nθ) =
need
0.
Since
lim
r→∞
∞
n>2
r2−n
(fn cos nθ + ˜fn sin nθ) = 0,
we need
lim
r→∞
r2
f0 +
2
n=1
r2−n
(fn cos nθ + ˜fn sinnθ) =
need
0.
Thus, the conditions are
fn, ˜fn = 0, n = 0, 1, 2.
Partial Differential Equations Igor Yanovsky, 2005 296
Problem (F’96, #2): The 2D LAPLACE Equation on a Semi-Annulus.
Solve the Laplace equation in the semi-annulus
⎧
⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎩
u = 0, 1 < r < 2, 0 < θ < π,
u(r, 0) = u(r, π) = 0, 1 < r < 2,
u(1, θ) = sinθ, 0 < θ < π,
u(2, θ) = 0, 0 < θ < π.
Hint: Use the formula = 1
r
∂
∂r (r ∂
∂r ) + 1
r2
∂2
∂θ2 for the Laplacian in polar coordinates.
Proof. Use polar coordinates (r, θ)
urr +
1
r
ur +
1
r2
uθθ = 0 1 < r < 2, 0 < θ < π,
r2
urr + rur + uθθ = 0.
With r = e−t, we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos
√
λθ + bn sin
√
λθ.
Boundary conditions give
un(r, 0) = 0 = Xn(r)Yn(0) = 0, ⇒ Yn(0) = 0,
un(r, π) = 0 = Xn(r)Yn(π) = 0, ⇒ Yn(π) = 0.
Thus, 0 = Yn(0) = an, and Yn(π) = bn sin
√
λπ = 0 ⇒
√
λ = n ⇒ λn = n2
.
Thus, Yn(θ) = bn sinnθ, n = 1, 2, . . ..
• With these values of λn we solve X (t) − n2X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n > 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn.
• We have,
u(r, θ) =
∞
n=1
Xn(r)Yn(θ) =
∞
n=1
(˜cnr−n
+ ˜dnrn
) sinnθ.
Using the other two boundary conditions, we obtain
sinθ = u(1, θ) =
∞
n=1
(˜cn + ˜dn) sinnθ ⇒
˜c1 + ˜d1 = 1,
˜cn + ˜dn = 0, n = 2, 3, . . ..
0 = u(2, θ) =
∞
n=1
(˜cn2−n
+ ˜dn2n
) sinnθ ⇒ ˜cn2−n
+ ˜dn2n
= 0, n = 1, 2, . . ..
Thus, the coefficients are given by
c1 =
4
3
, d1 = −
1
3
;
cn = 0, dn = 0.
Partial Differential Equations Igor Yanovsky, 2005 297
u(r, θ) =
4
3r
−
r
3
sinθ.
Partial Differential Equations Igor Yanovsky, 2005 298
Problem (S’98, #8): The 2D LAPLACE Equation on a Semi-Annulus.
Solve
⎧
⎪⎨
⎪⎩
u = 0, 1 < r < 2, 0 < θ < π,
u(r, 0) = u(r, π) = 0, 1 < r < 2,
u(1, θ) = u(2, θ) = 1, 0 < θ < π.
Proof. Use polar coordinates (r, θ)
urr +
1
r
ur +
1
r2
uθθ = 0 for 1 < r < 2, 0 < θ < π,
r2
urr + rur + uθθ = 0.
With r = e−t
, we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ.
Boundary conditions give
un(r, 0) = 0 = Xn(r)Yn(0) = 0, ⇒ Yn(0) = 0,
un(r, π) = 0 = Xn(r)Yn(π) = 0, ⇒ Yn(π) = 0.
Thus, 0 = Yn(0) = an, and Yn(θ) = bn sin nθ.
λn = n2
, n = 1, 2, . . ..
• With these values of λn we solve X (t) − n2X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n > 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn.
• We have,
u(r, θ) =
∞
n=1
Xn(r)Yn(θ) =
∞
n=1
(˜cnr−n
+ ˜dnrn
) sinnθ.
Using the other two boundary conditions, we obtain
u(1, θ) = 1 =
∞
n=1
(˜cn + ˜dn) sinnθ,
u(2, θ) = 1 =
∞
n=1
(˜cn2−n
+ ˜dn2n
) sinnθ,
which give the two equations for ˜cn and ˜dn:
π
0
sin nθ dθ =
π
2
(˜cn + ˜dn),
π
0
sin nθ dθ =
π
2
(˜cn2−n
+ ˜dn2n
),
that can be solved.
Partial Differential Equations Igor Yanovsky, 2005 299
Problem (F’89, #1). Consider Laplace equation inside a 90◦
sector of a circular
annulus
u = 0 a < r < b, 0 < θ <
π
2
subject to the boundary conditions
∂u
∂θ
(r, 0) = 0,
∂u
∂θ
(r,
π
2
) = 0,
∂u
∂r
(a, θ) = f1(θ),
∂u
∂r
(b, θ) = f2(θ),
where f1(θ), f2(θ) are continuously differentiable.
a) Find the solution of this equation with the prescribed
boundary conditions using separation of variables.
Proof. a) Use polar coordinates (r, θ)
urr +
1
r
ur +
1
r2
uθθ = 0 for a < r < b, 0 < θ <
π
2
,
r2
urr + rur + uθθ = 0.
With r = e−t, we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos
√
λθ + bn sin
√
λθ.
Boundary conditions give
unθ(r, 0) = Xn(r)Yn(0) = 0 ⇒ Yn(0) = 0,
unθ(r,
π
2
) = Xn(r)Yn(
π
2
) = 0 ⇒ Yn(
π
2
) = 0.
Yn(θ) = −an
√
λn sin
√
λnθ + bn
√
λn cos
√
λnθ. Thus, Yn(0) = bn
√
λn = 0 ⇒ bn = 0.
Yn(π
2 ) = −an
√
λn sin
√
λn
π
2 = 0 ⇒
√
λn
π
2 = nπ ⇒ λn = (2n)2.
Thus, Yn(θ) = an cos(2nθ), n = 0, 1, 2, . . ..
In particular, Y0(θ) = a0t + b0. Boundary conditions give Y0(θ) = b0.
• With these values of λn we solve X (t) − (2n)2
X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n > 0, Xn(t) = cne2nt + dne−2nt ⇒ Xn(r) = cnr−2n + dnr2n.
u(r, θ) = ˜c0 log r + ˜d0 +
∞
n=1
(˜cnr−2n
+ ˜dnr2n
) cos(2nθ).
Using the other two boundary conditions, we obtain
ur(r, θ) =
˜c0
r
+
∞
n=1
(−2n˜cnr−2n−1
+ 2n ˜dnr2n−1
) cos(2nθ).
Partial Differential Equations Igor Yanovsky, 2005 300
f1(θ) = ur(a, θ) =
˜c0
a
+ 2
∞
n=1
n(−˜cna−2n−1
+ ˜dna2n−1
) cos(2nθ),
f2(θ) = ur(b, θ) =
˜c0
b
+ 2
∞
n=1
n(−˜cnb−2n−1
+ ˜dnb2n−1
) cos(2nθ).
which give the two equations for ˜cn and ˜dn:
π
2
0
f1(θ) cos(2nθ) dθ =
π
2
n(−˜cna−2n−1
+ ˜dna2n−1
),
π
2
0
f2(θ) sin(2nθ) dθ =
π
2
n(−˜cnb−2n−1
+ ˜dnb2n−1
).
b) Show that the solution exists if and only if
a
π
2
0
f1(θ) dθ − b
π
2
0
f2(θ) dθ = 0.
Proof. Using Green’s identity, we obtain:
0 =
Ω
u dx =
∂Ω
∂u
∂n
=
π
2
0
∂u
∂r
(b, θ) dθ +
0
π
2
−
∂u
∂r
(a, θ) dθ +
b
a
−
∂u
∂θ
(r, 0) dr +
a
b
∂u
∂θ
r,
π
2
dr
=
π
2
0
f2(θ) dθ +
π
2
0
f1(θ) dθ + 0 + 0
=
π
2
0
f1(θ) dθ +
π
2
0
f2(θ) dθ.
c) Is the solution unique?
Proof. No, since the boundary conditions are Neumann. The solution is unique only
up to a constant.
Partial Differential Equations Igor Yanovsky, 2005 301
Problem (S’99, #4). Let u(x, y) be harmonic inside the unit disc,
with boundary values along the unit circle
u(x, y) =
1, y > 0
0, y ≤ 0.
Compute u(0, 0) and u(0, y).
Proof. Since u is harmonic, u = 0. Use polar coordinates (r, θ)
⎧
⎪⎨
⎪⎩
urr + 1
r ur + 1
r2 uθθ = 0 0 ≤ r < 1, 0 ≤ θ < 2π
u(1, θ) =
1, 0 < θ < π
0, π ≤ θ ≤ 2π.
r2
urr + rur + uθθ = 0.
With r = e−t
, we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ.
λn = n2, n = 1, 2, . . ..
• With these values of λn we solve X (t) − n2
X(t) = 0.
If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
If n > 0, Xn(t) = cnent
+ dne−nt
⇒ Xn(r) = cnr−n
+ dnrn
.
• We have
u0(r, θ) = X0(r)Y0(θ) = (−c0 log r + d0)a0,
un(r, θ) = Xn(r)Yn(θ) = (cnr−n
+ dnrn
)(an cos nθ + bn sinnθ).
But u must be finite at r = 0, so cn = 0, n = 0, 1, 2, . . ..
u0(r, θ) = ˜a0,
un(r, θ) = rn
(˜an cos nθ + ˜bn sin nθ).
By superposition, we write
u(r, θ) = ˜a0 +
∞
n=1
rn
(˜an cos nθ + ˜bn sinnθ).
Boundary condition gives
u(1, θ) = ˜a0 +
∞
n=1
(˜an cos nθ + ˜bn sin nθ) =
1, 0 < θ < π
0, π ≤ θ ≤ 2π,
and the coefficients ˜an and ˜bn are determined from the above equation.
71
71
See Yana’s solutions, where Green’s function on a unit disk is constructed.
Partial Differential Equations Igor Yanovsky, 2005 302
23 Problems: Separation of Variables - Poisson Equation
Problem (F’91, #2): The 2D POISSON Equation on a Quarter-Circle.
Solve explicitly the following boundary value problem
uxx + uyy = f(x, y)
in the domain Ω = {(x, y), x > 0, y > 0, x2 + y2 < 1}
with boundary conditions
u = 0 for y = 0, 0 < x < 1,
∂u
∂x
= 0 for x = 0, 0 < y < 1,
u = 0 for x > 0, y > 0, x2
+ y2
= 1.
Function f(x, y) is known and is assumed to be continuous.
Proof. Use polar coordinates (r, θ):
⎧
⎪⎪⎪⎪⎨
⎪⎪⎪⎪⎩
urr + 1
r ur + 1
r2 uθθ = f(r, θ) 0 ≤ r < 1, 0 ≤ θ < π
2
u(r, 0) = 0 0 ≤ r < 1,
uθ(r, π
2 ) = 0 0 ≤ r < 1,
u(1, θ) = 0 0 ≤ θ ≤ π
2 .
We solve
r2
urr + rur + uθθ = 0.
Let r = e−t, u(r(t), θ), we have
utt + uθθ = 0.
Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0.
X (t)
X(t)
= −
Y (θ)
Y (θ)
= λ.
• From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos
√
λθ + bn sin
√
λθ. Boundary
conditions:
u(r, 0) = X(r)Y (0) = 0
uθ(r, π
2 ) = X(r)Y (π
2 ) = 0
⇒ Y (0) = Y
π
2
= 0.
Thus, Yn(0) = an = 0, and Yn(π
2 ) =
√
λnbn cos
√
λn
π
2 = 0
⇒
√
λn
π
2 = nπ − π
2 , n = 1, 2, . . . ⇒ λn = (2n − 1)2.
Thus, Yn(θ) = bn sin(2n − 1)θ, n = 1, 2, . . .. Thus, we have
u(r, θ) =
∞
n=1
Xn(r) sin[(2n − 1)θ].
Partial Differential Equations Igor Yanovsky, 2005 303
We now plug this equation into with inhomogeneous term and obtain
∞
n=1
Xn(t) sin[(2n − 1)θ] − (2n − 1)2
Xn(t) sin[(2n − 1)θ] = f(t, θ),
∞
n=1
Xn(t) − (2n − 1)2
Xn(t) sin[(2n − 1)θ] = f(t, θ),
π
4
Xn(t) − (2n − 1)2
Xn(t) =
π
2
0
f(t, θ) sin[(2n − 1)θ] dθ,
Xn(t) − (2n − 1)2
Xn(t) =
4
π
π
2
0
f(t, θ) sin[(2n − 1)θ] dθ.
The solution to this equation is
Xn(t) = cne(2n−1)t
+ dne−(2n−1)t
+ Unp(t), or
Xn(r) = cnr−(2n−1)
+ dnr(2n−1)
+ unp(r),
where unp is the particular solution of inhomogeneous equation.
u must be finite at r = 0 ⇒ cn = 0, n = 1, 2, . . .. Thus,
u(r, θ) =
∞
n=1
dnr(2n−1)
+ unp(r) sin[(2n − 1)θ].
Using the last boundary condition, we have
0 = u(1, θ) =
∞
n=1
dn + unp(1) sin[(2n − 1)θ],
⇒ 0 =
π
4
(dn + unp(1)),
⇒ dn = −unp(1).
u(r, θ) =
∞
n=1
− unp(1)r(2n−1)
+ unp(r) sin[(2n − 1)θ].
The method used to solve this problem is similar to section
Problems: Eigenvalues of the Laplacian - Poisson Equation:
1) First, we find Yn(θ) eigenfunctions.
2) Then, we plug in our guess u(t, θ) = X(t)Y (θ) into the equation utt + uθθ = f(t, θ)
and solve an ODE in X(t).
Note the similar problem on 2D Poisson equation on a square domain. The prob-
lem is used by first finding the eigenvalues and eigenfunctions of the Laplacian, and
then expanding f(x, y) in eigenfunctions, and comparing coefficients of f with the gen-
eral solution u(x, y).
Here, however, this could not be done because of the circular geometry of the domain.
In particular, the boundary conditions do not give enough information to find explicit
representations for μm and νn. Also, the condition u = 0 for x > 0, y > 0, x2
+y2
= 1
Partial Differential Equations Igor Yanovsky, 2005 304
can not be used.
72
72
ChiuYen’s solutions have attempts to solve this problem using Green’s function.
Partial Differential Equations Igor Yanovsky, 2005 305
24 Problems: Separation of Variables - Wave Equation
Example (McOwen 3.1 #2). We considered the initial/boundary value problem and
solved it using Fourier Series. We now solve it using the Separation of Variables.
⎧
⎪⎨
⎪⎩
utt − uxx = 0 0 < x < π, t > 0
u(x, 0) = 1, ut(x, 0) = 0 0 < x < π
u(0, t) = 0, u(π, t) = 0 t ≥ 0.
(24.1)
Proof. Assume u(x, t) = X(x)T(t), then substitution in the PDE gives XT −X T = 0.
X
X
=
T
T
= −λ.
• From X + λX = 0, we get Xn(x) = an cos nx + bn sin nx. Boundary conditions
give
u(0, t) = X(0)T(t) = 0
u(π, t) = X(π)T(t) = 0
⇒ X(0) = X(π) = 0.
Thus, Xn(0) = an = 0, and Xn(x) = bn sinnx, λn = n2, n = 1, 2, . . ..
• With these values of λn, we solve T +n2
T = 0 to find Tn(t) = cn sinnt+dn cos nt.
Thus,
u(x, t) =
∞
n=1
˜cn sin nt + ˜dn cos nt sin nx,
ut(x, t) =
∞
n=1
n˜cn cos nt − n ˜dn sin nt sinnx.
• Initial conditions give
1 = u(x, 0) =
∞
n=1
˜dn sin nx,
0 = ut(x, 0) =
∞
n=1
n˜cn sinnx.
By orthogonality, we may multiply both equations by sinmx and integrate:
π
0
sin mx dx = ˜dm
π
2
,
π
0
0 dx = n˜cn
π
2
,
which gives the coefficients
˜dn =
2
nπ
(1 − cos nπ) =
4
nπ , n odd,
0, n even,
and ˜cn = 0.
Plugging the coefficients into a formula for u(x, t), we get
u(x, t) =
4
π
∞
n=0
cos(2n + 1)t sin(2n + 1)x
(2n + 1)
.
Partial Differential Equations Igor Yanovsky, 2005 306
Example. Use the method of separation of variables to find the solution to:
⎧
⎪⎨
⎪⎩
utt + 3ut + u = uxx, 0 < x < 1
u(0, t) = 0, u(1, t) = 0,
u(x, 0) = 0, ut(x, 0) = x sin(2πx).
Proof. Assume u(x, t) = X(x)T(t), then substitution in the PDE gives
XT + 3XT + XT = X T,
T
T
+ 3
T
T
+ 1 =
X
X
= −λ.
• From X + λX = 0, Xn(x) = an cos
√
λnx + bn sin
√
λnx. Boundary conditions
give
u(0, t) = X(0)T(t) = 0
u(1, t) = X(1)T(t) = 0
⇒ X(0) = X(1) = 0.
Thus, Xn(0) = an = 0, and Xn(x) = bn sin
√
λnx.
Xn(1) = bn sin
√
λn = 0. Hence,
√
λn = nπ, or λn = (nπ)2
, n = 1, 2, . . ..
λn = (nπ)2
, Xn(x) = bn sinnπx.
• With these values of λn, we solve
T + 3T + T = −λnT,
T + 3T + T = −(nπ)2
T,
T + 3T + (1 + (nπ)2
)T = 0.
We can solve this 2nd-order ODE with the following guess, T(t) = cest to obtain
s = −3
2 ± 5
4 − (nπ)2. For n ≥ 1, 5
4 − (nπ)2 < 0. Thus, s = −3
2 ± i (nπ)2 − 5
4.
Tn(t) = e−3
2
t
cn cos (nπ)2 −
5
4
t + dn sin (nπ)2 −
5
4
t .
u(x, t) = X(x)T(t) =
∞
n=1
e−3
2
t
cn cos (nπ)2 −
5
4
t + dn sin (nπ)2 −
5
4
t sinnπx.
• Initial conditions give
0 = u(x, 0) =
∞
n=1
cn sin nπx.
By orthogonality, we may multiply this equations by sin mπx and integrate:
1
0
0 dx =
1
2
cm ⇒ cm = 0.
Partial Differential Equations Igor Yanovsky, 2005 307
Thus,
u(x, t) =
∞
n=1
dne−3
2
t
sin (nπ)2 −
5
4
t sinnπx.
ut(x, t) =
∞
n=1
−
3
2
dne−3
2
t
sin (nπ)2 −
5
4
t + dne−3
2
t
(nπ)2 −
5
4
cos (nπ)2 −
5
4
t sinnπx,
x sin(2πx) = ut(x, 0) =
∞
n=1
dn (nπ)2 −
5
4
sinnπx.
By orthogonality, we may multiply this equations by sin mπx and integrate:
1
0
x sin(2πx) sin(mπx) dx = dm
1
2
(mπ)2 −
5
4
,
dn =
2
(nπ)2 − 5
4
1
0
x sin(2πx) sin(nπx) dx.
u(x, t) = e−3
2
t
∞
n=1
dn sin (nπ)2 −
5
4
t sin nπx.
Problem (F’04, #1). Solve the following initial-boundary value problem for the wave
equation with a potential term,
⎧
⎪⎨
⎪⎩
utt − uxx + u = 0 0 < x < π, t < 0
u(0, t) = u(π, t) = 0 t > 0
u(x, 0) = f(x), ut(x, 0) = 0 0 < x < π,
where
f(x) =
x if x ∈ (0, π/2),
π − x if x ∈ (π/2, π).
The answer should be given in terms of an infinite series of explicitly given functions.
Proof. Assume u(x, t) = X(x)T(t), then substitution in the PDE gives
XT − X T + XT = 0,
T
T
+ 1 =
X
X
= −λ.
• From X + λX = 0, Xn(x) = an cos
√
λnx + bn sin
√
λnx. Boundary conditions
give
u(0, t) = X(0)T(t) = 0
u(π, t) = X(π)T(t) = 0
⇒ X(0) = X(π) = 0.
Thus, Xn(0) = an = 0, and Xn(x) = bn sin
√
λnx.
Xn(π) = bn sin
√
λnπ = 0. Hence,
√
λn = n, or λn = n2
, n = 1, 2, . . ..
λn = n2
, Xn(x) = bn sinnx.
Partial Differential Equations Igor Yanovsky, 2005 308
• With these values of λn, we solve
T + T = −λnT,
T + T = −n2
T,
Tn + (1 + n2
)Tn = 0.
The solution to this 2nd-order ODE is of the form:
Tn(t) = cn cos 1 + n2 t + dn sin 1 + n2 t.
u(x, t) = X(x)T(t) =
∞
n=1
cn cos 1 + n2 t + dn sin 1 + n2 t sinnx.
ut(x, t) =
∞
n=1
− cn( 1 + n2) sin 1 + n2 t + dn( 1 + n2) cos 1 + n2 t sin nx.
• Initial conditions give
f(x) = u(x, 0) =
∞
n=1
cn sin nx.
0 = ut(x, 0) =
∞
n=1
dn( 1 + n2) sinnx.
By orthogonality, we may multiply both equations by sinmx and integrate:
π
0
f(x) sinmx dx = cm
π
2
,
π
0
0 dx = dm
π
2
1 + m2,
which gives the coefficients
cn =
2
π
π
0
f(x) sinnx dx =
2
π
π
2
0
x sinnx dx +
2
π
π
π
2
(π − x) sinnx dx
=
2
π
− x
1
n
cos nx
π
2
0
+
1
n
π
2
0
cos nx dx +
2
π
−
π
n
cos nx
π
π
2
+ x
1
n
cos nx
π
π
2
−
1
n
π
π
2
cos nx dx
=
2
π
−
π
2n
cos
nπ
2
+
1
n2
sin
nπ
2
−
1
n2
sin0
+
2
π
−
π
n
cos nπ +
π
n
cos
nπ
2
+
π
n
cos nπ −
π
2n
cos
nπ
2
−
1
n2
sin nπ +
1
n2
sin
nπ
2
=
2
π
1
n2
sin
nπ
2
+
2
π
1
n2
sin
nπ
2
=
4
πn2
sin
nπ
2
=
⎧
⎪⎨
⎪⎩
0, n = 2k
4
πn2 , n = 4m + 1
− 4
πn2 , n = 4m + 3
=
0, n = 2k
(−1)
n−1
2
4
πn2 , n = 2k + 1.
dn = 0.
u(x, t) =
∞
n=1
cn cos 1 + n2 t sinnx.
Partial Differential Equations Igor Yanovsky, 2005 309
25 Problems: Separation of Variables - Heat Equation
Problem (F’94, #5).
Solve the initial-boundary value problem
⎧
⎪⎨
⎪⎩
ut = uxx 0 < x < 2, t > 0
u(x, 0) = x2 − x + 1 0 ≤ x ≤ 2
u(0, t) = 1, u(2, t) = 3 t > 0.
Find limt→+∞ u(x, t).
Proof. ➀ First, we need to obtain function v that satisfies vt = vxx and takes 0
boundary conditions. Let
• v(x, t) = u(x, t) + (ax + b), (25.1)
where a and b are constants to be determined. Then,
vt = ut,
vxx = uxx.
Thus,
vt = vxx.
We need equation (25.1) to take 0 boundary conditions for v(0, t) and v(2, t):
v(0, t) = 0 = u(0, t) + b = 1 + b ⇒ b = −1,
v(2, t) = 0 = u(2, t) + 2a − 1 = 2a + 2 ⇒ a = −1.
Thus, (25.1) becomes
v(x, t) = u(x, t) − x − 1. (25.2)
The new problem is
⎧
⎪⎨
⎪⎩
vt = vxx,
v(x, 0) = (x2
− x + 1) − x − 1 = x2
− 2x,
v(0, t) = v(2, t) = 0.
➁ We solve the problem for v using the method of separation of variables.
Let v(x, t) = X(x)T(t), which gives XT − X T = 0.
X
X
=
T
T
= −λ.
From X + λX = 0, we get Xn(x) = an cos
√
λx + bn sin
√
λx.
Using boundary conditions, we have
v(0, t) = X(0)T(t) = 0
v(2, t) = X(2)T(t) = 0
⇒ X(0) = X(2) = 0.
Hence, Xn(0) = an = 0, and Xn(x) = bn sin
√
λx.
Xn(2) = bn sin 2
√
λ = 0 ⇒ 2
√
λ = nπ ⇒ λn = (nπ
2 )2.
Xn(x) = bn sin
nπx
2
, λn =
nπ
2
2
.
Partial Differential Equations Igor Yanovsky, 2005 310
With these values of λn, we solve T + nπ
2
2
T = 0 to find
Tn(t) = cne−( nπ
2
)2t
.
v(x, t) =
∞
n=1
Xn(x)Tn(t) =
∞
n=1
˜cn e−( nπ
2
)2t
sin
nπx
2
.
Coefficients ˜cn are obtained using the initial condition:
v(x, 0) =
∞
n=1
˜cn sin
nπx
2
= x2
− 2x.
˜cn =
2
0
(x2
− 2x) sin
nπx
2
dx =
0 n is even,
− 32
(nπ)3 n is odd.
⇒ v(x, t) =
∞
n=2k−1
−
32
(nπ)3
e−( nπ
2
)2t
sin
nπx
2
.
We now use equation (25.2) to convert back to function u:
u(x, t) = v(x, t) + x + 1.
u(x, t) =
∞
n=2k−1
−
32
(nπ)3
e−( nπ
2
)2t
sin
nπx
2
+ x + 1.
lim
t→+∞
u(x, t) = x + 1.
Partial Differential Equations Igor Yanovsky, 2005 311
Problem (S’96, #6).
Let u(x, t) be the solution of the initial-boundary value problem for the heat equation
⎧
⎪⎨
⎪⎩
ut = uxx 0 < x < L, t > 0
u(x, 0) = f(x) 0 ≤ x ≤ L
ux(0, t) = ux(L, t) = A t > 0 (A = Const).
Find v(x) - the limit of u(x, t) when t → ∞. Show that v(x) is one of the inifinitely
many solutions of the stationary problem
vxx = 0 0 < x < L
vx(0) = vx(L) = A.
Proof. ➀ First, we need to obtain function v that satisfies vt = vxx and takes 0
boundary conditions. Let
• v(x, t) = u(x, t) + (ax + b), (25.3)
where a and b are constants to be determined. Then,
vt = ut,
vxx = uxx.
Thus,
vt = vxx.
We need equation (25.3) to take 0 boundary conditions for vx(0, t) and vx(L, t).
vx = ux + a.
vx(0, t) = 0 = ux(0, t) + a = A + a ⇒ a = −A,
vx(L, t) = 0 = ux(L, t) + a = A + a ⇒ a = −A.
We may set b = 0 (infinitely many solutions are possible, one for each b).
Thus, (25.3) becomes
v(x, t) = u(x, t) − Ax. (25.4)
The new problem is
⎧
⎪⎨
⎪⎩
vt = vxx,
v(x, 0) = f(x) − Ax,
vx(0, t) = vx(L, t) = 0.
➁ We solve the problem for v using the method of separation of variables.
Let v(x, t) = X(x)T(t), which gives XT − X T = 0.
X
X
=
T
T
= −λ.
From X + λX = 0, we get Xn(x) = an cos
√
λx + bn sin
√
λx.
Using boundary conditions, we have
vx(0, t) = X (0)T(t) = 0
vx(L, t) = X (L)T(t) = 0
⇒ X (0) = X (L) = 0.
Partial Differential Equations Igor Yanovsky, 2005 312
Xn(x) = −an
√
λ sin
√
λx + bn
√
λ cos
√
λx.
Hence, Xn(0) = bn
√
λn = 0 ⇒ bn = 0; and Xn(x) = an cos
√
λx.
Xn(L) = −an
√
λ sinL
√
λ = 0 ⇒ L
√
λ = nπ ⇒ λn = (nπ
L )2
.
Xn(x) = an cos
nπx
L
, λn =
nπ
L
2
.
With these values of λn, we solve T + nπ
L
2
T = 0 to find
T0(t) = c0, Tn(t) = cne−( nπ
L
)2t
, n = 1, 2, . . ..
v(x, t) =
∞
n=1
Xn(x)Tn(t) = ˜c0 +
∞
n=1
˜cn e−( nπ
L
)2t
cos
nπx
L
.
Coefficients ˜cn are obtained using the initial condition:
v(x, 0) = ˜c0 +
∞
n=1
˜cn cos
nπx
L
= f(x) − Ax.
L˜c0 =
L
0
(f(x) − Ax) dx =
L
0
f(x) dx −
AL2
2
⇒ ˜c0 =
1
L
L
0
f(x) dx −
AL
2
,
L
2
˜cn =
L
0
(f(x) − Ax) cos
nπx
L
dx ⇒ ˜cn =
1
L
L
0
(f(x) − Ax) cos
nπx
L
dx.
⇒ v(x, t) =
1
L
L
0
f(x) dx −
AL
2
+
∞
n
˜cn e−( nπ
L
)2t
cos
nπx
L
.
We now use equation (25.4) to convert back to function u:
u(x, t) = v(x, t) + Ax.
u(x, t) =
1
L
L
0
f(x) dx −
AL
2
+
∞
n
˜cn e−( nπ
L
)2t
cos
nπx
L
+ Ax.
lim
t→+∞
u(x, t) = Ax + b, b arbitrary.
To show that v(x) is one of the inifinitely many solutions of the stationary problem
vxx = 0 0 < x < L
vx(0) = vx(L) = A,
we can solve the boundary value problem to obtain v(x, t) = Ax+b, where b is arbitrary.
Partial Differential Equations Igor Yanovsky, 2005 313
Heat Equation with Nonhomogeneous Time-Independent BC in N-dimensions.
The solution to this problem takes somewhat different approach than in the last few prob-
lems, but is similar.
Consider the following initial-boundary value problem,
⎧
⎪⎨
⎪⎩
ut = u, x ∈ Ω, t ≥ 0
u(x, 0) = f(x), x ∈ Ω
u(x, t) = g(x), x ∈ ∂Ω, t > 0.
Proof. Let w(x) be the solution of the Dirichlet problem:
w = 0, x ∈ Ω
w(x) = g(x), x ∈ ∂Ω
and let v(x, t) be the solution of the IBVP for the heat equation with homogeneous
BC:
⎧
⎪⎨
⎪⎩
vt = v, x ∈ Ω, t ≥ 0
v(x, 0) = f(x) − w(x), x ∈ Ω
v(x, t) = 0, x ∈ ∂Ω, t > 0.
Then u(x, t) satisfies
u(x, t) = v(x, t) + w(x).
lim
t→∞
u(x, t) = w(x).
Partial Differential Equations Igor Yanovsky, 2005 314
Nonhomogeneous Heat Equation with Nonhomogeneous Time-Independent
BC in N dimensions.
Describe the method of solution of the problem
⎧
⎪⎨
⎪⎩
ut = u + F(x, t), x ∈ Ω, t ≥ 0
u(x, 0) = f(x), x ∈ Ω
u(x, t) = g(x), x ∈ ∂Ω, t > 0.
Proof. ❶ We first find u1, the solution to the homogeneous heat equation (no F(x, t)).
Let w(x) be the solution of the Dirichlet problem:
w = 0, x ∈ Ω
w(x) = g(x), x ∈ ∂Ω
and let v(x, t) be the solution of the IBVP for the heat equation with homogeneous
BC:
⎧
⎪⎨
⎪⎩
vt = v, x ∈ Ω, t ≥ 0
v(x, 0) = f(x) − w(x), x ∈ Ω
v(x, t) = 0, x ∈ ∂Ω, t > 0.
Then u1(x, t) satisfies
u1(x, t) = v(x, t) + w(x).
lim
t→∞
u1(x, t) = w(x).
❷ The solution to the homogeneous equation with 0 boundary conditions is given by
Duhamel’s principle.
u2t = u2 + F(x, t) for t > 0, x ∈ Rn
u2(x, 0) = 0 for x ∈ Rn.
(25.5)
Duhamel’s principle gives the solution:
u2(x, t) =
t
0 Rn
˜K(x − y, t − s) F(y, s) dy ds
Note: u2(x, t) = 0 on ∂Ω may not be satisfied.
u(x, t) = v(x, t) + w(x) +
t
0 Rn
˜K(x − y, t − s) F(y, s) dy ds.
Partial Differential Equations Igor Yanovsky, 2005 315
Problem (S’98, #5). Find the solution of
⎧
⎪⎨
⎪⎩
ut = uxx, t ≥ 0, 0 < x < 1,
u(x, 0) = 0, 0 < x < 1,
u(0, t) = 1 − e−t, ux(1, t) = e−t − 1, t > 0.
Prove that limt→∞ u(x, t) exists and find it.
Proof. ➀ First, we need to obtain function v that satisfies vt = vxx and takes 0
boundary conditions. Let
• v(x, t) = u(x, t) + (ax + b) + (c1 cos x + c2 sin x)e−t
, (25.6)
where a, b, c1, c2 are constants to be determined. Then,
vt = ut − (c1 cos x + c2 sinx)e−t
,
vxx = uxx + (−c1 cos x − c2 sin x)e−t
.
Thus,
vt = vxx.
We need equation (25.6) to take 0 boundary conditions for v(0, t) and vx(1, t):
v(0, t) = 0 = u(0, t) + b + c1e−t
= 1 − e−t
+ b + c1e−t
.
Thus, b = −1, c1 = 1, and (25.6) becomes
v(x, t) = u(x, t) + (ax − 1) + (cos x + c2 sin x)e−t
. (25.7)
vx(x, t) = ux(x, t) + a + (− sinx + c2 cos x)e−t
,
vx(1, t) = 0 = ux(1, t) + a + (− sin1 + c2 cos 1)e−t
= −1 + a + (1 − sin 1 + c2 cos 1)e−t
.
Thus, a = 1, c2 = sin 1−1
cos 1 , and equation (25.7) becomes
v(x, t) = u(x, t) + (x − 1) + (cos x +
sin 1 − 1
cos 1
sin x)e−t
. (25.8)
Initial condition tranforms to:
v(x, 0) = u(x, 0) + (x − 1) + (cos x +
sin 1 − 1
cos 1
sin x) = (x − 1) + (cos x +
sin1 − 1
cos 1
sinx).
The new problem is
⎧
⎪⎨
⎪⎩
vt = vxx,
v(x, 0) = (x − 1) + (cos x + sin 1−1
cos 1 sinx),
v(0, t) = 0, vx(1, t) = 0.
➁ We solve the problem for v using the method of separation of variables.
Let v(x, t) = X(x)T(t), which gives XT − X T = 0.
X
X
=
T
T
= −λ.
Partial Differential Equations Igor Yanovsky, 2005 316
From X + λX = 0, we get Xn(x) = an cos
√
λx + bn sin
√
λx.
Using the first boundary condition, we have
v(0, t) = X(0)T(t) = 0 ⇒ X(0) = 0.
Hence, Xn(0) = an = 0, and Xn(x) = bn sin
√
λx. We also have
vx(1, t) = X (1)T(t) = 0 ⇒ X (1) = 0.
Xn(x) =
√
λbn cos
√
λx,
Xn(1) =
√
λbn cos
√
λ = 0,
cos
√
λ = 0,
√
λ = nπ +
π
2
.
Thus,
Xn(x) = bn sin nπ +
π
2
x, λn = nπ +
π
2
2
.
With these values of λn, we solve T + nπ + π
2
2
T = 0 to find
Tn(t) = cne−(nπ+π
2
)2t
.
v(x, t) =
∞
n=1
Xn(x)Tn(t) =
∞
n=1
˜bn sin nπ +
π
2
x e−(nπ+π
2
)2t
.
We now use equation (25.8) to convert back to function u:
u(x, t) = v(x, t) − (x − 1) − (cos x +
sin 1 − 1
cos 1
sin x)e−t
.
u(x, t) =
∞
n=1
˜bn sin nπ +
π
2
x e−(nπ+π
2
)2t
− (x − 1) − (cos x +
sin 1 − 1
cos 1
sin x)e−t
.
Coefficients ˜bn are obtained using the initial condition:
u(x, 0) =
∞
n=1
˜bn sin nπ +
π
2
x − (x − 1) − (cos x +
sin1 − 1
cos 1
sinx).
➂ Finally, we can check that the differential equation and the boundary conditions are
satisfied:
u(0, t) = 1 − (1 + 0)e−t
= 1 − e−t
.
ux(x, t) =
∞
n=1
˜bn nπ +
π
2
cos nπ +
π
2
x e−(nπ+π
2
)2t
− 1 + (sinx −
sin1 − 1
cos 1
cos x)e−t
,
ux(1, t) = −1 + (sin1 −
sin 1 − 1
cos 1
cos 1)e−t
= −1 + e−t
.
ut =
∞
n=1
−˜bn nπ +
π
2
2
sin nπ +
π
2
x e−(nπ+π
2
)2t
+ (cos x +
sin1 − 1
cos 1
sin x)e−t
= uxx.
Partial Differential Equations Igor Yanovsky, 2005 317
Problem (F’02, #6). The temperature of a rod insulated at the ends with an ex-
ponentially decreasing heat source in it is a solution of the following boundary value
problem:
⎧
⎪⎨
⎪⎩
ut = uxx + e−2t
g(x) for (x, t) ∈ [0, 1] × R+
ux(0, t) = ux(1, t) = 0
u(x, 0) = f(x).
Find the solution to this problem by writing u as a cosine series,
u(x, t) =
∞
n=0
an(t) cos nπx,
and determine limt→∞ u(x, t).
Proof. Let g accept an expansion in eigenfunctions
g(x) = b0 +
∞
n=1
bn cos nπx with bn = 2
1
0
g(x) cosnπx dx.
Plugging in the PDE gives:
a0(t) +
∞
n=1
an(t) cosnπx = −
∞
n=1
n2
π2
an(t) cos nπx + b0e−2t
+ e−2t
∞
n=1
bn cos nπx,
which gives
a0(t) = b0e−2t,
an(t) + n2π2an(t) = bne−2t, n = 1, 2, . . ..
Adding homogeneous and particular solutions of the above ODEs, we obtain the solu-
tions
a0(t) = c0 − b0
2 e−2t
,
an(t) = cne−n2π2t
− bn
2−n2π2 e−2t
, n = 1, 2, . . .,
for some constants cn, n = 0, 1, 2, . . .. Thus,
u(x, t) =
∞
n=0
cne−n2π2t
−
bn
2 − n2π2
e−2t
cos nπx.
Initial condition gives
u(x, 0) =
∞
n=0
cn −
bn
2 − n2π2
cos nπx = f(x),
As, t → ∞, the only mode that survives is n = 0:
u(x, t) → c0 +
b0
2
as t → ∞.
Partial Differential Equations Igor Yanovsky, 2005 318
Problem (F’93, #4). a) Assume f, g ∈ C∞
. Give the compatibility conditions which
f and g must satisfy if the following problem is to possess a solution.
u = f(x) x ∈ Ω
∂u
∂n
(s) = g(s) s ∈ ∂Ω.
Show that your condition is necessary for a solution to exist.
b) Give an explicit solution to
⎧
⎪⎨
⎪⎩
ut = uxx + cos x x ∈ [0, 2π]
ux(0, t) = ux(2π, t) = 0 t > 0
u(x, 0) = cos x + cos 2x x ∈ [0, 2π].
c) Does there exist a steady state solution to the problem in (b) if
ux(0) = 1 ux(2π) = 0 ?
Explain your answer.
Proof. a) Integrating the equation and using Green’s identity gives:
Ω
f(x) dx =
Ω
u dx =
∂Ω
∂u
∂n
ds =
∂Ω
g(s) ds.
b) With
• v(x, t) = u(x, t) − cos x
the problem above transforms to
⎧
⎪⎨
⎪⎩
vt = vxx
vx(0, t) = vx(2π, t) = 0
v(x, 0) = cos 2x.
We solve this problem for v using the separation of variables. Let v(x, t) = X(x)T(t),
which gives XT = X T.
X
X
=
T
T
= −λ.
From X + λX = 0, we get Xn(x) = an cos
√
λx + bn sin
√
λx.
Xn(x) = −
√
λnan sin
√
λx +
√
λnbn cos
√
λx.
Using boundary conditions, we have
vx(0, t) = X (0)T(t) = 0
vx(2π, t) = X (2π)T(t) = 0
⇒ X (0) = X (2π) = 0.
Hence, Xn(0) =
√
λnbn = 0, and Xn(x) = an cos
√
λnx.
Xn(2π) = −
√
λnan sin
√
λn2π = 0 ⇒
√
λn = n
2 ⇒ λn = (n
2 )2. Thus,
Xn(x) = an cos
nx
2
, λn =
n
2
2
Partial Differential Equations Igor Yanovsky, 2005 319
With these values of λn, we solve T + n
2
2
T = 0 to find
Tn(t) = cne−( n
2
)2t
.
v(x, t) =
∞
n=0
Xn(x)Tn(t) =
∞
n=0
˜an e−( n
2
)2t
cos
nx
2
.
Initial condition gives
v(x, 0) =
∞
n=0
˜an cos
nx
2
= cos 2x.
Thus, ˜a4 = 1, ˜an = 0, n = 4. Hence,
v(x, t) = e−4t
cos 2x.
u(x, t) = v(x, t) + cos x = e−4t
cos 2x + cos x.
c) Does there exist a steady state solution to the problem in (b) if
ux(0) = 1 ux(2π) = 0 ?
Explain your answer.
c) Set ut = 0. We have
uxx + cos x = 0 x ∈ [0, 2π]
ux(0) = 1, ux(2π) = 0.
uxx = − cos x,
ux = − sin x + C,
u(x) = cos x + Cx + D.
Boundary conditions give:
1 = ux(0) = C,
0 = ux(2π) = C ⇒ contradiction
There exists no steady state solution.
We may use the result we obtained in part (a) with uxx = cos x = f(x). We
need
Ω
f(x) dx =
∂Ω
∂u
∂n
ds,
2π
0
cos x dx
=0
= ux(2π) − ux(0) = −1
given
.
Partial Differential Equations Igor Yanovsky, 2005 320
Problem (F’96, #7). Solve the parabolic problem
u
v t
=
1 1
2
0 2
u
v xx
, 0 ≤ x ≤ π, t > 0
u(x, 0) = sinx, u(0, t) = u(π, t) = 0,
v(x, 0) = sin x, v(0, t) = v(π, t) = 0.
Prove the energy estimate (for general initial data)
π
x=0
[u2
(x, t) + v2
(x, t)] dx ≤ c
π
x=0
[u2
(x, 0) + v2
(x, 0)] dx
for come constant c.
Proof. We can solve the second equation for v and then use the value of v to solve the
first equation for u. 73
➀ We have
⎧
⎪⎨
⎪⎩
vt = 2vxx, 0 ≤ x ≤ π, t > 0
v(x, 0) = sinx,
v(0, t) = v(π, t) = 0.
Assume v(x, t) = X(x)T(t), then substitution in the PDE gives XT = 2X T.
T
T
= 2
X
X
= −λ.
From X + λ
2 X = 0, we get Xn(x) = an cos λ
2 x + bn sin λ
2 x.
Boundary conditions give
v(0, t) = X(0)T(t) = 0
v(π, t) = X(π)T(t) = 0
⇒ X(0) = X(π) = 0.
Thus, Xn(0) = an = 0, and Xn(x) = bn sin λ
2 x.
Xn(π) = bn sin λ
2 π = 0. Hence λ
2 = n, or λ = 2n2
.
λ = 2n2
, Xn(x) = bn sinnx.
With these values of λn, we solve T + 2n2T = 0 to get Tn(t) = cne−2n2t.
Thus, the solution may be written in the form
v(x, t) =
∞
n=1
˜ane−2n2t
sin nx.
From initial condition, we get
v(x, 0) =
∞
n=1
˜an sinnx = sinx.
Thus, ˜a1 = 1, ˜an = 0, n = 2, 3, . . ..
v(x, t) = e−2t
sin x.
73
Note that if the matrix was fully inseparable, we would have to find eigenvalues and eigenvectors,
just as we did for the hyperbolic systems.
Partial Differential Equations Igor Yanovsky, 2005 321
➁ We have
⎧
⎪⎨
⎪⎩
ut = uxx − 1
2 e−2t
sinx, 0 ≤ x ≤ π, t > 0
u(x, 0) = sin x,
u(0, t) = u(π, t) = 0.
Let u(x, t) = ∞
n=1 un(t) sinnx. Plugging this into the equation, we get
∞
n=1
un(t) sinnx +
∞
n=1
n2
un(t) sinnx = −
1
2
e−2t
sin x.
For n = 1:
u1(t) + u1(t) = −
1
2
e−2t
.
Combining homogeneous and particular solution of the above equation, we obtain:
u1(t) =
1
2
e−2t
+ c1e−t
.
For n = 2, 3, . . .:
un(t) + n2
un(t) = 0,
un(t) = cne−n2t
.
Thus,
u(x, t) =
1
2
e−2t
+ c1e−t
sinx +
∞
n=2
cne−n2t
sinnx =
1
2
e−2t
sinx +
∞
n=1
cne−n2t
sinnx.
From initial condition, we get
u(x, 0) =
1
2
sin x +
∞
n=1
cn sinnx = sin x.
Thus, c1 = 1
2 , cn = 0, n = 2, 3, . . ..
u(x, t) =
1
2
sinx (e−2t
+ e−t
).
To prove the energy estimate (for general initial data)
π
x=0
[u2
(x, t) + v2
(x, t)] dx ≤ c
π
x=0
[u2
(x, 0) + v2
(x, 0)] dx
for come constant c, we assume that
u(x, 0) =
∞
n=1
an sinnx, v(x, 0) =
∞
n=1
bn sinnx.
Partial Differential Equations Igor Yanovsky, 2005 322
The general solutions are obtained by the same method as above
u(x, t) =
1
2
e−2t
sinx +
∞
n=1
cne−n2 t
sinnx,
v(x, t) =
∞
n=1
bne−2n2t
sinnx.
π
x=0
[u2
(x, t) + v2
(x, t)] dx =
π
x=0
1
2
e−2t
sinx +
∞
n=1
cne−n2t
sinnx
2
+
∞
n=1
bne−2n2t
sinnx
2
dx
≤
∞
n=1
(b2
n + a2
n)
π
x=0
sin2
nx dx ≤
π
x=0
[u2
(x, 0) + v2
(x, 0)] dx.
Partial Differential Equations Igor Yanovsky, 2005 323
26 Problems: Eigenvalues of the Laplacian - Laplace
The 2D LAPLACE Equation (eigenvalues/eigenfuctions of the Laplacian).
Consider
⎧
⎪⎨
⎪⎩
uxx + uyy + λu = 0 in Ω
u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b,
u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a.
(26.1)
Proof. We can solve this problem by separation of variables.
Let u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY + λXY = 0.
X
X
+
Y
Y
+ λ = 0.
Letting λ = μ2 + ν2 and using boundary conditions, we find the equations for X and
Y :
X + μ2
X = 0 Y + ν2
Y = 0
X(0) = X(a) = 0 Y (0) = Y (b) = 0.
The solutions of these one-dimensional eigenvalue problems are
μm =
mπ
a
νn =
nπ
b
Xm(x) = sin
mπx
a
Yn(y) = sin
nπy
b
,
where m, n = 1, 2, . . .. Thus we obtain solutions of (26.1) of the form
λmn = π2 m2
a2
+
n2
b2
umn(x, y) = sin
mπx
a
sin
nπy
b
,
where m, n = 1, 2, . . ..
Observe that the eigenvalues {λmn}∞
m,n=1 are positive. The smallest eigenvalue λ11
has only one eigenfunction u11(x, y) = sin(πx/a) sin(πy/b); notice that u11 is positive
in Ω. Other eigenvalues λ may correspond to more than one choice of m and n; for
example, in the case a = b we have λnm = λnm. For this λ, there are two linearly
independent eigenfunctions. However, for a particular value of λ there are at most
finitely many linearly independent eigenfunctions. Moreover,
b
0
a
0
umn(x, y) um n (x, y) dx dy =
b
0
a
0
sin
mπx
a
sin
nπy
b
sin
m πx
a
sin
n πy
b
dx dy
=
a
2
b
0 sin nπy
b sin n πy
b dy
0
=
ab
4 if m = m and n = n
0 if m = m or n = n .
In particular, the {umn} are pairwise orthogonal. We could normalize each umn by a
scalar multiple (i.e. multiply by 4/ab) so that ab/4 above becomes 1.
Let us change the notation somewhat so that each eigenvalue λn corresponds to a
particular eigenfunction φn(x). If we choose an orthonormal basis of eigenfunctions in
each eigenspace, we may arrange that {φn}∞
n=1 is pairwise orthonormal:
Ω
φn(x)φm(x) dx =
1 if m = n
0 if m = n.
Partial Differential Equations Igor Yanovsky, 2005 324
In this notation, the eigenfunction expansion of f(x) defined on Ω becomes
f(x) ∼
∞
n=1
anφn(x), where an =
Ω
f(x)φn(x) dx.
Partial Differential Equations Igor Yanovsky, 2005 325
Problem (S’96, #4). Let D denote the rectangular
D = {(x, y) ∈ R2
: 0 < x < a, 0 < y < b}.
Find the eigenvalues of the following Dirichlet problem:
( + λ)u = 0 in D
u = 0 on ∂D.
Proof. The problem may be rewritten as
⎧
⎪⎨
⎪⎩
uxx + uyy + λu = 0 in Ω
u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b,
u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a.
We may assume that the eigenvalues λ are positive, λ = μ2
+ ν2
. Then,
λmn = π2 m2
a2
+
n2
b2
umn(x, y) = sin
mπx
a
sin
nπy
b
, m, n = 1, 2, . . ..
Problem (W’04, #1). Consider the differential equation:
∂2u(x, y)
∂x2
+
∂2u(x, y)
∂y2
+ λu(x, y) = 0 (26.2)
in the strip {(x, y), 0 < y < π, −∞ < x < +∞} with boundary conditions
u(x, 0) = 0, u(x, π) = 0. (26.3)
Find all bounded solutions of the boundary value problem (26.4), (26.5) when
a) λ = 0, b) λ > 0, c) λ < 0.
Proof. a) λ = 0. We have
uxx + uyy = 0.
Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives
X Y + XY = 0.
Boundary conditions give
u(x, 0) = X(x)Y (0) = 0
u(x, π) = X(x)Y (π) = 0
⇒ Y (0) = Y (π) = 0.
Method I: We have
X
X
= −
Y
Y
= −c, c > 0.
From X + cX = 0, we have Xn(x) = an cos
√
cx + bn sin
√
cx.
From Y − cY = 0, we have Yn(y) = cne−
√
cy
+ dne
√
cy
.
Y (0) = cn + dn = 0 ⇒ cn = −dn.
Partial Differential Equations Igor Yanovsky, 2005 326
Y (π) = cne−
√
cπ
− cne
√
cπ
= 0 ⇒ cn = 0 ⇒ Yn(y) = 0.
⇒ u(x, y) = X(x)Y (y) = 0.
Method II: We have
X
X
= −
Y
Y
= c, c > 0.
From X − cX = 0, we have Xn(x) = ane−
√
cx + bne
√
cx.
Since we look for bounded solutions for −∞ < x < ∞, an = bn = 0 ⇒ Xn(x) = 0.
From Y + cY = 0, we have Yn(y) = cn cos
√
cy + dn sin
√
cy.
Y (0) = cn = 0,
Y (π) = dn sin
√
cπ = 0 ⇒
√
c = n ⇒ c = n2.
⇒ Yn(y) = dn sin nx = 0.
⇒ u(x, y) = X(x)Y (y) = 0.
b) λ > 0. We have
X
X
+
Y
Y
+ λ = 0.
Letting λ = μ2 + ν2, and using boundary conditions for Y , we find the equations:
X + μ2
X = 0 Y + ν2
Y = 0
Y (0) = Y (π) = 0.
The solutions of these one-dimensional eigenvalue problems are
Xm(x) = am cos μmx + bm sinμmx.
νn = n, Yn(y) = dn sin ny, where m, n = 1, 2, . . ..
u(x, y) =
∞
m,n=1
umn(x, y) =
∞
m,n=1
(am cos μmx + bm sinμmx) sinny.
c) λ < 0. We have
uxx + uyy + λu = 0,
u(x, 0) = 0, u(x, π) = 0.
u ≡ 0 is the solution to this equation. We will show that this solution is unique.
Let u1 and u2 be two solutions, and consider w = u1 − u2. Then,
w + λw = 0,
w(x, 0) = 0, w(x, π) = 0.
Multiply the equation by w and integrate:
w w + λw2
= 0,
Ω
w w dx + λ
Ω
w2
dx = 0,
∂Ω
w
∂w
∂n
ds
=0
−
Ω
|∇w|2
dx + λ
Ω
w2
dx = 0,
Ω
|∇w|2
dx
≥0
= λ
Ω
w2
dx
≤0
.
Partial Differential Equations Igor Yanovsky, 2005 327
Thus, w ≡ 0 and the solution u(x, y) ≡ 0 is unique.
Partial Differential Equations Igor Yanovsky, 2005 328
Problem (F’95, #5). Find all bounded solutions
for the following boundary value problem in the strip
0 < x < a, −∞ < y < ∞,
( + k2
)u = 0 (k = Const > 0),
u(0, y) = 0, ux(a, y) = 0.
In particular, show that when ak ≤ π,
the only bounded solution to this problem is u ≡ 0.
Proof. Let u(x, y) = X(x)Y (y), then we have X Y + XY + k2XY = 0.
X
X
+
Y
Y
+ k2
= 0.
Letting k2
= μ2
+ ν2
and using boundary conditions, we find:
X + μ2
X = 0, Y + ν2
Y = 0.
X(0) = X (a) = 0.
The solutions of these one-dimensional eigenvalue problems are
μm =
(m − 1
2 )π
a
,
Xm(x) = sin
(m − 1
2 )πx
a
Yn(y) = cn cos νny + dn sinνny,
where m, n = 1, 2, . . .. Thus we obtain solutions of the form
k2
mn =
(m − 1
2 )π
a
2
+ν2
n, umn(x, y) = sin
(m − 1
2)πx
a
cn cos νny+dn sin νny ,
where m, n = 1, 2, . . ..
u(x, y) =
∞
m,n=1
umn(x, y) =
∞
m,n=1
sin
(m − 1
2 )πx
a
cn cos νny + dn sinνny .
• We can take an alternate approach and prove the second part of the question. We
have
X Y + XY + k2
XY = 0,
−
Y
Y
=
X
X
+ k2
= c2
.
We obtain Yn(y) = cn cos cy + dn sin cy. The second equation gives
X + k2
X = c2
X,
X + (k2
− c2
)X = 0,
Xm(x) = ame
√
c2−k2x
+ bme
√
c2−k2x
.
Thus, Xm(x) is bounded only if k2 − c2 > 0, (if k2 − c2 = 0, X = 0, and Xm(x) =
amx + bm, BC’s give Xm(x) = πx, unbounded), in which case
Xm(x) = am cos k2 − c2 x + bm sin k2 − c2 x.
Partial Differential Equations Igor Yanovsky, 2005 329
Boundary conditions give Xm(0) = am = 0.
Xm(x) = bm k2 − c2 cos k2 − c2 x,
Xm(a) = bm k2 − c2 cos k2 − c2 a = 0,
k2 − c2 a = mπ −
π
2
, m = 1, 2, . . .,
k2
− c2
=
π
a
m −
1
2
2
,
k2
=
π
a
2
m −
1
2
2
+ c2
,
a2
k2
> π2
m −
1
2
2
,
ak > π m −
1
2
, m = 1, 2, . . ..
Thus, bounded solutions exist only when ak > π
2 .
Problem (S’90, #2). Show that the boundary value problem
∂2u(x, y)
∂x2
+
∂2u(x, y)
∂y2
+ k2
u(x, y) = 0, (26.4)
where −∞ < x < +∞, 0 < y < π, k > 0 is a constant,
u(x, 0) = 0, u(x, π) = 0 (26.5)
has a bounded solution if and only if k ≥ 1.
Proof. We have
uxx + uyy + k2
u = 0,
X Y + XY + k2
XY = 0,
−
X
X
=
Y
Y
+ k2
= c2
.
We obtain Xm(x) = am cos cx + bm sin cx. The second equation gives
Y + k2
Y = c2
Y,
Y + (k2
− c2
)Y = 0,
Yn(y) = cne
√
c2−k2y
+ dne
√
c2−k2y
.
Thus, Yn(y) is bounded only if k2
−c2
> 0, (if k2
−c2
= 0, Y = 0, and Yn(y) = cny+dn,
BC’s give Y ≡ 0), in which case
Yn(y) = cn cos k2 − c2 y + dn sin k2 − c2 y.
Boundary conditions give Yn(0) = cn = 0.
Yn(π) = dn sin
√
k2 − c2 π = 0 ⇒
√
k2 − c2 = n ⇒ k2 − c2 = n2 ⇒
k2
= n2
+ c2
, n = 1, 2, . . .. Hence, k > n, n = 1, 2, . . ..
Partial Differential Equations Igor Yanovsky, 2005 330
Thus, bounded solutions exist if k ≥ 1.
Note: If k = 1, then c = 0, which gives trivial solutions for Yn(y).
u(x, y) =
∞
m,n=1
Xm(x)Yn(y) =
∞
m,n=1
sin ny Xm(x).
Partial Differential Equations Igor Yanovsky, 2005 331
McOwen, 4.4 #7; 266B Ralston Hw. Show that the boundary value problem
−∇ · a(x)∇u + b(x)u = λu in Ω
u = 0 on ∂Ω
has only trivial solution with λ ≤ 0, when b(x) ≥ 0 and a(x) > 0 in Ω.
Proof. Multiplying the equation by u and integrating over Ω, we get
Ω
−u∇ · a∇u dx +
Ω
bu2
dx = λ
Ω
u2
dx.
Since ∇ · (ua∇u) = u∇ · a∇u + a|∇u|2
, we have
Ω
−∇ · (ua∇u) dx +
Ω
a|∇u|2
dx +
Ω
bu2
dx = λ
Ω
u2
dx. (26.6)
Using divergence theorem, we obtain
∂Ω
− u
=0
a
∂u
∂n
ds +
Ω
a|∇u|2
dx +
Ω
bu2
dx = λ
Ω
u2
dx,
Ω
a
>0
|∇u|2
dx +
Ω
b
≥0
u2
dx = λ
≤0
Ω
u2
dx,
Thus, ∇u = 0 in Ω, and u is constant. Since u = 0 on ∂Ω, u ≡ 0 on Ω.
Similar Problem I: Note that this argument also works with Neumann B.C.:
−∇ · a(x)∇u + b(x)u = λu in Ω
∂u/∂n = 0 on ∂Ω
Using divergence theorem, (26.6) becomes
∂Ω
−ua
∂u
∂n
=0
ds +
Ω
a|∇u|2
dx +
Ω
bu2
dx = λ
Ω
u2
dx,
Ω
a
>0
|∇u|2
dx +
Ω
b
≥0
u2
dx = λ
≤0
Ω
u2
dx.
Thus, ∇u = 0, and u = const on Ω. Hence, we now have
Ω
b
≥0
u2
dx = λ
≤0
Ω
u2
dx,
which implies λ = 0. This gives the useful information that for the eigenvalue problem74
−∇ · a(x)∇u + b(x)u = λu
∂u/∂n = 0,
λ = 0 is an eigenvalue, its eigenspace is the set of constants, and all other λ’s are
positive.
74
In Ralston’s Hw#7 solutions, there is no ‘-’ sign in front of ∇ · a(x)∇u below, which is probably a
typo.
Partial Differential Equations Igor Yanovsky, 2005 332
Similar Problem II: If λ ≤ 0, we show that the only solution to the problem below
is the trivial solution.
u + λu = 0 in Ω
u = 0 on ∂Ω
Ω
u u dx + λ
Ω
u2
dx = 0,
∂Ω
u
=0
∂u
∂n
ds −
Ω
|∇u|2
dx + λ
≤0
Ω
u2
dx = 0.
Thus, ∇u = 0 in Ω, and u is constant. Since u = 0 on ∂Ω, u ≡ 0 on Ω.
Partial Differential Equations Igor Yanovsky, 2005 333
27 Problems: Eigenvalues of the Laplacian - Poisson
The ND POISSON Equation (eigenvalues/eigenfunctions of the Laplacian).
Suppose we want to find the eigenfunction expansion of the solution of
u = f in Ω
u = 0 on ∂Ω,
when f has the expansion in the orthonormal Dirichlet eigenfunctions φn:
f(x) ∼
∞
n=1
anφn(x), where an =
Ω
f(x)φn(x) dx.
Proof. Writing u = cnφn and inserting into −λu = f, we get
∞
n=1
−λncnφn =
∞
n=1
anφn(x).
Thus, cn = −an/λn, and
u(x) = −
∞
n=1
anφn(x)
λn
.
The 1D POISSON Equation (eigenvalues/eigenfunctions of the Laplacian).
For the boundary value problem
$$u'' = f(x), \qquad u(0) = 0, \ u(L) = 0,$$
the related eigenvalue problem is
$$\phi'' = -\lambda\phi, \qquad \phi(0) = 0, \ \phi(L) = 0.$$
The eigenvalues are $\lambda_n = (n\pi/L)^2$, and the corresponding eigenfunctions are $\sin(n\pi x/L)$, $n = 1,2,\ldots$.
Writing $u = \sum c_n\phi_n = \sum c_n\sin(n\pi x/L)$ and inserting into the equation, we get
$$\sum_{n=1}^{\infty} -c_n\Big(\frac{n\pi}{L}\Big)^2\sin\frac{n\pi x}{L} = f(x),$$
$$\int_0^L\sum_{n=1}^{\infty} -c_n\Big(\frac{n\pi}{L}\Big)^2\sin\frac{n\pi x}{L}\,\sin\frac{m\pi x}{L}\,dx = \int_0^L f(x)\sin\frac{m\pi x}{L}\,dx,$$
$$-c_n\Big(\frac{n\pi}{L}\Big)^2\frac{L}{2} = \int_0^L f(x)\sin\frac{n\pi x}{L}\,dx, \qquad c_n = -\frac{\frac{2}{L}\int_0^L f(x)\sin(n\pi x/L)\,dx}{(n\pi/L)^2}.$$
$$u(x) = \sum c_n\sin(n\pi x/L) = \sum_{n=1}^{\infty} -\frac{\frac{2}{L}\int_0^L f(\xi)\sin(n\pi x/L)\sin(n\pi\xi/L)\,d\xi}{(n\pi/L)^2},$$
$$u = \int_0^L f(\xi)\,\underbrace{\Big(-\frac{2}{L}\sum_{n=1}^{\infty}\frac{\sin(n\pi x/L)\sin(n\pi\xi/L)}{(n\pi/L)^2}\Big)}_{=\,G(x,\xi)}\,d\xi.$$
See a similar, but more complicated, problem in Sturm-Liouville Problems (S’92, #2(c)).
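As a quick sanity check of the sine-series formula above, the following short Python sketch (my own addition, not from the original solutions; the test case $f(x) = \sin(\pi x/L)$ is an arbitrary choice) compares the truncated series with the exact solution $u(x) = -(L/\pi)^2\sin(\pi x/L)$.

```python
import numpy as np

L, N = 1.0, 50                          # interval length and number of modes
x = np.linspace(0.0, L, 401)

def f(x):                               # right-hand side of u'' = f
    return np.sin(np.pi * x / L)

# c_n = -(2/L) * int_0^L f(s) sin(n pi s / L) ds / (n pi / L)^2
s = np.linspace(0.0, L, 2001)
u_series = np.zeros_like(x)
for n in range(1, N + 1):
    integral = np.trapz(f(s) * np.sin(n * np.pi * s / L), s)
    c_n = -(2.0 / L) * integral / (n * np.pi / L) ** 2
    u_series += c_n * np.sin(n * np.pi * x / L)

u_exact = -(L / np.pi) ** 2 * np.sin(np.pi * x / L)   # solves u'' = sin(pi x/L), u(0)=u(L)=0
print(np.max(np.abs(u_series - u_exact)))             # should be near machine precision
```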
Example: Eigenfunction Expansion of the GREEN's Function.
Suppose we fix $x$ and attempt to expand the Green's function $G(x,y)$ in the orthonormal eigenfunctions $\phi_n(y)$:
$$G(x,y) \sim \sum_{n=1}^{\infty} a_n(x)\phi_n(y), \qquad a_n(x) = \int_\Omega G(x,z)\phi_n(z)\,dz.$$
Proof. We can rewrite $\Delta u + \lambda u = 0$ in $\Omega$, $u = 0$ on $\partial\Omega$, as an integral equation$^{75}$
$$u(x) + \lambda\int_\Omega G(x,y)u(y)\,dy = 0.$$
Suppose $u(x) = \sum c_n\phi_n(x)$. Plugging this in, we get
$$\sum_{m=1}^{\infty} c_m\phi_m(x) + \lambda\int_\Omega\Big(\sum_{n=1}^{\infty} a_n(x)\phi_n(y)\Big)\Big(\sum_{m=1}^{\infty} c_m\phi_m(y)\Big)dy = 0,$$
$$\sum_{m=1}^{\infty} c_m\phi_m(x) + \lambda\sum_{n=1}^{\infty} a_n(x)\sum_{m=1}^{\infty} c_m\int_\Omega\phi_n(y)\phi_m(y)\,dy = 0,$$
$$\sum_{n=1}^{\infty} c_n\phi_n(x) + \sum_{n=1}^{\infty}\lambda a_n(x)c_n = 0, \qquad \sum_{n=1}^{\infty} c_n\big(\phi_n(x) + \lambda a_n(x)\big) = 0,$$
$$a_n(x) = -\frac{\phi_n(x)}{\lambda_n}.$$
Thus,
$$G(x,y) \sim -\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(y)}{\lambda_n}.$$
75 See the section: ODE - Integral Equations.
The 2D POISSON Equation (eigenvalues/eigenfunctions of the Laplacian).
Solve the boundary value problem
$$\begin{cases} u_{xx} + u_{yy} = f(x,y) & \text{for } 0 < x < a,\ 0 < y < b,\\ u(0,y) = 0 = u(a,y) & \text{for } 0 \le y \le b,\\ u(x,0) = 0 = u(x,b) & \text{for } 0 \le x \le a, \end{cases} \qquad (27.1)$$
where $f(x,y) \in C^2$, $f(x,y) = 0$ if $x = 0$, $x = a$, $y = 0$, or $y = b$, and
$$f(x,y) = \frac{2}{\sqrt{ab}}\sum_{m,n=1}^{\infty} c_{mn}\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}.$$
Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian:
$$\begin{cases} u_{xx} + u_{yy} + \lambda u = 0 & \text{in } \Omega,\\ u(0,y) = 0 = u(a,y) & \text{for } 0 \le y \le b,\\ u(x,0) = 0 = u(x,b) & \text{for } 0 \le x \le a. \end{cases}$$
Let $u(x,y) = X(x)Y(y)$; substitution in the PDE gives $X''Y + XY'' + \lambda XY = 0$, i.e.
$$\frac{X''}{X} + \frac{Y''}{Y} + \lambda = 0.$$
Letting $\lambda = \mu^2 + \nu^2$ and using the boundary conditions, we find the equations for $X$ and $Y$:
$$X'' + \mu^2 X = 0, \quad X(0) = X(a) = 0; \qquad Y'' + \nu^2 Y = 0, \quad Y(0) = Y(b) = 0.$$
The solutions of these one-dimensional eigenvalue problems are
$$\mu_m = \frac{m\pi}{a}, \quad X_m(x) = \sin\frac{m\pi x}{a}; \qquad \nu_n = \frac{n\pi}{b}, \quad Y_n(y) = \sin\frac{n\pi y}{b},$$
where $m,n = 1,2,\ldots$. Thus we obtain eigenvalues and normalized eigenfunctions of the Laplacian:
$$\lambda_{mn} = \pi^2\Big(\frac{m^2}{a^2} + \frac{n^2}{b^2}\Big), \qquad \phi_{mn}(x,y) = \frac{2}{\sqrt{ab}}\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b},$$
where $m,n = 1,2,\ldots$. Note that
$$f(x,y) = \sum_{m,n=1}^{\infty} c_{mn}\phi_{mn}.$$
➁ Second, writing $u(x,y) = \sum\tilde c_{mn}\phi_{mn}$ and inserting into $\Delta u = f$, we get
$$-\sum_{m,n=1}^{\infty}\lambda_{mn}\tilde c_{mn}\phi_{mn}(x,y) = \sum_{m,n=1}^{\infty} c_{mn}\phi_{mn}(x,y).$$
Thus $\tilde c_{mn} = -c_{mn}/\lambda_{mn}$, and
$$u(x,y) = -\sum_{m,n=1}^{\infty}\frac{c_{mn}}{\lambda_{mn}}\phi_{mn}(x,y),$$
with $\lambda_{mn}$, $\phi_{mn}$ given above, and $c_{mn}$ given by
$$\int_0^b\!\!\int_0^a f(x,y)\phi_{mn}\,dx\,dy = \int_0^b\!\!\int_0^a\sum_{m',n'=1}^{\infty} c_{m'n'}\phi_{m'n'}\phi_{mn}\,dx\,dy = c_{mn}.$$
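A minimal numerical sketch of this double sine expansion (my own illustration, not part of the handbook): for the manufactured right-hand side $f(x,y) = \sin(\pi x/a)\sin(\pi y/b)$, only the $(1,1)$ mode is present, so the series solution should match $u = -f/\lambda_{11}$.

```python
import numpy as np

a, b, M, N = 1.0, 2.0, 20, 20
x = np.linspace(0, a, 101); y = np.linspace(0, b, 101)
X, Y = np.meshgrid(x, y, indexing="ij")

def f(X, Y):                                   # right-hand side of u_xx + u_yy = f
    return np.sin(np.pi * X / a) * np.sin(np.pi * Y / b)

phi = lambda m, n: (2 / np.sqrt(a * b)) * np.sin(m * np.pi * X / a) * np.sin(n * np.pi * Y / b)
u = np.zeros_like(X)
for m in range(1, M + 1):
    for n in range(1, N + 1):
        lam = np.pi**2 * (m**2 / a**2 + n**2 / b**2)
        c_mn = np.trapz(np.trapz(f(X, Y) * phi(m, n), y, axis=1), x)   # double integral
        u += -(c_mn / lam) * phi(m, n)

u_exact = -f(X, Y) / (np.pi**2 * (1 / a**2 + 1 / b**2))
print(np.max(np.abs(u - u_exact)))             # small truncation/quadrature error
```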
28 Problems: Eigenvalues of the Laplacian - Wave

In the section on the wave equation, we considered an initial boundary value problem for the one-dimensional wave equation on an interval, and we found that the solution could be obtained using Fourier series. If we replace the Fourier series by an expansion in eigenfunctions, we can consider an initial/boundary value problem for the n-dimensional wave equation.
The ND WAVE Equation (eigenvalues/eigenfunctions of the Laplacian). Consider
$$\begin{cases} u_{tt} = \Delta u & \text{for } x \in \Omega,\ t > 0,\\ u(x,0) = g(x),\ u_t(x,0) = h(x) & \text{for } x \in \Omega,\\ u(x,t) = 0 & \text{for } x \in \partial\Omega,\ t > 0. \end{cases}$$
Proof. For $g, h \in C^2(\Omega)$ with $g = h = 0$ on $\partial\Omega$, we have eigenfunction expansions
$$g(x) = \sum_{n=1}^{\infty} a_n\phi_n(x) \quad\text{and}\quad h(x) = \sum_{n=1}^{\infty} b_n\phi_n(x).$$
Assume the solution $u(x,t)$ may be expanded in the eigenfunctions with coefficients depending on $t$: $u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x)$. This implies
$$\sum_{n=1}^{\infty} u_n''(t)\phi_n(x) = -\sum_{n=1}^{\infty}\lambda_n u_n(t)\phi_n(x), \qquad u_n''(t) + \lambda_n u_n(t) = 0 \ \text{ for each } n.$$
Since $\lambda_n > 0$, this ordinary differential equation has general solution $u_n(t) = A_n\cos\sqrt{\lambda_n}\,t + B_n\sin\sqrt{\lambda_n}\,t$. Thus,
$$u(x,t) = \sum_{n=1}^{\infty}\big(A_n\cos\sqrt{\lambda_n}\,t + B_n\sin\sqrt{\lambda_n}\,t\big)\phi_n(x),$$
$$u_t(x,t) = \sum_{n=1}^{\infty}\big(-\sqrt{\lambda_n}A_n\sin\sqrt{\lambda_n}\,t + \sqrt{\lambda_n}B_n\cos\sqrt{\lambda_n}\,t\big)\phi_n(x),$$
$$u(x,0) = \sum_{n=1}^{\infty} A_n\phi_n(x) = g(x), \qquad u_t(x,0) = \sum_{n=1}^{\infty}\sqrt{\lambda_n}B_n\phi_n(x) = h(x).$$
Comparing with the expansions of $g$ and $h$, we obtain
$$A_n = a_n, \qquad B_n = \frac{b_n}{\sqrt{\lambda_n}}.$$
Thus, the solution is given by
$$u(x,t) = \sum_{n=1}^{\infty}\Big(a_n\cos\sqrt{\lambda_n}\,t + \frac{b_n}{\sqrt{\lambda_n}}\sin\sqrt{\lambda_n}\,t\Big)\phi_n(x),$$
with
$$a_n = \int_\Omega g(x)\phi_n(x)\,dx, \qquad b_n = \int_\Omega h(x)\phi_n(x)\,dx.$$
The 2D WAVE Equation (eigenvalues/eigenfunctions of the Laplacian).
Let $\Omega = (0,a)\times(0,b)$ and consider
$$\begin{cases} u_{tt} = u_{xx} + u_{yy} & \text{for } x \in \Omega,\ t > 0,\\ u(x,0) = g(x),\ u_t(x,0) = h(x) & \text{for } x \in \Omega,\\ u(x,t) = 0 & \text{for } x \in \partial\Omega,\ t > 0. \end{cases} \qquad (28.1)$$
Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian,
$$\begin{cases} u_{xx} + u_{yy} + \lambda u = 0 & \text{in } \Omega,\\ u(0,y) = 0 = u(a,y) & \text{for } 0 \le y \le b,\\ u(x,0) = 0 = u(x,b) & \text{for } 0 \le x \le a. \end{cases}$$
Separation of variables, exactly as in the 2D Poisson problem above, gives the eigenvalues and normalized eigenfunctions of the Laplacian:
$$\lambda_{mn} = \pi^2\Big(\frac{m^2}{a^2} + \frac{n^2}{b^2}\Big), \qquad \phi_{mn}(x,y) = \frac{2}{\sqrt{ab}}\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}, \qquad m,n = 1,2,\ldots$$
➁ Second, we solve the Wave Equation (28.1) using the “space” eigenfunctions.
For $g, h \in C^2(\Omega)$ with $g = h = 0$ on $\partial\Omega$, we have eigenfunction expansions$^{76}$
$$g(x) = \sum_{n=1}^{\infty} a_n\phi_n(x) \quad\text{and}\quad h(x) = \sum_{n=1}^{\infty} b_n\phi_n(x).$$
Assume $u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x)$. This implies
$$u_n''(t) + \lambda_n u_n(t) = 0 \ \text{ for each } n.$$
Since $\lambda_n > 0$, this ODE has general solution $u_n(t) = A_n\cos\sqrt{\lambda_n}\,t + B_n\sin\sqrt{\lambda_n}\,t$. Exactly as in the ND case, comparing $u(x,0)$ and $u_t(x,0)$ with the expansions of $g$ and $h$ gives $A_n = a_n$ and $B_n = b_n/\sqrt{\lambda_n}$. Thus, the solution is given by
$$u(x,t) = \sum_{m,n=1}^{\infty}\Big(a_{mn}\cos\sqrt{\lambda_{mn}}\,t + \frac{b_{mn}}{\sqrt{\lambda_{mn}}}\sin\sqrt{\lambda_{mn}}\,t\Big)\phi_{mn}(x),$$
with $\lambda_{mn}$, $\phi_{mn}(x)$ given above, and
$$a_{mn} = \int_\Omega g(x)\phi_{mn}(x)\,dx, \qquad b_{mn} = \int_\Omega h(x)\phi_{mn}(x)\,dx.$$
76 In 2D, $\phi_n$ is really $\phi_{mn}$, and $x$ is $(x,y)$.
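A short consistency check of the coefficient formulas (my own sketch; the initial data $g$, $h$ are arbitrary choices): evaluating the truncated series at $t = 0$ should recover $g$ up to truncation error.

```python
import numpy as np

a, b, K = 1.0, 1.0, 15
x = np.linspace(0, a, 81); y = np.linspace(0, b, 81)
X, Y = np.meshgrid(x, y, indexing="ij")
g = X * (a - X) * Y * (b - Y)              # sample initial displacement, zero on the boundary
h = np.zeros_like(g)                       # start from rest

phi = lambda m, n: (2 / np.sqrt(a * b)) * np.sin(m * np.pi * X / a) * np.sin(n * np.pi * Y / b)
coef = lambda w, m, n: np.trapz(np.trapz(w * phi(m, n), y, axis=1), x)

def u(t):
    out = np.zeros_like(X)
    for m in range(1, K + 1):
        for n in range(1, K + 1):
            lam = np.pi**2 * (m**2 / a**2 + n**2 / b**2)
            amn, bmn = coef(g, m, n), coef(h, m, n)
            out += (amn * np.cos(np.sqrt(lam) * t)
                    + bmn / np.sqrt(lam) * np.sin(np.sqrt(lam) * t)) * phi(m, n)
    return out

print(np.max(np.abs(u(0.0) - g)))          # series recovers the initial data up to truncation error
```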
McOwen, 4.4 #3; 266B Ralston Hw. Consider the initial-boundary value problem
$$\begin{cases} u_{tt} = \Delta u + f(x,t) & \text{for } x \in \Omega,\ t > 0,\\ u(x,t) = 0 & \text{for } x \in \partial\Omega,\ t > 0,\\ u(x,0) = 0,\ u_t(x,0) = 0 & \text{for } x \in \Omega. \end{cases}$$
Use Duhamel's principle and an expansion of $f$ in eigenfunctions to obtain a (formal) solution.
Proof. a) We expand $u$ in terms of the Dirichlet eigenfunctions of the Laplacian in $\Omega$:
$$\Delta\phi_n + \lambda_n\phi_n = 0 \ \text{ in } \Omega, \qquad \phi_n = 0 \ \text{ on } \partial\Omega.$$
Assume
$$u(x,t) = \sum_{n=1}^{\infty} a_n(t)\phi_n(x), \qquad a_n(t) = \int_\Omega\phi_n(x)u(x,t)\,dx,$$
$$f(x,t) = \sum_{n=1}^{\infty} f_n(t)\phi_n(x), \qquad f_n(t) = \int_\Omega\phi_n(x)f(x,t)\,dx.$$
Then$^{77}$
$$a_n''(t) = \int_\Omega\phi_n(x)u_{tt}\,dx = \int_\Omega\phi_n(\Delta u + f)\,dx = \int_\Omega\phi_n\Delta u\,dx + \int_\Omega\phi_n f\,dx = \int_\Omega\Delta\phi_n\,u\,dx + \underbrace{\int_\Omega\phi_n f\,dx}_{f_n} = -\lambda_n a_n(t) + f_n(t),$$
$$a_n(0) = \int_\Omega\phi_n(x)u(x,0)\,dx = 0, \qquad a_n'(0) = \int_\Omega\phi_n(x)u_t(x,0)\,dx = 0.$$
Thus, we have an ODE which is converted and solved by Duhamel's principle:
$$\begin{cases} a_n'' + \lambda_n a_n = f_n(t),\\ a_n(0) = 0,\\ a_n'(0) = 0, \end{cases} \quad\Rightarrow\quad \begin{cases} \tilde a_n'' + \lambda_n\tilde a_n = 0,\\ \tilde a_n(0,s) = 0,\\ \tilde a_n'(0,s) = f_n(s), \end{cases} \qquad a_n(t) = \int_0^t\tilde a_n(t-s,s)\,ds.$$
With the ansatz $\tilde a_n(t,s) = c_1\cos\sqrt{\lambda_n}\,t + c_2\sin\sqrt{\lambda_n}\,t$, we get $c_1 = 0$, $c_2 = f_n(s)/\sqrt{\lambda_n}$, or
$$\tilde a_n(t,s) = f_n(s)\frac{\sin\sqrt{\lambda_n}\,t}{\sqrt{\lambda_n}}.$$
Duhamel's principle gives
$$a_n(t) = \int_0^t\tilde a_n(t-s,s)\,ds = \int_0^t f_n(s)\frac{\sin\big(\sqrt{\lambda_n}(t-s)\big)}{\sqrt{\lambda_n}}\,ds,$$
$$u(x,t) = \sum_{n=1}^{\infty}\frac{\phi_n(x)}{\sqrt{\lambda_n}}\int_0^t f_n(s)\sin\big(\sqrt{\lambda_n}(t-s)\big)\,ds.$$
77 We used Green's formula: $\int_{\partial\Omega}\big(\phi_n\frac{\partial u}{\partial n} - u\frac{\partial\phi_n}{\partial n}\big)\,ds = \int_\Omega(\phi_n\Delta u - u\Delta\phi_n)\,dx$. On $\partial\Omega$, $u = 0$ and $\phi_n = 0$ since the eigenfunctions are Dirichlet.
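As a sanity check of the Duhamel formula for the coefficient ODE (my own addition; the values of $\lambda$ and the forcing $f_n(t)$ are arbitrary, and scipy is assumed available), the sketch below compares the Duhamel integral with a direct numerical integration of $a_n'' + \lambda_n a_n = f_n$.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 4.0                                   # a sample eigenvalue lambda_n
f_n = lambda t: np.cos(3.0 * t)             # a sample forcing coefficient f_n(t)

# Duhamel's formula: a_n(t) = (1/sqrt(lam)) * int_0^t f_n(s) sin(sqrt(lam)(t-s)) ds
def duhamel(t, num=2000):
    s = np.linspace(0.0, t, num)
    return np.trapz(f_n(s) * np.sin(np.sqrt(lam) * (t - s)), s) / np.sqrt(lam)

# Direct integration of a_n'' + lam * a_n = f_n(t), a_n(0) = a_n'(0) = 0
sol = solve_ivp(lambda t, y: [y[1], -lam * y[0] + f_n(t)], (0, 5), [0.0, 0.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

for t in (1.0, 2.5, 5.0):
    print(t, duhamel(t), sol.sol(t)[0])     # the two columns should agree
```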
Problem (F’90, #3). Consider the initial-boundary value problem
$$\begin{cases} u_{tt} = a(t)u_{xx} + f(x,t), & 0 \le x \le \pi,\ t \ge 0,\\ u(0,t) = u(\pi,t) = 0, & t \ge 0,\\ u(x,0) = g(x),\ u_t(x,0) = h(x), & 0 \le x \le \pi, \end{cases}$$
where the coefficient $a(t) \ne 0$.
a) Express (formally) the solution of this problem by the method of eigenfunction expansions.
b) Show that this problem is not well-posed if $a \equiv -1$.
Hint: Take $f = 0$ and prove that the solution does not depend continuously on the initial data $g$, $h$.
Proof. a) We expand $u$ in terms of the Dirichlet eigenfunctions of the Laplacian in $\Omega$:
$$\phi_{n,xx} + \lambda_n\phi_n = 0 \ \text{ in } \Omega, \qquad \phi_n(0) = \phi_n(\pi) = 0.$$
That gives us the eigenvalues and eigenfunctions of the Laplacian: $\lambda_n = n^2$, $\phi_n(x) = \sin nx$. Assume
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x), \qquad u_n(t) = \int_\Omega\phi_n(x)u(x,t)\,dx,$$
$$f(x,t) = \sum_{n=1}^{\infty} f_n(t)\phi_n(x), \qquad f_n(t) = \int_\Omega\phi_n(x)f(x,t)\,dx,$$
$$g(x) = \sum_{n=1}^{\infty} g_n\phi_n(x), \qquad g_n = \int_\Omega\phi_n(x)g(x)\,dx,$$
$$h(x) = \sum_{n=1}^{\infty} h_n\phi_n(x), \qquad h_n = \int_\Omega\phi_n(x)h(x)\,dx.$$
Then
$$u_n''(t) = \int_\Omega\phi_n(x)u_{tt}\,dx = \int_\Omega\phi_n\big(a(t)u_{xx} + f\big)\,dx = a(t)\int_\Omega\phi_n u_{xx}\,dx + \int_\Omega\phi_n f\,dx = a(t)\int_\Omega\phi_{n,xx}u\,dx + \underbrace{\int_\Omega\phi_n f\,dx}_{f_n} = -\lambda_n a(t)u_n(t) + f_n(t),$$
$$u_n(0) = \int_\Omega\phi_n(x)u(x,0)\,dx = \int_\Omega\phi_n(x)g(x)\,dx = g_n, \qquad u_n'(0) = \int_\Omega\phi_n(x)u_t(x,0)\,dx = \int_\Omega\phi_n(x)h(x)\,dx = h_n.$$
Thus, for each $n$ we have the ODE
$$u_n'' + \lambda_n a(t)u_n = f_n(t), \qquad u_n(0) = g_n, \quad u_n'(0) = h_n.$$
Note: The initial data are not zero, so Duhamel's principle is not directly applicable. Also, the ODE has a time-dependent coefficient, so its solution is not explicit in general. Thus,
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x),$$
where the $u_n(t)$ are solutions of the ODE above.
b) Assume we have two solutions, $u_1$ and $u_2$, of the problem with $a \equiv -1$ and $f = 0$:
$$\begin{cases} u_{1tt} + u_{1xx} = 0,\\ u_1(0,t) = u_1(\pi,t) = 0,\\ u_1(x,0) = g_1(x),\ u_{1t}(x,0) = h_1(x); \end{cases} \qquad \begin{cases} u_{2tt} + u_{2xx} = 0,\\ u_2(0,t) = u_2(\pi,t) = 0,\\ u_2(x,0) = g_2(x),\ u_{2t}(x,0) = h_2(x). \end{cases}$$
Note that the equation is elliptic, and therefore the maximum principle holds. In order to prove that the solution does not depend continuously on the initial data $g$, $h$, we need to show that the difference of the two solutions is not controlled by the difference of the initial data; for instance, that
$$\max_\Omega|u_1 - u_2| > \max|g_1 - g_2| \qquad\text{or}\qquad \max_\Omega|u_{1t} - u_{2t}| > \max|h_1 - h_2|$$
can hold with arbitrarily small data differences.
By the method of separation of variables (with $u = T(t)\sin nx$, so that $T'' - n^2T = 0$), we may obtain
$$u(x,t) = \sum_{n=1}^{\infty}\big(a_n\cosh nt + b_n\sinh nt\big)\sin nx,$$
$$u(x,0) = \sum_{n=1}^{\infty} a_n\sin nx = g(x), \qquad u_t(x,0) = \sum_{n=1}^{\infty} nb_n\sin nx = h(x).$$
For example, take $u_2 \equiv 0$ (data $g_2 = h_2 = 0$) and $u_1$ with data $g_1 = 0$, $h_1(x) = \frac{1}{N}\sin Nx$, so that $u_1(x,t) = \frac{\sinh Nt}{N^2}\sin Nx$. As $N \to \infty$ the data difference tends to zero uniformly, while $\max_x|u_1(x,t) - u_2(x,t)| = \frac{\sinh Nt}{N^2} \to \infty$ for any fixed $t > 0$; the solution does not depend continuously on the initial data.
We also know that for elliptic equations, and for the Laplace equation in particular, the value of the function $u$ has to be prescribed on the entire boundary, i.e. $u = g$ on $\partial\Omega$, which is not the case here, making the problem under-determined. Also, $u_t$ is prescribed on one of the boundaries, making the problem overdetermined.
29 Problems: Eigenvalues of the Laplacian - Heat

The ND HEAT Equation (eigenvalues/eigenfunctions of the Laplacian).
Consider the initial value problem with homogeneous Dirichlet condition:
$$\begin{cases} u_t = \Delta u & \text{for } x \in \Omega,\ t > 0,\\ u(x,0) = g(x) & \text{for } x \in \Omega,\\ u(x,t) = 0 & \text{for } x \in \partial\Omega,\ t > 0. \end{cases}$$
Proof. For $g \in C^2(\Omega)$ with $g = 0$ on $\partial\Omega$, we have the eigenfunction expansion
$$g(x) = \sum_{n=1}^{\infty} a_n\phi_n(x).$$
Assume the solution $u(x,t)$ may be expanded in the eigenfunctions with coefficients depending on $t$: $u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x)$. This implies
$$\sum_{n=1}^{\infty} u_n'(t)\phi_n(x) = -\sum_{n=1}^{\infty}\lambda_n u_n(t)\phi_n(x),$$
so $u_n'(t) + \lambda_n u_n(t) = 0$, which has the general solution $u_n(t) = A_n e^{-\lambda_n t}$. Thus,
$$u(x,t) = \sum_{n=1}^{\infty} A_n e^{-\lambda_n t}\phi_n(x), \qquad u(x,0) = \sum_{n=1}^{\infty} A_n\phi_n(x) = g(x).$$
Comparing with the expansion of $g$, we obtain $A_n = a_n$. Thus, the solution is given by
$$u(x,t) = \sum_{n=1}^{\infty} a_n e^{-\lambda_n t}\phi_n(x), \qquad a_n = \int_\Omega g(x)\phi_n(x)\,dx.$$
Also,
$$u(x,t) = \sum_{n=1}^{\infty} a_n e^{-\lambda_n t}\phi_n(x) = \sum_{n=1}^{\infty}\Big(\int_\Omega g(y)\phi_n(y)\,dy\Big)e^{-\lambda_n t}\phi_n(x) = \int_\Omega\underbrace{\Big(\sum_{n=1}^{\infty} e^{-\lambda_n t}\phi_n(x)\phi_n(y)\Big)}_{K(x,y,t),\ \text{heat kernel}}\,g(y)\,dy.$$
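A quick 1D illustration of this expansion (my own sketch, not from the handbook): on $(0,\pi)$ with Dirichlet conditions, $\phi_n = \sqrt{2/\pi}\sin nx$ and $\lambda_n = n^2$, and the sup norm of the series solution decays roughly like $e^{-\lambda_1 t}$.

```python
import numpy as np

# 1D instance on (0, pi) with Dirichlet BCs: phi_n = sqrt(2/pi) sin(nx), lambda_n = n^2
x = np.linspace(0.0, np.pi, 401)
g = x * (np.pi - x)                         # sample initial data, zero on the boundary
phi = lambda n: np.sqrt(2 / np.pi) * np.sin(n * x)
a = [np.trapz(g * phi(n), x) for n in range(1, 41)]

def u(t):
    return sum(a[n - 1] * np.exp(-n**2 * t) * phi(n) for n in range(1, 41))

for t in (0.0, 0.5, 1.0, 2.0):
    print(t, np.max(np.abs(u(t))))          # decays roughly like exp(-lambda_1 t) = exp(-t)
```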
The 2D HEAT Equation (eigenvalues/eigenfunctions of the Laplacian).
Let $\Omega = (0,a)\times(0,b)$ and consider
$$\begin{cases} u_t = u_{xx} + u_{yy} & \text{for } x \in \Omega,\ t > 0,\\ u(x,0) = g(x) & \text{for } x \in \Omega,\\ u(x,t) = 0 & \text{for } x \in \partial\Omega,\ t > 0. \end{cases} \qquad (29.1)$$
Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian. Separation of variables with the Dirichlet boundary conditions, exactly as in the 2D Poisson and Wave problems above, gives
$$\lambda_{mn} = \pi^2\Big(\frac{m^2}{a^2} + \frac{n^2}{b^2}\Big), \qquad \phi_{mn}(x,y) = \frac{2}{\sqrt{ab}}\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}, \qquad m,n = 1,2,\ldots$$
➁ Second, we solve the Heat Equation (29.1) using the “space” eigenfunctions.
For $g \in C^2(\Omega)$ with $g = 0$ on $\partial\Omega$, we have the eigenfunction expansion
$$g(x) = \sum_{n=1}^{\infty} a_n\phi_n(x).$$
Assume $u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x)$. This implies $u_n'(t) + \lambda_n u_n(t) = 0$, which has the general solution $u_n(t) = A_n e^{-\lambda_n t}$. Thus,
$$u(x,t) = \sum_{n=1}^{\infty} A_n e^{-\lambda_n t}\phi_n(x), \qquad u(x,0) = \sum_{n=1}^{\infty} A_n\phi_n(x) = g(x).$$
Comparing with the expansion of $g$, we obtain $A_n = a_n$. Thus, the solution is given by
$$u(x,t) = \sum_{m,n=1}^{\infty} a_{mn}e^{-\lambda_{mn}t}\phi_{mn}(x),$$
with $\lambda_{mn}$, $\phi_{mn}$ given above and $a_{mn} = \int_\Omega g(x)\phi_{mn}(x)\,dx$.
Problem (S’91, #2). Consider the heat equation
$$u_t = u_{xx} + u_{yy}$$
on the square $\Omega = \{0 \le x \le 2\pi,\ 0 \le y \le 2\pi\}$ with periodic boundary conditions and with initial data $u(0,x,y) = f(x,y)$.
a) Find the solution using separation of variables.
Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian:
$$\begin{cases} u_{xx} + u_{yy} + \lambda u = 0 & \text{in } \Omega,\\ u(0,y) = u(2\pi,y) & \text{for } 0 \le y \le 2\pi,\\ u(x,0) = u(x,2\pi) & \text{for } 0 \le x \le 2\pi. \end{cases}$$
Let $u(x,y) = X(x)Y(y)$; substitution in the PDE gives $X''Y + XY'' + \lambda XY = 0$, i.e.
$$\frac{X''}{X} + \frac{Y''}{Y} + \lambda = 0.$$
Letting $\lambda = \mu^2 + \nu^2$ and using the periodic BCs, we find the equations for $X$ and $Y$:
$$X'' + \mu^2 X = 0, \quad X(0) = X(2\pi); \qquad Y'' + \nu^2 Y = 0, \quad Y(0) = Y(2\pi).$$
The solutions of these one-dimensional eigenvalue problems are
$$\mu_m = m, \quad X_m(x) = e^{imx}; \qquad \nu_n = n, \quad Y_n(y) = e^{iny},$$
where $m,n = \ldots,-2,-1,0,1,2,\ldots$. Thus we obtain eigenvalues and eigenfunctions of the Laplacian:
$$\lambda_{mn} = m^2 + n^2, \qquad \phi_{mn}(x,y) = e^{imx}e^{iny}, \qquad m,n \in \mathbb{Z}.$$
➁ Second, we solve the heat equation using the “space” eigenfunctions.
Assume $u(x,y,t) = \sum_{m,n=-\infty}^{\infty} u_{mn}(t)e^{imx}e^{iny}$. This implies
$$u_{mn}'(t) + (m^2+n^2)u_{mn}(t) = 0,$$
which has the general solution $u_{mn}(t) = c_{mn}e^{-(m^2+n^2)t}$. Thus,
$$u(x,y,t) = \sum_{m,n=-\infty}^{\infty} c_{mn}\,e^{-(m^2+n^2)t}\,e^{imx}e^{iny}, \qquad u(x,y,0) = \sum_{m,n=-\infty}^{\infty} c_{mn}e^{imx}e^{iny} = f(x,y).$$
Multiplying by $e^{-imx}e^{-iny}$ and integrating over the square,
$$\int_0^{2\pi}\!\!\int_0^{2\pi} f(x,y)e^{-imx}e^{-iny}\,dx\,dy = \int_0^{2\pi}\!\!\int_0^{2\pi}\sum_{m',n'} c_{m'n'}e^{i(m'-m)x}e^{i(n'-n)y}\,dx\,dy = 4\pi^2 c_{mn},$$
$$c_{mn} = \frac{1}{4\pi^2}\int_0^{2\pi}\!\!\int_0^{2\pi} f(x,y)e^{-imx}e^{-iny}\,dx\,dy = \hat f_{mn}.$$
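The damping factor $e^{-(m^2+n^2)t}$ applied mode by mode is exactly what an FFT-based solver does. The sketch below (my own addition; the initial datum is an arbitrary trigonometric choice) checks the FFT solution against the closed form for that datum.

```python
import numpy as np

# Periodic heat equation on [0, 2pi]^2: u_t = u_xx + u_yy, solved by damping Fourier modes
N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.cos(3 * X) * np.sin(2 * Y) + 0.5 * np.cos(X)      # sample periodic initial data

m = np.fft.fftfreq(N, d=1.0 / N)                          # integer wavenumbers
M2 = m[:, None] ** 2 + m[None, :] ** 2

def u(t):
    f_hat = np.fft.fft2(f)                                # coefficients c_mn (up to normalization)
    return np.real(np.fft.ifft2(f_hat * np.exp(-M2 * t)))

# Each mode decays like exp(-(m^2 + n^2) t); for this datum the exact solution is
u_exact = lambda t: np.exp(-13 * t) * np.cos(3 * X) * np.sin(2 * Y) + 0.5 * np.exp(-t) * np.cos(X)
print(np.max(np.abs(u(0.3) - u_exact(0.3))))              # should be near machine precision
```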
b) Show that the integral $\int_\Omega u^2(x,y,t)\,dx\,dy$ is decreasing in $t$ if $f$ is not constant.
Proof. We have $u_t = u_{xx} + u_{yy}$. Multiply the equation by $u$ and integrate:
$$uu_t = u\Delta u, \qquad \frac{1}{2}\frac{d}{dt}u^2 = u\Delta u,$$
$$\frac{1}{2}\frac{d}{dt}\int_\Omega u^2\,dx\,dy = \int_\Omega u\Delta u\,dx\,dy = \underbrace{\int_{\partial\Omega} u\frac{\partial u}{\partial n}\,ds}_{=0\ \text{(periodic BC)}} - \int_\Omega|\nabla u|^2\,dx\,dy = -\int_\Omega|\nabla u|^2\,dx\,dy \le 0.$$
Equality is obtained only when $\nabla u = 0$, i.e. $u = \text{constant}$, i.e. $f = \text{constant}$. If $f$ is not constant, $\int_\Omega u^2\,dx\,dy$ is decreasing in $t$.
Problem (F’98, #3). Consider the eigenvalue problem
$$\frac{d^2\phi}{dx^2} + \lambda\phi = 0, \qquad \phi(0) - \frac{d\phi}{dx}(0) = 0, \quad \phi(1) + \frac{d\phi}{dx}(1) = 0.$$
a) Show that all eigenvalues are positive.
b) Show that there exists a sequence of eigenvalues $\lambda = \lambda_n$, each of which satisfies
$$\tan\sqrt{\lambda} = \frac{2\sqrt{\lambda}}{\lambda - 1}.$$
c) Solve the following initial-boundary value problem on $0 < x < 1$, $t > 0$:
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, \qquad u(0,t) - \frac{\partial u}{\partial x}(0,t) = 0, \quad u(1,t) + \frac{\partial u}{\partial x}(1,t) = 0, \qquad u(x,0) = f(x).$$
You may call the relevant eigenfunctions $\phi_n(x)$ and assume that they are known.
Proof. a) • If $\lambda = 0$, the ODE reduces to $\phi'' = 0$. Try $\phi(x) = Ax + B$. From the first boundary condition,
$$\phi(0) - \phi'(0) = B - A = 0 \ \Rightarrow\ B = A.$$
Thus, the solution takes the form $\phi(x) = Ax + A$. The second boundary condition gives
$$\phi(1) + \phi'(1) = 3A = 0 \ \Rightarrow\ A = B = 0.$$
Thus the only solution is $\phi \equiv 0$, which is not an eigenfunction, so $0$ is not an eigenvalue.
• If $\lambda < 0$, try $\phi(x) = e^{sx}$, which gives $s = \pm\sqrt{-\lambda} = \pm\beta \in \mathbb{R}$. Hence, the family of solutions is $\phi(x) = Ae^{\beta x} + Be^{-\beta x}$, with $\phi'(x) = \beta Ae^{\beta x} - \beta Be^{-\beta x}$. The boundary conditions give
$$\phi(0) - \phi'(0) = A + B - \beta A + \beta B = A(1-\beta) + B(1+\beta) = 0, \qquad (29.2)$$
$$\phi(1) + \phi'(1) = Ae^{\beta} + Be^{-\beta} + \beta Ae^{\beta} - \beta Be^{-\beta} = Ae^{\beta}(1+\beta) + Be^{-\beta}(1-\beta) = 0. \qquad (29.3)$$
From (29.2) and (29.3) we get
$$\frac{1+\beta}{1-\beta} = -\frac{A}{B} \quad\text{and}\quad \frac{1+\beta}{1-\beta} = -\frac{B}{A}e^{-2\beta}, \quad\text{or}\quad \frac{A}{B} = e^{-\beta}.$$
From (29.2), $\beta = \frac{A+B}{A-B}$, and thus $\frac{A}{B} = e^{\frac{A+B}{B-A}}$, which has no solutions.
b) Since $\lambda > 0$, the ansatz $\phi = e^{sx}$ gives $s = \pm i\sqrt{\lambda}$ and the family of solutions takes the form
$$\phi(x) = A\sin(x\sqrt{\lambda}) + B\cos(x\sqrt{\lambda}).$$
Then $\phi'(x) = A\sqrt{\lambda}\cos(x\sqrt{\lambda}) - B\sqrt{\lambda}\sin(x\sqrt{\lambda})$. The first boundary condition gives
$$\phi(0) - \phi'(0) = B - A\sqrt{\lambda} = 0 \ \Rightarrow\ B = A\sqrt{\lambda}.$$
Hence, $\phi(x) = A\sin(x\sqrt{\lambda}) + A\sqrt{\lambda}\cos(x\sqrt{\lambda})$. The second boundary condition gives
$$\phi(1) + \phi'(1) = A\sin\sqrt{\lambda} + A\sqrt{\lambda}\cos\sqrt{\lambda} + A\sqrt{\lambda}\cos\sqrt{\lambda} - A\lambda\sin\sqrt{\lambda} = A\big[(1-\lambda)\sin\sqrt{\lambda} + 2\sqrt{\lambda}\cos\sqrt{\lambda}\big] = 0,$$
and $A \ne 0$ (since $A = 0$ implies $B = 0$ and $\phi \equiv 0$, which is not an eigenfunction). Therefore
$$-(1-\lambda)\sin\sqrt{\lambda} = 2\sqrt{\lambda}\cos\sqrt{\lambda}, \quad\text{and thus}\quad \tan\sqrt{\lambda} = \frac{2\sqrt{\lambda}}{\lambda-1}.$$
c) We may assume that the eigenvalues/eigenfunctions of the Laplacian, $\lambda_n$ and $\phi_n(x)$, are known. We solve the heat equation using the “space” eigenfunctions:
$$\begin{cases} u_t = u_{xx},\\ u(0,t) - u_x(0,t) = 0,\ u(1,t) + u_x(1,t) = 0,\\ u(x,0) = f(x). \end{cases}$$
For $f$, we have an eigenfunction expansion
$$f(x) = \sum_{n=1}^{\infty} a_n\phi_n(x).$$
Assume $u(x,t) = \sum_{n=1}^{\infty} u_n(t)\phi_n(x)$. This implies $u_n'(t) + \lambda_n u_n(t) = 0$, which has the general solution $u_n(t) = A_n e^{-\lambda_n t}$. Thus,
$$u(x,t) = \sum_{n=1}^{\infty} A_n e^{-\lambda_n t}\phi_n(x), \qquad u(x,0) = \sum_{n=1}^{\infty} A_n\phi_n(x) = f(x).$$
Comparing with the expansion of $f$, we have $A_n = a_n$. Thus, the solution is given by
$$u(x,t) = \sum_{n=1}^{\infty} a_n e^{-\lambda_n t}\phi_n(x), \qquad a_n = \int_0^1 f(x)\phi_n(x)\,dx.$$
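A small numerical sketch (my own addition; scipy assumed available) that computes the first few eigenvalues from the transcendental equation in part (b), written in the equivalent root form $(\mu^2-1)\sin\mu - 2\mu\cos\mu = 0$ with $\mu = \sqrt{\lambda}$.

```python
import numpy as np
from scipy.optimize import brentq

# Eigenvalue condition from part (b): tan(sqrt(lambda)) = 2 sqrt(lambda) / (lambda - 1),
# i.e., with mu = sqrt(lambda):  (mu^2 - 1) sin(mu) - 2 mu cos(mu) = 0.
F = lambda mu: (mu**2 - 1.0) * np.sin(mu) - 2.0 * mu * np.cos(mu)

# One root lies in each interval (k*pi, (k+1)*pi); bracket and solve.
roots = []
for k in range(5):
    a, b = k * np.pi + 1e-6, (k + 1) * np.pi - 1e-6
    if F(a) * F(b) < 0:
        mu = brentq(F, a, b)
        roots.append(mu**2)          # lambda_n = mu_n^2

print(roots)                          # the first few eigenvalues lambda_n
```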
Problem (W’03, #3); 266B Ralston Hw. Let $\Omega$ be a smooth domain in three dimensions and consider the initial-boundary value problem for the heat equation
$$\begin{cases} u_t = \Delta u + f(x) & \text{for } x \in \Omega,\ t > 0,\\ \partial u/\partial n = 0 & \text{for } x \in \partial\Omega,\ t > 0,\\ u(x,0) = g(x) & \text{for } x \in \Omega, \end{cases}$$
in which $f$ and $g$ are known smooth functions with $\partial g/\partial n = 0$ for $x \in \partial\Omega$.
a) Find an approximate formula for $u$ as $t \to \infty$.
Proof. We expand $u$ in terms of the Neumann eigenfunctions of the Laplacian in $\Omega$:
$$\Delta\phi_n + \lambda_n\phi_n = 0 \ \text{ in } \Omega, \qquad \frac{\partial\phi_n}{\partial n} = 0 \ \text{ on } \partial\Omega.$$
Note that here $\lambda_1 = 0$ and $\phi_1$ is the constant $V^{-1/2}$, where $V$ is the volume of $\Omega$. Assume
$$u(x,t) = \sum_{n=1}^{\infty} a_n(t)\phi_n(x), \qquad a_n(t) = \int_\Omega\phi_n(x)u(x,t)\,dx,$$
$$f(x) = \sum_{n=1}^{\infty} f_n\phi_n(x), \qquad f_n = \int_\Omega\phi_n(x)f(x)\,dx,$$
$$g(x) = \sum_{n=1}^{\infty} g_n\phi_n(x), \qquad g_n = \int_\Omega\phi_n(x)g(x)\,dx.$$
Then$^{78}$
$$a_n'(t) = \int_\Omega\phi_n(x)u_t\,dx = \int_\Omega\phi_n(\Delta u + f)\,dx = \int_\Omega\phi_n\Delta u\,dx + \int_\Omega\phi_n f\,dx = \int_\Omega\Delta\phi_n\,u\,dx + \underbrace{\int_\Omega\phi_n f\,dx}_{f_n} = -\lambda_n a_n + f_n,$$
$$a_n(0) = \int_\Omega\phi_n(x)u(x,0)\,dx = \int_\Omega\phi_n g\,dx = g_n.$$
Thus, we solve the ODE
$$a_n' + \lambda_n a_n = f_n, \qquad a_n(0) = g_n.$$
For $n = 1$, $\lambda_1 = 0$, and we obtain $a_1(t) = f_1 t + g_1$. For $n \ge 2$, the homogeneous solution is $a_{n,h} = ce^{-\lambda_n t}$; the ansatz for a particular solution is $a_{n,p} = c_1 t + c_2$, which gives $c_1 = 0$ and $c_2 = f_n/\lambda_n$. Using the initial condition, we obtain
$$a_n(t) = \Big(g_n - \frac{f_n}{\lambda_n}\Big)e^{-\lambda_n t} + \frac{f_n}{\lambda_n}.$$
78 We used Green's formula: $\int_{\partial\Omega}\big(\phi_n\frac{\partial u}{\partial n} - u\frac{\partial\phi_n}{\partial n}\big)\,ds = \int_\Omega(\phi_n\Delta u - u\Delta\phi_n)\,dx$. On $\partial\Omega$, $\partial u/\partial n = 0$ and $\partial\phi_n/\partial n = 0$ since the eigenfunctions are Neumann.
$$u(x,t) = (f_1 t + g_1)\phi_1(x) + \sum_{n=2}^{\infty}\Big[\Big(g_n - \frac{f_n}{\lambda_n}\Big)e^{-\lambda_n t} + \frac{f_n}{\lambda_n}\Big]\phi_n(x).$$
If $f_1 = 0$ (i.e. $\int_\Omega f(x)\,dx = 0$), then $\displaystyle\lim_{t\to\infty} u(x,t) = g_1\phi_1 + \sum_{n=2}^{\infty}\frac{f_n\phi_n}{\lambda_n}$.
If $f_1 \ne 0$ (i.e. $\int_\Omega f(x)\,dx \ne 0$), then $u(x,t) \sim f_1\phi_1 t$ as $t \to \infty$.
b) If $g \ge 0$ and $f > 0$, show that $u > 0$ for all $t > 0$.
Problem (S’97, #2). a) Consider the eigenvalue problem for the Laplace operator in $\Omega \subset \mathbb{R}^2$ with zero Neumann boundary condition:
$$u_{xx} + u_{yy} + \lambda u = 0 \ \text{ in } \Omega, \qquad \frac{\partial u}{\partial n} = 0 \ \text{ on } \partial\Omega.$$
Prove that $\lambda_0 = 0$ is the lowest eigenvalue and that it is simple.
b) Assume that the eigenfunctions $\phi_n(x,y)$ of the problem in (a) form a complete orthogonal system, and that $f(x,y)$ has a uniformly convergent expansion
$$f(x,y) = \sum_{n=0}^{\infty} f_n\phi_n(x,y).$$
Solve the initial value problem $u_t = \Delta u + f(x,y)$ subject to the initial and boundary conditions
$$u(x,y,0) = 0, \qquad \frac{\partial u}{\partial n}\Big|_{\partial\Omega} = 0.$$
What is the behavior of $u(x,y,t)$ as $t \to \infty$?
c) Consider the problem with Neumann boundary conditions
$$v_{xx} + v_{yy} + f(x,y) = 0 \ \text{ in } \Omega, \qquad \frac{\partial v}{\partial n} = 0 \ \text{ on } \partial\Omega.$$
When does a solution exist? Find this solution, and find its relation with the behavior of $\lim u(x,y,t)$ in (b) as $t \to \infty$.
Proof. a) Suppose this eigenvalue problem has a solution $u$ with $\lambda \le 0$. Multiplying $\Delta u + \lambda u = 0$ by $u$ and integrating over $\Omega$, we get
$$\int_\Omega u\Delta u\,dx + \lambda\int_\Omega u^2\,dx = 0,$$
$$\underbrace{\int_{\partial\Omega} u\frac{\partial u}{\partial n}\,ds}_{=0} - \int_\Omega|\nabla u|^2\,dx + \lambda\int_\Omega u^2\,dx = 0, \qquad \int_\Omega|\nabla u|^2\,dx = \underbrace{\lambda}_{\le 0}\int_\Omega u^2\,dx.$$
Thus $\nabla u = 0$ in $\Omega$, and $u$ is constant in $\Omega$. Hence, we now have
$$0 = \underbrace{\lambda}_{\le 0}\int_\Omega u^2\,dx.$$
For nontrivial $u$, we have $\lambda = 0$. For this eigenvalue problem, $\lambda = 0$ is an eigenvalue, its eigenspace is the set of constants, and all other $\lambda$'s are positive.
b) We expand $u$ in terms of the Neumann eigenfunctions of the Laplacian in $\Omega$:$^{79}$
$$\Delta\phi_n + \lambda_n\phi_n = 0 \ \text{ in } \Omega, \qquad \frac{\partial\phi_n}{\partial n} = 0 \ \text{ on } \partial\Omega,$$
$$u(x,y,t) = \sum_{n=1}^{\infty} a_n(t)\phi_n(x,y), \qquad a_n(t) = \int_\Omega\phi_n(x,y)u(x,y,t)\,dx.$$
Then$^{80}$
$$a_n'(t) = \int_\Omega\phi_n(x,y)u_t\,dx = \int_\Omega\phi_n(\Delta u + f)\,dx = \int_\Omega\Delta\phi_n\,u\,dx + \underbrace{\int_\Omega\phi_n f\,dx}_{f_n} = -\lambda_n a_n + f_n,$$
$$a_n(0) = \int_\Omega\phi_n(x,y)u(x,y,0)\,dx = 0.$$
Thus, we solve the ODE
$$a_n' + \lambda_n a_n = f_n, \qquad a_n(0) = 0.$$
For $n = 1$, $\lambda_1 = 0$, and we obtain $a_1(t) = f_1 t$. For $n \ge 2$, the homogeneous solution is $a_{n,h} = ce^{-\lambda_n t}$; the ansatz for a particular solution is $a_{n,p} = c_1 t + c_2$, which gives $c_1 = 0$ and $c_2 = f_n/\lambda_n$. Using the initial condition, we obtain
$$a_n(t) = -\frac{f_n}{\lambda_n}e^{-\lambda_n t} + \frac{f_n}{\lambda_n},$$
$$u(x,t) = f_1\phi_1 t + \sum_{n=2}^{\infty}\Big(-\frac{f_n}{\lambda_n}e^{-\lambda_n t} + \frac{f_n}{\lambda_n}\Big)\phi_n(x).$$
If $f_1 = 0$ (i.e. $\int_\Omega f(x)\,dx = 0$), then $\displaystyle\lim_{t\to\infty} u(x,t) = \sum_{n=2}^{\infty}\frac{f_n\phi_n}{\lambda_n}$.
If $f_1 \ne 0$ (i.e. $\int_\Omega f(x)\,dx \ne 0$), then $u(x,t) \sim f_1\phi_1 t$ as $t \to \infty$.
c) Integrate $\Delta v + f(x,y) = 0$ over $\Omega$:
$$\int_\Omega f\,dx = -\int_\Omega\Delta v\,dx = -\int_\Omega\nabla\cdot\nabla v\,dx \overset{1}{=} -\int_{\partial\Omega}\frac{\partial v}{\partial n}\,ds \overset{2}{=} 0,$$
where we used (1) the divergence theorem and (2) the Neumann boundary conditions. Thus, a solution exists only if $\int_\Omega f\,dx = 0$.
Assume $v(x,y) = \sum_{n=0}^{\infty} a_n\phi_n(x,y)$. Since $f(x,y) = \sum_{n=0}^{\infty} f_n\phi_n(x,y)$, we obtain
$$-\sum_{n=0}^{\infty}\lambda_n a_n\phi_n + \sum_{n=0}^{\infty} f_n\phi_n = 0, \qquad a_n = \frac{f_n}{\lambda_n},$$
$$v(x,y) = \sum_{n=0}^{\infty}\frac{f_n}{\lambda_n}\phi_n(x,y).$$
79 We use $dx\,dy \to dx$.
80 We used Green's formula: $\int_{\partial\Omega}\big(\phi_n\frac{\partial u}{\partial n} - u\frac{\partial\phi_n}{\partial n}\big)\,ds = \int_\Omega(\phi_n\Delta u - u\Delta\phi_n)\,dx$. On $\partial\Omega$, $\partial u/\partial n = 0$ and $\partial\phi_n/\partial n = 0$ since the eigenfunctions are Neumann.
29.1 Heat Equation with Periodic Boundary Conditions in 2D (with extra terms)

Problem (F’99, #5). In two spatial dimensions, consider the differential equation
$$u_t = -\varepsilon\Delta u - \Delta^2 u$$
with periodic boundary conditions on the unit square $[0,2\pi]^2$.
a) If $\varepsilon = 2$, find a solution whose amplitude increases as $t$ increases.
b) Find a value $\varepsilon_0$, so that the solution of this PDE stays bounded as $t \to \infty$, if $\varepsilon < \varepsilon_0$.
Proof. a) Eigenfunctions of the Laplacian. The periodic boundary conditions imply a Fourier series solution of the form
$$u(x,t) = \sum_{m,n} a_{mn}(t)e^{i(mx+ny)}.$$
Then
$$u_t = \sum_{m,n} a_{mn}'(t)e^{i(mx+ny)}, \qquad \Delta u = u_{xx} + u_{yy} = -\sum_{m,n}(m^2+n^2)\,a_{mn}(t)e^{i(mx+ny)},$$
$$\Delta^2 u = u_{xxxx} + 2u_{xxyy} + u_{yyyy} = \sum_{m,n}(m^4 + 2m^2n^2 + n^4)\,a_{mn}(t)e^{i(mx+ny)} = \sum_{m,n}(m^2+n^2)^2\,a_{mn}(t)e^{i(mx+ny)}.$$
Plugging this into the PDE, we obtain
$$a_{mn}'(t) = \varepsilon(m^2+n^2)a_{mn}(t) - (m^2+n^2)^2 a_{mn}(t), \qquad a_{mn}'(t) - (m^2+n^2)\big[\varepsilon - (m^2+n^2)\big]a_{mn}(t) = 0.$$
The solution to the ODE above is
$$a_{mn}(t) = \alpha_{mn}\,e^{(m^2+n^2)[\varepsilon-(m^2+n^2)]t}, \qquad u(x,t) = \sum_{m,n}\alpha_{mn}\,e^{(m^2+n^2)[\varepsilon-(m^2+n^2)]t}\underbrace{e^{i(mx+ny)}}_{\text{oscillates}}.$$
When $\varepsilon = 2$, we have
$$u(x,t) = \sum_{m,n}\alpha_{mn}\,e^{(m^2+n^2)[2-(m^2+n^2)]t}e^{i(mx+ny)}.$$
We need a solution whose amplitude increases as $t$ increases. Thus, we need those $\alpha_{mn} > 0$ with
$$(m^2+n^2)\big[2-(m^2+n^2)\big] > 0, \quad\text{i.e.}\quad 0 < m^2+n^2 < 2.$$
Hence $\alpha_{mn} > 0$ for $(m,n) = (1,0)$ and $(m,n) = (0,1)$ (and we may keep the constant $(0,0)$ mode); else $\alpha_{mn} = 0$. Thus, for example,
$$u(x,t) = \alpha_{00} + \alpha_{10}e^t e^{ix} + \alpha_{01}e^t e^{iy} = 1 + e^t e^{ix} + e^t e^{iy} = 1 + e^t(\cos x + i\sin x) + e^t(\cos y + i\sin y).$$
b) For ε ≤ ε0 = 1, the solution stays bounded as t → ∞.
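The mode-by-mode growth rate derived above is easy to tabulate numerically. The sketch below (my own illustration) confirms that for $\varepsilon = 2$ some modes grow, while for $\varepsilon < 1$ the largest rate over nonzero modes is negative, consistent with the choice $\varepsilon_0 = 1$.

```python
import numpy as np

# Growth rate of the (m, n) Fourier mode: sigma(m, n) = (m^2 + n^2) * (eps - (m^2 + n^2))
def max_growth_rate(eps, K=10):
    m = np.arange(-K, K + 1)
    M2 = m[:, None] ** 2 + m[None, :] ** 2
    return np.max(M2 * (eps - M2))

print(max_growth_rate(2.0))   # positive: the (1,0) and (0,1) modes grow like e^t
print(max_growth_rate(0.5))   # zero (constant mode only): every nonzero mode decays, solutions stay bounded
```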
Problem (F’93, #1). Suppose that $a$ and $b$ are constants with $a \ge 0$, and consider the equation
$$u_t = u_{xx} + u_{yy} - au^3 + bu \qquad (29.4)$$
in which $u(x,y,t)$ is $2\pi$-periodic in $x$ and $y$.
a) Let $u$ be a solution of (29.4) with
$$\|u(t=0)\| = \Big(\int_0^{2\pi}\!\!\int_0^{2\pi}|u(x,y,t=0)|^2\,dx\,dy\Big)^{1/2} < \epsilon.$$
Derive an explicit bound on $\|u(t)\|$ and show that it stays finite for all $t$.
b) If $a = 0$, construct the normal modes for (29.4); i.e. find all solutions of the form $u(x,y,t) = e^{\lambda t + ikx + ily}$.
c) Use these normal modes to construct a solution of (29.4) with $a = 0$ for the initial data
$$u(x,y,t=0) = \frac{1}{1 - \frac{1}{2}e^{ix}} + \frac{1}{1 - \frac{1}{2}e^{-ix}}.$$
Proof. a) Multiply the equation by $u$ and integrate:
$$u_t = \Delta u - au^3 + bu, \qquad uu_t = u\Delta u - au^4 + bu^2,$$
$$\int_\Omega uu_t\,dx = \int_\Omega u\Delta u\,dx - \int_\Omega au^4\,dx + \int_\Omega bu^2\,dx,$$
$$\frac{1}{2}\frac{d}{dt}\int_\Omega u^2\,dx = \underbrace{\int_{\partial\Omega} u\frac{\partial u}{\partial n}\,ds}_{=0,\ u\ \text{periodic on }[0,2\pi]^2} - \int_\Omega|\nabla u|^2\,dx \underbrace{- \int_\Omega au^4\,dx}_{\le 0} + \int_\Omega bu^2\,dx,$$
$$\frac{d}{dt}\|u\|_2^2 \le 2b\|u\|_2^2, \qquad \|u\|_2^2 \le \|u(x,0)\|_2^2\,e^{2bt}, \qquad \|u\|_2 \le \|u(x,0)\|_2\,e^{bt} \le \epsilon\,e^{bt}.$$
Thus, $\|u\|$ stays finite for all $t$.
b) Since $a = 0$, plugging $u = e^{\lambda t + ikx + ily}$ into the equation, we obtain:
$$u_t = u_{xx} + u_{yy} + bu, \qquad \lambda e^{\lambda t+ikx+ily} = (-k^2 - l^2 + b)e^{\lambda t+ikx+ily}, \qquad \lambda = -k^2 - l^2 + b.$$
Thus,
$$u_{kl} = e^{(-k^2-l^2+b)t+ikx+ily}, \qquad u(x,y,t) = \sum_{k,l} a_{kl}\,e^{(-k^2-l^2+b)t+ikx+ily}.$$
c) Using the initial condition, we obtain:
$$u(x,y,0) = \sum_{k,l} a_{kl}e^{i(kx+ly)} = \frac{1}{1-\frac{1}{2}e^{ix}} + \frac{1}{1-\frac{1}{2}e^{-ix}} = \sum_{k=0}^{\infty}\Big(\frac{1}{2}e^{ix}\Big)^k + \sum_{k=0}^{\infty}\Big(\frac{1}{2}e^{-ix}\Big)^k = 2 + \sum_{k=1}^{\infty}\frac{1}{2^k}e^{ikx} + \sum_{k=-\infty}^{-1}\frac{1}{2^{-k}}e^{ikx}.$$
Thus $l = 0$, and we have
$$\sum_{k=-\infty}^{\infty} a_k e^{ikx} = 2 + \sum_{k=1}^{\infty}\frac{1}{2^k}e^{ikx} + \sum_{k=-\infty}^{-1}\frac{1}{2^{-k}}e^{ikx},$$
$$\Rightarrow\ a_0 = 2;\quad a_k = \frac{1}{2^k},\ k>0;\quad a_k = \frac{1}{2^{-k}},\ k<0 \qquad\Rightarrow\qquad a_0 = 2;\quad a_k = \frac{1}{2^{|k|}},\ k\ne 0.$$
$$u(x,y,t) = 2e^{bt} + \sum_{k=-\infty,\ k\ne 0}^{+\infty}\frac{1}{2^{|k|}}\,e^{(-k^2+b)t+ikx}.\ {}^{81}$$
81 Note a similar question formulation in F’92 #3(b).
Problem (S’00, #3). Consider the initial-boundary value problem for $u = u(x,y,t)$:
$$u_t = \Delta u - u$$
for $(x,y) \in [0,2\pi]^2$, with periodic boundary conditions and with $u(x,y,0) = u_0(x,y)$, in which $u_0$ is periodic. Find an asymptotic expansion for $u$ for $t$ large with terms tending to zero increasingly rapidly as $t \to \infty$.
Proof. Since we have periodic boundary conditions, assume
$$u(x,y,t) = \sum_{m,n} u_{mn}(t)\,e^{i(mx+ny)}.$$
Plug this into the equation:
$$\sum_{m,n} u_{mn}'(t)\,e^{i(mx+ny)} = \sum_{m,n}(-m^2-n^2-1)\,u_{mn}(t)\,e^{i(mx+ny)},$$
$$u_{mn}'(t) = (-m^2-n^2-1)\,u_{mn}(t), \qquad u_{mn}(t) = a_{mn}\,e^{(-m^2-n^2-1)t},$$
$$u(x,y,t) = \sum_{m,n} a_{mn}\,e^{-(m^2+n^2+1)t}\,e^{i(mx+ny)}.$$
Since $u_0$ is periodic,
$$u_0(x,y) = \sum_{m,n}\hat u_{0,mn}\,e^{i(mx+ny)}, \qquad \hat u_{0,mn} = \frac{1}{4\pi^2}\int_0^{2\pi}\!\!\int_0^{2\pi} u_0(x,y)\,e^{-i(mx+ny)}\,dx\,dy.$$
The initial condition gives:
$$u(x,y,0) = \sum_{m,n} a_{mn}\,e^{i(mx+ny)} = u_0(x,y) \ \Rightarrow\ a_{mn} = \hat u_{0,mn},$$
$$u(x,y,t) = \sum_{m,n}\hat u_{0,mn}\,e^{-(m^2+n^2+1)t}\,e^{i(mx+ny)}.$$
Each term $\hat u_{0,mn}\,e^{-(m^2+n^2+1)t}e^{i(mx+ny)} \to 0$ as $t \to \infty$, and terms with larger $m^2+n^2$ tend to zero increasingly rapidly.
30 Problems: Fourier Transform
Problem (S’01, #2b). Write the solution of the initial value problem
$$U_t - \begin{pmatrix}1 & 0\\ 5 & 3\end{pmatrix}U_x = 0,$$
for general initial data
$$\begin{pmatrix}u^{(1)}(x,0)\\ u^{(2)}(x,0)\end{pmatrix} = \begin{pmatrix}f(x)\\ 0\end{pmatrix},$$
as an inverse Fourier transform. You may assume that $f$ is smooth and rapidly decreasing as $|x| \to \infty$.
Proof. Consider the original system:
$$u^{(1)}_t - u^{(1)}_x = 0, \qquad u^{(2)}_t - 5u^{(1)}_x - 3u^{(2)}_x = 0.$$
Take the Fourier transform in $x$. The transformed initial value problems are:
$$\hat u^{(1)}_t - i\xi\hat u^{(1)} = 0, \quad \hat u^{(1)}(\xi,0) = \hat f(\xi); \qquad \hat u^{(2)}_t - 5i\xi\hat u^{(1)} - 3i\xi\hat u^{(2)} = 0, \quad \hat u^{(2)}(\xi,0) = 0.$$
Solving the first ODE for $\hat u^{(1)}$ gives $\hat u^{(1)}(\xi,t) = \hat f(\xi)e^{i\xi t}$. With this $\hat u^{(1)}$, the second initial value problem becomes
$$\hat u^{(2)}_t - 3i\xi\hat u^{(2)} = 5i\xi\hat f(\xi)e^{i\xi t}, \qquad \hat u^{(2)}(\xi,0) = 0.$$
The homogeneous solution of the above ODE is $\hat u^{(2)}_h(\xi,t) = c_1e^{3i\xi t}$. With $\hat u^{(2)}_p = c_2e^{i\xi t}$ as an ansatz for a particular solution, we obtain:
$$i\xi c_2e^{i\xi t} - 3i\xi c_2e^{i\xi t} = 5i\xi\hat f(\xi)e^{i\xi t} \ \Rightarrow\ -2i\xi c_2 = 5i\xi\hat f(\xi) \ \Rightarrow\ c_2 = -\frac{5}{2}\hat f(\xi), \qquad \hat u^{(2)}_p(\xi,t) = -\frac{5}{2}\hat f(\xi)e^{i\xi t},$$
$$\hat u^{(2)}(\xi,t) = \hat u^{(2)}_h(\xi,t) + \hat u^{(2)}_p(\xi,t) = c_1e^{3i\xi t} - \frac{5}{2}\hat f(\xi)e^{i\xi t}.$$
We find $c_1$ using the initial condition: $\hat u^{(2)}(\xi,0) = c_1 - \frac{5}{2}\hat f(\xi) = 0$, so $c_1 = \frac{5}{2}\hat f(\xi)$. Thus,
$$\hat u^{(2)}(\xi,t) = \frac{5}{2}\hat f(\xi)\big(e^{3i\xi t} - e^{i\xi t}\big).$$
$u^{(1)}(x,t)$ and $u^{(2)}(x,t)$ are obtained by taking the inverse Fourier transform:
$$u^{(1)}(x,t) = \big(\hat u^{(1)}(\xi,t)\big)^{\vee} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{ix\xi}\hat f(\xi)\,e^{i\xi t}\,d\xi, \qquad u^{(2)}(x,t) = \big(\hat u^{(2)}(\xi,t)\big)^{\vee} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{ix\xi}\,\frac{5}{2}\hat f(\xi)\big(e^{3i\xi t} - e^{i\xi t}\big)\,d\xi.$$
Problem (S’02, #4). Use the Fourier transform on $L^2(\mathbb{R})$ to show that
$$\frac{du}{dx} + cu(x) + u(x-1) = f \qquad (30.1)$$
has a unique solution $u \in L^2(\mathbb{R})$ for each $f \in L^2(\mathbb{R})$ when $|c| > 1$; you may assume that $c$ is a real number.
Proof. For $u \in L^2(\mathbb{R})$, define its Fourier transform $\hat u$ by
$$\hat u(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}u(x)\,dx \ \text{ for } \xi \in \mathbb{R}, \qquad \widehat{\frac{du}{dx}}(\xi) = i\xi\,\hat u(\xi).$$
We can find $\widehat{u(x-1)}(\xi)$ in two ways.
• Let $u(\underbrace{x-1}_{=y}) = v(x)$, and determine $\hat v(\xi)$:
$$\widehat{u(x-1)}(\xi) = \hat v(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}v(x)\,dx = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-i(y+1)\xi}u(y)\,dy = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-iy\xi}e^{-i\xi}u(y)\,dy = e^{-i\xi}\hat u(\xi).$$
• We can also write the definition of $\hat u(\xi)$ and substitute $x-1$ later in the calculation:
$$\hat u(\xi) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-iy\xi}u(y)\,dy = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-i(x-1)\xi}u(x-1)\,dx = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{-ix\xi}e^{i\xi}u(x-1)\,dx = e^{i\xi}\,\widehat{u(x-1)}(\xi),$$
$$\Rightarrow\ \widehat{u(x-1)}(\xi) = e^{-i\xi}\hat u(\xi).$$
Substituting into (30.1), we obtain
$$i\xi\hat u(\xi) + c\hat u(\xi) + e^{-i\xi}\hat u(\xi) = \hat f(\xi), \qquad \hat u(\xi) = \frac{\hat f(\xi)}{i\xi + c + e^{-i\xi}},$$
$$u(x) = \Big(\frac{\hat f(\xi)}{i\xi + c + e^{-i\xi}}\Big)^{\vee} = \big(\hat f\,\hat B\big)^{\vee} = \frac{1}{\sqrt{2\pi}}\,f * B, \quad\text{where}\quad \hat B = \frac{1}{i\xi + c + e^{-i\xi}}, \qquad B = \Big(\frac{1}{i\xi + c + e^{-i\xi}}\Big)^{\vee} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\frac{e^{ix\xi}}{i\xi + c + e^{-i\xi}}\,d\xi.$$
For $|c| > 1$, the denominator never vanishes (its real part is $c + \cos\xi$, whose absolute value is at least $|c| - 1 > 0$), so $\hat u(\xi)$ exists for all $\xi \in \mathbb{R}$ and $u(x) = (\hat u(\xi))^{\vee}$; this is unique by the Fourier Inversion Theorem.
Note that in $\mathbb{R}^n$, the shift formula becomes
$$\widehat{u(x-1)}(\xi) = \hat v(\xi) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n} e^{-ix\cdot\xi}v(x)\,dx = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n} e^{-i(y+\mathbf{1})\cdot\xi}u(y)\,dy = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n} e^{-iy\cdot\xi}e^{-i\mathbf{1}\cdot\xi}u(y)\,dy = e^{-i\mathbf{1}\cdot\xi}\hat u(\xi) = e^{-i\sum_j\xi_j}\hat u(\xi).$$
Problem (F’96, #3). Find the fundamental solution for the equation
ut = uxx − xux. (30.2)
Hint: The Fourier transform converts this problem into a PDE which can be solved
using the method of characteristics.
Proof. u ∈ L2
(R). Define its Fourier transform u by
u(ξ) =
1
√
2π R
e−ixξ
u(x) dx for ξ ∈ R.
ux(ξ) = iξ u(ξ),
uxx(ξ) = (iξ)2
u(ξ) = −ξ2
u(ξ).
We find xux(ξ) in two steps:
➀ Multiplication by x:
−ixu(ξ) =
1
√
2π R
e−ixξ
− ixu(x) dx =
d
dξ
u(ξ).
⇒ xu(x)(ξ) = i
d
dξ
u(ξ).
➁ Using the previous result, we find:
xux(x)(ξ) =
1
√
2π R
e−ixξ
xux(x) dx =
1
√
2π
e−ixξ
xu
∞
−∞
= 0
−
1
√
2π R
(−iξ)e−ixξ
x + e−ixξ
u dx
=
1
√
2π
iξ
R
e−ixξ
x u dx −
1
√
2π R
e−ixξ
u dx
= iξ xu(x)(ξ) − u(ξ) = iξ i
d
dξ
u(ξ) − u(ξ) = −ξ
d
dξ
u(ξ) − u(ξ).
⇒ xux(x)(ξ) = −ξ
d
dξ
u(ξ) − u(ξ).
Plugging these into (30.2), we get:
∂
∂t
u(ξ, t) = −ξ2
u(ξ, t) − − ξ
d
dξ
u(ξ, t) − u(ξ, t) ,
ut = −ξ2
u + ξuξ + u,
ut − ξuξ = −(ξ2
− 1)u.
We now solve the above equation by characteristics.
We change the notation: u → u, t → y, ξ → x. We have
uy − xux = −(x2
− 1)u.
dx
dt
= −x ⇒ x = c1e−t
, (c1 = xet
)
dy
dt
= 1 ⇒ y = t + c2,
dz
dt
= −(x2
− 1)z = −(c2
1e−2t
− 1)z ⇒
dz
z
= −(c2
1e−2t
− 1)dt
⇒ log z =
1
2
c2
1e−2t
+ t + c3 =
x2
2
+ t + c3 =
x2
2
+ y − c2 + c3 ⇒ z = ce
x2
2
+y
.
Changing the notation back, we have
u(ξ, t) = ce
ξ2
2
+t
.
Thus, we have
u(ξ, t) = ce
ξ2
2
+t
.
We use Inverse Fourier Tranform to get u(x, t): 82
u(x, t) =
1
√
2π R
eixξ
u(ξ, t) dξ =
1
√
2π R
eixξ
ce
ξ2
2
+t
dξ
=
c
√
2π
et
R
eixξ
e
ξ2
2 dξ =
c
√
2π
et
R
eixξ+ξ2
2 dξ
=
c
√
2π
et
R
e
2ixξ+ξ2
2 dξ =
c
√
2π
et
R
e
(ξ+ix)2
2 dξ e
x2
2
=
c
√
2π
et
e
x2
2
R
e
y2
2 dy =
c
√
2π
et
e
x2
2
√
2π = c et
e
x2
2 .
u(x, t) = c et
e
x2
2 .
Check:
ut = c et
e
x2
2 ,
ux = c et
xe
x2
2 ,
uxx = c et
e
x2
2 + x2
e
x2
2 .
Thus,
ut = uxx − xux,
c et
e
x2
2 = c et
e
x2
2 + x2
e
x2
2 − x c et
xe
x2
2 .
82
We complete the square for powers of exponentials.
Problem (W’02, #4). a) Solve the initial value problem
∂u
∂t
+
n
k=1
ak(t)
∂u
∂xk
+ a0(t)u = 0, x ∈ Rn
,
u(0, x) = f(x)
where ak(t), k = 1, . . ., n, and a0(t) are continuous functions, and f is a continuous
function. You may assume f has compact support.
b) Solve the initial value problem
∂u
∂t
+
n
k=1
ak(t)
∂u
∂xk
+ a0(t)u = f(x, t), x ∈ Rn
,
u(0, x) = 0
where f is continuous in x and t.
Proof. a) Use the Fourier transform to solve this problem.
u(ξ, t) =
1
(2π)
n
2 Rn
e−ix·ξ
u(x, t) dx for ξ ∈ R.
∂u
∂xk
= iξku.
Thus, the equation becomes:
ut + i n
k=1 ak(t)ξku + a0(t)u = 0,
u(ξ, 0) = f(ξ),
or
ut + i a(t) · ξ u + a0(t)u = 0,
ut = − i a(t) · ξ + a0(t) u.
This is an ODE in u with solution:
u(ξ, t) = ce−
t
0 (ia(s)·ξ+a0(s)) ds
, u(ξ, 0) = c = f(ξ). Thus,
u(ξ, t) = f(ξ) e− t
0
(i a(s)·ξ+a0(s)) ds
.
Use the Inverse Fourier transform to get u(x, t):
u(x, t) = u(ξ, t)∨
= f(ξ) e−
t
0 (i a(s)·ξ+a0(s)) ds
∨
=
(f ∗ g)(x)
(2π)
n
2
,
where g(ξ) = e− t
0
(i a(s)·ξ+a0(s)) ds
.
g(x) =
1
(2π)
n
2 Rn
eix·ξ
g(ξ) dξ =
1
(2π)
n
2 Rn
eix·ξ
e− t
0
(ia(s)·ξ+a0(s)) ds
dξ.
u(x, t) =
(f ∗ g)(x)
(2π)
n
2
=
1
(2π)n
Rn Rn
ei(x−y)·ξ
e−
t
0 (ia(s)·ξ+a0(s)) ds
dξ f(y) dy.
b) Use Duhamel’s Principle and the result from (a).
u(x, t) =
t
0
U(x, t − s, s) ds, where U(x, t, s) solves
∂U
∂t
+
n
k=1
ak(t)
∂U
∂xk
+ a0(t)U = 0,
U(x, 0, s) = f(x, s).
u(x, t) =
t
0
U(x, t − s, s) ds =
1
(2π)n
t
0 Rn Rn
ei(x−y)·ξ
e− t−s
0 (ia(s)·ξ+a0(s)) ds
dξ f(y, s) dy ds.
Problem (S’93, #2). a) Define the Fourier transform 83
f(ξ) =
∞
−∞
eixξ
f(x) dx.
State the inversion theorem. If
f(ξ) =
⎧
⎪⎨
⎪⎩
π, |ξ| < a,
1
2 π, |ξ| = a,
0, |ξ| > a,
where a is a real constant, what f(x) does the inversion theorem give?
b) Show that
f(x − b) = eiξb
f(x),
where b is a real constant. Hence, using part (a) and Parseval’s theorem, show that
1
π
∞
−∞
sin a(x + z)
x + z
sina(x + ξ)
x + ξ
dx =
sina(z − ξ)
z − ξ
,
where z and ξ are real constants.
Proof. a) • The inverse Fourier transform for f ∈ L1
(Rn
):
f∨
(ξ) =
1
2π
∞
−∞
e−ixξ
f(x) dx for ξ ∈ R.
Fourier Inversion Theorem: Assume f ∈ L2
(R). Then
f(x) =
1
2π
∞
−∞
e−ixξ
f(ξ) dξ =
1
2π
∞
−∞
∞
−∞
ei(y−x)ξ
f(y) dy dξ = (f)∨
(x).
• Parseval’s theorem (Plancherel’s theorem) (for this definition of the Fourier
transform). Assume f ∈ L1(Rn) ∩ L2(Rn). Then f, f∨ ∈ L2(Rn) and
1
2π
||f||L2(Rn) = ||f∨
||L2(Rn) = ||f||L2(Rn), or
∞
−∞
|f(x)|2
dx =
1
2π
∞
−∞
|f(ξ)|2
dξ.
Also,
∞
−∞
f(x) g(x)dx =
1
2π
∞
−∞
f(ξ) g(ξ) dξ.
• We can write
f(ξ) =
π, |ξ| < a,
0, |ξ| > a.
83
Note that the Fourier transform is defined incorrectly here. There should be ‘-’ sign in e−ixξ
.
Need to be careful, since the consequences of this definition propagate throughout the solution.
f(x) = (f(ξ))∨
=
1
2π
∞
−∞
e−ixξ
f(ξ) dξ =
1
2π
−a
−∞
0 dξ +
1
2π
a
−a
e−ixξ
π dξ +
1
2π
∞
a
0 dξ
=
1
2
a
−a
e−ixξ
dξ = −
1
2ix
e−ixξ
ξ=a
ξ=−a
= −
1
2ix
e−iax
− eiax
=
sinax
x
.
b) • Let $f(\underbrace{x-b}_{=y}) = g(x)$, and determine $\hat g(\xi)$:
f(x − b)(ξ) = g(ξ) =
R
eixξ
g(x) dx =
R
ei(y+b)ξ
f(y) dy
=
R
eiyξ
eibξ
f(y) dy = eibξ
f(ξ).
• With f(x) = sin ax
x (from (a)), we have
1
π
∞
−∞
sina(x + z)
x + z
sin a(x + s)
x + s
dx =
1
π
∞
−∞
f(x + z)f(x + s) dx (x = x + s, dx = dx)
=
1
π
∞
−∞
f(x + z − s)f(x ) dx (Parseval’s)
=
1
π
1
2π
∞
−∞
f(x + z − s)f(x ) dξ part (b)
=
1
2π2
∞
−∞
f(ξ) e−i(z−s)ξ
f(ξ) dξ
=
1
2π2
a
−a
f(ξ)
2
e−i(z−s)ξ
dξ
=
1
2π2
a
−a
π2
e−i(z−s)ξ
dξ
=
1
2
a
−a
e−i(z−s)ξ
dξ
=
1
−2i(z − s)
e−i(z−s)ξ ξ=a
ξ=−a
=
ei(z−s)a − e−i(z−s)a
2i(z − s)
=
sin a(z − s)
z − s
.
Problem (F’03, #5). ❶ State Parseval’s relation for Fourier transforms.
❷ Find the Fourier transform ˆf(ξ) of
f(x) =
eiαx/2
√
πy, |x| ≤ y
0, |x| > y,
in which y and α are constants.
❸ Use this in Parseval’s relation to show that
∞
−∞
sin2
(α − ξ)y
(α − ξ)2
dξ = πy.
What does the transform ˆf(ξ) become in the limit y → ∞?
❹ Use Parseval’s relation to show that
sin(α − β)y
(α − β)
=
1
π
∞
−∞
sin(α − ξ)y
(α − ξ)
sin(β − ξ)y
(β − ξ)
dξ.
Proof. • f ∈ L2(R). Define its Fourier transform u by
f(ξ) =
1
√
2π R
e−ixξ
f(x) dx for ξ ∈ R.
❶ Parseval’s theorem (Plancherel’s theorem):
Assume f ∈ L1
(Rn
) ∩ L2
(Rn
). Then f, f∨
∈ L2
(Rn
) and
||f||L2(Rn) = ||f∨
||L2(Rn) = ||f||L2(Rn), or
∞
−∞
|f(x)|2
dx =
∞
−∞
|f(ξ)|2
dξ.
Also,
∞
−∞
f(x) g(x)dx =
∞
−∞
f(ξ) g(ξ) dξ.
❷ Find the Fourier transform of f:
f(ξ) =
1
√
2π R
e−ixξ
f(x) dx =
1
√
2π
y
−y
e−ixξ eiαx
2
√
πy
dx =
1
2π
√
2y
y
−y
ei(α−ξ)x
dx
=
1
2π
√
2y
1
i(α − ξ)
ei(α−ξ)x
x=y
x=−y
=
1
2iπ
√
2y(α − ξ)
ei(α−ξ)y
− e−i(α−ξ)y
=
siny(α − ξ)
π
√
2y(α − ξ)
.
❸ Parseval’s theorem gives:
∞
−∞
|f(ξ)|2
dξ =
∞
−∞
|f(x)|2
dx,
∞
−∞
sin2
y(α − ξ)
π22y(α − ξ)2
dξ =
y
−y
e2iαx
4πy
dx,
∞
−∞
sin2
y(α − ξ)
(α − ξ)2
dξ =
π
2
y
−y
dx,
∞
−∞
sin2
y(α − ξ)
(α − ξ)2
dξ = πy.
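The identity just obtained, $\int_{-\infty}^{\infty}\frac{\sin^2\big((\alpha-\xi)y\big)}{(\alpha-\xi)^2}\,d\xi = \pi y$, is easy to confirm numerically. The sketch below is my own addition; the values of $\alpha$ and $y$ are arbitrary, and the finite integration range introduces a small, controlled tail error.

```python
import numpy as np

# Check the identity derived via Parseval: int sin^2((alpha - xi) y)/(alpha - xi)^2 dxi = pi * y
alpha, y = 0.7, 2.0                                 # arbitrary sample values
u = np.linspace(-400.0, 400.0, 4_000_001) + 1e-7    # shift avoids the removable 0/0 at u = 0
val = np.trapz(np.sin(u * y) ** 2 / u ** 2, u)      # substitute u = alpha - xi
print(val, np.pi * y)                               # approximately equal (tails contribute O(1/400))
```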
❹ We had
f(ξ) =
siny(α − ξ)
π
√
2y(α − ξ)
.
• We make change of variables: α − ξ = β − ξ . Then, ξ = ξ + α − β. We have
f(ξ) = f(ξ + α − β) =
siny(β − ξ )
(β − ξ )
, or
f(ξ + α − β) =
siny(β − ξ)
(β − ξ)
.
• We will also use the following result.
Let $\hat f(\xi + a) = \hat g(\xi)$, and determine $(\hat g(\xi))^{\vee}$:
f(ξ + a)∨
= g(ξ)∨
=
1
√
2π R
eixξ
g(ξ) dξ =
1
√
2π R
eix(ξ −a)
f(ξ ) dξ
= e−ixa
f(x).
• Using these results, we have
1
π
∞
−∞
sin(α − ξ)y
(α − ξ)
sin(β − ξ)y
(β − ξ)
dξ =
1
π
(π 2y)2
∞
−∞
f(ξ) f(ξ + α − β) dξ
= 2πy
∞
−∞
f(x) e−(α−β)ix
f(x) dx
= 2πy
∞
−∞
f(x)2
e−(α−β)ix
dx
= 2πy
y
−y
e2iαx
4πy
e−(α−β)ix
dx
=
1
2
y
−y
e−(α−β)ix
dx
=
1
−2i(α − β)
e−(α−β)ix x=y
x=−y
=
1
−2i(α − β)
e−(α−β)iy
− e(α−β)iy
=
sin(α − β)y
α − β
.
Problem (S’95, #5). For the Laplace equation
f ≡
∂2
∂x2
+
∂2
∂y2
f = 0 (30.3)
in the upper half plane y ≥ 0, consider
• the Dirichlet problem f(x, 0) = g(x);
• the Neumann problem ∂
∂y f(x, 0) = h(x).
Assume that f, g and h are 2π periodic in x and that f is bounded at infinity.
Find the Fourier transform N of the Dirichlet-Neumann map. In other words,
find an operator N taking the Fourier transform of g to the Fourier transform of h; i.e.
Ngk = hk.
Proof. We solve the problem by two methods.
❶ Fourier Series.
Since f is 2π-periodic in x, we can write
f(x, y) =
∞
n=−∞
an(y) einx
.
Plugging this into (30.3), we get the ODE:
∞
n=−∞
− n2
an(y)einx
+ an(y)einx
= 0,
an(y) − n2
an(y) = 0.
Initial conditions give: (g and h are 2π-periodic in x)
f(x, 0) =
∞
n=−∞
an(0)einx
= g(x) =
∞
n=−∞
gneinx
⇒ an(0) = gn.
fy(x, 0) =
∞
n=−∞
an(0)einx
= h(x) =
∞
n=−∞
hneinx
⇒ an(0) = hn.
Thus, the problems are:
an(y) − n2
an(y) = 0,
an(0) = gn, (Dirichlet)
an(0) = hn. (Neumann)
⇒ an(y) = bneny
+ cne−ny
, n = 1, 2, . . .; a0(y) = b0y + c0.
an(y) = nbneny
− ncne−ny
, n = 1, 2, . . .; a0(y) = b0.
Since f is bounded at y = ±∞, we have:
bn = 0 for n > 0,
cn = 0 for n < 0,
b0 = 0, c0 arbitrary.
• n > 0:
an(y) = cne−ny
,
an(0) = cn = gn, (Dirichlet)
an(0) = −ncn = hn. (Neumann)
⇒ −ngn = hn.
• n < 0:
an(y) = bneny
,
an(0) = bn = gn, (Dirichlet)
an(0) = nbn = hn. (Neumann)
⇒ ngn = hn.
−|n|gn = hn, n = 0.
• n = 0 : a0(y) = c0,
a0(0) = c0 = g0, (Dirichlet)
a0(0) = 0 = h0. (Neumann)
Note that solution f(x, y) may be written as
f(x, y) =
∞
n=−∞
an(y) einx
= a0(y) +
−1
n=−∞
an(y) einx
+
∞
n=1
an(y) einx
= c0 +
−1
n=−∞
bneny
einx
+
∞
n=1
cne−ny
einx
=
g0 + −1
n=−∞ gneny
einx
+ ∞
n=1 gne−ny
einx
, (Dirichlet)
c0 + −1
n=−∞
hn
n eny
einx
+ ∞
n=1 −hn
n e−ny
einx
. (Neumann)
❷ Fourier Transform. The Fourier transform of f(x, y) in x is:
f(ξ, y) =
1
√
2π
∞
−∞
e−ixξ
f(x, y) dx,
f(x, y) =
1
√
2π
∞
−∞
eixξ
f(ξ, y) dξ.
(iξ)2
f(ξ, y) + fyy(ξ, y) = 0,
fyy − ξ2
f = 0. The solution to this ODE is:
f(ξ, y) = c1eξy
+ c2e−ξy
.
For ξ > 0, c1 = 0; for ξ < 0, c2 = 0.
• ξ > 0 : f(ξ, y) = c2e−ξy
, fy(ξ, y) = −ξc2e−ξy
,
c2 = f(ξ, 0) =
1
√
2π
∞
−∞
e−ixξ
f(x, 0) dx =
1
√
2π
∞
−∞
e−ixξ
g(x) dx = g(ξ), (Dirichlet)
−ξc2 = fy(ξ, 0) =
1
√
2π
∞
−∞
e−ixξ
fy(x, 0) dx =
1
√
2π
∞
−∞
e−ixξ
h(x) dx = h(ξ). (Neumann)
⇒ −ξg(ξ) = h(ξ).
• ξ < 0 : f(ξ, y) = c1eξy
, fy(ξ, y) = ξc1eξy
,
c1 = f(ξ, 0) =
1
√
2π
∞
−∞
e−ixξ
f(x, 0) dx =
1
√
2π
∞
−∞
e−ixξ
g(x) dx = g(ξ), (Dirichlet)
ξc1 = fy(ξ, 0) =
1
√
2π
∞
−∞
e−ixξ
fy(x, 0) dx =
1
√
2π
∞
−∞
e−ixξ
h(x) dx = h(ξ). (Neumann)
⇒ ξg(ξ) = h(ξ).
−|ξ|g(ξ) = h(ξ).
Problem (F’97, #3). Consider the Dirichlet problem in the half-space xn > 0,
n ≥ 2:
u + a
∂u
∂xn
+ k2
u = 0, xn > 0
u(x , 0) = f(x ), x = (x1, . . ., xn−1).
Here a and k are constants.
Use the Fourier transform to show that for any f(x ) ∈ L2(Rn−1) there exists a
solution u(x , xn) of the Dirichlet problem such that
Rn
|u(x , xn)|2
dx ≤ C
for all 0 < xn < +∞.
Proof. 84
Denote ξ = (ξ , ξn). Transform in the first n − 1 variables:
−|ξ |2
u(ξ , xn) +
∂2u
∂x2
n
(ξ , xn) + a
∂u
∂xn
(ξ , xn) + k2
u(ξ , xn) = 0.
Thus, the ODE and initial conditions of the transformed problem become:
uxnxn + auxn + (k2
− |ξ |2
)u = 0,
u(ξ , 0) = f(ξ ).
With the ansatz $u = ce^{sx_n}$, we obtain $s^2 + as + (k^2 - |\xi'|^2) = 0$, and
s1,2 =
−a ± a2 − 4(k2 − |ξ |2)
2
.
Choosing only the negative root, we obtain the solution: 85
u(ξ , xn) = c(ξ ) e
−a−
√
a2−4(k2−|ξ |2)
2
xn
. u(ξ , 0) = c = f(ξ ). Thus,
u(ξ , xn) = f(ξ ) e
−a−
√
a2−4(k2−|ξ |2)
2
xn
.
Parseval’s theorem gives:
||u||2
L2(Rn−1) = ||u||2
L2(Rn−1) =
Rn−1
|u(ξ , xn)|2
dξ
=
Rn−1
f(ξ ) e
−a−
√
a2−4(k2−|ξ |2)
2
xn 2
dξ ≤
Rn−1
f(ξ )
2
dξ
= ||f||2
L2(Rn−1) = ||f||2
L2(Rn−1) ≤ C,
since f(x ) ∈ L2
(Rn−1
). Thus, u(x , xn) ∈ L2
(Rn−1
).
84
Note that the last element of x = (x , xn) = (x1, . . . , xn−1, xn), i.e. xn, plays a role of time t.
As such, the PDE may be written as
u + utt + aut + k2
u = 0.
85
Note that a > 0 should have been provided by the statement of the problem.
Problem (F’89, #7). Find the following fundamental solutions
a)
∂G(x, y, t)
∂t
= a(t)
∂2G(x, y, t)
∂x2
+ b(t)
∂G(x, y, t)
∂x
+ c(t)G(x, y, t) for t > 0
G(x, y, 0) = δ(x − y),
where a(t), b(t), c(t) are continuous functions on [0, +∞], a(t) > 0 for t > 0.
b)
∂G
∂t
(x1, . . ., xn, y1, . . ., yn, t) =
n
k=1
ak(t)
∂G
∂xk
for t > 0,
G(x1, . . ., xn, y1, . . ., yn, 0) = δ(x1 − y1)δ(x2 − y2) . . .δ(xn − yn).
Proof. a) We use the Fourier transform to solve this problem.
Transform the equation in the first variable only. That is,
G(ξ, y, t) =
1
√
2π R
e−ixξ
G(x, y, t) dx.
The equation is transformed to an ODE, that can be solved:
Gt(ξ, y, t) = −a(t) ξ2
G(ξ, y, t) + i b(t) ξ G(ξ, y, t) + c(t) G(ξ, y, t),
Gt(ξ, y, t) = − a(t) ξ2
+ i b(t) ξ + c(t) G(ξ, y, t),
G(ξ, y, t) = c e
t
0 [−a(s)ξ2+i b(s)ξ+c(s)] ds
.
We can also transform the initial condition:
G(ξ, y, 0) = δ(x − y)(ξ) = e−iyξ
δ(ξ) =
1
√
2π
e−iyξ
.
Thus, the solution of the transformed problem is:
G(ξ, y, t) =
1
√
2π
e−iyξ
e
t
0 [−a(s)ξ2+i b(s)ξ+c(s)] ds
.
The inverse Fourier transform gives the solution to the original problem:
G(x, y, t) = G(ξ, y, t)
∨
=
1
√
2π R
eixξ
G(ξ, y, t) dξ
=
1
√
2π R
eixξ 1
√
2π
e−iyξ
e
t
0
[−a(s)ξ2+i b(s)ξ+c(s)] ds
dξ
=
1
2π R
ei(x−y)ξ
e
t
0 [−a(s)ξ2+i b(s)ξ+c(s)] ds
dξ.
b) Denote x = (x1, . . ., xn), y = (y1, . . ., yn). Transform in x:
G(ξ, y, t) =
1
(2π)
n
2 Rn
e−ix·ξ
G(x, y, t) dx.
The equation is transformed to an ODE, that can be solved:
Gt(ξ, y, t) =
n
k=1
ak(t) iξk G(ξ, y, t),
G(ξ, y, t) = c ei
t
0 [ n
k=1 ak(s) ξk] ds
.
We can also transform the initial condition:
G(ξ, y, 0) = δ(x1 − y1)δ(x2 − y2) . . .δ(xn − yn) (ξ) = e−iy·ξ
δ(ξ) =
1
(2π)
n
2
e−iy·ξ
.
Thus, the solution of the transformed problem is:
G(ξ, y, t) =
1
(2π)
n
2
e−iy·ξ
ei t
0 [ n
k=1 ak(s) ξk] ds
.
The inverse Fourier transform gives the solution to the original problem:
G(x, y, t) = G(ξ, y, t)
∨
=
1
(2π)
n
2 Rn
eix·ξ
G(ξ, y, t) dξ
=
1
(2π)
n
2 Rn
eix·ξ 1
(2π)
n
2
e−iy·ξ
ei t
0 [ n
k=1 ak(s) ξk] ds
dξ
=
1
(2π)n
Rn
ei(x−y)·ξ
ei t
0 [ n
k=1 ak(s) ξk] ds
dξ.
Problem (W’02, #7). Consider the equation
∂2
∂x2
1
+ · · · +
∂2
∂x2
n
u = f in Rn
, (30.4)
where f is an integrable function (i.e. f ∈ L1(Rn)), satisfying f(x) = 0 for |x| ≥ R.
Solve (30.4) by Fourier transform, and prove the following results.
a) There is a solution of (30.4) belonging to L2(Rn) if n > 4.
b) If Rn f(x) dx = 0, there is a solution of (30.4) belonging to L2
(Rn
) if n > 2.
Proof.
u = f,
−|ξ|2
u(ξ) = f(ξ),
u(ξ) = −
1
|ξ|2
f(ξ), ξ ∈ Rn
,
u(x) = −
f(ξ)
|ξ|2
∨
.
a) Then
||u||L2(Rn) =
Rn
|f(ξ)|2
|ξ|4
dξ
1
2
≤
|ξ|<1
|f(ξ)|2
|ξ|4
dξ
A
+
|ξ|≥1
|f(ξ)|2
|ξ|4
dξ
B
1
2
.
Notice, ||f||2 = ||f||2 ≥ B, so B < ∞.
Use polar coordinates on A.
A =
|ξ|<1
|f(ξ)|2
|ξ|4
dξ =
1
0 Sn−1
|f|2
r4
rn−1
dSn−1 dr =
1
0 Sn−1
|f|2
rn−5
dSn−1 dr.
If n > 4,
A ≤
Sn−1
|f|2
dSn−1 = ||f||2
2 < ∞.
||u||L2(Rn) = ||u||L2(Rn) = (A + B)
1
2 < ∞.
b) We have
u(x, t) = −
f(ξ)
|ξ|2
∨
= −
1
(2π)
n
2
∞
−∞
eix·ξ f(ξ)
|ξ|2
dξ
= −
1
(2π)
n
2
∞
−∞
eix·ξ
|ξ|2
1
(2π)
n
2
∞
−∞
e−iy·ξ
f(y) dy dξ
= −
1
(2π)n
∞
−∞
f(y)
∞
−∞
ei(x−y)·ξ
|ξ|2
dξ dy
= −
1
(2π)n
∞
−∞
f(y)
1
0 Sn−1
ei(x−y)r
r2
rn−1
dSn−1 dr dy
= −
1
(2π)n
∞
−∞
f(y)
1
0 Sn−1
ei(x−y)r
rn−3
dSn−1 dr
≤ M < ∞, if n>2.
dy.
|u(x, t)| =
1
(2π)n
∞
−∞
M f(y) dy < ∞.
Problem (F’02, #7). For the right choice of the constant c, the function
F(x, y) = c(x + iy)−1 is a fundamental solution for the equation
∂u
∂x
+ i
∂u
∂y
= f in R2
.
Find the right choice of c, and use your answer to compute the Fourier transform
(in distribution sense) of (x + iy)−1
.
Proof. 86
=
∂
∂x
+ i
∂
∂y
∂
∂x
− i
∂
∂y
.
F1(x, y) = 1
2π log |z| is the fundamental solution of the Laplacian. z = x + iy.
F1(x, y) = δ,
∂
∂x
+ i
∂
∂y
∂
∂x
− i
∂
∂y
F(x, y) = δ.
hx + ihy = e−i(xξ1+yξ2)
.
Suppose h = h(xξ1 + yξ2) or h = ce−i(xξ1+yξ2).
⇒ c − iξ1 e−i(xξ1+yξ2)
− i2
ξ2 e−i(xξ1+yξ2)
= −ic(ξ1 − iξ2) e−i(xξ1+yξ2)
≡ e−i(xξ1+yξ2)
,
⇒ −ic(ξ1 − iξ2) = 1,
⇒ c = −
1
i(ξ1 − iξ2)
,
⇒ h(x, y) = −
1
i(ξ1 − iξ2)
e−i(xξ1+yξ2)
.
Integrate by parts:
1
x + iy
(ξ) =
R2
e−i(xξ1+yξ2) 1
i(ξ1 − iξ2)
∂
∂x
+ i
∂
∂y
1
(x + iy) − 0
dxdy
=
1
i(ξ1 − iξ2)
=
1
i(ξ2 + iξ1)
.
86 Alan solved this problem in class.
31 Laplace Transform

If $u \in L^1(\mathbb{R}^+)$, we define its Laplace transform to be
$$\mathcal{L}[u(t)] = u^{\#}(s) = \int_0^\infty e^{-st}u(t)\,dt \qquad (s > 0).$$
In practice, for a PDE involving time, it may be useful to perform a Laplace transform in $t$, holding the space variables $x$ fixed. The inversion formula for the Laplace transform is:
$$u(t) = \mathcal{L}^{-1}[u^{\#}(s)] = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{st}u^{\#}(s)\,ds.$$
Example: $f(t) = 1$.
$$\mathcal{L}[1] = \int_0^\infty e^{-st}\cdot 1\,dt = -\frac{1}{s}e^{-st}\Big|_{t=0}^{t=\infty} = \frac{1}{s} \quad\text{for } s > 0.$$
Example: $f(t) = e^{at}$.
$$\mathcal{L}[e^{at}] = \int_0^\infty e^{-st}e^{at}\,dt = \int_0^\infty e^{(a-s)t}\,dt = \frac{1}{a-s}e^{(a-s)t}\Big|_{t=0}^{t=\infty} = \frac{1}{s-a} \quad\text{for } s > a.$$
Convolution: We want to find an inverse Laplace transform of $\frac{1}{s}\cdot\frac{1}{s^2+1}$.
$$\mathcal{L}^{-1}\Big[\underbrace{\frac{1}{s}}_{\mathcal{L}[f]}\cdot\underbrace{\frac{1}{s^2+1}}_{\mathcal{L}[g]}\Big] = f * g = \int_0^t 1\cdot\sin t'\,dt' = 1 - \cos t.$$
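A quick symbolic check of this convolution computation (my own addition; sympy is assumed to be available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Inverse transform of 1/(s*(s^2+1)) -- should reproduce the convolution result 1 - cos(t)
F = 1 / (s * (s**2 + 1))
print(sp.inverse_laplace_transform(F, s, t))       # expect 1 - cos(t) (possibly times a Heaviside factor)

# The same answer via the convolution of f(t) = 1 and g(t) = sin(t)
tau = sp.symbols('tau', positive=True)
print(sp.integrate(sp.sin(tau), (tau, 0, t)))      # 1 - cos(t)
```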
Partial Derivatives: $u = u(x,t)$.
$$\mathcal{L}[u_t] = \int_0^\infty e^{-st}u_t\,dt = e^{-st}u(x,t)\Big|_{t=0}^{t=\infty} + s\int_0^\infty e^{-st}u\,dt = s\mathcal{L}[u] - u(x,0),$$
$$\mathcal{L}[u_{tt}] = \int_0^\infty e^{-st}u_{tt}\,dt = e^{-st}u_t\Big|_{t=0}^{t=\infty} + s\int_0^\infty e^{-st}u_t\,dt = -u_t(x,0) + s\mathcal{L}[u_t] = s^2\mathcal{L}[u] - su(x,0) - u_t(x,0),$$
$$\mathcal{L}[u_x] = \int_0^\infty e^{-st}u_x\,dt = \frac{\partial}{\partial x}\mathcal{L}[u], \qquad \mathcal{L}[u_{xx}] = \int_0^\infty e^{-st}u_{xx}\,dt = \frac{\partial^2}{\partial x^2}\mathcal{L}[u].$$
Heat Equation: Consider
$$u_t - \Delta u = 0 \ \text{ in } U\times(0,\infty), \qquad u = f \ \text{ on } U\times\{t=0\},$$
and perform a Laplace transform with respect to time:
$$\mathcal{L}[u_t] = \int_0^\infty e^{-st}u_t\,dt = s\mathcal{L}[u] - u(x,0) = s\mathcal{L}[u] - f(x), \qquad \mathcal{L}[\Delta u] = \int_0^\infty e^{-st}\Delta u\,dt = \Delta\mathcal{L}[u].$$
Thus, the transformed problem is $s\mathcal{L}[u] - f(x) = \Delta\mathcal{L}[u]$. Writing $v(x) = \mathcal{L}[u]$, we have
$$-\Delta v + sv = f \ \text{ in } U.$$
Thus, the solution of this equation with right-hand side $f$ is the Laplace transform of the solution of the heat equation with initial data $f$.
Table of Laplace Transforms: $\mathcal{L}[f] = f^{\#}(s)$
$$\mathcal{L}[\sin at] = \frac{a}{s^2+a^2},\ s>0 \qquad \mathcal{L}[\cos at] = \frac{s}{s^2+a^2},\ s>0$$
$$\mathcal{L}[\sinh at] = \frac{a}{s^2-a^2},\ s>|a| \qquad \mathcal{L}[\cosh at] = \frac{s}{s^2-a^2},\ s>|a|$$
$$\mathcal{L}[e^{at}\sin bt] = \frac{b}{(s-a)^2+b^2},\ s>a \qquad \mathcal{L}[e^{at}\cos bt] = \frac{s-a}{(s-a)^2+b^2},\ s>a$$
$$\mathcal{L}[t^n] = \frac{n!}{s^{n+1}},\ s>0 \qquad \mathcal{L}[t^ne^{at}] = \frac{n!}{(s-a)^{n+1}},\ s>a$$
$$\mathcal{L}[H(t-a)] = \frac{e^{-as}}{s},\ s>0 \qquad \mathcal{L}[H(t-a)f(t-a)] = e^{-as}\mathcal{L}[f]$$
$$\mathcal{L}[af(t)+bg(t)] = a\mathcal{L}[f]+b\mathcal{L}[g], \qquad \mathcal{L}[f(t)*g(t)] = \mathcal{L}[f]\,\mathcal{L}[g], \qquad \mathcal{L}\Big[\int_0^t g(t-t')f(t')\,dt'\Big] = \mathcal{L}[f]\,\mathcal{L}[g]$$
$$\mathcal{L}\Big[\frac{df}{dt}\Big] = s\mathcal{L}[f]-f(0), \qquad \mathcal{L}\Big[\frac{d^2f}{dt^2}\Big] = s^2\mathcal{L}[f]-sf(0)-f'(0)\ \ \Big(f'=\frac{df}{dt}\Big), \qquad \mathcal{L}\Big[\frac{d^nf}{dt^n}\Big] = s^n\mathcal{L}[f]-s^{n-1}f(0)-\ldots-f^{(n-1)}(0)$$
$$\mathcal{L}[f(at)] = \frac{1}{a}f^{\#}\Big(\frac{s}{a}\Big), \qquad \mathcal{L}[e^{bt}f(t)] = f^{\#}(s-b), \qquad \mathcal{L}[tf(t)] = -\frac{d}{ds}\mathcal{L}[f]$$
$$\mathcal{L}\Big[\frac{f(t)}{t}\Big] = \int_s^\infty f^{\#}(s')\,ds', \qquad \mathcal{L}\Big[\int_0^t f(t')\,dt'\Big] = \frac{1}{s}\mathcal{L}[f]$$
$$\mathcal{L}[J_0(at)] = (s^2+a^2)^{-\frac{1}{2}}, \qquad \mathcal{L}[\delta(t-a)] = e^{-sa}.$$
Example: $f(t) = \sin t$. After integrating by parts twice, we obtain:
$$\mathcal{L}[\sin t] = \int_0^\infty e^{-st}\sin t\,dt = 1 - s^2\int_0^\infty e^{-st}\sin t\,dt \ \Rightarrow\ \int_0^\infty e^{-st}\sin t\,dt = \frac{1}{1+s^2}.$$
Example: $f(t) = t^n$.
$$\mathcal{L}[t^n] = \int_0^\infty e^{-st}t^n\,dt = -\frac{t^ne^{-st}}{s}\Big|_0^\infty + \frac{n}{s}\int_0^\infty e^{-st}t^{n-1}\,dt = \frac{n}{s}\mathcal{L}[t^{n-1}] = \frac{n}{s}\cdot\frac{n-1}{s}\mathcal{L}[t^{n-2}] = \ldots = \frac{n!}{s^n}\mathcal{L}[1] = \frac{n!}{s^{n+1}}.$$
Problem (F’00, #6). Consider the initial-boundary value problem
$$\begin{cases} u_t - u_{xx} + au = 0, & t>0,\ x>0,\\ u(x,0) = 0, & x>0,\\ u(0,t) = g(t), & t>0, \end{cases}$$
where $g(t)$ is a continuous function with compact support, and $a$ is constant. Find the explicit solution of this problem.
Proof. We solve this problem using the Laplace transform
$$\mathcal{L}[u(x,t)] = u^{\#}(x,s) = \int_0^\infty e^{-st}u(x,t)\,dt \qquad (s>0).$$
$$\mathcal{L}[u_t] = \int_0^\infty e^{-st}u_t\,dt = e^{-st}u(x,t)\Big|_{t=0}^{t=\infty} + s\int_0^\infty e^{-st}u\,dt = su^{\#}(x,s) - u(x,0) = su^{\#}(x,s) \quad(\text{since } u(x,0)=0),$$
$$\mathcal{L}[u_{xx}] = \int_0^\infty e^{-st}u_{xx}\,dt = \frac{\partial^2}{\partial x^2}u^{\#}(x,s), \qquad \mathcal{L}[u(0,t)] = u^{\#}(0,s) = \int_0^\infty e^{-st}g(t)\,dt = g^{\#}(s).$$
Plugging these into the equation, we obtain the ODE in $u^{\#}$:
$$su^{\#}(x,s) - \frac{\partial^2}{\partial x^2}u^{\#}(x,s) + au^{\#}(x,s) = 0, \qquad (u^{\#})_{xx} - (s+a)u^{\#} = 0, \qquad u^{\#}(0,s) = g^{\#}(s).$$
This problem has the solution
$$u^{\#}(x,s) = c_1e^{\sqrt{s+a}\,x} + c_2e^{-\sqrt{s+a}\,x}.$$
Since we want $u$ to be bounded as $x\to\infty$, we have $c_1 = 0$, so $u^{\#}(x,s) = c_2e^{-\sqrt{s+a}\,x}$; $u^{\#}(0,s) = c_2 = g^{\#}(s)$, thus
$$u^{\#}(x,s) = g^{\#}(s)\,e^{-\sqrt{s+a}\,x}.$$
To obtain $u(x,t)$, we take the inverse Laplace transform of $u^{\#}(x,s)$:
$$u(x,t) = \mathcal{L}^{-1}[u^{\#}(x,s)] = \mathcal{L}^{-1}\Big[\underbrace{g^{\#}(s)}_{\mathcal{L}[g]}\underbrace{e^{-\sqrt{s+a}\,x}}_{\mathcal{L}[f]}\Big] = g*f = g*\mathcal{L}^{-1}\big[e^{-\sqrt{s+a}\,x}\big] = g*\Big(\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{st}e^{-\sqrt{s+a}\,x}\,ds\Big),$$
$$u(x,t) = \int_0^t g(t-t')\,\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{st'}e^{-\sqrt{s+a}\,x}\,ds\,dt'.$$
Problem (F’04, #8). The function y(x, t) satisfies the partial differential equation
x
∂y
∂x
+
∂2y
∂x∂t
+ 2y = 0,
and the boundary conditions
y(x, 0) = 1, y(0, t) = e−at
,
where a ≥ 0. Find the Laplace transform, y(x, s), of the solution, and hence derive
an expression for y(x, t) in the domain x ≥ 0, t ≥ 0.
Proof. We change the notation: y → u. We have
xux + uxt + 2u = 0,
u(x, 0) = 1, u(0, t) = e−at
.
The Laplace transform is defined as:
L[u(x, t)] = u#
(x, s) =
∞
0
e−st
u(x, t) dt (s > 0).
L[xux] =
∞
0
e−st
xux dt = x
∞
0
e−st
ux dt = x(u#
)x,
L[uxt] =
∞
0
e−st
uxt dt = e−st
ux(x, t)
t=∞
t=0
+ s
∞
0
e−st
ux dt
= s(u#
)x − ux(x, 0) = s(u#
)x, (since u(x, 0) = 0)
L[u(0, t)] = u#
(0, s) =
∞
0
e−st
e−at
dt =
∞
0
e−(s+a)t
dt = −
1
s + a
e−(s+a)t
t=∞
t=0
=
1
s + a
.
Plugging these into the equation, we obtain the ODE in u#:
(x + s)(u#
)x + 2u#
= 0,
u#
(0, s) = 1
s+a ,
which can be solved:
(u#
)x
u#
= −
2
x + s
⇒ log u#
= −2 log(x + s) + c1 ⇒ u#
= c2elog(x+s)−2
=
c2
(x + s)2
.
From the initial conditions:
u#
(0, s) =
c2
s2
=
1
s + a
⇒ c2 =
s2
s + a
.
u#
(x, s) =
s2
(s + a)(x + s)2
.
To obtain u(x, t), we take the inverse Laplace transform of u#(x, s):
u(x, t) = L−1
[u#
(x, s)] = L−1 s2
(s + a)(x + s)2
=
1
2πi
c+i∞
c−i∞
est s2
(s + a)(x + s)2
ds.
u(x, t) =
1
2πi
c+i∞
c−i∞
est s2
(s + a)(x + s)2
ds.
Problem (F’90, #1). Using the Laplace transform, or any other convenient method, solve the Volterra integral equation
$$u(x) = \sin x + \int_0^x\sin(x-y)\,u(y)\,dy.$$
Proof. Rewrite the equation:
$$u(t) = \sin t + \int_0^t\sin(t-t')\,u(t')\,dt' = \sin t + (\sin t)*u.$$
Taking the Laplace transform of each of the elements:
$$\mathcal{L}[u(t)] = u^{\#}(s) = \int_0^\infty e^{-st}u(t)\,dt, \qquad \mathcal{L}[\sin t] = \frac{1}{1+s^2}, \qquad \mathcal{L}[(\sin t)*u] = \mathcal{L}[\sin t]\,\mathcal{L}[u] = \frac{u^{\#}}{1+s^2}.$$
Plugging these into the equation:
$$u^{\#} = \frac{1}{1+s^2} + \frac{u^{\#}}{1+s^2} = \frac{u^{\#}+1}{1+s^2} \ \Rightarrow\ u^{\#}(s) = \frac{1}{s^2}.$$
To obtain $u(t)$, we take the inverse Laplace transform of $u^{\#}(s)$:
$$u(t) = \mathcal{L}^{-1}[u^{\#}(s)] = \mathcal{L}^{-1}\Big[\frac{1}{s^2}\Big] = t.$$
Thus $u(t) = t$.
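A direct numerical check of this answer (my own addition): substituting $u(t) = t$ back into the integral equation.

```python
import numpy as np

# Check that u(t) = t satisfies u(t) = sin(t) + int_0^t sin(t - y) u(y) dy
u = lambda t: t
for t in (0.5, 1.0, 2.0, 4.0):
    y = np.linspace(0.0, t, 4001)
    rhs = np.sin(t) + np.trapz(np.sin(t - y) * u(y), y)
    print(t, u(t), rhs)        # the last two columns should agree
```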
Problem (F’91, #5). In what follows, the Laplace transform of x(t) is denoted
either by x(s) or by Lx(t). ❶ Show that, for integral n ≥ 0,
L(tn
) =
n!
sn+1
.
❷ Hence show that
LJ0(2
√
ut) =
1
s
e−u/s
,
where
J0(z) =
∞
n=0
(−1)n
(1
2z)2n
n!n!
is a Bessel function. ❸ Hence show that
L
∞
0
J0(2
√
ut)x(u) du =
1
s
x
1
s
. (31.1)
❹ Assuming that
LJ0(at) =
1
√
a2 + s2
,
prove with the help of (31.1) that if t ≥ 0
∞
0
J0(au)J0(2
√
ut) du =
1
a
J0
t
a
.
Hint: For the last part, use the uniqueness of the Laplace transform.
Proof.
❶ L[tn
] =
∞
0
e−st
g
tn
f
dt = −
tn
e−st
s
∞
0
= 0
+
n
s
∞
0
e−st
tn−1
dt =
n
s
L[tn−1
]
=
n
s
n − 1
s
L[tn−2
] = . . . =
n!
sn
L[1] =
n!
sn+1
.
❷ LJ0(2
√
ut) = L
∞
n=0
(−1)n
un
tn
n!n!
=
∞
n=0
(−1)n
un
n!n!
L[tn
] =
∞
n=0
(−1)n
un
n!sn+1
=
1
s
∞
n=0
(−1)n
n!
u
s
n
=
1
s
e−u
s .
❸ L
∞
0
J0(2
√
ut) x(u) du =
∞
0
L[J0(2
√
ut)] x(u) du =
1
s
∞
0
e−u
s x(u) du
=
1
s
x# 1
s
,
where
x#
(s) =
∞
0
e−us
x(u) du.
32 Linear Functional Analysis
32.1 Norms
|| · || is a norm on a vector space X if
i) ||x|| = 0 iff x = 0.
ii) ||αx|| = |α| · ||x|| for all scalars α.
iii) ||x + y|| ≤ ||x|| + ||y|| (the triangle inequality).
The norm induces the distance function d(x, y) = ||x − y|| so that X is a metric space,
called a normed vector space.
32.2 Banach and Hilbert Spaces
A Banach space is a normed vector space that is complete in that norm’s metric. I.e.
a complete normed linear space is a Banach space.
A Hilbert space is an inner product space for which the corresponding normed space
is complete. I.e. a complete inner product space is a Hilbert space.
Examples: 1) Let K be a compact set of Rn and let C(K) denote the space of continuous
functions on K. Since every u ∈ C(K) achieves maximum and minimum values on K,
we may define
||u||∞ = max
x∈K
|u(x)|.
|| · ||∞ is indeed a norm on C(K) and since a uniform limit of continuous functions is
continuous, C(K) is a Banach space. However, this norm cannot be derived from an
inner product, so C(K) is not a Hilbert space.
2) C(K) is not a Banach space with || · ||2 norm. (Bell-shaped functions on [0, 1] may
converge to a discontinuous δ-function). In general, the space of continuous functions
on [0, 1], with the norm || · ||p, 1 ≤ p < ∞, is not a Banach space, since it is not
complete.
3) Rn and Cn are real and complex Banach spaces (with the Euclidean norm).
4) Lp
are Banach spaces (with || · ||p norm).
5) The space of bounded real-valued functions on a set S, with the sup norm || · ||S are
Banach spaces.
6) The space of bounded continuous real-valued functions on a metric space X is a
Banach space.
32.3 Cauchy-Schwarz Inequality
$$|(u,v)| \le \|u\|\,\|v\| \ \text{ in any inner product norm, for example } \int|uv|\,dx \le \Big(\int u^2\,dx\Big)^{\frac{1}{2}}\Big(\int v^2\,dx\Big)^{\frac{1}{2}},$$
$$|a(u,v)| \le a(u,u)^{\frac{1}{2}}\,a(v,v)^{\frac{1}{2}}, \qquad \int|v|\,dx = \int|v|\cdot 1\,dx \le \Big(\int|v|^2\,dx\Big)^{\frac{1}{2}}\Big(\int 1^2\,dx\Big)^{\frac{1}{2}}.$$
32.4 Hölder Inequality
$$\int_\Omega|uv|\,dx \le \|u\|_p\|v\|_q,$$
which holds for $u \in L^p(\Omega)$ and $v \in L^q(\Omega)$, where $\frac{1}{p} + \frac{1}{q} = 1$. In particular, this shows $uv \in L^1(\Omega)$.
32.5 Minkowski Inequality
||u + v||p ≤ ||u||p + ||v||p,
which holds for u, v ∈ Lp(Ω). In particular, it shows u + v ∈ Lp(Ω).
Using the Minkowski Inequality, we find that || · ||p is a norm on Lp
(Ω).
The Riesz-Fischer theorem asserts that Lp(Ω) is complete in this norm, so Lp(Ω) is a
Banach space under the norm || · ||p.
If p = 2, then L2(Ω) is a Hilbert space with inner product
(u, v) =
Ω
uv dx.
Example: Ω ∈ Rn
bounded domain, C1
(¯Ω) denotes the functions that, along with
their first-order derivatives, extend continuously to the compact set ¯Ω. Then C1
(¯Ω) is
a Banach space under the norm
||u||1,∞ = max
x∈¯Ω
(|∇u(x)| + |u(x)|).
Note that C1(Ω) is not a Banach space since ||u||1,∞ need not be finite for u ∈ C1(Ω).
32.6 Sobolev Spaces
A Sobolev space is a space of functions whose distributional derivatives (up to some
fixed order) exist in an Lp
-space.
Let Ω be a domain in Rn, and let us introduce
< u, v >1=
Ω
(∇u · ∇v + uv) dx, (32.1)
||u||1,2 =
√
< u, u >1 =
Ω
(|∇u|2
+ |u|2
) dx
1
2
(32.2)
when these expressions are defined and finite. For example, (32.1) and (32.2) are defined
for functions in C1
0 (Ω). However, C1
0 (Ω) is not complete under the norm (32.2), and so
does not form a Hilbert space.
Divergence Theorem
∂Ω
A · n dS =
Ω
div A dx
Trace Theorem
u L2(∂Ω) ≤ C u H1(Ω) Ω smooth or square
Poincare Inequality
u p ≤ C ∇u p 1 ≤ p ≤ ∞
Ω
|u(x)|2
dx ≤ C
Ω
|∇u(x)|2
dx u ∈ C1
0 (Ω), H1,2
0 (Ω) i.e. p = 2
u − uΩ p ≤ ∇u p u ∈ H1,p
0 (Ω)
uΩ =
1
|Ω| Ω
u(x) dx (Average value of u over Ω), |Ω| is the volume of Ω
Notes
∂u
∂n
= ∇u · n = n1
∂u
∂x1
+ n2
∂u
∂x2
|∇u|2
= u2
x1
+ u2
x2
Ω
∇|u| dx =
Ω
|u|
u
∇u dx
√
ab ≤
a + b
2
⇒ ab ≤
a2 + b2
2
⇒ ||∇u||||u|| ≤
||∇u||2 + ||u||2
2
u∇u = ∇(
u2
2
)
Ω
(uxy)2
dx =
Ω
uxxuyy dx ∀u ∈ H2
0 (Ω) Ω square
Problem (F’04, #6). Let q ∈ C^1_0(R^3). Prove that the vector field
u(x) = (1/(4π)) ∫_{R^3} q(y) (x - y) / |x - y|^3 dy
enjoys the following properties:⁸⁷
a) u(x) is conservative;
b) div u(x) = q(x) for all x ∈ R^3;
c) |u(x)| = O(|x|^{-2}) for large x.
Furthermore, prove that the properties a), b), and c) above determine the vector field u(x) uniquely.
Proof. a) To show that u(x) is conservative, we need to show that curl u = 0.
The curl of V = (V_1, V_2, V_3) is another vector field, defined by
curl V = ∇ × V = det
| e_1  e_2  e_3 |
| ∂_1  ∂_2  ∂_3 |
| V_1  V_2  V_3 |
= ( ∂V_3/∂x_2 - ∂V_2/∂x_3 , ∂V_1/∂x_3 - ∂V_3/∂x_1 , ∂V_2/∂x_1 - ∂V_1/∂x_2 ).
Consider
V(x) = x/|x|^3 = (x_1, x_2, x_3) / (x_1^2 + x_2^2 + x_3^2)^{3/2}.
Then,
u(x) = (1/(4π)) ∫_{R^3} q(y) V(x - y) dy,
curl u(x) = (1/(4π)) ∫_{R^3} q(y) curl_x V(x - y) dy.
Writing r = (x_1^2 + x_2^2 + x_3^2)^{1/2}, the chain rule gives ∂(r^{-3})/∂x_j = -3 x_j r^{-5}, so each component of curl V cancels in pairs:
curl V(x) = ( -3x_2 x_3 r^{-5} - (-3x_3 x_2 r^{-5}) , -3x_3 x_1 r^{-5} - (-3x_1 x_3 r^{-5}) , -3x_1 x_2 r^{-5} - (-3x_2 x_1 r^{-5}) ) = (0, 0, 0).
⁸⁷ McOwen, p. 138-140.
Thus, curl u = (1/(4π)) ∫_{R^3} q(y) · 0 dy = 0, and u(x) is conservative.
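The componentwise cancellation above can also be confirmed symbolically (an illustrative sketch, not part of the handbook):

```python
import sympy as sp

# Symbolic check that V(x) = x / |x|^3 is curl-free away from the origin.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
V = sp.Matrix([x1, x2, x3]) / r**3

curl = sp.Matrix([
    sp.diff(V[2], x2) - sp.diff(V[1], x3),
    sp.diff(V[0], x3) - sp.diff(V[2], x1),
    sp.diff(V[1], x1) - sp.diff(V[0], x2),
])
print(sp.simplify(curl))   # Matrix([[0], [0], [0]])
```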
b) Recall that the fundamental solution of the Laplacian in R^3 is K(x) = -1/(4π|x|), so that ΔK = δ. Define the Newtonian potential
F(x) = -(1/(4π)) ∫_{R^3} q(y) / |x - y| dy.
Since ∇_x (1/|x - y|) = -(x - y)/|x - y|^3, differentiating under the integral sign gives
∇F(x) = (1/(4π)) ∫_{R^3} q(y) (x - y)/|x - y|^3 dy = u(x).
Therefore
div u(x) = div ∇F(x) = ΔF(x) = q(x) for all x ∈ R^3,
since the Newtonian potential F = K * q of a function q ∈ C^1_0(R^3) satisfies ΔF = q.
c) Recall from part b) that u = ∇F with
F(x) = -(1/(4π)) ∫_{R^3} q(y) / |x - y| dy.
Since q has compact support, say supp q ⊂ B_R(0), for |x| ≥ 2R and y ∈ supp q we have |x - y| ≥ |x|/2. Hence
|F(x)| ≤ (1/(4π)) ∫ |q(y)| / |x - y| dy ≤ (1/(2π|x|)) ∫ |q(y)| dy,
so F(x) = O(|x|^{-1}) as |x| → ∞, and directly from the formula for u,
|u(x)| ≤ (1/(4π)) ∫ |q(y)| / |x - y|^2 dy ≤ (1/(π|x|^2)) ∫ |q(y)| dy = O(|x|^{-2}) as |x| → ∞.
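The pointwise identity behind u = ∇F, namely x/|x|^3 = ∇(-1/|x|), can be checked symbolically (a sketch, not from the handbook):

```python
import sympy as sp

# Symbolic check that x / |x|^3 = grad(-1/|x|); differentiating the potential
# -1/(4 pi |x - y|) under the integral sign then gives u = grad F.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

grad = sp.Matrix([sp.diff(-1 / r, v) for v in (x1, x2, x3)])
kernel = sp.Matrix([x1, x2, x3]) / r**3
print(sp.simplify(grad - kernel))   # zero vector
```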

More Related Content

PPTX
Ppt sets and set operations
PPTX
GRADE 10 ARITHMETIC.pptx
PPTX
Geometric Sequence and Geometric Mean
PPTX
linear equation in 2 variables
PPTX
Gauss jordan and Guass elimination method
PPTX
Arc Length & Area of a Sector.pptx
PDF
Lesson 9: Gaussian Elimination
PPTX
Sets PowerPoint Presentation
Ppt sets and set operations
GRADE 10 ARITHMETIC.pptx
Geometric Sequence and Geometric Mean
linear equation in 2 variables
Gauss jordan and Guass elimination method
Arc Length & Area of a Sector.pptx
Lesson 9: Gaussian Elimination
Sets PowerPoint Presentation

What's hot (20)

PPT
Properties of relations
PPT
CONVERGENCE.ppt
PPTX
Section 9: Equivalence Relations & Cosets
PPTX
Circle
PPTX
Normal subgroups- Group theory
PDF
Mathematics Riddles
PPTX
Taylor series
PPTX
PPTX
Fourier series
PDF
Infinite sequences and series i
DOCX
Factoring Perfect Square Trinomial
PDF
PAIR OF LINEAR EQUATIONS CLASS X MODULE 1
PPTX
Dot Plots and Box Plots.pptx
PPTX
5.4 mathematical induction t
PPTX
Kuhn munkres algorithm
PPT
bearings.ppt
PPT
Parallel and Perpendicular lines
PPT
Quadratic inequalities
PPTX
Arithmetic vs Geometric Series and Sequence
PDF
Part 1:Electrostatics
Properties of relations
CONVERGENCE.ppt
Section 9: Equivalence Relations & Cosets
Circle
Normal subgroups- Group theory
Mathematics Riddles
Taylor series
Fourier series
Infinite sequences and series i
Factoring Perfect Square Trinomial
PAIR OF LINEAR EQUATIONS CLASS X MODULE 1
Dot Plots and Box Plots.pptx
5.4 mathematical induction t
Kuhn munkres algorithm
bearings.ppt
Parallel and Perpendicular lines
Quadratic inequalities
Arithmetic vs Geometric Series and Sequence
Part 1:Electrostatics
Ad

Similar to Partial differential equations, graduate level problems and solutions by igor yanovsky (20)

PDF
Differential equations
PDF
Quantum Mechanics: Lecture notes
PDF
Essentials of applied mathematics
PDF
Advanced Quantum Mechanics
PDF
Applied Math
PDF
Fundamentals of computational fluid dynamics
PDF
Probability and Statistics by sheldon ross (8th edition).pdf
PDF
SGC 2014 - Mathematical Sciences Tutorials
PDF
I do like cfd vol 1 2ed_v2p2
PDF
Basic calculus free
PDF
Reading Materials for Operational Research
PDF
Lecture notes on planetary sciences and orbit determination
PDF
General physics
PDF
Thermal and statistical physics h. gould, j. tobochnik-1
PDF
toaz.info-instructor-solution-manual-probability-and-statistics-for-engineers...
PDF
Elementary algebra notes 001.pdf
PDF
Elementray college-algebra-free-pdf-download-olga-lednichenko-math-for-colleg...
PDF
phd_unimi_R08725
PDF
Mathematical formula handbook
Differential equations
Quantum Mechanics: Lecture notes
Essentials of applied mathematics
Advanced Quantum Mechanics
Applied Math
Fundamentals of computational fluid dynamics
Probability and Statistics by sheldon ross (8th edition).pdf
SGC 2014 - Mathematical Sciences Tutorials
I do like cfd vol 1 2ed_v2p2
Basic calculus free
Reading Materials for Operational Research
Lecture notes on planetary sciences and orbit determination
General physics
Thermal and statistical physics h. gould, j. tobochnik-1
toaz.info-instructor-solution-manual-probability-and-statistics-for-engineers...
Elementary algebra notes 001.pdf
Elementray college-algebra-free-pdf-download-olga-lednichenko-math-for-colleg...
phd_unimi_R08725
Mathematical formula handbook
Ad

More from Julio Banks (20)

PDF
Apologia - A Call for a Reformation of Christian Protestants Organizations.pdf
PDF
Mathcad - CMS (Component Mode Synthesis) Analysis.pdf
PDF
MathCAD - Synchronicity Algorithm.pdf
PDF
Sharing the gospel with muslims
PDF
Mathcad explicit solution cubic equation examples
PDF
Math cad prime the relationship between the cubit, meter, pi and the golden...
PDF
Mathcad day number in the year and solar declination angle
PDF
Transcript for abraham_lincoln_thanksgiving_proclamation_1863
PDF
Thanksgiving and lincolns calls to prayer
PDF
Jannaf 10 1986 paper by julio c. banks, et. al.-ballistic performance of lpg ...
PDF
Man's search-for-meaning-viktor-frankl
PDF
Love versus shadow self
PDF
Exposing the truth about the qur'an
PDF
NASA-TM-X-74335 --U.S. Standard Atmosphere 1976
PDF
Mathcad P-elements linear versus nonlinear stress 2014-t6
PDF
Apologia - The martyrs killed for clarifying the bible
PDF
Apologia - Always be prepared to give a reason for the hope that is within yo...
PDF
Spontaneous creation of the universe ex nihil by maya lincoln and avi wasser
PDF
The “necessary observer” that quantum mechanics require is described in the b...
PDF
Advances in fatigue and fracture mechanics by grzegorz (greg) glinka
Apologia - A Call for a Reformation of Christian Protestants Organizations.pdf
Mathcad - CMS (Component Mode Synthesis) Analysis.pdf
MathCAD - Synchronicity Algorithm.pdf
Sharing the gospel with muslims
Mathcad explicit solution cubic equation examples
Math cad prime the relationship between the cubit, meter, pi and the golden...
Mathcad day number in the year and solar declination angle
Transcript for abraham_lincoln_thanksgiving_proclamation_1863
Thanksgiving and lincolns calls to prayer
Jannaf 10 1986 paper by julio c. banks, et. al.-ballistic performance of lpg ...
Man's search-for-meaning-viktor-frankl
Love versus shadow self
Exposing the truth about the qur'an
NASA-TM-X-74335 --U.S. Standard Atmosphere 1976
Mathcad P-elements linear versus nonlinear stress 2014-t6
Apologia - The martyrs killed for clarifying the bible
Apologia - Always be prepared to give a reason for the hope that is within yo...
Spontaneous creation of the universe ex nihil by maya lincoln and avi wasser
The “necessary observer” that quantum mechanics require is described in the b...
Advances in fatigue and fracture mechanics by grzegorz (greg) glinka

Recently uploaded (20)

PPTX
AI-Reporting for Emerging Technologies(BS Computer Engineering)
PDF
First part_B-Image Processing - 1 of 2).pdf
PPTX
Micro1New.ppt.pptx the mai themes of micfrobiology
DOC
T Pandian CV Madurai pandi kokkaf illaya
PPTX
CN_Unite_1 AI&DS ENGGERING SPPU PUNE UNIVERSITY
PPTX
CONTRACTS IN CONSTRUCTION PROJECTS: TYPES
PPT
Programmable Logic Controller PLC and Industrial Automation
PPTX
Sorting and Hashing in Data Structures with Algorithms, Techniques, Implement...
PDF
Principles of operation, construction, theory, advantages and disadvantages, ...
PDF
Project_Mgmt_Institute_-Marc Marc Marc .pdf
PPT
Chapter 1 - Introduction to Manufacturing Technology_2.ppt
PDF
VSL-Strand-Post-tensioning-Systems-Technical-Catalogue_2019-01.pdf
PDF
August -2025_Top10 Read_Articles_ijait.pdf
PPTX
Chemical Technological Processes, Feasibility Study and Chemical Process Indu...
PDF
Accra-Kumasi Expressway - Prefeasibility Report Volume 1 of 7.11.2018.pdf
PDF
[jvmmeetup] next-gen integration with apache camel and quarkus.pdf
PPTX
Software Engineering and software moduleing
PPTX
Petroleum Refining & Petrochemicals.pptx
PDF
August 2025 - Top 10 Read Articles in Network Security & Its Applications
PDF
Influence of Green Infrastructure on Residents’ Endorsement of the New Ecolog...
AI-Reporting for Emerging Technologies(BS Computer Engineering)
First part_B-Image Processing - 1 of 2).pdf
Micro1New.ppt.pptx the mai themes of micfrobiology
T Pandian CV Madurai pandi kokkaf illaya
CN_Unite_1 AI&DS ENGGERING SPPU PUNE UNIVERSITY
CONTRACTS IN CONSTRUCTION PROJECTS: TYPES
Programmable Logic Controller PLC and Industrial Automation
Sorting and Hashing in Data Structures with Algorithms, Techniques, Implement...
Principles of operation, construction, theory, advantages and disadvantages, ...
Project_Mgmt_Institute_-Marc Marc Marc .pdf
Chapter 1 - Introduction to Manufacturing Technology_2.ppt
VSL-Strand-Post-tensioning-Systems-Technical-Catalogue_2019-01.pdf
August -2025_Top10 Read_Articles_ijait.pdf
Chemical Technological Processes, Feasibility Study and Chemical Process Indu...
Accra-Kumasi Expressway - Prefeasibility Report Volume 1 of 7.11.2018.pdf
[jvmmeetup] next-gen integration with apache camel and quarkus.pdf
Software Engineering and software moduleing
Petroleum Refining & Petrochemicals.pptx
August 2025 - Top 10 Read Articles in Network Security & Its Applications
Influence of Green Infrastructure on Residents’ Endorsement of the New Ecolog...

Partial differential equations, graduate level problems and solutions by igor yanovsky

  • 1. Partial Differential Equations: Graduate Level Problems and Solutions Igor Yanovsky 1
  • 2. Partial Differential Equations Igor Yanovsky, 2005 2 Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I can not be made responsible for any inaccuracies contained in this handbook.
  • 3. Partial Differential Equations Igor Yanovsky, 2005 3 Contents 1 Trigonometric Identities 6 2 Simple Eigenvalue Problem 8 3 Separation of Variables: Quick Guide 9 4 Eigenvalues of the Laplacian: Quick Guide 9 5 First-Order Equations 10 5.1 Quasilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 5.2 Weak Solutions for Quasilinear Equations . . . . . . . . . . . . . . . . . 12 5.2.1 Conservation Laws and Jump Conditions . . . . . . . . . . . . . 12 5.2.2 Fans and Rarefaction Waves . . . . . . . . . . . . . . . . . . . . . 12 5.3 General Nonlinear Equations . . . . . . . . . . . . . . . . . . . . . . . . 13 5.3.1 Two Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . . 13 5.3.2 Three Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . 13 6 Second-Order Equations 14 6.1 Classification by Characteristics . . . . . . . . . . . . . . . . . . . . . . . 14 6.2 Canonical Forms and General Solutions . . . . . . . . . . . . . . . . . . 14 6.3 Well-Posedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 7 Wave Equation 23 7.1 The Initial Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 23 7.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 7.3 Initial/Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . 24 7.4 Duhamel’s Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 7.5 The Nonhomogeneous Equation . . . . . . . . . . . . . . . . . . . . . . . 24 7.6 Higher Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 7.6.1 Spherical Means . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 7.6.2 Application to the Cauchy Problem . . . . . . . . . . . . . . . . 26 7.6.3 Three-Dimensional Wave Equation . . . . . . . . . . . . . . . . . 27 7.6.4 Two-Dimensional Wave Equation . . . . . . . . . . . . . . . . . . 28 7.6.5 Huygen’s Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 28 7.7 Energy Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 7.8 Contraction Mapping Principle . . . . . . . . . . . . . . . . . . . . . . . 30 8 Laplace Equation 31 8.1 Green’s Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 8.2 Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 8.3 Polar Laplacian in R2 for Radial Functions . . . . . . . . . . . . . . . . 32 8.4 Spherical Laplacian in R3 and Rn for Radial Functions . . . . . . . . . . 32 8.5 Cylindrical Laplacian in R3 for Radial Functions . . . . . . . . . . . . . 33 8.6 Mean Value Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 8.7 Maximum Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 8.8 The Fundamental Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 34 8.9 Representation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 8.10 Green’s Function and the Poisson Kernel . . . . . . . . . . . . . . . . . . 42
  • 4. Partial Differential Equations Igor Yanovsky, 2005 4 8.11 Properties of Harmonic Functions . . . . . . . . . . . . . . . . . . . . . . 44 8.12 Eigenvalues of the Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . 44 9 Heat Equation 45 9.1 The Pure Initial Value Problem . . . . . . . . . . . . . . . . . . . . . . . 45 9.1.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 45 9.1.2 Multi-Index Notation . . . . . . . . . . . . . . . . . . . . . . . . 45 9.1.3 Solution of the Pure Initial Value Problem . . . . . . . . . . . . . 49 9.1.4 Nonhomogeneous Equation . . . . . . . . . . . . . . . . . . . . . 50 9.1.5 Nonhomogeneous Equation with Nonhomogeneous Initial Condi- tions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 9.1.6 The Fundamental Solution . . . . . . . . . . . . . . . . . . . . . 50 10 Schr¨odinger Equation 52 11 Problems: Quasilinear Equations 54 12 Problems: Shocks 75 13 Problems: General Nonlinear Equations 86 13.1 Two Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 13.2 Three Spatial Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . 93 14 Problems: First-Order Systems 102 15 Problems: Gas Dynamics Systems 127 15.1 Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 15.2 Stationary Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 15.3 Periodic Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 15.4 Energy Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 16 Problems: Wave Equation 139 16.1 The Initial Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 139 16.2 Initial/Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . 141 16.3 Similarity Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 16.4 Traveling Wave Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 156 16.5 Dispersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 16.6 Energy Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 16.7 Wave Equation in 2D and 3D . . . . . . . . . . . . . . . . . . . . . . . . 187 17 Problems: Laplace Equation 196 17.1 Green’s Function and the Poisson Kernel . . . . . . . . . . . . . . . . . . 196 17.2 The Fundamental Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 205 17.3 Radial Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 17.4 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 17.5 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 17.6 Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 17.7 Spherical Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 17.8 Harmonic Extensions, Subharmonic Functions . . . . . . . . . . . . . . . 249
  • 5. Partial Differential Equations Igor Yanovsky, 2005 5 18 Problems: Heat Equation 255 18.1 Heat Equation with Lower Order Terms . . . . . . . . . . . . . . . . . . 263 18.1.1 Heat Equation Energy Estimates . . . . . . . . . . . . . . . . . . 264 19 Contraction Mapping and Uniqueness - Wave 271 20 Contraction Mapping and Uniqueness - Heat 273 21 Problems: Maximum Principle - Laplace and Heat 279 21.1 Heat Equation - Maximum Principle and Uniqueness . . . . . . . . . . . 279 21.2 Laplace Equation - Maximum Principle . . . . . . . . . . . . . . . . . . 281 22 Problems: Separation of Variables - Laplace Equation 282 23 Problems: Separation of Variables - Poisson Equation 302 24 Problems: Separation of Variables - Wave Equation 305 25 Problems: Separation of Variables - Heat Equation 309 26 Problems: Eigenvalues of the Laplacian - Laplace 323 27 Problems: Eigenvalues of the Laplacian - Poisson 333 28 Problems: Eigenvalues of the Laplacian - Wave 338 29 Problems: Eigenvalues of the Laplacian - Heat 346 29.1 Heat Equation with Periodic Boundary Conditions in 2D (with extra terms) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 30 Problems: Fourier Transform 365 31 Laplace Transform 385 32 Linear Functional Analysis 393 32.1 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 32.2 Banach and Hilbert Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 393 32.3 Cauchy-Schwarz Inequality . . . . . . . . . . . . . . . . . . . . . . . . . 393 32.4 H¨older Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 32.5 Minkowski Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 32.6 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
  • 6. Partial Differential Equations Igor Yanovsky, 2005 6 1 Trigonometric Identities cos(a + b) = cos a cos b − sina sinb cos(a − b) = cos a cos b + sina sinb sin(a + b) = sin a cos b + cos a sinb sin(a − b) = sin a cos b − cos a sinb cos a cos b = cos(a + b) + cos(a − b) 2 sin a cos b = sin(a + b) + sin(a − b) 2 sin a sinb = cos(a − b) − cos(a + b) 2 cos 2t = cos2 t − sin2 t sin2t = 2 sint cos t cos2 1 2 t = 1 + cos t 2 sin2 1 2 t = 1 − cos t 2 1 + tan2 t = sec2 t cot2 t + 1 = csc2 t cos x = eix + e−ix 2 sinx = eix − e−ix 2i coshx = ex + e−x 2 sinhx = ex − e−x 2 d dx cosh x = sinh(x) d dx sinh x = cosh(x) cosh2 x − sinh2 x = 1 du a2 + u2 = 1 a tan−1 u a + C du √ a2 − u2 = sin−1 u a + C L −L cos nπx L cos mπx L dx = 0 n = m L n = m L −L sin nπx L sin mπx L dx = 0 n = m L n = m L −L sin nπx L cos mπx L dx = 0 L 0 cos nπx L cos mπx L dx = 0 n = m L 2 n = m L 0 sin nπx L sin mπx L dx = 0 n = m L 2 n = m L 0 einx eimx dx = 0 n = m L n = m L 0 einx dx = 0 n = 0 L n = 0 sin2 x dx = x 2 − sin x cos x 2 cos2 x dx = x 2 + sin x cos x 2 tan2 x dx = tanx − x sinx cos x dx = − cos2 x 2 ln(xy) = ln(x) + ln(y) ln x y = ln(x) − ln(y) ln xr = r lnx ln x dx = x ln x − x x ln x dx = x2 2 ln x − x2 4 R e−z2 dz = √ π R e−z2 2 dz = √ 2π
  • 7. Partial Differential Equations Igor Yanovsky, 2005 7 A = a b c d , A−1 = 1 det(A) d −b −c a
  • 8. Partial Differential Equations Igor Yanovsky, 2005 8 2 Simple Eigenvalue Problem X + λX = 0 Boundary conditions Eigenvalues λn Eigenfunctions Xn X(0) = X(L) = 0 nπ L 2 sin nπ L x n = 1, 2, . . . X(0) = X (L) = 0 (n−1 2 )π L 2 sin (n−1 2 )π L x n = 1, 2, . . . X (0) = X(L) = 0 (n−1 2 )π L 2 cos (n−1 2 )π L x n = 1, 2, . . . X (0) = X (L) = 0 nπ L 2 cos nπ L x n = 0, 1, 2, . . . X(0) = X(L), X (0) = X (L) 2nπ L 2 sin 2nπ L x n = 1, 2, . . . cos 2nπ L x n = 0, 1, 2, . . . X(−L) = X(L), X (−L) = X (L) nπ L 2 sin nπ L x n = 1, 2, . . . cos nπ L x n = 0, 1, 2, . . . X − λX = 0 Boundary conditions Eigenvalues λn Eigenfunctions Xn X(0) = X(L) = 0, X (0) = X (L) = 0 nπ L 4 sin nπ L x n = 1, 2, . . . X (0) = X (L) = 0, X (0) = X (L) = 0 nπ L 4 cos nπ L x n = 0, 1, 2, . . .
  • 9. Partial Differential Equations Igor Yanovsky, 2005 9 3 Separation of Variables: Quick Guide Laplace Equation: u = 0. X (x) X(x) = − Y (y) Y (y) = −λ. X + λX = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. Y (θ) + λY (θ) = 0. Wave Equation: utt − uxx = 0. X (x) X(x) = T (t) T(t) = −λ. X + λX = 0. utt + 3ut + u = uxx. T T + 3 T T + 1 = X X = −λ. X + λX = 0. utt − uxx + u = 0. T T + 1 = X X = −λ. X + λX = 0. utt + μut = c2uxx + βuxxt, (β > 0) X X = −λ, 1 c2 T T + μ c2 T T = 1 + β c2 T T X X . 4th Order: utt = −k uxxxx. − X X = 1 k T T = −λ. X − λX = 0. Heat Equation: ut = kuxx. T T = k X X = −λ. X + λ k X = 0. 4th Order: ut = −uxxxx. T T = − X X = −λ. X − λX = 0. 4 Eigenvalues of the Lapla- cian: Quick Guide Laplace Equation: uxx +uyy +λu = 0. X X + Y Y + λ = 0. (λ = μ2 + ν2 ) X + μ2 X = 0, Y + ν2 Y = 0. uxx + uyy + k2 u = 0. − X X = Y Y + k2 = c2 . X + c2 X = 0, Y + (k2 − c2 )Y = 0. uxx + uyy + k2u = 0. − Y Y = X X + k2 = c2 . Y + c2 Y = 0, X + (k2 − c2 )X = 0.
  • 10. Partial Differential Equations Igor Yanovsky, 2005 10 5 First-Order Equations 5.1 Quasilinear Equations Consider the Cauchy problem for the quasilinear equation in two variables a(x, y, u)ux + b(x, y, u)uy = c(x, y, u), with Γ parameterized by (f(s), g(s), h(s)). The characteristic equations are dx dt = a(x, y, z), dy dt = b(x, y, z), dz dt = c(x, y, z), with initial conditions x(s, 0) = f(s), y(s, 0) = g(s), z(s, 0) = h(s). In a quasilinear case, the characteristic equations for dx dt and dy dt need not decouple from the dz dt equation; this means that we must take the z values into account even to find the projected characteristic curves in the xy-plane. In particular, this allows for the possibility that the projected characteristics may cross each other. The condition for solving for s and t in terms of x and y requires that the Jacobian matrix be nonsingular: J ≡ xs ys xt yt = xsyt − ysxt = 0. In particular, at t = 0 we obtain the condition f (s) · b(f(s), g(s), h(s)) − g (s) · a(f(s), g(s), h(s)) = 0. Burger’s Equation. Solve the Cauchy problem ut + uux = 0, u(x, 0) = h(x). (5.1) The characteristic equations are dx dt = z, dy dt = 1, dz dt = 0, and Γ may be parametrized by (s, 0, h(s)). x = h(s)t + s, y = t, z = h(s). u(x, y) = h(x − uy) (5.2) The characteristic projection in the xt-plane1 passing through the point (s, 0) is the line x = h(s)t + s along which u has the constant value u = h(s). Two characteristics x = h(s1)t + s1 and x = h(s2)t + s2 intersect at a point (x, t) with t = − s2 − s1 h(s2) − h(s1) . 1 y and t are interchanged here
  • 11. Partial Differential Equations Igor Yanovsky, 2005 11 From (5.2), we have ux = h (s)(1 − uxt) ⇒ ux = h (s) 1 + h (s)t Hence for h (s) < 0, ux becomes infinite at the positive time t = −1 h (s) . The smallest t for which this happens corresponds to the value s = s0 at which h (s) has a minimum (i.e.−h (s) has a maximum). At time T = −1/h (s0) the solution u experiences a “gradient catastrophe”.
  • 12. Partial Differential Equations Igor Yanovsky, 2005 12 5.2 Weak Solutions for Quasilinear Equations 5.2.1 Conservation Laws and Jump Conditions Consider shocks for an equation ut + f(u)x = 0, (5.3) where f is a smooth function of u. If we integrate (5.3) with respect to x for a ≤ x ≤ b, we obtain d dt b a u(x, t) dx + f(u(b, t)) − f(u(a, t)) = 0. (5.4) This is an example of a conservation law. Notice that (5.4) implies (5.3) if u is C1, but (5.4) makes sense for more general u. Consider a solution of (5.4) that, for fixed t, has a jump discontinuity at x = ξ(t). We assume that u, ux, and ut are continuous up to ξ. Also, we assume that ξ(t) is C1 in t. Taking a < ξ(t) < b in (5.4), we obtain d dt ξ a u dx + b ξ u dx + f(u(b, t)) − f(u(a, t)) = ξ (t)ul(ξ(t), t) − ξ (t)ur(ξ(t), t) + ξ a ut(x, t) dx + b ξ ut(x, t) dx + f(u(b, t)) − f(u(a, t)) = 0, where ul and ur denote the limiting values of u from the left and right sides of the shock. Letting a ↑ ξ(t) and b ↓ ξ(t), we get the Rankine-Hugoniot jump condition: ξ (t)(ul − ur) + f(ur) − f(ul) = 0, ξ (t) = f(ur) − f(ul) ur − ul . 5.2.2 Fans and Rarefaction Waves For Burgers’ equation ut + 1 2 u2 x = 0, we have f (u) = u, f ˜u x t = x t ⇒ ˜u x t = x t . For a rarefaction fan emanating from (s, 0) on xt-plane, we have: u(x, t) = ⎧ ⎪⎨ ⎪⎩ ul, x−s t ≤ f (ul) = ul, x−s t , ul ≤ x−s t ≤ ur, ur, x−s t ≥ f (ur) = ur.
  • 13. Partial Differential Equations Igor Yanovsky, 2005 13 5.3 General Nonlinear Equations 5.3.1 Two Spatial Dimensions Write a general nonlinear equation F(x, y, u, ux, uy) = 0 as F(x, y, z, p, q) = 0. Γ is parameterized by Γ : f(s) x(s,0) , g(s) y(s,0) , h(s) z(s,0) , φ(s) p(s,0) , ψ(s) q(s,0) We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0 • h (s) = φ(s)f (s) + ψ(s)g (s) The characteristic equations are dx dt = Fp dy dt = Fq dz dt = pFp + qFq dp dt = −Fx − Fzp dq dt = −Fy − Fzq We need to have the Jacobian condition. That is, in order to solve the Cauchy problem in a neighborhood of Γ, the following condition must be satisfied: f (s) · Fq[f, g, h, φ, ψ](s) − g (s) · Fp[f, g, h, φ, ψ](s) = 0. 5.3.2 Three Spatial Dimensions Write a general nonlinear equation F(x1, x2, x3, u, ux1, ux2, ux3 ) = 0 as F(x1, x2, x3, z, p1, p2, p3) = 0. Γ is parameterized by Γ : f1(s1, s2) x1(s1,s2,0) , f2(s1, s2) x2(s1,s2,0) , f3(s1, s2) x3(s1,s2,0) , h(s1, s2) z(s1,s2,0) , φ1(s1, s2) p1(s1,s2,0) , φ2(s1, s2) p2(s1,s2,0) , φ3(s1, s2) p3(s1,s2,0) We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively: • F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0 • ∂h ∂s1 = φ1 ∂f1 ∂s1 + φ2 ∂f2 ∂s1 + φ3 ∂f3 ∂s1 • ∂h ∂s2 = φ1 ∂f1 ∂s2 + φ2 ∂f2 ∂s2 + φ3 ∂f3 ∂s2 The characteristic equations are dx1 dt = Fp1 dx2 dt = Fp2 dx3 dt = Fp3 dz dt = p1Fp1 + p2Fp2 + p3Fp3 dp1 dt = −Fx1 − p1Fz dp2 dt = −Fx2 − p2Fz dp3 dt = −Fx3 − p3Fz
  • 14. Partial Differential Equations Igor Yanovsky, 2005 14 6 Second-Order Equations 6.1 Classification by Characteristics Consider the second-order equation in which the derivatives of second-order all occur linearly, with coefficients only depending on the independent variables: a(x, y)uxx + b(x, y)uxy + c(x, y)uyy = d(x, y, u, ux, uy). (6.1) The characteristic equation is dy dx = b ± √ b2 − 4ac 2a . • b2 − 4ac > 0 ⇒ two characteristics, and (6.1) is called hyperbolic; • b2 − 4ac = 0 ⇒ one characteristic, and (6.1) is called parabolic; • b2 − 4ac < 0 ⇒ no characteristics, and (6.1) is called elliptic. These definitions are all taken at a point x0 ∈ R2; unless a, b, and c are all constant, the type may change with the point x0. 6.2 Canonical Forms and General Solutions ➀ uxx − uyy = 0 is hyperbolic (one-dimensional wave equation). ➁ uxx − uy = 0 is parabolic (one-dimensional heat equation). ➂ uxx + uyy = 0 is elliptic (two-dimensional Laplace equation). By the introduction of new coordinates μ and η in place of x and y, the equation (6.1) may be transformed so that its principal part takes the form ➀, ➁, or ➂. If (6.1) is hyperbolic, parabolic, or elliptic, there exists a change of variables μ(x, y) and η(x, y) under which (6.1) becomes, respectively, uμη = ˜d(μ, η, u, uμ, uη) ⇔ u¯x¯x − u¯y¯y = ¯d(¯x, ¯y, u, u¯x, u¯y), uμμ = ˜d(μ, η, u, uμ, uη), uμμ + uηη = ˜d(μ, η, u, uμ, uη). Example 1. Reduce to canonical form and find the general solution: uxx + 5uxy + 6uyy = 0. (6.2) Proof. a = 1, b = 5, c = 6 ⇒ b2 − 4ac = 1 > 0 ⇒ hyperbolic ⇒ two characteristics. The characteristics are found by solving dy dx = 5 ± 1 2 = 3 2 to find y = 3x + c1 and y = 2x + c2.
  • 15. Partial Differential Equations Igor Yanovsky, 2005 15 Let μ(x, y) = 3x − y, η(x, y) = 2x − y. μx = 3, ηx = 2, μy = −1, ηy = −1. u = u(μ(x, y), η(x, y)); ux = uμμx + uηηx = 3uμ + 2uη, uy = uμμy + uηηy = −uμ − uη, uxx = (3uμ + 2uη)x = 3(uμμμx + uμηηx) + 2(uημμx + uηηηx) = 9uμμ + 12uμη + 4uηη, uxy = (3uμ + 2uη)y = 3(uμμμy + uμηηy) + 2(uημμy + uηηηy) = −3uμμ − 5uμη − 2uηη, uyy = −(uμ + uη)y = −(uμμμy + uμηηy + uημμy + uηηηy) = uμμ + 2uμη + uηη. Inserting these expressions into (6.2) and simplifying, we obtain uμη = 0, which is the Canonical form, uμ = f(μ), u = F(μ) + G(η), u(x, y) = F(3x − y) + G(2x − y), General solution. Example 2. Reduce to canonical form and find the general solution: y2 uxx − 2yuxy + uyy = ux + 6y. (6.3) Proof. a = y2, b = −2y, c = 1 ⇒ b2 −4ac = 0 ⇒ parabolic ⇒ one characteristic. The characteristics are found by solving dy dx = −2y 2y2 = − 1 y to find − y2 2 + c = x. Let μ = y2 2 + x. We must choose a second constant function η(x, y) so that η is not parallel to μ. Choose η(x, y) = y. μx = 1, ηx = 0, μy = y, ηy = 1. u = u(μ(x, y), η(x, y)); ux = uμμx + uηηx = uμ, uy = uμμy + uηηy = yuμ + uη, uxx = (uμ)x = uμμμx + uμηηx = uμμ, uxy = (uμ)y = uμμμy + uμηηy = yuμμ + uμη, uyy = (yuμ + uη)y = uμ + y(uμμμy + uμηηy) + (uημμy + uηηηy) = uμ + y2 uμμ + 2yuμη + uηη.
  • 16. Partial Differential Equations Igor Yanovsky, 2005 16 Inserting these expressions into (6.3) and simplifying, we obtain uηη = 6y, uηη = 6η, which is the Canonical form, uη = 3η2 + f(μ), u = η3 + ηf(μ) + g(μ), u(x, y) = y3 + y · f y2 2 + x + g y2 2 + x , General solution.
  • 17. Partial Differential Equations Igor Yanovsky, 2005 17 Problem (F’03, #4). Find the characteristics of the partial differential equation xuxx + (x − y)uxy − yuyy = 0, x > 0, y > 0, (6.4) and then show that it can be transformed into the canonical form (ξ2 + 4η)uξη + ξuη = 0 whence ξ and η are suitably chosen canonical coordinates. Use this to obtain the general solution in the form u(ξ, η) = f(ξ) + η g(η ) dη (ξ2 + 4η ) 1 2 where f and g are arbitrary functions of ξ and η. Proof. a = x, b = x − y, c = −y ⇒ b2 − 4ac = (x − y)2 + 4xy > 0 for x > 0, y > 0 ⇒ hyperbolic ⇒ two characteristics. ➀ The characteristics are found by solving dy dx = b ± √ b2 − 4ac 2a = x − y ± (x − y)2 + 4xy 2x = x − y ± (x + y) 2x = 2x 2x = 1 −2y 2x = −y x ⇒ y = x + c1, dy y = − dx x , ln y = ln x−1 + ˜c2, y = c2 x .➁ Let μ = x − y and η = xy μx = 1, ηx = y, μy = −1, ηy = x. u = u(μ(x, y), η(x, y)); ux = uμμx + uηηx = uμ + yuη, uy = uμμy + uηηy = −uμ + xuη, uxx = (uμ + yuη)x = uμμμx + uμηηx + y(uημμx + uηηηx) = uμμ + 2yuμη + y2 uηη, uxy = (uμ + yuη)y = uμμμy + uμηηy + uη + y(uημμy + uηηηy) = −uμμ + xuμη + uη − yuημ + xyuηη, uyy = (−uμ + xuη)y = −uμμμy − uμηηy + x(uημμy + uηηηy) = uμμ − 2xuμη + x2 uηη, Inserting these expressions into (6.4), we obtain x(uμμ + 2yuμη + y2 uηη) + (x − y)(−uμμ + xuμη + uη − yuημ + xyuηη) − y(uμμ − 2xuμη + x2 uηη) = 0, (x2 + 2xy + y2 )uμη + (x − y)uη = 0, (x − y)2 + 4xy uμη + (x − y)uη = 0, (μ2 + 4η)uμη + μuη = 0, which is the Canonical form.
  • 18. Partial Differential Equations Igor Yanovsky, 2005 18 ➂ We need to integrate twice to get the general solution: (μ2 + 4η)(uη)μ + μuη = 0, (uη)μ uη dμ = − μ μ2 + 4η dμ, ln uη = − 1 2 ln (μ2 + 4η) + ˜g(η), ln uη = ln (μ2 + 4η)−1 2 + ˜g(η), uη = g(η) (μ2 + 4η) 1 2 , u(μ, η) = f(μ) + g(η) dη (μ2 + 4η) 1 2 , General solution.
  • 19. Partial Differential Equations Igor Yanovsky, 2005 19 6.3 Well-Posedness Problem (S’99, #2). In R2 consider the unit square Ω defined by 0 ≤ x, y ≤ 1. Consider a) ux + uyy = 0; b) uxx + uyy = 0; c) uxx − uyy = 0. Prescribe data for each problem separately on the boundary of Ω so that each of these problems is well-posed. Justify your answers. Proof. • The initial / boundary value problem for the HEAT EQUATION is well- posed: ⎧ ⎪⎨ ⎪⎩ ut = u x ∈ Ω, t > 0, u(x, 0) = g(x) x ∈ Ω, u(x, t) = 0 x ∈ ∂Ω, t > 0. Existence - by eigenfunction expansion. Uniqueness and continuous dependence on the data - by maximum principle. The method of eigenfunction expansion and maximum principle give well-posedness for more general problems: ⎧ ⎪⎨ ⎪⎩ ut = u + f(x, t) x ∈ Ω, t > 0, u(x, 0) = g(x) x ∈ Ω, u(x, t) = h(x, t) x ∈ ∂Ω, t > 0. It is also possible to replace the Dirichlet boundary condition u(x, t) = h(x, t) by a Neumann or Robin condition, provided we replace λn, φn by the eigenvalues and eigen- functions for the appropriate boundary value problem. a) • Relabel the variables (x → t, y → x). We have the BACKWARDS HEAT EQUATION: ut + uxx = 0. Need to define initial conditions u(x, 1) = g(x), and either Dirichlet, Neumann, or Robin boundary conditions. b) • The solution to the LAPLACE EQUATION u = 0 in Ω, u = g on ∂Ω exists if g is continuous on ∂Ω, by Perron’s method. Maximum principle gives unique- ness. To show the continuous dependence on the data, assume u1 = 0 in Ω, u1 = g1 on ∂Ω; u2 = 0 in Ω, u2 = g2 on ∂Ω.
  • 20. Partial Differential Equations Igor Yanovsky, 2005 20 Then (u1 − u2) = 0 in Ω. Maximum principle gives max Ω (u1 − u2) = max ∂Ω (g1 − g2). Thus, max Ω |u1 − u2| = max ∂Ω |g1 − g2|. Thus, |u1 − u2| is bounded by |g1 − g2|, i.e. continuous dependence on data. • Perron’s method gives existence of the solution to the POISSON EQUATION u = f in Ω, ∂u ∂n = h on ∂Ω for f ∈ C∞ (Ω) and h ∈ C∞ (∂Ω), satisfying the compatibility condition ∂Ω h dS = Ω f dx. It is unique up to an additive constant. c) • Relabel the variables (y → t). The solution to the WAVE EQUATION utt − uxx = 0, is of the form u(x, y) = F(x + t) + G(x − t). The existence of the solution to the initial/boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − uxx = 0 0 < x < 1, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) 0 < x < 1 u(0, t) = α(t), u(1, t) = β(t) t ≥ 0. is given by the method of separation of variables (expansion in eigenfunctions) and by the parallelogram rule. Uniqueness is given by the energy method. Need initial conditions u(x, 0), ut(x, 0). Prescribe u or ux for each of the two boundaries.
  • 21. Partial Differential Equations Igor Yanovsky, 2005 21 Problem (F’95, #7). Let a, b be real numbers. The PDE uy + auxx + buyy = 0 is to be solved in the box Ω = [0, 1]2. Find data, given on an appropriate part of ∂Ω, that will make this a well-posed prob- lem. Cover all cases according to the possible values of a and b. Justify your statements. Proof. ➀ ab < 0 ⇒ two sets of characteristics ⇒ hyperbolic. Relabeling the variables (y → t), we have utt + a b uxx = − 1 b ut. The solution of the equation is of the form u(x, t) = F(x + −a b t) + G(x − −a b t). Existence of the solution to the initial/boundary value problem is given by the method of separation of variables (expansion in eigenfunctions) and by the parallelogram rule. Uniqueness is given by the energy method. Need initial conditions u(x, 0), ut(x, 0). Prescribe u or ux for each of the two boundaries. ➁ ab > 0 ⇒ no characteristics ⇒ elliptic. The solution to the Laplace equation with boundary conditions u = g on ∂Ω exists if g is continuous on ∂Ω, by Perron’s method. To show uniqueness, we use maximum principle. Assume there are two solutions u1 and u2 with with u1 = g(x), u2 = g(x) on ∂Ω. By maximum principle max Ω (u1 − u2) = max ∂Ω (g(x) − g(x)) = 0. Thus, u1 = u2. ➂ ab = 0 ⇒ one set of characteristics ⇒ parabolic. • a = b = 0. We have uy = 0, a first-order ODE. u must be specified on y = 0, i.e. x -axis. • a = 0, b = 0. We have uy + buyy = 0, a second-order ODE. u and uy must be specified on y = 0, i.e. x -axis. • a > 0, b = 0. We have a Backwards Heat Equation. ut = −auxx. Need to define initial conditions u(x, 1) = g(x), and either Dirichlet, Neumann, or Robin boundary conditions.
  • 22. Partial Differential Equations Igor Yanovsky, 2005 22 • a < 0, b = 0. We have a Heat Equation. ut = −auxx. The initial / boundary value problem for the heat equation is well-posed: ⎧ ⎪⎨ ⎪⎩ ut = u x ∈ Ω, t > 0, u(x, 0) = g(x) x ∈ Ω, u(x, t) = 0 x ∈ ∂Ω, t > 0. Existence - by eigenfunction expansion. Uniqueness and continuous dependence on the data - by maximum principle.
  • 23. Partial Differential Equations Igor Yanovsky, 2005 23 7 Wave Equation The one-dimensional wave equation is utt − c2 uxx = 0. (7.1) The characteristic equation with a = −c2, b = 0, c = 1 would be dt dx = b ± √ b2 − 4ac 2a = ± √ 4c2 −2c2 = ± 1 c , and thus t = − 1 c x + c1 and t = 1 c x + c2, μ = x + ct η = x − ct, which transforms (7.1) to uμη = 0. (7.2) The general solution of (7.2) is u(μ, η) = F(μ)+G(η), where F and G are C1 functions. Returning to the variables x, t we find that u(x, t) = F(x + ct) + G(x − ct) (7.3) solves (7.1). Moreover, u is C2 provided that F and G are C2. If F ≡ 0, then u has constant values along the lines x−ct = const, so may be described as a wave moving in the positive x-direction with speed dx/dt = c; if G ≡ 0, then u is a wave moving in the negative x-direction with speed c. 7.1 The Initial Value Problem For an initial value problem, consider the Cauchy problem utt − c2 uxx = 0, u(x, 0) = g(x), ut(x, 0) = h(x). (7.4) Using (7.3) and (7.4), we find that F and G satisfy F(x) + G(x) = g(x), cF (x) − cG (x) = h(x). (7.5) If we integrate the second equation in (7.5), we get cF(x) − cG(x) = x 0 h(ξ) dξ + C. Combining this with the first equation in (7.5), we can solve for F and G to find F(x) = 1 2 g(x) + 1 2c x 0 h(ξ) dξ + C1 G(x) = 1 2 g(x) − 1 2c x 0 h(ξ) dξ − C1, Using these expressions in (7.3), we obtain d’Alembert’s Formula for the solution of the initial value problem (7.4): u(x, t) = 1 2 (g(x + ct) + g(x − ct)) + 1 2c x+ct x−ct h(ξ) dξ. If g ∈ C2 and h ∈ C1, then d’Alembert’s Formula defines a C2 solution of (7.4).
  • 24. Partial Differential Equations Igor Yanovsky, 2005 24 7.2 Weak Solutions Equation (7.3) defines a weak solution of (7.1) when F and G are not C2 functions. Consider the parallelogram with sides that are segments of characteristics. Since u(x, t) = F(x + ct) + G(x − ct), we have u(A) + u(C) = = F(k1) + G(k3) + F(k2) + G(k4) = u(B) + u(D), which is the parallelogram rule. 7.3 Initial/Boundary Value Problem ⎧ ⎪⎨ ⎪⎩ utt − c2uxx = 0 0 < x < L, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) 0 < x < L u(0, t) = α(t), u(L, t) = β(t) t ≥ 0. (7.6) Use separation of variables to obtain an expansion in eigenfunctions. Find u(x, t) in the form u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nπx L + bn(t) sin nπx L . 7.4 Duhamel’s Principle ⎧ ⎪⎨ ⎪⎩ utt − c2 uxx = f(x, t) u(x, 0) = 0 ut(x, 0) = 0. ⇒ ⎧ ⎪⎨ ⎪⎩ Utt − c2 Uxx = 0 U(x, 0, s) = 0 Ut(x, 0, s) = f(x, s) u(x, t) = t 0 U(x, t−s, s) ds. ⎧ ⎪⎨ ⎪⎩ an + λnan = fn(t) an(0) = 0 an(0) = 0 ⇒ ⎧ ⎪⎨ ⎪⎩ ˜an + λn˜an = 0 ˜an(0, s) = 0 ˜an(0, s) = fn(s) an(t) = t 0 ˜an(t−s, s) ds. 7.5 The Nonhomogeneous Equation Consider the nonhomogeneous wave equation with homogeneous initial conditions: utt − c2 uxx = f(x, t), u(x, 0) = 0, ut(x, 0) = 0. (7.7) Duhamel’s Principle provides the solution of (7.7): u(x, t) = 1 2c t 0 x+c(t−s) x−c(t−s) f(ξ, s) dξ ds. If f(x, t) is C1 in x and C0 in t, then Duhamel’s Principle provides a C2 solution of (7.7).
  • 25. Partial Differential Equations Igor Yanovsky, 2005 25 We can solve (7.7) with nonhomogeneous initial conditions, utt − c2 uxx = f(x, t), u(x, 0) = g(x), ut(x, 0) = h(x), (7.8) by adding together d’Alembert’s formula and Duhamel’s principle gives the solution: u(x, t) = 1 2 (g(x + ct) + g(x − ct)) + 1 2c x+ct x−ct h(ξ) dξ + 1 2c t 0 x+c(t−s) x−c(t−s) f(ξ, s) dξ ds.
  • 26. Partial Differential Equations Igor Yanovsky, 2005 26 7.6 Higher Dimensions 7.6.1 Spherical Means For a continuous function u(x) on Rn, its spherical mean or average on a sphere of radius r and center x is Mu(x, r) = 1 ωn |ξ|=1 u(x + rξ)dSξ, where ωn is the area of the unit sphere Sn−1 = {ξ ∈ Rn : |ξ| = 1} and dSξ is surface measure. Since u is continuous in x, Mu(x, r) is continuous in x and r, so Mu(x, 0) = u(x). Using the chain rule, we find ∂ ∂r Mu(x, r) = 1 ωn |ξ|=1 n i=1 uxi (x + rξ) ξi dSξ = To compute the RHS, we apply the divergence theorem in Ω = {ξ ∈ Rn : |ξ| < 1}, which has boundary ∂Ω = Sn−1 and exterior unit normal n(ξ) = ξ. The integrand is V · n where V (ξ) = r−1 ∇ξu(x + rξ) = ∇xu(x + rξ). Computing the divergence of V , we obtain div V (ξ) = r n i=1 uxixi (x + rξ) = r xu(x + rξ), so, = 1 ωn |ξ|<1 r xu(x + rξ) dξ = r ωn x |ξ|<1 u(x + rξ) dξ (ξ = rξ) = r ωn 1 rn x |ξ |<r u(x + ξ ) dξ (spherical coordinates) = 1 ωnrn−1 x r 0 ρn−1 |ξ|=1 u(x + ρξ) dSξ dρ = 1 ωnrn−1 ωn x r 0 ρn−1 Mu(x, ρ) dρ = 1 rn−1 x r 0 ρn−1 Mu(x, ρ) dρ. If we multiply by rn−1 , differentiate with respect to r, and then divide by rn−1 , we obtain the Darboux equation: ∂2 ∂r2 + n − 1 r ∂ ∂r Mu(x, r) = xMu(x, r). Note that for a radial function u = u(r), we have Mu = u, so the equation provides the Laplacian of u in spherical coordinates. 7.6.2 Application to the Cauchy Problem We want to solve the equation utt = c2 u x ∈ Rn , t > 0, (7.9) u(x, 0) = g(x), ut(x, 0) = h(x) x ∈ Rn . We use Poisson’s method of spherical means to reduce this problem to a partial differ- ential equation in the two variables r and t.
  • 27. Partial Differential Equations Igor Yanovsky, 2005 27 Suppose that u(x, t) solves (7.9). We can view t as a parameter and take the spherical mean to obtain Mu(x, r, t), which satisfies ∂2 ∂t2 Mu(x, r, t) = 1 ωn |ξ|=1 utt(x + rξ, t)dSξ = 1 ωn |ξ|=1 c2 u(x + rξ, t)dSξ = c2 Mu(x, r, t). Invoking the Darboux equation, we obtain the Euler-Poisson-Darboux equation: ∂2 ∂t2 Mu(x, r, t) = c2 ∂2 ∂r2 + n − 1 r ∂ ∂r Mu(x, r, t). The initial conditions are obtained by taking the spherical means: Mu(x, r, 0) = Mg(x, r), ∂Mu ∂t (x, r, 0) = Mh(x, r). If we find Mu(x, r, t), we can then recover u(x, t) by: u(x, t) = lim r→0 Mu(x, r, t). 7.6.3 Three-Dimensional Wave Equation When n = 3, we can write the Euler-Poisson-Darboux equation as 2 ∂2 ∂t2 rMu(x, r, t) = c2 ∂2 ∂r2 rMu(x, r, t) . For each fixed x, consider V x (r, t) = rMu(x, r, t) as a solution of the one-dimensional wave equation in r, t > 0: ∂2 ∂t2 V x (r, t) = c2 ∂2 ∂r2 V x (r, t), V x (r, 0) = rMg(x, r) ≡ Gx (r), (IC) V x t (r, 0) = rMh(x, r) ≡ Hx (r), (IC) V x (0, t) = lim r→0 rMu(x, r, t) = 0 · u(x, t) = 0. (BC) Gx (0) = Hx (0) = 0. We may extend Gx and Hx as odd functions of r and use d’Alembert’s formula for V x(r, t): V x (r, t) = 1 2 Gx (r + ct) + Gx (r − ct) + 1 2c r+ct r−ct Hx (ρ) dρ. Since Gx and Hx are odd functions, we have for r < ct: Gx (r − ct) = −Gx (ct − r) and r+ct r−ct Hx (ρ) dρ = ct+r ct−r Hx (ρ) dρ. After some more manipulations, we find that the solution of (7.9) is given by the Kirchhoff’s formula: u(x, t) = 1 4π ∂ ∂t t |ξ|=1 g(x + ctξ)dSξ + t 4π |ξ|=1 h(x + ctξ)dSξ. If g ∈ C3(R3) and h ∈ C2(R3), then Kirchhoff’s formula defines a C2-solution of (7.9). 2 It is seen by expanding the equation below.
  • 28. Partial Differential Equations Igor Yanovsky, 2005 28 7.6.4 Two-Dimensional Wave Equation This problem is solved by Hadamard’s method of descent, namely, view (7.9) as a special case of a three-dimensional problem with initial conditions independent of x3. We need to convert surface integrals in R3 to domain integrals in R2. u(x1, x2, t) = 1 4π ∂ ∂t 2t ξ2 1+ξ2 2 <1 g(x1 + ctξ1, x2 + ctξ2)dξ1dξ2 1 − ξ2 1 − ξ2 2 + t 4π 2 ξ2 1+ξ2 2 <1 h(x1 + ctξ1, x2 + ctξ2)dξ1dξ2 1 − ξ2 1 − ξ2 2 If g ∈ C3 (R2 ) and h ∈ C2 (R2 ), then this equation defines a C2 -solution of (7.9). 7.6.5 Huygen’s Principle Notice that u(x, t) depends only on the Cauchy data g, h on the surface of the hyper- sphere {x + ctξ : |ξ| = 1} in Rn , n = 2k + 1; in other words we have sharp signals. If we use the method of descent to obtain the solution for n = 2k, the hypersurface integrals become domain integrals. This means that there are no sharp signals. The fact that sharp signals exist only for odd dimensions n ≥ 3 is known as Huygen’s principle. 3 3 For x ∈ Rn : ∂ ∂t |ξ|=1 f(x + tξ)dSξ = 1 tn−1 |y|≤t f(x + y)dy ∂ ∂t |y|≤t f(x + y)dy = tn−1 |ξ|=1 f(x + tξ)dSξ
  • 29. Partial Differential Equations Igor Yanovsky, 2005 29 7.7 Energy Methods Suppose u ∈ C2(Rn × (0, ∞)) solves utt = c2 u x ∈ Rn, t > 0, u(x, 0) = g(x), ut(x, 0) = h(x) x ∈ Rn, (7.10) where g and h have compact support. Define energy for a function u(x, t) at time t by E(t) = 1 2 Rn (u2 t + c2 |∇u|2 ) dx. If we differentiate this energy function, we obtain dE dt = d dt 1 2 Rn u2 t + c2 n i=1 u2 xi dx = Rn ututt + c2 n i=1 uxi uxit dx = Rn ututt dx + c2 n i=1 uxi ut ∂Rn − Rn c2 n i=1 uxixi ut dx = Rn ut(utt − c2 u) dx = 0, or dE dt = d dt 1 2 Rn u2 t + c2 n i=1 u2 xi dx = Rn ututt + c2 n i=1 uxi uxit dx = Rn ututt + c2 ∇u · ∇ut dx = Rn ututt dx + c2 ∂Rn ut ∂u ∂n ds − Rn ut u dx = Rn ut(utt − c2 u) dx = 0. Hence, E(t) is constant, or E(t) ≡ E(0). In particular, if u1 and u2 are two solutions of (7.10), then w = u1 −u2 has zero Cauchy data and hence Ew(0) = 0. By discussion above, Ew(t) ≡ 0, which implies w(x, t) ≡ const. But w(x, 0) = 0 then implies w(x, t) ≡ 0, so the solution is unique.
  • 30. Partial Differential Equations Igor Yanovsky, 2005 30 7.8 Contraction Mapping Principle Suppose X is a complete metric space with distance function represented by d(·, ·). A mapping T : X → X is a strict contraction if there exists 0 < α < 1 such that d(Tx, Ty) ≤ α d(x, y) ∀ x, y ∈ X. An obvious example on X = Rn is Tx = αx, which shrinks all of Rn , leaving 0 fixed. The Contraction Mapping Principle. If X is a complete metric space and T : X → X is a strict contraction, then T has a unique fixed point. The process of replacing a differential equation by an integral equation occurs in time-evolution partial differential equations. The Contraction Mapping Principle is used to establish the local existence and unique- ness of solutions to various nonlinear equations.
  • 31. Partial Differential Equations Igor Yanovsky, 2005 31 8 Laplace Equation Consider the Laplace equation u = 0 in Ω ⊂ Rn (8.1) and the Poisson equation u = f in Ω ⊂ Rn . (8.2) Solutions of (8.1) are called harmonic functions in Ω. Cauchy problems for (8.1) and (8.2) are not well posed. We use separation of variables for some special domains Ω to find boundary conditions that are appropriate for (8.1), (8.2). Dirichlet problem: u(x) = g(x), x ∈ ∂Ω Neumann problem: ∂u(x) ∂n = h(x), x ∈ ∂Ω Robin problem: ∂u ∂n + αu = β, x ∈ ∂Ω 8.1 Green’s Formulas Ω ∇u · ∇v dx = ∂Ω v ∂u ∂n ds − Ω v u dx (8.3) ∂Ω v ∂u ∂n − u ∂v ∂n ds = Ω (v u − u v) dx ∂Ω ∂u ∂n ds = Ω u dx (v = 1 in (8.3)) Ω |∇u|2 dx = ∂Ω u ∂u ∂n ds − Ω u u dx (u = v in (8.3)) Ω uxvx dxdy = ∂Ω vuxn1 ds − Ω vuxx dxdy n = (n1, n2) ∈ R2 Ω uxk v dx = ∂Ω uvnk ds − Ω uvxk dx n = (n1, . . ., nn) ∈ Rn . Ω u 2 v dx = ∂Ω u ∂ v ∂n ds − ∂Ω v ∂u ∂n ds + Ω u v dx. Ω u 2 v − v 2 u dx = ∂Ω u ∂ v ∂n − v ∂ u ∂n ds + ∂Ω u ∂v ∂n − v ∂u ∂n ds.
  • 32. Partial Differential Equations Igor Yanovsky, 2005 32 8.2 Polar Coordinates Polar Coordinates. Let f : Rn → R be continuous. Then Rn f dx = ∞ 0 ∂Br(x0) f dS dr for each x0 ∈ Rn. In particular d dr Br(x0) f dx = ∂Br(x0) f dS for each r > 0. u = u(x(r, θ), y(r, θ)) x(r, θ) = r cos θ y(r, θ) = r sin θ ur = uxxr + uyyr = ux cos θ + uy sinθ, uθ = uxxθ + uyyθ = −uxr sin θ + uyr cos θ, urr = (ux cos θ + uy sinθ)r = (uxxxr + uxyyr) cosθ + (uyxxr + uyyyr) sinθ = uxx cos2 θ + 2uxy cos θ sinθ + uyy sin2 θ, uθθ = (−uxr sinθ + uyr cos θ)θ = (−uxxxθ − uxyyθ)r sinθ − uxr cos θ + (uyxxθ + uyyyθ)r cos θ − uyr sin θ = (uxxr sin θ − uxyr cos θ)r sinθ − uxr cos θ + (−uyxr sin θ + uyyr cos θ)r cos θ − uyr sin θ = r2 (uxx sin2 θ − 2uxy cos θ sinθ + uyy cos2 θ) − r(ux cos θ + uy sinθ). urr + 1 r2 uθθ = uxx cos2 θ + 2uxy cos θ sinθ + uyy sin2 θ + uxx sin2 θ − 2uxy cos θ sinθ + uyy cos2 θ − 1 r (ux cos θ + uy sin θ) = uxx + uyy − 1 r ur. uxx + uyy = urr + 1 r ur + 1 r2 uθθ. ∂2 ∂x2 + ∂2 ∂y2 = ∂2 ∂r2 + 1 r ∂ ∂r + 1 r2 ∂2 ∂θ2 . 8.3 Polar Laplacian in R2 for Radial Functions u = 1 r rur r = ∂2 ∂r2 + 1 r ∂ ∂r u. 8.4 Spherical Laplacian in R3 and Rn for Radial Functions u = ∂2 ∂r2 + n − 1 r ∂ ∂r u. In R3: 4 u = 1 r2 r2 ur r = 1 r ru rr = ∂2 ∂r2 + 2 r ∂ ∂r u. 4 These formulas are taken from S. Farlow, p. 411.
  • 33. Partial Differential Equations Igor Yanovsky, 2005 33 8.5 Cylindrical Laplacian in R3 for Radial Functions u = 1 r rur r = ∂2 ∂r2 + 1 r ∂ ∂r u. 8.6 Mean Value Theorem Gauss Mean Value Theorem. If u ∈ C2 (Ω) is harmonic in Ω, let ξ ∈ Ω and pick r > 0 so that Br(ξ) = {x : |x − ξ| ≤ r} ⊂ Ω. Then u(ξ) = Mu(ξ, r) ≡ 1 ωn |x|=1 u(ξ + rx) dSx, where ωn is the measure of the (n − 1)-dimensional sphere in Rn . 8.7 Maximum Principle Maximum Principle. If u ∈ C2(Ω) satisfies u ≥ 0 in Ω, then either u is a constant, or u(ξ) < sup x∈Ω u(x) for all ξ ∈ Ω. Proof. We may assume A = supx∈Ω u(x) ≤ ∞, so by continuity of u we know that {x ∈ Ω : u(x) = A} is relatively closed in Ω. But since u(ξ) ≤ n ωn |x|≤1 u(ξ + rx) dx, if u(ξ) = A at an interior point ξ, then u(x) = A for all x in a ball about ξ, so {x ∈ Ω : u(x) = A} is open. The connectedness of Ω implies u(ξ) < A or u(ξ) ≡ A for all ξ ∈ Ω. The maximum principle shows that u ∈ C2 (Ω) with u ≥ 0 can attain an interior maximum only if u is constant. In particular, if Ω is compact, and u ∈ C2(Ω) ∩ C(Ω) satisfies u ≥ 0 in Ω, we have the weak maximum principle: max x∈Ω u(x) = max x∈∂Ω u(x).
  • 34. Partial Differential Equations Igor Yanovsky, 2005 34 8.8 The Fundamental Solution A fundamental solution K(x) for the Laplace operator is a distribution satisfying K(x) = δ(x) (8.4) where δ is the delta distribution supported at x = 0. In order to solve (8.4), we should first observe that is symmetric in the variables x1, . . ., xn, and δ(x) is also radially symmetric (i.e., its value only depends on r = |x|). Thus, we try to solve (8.4) with a radially symmetric function K(x). Since δ(x) = 0 for x = 0, we see that (8.4) requires K to be harmonic for r > 0. For the radially symmetric function K, Laplace equation becomes (K = K(r)): ∂2K ∂r2 + n − 1 r ∂K ∂r = 0. (8.5) The general solution to (8.5) is K(r) = c1 + c2 log r if n = 2 c1 + c2r2−n if n ≥ 3. (8.6) After we determine c2, we find the fundamental solution for the Laplace operator: K(x) = 1 2π log r if n = 2 1 (2−n)ωn r2−n if n ≥ 3. • We can derive, (8.6) for any given n. For intance, when n = 3, we have: K + 2 r K = 0. Let K = 1 r w(r), K = 1 r w − 1 r2 w, K = 1 r w − 2 r2 w + 2 r3 w. Plugging these into , we obtain: 1 r w = 0, or w = 0. Thus, w = c1r + c2, K = 1 r w(r) = c1 + c2 r . See the similar problem, F’99, #2, where the fundamental solution for ( − I) is found in the process.
  • 35. Partial Differential Equations Igor Yanovsky, 2005 35 Find the Fundamental Solution of the Laplace Operator for n = 3 We found that starting with the Laplacian in R3 for a radially symmetric function K, K + 2 r K = 0, and letting K = 1 r w(r), we obtained the equation: w = c1r + c2, which implied: K = c1 + c2 r . We now find the constant c2 that ensures that for v ∈ C∞ 0 (R3 ), we have R3 K(|x|) v(x) dx = v(0). Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let Ω = Ω − B (0). K(|x|) is harmonic ( K(|x|) = 0) in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)): Ω K(|x|) v dx = ∂Ω K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS =0, since v≡0 for x≥R + ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS. lim →0 Ω K(|x|) v dx = Ω K(|x|) v dx. Since K(r) = c1 + c2 r is integrable at x = 0. On ∂B (0), K(|x|) = K( ). Thus, 5 ∂B (0) K(|x|) ∂v ∂n dS = K( ) ∂B (0) ∂v ∂n dS ≤ c1 + c2 4π 2 max ∇v → 0, as → 0. ∂B (0) v(x) ∂K(|x|) ∂n dS = ∂B (0) c2 2 v(x) dS = ∂B (0) c2 2 v(0) dS + ∂B (0) c2 2 [v(x) − v(0)] dS = c2 2 v(0) 4π 2 + 4πc2 max x∈∂B (0) v(x) − v(0) →0, (v is continuous) = 4πc2 v(0) → −v(0). Thus, taking 4πc2 = −1, i.e. c2 = − 1 4π , we obtain Ω K(|x|) v dx = lim →0 Ω K(|x|) v dx = v(0), that is K(r) = − 1 4πr is the fundamental solution of . 5 In R3 , for |x| = , K(|x|) = K( ) = c1 + c2 . ∂K(|x|) ∂n = − ∂K( ) ∂r = c2 2 , (since n points inwards.) n points toward 0 on the sphere |x| = (i.e., n = −x/|x|).
  • 36. Partial Differential Equations Igor Yanovsky, 2005 36 Show that the Fundamental Solution of the Laplace Operator is given by. K(x) = 1 2π log r if n = 2 1 (2−n)ωn r2−n if n ≥ 3. (8.7) Proof. For v ∈ C∞ 0 (Rn), we want to show Rn K(|x|) v(x) dx = v(0). Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let Ω = Ω − B (0). K(|x|) is harmonic ( K(|x|) = 0) in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)): Ω K(|x|) v dx = ∂Ω K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS =0, since v≡0 for x≥R + ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS. lim →0 Ω K(|x|) v dx = Ω K(|x|) v dx. Since K(r) is integrable at x = 0. On ∂B (0), K(|x|) = K( ). Thus, 6 ∂B (0) K(|x|) ∂v ∂n dS = K( ) ∂B (0) ∂v ∂n dS ≤ K( ) ωn n−1 max ∇v → 0, as → 0. ∂B (0) v(x) ∂K(|x|) ∂n dS = ∂B (0) − 1 ωn n−1 v(x) dS = ∂B (0) − 1 ωn n−1 v(0) dS + ∂B (0) − 1 ωn n−1 [v(x) − v(0)] dS = − 1 ωn n−1 v(0) ωn n−1 − max x∈∂B (0) v(x) − v(0) →0, (v is continuous) = −v(0). Thus, Ω K(|x|) v dx = lim →0 Ω K(|x|) v dx = v(0). 6 Note that for |x| = , K(|x|) = K( ) = 1 2π log if n = 2 1 (2−n)ωn 2−n if n ≥ 3. ∂K(|x|) ∂n = − ∂K( ) ∂r = − 1 2π if n = 2 1 ωn n−1 if n ≥ 3, = − 1 ωn n−1 , (since n points inwards.) n points toward 0 on the sphere |x| = (i.e., n = −x/|x|).
  • 37. Partial Differential Equations Igor Yanovsky, 2005 37 8.9 Representation Theorem Representation Theorem, n = 3. Let Ω be bounded domain in R3 and let n be the unit exterior normal to ∂Ω. Let u ∈ C2(Ω). Then the value of u at any point x ∈ Ω is given by the formula u(x) = 1 4π ∂Ω 1 |x − y| ∂u(y) ∂n − u(y) ∂ ∂n 1 |x − y| dS − 1 4π Ω u(y) |x − y| dy. (8.8) Proof. Consider the Green’s identity: Ω (u w − w u) dy = ∂Ω u ∂w ∂n − w ∂u ∂n dS, where w is the harmonic function w(y) = 1 |x − y| , which is singular at x ∈ Ω. In order to be able to apply Green’s identity, we consider a new domain Ω : Ω = Ω − B (x). Since u, w ∈ C2(Ω ), Green’s identity can be applied. Since w is harmonic ( w = 0) in Ω and since ∂Ω = ∂Ω ∪ ∂B (x), we have − Ω u(y) |x − y| dy = ∂Ω u(y) ∂ ∂n 1 |x − y| − 1 |x − y| ∂u(y) ∂n dS (8.9) + ∂B (x) u(y) ∂ ∂n 1 |x − y| − 1 |x − y| ∂u(y) ∂n dS. (8.10) We will show that formula (8.8) is obtained by letting → 0. lim →0 − Ω u(y) |x − y| dy = − Ω u(y) |x − y| dy. Since 1 |x − y| is integrable at x = y. The first integral on the right of (8.10) does not depend on . Hence, the limit as → 0 of the second integral on the right of (8.10) exists, and in order to obtain (8.8), need lim →0 ∂B (x) u(y) ∂ ∂n 1 |x − y| − 1 |x − y| ∂u(y) ∂n dS = 4πu(x). ∂B (x) u(y) ∂ ∂n 1 |x − y| − 1 |x − y| ∂u(y) ∂n dS = ∂B (x) 1 2 u(y) − 1 ∂u(y) ∂n dS = ∂B (x) 1 2 u(x) dS + ∂B (x) 1 2 [u(y) − u(x)] − 1 ∂u(y) ∂n dS = 4πu(x) + ∂B (x) 1 2 [u(y) − u(x)] − 1 ∂u(y) ∂n dS.
  • 38. Partial Differential Equations Igor Yanovsky, 2005 38 7 The last integral tends to 0 as → 0: ∂B (x) 1 2 [u(y) − u(x)] − 1 ∂u(y) ∂n dS ≤ 1 2 ∂B (x) u(y) − u(x) + 1 ∂B (x) ∂u(y) ∂n dS ≤ 4π max y∈∂B (x) u(y) − u(x) →0, (u continuous in Ω) + 4π max y∈Ω ∇u(y) →0, (|∇u| is finite) . 7 Note that for points y on ∂B (x), 1 |x − y| = 1 and ∂ ∂n 1 |x − y| = 1 2 .
  • 39. Partial Differential Equations Igor Yanovsky, 2005 39 Representation Theorem, n = 2. Let Ω be bounded domain in R2 and let n be the unit exterior normal to ∂Ω. Let u ∈ C2 (Ω). Then the value of u at any point x ∈ Ω is given by the formula u(x) = 1 2π Ω u(y) log|x − y| dy + 1 2π ∂Ω u(y) ∂ ∂n log |x − y| − log |x − y| ∂u(y) ∂n dS.(8.11) Proof. Consider the Green’s identity: Ω (u w − w u) dy = ∂Ω u ∂w ∂n − w ∂u ∂n dS, where w is the harmonic function w(y) = log |x − y|, which is singular at x ∈ Ω. In order to be able to apply Green’s identity, we consider a new domain Ω : Ω = Ω − B (x). Since u, w ∈ C2(Ω ), Green’s identity can be applied. Since w is harmonic ( w = 0) in Ω and since ∂Ω = ∂Ω ∪ ∂B (x), we have − Ω u(y) log |x − y| dy (8.12) = ∂Ω u(y) ∂ ∂n log |x − y| − log |x − y| ∂u(y) ∂n dS + ∂B (x) u(y) ∂ ∂n log |x − y| − log |x − y| ∂u(y) ∂n dS. We will show that formula (8.11) is obtained by letting → 0. lim →0 − Ω u(y) log|x − y| dy = − Ω u(y) log |x − y| dy. since log |x − y| is integrable at x = y. The first integral on the right of (8.12) does not depend on . Hence, the limit as → 0 of the second integral on the right of (8.12) exists, and in order to obtain (8.11), need lim →0 ∂B (x) u(y) ∂ ∂n log |x − y| − log |x − y| ∂u(y) ∂n dS = 2πu(x). ∂B (x) u(y) ∂ ∂n log |x − y| − log |x − y| ∂u(y) ∂n dS = ∂B (x) 1 u(y) − log ∂u(y) ∂n dS = ∂B (x) 1 u(x) dS + ∂B (x) 1 [u(y) − u(x)] − log ∂u(y) ∂n dS = 2πu(x) + ∂B (x) 1 [u(y) − u(x)] − log ∂u(y) ∂n dS.
  • 40. Partial Differential Equations Igor Yanovsky, 2005 40 8 The last integral tends to 0 as → 0: ∂B (x) 1 [u(y) − u(x)] − log ∂u(y) ∂n dS ≤ 1 ∂B (x) u(y) − u(x) + log ∂B (x) ∂u(y) ∂n dS ≤ 2π max y∈∂B (x) u(y) − u(x) →0, (u continuous in Ω) + 2π log max y∈Ω ∇u(y) →0, (|∇u| is finite) . 8 Note that for points y on ∂B (x), log |x − y| = log and ∂ ∂n log |x − y| = 1 .
  • 41. Partial Differential Equations Igor Yanovsky, 2005 41 Representation Theorems, n > 3 can be obtained in the same way. We use the Green’s identity with w(y) = 1 |x − y|n−2 , which is a harmonic function in Rn with a singularity at x. The fundamental solution for the Laplace operator is (r = |x|): K(x) = 1 2π log r if n = 2 1 (2−n)ωn r2−n if n ≥ 3. Representation Theorem. If Ω ∈ Rn is bounded, u ∈ C2 (Ω), and x ∈ Ω, then u(x) = Ω K(x − y) u(y) dy + ∂Ω u(y) ∂K(x − y) ∂n − K(x − y) ∂u(y) ∂n dS.(8.13) Proof. Consider the Green’s identity: Ω (u w − w u) dy = ∂Ω u ∂w ∂n − w ∂u ∂n dS, where w is the harmonic function w(y) = K(x − y), which is singular at y = x. In order to be able to apply Green’s identity, we consider a new domain Ω : Ω = Ω − B (x). Since u, K(x − y) ∈ C2(Ω ), Green’s identity can be applied. Since K(x − y) is harmonic ( K(x − y) = 0) in Ω and since ∂Ω = ∂Ω ∪ ∂B (x), we have − Ω K(x − y) u(y) dy = ∂Ω u(y) ∂K(x − y) ∂n − K(x − y) ∂u(y) ∂n dS (8.14) + ∂B (x) u(y) ∂K(x − y) ∂n − K(x − y) ∂u(y) ∂n dS.(8.15) We will show that formula (8.13) is obtained by letting → 0. lim →0 − Ω K(x − y) u(y) dy = − Ω K(x − y) u(y) dy. since K(x − y) is integrable at x = y. The first integral on the right of (8.15) does not depend on . Hence, the limit as → 0 of the second integral on the right of (8.15) exists, and in order to obtain (8.13), need lim →0 ∂B (x) u(y) ∂K(x − y) ∂n − K(x − y) ∂u(y) ∂n dS = −u(x).
  • 42. Partial Differential Equations Igor Yanovsky, 2005 42 ∂B (x) u(y) ∂K(x − y) ∂n − K(x − y) ∂u(y) ∂n dS = ∂B (x) u(y) ∂K( ) ∂n − K( ) ∂u(y) ∂n dS = ∂B (x) u(x) ∂K( ) ∂n dS + ∂B (x) ∂K( ) ∂n [u(y) − u(x)] − K( ) ∂u(y) ∂n dS = − 1 ωn n−1 ∂B (x) u(x) dS − 1 ωn n−1 ∂B (x) [u(y) − u(x)] dS − ∂B (x) K( ) ∂u(y) ∂n dS = − 1 ωn n−1 u(x)ωn n−1 −u(x) − 1 ωn n−1 ∂B (x) [u(y) − u(x)] dS − ∂B (x) K( ) ∂u(y) ∂n dS. 9 The last two integrals tend to 0 as → 0: − 1 ωn n−1 ∂B (x) [u(y) − u(x)] dS − ∂B (x) K( ) ∂u(y) ∂n dS ≤ 1 ωn n−1 max y∈∂B (x) u(y) − u(x) ωn n−1 →0, (u continuous in Ω) + K( ) max y∈Ω ∇u(y) ωn n−1 →0, (|∇u| is finite) . 8.10 Green’s Function and the Poisson Kernel With a slight change in notation, the Representation Theorem has the following special case. Theorem. If Ω ∈ Rn is bounded, u ∈ C2(Ω) C1(Ω) is harmonic, and ξ ∈ Ω, then u(ξ) = ∂Ω u(x) ∂K(x − ξ) ∂n − K(x − ξ) ∂u(x) ∂n dS. (8.16) Let ω(x) be any harmonic function in Ω, and for x, ξ ∈ Ω consider G(x, ξ) = K(x − ξ) + ω(x). If we use the Green’s identity (with u = 0 and ω = 0), we get: 0 = ∂Ω u ∂ω ∂n − ω ∂u ∂n ds. (8.17) Adding (8.16) and (8.17), we obtain: u(ξ) = ∂Ω u(x) ∂G(x, ξ) ∂n − G(x, ξ) ∂u(x) ∂n dS. (8.18) Suppose that for each ξ ∈ Ω we can find a function ωξ(x) that is harmonic in Ω and satisfies ωξ(x) = −K(x − ξ) for all x ∈ ∂Ω. Then G(x, ξ) = K(x − ξ) + ωξ(x) is a fundamental solution such that G(x, ξ) = 0 x ∈ ∂Ω. 9 Note that for points y on ∂B (x), K(x − y) = K( ) = 1 2π log if n = 2 1 (2−n)ωn 2−n if n ≥ 3. ∂K(x − y) ∂n = − ∂K( ) ∂r = − 1 2π if n = 2 1 ωn n−1 if n ≥ 3, = − 1 ωn n−1 , (since n points inwards.)
  • 43. Partial Differential Equations Igor Yanovsky, 2005 43 G is called the Green’s function and is useful in satisfying Dirichlet boundary conditions. The Green’s function is difficult to construct for a general domain Ω since it requires solving the Dirichlet problem ωξ = 0 in Ω, ωξ(x) = −K(x − ξ) for x ∈ ∂Ω, for each ξ ∈ Ω. From (8.18) we find 10 u(ξ) = ∂Ω u(x) ∂G(x, ξ) ∂n dS. Thus if we know that the Dirichlet problem has a solution u ∈ C2 (Ω), then we can calculate u from the Poisson integral formula (provided of course that we can compute G(x, ξ)). 10 If we did not assume u = 0 in our derivation, we would have (8.13) instead of (8.16), and an extra term in (8.17), which would give us a more general expression: u(ξ) = Ω G(x, ξ) u dx + ∂Ω u(x) ∂G(x, ξ) ∂n dS.
8.11 Properties of Harmonic Functions

Liouville's Theorem. A bounded harmonic function defined on all of $\mathbb{R}^n$ must be constant.

8.12 Eigenvalues of the Laplacian

Consider the equation
\[
\begin{cases} \Delta u + \lambda u = 0 & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega, \end{cases} \tag{8.19}
\]
where $\Omega$ is a bounded domain and $\lambda$ is a (complex) number. The values of $\lambda$ for which (8.19) admits a nontrivial solution $u$ are called the eigenvalues of $\Delta$ in $\Omega$, and the solution $u$ is an eigenfunction associated to the eigenvalue $\lambda$. (The convention $\Delta u + \lambda u = 0$ is chosen so that all eigenvalues $\lambda$ will be positive.)

Properties of the Eigenvalues and Eigenfunctions for (8.19):
1. The eigenvalues of (8.19) form a countable set $\{\lambda_n\}_{n=1}^{\infty}$ of positive numbers with $\lambda_n \to \infty$ as $n \to \infty$.
2. For each eigenvalue $\lambda_n$ there is a finite number (called the multiplicity of $\lambda_n$) of linearly independent eigenfunctions $u_n$.
3. The first (or principal) eigenvalue, $\lambda_1$, is simple, and $u_1$ does not change sign in $\Omega$.
4. Eigenfunctions corresponding to distinct eigenvalues are orthogonal.
5. The eigenfunctions may be used to expand certain functions on $\Omega$ in an infinite series.
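The properties above can be seen concretely on a model problem. The following numerical sketch is not part of the original text: it discretizes $-u'' = \lambda u$ on $(0,\pi)$ with Dirichlet conditions, where the exact eigenvalues are $\lambda_n = n^2$ with eigenfunctions $\sin(nx)$; the grid size and the finite-difference Laplacian are arbitrary illustrative choices.

```python
# Illustrative check of properties 1-4 on -u'' = lambda*u, u(0) = u(pi) = 0.
import numpy as np

N = 400                                  # number of interior grid points
h = np.pi / (N + 1)

# Tridiagonal matrix for -d^2/dx^2 with Dirichlet boundary conditions
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

evals, evecs = np.linalg.eigh(A)         # symmetric => real, ascending spectrum

print(evals[:5])                         # approximately 1, 4, 9, 16, 25
print(np.all(evals > 0))                 # True: all eigenvalues positive, increasing to infinity
u1 = evecs[:, 0]
print(np.all(u1 > 0) or np.all(u1 < 0))  # True: the first eigenfunction has one sign
# eigenvectors for distinct eigenvalues are orthogonal (up to rounding):
print(abs(np.dot(evecs[:, 0], evecs[:, 1])) < 1e-10)
```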
9 Heat Equation

The heat equation is
\[
u_t = k\,\Delta u \quad \text{for } x \in \Omega,\ t > 0, \tag{9.1}
\]
with initial and boundary conditions.

9.1 The Pure Initial Value Problem

9.1.1 Fourier Transform

If $u \in C_0^{\infty}(\mathbb{R}^n)$, define its Fourier transform $\hat{u}$ by
\[
\hat{u}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,u(x)\, dx \qquad \text{for } \xi \in \mathbb{R}^n.
\]
We can differentiate $\hat{u}$:
\[
\frac{\partial}{\partial \xi_j}\hat{u}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,(-ix_j)\,u(x)\, dx = \widehat{\big((-ix_j)u\big)}(\xi).
\]
Iterating this computation, we obtain
\[
\Big(\frac{\partial}{\partial \xi_j}\Big)^{k}\hat{u}(\xi) = \widehat{\big((-ix_j)^{k}u\big)}(\xi). \tag{9.2}
\]
Similarly, integrating by parts shows
\[
\widehat{\frac{\partial u}{\partial x_j}}(\xi)
= \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,\frac{\partial u}{\partial x_j}(x)\, dx
= -\frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} \frac{\partial}{\partial x_j}\big(e^{-ix\cdot\xi}\big)\,u(x)\, dx
= \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} (i\xi_j)\,e^{-ix\cdot\xi}\,u(x)\, dx
= (i\xi_j)\,\hat{u}(\xi).
\]
Iterating this computation, we obtain
\[
\widehat{\frac{\partial^{k} u}{\partial x_j^{k}}}(\xi) = (i\xi_j)^{k}\,\hat{u}(\xi). \tag{9.3}
\]
Formulas (9.2) and (9.3) express the fact that the Fourier transform interchanges differentiation and multiplication by the coordinate function.

9.1.2 Multi-Index Notation

A multi-index is a vector $\alpha = (\alpha_1, \dots, \alpha_n)$ where each $\alpha_i$ is a nonnegative integer. The order of the multi-index is $|\alpha| = \alpha_1 + \dots + \alpha_n$. Given a multi-index $\alpha$, define
\[
D^{\alpha}u = \frac{\partial^{|\alpha|}u}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}} = \partial_{x_1}^{\alpha_1}\cdots\partial_{x_n}^{\alpha_n}u.
\]
We can generalize (9.3) in multi-index notation:
\[
\widehat{D^{\alpha}u}(\xi)
= \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,D^{\alpha}u(x)\, dx
= \frac{(-1)^{|\alpha|}}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} D_x^{\alpha}\big(e^{-ix\cdot\xi}\big)\,u(x)\, dx
= \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} (i\xi)^{\alpha}\,e^{-ix\cdot\xi}\,u(x)\, dx
= (i\xi)^{\alpha}\,\hat{u}(\xi),
\]
where $(i\xi)^{\alpha} = (i\xi_1)^{\alpha_1}\cdots(i\xi_n)^{\alpha_n}$.
Parseval's theorem (Plancherel's theorem). Assume $u \in L^{1}(\mathbb{R}^n) \cap L^{2}(\mathbb{R}^n)$. Then $\hat{u}, u^{\vee} \in L^{2}(\mathbb{R}^n)$ and
\[
\|\hat{u}\|_{L^{2}(\mathbb{R}^n)} = \|u^{\vee}\|_{L^{2}(\mathbb{R}^n)} = \|u\|_{L^{2}(\mathbb{R}^n)},
\qquad\text{i.e.}\qquad
\int_{-\infty}^{\infty} |u(x)|^{2}\, dx = \int_{-\infty}^{\infty} |\hat{u}(\xi)|^{2}\, d\xi.
\]
Also,
\[
\int_{-\infty}^{\infty} u(x)\,\overline{v(x)}\, dx = \int_{-\infty}^{\infty} \hat{u}(\xi)\,\overline{\hat{v}(\xi)}\, d\xi.
\]
The properties (9.2) and (9.3) make it very natural to consider the Fourier transform on a subspace of $L^{1}(\mathbb{R}^n)$ called the Schwartz class of functions, $\mathcal{S}$, which consists of the smooth functions whose derivatives of all orders decay faster than any polynomial, i.e.
\[
\mathcal{S} = \big\{ u \in C^{\infty}(\mathbb{R}^n) : \text{for every } k \in \mathbb{N} \text{ and } \alpha \in \mathbb{N}^{n},\ |x|^{k}\,|D^{\alpha}u(x)| \text{ is bounded on } \mathbb{R}^n \big\}.
\]
For $u \in \mathcal{S}$, the Fourier transform $\hat{u}$ exists since $u$ decays rapidly at $\infty$.

Lemma. (i) If $u \in L^{1}(\mathbb{R}^n)$, then $\hat{u}$ is bounded. (ii) If $u \in \mathcal{S}$, then $\hat{u} \in \mathcal{S}$.

Define the inverse Fourier transform for $u \in L^{1}(\mathbb{R}^n)$:
\[
u^{\vee}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\,u(x)\, dx \qquad \text{for } \xi \in \mathbb{R}^n,
\]
or
\[
u(x) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\,\hat{u}(\xi)\, d\xi \qquad \text{for } x \in \mathbb{R}^n.
\]

Fourier Inversion Theorem (McOwen). If $u \in \mathcal{S}$, then $(\hat{u})^{\vee} = u$; that is,
\[
u(x) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\,\hat{u}(\xi)\, d\xi
= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^{2n}} e^{i(x-y)\cdot\xi}\,u(y)\, dy\, d\xi
= (\hat{u})^{\vee}(x).
\]

Fourier Inversion Theorem (Evans). Assume $u \in L^{2}(\mathbb{R}^n)$. Then $u = (\hat{u})^{\vee}$.
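Plancherel's theorem and the derivative rule (9.3) can be sanity-checked by approximating the transform with direct quadrature in the symmetric convention used here. This is an illustrative sketch only, not part of the original text; the Gaussian test function and the grid sizes are arbitrary choices.

```python
# Quadrature approximation of u^(xi) = (2*pi)^(-1/2) * int e^{-i x xi} u(x) dx in 1D.
import numpy as np

x = np.linspace(-15, 15, 3001)
dx = x[1] - x[0]
xi = np.linspace(-8, 8, 801)
dxi = xi[1] - xi[0]

u = np.exp(-x**2 / 2)
ux = -x * np.exp(-x**2 / 2)                       # u'(x), computed by hand

E = np.exp(-1j * np.outer(xi, x))                 # e^{-i x xi} sampled on the grid
u_hat = (E @ u) * dx / np.sqrt(2 * np.pi)
ux_hat = (E @ ux) * dx / np.sqrt(2 * np.pi)

# Plancherel: the two L^2 norms agree (both are close to sqrt(pi) here)
print(np.sum(np.abs(u)**2) * dx, np.sum(np.abs(u_hat)**2) * dxi)

# Derivative rule (9.3): hat(u')(xi) = (i xi) hat(u)(xi), up to quadrature/rounding error
print(np.max(np.abs(ux_hat - 1j * xi * u_hat)))

# Gaussian entry of the transform table: hat(e^{-x^2/2}) = e^{-xi^2/2}
print(np.max(np.abs(u_hat - np.exp(-xi**2 / 2))))
```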
  • 47. Partial Differential Equations Igor Yanovsky, 2005 47 Shift: Let u(x − a y ) = v(x), and determinte v(ξ): u(x − a)(ξ) = v(ξ) = 1 √ 2π R e−ixξ v(x) dx = 1 √ 2π R e−i(y+a)ξ u(y) dy = 1 √ 2π R e−iyξ e−iaξ u(y) dy = e−iaξ u(ξ). u(x − a)(ξ) = e−iaξ u(ξ). Delta function: δ(x)(ξ) = 1 √ 2π R e−ixξ δ(x) dx = 1 √ 2π , since u(x) = R δ(x − y) u(y) dy . δ(x − a)(ξ) = e−iaξ δ(ξ) = 1 √ 2π e−iaξ . (using result from ‘Shift’) Convolution: (f ∗ g)(x) = Rn f(x − y)g(y) dy, (f ∗ g)(ξ) = 1 (2π) n 2 Rn e−ix·ξ Rn f(x − y) g(y) dydx = 1 (2π) n 2 Rn Rn e−ix·ξ f(x − y) g(y) dydx = 1 (2π) n 2 Rn Rn e−i(x−y)·ξ f(x − y) dx e−iy·ξ g(y) dy = 1 (2π) n 2 Rn e−iz·ξ f(z) dz · Rn e−iy·ξ g(y) dy = (2π) n 2 f(ξ)g(ξ). (f ∗ g)(ξ) = (2π) n 2 f(ξ) g(ξ). Gaussian: (completing the square) e−x2 2 (ξ) = 1 √ 2π R e−ixξ e−x2 2 dx = 1 √ 2π R e−x2 +2ixξ 2 dx = 1 √ 2π R e−x2+2ixξ−ξ2 2 dx e−ξ2 2 = 1 √ 2π R e− (x+iξ)2 2 dx e−ξ2 2 = 1 √ 2π R e −y2 2 dy e−ξ2 2 = 1 √ 2π √ 2πe−ξ2 2 = e−ξ2 2 . e−x2 2 (ξ) = e−ξ2 2 . Multiplication by x: −ixu(ξ) = 1 √ 2π R e−ixξ − ixu(x) dx = d dξ u(ξ). xu(x)(ξ) = i d dξ u(ξ).
  • 48. Partial Differential Equations Igor Yanovsky, 2005 48 Multiplication of ux by x: (using the above result) xux(x)(ξ) = 1 √ 2π R e−ixξ xux(x) dx = 1 √ 2π e−ixξ xu ∞ −∞ = 0 − 1 √ 2π R (−iξ)e−ixξ x + e−ixξ u dx = 1 √ 2π iξ R e−ixξ x u dx − 1 √ 2π R e−ixξ u dx = iξ xu(x)(ξ) − u(ξ) = iξ i d dξ u(ξ) − u(ξ) = −ξ d dξ u(ξ) − u(ξ). xux(x)(ξ) = −ξ d dξ u(ξ) − u(ξ). Table of Fourier Transforms: 11 e−ax2 2 (ξ) = 1 √ a e−ξ2 2a , (Gaussian) eibxf(ax)(ξ) = 1 a f ξ − b a , f(x) = 1, |x| ≤ L 0, |x| > L, f(x)(ξ) = 1 √ 2π 2 sin(ξL) ξ , e−a|x|(ξ) = 1 √ 2π 2a a2 + ξ2 , (a > 0) 1 a2 + x2 (ξ) = √ 2π 2a e−a|ξ| , (a > 0) H(a − |x|)(ξ) = 2 π 1 ξ sinaξ, H(x)(ξ) = 1 √ 2π πδ(ξ) + 1 iξ , H(x) − H(−x) (ξ) = 2 π 1 iξ , (sign) 1(ξ) = √ 2πδ(ξ). 11 Results with marked with were taken from W. Strauss, where the definition of Fourier Transform is different. An extra multiple of 1√ 2π was added to each of these results.
9.1.3 Solution of the Pure Initial Value Problem

Consider the pure initial value problem
\[
\begin{cases} u_t = \Delta u & \text{for } t > 0,\ x \in \mathbb{R}^n, \\ u(x,0) = g(x) & \text{for } x \in \mathbb{R}^n. \end{cases} \tag{9.4}
\]
We take the Fourier transform of the heat equation in the $x$-variables:
\[
\widehat{(u_t)}(\xi,t) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,u_t(x,t)\, dx = \frac{\partial}{\partial t}\hat{u}(\xi,t),
\qquad
\widehat{\Delta u}(\xi,t) = \sum_{j=1}^{n} (i\xi_j)^{2}\,\hat{u}(\xi,t) = -|\xi|^{2}\,\hat{u}(\xi,t).
\]
The heat equation therefore becomes
\[
\frac{\partial}{\partial t}\hat{u}(\xi,t) = -|\xi|^{2}\,\hat{u}(\xi,t),
\]
which is an ordinary differential equation in $t$, with the solution $\hat{u}(\xi,t) = C e^{-|\xi|^{2}t}$. The initial condition $\hat{u}(\xi,0) = \hat{g}(\xi)$ gives $\hat{u}(\xi,t) = \hat{g}(\xi)\,e^{-|\xi|^{2}t}$, so
\[
u(x,t) = \Big(\hat{g}(\xi)\,e^{-|\xi|^{2}t}\Big)^{\vee}
= \frac{1}{(2\pi)^{n/2}}\, g * \big(e^{-|\xi|^{2}t}\big)^{\vee}
= \frac{1}{(2\pi)^{n/2}}\, g * \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{ix\cdot\xi - |\xi|^{2}t}\, d\xi
= \frac{1}{(4\pi^{2})^{n/2}}\, g * e^{-\frac{|x|^{2}}{4t}}\Big(\frac{\pi}{t}\Big)^{n/2}
= \frac{1}{(4\pi t)^{n/2}}\, g * e^{-\frac{|x|^{2}}{4t}}
= \frac{1}{(4\pi t)^{n/2}} \int_{\mathbb{R}^n} e^{-\frac{|x-y|^{2}}{4t}}\, g(y)\, dy,
\]
where we used the identity (Evans, p. 187) $\int_{\mathbb{R}^n} e^{ix\cdot\xi - |\xi|^{2}t}\, d\xi = e^{-\frac{|x|^{2}}{4t}}\big(\frac{\pi}{t}\big)^{n/2}$. Thus, the solution of the initial value problem (9.4) is
\[
u(x,t) = \int_{\mathbb{R}^n} K(x,y,t)\, g(y)\, dy = \frac{1}{(4\pi t)^{n/2}} \int_{\mathbb{R}^n} e^{-\frac{|x-y|^{2}}{4t}}\, g(y)\, dy.
\]
Uniqueness of solutions for the pure initial value problem fails: there are nontrivial solutions of (9.4) with $g = 0$; for example,
\[
u(x,t) = \sum_{k=0}^{\infty} \frac{1}{(2k)!}\,x^{2k}\,\frac{d^{k}}{dt^{k}}\,e^{-1/t^{2}}
\]
satisfies $u_t = u_{xx}$ for $t > 0$ with $u(x,0) = 0$. Thus, the pure initial value problem for the heat equation is not well-posed, as it was for the wave equation. However, the nontrivial solutions are unbounded as functions of $x$ when $t > 0$ is fixed; uniqueness can be regained by adding a boundedness condition on the solution.
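The kernel formula can be checked numerically against a case where the answer is known in closed form: for the Gaussian datum $g(x) = e^{-x^2/2}$ in one dimension, the solution of (9.4) is $u(x,t) = (1+2t)^{-1/2} e^{-x^2/(2(1+2t))}$. The sketch below is not from the original text; it evaluates the convolution with the heat kernel by quadrature and compares with the exact expression.

```python
# Sanity check of u(x,t) = (4*pi*t)^(-1/2) * int e^{-(x-y)^2/(4t)} g(y) dy in 1D.
import numpy as np

def heat_kernel_solution(x, t, g, y):
    """Evaluate u(x,t) by quadrature of the convolution with the heat kernel."""
    dy = y[1] - y[0]
    K = np.exp(-(x[:, None] - y[None, :])**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return K @ g(y) * dy

y = np.linspace(-30, 30, 6001)           # quadrature grid (g decays fast)
x = np.linspace(-5, 5, 101)
t = 0.7                                  # arbitrary positive time

g = lambda s: np.exp(-s**2 / 2)
u_num = heat_kernel_solution(x, t, g, y)
u_exact = np.exp(-x**2 / (2 * (1 + 2 * t))) / np.sqrt(1 + 2 * t)

print(np.max(np.abs(u_num - u_exact)))   # tiny: the kernel formula reproduces the exact solution
```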
9.1.4 Nonhomogeneous Equation

Consider the pure initial value problem with homogeneous initial condition:
\[
\begin{cases} u_t = \Delta u + f(x,t) & \text{for } t > 0,\ x \in \mathbb{R}^n, \\ u(x,0) = 0 & \text{for } x \in \mathbb{R}^n. \end{cases} \tag{9.5}
\]
Duhamel's principle gives the solution:
\[
u(x,t) = \int_0^t \int_{\mathbb{R}^n} \tilde{K}(x-y,\,t-s)\, f(y,s)\, dy\, ds.
\]

9.1.5 Nonhomogeneous Equation with Nonhomogeneous Initial Conditions

Combining the two solutions above, we find that the solution of the initial value problem
\[
\begin{cases} u_t = \Delta u + f(x,t) & \text{for } t > 0,\ x \in \mathbb{R}^n, \\ u(x,0) = g(x) & \text{for } x \in \mathbb{R}^n, \end{cases} \tag{9.6}
\]
is given by
\[
u(x,t) = \int_{\mathbb{R}^n} \tilde{K}(x-y,\,t)\, g(y)\, dy + \int_0^t \int_{\mathbb{R}^n} \tilde{K}(x-y,\,t-s)\, f(y,s)\, dy\, ds.
\]

9.1.6 The Fundamental Solution

Suppose we want to solve the Cauchy problem
\[
\begin{cases} u_t = Lu & x \in \mathbb{R}^n,\ t > 0, \\ u(x,0) = g(x) & x \in \mathbb{R}^n, \end{cases} \tag{9.7}
\]
where $L$ is a differential operator in $\mathbb{R}^n$ with constant coefficients. Suppose $K(x,t)$ is a distribution in $\mathbb{R}^n$ for each value of $t \ge 0$, $K$ is $C^1$ in $t$, and
\[
K_t - LK = 0, \qquad K(x,0) = \delta(x). \tag{9.8}
\]
We call $K$ a fundamental solution for the initial value problem. The solution of (9.7) is then given by convolution in the space variables:
\[
u(x,t) = \int_{\mathbb{R}^n} K(x-y,\,t)\, g(y)\, dy.
\]
  • 51. Partial Differential Equations Igor Yanovsky, 2005 51 For operators of the form ∂t −L, the fundamental solution of the initial value problem, K(x, t) as defined in (9.8), coincides with the “free space” fundamental solution, which satisfies ∂t − L K(x, t) = δ(x, t), provided we extend K(x, t) by zero to t < 0. For the heat equation, consider ˜K(x, t) = ⎧ ⎨ ⎩ 1 (4πt)n/2 e− |x|2 4t t > 0 0 t ≤ 0. (9.9) Notice that ˜K is smooth for (x, t) = (0, 0). ˜K defined as in (9.9), is the fundamental solution of the “free space” heat equation. Proof. We need to show: ∂t − K(x, t) = δ(x, t). (9.10) To verify (9.10) as distributions, we must show that for any v ∈ C∞ 0 (Rn+1): 14 Rn+1 ˜K(x, t) − ∂t − v dx dt = Rn+1 δ(x, t) v(x, t) dx dt ≡ v(0, 0). To do this, let us take > 0 and define ˜K (x, t) = ⎧ ⎨ ⎩ 1 (4πt)n/2 e− |x|2 4t t > 0 t ≤ . Then ˜K → ˜K as distributions, so it suffices to show that (∂t − ) ˜K → δ as distribu- tions. Now ˜K − ∂t − v dx dt = ∞ Rn ˜K(x, t) − ∂t − v(x, t) dx dt = − ∞ Rn ˜K(x, t) ∂tv(x, t) dx dt − ∞ Rn ˜K(x, t) v(x, t) dx dt = − Rn ˜K(x, t) v(x, t) dx t=∞ t= + ∞ Rn ∂t ˜K(x, t) v(x, t) dx dt − ∞ Rn ˜K(x, t) v(x, t) dx dt = ∞ Rn ∂t − ˜K(x, t) v(x, t) dx dt + Rn ˜K(x, ) v(x, ) dx. But for t > , (∂t − ) ˜K(x, t) = 0; moreover, since limt→0+ ˜K(x, t) = δ0(x) = δ(x), we have ˜K(x, ) → δ0(x) as → 0, so the last integral tends to v(0, 0). 14 Note, for the operator L = ∂/∂t, the adjoint operator is L∗ = −∂/∂t.
  • 52. Partial Differential Equations Igor Yanovsky, 2005 52 10 Schr¨odinger Equation Problem (F’96, #5). The Gauss kernel G(t, x, y) = 1 (4πt) 1 2 e− (x−y)2 4t is the fundamental solution of the heat equation, solving Gt = Gxx, G(0, x, y) = δ(x − y). By analogy with the heat equation, find the fundamental solution H(t, x, y) of the Schr¨odinger equation Ht = iHxx, H(0, x, y) = δ(x − y). Show that your expression H(x) is indeed the fundamental solution for the Schr¨odinger equation. You may use the following special integral ∞ −∞ e −ix2 4 dx = √ −i4π. Proof. • Remark: Consider the initial value problem for the Schr¨odinger equation ut = i u x ∈ Rn, t > 0, u(x, 0) = g(x) x ∈ Rn. If we formally replace t by it in the heat kernel, we obtain the Fundamental Solution of the Schr¨odinger Equation: 15 H(x, t) = 1 (4πit) n 2 e− |x|2 4it (x ∈ Rn , t = 0) u(x, t) = 1 (4πit) n 2 Rn e− |x−y|2 4it g(y) dy. In particular, the Schr¨odinger equation is reversible in time, whereas the heat equation is not. • Solution: We have already found the fundamental solution for the heat equation using the Fourier transform. For the Schr¨odinger equation is one dimension, we have ∂ ∂t u(ξ, t) = −iξ2 u(ξ, t), which is an ordinary differential equation in t, with the solution u(ξ, t) = Ce−iξ2t . The initial condition u(ξ, 0) = g(ξ) gives u(ξ, t) = g(ξ) e−iξ2t , u(x, t) = g(ξ) e−iξ2t ∨ = 1 √ 2π g ∗ e−iξ2t ∨ = 1 √ 2π g ∗ 1 √ 2π R e−iξ2t eix·ξ dξ = 1 2π g ∗ R eix·ξ−iξ2t dξ = (need some work) = = 1 √ 4πit g ∗ e− |x|2 4it = 1 √ 4πit R e− |x−y|2 4it g(y) dy. 15 Evans, p. 188, Example 3.
  • 53. Partial Differential Equations Igor Yanovsky, 2005 53 • For the Schr¨odinger equation, consider ˜Ψ(x, t) = ⎧ ⎨ ⎩ 1 (4πit)n/2 e− |x|2 4it t > 0 0 t ≤ 0. (10.1) Notice that ˜Ψ is smooth for (x, t) = (0, 0). ˜Ψ defined as in (10.1), is the fundamental solution of the Schr¨odinger equa- tion. We need to show: ∂t − i Ψ(x, t) = δ(x, t). (10.2) To verify (10.2) as distributions, we must show that for any v ∈ C∞ 0 (Rn+1): 16 Rn+1 ˜Ψ(x, t) − ∂t − i v dx dt = Rn+1 δ(x, t) v(x, t) dx dt ≡ v(0, 0). To do this, let us take > 0 and define ˜Ψ (x, t) = ⎧ ⎨ ⎩ 1 (4πit)n/2 e− |x|2 4it t > 0 t ≤ . Then ˜Ψ → ˜Ψ as distributions, so it suffices to show that (∂t − i )˜Ψ → δ as distribu- tions. Now ˜Ψ − ∂t − i v dx dt = ∞ Rn ˜Ψ(x, t) − ∂t − i v(x, t) dx dt = ∞ Rn ∂t − i ˜Ψ(x, t) v(x, t) dx dt + Rn ˜Ψ(x, ) v(x, ) dx. But for t > , (∂t − i )˜Ψ(x, t) = 0; moreover, since limt→0+ ˜Ψ(x, t) = δ0(x) = δ(x), we have ˜Ψ(x, ) → δ0(x) as → 0, so the last integral tends to v(0, 0). 16 Note, for the operator L = ∂/∂t, the adjoint operator is L∗ = −∂/∂t.
11 Problems: Quasilinear Equations

Problem (F'90, #7). Use the method of characteristics to find the solution of the first order partial differential equation
\[
x^2 u_x + xy\,u_y = u^2
\]
which passes through the curve $u = 1$, $x = y^2$. Determine where this solution becomes singular.

Proof. We have the condition $u(x = y^2) = 1$. $\Gamma$ is parametrized by $\Gamma: (s^2, s, 1)$.
\[
\frac{dx}{dt} = x^2 \;\Rightarrow\; x = \frac{1}{-t - c_1(s)} \;\Rightarrow\; x(s,0) = \frac{1}{-c_1(s)} = s^2 \;\Rightarrow\; x = \frac{1}{-t + \frac{1}{s^2}} = \frac{s^2}{1 - ts^2},
\]
\[
\frac{dy}{dt} = xy \;\Rightarrow\; \frac{dy}{dt} = \frac{s^2 y}{1 - ts^2} \;\Rightarrow\; y = \frac{c_2(s)}{1 - ts^2} \;\Rightarrow\; y(s,0) = c_2(s) = s \;\Rightarrow\; y = \frac{s}{1 - ts^2},
\]
\[
\frac{dz}{dt} = z^2 \;\Rightarrow\; z = \frac{1}{-t - c_3(s)} \;\Rightarrow\; z(s,0) = \frac{1}{-c_3(s)} = 1 \;\Rightarrow\; z = \frac{1}{1 - t}.
\]
Thus,
\[
\frac{x}{y} = s \;\Rightarrow\; y = \frac{x/y}{1 - t\,\frac{x^2}{y^2}} \;\Rightarrow\; t = \frac{y^2}{x^2} - \frac{1}{x},
\]
\[
\Rightarrow\; u(x,y) = \frac{1}{1 - \frac{y^2}{x^2} + \frac{1}{x}} = \frac{x^2}{x^2 + x - y^2}.
\]
The solution becomes singular when $y^2 = x^2 + x$. It can be checked that the solution satisfies the PDE and $u(x = y^2) = \frac{y^4}{y^4 + y^2 - y^2} = 1$.

Problem (S'91, #7). Solve the first order PDE
\[
f_x + x^2 y f_y + f = 0, \qquad f(x = 0, y) = y^2,
\]
using the method of characteristics.

Proof. Rewrite the equation as
\[
u_x + x^2 y\,u_y = -u, \qquad u(0,y) = y^2.
\]
$\Gamma$ is parametrized by $\Gamma: (0, s, s^2)$.
\[
\frac{dx}{dt} = 1 \;\Rightarrow\; x = t, \qquad
\frac{dy}{dt} = x^2 y \;\Rightarrow\; \frac{dy}{dt} = t^2 y \;\Rightarrow\; y = s\,e^{t^3/3}, \qquad
\frac{dz}{dt} = -z \;\Rightarrow\; z = s^2 e^{-t}.
\]
Thus, $x = t$ and $s = y e^{-t^3/3} = y e^{-x^3/3}$, and
\[
u(x,y) = \big(y e^{-x^3/3}\big)^2 e^{-x} = y^2 e^{-\frac{2}{3}x^3 - x}.
\]
The solution satisfies both the PDE and the initial condition.
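The "it can be checked" steps above are easy to automate. The following sympy sketch is not part of the original text; it verifies both answers symbolically.

```python
# Symbolic verification of the two solutions above.
import sympy as sp

x, y = sp.symbols('x y')

# F'90 #7:  x^2*u_x + x*y*u_y = u^2,  with u = 1 on the curve x = y^2
u = x**2 / (x**2 + x - y**2)
pde = x**2 * sp.diff(u, x) + x * y * sp.diff(u, y) - u**2
print(sp.simplify(pde))                    # 0  => the PDE is satisfied
print(sp.simplify(u.subs(x, y**2)))        # 1  => the data on x = y^2 is matched

# S'91 #7:  f_x + x^2*y*f_y + f = 0,  f(0, y) = y^2
f = y**2 * sp.exp(-sp.Rational(2, 3) * x**3 - x)
pde2 = sp.diff(f, x) + x**2 * y * sp.diff(f, y) + f
print(sp.simplify(pde2))                   # 0
print(f.subs(x, 0))                        # y**2
```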
  • 55. Partial Differential Equations Igor Yanovsky, 2005 55 Problem (S’92, #1). Consider the Cauchy problem ut = xux − u + 1 − ∞ < x < ∞, t ≥ 0 u(x, 0) = sinx − ∞ < x < ∞ and solve it by the method of characteristics. Discuss the properties of the solution; in particular investigate the behavior of |ux(·, t)|∞ for t → ∞. Proof. Γ is parametrized by Γ : (s, 0, sins). We have dx dt = −x ⇒ x = se−t , dy dt = 1 ⇒ y = t, dz dt = 1 − z ⇒ z = 1 − 1 − sins et . Thus, t = y, s = xey, and u(x, y) = 1 − 1 ey + sin(xey) ey . It can be checked that the solution satisfies the PDE and the initial condition. As t → ∞, u(x, t) → 1. Also, |ux(x, y)|∞ = | cos(xey )|∞ = 1. ux(x, y) oscillate between −1 and 1. If x = 0, ux = 1. Problem (W’02, #6). Solve the Cauchy problem ut + u2 ux = 0, t > 0, u(0, x) = 2 + x. Proof. Solved
  • 56. Partial Differential Equations Igor Yanovsky, 2005 56 Problem (S’97, #1). Find the solution of the Burgers’ equation ut + uux = −x, t ≥ 0 u(x, 0) = f(x), −∞ < x < ∞. Proof. Γ is parameterized by Γ : (s, 0, f(s)). dx dt = z, dy dt = 1 ⇒ y = t, dz dt = −x. Note that we have a coupled system: ˙x = z, ˙z = −x, which can be written as a second order ODE: ¨x + x = 0, x(s, 0) = s, ˙x(s, 0) = z(0) = f(s). Solving the equation, we get x(s, t) = s cos t + f(s) sint, and thus, z(s, t) = ˙x(t) = −s sint + f(s) cos t. x = s cos y + f(s) siny, u = −s sin y + f(s) cos y. ⇒ x cos y = s cos2 y + f(s) siny cos y, u siny = −s sin2 y + f(s) cos y siny. ⇒ x cos y − u siny = s(cos2 y + sin2 y) = s. ⇒ u(x, y) = f(x cosy − u siny) cosy − (x cos y − u siny) siny. Problem (F’98, #2). Solve the partial differential equation uy − u2 ux = 3u, u(x, 0) = f(x) using method of characteristics. (Hint: find a parametric representation of the solu- tion.) Proof. Γ is parameterized by Γ : (s, 0, f(s)). dx dt = −z2 ⇒ dx dt = −f2 (s)e6t ⇒ x = − 1 6 f2 (s)e6t + 1 6 f2 (s) + s, dy dt = 1 ⇒ y = t, dz dt = 3z ⇒ z = f(s)e3t .
  • 57. Partial Differential Equations Igor Yanovsky, 2005 57 Thus, x = −1 6 f2(s)e6y + 1 6 f2(s) + s, f(s) = z e3y ⇒ x = − 1 6 z2 e6y e6y + 1 6 z2 e6y + s = z2 6e6y − z2 6 + s, ⇒ s = x − z2 6e6y + z2 6 . ⇒ z = f x − z2 6e6y + z2 6 e3y . ⇒ u(x, y) = f x − u2 6e6y + u2 6 e3y .
  • 58. Partial Differential Equations Igor Yanovsky, 2005 58 Problem (S’99, #1) Modified Problem. a) Solve ut + u3 3 x = 0 (11.1) for t > 0, −∞ < x < ∞ with initial data u(x, 0) = h(x) = −a(1 − ex), x < 0 −a(1 − e−x), x > 0 where a > 0 is constant. Solve until the first appearance of discontinuous derivative and determine that critical time. b) Consider the equation ut + u3 3 x = −cu. (11.2) How large does the constant c > 0 has to be, so that a smooth solution (with no discon- tinuities) exists for all t > 0? Explain. Proof. a) Characteristic form: ut + u2ux = 0. Γ : (s, 0, h(s)). dx dt = z2 , dy dt = 1, dz dt = 0. x = h(s)2 t + s, y = t, z = h(s). u(x, y) = h(x − u2 y) (11.3) The characteristic projection in the xt-plane17 passing through the point (s, 0) is the line x = h(s)2 t + s along which u has the constant value u = h(s). The derivative of the initial data is discontinuous, and that leads to a rarefaction-like behavior at t = 0. However, if the question meant to ask to determine the first time when a shock forms, we proceed as follows. Two characteristics x = h(s1)2t + s1 and x = h(s2)2t + s2 intersect at a point (x, t) with t = − s2 − s1 h(s2)2 − h(s1)2 . From (11.3), we have ux = h (s)(1 − 2uuxt) ⇒ ux = h (s) 1 + 2h(s)h (s)t Hence for 2h(s)h (s) < 0, ux becomes infinite at the positive time t = −1 2h(s)h (s) . The smallest t for which this happens corresponds to the value s = s0 at which h(s)h (s) has a minimum (i.e.−h(s)h (s) has a maximum). At time T = −1/(2h(s0)h (s0)) the 17 y and t are interchanged here
  • 59. Partial Differential Equations Igor Yanovsky, 2005 59 solution u experiences a “gradient catastrophe”. Therefore, need to find a minimum of f(x) = 2h(x)h (x) = −2a(1 − ex ) · aex −2a(1 − e−x) · (−ae−x) = −2a2 ex (1 − ex ), x < 0 2a2e−x(1 − e−x), x > 0 f (x) = −2a2 ex (1 − 2ex ), x < 0 −2a2e−x(1 − 2e−x), x > 0 = 0 ⇒ x = ln(1 2) = − ln(2), x < 0 x = ln(2), x > 0 ⇒ f(ln(1 2)) = −2a2 eln( 1 2 ) (1 − eln( 1 2 ) ) = −2a2 (1 2 )(1 2) = −a2 2 , x < 0 f(ln(2)) = 2a2(1 2 )(1 − 1 2 ) = a2 2 , x > 0 ⇒ t = − 1 min{2h(s)h (s)} = 2 a2 Proof. b) Characteristic form: ut + u2ux = −cu. Γ : (s, 0, h(s)). dx dt = z2 = h(s)2 e−2ct ⇒ x = s + 1 2c h(s)2 (1 − e−2ct ), dy dt = 1 ⇒ y = t, dz dt = −cz ⇒ z = h(s)e−ct (⇒ h(s) = uecy ). Solving for s and t in terms of x and y, we get: t = y, s = x − 1 2c h(s)2 (1 − e−2cy ). Thus, u(x, y) = h x − 1 2c u2 e2cy (1 − e−2cy ) · e−cy . ux = h (s)e−cy · (1 − 1 c uuxe2cy (1 − e−2cy )), ux = h (s)e−cy 1 + 1 c h (s)ecyu · (1 − e−2cy) = h (s)e−cy 1 + 1 c h (s)h(s)(1 − e−2cy) . Thus, c > 0 that would allow a smooth solution to exist for all t > 0 should satisfy 1 + 1 c h (s)h(s)(1 − e−2cy ) = 0. We can perform further calculations taking into account the result from part (a): min{2h(s)h (s)} = − a2 2 .
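The critical time $t = 2/a^2$ found in part (a) can be cross-checked numerically: sampling $2h(s)h'(s)$ on a grid, its minimum should be $-a^2/2$, so the first shock forms at $t = -1/\min\{2hh'\} = 2/a^2$. A small sketch, not in the original text; the value of $a$ is an arbitrary choice.

```python
# Numerical cross-check of the breaking time for h(x) = -a(1 - e^{-|x|}) (both branches).
import numpy as np

a = 1.3                                    # any constant a > 0
x = np.linspace(-10, 10, 200001)
h = -a * (1 - np.exp(-np.abs(x)))          # equals -a(1-e^x) for x<0 and -a(1-e^{-x}) for x>0
hp = -a * np.sign(x) * np.exp(-np.abs(x))  # h'(x) away from the kink at x = 0

f = 2 * h * hp
print(f.min(), -a**2 / 2)                  # the minimum of 2*h*h' is -a^2/2
print(-1 / f.min(), 2 / a**2)              # first shock time t = 2/a^2
```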
  • 60. Partial Differential Equations Igor Yanovsky, 2005 60 Problem (S’99, #1). Original Problem. a). Solve ut + u3 x 3 = 0 (11.4) for t > 0, −∞ < x < ∞ with initial data u(x, 0) = h(x) = −a(1 − ex ), x < 0 −a(1 − e−x), x > 0 where a > 0 is constant. Proof. Rewrite the equation as F(x, y, u, ux, uy) = u3 x 3 + uy = 0, F(x, y, z, p, q) = p3 3 + q = 0. Γ is parameterized by Γ : (s, 0, h(s), φ(s), ψ(s)). We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0, φ(s)3 3 + ψ(s) = 0, ψ(s) = − φ(s)3 3 . • h (s) = φ(s)f (s) + ψ(s)g (s) aes = φ(s), x < 0 −ae−s = φ(s), x > 0 ⇒ ψ(s) = −a3e3s 3 , x < 0 ψ(s) = a3e−3s 3 , x > 0 Therefore, now Γ is parametrized by Γ : (s, 0, −a(1 − es ), aes , −a3e3s 3 ), x < 0 Γ : (s, 0, −a(1 − e−s ), −ae−s , a3e−3s 3 ), x > 0 dx dt = Fp = p2 = a2 e2s a2 e−2s ⇒ x(s, t) = a2 e2s t + c4(s) a2 e−2s t + c5(s) ⇒ x = a2 e2s t + s a2 e−2s t + s dy dt = Fq = 1 ⇒ y(s, t) = t + c1(s) ⇒ y = t dz dt = pFp + qFq = p3 + q = a3 e3s − a3e3s 3 = 2 3 a3 e3s , x < 0 −a3e−3s + a3e−3s 3 = −2 3 a3e−3s, x > 0 ⇒ z(s, t) = 2 3 a3 e3s t + c6(s), x < 0 −2 3 a3e−3st + c7(s), x > 0 ⇒ z = 2 3 a3 e3s t − a(1 − es ), x < 0 −2 3 a3e−3st − a(1 − e−s), x > 0 dp dt = −Fx − Fzp = 0 ⇒ p(s, t) = c2(s) ⇒ p = aes , x < 0 −ae−s, x > 0
  • 61. Partial Differential Equations Igor Yanovsky, 2005 61 dq dt = −Fy − Fzq = 0 ⇒ q(s, t) = c3(s) ⇒ q = −a3e3s 3 , x < 0 a3e−3s 3 , x > 0 Thus, u(x, y) = 2 3a3e3sy − a(1 − es), x < 0 −2 3 a3 e−3s y − a(1 − e−s ), x > 0 where s is defined as x = a2e2sy + s, x < 0 a2 e−2s y + s, x > 0. b). Solve the equation ut + u3 x 3 = −cu. (11.5) Proof. Rewrite the equation as F(x, y, u, ux, uy) = u3 x 3 + uy + cu = 0, F(x, y, z, p, q) = p3 3 + q + cz = 0. Γ is parameterized by Γ : (s, 0, h(s), φ(s), ψ(s)). We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0, φ(s)3 3 + ψ(s) + ch(s) = 0, ψ(s) = − φ(s)3 3 − ch(s) = − φ(s)3 3 + ca(1 − ex ), x < 0 −φ(s)3 3 + ca(1 − e−x), x > 0 • h (s) = φ(s)f (s) + ψ(s)g (s) aes = φ(s), x < 0 −ae−s = φ(s), x > 0 ⇒ ψ(s) = −a3e3s 3 + ca(1 − ex ), x < 0 ψ(s) = a3e−3s 3 + ca(1 − e−x), x > 0 Therefore, now Γ is parametrized by Γ : (s, 0, −a(1 − es), aes, −a3e3s 3 + ca(1 − ex), x < 0 Γ : (s, 0, −a(1 − e−s), −ae−s, a3e−3s 3 + ca(1 − e−x), x > 0
  • 62. Partial Differential Equations Igor Yanovsky, 2005 62 dx dt = Fp = p2 dy dt = Fq = 1 dz dt = pFp + qFq = p3 + q dp dt = −Fx − Fzp = −cp dq dt = −Fy − Fzq = −cq We can proceed solving the characteristic equations with initial conditions above.
  • 63. Partial Differential Equations Igor Yanovsky, 2005 63 Problem (S’95, #7). a) Solve the following equation, using characteristics, ut + u3 ux = 0, u(x, 0) = a(1 − ex ), for x < 0 −a(1 − e−x), for x > 0 where a > 0 is a constant. Determine the first time when a shock forms. Proof. a) Γ is parameterized by Γ : (s, 0, h(s)). dx dt = z3 , dy dt = 1, dz dt = 0. x = h(s)3 t + s, y = t, z = h(s). u(x, y) = h(x − u3 y) (11.6) The characteristic projection in the xt-plane18 passing through the point (s, 0) is the line x = h(s)3 t + s along which u has a constant value u = h(s). Characteristics x = h(s1)3t+s1 and x = h(s2)3t+s2 intersect at a point (x, t) with t = − s2 − s1 h(s2)3 − h(s1)3 . From (11.6), we have ux = h (s)(1 − 3u2 uxt) ⇒ ux = h (s) 1 + 3h(s)2h (s)t Hence for 3h(s)2h (s) < 0, ux becomes infinite at the positive time t = −1 3h(s)2h (s) . The smallest t for which this happens corresponds to the value s = s0 at which h(s)2h (s) has a minimum (i.e.−h(s)2h (s) has a maximum). At time T = −1/(3h(s0)2h (s0)) the solution u experiences a “gradient catastrophe”. Therefore, need to find a minimum of f(x) = 3h(x)2 h (x) = −3a2 (1 − ex )2 aex = −3a3 ex (1 − ex )2 , x < 0 −3a2 (1 − e−x )2 ae−x = −3a3 e−x (1 − e−x )2 , x > 0 f (x) = −3a3 ex (1 − ex )2 − ex 2(1 − ex )ex = −3a3 ex (1 − ex )(1 − 3ex ), x < 0 −3a3 − e−x (1 − e−x )2 + e−x 2(1 − e−x )e−x = −3a3 e−x (1 − e−x )(−1 + 3e−x ), x > 0 = 0 The zeros of f (x) are x = 0, x = − ln 3, x < 0, x = 0, x = ln 3, x > 0. We check which ones give the minimum of f(x) : ⇒ f(0) = −3a3 , f(− ln3) = −3a3 1 3 (1 − 1 3)2 = −4a3 9 , x < 0 f(0) = −3a3, f(ln3) = −3a3 1 3 (1 − 1 3)2 = −4a3 9 , x > 0 18 y and t are interchanged here
  • 64. Partial Differential Equations Igor Yanovsky, 2005 64 ⇒ t = − 1 min{3h(s)2h (s)} = − 1 min f(s) = 1 3a3 .
  • 65. Partial Differential Equations Igor Yanovsky, 2005 65 b) Now consider ut + u3 ux + cu = 0 with the same initial data and a positive constant c. How large does c need to be in order to prevent shock formation? b) Characteristic form: ut + u3ux = −cu. Γ : (s, 0, h(s)). dx dt = z3 = h(s)3 e−3ct ⇒ x = s + 1 3c h(s)3 (1 − e−3ct ), dy dt = 1 ⇒ y = t, dz dt = −cz ⇒ z = h(s)e−ct (⇒ h(s) = uecy ). ⇒ z(s, t) = h x − 1 3c h(s)3 (1 − e−3ct ) e−ct , ⇒ u(x, y) = h x − 1 3c u3 e3cy (1 − e−3cy ) e−cy . ux = h (s) · e−cy · 1 − 1 c u2 uxe3cy (1 − e−3cy ) , ux = h (s)e−cy 1 + 1 c h (s)u2e2cy(1 − e−3cy) = h (s)e−cy 1 + 1 c h (s)h(s)2(1 − e−3cy) . Thus, we need 1 + 1 c h (s)h(s)2 (1 − e−3cy ) = 0. We can perform further calculations taking into account the result from part (a): min{3h(s)2 h (s)} = −3a3 .
  • 66. Partial Differential Equations Igor Yanovsky, 2005 66 Problem (F’99, #4). Consider the Cauchy problem uy + a(x)ux = 0, u(x, 0) = h(x). Give an example of an (unbounded) smooth a(x) for which the solution of the Cauchy problem is not unique. Proof. Γ is parameterized by Γ : (s, 0, h(s)). dx dt = a(x) ⇒ x(t) − x(0) = t 0 a(x)dt ⇒ x = t 0 a(x)dt + s, dy dt = 1 ⇒ y(s, t) = t + c1(s) ⇒ y = t, dz dt = 0 ⇒ z(s, t) = c2(s) ⇒ z = h(s). Thus, u(x, t) = h x − y 0 a(x)dy Problem (F’97, #7). a) Solve the Cauchy problem ut − xuux = 0 − ∞ < x < ∞, t ≥ 0, u(x, 0) = f(x) − ∞ < x < ∞. b) Find a class of initial data such that this problem has a global solution for all t. Compute the critical time for the existence of a smooth solution for initial data, f, which is not in the above class. Proof. a) Γ is parameterized by Γ : (s, 0, f(s)). dx dt = −xz ⇒ dx dt = −xf(s) ⇒ x = se−f(s)t , dy dt = 1 ⇒ y = t, dz dt = 0 ⇒ z = f(s). ⇒ z = f xef(s)t , ⇒ u(x, y) = f xeuy . Check: ux = f (s) · (euy + xeuy uxy) uy = f (s) · xeuy(uyy + u) ⇒ ux − f (s)xeuy uxy = f (s)euy uy − f (s)xeuyuyy = f (s)xeuyu ⇒ ux = f (s)euy 1−f (s)xyeuy uy = f (s)euyxu 1−f (s)xyeuy ⇒ uy − xuux = f (s)euyxu 1 − f (s)xyeuy − xu f (s)euy 1 − f (s)xyeuy = 0. u(x, 0) = f(x).
  • 67. Partial Differential Equations Igor Yanovsky, 2005 67 b) The characteristics would intersect when 1 − f (s)xyeuy = 0. Thus, tc = 1 f (s)xeutc .
  • 68. Partial Differential Equations Igor Yanovsky, 2005 68 Problem (F’96, #6). Find an implicit formula for the solution u of the initial-value problem ut = (2x − 1)tux + sin(πx) − t, u(x, t = 0) = 0. Evaluate u explicitly at the point (x = 0.5, t = 2). Proof. Rewrite the equation as uy + (1 − 2x)yux = sin(πx) − y. Γ is parameterized by Γ : (s, 0, 0). dx dt = (1 − 2x)y = (1 − 2x)t ⇒ x = 1 2 (2s − 1)e−t2 + 1 2 , ⇒ s = (x − 1 2 )et2 + 1 2 , dy dt = 1 ⇒ y = t, dz dt = sin(πx) − y = sin π 2 (2s − 1)e−t2 + π 2 − t. ⇒ z(s, t) = t 0 sin π 2 (2s − 1)e−t2 + π 2 − t dt + z(s, 0), z(s, t) = t 0 sin π 2 (2s − 1)e−t2 + π 2 − t dt. ⇒ u(x, y) = y 0 sin π 2 (2s − 1)e−y2 + π 2 − y dy = y 0 sin π 2 (2x − 1)ey2 e−y2 + π 2 − y dy = y 0 sin π 2 (2x − 1) + π 2 − y dy = y 0 sin(πx) − y dy, ⇒ u(x, y) = y sin(πx) − y2 2 . Note: This solution does not satisfy the PDE. Problem (S’90, #8). Consider the Cauchy problem ut = xux − u, −∞ < x < ∞, t ≥ 0, u(x, 0) = f(x), f(x) ∈ C∞ . Assume that f ≡ 0 for |x| ≥ 1. Solve the equation by the method of characteristics and discuss the behavior of the solution. Proof. Rewrite the equation as uy − xux = −u, Γ is parameterized by Γ : (s, 0, f(s)). dx dt = −x ⇒ x = se−t , dy dt = 1 ⇒ y = t, dz dt = −z ⇒ z = f(s)e−t . ⇒ u(x, y) = f(xey )e−y .
The solution satisfies the PDE and the initial condition. As $y \to +\infty$, $u \to 0$. Since $f \equiv 0$ for $|x| \ge 1$, we have $u = 0$ for $|x e^{y}| \ge 1$, i.e. $u = 0$ for $|x| \ge \frac{1}{e^{y}}$.
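Both conclusions above can be confirmed symbolically: the S'90 #8 answer solves its PDE and initial condition, while the F'96 #6 candidate on the earlier page indeed leaves a nonzero residual, as noted there. A sympy sketch, not part of the original text:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# S'90 #8:  u_y - x*u_x = -u  (i.e. u_t = x*u_x - u),  u(x,0) = f(x)
u = f(x * sp.exp(y)) * sp.exp(-y)
res = sp.diff(u, y) - x * sp.diff(u, x) + u
print(sp.simplify(res))          # 0  => u = f(x e^y) e^{-y} solves the PDE
print(u.subs(y, 0))              # f(x)

# F'96 #6: the candidate u = y*sin(pi*x) - y^2/2 is flagged above as NOT solving
# u_y + (1 - 2x)*y*u_x = sin(pi*x) - y; its residual is indeed nonzero:
v = y * sp.sin(sp.pi * x) - y**2 / 2
res2 = sp.diff(v, y) + (1 - 2 * x) * y * sp.diff(v, x) - sp.sin(sp.pi * x) + y
print(sp.simplify(res2))         # nonzero => confirms the note on that page
```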
Problem (F'02, #4). Consider the nonlinear hyperbolic equation
\[
u_y + u\,u_x = 0, \qquad -\infty < x < \infty.
\]
a) Find a smooth solution to this equation for the initial condition $u(x,0) = x$.
b) Describe the breakdown of smoothness for the solution if $u(x,0) = -x$.

Proof. a) $\Gamma$ is parametrized by $\Gamma: (s, 0, s)$.
\[
\frac{dx}{dt} = z = s \;\Rightarrow\; x = st + s \;\Rightarrow\; s = \frac{x}{t+1} = \frac{x}{y+1},
\qquad
\frac{dy}{dt} = 1 \;\Rightarrow\; y = t,
\qquad
\frac{dz}{dt} = 0 \;\Rightarrow\; z = s.
\]
\[
\Rightarrow\; u(x,y) = \frac{x}{y+1};
\]
the solution is smooth for all positive time $y$.

b) $\Gamma$ is parametrized by $\Gamma: (s, 0, -s)$.
\[
\frac{dx}{dt} = z = -s \;\Rightarrow\; x = -st + s \;\Rightarrow\; s = \frac{x}{1-t} = \frac{x}{1-y},
\qquad
\frac{dy}{dt} = 1 \;\Rightarrow\; y = t,
\qquad
\frac{dz}{dt} = 0 \;\Rightarrow\; z = -s.
\]
\[
\Rightarrow\; u(x,y) = \frac{x}{y-1};
\]
the solution blows up at time $y = 1$.
  • 71. Partial Differential Equations Igor Yanovsky, 2005 71 Problem (F’97, #4). Solve the initial-boundary value problem ut + (x + 1)2 ux = x for x > 0, t > 0 u(x, 0) = f(x) 0 < x < +∞ u(0, t) = g(t) 0 < t < +∞. Proof. Rewrite the equation as uy + (x + 1)2 ux = x for x > 0, y > 0 u(x, 0) = f(x) 0 < x < +∞ u(0, y) = g(y) 0 < y < +∞. • For region I, we solve the following characteristic equations with Γ is parameterized 19 by Γ : (s, 0, f(s)). dx dt = (x + 1)2 ⇒ x = − s + 1 (s + 1)t − 1 − 1, dy dt = 1 ⇒ y = t, dz dt = x = − s + 1 (s + 1)t − 1 − 1, ⇒ z = −ln|(s + 1)t − 1| − t + c1(s), ⇒ z = −ln|(s + 1)t − 1| − t + f(s). In region I, characteristics are of the form x = − s + 1 (s + 1)y − 1 − 1. Thus, region I is bounded above by the line x = − 1 y − 1 − 1, or y = x x + 1 . Since t = y, s = x−xy−y xy+y+1 , we have u(x, y) = −ln x − xy − y xy + y + 1 + 1 y − 1 − y + f x − xy − y xy + y + 1 , ⇒ u(x, y) = −ln −1 xy + y + 1 − y + f x − xy − y xy + y + 1 . • For region II, Γ is parameterized by Γ : (0, s, g(s)). dx dt = (x + 1)2 ⇒ x = − 1 t − 1 − 1, dy dt = 1 ⇒ y = t + s, dz dt = x = − 1 t − 1 − 1, ⇒ z = −ln|t − 1| − t + c2(s), ⇒ z = −ln|t − 1| − t + g(s). 19 Variable t as a third coordinate of u and variable t used to parametrize characteristic equations are two different entities.
  • 72. Partial Differential Equations Igor Yanovsky, 2005 72 Since t = x x+1 , s = y − x x+1 , we have u(x, y) = −ln x x + 1 − 1 − x x + 1 + g y − x x + 1 . Note that on y = x x+1 , both solutions are equal if f(0) = g(0).
  • 73. Partial Differential Equations Igor Yanovsky, 2005 73 Problem (S’93, #3). Solve the following equation ut + ux + yuy = sint for 0 ≤ t, 0 ≤ x, −∞ < y < ∞ and with u = x + y for t = 0, x ≥ 0 and u = t2 + y for x = 0, t ≥ 0. Proof. Rewrite the equation as (x ↔ x1, y ↔ x2, t ↔ x3): ux3 + ux1 + x2ux2 = sinx3 for 0 ≤ x3, 0 ≤ x1, −∞ < x2 < ∞, u(x1, x2, 0) = x1 + x2, u(0, x2, x3) = x2 3 + x2. • For region I, we solve the following characteristic equations with Γ is parameterized 20 by Γ : (s1, s2, 0, s1 + s2). dx1 dt = 1 ⇒ x1 = t + s1, dx2 dt = x2 ⇒ x2 = s2et , dx3 dt = 1 ⇒ x3 = t, dz dt = sin x3 = sin t ⇒ z = − cos t + s1 + s2 + 1. Since in region I, in x1x3-plane, characteristics are of the form x1 = x3 + s1, region I is bounded above by the line x1 = x3. Since t = x3, s1 = x1 − x3, s2 = x2e−x3 , we have u(x1, x2, x3) = − cos x3 + x1 − x3 + x2e−x3 + 1, or u(x, y, t) = − cos t + x − t + ye−t + 1, x ≥ t. • For region II, we solve the following characteristic equations with Γ is parameterized by Γ : (0, s2, s3, s2 + s2 3). dx1 dt = 1 ⇒ x1 = t, dx2 dt = x2 ⇒ x2 = s2et , dx3 dt = 1 ⇒ x3 = t + s3, dz dt = sin x3 = sin(t + s3) ⇒ z = − cos(t + s3) + cos s3 + s2 + s2 3. Since t = x1, s3 = x3 − x1, s2 = x2e−x3 , we have u(x1, x2, x3) = − cos x3 + cos(x3 − x1) + x2e−x3 + (x3 − x1)2 , or u(x, y, t) = − cos t + cos(t − x) + ye−t + (t − x)2 , x ≤ t. Note that on x = t, both solutions are u(x = t, y) = − cos x + ye−x + 1. 20 Variable t as a third coordinate of u and variable t used to parametrize characteristic equations are two different entities.
  • 74. Partial Differential Equations Igor Yanovsky, 2005 74 Problem (W’03, #5). Find a solution to xux + (x + y)uy = 1 which satisfies u(1, y) = y for 0 ≤ y ≤ 1. Find a region in {x ≥ 0, y ≥ 0} where u is uniquely determined by these conditions. Proof. Γ is parameterized by Γ : (1, s, s). dx dt = x ⇒ x = et . dy dt = x + y ⇒ y − y = et . dz dt = 1 ⇒ z = t + s. The homogeneous solution for the second equation is yh(s, t) = c1(s)et . Since the right hand side and yh are linearly dependent, our guess for the particular solution is yp(s, t) = c2(s)tet . Plugging in yp into the differential equation, we get c2(s)tet + c2(s)et − c2(s)tet = et ⇒ c2(s) = 1. Thus, yp(s, t) = tet and y(s, t) = yh + yp = c1(s)et + tet . Since y(s, 0) = s = c1(s), we get y = set + tet . With and , we can solve for s and t in terms of x and y to get t = ln x, y = sx + x ln x ⇒ s = y − x ln x x . u(x, y) = t + s = ln x + y − x ln x x . u(x, y) = y x . We have found that the characteristics in the xy-plane are of the form y = sx + x ln x, where s is such that 0 ≤ s ≤ 1. Also, the characteristics originate from Γ. Thus, u is uniquely determined in the region between the graphs: y = x ln x, y = x + x ln x.
  • 75. Partial Differential Equations Igor Yanovsky, 2005 75 12 Problems: Shocks Example 1. Determine the exact solution to Burgers’ equation ut + 1 2 u2 x = 0, t > 0 with initial data u(x, 0) = h(x) = ⎧ ⎪⎨ ⎪⎩ 1 if x < −1, 0 if − 1 < x < 1, −1 if x > 1. Proof. Characteristic form: ut + uux = 0. The characteristic projection in xt-plane passing through the point (s, 0) is the line x = h(s)t + s. • Rankine-Hugoniot shock condition at s = −1: shock speed: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 0 − 1 2 0 − 1 = 1 2 . The “1/slope” of the shock curve = 1/2. Thus, x = ξ(t) = 1 2 t + s, and since the jump occurs at (−1, 0), ξ(0) = −1 = s. Therefore, x = 1 2 t − 1. • Rankine-Hugoniot shock condition at s = 1: shock speed: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 1 2 − 0 −1 − 0 = − 1 2 . The “1/slope” of the shock curve = −1/2. Thus, x = ξ(t) = − 1 2 t + s, and since the jump occurs at (1, 0), ξ(0) = 1 = s. Therefore, x = − 1 2 t + 1. • At t = 2, Rankine-Hugoniot shock condition at s = 0: shock speed: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 1 2 − 1 2 −1 − 1 = 0. The “1/slope” of the shock curve = 0. Thus, x = ξ(t) = s, and since the jump occurs at (x, t) = (0, 2), ξ(2) = 0 = s. Therefore, x = 0.
➡ For $t < 2$,
\[
u(x,t) = \begin{cases} 1 & \text{if } x < \frac{1}{2}t - 1, \\ 0 & \text{if } \frac{1}{2}t - 1 < x < -\frac{1}{2}t + 1, \\ -1 & \text{if } x > -\frac{1}{2}t + 1, \end{cases}
\]
➡ and for $t > 2$,
\[
u(x,t) = \begin{cases} 1 & \text{if } x < 0, \\ -1 & \text{if } x > 0. \end{cases}
\]
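All three shock speeds in Example 1 come from the same formula, $\xi'(t) = \frac{F(u_r) - F(u_l)}{u_r - u_l}$ with $F(u) = \frac{1}{2}u^2$. A small helper, not from the original text, that reproduces them and evaluates the resulting piecewise solution:

```python
def rh_speed(ul, ur, F=lambda u: 0.5 * u**2):
    """Rankine-Hugoniot shock speed xi'(t) = (F(ur) - F(ul)) / (ur - ul)."""
    return (F(ur) - F(ul)) / (ur - ul)

print(rh_speed(1, 0), rh_speed(0, -1), rh_speed(1, -1))   # 0.5, -0.5, 0.0 as above

def u_example1(x, t):
    """Entropy solution of Example 1: two shocks merging at t = 2 into a standing shock."""
    if t < 2:
        if x < 0.5 * t - 1:
            return 1.0
        if x < -0.5 * t + 1:
            return 0.0
        return -1.0
    return 1.0 if x < 0 else -1.0

print([u_example1(x, 1.0) for x in (-1.0, 0.0, 1.0)])      # [1.0, 0.0, -1.0]
print([u_example1(x, 3.0) for x in (-0.5, 0.5)])           # [1.0, -1.0]
```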
  • 77. Partial Differential Equations Igor Yanovsky, 2005 77 Example 2. Determine the exact solution to Burgers’ equation ut + 1 2 u2 x = 0, t > 0 with initial data u(x, 0) = h(x) = ⎧ ⎪⎨ ⎪⎩ −1 if x < −1, 0 if − 1 < x < 1, 1 if x > 1. Proof. Characteristic form: ut + uux = 0. The characteristic projection in xt-plane passing through the point (s, 0) is the line x = h(s)t + s. For Burgers’ equation, for a rarefaction fan emanating from (s, 0) on xt-plane, we have: u(x, t) = ⎧ ⎪⎨ ⎪⎩ ul, x−s t ≤ ul, x−s t , ul ≤ x−s t ≤ ur, ur, x−s t ≥ ur. ➡ u(x, t) = ⎧ ⎪⎪⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎪⎪⎩ −1, x < −t − 1, x+1 t , −t − 1 < x < −1, i.e. − 1 < x+1 t < 0 0, −1 < x < 1, x−1 t , 1 < x < t + 1, i.e. 0 < x−1 t < 1 1, x > t + 1.
  • 79. Partial Differential Equations Igor Yanovsky, 2005 79 Example 3. Determine the exact solution to Burgers’ equation ut + 1 2 u2 x = 0, t > 0 with initial data u(x, 0) = h(x) = 2 if 0 < x < 1, 0 if otherwise. Proof. Characteristic form: ut + uux = 0. The characteristic projection in xt-plane passing through the point (s, 0) is the line x = h(s)t + s. • Shock: Rankine-Hugoniot shock condition at s = 1: shock speed: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 0 − 2 0 − 2 = 1. The “1/slope” of the shock curve = 1. Thus, x = ξ(t) = t + s, and since the jump occurs at (1, 0), ξ(0) = 1 = s. Therefore, x = t + 1. • Rarefaction: A rarefaction emanates from (0, 0) on xt-plane. ➡ For 0 < t < 1, u(x, t) = ⎧ ⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎩ 0 if x < 0, x t if 0 < x < 2t, 2 if 2t < x < t + 1. 0 if x > t + 1. Rarefaction catches up to shock at t = 1. • Shock: At (x, t) = (2, 1), ul = x/t, ur = 0. Rankine-Hugoniot shock condition: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 0 − 1 2 (x t )2 0 − x t = 1 2 x t , dxs dt = x 2t , x = c √ t, and since the jump occurs at (x, t) = (2, 1), x(1) = 2 = c. Therefore, x = 2 √ t. ➡ For t > 1, u(x, t) = ⎧ ⎪⎨ ⎪⎩ 0 if x < 0, x t if 0 < x < 2 √ t, 0 if x > 2 √ t.
  • 81. Partial Differential Equations Igor Yanovsky, 2005 81 Example 4. Determine the exact solution to Burgers’ equation ut + 1 2 u2 x = 0, t > 0 with initial data u(x, 0) = h(x) = 1 + x if x < 0, 0 if x > 0. Proof. Characteristic form: ut + uux = 0. The characteristic projection in xt-plane passing through the point (s, 0) is the line x = h(s)t + s. ➀ For s > 0, the characteristics are x = s. ➁ For s < 0, the characteristics are x = (1 + s)t + s. • There are two ways to look for the solution on the left half-plane. One is to notice that the characteristic at s = 0− is x = t and characteristic at s = −1 is x = −1 and that characteristics between s = −∞ and s = 0− are intersecting at (x, t) = (−1, −1). Also, for a fixed t, u is a linear function of x, i.e. for t = 0, u = 1 + x, allowing a continuous change of u with x. Thus, the solution may be viewed as an ‘implicit’ rarefaction, originating at (−1, −1), thus giving rise to the solution u(x, t) = x + 1 t + 1 . Another way to find a solution on the left half-plane is to solve ➁ for s to find s = x − t 1 + t . Thus, u(x, t) = h(s) = 1 + s = 1 + x − t 1 + t = x + 1 t + 1 . • Shock: At (x, t) = (0, 0), ul = x+1 t+1 , ur = 0. Rankine-Hugoniot shock condition: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 0 − 1 2 (x+1 t+1 )2 0 − x+1 t+1 = 1 2 x + 1 t + 1 , dxs dt = 1 2 x + 1 t + 1 , x = c √ t + 1 − 1, and since the jump occurs at (x, t) = (0, 0), x(0) = 0 = c − 1, or c = 1. Therefore, the shock curve is x = √ t + 1 − 1. ➡ u(x, t) = x+1 t+1 if x < √ t + 1 − 1, 0 if x > √ t + 1 − 1.
  • 83. Partial Differential Equations Igor Yanovsky, 2005 83 Example 5. Determine the exact solution to Burgers’ equation ut + 1 2 u2 x = 0, t > 0 with initial data u(x, 0) = h(x) = ⎧ ⎪⎨ ⎪⎩ u0 if x < 0, u0 · (1 − x) if 0 < x < 1, 0 if x ≥ 1, where u0 > 0. Proof. Characteristic form: ut + uux = 0. The characteristic projection in xt-plane passing through the point (s, 0) is the line x = h(s)t + s. ➀ For s > 1, the characteristics are x = s. ➁ For 0 < s < 1, the characteristics are x = u0(1 − s)t + s. ➂ For s < 0, the characteristics are x = u0t + s. The characteristics emanating from (s, 0), 0 < s < 1 on xt-plane intersect at (1, 1 u0 ). Also, we can check that the characteristics do not intersect before t = 1 u0 for this problem: tc = min −1 h (s) = 1 u0 . • To find solution in a triangular domain between x = u0t and x = 1, we note that characteristics there are x = u0 · (1 − s)t + s. Solving for s we get s = x − u0t 1 − u0t . Thus, u(x, t) = h(s) = u0 · (1 − s) = u0 · 1 − x − u0t 1 − u0t = u0 · (1 − x) 1 − u0t . We can also find a solution in the triangular domain as follows. Note, that the charac- teristics are the straight lines dx dt = u = const. Integrating the equation above, we obtain x = ut + c Since all characteristics in the triangular domain meet at (1, 1 u0 ), we have c = 1 − u u0 , and x = ut + 1 − u u0 or u = u0 · (1 − x) 1 − u0t . ➡ For 0 < t < 1 u0 , u(x, t) = ⎧ ⎪⎨ ⎪⎩ u0 if x < u0t, u0·(1−x) 1−u0t if u0t < x < 1, 0 if x > 1.
  • 84. Partial Differential Equations Igor Yanovsky, 2005 84 • Shock: At (x, t) = (1, 1 u0 ), Rankine-Hugoniot shock condition: ξ (t) = F(ur) − F(ul) ur − ul = 1 2 u2 r − 1 2 u2 l ur − ul = 0 − 1 2 u2 0 0 − u0 = 1 2 u0, ξ(t) = 1 2 u0t + c, and since the jump occurs at (x, t) = (1, 1 u0 ), x 1 u0 = 1 = 1 2 +c, or c = 1 2. Therefore, the shock curve is x = u0t+1 2 .
➡ For $t > \frac{1}{u_0}$,
\[
u(x,t) = \begin{cases} u_0 & \text{if } x < \frac{u_0 t + 1}{2}, \\ 0 & \text{if } x > \frac{u_0 t + 1}{2}. \end{cases}
\]

Problem. Show that for $u = f(x/t)$ to be a nonconstant solution of $u_t + a(u)u_x = 0$, $f$ must be the inverse of the function $a$.

Proof. If $u = f(x/t)$, then
\[
u_t = -f'\Big(\frac{x}{t}\Big)\,\frac{x}{t^2} \qquad \text{and} \qquad u_x = f'\Big(\frac{x}{t}\Big)\,\frac{1}{t}.
\]
Hence $u_t + a(u)u_x = 0$ implies that
\[
-f'\Big(\frac{x}{t}\Big)\frac{x}{t^2} + a\Big(f\Big(\frac{x}{t}\Big)\Big)\,f'\Big(\frac{x}{t}\Big)\frac{1}{t} = 0,
\]
or, assuming $f'$ is not identically $0$ to rule out the constant solution, that
\[
a\Big(f\Big(\frac{x}{t}\Big)\Big) = \frac{x}{t}.
\]
This shows the functions $a$ and $f$ to be inverses of each other.
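A concrete instance of this statement (not in the original text): for the coefficient $a(u) = u^3$, which appears in several problems above, the inverse is $f(s) = s^{1/3}$, and $u = (x/t)^{1/3}$ indeed solves $u_t + u^3 u_x = 0$.

```python
# Symbolic check that u = (x/t)^(1/3) solves u_t + u^3 u_x = 0 (a(u) = u^3, f = a^{-1}).
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = (x / t) ** sp.Rational(1, 3)
residual = sp.diff(u, t) + u**3 * sp.diff(u, x)
print(sp.simplify(residual))     # 0
```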
  • 86. Partial Differential Equations Igor Yanovsky, 2005 86 13 Problems: General Nonlinear Equations 13.1 Two Spatial Dimensions Problem (S’01, #3). Solve the initial value problem 1 2 u2 x − uy = − x2 2 , u(x, 0) = x. You will find that the solution blows up in finite time. Explain this in terms of the characteristics for this equation. Proof. Rewrite the equation as F(x, y, z, p, q) = p2 2 − q + x2 2 = 0. Γ is parameterized by Γ : (s, 0, s, φ(s), ψ(s)). We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0, F(s, 0, s, φ(s), ψ(s)) = 0, φ(s)2 2 − ψ(s) + s2 2 = 0, ψ(s) = φ(s)2 + s2 2 . • h (s) = φ(s)f (s) + ψ(s)g (s), 1 = φ(s). ⇒ ψ(s) = s2 + 1 2 . Therefore, now Γ is parametrized by Γ : (s, 0, s, 1, s2+1 2 ). dx dt = Fp = p, dy dt = Fq = −1 ⇒ y(s, t) = −t + c1(s) ⇒ y = −t, dz dt = pFp + qFq = p2 − q, dp dt = −Fx − Fzp = −x, dq dt = −Fy − Fzq = 0 ⇒ q(s, t) = c2(s) ⇒ q = s2 + 1 2 . Thus, we found y and q in terms of s and t. Note that we have a coupled system: x = p, p = −x, which can be written as two second order ODEs: x + x = 0, x(s, 0) = s, x (s, 0) = p(s, 0) = 1, p + p = 0, p(s, 0) = 1, p (s, 0) = −x(s, 0) = −s.
  • 87. Partial Differential Equations Igor Yanovsky, 2005 87 Solving the two equations separately, we get x(s, t) = s · cos t + sint, p(s, t) = cos t − s · sint. From this, we get dz dt = p2 − q = cos t − s · sint 2 − s2 + 1 2 = cos2 t − 2s cos t sint + s2 sin2 t − s2 + 1 2 . z(s, t) = t 0 cos2 t − 2s cos t sint + s2 sin2 t − s2 + 1 2 dt + z(s, 0), z(s, t) = t 2 + sint cos t 2 + s cos2 t + s2t 2 − s2 sint cos t 2 − t(s2 + 1) 2 t 0 + s, = sin t cos t 2 + s cos2 t − s2 sint cos t 2 t 0 + s, = sint cos t 2 + s cos2 t − s2 sin t cos t 2 − s + s = = sint cos t 2 + s cos2 t − s2 sin t cos t 2 . Plugging in x and y found earlier for s and t, we get u(x, y) = sin(−y) cos(−y) 2 + x − sin(−y) cos(−y) cos2 (−y) − (x − sin(−y))2 cos2(−y) · sin(−y) cos(−y) 2 = − sin y cos y 2 + x + siny cos y cos2 y + (x + sin y)2 cos2 y · siny cos y 2 = − sin y cos y 2 + (x + sin y) cosy + (x + siny)2 sin y 2 cosy = x cos y + sin y cos y 2 + (x + sin y)2 siny 2 cos y .
  • 88. Partial Differential Equations Igor Yanovsky, 2005 88 Problem (S’98, #3). Find the solution of ut + u2 x 2 = −x2 2 , t ≥ 0, −∞ < x < ∞ u(x, 0) = h(x), −∞ < x < ∞, where h(x) is smooth function which vanishes for |x| large enough. Proof. Rewrite the equation as F(x, y, z, p, q) = p2 2 + q + x2 2 = 0. Γ is parameterized by Γ : (s, 0, h(s), φ(s), ψ(s)). We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0, F(s, 0, h(s), φ(s), ψ(s)) = 0, φ(s)2 2 + ψ(s) + s2 2 = 0, ψ(s) = − φ(s)2 + s2 2 . • h (s) = φ(s)f (s) + ψ(s)g (s), h (s) = φ(s). ⇒ ψ(s) = − h (s)2 + s2 2 . Therefore, now Γ is parametrized by Γ : (s, 0, s, h (s), −h (s)2+s2 2 ). dx dt = Fp = p, dy dt = Fq = 1 ⇒ y(s, t) = t + c1(s) ⇒ y = t, dz dt = pFp + qFq = p2 + q, dp dt = −Fx − Fzp = −x, dq dt = −Fy − Fzq = 0 ⇒ q(s, t) = c2(s) ⇒ q = − h (s)2 + s2 2 . Thus, we found y and q in terms of s and t. Note that we have a coupled system: x = p, p = −x, which can be written as a second order ODE: x + x = 0, x(s, 0) = s, x (s, 0) = p(s, 0) = h (s). Solving the equation, we get x(s, t) = s cos t + h (s) sint, p(s, t) = x (s, t) = h (s) cos t − s sin t.
  • 89. Partial Differential Equations Igor Yanovsky, 2005 89 From this, we get dz dt = p2 + q = h (s) cost − s sint 2 − h (s)2 + s2 2 = h (s)2 cos2 t − 2sh (s) cost sin t + s2 sin2 t − h (s)2 + s2 2 . z(s, t) = t 0 h (s)2 cos2 t − 2sh (s) cos t sint + s2 sin2 t − h (s)2 + s2 2 dt + z(s, 0) = t 0 h (s)2 cos2 t − 2sh (s) cos t sint + s2 sin2 t − h (s)2 + s2 2 dt + h(s). We integrate the above expression similar to S 01#3 to get an expression for z(s, t). Plugging in x and y found earlier for s and t, we get u(x, y).
  • 90. Partial Differential Equations Igor Yanovsky, 2005 90 Problem (S’97, #4). Describe the method of the bicharacteristics for solving the initial value problem ∂ ∂x u(x, y) 2 + ∂ ∂y u(x, y) 2 = 2 + y, u(x, 0) = u0(x) = x. Assume that | ∂ ∂xu0(x)| < 2 and consider the solution such that ∂u ∂y > 0. Apply all general computations for the particular case u0(x) = x. Proof. We have u2 x + u2 y = 2 + y u(x, 0) = u0(x) = x. Rewrite the equation as F(x, y, z, p, q) = p2 + q2 − y − 2 = 0. Γ is parameterized by Γ : (s, 0, s, φ(s), ψ(s)). We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0, F(s, 0, s, φ(s), ψ(s)) = 0, φ(s)2 + ψ(s)2 − 2 = 0, φ(s)2 + ψ(s)2 = 2. • h (s) = φ(s)f (s) + ψ(s)g (s), 1 = φ(s). ⇒ ψ(s) = ±1. Since we have a condition that q(s, t) > 0, we choose q(s, 0) = ψ(s) = 1. Therefore, now Γ is parametrized by Γ : (s, 0, s, 1, 1). dx dt = Fp = 2p ⇒ dx dt = 2 ⇒ x = 2t + s, dy dt = Fq = 2q ⇒ dy dt = 2t + 2 ⇒ y = t2 + 2t, dz dt = pFp + qFq = 2p2 + 2q2 = 2y + 4 ⇒ dz dt = 2t2 + 4t + 4, ⇒ z = 2 3 t3 + 2t2 + 4t + s = 2 3 t3 + 2t2 + 4t + x − 2t = 2 3 t3 + 2t2 + 2t + x, dp dt = −Fx − Fzp = 0 ⇒ p = 1, dq dt = −Fy − Fzq = 1 ⇒ q = t + 1. We solve y = t2 + 2t, a quadratic equation in t, t2 + 2t − y = 0, for t in terms of y to get: t = −1 ± 1 + y. ⇒ u(x, y) = 2 3 (−1 ± 1 + y)3 + 2(−1 ± 1 + y)2 + 2(−1 ± 1 + y) + x. Both u± satisfy the PDE. ux = 1, uy = ± √ y + 1 ⇒ u2 x + u2 y = y + 2 u+ satisfies u+(x, 0) = x . However, u− does not satisfy IC, i.e. u−(x, 0) = x−4 3.
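The closing claims about $u_+$ and $u_-$ can be verified symbolically. The sketch below is not part of the original; it checks that both candidates satisfy $u_x^2 + u_y^2 = 2 + y$, but only $u_+$ matches the initial data.

```python
# Check of the S'97 #4 answer: u = (2/3)T^3 + 2T^2 + 2T + x with T = -1 +/- sqrt(1+y).
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def candidate(sign):
    T = -1 + sign * sp.sqrt(1 + y)
    return sp.Rational(2, 3) * T**3 + 2 * T**2 + 2 * T + x

for sign in (+1, -1):
    u = candidate(sign)
    eqn = sp.diff(u, x)**2 + sp.diff(u, y)**2 - (2 + y)
    print(sign, sp.simplify(eqn), sp.simplify(u.subs(y, 0)))
# expected: +1 -> 0 and x        (PDE and data both satisfied)
#           -1 -> 0 and x - 4/3  (solves the PDE but not u(x,0) = x)
```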
  • 91. Partial Differential Equations Igor Yanovsky, 2005 91 Problem (S’02, #6). Consider the equation ux + uxuy = 1, u(x, 0) = f(x). Assuming that f is differentiable, what conditions on f insure that the problem is noncharacteristic? If f satisfies those conditions, show that the solution is u(x, y) = f(r) − y + 2y f (r) , where r must satisfy y = (f (r))2 (x − r). Finally, show that one can solve the equation for (x, y) in a sufficiently small neighbor- hood of (x0, 0) with r(x0, 0) = x0. Proof. Solved. In order to solve the Cauchy problem in a neighborhood of Γ, need: f (s) · Fq[f, g, h, φ, ψ](s) − g (s) · Fp[f, g, h, φ, ψ](s) = 0, 1 · h (s) − 0 · 1 + 1 − h (s) h (s) = 0, h (s) = 0. Thus, h (s) = 0 ensures that the problem is noncharacteristic. To show that one can solve y = (f (s))2(x − s) for (x, y) in a sufficiently small neighborhood of (x0, 0) with s(x0, 0) = x0, let G(x, y, s) = (f (s))2 (x − s) − y = 0, G(x0, 0, x0) = 0, Gr(x0, 0, x0) = −(f (s))2 . Hence, if f (s) = 0, ∀s, then Gs(x0, 0, x0) = 0 and we can use the implicit function theorem in a neighborhood of (x0, 0, x0) to get G(x, y, h(x, y)) = 0 and solve the equation in terms of x and y.
  • 92. Partial Differential Equations Igor Yanovsky, 2005 92 Problem (S’00, #1). Find the solutions of (ux)2 + (uy)2 = 1 in a neighborhood of the curve y = x2 2 satisfying the conditions u x, x2 2 = 0 and uy x, x2 2 > 0. Leave your answer in parametric form. Proof. Rewrite the equation as F(x, y, z, p, q) = p2 + q2 − 1 = 0. Γ is parameterized by Γ : (s, s2 2 , 0, φ(s), ψ(s)). We need to complete Γ to a strip. Find φ(s) and ψ(s), the initial conditions for p(s, t) and q(s, t), respectively: • F(f(s), g(s), h(s), φ(s), ψ(s)) = 0, F s, s2 2 , 0, φ(s), ψ(s) = 0, φ(s)2 + ψ(s)2 = 1. • h (s) = φ(s)f (s) + ψ(s)g (s), 0 = φ(s) + sψ(s), φ(s) = −sψ(s). Thus, s2 ψ(s)2 + ψ(s)2 = 1 ⇒ ψ(s)2 = 1 s2 + 1 . Since, by assumption, ψ(s) > 0, we have ψ(s) = 1√ s2+1 . Therefore, now Γ is parametrized by Γ : s, s2 2 , 0, −s√ s2+1 , 1√ s2+1 . dx dt = Fp = 2p = −2s √ s2 + 1 ⇒ x = −2st √ s2 + 1 + s, dy dt = Fq = 2q = 2 √ s2 + 1 ⇒ y = 2t √ s2 + 1 + s2 2 , dz dt = pFp + qFq = 2p2 + 2q2 = 2 ⇒ z = 2t, dp dt = −Fx − Fzp = 0 ⇒ p = −s √ s2 + 1 , dq dt = −Fy − Fzq = 0 ⇒ q = 1 √ s2 + 1 . Thus, in parametric form, z(s, t) = 2t, x(s, t) = −2st √ s2 + 1 + s, y(s, t) = 2t √ s2 + 1 + s2 2 .
  • 93. Partial Differential Equations Igor Yanovsky, 2005 93 13.2 Three Spatial Dimensions Problem (S’96, #2). Solve the following Cauchy problem21: ux + u2 y + u2 z = 1, u(0, y, z) = y · z. Proof. Rewrite the equation as ux1 + u2 x2 + u2 x3 = 1, u(0, x2, x3) = x2 · x3. Write a general nonlinear equation F(x1, x2, x3, z, p1, p2, p3) = p1 + p2 2 + p2 3 − 1 = 0. Γ is parameterized by Γ : 0 x1(s1,s2,0) , s1 x2(s1,s2,0) , s2 x3(s1,s2,0) , s1s2 z(s1,s2,0) , φ1(s1, s2) p1(s1,s2,0) , φ2(s1, s2) p2(s1,s2,0) , φ3(s1, s2) p3(s1,s2,0) We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively: • F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0, F 0, s1, s2, s1s2, φ1, φ2, φ3 = φ1 + φ2 2 + φ2 3 − 1 = 0, ⇒ φ1 + φ2 2 + φ2 3 = 1. • ∂h ∂s1 = φ1 ∂f1 ∂s1 + φ2 ∂f2 ∂s1 + φ3 ∂f3 ∂s1 , ⇒ s2 = φ2. • ∂h ∂s2 = φ1 ∂f1 ∂s2 + φ2 ∂f2 ∂s2 + φ3 ∂f3 ∂s2 , ⇒ s1 = φ3. Thus, we have: φ2 = s2, φ3 = s1, φ1 = −s2 1 − s2 2 + 1. Γ : 0 x1(s1,s2,0) , s1 x2(s1,s2,0) , s2 x3(s1,s2,0) , s1s2 z(s1,s2,0) , −s2 1 − s2 2 + 1 p1(s1,s2,0) , s2 p2(s1,s2,0) , s1 p3(s1,s2,0) 21 This problem is very similar to an already hand-written solved problem F’95 #2.
  • 94. Partial Differential Equations Igor Yanovsky, 2005 94 The characteristic equations are dx1 dt = Fp1 = 1 ⇒ x1 = t, dx2 dt = Fp2 = 2p2 ⇒ dx2 dt = 2s2 ⇒ x2 = 2s2t + s1, dx3 dt = Fp3 = 2p3 ⇒ dx3 dt = 2s1 ⇒ x3 = 2s1t + s2, dz dt = p1Fp1 + p2Fp2 + p3Fp3 = p1 + 2p2 2 + 2p2 3 = −s2 1 − s2 2 + 1 + 2s2 2 + 2s2 1 = s2 1 + s2 2 + 1 ⇒ z = (s2 1 + s2 2 + 1)t + s1s2, dp1 dt = −Fx1 − p1Fz = 0 ⇒ p1 = −s2 1 − s2 2 + 1, dp2 dt = −Fx2 − p2Fz = 0 ⇒ p2 = s2, dp3 dt = −Fx3 − p3Fz = 0 ⇒ p3 = s1. Thus, we have ⎧ ⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎩ x1 = t x2 = 2s2t + s1 x3 = 2s1t + s2 z = (s2 1 + s2 2 + 1)t + s1s2 ⇒ ⎧ ⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎩ t = x1 s1 = x2 − 2s2t s2 = x3 − 2s1t z = (s2 1 + s2 2 + 1)t + s1s2 ⇒ ⎧ ⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎩ t = x1 s1 = x2−2x1x3 1−4x2 1 s2 = x3−2x1x2 1−4x2 1 z = (s2 1 + s2 2 + 1)t + s1s2 ⇒ u(x1, x2, x3) = x2 − 2x1x3 1 − 4x2 1 2 + x3 − 2x1x2 1 − 4x2 1 2 + 1 x1 + x2 − 2x1x3 1 − 4x2 1 x3 − 2x1x2 1 − 4x2 1 . Problem (F’95, #2). Solve the following Cauchy problem ux + uy + u3 z = x + y + z, u(x, y, 0) = xy. Proof. Solved
  • 95. Partial Differential Equations Igor Yanovsky, 2005 95 Problem (S’94, #1). Solve the following PDE for f(x, y, t): ft + xfx + 3t2 fy = 0 f(x, y, 0) = x2 + y2 . Proof. Rewrite the equation as (x → x1, y → x2, t → x3, f → u): x1ux1 + 3x2 3ux2 + ux3 = 0, u(x1, x2, 0) = x2 1 + x2 2. F(x1, x2, x3, z, p1, p2, p3) = x1p1 + 3x2 3p2 + p3 = 0. Γ is parameterized by Γ : s1 x1(s1,s2,0) , s2 x2(s1,s2,0) , 0 x3(s1,s2,0) , s2 1 + s2 2 z(s1,s2,0) , φ1(s1, s2) p1(s1,s2,0) , φ2(s1, s2) p2(s1,s2,0) , φ3(s1, s2) p3(s1,s2,0) We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively: • F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0, F s1, s2, 0, s2 1 + s2 2, φ1, φ2, φ3 = s1φ1 + φ3 = 0, ⇒ φ3 = s1φ1. • ∂h ∂s1 = φ1 ∂f1 ∂s1 + φ2 ∂f2 ∂s1 + φ3 ∂f3 ∂s1 , ⇒ 2s1 = φ1. • ∂h ∂s2 = φ1 ∂f1 ∂s2 + φ2 ∂f2 ∂s2 + φ3 ∂f3 ∂s2 , ⇒ 2s2 = φ2. Thus, we have: φ1 = 2s1, φ2 = 2s2, φ3 = 2s2 1. Γ : s1 x1(s1,s2,0) , s2 x2(s1,s2,0) , 0 x3(s1,s2,0) , s2 1 + s2 2 z(s1,s2,0) , 2s1 p1(s1,s2,0) , 2s2 p2(s1,s2,0) , 2s2 1 p3(s1,s2,0) The characteristic equations are dx1 dt = Fp1 = x1 ⇒ x1 = s1et , dx2 dt = Fp2 = 3x2 3 ⇒ dx2 dt = 3t2 ⇒ x2 = t3 + s2, dx3 dt = Fp3 = 1 ⇒ x3 = t, dz dt = p1Fp1 + p2Fp2 + p3Fp3 = p1x1 + p23x2 3 + p3 = 0 ⇒ z = s2 1 + s2 2, dp1 dt = −Fx1 − p1Fz = −p1 ⇒ p1 = 2s1e−t , dp2 dt = −Fx2 − p2Fz = 0 ⇒ p2 = 2s2, dp3 dt = −Fx3 − p3Fz = −6x3p2 ⇒ dp3 dt = −12ts2 ⇒ p3 = −6t2 s2 + 2s2 1. With t = x3, s1 = x1e−x3 , s2 = x2 − x3 3, we have u(x1, x2, x3) = x2 1e−2x3 + (x2 − x3 3)2 . f(x, y, t) = x2 e−2t + (y − t3 )2 . The solution satisfies the PDE and initial condition.
  • 96. Partial Differential Equations Igor Yanovsky, 2005 96 Problem (F’93, #3). Find the solution of the following equation ft + xfx + (x + t)fy = t3 f(x, y, 0) = xy. Proof. Rewrite the equation as (x → x1, y → x2, t → x3, f → u): x1ux1 + (x1 + x3)ux2 + ux3 = x3 , u(x1, x2, 0) = x1x2. Method I: Treat the equation as a QUASILINEAR equation. Γ is parameterized by Γ : (s1, s2, 0, s1s2). dx1 dt = x1 ⇒ x1 = s1et , dx2 dt = x1 + x3 ⇒ dx2 dt = s1et + t ⇒ x2 = s1et + t2 2 + s2 − s1, dx3 dt = 1 ⇒ x3 = t, dz dt = x3 3 ⇒ dz dt = t3 ⇒ z = t4 4 + s1s2. Since t = x3, s1 = x1e−x3 , s2 = x2 − s1et − t2 2 + s1 = x2 − x1 − x2 3 2 + x1e−x3 , we have u(x1, x2, x3) = x4 3 4 + x1e−x3 (x2 − x1 − x2 3 2 + x1e−x3 ), or f(x, y, t) = t4 4 + xe−t (y − x − t2 2 + xe−t ). The solution satisfies the PDE and initial condition. Method II: Treat the equation as a fully NONLINEAR equation. F(x1, x2, x3, z, p1, p2, p3) = x1p1 + (x1 + x3)p2 + p3 − x3 3 = 0. Γ is parameterized by Γ : s1 x1(s1,s2,0) , s2 x2(s1,s2,0) , 0 x3(s1,s2,0) , s1s2 z(s1,s2,0) , φ1(s1, s2) p1(s1,s2,0) , φ2(s1, s2) p2(s1,s2,0) , φ3(s1, s2) p3(s1,s2,0) We need to complete Γ to a strip. Find φ1(s1, s2), φ2(s1, s2), and φ3(s1, s2), the initial conditions for p1(s1, s2, t), p2(s1, s2, t), and p3(s1, s2, t), respectively: • F f1(s1, s2), f2(s1, s2), f3(s1, s2), h(s1, s2), φ1, φ2, φ3 = 0, F s1, s2, 0, s1s2, φ1, φ2, φ3 = s1φ1 + s1φ2 + φ3 = 0, ⇒ φ3 = −s1(φ1 + φ2). • ∂h ∂s1 = φ1 ∂f1 ∂s1 + φ2 ∂f2 ∂s1 + φ3 ∂f3 ∂s1 , ⇒ s2 = φ1. • ∂h ∂s2 = φ1 ∂f1 ∂s2 + φ2 ∂f2 ∂s2 + φ3 ∂f3 ∂s2 , ⇒ s1 = φ2. Thus, we have: φ1 = s2, φ2 = s1, φ3 = −s2 1 − s1s2. Γ : s1 x1(s1,s2,0) , s2 x2(s1,s2,0) , 0 x3(s1,s2,0) , s1s2 z(s1,s2,0) , s2 p1(s1,s2,0) , s1 p2(s1,s2,0) , −s2 1 − s1s2 p3(s1,s2,0)
  • 97. Partial Differential Equations Igor Yanovsky, 2005 97 The characteristic equations are dx1 dt = Fp1 = x1 ⇒ x1 = s1et , dx2 dt = Fp2 = x1 + x3 ⇒ dx2 dt = s1et + t ⇒ x2 = s1et + t2 2 + s2 − s1, dx3 dt = Fp3 = 1 ⇒ x3 = t, dz dt = p1Fp1 + p2Fp2 + p3Fp3 = p1x1 + p2(x1 + x3) + p3 = x3 3 = t3 ⇒ z = t4 4 + s1s2, dp1 dt = −Fx1 − p1Fz = −p1 − p2 = −p1 − s1 ⇒ p1 = 2s1e−t − s1, dp2 dt = −Fx2 − p2Fz = 0 ⇒ p2 = s1, dp3 dt = −Fx3 − p3Fz = 3x2 3 − p2 = 3t2 − s1 ⇒ p3 = t3 − s1t − s2 1 − s1s2. With t = x3, s1 = x1e−x3 , s2 = x2 − s1et − t2 2 + s1 = x2 − x1 − x2 3 2 + x1e−x3 , we have u(x1, x2, x3) = x4 3 4 + x1e−x3 (x2 − x1 − x2 3 2 + x1e−x3 ), or f(x, y, t) = t4 4 + xe−t (y − x − t2 2 + xe−t ). 22 The solution satisfies the PDE and initial condition. 22 Variable t in the derivatives of characteristics equations and t in the solution f(x,y, t) are different entities.
  • 98. Partial Differential Equations Igor Yanovsky, 2005 98 Problem (F’92, #1). Solve the initial value problem ut + αux + βuy + γu = 0 for t > 0 u(x, y, 0) = ϕ(x, y), in which α, β and γ are real constants and ϕ is a smooth function. Proof. Rewrite the equation as (x → x1, y → x2, t → x3)23: αux1 + βux2 + ux3 = −γu, u(x1, x2, 0) = ϕ(x1, x2). Γ is parameterized by Γ : (s1, s2, 0, ϕ(s1, s2)). dx1 dt = α ⇒ x1 = αt + s1, dx2 dt = β ⇒ x2 = βt + s2, dx3 dt = 1 ⇒ x3 = t, dz dt = −γz ⇒ dz z = −γdt ⇒ z = ϕ(s1, s2)e−γt . J ≡ det ∂(x1, x2, x3) ∂(s1, s2, t) = 1 0 0 0 1 0 α β 1 = 1 = 0 ⇒ J is invertible. Since t = x3, s1 = x1 − αx3, s2 = x2 − βx3, we have u(x1, x2, x3) = ϕ(x1 − αx3, x2 − βx3)e−γx3 , or u(x, y, t) = ϕ(x − αt, y − βt)e−γt . The solution satisfies the PDE and initial condition.24 23 Variable t as a third coordinate of u and variable t used to parametrize characteristic equations are two different entities. 24 Chain Rule: u(x1, x2, x3) = ϕ(f(x1, x2, x3), g(x1, x2, x3)), then ux1 = ∂ϕ ∂f ∂f ∂x1 + ∂ϕ ∂g ∂g ∂x1 .
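That the F'92 #1 solution satisfies the PDE and the initial condition can also be checked with a generic smooth $\varphi$. A sympy sketch, not part of the original text; the symbol names are arbitrary.

```python
# Symbolic check of u = phi(x - alpha*t, y - beta*t) * exp(-gamma*t).
import sympy as sp

x, y, t, a, b, g = sp.symbols('x y t alpha beta gamma')
phi = sp.Function('phi')

u = phi(x - a * t, y - b * t) * sp.exp(-g * t)
residual = sp.diff(u, t) + a * sp.diff(u, x) + b * sp.diff(u, y) + g * u
print(sp.simplify(residual))          # 0
print(u.subs(t, 0))                   # phi(x, y)
```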
  • 99. Partial Differential Equations Igor Yanovsky, 2005 99 Problem (F’94, #2). Find the solution of the Cauchy problem ut(x, y, t) + aux(x, y, t) + buy(x, y, t) + c(x, y, t)u(x, y, t) = 0 u(x, y, 0) = u0(x, y), where 0 < t < +∞, −∞ < x < +∞, −∞ < y < +∞, a, b are constants, c(x, y, t) is a continuous function of (x, y, t), and u0(x, y) is a con- tinuous function of (x, y). Proof. Rewrite the equation as (x → x1, y → x2, t → x3): aux1 + bux2 + ux3 = −c(x1, x2, x3)u, u(x1, x2, 0) = u0(x1, x2). Γ is parameterized by Γ : (s1, s2, 0, u0(s1, s2)). dx1 dt = a ⇒ x1 = at + s1, dx2 dt = b ⇒ x2 = bt + s2, dx3 dt = 1 ⇒ x3 = t, dz dt = −c(x1, x2, x3)z ⇒ dz dt = −c(at + s1, bt + s2, t)z ⇒ dz z = −c(at + s1, bt + s2, t)dt ⇒ ln z = − t 0 c(aξ + s1, bξ + s2, ξ)dξ + c1(s1, s2), ⇒ z(s1, s2, t) = c2(s1, s2)e− t 0 c(aξ+s1,bξ+s2,ξ)dξ ⇒ z(s1, s2, 0) = c2(s1, s2) = u0(s2, s2), ⇒ z(s1, s2, t) = u0(s1, s2)e− t 0 c(aξ+s1,bξ+s2,ξ)dξ . J ≡ det ∂(x1, x2, x3) ∂(s1, s2, t) = 1 0 0 0 1 0 a b 1 = 1 = 0 ⇒ J is invertible. Since t = x3, s1 = x1 − ax3, s2 = x2 − bx3, we have u(x1, x2, x3) = u0(x1 − ax3, x2 − bx3)e− x3 0 c(aξ+x1−ax3,bξ+x2−bx3,ξ)dξ = u0(x1 − ax3, x2 − bx3)e− x3 0 c(x1+a(ξ−x3),x2+b(ξ−x3),ξ)dξ , or u(x, y, t) = u0(x − at, y − bt)e− t 0 c(x+a(ξ−t),y+b(ξ−t),ξ)dξ .
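When $c$ is not constant, the exponent in the answer above is an integral along the characteristic, which in practice is evaluated by quadrature. The sketch below is illustrative only (the functions `u0` and `c` and the constants `a`, `b` are arbitrary choices, not data from the problem); it evaluates $u(x,y,t) = u_0(x-at,\, y-bt)\exp\big(-\int_0^t c(x+a(\xi-t),\, y+b(\xi-t),\, \xi)\,d\xi\big)$ at a point:

```python
import numpy as np
from scipy.integrate import quad

# Sample data (assumptions for illustration only):
a, b = 1.0, -0.5
u0 = lambda x, y: np.exp(-(x**2 + y**2))                     # initial profile
c  = lambda x, y, t: 1.0 + 0.3*np.sin(x)*np.cos(y) + 0.1*t   # variable coefficient

def u(x, y, t):
    """Evaluate the characteristic-integral solution at a single point."""
    integrand = lambda xi: c(x + a*(xi - t), y + b*(xi - t), xi)
    integral, _ = quad(integrand, 0.0, t)
    return u0(x - a*t, y - b*t) * np.exp(-integral)

print(u(0.3, -0.2, 1.5))
```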
  • 100. Partial Differential Equations Igor Yanovsky, 2005 100 Problem (F’89, #4). Consider the first order partial differential equation ut + (α + βt)ux + γet uy = 0 (13.1) in which α, β and γ are constants. a) For this equation, solve the initial value problem with initial data u(x, y, t = 0) = sin(xy) (13.2) for all x and y and for t ≥ 0. b) Suppose that this initial data is prescribed only for x ≥ 0 (and all y) and consider (13.1) in the region x ≥ 0, t ≥ 0 and all y. For which values of α, β and γ is it possible to solve the initial-boundary value problem (13.1), (13.2) with u(x = 0, y, t) given for t ≥ 0? For non-permissible values of α, β and γ, where can boundary values be prescribed in order to determine a solution of (13.1) in the region (x ≥ 0, t ≥ 0, all y). Proof. a) Rewrite the equation as (x → x1, y → x2, t → x3): (α + βx3)ux1 + γex3 ux2 + ux3 = 0, u(x1, x2, 0) = sin(x1x2). Γ is parameterized by Γ : (s1, s2, 0, sin(s1s2)). dx1 dt = α + βx3 ⇒ dx1 dt = α + βt ⇒ x1 = βt2 2 + αt + s1, dx2 dt = γex3 ⇒ dx2 dt = γet ⇒ x2 = γet − γ + s2, dx3 dt = 1 ⇒ x3 = t, dz dt = 0 ⇒ z = sin(s1s2). J ≡ det ∂(x1, x2, x3) ∂(s1, s2, t) = 1 0 0 0 1 0 βt + α γet 1 = 1 = 0 ⇒ J is invertible. Since t = x3, s1 = x1 − βx2 3 2 − αx3, s2 = x2 − γex3 + γ, we have u(x1, x2, x3) = sin((x1 − βx2 3 2 − αx3)(x2 − γex3 + γ)), or u(x, y, t) = sin((x − βt2 2 − αt)(y − γet + γ)). The solution satisfies the PDE and initial condition. b) We need a compatibility condition between the initial and boundary values to hold on y-axis (x = 0, t = 0): u(x = 0, y, 0) = u(0, y, t = 0), 0 = 0.
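The closed-form answer in part (a) can be verified symbolically as well. A brief sketch (illustrative only, assuming SymPy):

```python
import sympy as sp

x, y, t, alpha, beta, gamma = sp.symbols('x y t alpha beta gamma', real=True)

# Solution obtained by characteristics in part (a):
u = sp.sin((x - beta*t**2/2 - alpha*t) * (y - gamma*sp.exp(t) + gamma))

residual = sp.diff(u, t) + (alpha + beta*t)*sp.diff(u, x) + gamma*sp.exp(t)*sp.diff(u, y)
print(sp.simplify(residual))   # -> 0, so (13.1) is satisfied
print(u.subs(t, 0))            # -> sin(x*y), the initial data (13.2)
```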
  • 102. Partial Differential Equations Igor Yanovsky, 2005 102 14 Problems: First-Order Systems Problem (S’01, #2a). Find the solution u = u1(x, t) u2(x, t) , (x, t) ∈ R × R, to the (strictly) hyperbolic equation ut − 1 0 5 3 ux = 0, satisfying u1(x, 0) u2(x, 0) = eixa 0 , a ∈ R. Proof. Rewrite the equation as Ut + −1 0 −5 −3 Ux = 0, U(x, 0) = u(1)(x, 0) u(2) (x, 0) = eixa 0 . The eigenvalues of the matrix A are λ1 = −1, λ2 = −3 and the corresponding eigenvectors are e1 = 2 −5 , e2 = 0 1 . Thus, Λ = −1 0 0 −3 , Γ = 2 0 −5 1 , Γ−1 = 1 det Γ · Γ = 1 2 0 5 2 1 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + −1 0 0 −3 Vx = 0, V (x, 0) = Γ−1 U(x, 0) = 1 2 0 5 2 1 eixa 0 = 1 2 eixa 1 5 . We have two initial value problems v (1) t − v (1) x = 0, v(1) (x, 0) = 1 2 eixa ; v (2) t − 3v (2) x = 0, v(2) (x, 0) = 5 2 eixa , which we solve by characteristics to get v(1) (x, t) = 1 2 eia(x+t) , v(2) (x, t) = 5 2 eia(x+3t) . We solve for U: U = ΓV = Γ v(1) v(2) = 2 0 −5 1 1 2eia(x+t) 5 2 eia(x+3t) . Thus, U = u(1) (x, t) u(2) (x, t) = eia(x+t) −5 2 eia(x+t) + 5 2 eia(x+3t) . Can check that this is the correct solution by plugging it into the original equation.
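The diagonalization step and the final formula are easy to check numerically. The sketch below is a minimal illustration (assuming NumPy; note that `numpy.linalg.eig` returns normalized eigenvectors, so its $\Gamma$ may differ from the hand computation by column scalings, which does not affect $\Lambda$). It verifies $\Gamma^{-1}A\Gamma = \Lambda$ and checks the closed-form solution for a sample wavenumber $a$ by a finite-difference residual:

```python
import numpy as np

A = np.array([[-1.0,  0.0],
              [-5.0, -3.0]])            # system U_t + A U_x = 0

lam, Gamma = np.linalg.eig(A)           # eigenvalues and (column) eigenvectors
Lambda = np.linalg.inv(Gamma) @ A @ Gamma
print(np.round(Lambda, 12))             # diagonal, with -1 and -3 on the diagonal

def U(x, t, a=2.0):                     # closed-form solution, sample wavenumber a
    u1 = np.exp(1j*a*(x + t))
    u2 = -2.5*np.exp(1j*a*(x + t)) + 2.5*np.exp(1j*a*(x + 3*t))
    return np.array([u1, u2])

x0, t0, h = 0.7, 0.4, 1e-6              # finite-difference check of U_t + A U_x = 0
Ut = (U(x0, t0 + h) - U(x0, t0 - h)) / (2*h)
Ux = (U(x0 + h, t0) - U(x0 - h, t0)) / (2*h)
print(np.max(np.abs(Ut + A @ Ux)))      # small (~1e-9): residual vanishes to rounding
```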
Part (b) of the problem is solved in the Fourier Transform section.
  • 104. Partial Differential Equations Igor Yanovsky, 2005 104 Problem (S’96, #7). Solve the following initial-boundary value problem in the do- main x > 0, t > 0, for the unknown vector U = u(1) u(2) : Ut + −2 3 0 1 Ux = 0. (14.1) U(x, 0) = sinx 0 and u(2) (0, t) = t. Proof. The eigenvalues of the matrix A are λ1 = −2, λ2 = 1 and the corresponding eigenvectors are e1 = 1 0 , e2 = 1 1 . Thus, Λ = −2 0 0 1 , Γ = 1 1 0 1 , Γ−1 = 1 det Γ · Γ = 1 −1 0 1 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + −2 0 0 1 Vx = 0, (14.2) V (x, 0) = Γ−1 U(x, 0) = 1 −1 0 1 sin x 0 = sinx 0 . (14.3) Equation (14.2) gives traveling wave solutions of the form v(1) (x, t) = F(x + 2t), v(2) (x, t) = G(x − t). We can write U in terms of V : U = ΓV = 1 1 0 1 v(1) v(2) = 1 1 0 1 F(x + 2t) G(x − t) = F(x + 2t) + G(x − t) G(x − t) . (14.4)
  • 105. Partial Differential Equations Igor Yanovsky, 2005 105 • For region I, (14.2) and (14.3) give two initial value problems (since any point in region I can be traced back along both characteristics to initial conditions): v (1) t − 2v (1) x = 0, v(1) (x, 0) = sinx; v (2) t + v (2) x = 0, v(2) (x, 0) = 0. which we solve by characteristics to get traveling wave solutions: v(1) (x, t) = sin(x + 2t), v(2) (x, t) = 0. ➡ Thus, for region I, U = ΓV = 1 1 0 1 sin(x + 2t) 0 = sin(x + 2t) 0 . • For region II, solutions of the form F(x+2t) can be traced back to initial conditions. Thus, v(1) is the same as in region I. Solutions of the form G(x −t) can be traced back to the boundary. Since from (14.4), u(2) = v(2) , we use boundary conditions to get u(2) (0, t) = t = G(−t). Hence, G(x − t) = −(x − t). ➡ Thus, for region II, U = ΓV = 1 1 0 1 sin(x + 2t) −(x − t) = sin(x + 2t) − (x − t) −(x − t) . Solutions for regions I and II satisfy (14.1). Solution for region I satisfies both initial conditions. Solution for region II satisfies given boundary condition.
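To make the region splitting concrete, the sketch below evaluates $U$ piecewise. It is illustrative only and encodes the assumption, consistent with the discussion above, that region I is $\{x \ge t\}$ (both backward characteristics reach $t = 0$) and region II is $\{x < t\}$ (the family $x - t = \text{const}$ comes from the boundary $x = 0$):

```python
import numpy as np

def U(x, t):
    """Solution of U_t + [[-2,3],[0,1]] U_x = 0, U(x,0) = (sin x, 0), u2(0,t) = t,
    assuming region I = {x >= t} and region II = {x < t}."""
    F = np.sin(x + 2*t)               # left-moving family, from the initial data
    G = 0.0 if x >= t else -(x - t)   # region I: 0;  region II: from u2(0,t) = t
    return np.array([F + G, G])

# Spot checks: initial condition, boundary condition, PDE residual.
print(U(1.0, 0.0))                    # -> [sin(1), 0]
print(U(0.0, 2.0)[1])                 # -> 2.0 = t, the boundary condition
A = np.array([[-2.0, 3.0], [0.0, 1.0]])
x0, t0, h = 2.0, 0.5, 1e-6            # a point inside region I
Ut = (U(x0, t0 + h) - U(x0, t0 - h)) / (2*h)
Ux = (U(x0 + h, t0) - U(x0 - h, t0)) / (2*h)
print(np.max(np.abs(Ut + A @ Ux)))    # small: PDE residual ~ 0
```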
  • 106. Partial Differential Equations Igor Yanovsky, 2005 106 Problem (S’02, #7). Consider the system ∂ ∂t u v = −1 2 2 2 ∂ ∂x u v . (14.5) Find an explicit solution for the following mixed problem for the system (14.5): u(x, 0) v(x, 0) = f(x) 0 for x > 0, u(0, t) = 0 for t > 0. You may assume that the function f is smooth and vanishes on a neighborhood of x = 0. Proof. Rewrite the equation as Ut + 1 −2 −2 −2 Ux = 0, U(x, 0) = u(1) (x, 0) u(2) (x, 0) = f(x) 0 . The eigenvalues of the matrix A are λ1 = −3, λ2 = 2 and the corresponding eigen- vectors are e1 = 1 2 , e2 = −2 1 . Thus, Λ = −3 0 0 2 , Γ = 1 −2 2 1 , Γ−1 = 1 det Γ · Γ = 1 5 1 2 −2 1 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + −3 0 0 2 Vx = 0, (14.6) V (x, 0) = Γ−1 U(x, 0) = 1 5 1 2 −2 1 f(x) 0 = f(x) 5 1 −2 . (14.7) Equation (14.6) gives traveling wave solutions of the form: v(1) (x, t) = F(x + 3t), v(2) (x, t) = G(x − 2t). (14.8) We can write U in terms of V : U = ΓV = 1 −2 2 1 v(1) v(2) = 1 −2 2 1 F(x + 3t) G(x − 2t) = F(x + 3t) − 2G(x − 2t) 2F(x + 3t) + G(x − 2t) . (14.9)
  • 107. Partial Differential Equations Igor Yanovsky, 2005 107 • For region I, (14.6) and (14.7) give two initial value problems (since value at any point in region I can be traced back along both characteristics to initial conditions): v (1) t − 3v (1) x = 0, v(1)(x, 0) = 1 5 f(x); v (2) t + 2v (2) x = 0, v(2)(x, 0) = −2 5 f(x). which we solve by characteristics to get traveling wave solutions: v(1) (x, t) = 1 5 f(x + 3t), v(2) (x, t) = − 2 5 f(x − 2t). ➡ Thus, for region I, U = ΓV = 1 −2 2 1 1 5 f(x + 3t) −2 5 f(x − 2t) = 1 5 f(x + 3t) + 4 5 f(x − 2t) 2 5 f(x + 3t) − 2 5 f(x − 2t) . • For region II, solutions of the form F(x+3t) can be traced back to initial conditions. Thus, v(1) is the same as in region I. Solutions of the form G(x−2t) can be traced back to the boundary. Since from (14.9), u(1) = v(1) − 2v(2) , we have u(1) (x, t) = F(x + 3t) − 2G(x − 2t) = 1 5 f(x + 3t) − 2G(x − 2t). The boundary condition gives u(1) (0, t) = 0 = 1 5 f(3t) − 2G(−2t), 2G(−2t) = 1 5 f(3t), G(t) = 1 10 f − 3 2 t , G(x − 2t) = 1 10 f − 3 2 (x − 2t) . ➡ Thus, for region II, U = ΓV = 1 −2 2 1 1 5 f(x + 3t) 1 10 f(−3 2 (x − 2t)) = 1 5 f(x + 3t) − 1 5 f(−3 2 (x − 2t)) 2 5 f(x + 3t) + 1 10f(−3 2 (x − 2t)) . Solutions for regions I and II satisfy (14.5). Solution for region I satisfies both initial conditions. Solution for region II satisfies given boundary condition.
  • 108. Partial Differential Equations Igor Yanovsky, 2005 108 Problem (F’94, #1; S’97, #7). Solve the initial-boundary value problem ut + 3vx = 0, vt + ux + 2vx = 0 in the quarter plane 0 ≤ x, t < ∞, with initial conditions 25 u(x, 0) = ϕ1(x), v(x, 0) = ϕ2(x), 0 < x < +∞ and boundary condition u(0, t) = ψ(t), t > 0. Proof. Rewrite the equation as Ut + AUx = 0: Ut + 0 3 1 2 Ux = 0, (14.10) U(x, 0) = u(1)(x, 0) u(2) (x, 0) = ϕ1(x) ϕ2(x) . The eigenvalues of the matrix A are λ1 = −1, λ2 = 3 and the corresponding eigen- vectors are e1 = −3 1 , e2 = 1 1 . Thus, Λ = −1 0 0 3 , Γ = −3 1 1 1 , Γ−1 = 1 det Γ · Γ = 1 4 −1 1 1 3 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + −1 0 0 3 Vx = 0, (14.11) V (x, 0) = Γ−1 U(x, 0) = 1 4 −1 1 1 3 ϕ1(x) ϕ2(x) = 1 4 −ϕ1(x) + ϕ2(x) ϕ1(x) + 3ϕ2(x) . (14.12) Equation (14.11) gives traveling wave solutions of the form: v(1) (x, t) = F(x + t), v(2) (x, t) = G(x − 3t). (14.13) We can write U in terms of V : U = ΓV = −3 1 1 1 v(1) v(2) = −3 1 1 1 F(x + t) G(x − 3t) = −3F(x + t) + G(x − 3t) F(x + t) + G(x − 3t) . (14.14) 25 In S’97, #7, the zero initial conditions are considered.
  • 109. Partial Differential Equations Igor Yanovsky, 2005 109 • For region I, (14.11) and (14.12) give two initial value problems (since value at any point in region I can be traced back along characteristics to initial conditions): v (1) t − v (1) x = 0, v(1)(x, 0) = −1 4 ϕ1(x) + 1 4 ϕ2(x); v (2) t + 3v (2) x = 0, v(2)(x, 0) = 1 4ϕ1(x) + 3 4ϕ2(x), which we solve by characteristics to get traveling wave solutions: v(1) (x, t) = − 1 4 ϕ1(x + t) + 1 4 ϕ2(x + t), v(2) (x, t) = 1 4 ϕ1(x − 3t) + 3 4 ϕ2(x − 3t). ➡ Thus, for region I, U = ΓV = −3 1 1 1 −1 4 ϕ1(x + t) + 1 4ϕ2(x + t) 1 4 ϕ1(x − 3t) + 3 4ϕ2(x − 3t) = 1 4 3ϕ1(x + t) − 3ϕ2(x + t) + ϕ1(x − 3t) + 3ϕ2(x − 3t) −ϕ1(x + t) + ϕ2(x + t) + ϕ1(x − 3t) + 3ϕ2(x − 3t) . • For region II, solutions of the form F(x + t) can be traced back to initial conditions. Thus, v(1) is the same as in region I. Solutions of the form G(x−3t) can be traced back to the boundary. Since from (14.14), u(1) = −3v(1) + v(2) , we have u(1) (x, t) = 3 4 ϕ1(x + t) − 3 4 ϕ2(x + t) + G(x − 3t). The boundary condition gives u(1) (0, t) = ψ(t) = 3 4 ϕ1(t) − 3 4 ϕ2(t) + G(−3t), G(−3t) = ψ(t) − 3 4 ϕ1(t) + 3 4 ϕ2(t), G(t) = ψ − t 3 − 3 4 ϕ1 − t 3 + 3 4 ϕ2 − t 3 , G(x − 3t) = ψ − x − 3t 3 − 3 4 ϕ1 − x − 3t 3 + 3 4 ϕ2 − x − 3t 3 . ➡ Thus, for region II, U = ΓV = −3 1 1 1 −1 4 ϕ1(x + t) + 1 4ϕ2(x + t) ψ(−x−3t 3 ) − 3 4ϕ1(−x−3t 3 ) + 3 4ϕ2(−x−3t 3 ) = 3 4 ϕ1(x + t) − 3 4 ϕ2(x + t) + ψ(−x−3t 3 ) − 3 4 ϕ1(−x−3t 3 ) + 3 4 ϕ2(−x−3t 3 ) −1 4ϕ1(x + t) + 1 4ϕ2(x + t) + ψ(−x−3t 3 ) − 3 4ϕ1(−x−3t 3 ) + 3 4ϕ2(−x−3t 3 ) .
Solutions for regions I and II satisfy (14.10). The solution for region I satisfies both initial conditions, and the solution for region II satisfies the given boundary condition.
  • 111. Partial Differential Equations Igor Yanovsky, 2005 111 Problem (F’91, #1). Solve explicitly the following initial-boundary value problem for linear 2×2 hyperbolic system ut = ux + vx vt = 3ux − vx, where 0 < t < +∞, 0 < x < +∞ with initial conditions u(x, 0) = u0(x), v(x, 0) = v0(x), 0 < x < +∞, and the boundary condition u(0, t) + bv(0, t) = ϕ(t), 0 < t < +∞, where b = 1 3 is a constant. What happens when b = 1 3 ? Proof. Let us change the notation (u ↔ u(1) , v ↔ u(2) ). Rewrite the equation as Ut + −1 −1 −3 1 Ux = 0, (14.15) U(x, 0) = u(1) (x, 0) u(2)(x, 0) = u (1) 0 (x) u (2) 0 (x) . The eigenvalues of the matrix A are λ1 = −2, λ2 = 2 and the corresponding eigen- vectors are e1 = 1 1 , e2 = 1 −3 . Thus, Λ = −2 0 0 2 , Γ = 1 1 1 −3 , Γ−1 = 1 4 3 1 1 −1 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + −2 0 0 2 Vx = 0, (14.16) V (x, 0) = Γ−1 U(x, 0) = 1 4 3 1 1 −1 u(1)(x, 0) u(2) (x, 0) = 1 4 3u (1) 0 (x) + u (2) 0 (x) u (1) 0 (x) − u (2) 0 (x) . (14.17) Equation (14.16) gives traveling wave solutions of the form: v(1) (x, t) = F(x + 2t), v(2) (x, t) = G(x − 2t). (14.18)
  • 112. Partial Differential Equations Igor Yanovsky, 2005 112 We can write U in terms of V : U = ΓV = 1 1 1 −3 v(1) v(2) = 1 1 1 −3 F(x + 2t) G(x − 2t) = F(x + 2t) + G(x − 2t) F(x + 2t) − 3G(x − 2t) . (14.19) • For region I, (14.16) and (14.17) give two initial value problems (since value at any point in region I can be traced back along characteristics to initial conditions): v (1) t − 2v (1) x = 0, v(1)(x, 0) = 3 4 u (1) 0 (x) + 1 4u (2) 0 (x); v (2) t + 2v (2) x = 0, v(2)(x, 0) = 1 4u (1) 0 (x) − 1 4 u (2) 0 (x), which we solve by characteristics to get traveling wave solutions: v(1) (x, t) = 3 4 u (1) 0 (x + 2t) + 1 4 u (2) 0 (x + 2t); v(2) (x, t) = 1 4 u (1) 0 (x − 2t) − 1 4 u (2) 0 (x − 2t). ➡ Thus, for region I, U = ΓV = 1 1 1 −3 3 4u (1) 0 (x + 2t) + 1 4 u (2) 0 (x + 2t) 1 4u (1) 0 (x − 2t) − 1 4 u (2) 0 (x − 2t) = 3 4u (1) 0 (x + 2t) + 1 4 u (2) 0 (x + 2t) + 1 4u (1) 0 (x − 2t) − 1 4 u (2) 0 (x − 2t) 3 4u (1) 0 (x + 2t) + 1 4 u (2) 0 (x + 2t) − 3 4u (1) 0 (x − 2t) + 3 4 u (2) 0 (x − 2t) . • For region II, solutions of the form F(x+2t) can be traced back to initial conditions. Thus, v(1) is the same as in region I. Solutions of the form G(x−2t) can be traced back to the boundary. The boundary condition gives u(1) (0, t) + bu(2) (0, t) = ϕ(t). Using (14.19), v(1) (0, t) + G(−2t) + bv(1) (0, t) − 3bG(−2t) = ϕ(t), (1 + b)v(1) (0, t) + (1 − 3b)G(−2t) = ϕ(t), (1 + b) 3 4 u (1) 0 (2t) + 1 4 u (2) 0 (2t) + (1 − 3b)G(−2t) = ϕ(t), G(−2t) = ϕ(t) − (1 + b) 3 4 u (1) 0 (2t) + 1 4 u (2) 0 (2t) 1 − 3b , G(t) = ϕ(−t 2) − (1 + b) 3 4u (1) 0 (−t) + 1 4 u (2) 0 (−t) 1 − 3b , G(x − 2t) = ϕ(−x−2t 2 ) − (1 + b) 3 4 u (1) 0 (−(x − 2t)) + 1 4 u (2) 0 (−(x − 2t)) 1 − 3b . ➡ Thus, for region II, U = ΓV = 1 1 1 −3 ⎛ ⎝ 3 4 u (1) 0 (x + 2t) + 1 4u (2) 0 (x + 2t) ϕ(−x−2t 2 )−(1+b) 3 4 u (1) 0 (−(x−2t))+1 4 u (2) 0 (−(x−2t)) 1−3b ⎞ ⎠ = ⎛ ⎝ 3 4u (1) 0 (x + 2t) + 1 4 u (2) 0 (x + 2t) + ϕ(−x−2t 2 )−(1+b) 3 4 u (1) 0 (−(x−2t))+1 4 u (2) 0 (−(x−2t)) 1−3b 3 4 u (1) 0 (x + 2t) + 1 4 u (2) 0 (x + 2t) − 3ϕ(−x−2t 2 )−3(1+b) 3 4 u (1) 0 (−(x−2t))+1 4 u (2) 0 (−(x−2t)) 1−3b ⎞ ⎠ . The following were performed, but are arithmetically complicated: Solutions for regions I and II satisfy (14.15).
The solution for region I satisfies both initial conditions, and the solution for region II satisfies the given boundary condition.

If $b = \frac{1}{3}$,
$$u^{(1)}(0, t) + \tfrac{1}{3}u^{(2)}(0, t) = F(2t) + G(-2t) + \tfrac{1}{3}\big(F(2t) - 3G(-2t)\big) = \tfrac{4}{3}F(2t) = \varphi(t).$$
The boundary condition then determines only $F$; the solutions of the form $v^{(2)} = G(x - 2t)$, which enter the domain through $x = 0$, are not determined there, and this leads to ill-posedness.
  • 114. Partial Differential Equations Igor Yanovsky, 2005 114 Problem (F’96, #8). Consider the system ut = 3ux + 2vx vt = −vx − v in the region x ≥ 0, t ≥ 0. Which of the following sets of initial and boundary data make this a well-posed problem? a) u(x, 0) = 0, x ≥ 0 v(x, 0) = x2 , x ≥ 0 v(0, t) = t2 , t ≥ 0. b) u(x, 0) = 0, x ≥ 0 v(x, 0) = x2 , x ≥ 0 u(0, t) = t, t ≥ 0. c) u(x, 0) = 0, x ≥ 0 v(x, 0) = x2 , x ≥ 0 u(0, t) = t, t ≥ 0 v(0, t) = t2 , t ≥ 0. Proof. Rewrite the equation as Ut + AUx = BU. Initial conditions are same for (a),(b),(c): Ut + −3 −2 0 1 Ux = 0 0 0 −1 U, U(x, 0) = u(1)(x, 0) u(2) (x, 0) = 0 x2 . The eigenvalues of the matrix A are λ1 = −3, λ2 = 1, and the corresponding eigen- vectors are e1 = 1 0 , e2 = 1 −2 . Thus, Λ = −3 0 0 1 , Γ = 1 1 0 −2 , Γ−1 = 1 2 2 1 0 −1 . Let U = ΓV . Then, Ut + AUx = BU, ΓVt + AΓVx = BΓV, Vt + Γ−1 AΓVx = Γ−1 BΓV, Vt + ΛVx = Γ−1 BΓV. Thus, the transformed problem is Vt + −3 0 0 1 Vx = 0 1 0 −1 V, (14.20) V (x, 0) = Γ−1 U(x, 0) = 1 2 2 1 0 −1 0 x2 = x2 2 1 −1 . (14.21) Equation (14.20) gives traveling wave solutions of the form v(1) (x, t) = F(x + 3t), v(2) (x, t) = G(x − t). (14.22)
  • 115. Partial Differential Equations Igor Yanovsky, 2005 115 We can write U in terms of V : U = ΓV = 1 1 0 −2 v(1) v(2) = 1 1 0 −2 F(x + 3t) G(x − t) = F(x + 3t) + G(x − t) −2G(x − t) . (14.23) • For region I, (14.20) and (14.21) give two initial value problems (since a value at any point in region I can be traced back along both characteristics to initial conditions): v (1) t − 3v (1) x = v(2), v(1)(x, 0) = x2 2 ; v (2) t + v (2) x = −v(2), v(2)(x, 0) = −x2 2 , which we do not solve here. Thus, initial conditions for v(1) and v(2) have to be defined. Since (14.23) defines u(1) and u(2) in terms of v(1) and v(2), we need to define two initial conditions for U. • For region II, solutions of the form F(x+3t) can be traced back to initial conditions. Thus, v(1) is the same as in region I. Solutions of the form G(x − t) are traced back to the boundary at x = 0. Since from (14.23), u(2)(x, t) = −2v(2)(x, t) = −2G(x − t), i.e. u(2) is written in term of v(2) only, u(2) requires a boundary condition to be defined on x = 0. Thus, a) u(2) (0, t) = t2 , t ≥ 0. Well-posed. b) u(1)(0, t) = t, t ≥ 0. Not well-posed. c) u(1) (0, t) = t, u(2) (0, t) = t2 , t ≥ 0. Not well-posed.
  • 116. Partial Differential Equations Igor Yanovsky, 2005 116 Problem (F’02, #3). Consider the first order system ut + ux + vx = 0 vt + ux − vx = 0 on the domain 0 < t < ∞ and 0 < x < 1. Which of the following sets of initial- boundary data are well posed for this system? Explain your answers. a) u(x,0) = f(x), v(x,0) = g(x); b) u(x,0) = f(x), v(x,0) = g(x), u(0,t) = h(x), v(0,t) = k(x); c) u(x,0) = f(x), v(x,0) = g(x), u(0,t) = h(x), v(1,t) = k(x). Proof. Rewrite the equation as Ut+AUx = 0. Initial conditions are same for (a),(b),(c): Ut + 1 1 1 −1 Ux = 0, U(x, 0) = u(1) (x, 0) u(2)(x, 0) = f(x) g(x) . The eigenvalues of the matrix A are λ1 = √ 2, λ2 = − √ 2 and the corresponding eigenvectors are e1 = 1 −1 + √ 2 , e2 = 1 −1 − √ 2 . Thus, Λ = √ 2 0 0 − √ 2 , Γ = 1 1 −1 + √ 2 −1 − √ 2 , Γ−1 = 1 2 √ 2 1 + √ 2 1 −1 + √ 2 −1 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + √ 2 0 0 − √ 2 Vx = 0, (14.24) V (x, 0) = Γ−1 U(x, 0) = 1 2 √ 2 1 + √ 2 1 −1 + √ 2 −1 f(x) g(x) = 1 2 √ 2 (1 + √ 2)f(x) + g(x) (−1 + √ 2)f(x) − g(x) . (14.25) Equation (14.24) gives traveling wave solutions of the form: v(1) (x, t) = F(x − √ 2t), v(2) (x, t) = G(x + √ 2t). (14.26) However, we can continue and obtain the solutions. We have two initial value problems v (1) t + √ 2v (1) x = 0, v(1)(x, 0) = (1+ √ 2) 2 √ 2 f(x) + 1 2 √ 2 g(x); v (2) t − √ 2v (2) x = 0, v(2)(x, 0) = (−1+ √ 2) 2 √ 2 f(x) − 1 2 √ 2 g(x), which we solve by characteristics to get traveling wave solutions: v(1) (x, t) = (1 + √ 2) 2 √ 2 f(x − √ 2t) + 1 2 √ 2 g(x − √ 2t), v(2) (x, t) = (−1 + √ 2) 2 √ 2 f(x + √ 2t) − 1 2 √ 2 g(x + √ 2t).
  • 117. Partial Differential Equations Igor Yanovsky, 2005 117 We can obtain general solution U by writing U in terms of V : U = ΓV = Γ v(1) v(2) = 1 1 −1 + √ 2 −1 − √ 2 1 2 √ 2 (1 + √ 2)f(x − √ 2t) + g(x − √ 2t) (−1 + √ 2)f(x + √ 2t) − g(x + √ 2t) . (14.27) • In region I, the solution is obtained by solving two initial value problems(since a value at any point in region I can be traced back along both characteristics to initial conditions). • In region II, the solutions of the form v(2) = G(x+ √ 2t) can be traced back to initial conditions and those of the form v(1) = F(x − √ 2t), to left boundary. Since by (14.27), u(1) and u(2) are written in terms of both v(1) and v(2), one initial condition and one boundary condition at x = 0 need to be prescribed. • In region III, the solutions of the form v(2) = G(x + √ 2t) can be traced back to right boundary and those of the form v(1) = F(x − √ 2t), to initial condition. Since by (14.27), u(1) and u(2) are written in terms of both v(1) and v(2) , one initial condition and one boundary condition at x = 1 need to be prescribed. • To obtain the solution for region IV, two boundary conditions, one for each bound- ary, should be given. Thus, a) No boundary conditions. Not well-posed. b) u(1)(0, t) = h(x), u(2)(0, t) = k(x). Not well-posed. c) u(1) (0, t) = h(x), u(2) (1, t) = k(x). Well-posed.
  • 118. Partial Differential Equations Igor Yanovsky, 2005 118 Problem (S’94, #3). Consider the system of equations ft + gx = 0 gt + fx = 0 ht + 2hx = 0 on the set x ≥ 0, t ≥ 0, with the following initial-boundary values: a) f, g, h prescribed on t = 0, x ≥ 0; f, h prescribed on x = 0, t ≥ 0. b) f, g, h prescribed on t = 0, x ≥ 0; f − g, h prescribed on x = 0, t ≥ 0. c) f + g, h prescribed on t = 0, x ≥ 0; f, g, h prescribed on x = 0, t ≥ 0. For each of these 3 sets of data, determine whether or not the system is well-posed. Justify your conclusions. Proof. The third equation is decoupled from the first two and can be considered sepa- rately. Its solution can be written in the form h(x, t) = H(x − 2t), and therefore, h must be prescribed on t = 0 and on x = 0, since the characteristics propagate from both the x and t axis. We rewrite the first two equations as (f ↔ u1, g ↔ u2): Ut + 0 1 1 0 Ux = 0, U(x, 0) = u(1) (x, 0) u(2)(x, 0) . The eigenvalues of the matrix A are λ1 = −1, λ2 = 1 and the corresponding eigen- vectors are e1 = −1 1 , e2 = 1 1 . Thus, Λ = −1 0 0 1 , Γ = −1 1 1 1 , Γ−1 = 1 2 −1 1 1 1 . Let U = ΓV . Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + −1 0 0 1 Vx = 0, (14.28) V (x, 0) = Γ−1 U(x, 0) = 1 2 −1 1 1 1 u(1) (x, 0) u(1) (x, 0) . (14.29) Equation (14.28) gives traveling wave solutions of the form: v(1) (x, t) = F(x + t), v(2) (x, t) = G(x − t). (14.30)
  • 119. Partial Differential Equations Igor Yanovsky, 2005 119 We can write U in terms of V : U = ΓV = −1 1 1 1 v(1) v(2) = −1 1 1 1 F(x + t) G(x − t) = −F(x + t) + G(x − t) F(x + t) + G(x − t) . (14.31) • For region I, (14.28) and (14.29) give two initial value problems (since a value at any point in region I can be traced back along both characteristics to initial conditions). Thus, initial conditions for v(1) and v(2) have to be defined. Since (14.31) defines u(1) and u(2) in terms of v(1) and v(2) , we need to define two initial conditions for U. • For region II, solutions of the form F(x + t) can be traced back to initial conditions. Thus, v(1) is the same as in region I. Solutions of the form G(x − t) are traced back to the boundary at x = 0. Since from (14.31), u(2)(x, t) = v(1)(x, t) + v(2)(x, t) = F(x + t) + G(x − t), i.e. u(2) is written in terms of v(2) = G(x − t), u(2) requires a boundary condition to be defined on x = 0. a) u(1) , u(2) prescribed on t = 0; u(1) prescribed on x = 0. Since u(1) (x, t) = −F(x + t) + G(x − t), u(2)(x, t) = F(x + t) + G(x − t), i.e. both u(1) and u(2) are written in terms of F(x + t) and G(x − t), we need to define two initial conditions for U (on t = 0). A boundary condition also needs to be prescribed on x = 0 to be able to trace back v(2) = G(x − t). Well-posed. b) u(1), u(2) prescribed on t = 0; u(1) − u(2) prescribed on x = 0. As in part (a), we need to define two initial conditions for U. Since u(1) − u(2) = −2F(x + t), its definition on x = 0 leads to ill-posedness. On the contrary, u(1) + u(2) = 2G(x − t) should be defined on x = 0 in order to be able to trace back the values through characteristics. Ill-posed. c) u(1) + u(2) prescribed on t = 0; u(1), u(2) prescribed on x = 0. Since u(1) + u(2) = 2G(x − t), another initial condition should be prescribed to be able to trace back solutions of the form v(2) = F(x + t), without which the problem is ill-posed. Also, two boundary conditions for both u(1) and u(2) define solutions of both v(1) = G(x − t) and v(2) = F(x + t) on the boundary. The former boundary condition leads to ill-posedness. Ill-posed.
  • 120. Partial Differential Equations Igor Yanovsky, 2005 120 Problem (F’92, #8). Consider the system ut + ux + avx = 0 vt + bux + vx = 0 for 0 < x < 1 with boundary and initial conditions u = v = 0 for x = 0 u = u0, v = v0 for t = 0. a) For which values of a and b is this a well-posed problem? b) For this class of a, b, state conditions on u0 and v0 so that the solution u, v will be continuous and continuously differentiable. Proof. a) Let us change the notation (u ↔ u(1), v ↔ u(2)). Rewrite the equation as Ut + 1 a b 1 Ux = 0, (14.32) U(x, 0) = u(1) (x, 0) u(2) (x, 0) = u (1) 0 (x) u (2) 0 (x) , U(0, t) = u(1) (0, t) u(2)(0, t) = 0. The eigenvalues of the matrix A are λ1 = 1 − √ ab, λ2 = 1 + √ ab. Λ = 1 − √ ab 0 0 1 + √ ab . Let U = ΓV , where Γ is a matrix of eigenvectors. Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + 1 − √ ab 0 0 1 + √ ab Vx = 0, (14.33) V (x, 0) = Γ−1 U(x, 0). The equation (14.33) gives traveling wave solutions of the form: v(1) (x, t) = F(x − (1 − √ ab)t), v(2) (x, t) = G(x − (1 + √ ab)t). (14.34) We also have U = ΓV , i.e. both u(1) and u(2) (and their initial and boundary conditions) are combinations of v(1) and v(2). In order for this problem to be well-posed, both sets of characteristics should emanate from the boundary at x = 0. Thus, the eigenvalues of the system are real (ab > 0) and λ1,2 > 0 (ab < 1). Thus, 0 < ab < 1. b) For U to be C1, we require the compatibility condition, u (1) 0 (0) = 0, u (2) 0 (0) = 0.
Problem (F'93, #2). Consider the initial-boundary value problem
$$u_t + u_x = 0, \qquad v_t - (1 - cx^2)v_x + u_x = 0$$
on $-1 \le x \le 1$ and $0 \le t$, with the following prescribed data: $u(x, 0)$, $v(x, 0)$, $u(-1, t)$, $v(1, t)$.
For which values of $c$ is this a well-posed problem?

Proof. Let us change the notation ($u \leftrightarrow u^{(1)}$, $v \leftrightarrow u^{(2)}$).
The first equation can be solved with $u^{(1)}(x, 0) = F(x)$ to get a solution of the form
$$u^{(1)}(x, t) = F(x - t),$$
which requires $u^{(1)}(x, 0)$ and $u^{(1)}(-1, t)$ to be prescribed. With $u^{(1)}$ known, we can solve the second equation
$$u^{(2)}_t - (1 - cx^2)\,u^{(2)}_x + F'(x - t) = 0.$$
Solving this equation by characteristics, we find that the characteristics in the $xt$-plane satisfy
$$\frac{dx}{dt} = cx^2 - 1.$$
We need to determine $c$ such that the prescribed data $u^{(2)}(x, 0)$ and $u^{(2)}(1, t)$ make the problem well-posed. The boundary condition for $u^{(2)}(1, t)$ requires the characteristics to propagate to the left as $t$ increases. Thus $x(t)$ must be a decreasing function, i.e.
$$\frac{dx}{dt} < 0 \;\Rightarrow\; cx^2 - 1 < 0 \text{ for } -1 < x < 1 \;\Rightarrow\; c < 1.$$
We could also carry out an analysis similar to that done in other problems on first-order systems, finding the eigenvalues/eigenvectors of the system and using the fact that $u^{(1)}(x, t)$ is known at both boundaries (i.e. values of $u^{(1)}(1, t)$ can be traced back either to the initial conditions or to the boundary condition on $x = -1$).
  • 122. Partial Differential Equations Igor Yanovsky, 2005 122 Problem (S’91, #4). Consider the first order system ut + aux + bvx = 0 vt + cux + dvx = 0 for 0 < x < 1, with prescribed initial data: u(x, 0) = u0(x) v(x, 0) = v0(x). a) Find conditions on a, b, c, d such that there is a full set of characteristics and, in this case, find the characteristic speeds. b) For which values of a, b, c, d can boundary data be prescribed on x = 0 and for which values can it be prescribed on x = 1? How many pieces of data can be prescribed on each boundary? Proof. a) Let us change the notation (u ↔ u(1), v ↔ u(2)). Rewrite the equation as Ut + a b c d Ux = 0, (14.35) U(x, 0) = u(1)(x, 0) u(2) (x, 0) = u (1) 0 (x) u (2) 0 (x) . The system is hyperbolic if for each value of u(1) and u(2) the eigenvalues are real and the matrix is diagonalizable, i.e. there is a complete set of linearly independent eigenvectors. The eigenvalues of the matrix A are λ1,2 = a + d ± (a + d)2 − 4(ad − bc) 2 = a + d ± (a − d)2 + 4bc 2 . We need (a − d)2 + 4bc > 0. This also makes the problem to be diagonalizable. Let U = ΓV , where Γ is a matrix of eigenvectors. Then, Ut + AUx = 0, ΓVt + AΓVx = 0, Vt + Γ−1 AΓVx = 0, Vt + ΛVx = 0. Thus, the transformed problem is Vt + λ1 0 0 λ2 Vx = 0, (14.36) Equation (14.36) gives traveling wave solutions of the form: v(1) (x, t) = F(x − λ1t), v(2) (x, t) = G(x − λ2t). (14.37) The characteristic speeds are dx dt = λ1, dx dt = λ2. b) We assume (a + d)2 − 4(ad − bc) > 0. a + d > 0, ad − bc > 0 ⇒ λ1, λ2 > 0 ⇒ 2 B.C. on x = 0. a + d > 0, ad − bc < 0 ⇒ λ1 < 0, λ2 > 0 ⇒ 1 B.C. on x = 0, 1 B.C. on x = 1. a + d < 0, ad − bc > 0 ⇒ λ1, λ2 < 0 ⇒ 2 B.C. on x = 1.
$a + d < 0$, $ad - bc < 0$ $\;\Rightarrow\;$ $\lambda_1 < 0$, $\lambda_2 > 0$ $\;\Rightarrow\;$ 1 B.C. on $x = 0$, 1 B.C. on $x = 1$.
$a + d > 0$, $ad - bc = 0$ $\;\Rightarrow\;$ $\lambda_1 = 0$, $\lambda_2 > 0$ $\;\Rightarrow\;$ 1 B.C. on $x = 0$.
$a + d < 0$, $ad - bc = 0$ $\;\Rightarrow\;$ $\lambda_1 = 0$, $\lambda_2 < 0$ $\;\Rightarrow\;$ 1 B.C. on $x = 1$.
$a + d = 0$, $ad - bc < 0$ $\;\Rightarrow\;$ $\lambda_1 < 0$, $\lambda_2 > 0$ $\;\Rightarrow\;$ 1 B.C. on $x = 0$, 1 B.C. on $x = 1$.
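The case analysis above amounts to counting characteristic speeds by sign: each positive speed needs one boundary condition at $x = 0$ (characteristics entering from the left), each negative speed one at $x = 1$, and a zero speed needs none. A small helper illustrating this bookkeeping (a sketch, not from the handbook; assumes NumPy):

```python
import numpy as np

def bc_count(A, tol=1e-12):
    """For U_t + A U_x = 0 on 0 < x < 1, return (#BCs at x=0, #BCs at x=1):
    the numbers of positive and negative characteristic speeds (eigenvalues of A)."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    if np.max(np.abs(lam.imag)) > tol:
        raise ValueError("complex speeds: the system is not hyperbolic")
    speeds = lam.real
    return int(np.sum(speeds > tol)), int(np.sum(speeds < -tol))

# Example: a = d = 1, b = 2, c = 3  =>  speeds 1 +/- sqrt(6), one of each sign.
print(bc_count([[1.0, 2.0],
                [3.0, 1.0]]))   # -> (1, 1): one B.C. on x = 0, one on x = 1
```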
  • 124. Partial Differential Equations Igor Yanovsky, 2005 124 Problem (S’94, #2). Consider the differential operator L u v = ut + 9vx − uxx vt − ux − vxx on 0 ≤ x ≤ 2π, t ≥ 0, in which the vector u(x, t) v(x, t) consists of two functions that are periodic in x. a) Find the eigenfunctions and eigenvalues of the operator L. b) Use the results of (a) to solve the initial value problem L u v = 0 for t ≥ 0, u v = eix 0 for t = 0. Proof. a) We find the ”space” eigenvalues and eigenfunctions. We rewrite the system as Ut + 0 9 −1 0 Ux + −1 0 0 −1 Uxx = 0, and find eigenvalues 0 9 −1 0 Ux + −1 0 0 −1 Uxx = λU. (14.38) Set U = u(x, t) v(x, t) = n=∞ n=−∞ un(t)einx n=∞ n=−∞ vn(t)einx . Plugging this into (14.38), we get 0 9 −1 0 inun(t)einx invn(t)einx + −1 0 0 −1 −n2 un(t)einx −n2vn(t)einx = λ un(t)einx vn(t)einx , 0 9 −1 0 inun(t) invn(t) + −1 0 0 −1 −n2 un(t) −n2vn(t) = λ un(t) vn(t) , 0 9in −in 0 un(t) vn(t) + n2 0 0 n2 un(t) vn(t) = λ un(t) vn(t) , n2 − λ 9in −in n2 − λ un(t) vn(t) = 0, (n2 − λ)2 − 9n2 = 0, which gives λ1 = n2+3n, λ2 = n2−3n, are eigenvalues, and v1 = 3i 1 , v2 = 3i −1 , are corresponding eigenvectors.
  • 125. Partial Differential Equations Igor Yanovsky, 2005 125 b) We want to solve u v t + L u v = 0, L u v = 9vx − uxx −ux − vxx . We have u v t = −L u v = −λ u v , i.e. u v = e−λt . We can write the solution as U(x, t) = un(t)einx vn(t)einx = ∞ n=−∞ ane−λ1t v1einx + bne−λ2t v2einx = ∞ n=−∞ ane−(n2+3n)t 3i 1 einx + bne−(n2−3n)t 3i −1 einx . U(x, 0) = ∞ n=−∞ an 3i 1 einx + bn 3i −1 einx = eix 0 , ⇒ an = bn = 0, n = 1; a1 + b1 = 1 3i and a1 = b1 ⇒ a1 = b1 = 1 6i . ⇒ U(x, t) = 1 6i e−4t 3i 1 eix + 1 6i e2t 3i −1 eix = 1 2(e−4t + e2t ) 1 6i(e−4t − e2t) eix . 26 27 26 ChiuYen’s and Sung-Ha’s solutions give similar answers. 27 Questions about this problem: 1. Needed to find eigenfunctions, not eigenvectors. 2. The notation of L was changed. The problem statement incorporates the derivatives wrt. t into L. 3. Why can we write the solution in this form above?
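The per-mode computation above can be cross-checked numerically. For mode $n$ the coefficient vector satisfies $\frac{d}{dt}(u_n, v_n) = -M_n\,(u_n, v_n)$ with $M_n = \begin{pmatrix} n^2 & 9in \\ -in & n^2 \end{pmatrix}$ (this matrix form is a rearrangement of the mode equations for this sketch, not notation from the handbook), whose eigenvalues are $n^2 \pm 3n$. The sketch below (assuming SciPy) compares the matrix exponential for $n = 1$ with the closed-form coefficients found above:

```python
import numpy as np
from scipy.linalg import expm

n = 1
M = np.array([[n**2, 9j*n],
              [-1j*n, n**2]])               # mode-n coefficient matrix

print(np.linalg.eigvals(M))                 # -> n^2 + 3n = 4 and n^2 - 3n = -2

t = 0.7
coeff_numeric = expm(-M*t) @ np.array([1.0, 0.0])   # evolve (u_1, v_1)(0) = (1, 0)

# Closed-form coefficients from the solution above:
coeff_exact = np.array([0.5*(np.exp(-4*t) + np.exp(2*t)),
                        (np.exp(-4*t) - np.exp(2*t)) / 6j])
print(np.max(np.abs(coeff_numeric - coeff_exact)))  # ~ 1e-15
```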
Problem (W'04, #6). Consider the first order system
$$u_t - u_x = v_t + v_x = 0$$
in the diamond-shaped region $-1 < x + t < 1$, $-1 < x - t < 1$. For each of the following boundary value problems state whether this problem is well-posed. If it is well-posed, find the solution.
a) $u = u_0(x + t)$ on $x - t = -1$, $\quad v = v_0(x - t)$ on $x + t = -1$.
b) $v = v_0(x + t)$ on $x - t = -1$, $\quad u = u_0(x - t)$ on $x + t = -1$.

Proof. We have $u_t - u_x = 0$, $v_t + v_x = 0$.
• $u$ is constant along the characteristics $x + t = c_1$. Thus its solution is $u(x, t) = u_0(x + t)$. If the data are prescribed on $x - t = -1$, every characteristic of $u$ crosses that line, so the solution is determined in the entire region by tracing back along the characteristics.
• $v$ is constant along the characteristics $x - t = c_2$. Thus its solution is $v(x, t) = v_0(x - t)$. If the data are prescribed on $x + t = -1$, every characteristic of $v$ crosses that line, so the solution is determined in the entire region by tracing forward along the characteristics.
Hence (a) is well-posed, with $u(x, t) = u_0(x + t)$ and $v(x, t) = v_0(x - t)$. In (b) the data for $u$ and $v$ are prescribed along their own characteristics, so the values cannot be propagated off those lines; this problem is not well-posed.
  • 127. Partial Differential Equations Igor Yanovsky, 2005 127 15 Problems: Gas Dynamics Systems 15.1 Perturbation Problem (S’92, #3). 28 29 Consider the gas dynamic equations ut + uux + (F(ρ))x = 0, ρt + (uρ)x = 0. Here F(ρ) is a given C∞-smooth function of ρ. At t = 0, 2π-periodic initial data u(x, 0) = f(x), ρ(x, 0) = g(x). a) Assume that f(x) = U0 + εf1(x), g(x) = R0 + εg1(x) where U0, R0 > 0 are constants and εf1(x), εg1(x) are “small” perturbations. Lin- earize the equations and given conditions for F such that the linearized problem is well-posed. b) Assume that U0 > 0 and consider the above linearized equations for 0 ≤ x ≤ 1, t ≥ 0. Construct boundary conditions such that the initial-boundary value problem is well-posed. Proof. a) We write the equations in characteristic form: ut + uux + F (ρ)ρx = 0, ρt + uxρ + uρx = 0. Consider the special case of nearly constant initial data u(x, 0) = u0 + εu1(x, 0), ρ(x, 0) = ρ0 + ερ1(x, 0). Then we can approximate nonlinear equations by linear equations. Assuming u(x, t) = u0 + εu1(x, t), ρ(x, t) = ρ0 + ερ1(x, t) remain valid with u1 = O(1), ρ1 = O(1), we find that ut = εu1t, ρt = ερ1t, ux = εu1x, ρx = ερ1x, F (ρ) = F (ρ0 + ερ1(x, t)) = F (ρ0) + ερ1F (ρ0) + O(ε2 ). Plugging these into , gives εu1t + (u0 + εu1)εu1x + F (ρ0) + ερ1F (ρ0) + O(ε2 ) ερ1x = 0, ερ1t + εu1x(ρ0 + ερ1) + (u0 + εu1)ερ1x = 0. Dividing by ε gives u1t + u0u1x + F (ρ0)ρ1x = −εu1u1x − ερ1ρ1xF (ρ0) + O(ε2 ), ρ1t + u1xρ0 + u0ρ1x = −εu1xρ1 − εu1ρ1x. 28 See LeVeque, Second Edition, Birkh¨auser Verlag, 1992, p. 44. 29 This problem has similar notation with S’92, #4.
  • 128. Partial Differential Equations Igor Yanovsky, 2005 128 For small ε, we have u1t + u0u1x + F (ρ0)ρ1x = 0, ρ1t + u1xρ0 + u0ρ1x = 0. This can be written as u1 ρ1 t + u0 F (ρ0) ρ0 u0 u1 ρ1 x = 0 0 . u0 − λ F (ρ0) ρ0 u0 − λ = (u0 − λ)(u0 − λ) − ρ0F (ρ0) = 0, λ2 − 2u0λ + u2 0 − ρ0F (ρ0) = 0, λ1,2 = u0 ± ρ0F (ρ0), u0 > 0, ρ0 > 0. For well-posedness, need λ1,2 ∈ R or F (ρ0) ≥ 0. b) We have u0 > 0, and λ1 = u0 + ρ0F (ρ0), λ2 = u0 − ρ0F (ρ0). • If u0 > ρ0F (ρ0) ⇒ λ1 > 0, λ2 > 0 ⇒ 2 BC at x = 0. • If u0 = ρ0F (ρ0) ⇒ λ1 > 0, λ2 = 0 ⇒ 1 BC at x = 0. • If 0 < u0 < ρ0F (ρ0) ⇒ λ1 > 0, λ2 < 0 ⇒ 1 BC at x = 0, 1 BC at x = 1. 15.2 Stationary Solutions Problem (S’92, #4). 30 Consider ut + uux + ρx = νuxx, ρt + (uρ)x = 0 for t ≥ 0, −∞ < x < ∞. Give conditions for the states U+, U−, R+, R−, such that the system has stationary solutions (i.e. ut = ρt = 0) satisfying lim x→+∞ u ρ = U+ R+ , lim x→−∞ u ρ = U− R− . Proof. For stationary solutions, we need ut = − u2 2 x − ρx + νuxx = 0, ρt = −(uρ)x = 0. Integrating the above equations, we obtain − u2 2 − ρ + νux = C1, −uρ = C2. 30 This problem has similar notation with S’92, #3.
The conditions at $x = \pm\infty$ give $u_x \to 0$ as $x \to \pm\infty$. Thus
$$\frac{U_+^2}{2} + R_+ = \frac{U_-^2}{2} + R_-, \qquad U_+ R_+ = U_- R_-.$$
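Returning to the linearized system of S'92, #3: the characteristic speeds $\lambda_{1,2} = u_0 \pm \sqrt{\rho_0 F'(\rho_0)}$ can be confirmed symbolically. A brief sketch (illustrative, assuming SymPy; the symbol `Fp` stands in for the number $F'(\rho_0)$):

```python
import sympy as sp

u0, rho0, Fp = sp.symbols('u0 rho0 Fp', positive=True)   # Fp plays the role of F'(rho0) > 0

A = sp.Matrix([[u0,   Fp],
               [rho0, u0]])     # linearized system: (u1, rho1)_t + A (u1, rho1)_x = 0

print(A.eigenvals())            # {u0 - sqrt(Fp*rho0): 1, u0 + sqrt(Fp*rho0): 1}
```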
  • 130. Partial Differential Equations Igor Yanovsky, 2005 130 15.3 Periodic Solutions Problem (F’94, #4). Let u(x, t) be a solution of the Cauchy problem ut = −uxxxx − 2uxx, −∞ < x < +∞, 0 < t < +∞, u(x, 0) = ϕ(x), where u(x, t) and ϕ(x) are C∞ functions periodic in x with period 2π; i.e. u(x + 2π, t) = u(x, t), ∀x, ∀t. Prove that ||u(·, t)|| ≤ Ceat ||ϕ|| where ||u(·, t)|| = 2π 0 |u(x, t)|2 dx, ||ϕ|| = 2π 0 |ϕ(x)|2 dx, C, a are some constants. Proof. METHOD I: Since u is 2π-periodic, let u(x, t) = ∞ n=−∞ an(t)einx . Plugging this into the equation, we get ∞ n=−∞ an(t)einx = − ∞ n=−∞ n4 an(t)einx + 2 ∞ n=−∞ n2 an(t)einx , an(t) = (−n4 + 2n2 )an(t), an(t) = an(0)e(−n4+2n2)t . Also, initial condition gives u(x, 0) = ∞ n=−∞ an(0)einx = ϕ(x), ∞ n=−∞ an(0)einx = |ϕ(x)|. ||u(x, t)||2 2 = 2π 0 u2 (x, t) dx = 2π 0 ∞ n=−∞ an(t)einx ∞ m=−∞ an(t)eimx dx = ∞ n=−∞ a2 n(t) 2π 0 einx e−inx dx = 2π ∞ n=−∞ a2 n(t) = 2π ∞ n=−∞ a2 n(0)e2(−n4+2n2)t ≤ 2π ∞ n=−∞ a2 n(0) ∞ n=−∞ e2(−n4+2n2)t = 2π ∞ n=−∞ a2 n(0) ||ϕ||2 e2t ∞ n=−∞ e−2(n2−1)2t = C1, (convergent) = C2e2t ||ϕ||2 . ⇒ ||u(x, t)|| ≤ Cet ||ϕ||.
  • 131. Partial Differential Equations Igor Yanovsky, 2005 131 METHOD II: Multiply this equation by u and integrate uut = −uuxxxx − 2uuxx, 1 2 d dt (u2 ) = −uuxxxx − 2uuxx, 1 2 d dt 2π 0 u2 dx = − 2π 0 uuxxxx dx − 2π 0 2uuxx dx, 1 2 d dt ||u||2 2 = −uuxxx 2π 0 =0 + uxuxx 2π 0 =0 − 2π 0 u2 xx dx − 2π 0 2uuxx dx, 1 2 d dt ||u||2 2 = − 2π 0 u2 xx dx − 2π 0 2uuxx dx (−2ab ≤ a2 + b2 ) ≤ − 2π 0 u2 xx dx + 2π 0 (u2 + u2 xx) dx = 2π 0 u2 dx = ||u||2 , ⇒ d dt ||u||2 ≤ 2||u||2 , ||u||2 ≤ ||u(0)||2 e2t , ||u|| ≤ ||u(0)||et . METHOD III: Can use Fourier transform. See ChiuYen’s solutions, that have both Method II and III.
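Both methods rest on the fact that the mode-$n$ growth rate $-n^4 + 2n^2$ is bounded above by $1$ (attained at $n = \pm 1$), which is what allows the constant $a = 1$ in the estimate. A quick numerical check of this bound (an illustrative sketch, assuming NumPy):

```python
import numpy as np

n = np.arange(-50, 51)
growth = -n**4 + 2*n**2                     # growth rate of the n-th Fourier mode
print(growth.max(), n[np.argmax(growth)])   # -> 1, attained at n = -1 (and at n = +1)
```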
Problem (S'90, #4). Let $f(x) \in C^\infty$ be a $2\pi$-periodic function, i.e. $f(x) = f(x + 2\pi)$, and denote by $\|f\|^2 = \int_0^{2\pi} |f(x)|^2\,dx$ the $L^2$-norm of $f$.
a) Express $\|d^p f/dx^p\|^2$ in terms of the Fourier coefficients of $f$.
b) Let $q > p > 0$ be integers. Prove that $\forall\, \varepsilon > 0$ there is a constant $K = K(\varepsilon, p, q)$ such that
$$\Big\|\frac{d^p f}{dx^p}\Big\|^2 \le \varepsilon\,\Big\|\frac{d^q f}{dx^q}\Big\|^2 + K\|f\|^2.$$
c) Discuss how $K$ depends on $\varepsilon$.

Proof. a) Let³¹
$$f(x) = \sum_{n=-\infty}^{\infty} f_n e^{inx}, \qquad \frac{d^p f}{dx^p} = \sum_{n=-\infty}^{\infty} f_n (in)^p e^{inx}.$$
Then, using orthogonality and $|(in)^p|^2 = n^{2p}$,
$$\Big\|\frac{d^p f}{dx^p}\Big\|^2 = \int_0^{2\pi} \Big|\sum_{n=-\infty}^{\infty} f_n (in)^p e^{inx}\Big|^2 dx = 2\pi \sum_{n=-\infty}^{\infty} |f_n|^2\, n^{2p}.$$
b) We want
$$\Big\|\frac{d^p f}{dx^p}\Big\|^2 \le \varepsilon\,\Big\|\frac{d^q f}{dx^q}\Big\|^2 + K\|f\|^2, \qquad\text{i.e.}\qquad 2\pi\sum_n |f_n|^2 n^{2p} \le \varepsilon\, 2\pi\sum_n |f_n|^2 n^{2q} + K\, 2\pi\sum_n |f_n|^2.$$
It suffices that $n^{2p} - \varepsilon n^{2q} \le K$ for every integer $n$. Since $q > p$,
$$n^{2p} - \varepsilon n^{2q} = n^{2p}\big(1 - \varepsilon n^{2(q-p)}\big) < 0 \quad\text{for } |n| > \varepsilon^{-1/(2(q-p))},$$
so only finitely many $n$ make the left-hand side positive, and we may take $K = \max_{|n| \le \varepsilon^{-1/(2(q-p))}} \big(n^{2p} - \varepsilon n^{2q}\big)$. The statement follows.
c) Maximizing $n^{2p} - \varepsilon n^{2q}$ over real $n$ shows that $K$ may be taken of order $\varepsilon^{-p/(q-p)}$; in particular, $K \to \infty$ as $\varepsilon \to 0$.

³¹ Note: $\int_0^{2\pi} e^{inx}\,\overline{e^{imx}}\,dx = 0$ if $n \ne m$, and $= 2\pi$ if $n = m$.
  • 133. Partial Differential Equations Igor Yanovsky, 2005 133 Problem (S’90, #5). 32 Consider the flame front equation ut + uux + uxx + uxxxx = 0 with 2π-periodic initial data u(x, 0) = f(x), f(x) = f(x + 2π) ∈ C∞ . a) Determine the solution, if f(x) ≡ f0 = const. b) Assume that f(x) = 1 + εg(x), 0 < ε 1, |g|∞ = 1, g(x) = g(x + 2π). Linearize the equation. Is the Cauchy problem well-posed for the linearized equation, i.e., do its solutions v satisfy an estimate ||v(·, t)|| ≤ Keα(t−t0) ||v(·, t0)||? c) Determine the best possible constants K, α. Proof. a) The solution to ut + uux + uxx + uxxxx = 0, u(x, 0) = f0 = const, is u(x, t) = f0 = const. b) We consider the special case of nearly constant initial data u(x, 0) = 1 + εu1(x, 0). Then we can approximate the nonlinear equation by a linear equation. Assuming u(x, t) = 1 + εu1(x, t), remain valid with u1 = O(1), from , we find that εu1t + (1 + εu1)εu1x + εu1xx + εu1xxxx = 0. Dividing by ε gives u1t + u1x + εu1u1x + u1xx + u1xxxx = 0. For small ε, we have u1t + u1x + u1xx + u1xxxx = 0. Multiply this equation by u1 and integrate u1u1t + u1u1x + u1u1xx + u1u1xxxx = 0, d dt u2 1 2 + u2 1 2 x + u1u1xx + u1u1xxxx = 0, 1 2 d dt 2π 0 u2 1 dx + u2 1 2 2π 0 =0 + 2π 0 u1u1xx dx + 2π 0 u1u1xxxx dx = 0, 1 2 d dt ||u1||2 2 + u1u1x 2π 0 =0 − 2π 0 u2 1x dx + u1u1xxx 2π 0 =0 − u1xu1xx 2π 0 =0 + 2π 0 u2 1xx dx = 0, 1 2 d dt ||u1||2 2 = 2π 0 u2 1x dx − 2π 0 u2 1xx dx. 32 S’90 #5, #6, #7 all have similar formulations.
  • 134. Partial Differential Equations Igor Yanovsky, 2005 134 Since u1 is 2π-periodic, let u1 = ∞ n=−∞ an(t)einx . Then, u1x = i ∞ n=−∞ nan(t)einx ⇒ u2 1x = − ∞ n=−∞ nan(t)einx 2 , u1xx = − ∞ n=−∞ n2 an(t)einx ⇒ u2 1xx = ∞ n=−∞ n2 an(t)einx 2 . Thus, 1 2 d dt ||u1||2 2 = 2π 0 u2 1x dx − 2π 0 u2 1xx dx = − 2π 0 nan(t)einx 2 dx − 2π 0 n2 an(t)einx 2 dx = −2π n2 an(t)2 − 2π n4 an(t)2 = −2π an(t)2 (n2 + n4 ) ≤ 0. ⇒ ||u1(·, t)||2 ≤ ||u1(·, 0)||2, where K = 1, α = 0. Problem (W’03, #4). Consider the PDE ut = ux + u4 for t > 0 u = u0 for t = 0 for 0 < x < 2π. Define the set A = {u = u(x) : ˆu(k) = 0 if k < 0}, in which {ˆu(k, t)}∞ −∞ is the Fourier series of u in x on [0, 2π]. a) If u0 ∈ A, show that u(t) ∈ A. b) Find differential equations for ˆu(0, t), ˆu(1, t), and ˆu(2, t). Proof. a) Solving ut = ux + u4 u(x, 0) = u0(x) by the method of characteristics, we get u(x, t) = u0(x + t) (1 − 3t(u0(x + t))3) 1 3 . Since u0 ∈ A, u0k = 0 if k < 0. Thus, u0(x) = ∞ k=0 u0k eikx 2 . Since uk = 1 2π 2π 0 u(x, t) e−ikx 2 dx,
we have
$$u(x, t) = \sum_{k=0}^{\infty} u_k\, e^{ikx},$$
that is, $u(t) \in A$.
  • 136. Partial Differential Equations Igor Yanovsky, 2005 136 15.4 Energy Estimates Problem (S’90, #6). Let U(x, t) ∈ C∞ be 2π-periodic in x. Consider the linear equation ut + Uux + uxx + uxxxx = 0, u(x, 0) = f(x), f(x) = f(x + 2π) ∈ C∞ . a) Derive an energy estimate for u. b) Prove that one can estimate all derivatives ||∂p u/∂xp ||. c) Indicate how to prove existence of solutions. 33 Proof. a) Multiply the equation by u and integrate uut + Uuux + uuxx + uuxxxx = 0, 1 2 d dt (u2 ) + 1 2 U(u2 )x + uuxx + uuxxxx = 0, 1 2 d dt 2π 0 u2 dx + 1 2 2π 0 U(u2 )x dx + 2π 0 uuxx dx + 2π 0 uuxxxx dx = 0, 1 2 d dt ||u||2 + 1 2 Uu2 2π 0 =0 − 1 2 2π 0 Uxu2 dx + uux 2π 0 − 2π 0 u2 x dx +uuxxx 2π 0 − uxuxx 2π 0 + 2π 0 u2 xx dx = 0, 1 2 d dt ||u||2 − 1 2 2π 0 Uxu2 dx − 2π 0 u2 x dx + 2π 0 u2 xx dx = 0, 1 2 d dt ||u||2 = 1 2 2π 0 Uxu2 dx + 2π 0 u2 x dx − 2π 0 u2 xx dx ≤ (from S’90, #5) ≤ ≤ 1 2 2π 0 Uxu2 dx ≤ 1 2 max x Ux 2π 0 u2 dx. ⇒ d dt ||u||2 ≤ max x Ux||u||2 , ||u(x, t)||2 ≤ ||u(x, 0)||2 e(maxx Ux)t . This can also been done using Fourier Transform. See ChiuYen’s solutions where the above method and the Fourier Transform methods are used. 33 S’90 #5, #6, #7 all have similar formulations.
  • 137. Partial Differential Equations Igor Yanovsky, 2005 137 Problem (S’90, #7). 34 Consider the nonlinear equation ut + uux + uxx + uxxxx = 0, u(x, 0) = f(x), f(x) = f(x + 2π) ∈ C∞ . a) Derive an energy estimate for u. b) Show that there is an interval 0 ≤ t ≤ T, T depending on f, such that also ||∂u(·, t)/∂x|| can be bounded. Proof. a) Multiply the above equation by u and integrate uut + u2 ux + uuxx + uuxxxx = 0, 1 2 d dt (u2 ) + 1 3 (u3 )x + uuxx + uuxxxx = 0, 1 2 d dt 2π 0 u2 dx + 1 3 2π 0 (u3 )x dx + 2π 0 uuxx dx + 2π 0 uuxxxx dx = 0, 1 2 d dt ||u||2 + 1 3 u3 2π 0 =0 − 2π 0 u2 x dx + 2π 0 u2 xx dx = 0, 1 2 d dt ||u||2 = 2π 0 u2 x dx − 2π 0 u2 xx dx ≤ 0, (from S’90, #5) ⇒ ||u(·, t)|| ≤ ||u(·, 0)||. b) In order to find a bound for ||ux(·, t)||, differentiate with respect to x: utx + (uux)x + uxxx + uxxxxx = 0, Multiply the above equation by ux and integrate: uxutx + ux(uux)x + uxuxxx + uxuxxxxx = 0, 1 2 d dt 2π 0 (ux)2 dx + 2π 0 ux(uux)x dx + 2π 0 uxuxxx dx + 2π 0 uxuxxxxx dx = 0. We evaluate one of the integrals in the above expression using the periodicity: 2π 0 ux(uux)x dx = − 2π 0 uxxuux = 2π 0 ux(u2 x + uuxx) = 2π 0 u3 x + 2π 0 uuxuxx, ⇒ 2π 0 uxxuux = − 1 2 2π 0 u3 x, ⇒ 2π 0 ux(uux)x = 1 2 2π 0 u3 x. We have 1 2 d dt ||ux||2 + 2π 0 u3 x dx + 2π 0 uxuxxx dx + 2π 0 uxuxxxxx dx = 0. 34 S’90 #5, #6, #7 all have similar formulations.
  • 138. Partial Differential Equations Igor Yanovsky, 2005 138 Let w = ux, then 1 2 d dt ||w||2 = − 2π 0 w3 dx − 2π 0 wwxx dx − 2π 0 wwxxxx dx = − 2π 0 w3 dx + 2π 0 w2 x dx − 2π 0 w2 xx dx ≤ − 2π 0 w3 dx, ⇒ d dt ||ux||2 = − 2π 0 u3 x dx.
  • 139. Partial Differential Equations Igor Yanovsky, 2005 139 16 Problems: Wave Equation 16.1 The Initial Value Problem Example (McOwen 3.1 #1). Solve the initial value problem: ⎧ ⎪⎨ ⎪⎩ utt − c2uxx = 0, u(x, 0) = x3 g(x) , ut(x, 0) = sinx h(x) . Proof. D’Alembert’s formula gives the solution: u(x, t) = 1 2 (g(x + ct) + g(x − ct)) + 1 2c x+ct x−ct h(ξ) dξ = 1 2 (x + ct)3 + 1 2 (x − ct)3 + 1 2c x+ct x−ct sin ξ dξ = x3 + 2xc2 t2 − 1 2c cos(x + ct) + 1 2c cos(x − ct) = = x3 + 2xc2 t2 + 1 c sin x sinct. Problem (S’99, #6). Solve the Cauchy problem utt = a2uxx + cos x, u(x, 0) = sin x, ut(x, 0) = 1 + x. (16.1) Proof. We have a nonhomogeneous PDE with nonhomogeneous initial conditions: ⎧ ⎪⎪⎪⎨ ⎪⎪⎪⎩ utt − c2uxx = cos x f(x,t) , u(x, 0) = sin x g(x) , ut(x, 0) = 1 + x h(x) . The solution is given by d’Alembert’s formula and Duhamel’s principle.35 uA (x, t) = 1 2 (g(x + ct) + g(x − ct)) + 1 2c x+ct x−ct h(ξ) dξ = 1 2 (sin(x + ct) + sin(x − ct)) + 1 2c x+ct x−ct (1 + ξ) dξ = sinx cos ct + 1 2c ξ + ξ2 2 ξ=x+ct ξ=x−ct = sinx cos ct + xt + t. uD (x, t) = 1 2c t 0 x+c(t−s) x−c(t−s) f(ξ, s) dξ ds = 1 2c t 0 x+c(t−s) x−c(t−s) cos ξ dξ ds = 1 2c t 0 sin[x + c(t − s)] − sin[x − c(t − s)] ds = 1 c2 (cos x − cos x cos ct). u(x, t) = uA (x, t) + uD (x, t) = sinx cos ct + xt + t + 1 c2 (cos x − cos x cos ct). 35 Note the relationship: x ↔ ξ, t ↔ s.
We can check that the solution satisfies equation (16.1). We can also check that $u^A$, $u^D$ satisfy
$$u^A_{tt} - c^2 u^A_{xx} = 0, \qquad u^A(x, 0) = \sin x, \qquad u^A_t(x, 0) = 1 + x;$$
$$u^D_{tt} - c^2 u^D_{xx} = \cos x, \qquad u^D(x, 0) = 0, \qquad u^D_t(x, 0) = 0.$$
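The sketch below carries out the check on the combined answer $u = u^A + u^D$ symbolically (illustrative only, assuming SymPy; written with the wave speed called $c$, as in the proof above):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)

u = (sp.sin(x)*sp.cos(c*t) + x*t + t
     + (sp.cos(x) - sp.cos(x)*sp.cos(c*t)) / c**2)

print(sp.simplify(sp.diff(u, t, 2) - c**2*sp.diff(u, x, 2) - sp.cos(x)))  # -> 0
print(sp.simplify(u.subs(t, 0)))                                          # -> sin(x)
print(sp.simplify(sp.diff(u, t).subs(t, 0)))                              # -> x + 1
```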
  • 141. Partial Differential Equations Igor Yanovsky, 2005 141 16.2 Initial/Boundary Value Problem Problem 1. Consider the initial/boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − c2uxx = 0 0 < x < L, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) 0 < x < L u(0, t) = 0, u(L, t) = 0 t ≥ 0. (16.2) Proof. Find u(x, t) in the form u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nπx L + bn(t) sin nπx L . • Functions an(t) and bn(t) are determined by the boundary conditions: 0 = u(0, t) = a0(t) 2 + ∞ n=1 an(t) ⇒ an(t) = 0. Thus, u(x, t) = ∞ n=1 bn(t) sin nπx L . (16.3) • If we substitute (16.3) into the equation utt − c2uxx = 0, we get ∞ n=1 bn(t) sin nπx L + c2 ∞ n=1 nπ L 2 bn(t) sin nπx L = 0, or bn(t) + nπc L 2 bn(t) = 0, whose general solution is bn(t) = cn sin nπct L + dn cos nπct L . (16.4) Also, bn(t) = cn(nπc L ) cos nπct L − dn(nπc L ) sin nπct L . • The constants cn and dn are determined by the initial conditions: g(x) = u(x, 0) = ∞ n=1 bn(0) sin nπx L = ∞ n=1 dn sin nπx L , h(x) = ut(x, 0) = ∞ n=1 bn(0) sin nπx L = ∞ n=1 cn nπc L sin nπx L . By orthogonality, we may multiply by sin(mπx/L) and integrate: L 0 g(x) sin mπx L dx = L 0 ∞ n=1 dn sin nπx L sin mπx L dx = dm L 2 , L 0 h(x) sin mπx L dx = L 0 ∞ n=1 cn nπc L sin nπx L sin mπx L dx = cm mπc L L 2 . Thus, dn = 2 L L 0 g(x) sin nπx L dx, cn = 2 nπc L 0 h(x) sin nπx L dx. (16.5) The formulas (16.3), (16.4), and (16.5) define the solution.
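Formulas (16.3)–(16.5) translate directly into a truncated-series evaluator: compute $c_n$ and $d_n$ by quadrature and sum finitely many modes. A minimal numerical sketch (the data $g$, $h$ and the truncation level $N$ below are arbitrary illustrative choices; assumes NumPy/SciPy):

```python
import numpy as np
from scipy.integrate import quad

L, c, N = 1.0, 2.0, 40                 # domain length, wave speed, number of modes
g = lambda x: x*(L - x)                # sample initial displacement with g(0) = g(L) = 0
h = lambda x: 0.0*x                    # sample initial velocity

def coefficients(n):
    dn = (2.0/L) * quad(lambda x: g(x)*np.sin(n*np.pi*x/L), 0, L)[0]
    cn = (2.0/(n*np.pi*c)) * quad(lambda x: h(x)*np.sin(n*np.pi*x/L), 0, L)[0]
    return cn, dn

def u(x, t):
    s = 0.0
    for n in range(1, N + 1):
        cn, dn = coefficients(n)
        bn = cn*np.sin(n*np.pi*c*t/L) + dn*np.cos(n*np.pi*c*t/L)   # formula (16.4)
        s += bn*np.sin(n*np.pi*x/L)                                # formula (16.3)
    return s

print(u(0.3, 0.0), g(0.3))   # the truncated series at t = 0 is close to g(0.3)
```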
  • 142. Partial Differential Equations Igor Yanovsky, 2005 142 Example (McOwen 3.1 #2). Consider the initial/boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − uxx = 0 0 < x < π, t > 0 u(x, 0) = 1, ut(x, 0) = 0 0 < x < π u(0, t) = 0, u(π, t) = 0 t ≥ 0. (16.6) Proof. Find u(x, t) in the form u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nx + bn(t) sinnx. • Functions an(t) and bn(t) are determined by the boundary conditions: 0 = u(0, t) = a0(t) 2 + ∞ n=1 an(t) ⇒ an(t) = 0. Thus, u(x, t) = ∞ n=1 bn(t) sinnx. (16.7) • If we substitute this into utt − uxx = 0, we get ∞ n=1 bn(t) sinnx + ∞ n=1 bn(t)n2 sinnx = 0, or bn(t) + n2 bn(t) = 0, whose general solution is bn(t) = cn sinnt + dn cos nt. (16.8) Also, bn(t) = ncn cos nt − ndn sinnt. • The constants cn and dn are determined by the initial conditions: 1 = u(x, 0) = ∞ n=1 bn(0) sinnx = ∞ n=1 dn sin nx, 0 = ut(x, 0) = ∞ n=1 bn(0) sinnx = ∞ n=1 ncn sinnx. By orthogonality, we may multiply both equations by sinmx and integrate: π 0 sin mx dx = dm π 2 , π 0 0 dx = ncn π 2 . Thus, dn = 2 nπ (1 − cos nπ) = 4 nπ , n odd, 0, n even, and cn = 0. (16.9) Using this in (16.8) and (16.7), we get bn(t) = 4 nπ cos nt, n odd, 0, n even,
$$u(x, t) = \frac{4}{\pi}\sum_{n=0}^{\infty} \frac{\cos(2n+1)t\,\sin(2n+1)x}{2n+1}.$$
  • 144. Partial Differential Equations Igor Yanovsky, 2005 144 We can sum the series in regions bouded by characteristics. We have u(x, t) = 4 π ∞ n=0 cos(2n + 1)t sin(2n + 1)x (2n + 1) , or u(x, t) = 2 π ∞ n=0 sin[(2n + 1)(x + t)] (2n + 1) + 2 π ∞ n=0 sin[(2n + 1)(x − t)] (2n + 1) . (16.10) The initial condition may be written as 1 = u(x, 0) = 4 π ∞ n=0 sin(2n + 1)x (2n + 1) for 0 < x < π. (16.11) We can use (16.11) to sum the series in (16.10). In R1, u(x, t) = 1 2 + 1 2 = 1. Since sin[(2n + 1)(x − t)] = − sin[(2n + 1)(−(x − t))], and 0 < −(x − t) < π in R2, in R2, u(x, t) = 1 2 − 1 2 = 0. Since sin[(2n + 1)(x + t)] = sin[(2n + 1)(x + t − 2π)] = − sin[(2n + 1)(2π − (x + t))], and 0 < 2π − (x + t) < π in R3, in R3, u(x, t) = − 1 2 + 1 2 = 0. Since 0 < −(x − t) < π and 0 < 2π − (x + t) < π in R4, in R4, u(x, t) = − 1 2 − 1 2 = −1.
  • 145. Partial Differential Equations Igor Yanovsky, 2005 145 Problem 2. Consider the initial/boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − c2uxx = 0 0 < x < L, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) 0 < x < L ux(0, t) = 0, ux(L, t) = 0 t ≥ 0. (16.12) Proof. Find u(x, t) in the form u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nπx L + bn(t) sin nπx L . • Functions an(t) and bn(t) are determined by the boundary conditions: ux(x, t) = ∞ n=1 −an(t) nπ L sin nπx L + bn(t) nπ L cos nπx L , 0 = ux(0, t) = ∞ n=1 bn(t) nπ L ⇒ bn(t) = 0. Thus, u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nπx L . (16.13) • If we substitute (16.13) into the equation utt − c2 uxx = 0, we get a0(t) 2 + ∞ n=1 an(t) cos nπx L + c2 ∞ n=1 an(t) nπ L 2 cos nπx L = 0, a0(t) = 0 and an(t) + nπc L 2 an(t) = 0, whose general solutions are a0(t) = c0t + d0 and an(t) = cn sin nπct L + dn cos nπct L . (16.14) Also, a0(t) = c0 and an(t) = cn(nπc L ) cos nπct L − dn(nπc L ) sin nπct L . • The constants cn and dn are determined by the initial conditions: g(x) = u(x, 0) = a0(0) 2 + ∞ n=1 an(0) cos nπx L = d0 2 + ∞ n=1 dn cos nπx L , h(x) = ut(x, 0) = a0(0) 2 + ∞ n=1 an(0) cos nπx L = c0 2 + ∞ n=1 cn nπc L cos nπx L . By orthogonality, we may multiply both equations by cos(mπx/L), including m = 0, and integrate: L 0 g(x) dx = d0 L 2 , L 0 g(x) cos mπx L dx = dm L 2 , L 0 h(x) dx = c0 L 2 , L 0 h(x) cos mπx L dx = cm mπc L L 2 . Thus, dn = 2 L L 0 g(x) cos nπx L dx, cn = 2 nπc L 0 h(x) cos nπx L dx, c0 = 2 L L 0 h(x) dx. (16.15) The formulas (16.13), (16.14), and (16.15) define the solution.
  • 146. Partial Differential Equations Igor Yanovsky, 2005 146 Example (McOwen 3.1 #3). Consider the initial/boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − uxx = 0 0 < x < π, t > 0 u(x, 0) = x, ut(x, 0) = 0 0 < x < π ux(0, t) = 0, ux(π, t) = 0 t ≥ 0. (16.16) Proof. Find u(x, t) in the form u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nx + bn(t) sinnx. • Functions an(t) and bn(t) are determined by the boundary conditions: ux(x, t) = ∞ n=1 −an(t)n sinnx + bn(t)n cos nx, 0 = ux(0, t) = ∞ n=1 bn(t)n ⇒ bn(t) = 0. Thus, u(x, t) = a0(t) 2 + ∞ n=1 an(t) cos nx. (16.17) • If we substitute (16.17) into the equation utt − uxx = 0, we get a0(t) 2 + ∞ n=1 an(t) cos nx + ∞ n=1 an(t)n2 cos nx = 0, a0(t) = 0 and an(t) + n2 an(t) = 0, whose general solutions are a0(t) = c0t + d0 and an(t) = cn sinnt + dn cos nt. (16.18) Also, a0(t) = c0 and an(t) = cnn cos nt − dnn sinnt. • The constants cn and dn are determined by the initial conditions: x = u(x, 0) = a0(0) 2 + ∞ n=1 an(0) cos nx = d0 2 + ∞ n=1 dn cos nx, 0 = ut(x, 0) = a0(0) 2 + ∞ n=1 an(0) cos nx = c0 2 + ∞ n=1 cnn cos nx. By orthogonality, we may multiply both equations by cos mx, including m = 0, and integrate: π 0 x dx = d0 π 2 , π 0 x cos mx dx = dm π 2 , π 0 0 dx = c0 π 2 , π 0 0 cos mx dx = cmm π 2 . Thus, d0 = π, dn = 2 πn2 (cos nπ − 1), cn = 0. (16.19) Using this in (16.18) and (16.17), we get a0(t) = d0 = π, an(t) = 2 πn2 (cos nπ − 1) cosnt,
$$u(x, t) = \frac{\pi}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty} \frac{(\cos n\pi - 1)\cos nt\,\cos nx}{n^2}.$$
  • 148. Partial Differential Equations Igor Yanovsky, 2005 148 We can sum the series in regions bouded by characteristics. We have u(x, t) = π 2 + 2 π ∞ n=1 (cos nπ − 1) cosnt cos nx n2 , or u(x, t) = π 2 + 1 π ∞ n=1 (cos nπ − 1) cos[n(x − t)] n2 + 1 π ∞ n=1 (cos nπ − 1) cos[n(x + t)] n2 . (16.20) The initial condition may be written as u(x, 0) = x = π 2 + 2 π ∞ n=1 (cos nπ − 1) cosnx n2 for 0 < x < π, which implies x 2 − π 4 = 1 π ∞ n=1 (cos nπ − 1) cosnx n2 for 0 < x < π, (16.21) We can use (16.21) to sum the series in (16.20). In R1, u(x, t) = π 2 + x − t 2 − π 4 + x + t 2 − π 4 = x. Since cos[n(x − t)] = cos[n(−(x − t))], and 0 < −(x − t) < π in R2, in R2, u(x, t) = π 2 + −(x − t) 2 − π 4 + x + t 2 − π 4 = t. Since cos[n(x+t)] = cos[n(x+t−2π)] = cos[n(2π−(x+t))], and 0 < 2π−(x+t) < π in R3, in R3, u(x, t) = π 2 + x − t 2 − π 4 + 2π − (x + t) 2 − π 4 = π − t. Since 0 < −(x − t) < π and 0 < 2π − (x + t) < π in R4 in R4, u(x, t) = π 2 + −(x − t) 2 − π 4 + 2π − (x + t) 2 − π 4 = π − x.
  • 149. Partial Differential Equations Igor Yanovsky, 2005 149 Example (McOwen 3.1 #4). Consider the initial boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − c2uxx = 0 for x > 0, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0 u(0, t) = 0 for t ≥ 0, (16.22) where g(0) = 0 = h(0). If we extend g and h as odd functions on −∞ < x < ∞, show that d’Alembert’s formula gives the solution. Proof. Extend g and h as odd functions on −∞ < x < ∞: ˜g(x) = g(x), x ≥ 0 −g(−x), x < 0 ˜h(x) = h(x), x ≥ 0 −h(−x), x < 0. Then, we need to solve ˜utt − c2˜uxx = 0 for − ∞ < x < ∞, t > 0 ˜u(x, 0) = ˜g(x), ˜ut(x, 0) = ˜h(x) for − ∞ < x < ∞. (16.23) To show that d’Alembert’s formula gives the solution to (16.23), we need to show that the solution given by d’Alembert’s formula satisfies the boundary condition ˜u(0, t) = 0. ˜u(x, t) = 1 2 (˜g(x + ct) + ˜g(x − ct)) + 1 2c x+ct x−ct ˜h(ξ) dξ, ˜u(0, t) = 1 2 (˜g(ct) + ˜g(−ct)) + 1 2c ct −ct ˜h(ξ) dξ = 1 2 (˜g(ct) − ˜g(ct)) + 1 2c (H(ct) − H(−ct)) = 0 + 1 2c (H(ct) − H(ct)) = 0, where we used H(x) = x 0 ˜h(ξ) dξ; and since ˜h is odd, then H is even. Example (McOwen 3.1 #5). Find in closed form (similar to d’Alembet’s formula) the solution u(x, t) of ⎧ ⎪⎨ ⎪⎩ utt − c2 uxx = 0 for x, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0 u(0, t) = α(t) for t ≥ 0, (16.24) where g, h, α ∈ C2 satisfy α(0) = g(0), α (0) = h(0), and α (0) = c2g (0). Verify that u ∈ C2 , even on the characteristic x = ct. Proof. As in (McOwen 3.1 #4), we can extend g and h to be odd functions. We want to transform the problem to have zero boundary conditions. Consider the function: U(x, t) = u(x, t) − α(t). (16.25)
  • 150. Partial Differential Equations Igor Yanovsky, 2005 150 Then (16.24) transforms to: ⎧ ⎪⎪⎪⎪⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎪⎪⎪⎪⎩ Utt − c2Uxx = −α (t) fU (x,t) U(x, 0) = g(x) − α(0) gU (x) , Ut(x, 0) = h(x) − α (0) hU (x) U(0, t) = 0 αu(t) . We use d’Alembert’s formula and Duhamel’s principle on U. After getting U, we can get u from u(x, t) = U(x, t) + α(t).
  • 151. Partial Differential Equations Igor Yanovsky, 2005 151 Example (Zachmanoglou, Chapter 8, Example 7.2). Find the solution of ⎧ ⎪⎨ ⎪⎩ utt − c2uxx = 0 for x > 0, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0 ux(0, t) = 0 for t > 0. (16.26) Proof. Extend g and h as even functions on −∞ < x < ∞: ˜g(x) = g(x), x ≥ 0 g(−x), x < 0 ˜h(x) = h(x), x ≥ 0 h(−x), x < 0. Then, we need to solve ˜utt − c2˜uxx = 0 for − ∞ < x < ∞, t > 0 ˜u(x, 0) = ˜g(x), ˜ut(x, 0) = ˜h(x) for − ∞ < x < ∞. (16.27) To show that d’Alembert’s formula gives the solution to (16.27), we need to show that the solution given by d’Alembert’s formula satisfies the boundary condition ˜ux(0, t) = 0. ˜u(x, t) = 1 2 (˜g(x + ct) + ˜g(x − ct)) + 1 2c x+ct x−ct ˜h(ξ) dξ. ˜ux(x, t) = 1 2 (˜g (x + ct) + ˜g (x − ct)) + 1 2c [˜h(x + ct) − ˜h(x − ct)], ˜ux(0, t) = 1 2 (˜g (ct) + ˜g (−ct)) + 1 2c [˜h(ct) − ˜h(−ct)] = 0. Since ˜g is even, then g is odd. Problem (F’89, #3). 36 Let α = c, constant. Find the solution of ⎧ ⎪⎨ ⎪⎩ utt − c2 uxx = 0 for x > 0, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) for x > 0 ut(0, t) = αux(0, t) for t > 0, (16.28) where g, h ∈ C2 for x > 0 and vanish near x = 0. Hint: Use the fact that a general solution of (16.28) can be written as the sum of two traveling wave solutions. Proof. D’Alembert’s formula is derived by plugging in the following into the above equation and initial conditions: u(x, t) = F(x + ct) + G(x − ct). As in (Zachmanoglou 7.2), we can extend g and h to be even functions. 36 Similar to McOwen 3.1 #5. The notation in this problem is changed to be consistent with McOwen.
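The odd- and even-extension arguments (McOwen 3.1 #4 above and the Zachmanoglou example here) can be packaged as a single half-line d'Alembert solver: reflect $g$ and $h$ oddly for a Dirichlet condition $u(0,t)=0$ and evenly for a Neumann condition $u_x(0,t)=0$, then apply the whole-line formula. A sketch (illustrative only; assumes NumPy/SciPy, and the sample data below, chosen to be numerically negligible near $x = 0$, are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def half_line_dalembert(g, h, c, bc='dirichlet'):
    """d'Alembert solution on x > 0 with u(0,t) = 0 ('dirichlet', odd extension)
    or u_x(0,t) = 0 ('neumann', even extension)."""
    sign = -1.0 if bc == 'dirichlet' else 1.0
    g_ext = lambda x: g(x) if x >= 0 else sign*g(-x)
    h_ext = lambda x: h(x) if x >= 0 else sign*h(-x)

    def u(x, t):
        integral = quad(h_ext, x - c*t, x + c*t)[0]
        return 0.5*(g_ext(x + c*t) + g_ext(x - c*t)) + integral/(2.0*c)
    return u

# Sample data, vanishing (numerically) near x = 0:
g = lambda x: np.exp(-20*(x - 2.0)**2)
h = lambda x: 0.0
u = half_line_dalembert(g, h, c=1.0, bc='dirichlet')
print(u(0.0, 3.0))   # = 0: the odd extension enforces the Dirichlet condition u(0,t) = 0
```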
  • 152. Partial Differential Equations Igor Yanovsky, 2005 152 Example (McOwen 3.1 #6). Solve the initial/boundary value problem ⎧ ⎪⎨ ⎪⎩ utt − uxx = 1 for 0 < x < π and t > 0 u(x, 0) = 0, ut(x, 0) = 0 for 0 < x < π u(0, t) = 0, u(π, t) = −π2/2 for t ≥ 0. (16.29) Proof. If we first find a particular solution of the nonhomogeneous equation, this re- duces the problem to a boundary value problem for the homogeneous equation ( as in (McOwen 3.1 #2) and (McOwen 3.1 #3) ). Hint: You should use a particular solution depending on x! ❶ First, find a particular solution. This is similar to the method of separation of variables. Assume up(x, t) = X(x), which gives −X (x) = 1, X (x) = −1. The solution to the above ODE is X(x) = − x2 2 + ax + b. The boundary conditions give up(0, t) = b = 0, up(π, t) = − π2 2 + aπ + b = − π2 2 , ⇒ a = b = 0. Thus, the particular solution is up(x, t) = − x2 2 . This solution satisfies the following: ⎧ ⎪⎨ ⎪⎩ uptt − upxx = 1 up(x, 0) = −x2 2 , upt(x, 0) = 0 up(0, t) = 0, up(π, t) = −π2 2 . ❷ Second, we find a solution to a boundary value problem for the homogeneous equa- tion: ⎧ ⎪⎨ ⎪⎩ utt − uxx = 0 u(x, 0) = x2 2 , ut(x, 0) = 0 u(0, t) = 0, u(π, t) = 0. This is solved by the method of Separation of Variables. See Separation of Variables subsection of “Problems: Separation of Variables: Wave Equation” McOwen 3.1 #2. The only difference there is that u(x, 0) = 1. We would find uh(x, t). Then, u(x, t) = uh(x, t) + up(x, t).
  • 153. Partial Differential Equations Igor Yanovsky, 2005 153 Problem (S’02, #2). a) Given a continuous function f on R which vanishes for |x| > R, solve the initial value problem utt − uxx = f(x) cos t, u(x, 0) = 0, ut(x, 0) = 0, −∞ < x < ∞, 0 ≤ t < ∞ by first finding a particular solution by separation of variables and then adding the appropriate solution of the homogeneous PDE. b) Since the particular solution is not unique, it will not be obvious that the solution to the initial value problem that you have found in part (a) is unique. Prove that it is unique. Proof. a) ❶ First, find a particular solution by separation of variables. Assume up(x, t) = X(x) cost, which gives −X(x) cost − X (x) cost = f(x) cos t, X + X = −f(x). The solution to the above ODE is written as X = Xh +Xp. The homogeneous solution is Xh(x) = a cos x + b sinx. To find a particular solution, note that since f is continuous, ∃G ∈ C2 (R), such that G + G = −f(x). Thus, Xp(x) = G(x). ⇒ X(x) = Xh(x) + Xp(x) = a cos x + b sinx + G(x). up(x, t) = a cos x + b sinx + G(x) cos t. It can be verified that this solution satisfies the following: uptt − upxx = f(x) cost, up(x, 0) = a cos x + b sinx + G(x), upt(x, 0) = 0. ❷ Second, we find a solution of the homogeneous PDE: ⎧ ⎪⎨ ⎪⎩ utt − uxx = 0, u(x, 0) = −a cos x − b sinx − G(x) g(x) , ut(x, 0) = 0 h(x) . The solution is given by d’Alembert’s formula (with c = 1): uh(x, t) = uA (x, t) = 1 2 (g(x + t) + g(x − t)) + 1 2 x+t x−t h(ξ) dξ = 1 2 − a cos(x + t) − b sin(x + t) − G(x + t) + − a cos(x − t) − b sin(x − t) − G(x − t) = − 1 2 a cos(x + t) + b sin(x + t) + G(x + t) − 1 2 a cos(x − t) + b sin(x − t) + G(x − t) .
  • 154. Partial Differential Equations Igor Yanovsky, 2005 154 It can be verified that the solution satisfies the above homogeneous PDE with the boundary conditions. Thus, the complete solution is: u(x, t) = uh(x, t) + up(x, t). Alternatively, we could use Duhamel’s principle to find the solution: 37 u(x, t) = 1 2 t 0 x+(t−s) x−(t−s) f(ξ) cos s dξ ds. However, this is not how it was suggested to do this problem. b) The particular solution is not unique, since any constants a, b give the solution. However, we show that the solution to the initial value problem is unique. Suppose u1 and u2 are two solutions. Then w = u1 − u2 satisfies: wtt − wxx = 0, w(x, 0) = 0, wt(x, 0) = 0. D’Alembert’s formula gives w(x, t) = 1 2 (g(x + t) + g(x − t)) + 1 2 x+t x−t h(ξ) dξ = 0. Thus, the solution to the initial value problem is unique. 37 Note the relationship: x ↔ ξ, t ↔ s.
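A symbolic sanity check of part (a), added here as a sketch: if G satisfies G'' + G = −f, then u_p = (a cos x + b sin x + G(x)) cos t solves u_tt − u_xx = f(x) cos t. The functions f and G are kept abstract; the defining ODE is imposed by substitution.

import sympy as sp

x, t, a, b = sp.symbols('x t a b')
f, G = sp.Function('f'), sp.Function('G')

up = (a*sp.cos(x) + b*sp.sin(x) + G(x))*sp.cos(t)
residual = sp.diff(up, t, 2) - sp.diff(up, x, 2) - f(x)*sp.cos(t)

# impose G'' + G = -f, i.e. replace G'' by -G - f
residual = residual.subs(sp.Derivative(G(x), x, 2), -G(x) - f(x))
print(sp.simplify(residual))   # 0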
  • 155. Partial Differential Equations Igor Yanovsky, 2005 155 16.3 Similarity Solutions Problem (F’98, #7). Look for a similarity solution of the form v(x, t) = tα w(y = x/tβ ) for the differential equation vt = vxx + (v2 )x. (16.30) a) Find the parameters α and β. b) Find a differential equation for w(y) and show that this ODE can be reduced to first order. c) Find a solution for the resulting first order ODE. Proof. We can rewrite (16.30) as vt = vxx + 2vvx. (16.31) We look for a similarity solution of the form v(x, t) = tα w(y), y = x tβ . vt = αtα−1 w + tα w yt = αtα−1 w + tα − βx tβ+1 w = αtα−1 w − tα−1 βyw , vx = tα w yx = tα w t−β = tα−β w , vxx = (tα−β w )x = tα−β w yx = tα−β w t−β = tα−2β w . Plugging in the derivatives we calculated into (16.31), we obtain αtα−1 w − tα−1 βyw = tα−2β w + 2(tα w)(tα−β w ), αw − βyw = t1−2β w + 2tα−β+1 ww . The parameters that would eliminate t from equation above are β = 1 2 , α = − 1 2 . With these parameters, we obtain the differential equation for w(y): − 1 2 w − 1 2 yw = w + 2ww , w + 2ww + 1 2 yw + 1 2 w = 0. We can write the ODE as w + 2ww + 1 2 (yw) = 0. Integrating it with respect to y, we obtain the first order ODE: w + w2 + 1 2 yw = c.
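As a check of this reduction (a sketch, not part of the original solution), one can substitute v = t^(−1/2) w(x/t^(1/2)) into v_t − v_xx − 2vv_x for a concrete test profile w and confirm that t^(3/2)(v_t − v_xx − 2vv_x) reproduces −(w'' + 2ww' + (1/2)(yw)') evaluated at y = x/√t. The Gaussian profile below is an arbitrary choice made only for this check.

import sympy as sp

x, t, y = sp.symbols('x t y', positive=True)

w = sp.exp(-y**2/4)                                          # arbitrary smooth test profile
v = (w/sp.sqrt(t)).subs(y, x/sp.sqrt(t))                     # v = t^(-1/2) w(x/t^(1/2))

pde = sp.diff(v, t) - sp.diff(v, x, 2) - 2*v*sp.diff(v, x)   # v_t - v_xx - 2 v v_x
ode = sp.diff(w, y, 2) + 2*w*sp.diff(w, y) + sp.Rational(1, 2)*sp.diff(y*w, y)

print(sp.simplify(t**sp.Rational(3, 2)*pde + ode.subs(y, x/sp.sqrt(t))))   # 0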
  • 156. Partial Differential Equations Igor Yanovsky, 2005 156 16.4 Traveling Wave Solutions Consider the Korteweg-de Vries (KdV) equation in the form 38 ut + 6uux + uxxx = 0, −∞ < x < ∞, t > 0. (16.32) We look for a traveling wave solution u(x, t) = f(x − ct). (16.33) We get the ODE −cf + 6ff + f = 0. (16.34) We integrate (16.34) to get −cf + 3f2 + f = a, (16.35) where a is a constant. Multiplying this equality by f , we obtain −cff + 3f2 f + f f = af . Integrating again, we get − c 2 f2 + f3 + (f )2 2 = af + b. (16.36) We are looking for solutions f which satisfy f(x), f (x), f (x) → 0 as x → ±∞. (In which case the function u having the form (16.33) is called a solitary wave.) Then (16.35) and (16.36) imply a = b = 0, so that − c 2 f2 + f3 + (f )2 2 = 0, or f = ±f c − 2f. The solution of this ODE is f(x) = c 2 sech2 [ √ c 2 (x − x0)], where x0 is the constant of integration. A solution of this form is called a soliton. 38 Evans, p. 174; Strauss, p. 367.
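A direct symbolic verification (a sketch added here) that this sech² profile indeed solves (16.32):

import sympy as sp

x, t, x0 = sp.symbols('x t x0', real=True)
c = sp.symbols('c', positive=True)

u = c/2*sp.sech(sp.sqrt(c)/2*(x - c*t - x0))**2
kdv = sp.diff(u, t) + 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(kdv.rewrite(sp.exp)))   # 0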
  • 157. Partial Differential Equations Igor Yanovsky, 2005 157 Problem (S’93, #6). The generalized KdV equation is ∂u ∂t = 1 2 (n + 1)(n + 2)un ∂u ∂x − ∂3u ∂x3 , where n is a positive integer. Solitary wave solutions are sought in which u = f(η), where η = x − ct and f, f , f → 0, as |η| → ∞; c, the wave speed, is constant. Show that f 2 = fn+2 + cf2 . Hence show that solitary waves do not exist if n is even. Show also that, when n = 1, all conditions of the problem are satisfied provided c > 0 and u = −c sech2 1 2 √ c(x − ct) . Proof. • We look for a traveling wave solution u(x, t) = f(x − ct). We get the ODE −cf = 1 2 (n + 1)(n + 2)fn f − f , Integrating this equation, we get −cf = 1 2 (n + 2)fn+1 − f + a, (16.37) where a is a constant. Multiplying this equality by f , we obtain −cff = 1 2 (n + 2)fn+1 f − f f + af . Integrating again, we get − cf2 2 = 1 2 fn+2 − (f )2 2 + af + b. (16.38) We are looking for solutions f which satisfy f, f , f → 0 as x → ±∞. Then (16.37) and (16.38) imply a = b = 0, so that − cf2 2 = 1 2 fn+2 − (f )2 2 , (f )2 = fn+2 + cf2 . • We show that solitary waves do not exist if n is even. We have f = ± fn+2 + cf2 = ±|f| fn + c, ∞ −∞ f dη = ± ∞ −∞ |f| fn + c dη, f ∞ −∞ = ± ∞ −∞ |f| fn + c dη, 0 = ± ∞ −∞ |f| fn + c dη.
The integrand |f|√(f^n + c) is nonnegative when n is even (since then f^n ≥ 0), so the integral can vanish only if the integrand vanishes identically. Thus, either
➀ |f| ≡ 0, so that f ≡ 0, or
➁ f^n + c ≡ 0; since f → 0 as x → ±∞, this forces c = 0 and hence f ≡ 0.
In both cases f ≡ 0, so nontrivial solitary waves do not exist if n is even.
  • 159. Partial Differential Equations Igor Yanovsky, 2005 159 • When n = 1, we have (f )2 = f3 + cf2 . (16.39) We show that all conditions of the problem are satisfied provided c > 0, including u = −c sech2 1 2 √ c(x − ct) , or f = −c sech2 η √ c 2 = − c cosh2 [η √ c 2 ] = −c cosh η √ c 2 −2 . We have f = 2c cosh η √ c 2 −3 · sinh η √ c 2 · √ c 2 = c √ c cosh η √ c 2 −3 · sinh η √ c 2 , (f )2 = c3 sinh2 η √ c 2 cosh6 η √ c 2 , f3 = − c3 cosh6 η √ c 2 , cf2 = c3 cosh4 η √ c 2 . Plugging these into (16.39), we obtain: 39 c3 sinh2 η √ c 2 cosh6 η √ c 2 = − c3 cosh6 η √ c 2 + c3 cosh4 η √ c 2 , c3 sinh2 η √ c 2 cosh6 η √ c 2 = −c3 + c3 cosh2 η √ c 2 cosh6 η √ c 2 , c3 sinh2 η √ c 2 cosh6 η √ c 2 = c3 sinh2 η √ c 2 cosh6 η √ c 2 . Also, f, f , f → 0, as |η| → ∞, since f(η) = −c sech2 η √ c 2 = − c cosh2 [η √ c 2 ] = −c 2 e[ η √ c 2 ] + e−[ η √ c 2 ] 2 → 0, as |η| → ∞. Similarly, f , f → 0, as |η| → ∞. 39 cosh2 x − sinh2 x = 1. cosh x = ex + e−x 2 , sinh x = ex − e−x 2
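A symbolic double-check of the n = 1 case (a sketch): the profile f(η) = −c sech²((1/2)√c η) satisfies (f')² = f³ + cf², and the corresponding traveling wave u = f(x − ct) satisfies the generalized KdV equation with n = 1, namely u_t = 3uu_x − u_xxx.

import sympy as sp

x, t, eta = sp.symbols('x t eta', real=True)
c = sp.symbols('c', positive=True)

f = -c*sp.sech(sp.sqrt(c)*eta/2)**2
print(sp.simplify((sp.diff(f, eta)**2 - f**3 - c*f**2).rewrite(sp.exp)))   # 0

u = f.subs(eta, x - c*t)
print(sp.simplify((sp.diff(u, t) - 3*u*sp.diff(u, x) + sp.diff(u, x, 3)).rewrite(sp.exp)))   # 0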
Problem (S'00, #5). Look for a traveling wave solution of the PDE

  u_tt + (u²)_xx = −u_xxxx

of the form u(x, t) = v(x − ct). In particular, you should find an ODE for v. Under the assumption that v goes to a constant as |x| → ∞, describe the form of the solution.

Proof. Since (u²)_x = 2uu_x and (u²)_xx = 2u_x² + 2uu_xx, we have

  u_tt + 2u_x² + 2uu_xx = −u_xxxx.

We look for a traveling wave solution u(x, t) = v(s), s = x − ct. We get the ODE

  c²v'' + 2(v')² + 2vv'' = −v'''',
  c²v'' + 2((v')² + vv'') = −v'''',
  c²v'' + 2(vv')' = −v''''        (exact differentials).

Integrating twice with respect to s:

  c²v' + 2vv' = −v''' + a,
  c²v + v² = −v'' + as + b,   i.e.   v'' + c²v + v² = a(x − ct) + b.

This is the ODE for v. Since v → C = const as |x| → ∞, we also assume v', v'' → 0 as |x| → ∞. Letting |s| → ∞ in the ODE, the left-hand side tends to the constant c²C + C², while the right-hand side as + b is unbounded unless a = 0. Hence a = 0, and the limiting constant satisfies

  C² + c²C − b = 0,   i.e.   C = (−c² ± √(c⁴ + 4b)) / 2.

Thus v solves the autonomous second-order ODE v'' = b − c²v − v², and as |x| → ∞ the profile approaches one of the two constant states above.
Problem (S'95, #2). Consider the KdV-Burgers equation

  u_t + uu_x = ε u_xx + δ u_xxx

in which ε > 0, δ > 0.
a) Find an ODE for traveling wave solutions of the form u(x, t) = ϕ(x − st) with s > 0 and lim_{y→−∞} ϕ(y) = 0, and analyze the stationary points from this ODE.
b) Find the possible (finite) values of ϕ+ = lim_{y→∞} ϕ(y).

Proof. a) We look for a traveling wave solution

  u(x, t) = ϕ(x − st),   y = x − st.

We get the ODE

  −sϕ' + ϕϕ' = εϕ'' + δϕ''',
  −sϕ + (1/2)ϕ² = εϕ' + δϕ'' + a.

Since ϕ → 0 as y → −∞, also ϕ', ϕ'' → 0 as y → −∞; evaluating at y = −∞ gives a = 0. We obtain the ODE

  ϕ'' + (ε/δ)ϕ' + (s/δ)ϕ − (1/(2δ))ϕ² = 0.

In order to find and analyze the stationary points of the ODE above, we write it as a first-order system with φ1 = ϕ, φ2 = ϕ':

  φ1' = ϕ' = φ2,
  φ2' = ϕ'' = −(ε/δ)φ2 − (s/δ)φ1 + (1/(2δ))φ1².

Setting φ1' = φ2' = 0:

  φ2 = 0,   −(s/δ)φ1 + (1/(2δ))φ1² = −(1/δ)φ1(s − (1/2)φ1) = 0.

Stationary points: (0, 0) and (2s, 0), s > 0. Write

  φ1' = φ2 = f(φ1, φ2),
  φ2' = −(ε/δ)φ2 − (s/δ)φ1 + (1/(2δ))φ1² = g(φ1, φ2).

In order to classify a stationary point, we need to find the eigenvalues of the linearized system at that point:

  J(f, g) = [ ∂f/∂φ1  ∂f/∂φ2 ; ∂g/∂φ1  ∂g/∂φ2 ] = [ 0   1 ;  −s/δ + φ1/δ   −ε/δ ].

• For (φ1, φ2) = (0, 0):

  det(J|(0,0) − λI) = det[ −λ   1 ;  −s/δ   −ε/δ − λ ] = λ² + (ε/δ)λ + s/δ = 0,
  λ± = −ε/(2δ) ± √(ε²/(4δ²) − s/δ).

If ε²/(4δ) > s, then λ± ∈ R with λ± < 0, and (0, 0) is a Stable Improper Node.
If ε²/(4δ) < s, then λ± ∈ C with Re(λ±) < 0, and (0, 0) is a Stable Spiral Point.

• For (φ1, φ2) = (2s, 0):

  det(J|(2s,0) − λI) = det[ −λ   1 ;  s/δ   −ε/δ − λ ] = λ² + (ε/δ)λ − s/δ = 0,
  λ± = −ε/(2δ) ± √(ε²/(4δ²) + s/δ),

so λ+ > 0 > λ−, and (2s, 0) is an Unstable Saddle Point.

b) If ϕ(y) tends to a finite limit ϕ+ as y → +∞ (with ϕ', ϕ'' → 0), then the integrated ODE gives −sϕ+ + (1/2)ϕ+² = 0, so ϕ+ ∈ {0, 2s}. Since lim_{y→−∞} ϕ(y) = 0 = lim_{t→∞} ϕ(x − st), for a nontrivial wave we may have

  lim_{y→+∞} ϕ(y) = lim_{t→−∞} ϕ(x − st) = 2s.

That is, a trajectory may start off at the saddle point (2s, 0) and, as t increases, approach the stable point (0, 0). A phase diagram for the case where (0, 0) is a stable spiral point is shown below.
[Figure: phase-plane diagram for the traveling-wave system of the KdV-Burgers equation, with (0, 0) a stable spiral point and (2s, 0) a saddle point.]
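To complement the phase-plane figure, a small numerical check of the classification above (a sketch; the parameter values ε = 0.2, δ = 1, s = 1 are assumed sample values, chosen so that ε²/(4δ) < s):

import numpy as np

eps, delta, s = 0.2, 1.0, 1.0    # sample values with eps**2/(4*delta) < s

def jacobian(phi1):
    return np.array([[0.0, 1.0],
                     [-s/delta + phi1/delta, -eps/delta]])

for point in (0.0, 2*s):
    print(point, np.linalg.eigvals(jacobian(point)))
# at phi1 = 0:   complex pair with negative real part -> stable spiral point
# at phi1 = 2s:  one positive and one negative eigenvalue -> saddle point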
  • 164. Partial Differential Equations Igor Yanovsky, 2005 164 Problem (F’95, #8). Consider the equation ut + f(u)x = uxx where f is smooth and > 0. We seek traveling wave solutions to this equation, i.e., solutions of the form u = φ(x − st), under the boundary conditions u → uL and ux → 0 as x → −∞, u → uR and ux → 0 as x → +∞. Find a necessary and sufficient condition on f, uL, uR and s for such traveling waves to exist; in case this condition holds, write an equation which defines φ implicitly. Proof. We look for traveling wave solutions u(x, t) = φ(x − st), y = x − st. The boundary conditions become φ → uL and φ → 0 as x → −∞, φ → uR and φ → 0 as x → +∞. Since f(φ(x − st))x = f (φ)φ , we get the ODE −sφ + f (φ)φ = φ , −sφ + (f(φ)) = φ , −sφ + f(φ) = φ + a, φ = −sφ + f(φ) + b. We use boundary conditions to determine constant b: At x = −∞, 0 = φ = −suL + f(uL) + b ⇒ b = suL − f(uL) . At x = +∞, 0 = φ = −suR + f(uR) + b ⇒ b = suR − f(uR) . s = f(uL) − f(uR) uL − uR . 40 40 For the solution for the second part of the problem, refer to Chiu-Yen’s solutions.
  • 165. Partial Differential Equations Igor Yanovsky, 2005 165 Problem (S’02, #5; F’90, #2). Fisher’s Equation. Consider ut = u(1 − u) + uxx, −∞ < x < ∞, t > 0. The solutions of physical interest satisfy 0 ≤ u ≤ 1, and lim x→−∞ u(x, t) = 0, lim x→+∞ u(x, t) = 1. One class of solutions is the set of “wavefront” solutions. These have the form u(x, t) = φ(x + ct), c ≥ 0. Determine the ordinary differential equation and boundary conditions which φ must satisfy (to be of physical interest). Carry out a phase plane analysis of this equation, and show that physically interesting wavefront solutions are possible if c ≥ 2, but not if 0 ≤ c < 2. Proof. We look for a traveling wave solution u(x, t) = φ(x + ct), s = x + ct. We get the ODE cφ = φ(1 − φ) + φ , φ − cφ + φ − φ2 = 0, ◦ φ(s) → 0, as s → −∞, ◦ φ(s) → 1, as s → +∞, ◦ 0 ≤ φ ≤ 1. In order to find and analyze the stationary points of an ODE above, we write it as a first-order system. y1 = φ, y2 = φ . y1 = φ = y2, y2 = φ = cφ − φ + φ2 = cy2 − y1 + y2 1. y1 = y2 = 0, y2 = cy2 − y1 + y2 1 = 0; ⇒ y2 = 0, y1(y1 − 1) = 0. Stationary points: (0, 0), (1, 0). y1 = y2 = f(y1, y2), y2 = cy2 − y1 + y2 1 = g(y1, y2). In order to classify a stationary point, need to find eigenvalues of a linearized system at that point. J(f(y1, y2), g(y1, y2)) = ∂f ∂y1 ∂f ∂y2 ∂g ∂y1 ∂g ∂y2 = 0 1 2y1 − 1 c .
  • 166. Partial Differential Equations Igor Yanovsky, 2005 166 • For (y1, y2) = (0, 0) : det(J|(0,0) − λI) = −λ 1 −1 c − λ = λ2 − cλ + 1 = 0. λ± = c ± √ c2 − 4 2 . If c ≥ 2 ⇒ λ± ∈ R, λ± > 0. (0,0) is Unstable Improper (c > 2) / Proper (c = 2) Node. If 0 ≤ c < 2 ⇒ λ± ∈ C, Re(λ±) ≥ 0. (0,0) is Unstable Spiral Node. • For (y1, y2) = (1, 0) : det(J|(1,0) − λI) = −λ 1 1 c − λ = λ2 − cλ − 1 = 0. λ± = c ± √ c2 + 4 2 . If c ≥ 0 ⇒ λ+ > 0, λ− < 0. (1,0) is Unstable Saddle Point. By looking at the phase plot, a particle may start off at an unstable node (0, 0) and as t increases, approach the unstable node (1, 0).
[Figure: phase-plane diagram for the Fisher traveling-wave system, showing a trajectory leaving the unstable node at (0, 0) and approaching the saddle point at (1, 0).]
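Complementing the figure, a small numerical illustration (a sketch; the wave speeds below are sample values) of why c ≥ 2 is required: at (0, 0) the eigenvalues are real and positive for c ≥ 2 but complex for 0 ≤ c < 2, so in the latter case trajectories spiral around (0, 0) and φ would become negative, violating 0 ≤ φ ≤ 1.

import numpy as np

def eigs(c, y1):
    J = np.array([[0.0, 1.0],
                  [2*y1 - 1.0, c]])
    return np.linalg.eigvals(J)

for c in (1.0, 2.0, 3.0):   # sample wave speeds
    print(c, 'at (0,0):', eigs(c, 0.0), '  at (1,0):', eigs(c, 1.0))
# c = 1: complex eigenvalues at (0,0) (spiral) -> not admissible
# c >= 2: real, positive eigenvalues at (0,0); (1,0) is a saddle for every c >= 0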
  • 168. Partial Differential Equations Igor Yanovsky, 2005 168 Problem (F’99, #6). For the system ∂tρ + ∂xu = 0 ∂tu + ∂x(ρu) = ∂2 xu look for traveling wave solutions of the form ρ(x, t) = ρ(y = x−st), u(x, t) = u(y = x − st). In particular a) Find a first order ODE for u. b) Show that this equation has solutions of the form u(y) = u0 + u1 tanh(αy + y0), for some constants u0, u1, α, y0. Proof. a) We rewrite the system: ρt + ux = 0 ut + ρxu + ρux = uxx We look for traveling wave solutions ρ(x, t) = ρ(x − st), u(x, t) = u(x − st), y = x − st. We get the system of ODEs −sρ + u = 0, −su + ρ u + ρu = u . The first ODE gives ρ = 1 s u , ρ = 1 s u + a, where a is a constant, and integration was done with respect to y. The second ODE gives −su + 1 s u u + 1 s u + a u = u , −su + 2 s uu + au = u . Integrating, we get −su + 1 s u2 + au = u + b. u = 1 s u2 + (a − s)u − b. b) Note that the ODE above may be written in the following form: u + Au2 + Bu = C, which is a nonlinear first order equation.
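For part (b), a symbolic check (a sketch) that the first-order ODE u' = u²/s + (a − s)u − b does admit a tanh profile. Matching coefficients suggests the choices u0 = s(s − a)/2, u1² = u0² + sb, α = −u1/s; these formulas are obtained here by coefficient matching and are not stated in the original text. Writing T = tanh(αy + y0), so that u = u0 + u1 T and u' = u1 α (1 − T²), the ODE becomes a polynomial identity in T:

import sympy as sp

s, a, b, T = sp.symbols('s a b T', positive=True)

u0 = s*(s - a)/2
u1 = sp.sqrt(u0**2 + s*b)
alpha = -u1/s

u = u0 + u1*T                    # u = u0 + u1*tanh(alpha*y + y0)
du = u1*alpha*(1 - T**2)         # derivative of u0 + u1*tanh(alpha*y + y0) with respect to y

print(sp.simplify(du - (u**2/s + (a - s)*u - b)))   # 0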
  • 169. Partial Differential Equations Igor Yanovsky, 2005 169 Problem (S’01, #7). Consider the following system of PDEs: ft + fx = g2 − f2 gt − gx = f2 − g a) Find a system of ODEs that describes traveling wave solutions of the PDE system; i.e. for solutions of the form f(x, t) = f(x − st) and g(x, t) = g(x − st). b) Analyze the stationary points and draw the phase plane for this ODE system in the standing wave case s = 0. Proof. a) We look for traveling wave solutions f(x, t) = f(x − st), g(x, t) = g(x − st). We get the system of ODEs −sf + f = g2 − f2 , −sg − g = f2 − g. Thus, f = g2 − f2 1 − s , g = f2 − g −1 − s . b) If s = 0, the system becomes f = g2 − f2 , g = g − f2. Relabel the variables f → y1, g → y2. y1 = y2 2 − y2 1 = 0, y2 = y2 − y2 1 = 0. Stationary points: (0, 0), (−1, 1), (1, 1). y1 = y2 2 − y2 1 = φ(y1, y2), y2 = y2 − y2 1 = ψ(y1, y2). In order to classify a stationary point, need to find eigenvalues of a linearized system at that point. J(φ(y1, y2), ψ(y1, y2)) = ∂φ ∂y1 ∂φ ∂y2 ∂ψ ∂y1 ∂ψ ∂y2 = −2y1 2y2 −2y1 1 . • For (y1, y2) = (0, 0) : det(J|(0,0) − λI) = −λ 0 0 1 − λ = −λ(1 − λ) = 0. λ1 = 0, λ2 = 1; eigenvectors: v1 = 1 0 , v2 = 0 1 . (0,0) is Unstable Node.
  • 170. Partial Differential Equations Igor Yanovsky, 2005 170 • For (y1, y2) = (−1, 1) : det(J|(−1,1) − λI) = 2 − λ 2 2 1 − λ = λ2 − 3λ − 2 = 0. λ± = 3 2 ± √ 17 2 . λ− < 0, λ+ > 0. (-1,1) is Unstable Saddle Point. • For (y1, y2) = (1, 1) : det(J|(1,1) − λI) = −2 − λ 2 −2 1 − λ = λ2 + λ + 2 = 0. λ± = − 1 2 ± i √ 7 2 . Re(λ±) < 0. (1,1) is Stable Spiral Point.
  • 171. Partial Differential Equations Igor Yanovsky, 2005 171 16.5 Dispersion Problem (S’97, #8). Consider the following equation ut = (f(ux))x − αuxxxx, f(v) = v2 − v, (16.40) with constant α. a) Linearize this equation around u = 0 and find the principal mode solution of the form eωt+ikx . For which values of α are there unstable modes, i.e., modes with ω = 0 for real k? For these values, find the maximally unstable mode, i.e., the value of k with the largest positive value of ω. b) Consider the steady solution of the (fully nonlinear) problem. Show that the resulting equation can be written as a second order autonomous ODE for v = ux and draw the corresponding phase plane. Proof. a) We have ut = (f(ux))x − αuxxxx, ut = (u2 x − ux)x − αuxxxx, ut = 2uxuxx − uxx − αuxxxx. However, we need to linearize (16.40) around u = 0. To do this, we need to linearize f. f(u) = f(0) + uf (0) + u2 2 f (0) + · · · = 0 + u(0 − 1) + · · · = −u + · · · . Thus, we have ut = −uxx − αuxxxx. Consider u(x, t) = eωt+ikx. ωeωt+ikx = (k2 − αk4 )eωt+ikx , ω = k2 − αk4 . To find unstable nodes, we set ω = 0, to get α = 1 k2 . • To find the maximally unstable mode, i.e., the value of k with the largest positive value of ω, consider ω(k) = k2 − αk4 , ω (k) = 2k − 4αk3 . To find the extremas of ω, we set ω = 0. Thus,the extremas are at k1 = 0, k2,3 = ± 1 2α . To find if the extremas are maximums or minimums, we set ω = 0: ω (k) = 2 − 12αk2 = 0, ω (0) = 2 > 0 ⇒ k = 0 is the minimum. ω ± 1 2α = −4 < 0 ⇒ k = ± 1 2α is the maximum unstable mode. ω ± 1 2α = 1 4α is the largest positive value of ω.
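A short symbolic confirmation of part (a) (a sketch): the linearized equation u_t = −u_xx − αu_xxxx turns the mode e^{ωt + ikx} into ω = k² − αk⁴, whose nonzero critical points are k = ±1/√(2α) with ω = 1/(4α).

import sympy as sp

t, x = sp.symbols('t x')
k = sp.symbols('k', real=True)
alpha, omega = sp.symbols('alpha omega', positive=True)

u = sp.exp(omega*t + sp.I*k*x)
print(sp.simplify((sp.diff(u, t) + sp.diff(u, x, 2) + alpha*sp.diff(u, x, 4))/u))
# omega - k**2 + alpha*k**4, i.e. omega = k**2 - alpha*k**4

w = k**2 - alpha*k**4
crit = sp.solve(sp.diff(w, k), k)
print(crit, [sp.simplify(w.subs(k, kc)) for kc in crit])
# critical points 0 and +-1/sqrt(2*alpha); the nonzero ones give omega = 1/(4*alpha)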
  • 172. Partial Differential Equations Igor Yanovsky, 2005 172 b) Integrating , we get u2 x − ux − αuxxx = 0. Let v = ux. Then, v2 − v − αvxx = 0, or v = v2 − v α . In order to find and analyze the stationary points of an ODE above, we write it as a first-order system. y1 = v, y2 = v . y1 = v = y2, y2 = v = v2 − v α = y2 1 − y1 α . y1 = y2 = 0, y2 = y2 1 −y1 α = 0; ⇒ y2 = 0, y1(y1 − 1) = 0. Stationary points: (0, 0), (1, 0). y1 = y2 = f(y1, y2), y2 = y2 1 − y1 α = g(y1, y2). In order to classify a stationary point, need to find eigenvalues of a linearized system at that point. J(f(y1, y2), g(y1, y2)) = ∂f ∂y1 ∂f ∂y2 ∂g ∂y1 ∂g ∂y2 = 0 1 2y1−1 α 0 . • For (y1, y2) = (0, 0), λ± = ± −1 α . If α < 0, λ± ∈ R, λ+ > 0, λ− < 0. ⇒ (0,0) is Unstable Saddle Point. If α > 0, λ± = ±i 1 α ∈ C, Re(λ±) = 0. ⇒ (0,0) is Spiral Point. • For (y1, y2) = (1, 0), λ± = ± 1 α . If α < 0, λ± = ±i −1 α ∈ C, Re(λ±) = 0. ⇒ (1,0) is Spiral Point. If α > 0, λ± ∈ R, λ+ > 0, λ− < 0. ⇒ (1,0) is Unstable Saddle Point.
[Figure: phase-plane diagrams for the system y1' = y2, y2' = (y1² − y1)/α, in the two cases α > 0 and α < 0.]
  • 174. Partial Differential Equations Igor Yanovsky, 2005 174 16.6 Energy Methods Problem (S’98, #9; S’96, #5). Consider the following initial-boundary value problem for the multi-dimensional wave equation: utt = u in Ω × (0, ∞), u(x, 0) = f(x), ∂u ∂t (x, 0) = g(x) for x ∈ Ω, ∂u ∂n + a(x) ∂u ∂t = 0 on ∂Ω. Here, Ω is a bounded domain in Rn and a(x) ≥ 0. Define the Energy integral for this problem and use it in order to prove the uniqueness of the classical solution of the prob- lem. Proof. d ˜E dt = 0 = Ω (utt − u)ut dx = Ω uttut dx − ∂Ω ∂u ∂n ut ds + Ω ∇u · ∇ut dx = Ω 1 2 ∂ ∂t (u2 t ) dx + Ω 1 2 ∂ ∂t |∇u|2 dx + ∂Ω a(x)u2 t ds. Thus, − ∂Ω a(x)u2 t dx ≤0 = 1 2 ∂ ∂t Ω u2 t + |∇u|2 dx. Let Energy integral be E(t) = 1 2 Ω u2 t + |∇u|2 dx. In order to prove that the given E(t) ≤ 0 from scratch, take its derivative with respect to t: dE dt (t) = Ω ututt + ∇u · ∇ut dx = Ω ututt dx + ∂Ω ut ∂u ∂n ds − Ω ut u dx = Ω ut(utt − u) dx =0 − ∂Ω a(x)u2 t dx ≤ 0. Thus, E(t) ≤ E(0). To prove the uniqueness of the classical solution, suppose u1 and u2 are two solutions of the initial boundary value problem. Let w = u1 − u2. Then, w satisfies wtt = w in Ω × (0, ∞), w(x, 0) = 0, wt(x, 0) = 0 for x ∈ Ω, ∂w ∂n + a(x) ∂w ∂t = 0 on ∂Ω. We have Ew(0) = 1 2 Ω (wt(x, 0)2 + |∇w(x, 0)|2 ) dx = 0.
  • 175. Partial Differential Equations Igor Yanovsky, 2005 175 Ew(t) ≤ Ew(0) = 0 ⇒ Ew(t) = 0. Thus, wt = 0, wxi = 0 ⇒ w(x, t) = const = 0. Hence, u1 = u2. Problem (S’94, #7). Consider the wave equation 1 c2(x) utt = u x ∈ Ω ∂u ∂t − α(x) ∂u ∂n = 0 on ∂Ω × R. Assume that α(x) is of one sign for all x (i.e. α always positive or α always negative). For the energy E(t) = 1 2 Ω 1 c2(x) u2 t + |∇u|2 dx, show that the sign of dE dt is determined by the sign of α. Proof. We have dE dt (t) = Ω 1 c2(x) ututt + ∇u · ∇ut dx = Ω 1 c2(x) ututt dx + ∂Ω ut ∂u ∂n ds − Ω ut u dx = Ω ut 1 c2(x) utt − u dx =0 + ∂Ω 1 α(x) u2 t dx = ∂Ω 1 α(x) u2 t dx = > 0, if α(x) > 0, ∀x ∈ Ω, < 0, if α(x) < 0, ∀x ∈ Ω.
  • 176. Partial Differential Equations Igor Yanovsky, 2005 176 Problem (F’92, #2). Let Ω ∈ Rn . Let u(x, t) be a smooth solution of the following initial boundary value problem: utt − u + u3 = 0 for (x, t) ∈ Ω × [0, T] u(x, t) = 0 for (x, t) ∈ ∂Ω × [0, T]. a) Derive an energy equality for u. (Hint: Multiply by ut and integrate over Ω × [0, T].) b) Show that if u|t=0 = ut|t=0 = 0 for x ∈ Ω, then u ≡ 0. Proof. a) Multiply by ut and integrate: 0 = Ω (utt − u + u3 )ut dx = Ω uttut dx − ∂Ω ∂u ∂n ut ds =0 + Ω ∇u · ∇ut dx + Ω u3 ut dx = Ω 1 2 ∂ ∂t (u2 t ) dx + Ω 1 2 ∂ ∂t |∇u|2 dx + Ω 1 4 ∂ ∂t (u4 ) dx = 1 2 d dt Ω u2 t + |∇u|2 + 1 2 u4 dx. Thus, the Energy integral is E(t) = Ω u2 t + |∇u|2 + 1 2 u4 dx = const = E(0). b) Since u(x, 0) = 0, ut(x, 0) = 0, we have E(0) = Ω ut(x, 0)2 + |∇u(x, 0)|2 + 1 2 u(x, 0)4 dx = 0. Since E(t) = E(0) = 0, we have E(t) = Ω ut(x, t)2 + |∇u(x, t)|2 + 1 2 u(x, t)4 dx = 0. Thus, u ≡ 0.
  • 177. Partial Differential Equations Igor Yanovsky, 2005 177 Problem (F’04, #3). Consider a damped wave equation utt − u + a(x)ut = 0, (x, t) ∈ R3 × R, u|t=0 = u0, ut|t=0 = u1. Here the damping coefficient a ∈ C∞ 0 (R3) is a non-negative function and u0, u1 ∈ C∞ 0 (R3 ). Show that the energy of the solution u(x, t) at time t, E(t) = 1 2 R3 |∇xu|2 + |ut|2 dx is a decreasing function of t ≥ 0. Proof. Take the derivative of E(t) with respect to t. Note that the boundary integral is 0 by Huygen’s principle. dE dt (t) = R3 ∇u · ∇ut + ututt dx = ∂R3 ut ∂u ∂n ds =0 − R3 ut u dx + R3 ututt dx = R3 ut(− u + utt) dx = R3 ut(−a(x)ut) dx = R3 −a(x)u2 t dx ≤ 0. Thus, dE dt ≤ 0 ⇒ E(t) ≤ E(0), i.e. E(t) is a decreasing function of t.
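A small finite-difference illustration of this energy decay (a numerical sketch in one space dimension rather than R³, with an assumed Gaussian damping profile and assumed compactly supported initial data; the discrete energy decreases up to discretization error):

import numpy as np

L, N, T = 20.0, 400, 8.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.4*dx                              # CFL-stable time step

a  = 2.0*np.exp(-x**2)                   # assumed damping coefficient a(x) >= 0
u0 = np.exp(-4*x**2)                     # assumed initial data, with u_t(x,0) = 0
u_old, u = u0.copy(), u0.copy()

def lap(v):
    return (np.roll(v, 1) - 2*v + np.roll(v, -1))/dx**2

energies = []
for n in range(int(T/dt)):
    ut = (u - u_old)/dt
    ux = (np.roll(u, -1) - u)/dx
    energies.append(0.5*np.sum(ut**2 + ux**2)*dx)     # discrete E(t)
    u_new = 2*u - u_old + dt**2*(lap(u) - a*ut)       # u_tt = u_xx - a(x) u_t
    u_old, u = u, u_new

print(energies[0], energies[len(energies)//2], energies[-1])   # decreasing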
  • 178. Partial Differential Equations Igor Yanovsky, 2005 178 Problem (W’03, #8). a) Consider the damped wave equation for high-speed waves (0 < << 1) in a bounded region D 2 utt + ut = u with the boundary condition u(x, t) = 0 on ∂D. Show that the energy functional E(t) = D 2 u2 t + |∇u|2 dx is nonincreasing on solutions of the boundary value problem. b) Consider the solution to the boundary value problem in part (a) with initial data u (x, 0) = 0, ut(x, 0) = −αf(x), where f does not depend on and α < 1. Use part (a) to show that D |∇u (x, t)|2 dx → 0 uniformly on 0 ≤ t ≤ T for any T as → 0. c) Show that the result in part (b) does not hold for α = 1. To do this consider the case where f is an eigenfunction of the Laplacian, i.e. f + λf = 0 in D and f = 0 on ∂D, and solve for u explicitly. Proof. a) dE dt = D 2 2 ututt dx + D 2∇u · ∇ut dx = D 2 2 ututt dx + ∂D 2 ∂u ∂n ut ds =0, (u=0 on ∂D) − D 2 uut dx = 2 D ( 2 utt − u)ut dx = = −2 D |ut|2 dx ≤ 0. Thus, E(t) ≤ E(0), i.e. E(t) is nonincreasing. b) From (a), we know dE dt ≤ 0. We also have E (0) = D 2 (ut(x, 0))2 + |∇u (x, 0)|2 dx = D 2 ( −α f(x))2 + 0 dx = D 2(1−α) f(x)2 dx → 0 as → 0. Since E (0) ≥ E (t) = D 2(ut)2 + |∇u |2 dx, then E (t) → 0 as → 0. Thus, D |∇u |2 dx → 0 as → 0. c) If α = 1, E (0) = D 2(1−α) f(x)2 dx = D f(x)2 dx. Since f is independent of , E (0) does not approach 0 as → 0. We can not conclude that D |∇u (x, t)|2 dx → 0.
  • 179. Partial Differential Equations Igor Yanovsky, 2005 179 Problem (F’98, #6). Let f solve the nonlinear wave equation ftt − fxx = −f(1 + f2 )−1 for x ∈ [0, 1], with f(x = 0, t) = f(x = 1, t) = 0 and with smooth initial data f(x, t) = f0(x). a) Find an energy integral E(t) which is constant in time. b) Show that |f(x, t)| < c for all x and t, in which c is a constant. Hint: Note that f 1 + f2 = 1 2 d df log(1 + f2 ). Proof. a) Since f(0, t) = f(1, t) = 0, ∀t, we have ft(0, t) = ft(1, t) = 0. Let dE dt = 0 = 1 0 ftt − fxx + f(1 + f2 )−1 ft dx = 1 0 fttft dx − 1 0 fxxft dx + 1 0 fft 1 + f2 dx = 1 0 fttft dx − [fx ft =0 ]1 0 + 1 0 fxftx dx + 1 0 fft 1 + f2 dx = 1 0 1 2 ∂ ∂t (f2 t ) dx + 1 0 1 2 ∂ ∂t (f2 x) dx + 1 0 1 2 ∂ ∂t (ln(1 + f2 )) dx = 1 2 d dt 1 0 f2 t + f2 x + ln(1 + f2 ) dx. Thus, E(t) = 1 2 1 0 f2 t + f2 x + ln(1 + f2 ) dx. b) We want to show that f is bounded. For smooth f(x, 0) = f0(x), we have E(0) = 1 2 1 0 ft(x, 0)2 + fx(x, 0)2 + ln(1 + f(x, 0)2 ) dx < ∞. Since E(t) is constant in time, E(t) = E(0) < ∞. Thus, 1 2 1 0 ln(1 + f2 ) dx ≤ 1 2 1 0 f2 t + f2 x + ln(1 + f2 ) dx = E(t) < ∞. Hence, f is bounded.
  • 180. Partial Differential Equations Igor Yanovsky, 2005 180 Problem (F’97, #1). Consider initial-boundary value problem utt + a2 (x, t)ut − u(x, t) = 0 x ∈ Ω ⊂ Rn , 0 < t < +∞ u(x) = 0 x ∈ ∂Ω u(x, 0) = f(x), ut(x, 0) = g(x) x ∈ Ω. Prove that L2-norm of the solution is bounded in t on (0, +∞). Here Ω is a bounded domain, and a(x, t), f(x), g(x) are smooth functions. Proof. Multiply the equation by ut and integrate over Ω: ututt + a2 u2 t − ut u = 0, Ω ututt dx + Ω a2 u2 t dx − Ω ut u dx = 0, 1 2 d dt Ω u2 t dx + Ω a2 u2 t dx − ∂Ω ut ∂u ∂n ds =0, (u=0, x∈∂Ω) + Ω ∇u · ∇ut dx = 0, 1 2 d dt Ω u2 t dx + Ω a2 u2 t dx + 1 2 d dt Ω |∇u|2 dx = 0, 1 2 d dt Ω u2 t + |∇u|2 dx = − Ω a2 u2 t dx ≤ 0. Let Energy integral be E(t) = Ω u2 t + |∇u|2 dx. We have dE dt ≤ 0, i.e. E(t) ≤ E(0). E(t) ≤ E(0) = Ω ut(x, 0)2 + |∇u(x, 0)|2 dx = Ω g(x)2 + |∇f(x)|2 dx < ∞, since f and g are smooth functions. Thus, E(t) = Ω u2 t + |∇u|2 dx < ∞, Ω |∇u|2 dx < ∞, Ω u2 dx ≤ C Ω |∇u|2 dx < ∞, by Poincare inequality. Thus, ||u||2 is bounded ∀t.
  • 181. Partial Differential Equations Igor Yanovsky, 2005 181 Problem (S’98, #4). a) Let u(x, y, z, t), −∞ < x, y, z < ∞ be a solution of the equation ⎧ ⎪⎨ ⎪⎩ utt + ut = uxx + uyy + uzz u(x, y, z, 0) = f(x, y, z), ut(x, y, z, 0) = g(x, y, z). (16.41) Here f, g are smooth functions which vanish if x2 + y2 + z2 is large enough. Prove that it is the unique solution for t ≥ 0. b) Suppose we want to solve the same equation (16.41) in the region z ≥ 0, −∞ < x, y < ∞, with the additional conditions u(x, y, 0, t) = f(x, y, t) uz(x, y, 0, t) = g(x, y, t) with the same f, g as before in (16.41). What goes wrong? Proof. a) Suppose u1 and u2 are two solutions. Let w = u1 − u2. Then, ⎧ ⎪⎨ ⎪⎩ wtt + wt = w, w(x, y, z, 0) = 0, wt(x, y, z, 0) = 0. Multiply the equation by wt and integrate: wtwtt + w2 t = wt w, R3 wtwtt dx + R3 w2 t dx = R3 wt w dx, 1 2 d dt R3 w2 t dx + R3 w2 t dx = ∂R3 wt ∂w ∂n dx =0 − R3 ∇w · ∇wt dx, 1 2 d dt R3 w2 t dx + R3 w2 t dx = − 1 2 d dt R3 |∇w|2 dx, d dt R3 w2 t + |∇w|2 dx E(t) = −2 R3 w2 t dx ≤ 0, dE dt ≤ 0, E(t) ≤ E(0) = R3 wt(x, 0)2 + |∇w(x, 0)|2 dx = 0, ⇒ E(t) = R3 w2 t + |∇w|2 dx = 0. Thus, wt = 0, ∇w = 0, and w = constant. Since w(x, y, z, 0) = 0, we have w ≡ 0. b)
  • 182. Partial Differential Equations Igor Yanovsky, 2005 182 Problem (F’94, #8). The one-dimensional, isothermal fluid equations with viscosity and capillarity in Lagrangian variables are vt − ux = 0 ut + p(v)x = εuxx − δvxxx in which v(= 1/ρ) is specific volume, u is velocity, and p(v) is pressure. The coefficients ε and δ are non-negative. Find an energy integral which is non-increasing (as t increases) if ε > 0 and con- served if ε = 0. Hint: if δ = 0, E = u2/2 − P(v) dx where P (v) = p(v). Proof. Multiply the second equation by u and integrate over R. We use ux = vt. Note that the boundary integrals are 0 due to finite speed of propagation. uut + up(v)x = εuuxx − δuvxxx, R uut dx + R up(v)x dx = ε R uuxx dx − δ R uvxxx dx, 1 2 R ∂ ∂t (u2 ) dx + ∂R up(v) ds =0 + R uxp(v) dx = ε ∂R uux dx =0 −ε R u2 x dx − δ ∂R uvxx dx =0 +δ R uxvxx dx, 1 2 R ∂ ∂t (u2 ) dx + R vtp(v) dx = −ε R u2 x dx + δ R vtvxx dx, 1 2 R ∂ ∂t (u2 ) dx + R ∂ ∂t P(v) dx = −ε R u2 x dx + δ ∂R vtvx dx =0 −δ R vxtvx dx, 1 2 R ∂ ∂t (u2 ) dx + R ∂ ∂t P(v) dx + δ 2 R ∂ ∂t (v2 x) dx = −ε R u2 x dx, d dt R u2 2 + P(v) + δ 2 v2 x dx = −ε R u2 x dx ≤ 0. E(t) = R u2 2 + P(v) + δ 2 v2 x dx is nonincreasing if ε > 0, and conserved if ε = 0.
  • 183. Partial Differential Equations Igor Yanovsky, 2005 183 Problem (S’99, #5). Consider the equation utt = ∂ ∂x σ(ux) (16.42) with σ(z) a smooth function. This is to be solved for t > 0, 0 ≤ x ≤ 1, with periodic boundary conditions and initial data u(x, 0) = u0(x) and ut(x, 0) = v0(x). a) Multiply (16.42) by ut and get an expression of the form d dt 1 0 F(ut, ux) = 0 that is satisfied for an appropriate function F(y, z) with y = ut, z = ux, where u is any smooth, periodic in space solution of (16.42). b) Under what conditions on σ(z) is this function, F(y, z), convex in its variables? c) What `a priori inequality is satisfied for smooth solutions when F is convex? d) Discuss the special case σ(z) = a2z3/3, with a > 0 and constant. Proof. a) Multiply by ut and integrate: ututt = utσ(ux)x, 1 0 ututt dx = 1 0 utσ(ux)x dx, d dt 1 0 u2 t 2 dx = utσ(ux) 1 0 =0, (2π-periodic) − 1 0 utxσ(ux) dx = Let Q (z) = σ(z), then d dt Q(ux) = σ(ux)uxt. Thus, = − 1 0 utxσ(ux) dx = − d dt 1 0 Q(ux) dx. d dt 1 0 u2 t 2 + Q(ux) dx = 0. b) We have F(ut, ux) = u2 t 2 + Q(ux). 41 For F to be convex, the Hessian matrix of partial derivatives must be positive definite. 41 A function f is convex on a convex set S if it satisfies f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) for all 0 ≤ α ≤ 1 and for all x, y ∈ S. If a one-dimensional function f has two continuous derivatives, then f is convex if and only if f (x) ≥ 0. In the multi-dimensional case the Hessian matrix of second derivatives must be positive semi-definite, that is, at every point x ∈ S yT ∇2 f(x) y ≥ 0, for all y. The Hessian matrix is the matrix with entries [∇2 f(x)]ij ≡ ∂2 f(x) ∂xi∂xj . For functions with continuous second derivatives, it will always be symmetric matrix: fxixj = fxj xi .
  • 184. Partial Differential Equations Igor Yanovsky, 2005 184 The Hessian matrix is ∇2 F(ut, ux) = Futut Futux Fuxut Fuxux = 1 0 0 σ (ux) . yT ∇2 F(x) y = y1 y2 1 0 0 σ (ux) y1 y2 = y2 1 + σ (ux)y2 2 ≥ need 0. Thus, for a Hessian matrix to be positive definite, need σ (ux) ≥ 0, so that the above inequality holds for all y. c) We have d dt 1 0 F(ut, ux) dx = 0, 1 0 F(ut, ux) dx = const, 1 0 F(ut, ux) dx = 1 0 F(ut(x, 0), ux(x, 0)) dx, 1 0 u2 t 2 + Q(ux) dx = 1 0 v2 0 2 + Q(u0x) dx. d) If σ(z) = a2 z3 /3, we have F(ut, ux) = u2 t 2 + Q(ux) = u2 t 2 + a2 u4 x 12 , d dt 1 0 u2 t 2 + a2 u4 x 12 dx = 0, 1 0 u2 t 2 + a2 u4 x 12 dx = const, 1 0 u2 t 2 + a2 u4 x 12 dx = 1 0 v0 2 2 + a2 u0 4 x 12 dx.
  • 185. Partial Differential Equations Igor Yanovsky, 2005 185 Problem (S’96, #8). 42 Let u(x, t) be the solution of the Korteweg-de Vries equation ut + uux = uxxx, 0 ≤ x ≤ 2π, with 2π-periodic boundary conditions and prescribed initial data u(x, t = 0) = f(x). a) Prove that the energy integral I1(u) = 2π 0 u2 (x, t) dx is independent of the time t. b) Prove that the second “energy integral”, I2(u) = 2π 0 1 2 u2 x(x, t) + 1 6 u3 (x, t) dx is also independent of the time t. c) Assume the initial data are such that I1(f) + I2(f) < ∞. Use (a) + (b) to prove that the maximum norm of the solution, |u|∞ = supx |u(x, t)|, is bounded in time. Hint: Use the following inequalities (here, |u|p is the Lp -norm of u(x, t) at fixed time t): • |u|2 ∞ ≤ π 6 (|u|2 2 + |ux|2 2) (one of Sobolev’s inequalities); • |u|3 3 ≤ |u|2 2 |u|∞ (straightforward). Proof. a) Multiply the equation by u and integrate. Note that all boundary terms are 0 due to 2π-periodicity. uut + u2 ux = uuxxx, 2π 0 uut dx + 2π 0 u2 ux dx = 2π 0 uuxxx dx, 1 2 d dt 2π 0 u2 dx + 1 3 2π 0 (u3 )x dx = uuxx 2π 0 − 2π 0 uxuxx dx, 1 2 d dt 2π 0 u2 dx + 1 3 u3 2π 0 = − 1 2 2π 0 (u2 x)x dx, 1 2 d dt 2π 0 u2 dx = − 1 2 u2 x 2π 0 = 0. I1(u) = 2π 0 u2 dx = C. Thus, I1(u) = 2π 0 u2 (x, t) dx is independent of the time t. Alternatively, we may differentiate I1(u): dI1 dt (u) = d dt 2π 0 u2 dx = 2π 0 2uut dx = 2π 0 2u(−uux + uxxx) dx = 2π 0 −2u2 ux dx + 2π 0 2uuxxx dx = 2π 0 − 2 3 (u3 )x dx + 2uuxx 2π 0 − 2π 0 2uxuxx dx = − 2 3 u3 2π 0 − 2π 0 (u2 x)x dx = −u2 x 2π 0 = 0. 42 Also, see S’92, #7.
  • 186. Partial Differential Equations Igor Yanovsky, 2005 186 b) Note that all boundary terms are 0 due to 2π-periodicity. dI2 dt (u) = d dt 2π 0 1 2 u2 x + 1 6 u3 dx = 2π 0 uxuxt + 1 2 u2 ut dx = We differentiate the original equation with respect to x: ut = −uux + uxxx utx = −(uux)x + uxxxx. = 2π 0 ux(−(uux)x + uxxxx) dx + 1 2 2π 0 u2 (−uux + uxxx) dx = 2π 0 −ux(uux)x dx + 2π 0 uxuxxxx dx − 1 2 2π 0 u3 ux dx + 1 2 2π 0 u2 uxxx dx = −uxuux 2π 0 + 2π 0 uxxuux dx + uxuxxx 2π 0 − 2π 0 uxxuxxx dx − 1 2 2π 0 u4 4 x dx + 1 2 u2 uxx 2π 0 − 1 2 2π 0 2uuxuxx dx = 2π 0 uxxuux dx − 2π 0 uxxuxxx dx − 1 2 u4 4 2π 0 − 2π 0 uuxuxx dx = − 2π 0 uxxuxxx dx = −u2 xx 2π 0 + 2π 0 uxxxuxx dx = 2π 0 uxxxuxx dx = 0, since − 2π 0 uxxuxxx dx = + 2π 0 uxxuxxx dx. Thus, I2(u) = 2π 0 1 2 u2 x(x, t) + 1 6 u3 (x, t) dx = C, and I2(u) is independent of the time t. c) From (a) and (b), we have I1(u) = 2π 0 u2 dx = ||u||2 2, I2(u) = 2π 0 1 2 u2 x + 1 6 u3 dx = 1 2 ||ux||2 2 + 1 6 ||u||3 3. Using given inequalities, we have ||u||2 ∞ ≤ π 6 (||u||2 2 + ||ux||2 2) ≤ π 6 I1(u) + 2I2(u) − 1 3 ||u||3 3 ≤ π 6 I1(u) + π 3 I2(u) + π 18 ||u||2 2 ||u||∞ ≤ π 6 I1(u) + π 3 I2(u) + π 18 I1(u)||u||∞ = C + C1||u||∞. ⇒ ||u||2 ∞ ≤ C + C1||u||∞, ⇒ ||u||∞ ≤ C2. Thus, ||u||∞ is bounded in time. Also see Energy Methods problems for higher order equations (3rd and 4th) in the section on Gas Dynamics.
  • 187. Partial Differential Equations Igor Yanovsky, 2005 187 16.7 Wave Equation in 2D and 3D Problem (F’97, #8); (McOwen 3.2 #90). Solve utt = uxx + uyy + uzz with initial conditions u(x, y, z, 0) = x2 + y2 g(x) , ut(x, y, z, 0) = 0 h(x) . Proof. ➀ We may use the Kirchhoff’s formula: u(x, t) = 1 4π ∂ ∂t t |ξ|=1 g(x + ctξ) dSξ + t 4π |ξ|=1 h(x + ctξ) dSξ = 1 4π ∂ ∂t t |ξ|=1 (x1 + ctξ1)2 + (x2 + ctξ2)2 dSξ + 0 = ➁ We may solve the problem by Hadamard’s method of descent, since initial con- ditions are independent of x3. We need to convert surface integrals in R3 to domain integrals in R2. Specifically, we need to express the surface measure on the upper half of the unit sphere S2 + in terms of the two variables ξ1 and ξ2. To do this, consider f(ξ1, ξ2) = 1 − ξ2 1 − ξ2 2 over the unit disk ξ2 1 + ξ2 2 < 1. dSξ = 1 + (fξ1 )2 + (fξ2 )2 dξ1dξ2 = dξ1dξ2 1 − ξ2 1 − ξ2 2 .
  • 188. Partial Differential Equations Igor Yanovsky, 2005 188 u(x1, x2, t) = 1 4π ∂ ∂t 2t ξ2 1+ξ2 2<1 g(x1 + ctξ1, x2 + ctξ2) dξ1dξ2 1 − ξ2 1 − ξ2 2 + t 4π 2 ξ2 1+ξ2 2 <1 h(x1 + ctξ1, x2 + ctξ2) dξ1dξ2 1 − ξ2 1 − ξ2 2 = 1 4π ∂ ∂t 2t ξ2 1+ξ2 2<1 (x1 + tξ1)2 + (x2 + tξ2)2 1 − ξ2 1 − ξ2 2 dξ1dξ2 + 0, = 1 2π ∂ ∂t t ξ2 1+ξ2 2 <1 x2 1 + 2x1tξ1 + t2ξ2 1 + x2 2 + 2x2tξ2 + t2ξ2 2 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 1 2π ∂ ∂t ξ2 1+ξ2 2<1 tx2 1 + 2x1t2ξ1 + t3ξ2 1 + tx2 2 + 2x2t2ξ2 + t3ξ2 2 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 1 2π ξ2 1+ξ2 2<1 x2 1 + 4x1tξ1 + 3t2ξ2 1 + x2 2 + 4x2tξ2 + 3t2ξ2 2 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 1 2π ξ2 1+ξ2 2<1 (x2 1 + x2 2) + 4t(x1ξ1 + x2ξ2) + 3t2(ξ2 1 + ξ2 2) 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 1 2π (x2 1 + x2 2) ξ2 1+ξ2 2 <1 dξ1dξ2 1 − ξ2 1 − ξ2 2 ❶ + 4t 2π ξ2 1+ξ2 2 <1 x1ξ1 + x2ξ2 1 − ξ2 1 − ξ2 2 dξ1dξ2 ❷ + 3t2 2π ξ2 1+ξ2 2<1 ξ2 1 + ξ2 2 1 − ξ2 1 − ξ2 2 dξ1dξ2 ❸ = ❶ = 1 2π (x2 1 + x2 2) ξ2 1+ξ2 2 <1 dξ1dξ2 1 − ξ2 1 − ξ2 2 = 1 2π (x2 1 + x2 2) 2π 0 1 0 r dr dθ √ 1 − r2 = 1 2π (x2 1 + x2 2) 2π 0 −2 1 0 −1 2 du dθ u 1 2 u = 1 − r2 , du = −2r dr = 1 2π (x2 1 + x2 2) 2π 0 1 dθ = x2 1 + x2 2. ❷ = 4t 2π ξ2 1+ξ2 2<1 x1ξ1 + x2ξ2 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 4t 2π 1 −1 √ 1−ξ2 2 − √ 1−ξ2 2 x1ξ1 + x2ξ2 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 0. ❸ = 3t2 2π ξ2 1+ξ2 2 <1 ξ2 1 + ξ2 2 1 − ξ2 1 − ξ2 2 dξ1dξ2 = 3t2 2π 2π 0 1 0 (r cos θ)2 + (r sinθ)2 √ 1 − r2 r drdθ = 3t2 2π 2π 0 1 0 r3 √ 1 − r2 drdθ u = 1 − r2 , du = −2r dr = 3t2 2π 2π 0 2 3 dθ = t2 π 2π 0 dθ = 2t2 . ⇒ u(x1, x2, t) = ❶ + ❷ + ❸ = x2 1 + x2 2 + 2t2 . ➂ We may guess what the solution is: u(x, y, z, t) = 1 2 (x + t)2 + (y + t)2 + (x − t)2 + (y − t)2 = x2 + y2 + 2t2 .
  • 189. Partial Differential Equations Igor Yanovsky, 2005 189 Check: u(x, y, z, 0) = x2 + y2 . ut(x, y, z, t) = (x + t) + (y + t) − (x − t) − (y − t), ut(x, y, z, 0) = 0. utt(x, y, z, t) = 4, ux(x, y, z, t) = (x + t) + (x − t), uxx(x, y, z, t) = 2, uy(x, y, z, t) = (y + t) + (y − t), uyy(x, y, z, t) = 2, uzz(x, y, z, t) = 0, utt = uxx + uyy + uzz.
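The check above can also be carried out symbolically (a brief sketch):

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
u = x**2 + y**2 + 2*t**2

print(sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.diff(u, y, 2) - sp.diff(u, z, 2)))  # 0
print(u.subs(t, 0), sp.diff(u, t).subs(t, 0))   # x**2 + y**2 and 0, matching the initial data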
  • 190. Partial Differential Equations Igor Yanovsky, 2005 190 Problem (S’98, #6). Consider the two-dimensional wave equation wtt = a2 w, with initial data which van- ish for x2 +y2 large enough. Prove that w(x, y, t) satisfies the decay |w(x, y, t)| ≤ C·t−1 . (Note that the estimate is not uniform with respect to x, y since C may depend on x, y). Proof. Suppose we have the following problem with initial data: utt = a2 u x ∈ R2 , t > 0, u(x, 0) = g(x), ut(x, 0) = h(x) x ∈ R2 . The result is the consequence of the Huygens’ principle and may be proved by Hadamard’s method of descent: 43 u(x, t) = 1 4π ∂ ∂t 2t ξ2 1+ξ2 2<1 g(x1 + ctξ1, x2 + ctξ2) dξ1dξ2 1 − ξ2 1 − ξ2 2 + t 4π 2 ξ2 1+ξ2 2 <1 h(x1 + ctξ1, x2 + ctξ2) dξ1dξ2 1 − ξ2 1 − ξ2 2 = 1 2π |ξ|2<c2t2 th(x + ξ) + g(x + ξ) 1 − |ξ|2 c2t2 dξ1dξ2 c2t2 + t 2π |ξ|2<c2t2 ∇g(x + ξ) · (ct, ct) 1 − |ξ|2 c2t2 dξ1dξ2 c2t2 . For a given x, let T(x) be so large that T > 1 and supp(h + g) ⊂ BT (x). Then for t > 2T we have: |u(x, t)| = 1 2π |ξ|2<c2T 2 tM + M + 2Mct 1 − c2T 2 c2T 24 dξ1dξ2 c2t2 = πc2 T2 2π M 3/4 1 c2t + M 3/4 1 c2Tt + 2Mc c2t . ⇒ u(x, t) ≤ C1/t for t > 2T. For t ≤ 2T: |u(x, t)| = 1 2π |ξ|2<c2t2 2TM + M + 4McT 1 − |ξ|2 c2t2 dξ1dξ2 c2t2 = 1 2π (2TM + M + 4Mct)2π ct 0 r dr/c2 t2 1 − r2 c2t2 = M(2T + 1 + 4cT) 2 1 0 −du u1/2 = M(2T + 1 + 4cT) 2 2 ≤ M(2T + 1 + 4cT)2T t . Letting C = max(C1, M(2T + 1 + 4cT)2T), we have |u(x, t)| ≤ C(x)/t. • For n = 3, suppose g, h ∈ C∞ 0 (R3 ). The solution is given by the Kircchoff’s formula. There is a constant C so that u(x, t) ≤ C/t for all x ∈ R3 and t > 0. As McOwen suggensts in Hints for Exercises, to prove the result, we need to estimate the 43 Nick’s solution follows.
  • 191. Partial Differential Equations Igor Yanovsky, 2005 191 area of intersection of the sphere of radius ct with the support of the functions g and h.
  • 192. Partial Differential Equations Igor Yanovsky, 2005 192 Problem (S’95, #6). Spherical waves in 3-d are waves symmetric about the origin; i.e. u = u(r, t) where r is the distance from the origin. The wave equation utt = c2 u then reduces to 1 c2 utt = urr + 2 r ur. (16.43) a) Find the general solutions u(r, t) by solving (16.43). Include both the incoming waves and outgoing waves in your solutions. b) Consider only the outgoing waves and assume the finite out-flux condition 0 < lim r→0 r2 ur < ∞ for all t. The wavefront is defined as r = ct. How is the amplitude of the wavefront decaying in time? Proof. a) We want to reduce (16.43) to the 1D wave equation. Let v = ru. Then vtt = rutt, vr = rur + u, vrr = rurr + 2ur. Thus, (16.43) becomes 1 c2 1 r vtt = 1 r vrr, 1 c2 vtt = vrr, vtt = c2 vrr, which has the solution v(r, t) = f(r + ct) + g(r − ct). Thus, u(r, t) = 1 r v(r, t) = 1 r f(r + ct) incoming, (c>0) + 1 r g(r − ct) outgoing, (c>0) . b) We consider u(r, t) = 1 r g(r − ct): 0 < lim r→0 r2 ur < ∞, 0 < lim r→0 r2 1 r g (r − ct) − 1 r2 g(r − ct) < ∞, 0 < lim r→0 rg (r − ct) − g(r − ct) < ∞, 0 < −g(−ct) < ∞, 0 < −g(−ct) = G(t) < ∞, g(t) = −G t −c .
  • 193. Partial Differential Equations Igor Yanovsky, 2005 193 The wavefront is defined as r = ct. We have u(r, t) = 1 r g(r − ct) = − 1 r G r − ct −c = − 1 ct G(0). |u(r, t)| = 1 t − 1 c G(0) . The amplitude of the wavefront decays like 1 t .
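A symbolic check of the substitution v = ru used in part (a) (a sketch): for arbitrary C² profiles f and g, the function u = [f(r + ct) + g(r − ct)]/r satisfies the spherically symmetric wave equation (16.43) away from r = 0.

import sympy as sp

r, t, c = sp.symbols('r t c', positive=True)
f, g = sp.Function('f'), sp.Function('g')

u = (f(r + c*t) + g(r - c*t))/r
residual = sp.diff(u, t, 2)/c**2 - sp.diff(u, r, 2) - 2*sp.diff(u, r)/r
print(sp.simplify(residual))   # 0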
  • 194. Partial Differential Equations Igor Yanovsky, 2005 194 Problem (S’00, #8). a) Show that for a smooth function F on the line, while u(x, t) = F(ct + |x|)/|x| may look like a solution of the wave equation utt = c2 u in R3 , it actually is not. Do this by showing that for any smooth function φ(x, t) with compact support R3×R u(x, t)(φtt − φ) dxdt = 4π R φ(0, t)F(ct) dt. Note that, setting r = |x|, for any function w which only depends on r one has w = r−2 (r2 wr)r = wrr + 2 r wr. b) If F(0) = F (0) = 0, what is the true solution to utt = u with initial conditions u(x, 0) = F(|x|)/|x| and ut(x, 0) = F (|x|)/|x|? c) (Ralston Hw) Suppose u(x, t) is a solution to the wave equation utt = c2 u in R3 × R with u(x, t) = w(|x|, t) and u(x, 0) = 0. Show that u(x, t) = F(|x| + ct) − F(|x| − ct) |x| for a function F of one variable. Proof. a) We have R3 R u (φtt − φ) dxdt = lim →0 R dt |x|> u (φtt − φ) dx = lim →0 R dt |x|> φ (utt − u) dx + |x|= ∂u ∂n φ − u ∂φ ∂n dS . The final equality is derived by integrating by parts twice in t, and using Green’s theorem: Ω (v u − u v) dx = ∂Ω v ∂u ∂n − u ∂v ∂n ds. Since dS = 2 sinφ dφ dθ and ∂ ∂n = − ∂ ∂r , substituting u(x, t) = F(|x| + ct)/|x| gives: R3 R u (φtt − φ) dxdt = R 4πφF(ct) dt. Thus, u is not a weak solution to the wave equation. b) c) We want to show that v(|x|, t) = |x|w(|x|, t) is a solution to the wave equation in one space dimension and hence must have the from v = F(|x| +ct) +G(|x| −ct). Then we can argue that w will be undefined at x = 0 for some t unless F(ct) + G(−ct) = 0 for all t. We work in spherical coordinates. Note that w and v are independent of φ and θ. We have: vtt(r, t) = c2 w = c2 1 r2 (r2 wr)r = c2 1 r2 (2rwr + r2 wrr), ⇒ rwtt = c2 rwrr + 2wr. Thus we see that vtt = c2 vrr, and we can conclude that v(r, t) = F(r + ct) + G(r − ct) and w(r, t) = F(r + ct) + G(r − ct) r .
  • 195. Partial Differential Equations Igor Yanovsky, 2005 195 limr→0 w(r, t) does not exist unless F(ct) + G(−ct) = 0 for all t. Hence w(r, t) = F(ct + r) + G(ct − r) r , and u(x, t) = F(ct + |x|) + G(ct − |x|) |x| .
  • 196. Partial Differential Equations Igor Yanovsky, 2005 196 17 Problems: Laplace Equation A fundamental solution K(x) for the Laplace operator is a distribution satisfying 44 K(x) = δ(x) The fundamental solution for the Laplace operator is K(x) = 1 2π log |x| if n = 2 1 (2−n)ωn |x|2−n if n ≥ 3. 17.1 Green’s Function and the Poisson Kernel Green’s function is a special fundamental solution satisfying 45 G(x, ξ) = δ(x) for x ∈ Ω G(x, ξ) = 0 for x ∈ ∂Ω, (17.1) To construct the Green’s function, ➀ consider wξ(x) with wξ(x) = 0 in Ω and wξ(x) = −K(x − ξ) on ∂Ω; ➁ consider G(x, ξ) = K(x − ξ) + wξ(x), which is a fundamental solution satisfying (17.1). Problem 1. Given a particular distribution solution to the set of Dirichlet problems uξ(x) = δξ(x) for x ∈ Ω uξ(x) = 0 for x ∈ ∂Ω, how would you use this to solve u = 0 for x ∈ Ω u(x) = g(x) for x ∈ ∂Ω. Proof. uξ(x) = G(x, ξ), a Green’s function. G is a fundamental solution to the Laplace operator, G(x, ξ) = 0, x ∈ ∂Ω. In this problem, it is assumed that G(x, ξ) is known for Ω. Then u(ξ) = Ω G(x, ξ) u dx + ∂Ω u(x) ∂G(x, ξ) ∂nx dSx for every u ∈ C2 (Ω). In particular, if u = 0 in Ω and u = g on ∂Ω, then we obtain the Poisson integral formula u(ξ) = ∂Ω ∂G(x, ξ) ∂nx g(x) dSx, 44 We know that u(x) = Rn K(x−y)f(y)dy is a distribution solution of u = f when f is integrable and has compact support. In particular, we have u(x) = Rn K(x − y) u(y) dy whenever u ∈ C∞ 0 (Rn ). The above result is a consequence of: u(x) = Ω δ(x − y)u(y) dy = ( K) ∗ u = K ∗ ( u) = Ω K(x − y) u(y) dy. 45 Green’s function is useful in satisfying Dirichlet boundary conditions.
  • 197. Partial Differential Equations Igor Yanovsky, 2005 197 where H(x, ξ) = ∂G(x,ξ) ∂nx is the Poisson kernel. Thus if we know that the Dirichlet problem has a solution u ∈ C2 (Ω), then we can calculate u from the Poisson integral formula (provided of course that we can compute G(x, ξ)).
  • 198. Partial Differential Equations Igor Yanovsky, 2005 198 Dirichlet Problem on a Half-Space. Solve the n-dimensional Laplace/Poisson equation on the half-space with Dirichlet boundary conditions. Proof. Use the method of reflection to construct Green’s function. Let Ω be an upper half-space in Rn. If x = (x , xn), where x ∈ Rn−1, we can see |x − ξ| = |x − ξ∗ |, and hence K(x − ξ) = K(x − ξ∗ ). Thus G(x, ξ) = K(x − ξ) − K(x − ξ∗ ) is the Green’s function on Ω. G(x, ξ) is harmonic in Ω, and G(x, ξ) = 0 on ∂Ω. To compute the Poisson kernel, we must differentiate G(x, ξ) in the negative xn direction. For n ≥ 2, ∂ ∂xn K(x − ξ) = xn − ξn ωn |x − ξ|−n , so that the Poisson kernel is given by − ∂ ∂xn G(x, ξ) xn=0 = 2ξn ωn |x − ξ|−n , for x ∈ Rn−1 . Thus, the solution is u(ξ) = ∂Ω ∂G(x, ξ) ∂nx g(x) dSx = 2ξn ωn Rn−1 g(x ) |x − ξ|n dx . If g(x ) is bounded and continuous for x ∈ Rn−1, then u(ξ) is C∞ and harmonic in Rn + and extends continuously to Rn + such that u(ξ ) = g(ξ ).
  • 199. Partial Differential Equations Igor Yanovsky, 2005 199 Problem (F’95, #3): Neumann Problem on a Half-Space. a) Consider the Neumann problem in the upper half plane, Ω = {x = (x1, x2) : −∞ < x1 < ∞, x2 > 0}: u = ux1x1 + ux2x2 = 0 x ∈ Ω, ux2 (x1, 0) = f(x1) − ∞ < x1 < ∞. Find the corresponding Green’s function and conclude that u(ξ) = u(ξ1, ξ2) = 1 2π ∞ −∞ ln [(x1 − ξ1)2 + ξ2 2] · f(x1) dx1 is a solution of the problem. b) Show that this solution is bounded in Ω if and only if ∞ −∞ f(x1) dx1 = 0. Proof. a) Notation: x = (x, y), ξ = (x0, y0). Since K(x−ξ) = 1 2π log |x−ξ|, n = 2. ➀ First, we find the Green’s function. We have K(x − ξ) = 1 2π log (x − x0)2 + (y − y0)2. Let G(x, ξ) = K(x − ξ) + ω(x). Since the problem is Neumann, we need: G(x, ξ) = δ(x − ξ), ∂G ∂y ((x, 0), ξ) = 0. G((x, y), ξ) = 1 2π log (x − x0)2 + (y − y0)2 + ω((x, y), ξ), ∂G ∂y ((x, y), ξ) = 1 2π y − y0 (x − x0)2 + (y − y0)2 + ωy((x, y), ξ), ∂G ∂y ((x, 0), ξ) = − 1 2π y0 (x − x0)2 + y2 0 + ωy((x, 0), ξ) = 0. Let ω((x, y), ξ) = a 2π log (x − x0)2 + (y + y0)2. Then, ∂G ∂y ((x, 0), ξ) = − 1 2π y0 (x − x0)2 + y2 0 + a 2π y0 (x − x0)2 + y2 0 = 0. Thus, a = 1. G((x, y), ξ) = 1 2π log (x − x0)2 + (y − y0)2 + 1 2π log (x − x0)2 + (y + y0)2. 46 ➁ Consider Green’s identity (after cutting out B (ξ) and having → 0): Ω (u G − G u =0 ) dx = ∂Ω u ∂G ∂n =0 −G ∂u ∂n dS 46 Note that for the Dirichlet problem, we would have gotten the “-” sign instead of “+” in front of ω.
  • 200. Partial Differential Equations Igor Yanovsky, 2005 200 Since ∂u ∂n = ∂u ∂(−y) = −f(x), we have Ω u δ(x − ξ) dx = ∞ −∞ G((x, y), ξ) f(x) dx, u(ξ) = ∞ −∞ G((x, y), ξ) f(x) dx. For y = 0, we have G((x, y), ξ) = 1 2π log (x − x0)2 + y2 0 + 1 2π log (x − x0)2 + y2 0 = 1 2π 2 log (x − x0)2 + y2 0 = 1 2π log (x − x0)2 + y2 0 . Thus, u(ξ) = 1 2π ∞ −∞ log (x − x0)2 + y2 0 f(x) dx. b) Show that this solution is bounded in Ω if and only if ∞ −∞ f(x1) dx1 = 0. Consider the Green’s identity: Ω u dxdy = ∂Ω ∂u ∂n dS = − ∞ −∞ ∂u ∂y dx = ∞ −∞ f(x) dx = 0. Note that the Green’s identity applies to bounded domains Ω. R −R f dx1 + 2π 0 ∂u ∂r R dθ = 0. ???
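For part (a), a symbolic check (a sketch) that the Green's function constructed above is harmonic away from the source points (x0, ±y0) and satisfies the Neumann condition on y = 0:

import sympy as sp

x, y, x0, y0 = sp.symbols('x y x0 y0', real=True)

G = (sp.log(sp.sqrt((x - x0)**2 + (y - y0)**2))
     + sp.log(sp.sqrt((x - x0)**2 + (y + y0)**2)))/(2*sp.pi)

print(sp.simplify(sp.diff(G, x, 2) + sp.diff(G, y, 2)))   # 0 away from the sources
print(sp.simplify(sp.diff(G, y).subs(y, 0)))              # 0, the Neumann condition G_y(x, 0) = 0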
  • 201. Partial Differential Equations Igor Yanovsky, 2005 201 McOwen 4.2 # 6. For n = 2, use the method of reflections to find the Green’s function for the first quadrant Ω = {(x, y) : x, y > 0}. Proof. For x ∈ ∂Ω, |x − ξ(0) | · |x − ξ(2) | = |x − ξ(1) | · |x − ξ(3) |, |x − ξ(0) | = |x − ξ(1) | · |x − ξ(3) | |x − ξ(2)| . But ξ(0) = ξ, so for n = 2, G(x, ξ) = 1 2π log |x − ξ| − 1 2π log |x − ξ(1)| · |x − ξ(3)| |x − ξ(2)| . G(x, ξ) = 0, x ∈ ∂Ω. Problem. Use the method of images to solve G = δ(x − ξ) in the first quadrant with G = 0 on the boundary. Proof. To solve the problem in the first quadrant we take a reflection to the fourth quadrant and the two are reflected to the left half. G = δ(x − ξ(0) ) − δ(x − ξ(1) ) − δ(x − ξ(2) ) + δ(x − ξ(3) ). G = 1 2π log |x − ξ(0)| |x − ξ(3)| |x − ξ(1)| |x − ξ(2)| = 1 2π log (x − x0)2 + (y − y0)2 (x + x0)2 + (y + y0)2 (x − x0)2 + (y + y0)2 (x + x0)2 + (y − y0)2 . Note that on the axes G = 0.
  • 202. Partial Differential Equations Igor Yanovsky, 2005 202 Problem (S’96, #3). Construct a Green’s function for the following mixed Dirichlet- Neumann problem in Ω = {x = (x1, x2) ∈ R2 : x1 > 0, x2 > 0}: u = ∂2 u ∂x2 1 + ∂2 u ∂x2 2 = f, x ∈ Ω, ux2 (x1, 0) = 0, x1 > 0, u(0, x2) = 0, x2 > 0. Proof. Notation: x = (x, y), ξ = (x0, y0). Since K(x − ξ) = 1 2π log |x − ξ|, n = 2. K(x − ξ) = 1 2π log (x − x0)2 + (y − y0)2. Let G(x, ξ) = K(x − ξ) + ω(x). At (0, y), y > 0, G (0, y), ξ = 1 2π log x2 0 + (y − y0)2 + ω(0, y) = 0. Also, Gy (x, y), ξ = 1 2π 1 2 · 2(y − y0) (x − x0)2 + (y − y0)2 + wy(x, y) = 1 2π y − y0 (x − x0)2 + (y − y0)2 + wy(x, y). At (x, 0), x > 0, Gy (x, 0), ξ = − 1 2π y0 (x − x0)2 + y2 0 + wy(x, 0) = 0. We have ω((x, y), ξ) = a 2π log (x + x0)2 + (y − y0)2 + b 2π log (x − x0)2 + (y + y0)2 + c 2π log (x + x0)2 + (y + y0)2. Using boundary conditions, we have 0 = G((0, y), ξ) = 1 2π log x2 0 + (y − y0)2 + ω(0, y) = 1 2π log x2 0 + (y − y0)2 + a 2π log x2 0 + (y − y0)2 + b 2π log x2 0 + (y + y0)2 + c 2π log x2 0 + (y + y0)2. Thus, a = −1, c = −b. Also, 0 = Gy((x, 0), ξ) = − 1 2π y0 (x − x0)2 + y2 0 + wy(x, 0) = − 1 2π y0 (x − x0)2 + y2 0 − (−1) 2π y0 (x + x0)2 + y2 0 + b 2π y0 (x − x0)2 + y2 0 + (−b) 2π y0 (x + x0)2 + y2 0 . Thus, b = 1, and G((x, y), ξ) = 1 2π log (x − x0)2 + (y − y0)2 + ω(x) = 1 2π log (x − x0)2 + (y − y0)2
  • 203. Partial Differential Equations Igor Yanovsky, 2005 203 − log (x + x0)2 + (y − y0)2 + log (x − x0)2 + (y + y0)2 − log (x + x0)2 + (y + y0)2 . It can be seen that G((x, y), ξ) = 0 on x = 0, for example.
  • 204. Partial Differential Equations Igor Yanovsky, 2005 204 Dirichlet Problem on a Ball. Solve the n-dimensional Laplace/Poisson equation on the ball with Dirichlet boundary conditions. Proof. Use the method of reflection to construct Green’s function. Let Ω = {x ∈ Rn : |x| < a}. For ξ ∈ Ω, define ξ∗ = a2ξ |ξ|2 as its reflection in ∂Ω; note ξ∗ /∈ Ω. |x − ξ∗ | |x − ξ| = a |ξ| for |x| = a. ⇒ |x − ξ| = |ξ| a |x − ξ∗ |. (17.2) From (17.2) we conclude that for x ∈ ∂Ω (i.e. |x| = a), K(x − ξ) = ⎧ ⎨ ⎩ 1 2π log |ξ| a |x − ξ∗ | if n = 2 a |ξ| n−2 K(x − ξ∗ ) if n ≥ 3. (17.3) Define for x, ξ ∈ Ω: G(x, ξ) = ⎧ ⎨ ⎩ K(x − ξ) − 1 2π log |ξ| a |x − ξ∗ | if n = 2 K(x − ξ) − a |ξ| n−2 K(x − ξ∗ ) if n ≥ 3. Since ξ∗ is not in Ω, the second terms on the RHS are harmonic in x ∈ Ω. Moreover, by (17.3) we have G(x, ξ) = 0 if x ∈ ∂Ω. Thus, G is the Green’s function for Ω. u(ξ) = ∂Ω ∂G(x, ξ) ∂nx g(x) dSx = a2 − |ξ|2 aωn |x|=a g(x) |x − ξ|n dSx.
  • 205. Partial Differential Equations Igor Yanovsky, 2005 205 17.2 The Fundamental Solution Problem (F’99, #2). ➀ Given that Ka(x − y) and Kb(x − y) are the kernels for the operators ( − aI)−1 and ( − bI)−1 on L2 (Rn ), where 0 < a < b, show that ( − aI)( − bI) has a fundamental solution of the form c1Ka + c2Kb. ➁ Use the preceding to find a fundamental solution for 2 − , when n = 3. Proof. METHOD ❶: ➀ ( − aI)u = f ( − bI)u = f u = Ka f u = Kb f fundamental solution ⇔ kernel ⇒ u = Kaf u = Kbf if u ∈ L2 , ( − aI)u = (−|ξ|2 − a)u = f ( − bI)u = (−|ξ|2 − b)u = f ⇒ u = − 1 (ξ2 + a) f(ξ) u = − 1 (ξ2 + b) f(ξ) ⇒ Ka = − 1 ξ2 + a Kb = − 1 ξ2 + b ( − aI)( − bI)u = f, 2 − (a + b) + abI u = f, u = 1 (ξ2 + a)(ξ2 + b) f(ξ) = Knewf(ξ), Knew = 1 (ξ2 + a)(ξ2 + b) = 1 b − a − 1 ξ2 + b + 1 ξ2 + a = 1 b − a (Kb − Ka), Knew = 1 b − a (Kb − Ka), c1 = 1 b − a , c2 = − 1 b − a . ➁ n = 3 is not relevant (may be used to assume Ka, Kb ∈ L2 ). For 2 − , a = 0, b = 1 above, or more explicitly ( 2 − )u = f, (ξ4 + ξ2 )u = f, u = 1 (ξ4 + ξ2) f, K = 1 (ξ4 + ξ2) = 1 ξ2(ξ2 + 1) = − 1 ξ2 + 1 + 1 ξ2 = K1 − K0.
  • 206. Partial Differential Equations Igor Yanovsky, 2005 206 METHOD ❷: • For u ∈ C∞ 0 (Rn) we have: u(x) = Rn Ka(x − y) ( − aI) u(y) dy, ➀ u(x) = Rn Kb(x − y) ( − bI) u(y) dy. ➁ Let u(x) = c1( − bI) φ(x), for ➀ u(x) = c2( − aI) φ(x), for ➁ for φ(x) ∈ C∞ 0 (Rn ). Then, c1( − bI)φ(x) = Rn Ka(x − y) ( − aI) c1( − bI)φ(y) dy, c2( − aI)φ(x) = Rn Kb(x − y) ( − bI) c2( − aI)φ(y) dy. We add two equations: (c1 + c2) φ(x) − (c1b + c2a)φ(x) = Rn (c1Ka + c2Kb) ( − aI) ( − bI) φ(y) dy. If c1 = −c2 and −(c1b + c2a) = 1, that is, c1 = 1 a−b, we have: φ(x) = Rn 1 a − b (Ka − Kb) ( − aI) ( − bI) φ(y) dy, which means that 1 a−b(Ka − Kb) is a fundamental solution of ( − aI)( − bI). • 2 − = ( − 1) = ( − 0I)( − 1I). ( − 0I) has fundamental solution K0 = − 1 4πr in R3 . To find K, a fundamental solution for ( − 1I), we need to solve for a radially symmetric solution of ( − 1I)K = δ. In spherical coordinates, in R3, the above expression may be written as: K + 2 r K − K = 0. Let K = 1 r w(r), K = 1 r w − 1 r2 w, K = 1 r w − 2 r2 w + 2 r3 w. Plugging these into , we obtain: 1 r w − 1 r w = 0, or w − w = 0.
  • 207. Partial Differential Equations Igor Yanovsky, 2005 207 Thus, w = c1er + c2e−r , K = 1 r w(r) = c1 er r + c2 e−r r . Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let Ω = Ω − B (0). Note: ( − I)K(|x|) = 0 in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)): Ω K(|x|) v − v K(|x|) dx = ∂Ω K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS =0, since v≡0 for x≥R + ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS We add − Ω K(|x|) v dx + Ω v K(|x|) dx to LHS to get: Ω K(|x|)( − I)v − v ( − I)K(|x|) = 0, in Ω dx = ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS. lim →0 Ω K(|x|)( − I)v dx = Ω K(|x|)( − I)v dx. Since K(r) = c1 er r + c2 e−r r is integrable at x = 0. On ∂B (0), K(|x|) = K( ). Thus, 47 ∂B (0) K(|x|) ∂v ∂n dS = K( ) ∂B (0) ∂v ∂n dS ≤ c1 e + c2 e− 4π 2 max ∇v → 0, as → 0. ∂B (0) v(x) ∂K(|x|) ∂n dS = ∂B (0) 1 − c1e + c2e− + 1 2 c1e + c2e− v(x) dS = 1 − c1e + c2e− + 1 2 c1e + c2e− ∂B (0) v(x) dS = 1 − c1e + c2e− + 1 2 c1e + c2e− ∂B (0) v(0) dS + 1 − c1e + c2e− + 1 2 c1e + c2e− ∂B (0) [v(x) − v(0)] dS → 1 2 c1e + c2e− v(0) 4π 2 → 4π(c1 + c2)v(0) = −v(0). Thus, taking c1 = c2, we have c1 = c2 = − 1 8π , which gives Ω K(|x|)( − I)v dx = lim →0 Ω K(|x|)( − I)v dx = v(0), 47 In R3 , for |x| = , K(|x|) = K( ) = c1 e + c2 e− . ∂K(|x|) ∂n = − ∂K( ) ∂r = −c1 e − e 2 − c2 − e− − e− 2 = 1 − c1e + c2e− + 1 2 c1e + c2e− , since n points inwards. n points toward 0 on the sphere |x| = (i.e., n = −x/|x|).
  • 208. Partial Differential Equations Igor Yanovsky, 2005 208 that is K(r) = − 1 8π er r + e−r r = − 1 4πr cosh(r) is the fundamental solution of ( − I). By part (a), 1 a−b(Ka − Kb) is a fundamental solution of ( − aI)( − bI). Here, the fundamental solution of ( − 0I)( − 1I) is 1 −1 (K0 − K) = − − 1 4πr + 1 4πr cosh(r) = 1 4πr 1 − cosh(r) .
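A quick symbolic check (a sketch) that, away from the origin, K(r) = −cosh(r)/(4πr) is annihilated by the radial form of (Δ − I) in R³, and hence that (1 − cosh r)/(4πr) is annihilated by Δ(Δ − I) = Δ² − Δ there:

import sympy as sp

r = sp.symbols('r', positive=True)

def radial_laplacian(w):     # Laplacian of a radial function in R^3
    return sp.diff(w, r, 2) + 2*sp.diff(w, r)/r

K = -sp.cosh(r)/(4*sp.pi*r)
print(sp.simplify(radial_laplacian(K) - K))                     # 0, i.e. (Lap - I)K = 0 for r != 0

F = (1 - sp.cosh(r))/(4*sp.pi*r)                                # = K0 - K with K0 = -1/(4*pi*r)
print(sp.simplify(radial_laplacian(radial_laplacian(F) - F)))   # 0, i.e. (Lap^2 - Lap)F = 0 for r != 0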
  • 209. Partial Differential Equations Igor Yanovsky, 2005 209 Problem (F’91, #3). Prove that − 1 4π cos k|x| |x| is a fundamental solution for ( + k2 ) in R3 where |x| = x2 1 + x2 2 + x2 3, i.e. prove that for any smooth function f(x) with compact support u(x) = − 1 4π R3 cos k|x − y| |x − y| f(y) dy is a solution to ( + k2 )u = f. Proof. For v ∈ C∞ 0 (Rn ), we want to show that for K(|x|) = − 1 4π cos k|x| |x| , we have ( + k2)K = δ, i.e. Rn K(|x|) ( + k2 )v(x) dx = v(0). Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let Ω = Ω − B (0). ( + k2 )K(|x|) = 0 in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)): Ω K(|x|) v − v K(|x|) dx = ∂Ω K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS =0, since v≡0 for x≥R + ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS We add Ω k2 K(|x|) v dx − Ω v k2 K(|x|) dx to LHS to get: Ω K(|x|)( + k2 )v − v ( + k2 )K(|x|) = 0, in Ω dx = ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n dS. lim →0 Ω K(|x|)( + k2 )v dx = Ω K(|x|)( + k2 )v dx. Since K(r) = − cos kr 4πr is integrable at x = 0. On ∂B (0), K(|x|) = K( ). Thus, 48 ∂B (0) K(|x|) ∂v ∂n dS = K( ) ∂B (0) ∂v ∂n dS ≤ − cos k 4π 4π 2 max ∇v → 0, as → 0. 48 In R3 , for |x| = , K(|x|) = K( ) = − cos k 4π . ∂K(|x|) ∂n = − ∂K( ) ∂r = 1 4π − k sin k − cos k 2 = − 1 4π k sin k + cos k , since n points inwards. n points toward 0 on the sphere |x| = (i.e., n = −x/|x|).
  • 210. Partial Differential Equations Igor Yanovsky, 2005 210 ∂B (0) v(x) ∂K(|x|) ∂n dS = ∂B (0) − 1 4π k sink + cos k v(x) dS = − 1 4π k sink + cos k ∂B (0) v(x) dS = − 1 4π k sink + cos k ∂B (0) v(0) dS − 1 4π k sin k + cos k ∂B (0) [v(x) − v(0)] dS = − 1 4π k sink + cos k v(0) 4π 2 − 1 4π k sin k + cos k [v(x) − v(0)] 4π 2 →0, (v is continuous) → − cos k v(0) → −v(0). Thus, Ω K(|x|)( + k2 )v dx = lim →0 Ω K(|x|)( + k2 )v dx = v(0), that is, K(r) = − 1 4π cos kr r is the fundamental solution of + k2 . Problem (F’97, #2). Let u(x) be a solution of the Helmholtz equation u + k2 u = 0 x ∈ R3 satisfying the “radiation” conditions u = O 1 r , ∂u ∂r − iku = O 1 r2 , |x| = r → ∞. Prove that u ≡ 0. Hint: A fundamental solution to the Helmholtz equation is 1 4πr eikr. Use the Green formula. Proof. Denote K(|x|) = 1 4πr eikr , a fundamental solution. Thus, ( + k2 )K = δ. Let x0 be any point and Ω = BR(x0); for small > 0 let Ω = Ω − B (x0). ( + k2 )K(|x|) = 0 in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (x0)): Ω u ( + k2 )K − K( + k2 )u dx = 0 = ∂Ω u ∂K ∂n − K ∂u ∂n dS + ∂B (x0) u ∂K ∂n − K ∂u ∂n dS →u(x0), as →0 . (It can be shown by the method previously used that the integral over B (x0) ap- proaches u(x0) as → 0.) Taking the limit when → 0, we obtain −u(x0) = ∂Ω u ∂K ∂n − K ∂u ∂n dS = ∂Ω u ∂ ∂r eik|x−x0| 4π|x − x0| − eik|x−x0| 4π|x − x0| ∂u ∂r dS = ∂Ω u ∂ ∂r eik|x−x0| 4π|x − x0| − ik eik|x−x0| 4π|x − x0| = O( 1 |x|2 ); (can be shown) − eik|x−x0| 4π|x − x0| ∂u ∂r − iku dS = O 1 R · O 1 R2 · 4πR2 − O 1 R · O 1 R2 · 4πR2 = 0. Taking the limit when R → ∞, we get u(x0) = 0.
  • 211. Partial Differential Equations Igor Yanovsky, 2005 211 Problem (S’02, #1). a) Find a radially symmetric solution, u, to the equation in R2, u = 1 2π log |x|, and show that u is a fundamental solution for 2 , i.e. show φ(0) = R2 u 2 φ dx for any smooth φ which vanishes for |x| large. b) Explain how to construct the Green’s function for the following boundary value in a bounded domain D ⊂ R2 with smooth boundary ∂D w = 0 and ∂w ∂n = 0 on ∂D, 2 w = f in D. Proof. a) Rewriting the equation in polar coordinates, we have u = 1 r rur r + 1 r2 uθθ = 1 2π log r. For a radially symmetric solution u(r), we have uθθ = 0. Thus, 1 r rur r = 1 2π log r, rur r = 1 2π r log r, rur = 1 2π r log r dr = r2 log r 4π − r2 8π , ur = r log r 4π − r 8π , u = 1 4π r log r dr − 1 8π r dr = 1 8π r2 log r − 1 . u(r) = 1 8π r2 log r − 1 . We want to show that u defined above is a fundamental solution of 2 for n = 2. That is R2 u 2 v dx = v(0), v ∈ C∞ 0 (Rn ). See the next page that shows that u defined as u(r) = 1 8π r2 log r is the Fundamental Solution of 2 . (The − 1 8π r2 term does not play any role.) In particular, the solution of 2 ω = f(x), if given by ω(x) = R2 u(x − y) 2 ω(y) dy = 1 8π R2 |x − y|2 log |x − y| − 1 f(y) dy.
  • 212. Partial Differential Equations Igor Yanovsky, 2005 212 b) Let K(x − ξ) = 1 8π |x − ξ|2 log |x − ξ| − 1 . We use the method of images to construct the Green’s function. Let G(x, ξ) = K(x − ξ) + ω(x). We need G(x, ξ) = 0 and ∂G ∂n (x, ξ) = 0 for x ∈ ∂Ω. Consider wξ(x) with 2wξ(x) = 0 in Ω, wξ(x) = −K(x − ξ) and ∂wξ ∂n (x) = −∂K ∂n (x − ξ) on ∂Ω. Note, we can find the Greens function for the upper-half plane, and then make a conformal map onto the domain.
Problem (S'97, #6). Show that the fundamental solution of Δ² in R² is given by
V(x1, x2) = (1/8π) r² ln(r), r = |x − ξ|,
and write the solution of Δ²w = F(x1, x2).
Hint: In polar coordinates, Δ = (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ²; for example, ΔV = (1/2π)(1 + ln(r)).

Proof. Notation: x = (x1, x2). We have V(x) = (1/8π) r² log(r). In polar coordinates (here V_θθ = 0):
ΔV = (1/r)(r V_r)_r = (1/r) ( r · ((1/8π) r² log(r))_r )_r = (1/8π) (1/r) ( r (2r log(r) + r) )_r
= (1/8π) (1/r) ( 2r² log(r) + r² )_r = (1/8π) (1/r) ( 4r log(r) + 4r ) = (1/2π) (1 + log r).
The fundamental solution V(x) for Δ² is the distribution satisfying Δ²V(r) = δ(r):
Δ²V = Δ(ΔV) = Δ( (1/2π)(1 + log r) ) = (1/2π) Δ(1 + log r) = (1/2π) (1/r) ( r (1 + log r)_r )_r = (1/2π) (1/r) ( r · (1/r) )_r = (1/2π) (1/r) (1)_r = 0 for r ≠ 0.
Thus, Δ²V(r) = δ(r) ⇒ V is the fundamental solution.
The approach above is not rigorous. See the next page, which shows rigorously that V defined above is the fundamental solution of Δ².
The solution of Δ²ω = F(x) is given by
ω(x) = ∫_{R²} V(x − y) Δ²ω(y) dy = (1/8π) ∫_{R²} |x − y|² log|x − y| F(y) dy.
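The polar-coordinate computation above can be checked symbolically; the following sketch (assuming SymPy is available; the helper name lap is illustrative) verifies both ΔV = (1/2π)(1 + log r) and Δ²V = 0 away from the origin.

import sympy as sp

r = sp.symbols('r', positive=True)
V = r**2 * sp.log(r) / (8*sp.pi)
lap = lambda f: sp.diff(f, r, 2) + sp.diff(f, r)/r   # 2D radial Laplacian
print(sp.simplify(lap(V) - (1 + sp.log(r))/(2*sp.pi)))  # expected: 0
print(sp.simplify(lap(lap(V))))                          # expected: 0 (for r != 0)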
  • 214. Partial Differential Equations Igor Yanovsky, 2005 214 Show that the Fundamental Solution of 2 in R2 is given by: K(x) = 1 8π r2 ln(r), r = |x − ξ|, (17.4) Proof. For v ∈ C∞ 0 (Rn), we want to show Rn K(|x|) 2 v(x) dx = v(0). Suppose v(x) ≡ 0 for |x| ≥ R and let Ω = BR(0); for small > 0 let Ω = Ω − B (0). K(|x|) is biharmonic ( 2 K(|x|) = 0) in Ω . Consider Green’s identity (∂Ω = ∂Ω ∪ ∂B (0)): Ω K(|x|) 2 v dx = ∂Ω K(|x|) ∂ v ∂n − v ∂ K(|x|) ∂n ds + ∂Ω K(|x|) ∂v ∂n − v ∂K(|x|) ∂n ds =0, since v≡0 for x≥R + ∂B (0) K(|x|) ∂ v ∂n − v ∂ K(|x|) ∂n ds + ∂B (0) K(|x|) ∂v ∂n − v ∂K(|x|) ∂n ds. lim →0 Ω K(|x|) 2 v dx = Ω K(|x|) v2 dx. Since K(r) is integrable at x = 0. On ∂B (0), K(|x|) = K( ). Thus, 49 ∂B (0) K(|x|) ∂ v ∂n dS = K( ) ∂B (0) ∂ v ∂n dS ≤ K( ) ωn 1 max x∈Ω ∇( v) = 1 8π 2 log( ) ωn max x∈Ω ∇( v) → 0, as → 0. ∂B (0) v(x) ∂ K(|x|) ∂n dS = ∂B (0) − 1 2π v(x) dS = ∂B (0) − 1 2π v(0) dS + ∂B (0) − 1 2π [v(x) − v(0)] dS = − 1 2π v(0) 2π − max x∈∂B (0) v(x) − v(0) →0, (v is continuous) = −v(0). ∂B (0) K(|x|) ∂v ∂n dS = K( ) ∂B (0) ∂v ∂n dS ≤ 1 2π (1 + log ) 2π max x∈Ω |∇v| → 0, as → 0. ∂B (0) v ∂K(|x|) ∂n dS = ∂B (0) − 1 4π log − 1 8π v(x) dS ≤ 4π log + 1 2 · 2π max x∈∂B (0) | v| → 0, as → 0. 49 Note that for |x| = , K(|x|) = K( ) = 1 8π 2 log , K = 1 2π (1 + log ), ∂K(|x|) ∂n = − ∂K( ) ∂r = − 1 4π log − 1 8π , ∂ K ∂n = − ∂ K ∂r = − 1 2π .
⇒  ∫_Ω K(|x|) Δ²v dx = lim_{ε→0} ∫_{Ω_ε} K(|x|) Δ²v dx = v(0).
17.3 Radial Variables

Problem (F'99, #8). Let u = u(x, t) solve the following PDE in two spatial dimensions
−Δu = 1 for r < R(t),
in which r = |x| is the radial variable, with boundary condition u = 0 on r = R(t). In addition assume that R(t) satisfies
dR/dt = −∂u/∂r (r = R)
with initial condition R(0) = R0.
a) Find the solution u(x, t).
b) Find an ODE for the outer radius R(t), and solve for R(t).

Proof. a) Rewrite the equation in polar coordinates:
−Δu = −[ (1/r)(r u_r)_r + (1/r²) u_θθ ] = 1.
For a radially symmetric solution u(r), we have u_θθ = 0. Thus,
(1/r)(r u_r)_r = −1,  (r u_r)_r = −r,  r u_r = −r²/2 + c1,  u_r = −r/2 + c1/r,
u(r, t) = −r²/4 + c1 log r + c2.
Since we want u to be defined at r = 0, we have c1 = 0. Thus, u(r, t) = −r²/4 + c2.
Using the boundary condition, u(R(t), t) = −R(t)²/4 + c2 = 0  ⇒  c2 = R(t)²/4. Thus,
u(r, t) = −r²/4 + R(t)²/4.

b) We have u(r, t) = −r²/4 + R(t)²/4,  ∂u/∂r = −r/2,
dR/dt = −∂u/∂r (r = R) = R/2,  (from the given law)
dR/R = dt/2,  log R = t/2 + const,  R(t) = c1 e^{t/2},  R(0) = c1 = R0. Thus,
R(t) = R0 e^{t/2}.
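Both parts can be confirmed symbolically: the sketch below (assuming SymPy is available; symbol names are illustrative) checks that u = (R(t)² − r²)/4 satisfies −Δu = 1 and the boundary condition, and that the resulting ODE R' = R/2 integrates to R(t) = R0 e^{t/2}.

import sympy as sp

r, t, R0 = sp.symbols('r t R_0', positive=True)
R = sp.Function('R')(t)
u = (R**2 - r**2)/4
lap2d = sp.diff(u, r, 2) + sp.diff(u, r)/r        # 2D radial Laplacian
print(sp.simplify(-lap2d))                        # expected: 1
print(sp.simplify(u.subs(r, R)))                  # expected: 0 (boundary condition)
ode = sp.Eq(sp.diff(R, t), -sp.diff(u, r).subs(r, R))   # dR/dt = -u_r(R) = R/2
print(sp.dsolve(ode, R, ics={R.subs(t, 0): R0}))        # expected: R(t) = R_0*exp(t/2)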
  • 218. Partial Differential Equations Igor Yanovsky, 2005 218 Problem (F’01, #3). Let u = u(x, t) solve the following PDE in three spatial di- mensions u = 0 for R1 < r < R(t), in which r = |x| is the radial variable, with boundary conditions u(r = R(t), t) = 0, and u(r = R1, t) = 1. In addition assume that R(t) satisfies dR dt = − ∂u ∂r (r = R) with initial condition R(0) = R0 in which R0 > R1. a) Find the solution u(x, t). b) Find an ODE for the outer radius R(t). Proof. a) Rewrite the equation in spherical coordinates (n = 3, radial functions): u = ∂2 ∂r2 + 2 r ∂ ∂r u = 1 r2 (r2 ur)r = 0. (r2 ur)r = 0, r2 ur = c1, ur = c1 r2 , u(r, t) = − c1 r + c2. Using boundary conditions, we have u(R(t), t) = − c1 R(t) + c2 = 0 ⇒ c2 = c1 R(t) , u(R1, t) = − c1 R1 + c2 = 1. This gives c1 = R1R(t) R1 − R(t) , c2 = R1 R1 − R(t) . u(r, t) = − R1R(t) R1 − R(t) · 1 r + R1 R1 − R(t) . b) We have u(r, t) = − R1R(t) R1 − R(t) · 1 r + R1 R1 − R(t) , ∂u ∂r = R1R(t) R1 − R(t) · 1 r2 , dR dt = − ∂u ∂r (r = R) = − R1R(t) R1 − R(t) · 1 R(t)2 = − R1 (R1 − R(t)) R(t) (from ) Thus, an ODE for the outer radius R(t) is dR dt = R1 (R(t)−R1) R(t), R(0) = R0, R0 > R1.
  • 219. Partial Differential Equations Igor Yanovsky, 2005 219 Problem (S’02, #3). Steady viscous flow in a cylindrical pipe is described by the equation (u · ∇)u + 1 ρ ∇p − η ρ u = 0 on the domain −∞ < x1 < ∞, x2 2 +x2 3 ≤ R2 , where u = (u1, u2, u3) = (U(x2, x3), 0, 0) is the velocity vector, p(x1, x2, x3) is the pressure, and η and ρ are constants. a) Show that ∂p ∂x1 is a constant c, and that U = c/η. b) Assuming further that U is radially symmetric and U = 0 on the surface of the pipe, determine the mass Q of fluid passing through a cross-section of pipe per unit time in terms of c, ρ, η, and R. Note that Q = ρ {x2 2+x2 3≤R2} U dx2dx3. Proof. a) Since u = (u1, u2, u3) = (U(x2, x3), 0, 0), we have (u · ∇)u = (u1, u2, u3) · ∂u1 ∂x1 , ∂u2 ∂x2 , ∂u3 ∂x3 = (U(x2, x3), 0, 0) · (0, 0, 0) = 0. Thus, 1 ρ ∇p − η ρ u = 0, ∇p = η u, ∂p ∂x1 , ∂p ∂x2 , ∂p ∂x3 = η( u1, u2, u3), ∂p ∂x1 , ∂p ∂x2 , ∂p ∂x3 = η(Ux2x2 + Ux3x3 , 0, 0). We can make the following observations: ∂p ∂x1 = η (Ux2x2 + Ux3x3 ) indep. of x1 , ∂p ∂x2 = 0 ⇒ p = f(x1, x3), ∂p ∂x3 = 0 ⇒ p = g(x1, x2). Thus, p = h(x1). But ∂p ∂x1 is independent of x1. Therefore, ∂p ∂x1 = c. ∂p ∂x1 = η U, U = 1 η ∂p ∂x1 = c η .
b) The cylindrical Laplacian in R³ for radial functions (radial in (x2, x3)) is ΔU = (1/r)(r U_r)_r. Thus,
(1/r)(r U_r)_r = c/η,  (r U_r)_r = cr/η,  r U_r = cr²/(2η) + c1,  U_r = cr/(2η) + c1/r.
For U_r to stay bounded at r = 0, we need c1 = 0. Thus,
U_r = cr/(2η),  U = cr²/(4η) + c2,  0 = U(R) = cR²/(4η) + c2,  ⇒  U = cr²/(4η) − cR²/(4η) = (c/4η)(r² − R²).
Q = ρ ∫_{x2²+x3² ≤ R²} U dx2 dx3 = (cρ/4η) ∫_0^{2π} ∫_0^R (r² − R²) r dr dθ = −(cρ/4η) ∫_0^{2π} (R⁴/4) dθ = −cρπR⁴/(8η).
Note on the sign: inside the pipe U = (c/4η)(r² − R²) ≤ 0 whenever c = ∂p/∂x1 > 0, i.e. the fluid flows toward decreasing pressure, in the −x1 direction, so Q < 0. If the pressure drops downstream (c < 0), then Q > 0.
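As a sanity check on the closed form and its sign, the flux integral can be evaluated symbolically; a minimal sketch, assuming SymPy is available:

import sympy as sp

r, th, R, c, rho, eta = sp.symbols('r theta R c rho eta', positive=True)
U = c*(r**2 - R**2)/(4*eta)                       # velocity profile from part (b)
Q = rho*sp.integrate(U*r, (r, 0, R), (th, 0, 2*sp.pi))
print(sp.simplify(Q))                             # expected: -pi*c*rho*R**4/(8*eta)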
  • 221. Partial Differential Equations Igor Yanovsky, 2005 221 17.4 Weak Solutions Problem (S’98, #2). A function u ∈ H2 0 (Ω) is a weak solution of the biharmonic equation ⎧ ⎪⎨ ⎪⎩ 2u = f in Ω u = 0 on ∂Ω ∂u ∂n = 0 on ∂Ω provided Ω u v dx = Ω fv dx for all test functions v ∈ H2 0 (Ω). Prove that for each f ∈ L2 (Ω), there exists a unique weak solution for this problem. Here, H2 0 (Ω) is the closure of all smooth functions in Ω which vanish on the boundary and with finite H2 norm: ||u||2 2 = Ω(u2 xx + u2 xy + u2 yy) dxdy < ∞. Hint: use Lax-Milgram lemma. Proof. Multiply the equation by v ∈ H2 0 (Ω) and integrate over Ω: 2 u = f, Ω 2 u v dx = Ω f v dx, ∂Ω ∂ u ∂n v ds − ∂Ω u ∂v ∂n ds = 0 + Ω u v dx = Ω f v dx, Ω u v dx a(u,v) = Ω f v dx L(v) . Denote: V = H2 0 (Ω). Check the following conditions: ❶ a(·, ·) is continuous: ∃γ > 0, s.t. |a(u, v)| ≤ γ||u||V ||v||V , ∀u, v ∈ V ; ❷ a(·, ·) is V-elliptic: ∃α > 0, s.t. a(v, v) ≥ α||v||2 V , ∀v ∈ V ; ❸ L(·) is continuous: ∃Λ > 0, s.t. |L(v)| ≤ Λ||v||V , ∀v ∈ V.
  • 222. Partial Differential Equations Igor Yanovsky, 2005 222 We have 50 ❶ |a(u, v)|2 = Ω u v dx 2 ≤ Ω ( u)2 dx Ω ( v)2 dx ≤ ||u||2 H2 0(Ω)||v||2 H2 0(Ω). ❷ a(v, v) = Ω ( v)2 dx ≥ ||v||H2 0(Ω). ❸ |L(v)| = Ω f v dx ≤ Ω |f| |v| dx ≤ Ω f2 dx 1 2 Ω v2 dx 1 2 = ||f||L2(Ω)||v||L2(Ω) ≤ ||f||L2(Ω) Λ ||v||H2 0(Ω). Thus, by Lax-Milgram theorem, there exists a weak solution u ∈ H2 0 (Ω). Also, we can prove the stability result. α||u||2 H2 0(Ω) ≤ a(u, u) = |L(u)| ≤ Λ||u||H2 0(Ω), ⇒ ||u||H2 0(Ω) ≤ Λ α . Let u1, u2 be two solutions so that a(u1, v) = L(v), a(u2, v) = L(v) for all v ∈ V . Subtracting these two equations, we see that: a(u1 − u2, v) = 0 ∀v ∈ V. Applying the stability estimate (with L ≡ 0, i.e. Λ = 0), we conclude that ||u1 − u2||H2 0 (Ω) = 0, i.e. u1 = u2. 50 Cauchy-Schwarz Inequality: |(u, v)| ≤ ||u||||v|| in any norm, for example |uv|dx ≤ ( u2 dx) 1 2 ( v2 dx) 1 2 ; |a(u, v)| ≤ a(u, u) 1 2 a(v, v) 1 2 ; |v|dx = |v| · 1 dx = ( |v|2 dx) 1 2 ( 12 dx) 1 2 . Poincare Inequality: ||v||H2(Ω) ≤ C Ω ( v)2 dx. Green’s formula: Ω ( u)2 dx = Ω (u2 xx + u2 yy + 2uxxuyy) dxdy = Ω (u2 xx + u2 yy − 2uxxy uy) dxdy = Ω (u2 xx + u2 yy + 2|uxy|2 ) dxdy ≥ ||u||2 H2 0(Ω).
  • 223. Partial Differential Equations Igor Yanovsky, 2005 223 17.5 Uniqueness Problem. The solution of the Robin problem ∂u ∂n + αu = β, x ∈ ∂Ω for the Laplace equation is unique when α > 0 is a constant. Proof. Let u1 and u2 be two solutions of the Robin problem. Let w = u1 − u2. Then w = 0 in Ω, ∂w ∂n + αw = 0 on ∂Ω. Consider Green’s formula: Ω ∇u · ∇v dx = ∂Ω v ∂u ∂n ds − Ω v u dx. Setting w = u = v gives Ω |∇w|2 dx = ∂Ω w ∂w ∂n ds − Ω w w dx =0 . Boundary condition gives Ω |∇w|2 dx ≥0 = − ∂Ω αw2 ds ≤0 . Thus, w ≡ 0, and u1 ≡ u2. Hence, the solution to the Robin problem is unique. Problem. Suppose q(x) ≥ 0 for x ∈ Ω and consider solutions u ∈ C2 (Ω) ∩ C1 (Ω) of u − q(x)u = 0 in Ω. Establish uniqueness theorems for a) the Dirichlet problem: u(x) = g(x), x ∈ ∂Ω; b) the Neumann problem: ∂u/∂n = h(x), x ∈ ∂Ω. Proof. Let u1 and u2 be two solutions of the Dirichlet or Neumann problem. Let w = u1 − u2. Then w − q(x)w = 0 in Ω, w = 0 or ∂w ∂n = 0 on ∂Ω. Consider Green’s formula: Ω ∇u · ∇v dx = ∂Ω v ∂u ∂n ds − Ω v u dx. Setting w = u = v gives Ω |∇w|2 dx = ∂Ω w ∂w ∂n ds =0, Dirichlet or Neumann − Ω w w dx.
  • 224. Partial Differential Equations Igor Yanovsky, 2005 224 Ω |∇w|2 dx ≥0 = − Ω q(x)w2 dx ≤0 . Thus, w ≡ 0, and u1 ≡ u2. Hence, the solution to the Dirichlet and Neumann problems are unique. Problem (F’02, #8; S’93, #5). Let D be a bounded domain in R3. Show that a solution of the boundary value problem 2 u = f in D, u = u = 0 on ∂D is unique. Proof. Method I: Maximum Principle. Let u1, u2 be two solutions of the boundary value problem. Define w = u1 − u2. Then w satisfies 2 w = 0 in D, w = w = 0 on ∂D. So w is harmonic and thus achieves min and max on ∂D ⇒ w ≡ 0. So w is harmonic, but w ≡ 0 on ∂D ⇒ w ≡ 0. Hence, u1 = u2. Method II: Green’s Identities. Multiply the equation by w and integrate: w 2 w = 0, Ω w 2 w dx = 0, ∂Ω w ∂( w) ∂n ds =0 − Ω ∇w∇( )w dx = 0, − ∂Ω ∂w ∂n w ds =0 + Ω ( w)2 dx = 0. Thus, w ≡ 0. Now, multiply w = 0 by w. We get Ω |∇w|2 dx = 0. Thus, ∇w = 0 and w is a constant. Since w = 0 on ∂Ω, we have w ≡ 0. Problem (F’97, #6). a) Let u(x) ≥ 0 be continuous in closed bounded domain Ω ⊂ Rn, u is continuous in Ω, u = u2 and u|∂Ω = 0. Prove that u ≡ 0. b) What can you say about u(x) when the condition u(x) ≥ 0 in Ω is dropped?
Proof. a) Multiply the equation by u and integrate:
u Δu = u³,
∫_Ω u Δu dx = ∫_Ω u³ dx,
∫_∂Ω u (∂u/∂n) ds [= 0, since u|_∂Ω = 0] − ∫_Ω |∇u|² dx = ∫_Ω u³ dx,
∫_Ω ( u³ + |∇u|² ) dx = 0.
Since u(x) ≥ 0, both terms in the integrand are nonnegative, so u³ ≡ 0 and |∇u|² ≡ 0; hence u ≡ 0.
b) If the condition u(x) ≥ 0 is dropped, the same identity ∫_Ω ( u³ + |∇u|² ) dx = 0 still holds, so ∫_Ω u³ dx = −∫_Ω |∇u|² dx ≤ 0. Therefore either u ≡ 0, or u must take negative values on some part of Ω; u cannot be nonnegative throughout Ω unless it vanishes identically.
  • 226. Partial Differential Equations Igor Yanovsky, 2005 226 Problem (W’02, #5). Consider the boundary value problem u + n k=1 αk ∂u ∂xk − u3 = 0 in Ω, u = 0 on ∂Ω, where Ω is a bounded domain in Rn with smooth boundary. If the αk’s are constants, and u(x) has continuous derivatives up to second order, prove that u must vanish identically. Proof. Multiply the equation by u and integrate: u u + n k=1 αk ∂u ∂xk u − u4 = 0, Ω u u dx + Ω n k=1 αk ∂u ∂xk u dx − Ω u4 dx = 0, ∂Ω u ∂u ∂n ds = 0 − Ω |∇u|2 dx + Ω n k=1 αk ∂u ∂xk u dx ➀ − Ω u4 dx = 0. We will show that ➀ = 0. Ω αk ∂u ∂xk u dx = ∂Ω αk u2 ds = 0 − Ω αk u ∂u ∂xk dx, ⇒ 2 Ω αk ∂u ∂xk u dx = 0, ⇒ Ω n k=1 αk ∂u ∂xk u dx = 0. Thus, we have − Ω |∇u|2 dx − Ω u4 dx = 0, Ω |∇u|2 + Ω u4 dx = 0. Hence, |∇u|2 = 0 and u4 = 0. Thus, u ≡ 0. Note that Ω n k=1 αk ∂u ∂xk u dx = Ω α · ∇u u dx = ∂Ω α · nu2 ds = 0 − Ω α · ∇u u dx, and thus, Ω α · ∇u u dx = 0.
  • 227. Partial Differential Equations Igor Yanovsky, 2005 227 Problem (W’02, #9). Let D = {x ∈ R2 : x1 ≥ 0, x2 ≥ 0}, and assume that f is continuous on D and vanishes for |x| > R. a) Show that the boundary value problem u = f in D, u(x1, 0) = ∂u ∂x1 (0, x2) = 0 can have only one bounded solution. b) Find an explicit Green’s function for this boundary value problem. Proof. a) Let u1, u2 be two solutions of the boundary value problem. Define w = u1 − u2. Then w satisfies w = 0 in D, w(x1, 0) = ∂w ∂x1 (0, x2) = 0. Consider Green’s formula: D ∇u · ∇v dx = ∂D v ∂u ∂n ds − D v u dx. Setting w = u = v gives D |∇w|2 dx = ∂D w ∂w ∂n ds − D w w dx, D |∇w|2 dx = Rx1 w ∂w ∂n ds + Rx2 w ∂w ∂n ds + |x|>R w ∂w ∂n ds − D w w dx = Rx1 w(x1, 0) =0 ∂w ∂x2 ds + Rx2 w(0, x2) ∂w ∂x1 =0 ds + |x|>R w =0 ∂w ∂n ds − D w w =0 dx, D |∇w|2 dx = 0 ⇒ |∇w|2 = 0 ⇒ w = const. Since w(x1, 0) = 0 ⇒ w ≡ 0. Thus, u1 = u2. b) The similar problem is solved in the appropriate section (S’96, #3). Notice whenever you are on the boundary with variable x, |x − ξ(0) | = |x − ξ(1)||x − ξ(3)| |x − ξ(2)| . So, G(x, ξ) = 1 2π log |x − ξ| − log |x − ξ(1)||x − ξ(3)| |x − ξ(2)| is the Green’s function.
  • 228. Partial Differential Equations Igor Yanovsky, 2005 228 Problem (F’98, #4). In two dimensions x = (x, y), define the set Ωa as Ωa = Ω+ ∪ Ω− in which Ω+ = {|x − x0| ≤ a} ∩ {x ≥ 0} Ω− = {|x + x0| ≤ a} ∩ {x ≤ 0} = −Ω+ and x0 = (1, 0). Note that Ωa consists of two components when 0 < a < 1 and a single component when a > 1. Consider the Neumann problem ∇2 u = f, x ∈ Ωa ∂u/∂n = 0, x ∈ ∂Ωa in which Ω+ f(x) dx = 1 Ω− f(x) dx = −1 a) Show that this problem has a solution for 1 < a, but not for 0 < a < 1. (You do not need to construct the solution, only demonstrate solveability.) b) Show that maxΩa |∇u| → ∞ as a → 1 from above. (Hint: Denote L to be the line segment L = Ω+ ∩ Ω−, and note that its length |L| goes to 0 as a → 1.) Proof. a) We use the Green’s identity. For 1 < a, 0 = ∂Ωa ∂u ∂n ds = Ωa u dx = Ωa f(x) dx = Ω+ f(x) dx + Ω− f(x) dx = 1 − 1 = 0. Thus, the problem has a solution for 1 < a. For 0 < a < 1, Ω+ and Ω− are disjoint. Consider Ω+ : 0 = ∂Ω+ ∂u ∂n ds = Ω+ u dx = Ω+ f(x) dx = 1, 0 = ∂Ω− ∂u ∂n ds = Ω− u dx = Ω− f(x) dx = −1. We get contradictions. Thus, the solution does not exist for 0 < a < 1.
  • 229. Partial Differential Equations Igor Yanovsky, 2005 229 b) Using the Green’s identity, we have: (n+ is the unit normal to Ω+) Ω+ u dx = ∂Ω+ ∂u ∂n+ ds = L ∂u ∂n+ ds, Ω− u dx = ∂Ω− ∂u ∂n− ds = L ∂u ∂n− ds = − L ∂u ∂n+ ds. Ω+ u dx − Ω− u dx = 2 L ∂u ∂n+ ds, Ω+ f(x) dx − Ω− f(x) dx = 2 L ∂u ∂n+ ds. 2 = 2 L ∂u ∂n+ ds, 1 = L ∂u ∂n+ ds ≤ L ∂u ∂n+ ds ≤ L ∂u ∂n+ 2 + ∂u ∂τ 2 ≤ |L| max L |∇u| ≤ |L| max Ωa |∇u|. Thus, max Ωa |∇u| ≥ 1 |L| . As a → 1 (L → 0) ⇒ maxΩa |∇u| → ∞.
  • 230. Partial Differential Equations Igor Yanovsky, 2005 230 Problem (F’00, #1). Consider the Dirichlet problem in a bounded domain D ⊂ Rn with smooth boundary ∂D, u + a(x)u = f(x) in D, u = 0 on ∂D. a) Assuming that |a(x)| is small enough, prove the uniqueness of the classical solution. b) Prove the existence of the solution in the Sobolev space H1(D) assuming that f ∈ L2(D). Note: Use Poincare inequality. Proof. a) By Poincare Inequality, for any u ∈ C1 0 (D), we have ||u||2 2 ≤ C||∇u||2 2. Consider two solutions of the Dirichlet problem above. Let w = u1 − u2. Then, w satisfies w + a(x)w = 0 in D, w = 0 on ∂D. w w + a(x)w2 = 0, w w dx + a(x)w2 dx = 0, − |∇w|2 dx + a(x)w2 dx = 0, a(x)w2 dx = |∇w|2 dx ≥ 1 C w2 dx, (by Poincare inequality) a(x)w2 dx − 1 C w2 dx ≥ 0, |a(x)| w2 dx − 1 C w2 dx ≥ 0, |a(x)| − 1 C w2 dx ≥ 0. If |a(x)| < 1 C ⇒ w ≡ 0. b) Consider F(v, u) = − Ω (v u + a(x)vu) dx = − Ω vf(x) dx = F(v). F(v) is a bounded linear functional on v ∈ H1,2 (D), D = Ω. |F(v)| ≤ ||f||2||v||2 ≤ ||f||2C||v||H1,2(D) So by Riesz representation, there exists a solution u ∈ H1,2 0 (D) of − < u, v >= Ω v u + a(x)vu dx = Ω vf(x) dx = F(v) ∀v ∈ H1,2 0 (D).
  • 231. Partial Differential Equations Igor Yanovsky, 2005 231 Problem (S’91, #8). Define the operator Lu = uxx + uyy − 4(r2 + 1)u in which r2 = x2 + y2. a) Show that ϕ = er2 satisfies Lϕ = 0. b) Use this to show that the equation Lu = f in Ω ∂u ∂n = γ on ∂Ω has a solution only if Ω ϕf dx = ∂Ω ϕγ ds(x). Proof. a) Expressing Laplacian in polar coordinates, we obtain: Lu = 1 r (rur)r − 4(r2 + 1)u, Lϕ = 1 r (rϕr)r − 4(r2 + 1)ϕ = 1 r (2r2 er2 )r − 4(r2 + 1)er2 = 1 r (4rer2 + 2r2 · 2rer2 ) − 4r2 er2 − 4er2 = 0. b) We have ϕ = er2 = ex2+y2 = ex2 ey2 . From part (a), Lϕ = 0, ∂ϕ ∂n = ∇ϕ · n = (ϕx, ϕy) · n = (2xex2 ey2 , 2yex2 ey2 ) · n = 2er2 (x, y) · (−y, x) = 0. 51 Consider two equations: Lu = u − 4(r2 + 1)u, Lϕ = ϕ − 4(r2 + 1)ϕ. Multiply the first equation by ϕ and the second by u and subtract the two equations: ϕLu = ϕ u − 4(r2 + 1)uϕ, uLϕ = u ϕ − 4(r2 + 1)uϕ, ϕLu − uLϕ = ϕ u − u ϕ. Then, we start from the LHS of the equality we need to prove and end up with RHS: Ω ϕf dx = Ω ϕLu dx = Ω (ϕLu − uLϕ) dx = Ω (ϕ u − u ϕ) dx = Ω (ϕ ∂u ∂n − u ∂ϕ ∂n ) ds = Ω ϕ ∂u ∂n ds = Ω ϕγ ds. 51 The only shortcoming in the above proof is that we assume n = (−y, x), without giving an expla- nation why it is so.
  • 232. Partial Differential Equations Igor Yanovsky, 2005 232 17.6 Self-Adjoint Operators Consider an mth-order differential operator Lu = |α|≤m aα(x)Dα u. The integration by parts formula Ω uxk v dx = ∂Ω uvnk ds − Ω uvxk dx n = (n1, . . ., nn) ∈ Rn , with u or v vanishing near ∂Ω is: Ω uxk v dx = − Ω uvxk dx. We can repeat the integration by parts with any combination of derivatives Dα = (∂/∂x1)α1 · · ·(∂/∂xn)αn: Ω (Dα u)v dx = (−1)m Ω uDα v dx, (m = |α|). We have Ω (Lu) v dx = Ω |α|≤m aα(x)Dα u v dx = |α|≤m Ω aα(x) v Dα u dx = |α|≤m (−1)|α| Ω Dα (aα(x) v) u dx = Ω |α|≤m (−1)|α| Dα (aα(x) v) L∗(v) u dx = Ω L∗ (v) u dx, for all u ∈ Cm (Ω) and v ∈ C∞ 0 . The operator L∗ (v) = |α|≤m (−1)|α| Dα (aα(x) v) is called the adjoint of L. The operator is self-adjoint if L∗ = L. Also, L is self-adjoint if 52 Ω vL(u) dx = Ω uL(v) dx. 52 L = L∗ ⇔ (Lu|v) = (u|L∗ v) = (u|Lv).
  • 233. Partial Differential Equations Igor Yanovsky, 2005 233 Problem (F’92, #6). Consider the Laplace operator in the wedge 0 ≤ x ≤ y with boundary conditions ∂f ∂x = 0 on x = 0 ∂f ∂x − α ∂f ∂y = 0 on x = y. a) For which values of α is this operator self-adjoint? b) For such a value of α, suppose that f = e−r2/2 cos θ with these boundary conditions. Evaluate CR ∂ ∂r f ds in which CR is the circular arc of radius R connecting the boundaries x = 0 and x = y. Proof. a) We have Lu = u = 0 ∂u ∂x = 0 on x = 0 ∂u ∂x − α ∂u ∂y = 0 on x = y. The operator L is self-adjoint if: Ω (u Lv − v Lu) dx = 0. Ω (u Lv − v Lu) dx = Ω (u v − v u) dx = ∂Ω u ∂v ∂n − v ∂u ∂n ds = x=0 u ∂v ∂n − v ∂u ∂n ds + x=y u ∂v ∂n − v ∂u ∂n ds = x=0 u (∇v · n) − v (∇u · n) ds + x=y u (∇v · n) − v (∇u · n) ds = x=0 u (vx, vy) · (−1, 0) − v (ux, uy) · (−1, 0) ds + x=y u (vx, vy) · (1/ √ 2, −1/ √ 2) − v (ux, uy) · (1/ √ 2, −1/ √ 2) ds = x=0 u (0, vy) · (−1, 0) − v (0, uy) · (−1, 0) ds = 0 + x=y u (αvy, vy) · (1/ √ 2, −1/ √ 2) − v (αuy, uy) · (1/ √ 2, −1/ √ 2) ds = x=y uvy √ 2 (α − 1) − vuy √ 2 (α − 1) ds = need 0.
Thus, we need α = 1 so that L is self-adjoint.
b) We have α = 1. Using Green's identity and the results from part (a) (∂f/∂n = 0 on x = 0 and on x = y):
∫_Ω Δf dx = ∫_∂Ω ∂f/∂n ds = ∫_{C_R} ∂f/∂n ds + ∫_{x=0} ∂f/∂n ds [= 0] + ∫_{x=y} ∂f/∂n ds [= 0] = ∫_{C_R} ∂f/∂r ds.
Thus, using Δf = e^{−r²/2} cos θ from the problem statement,
∫_{C_R} ∂f/∂r ds = ∫_Ω Δf dx = ∫_{π/4}^{π/2} ∫_0^R e^{−r²/2} cos θ · r dr dθ = (1 − 1/√2) ∫_0^R e^{−r²/2} r dr = (1 − 1/√2)(1 − e^{−R²/2}).
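A quick numerical quadrature check of the last evaluation (treating e^{−r²/2} cos θ as the given value of Δf; assumes NumPy/SciPy are available, and the radius R = 1.7 is an arbitrary test value):

import numpy as np
from scipy import integrate

R = 1.7
val, _ = integrate.dblquad(
    lambda r, th: np.exp(-r**2/2) * np.cos(th) * r,   # integrand in polar coords, with Jacobian r
    np.pi/4, np.pi/2,    # theta limits (outer)
    0.0, R)              # r limits (inner)
closed_form = (1 - 1/np.sqrt(2)) * (1 - np.exp(-R**2/2))
print(val, closed_form)  # the two values should agree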
  • 235. Partial Differential Equations Igor Yanovsky, 2005 235 Problem (F’99, #1). Suppose that u = 0 in the weak sense in Rn and that there is a constant C such that {|x−y|<1} |u(y)| dy < C, ∀x ∈ Rn . Show that u is constant. Proof. Consider Green’s formula: Ω ∇u · ∇v dx = ∂Ω v ∂u ∂n ds − Ω v u dx For v = 1, we have ∂Ω ∂u ∂n ds = Ω u dx. Let Br(x0) be a ball in Rn . We have 0 = Br(x0) u dx = ∂Br(x0) ∂u ∂n ds = rn−1 |x|=1 ∂u ∂r (x0 + rx) ds = rn−1 ωn ∂ ∂r 1 ωn |x|=1 u(x0 + rx) ds. Thus, 1 ωn |x|=1 u(x0 + rx) ds is independent of r. Hence, it is constant. By continuity, as r → 0, we obtain the Mean Value property: u(x0) = 1 ωn |x|=1 u(x0 + rx) ds. If |x−y|<1 |u(y)| dy < C ∀x ∈ Rn, we have |u(x)| < C in Rn. Since u is harmonic and bounded in Rn, u is constant by Liouville’s theorem. 53 53 Liouville’s Theorem: A bounded harmonic function defined on Rn is a constant.
  • 236. Partial Differential Equations Igor Yanovsky, 2005 236 Problem (S’01, #1). For bodies (bounded regions B in R3 ) which are not perfectly conducting one considers the boundary value problem 0 = ∇ · γ(x)∇u = 3 j=1 ∂ ∂xj γ(x) ∂u ∂xj u = f on ∂B. The function γ(x) is the “local conductivity” of B and u is the voltage. We define operator Λ(f) mapping the boundary data f to the current density at the boundary by Λ(f) = γ(x) ∂u ∂n , and ∂/∂n is the inward normal derivative (this formula defines the current density). a) Show that Λ is a symmetric operator, i.e. prove ∂B gΛ(f) dS = ∂B fΛ(g) dS. b) Use the positivity of γ(x) > 0 to show that Λ is negative as an operator, i.e., prove ∂B fΛ(f) dS ≤ 0. Proof. a) Let ∇ · γ(x)∇u = 0 on Ω, u = f on ∂Ω. ∇ · γ(x)∇v = 0 on Ω, v = g on ∂Ω. Λ(f) = γ(x) ∂u ∂n , Λ(g) = γ(x) ∂v ∂n . Since ∂/∂n is inward normal derivative, Green’s formula is: − ∂Ω v =g γ(x) ∂u ∂n dS − Ω ∇v · γ(x)∇u dx = Ω v∇ · γ(x)∇u dx. We have ∂Ω gΛ(f) dS = ∂Ω gγ(x) ∂u ∂n dS = − Ω ∇v · γ(x)∇u dx − Ω v ∇ · γ(x)∇u =0 dx = ∂Ω uγ(x) ∂v ∂n dS + Ω u ∇ · γ(x)∇v =0 dx = ∂Ω fγ(x) ∂v ∂n dS = ∂Ω fΛ(g) dS. b) We have γ(x) > 0. ∂Ω fΛ(f) dS = ∂Ω uγ(x) ∂u ∂n dS = − Ω u ∇ · γ(x)∇u =0 dx − Ω γ(x)∇u · ∇u dx = − Ω γ(x)|∇u|2 ≥0 ≤ 0.
  • 237. Partial Differential Equations Igor Yanovsky, 2005 237 Problem (S’01, #4). The Poincare Inequality states that for any bounded domain Ω in Rn there is a constant C such that Ω |u|2 dx ≤ C Ω |∇u|2 dx for all smooth functions u which vanish on the boundary of Ω. a) Find a formula for the “best” (smallest) constant for the domain Ω in terms of the eigenvalues of the Laplacian on Ω, and b) give the best constant for the rectangular domain in R2 Ω = {(x, y) : 0 ≤ x ≤ a, 0 ≤ y ≤ b}. Proof. a) Consider Green’s formula: Ω ∇u · ∇v dx = ∂Ω v ∂u ∂n ds − Ω v u dx. Setting u = v and with u vanishing on ∂Ω, Green’s formula becomes: Ω |∇u|2 dx = − Ω u u dx. Expanding u in the eigenfunctions of the Laplacian, u(x) = anφn(x), the formula above gives Ω |∇u|2 dx = − Ω ∞ n=1 anφn(x) ∞ m=1 −λmamφm(x) dx = ∞ m,n=1 λmanam Ω φnφm dx = ∞ n=1 λn|an|2 . Also, Ω |u|2 dx = Ω ∞ n=1 anφn(x) ∞ m=1 amφm(x) = ∞ n=1 |an|2 . Comparing and , and considering that λn increases as n → ∞, we obtain λ1 Ω |u|2 dx = λ1 ∞ n=1 |an|2 ≤ ∞ n=1 λn|an|2 = Ω |∇u|2 dx. Ω |u|2 dx ≤ 1 λ1 Ω |∇u|2 dx, with C = 1/λ1. b) For the rectangular domain Ω = {(x, y) : 0 ≤ x ≤ a, 0 ≤ y ≤ b} ⊂ R2, the eigenvalues of the Laplacian are λmn = π2 m2 a2 + n2 b2 , m, n = 1, 2, . . .. λ1 = λ11 = π2 1 a2 + 1 b2 , ⇒ C = 1 λ11 = 1 π2 1 ( 1 a2 + 1 b2 ) .
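The best constant C = 1/λ₁ is attained by the first Dirichlet eigenfunction; for the rectangle this can be checked symbolically by comparing ∫u²/∫|∇u|² for u = sin(πx/a) sin(πy/b) with 1/λ₁₁. A sketch, assuming SymPy is available:

import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
u = sp.sin(sp.pi*x/a) * sp.sin(sp.pi*y/b)         # first Dirichlet eigenfunction on the rectangle
num = sp.integrate(u**2, (x, 0, a), (y, 0, b))
den = sp.integrate(sp.diff(u, x)**2 + sp.diff(u, y)**2, (x, 0, a), (y, 0, b))
best_C = 1/(sp.pi**2*(1/a**2 + 1/b**2))
print(sp.simplify(num/den - best_C))              # expected: 0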
  • 239. Partial Differential Equations Igor Yanovsky, 2005 239 Problem (S’01, #6). a) Let B be a bounded region in R3 with smooth boundary ∂B. The “conductor” potential for the body B is the solution of Laplace’s equation outside B V = 0 in R3 /B subject to the boundary conditions, V = 1 on ∂B and V (x) tends to zero as |x| → ∞. Assuming that the conductor potential exists, show that it is unique. b) The “capacity” C(B) of B is defined to be the limit of |x|V (x) as |x| → ∞. Show that C(B) = − 1 4π ∂B ∂V ∂n dS, where ∂B is the boundary of B and n is the outer unit normal to it (i.e. the normal pointing “toward infinity”). c) Suppose that B ⊂ B. Show that C(B ) ≤ C(B). Proof. a) Let V1, V2 be two solutions of the boundary value problem. Define W = V1 − V2. Then W satisfies ⎧ ⎪⎨ ⎪⎩ W = 0 in R3 /B W = 0 on ∂B W → 0 as |x| → ∞. Consider Green’s formula: B ∇u · ∇v dx = ∂B v ∂u ∂n ds − B v u dx. Setting W = u = v gives B |∇W|2 dx = ∂B W =0 ∂W ∂n ds − B W W =0 dx = 0. Thus, |∇W|2 = 0 ⇒ W = const. Since W = 0 on ∂B, W ≡ 0, and V1 = V2. b & c) For (b)&(c), see the solutions from Ralston’s homework (a few pages down).
  • 240. Partial Differential Equations Igor Yanovsky, 2005 240 Problem (W’03, #2). Let L be the second order differential operator L = − a(x) in which x = (x1, x2, x3) is in the three-dimensional cube C = {0 < xi < 1, i = 1, 2, 3}. Suppose that a > 0 in C. Consider the eigenvalue problem Lu = λu for x ∈ C u = 0 for x ∈ ∂C. a) Show that all eigenvalues are negative. b) If u and v are eigenfunctions for distinct eigenvalues λ and μ, show that u and v are orthogonal in the appropriate product. c) If a(x) = a1(x1) + a2(x2) + a3(x3) find an expression for the eigenvalues and eigen- vectors of L in terms of the eigenvalues and eigenvectors of a set of one-dimensional problems. Proof. a) We have u − a(x)u = λu. Multiply the equation by u and integrate: u u − a(x)u2 = λu2 , Ω u u dx − Ω a(x)u2 dx = λ Ω u2 dx, ∂Ω u ∂u ∂n ds =0 − Ω |∇u|2 dx − Ω a(x)u2 dx = λ Ω u2 dx, λ = − Ω(|∇u|2 + a(x)u2) dx Ω u2 dx < 0. b) Let λ, μ, be the eigenvalues and u, v be the corresponding eigenfunctions. We have u − a(x)u = λu. (17.5) v − a(x)v = μv. (17.6) Multiply (17.5) by v and (17.6) by u and subtract equations from each other v u − a(x)uv = λuv, u v − a(x)uv = μuv. v u − u v = (λ − μ)uv. Integrating over Ω gives Ω v u − u v dx = (λ − μ) Ω uv dx, ∂Ω v ∂u ∂n − u ∂v ∂n =0 dx = (λ − μ) Ω uv dx. Since λ = μ, u and v are orthogonal on Ω.
  • 241. Partial Differential Equations Igor Yanovsky, 2005 241 c) The three one-dimensional eigenvalue problems are: u1x1x1 (x1) − a(x1)u1(x1) = λ1u1(x1), u2x2x2 (x2) − a(x2)u2(x2) = λ2u2(x2), u3x3x3 (x3) − a(x3)u3(x3) = λ3u3(x3). We need to derive how u1, u2, u3 and λ1, λ2, λ3 are related to u and λ.
  • 242. Partial Differential Equations Igor Yanovsky, 2005 242 17.7 Spherical Means Problem (S’95, #4). Consider the biharmonic operator in R3 2 u ≡ ∂2 ∂x2 + ∂2 ∂y2 + ∂2 ∂z2 2 u. a) Show that 2 is self-adjoint on |x| < 1 with the following boundary conditions on |x| = 1: u = 0, u = 0. Proof. a) We have Lu = 2 u = 0 u = 0 on |x| = 1 u = 0 on |x| = 1. The operator L is self-adjoint if: Ω (u Lv − v Lu) dx = 0. Ω (u Lv − v Lu) dx = Ω (u 2 v − v 2 u) dx = ∂Ω u ∂ v ∂n ds =0 − Ω ∇u · ∇( v) dx − ∂Ω v ∂ u ∂n ds =0 + Ω ∇v · ∇( u) dx = − ∂Ω v ∂u ∂n ds =0 + Ω u v dx + ∂Ω u ∂v ∂n ds =0 − Ω v u dx = 0.
  • 243. Partial Differential Equations Igor Yanovsky, 2005 243 b) Denote |x| = r and define the averages S(r) = (4πr2 )−1 |x|=r u(x) ds, V (r) = 4 3 πr3 −1 |x|≤r u(x) dx. Show that d dr S(r) = r 3 V (r). Hint: Rewrite S(r) as an integral over the unit sphere before differentiating; i.e., S(r) = (4π)−1 |x |=1 u(rx ) dx . c) Use the result of (b) to show that if u is biharmonic, i.e. 2 u = 0, then S(r) = u(0) + r2 6 u(0). Hint: Use the mean value theorem for u. b) Let x = x/|x|. We have 54 S(r) = 1 4πr2 |x|=r u(x) dSr = 1 4πr2 |x |=1 u(rx ) r2 dS1 = 1 4π |x |=1 u(rx ) dS1. dS dr = 1 4π |x |=1 ∂u ∂r (rx ) dS1 = 1 4π |x |=1 ∂u ∂n (rx ) dS1 = 1 4πr2 |x|=r ∂u ∂n (x) dSr = 1 4πr2 |x|≤r u dx. where we have used Green’s identity in the last equality. Also r 3 V (r) = 1 4πr2 |x|≤r u dx. c) Since u is biharmonic (i.e. u is harmonic), u has a mean value property. We have d dr S(r) = r 3 V (r) = r 3 4 3 πr3 −1 |x|≤r u(x) dx = r 3 u(0), S(r) = r2 6 u(0) + S(0) = u(0) + r2 6 u(0). 54 Change of variables: Surface integrals: x = rx in R3 : |x|=r u(x) dS = |x |=1 u(rx ) r2 dS1. Volume integrals: ξ = rξ in Rn : |ξ |<r h(x + ξ ) dξ = |ξ|<1 h(x + rξ) rn dξ.
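The identity S(r) = u(0) + (r²/6) Δu(0) can be checked numerically for a particular biharmonic function; in the sketch below (assuming NumPy/SciPy are available) the choice u = |x|² + x²y + 5, which satisfies Δ²u = 0, and the radius r = 0.8 are arbitrary test values.

import numpy as np
from scipy import integrate

def u(x, y, z):                        # a biharmonic test function: Δ²u = 0
    return x**2 + y**2 + z**2 + x**2*y + 5.0

u0, lap_u0 = 5.0, 6.0                  # u(0) and Δu(0) for this function
r = 0.8

def integrand(phi, theta):             # u on the sphere |x| = r, times the area element sin(phi)
    x = r*np.sin(phi)*np.cos(theta)
    y = r*np.sin(phi)*np.sin(theta)
    z = r*np.cos(phi)
    return u(x, y, z) * np.sin(phi)

S, _ = integrate.dblquad(integrand, 0, 2*np.pi, 0, np.pi)
S /= 4*np.pi
print(S, u0 + r**2/6 * lap_u0)         # both should equal 5.64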
  • 245. Partial Differential Equations Igor Yanovsky, 2005 245 Problem (S’00, #7). Suppose that u = u(x) for x ∈ R3 is biharmonic; i.e. that 2u ≡ ( u) = 0. Show that (4πr2 )−1 |x|=r u(x) ds(x) = u(0) + (r2 /6) u(0) through the following steps: a) Show that for any smooth f, d dr |x|≤r f(x) dx = |x|=r f(x) ds(x). b) Show that for any smooth f, d dr (4πr2 )−1 |x|=r f(x) ds(x) = (4πr2 )−1 |x|=r n · ∇f(x, y) ds in which n is the outward normal to the circle |x| = r. c) Use step (b) to show that d dr (4πr2 )−1 |x|=r f(x) ds(x) = (4πr2 )−1 |x|≤r f(x) dx. d) Combine steps (a) and (c) to obtain the final result. Proof. a) We can express the integral in Spherical Coordinates: 55 |x|≤R f(x) dx = R 0 2π 0 π 0 f(φ, θ, r) r2 sinφ dφ dθ dr. d dr |x|≤R f(x) dx = d dr R 0 2π 0 π 0 f(φ, θ, r) r2 sinφ dφ dθ dr = ??? = 2π 0 π 0 f(φ, θ, r) R2 sinφ dφ dθ = |x|=R f(x) dS. 55 Differential Volume in spherical coordinates: d3 ω = ω2 sin φ dφ dθ dω. Differential Surface Area on sphere: dS = ω2 sin φ dφ dθ.
  • 246. Partial Differential Equations Igor Yanovsky, 2005 246 b&c) We have d dr 1 4πr2 |x|=r f(x) dS = d dr 1 4πr2 |x |=1 f(rx ) r2 dS1 = 1 4π d dr |x |=1 f(rx ) dS1 = 1 4π |x |=1 ∂f ∂r (rx ) dS1 = 1 4π |x |=1 ∂f ∂n (rx ) dS1 = 1 4πr2 |x|=r ∂f ∂n (x) dS = 1 4πr2 |x|=r ∇f · n dS = 1 4πr2 |x|≤r f dx. Green’s formula was used in the last equality. Alternatively, d dr 1 4πr2 |x|=r f(x) dS = d dr 1 4πr2 2π 0 π 0 f(φ, θ, r) r2 sinφ dφ dθ = d dr 1 4π 2π 0 π 0 f(φ, θ, r) sinφ dφ dθ = 1 4π 2π 0 π 0 ∂f ∂r (φ, θ, r) sin φ dφ dθ = 1 4π 2π 0 π 0 ∇f · n sin φ dφ dθ = 1 4πr2 2π 0 π 0 ∇f · n r2 sinφ dφ dθ = 1 4πr2 |x|=r ∇f · n dS = 1 4πr2 |x|=r f dx. d) Since f is biharmonic (i.e. f is harmonic), f has a mean value property. From (c), we have 56 d dr 1 4πr2 |x|=r f(x) ds(x) = 1 4πr2 |x|≤r f(x) dx = r 3 1 4 3πr3 |x|≤r f(x) dx = r 3 f(0). 1 4πr2 |x|=r f(x) ds(x) = r2 6 f(0) + f(0). 56 Note that part (a) was not used. We use exactly the same derivation as we did in S’95 #4.
  • 247. Partial Differential Equations Igor Yanovsky, 2005 247 Problem (F’96, #4). Consider smooth solutions of u = k2u in dimension d = 2 with k > 0. a) Show that u satisfies the following ‘mean value property’: Mx (r) + 1 r Mx(r) − k2 Mx(r) = 0, in which Mx(r) is defined by Mx(r) = 1 2π 2π 0 u(x + r cos θ, y + r sinθ) dθ and the derivatives (denoted by ) are in r with x fixed. b) For k = 1, this equation is the modified Bessel equation (of order 0) f + 1 r f − f = 0, for which one solution (denoted as I0) is I0(r) = 1 2π 2π 0 er sin θ dθ. Find an expression for Mx(r) in terms of I0. Proof. a) Laplacian in polar coordinates written as: u = urr + 1 r ur + 1 r2 uθθ. Thus, the equation may be written as urr + 1 r ur + 1 r2 uθθ = k2 u. Mx(r) = 1 2π 2π 0 u dθ, Mx(r) = 1 2π 2π 0 ur dθ, Mx (r) = 1 2π 2π 0 urr dθ. Mx (r) + 1 r Mx(r) − k2 Mx(r) = 1 2π 2π 0 urr + 1 r ur − k2 u dθ = − 1 2πr2 2π 0 uθθ dθ = − 1 2πr2 uθ 2π 0 = 0. b) Note that w = er sin θ satisfies w = w, i.e. w = wrr + 1 r wr + 1 r2 wθθ = sin2 θ er sin θ + 1 r sinθ er sin θ + 1 r2 (−r sinθ er sin θ + r2 cos2 θ er sin θ ) = er sin θ = w. Thus, Mx(r) = ey 1 2π 2π 0 er sin θ dθ = ey I0.
57 Check with someone about the last result.
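Regarding the footnote above: the result M_x(r) = e^y I_0(r) can at least be confirmed numerically for the sample solution u = e^y of Δu = u; a sketch, assuming NumPy/SciPy are available (the point y = 0.3 and radius r = 1.4 are arbitrary test values):

import numpy as np
from scipy import integrate, special

y, r = 0.3, 1.4
M, _ = integrate.quad(lambda th: np.exp(y + r*np.sin(th)), 0, 2*np.pi)
M /= 2*np.pi                             # the circular average M_x(r) for u = e^y
print(M, np.exp(y)*special.i0(r))        # the two values should agree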
  • 249. Partial Differential Equations Igor Yanovsky, 2005 249 17.8 Harmonic Extensions, Subharmonic Functions Problem (S’94, #8). Suppose that Ω is a bounded region in R3 and that u = 1 on ∂Ω. If u = 0 in the exterior region R3 /Ω and u(x) → 0 as |x| → ∞, prove the following: a) u > 0 in R3/Ω; b) if ρ(x) is a smooth function such that ρ(x) = 1 for |x| > R and ρ(x) = 0 near ∂Ω, then for |x| > R, u(x) = − 1 4π R3/Ω ( (ρu))(y) |x − y| dy. c) lim|x|→∞ |x|u(x) exists and is non-negative. Proof. a) Let Br(0) denote the closed ball {x : |x| ≥ r}. Given ε > 0, we can find r large enough that Ω ∈ BR1 (0) and maxx∈∂BR1 (0) |u(x)| < ε, since |u(x)| → 0 as |x| → ∞. Since u is harmonic in BR1 − Ω, it takes its maximum and minimum on the boundary. Assume min x∈∂BR1 (0) u(x) = −a < 0 (where |a| < ε). We can find an R2 such that maxx∈BR2 (0) |u(x)| < a 2 ; hence u takes a minimum inside BR2 (0) − Ω, which is impossible; hence u ≥ 0. Now let V = {x : u(x) = 0} and let α = minx∈V |x|. Since u cannot take a minimum inside BR(0) (where R > α), it follows that u ≡ C and C = 0, but this contradicts u = 1 on ∂Ω. Hence u > 0 in R3 − Ω. b) For n = 3, K(|x − y|) = 1 (2 − n)ωn |x − y|2−n = − 1 4π 1 |x − y| . Since ρ(x) = 1 for |x| > R, then for x /∈ BR, we have (ρu) = u = 0. Thus, − 1 4π R3/Ω ( (ρu))(y) |x − y| dy = − 1 4π BR/Ω ( (ρu))(y) |x − y| dy = 1 4π BR/Ω ∇y 1 |x − y| · ∇y(ρu) dy − 1 4π ∂(BR/Ω) ∂ ∂n ρu 1 |x − y| dSy = − 1 4π BR/Ω 1 |x − y| ρu dy + 1 4π ∂(BR/Ω) ∂ ∂n 1 |x − y| ρu dSy − 1 4π ∂(BR/Ω) ∂ ∂n ρu 1 |x − y| dSy = ??? = u(x) − 1 4πR2 ∂B u dSy →0, as R→∞ − 1 4πR ∂B ∂u ∂n dSy →0, as R→∞ = u(x). c) See the next problem.
  • 250. Partial Differential Equations Igor Yanovsky, 2005 250 Ralston Hw. a) Suppose that u is a smooth function on R3 and u = 0 for |x| > R. If limx→∞ u(x) = 0, show that you can write u as a convolution of u with the − 1 4π|x| and prove that limx→∞ |x|u(x) = 0 exists. b) The “conductor potential” for Ω ⊂ R3 is the solution to the Dirichlet problem v = 0. The limit in part (a) is called the “capacity” of Ω. Show that if Ω1 ⊂ Ω2, then the capacity of Ω2 is greater or equal the capacity of Ω1. Proof. a) If we define v(x) = − 1 4π R3 u(y) |x − y| dy, then (u − v) = 0 in all R3 , and, since v(x) → 0 as |x| → ∞, we have lim|x|→∞(u(x)− v(x)) = 0. Thus, u − v must be bounded, and Liouville’s theorem implies that it is identically zero. Since we now have |x|u(x) = − 1 4π R3 |x| u(y) |x − y| dy, and |x|/|x − y| converges uniformly to 1 on {|y| ≤ R}, it follows that lim |x|→∞ |x|u(x) = − 1 4π R3 u(y) dy. b) Note that part (a) implies that the limit lim|x|→∞ |x|v(x) exists, because we can apply (a) to u(x) = φ(x)v(x), where φ is smooth and vanishes on Ω, but φ(x) = 1 for |x| > R. Let v1 be the conductor potential for Ω1 and v2 for Ω2. Since vi → ∞ as |x| → ∞ and vi = 1 on ∂Ωi, the max principle says that 1 > vi(x) > 0 for x ∈ R3 − Ωi. Consider v2 − v1. Since Ω1 ⊂ Ω2, this is defined in R3 − Ω2, positive on ∂Ω2, and has limit 0 as |x| → ∞. Thus, it must be positive in R3 − Ω2. Thus, lim|x|→∞ |x|(v2 − v1) ≥ 0. Problem (F’95, #4). 58 Let Ω be a simply connected open domain in R2 and u = u(x, y) be subharmonic there, i.e. u ≥ 0 in Ω. Prove that if DR = {(x, y) : (x − x0)2 + (y − y0)2 ≤ R2 } ⊂ Ω then u(x0, y0) ≤ 1 2π 2π 0 u(x0 + R cos θ, y0 + R sinθ) dθ. Proof. Let M(x0, R) = 1 2π 2π 0 u(x0 + R cos θ, y0 + R sin θ) dθ, w(r, θ) = u(x0 + R cos θ, y0 + R sin θ). Differentiate M(x0, R) with respect to R: d dr M(x0, R) = 1 2πR 2π 0 wr(R, θ)Rdθ, 58 See McOwen, Sec.4.3, p.131, #1.
59 See ChiuYen's solutions and Sung Ha's solutions (in two places). Nick's solutions, as started above, have a very simplistic approach.
  • 252. Partial Differential Equations Igor Yanovsky, 2005 252 Ralston Hw (Maximum Principle). Suppose that u ∈ C(Ω) satisfies the mean value property in the connected open set Ω. a) Show that u satisfies the maximum principle in Ω, i.e. either u is constant or u(x) < supΩ u for all x ∈ Ω. b) Show that, if v is a continuous function on a closed ball Br(ξ) ⊂ Ω and has the mean value property in Br(ξ), then u = v on ∂Br(ξ) implies u = v in Br(ξ). Does this imply that u is harmonic in Ω? Proof. a) If u(x) is not less than supΩ u for all x ∈ Ω, then the set K = {x ∈ Ω : u(x) = sup Ω u} is nonempty. This set is closed because u is continuous. We will show it is also open. This implies that K = Ω because Ω is connected. Thus u is constant on Ω. Let x0 ∈ K. Since Ω is open, ∃δ > 0, s.t. Bδ(x0) = {x ∈ Rn : |x − x0| ≤ δ} ⊂ Ω. Let supΩ u = M. By the mean value property, for 0 ≤ r ≤ δ M = u(x0) = 1 A(Sn−1) |ξ|=1 u(x0 + rξ)dSξ, and 0 = 1 A(Sn−1) |ξ|=1 (M − u(x0 + rξ))dSξ. Sinse M −u(x0 +rξ) is a continuous nonnegative function on ξ, this implies M −u(x0 + rξ) = 0 for all ξ ∈ Sn−1. Thus u = 0 on Bδ(x0). b) Since u − v has the mean value property in the open interior of Br(ξ), by part a) it satisfies the maximum principle. Since it is continuous on Br(ξ), its supremum over the interior of Br(ξ) is its maximum on Br(ξ), and this maximum is assumed at a point x0 in Br(ξ). If x0 in the interior of Br(ξ), then u −v is constant ant the constant must be zero, since this is the value of u −v on the boundary. If x0 is on the boundary, then u − v must be nonpositive in the interior of Br(ξ). Applying the same argument to v − u, one finds that it is either identically zero or nonpositive in the interior of Br(ξ). Thus, u − v ≡ 0 on Br(ξ). Yes, it does follow that u harmonic in Ω. Take v in the preceding to be the harmonic function in the interior of Br(ξ) which agrees with u on the boundary. Since u = v on Br(ξ), u is harmonic in the interior of Br(ξ). Since Ω is open we can do this for every ξ ∈ Ω. Thus u is harmonic in Ω.
  • 253. Partial Differential Equations Igor Yanovsky, 2005 253 Ralston Hw. Assume Ω is a bounded open set in Rn and the Green’s function, G(x, y), for Ω exists. Use the strong maximum principle, i.e. either u(x) < supΩ u for all x ∈ Ω, or u is constant, to prove that G(x, y) < 0 for x, y ∈ Ω, x = y. Proof. G(x, y) = K(x, y) + ω(x, y). For each x ∈ Ω, f(y) = ω(x, y) is continuous on Ω, thus, bounded. So |ω(x, y)| ≤ Mx for all y ∈ Ω. K(x − y) → −∞ as y → x. Thus, given Mx, there is δ > 0, such that K(x − y) < −Mx when |x − y| = r and 0 < r ≤ δ. So for 0 < r ≤ δ the Green’s function with x fixed satisfies, G(x, y) is harmonic on Ω − Br(x), and G(x, y) ≤ 0 on the boundary of Ω − Br(x). Since we can choose r as small as we wish, we get G(x, y) < 0 for y ∈ Ω − {x}. Problem (W’03, #6). Assume that u is a harmonic function in the half ball D = {(x, y, z) : x2 +y2 +z2 < 1, z ≥ 0} which is continuously differentiable, and satis- fies u(x, y, 0) = 0. Show that u can be extended to be a harmonic function in the whole ball. If you propose and explicit extension for u, explain why the extension is harmonic. Proof. We can extend u to all of n-space by defining u(x , xn) = −u(x , −xn) for xn < 0. Define ω(x) = 1 aωn |y|=1 a2 − |x|2 |x − y|n v(y)dSy ω(x) is continuous on a closed ball B, harmonic in B. Poisson kernel is symmetric in y at xn = 0. ⇒ ω(x) = 0, (xn = 0). ω is harmonic for x ∈ B, xn ≥ 0,with the same boundary values ω = u. ω is harmonic ⇒ u can be extended to a harmonic function on the interior of B. Ralston Hw. Show that a bounded solution to the Dirichlet problem in a half space is unique. (Note that one can show that a bounded solution exists for any given bounded continuous Dirichlet data by using the Poisson kernel for the half space.) Proof. We have to show that a function, u, which is harmonic in the half-space, con- tinuous, equal to 0 when xn = 0, and bounded, must be identically 0. We can extend u to all of n-space by defining u(x , xn) = −u(x , −xn) for xn < 0. This extends u to a bounded harmonic function on all of n-space (by the problem above). Liouville’s theorem says u must be constant, and since u(x , 0) = 0, the constant is 0. So the original u must be identically 0. Ralston Hw. Suppose u is harmonic on the ball minus the origin, B0 = {x ∈ R3 : 0 < |x| < a}. Show that u(x) can be extended to a harmonic function on the ball B = {|x| < a} iff lim|x|→0 |x|u(x) = 0. Proof. The condition lim|x|→0 |x|u(x) = 0 is necessary, because harmonic functions are continuous. To prove the converse, let v be the function which is continuous on {|x| ≤ a/2}, harmonic on {|x| < a/2}, and equals u on {|x| = a/2}. One can construct v using the Poisson kernel. Since v is continuous, it is bounded, and we can assume that |v| ≤ M. Since lim|x|→0 |x|u(x) = 0, given > 0, we can choose δ, 0 < δ < a/2 such that − < |x|u(x) < when |x| < δ. Note that u, v − 2 /|x|, and v + 2 /|x| are harmonic
  • 254. Partial Differential Equations Igor Yanovsky, 2005 254 on {0 < |x| < a/2}. Choose b, 0 < b < min( , a/2), so that /b > M. Then on both {|x| = a/2} and {|x| = b} we have v − 2 /|x| < u(x) < v + 2 /|x|. Thus, by max principle these inequalities hold on {b ≤ |x| ≤ a/2}. Pick x with 0 < |x| ≤ a/2. u(x) = v(x). v is the extension of u on {|x| < a/2}, and u is extended on {|x| < a}.
  • 255. Partial Differential Equations Igor Yanovsky, 2005 255 18 Problems: Heat Equation McOwen 5.2 #7(a). Consider ⎧ ⎪⎨ ⎪⎩ ut = uxx for x > 0, t > 0 u(x, 0) = g(x) for x > 0 u(0, t) = 0 for t > 0, where g is continuous and bounded for x ≥ 0 and g(0) = 0. Find a formula for the solution u(x, t). Proof. Extend g to be an odd function on all of R: ˜g(x) = g(x), x ≥ 0 −g(−x), x < 0. Then, we need to solve ˜ut = ˜uxx for x ∈ R, t > 0 ˜u(x, 0) = ˜g(x) for x ∈ R. The solution is given by: 60 ˜u(x, t) = R K(x, y, t)g(y) dy = 1 √ 4πt ∞ −∞ e− (x−y)2 4t ˜g(y) dy = 1 √ 4πt ∞ 0 e−(x−y)2 4t ˜g(y) dy + 0 −∞ e−(x−y)2 4t ˜g(y) dy = 1 √ 4πt ∞ 0 e− (x−y)2 4t g(y) dy − ∞ 0 e− (x+y)2 4t g(y) dy = 1 √ 4πt ∞ 0 e −x2+2xy−y2 4t − e −x2−2xy−y2 4t g(y) dy = 1 √ 4πt ∞ 0 e− (x2+y2) 4t e xy 2t − e−xy 2t g(y) dy. u(x, t) = 1 √ 4πt ∞ 0 e− (x2+y2) 4t 2 sinh xy 2t g(y) dy. Since sinh(0) = 0, we can verify that u(0, t) = 0. 60 In calculations, we use: 0 −∞ ey dy = ∞ 0 e−y dy, and g(−y) = −g(y).
  • 256. Partial Differential Equations Igor Yanovsky, 2005 256 McOwen 5.2 #7(b). Consider ⎧ ⎪⎨ ⎪⎩ ut = uxx for x > 0, t > 0 u(x, 0) = g(x) for x > 0 ux(0, t) = 0 for t > 0, where g is continuous and bounded for x ≥ 0. Find a formula for the solution u(x, t). Proof. Extend g to be an even function 61 on all of R: ˜g(x) = g(x), x ≥ 0 g(−x), x < 0. Then, we need to solve ˜ut = ˜uxx for x ∈ R, t > 0 ˜u(x, 0) = ˜g(x) for x ∈ R. The solution is given by: 62 ˜u(x, t) = R K(x, y, t)g(y) dy = 1 √ 4πt ∞ −∞ e− (x−y)2 4t ˜g(y) dy = 1 √ 4πt ∞ 0 e−(x−y)2 4t ˜g(y) dy + 0 −∞ e−(x−y)2 4t ˜g(y) dy = 1 √ 4πt ∞ 0 e− (x−y)2 4t g(y) dy + ∞ 0 e− (x+y)2 4t g(y) dy = 1 √ 4πt ∞ 0 e −x2+2xy−y2 4t + e −x2−2xy−y2 4t g(y) dy = 1 √ 4πt ∞ 0 e− (x2+y2) 4t e xy 2t + e−xy 2t g(y) dy. u(x, t) = 1 √ 4πt ∞ 0 e− (x2+y2) 4t 2 cosh xy 2t g(y) dy. To check that the boundary condition holds, we perform the calculation: ux(x, t) = 1 √ 4πt ∞ 0 d dx e− (x2 +y2) 4t 2 cosh xy 2t g(y) dy = 1 √ 4πt ∞ 0 − 2x 4t e−(x2+y2) 4t 2 cosh xy 2t + e−(x2+y2) 4t 2 y 2t sinh xy 2t g(y) dy, ux(0, t) = 1 √ 4πt ∞ 0 0 · e−y2 4t 2 cosh0 + e−y2 4t 2 y 2t sinh 0 g(y) dy = 0. 61 Even extensions are always continuous. Not true for odd extensions. g odd is continuous if g(0) = 0. 62 In calculations, we use: 0 −∞ ey dy = ∞ 0 e−y dy, and g(−y) = g(y).
  • 257. Partial Differential Equations Igor Yanovsky, 2005 257 Problem (F’90, #5). The initial value problem for the heat equation on the whole real line is ft = fxx t ≥ 0 f(t = 0, x) = f0(x) with f0 smooth and bounded. a) Write down the Green’s function G(x, y, t) for this initial value problem. b) Write the solution f(x, t) as an integral involving G and f0. c) Show that the maximum values of |f(x, t)| and |fx(x, t)| are non-increasing as t increases, i.e. sup x |f(x, t)| ≤ sup x |f0(x)| sup x |fx(x, t)| ≤ sup x |f0x(x)|. When are these inequalities actually equalities? Proof. a) The fundamental solution K(x, y, t) = 1 √ 4πt e− |x−y|2 4t . The Green’s function is: 63 G(x, t; y, s) = 1 (2π)n π k(t − s) n 2 e − (x−y)2 4k(t−s) . b) The solution to the one-dimensional heat equation is u(x, t) = R K(x, y, t) f0(y) dy = 1 √ 4πt R e− |x−y|2 4t f0(y) dy. c) We have sup x |u(x, t)| = 1 √ 4πt R e− (x−y)2 4t f0(y) dy ≤ 1 √ 4πt R e− (x−y)2 4t f0(y) dy = 1 √ 4πt R e−y2 4t f0(x − y) dy ≤ sup x |f0(x)| 1 √ 4πt R e−y2 4t dy z = y √ 4t , dz = dy √ 4t ≤ sup x |f0(x)| 1 √ 4πt R e−z2 √ 4t dz = sup x |f0(x)| 1 √ π R e−z2 dz = √ π = sup x |f0(x)|. 63 The Green’s function for the heat equation on an infinite domain; derived in R. Haberman using the Fourier transform.
  • 258. Partial Differential Equations Igor Yanovsky, 2005 258 ux(x, t) = 1 √ 4πt R − 2(x − y) 4t e− (x−y)2 4t f0(y) dy = 1 √ 4πt R − d dy e− (x−y)2 4t f0(y) dy = 1 √ 4πt − e− (x−y)2 4t f0(y) ∞ −∞ = 0 + 1 √ 4πt R e− (x−y)2 4t f0y(y) dy, sup x |u(x, t)| ≤ 1 √ 4πt sup x |f0x(x)| R e−(x−y)2 4t dy = 1 √ 4πt sup x |f0x(x)| R e−z2 √ 4t dz = sup x |f0x(x)|. These inequalities are equalities when f0(x) and f0x(x) are constants, respectively.
  • 259. Partial Differential Equations Igor Yanovsky, 2005 259 Problem (S’01, #5). a) Show that the solution of the heat equation ut = uxx, −∞ < x < ∞ with square-integrable initial data u(x, 0) = f(x), decays in time, and there is a constant α independent of f and t such that for all t > 0 max x |ux(x, t)| ≤ αt−3 4 x |f(x)|2 dx 1 2 . b) Consider the solution ρ of the transport equation ρt +uρx = 0 with square-integrable initial data ρ(x, 0) = ρ0(x) and the velocity u from part (a). Show that ρ(x, t) remains square-integrable for all finite time R |ρ(x, t)|2 dx ≤ eCt 1 4 R |ρ0(x)|2 dx, where C does not depend on ρ0. Proof. a) The solution to the one-dimensional homogeneous heat equation is u(x, t) = 1 √ 4πt R e− (x−y)2 4t f(y) dy. Take the derivative with respect to x, we get 64 ux(x, t) = 1 √ 4πt R − 2(x − y) 4t e− (x−y)2 4t f(y) dy = − 1 4t 3 2 √ π R (x − y)e− (x−y)2 4t f(y) dy. |ux(x, t)| ≤ 1 4t 3 2 √ π R (x − y)e−(x−y)2 4t f(y) dy (Cauchy-Schwarz) ≤ 1 4t 3 2 √ π R (x − y)2 e− (x−y)2 2t dy 1 2 ||f||L2(R) z = x − y √ 2t , dz = − dy √ 2t = 1 4t 3 2 √ π R − z2 (2t) 3 2 e−z2 dz 1 2 ||f||L2(R) = (2t) 3 4 4t 3 2 √ π R z2 e−z2 dz M<∞ 1 2 ||f||L2(R) = Ct−3 4 M 1 2 ||f||L2(R) = αt−3 4 ||f||L2(R). b) Note: max x |u| = max x 1 √ 4πt R e−(x−y)2 4t f(y) dy ≤ 1 √ 4πt R e−(x−y)2 2t dy 1 2 ||f||L2(R) ≤ 1 √ 4πt R − e−z2 √ 2t dz 1 2 ||f||L2(R) z = x − y √ 2t , dz = − dy √ 2t = (2t) 1 4 2π 1 2 t 1 2 R e−z2 dz = √ π 1 2 ||f||L2(R) = Ct−1 4 ||f||L2(R). 65 64 Cauchy-Schwarz: |(u, v)| ≤ ||u||||v|| in any norm, for example |uv|dx ≤ ( u2 dx) 1 2 ( v2 dx) 1 2 65 See Yana’s and Alan’s solutions.
  • 260. Partial Differential Equations Igor Yanovsky, 2005 260 Problem (F’04, #2). Let u(x, t) be a bounded solution to the Cauchy problem for the heat equation ut = a2 uxx, t > 0, x ∈ R, a > 0, u(x, 0) = ϕ(x). Here ϕ(x) ∈ C(R) satisfies lim x→+∞ ϕ(x) = b, lim x→−∞ ϕ(x) = c. Compute the limit of u(x, t) as t → +∞, x ∈ R. Justify your argument carefully. Proof. For a = 1, the solution to the one-dimensional homogeneous heat equation is u(x, t) = 1 √ 4πt R e−(x−y)2 4t ϕ(y) dy. We want to transform the equation to vt = vxx. Make a change of variables: x = ay. u(x, t) = u(x(y), t) = u(ay, t) = v(y, t). Then, vy = uxxy = aux, vyy = auxxxy = a2 uxx, v(y, 0) = u(ay, 0) = ϕ(ay). Thus, the new problem is: vt = vyy, t > 0, y ∈ R, v(y, 0) = ϕ(ay). v(y, t) = 1 √ 4πt R e− (y−z)2 4t ϕ(az) dz. Since ϕ is continuous, and limx→+∞ ϕ(x) = b, limx→−∞ ϕ(x) = c, we have |ϕ(x)| < M, ∀x ∈ R. Thus, |v(y, t)| ≤ M √ 4πt R e−z2 4t dz s = z √ 4t , ds = dz √ 4t = M √ 4πt R e−s2 √ 4t ds = M √ π R e−s2 ds √ π = M. Integral in converges uniformly ⇒ lim = lim. For ψ = ϕ(a·): v(y, t) = 1 √ 4πt ∞ −∞ e− (y−z)2 4t ψ(z) dz = 1 √ 4πt ∞ −∞ e−z2 4t ψ(y − z) dz = 1 √ 4πt ∞ −∞ e−s2 ψ(y − s √ 4t) √ 4t ds = 1 √ π ∞ −∞ e−s2 ψ(y − s √ 4t) ds.
  • 261. Partial Differential Equations Igor Yanovsky, 2005 261 lim t→+∞ v(y, t) = 1 √ π ∞ 0 e−s2 lim t→+∞ ψ(y − s √ 4t) ds + 1 √ π 0 −∞ e−s2 lim t→+∞ ψ(y − s √ 4t) ds = 1 √ π ∞ 0 e−s2 c ds + 1 √ π 0 −∞ e−s2 b ds = c 1 √ π √ π 2 + b 1 √ π √ π 2 = c + b 2 .
Problem. Consider
u_t = k u_xx + Q,  0 < x < 1,
u(0, t) = 0,  u(1, t) = 1.
What is the steady state temperature?

Proof. Set u_t = 0, and integrate with respect to x twice:
k u_xx + Q = 0,  u_xx = −Q/k,  u_x = −(Q/k) x + a,  u = −(Q/k) x²/2 + a x + b.
The boundary conditions give
u(x) = −(Q/2k) x² + (1 + Q/2k) x.
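A quick symbolic verification of the steady state (a sketch, assuming SymPy is available):

import sympy as sp

x, k, Q = sp.symbols('x k Q', positive=True)
u = -Q/(2*k)*x**2 + (1 + Q/(2*k))*x
print(sp.simplify(k*sp.diff(u, x, 2) + Q))       # expected: 0 (steady heat equation)
print(u.subs(x, 0), sp.simplify(u.subs(x, 1)))   # expected: 0 and 1 (boundary conditions)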
  • 263. Partial Differential Equations Igor Yanovsky, 2005 263 18.1 Heat Equation with Lower Order Terms McOwen 5.2 #11. Find a formula for the solution of ut = u − cu in Rn × (0, ∞) u(x, 0) = g(x) on Rn . (18.1) Show that such solutions, with initial data g ∈ L2(Rn), are unique, even when c is negative. Proof. McOwen. Consider v(x, t) = ectu(x, t). The transformed problem is vt = v in Rn × (0, ∞) v(x, 0) = g(x) on Rn. (18.2) Since g is continuous and bounded in Rn, we have v(x, t) = Rn K(x, y, t) g(y) dy = 1 (4πt) n 2 Rn e− |x−y|2 4t g(y) dy, u(x, t) = e−ct v(x, t) = 1 (4πt) n 2 Rn e− |x−y|2 4t −ct g(y) dy. u(x, t) is a bounded solution since v(x, t) is. To prove uniqueness, assume there is another solution v of (18.2). w = v − v satisfies wt = w in Rn × (0, ∞) w(x, 0) = 0 on Rn. (18.3) Since bounded solutions of (18.3) are unique, and since w is a nontrivial solution, w is unbounded. Thus, v is unbounded, and therefore, the bounded solution v is unique.
18.1.1 Heat Equation Energy Estimates

Problem (F'94, #3). Let u(x, y, t) be a twice continuously differentiable solution of
u_t = Δu − u³ in Ω ⊂ R², t ≥ 0,
u(x, y, 0) = 0 in Ω,
u(x, y, t) = 0 on ∂Ω, t ≥ 0.
Prove that u(x, y, t) ≡ 0 in Ω × [0, T].

Proof. Multiply the equation by u and integrate:
u u_t = u Δu − u⁴,
∫_Ω u u_t dx = ∫_Ω u Δu dx − ∫_Ω u⁴ dx,
(1/2) d/dt ∫_Ω u² dx = ∫_∂Ω u (∂u/∂n) ds [= 0] − ∫_Ω |∇u|² dx − ∫_Ω u⁴ dx,
(1/2) d/dt ||u||₂² = −∫_Ω |∇u|² dx − ∫_Ω u⁴ dx ≤ 0.
Thus, ||u(·, ·, t)||₂ ≤ ||u(·, ·, 0)||₂ = 0. Hence, ||u(·, ·, t)||₂ = 0, and u ≡ 0.
  • 265. Partial Differential Equations Igor Yanovsky, 2005 265 Problem (F’98, #5). Consider the heat equation ut − u = 0 in a two dimensional region Ω. Define the mass M as M(t) = Ω u(x, t) dx. a) For a fixed domain Ω, show M is a constant in time if the boundary conditions are ∂u/∂n = 0. b) Suppose that Ω = Ω(t) is evolving in time, with a boundary that moves at velocity v, which may vary along the boundary. Find a modified boundary condition (in terms of local quantities only) for u, so that M is constant. Hint: You may use the fact that d dt Ω(t) f(x, t) dx = Ω(t) ft(x, t) dx + ∂Ω(t) n · v f(x, t) dl, in which n is a unit normal vector to the boundary ∂Ω. Proof. a) We have ut − u = 0, on Ω ∂u ∂n = 0, on ∂Ω. We want to show that d dt M(t) = 0. We have 66 d dt M(t) = d dt Ω u(x, t) dx = Ω ut dx = Ω u dx = ∂Ω ∂u ∂n ds = 0. b) We need d dt M(t) = 0. 0 = d dt M(t) = d dt Ω(t) u(x, t) dx = Ω(t) ut dx + ∂Ω(t) n · v u ds = Ω(t) u dx + ∂Ω(t) n · v u ds = ∂Ω(t) ∂u ∂n ds + ∂Ω(t) n · v u ds = ∂Ω(t) ∇u · n ds + ∂Ω(t) n · v u ds = ∂Ω(t) n · (∇u + vu) ds. Thus, we need: n · (∇u + vu) ds = 0, on ∂Ω. 66 The last equality below is obtained from the Green’s formula: Ω u dx = Ω ∂u ∂n ds.
  • 266. Partial Differential Equations Igor Yanovsky, 2005 266 Problem (S’95, #3). Write down an explicit formula for a function u(x, t) solving ut + b · ∇u + cu = u in Rn × (0, ∞) u(x, 0) = f(x) on Rn. (18.4) where b ∈ Rn and c ∈ R are constants. Hint: First transform this to the heat equation by a linear change of the dependent and independent variables. Then solve the heat equation using the fundamental solution. Proof. Consider • u(x, t) = eα·x+βt v(x, t). ut = βeα·x+βt v + eα·x+βt vt = (vt + βv)eα·x+βt , ∇u = αeα·x+βt v + eα·x+βt ∇v = (αv + ∇v)eα·x+βt , ∇ · (∇u) = ∇ · (αv + ∇v)eα·x+βt = (α · ∇v + v)eα·x+βt + (|α|2 v + α · ∇v)eα·x+βt = v + 2α · ∇v + |α|2 v)eα·x+βt . Plugging this into (18.4), we obtain vt + βv + b · (αv + ∇v) + cv = v + 2α · ∇v + |α|2 v, vt + b − 2α · ∇v + β + b · α + c − |α|2 v = v. In order to get homogeneous heat equation, we set α = b 2 , β = − |b|2 4 − c, which gives vt = v in Rn × (0, ∞) v(x, 0) = e− b 2 ·x f(x) on Rn. The above PDE has the following solution: v(x, t) = 1 (4πt) n 2 Rn e− |x−y|2 4t e− b 2 ·y f(y) dy. Thus, u(x, t) = e b 2 ·x−( |b|2 4 +c)t v(x, t) = 1 (4πt) n 2 e b 2 ·x−( |b|2 4 +c)t Rn e−|x−y|2 4t e− b 2 ·y f(y) dy.
  • 267. Partial Differential Equations Igor Yanovsky, 2005 267 Problem (F’01, #7). Consider the parabolic problem ut = uxx + c(x)u (18.5) for −∞ < x < ∞, in which c(x) = 0 for |x| > 1, c(x) = 1 for |x| < 1. Find solutions of the form u(x, t) = eλt v(x) in which ∞ −∞ |u|2 dx < ∞. Hint: Look for v to have the form v(x) = ae−k|x| for |x| > 1, v(x) = b coslx for |x| < 1, for some a, b, k, l. Proof. Plug u(x, t) = eλtv(x) into (18.5) to get: λeλt v(x) = eλt v (x) + ceλt v(x), λv(x) = v (x) + cv(x), v (x) − λv(x) + cv(x) = 0. • For |x| > 1, c = 0. We look for solutions of the form v(x) = ae−k|x| . v (x) − λv(x) = 0, ak2 e−k|x| − aλe−k|x| = 0, k2 − λ = 0, k2 = λ, k = ± √ λ. Thus, v(x) = c1e− √ λx + c2e √ λx. Since we want ∞ −∞ |u|2 dx < ∞: u(x, t) = aeλt e− √ λx . • For |x| < 1, c = 1. We look for solutions of the form v(x) = b coslx. v (x) − λv(x) + v(x) = 0, −bl2 cos lx + (1 − λ)b coslx = 0, −l2 + (1 − λ) = 0, l2 = 1 − λ, l = ± √ 1 − λ. Thus, (since cos(−x) = cos x) u(x, t) = beλt cos (1 − λ)x. • We want v(x) to be continuous on R, and at x = ±1, in particular. Thus, ae− √ λ = b cos (1 − λ), a = be √ λ cos (1 − λ). • Also, v(x) is symmetric: ∞ −∞ |u|2 dx = 2 ∞ 0 |u|2 dx = 2 1 0 |u|2 dx + ∞ 1 |u|2 dx < ∞.
  • 268. Partial Differential Equations Igor Yanovsky, 2005 268 Problem (F’03, #3). ❶ The function h(X, T) = (4πT)−1 2 e−X2 4T satisfies (you do not have to show this) hT = hXX. Using this result, verify that for any smooth function U u(x, t) = e 1 3 t3−xt ∞ −∞ U(ξ) h(x − t2 − ξ, t) dξ satisfies ut + xu = uxx. ❷ Given that U(x) is bounded and continuous everywhere on −∞ ≤ x ≤ ∞, establish that lim t→0 ∞ −∞ U(ξ) h(x − ξ, t) dξ = U(x) ❸ and show that u(x, t) → U(x) as t → 0. (You may use the fact that ∞ 0 e−ξ2 dξ = 1 2 √ π.) Proof. We change the notation: h → K, U → g, ξ → y. We have K(X, T) = 1 √ 4πT e−X2 4T ❶ We want to verify that u(x, t) = e 1 3 t3−xt ∞ −∞ K(x − y − t2 , t) g(y) dy. satisfies ut + xu = uxx. We have ut = ∞ −∞ d dt e 1 3 t3−xt K(x − y − t2 , t) g(y) dy = ∞ −∞ (t2 − x) e 1 3 t3−xt K + e 1 3 t3−xt KX · (−2t) + KT g(y) dy, xu = ∞ −∞ x e 1 3 t3−xt K(x − y − t2 , t) g(y) dy, ux = ∞ −∞ d dx e 1 3 t3−xt K(x − y − t2 , t) g(y) dy = ∞ −∞ − t e 1 3 t3−xt K + e 1 3 t3−xt KX g(y) dy, uxx = ∞ −∞ d dx − t e 1 3 t3−xt K + e 1 3 t3−xt KX g(y) dy = ∞ −∞ t2 e 1 3 t3−xt K − t e 1 3 t3−xt KX − t e 1 3 t3−xt KX + e 1 3 t3−xt KXX g(y) dy.
  • 269. Partial Differential Equations Igor Yanovsky, 2005 269 Plugging these into , most of the terms cancel out. The remaining two terms cancel because KT = KXX. ❷ Given that g(x) is bounded and continuous on −∞ ≤ x ≤ ∞, we establish that 67 lim t→0 ∞ −∞ K(x − y, t) g(y) dy = g(x). Fix x0 ∈ Rn, ε > 0. Choose δ > 0 such that |g(y) − g(x0)| < ε if |y − x0| < δ, y ∈ Rn . Then if |x − x0| < δ 2 , we have: ( R K(x, t) dx = 1) R K(x − y, t) g(y) dy − g(x0) ≤ R K(x − y, t) [g(y) − g(x0)] dy ≤ Bδ(x0) K(x − y, t) g(y) − g(x0) dy ≤ ε R K(x−y,t) dy = ε + R−Bδ(x0) K(x − y, t) g(y) − g(x0) dy Furthermore, if |x − x0| ≤ δ 2 and |y − x0| ≥ δ, then |y − x0| ≤ |y − x| + δ 2 ≤ |y − x| + 1 2 |y − x0|. Thus, |y − x| ≥ 1 2 |y − x0|. Consequently, = ε + 2||g||L∞ R−Bδ(x0) K(x − y, t) dy ≤ ε + C √ t R−Bδ(x0) e− |x−y|2 4t dy ≤ ε + C √ t R−Bδ(x0) e− |y−x0|2 16t dy = ε + C √ t ∞ δ e− r2 16t r dr → ε + 0 as t → 0+ . Hence, if |x − x0| < δ 2 and t > 0 is small enough, |u(x, t) − g(x0)| < 2ε. 67 Evans, p. 47, Theorem 1 (c).
  • 270. Partial Differential Equations Igor Yanovsky, 2005 270 Problem (S’93, #4). The temperature T(x, t) in a stationary medium, x ≥ 0, is governed by the heat conduction equation ∂T ∂t = ∂2T ∂x2 . (18.6) Making the change of variable (x, t) → (u, t), where u = x/2 √ t, show that 4t ∂T ∂t = ∂2T ∂u2 + 2u ∂T ∂u . (18.7) Solutions of (18.7) that depend on u alone are called similarity solutions. 68 Proof. We change notation: the change of variables is (x, t) → (u, τ), where t = τ. After the change of variables, we have T = T(u(x, t), τ(t)). u = x 2 √ t ⇒ ut = − x 4t 3 2 , ux = 1 2 √ t , uxx = 0, τ = t ⇒ τt = 1, τx = 0. ∂T ∂t = ∂T ∂u ∂u ∂t + ∂T ∂τ , ∂T ∂x = ∂T ∂u ∂u ∂x , ∂2 T ∂x2 = ∂ ∂x ∂T ∂x = ∂ ∂x ∂T ∂u ∂u ∂x = ∂2 T ∂u2 ∂u ∂x ∂u ∂x + ∂T ∂u ∂2 u ∂x2 =0 = ∂2 T ∂u2 ∂u ∂x 2 . Thus, (18.6) gives: ∂T ∂u ∂u ∂t + ∂T ∂τ = ∂2T ∂u2 ∂u ∂x 2 , ∂T ∂u − x 4t 3 2 + ∂T ∂τ = ∂2 T ∂u2 1 2 √ t 2 , ∂T ∂τ = 1 4t ∂2T ∂u2 + x 4t 3 2 ∂T ∂u , 4t ∂T ∂τ = ∂2T ∂u2 + x √ t ∂T ∂u , 4t ∂T ∂τ = ∂2 T ∂u2 + 2u ∂T ∂u . 68 This is only the part of the qual problem.
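The reduced equation (18.7) can be spot-checked with the classical similarity solution T = erf(x/(2√t)); the sketch below (assuming SymPy is available) verifies both that this T solves the heat equation (18.6) and that the profile F(u) = erf(u) solves F'' + 2uF' = 0, which is (18.7) for solutions depending on u alone.

import sympy as sp

x, t, u = sp.symbols('x t u', positive=True)

# Check 1: a pure similarity solution of the heat equation, T = erf(x/(2*sqrt(t)))
T = sp.erf(x/(2*sp.sqrt(t)))
print(sp.simplify(sp.diff(T, t) - sp.diff(T, x, 2)))       # expected: 0

# Check 2: the same profile F(u) = erf(u) solves the reduced ODE F'' + 2*u*F' = 0
F = sp.erf(u)
print(sp.simplify(sp.diff(F, u, 2) + 2*u*sp.diff(F, u)))   # expected: 0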
  • 271. Partial Differential Equations Igor Yanovsky, 2005 271 19 Contraction Mapping and Uniqueness - Wave Recall that the solution to utt − c2uxx = f(x, t), u(x, 0) = g(x), ut(x, 0) = h(x), (19.1) is given by adding together d’Alembert’s formula and Duhamel’s principle: u(x, t) = 1 2 (g(x + ct) + g(x − ct)) + 1 2c x+ct x−ct h(ξ) dξ + 1 2c t 0 x+c(t−s) x−c(t−s) f(ξ, s) dξ ds. Problem (W’02, #8). a) Find an explicit solution of the following Cauchy problem ∂2u ∂t2 − ∂2u ∂x2 = f(t, x), u(0, x) = 0, ∂u ∂x (0, x) = 0. (19.2) b) Use part (a) to prove the uniqueness of the solution of the Cauchy problem ∂2u ∂t2 − ∂2u ∂x2 + q(t, x)u = 0, u(0, x) = 0, ∂u ∂x (0, x) = 0. (19.3) Here f(t, x) and q(t, x) are continuous functions. Proof. a) It was probably meant to give the ut initially. We rewrite (19.2) as utt − uxx = f(x, t), u(x, 0) = 0, ut(x, 0) = 0. (19.4) Duhamel’s principle, with c = 1, gives the solution to (19.4): u(x, t) = 1 2c t 0 x+c(t−s) x−c(t−s) f(ξ, s) dξ ds = 1 2 t 0 x+(t−s) x−(t−s) f(ξ, s) dξ ds. b) We use the Contraction Mapping Principle to prove uniqueness. Define the operator T(u) = 1 2 t 0 x+(t−s) x−(t−s) −q(ξ, s) u(ξ, s) dξ ds. on the Banach space C2,2, || · ||∞. We will show |Tun − Tun+1| < α||un − un+1|| where α < 1. Then {un}∞ n=1: un+1 = T(un) converges to a unique fixed point which is the unique solution of PDE. |Tun − Tun+1| = 1 2 t 0 x+(t−s) x−(t−s) −q(ξ, s) un(ξ, s) − un+1(ξ, s) dξ ds ≤ 1 2 t 0 ||q||∞||un − un+1||∞ 2(t − s) ds ≤ t2 ||q||∞||un − un+1||∞ ≤ α||un − un+1||∞, for small t. Thus, T is a contraction ⇒ ∃ a unique fixed point. Since Tu = u, u is the solution to the PDE.
  • 272. Partial Differential Equations Igor Yanovsky, 2005 272 Problem (F’00, #3). Consider the Goursat problem: Find the solution of the equation ∂2u ∂t2 − ∂2u ∂x2 + a(x, t)u = 0 in the square D, satisfying the boundary conditions u|γ1 = ϕ, u|γ2 = ψ, where γ1, γ2 are two adjacent sides D. Here a(x, t), ϕ and ψ are continuous functions. Prove the uniqueness of the solution of this Goursat problem. Proof. The change of variable μ = x + t, η = x − t transforms the equation to ˜uμη + ˜a(μ, η)˜u = 0. We integrate the equation: η 0 μ 0 ˜uμη(u, v) du dv = − η 0 μ 0 ˜a(μ, η) ˜udu dv, η 0 ˜uη(μ, v) − ˜uη(0, v) dv = − η 0 μ 0 ˜a(μ, η) ˜udu dv, ˜u(μ, η) = ˜u(μ, 0) + ˜u(0, η) − u(0, 0) − η 0 μ 0 ˜a(μ, η) ˜udu dv. We change the notation. In the new notation: f(x, y) = ϕ(x, y) − x 0 y 0 a(u, v)f(u, v) du dv, f = ϕ + Kf, f = ϕ + K(ϕ + Kf), · · · f = ϕ + ∞ n=1 Kn ϕ, f = Kf ⇒ f = 0, max 0<x<δ |f| ≤ δ max |a| max|f|. For small enough δ, the operator K is a contraction. Thus, there exists a unique fixed point of K, and f = Kf, where f is the unique solution.
  • 273. Partial Differential Equations Igor Yanovsky, 2005 273 20 Contraction Mapping and Uniqueness - Heat The solution of the initial value problem ut = u + f(x, t) for t > 0, x ∈ Rn u(x, 0) = g(x) for x ∈ Rn . (20.1) is given by u(x, t) = Rn ˜K(x − y, t) g(y) dy + t 0 Rn ˜K(x − y, t − s) f(y, s) dyds where ˜K(x, t) = ⎧ ⎨ ⎩ 1 (4πt) n 2 e− |x|2 4t for t > 0, 0 for t ≤ 0. Problem (F’00, #2). Consider the Cauchy problem ut − u + u2 (x, t) = f(x, t), x ∈ RN , 0 < t < T u(x, 0) = 0. Prove the uniqueness of the classical bounded solution assuming that T is small enough. Proof. Let {un} be a sequence of approximations to the solution, such that S(un) = un+1 = use Duhamel s principle t 0 Rn K(x − y, t − s) f(y, s) − u2 n(y, s) dy ds. We will show that S has a fixed point |S(un) − S(un+1)| ≤ α|un − un+1|, α < 1 ⇔ {un} converges to a uniques solution for small enough T. Since un, un+1 ∈ C2 (Rn ) ∩ C1 (t) ⇒ |un+1 + un| ≤ M. |S(un) − S(un+1)| ≤ t 0 Rn K(x − y, t − s) u2 n+1 − u2 n dy ds = t 0 Rn K(x − y, t − s) un+1 − un un+1 + un dy ds ≤ M t 0 Rn K(x − y, t − s) un+1 − un dy ds ≤ MM1 t 0 un+1(x, s) − un(x, s) ds ≤ MM1T ||un+1 − un||∞ < ||un+1 − un||∞ for small T. Thus, S is a contraction ⇒ ∃ a unique fixed point u ∈ C2 (Rn ) ∩ C1 (t) such that u = limn→∞ un. u is implicitly defined as u(x, t) = t 0 Rn K(x − y, t − s) f(y, s) − u2 (y, s) dy ds.
  • 274. Partial Differential Equations Igor Yanovsky, 2005 274 Problem (S’97, #3). a) Let Q(x) ≥ 0 such that ∞ x=−∞ Q(x) dx = 1, and define Q = 1 Q(x ). Show that (here ∗ denotes convolution) ||Q (x) ∗ w(x)||L∞ ≤ ||w(x)||L∞. In particular, let Qt(x) denote the heat kernel (at time t), then ||Qt(x) ∗ w1(x) − Qt(x) ∗ w2(x)||L∞ ≤ ||w1(x) − w2(x)||L∞. b) Consider the parabolic equation ut = uxx + u2 subject to initial conditions u(x, 0) = f(x). Show that the solution of this equation satisfies u(x, t) = Qt(x) ∗ f(x) + t 0 Qt−s(x) ∗ u2 (x, s) ds. (20.2) c) Fix t > 0. Let {un(x, t)}, n = 1, 2, . . . the fixed point iterations for the solution of (20.2) un+1(x, t) = Qt(x) ∗ f(x) + t 0 Qt−s(x) ∗ u2 n(x, s) ds. (20.3) Let Kn(t) = sup0≤m≤n ||um(x, t)||L∞. Using (a) and (b) show that ||un+1(x, t) − un(x, t)||L∞ ≤ 2 sup 0≤τ≤t Kn(τ) · t 0 ||un(x, s) − un−1(x, s)||L∞ ds. Conclude that the fixed point iterations in (20.3) converge if t is sufficiently small. Proof. a) We have ||Q (x) ∗ w(x)||L∞ = ∞ −∞ Q (x − y)w(y) dy ≤ ∞ −∞ Q (x − y)w(y) dy ≤ ||w||∞ ∞ −∞ Q (x − y) dy = ||w||∞ ∞ −∞ 1 Q x − y dy = ||w||∞ ∞ −∞ 1 Q y dy z = y , dz = dy = ||w||∞ ∞ −∞ Q(z) dz = ||w(x)||∞.
  • 275. Partial Differential Equations Igor Yanovsky, 2005 275 Qt(x) = 1√ 4πt e−x2 4t , the heat kernel. We have 69 ||Qt(x) ∗ w1(x) − Qt(x) ∗ w2(x)||L∞ = ∞ −∞ Qt(x − y)w1(y) dy − ∞ −∞ Qt(x − y)w2(y) dy ∞ = 1 √ 4πt ∞ −∞ e− (x−y)2 4t w1(y) dy − ∞ −∞ e− (x−y)2 4t w2(y) dy ∞ ≤ 1 √ 4πt ∞ −∞ e− (x−y)2 4t w1(y) − w2(y) dy ≤ w1(y) − w2(y) ∞ 1 √ 4πt ∞ −∞ e− (x−y)2 4t dy z = x − y √ 4t , dz = −dy √ 4t = w1(y) − w2(y) ∞ 1 √ 4πt ∞ −∞ e−z2 √ 4t dz = w1(y) − w2(y) ∞ 1 √ π ∞ −∞ e−z2 dz √ π = w1(y) − w2(y) ∞ . 69 Note: ∞ −∞ Qt(x) dx = 1 √ 4πt ∞ −∞ e− (x−y)2 4t dy = 1 √ 4πt ∞ −∞ e−z2 √ 4t dz = 1 √ π ∞ −∞ e−z2 dz = 1.
  • 276. Partial Differential Equations Igor Yanovsky, 2005 276 b) Consider ut = uxx + u2 , u(x, 0) = f(x). We will show that the solution of this equation satisfies u(x, t) = Qt(x) ∗ f(x) + t 0 Qt−s(x) ∗ u2 (x, s) ds. t 0 Qt−s(x) ∗ u2 (x, s) ds = t 0 R Qt−s(x − y) u2 (y, s) dy ds = t 0 R Qt−s(x − y) us(y, s) − uyy(y, s) dy ds = t 0 R d ds Qt−s(x − y)u(y, s) − d ds Qt−s(x − y) u(y, s) − Qt−s(x − y)uyy(y, s) dy ds = R Q0(x − y)u(y, t) dy − R Qt(x − y)u(y, 0) dy − t 0 R d ds Qt−s(x − y) u(y, s) + d2 dy2 Qt−s(x − y)u(y, s) = 0, since Qt satisfies heat equation dy ds = u(x, t) − R Qt(x − y)f(y) dy Note: lim t→0+ Q(x, t) = δ0(x) = δ(x). = u(x, t) − Qt(x) ∗ f(x). lim t→0+ R Q(x − y, t)v(y) dy = v(0). Note that we used: Dα(f ∗ g) = (Dαf) ∗ g = f ∗ (Dαg). c) Let un+1(x, t) = Qt(x) ∗ f(x) + t 0 Qt−s(x) ∗ u2 n(x, s) ds. ||un+1(x, t) − un(x, t)||L∞ = t 0 Qt−s(x) ∗ u2 n(x, s) − u2 n−1(x, s) ds ∞ ≤ t 0 Qt−s(x) ∗ u2 n(x, s) − u2 n−1(x, s) ∞ ds ≤ (a) t 0 u2 n(x, s) − u2 n−1(x, s) ∞ ds ≤ t 0 un(x, s) − un−1(x, s) ∞ un(x, s) + un−1(x, s) ∞ ds ≤ sup 0≤τ≤t un(x, s) + un−1(x, s) ∞ t 0 un(x, s) − un−1(x, s) ∞ ds ≤ 2 sup 0≤τ≤t Kn(τ) · t 0 ||un(x, s) − un−1(x, s)||L∞ ds. Also, ||un+1(x, t) − un(x, t)||L∞ ≤ 2t sup 0≤τ≤t Kn(τ) · ||un(x, s) − un−1(x, s)||L∞.
For t small enough, 2t sup_{0≤τ≤t} K_n(τ) ≤ α < 1. Thus the map T defined by

 Tu = Q_t(x) ∗ f(x) + ∫_0^t Q_{t−s}(x) ∗ u²(x, s) ds

is a contraction, and therefore has a unique fixed point u = Tu.
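To make the iteration (20.3) concrete, here is a rough numerical sketch (not from the text): it truncates R to an interval, evaluates the heat-kernel convolutions by direct quadrature, and runs a few Picard iterates, printing sup|u_{n+1} − u_n| to exhibit the contraction for a small final time. The interval, grids, initial data f, and final time are all arbitrary choices.

```python
import numpy as np

L, N = 20.0, 801                          # truncation of R to [-L, L] (assumption)
x = np.linspace(-L, L, N); dx = x[1] - x[0]
f = np.exp(-x**2)                         # arbitrary initial data

def heat_conv(w, t):
    """(Q_t * w)(x) by quadrature on the truncated line; Q_0 * w is taken as w."""
    if t <= 0.0:
        return w.copy()
    Q = np.exp(-(x[:, None] - x[None, :])**2 / (4.0*t)) / np.sqrt(4.0*np.pi*t)
    return Q @ w * dx

def picard_step(u_vals, s_nodes):
    """u_{n+1}(., s_j) = Q_{s_j}*f + int_0^{s_j} Q_{s_j - s} * u_n(., s)^2 ds."""
    ds = s_nodes[1] - s_nodes[0]
    new = np.empty_like(u_vals)
    for j, s in enumerate(s_nodes):
        terms = np.array([heat_conv(u_vals[k]**2, s - s_nodes[k])
                          for k in range(j + 1)])
        w = np.full(j + 1, ds)
        w[0] = w[-1] = ds / 2.0 if j > 0 else 0.0      # trapezoid rule in s
        new[j] = heat_conv(f, s) + (w[:, None] * terms).sum(axis=0)
    return new

t_final, M = 0.2, 6
s_nodes = np.linspace(0.0, t_final, M)
u = np.tile(f, (M, 1))                    # starting guess u_0(x, s) = f(x)
for n in range(4):
    u_new = picard_step(u, s_nodes)
    print(f"iterate {n+1}:  sup|u_new - u_old| = {np.abs(u_new - u).max():.3e}")
    u = u_new
```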
  • 278. Partial Differential Equations Igor Yanovsky, 2005 278 Problem (S’99, #3). Consider the system of equations ut = uxx + f(u, v) vt = 2vxx + g(u, v) to be solved for t > 0, −∞ < x < ∞, and smooth initial data with compact support: u(x, 0) = u0(x), v(x, 0) = v0(x). If f and g are uniformly Lipschitz continuous, give a proof of existence and unique- ness of the solution to this problem in the space of bounded continuous functions with ||u(·, t)|| = supx |u(x, t)|. Proof. The space of continuous bounded functions forms a complete metric space so the contraction mapping principle applies. First, let v(x, t) = w x√ 2 , t , then ut = uxx + f(u, w) wt = wxx + g(u, w). These initial value problems have the following solutions (K is the heat kernel): u(x, t) = Rn ˜K(x − y, t) u0(y) dy + t 0 Rn ˜K(x − y, t − s) f(u, w) dyds, w(x, t) = Rn ˜K(x − y, t) w0(y) dy + t 0 Rn ˜K(x − y, t − s) g(u, w) dyds. By the Lipshitz conditions, |f(u, w)| ≤ M1||u||, |g(u, w)| ≤ M2||w||. Now we can show the mappings, as defined below, are contractions: T1u = Rn ˜K(x − y, t) u0(y) dy + t 0 Rn ˜K(x − y, t − s) f(u, w) dyds, T2w = Rn ˜K(x − y, t) w0(y) dy + t 0 Rn ˜K(x − y, t − s) g(u, w) dyds. |T1(un) − T1(un+1)| ≤ t 0 Rn ˜K(x − y, t − s) f(un, w) − f(un+1, w) dy ds ≤ M1 t 0 Rn ˜K(x − y, t − s) un − un+1 dy ds ≤ M1 t 0 sup x un − un+1 Rn ˜K(x − y, t − s)dy ds ≤ M1 t 0 sup x un − un+1 ds ≤ M1t sup x un − un+1 < sup x un − un+1 for small t. We used the Lipshitz condition and R ˜K(x − y, t − s) dy = 1. Thus, for small t, T1 is a contraction, and has a unique fixed point. Thus, the solution is defined as u = T1u. Similarly, T2 is a contraction and has a unique fixed point. The solution is defined as w = T2w.
21 Problems: Maximum Principle - Laplace and Heat

21.1 Heat Equation - Maximum Principle and Uniqueness

Let us introduce the "cylinder" U = U_T = Ω × (0, T). We know that harmonic (and subharmonic) functions achieve their maximum on the boundary of the domain. For the heat equation, the result is improved in that the maximum is achieved on a certain part of the boundary, the parabolic boundary:

 Γ = {(x, t) ∈ Ū : x ∈ ∂Ω or t = 0}.

Let us also denote by C^{2;1}(U) the functions satisfying u_t, u_{x_i x_j} ∈ C(U).

Weak Maximum Principle. Let u ∈ C^{2;1}(U) ∩ C(Ū) satisfy Δu ≥ u_t in U. Then u achieves its maximum on the parabolic boundary of U:

 max_{Ū} u(x, t) = max_{Γ} u(x, t).  (21.1)

Proof. • First, assume Δu > u_t in U. For 0 < τ < T consider U_τ = Ω × (0, τ), Γ_τ = {(x, t) ∈ Ū_τ : x ∈ ∂Ω or t = 0}. If the maximum of u on Ū_τ occurs at x ∈ Ω and t = τ, then u_t(x, τ) ≥ 0 and Δu(x, τ) ≤ 0, violating our assumption; similarly, u cannot attain an interior maximum on U_τ. Hence (21.1) holds for U_τ: max_{Ū_τ} u = max_{Γ_τ} u. But max_{Γ_τ} u ≤ max_{Γ} u and, by continuity of u, max_{Ū} u = lim_{τ→T} max_{Ū_τ} u. This establishes (21.1).

• Second, we consider the general case of Δu ≥ u_t in U. Let u = v + εt for ε > 0. Notice that v ≤ u on Ū and Δv − v_t > 0 in U. Thus we may apply (21.1) to v:

 max_{Ū} u = max_{Ū} (v + εt) ≤ max_{Ū} v + εT = max_{Γ} v + εT ≤ max_{Γ} u + εT.

Letting ε → 0 establishes (21.1) for u.
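A small finite-difference illustration of the statement (an added sketch, not part of the text): solve u_t = u_xx on Ω = (0, 1) with a monotone explicit scheme, so that the hypothesis Δu ≥ u_t is satisfied (with equality) by the underlying equation, and confirm that the maximum over the space–time grid is attained on the parabolic boundary Γ. The initial and boundary data are arbitrary.

```python
import numpy as np

# Explicit scheme for u_t = u_xx on (0,1) x (0,T]; dt <= dx^2/2 keeps it monotone.
nx, T = 101, 0.5
x = np.linspace(0.0, 1.0, nx); dx = x[1] - x[0]
dt = 0.4 * dx**2; nt = int(T / dt)

u = np.sin(np.pi * x) + 0.5 * x              # arbitrary initial data
left, right = u[0], u[-1]                    # Dirichlet values, held fixed in time

parabolic_max = max(u.max(), left, right)    # max over Gamma (t = 0 slice, x = 0, 1)
interior_max = -np.inf                       # max over Omega x (0, T]
for _ in range(nt):
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0*u[1:-1] + u[:-2])
    u[0], u[-1] = left, right
    interior_max = max(interior_max, u[1:-1].max())

print("max over the interior, t > 0 :", round(interior_max, 6))
print("max over the parabolic bdry  :", round(parabolic_max, 6))
```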
  • 280. Partial Differential Equations Igor Yanovsky, 2005 280 Problem (S’98, #7). Prove that any smooth solution, u(x, y, t) in the unit box Ω = {(x, y) | − 1 ≤ x, y ≤ 1}, of the following equation ut = uux + uuy + u, t ≥ 0, (x, y) ∈ Ω u(x, y, 0) = f(x, y), (x, y) ∈ Ω satisfies the weak maximum principle, max Ω×[0,T ] u(x, y, t) ≤ max{ max 0≤t≤T u(±1, ±1, t), max (x,y)∈Ω f(x, y)}. Proof. Suppose u satisfies given equation. Let u = v + εt for ε > 0. Then, vt + ε = vvx + vvy + εt(vx + vy) + v. Suppose v has a maximum at (x0, y0, t0) ∈ Ω × (0, T). Then vx = vy = vt = 0 ⇒ ε = v ⇒ v > 0 ⇒ v has a minimum at (x0, y0, t0), a contradiction. Thus, the maximum of v is on the boundary of Ω × (0, T). Suppose v has a maximum at (x0, y0, T), (x0, y0) ∈ Ω. Then vx = vy = 0, vt ≥ 0 ⇒ ε ≤ v ⇒ v > 0 ⇒ v has a minimum at (x0, y0, T), a contradiction. Thus, max Ω×[0,T ] v ≤ max{ max 0≤t≤T v(±1, ±1, t), max (x,y)∈Ω f(x, y)}. Now max Ω×[0,T ] u = max Ω×[0,T ] (v + εt) ≤ max Ω×[0,T ] v + εT ≤ max{ max 0≤t≤T v(±1, ±1, t), max (x,y)∈Ω f(x, y)} + εT ≤ max{ max 0≤t≤T u(±1, ±1, t), max (x,y)∈Ω f(x, y)} + εT. Letting ε → 0 establishes the result.
  • 281. Partial Differential Equations Igor Yanovsky, 2005 281 21.2 Laplace Equation - Maximum Principle Problem (S’91, #6). Suppose that u satisfies Lu = auxx + buyy + cux + duy − eu = 0 with a > 0, b > 0, e > 0, for (x, y) ∈ Ω, with Ω a bounded open set in R2. a) Show that u cannot have a positive maximum or a negative minimum in the in- terior of Ω. b) Use this to show that the only function u satisfying Lu = 0 in Ω, u = 0 on ∂Ω and u continuous on Ω is u = 0. Proof. a) For an interior (local) maximum or minimum at an interior point (x, y), we have ux = 0, uy = 0. • Suppose u has a positive maximum in the interior of Ω. Then u > 0, uxx ≤ 0, uyy ≤ 0. With these values, we have auxx ≤0 + buyy ≤0 + cux =0 + duy =0 −eu <0 = 0, which leads to contradiction. Thus, u can not have a positive maximum in Ω. • Suppose u has a negative minimum in the interior of Ω. Then u < 0, uxx ≥ 0, uyy ≥ 0. With these values, we have auxx ≥0 + buyy ≥0 + cux =0 + duy =0 −eu >0 = 0, which leads to contradiction. Thus, u can not have a negative minimum in Ω. b) Since u can not have positive maximum in the interior of Ω, then maxu = 0 on Ω. Since u can not have negative minimum in the interior of Ω, then min u = 0 on Ω. Since u is continuous, u ≡ 0 on Ω.
  • 282. Partial Differential Equations Igor Yanovsky, 2005 282 22 Problems: Separation of Variables - Laplace Equation Problem 1: The 2D LAPLACE Equation on a Square. Let Ω = (0, π) × (0, π), and use separation of variables to solve the boundary value problem ⎧ ⎪⎨ ⎪⎩ uxx + uyy = 0 0 < x, y < π u(0, y) = 0 = u(π, y) 0 ≤ y ≤ π u(x, 0) = 0, u(x, π) = g(x) 0 ≤ x ≤ π, where g is a continuous function satisfying g(0) = 0 = g(π). Proof. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y +XY = 0. X X = − Y Y = −λ. • From X + λX = 0, we get Xn(x) = an cos nx + bn sin nx. Boundary conditions give u(0, y) = X(0)Y (y) = 0 u(π, y) = X(π)Y (y) = 0 ⇒ X(0) = 0 = X(π). Thus, Xn(0) = an = 0, and Xn(x) = bn sin nx, n = 1, 2, . . .. −n2 bn sinnx + λbn sinnx = 0, λn = n2 , n = 1, 2, . . .. • With these values of λn we solve Y − n2Y = 0 to find Yn(y) = cn cosh ny + dn sinhny. Boundary conditions give u(x, 0) = X(x)Y (0) = 0 ⇒ Y (0) = 0 = cn. Yn(x) = dn sinh ny. • By superposition, we write u(x, y) = ∞ n=1 ˜an sin nx sinhny, which satifies the equation and the three homogeneous boundary conditions. The boundary condition at y = π gives u(x, π) = g(x) = ∞ n=1 ˜an sinnx sinh nπ, π 0 g(x) sinmx dx = ∞ n=1 ˜an sinhnπ π 0 sin nx sinmx dx = π 2 ˜am sinhmπ.
Thus, ã_n sinh(nπ) = (2/π) ∫_0^π g(x) sin(nx) dx.
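A hedged numerical sketch of this series solution (not part of the original): compute ã_n for a sample g from the formula above, sum a truncated series, and check that u(x, π) reproduces g(x). The test function and the truncation level are arbitrary.

```python
import numpy as np

g  = lambda x: x * (np.pi - x)             # sample data with g(0) = g(pi) = 0
Nt = 40                                    # truncation level (arbitrary)
xq = np.linspace(0.0, np.pi, 4001); dxq = xq[1] - xq[0]

n   = np.arange(1, Nt + 1)
rhs = np.array([2.0/np.pi * np.sum(g(xq) * np.sin(k*xq)) * dxq for k in n])
a_n = rhs / np.sinh(n * np.pi)             # a_n sinh(n pi) = (2/pi) int g sin(nx) dx

def u(x, y):
    return float(np.sum(a_n * np.sin(n*x) * np.sinh(n*y)))

for xv in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(f"x = {xv:.1f}:  u(x, pi) = {u(xv, np.pi):.4f}    g(x) = {g(xv):.4f}")
```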
  • 284. Partial Differential Equations Igor Yanovsky, 2005 284 Problem 2: The 2D LAPLACE Equation on a Square. Let Ω = (0, π)×(0, π), and use separation of variables to solve the mixed boundary value problem ⎧ ⎪⎨ ⎪⎩ u = 0 in Ω ux(0, y) = 0 = ux(π, y) 0 < y < π u(x, 0) = 0, u(x, π) = g(x) 0 < x < π. Proof. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y +XY = 0. X X = − Y Y = −λ. • Consider X + λX = 0. If λ = 0, X0(x) = a0x + b0. If λ > 0, Xn(x) = an cos nx + bn sin nx. Boundary conditions give ux(0, y) = X (0)Y (y) = 0 ux(π, y) = X (π)Y (y) = 0 ⇒ X (0) = 0 = X (π). Thus, X0(0) = a0 = 0, and Xn(0) = nbn = 0. X0(x) = b0, Xn(x) = an cos nx, n = 1, 2, . . .. −n2 an cos nx + λan cos nx = 0, λn = n2 , n = 0, 1, 2, . . .. • With these values of λn we solve Y − n2Y = 0. If n = 0, Y0(y) = c0y + d0. If n = 0, Yn(y) = cn coshny + dn sinhny. Boundary conditions give u(x, 0) = X(x)Y (0) = 0 ⇒ Y (0) = 0. Thus, Y0(0) = d0 = 0, and Yn(0) = cn = 0. Y0(y) = c0y, Yn(y) = dn sinh ny, n = 1, 2, . . .. • We have u0(x, y) = X0(x)Y0(y) = b0c0y = ˜a0y, un(x, y) = Xn(x)Yn(y) = (an cos nx)(dn sinh ny) = ˜an cos nx sinhny. By superposition, we write u(x, y) = ˜a0y + ∞ n=1 ˜an cos nx sinhny, which satifies the equation and the three homogeneous boundary conditions. The fourth boundary condition gives u(x, π) = g(x) = ˜a0π + ∞ n=1 ˜an cos nx sinh nπ,
  • 285. Partial Differential Equations Igor Yanovsky, 2005 285 π 0 g(x) dx = π 0 ˜a0π + ∞ n=1 ˜an cos nx sinh nπ dx = ˜a0π2, π 0 g(x) cosmx dx = ∞ n=1 ˜an sinhnπ π 0 cos nx cos mx dx = π 2 ˜am sinh mπ. ˜a0 = 1 π2 π 0 g(x) dx, ˜an sinh nπ = 2 π π 0 g(x) cosnx dx.
  • 286. Partial Differential Equations Igor Yanovsky, 2005 286 Problem (W’04, #5) The 2D LAPLACE Equation in an Upper-Half Plane. Consider the Laplace equation ∂2u ∂x2 + ∂2u ∂y2 = 0, y > 0, −∞ < x < +∞ ∂u(x, 0) ∂y − u(x, 0) = f(x), where f(x) ∈ C∞ 0 (R1 ). Find a bounded solution u(x, y) and show that u(x, y) → 0 when |x| + y → ∞. Proof. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y +XY = 0. X X = − Y Y = −λ. • Consider X + λX = 0. If λ = 0, X0(x) = a0x + b0. If λ > 0, Xn(x) = an cos √ λnx + bn sin √ λnx. Since we look for bounded solutions as |x| → ∞, we have a0 = 0. • Consider Y − λnY = 0. If λn = 0, Y0(y) = c0y + d0. If λn > 0, Yn(y) = cne− √ λny + dne √ λny . Since we look for bounded solutions as y → ∞, we have c0 = 0, dn = 0. Thus, u(x, y) = ˜a0 + ∞ n=1 e− √ λny ˜an cos λnx + ˜bn sin λnx . Initial condition gives: f(x) = uy(x, 0) − u(x, 0) = −˜a0 − ∞ n=1 ( λn + 1) ˜an cos λnx + ˜bn sin λnx . f(x) ∈ C∞ 0 (R1 ), i.e. has compact support [−L, L], for some L > 0. Thus the coefficients ˜an, ˜bn are given by L −L f(x) cos λnx dx = −( λn + 1)˜anL. L −L f(x) sin λnx dx = −( λn + 1)˜bnL. Thus, u(x, y) → 0 when |x| + y → ∞. 70 70 Note that if we change the roles of X and Y in , the solution we get will be unbounded.
  • 287. Partial Differential Equations Igor Yanovsky, 2005 287 Problem 3: The 2D LAPLACE Equation on a Circle. Let Ω be the unit disk in R2 and consider the problem u = 0 in Ω ∂u ∂n = h on ∂Ω, where h is a continuous function. Proof. Use polar coordinates (r, θ) urr + 1 r ur + 1 r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π ∂u ∂r (1, θ) = h(θ) for 0 ≤ θ < 2π. r2 urr + rur + uθθ = 0. Let r = e−t , u(r(t), θ). ut = urrt = −e−t ur, utt = (−e−t ur)t = e−t ur + e−2t urr = rur + r2 urr. Thus, we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ. λn = n2 , n = 0, 1, 2, . . .. • With these values of λn we solve X (t) − n2X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn. • We have u0(r, θ) = X0(r)Y0(θ) = (−c0 log r + d0)a0, un(r, θ) = Xn(r)Yn(θ) = (cnr−n + dnrn )(an cos nθ + bn sinnθ). But u must be finite at r = 0, so cn = 0, n = 0, 1, 2, . . .. u0(r, θ) = d0a0, un(r, θ) = dnrn (an cos nθ + bn sinnθ). By superposition, we write u(r, θ) = ˜a0 + ∞ n=1 rn (˜an cos nθ + ˜bn sinnθ). Boundary condition gives ur(1, θ) = ∞ n=1 n(˜an cos nθ + ˜bn sinnθ) = h(θ). The coefficients an, bn for n ≥ 1 are determined from the Fourier series for h(θ). a0 is not determined by h(θ) and therefore may take an arbitrary value. Moreover,
  • 288. Partial Differential Equations Igor Yanovsky, 2005 288 the constant term in the Fourier series for h(θ) must be zero [i.e., 2π 0 h(θ)dθ = 0]. Therefore, the problem is not solvable for an arbitrary function h(θ), and when it is solvable, the solution is not unique.
  • 289. Partial Differential Equations Igor Yanovsky, 2005 289 Problem 4: The 2D LAPLACE Equation on a Circle. Let Ω = {(x, y) ∈ R2 : x2 + y2 < 1} = {(r, θ) : 0 ≤ r < 1, 0 ≤ θ < 2π}, and use separation of variables (r, θ) to solve the Dirichlet problem u = 0 in Ω u(1, θ) = g(θ) for 0 ≤ θ < 2π. Proof. Use polar coordinates (r, θ) urr + 1 r ur + 1 r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π u(1, θ) = g(θ) for 0 ≤ θ < 2π. r2 urr + rur + uθθ = 0. Let r = e−t, u(r(t), θ). ut = urrt = −e−t ur, utt = (−e−t ur)t = e−t ur + e−2t urr = rur + r2 urr. Thus, we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ. λn = n2, n = 0, 1, 2, . . .. • With these values of λn we solve X (t) − n2 X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn . • We have u0(r, θ) = X0(r)Y0(θ) = (−c0 log r + d0)a0, un(r, θ) = Xn(r)Yn(θ) = (cnr−n + dnrn )(an cos nθ + bn sinnθ). But u must be finite at r = 0, so cn = 0, n = 0, 1, 2, . . .. u0(r, θ) = d0a0, un(r, θ) = dnrn (an cos nθ + bn sinnθ). By superposition, we write u(r, θ) = ˜a0 + ∞ n=1 rn (˜an cos nθ + ˜bn sinnθ). Boundary condition gives u(1, θ) = ˜a0 + ∞ n=1 (˜an cos nθ + ˜bn sin nθ) = g(θ).
Since g is given on the full circle 0 ≤ θ < 2π, the coefficients are

 ã_0 = (1/(2π)) ∫_0^{2π} g(θ) dθ,  ã_n = (1/π) ∫_0^{2π} g(θ) cos(nθ) dθ,  b̃_n = (1/π) ∫_0^{2π} g(θ) sin(nθ) dθ.
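A small numerical check of this expansion (an addition, not in the text): compute the Fourier coefficients of a sample boundary function g over [0, 2π), evaluate the truncated series at r slightly below 1, and compare with g. The test function, truncation, and grid are arbitrary choices.

```python
import numpy as np

g = lambda th: np.exp(np.cos(th)) * np.sin(3*th)      # arbitrary 2*pi-periodic data
Nt, M = 30, 4096
th = np.linspace(0.0, 2.0*np.pi, M, endpoint=False); dth = 2.0*np.pi / M

a0 = np.sum(g(th)) * dth / (2.0*np.pi)
n  = np.arange(1, Nt + 1)
an = np.array([np.sum(g(th) * np.cos(k*th)) * dth / np.pi for k in n])
bn = np.array([np.sum(g(th) * np.sin(k*th)) * dth / np.pi for k in n])

def u(r, theta):
    return a0 + float(np.sum(r**n * (an*np.cos(n*theta) + bn*np.sin(n*theta))))

for theta in [0.3, 1.7, 4.2]:
    print(f"theta = {theta}:  u(0.999, theta) = {u(0.999, theta):+.4f}"
          f"    g(theta) = {g(theta):+.4f}")
```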
  • 291. Partial Differential Equations Igor Yanovsky, 2005 291 Problem (F’94, #6): The 2D LAPLACE Equation on a Circle. Find all solutions of the homogeneous equation uxx + uyy = 0, x2 + y2 < 1, ∂u ∂n − u = 0, x2 + y2 = 1. Hint: = 1 r ∂ ∂r (r ∂ ∂r ) + 1 r2 ∂2 ∂θ2 in polar coordinates. Proof. Use polar coordinates (r, θ): urr + 1 r ur + 1 r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π ∂u ∂r (1, θ) − u(1, θ) = 0 for 0 ≤ θ < 2π. Since we solve the equation on a circle, we have periodic conditions: u(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π), uθ(r, 0) = uθ(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π). Also, we want the solution to be bounded. In particular, u is bounded for r = 0. r2 urr + rur + uθθ = 0. Let r = e−t, u(r(t), θ), we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos √ λθ + bn sin √ λθ. Using periodic condition: Yn(0) = an, Yn(2π) = an cos( λn 2π) + bn sin( λn 2π) = an ⇒ λn = n ⇒ λn = n2 . Thus, Yn(θ) = an cos nθ + bn sinnθ. • With these values of λn we solve X (t) − n2 X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn. u must be finite at r = 0 ⇒ cn = 0, n = 0, 1, 2, . . .. u(r, θ) = ˜a0 + ∞ n=1 rn (˜an cos nθ + ˜bn sinnθ). Boundary condition gives 0 = ur(1, θ) − u(1, θ) = −˜a0 + ∞ n=1 (n − 1)(˜an cos nθ + ˜bn sinnθ). Calculating Fourier coefficients gives −2π˜a0 = 0 ⇒ ˜a0 = 0. π(n − 1)an = 0 ⇒ ˜an = 0, n = 2, 3, . . .. a1, b1 are constants. Thus, u(r, θ) = r(˜a1 cos θ + ˜b1 sin θ).
  • 292. Partial Differential Equations Igor Yanovsky, 2005 292 Problem (S’00, #4). a) Let (r, θ) be polar coordinates on the plane, i.e. x1 + ix2 = reiθ . Solve the boudary value problem u = 0 in r < 1 ∂u/∂r = f(θ) on r = 1, beginning with the Fourier series for f (you may assume that f is continuously dif- ferentiable). Give your answer as a power series in x1 + ix2 plus a power series in x1 − ix2. There is a necessary condition on f for this boundary value problem to be solvable that you will find in the course of doing this. b) Sum the series in part (a) to get a representation of u in the form u(r, θ) = 2π 0 N(r, θ − θ )f(θ ) dθ . Proof. a) Green’s identity gives the necessary compatibility condition on f: 2π 0 f(θ) dθ = r=1 ∂u ∂r dθ = ∂Ω ∂u ∂n ds = Ω u dx = 0. Use polar coordinates (r, θ): urr + 1 r ur + 1 r2 uθθ = 0 for 0 ≤ r < 1, 0 ≤ θ < 2π ∂u ∂r (1, θ) = f(θ) for 0 ≤ θ < 2π. Since we solve the equation on a circle, we have periodic conditions: u(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π), uθ(r, 0) = uθ(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π). Also, we want the solution to be bounded. In particular, u is bounded for r = 0. r2 urr + rur + uθθ = 0. Let r = e−t, u(r(t), θ), we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos √ λθ + bn sin √ λθ. Using periodic condition: Yn(0) = an, Yn(2π) = an cos( λn 2π) + bn sin( λn 2π) = an ⇒ λn = n ⇒ λn = n2 . Thus, Yn(θ) = an cos nθ + bn sinnθ. • With these values of λn we solve X (t) − n2X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0.
  • 293. Partial Differential Equations Igor Yanovsky, 2005 293 If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn . u must be finite at r = 0 ⇒ cn = 0, n = 0, 1, 2, . . .. u(r, θ) = ˜a0 + ∞ n=1 rn (˜an cos nθ + ˜bn sinnθ). Since ur(r, θ) = ∞ n=1 nrn−1 (˜an cos nθ + ˜bn sinnθ), the boundary condition gives ur(1, θ) = ∞ n=1 n (˜an cos nθ + ˜bn sinnθ) = f(θ). ˜an = 1 nπ 2π 0 f(θ) cos nθ dθ, ˜bn = 1 nπ 2π 0 f(θ) sin nθ dθ. ˜a0 is not determined by f(θ) (since 2π 0 f(θ) dθ = 0). Therefore, it may take an arbitrary value. Moreover, the constant term in the Fourier series for f(θ) must be zero [i.e., 2π 0 f(θ)dθ = 0]. Therefore, the problem is not solvable for an arbitrary function f(θ), and when it is solvable, the solution is not unique. b) In part (a), we obtained the solution and the Fourier coefficients: ˜an = 1 nπ 2π 0 f(θ ) cos nθ dθ , ˜bn = 1 nπ 2π 0 f(θ ) sinnθ dθ . u(r, θ) = ˜a0 + ∞ n=1 rn (˜an cos nθ + ˜bn sinnθ) = ˜a0 + ∞ n=1 rn 1 nπ 2π 0 f(θ ) cos nθ dθ cos nθ + 1 nπ 2π 0 f(θ ) sin nθ dθ sinnθ = ˜a0 + ∞ n=1 rn nπ 2π 0 f(θ ) cos nθ cos nθ + sin nθ sinnθ dθ = ˜a0 + ∞ n=1 rn nπ 2π 0 f(θ ) cos n(θ − θ) dθ = ˜a0 + 2π 0 ∞ n=1 rn nπ cos n(θ − θ ) N(r,θ−θ ) f(θ ) dθ .
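The kernel N(r, θ − θ′) = Σ_{n≥1} rⁿ cos n(θ − θ′)/(nπ) found above can in fact be summed in closed form, since Σ_{n≥1} rⁿ cos(nφ)/n = −½ log(1 − 2r cos φ + r²) for 0 ≤ r < 1. The short comparison below (an added check, not in the text) verifies this numerically.

```python
import numpy as np

def N_series(r, phi, terms=2000):
    n = np.arange(1, terms + 1)
    return float(np.sum(r**n * np.cos(n*phi) / (n * np.pi)))

def N_closed(r, phi):
    return -np.log(1.0 - 2.0*r*np.cos(phi) + r**2) / (2.0*np.pi)

for r, phi in [(0.3, 1.0), (0.7, 2.5), (0.9, 0.4)]:
    print(f"r = {r}, phi = {phi}:  series = {N_series(r, phi):.8f}"
          f"   closed form = {N_closed(r, phi):.8f}")
```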
  • 294. Partial Differential Equations Igor Yanovsky, 2005 294 Problem (S’92, #6). Consider the Laplace equation uxx + uyy = 0 for x2 + y2 ≥ 1. Denoting by x = r cos θ, y = r sinθ polar coordinates, let f = f(θ) be a given smooth function of θ. Construct a uniformly bounded solution which satisfies boundary conditions u = f for x2 + y2 = 1. What conditions has f to satisfy such that lim x2+y2→∞ (x2 + y2 )u(x, y) = 0? Proof. Use polar coordinates (r, θ): urr + 1 r ur + 1 r2 uθθ = 0 for r ≥ 1 u(1, θ) = f(θ) for 0 ≤ θ < 2π. Since we solve the equation on outside of a circle, we have periodic conditions: u(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π), uθ(r, 0) = u(r, 2π) ⇒ X(r)Y (0) = X(r)Y (2π) ⇒ Y (0) = Y (2π). Also, we want the solution to be bounded. In particular, u is bounded for r = ∞. r2 urr + rur + uθθ = 0. Let r = e−t, u(r(t), θ), we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos √ λθ + bn sin √ λθ. Using periodic condition: Yn(0) = an, Yn(2π) = an cos( λn 2π) + bn sin( λn 2π) = an ⇒ λn = n ⇒ λn = n2 . Thus, Yn(θ) = an cos nθ + bn sinnθ. • With these values of λn we solve X (t) − n2X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n = 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn. u must be finite at r = ∞ ⇒ c0 = 0, dn = 0, n = 1, 2, . . .. u(r, θ) = ˜a0 + ∞ n=1 r−n (˜an cos nθ + ˜bn sin nθ). Boundary condition gives f(θ) = u(1, θ) = ˜a0 + ∞ n=1 (˜an cos nθ + ˜bn sinnθ).
  • 295. Partial Differential Equations Igor Yanovsky, 2005 295 ⎧ ⎪⎨ ⎪⎩ 2π˜a0 = 2π 0 f(θ) dθ, π˜an = 2π 0 f(θ) cos nθ dθ, π˜bn = 2π 0 f(θ) sinnθ dθ. ⇒ ⎧ ⎪⎨ ⎪⎩ f0 = ˜a0 = 1 2π 2π 0 f(θ) dθ, fn = ˜an = 1 π 2π 0 f(θ) cos nθ dθ, ˜fn = ˜bn = 1 π 2π 0 f(θ) sinnθ dθ. • We need to find conditions for f such that lim x2+y2→∞ (x2 + y2 )u(x, y) = 0, or lim r→∞ r2 u(r, θ) = need 0, lim r→∞ r2 f0 + ∞ n=1 r−n (fn cos nθ + ˜fn sin nθ) = need 0. Since lim r→∞ ∞ n>2 r2−n (fn cos nθ + ˜fn sin nθ) = 0, we need lim r→∞ r2 f0 + 2 n=1 r2−n (fn cos nθ + ˜fn sinnθ) = need 0. Thus, the conditions are fn, ˜fn = 0, n = 0, 1, 2.
  • 296. Partial Differential Equations Igor Yanovsky, 2005 296 Problem (F’96, #2): The 2D LAPLACE Equation on a Semi-Annulus. Solve the Laplace equation in the semi-annulus ⎧ ⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎩ u = 0, 1 < r < 2, 0 < θ < π, u(r, 0) = u(r, π) = 0, 1 < r < 2, u(1, θ) = sinθ, 0 < θ < π, u(2, θ) = 0, 0 < θ < π. Hint: Use the formula = 1 r ∂ ∂r (r ∂ ∂r ) + 1 r2 ∂2 ∂θ2 for the Laplacian in polar coordinates. Proof. Use polar coordinates (r, θ) urr + 1 r ur + 1 r2 uθθ = 0 1 < r < 2, 0 < θ < π, r2 urr + rur + uθθ = 0. With r = e−t, we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos √ λθ + bn sin √ λθ. Boundary conditions give un(r, 0) = 0 = Xn(r)Yn(0) = 0, ⇒ Yn(0) = 0, un(r, π) = 0 = Xn(r)Yn(π) = 0, ⇒ Yn(π) = 0. Thus, 0 = Yn(0) = an, and Yn(π) = bn sin √ λπ = 0 ⇒ √ λ = n ⇒ λn = n2 . Thus, Yn(θ) = bn sinnθ, n = 1, 2, . . .. • With these values of λn we solve X (t) − n2X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n > 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn. • We have, u(r, θ) = ∞ n=1 Xn(r)Yn(θ) = ∞ n=1 (˜cnr−n + ˜dnrn ) sinnθ. Using the other two boundary conditions, we obtain sinθ = u(1, θ) = ∞ n=1 (˜cn + ˜dn) sinnθ ⇒ ˜c1 + ˜d1 = 1, ˜cn + ˜dn = 0, n = 2, 3, . . .. 0 = u(2, θ) = ∞ n=1 (˜cn2−n + ˜dn2n ) sinnθ ⇒ ˜cn2−n + ˜dn2n = 0, n = 1, 2, . . .. Thus, the coefficients are given by c1 = 4 3 , d1 = − 1 3 ; cn = 0, dn = 0.
u(r, θ) = ( 4/(3r) − r/3 ) sin θ.
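A quick symbolic verification of this answer (an addition): sympy confirms that u = (4/(3r) − r/3) sin θ is harmonic in polar coordinates and satisfies all four boundary conditions.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
u = (sp.Rational(4, 3)/r - r/3) * sp.sin(th)

laplacian = sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, th, 2)/r**2
print(sp.simplify(laplacian))                                        # 0: u is harmonic
print(sp.simplify(u.subs(th, 0)), sp.simplify(u.subs(th, sp.pi)))    # 0, 0
print(sp.simplify(u.subs(r, 1)))                                     # sin(theta)
print(sp.simplify(u.subs(r, 2)))                                     # 0
```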
  • 298. Partial Differential Equations Igor Yanovsky, 2005 298 Problem (S’98, #8): The 2D LAPLACE Equation on a Semi-Annulus. Solve ⎧ ⎪⎨ ⎪⎩ u = 0, 1 < r < 2, 0 < θ < π, u(r, 0) = u(r, π) = 0, 1 < r < 2, u(1, θ) = u(2, θ) = 1, 0 < θ < π. Proof. Use polar coordinates (r, θ) urr + 1 r ur + 1 r2 uθθ = 0 for 1 < r < 2, 0 < θ < π, r2 urr + rur + uθθ = 0. With r = e−t , we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ. Boundary conditions give un(r, 0) = 0 = Xn(r)Yn(0) = 0, ⇒ Yn(0) = 0, un(r, π) = 0 = Xn(r)Yn(π) = 0, ⇒ Yn(π) = 0. Thus, 0 = Yn(0) = an, and Yn(θ) = bn sin nθ. λn = n2 , n = 1, 2, . . .. • With these values of λn we solve X (t) − n2X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n > 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn. • We have, u(r, θ) = ∞ n=1 Xn(r)Yn(θ) = ∞ n=1 (˜cnr−n + ˜dnrn ) sinnθ. Using the other two boundary conditions, we obtain u(1, θ) = 1 = ∞ n=1 (˜cn + ˜dn) sinnθ, u(2, θ) = 1 = ∞ n=1 (˜cn2−n + ˜dn2n ) sinnθ, which give the two equations for ˜cn and ˜dn: π 0 sin nθ dθ = π 2 (˜cn + ˜dn), π 0 sin nθ dθ = π 2 (˜cn2−n + ˜dn2n ), that can be solved.
  • 299. Partial Differential Equations Igor Yanovsky, 2005 299 Problem (F’89, #1). Consider Laplace equation inside a 90◦ sector of a circular annulus u = 0 a < r < b, 0 < θ < π 2 subject to the boundary conditions ∂u ∂θ (r, 0) = 0, ∂u ∂θ (r, π 2 ) = 0, ∂u ∂r (a, θ) = f1(θ), ∂u ∂r (b, θ) = f2(θ), where f1(θ), f2(θ) are continuously differentiable. a) Find the solution of this equation with the prescribed boundary conditions using separation of variables. Proof. a) Use polar coordinates (r, θ) urr + 1 r ur + 1 r2 uθθ = 0 for a < r < b, 0 < θ < π 2 , r2 urr + rur + uθθ = 0. With r = e−t, we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos √ λθ + bn sin √ λθ. Boundary conditions give unθ(r, 0) = Xn(r)Yn(0) = 0 ⇒ Yn(0) = 0, unθ(r, π 2 ) = Xn(r)Yn( π 2 ) = 0 ⇒ Yn( π 2 ) = 0. Yn(θ) = −an √ λn sin √ λnθ + bn √ λn cos √ λnθ. Thus, Yn(0) = bn √ λn = 0 ⇒ bn = 0. Yn(π 2 ) = −an √ λn sin √ λn π 2 = 0 ⇒ √ λn π 2 = nπ ⇒ λn = (2n)2. Thus, Yn(θ) = an cos(2nθ), n = 0, 1, 2, . . .. In particular, Y0(θ) = a0t + b0. Boundary conditions give Y0(θ) = b0. • With these values of λn we solve X (t) − (2n)2 X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n > 0, Xn(t) = cne2nt + dne−2nt ⇒ Xn(r) = cnr−2n + dnr2n. u(r, θ) = ˜c0 log r + ˜d0 + ∞ n=1 (˜cnr−2n + ˜dnr2n ) cos(2nθ). Using the other two boundary conditions, we obtain ur(r, θ) = ˜c0 r + ∞ n=1 (−2n˜cnr−2n−1 + 2n ˜dnr2n−1 ) cos(2nθ).
f1(θ) = u_r(a, θ) = c̃_0/a + 2 Σ_{n=1}^∞ n(−c̃_n a^{−2n−1} + d̃_n a^{2n−1}) cos(2nθ),
f2(θ) = u_r(b, θ) = c̃_0/b + 2 Σ_{n=1}^∞ n(−c̃_n b^{−2n−1} + d̃_n b^{2n−1}) cos(2nθ),

which give the two equations for c̃_n and d̃_n (n ≥ 1):

 ∫_0^{π/2} f1(θ) cos(2nθ) dθ = (π/2) n(−c̃_n a^{−2n−1} + d̃_n a^{2n−1}),
 ∫_0^{π/2} f2(θ) cos(2nθ) dθ = (π/2) n(−c̃_n b^{−2n−1} + d̃_n b^{2n−1}).

Projecting onto the constant mode gives c̃_0 = (2a/π) ∫_0^{π/2} f1(θ) dθ = (2b/π) ∫_0^{π/2} f2(θ) dθ, which already forces the compatibility condition of part (b).

b) Show that the solution exists if and only if

 a ∫_0^{π/2} f1(θ) dθ − b ∫_0^{π/2} f2(θ) dθ = 0.

Proof. We use Green's identity, recalling that the arc length element on a circular arc r = const is ds = r dθ, that the outward normal on r = a points toward the origin, and that on the straight edges θ = 0, θ = π/2 the normal derivative is ∓(1/r) ∂u/∂θ:

 0 = ∫_Ω Δu dx = ∮_{∂Ω} (∂u/∂n) ds
  = ∫_0^{π/2} ∂u/∂r(b, θ) b dθ − ∫_0^{π/2} ∂u/∂r(a, θ) a dθ − ∫_a^b (1/r) ∂u/∂θ(r, 0) dr + ∫_a^b (1/r) ∂u/∂θ(r, π/2) dr
  = b ∫_0^{π/2} f2(θ) dθ − a ∫_0^{π/2} f1(θ) dθ + 0 + 0,

which is exactly the stated condition.

c) Is the solution unique?

Proof. No, since the boundary conditions are all Neumann conditions. The solution is unique only up to an additive constant.
  • 301. Partial Differential Equations Igor Yanovsky, 2005 301 Problem (S’99, #4). Let u(x, y) be harmonic inside the unit disc, with boundary values along the unit circle u(x, y) = 1, y > 0 0, y ≤ 0. Compute u(0, 0) and u(0, y). Proof. Since u is harmonic, u = 0. Use polar coordinates (r, θ) ⎧ ⎪⎨ ⎪⎩ urr + 1 r ur + 1 r2 uθθ = 0 0 ≤ r < 1, 0 ≤ θ < 2π u(1, θ) = 1, 0 < θ < π 0, π ≤ θ ≤ 2π. r2 urr + rur + uθθ = 0. With r = e−t , we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos nθ + bn sin nθ. λn = n2, n = 1, 2, . . .. • With these values of λn we solve X (t) − n2 X(t) = 0. If n = 0, X0(t) = c0t + d0. ⇒ X0(r) = −c0 log r + d0. If n > 0, Xn(t) = cnent + dne−nt ⇒ Xn(r) = cnr−n + dnrn . • We have u0(r, θ) = X0(r)Y0(θ) = (−c0 log r + d0)a0, un(r, θ) = Xn(r)Yn(θ) = (cnr−n + dnrn )(an cos nθ + bn sinnθ). But u must be finite at r = 0, so cn = 0, n = 0, 1, 2, . . .. u0(r, θ) = ˜a0, un(r, θ) = rn (˜an cos nθ + ˜bn sin nθ). By superposition, we write u(r, θ) = ˜a0 + ∞ n=1 rn (˜an cos nθ + ˜bn sinnθ). Boundary condition gives u(1, θ) = ˜a0 + ∞ n=1 (˜an cos nθ + ˜bn sin nθ) = 1, 0 < θ < π 0, π ≤ θ ≤ 2π, and the coefficients ˜an and ˜bn are determined from the above equation. 71 71 See Yana’s solutions, where Green’s function on a unit disk is constructed.
  • 302. Partial Differential Equations Igor Yanovsky, 2005 302 23 Problems: Separation of Variables - Poisson Equation Problem (F’91, #2): The 2D POISSON Equation on a Quarter-Circle. Solve explicitly the following boundary value problem uxx + uyy = f(x, y) in the domain Ω = {(x, y), x > 0, y > 0, x2 + y2 < 1} with boundary conditions u = 0 for y = 0, 0 < x < 1, ∂u ∂x = 0 for x = 0, 0 < y < 1, u = 0 for x > 0, y > 0, x2 + y2 = 1. Function f(x, y) is known and is assumed to be continuous. Proof. Use polar coordinates (r, θ): ⎧ ⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎩ urr + 1 r ur + 1 r2 uθθ = f(r, θ) 0 ≤ r < 1, 0 ≤ θ < π 2 u(r, 0) = 0 0 ≤ r < 1, uθ(r, π 2 ) = 0 0 ≤ r < 1, u(1, θ) = 0 0 ≤ θ ≤ π 2 . We solve r2 urr + rur + uθθ = 0. Let r = e−t, u(r(t), θ), we have utt + uθθ = 0. Let u(t, θ) = X(t)Y (θ), which gives X (t)Y (θ) + X(t)Y (θ) = 0. X (t) X(t) = − Y (θ) Y (θ) = λ. • From Y (θ) + λY (θ) = 0, we get Yn(θ) = an cos √ λθ + bn sin √ λθ. Boundary conditions: u(r, 0) = X(r)Y (0) = 0 uθ(r, π 2 ) = X(r)Y (π 2 ) = 0 ⇒ Y (0) = Y π 2 = 0. Thus, Yn(0) = an = 0, and Yn(π 2 ) = √ λnbn cos √ λn π 2 = 0 ⇒ √ λn π 2 = nπ − π 2 , n = 1, 2, . . . ⇒ λn = (2n − 1)2. Thus, Yn(θ) = bn sin(2n − 1)θ, n = 1, 2, . . .. Thus, we have u(r, θ) = ∞ n=1 Xn(r) sin[(2n − 1)θ].
  • 303. Partial Differential Equations Igor Yanovsky, 2005 303 We now plug this equation into with inhomogeneous term and obtain ∞ n=1 Xn(t) sin[(2n − 1)θ] − (2n − 1)2 Xn(t) sin[(2n − 1)θ] = f(t, θ), ∞ n=1 Xn(t) − (2n − 1)2 Xn(t) sin[(2n − 1)θ] = f(t, θ), π 4 Xn(t) − (2n − 1)2 Xn(t) = π 2 0 f(t, θ) sin[(2n − 1)θ] dθ, Xn(t) − (2n − 1)2 Xn(t) = 4 π π 2 0 f(t, θ) sin[(2n − 1)θ] dθ. The solution to this equation is Xn(t) = cne(2n−1)t + dne−(2n−1)t + Unp(t), or Xn(r) = cnr−(2n−1) + dnr(2n−1) + unp(r), where unp is the particular solution of inhomogeneous equation. u must be finite at r = 0 ⇒ cn = 0, n = 1, 2, . . .. Thus, u(r, θ) = ∞ n=1 dnr(2n−1) + unp(r) sin[(2n − 1)θ]. Using the last boundary condition, we have 0 = u(1, θ) = ∞ n=1 dn + unp(1) sin[(2n − 1)θ], ⇒ 0 = π 4 (dn + unp(1)), ⇒ dn = −unp(1). u(r, θ) = ∞ n=1 − unp(1)r(2n−1) + unp(r) sin[(2n − 1)θ]. The method used to solve this problem is similar to section Problems: Eigenvalues of the Laplacian - Poisson Equation: 1) First, we find Yn(θ) eigenfunctions. 2) Then, we plug in our guess u(t, θ) = X(t)Y (θ) into the equation utt + uθθ = f(t, θ) and solve an ODE in X(t). Note the similar problem on 2D Poisson equation on a square domain. The prob- lem is used by first finding the eigenvalues and eigenfunctions of the Laplacian, and then expanding f(x, y) in eigenfunctions, and comparing coefficients of f with the gen- eral solution u(x, y). Here, however, this could not be done because of the circular geometry of the domain. In particular, the boundary conditions do not give enough information to find explicit representations for μm and νn. Also, the condition u = 0 for x > 0, y > 0, x2 +y2 = 1
cannot be used. 72

72 ChiuYen's solutions contain attempts to solve this problem using Green's functions.
  • 305. Partial Differential Equations Igor Yanovsky, 2005 305 24 Problems: Separation of Variables - Wave Equation Example (McOwen 3.1 #2). We considered the initial/boundary value problem and solved it using Fourier Series. We now solve it using the Separation of Variables. ⎧ ⎪⎨ ⎪⎩ utt − uxx = 0 0 < x < π, t > 0 u(x, 0) = 1, ut(x, 0) = 0 0 < x < π u(0, t) = 0, u(π, t) = 0 t ≥ 0. (24.1) Proof. Assume u(x, t) = X(x)T(t), then substitution in the PDE gives XT −X T = 0. X X = T T = −λ. • From X + λX = 0, we get Xn(x) = an cos nx + bn sin nx. Boundary conditions give u(0, t) = X(0)T(t) = 0 u(π, t) = X(π)T(t) = 0 ⇒ X(0) = X(π) = 0. Thus, Xn(0) = an = 0, and Xn(x) = bn sinnx, λn = n2, n = 1, 2, . . .. • With these values of λn, we solve T +n2 T = 0 to find Tn(t) = cn sinnt+dn cos nt. Thus, u(x, t) = ∞ n=1 ˜cn sin nt + ˜dn cos nt sin nx, ut(x, t) = ∞ n=1 n˜cn cos nt − n ˜dn sin nt sinnx. • Initial conditions give 1 = u(x, 0) = ∞ n=1 ˜dn sin nx, 0 = ut(x, 0) = ∞ n=1 n˜cn sinnx. By orthogonality, we may multiply both equations by sinmx and integrate: π 0 sin mx dx = ˜dm π 2 , π 0 0 dx = n˜cn π 2 , which gives the coefficients ˜dn = 2 nπ (1 − cos nπ) = 4 nπ , n odd, 0, n even, and ˜cn = 0. Plugging the coefficients into a formula for u(x, t), we get u(x, t) = 4 π ∞ n=0 cos(2n + 1)t sin(2n + 1)x (2n + 1) .
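A quick numerical sanity check of this series (an addition): a truncated partial sum should reproduce the initial condition u(x, 0) = 1 on (0, π), up to Gibbs oscillations near the endpoints. The truncation level is arbitrary.

```python
import numpy as np

def u(x, t, N=400):
    k = 2*np.arange(N) + 1
    return 4.0/np.pi * float(np.sum(np.cos(k*t) * np.sin(k*x) / k))

for xv in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(f"x = {xv:.1f}:  u(x, 0) = {u(xv, 0.0):.4f}   (expect ~1)")
```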
  • 306. Partial Differential Equations Igor Yanovsky, 2005 306 Example. Use the method of separation of variables to find the solution to: ⎧ ⎪⎨ ⎪⎩ utt + 3ut + u = uxx, 0 < x < 1 u(0, t) = 0, u(1, t) = 0, u(x, 0) = 0, ut(x, 0) = x sin(2πx). Proof. Assume u(x, t) = X(x)T(t), then substitution in the PDE gives XT + 3XT + XT = X T, T T + 3 T T + 1 = X X = −λ. • From X + λX = 0, Xn(x) = an cos √ λnx + bn sin √ λnx. Boundary conditions give u(0, t) = X(0)T(t) = 0 u(1, t) = X(1)T(t) = 0 ⇒ X(0) = X(1) = 0. Thus, Xn(0) = an = 0, and Xn(x) = bn sin √ λnx. Xn(1) = bn sin √ λn = 0. Hence, √ λn = nπ, or λn = (nπ)2 , n = 1, 2, . . .. λn = (nπ)2 , Xn(x) = bn sinnπx. • With these values of λn, we solve T + 3T + T = −λnT, T + 3T + T = −(nπ)2 T, T + 3T + (1 + (nπ)2 )T = 0. We can solve this 2nd-order ODE with the following guess, T(t) = cest to obtain s = −3 2 ± 5 4 − (nπ)2. For n ≥ 1, 5 4 − (nπ)2 < 0. Thus, s = −3 2 ± i (nπ)2 − 5 4. Tn(t) = e−3 2 t cn cos (nπ)2 − 5 4 t + dn sin (nπ)2 − 5 4 t . u(x, t) = X(x)T(t) = ∞ n=1 e−3 2 t cn cos (nπ)2 − 5 4 t + dn sin (nπ)2 − 5 4 t sinnπx. • Initial conditions give 0 = u(x, 0) = ∞ n=1 cn sin nπx. By orthogonality, we may multiply this equations by sin mπx and integrate: 1 0 0 dx = 1 2 cm ⇒ cm = 0.
  • 307. Partial Differential Equations Igor Yanovsky, 2005 307 Thus, u(x, t) = ∞ n=1 dne−3 2 t sin (nπ)2 − 5 4 t sinnπx. ut(x, t) = ∞ n=1 − 3 2 dne−3 2 t sin (nπ)2 − 5 4 t + dne−3 2 t (nπ)2 − 5 4 cos (nπ)2 − 5 4 t sinnπx, x sin(2πx) = ut(x, 0) = ∞ n=1 dn (nπ)2 − 5 4 sinnπx. By orthogonality, we may multiply this equations by sin mπx and integrate: 1 0 x sin(2πx) sin(mπx) dx = dm 1 2 (mπ)2 − 5 4 , dn = 2 (nπ)2 − 5 4 1 0 x sin(2πx) sin(nπx) dx. u(x, t) = e−3 2 t ∞ n=1 dn sin (nπ)2 − 5 4 t sin nπx. Problem (F’04, #1). Solve the following initial-boundary value problem for the wave equation with a potential term, ⎧ ⎪⎨ ⎪⎩ utt − uxx + u = 0 0 < x < π, t < 0 u(0, t) = u(π, t) = 0 t > 0 u(x, 0) = f(x), ut(x, 0) = 0 0 < x < π, where f(x) = x if x ∈ (0, π/2), π − x if x ∈ (π/2, π). The answer should be given in terms of an infinite series of explicitly given functions. Proof. Assume u(x, t) = X(x)T(t), then substitution in the PDE gives XT − X T + XT = 0, T T + 1 = X X = −λ. • From X + λX = 0, Xn(x) = an cos √ λnx + bn sin √ λnx. Boundary conditions give u(0, t) = X(0)T(t) = 0 u(π, t) = X(π)T(t) = 0 ⇒ X(0) = X(π) = 0. Thus, Xn(0) = an = 0, and Xn(x) = bn sin √ λnx. Xn(π) = bn sin √ λnπ = 0. Hence, √ λn = n, or λn = n2 , n = 1, 2, . . .. λn = n2 , Xn(x) = bn sinnx.
  • 308. Partial Differential Equations Igor Yanovsky, 2005 308 • With these values of λn, we solve T + T = −λnT, T + T = −n2 T, Tn + (1 + n2 )Tn = 0. The solution to this 2nd-order ODE is of the form: Tn(t) = cn cos 1 + n2 t + dn sin 1 + n2 t. u(x, t) = X(x)T(t) = ∞ n=1 cn cos 1 + n2 t + dn sin 1 + n2 t sinnx. ut(x, t) = ∞ n=1 − cn( 1 + n2) sin 1 + n2 t + dn( 1 + n2) cos 1 + n2 t sin nx. • Initial conditions give f(x) = u(x, 0) = ∞ n=1 cn sin nx. 0 = ut(x, 0) = ∞ n=1 dn( 1 + n2) sinnx. By orthogonality, we may multiply both equations by sinmx and integrate: π 0 f(x) sinmx dx = cm π 2 , π 0 0 dx = dm π 2 1 + m2, which gives the coefficients cn = 2 π π 0 f(x) sinnx dx = 2 π π 2 0 x sinnx dx + 2 π π π 2 (π − x) sinnx dx = 2 π − x 1 n cos nx π 2 0 + 1 n π 2 0 cos nx dx + 2 π − π n cos nx π π 2 + x 1 n cos nx π π 2 − 1 n π π 2 cos nx dx = 2 π − π 2n cos nπ 2 + 1 n2 sin nπ 2 − 1 n2 sin0 + 2 π − π n cos nπ + π n cos nπ 2 + π n cos nπ − π 2n cos nπ 2 − 1 n2 sin nπ + 1 n2 sin nπ 2 = 2 π 1 n2 sin nπ 2 + 2 π 1 n2 sin nπ 2 = 4 πn2 sin nπ 2 = ⎧ ⎪⎨ ⎪⎩ 0, n = 2k 4 πn2 , n = 4m + 1 − 4 πn2 , n = 4m + 3 = 0, n = 2k (−1) n−1 2 4 πn2 , n = 2k + 1. dn = 0. u(x, t) = ∞ n=1 cn cos 1 + n2 t sinnx.
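A brief check of the coefficient formula c_n = 4/(πn²) sin(nπ/2) (an added sketch): compare it with direct quadrature of (2/π) ∫_0^π f(x) sin(nx) dx for the tent function f, and confirm that the truncated series at t = 0 reproduces f at a sample point.

```python
import numpy as np

f  = lambda x: np.where(x < np.pi/2, x, np.pi - x)      # the given tent function
xq = np.linspace(0.0, np.pi, 20001); dx = xq[1] - xq[0]

for n in range(1, 8):
    c_formula = 4.0/(np.pi*n**2) * np.sin(n*np.pi/2)
    c_quad    = 2.0/np.pi * np.sum(f(xq) * np.sin(n*xq)) * dx
    print(f"n = {n}:  formula {c_formula:+.6f}   quadrature {c_quad:+.6f}")

# truncated series of u(x, t) at t = 0 should reproduce f
N = 200
n = np.arange(1, N + 1)
c = 4.0/(np.pi*n**2) * np.sin(n*np.pi/2)
x0 = 1.0
print("series u(1, 0) =", float(np.sum(c * np.sin(n*x0))), "   f(1) =", float(f(1.0)))
```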
  • 309. Partial Differential Equations Igor Yanovsky, 2005 309 25 Problems: Separation of Variables - Heat Equation Problem (F’94, #5). Solve the initial-boundary value problem ⎧ ⎪⎨ ⎪⎩ ut = uxx 0 < x < 2, t > 0 u(x, 0) = x2 − x + 1 0 ≤ x ≤ 2 u(0, t) = 1, u(2, t) = 3 t > 0. Find limt→+∞ u(x, t). Proof. ➀ First, we need to obtain function v that satisfies vt = vxx and takes 0 boundary conditions. Let • v(x, t) = u(x, t) + (ax + b), (25.1) where a and b are constants to be determined. Then, vt = ut, vxx = uxx. Thus, vt = vxx. We need equation (25.1) to take 0 boundary conditions for v(0, t) and v(2, t): v(0, t) = 0 = u(0, t) + b = 1 + b ⇒ b = −1, v(2, t) = 0 = u(2, t) + 2a − 1 = 2a + 2 ⇒ a = −1. Thus, (25.1) becomes v(x, t) = u(x, t) − x − 1. (25.2) The new problem is ⎧ ⎪⎨ ⎪⎩ vt = vxx, v(x, 0) = (x2 − x + 1) − x − 1 = x2 − 2x, v(0, t) = v(2, t) = 0. ➁ We solve the problem for v using the method of separation of variables. Let v(x, t) = X(x)T(t), which gives XT − X T = 0. X X = T T = −λ. From X + λX = 0, we get Xn(x) = an cos √ λx + bn sin √ λx. Using boundary conditions, we have v(0, t) = X(0)T(t) = 0 v(2, t) = X(2)T(t) = 0 ⇒ X(0) = X(2) = 0. Hence, Xn(0) = an = 0, and Xn(x) = bn sin √ λx. Xn(2) = bn sin 2 √ λ = 0 ⇒ 2 √ λ = nπ ⇒ λn = (nπ 2 )2. Xn(x) = bn sin nπx 2 , λn = nπ 2 2 .
  • 310. Partial Differential Equations Igor Yanovsky, 2005 310 With these values of λn, we solve T + nπ 2 2 T = 0 to find Tn(t) = cne−( nπ 2 )2t . v(x, t) = ∞ n=1 Xn(x)Tn(t) = ∞ n=1 ˜cn e−( nπ 2 )2t sin nπx 2 . Coefficients ˜cn are obtained using the initial condition: v(x, 0) = ∞ n=1 ˜cn sin nπx 2 = x2 − 2x. ˜cn = 2 0 (x2 − 2x) sin nπx 2 dx = 0 n is even, − 32 (nπ)3 n is odd. ⇒ v(x, t) = ∞ n=2k−1 − 32 (nπ)3 e−( nπ 2 )2t sin nπx 2 . We now use equation (25.2) to convert back to function u: u(x, t) = v(x, t) + x + 1. u(x, t) = ∞ n=2k−1 − 32 (nπ)3 e−( nπ 2 )2t sin nπx 2 + x + 1. lim t→+∞ u(x, t) = x + 1.
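A short check of the sine coefficients c̃_n = ∫_0^2 (x² − 2x) sin(nπx/2) dx (an added sketch): compare the closed form (0 for even n, −32/(nπ)³ for odd n) with quadrature, and evaluate the truncated series to watch u(x, t) approach x + 1 as t grows.

```python
import numpy as np

xq = np.linspace(0.0, 2.0, 20001); dx = xq[1] - xq[0]
v0 = xq**2 - 2.0*xq                                    # v(x, 0) = x^2 - 2x

for n in range(1, 7):
    quad   = np.sum(v0 * np.sin(n*np.pi*xq/2)) * dx
    closed = 0.0 if n % 2 == 0 else -32.0/(n*np.pi)**3
    print(f"n = {n}:  quadrature {quad:+.6f}   closed form {closed:+.6f}")

def u(x, t, N=199):
    n = np.arange(1, N + 1, 2)                         # only odd modes survive
    c = -32.0/(n*np.pi)**3
    return float(np.sum(c * np.exp(-(n*np.pi/2)**2 * t) * np.sin(n*np.pi*x/2))) + x + 1

for t in [0.1, 1.0, 5.0]:
    print(f"t = {t}:  u(1.3, t) = {u(1.3, t):.6f}    (x + 1 = 2.3)")
```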
  • 311. Partial Differential Equations Igor Yanovsky, 2005 311 Problem (S’96, #6). Let u(x, t) be the solution of the initial-boundary value problem for the heat equation ⎧ ⎪⎨ ⎪⎩ ut = uxx 0 < x < L, t > 0 u(x, 0) = f(x) 0 ≤ x ≤ L ux(0, t) = ux(L, t) = A t > 0 (A = Const). Find v(x) - the limit of u(x, t) when t → ∞. Show that v(x) is one of the inifinitely many solutions of the stationary problem vxx = 0 0 < x < L vx(0) = vx(L) = A. Proof. ➀ First, we need to obtain function v that satisfies vt = vxx and takes 0 boundary conditions. Let • v(x, t) = u(x, t) + (ax + b), (25.3) where a and b are constants to be determined. Then, vt = ut, vxx = uxx. Thus, vt = vxx. We need equation (25.3) to take 0 boundary conditions for vx(0, t) and vx(L, t). vx = ux + a. vx(0, t) = 0 = ux(0, t) + a = A + a ⇒ a = −A, vx(L, t) = 0 = ux(L, t) + a = A + a ⇒ a = −A. We may set b = 0 (infinitely many solutions are possible, one for each b). Thus, (25.3) becomes v(x, t) = u(x, t) − Ax. (25.4) The new problem is ⎧ ⎪⎨ ⎪⎩ vt = vxx, v(x, 0) = f(x) − Ax, vx(0, t) = vx(L, t) = 0. ➁ We solve the problem for v using the method of separation of variables. Let v(x, t) = X(x)T(t), which gives XT − X T = 0. X X = T T = −λ. From X + λX = 0, we get Xn(x) = an cos √ λx + bn sin √ λx. Using boundary conditions, we have vx(0, t) = X (0)T(t) = 0 vx(L, t) = X (L)T(t) = 0 ⇒ X (0) = X (L) = 0.
  • 312. Partial Differential Equations Igor Yanovsky, 2005 312 Xn(x) = −an √ λ sin √ λx + bn √ λ cos √ λx. Hence, Xn(0) = bn √ λn = 0 ⇒ bn = 0; and Xn(x) = an cos √ λx. Xn(L) = −an √ λ sinL √ λ = 0 ⇒ L √ λ = nπ ⇒ λn = (nπ L )2 . Xn(x) = an cos nπx L , λn = nπ L 2 . With these values of λn, we solve T + nπ L 2 T = 0 to find T0(t) = c0, Tn(t) = cne−( nπ L )2t , n = 1, 2, . . .. v(x, t) = ∞ n=1 Xn(x)Tn(t) = ˜c0 + ∞ n=1 ˜cn e−( nπ L )2t cos nπx L . Coefficients ˜cn are obtained using the initial condition: v(x, 0) = ˜c0 + ∞ n=1 ˜cn cos nπx L = f(x) − Ax. L˜c0 = L 0 (f(x) − Ax) dx = L 0 f(x) dx − AL2 2 ⇒ ˜c0 = 1 L L 0 f(x) dx − AL 2 , L 2 ˜cn = L 0 (f(x) − Ax) cos nπx L dx ⇒ ˜cn = 1 L L 0 (f(x) − Ax) cos nπx L dx. ⇒ v(x, t) = 1 L L 0 f(x) dx − AL 2 + ∞ n ˜cn e−( nπ L )2t cos nπx L . We now use equation (25.4) to convert back to function u: u(x, t) = v(x, t) + Ax. u(x, t) = 1 L L 0 f(x) dx − AL 2 + ∞ n ˜cn e−( nπ L )2t cos nπx L + Ax. lim t→+∞ u(x, t) = Ax + b, b arbitrary. To show that v(x) is one of the inifinitely many solutions of the stationary problem vxx = 0 0 < x < L vx(0) = vx(L) = A, we can solve the boundary value problem to obtain v(x, t) = Ax+b, where b is arbitrary.
Heat Equation with Nonhomogeneous Time-Independent BC in N dimensions. The solution of this problem takes a somewhat different, though related, approach than the last few problems. Consider the following initial-boundary value problem:

 u_t = Δu,  x ∈ Ω, t ≥ 0,
 u(x, 0) = f(x),  x ∈ Ω,
 u(x, t) = g(x),  x ∈ ∂Ω, t > 0.

Proof. Let w(x) be the solution of the Dirichlet problem

 Δw = 0, x ∈ Ω,  w(x) = g(x), x ∈ ∂Ω,

and let v(x, t) be the solution of the IBVP for the heat equation with homogeneous BC:

 v_t = Δv,  x ∈ Ω, t ≥ 0,
 v(x, 0) = f(x) − w(x),  x ∈ Ω,
 v(x, t) = 0,  x ∈ ∂Ω, t > 0.

Then u(x, t) = v(x, t) + w(x), and

 lim_{t→∞} u(x, t) = w(x).
Nonhomogeneous Heat Equation with Nonhomogeneous Time-Independent BC in N dimensions. Describe the method of solution of the problem

 u_t = Δu + F(x, t),  x ∈ Ω, t ≥ 0,
 u(x, 0) = f(x),  x ∈ Ω,
 u(x, t) = g(x),  x ∈ ∂Ω, t > 0.

Proof. ❶ We first find u₁, the solution of the homogeneous heat equation (no F(x, t)). Let w(x) be the solution of the Dirichlet problem

 Δw = 0, x ∈ Ω,  w(x) = g(x), x ∈ ∂Ω,

and let v(x, t) be the solution of the IBVP for the heat equation with homogeneous BC:

 v_t = Δv,  x ∈ Ω, t ≥ 0,
 v(x, 0) = f(x) − w(x),  x ∈ Ω,
 v(x, t) = 0,  x ∈ ∂Ω, t > 0.

Then u₁(x, t) = v(x, t) + w(x), and lim_{t→∞} u₁(x, t) = w(x).

❷ The solution u₂ of the nonhomogeneous equation with zero initial data,

 ∂u₂/∂t = Δu₂ + F(x, t) for t > 0, x ∈ Rⁿ,
 u₂(x, 0) = 0 for x ∈ Rⁿ,  (25.5)

is given by Duhamel's principle:

 u₂(x, t) = ∫_0^t ∫_{Rⁿ} K̃(x − y, t − s) F(y, s) dy ds.

Note: u₂(x, t) = 0 on ∂Ω may not be satisfied.

 u(x, t) = v(x, t) + w(x) + ∫_0^t ∫_{Rⁿ} K̃(x − y, t − s) F(y, s) dy ds.
  • 315. Partial Differential Equations Igor Yanovsky, 2005 315 Problem (S’98, #5). Find the solution of ⎧ ⎪⎨ ⎪⎩ ut = uxx, t ≥ 0, 0 < x < 1, u(x, 0) = 0, 0 < x < 1, u(0, t) = 1 − e−t, ux(1, t) = e−t − 1, t > 0. Prove that limt→∞ u(x, t) exists and find it. Proof. ➀ First, we need to obtain function v that satisfies vt = vxx and takes 0 boundary conditions. Let • v(x, t) = u(x, t) + (ax + b) + (c1 cos x + c2 sin x)e−t , (25.6) where a, b, c1, c2 are constants to be determined. Then, vt = ut − (c1 cos x + c2 sinx)e−t , vxx = uxx + (−c1 cos x − c2 sin x)e−t . Thus, vt = vxx. We need equation (25.6) to take 0 boundary conditions for v(0, t) and vx(1, t): v(0, t) = 0 = u(0, t) + b + c1e−t = 1 − e−t + b + c1e−t . Thus, b = −1, c1 = 1, and (25.6) becomes v(x, t) = u(x, t) + (ax − 1) + (cos x + c2 sin x)e−t . (25.7) vx(x, t) = ux(x, t) + a + (− sinx + c2 cos x)e−t , vx(1, t) = 0 = ux(1, t) + a + (− sin1 + c2 cos 1)e−t = −1 + a + (1 − sin 1 + c2 cos 1)e−t . Thus, a = 1, c2 = sin 1−1 cos 1 , and equation (25.7) becomes v(x, t) = u(x, t) + (x − 1) + (cos x + sin 1 − 1 cos 1 sin x)e−t . (25.8) Initial condition tranforms to: v(x, 0) = u(x, 0) + (x − 1) + (cos x + sin 1 − 1 cos 1 sin x) = (x − 1) + (cos x + sin1 − 1 cos 1 sinx). The new problem is ⎧ ⎪⎨ ⎪⎩ vt = vxx, v(x, 0) = (x − 1) + (cos x + sin 1−1 cos 1 sinx), v(0, t) = 0, vx(1, t) = 0. ➁ We solve the problem for v using the method of separation of variables. Let v(x, t) = X(x)T(t), which gives XT − X T = 0. X X = T T = −λ.
  • 316. Partial Differential Equations Igor Yanovsky, 2005 316 From X + λX = 0, we get Xn(x) = an cos √ λx + bn sin √ λx. Using the first boundary condition, we have v(0, t) = X(0)T(t) = 0 ⇒ X(0) = 0. Hence, Xn(0) = an = 0, and Xn(x) = bn sin √ λx. We also have vx(1, t) = X (1)T(t) = 0 ⇒ X (1) = 0. Xn(x) = √ λbn cos √ λx, Xn(1) = √ λbn cos √ λ = 0, cos √ λ = 0, √ λ = nπ + π 2 . Thus, Xn(x) = bn sin nπ + π 2 x, λn = nπ + π 2 2 . With these values of λn, we solve T + nπ + π 2 2 T = 0 to find Tn(t) = cne−(nπ+π 2 )2t . v(x, t) = ∞ n=1 Xn(x)Tn(t) = ∞ n=1 ˜bn sin nπ + π 2 x e−(nπ+π 2 )2t . We now use equation (25.8) to convert back to function u: u(x, t) = v(x, t) − (x − 1) − (cos x + sin 1 − 1 cos 1 sin x)e−t . u(x, t) = ∞ n=1 ˜bn sin nπ + π 2 x e−(nπ+π 2 )2t − (x − 1) − (cos x + sin 1 − 1 cos 1 sin x)e−t . Coefficients ˜bn are obtained using the initial condition: u(x, 0) = ∞ n=1 ˜bn sin nπ + π 2 x − (x − 1) − (cos x + sin1 − 1 cos 1 sinx). ➂ Finally, we can check that the differential equation and the boundary conditions are satisfied: u(0, t) = 1 − (1 + 0)e−t = 1 − e−t . ux(x, t) = ∞ n=1 ˜bn nπ + π 2 cos nπ + π 2 x e−(nπ+π 2 )2t − 1 + (sinx − sin1 − 1 cos 1 cos x)e−t , ux(1, t) = −1 + (sin1 − sin 1 − 1 cos 1 cos 1)e−t = −1 + e−t . ut = ∞ n=1 −˜bn nπ + π 2 2 sin nπ + π 2 x e−(nπ+π 2 )2t + (cos x + sin1 − 1 cos 1 sin x)e−t = uxx.
  • 317. Partial Differential Equations Igor Yanovsky, 2005 317 Problem (F’02, #6). The temperature of a rod insulated at the ends with an ex- ponentially decreasing heat source in it is a solution of the following boundary value problem: ⎧ ⎪⎨ ⎪⎩ ut = uxx + e−2t g(x) for (x, t) ∈ [0, 1] × R+ ux(0, t) = ux(1, t) = 0 u(x, 0) = f(x). Find the solution to this problem by writing u as a cosine series, u(x, t) = ∞ n=0 an(t) cos nπx, and determine limt→∞ u(x, t). Proof. Let g accept an expansion in eigenfunctions g(x) = b0 + ∞ n=1 bn cos nπx with bn = 2 1 0 g(x) cosnπx dx. Plugging in the PDE gives: a0(t) + ∞ n=1 an(t) cosnπx = − ∞ n=1 n2 π2 an(t) cos nπx + b0e−2t + e−2t ∞ n=1 bn cos nπx, which gives a0(t) = b0e−2t, an(t) + n2π2an(t) = bne−2t, n = 1, 2, . . .. Adding homogeneous and particular solutions of the above ODEs, we obtain the solu- tions a0(t) = c0 − b0 2 e−2t , an(t) = cne−n2π2t − bn 2−n2π2 e−2t , n = 1, 2, . . ., for some constants cn, n = 0, 1, 2, . . .. Thus, u(x, t) = ∞ n=0 cne−n2π2t − bn 2 − n2π2 e−2t cos nπx. Initial condition gives u(x, 0) = ∞ n=0 cn − bn 2 − n2π2 cos nπx = f(x), As, t → ∞, the only mode that survives is n = 0: u(x, t) → c0 + b0 2 as t → ∞.
  • 318. Partial Differential Equations Igor Yanovsky, 2005 318 Problem (F’93, #4). a) Assume f, g ∈ C∞ . Give the compatibility conditions which f and g must satisfy if the following problem is to possess a solution. u = f(x) x ∈ Ω ∂u ∂n (s) = g(s) s ∈ ∂Ω. Show that your condition is necessary for a solution to exist. b) Give an explicit solution to ⎧ ⎪⎨ ⎪⎩ ut = uxx + cos x x ∈ [0, 2π] ux(0, t) = ux(2π, t) = 0 t > 0 u(x, 0) = cos x + cos 2x x ∈ [0, 2π]. c) Does there exist a steady state solution to the problem in (b) if ux(0) = 1 ux(2π) = 0 ? Explain your answer. Proof. a) Integrating the equation and using Green’s identity gives: Ω f(x) dx = Ω u dx = ∂Ω ∂u ∂n ds = ∂Ω g(s) ds. b) With • v(x, t) = u(x, t) − cos x the problem above transforms to ⎧ ⎪⎨ ⎪⎩ vt = vxx vx(0, t) = vx(2π, t) = 0 v(x, 0) = cos 2x. We solve this problem for v using the separation of variables. Let v(x, t) = X(x)T(t), which gives XT = X T. X X = T T = −λ. From X + λX = 0, we get Xn(x) = an cos √ λx + bn sin √ λx. Xn(x) = − √ λnan sin √ λx + √ λnbn cos √ λx. Using boundary conditions, we have vx(0, t) = X (0)T(t) = 0 vx(2π, t) = X (2π)T(t) = 0 ⇒ X (0) = X (2π) = 0. Hence, Xn(0) = √ λnbn = 0, and Xn(x) = an cos √ λnx. Xn(2π) = − √ λnan sin √ λn2π = 0 ⇒ √ λn = n 2 ⇒ λn = (n 2 )2. Thus, Xn(x) = an cos nx 2 , λn = n 2 2
  • 319. Partial Differential Equations Igor Yanovsky, 2005 319 With these values of λn, we solve T + n 2 2 T = 0 to find Tn(t) = cne−( n 2 )2t . v(x, t) = ∞ n=0 Xn(x)Tn(t) = ∞ n=0 ˜an e−( n 2 )2t cos nx 2 . Initial condition gives v(x, 0) = ∞ n=0 ˜an cos nx 2 = cos 2x. Thus, ˜a4 = 1, ˜an = 0, n = 4. Hence, v(x, t) = e−4t cos 2x. u(x, t) = v(x, t) + cos x = e−4t cos 2x + cos x. c) Does there exist a steady state solution to the problem in (b) if ux(0) = 1 ux(2π) = 0 ? Explain your answer. c) Set ut = 0. We have uxx + cos x = 0 x ∈ [0, 2π] ux(0) = 1, ux(2π) = 0. uxx = − cos x, ux = − sin x + C, u(x) = cos x + Cx + D. Boundary conditions give: 1 = ux(0) = C, 0 = ux(2π) = C ⇒ contradiction There exists no steady state solution. We may use the result we obtained in part (a) with uxx = cos x = f(x). We need Ω f(x) dx = ∂Ω ∂u ∂n ds, 2π 0 cos x dx =0 = ux(2π) − ux(0) = −1 given .
  • 320. Partial Differential Equations Igor Yanovsky, 2005 320 Problem (F’96, #7). Solve the parabolic problem u v t = 1 1 2 0 2 u v xx , 0 ≤ x ≤ π, t > 0 u(x, 0) = sinx, u(0, t) = u(π, t) = 0, v(x, 0) = sin x, v(0, t) = v(π, t) = 0. Prove the energy estimate (for general initial data) π x=0 [u2 (x, t) + v2 (x, t)] dx ≤ c π x=0 [u2 (x, 0) + v2 (x, 0)] dx for come constant c. Proof. We can solve the second equation for v and then use the value of v to solve the first equation for u. 73 ➀ We have ⎧ ⎪⎨ ⎪⎩ vt = 2vxx, 0 ≤ x ≤ π, t > 0 v(x, 0) = sinx, v(0, t) = v(π, t) = 0. Assume v(x, t) = X(x)T(t), then substitution in the PDE gives XT = 2X T. T T = 2 X X = −λ. From X + λ 2 X = 0, we get Xn(x) = an cos λ 2 x + bn sin λ 2 x. Boundary conditions give v(0, t) = X(0)T(t) = 0 v(π, t) = X(π)T(t) = 0 ⇒ X(0) = X(π) = 0. Thus, Xn(0) = an = 0, and Xn(x) = bn sin λ 2 x. Xn(π) = bn sin λ 2 π = 0. Hence λ 2 = n, or λ = 2n2 . λ = 2n2 , Xn(x) = bn sinnx. With these values of λn, we solve T + 2n2T = 0 to get Tn(t) = cne−2n2t. Thus, the solution may be written in the form v(x, t) = ∞ n=1 ˜ane−2n2t sin nx. From initial condition, we get v(x, 0) = ∞ n=1 ˜an sinnx = sinx. Thus, ˜a1 = 1, ˜an = 0, n = 2, 3, . . .. v(x, t) = e−2t sin x. 73 Note that if the matrix was fully inseparable, we would have to find eigenvalues and eigenvectors, just as we did for the hyperbolic systems.
  • 321. Partial Differential Equations Igor Yanovsky, 2005 321 ➁ We have ⎧ ⎪⎨ ⎪⎩ ut = uxx − 1 2 e−2t sinx, 0 ≤ x ≤ π, t > 0 u(x, 0) = sin x, u(0, t) = u(π, t) = 0. Let u(x, t) = ∞ n=1 un(t) sinnx. Plugging this into the equation, we get ∞ n=1 un(t) sinnx + ∞ n=1 n2 un(t) sinnx = − 1 2 e−2t sin x. For n = 1: u1(t) + u1(t) = − 1 2 e−2t . Combining homogeneous and particular solution of the above equation, we obtain: u1(t) = 1 2 e−2t + c1e−t . For n = 2, 3, . . .: un(t) + n2 un(t) = 0, un(t) = cne−n2t . Thus, u(x, t) = 1 2 e−2t + c1e−t sinx + ∞ n=2 cne−n2t sinnx = 1 2 e−2t sinx + ∞ n=1 cne−n2t sinnx. From initial condition, we get u(x, 0) = 1 2 sin x + ∞ n=1 cn sinnx = sin x. Thus, c1 = 1 2 , cn = 0, n = 2, 3, . . .. u(x, t) = 1 2 sinx (e−2t + e−t ). To prove the energy estimate (for general initial data) π x=0 [u2 (x, t) + v2 (x, t)] dx ≤ c π x=0 [u2 (x, 0) + v2 (x, 0)] dx for come constant c, we assume that u(x, 0) = ∞ n=1 an sinnx, v(x, 0) = ∞ n=1 bn sinnx.
  • 322. Partial Differential Equations Igor Yanovsky, 2005 322 The general solutions are obtained by the same method as above u(x, t) = 1 2 e−2t sinx + ∞ n=1 cne−n2 t sinnx, v(x, t) = ∞ n=1 bne−2n2t sinnx. π x=0 [u2 (x, t) + v2 (x, t)] dx = π x=0 1 2 e−2t sinx + ∞ n=1 cne−n2t sinnx 2 + ∞ n=1 bne−2n2t sinnx 2 dx ≤ ∞ n=1 (b2 n + a2 n) π x=0 sin2 nx dx ≤ π x=0 [u2 (x, 0) + v2 (x, 0)] dx.
  • 323. Partial Differential Equations Igor Yanovsky, 2005 323 26 Problems: Eigenvalues of the Laplacian - Laplace The 2D LAPLACE Equation (eigenvalues/eigenfuctions of the Laplacian). Consider ⎧ ⎪⎨ ⎪⎩ uxx + uyy + λu = 0 in Ω u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b, u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a. (26.1) Proof. We can solve this problem by separation of variables. Let u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY + λXY = 0. X X + Y Y + λ = 0. Letting λ = μ2 + ν2 and using boundary conditions, we find the equations for X and Y : X + μ2 X = 0 Y + ν2 Y = 0 X(0) = X(a) = 0 Y (0) = Y (b) = 0. The solutions of these one-dimensional eigenvalue problems are μm = mπ a νn = nπ b Xm(x) = sin mπx a Yn(y) = sin nπy b , where m, n = 1, 2, . . .. Thus we obtain solutions of (26.1) of the form λmn = π2 m2 a2 + n2 b2 umn(x, y) = sin mπx a sin nπy b , where m, n = 1, 2, . . .. Observe that the eigenvalues {λmn}∞ m,n=1 are positive. The smallest eigenvalue λ11 has only one eigenfunction u11(x, y) = sin(πx/a) sin(πy/b); notice that u11 is positive in Ω. Other eigenvalues λ may correspond to more than one choice of m and n; for example, in the case a = b we have λnm = λnm. For this λ, there are two linearly independent eigenfunctions. However, for a particular value of λ there are at most finitely many linearly independent eigenfunctions. Moreover, b 0 a 0 umn(x, y) um n (x, y) dx dy = b 0 a 0 sin mπx a sin nπy b sin m πx a sin n πy b dx dy = a 2 b 0 sin nπy b sin n πy b dy 0 = ab 4 if m = m and n = n 0 if m = m or n = n . In particular, the {umn} are pairwise orthogonal. We could normalize each umn by a scalar multiple (i.e. multiply by 4/ab) so that ab/4 above becomes 1. Let us change the notation somewhat so that each eigenvalue λn corresponds to a particular eigenfunction φn(x). If we choose an orthonormal basis of eigenfunctions in each eigenspace, we may arrange that {φn}∞ n=1 is pairwise orthonormal: Ω φn(x)φm(x) dx = 1 if m = n 0 if m = n.
In this notation, the eigenfunction expansion of $f(x)$ defined on $\Omega$ becomes
$$f(x) \sim \sum_{n=1}^{\infty} a_n \phi_n(x), \qquad \text{where } a_n = \int_\Omega f(x)\,\phi_n(x)\, dx.$$
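A minimal numerical sketch of these formulas is given below; it is not part of the original text. It assumes a concrete rectangle ($a = 1$, $b = 2$) and the sample function $f(x,y) = x(a-x)y(b-y)$, checks the orthonormality relation for the normalized eigenfunctions $\phi_{mn} = \tfrac{2}{\sqrt{ab}}\sin\tfrac{m\pi x}{a}\sin\tfrac{n\pi y}{b}$ with a simple midpoint rule, and computes a few expansion coefficients $a_{mn}$.

# Numerical sketch: orthonormality of phi_mn and eigenfunction-expansion coefficients
# on the rectangle (0,a) x (0,b), using a midpoint quadrature rule.
import numpy as np

a, b = 1.0, 2.0
N = 400                                     # quadrature points per direction
x = (np.arange(N) + 0.5) * a / N
y = (np.arange(N) + 0.5) * b / N
X, Y = np.meshgrid(x, y, indexing='ij')
dA = (a / N) * (b / N)

def phi(m, n):
    return 2.0 / np.sqrt(a * b) * np.sin(m * np.pi * X / a) * np.sin(n * np.pi * Y / b)

# <phi_11, phi_11> ~ 1 and <phi_11, phi_21> ~ 0, up to quadrature error.
print(np.sum(phi(1, 1) * phi(1, 1)) * dA)
print(np.sum(phi(1, 1) * phi(2, 1)) * dA)

# a_mn = \int_Omega f phi_mn dx dy for a sample f vanishing on the boundary (an assumption).
f = X * (a - X) * Y * (b - Y)
coeff = {(m, n): np.sum(f * phi(m, n)) * dA for m in range(1, 4) for n in range(1, 4)}
print(coeff[(1, 1)], coeff[(2, 1)])          # even-index coefficients are ~0 by symmetry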
  • 325. Partial Differential Equations Igor Yanovsky, 2005 325 Problem (S’96, #4). Let D denote the rectangular D = {(x, y) ∈ R2 : 0 < x < a, 0 < y < b}. Find the eigenvalues of the following Dirichlet problem: ( + λ)u = 0 in D u = 0 on ∂D. Proof. The problem may be rewritten as ⎧ ⎪⎨ ⎪⎩ uxx + uyy + λu = 0 in Ω u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b, u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a. We may assume that the eigenvalues λ are positive, λ = μ2 + ν2 . Then, λmn = π2 m2 a2 + n2 b2 umn(x, y) = sin mπx a sin nπy b , m, n = 1, 2, . . .. Problem (W’04, #1). Consider the differential equation: ∂2u(x, y) ∂x2 + ∂2u(x, y) ∂y2 + λu(x, y) = 0 (26.2) in the strip {(x, y), 0 < y < π, −∞ < x < +∞} with boundary conditions u(x, 0) = 0, u(x, π) = 0. (26.3) Find all bounded solutions of the boundary value problem (26.4), (26.5) when a) λ = 0, b) λ > 0, c) λ < 0. Proof. a) λ = 0. We have uxx + uyy = 0. Assume u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY = 0. Boundary conditions give u(x, 0) = X(x)Y (0) = 0 u(x, π) = X(x)Y (π) = 0 ⇒ Y (0) = Y (π) = 0. Method I: We have X X = − Y Y = −c, c > 0. From X + cX = 0, we have Xn(x) = an cos √ cx + bn sin √ cx. From Y − cY = 0, we have Yn(y) = cne− √ cy + dne √ cy . Y (0) = cn + dn = 0 ⇒ cn = −dn.
  • 326. Partial Differential Equations Igor Yanovsky, 2005 326 Y (π) = cne− √ cπ − cne √ cπ = 0 ⇒ cn = 0 ⇒ Yn(y) = 0. ⇒ u(x, y) = X(x)Y (y) = 0. Method II: We have X X = − Y Y = c, c > 0. From X − cX = 0, we have Xn(x) = ane− √ cx + bne √ cx. Since we look for bounded solutions for −∞ < x < ∞, an = bn = 0 ⇒ Xn(x) = 0. From Y + cY = 0, we have Yn(y) = cn cos √ cy + dn sin √ cy. Y (0) = cn = 0, Y (π) = dn sin √ cπ = 0 ⇒ √ c = n ⇒ c = n2. ⇒ Yn(y) = dn sin nx = 0. ⇒ u(x, y) = X(x)Y (y) = 0. b) λ > 0. We have X X + Y Y + λ = 0. Letting λ = μ2 + ν2, and using boundary conditions for Y , we find the equations: X + μ2 X = 0 Y + ν2 Y = 0 Y (0) = Y (π) = 0. The solutions of these one-dimensional eigenvalue problems are Xm(x) = am cos μmx + bm sinμmx. νn = n, Yn(y) = dn sin ny, where m, n = 1, 2, . . .. u(x, y) = ∞ m,n=1 umn(x, y) = ∞ m,n=1 (am cos μmx + bm sinμmx) sinny. c) λ < 0. We have uxx + uyy + λu = 0, u(x, 0) = 0, u(x, π) = 0. u ≡ 0 is the solution to this equation. We will show that this solution is unique. Let u1 and u2 be two solutions, and consider w = u1 − u2. Then, w + λw = 0, w(x, 0) = 0, w(x, π) = 0. Multiply the equation by w and integrate: w w + λw2 = 0, Ω w w dx + λ Ω w2 dx = 0, ∂Ω w ∂w ∂n ds =0 − Ω |∇w|2 dx + λ Ω w2 dx = 0, Ω |∇w|2 dx ≥0 = λ Ω w2 dx ≤0 .
Thus, w ≡ 0 and the solution u(x, y) ≡ 0 is unique.
  • 328. Partial Differential Equations Igor Yanovsky, 2005 328 Problem (F’95, #5). Find all bounded solutions for the following boundary value problem in the strip 0 < x < a, −∞ < y < ∞, ( + k2 )u = 0 (k = Const > 0), u(0, y) = 0, ux(a, y) = 0. In particular, show that when ak ≤ π, the only bounded solution to this problem is u ≡ 0. Proof. Let u(x, y) = X(x)Y (y), then we have X Y + XY + k2XY = 0. X X + Y Y + k2 = 0. Letting k2 = μ2 + ν2 and using boundary conditions, we find: X + μ2 X = 0, Y + ν2 Y = 0. X(0) = X (a) = 0. The solutions of these one-dimensional eigenvalue problems are μm = (m − 1 2 )π a , Xm(x) = sin (m − 1 2 )πx a Yn(y) = cn cos νny + dn sinνny, where m, n = 1, 2, . . .. Thus we obtain solutions of the form k2 mn = (m − 1 2 )π a 2 +ν2 n, umn(x, y) = sin (m − 1 2)πx a cn cos νny+dn sin νny , where m, n = 1, 2, . . .. u(x, y) = ∞ m,n=1 umn(x, y) = ∞ m,n=1 sin (m − 1 2 )πx a cn cos νny + dn sinνny . • We can take an alternate approach and prove the second part of the question. We have X Y + XY + k2 XY = 0, − Y Y = X X + k2 = c2 . We obtain Yn(y) = cn cos cy + dn sin cy. The second equation gives X + k2 X = c2 X, X + (k2 − c2 )X = 0, Xm(x) = ame √ c2−k2x + bme √ c2−k2x . Thus, Xm(x) is bounded only if k2 − c2 > 0, (if k2 − c2 = 0, X = 0, and Xm(x) = amx + bm, BC’s give Xm(x) = πx, unbounded), in which case Xm(x) = am cos k2 − c2 x + bm sin k2 − c2 x.
  • 329. Partial Differential Equations Igor Yanovsky, 2005 329 Boundary conditions give Xm(0) = am = 0. Xm(x) = bm k2 − c2 cos k2 − c2 x, Xm(a) = bm k2 − c2 cos k2 − c2 a = 0, k2 − c2 a = mπ − π 2 , m = 1, 2, . . ., k2 − c2 = π a m − 1 2 2 , k2 = π a 2 m − 1 2 2 + c2 , a2 k2 > π2 m − 1 2 2 , ak > π m − 1 2 , m = 1, 2, . . .. Thus, bounded solutions exist only when ak > π 2 . Problem (S’90, #2). Show that the boundary value problem ∂2u(x, y) ∂x2 + ∂2u(x, y) ∂y2 + k2 u(x, y) = 0, (26.4) where −∞ < x < +∞, 0 < y < π, k > 0 is a constant, u(x, 0) = 0, u(x, π) = 0 (26.5) has a bounded solution if and only if k ≥ 1. Proof. We have uxx + uyy + k2 u = 0, X Y + XY + k2 XY = 0, − X X = Y Y + k2 = c2 . We obtain Xm(x) = am cos cx + bm sin cx. The second equation gives Y + k2 Y = c2 Y, Y + (k2 − c2 )Y = 0, Yn(y) = cne √ c2−k2y + dne √ c2−k2y . Thus, Yn(y) is bounded only if k2 −c2 > 0, (if k2 −c2 = 0, Y = 0, and Yn(y) = cny+dn, BC’s give Y ≡ 0), in which case Yn(y) = cn cos k2 − c2 y + dn sin k2 − c2 y. Boundary conditions give Yn(0) = cn = 0. Yn(π) = dn sin √ k2 − c2 π = 0 ⇒ √ k2 − c2 = n ⇒ k2 − c2 = n2 ⇒ k2 = n2 + c2 , n = 1, 2, . . .. Hence, k > n, n = 1, 2, . . ..
Thus, bounded solutions exist if and only if $k \ge 1$.
Note: if $k = 1$, then $n = 1$ and $c = 0$; the equation for $X$ becomes $X'' = 0$, whose bounded solutions are constants, while $Y_1(y) = \sin y$, so $u = C\sin y$ is a bounded solution. More generally, the bounded solutions are
$$u(x,y) = \sum_{1 \le n \le k} X_n(x)\,\sin ny, \qquad X_n(x) = a_n\cos\big(\sqrt{k^2 - n^2}\,x\big) + b_n\sin\big(\sqrt{k^2 - n^2}\,x\big),$$
with $X_n$ constant in the borderline case $n = k$.
  • 331. Partial Differential Equations Igor Yanovsky, 2005 331 McOwen, 4.4 #7; 266B Ralston Hw. Show that the boundary value problem −∇ · a(x)∇u + b(x)u = λu in Ω u = 0 on ∂Ω has only trivial solution with λ ≤ 0, when b(x) ≥ 0 and a(x) > 0 in Ω. Proof. Multiplying the equation by u and integrating over Ω, we get Ω −u∇ · a∇u dx + Ω bu2 dx = λ Ω u2 dx. Since ∇ · (ua∇u) = u∇ · a∇u + a|∇u|2 , we have Ω −∇ · (ua∇u) dx + Ω a|∇u|2 dx + Ω bu2 dx = λ Ω u2 dx. (26.6) Using divergence theorem, we obtain ∂Ω − u =0 a ∂u ∂n ds + Ω a|∇u|2 dx + Ω bu2 dx = λ Ω u2 dx, Ω a >0 |∇u|2 dx + Ω b ≥0 u2 dx = λ ≤0 Ω u2 dx, Thus, ∇u = 0 in Ω, and u is constant. Since u = 0 on ∂Ω, u ≡ 0 on Ω. Similar Problem I: Note that this argument also works with Neumann B.C.: −∇ · a(x)∇u + b(x)u = λu in Ω ∂u/∂n = 0 on ∂Ω Using divergence theorem, (26.6) becomes ∂Ω −ua ∂u ∂n =0 ds + Ω a|∇u|2 dx + Ω bu2 dx = λ Ω u2 dx, Ω a >0 |∇u|2 dx + Ω b ≥0 u2 dx = λ ≤0 Ω u2 dx. Thus, ∇u = 0, and u = const on Ω. Hence, we now have Ω b ≥0 u2 dx = λ ≤0 Ω u2 dx, which implies λ = 0. This gives the useful information that for the eigenvalue problem74 −∇ · a(x)∇u + b(x)u = λu ∂u/∂n = 0, λ = 0 is an eigenvalue, its eigenspace is the set of constants, and all other λ’s are positive. 74 In Ralston’s Hw#7 solutions, there is no ‘-’ sign in front of ∇ · a(x)∇u below, which is probably a typo.
  • 332. Partial Differential Equations Igor Yanovsky, 2005 332 Similar Problem II: If λ ≤ 0, we show that the only solution to the problem below is the trivial solution. u + λu = 0 in Ω u = 0 on ∂Ω Ω u u dx + λ Ω u2 dx = 0, ∂Ω u =0 ∂u ∂n ds − Ω |∇u|2 dx + λ ≤0 Ω u2 dx = 0. Thus, ∇u = 0 in Ω, and u is constant. Since u = 0 on ∂Ω, u ≡ 0 on Ω.
  • 333. Partial Differential Equations Igor Yanovsky, 2005 333 27 Problems: Eigenvalues of the Laplacian - Poisson The ND POISSON Equation (eigenvalues/eigenfunctions of the Laplacian). Suppose we want to find the eigenfunction expansion of the solution of u = f in Ω u = 0 on ∂Ω, when f has the expansion in the orthonormal Dirichlet eigenfunctions φn: f(x) ∼ ∞ n=1 anφn(x), where an = Ω f(x)φn(x) dx. Proof. Writing u = cnφn and inserting into −λu = f, we get ∞ n=1 −λncnφn = ∞ n=1 anφn(x). Thus, cn = −an/λn, and u(x) = − ∞ n=1 anφn(x) λn . The 1D POISSON Equation (eigenvalues/eigenfunctions of the Laplacian). For the boundary value problem u = f(x) u(0) = 0, u(L) = 0, the related eigenvalue problem is φ = −λφ φ(0) = 0, φ(L) = 0. The eigenvalues are λn = (nπ/L)2, and the corresponding eigenfunctions are sin(nπx/L), n = 1, 2, . . .. Writing u = cnφn = cn sin(nπx/L) and inserting into −λu = f, we get ∞ n=1 −cn nπ L 2 sin nπx L = f(x), L 0 ∞ n=1 −cn nπ L 2 sin nπx L sin mπx L dx = L 0 f(x) sin mπx L dx, −cn nπ L 2 L 2 = L 0 f(x) sin nπx L dx, cn = − 2 L L 0 f(x) sin(nπx/L) dx (nπ/L)2 .
$$u(x) = \sum_n c_n \sin\frac{n\pi x}{L} = \sum_{n=1}^{\infty}\Bigg[-\,\frac{\frac{2}{L}\int_0^L f(\xi)\sin(n\pi\xi/L)\, d\xi}{(n\pi/L)^2}\Bigg]\sin\frac{n\pi x}{L},$$
$$u(x) = \int_0^L f(\xi)\,\underbrace{\Bigg[-\frac{2}{L}\sum_{n=1}^{\infty}\frac{\sin(n\pi x/L)\,\sin(n\pi\xi/L)}{(n\pi/L)^2}\Bigg]}_{=\,G(x,\xi)}\, d\xi.$$
See a similar, but more complicated, problem in Sturm-Liouville Problems (S'92, #2(c)).
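A small numerical sketch of the 1D formula above follows (not part of the original text). It assumes $L = 1$ and the sample right-hand side $f(x) = \sin(2\pi x)$, for which the exact solution of $u'' = f$, $u(0) = u(1) = 0$, is $u(x) = -\sin(2\pi x)/(2\pi)^2$; the code builds the sine series with the coefficients $c_n$ above and compares.

# Sketch: solve u'' = f on (0,L), u(0)=u(L)=0, via the sine-series coefficients
#   c_n = -[(2/L) \int_0^L f(xi) sin(n pi xi/L) dxi] / (n pi/L)^2.
import numpy as np

L = 1.0
Nq = 2000                                   # quadrature points (midpoint rule)
xi = (np.arange(Nq) + 0.5) * L / Nq
dxi = L / Nq

f = lambda s: np.sin(2 * np.pi * s)         # sample f (an assumption); exact u known

x = np.linspace(0.0, L, 201)
u = np.zeros_like(x)
for n in range(1, 31):
    fn = np.sum(f(xi) * np.sin(n * np.pi * xi / L)) * dxi
    cn = -(2.0 / L) * fn / (n * np.pi / L) ** 2
    u += cn * np.sin(n * np.pi * x / L)

u_exact = -np.sin(2 * np.pi * x) / (2 * np.pi) ** 2
print(np.max(np.abs(u - u_exact)))          # small: only quadrature error remains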
  • 335. Partial Differential Equations Igor Yanovsky, 2005 335 Example: Eigenfunction Expansion of the GREEN’s Function. Suppose we fix x and attempt to expand the Green’s function G(x, y) in the orthonormal eigenfunctions φn(y): G(x, y) ∼ ∞ n=1 an(x)φn(y), where an(x) = Ω G(x, z)φn(z) dz. Proof. We can rewrite u + λu = 0 in Ω, u = 0 on ∂Ω, as an integral equation 75 u(x) + λ Ω G(x, y)u(y) dy = 0. Suppose, u(x) = cnφn(x). Plugging this into , we get ∞ m=1 cmφm(x) + λ Ω ∞ n=1 an(x)φn(y) ∞ m=1 cmφm(y) dy = 0, ∞ m=1 cmφm(x) + λ ∞ n=1 an(x) ∞ m=1 cm Ω φn(y)φm(y) dy = 0, ∞ n=1 cnφn(x) + ∞ n=1 λan(x)cn = 0, ∞ n=1 cn φn(x) + λan(x) = 0, an(x) = − φn(x) λn . Thus, G(x, y) ∼ ∞ n=1 − φn(x)φn(y) λn . 75 See the section: ODE - Integral Equations.
  • 336. Partial Differential Equations Igor Yanovsky, 2005 336 The 2D POISSON Equation (eigenvalues/eigenfunctions of the Laplacian). Solve the boundary value problem ⎧ ⎪⎨ ⎪⎩ uxx + uyy = f(x, y) for 0 < x < a, 0 < y < b u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b, u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a. (27.1) f(x, y) ∈ C2, f(x, y) = 0 if x = 0, x = a, y = 0, y = b, f(x, y) = 2 √ ab ∞ m,n=1 cmn sin mπx a sin nπy b . Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian. ⎧ ⎪⎨ ⎪⎩ uxx + uyy + λu = 0 in Ω u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b, u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a. Let u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY + λXY = 0. X X + Y Y + λ = 0. Letting λ = μ2 + ν2 and using boundary conditions, we find the equations for X and Y : X + μ2 X = 0 Y + ν2 Y = 0 X(0) = X(a) = 0 Y (0) = Y (b) = 0. The solutions of these one-dimensional eigenvalue problems are μm = mπ a νn = nπ b Xm(x) = sin mπx a Yn(y) = sin nπy b , where m, n = 1, 2, . . .. Thus we obtain eigenvalues and normalized eigenfunctions of the Laplacian: λmn = π2 m2 a2 + n2 b2 φmn(x, y) = 2 √ ab sin mπx a sin nπy b , where m, n = 1, 2, . . .. Note that f(x, y) = ∞ m,n=1 cmnφmn. ➁ Second, writing u(x, y) = ˜cmnφmn and inserting into −λu = f, we get − ∞ m,n=1 λmn˜cmnφmn(x, y) = ∞ m,n=1 cmnφmn(x, y). Thus, ˜cmn = − cmn λmn . u(x, y) = − ∞ n=1 cmn λmn φmn(x, y),
with $\lambda_{mn}$, $\phi_{mn}(x)$ given above, and $c_{mn}$ given by
$$\int_0^b\!\!\int_0^a f(x,y)\,\phi_{mn}\, dx\, dy = \int_0^b\!\!\int_0^a \sum_{m',n'=1}^{\infty} c_{m'n'}\,\phi_{m'n'}\,\phi_{mn}\, dx\, dy = c_{mn}.$$
  • 338. Partial Differential Equations Igor Yanovsky, 2005 338 28 Problems: Eigenvalues of the Laplacian - Wave In the section on the wave equation, we considered an initial boundary value problem for the one-dimensional wave equation on an interval, and we found that the solu- tion could be obtained using Fourier series. If we replace the Fourier series by an expansion in eigenfunctions, we can consider an initial/boundary value problem for the n-dimensional wave equation. The ND WAVE Equation (eigenvalues/eigenfunctions of the Laplacian). Consider ⎧ ⎪⎨ ⎪⎩ utt = u for x ∈ Ω, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) for x ∈ Ω u(x, t) = 0 for x ∈ ∂Ω, t > 0. Proof. For g, h ∈ C2(Ω) with g = h = 0 on ∂Ω, we have eigenfunction expansions g(x) = ∞ n=1 anφn(x) and h(x) = ∞ n=1 bnφn(x). Assume the solution u(x, t) may be expanded in the eigenfunctions with coefficients depending on t: u(x, t) = ∞ n=1 un(t)φn(x). This implies ∞ n=1 un(t)φn(x) = − ∞ n=1 λnun(t)φn(x), un(t) + λnun(t) = 0 for each n. Since λn > 0, this ordinary differential equation has general solution un(t) = An cos λnt + Bn sin λnt. Thus, u(x, t) = ∞ n=1 An cos λnt + Bn sin λnt φn(x), ut(x, t) = ∞ n=1 − λnAn sin λnt + λnBn cos λnt φn(x), u(x, 0) = ∞ n=1 Anφn(x) = g(x), ut(x, 0) = ∞ n=1 λnBnφn(x) = h(x). Comparing with , we obtain An = an, Bn = bn √ λn . Thus, the solution is given by u(x, t) = ∞ n=1 an cos λnt + bn √ λn sin λnt φn(x),
with
$$a_n = \int_\Omega g(x)\,\phi_n(x)\, dx, \qquad b_n = \int_\Omega h(x)\,\phi_n(x)\, dx.$$
  • 340. Partial Differential Equations Igor Yanovsky, 2005 340 The 2D WAVE Equation (eigenvalues/eigenfunctions of the Laplacian). Let Ω = (0, a) × (0, b) and consider ⎧ ⎪⎨ ⎪⎩ utt = uxx + uyy for x ∈ Ω, t > 0 u(x, 0) = g(x), ut(x, 0) = h(x) for x ∈ Ω u(x, t) = 0 for x ∈ ∂Ω, t > 0. (28.1) Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian. ⎧ ⎪⎨ ⎪⎩ uxx + uyy + λu = 0 in Ω u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b, u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a. Let u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY + λXY = 0. X X + Y Y + λ = 0. Letting λ = μ2 + ν2 and using boundary conditions, we find the equations for X and Y : X + μ2 X = 0 Y + ν2 Y = 0 X(0) = X(a) = 0 Y (0) = Y (b) = 0. The solutions of these one-dimensional eigenvalue problems are μm = mπ a νn = nπ b Xm(x) = sin mπx a Yn(y) = sin nπy b , where m, n = 1, 2, . . .. Thus we obtain eigenvalues and normalized eigenfunctions of the Laplacian: λmn = π2 m2 a2 + n2 b2 φmn(x, y) = 2 √ ab sin mπx a sin nπy b , where m, n = 1, 2, . . .. ➁ Second, we solve the Wave Equation (28.1) using the “space” eigenfunctions. For g, h ∈ C2(Ω) with g = h = 0 on ∂Ω, we have eigenfunction expansions 76 g(x) = ∞ n=1 anφn(x) and h(x) = ∞ n=1 bnφn(x). Assume u(x, t) = ∞ n=1 un(t)φn(x). This implies un(t) + λnun(t) = 0 for each n. 76 In 2D, φn is really φmn, and x is (x, y).
  • 341. Partial Differential Equations Igor Yanovsky, 2005 341 Since λn > 0, this ordinary differential equation has general solution un(t) = An cos λnt + Bn sin λnt. Thus, u(x, t) = ∞ n=1 An cos λnt + Bn sin λnt φn(x), ut(x, t) = ∞ n=1 − λnAn sin λnt + λnBn cos λnt φn(x), u(x, 0) = ∞ n=1 Anφn(x) = g(x), ut(x, 0) = ∞ n=1 λnBnφn(x) = h(x). Comparing with , we obtain An = an, Bn = bn √ λn . Thus, the solution is given by u(x, t) = ∞ m,n=1 amn cos λmnt + bmn √ λmn sin λmnt φmn(x), with λmn, φmn(x) given above, and amn = Ω g(x)φmn(x) dx, bmn = Ω h(x)φmn(x) dx.
  • 342. Partial Differential Equations Igor Yanovsky, 2005 342 McOwen, 4.4 #3; 266B Ralston Hw. Consider the initial-boundary value problem ⎧ ⎪⎨ ⎪⎩ utt = u + f(x, t) for x ∈ Ω, t > 0 u(x, t) = 0 for x ∈ ∂Ω, t > 0 u(x, 0) = 0, ut(x, 0) = 0 for x ∈ Ω. Use Duhamel’s principle and an expansion of f in eigenfunctions to obtain a (formal) solution. Proof. a) We expand u in terms of the Dirichlet eigenfunctions of Laplacian in Ω. φn + λnφn = 0 in Ω, φn = 0 on ∂Ω. Assume u(x, t) = ∞ n=1 an(t)φn(x), an(t) = Ω φn(x)u(x, t) dx. f(x, t) = ∞ n=1 fn(t)φn(x), fn(t) = Ω φn(x)f(x, t) dx. an(t) = Ω φn(x)utt dx = Ω φn( u + f) dx = Ω φn u dx + Ω φnf dx = Ω φnu dx + Ω φnf dx = −λn Ω φnu dx + Ω φnf dx fn = −λnan(t) + fn(t). an(0) = Ω φn(x)u(x, 0) dx = 0. an(0) = Ω φn(x)ut(x, 0) dx = 0. 77 Thus, we have an ODE which is converted and solved by Duhamel’s principle: ⎧ ⎪⎨ ⎪⎩ an + λnan = fn(t) an(0) = 0 an(0) = 0 ⇒ ⎧ ⎪⎨ ⎪⎩ ˜an + λn˜an = 0 ˜an(0, s) = 0 ˜an(0, s) = fn(s) an(t) = t 0 ˜an(t−s, s) ds. With the anzats ˜an(t, s) = c1 cos √ λnt + c2 sin √ λnt, we get c1 = 0, c2 = fn(s)/ √ λn, or ˜an(t, s) = fn(s) sin √ λnt √ λn . Duhamel’s principle gives an(t) = t 0 ˜an(t − s, s) ds = t 0 fn(s) sin( √ λn(t − s)) √ λn ds. u(x, t) = ∞ n=1 φn(x) √ λn t 0 fn(s) sin( λn(t − s)) ds. 77 We used Green’s formula: ∂Ω φn ∂u ∂n − u∂φn ∂n ds = Ω (φn u − φnu) dx. On ∂Ω, u = 0; φn = 0 since eigenfunctions are Dirichlet.
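As a small check of the Duhamel formula derived above (not part of the original solution), the sketch below evaluates $a_n(t) = \int_0^t f_n(s)\,\sin(\sqrt{\lambda_n}(t-s))/\sqrt{\lambda_n}\, ds$ for a single mode, assuming the sample forcing $f_n(t) \equiv 1$ and $\lambda_n = 4$; for this forcing the ODE $a'' + \lambda a = 1$, $a(0) = a'(0) = 0$, has the exact solution $(1 - \cos\sqrt{\lambda}\,t)/\lambda$.

# Sketch: Duhamel's formula for one forced mode a_n'' + lam*a_n = f_n(t), a_n(0)=a_n'(0)=0.
import numpy as np

lam = 4.0                                   # sample eigenvalue (an assumption)
f_n = lambda s: np.ones_like(s)             # sample forcing f_n(t) = 1 (an assumption)

def duhamel(t, M=4000):
    s = (np.arange(M) + 0.5) * t / M        # midpoint rule on [0, t]
    return np.sum(f_n(s) * np.sin(np.sqrt(lam) * (t - s)) / np.sqrt(lam)) * (t / M)

for t in (0.5, 1.0, 2.0):
    exact = (1.0 - np.cos(np.sqrt(lam) * t)) / lam   # exact solution for f_n = 1
    print(t, duhamel(t), exact)                      # the two columns agree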
  • 343. Partial Differential Equations Igor Yanovsky, 2005 343 Problem (F’90, #3). Consider the initial-boundary value problem ⎧ ⎪⎨ ⎪⎩ utt = a(t)uxx + f(x, t) 0 ≤ x ≤ π, t ≥ 0 u(0, t) = u(π, t) = 0 t ≥ 0 u(x, 0) = g(x), ut(x, 0) = h(x) 0 ≤ x ≤ π, where the coefficient a(t) = 0. a) Express (formally) the solution of this problem by the method of eigenfunction ex- pansions. b) Show that this problem is not well-posed if a ≡ −1. Hint: Take f = 0 and prove that the solution does not depend continuously on the initial data g, h. Proof. a) We expand u in terms of the Dirichlet eigenfunctions of Laplacian in Ω. φnxx + λnφn = 0 in Ω, φn(0) = φn(π) = 0. That gives us the eigenvalues and eigenfunctions of the Laplacian: λn = n2, φn(x) = sinnx. Assume u(x, t) = ∞ n=1 un(t)φn(x), un(t) = Ω φn(x)u(x, t) dx. f(x, t) = ∞ n=1 fn(t)φn(x), fn(t) = Ω φn(x)f(x, t) dx. g(x) = ∞ n=1 gnφn(x), gn = Ω φn(x)g(x) dx. h(x) = ∞ n=1 hnφn(x), hn = Ω φn(x)h(x) dx. un(t) = Ω φn(x)utt dx = Ω φn(a(t)uxx + f) dx = a(t) Ω φnuxx dx + Ω φnf dx = a(t) Ω φnxxu dx + Ω φnf dx = −λna(t) Ω φnu dx + Ω φnf dx fn = −λna(t)un(t) + fn(t). un(0) = Ω φn(x)u(x, 0) dx = Ω φn(x)g(x) dx = gn. un(0) = Ω φn(x)ut(x, 0) dx = Ω φn(x)h(x) dx = hn. Thus, we have an ODE which is converted and solved by Duhamel’s principle: ⎧ ⎪⎨ ⎪⎩ un + λna(t)un = fn(t) un(0) = gn un(0) = hn.
Note: Since the initial data are not zero, Duhamel's principle is not directly applicable here. Moreover, the ODE has the time-dependent coefficient $\lambda_n a(t)$, so its solution is not explicit. Thus,
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\,\phi_n(x),$$
where the $u_n(t)$ are the solutions of the initial value problems above.
  • 345. Partial Differential Equations Igor Yanovsky, 2005 345 b) Assume we have two solutions, u1 and u2, to the PDE: ⎧ ⎪⎨ ⎪⎩ u1tt + u1xx = 0, u1(0, t) = u1(π, t) = 0, u1(x, 0) = g1(x), u1t(x, 0) = h1(x); ⎧ ⎪⎨ ⎪⎩ u2tt + u2xx = 0, u2(0, t) = u2(π, t) = 0, u2(x, 0) = g2(x), u2t(x, 0) = h2(x). Note that the equation is elliptic, and therefore, the maximum principle holds. In order to prove that the solution does not depend continuously on the initial data g, h, we need to show that one of the following conditions holds: max Ω |u1 − u2| > max ∂Ω |g1 − g2|, max Ω |ut1 − ut2| > max ∂Ω |h1 − h2|. That is, the difference of the two solutions is not bounded by the difference of initial data. By the method of separation of variables, we may obtain u(x, t) = ∞ n=1 (an cos nt + bn sinnt) sinnx, u(x, 0) = ∞ n=1 an sinnx = g(x), ut(x, 0) = ∞ n=1 nbn sinnx = h(x). Not complete. We also know that for elliptic equations, and for Laplace equation in particular, the value of the function u has to be prescribed on the entire boundary, i.e. u = g on ∂Ω, which is not the case here, making the problem under-determined. Also, ut is prescribed on one of the boundaries, making the problem overdetermined.
  • 346. Partial Differential Equations Igor Yanovsky, 2005 346 29 Problems: Eigenvalues of the Laplacian - Heat The ND HEAT Equation (eigenvalues/eigenfunctions of the Laplacian). Consider the initial value problem with homogeneous Dirichlet condition: ⎧ ⎪⎨ ⎪⎩ ut = u for x ∈ Ω, t > 0 u(x, 0) = g(x) for x ∈ Ω u(x, t) = 0 for x ∈ ∂Ω, t > 0. Proof. For g ∈ C2(Ω) with g = 0 on ∂Ω, we have eigenfunction expansion g(x) = ∞ n=1 anφn(x) Assume the solution u(x, t) may be expanded in the eigenfunctions with coefficients depending on t: u(x, t) = ∞ n=1 un(t)φn(x). This implies ∞ n=1 un(t)φn(x) = −λn ∞ n=1 un(t)φn(x), un(t) + λnun(t) = 0, which has the general solution un(t) = Ane−λnt . Thus, u(x, t) = ∞ n=1 Ane−λnt φn(x), u(x, 0) = ∞ n=1 Anφn(x) = g(x). Comparing with , we obtain An = an. Thus, the solution is given by u(x, t) = ∞ n=1 ane−λnt φn(x), with an = Ω g(x)φn(x) dx. Also u(x, t) = ∞ n=1 ane−λnt φn(x) = ∞ n=1 Ω g(y)φn(y) dy e−λnt φn(x) = Ω ∞ n=1 e−λnt φn(x)φn(y) K(x,y,t), heat kernel g(y) dy
  • 347. Partial Differential Equations Igor Yanovsky, 2005 347 The 2D HEAT Equation (eigenvalues/eigenfunctions of the Laplacian). Let Ω = (0, a) × (0, b) and consider ⎧ ⎪⎨ ⎪⎩ ut = uxx + uyy for x ∈ Ω, t > 0 u(x, 0) = g(x) for x ∈ Ω u(x, t) = 0 for x ∈ ∂Ω, t > 0. (29.1) Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian. ⎧ ⎪⎨ ⎪⎩ uxx + uyy + λu = 0 in Ω u(0, y) = 0 = u(a, y) for 0 ≤ y ≤ b, u(x, 0) = 0 = u(x, b) for 0 ≤ x ≤ a. Let u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY + λXY = 0. X X + Y Y + λ = 0. Letting λ = μ2 + ν2 and using boundary conditions, we find the equations for X and Y : X + μ2 X = 0 Y + ν2 Y = 0 X(0) = X(a) = 0 Y (0) = Y (b) = 0. The solutions of these one-dimensional eigenvalue problems are μm = mπ a νn = nπ b Xm(x) = sin mπx a Yn(y) = sin nπy b , where m, n = 1, 2, . . .. Thus we obtain eigenvalues and normalized eigenfunctions of the Laplacian: λmn = π2 m2 a2 + n2 b2 φmn(x, y) = 2 √ ab sin mπx a sin nπy b , where m, n = 1, 2, . . .. ➁ Second, we solve the Heat Equation (29.1) using the “space” eigenfunctions. For g ∈ C2(Ω) with g = 0 on ∂Ω, we have eigenfunction expansion g(x) = ∞ n=1 anφn(x). Assume u(x, t) = ∞ n=1 un(t)φn(x). This implies un(t) + λnun(t) = 0, which has the general solution un(t) = Ane−λnt . Thus, u(x, t) = ∞ n=1 Ane−λnt φn(x), u(x, 0) = ∞ n=1 Anφn(x) = g(x).
  • 348. Partial Differential Equations Igor Yanovsky, 2005 348 Comparing with , we obtain An = an. Thus, the solution is given by u(x, t) = ∞ m,n=1 amne−λmnt φmn(x), with λmn, φmn given above and amn = Ω g(x)φmn(x) dx.
  • 349. Partial Differential Equations Igor Yanovsky, 2005 349 Problem (S’91, #2). Consider the heat equation ut = uxx + uyy on the square Ω = {0 ≤ x ≤ 2π, 0 ≤ y ≤ 2π} with periodic boundary conditions and with initial data u(0, x, y) = f(x, y). a) Find the solution using separation of variables. Proof. ➀ First, we find eigenvalues/eigenfunctions of the Laplacian. ⎧ ⎪⎨ ⎪⎩ uxx + uyy + λu = 0 in Ω u(0, y) = u(2π, y) for 0 ≤ y ≤ 2π, u(x, 0) = u(x, 2π) for 0 ≤ x ≤ 2π. Let u(x, y) = X(x)Y (y), then substitution in the PDE gives X Y + XY + λXY = 0. X X + Y Y + λ = 0. Letting λ = μ2 + ν2 and using periodic BC’s, we find the equations for X and Y : X + μ2 X = 0 Y + ν2 Y = 0 X(0) = X(2π) Y (0) = Y (2π). The solutions of these one-dimensional eigenvalue problems are μm = m νn = n Xm(x) = eimx Yn(y) = einy , where m, n = . . ., −2, −1, 0, 1, 2, . . .. Thus we obtain eigenvalues and normalized eigen- functions of the Laplacian: λmn = m2 + n2 φmn(x, y) = eimx einy , where m, n = . . ., −2, −1, 0, 1, 2, . . .. ➁ Second, we solve the Heat Equation using the “space” eigenfunctions. Assume u(x, y, t) = ∞ m,n=−∞ umn(t)eimxeiny. This implies umn(t) + (m2 + n2 )umn(t) = 0, which has the general solution un(t) = cmne−(m2+n2)t . Thus, u(x, y, t) = ∞ m,n=−∞ cmne−(m2+n2)t eimx einy .
  • 350. Partial Differential Equations Igor Yanovsky, 2005 350 u(x, y, 0) = ∞ m,n=−∞ cmneimx einy = f(x, y), 2π 0 2π 0 f(x, y)eimx einy dxdy = 2π 0 2π 0 ∞ m,n=−∞ cmneimx einy eim x ein y dxdy = 2π 2π 0 ∞ n=−∞ cmneiny ein y dy = 4π2 cmn. cmn = 1 4π2 2π 0 2π 0 f(x, y)e−imx e−iny dxdy = fmn.
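A short numerical sketch of the periodic Fourier-series solution of part (a) follows (not in the original text). It assumes the sample initial data $f(x,y) = \cos x + \sin 2y$ on $[0,2\pi]^2$; the coefficients $c_{mn}$ are computed with NumPy's FFT, which uses the same exponentials $e^{i(mx+ny)}$ up to ordering and normalization, each mode is damped by $e^{-(m^2+n^2)t}$, and the result is compared with the exact solution $e^{-t}\cos x + e^{-4t}\sin 2y$.

# Sketch: evolve the 2pi-periodic heat equation u_t = u_xx + u_yy by damping each
# Fourier mode: c_mn(t) = c_mn(0) * exp(-(m^2 + n^2) t).
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing='ij')

f = np.cos(X) + np.sin(2 * Y)               # sample periodic initial data (an assumption)
t = 0.3

k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)   # integer wavenumbers 0, 1, ..., -1
KX, KY = np.meshgrid(k, k, indexing='ij')

u_hat = np.fft.fft2(f) * np.exp(-(KX**2 + KY**2) * t)
u = np.real(np.fft.ifft2(u_hat))

u_exact = np.exp(-t) * np.cos(X) + np.exp(-4 * t) * np.sin(2 * Y)
print(np.max(np.abs(u - u_exact)))          # ~ machine precision for these two modes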
  • 351. Partial Differential Equations Igor Yanovsky, 2005 351 b) Show that the integral Ω u2(x, y, t) dxdy is decreasing in t, if f is not constant. Proof. We have ut = uxx + uyy Multiply the equation by u and integrate: uut = u u, 1 2 d dt u2 = u u, 1 2 d dt Ω u2 dxdy = Ω u u dxdy = ∂Ω u ∂u ∂n ds =0, (periodic BC) − Ω |∇u|2 dxdy = − Ω |∇u|2 dxdy ≤ 0. Equality is obtained only when ∇u = 0 ⇒ u = constant ⇒ f = constant. If f is not constant, Ω u2 dxdy is decreasing in t.
  • 352. Partial Differential Equations Igor Yanovsky, 2005 352 Problem (F’98, #3). Consider the eigenvalue problem d2φ dx2 + λφ = 0, φ(0) − dφ dx (0) = 0, φ(1) + dφ dx (1) = 0. a) Show that all eigenvalues are positive. b) Show that there exist a sequence of eigenvalues λ = λn, each of which satisfies tan √ λ = 2 √ λ λ − 1 . c) Solve the following initial-boundary value problem on 0 < x < 1, t > 0 ∂u ∂t = ∂2u ∂x2 , u(0, t) − ∂u ∂x (0, t) = 0, u(1, t) + ∂u ∂x (1, t) = 0, u(x, 0) = f(x). You may call the relevant eigenfunctions φn(x) and assume that they are known. Proof. a) • If λ = 0, the ODE reduces to φ = 0. Try φ(x) = Ax + B. From the first boundary condition, φ(0) − φ (0) = 0 = B − A ⇒ B = A. Thus, the solution takes the form φ(x) = Ax+A. The second boundary condition gives φ(1) + φ (1) = 0 = 3A ⇒ A = B = 0. Thus the only solution is φ ≡ 0, which is not an eigenfunction, and 0 not an eigenvalue. • If λ < 0, try φ(x) = esx , which gives s = ± √ −λ = ±β ∈ R. Hence, the family of solutions is φ(x) = Aeβx +Be−βx. Also, φ (x) = βAeβx −βBe−βx. The boundary conditions give φ(0) − φ (0) = 0 = A + B − βA + βB = A(1 − β) + B(1 + β), (29.2) φ(1)+φ (1) = 0 = Aeβ +Be−β +βAeβ −βBe−β = Aeβ (1+β)+Be−β (1−β). (29.3) From (29.2) and (29.3) we get 1 + β 1 − β = − A B and 1 + β 1 − β = − B A e−2β , or A B = e−β . From (29.2), β = A + B A − B and thus, A B = e A+B B−A , which has no solutions. b) Since λ > 0, the anzats φ = esx gives s = ±i √ λ and the family of solutions takes the form φ(x) = A sin(x √ λ) + B cos(x √ λ). Then, φ (x) = A √ λ cos(x √ λ) − B √ λ sin(x √ λ). The first boundary condition gives φ(0) − φ (0) = 0 = B − A √ λ ⇒ B = A √ λ.
  • 353. Partial Differential Equations Igor Yanovsky, 2005 353 Hence, φ(x) = A sin(x √ λ) + A √ λ cos(x √ λ). The second boundary condition gives φ(1) + φ (1) = 0 = A sin( √ λ) + A √ λ cos( √ λ) + A √ λ cos( √ λ) − Aλ sin( √ λ) = A (1 − λ) sin( √ λ) + 2 √ λ cos( √ λ) A = 0 (since A = 0 implies B = 0 and φ = 0, which is not an eigenfunction). Therefore, −(1 − λ) sin( √ λ) = 2 √ λ cos( √ λ), and thus tan( √ λ) = 2 √ λ λ−1 . c) We may assume that the eigenvalues/eigenfunctins of the Laplacian, λn and φn(x), are known. We solve the Heat Equation using the “space” eigenfunctions. ⎧ ⎪⎨ ⎪⎩ ut = uxx, u(0, t) − ux(0, t) = 0, u(1, t) + ux(1, t) = 0, u(x, 0) = f(x). For f, we have an eigenfunction expansion f(x) = ∞ n=1 anφn(x). Assume u(x, t) = ∞ n=1 un(t)φn(x). This implies un(t) + λnun(t) = 0, which has the general solution un(t) = Ane−λnt . Thus, u(x, t) = ∞ n=1 Ane−λnt φn(x), u(x, 0) = ∞ n=1 Anφn(x) = f(x). Comparing with , we have An = an. Thus, the solution is given by u(x, t) = ∞ n=1 ane−λnt φn(x), with an = 1 0 f(x)φn(x) dx.
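The eigenvalues in part (b) are only characterized implicitly, so a small numerical sketch may help (not part of the original solution). Writing $\mu = \sqrt{\lambda}$, the eigenvalue condition derived above is $(1-\mu^2)\sin\mu + 2\mu\cos\mu = 0$, which is the same as $\tan\mu = 2\mu/(\mu^2-1)$ but has no poles; the code below locates its first few positive roots by plain bisection on sign changes (skipping $\mu = 0$, which corresponds to the excluded eigenvalue $\lambda = 0$).

# Sketch: first few eigenvalues lambda_n = mu_n^2 of the Robin problem in part (b),
# found as roots of F(mu) = (1 - mu^2) sin(mu) + 2 mu cos(mu), mu = sqrt(lambda) > 0.
import numpy as np

F = lambda mu: (1.0 - mu**2) * np.sin(mu) + 2.0 * mu * np.cos(mu)

def bisect(lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mus = []
grid = np.linspace(0.1, 16.0, 4000)         # scan for sign changes away from mu = 0
for lo_, hi_ in zip(grid[:-1], grid[1:]):
    if F(lo_) * F(hi_) < 0.0:
        mus.append(bisect(lo_, hi_))

print([round(mu, 4) for mu in mus[:4]])           # first few mu_n
print([round(mu**2, 4) for mu in mus[:4]])        # corresponding eigenvalues lambda_n
# Each root satisfies tan(mu) = 2 mu / (mu^2 - 1), as stated in part (b).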
  • 354. Partial Differential Equations Igor Yanovsky, 2005 354 Problem (W’03, #3); 266B Ralston Hw. Let Ω be a smooth domain in three dimensions and consider the initial-boundary value problem for the heat equation ⎧ ⎪⎨ ⎪⎩ ut = u + f(x) for x ∈ Ω, t > 0 ∂u/∂n = 0 for x ∈ ∂Ω, t > 0 u(x, 0) = g(x) for x ∈ Ω, in which f and g are known smooth functions with ∂g/∂n = 0 for x ∈ ∂Ω. a) Find an approximate formula for u as t → ∞. Proof. We expand u in terms of the Neumann eigenfunctions of Laplacian in Ω. φn + λnφn = 0 in Ω, ∂φn ∂n = 0 on ∂Ω. Note that here λ1 = 0 and φ1 is the constant V −1/2, where V is the volume of Ω. Assume u(x, t) = ∞ n=1 an(t)φn(x), an(t) = Ω φn(x)u(x, t) dx. f(x) = ∞ n=1 fnφn(x), fn = Ω φn(x)f(x) dx. g(x) = ∞ n=1 gnφn(x), gn = Ω φn(x)g(x) dx. an(t) = Ω φn(x)ut dx = Ω φn( u + f) dx = Ω φn u dx + Ω φnf dx = Ω φnu dx + Ω φnf dx = −λn Ω φnu dx + Ω φnf dx fn = −λnan + fn. an(0) = Ω φn(x)u(x, 0) dx = Ω φng dx = gn. 78 Thus, we solve the ODE: an + λnan = fn an(0) = gn. For n = 1, λ1 = 0, and we obtain a1(t) = f1t + g1. For n ≥ 2, the homogeneous solution is anh = ce−λnt. The anzats for a particular solution is anp = c1t + c2, which gives c1 = 0 and c2 = fn/λn. Using the initial condition, we obtain an(t) = gn − fn λn e−λnt + fn λn . 78 We used Green’s formula: ∂Ω φn ∂u ∂n − u∂φn ∂n ds = Ω (φn u − φnu) dx. On ∂Ω, ∂u ∂n = 0; ∂φn ∂n = 0 since eigenfunctions are Neumann.
  • 355. Partial Differential Equations Igor Yanovsky, 2005 355 u(x, t) = (f1t + g1)φ1(x) + ∞ n=2 gn − fn λn e−λnt + fn λn φn(x). If f1 = 0 Ω f(x) dx = 0 , lim t→∞ u(x, t) = g1φ1 + ∞ n=2 fnφn λn . If f1 = 0 Ω f(x) dx = 0 , lim t→∞ u(x, t) ∼ f1φ1t.
b) If g ≥ 0 and f > 0, show that u > 0 for all t > 0.
  • 357. Partial Differential Equations Igor Yanovsky, 2005 357 Problem (S’97, #2). a) Consider the eigenvalue problem for the Laplace operator in Ω ∈ R2 with zero Neumann boundary condition uxx + uyy + λu = 0 in Ω ∂u ∂n = 0 on ∂Ω. Prove that λ0 = 0 is the lowest eigenvalue and that it is simple. b) Assume that the eigenfunctions φn(x, y) of the problem in (a) form a complete orthogonal system, and that f(x, y) has a uniformly convergent expansion f(x, y) = ∞ n=0 fnφn(x, y). Solve the initial value problem ut = u + f(x, y) subject to initial and boundary conditions u(x, y, 0) = 0, ∂u ∂n u|∂Ω = 0. What is the behavior of u(x, y, t) as t → ∞? c) Consider the problem with Neumann boundary conditions vxx + vyy + f(x, y) = 0 in Ω ∂v ∂nv = 0 on ∂Ω. When does a solution exist? Find this solution, and find its relation with the behavior of lim u(x, y, t) in (b) as t → ∞. Proof. a) Suppose this eigenvalue problem did have a solution u with λ ≤ 0. Multiplying u + λu = 0 by u and integrating over Ω, we get Ω u u dx + λ Ω u2 dx = 0, ∂Ω u ∂u ∂n =0 ds − Ω |∇u|2 dx + λ Ω u2 dx = 0, Ω |∇u|2 dx = λ ≤0 Ω u2 dx, Thus, ∇u = 0 in Ω, and u is constant in Ω. Hence, we now have 0 = λ ≤0 Ω u2 dx. For nontrivial u, we have λ = 0. For this eigenvalue problem, λ = 0 is an eigenvalue, its eigenspace is the set of constants, and all other λ’s are positive.
  • 358. Partial Differential Equations Igor Yanovsky, 2005 358 b) We expand u in terms of the Neumann eigenfunctions of Laplacian in Ω. 79 φn + λnφn = 0 in Ω, ∂φn ∂n = 0 on ∂Ω. u(x, y, t) = ∞ n=1 an(t)φn(x, y), an(t) = Ω φn(x, y)u(x, y, t) dx. an(t) = Ω φn(x, y)ut dx = Ω φn( u + f) dx = Ω φn u dx + Ω φnf dx = Ω φnu dx + Ω φnf dx = −λn Ω φnu dx + Ω φnf dx fn = −λnan + fn. an(0) = Ω φn(x, y)u(x, y, 0) dx = 0. 80 Thus, we solve the ODE: an + λnan = fn an(0) = 0. For n = 1, λ1 = 0, and we obtain a1(t) = f1t. For n ≥ 2, the homogeneous solution is anh = ce−λnt . The anzats for a particular solution is anp = c1t + c2, which gives c1 = 0 and c2 = fn/λn. Using the initial condition, we obtain an(t) = − fn λn e−λnt + fn λn . u(x, t) = f1φ1t + ∞ n=2 − fn λn e−λnt + fn λn φn(x). If f1 = 0 Ω f(x) dx = 0 , lim t→∞ u(x, t) = ∞ n=2 fnφn λn . If f1 = 0 Ω f(x) dx = 0 , lim t→∞ u(x, t) ∼ f1φ1t. c) Integrate v + f(x, y) = 0 over Ω: Ω f dx = − Ω v dx = − Ω ∇ · ∇v dx =1 − ∂Ω ∂v ∂n ds =2 0, where we used 1 divergence theorem and 2 Neumann boundary conditions. Thus, the solution exists only if Ω f dx = 0. 79 We use dx dy → dx. 80 We used Green’s formula: ∂Ω φn ∂u ∂n − u∂φn ∂n ds = Ω (φn u − φnu) dx. On ∂Ω, ∂u ∂n = 0; ∂φn ∂n = 0 since eigenfunctions are Neumann.
  • 359. Partial Differential Equations Igor Yanovsky, 2005 359 Assume v(x, y) = ∞ n=0 anφn(x, y). Since we have f(x, y) = ∞ n=0 fnφn(x, y), we obtain − ∞ n=0 λnanφn + ∞ n=0 fnφn = 0, −λnanφn + fnφn = 0, an = fn λn . v(x, y) = ∞ n=0( fn λn )φn(x, y).
  • 360. Partial Differential Equations Igor Yanovsky, 2005 360 29.1 Heat Equation with Periodic Boundary Conditions in 2D (with extra terms) Problem (F’99, #5). In two spatial dimensions, consider the differential equation ut = −ε u − 2 u with periodic boundary conditions on the unit square [0, 2π]2 . a) If ε = 2 find a solution whose amplitude increases as t increases. b) Find a value ε0, so that the solution of this PDE stays bounded as t → ∞, if ε < ε0. Proof. a) Eigenfunctions of the Laplacian. The periodic boundary conditions imply a Fourier Series solution of the form: u(x, t) = m,n amn(t)ei(mx+ny) . ut = m,n amn(t)ei(mx+ny) , u = uxx + uyy = − m,n (m2 + n2 ) amn(t)ei(mx+ny) , 2 u = uxxxx + 2uxxyy + uyyyy = m,n (m4 + 2m2 n2 + n4 ) amn(t)ei(mx+ny) = m,n (m2 + n2 )2 amn(t)ei(mx+ny) . Plugging this into the PDE, we obtain amn(t) = ε(m2 + n2 )amn(t) − (m2 + n2 )2 amn(t), amn(t) − [ε(m2 + n2 ) − (m2 + n2 )2 ]amn(t) = 0, amn(t) − (m2 + n2 )[ε − (m2 + n2 )]amn(t) = 0. The solution to the ODE above is amn(t) = αmn e(m2+n2)[ε−(m2+n2 )]t . u(x, t) = m,n αmn e(m2+n2)[ε−(m2+n2 )]t ei(mx+ny) oscillates . When ε = 2, we have u(x, t) = m,n αmn e(m2+n2)[2−(m2+n2)]t ei(mx+ny) . We need a solution whose amplitude increases as t increases. Thus, we need those αmn > 0, with (m2 + n2 )[2 − (m2 + n2 )] > 0, 2 − (m2 + n2 ) > 0, 2 > m2 + n2 . Hence, αmn > 0 for (m, n) = (0, 0), (m, n) = (1, 0), (m, n) = (0, 1). Else, αmn = 0. Thus, u(x, t) = α00 + α10et eix + α01et eiy = 1 + et eix + et eiy = 1 + et (cos x + i sinx) + et (cos y + i siny).
  • 361. Partial Differential Equations Igor Yanovsky, 2005 361 b) For ε ≤ ε0 = 1, the solution stays bounded as t → ∞.
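A tiny sketch of the mode-by-mode stability analysis used in this problem follows (not in the original text). It tabulates the per-mode growth rate $\sigma(m,n) = (m^2+n^2)\,[\varepsilon - (m^2+n^2)]$ found in part (a), for $\varepsilon = 2$ and for a sample value below the threshold $\varepsilon_0 = 1$, to display which modes grow.

# Sketch: per-mode growth rate sigma(m,n) = (m^2+n^2)*(eps - (m^2+n^2)) for
# u_t = -eps*Lap(u) - Lap^2(u) with modes e^{i(m x + n y)} on [0, 2 pi]^2.
def sigma(m, n, eps):
    k2 = m * m + n * n
    return k2 * (eps - k2)

for eps in (2.0, 0.5):                      # 0.5 is a sample value below eps_0 = 1
    growing = [(m, n) for m in range(-3, 4) for n in range(-3, 4)
               if (m, n) != (0, 0) and sigma(m, n, eps) > 0]
    print(eps, growing)
# eps = 2.0: only the modes with m^2 + n^2 = 1 grow; eps = 0.5: no mode grows,
# consistent with the threshold eps_0 = 1 in part (b).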
  • 362. Partial Differential Equations Igor Yanovsky, 2005 362 Problem (F’93, #1). Suppose that a and b are constants with a ≥ 0, and consider the equation ut = uxx + uyy − au3 + bu (29.4) in which u(x, y, t) is 2π-periodic in x and y. a) Let u be a solution of (29.4) with ||u(t = 0)|| = 2π 0 2π 0 |u(x, y, t = 0)|2 dxdy1/2 < . Derive an explicit bound on ||u(t)|| and show that it stays finite for all t. b) If a = 0, construct the normal modes for (29.4); i.e. find all solutions of the form u(x, y, t) = eλt+ikx+ily . c) Use these normal modes to construct a solution of (29.4) with a = 0 for the initial data u(x, y, t = 0) = 1 1 − 1 2 eix + 1 1 − 1 2e−ix . Proof. a) Multiply the equation by u and integrate: ut = u − au3 + bu, uut = u u − au4 + bu2 , Ω uut dx = Ω u u dx − Ω au4 dx + Ω bu2 dx, 1 2 d dt Ω u2 dx = ∂Ω u ∂u ∂n ds =0, u periodic on [0,2π]2 − Ω |∇u|2 dx − Ω au4 dx ≤0 + Ω bu2 dx, d dt ||u||2 2 ≤ 2b ||u||2 2, ||u||2 2 ≤ ||u(x, 0)||2 2 e2bt , ||u||2 ≤ ||u(x, 0)||2 ebt ≤ ε ebt . Thus, ||u|| stays finite for all t. b) Since a = 0, plugging u = eλt+ikx+ily into the equation, we obtain: ut = uxx + uyy + bu, λ eλt+ikx+ily = (−k2 − l2 + b) eλt+ikx+ily , λ = −k2 − l2 + b. Thus, ukl = e(−k2−l2+b)t+ikx+ily , u(x, y, t) = k,l akl e(−k2−l2+b)t+ikx+ily .
  • 363. Partial Differential Equations Igor Yanovsky, 2005 363 c) Using the initial condition, we obtain: u(x, y, 0) = k,l akl ei(kx+ly) = 1 1 − 1 2eix + 1 1 − 1 2 e−ix = ∞ k=0 1 2 eix k + ∞ k=0 1 2 e−ix k = ∞ k=0 1 2k eikx + ∞ k=0 1 2k e−ikx , = 2 + ∞ k=1 1 2k eikx + −∞ k=−1 1 2−k eikx . Thus, l = 0, and we have ∞ k=−∞ ak eikx = 2 + ∞ k=1 1 2k eikx + −∞ k=−1 1 2−k eikx , ⇒ a0 = 2; ak = 1 2k , k > 0; ak = 1 2−k , k < 0 ⇒ a0 = 2; ak = 1 2|k| , k = 0. u(x, y, t) = 2ebt + +∞ k=−∞, k=0 1 2|k| e(−k2+b)t+ikx . 81 81 Note a similar question formulation in F’92 #3(b).
  • 364. Partial Differential Equations Igor Yanovsky, 2005 364 Problem (S’00, #3). Consider the initial-boundary value problem for u = u(x, y, t) ut = u − u for (x, y) ∈ [0, 2π]2, with periodic boundary conditions and with u(x, y, 0) = u0(x, y) in which u0 is periodic. Find an asymptotic expansion for u for t large with terms tending to zero increasingly rapidly as t → ∞. Proof. Since we have periodic boundary conditions, assume u(x, y, t) = m,n umn(t) ei(mx+ny) . Plug this into the equation: m,n umn(t) ei(mx+ny) = m,n (−m2 − n2 − 1) umn(t) ei(mx+ny) , umn(t) = (−m2 − n2 − 1) umn(t), umn(t) = amn e(−m2−n2−1)t , u(x, y, t) = m,n amn e−(m2+n2+1)t ei(mx+ny) . Since u0 is periodic, u0(x, y) = m,n u0mn ei(mx+ny) , u0mn = 1 4π2 2π 0 2π 0 u0(x, y) e−i(mx+ny) dxdy. Initial condition gives: u(x, y, 0) = m,n amn ei(mx+ny) = u0(x, y), m,n amn ei(mx+ny) = m,n u0mn ei(mx+ny) , ⇒ amn = u0mn. u(x, y, t) = m,n u0mn e−(m2+n2+1)t ei(mx+ny) . u0mn e−(m2+n2+1)t ei(mx+ny) → 0 as t → ∞, since e−(m2+n2+1)t → 0 as t → ∞.
  • 365. Partial Differential Equations Igor Yanovsky, 2005 365 30 Problems: Fourier Transform Problem (S’01, #2b). Write the solution of initial value problem Ut − 1 0 5 3 Ux = 0, for general initial data u(1) (x, 0) u(2)(x, 0) = f(x) 0 as an inverse Fourier transform. You may assume that f is smooth and rapidly decreasing as |x| → ∞. Proof. Consider the original system: u (1) t − u(1) x = 0, u (2) t − 5u(1) x − 3u(2) x = 0. Take the Fourier transform in x. The transformed initial value problems are: u (1) t − iξu(1) = 0, u(1) (ξ, 0) = f(ξ), u (2) t − 5iξu(1) − 3iξu(2) = 0, u(2) (ξ, 0) = 0. Solving the first ODE for u(1) gives: u(1) (ξ, t) = f(ξ)eiξt . With this u(1) , the second initial value problem becomes u (2) t − 3iξu(2) = 5iξf(ξ)eiξt , u(2) (ξ, 0) = 0. The homogeneous solution of the above ODE is: u (2) h (ξ, t) = c1e3iξt . With u (2) p = c2eiξt as anzats for a particular solution, we obtain: iξc2eiξt − 3iξc2eiξt = 5iξf(ξ)eiξt , −2iξc2eiξt = 5iξf(ξ)eiξt , c2 = − 5 2 f(ξ). ⇒ u(2) p (ξ, t) = − 5 2 f(ξ)eiξt . u(2) (ξ, t) = u (2) h (ξ, t) + u(2) p (ξ, t) = c1e3iξt − 5 2 f(ξ)eiξt . We find c1 using initial conditions: u(2) (ξ, 0) = c1 − 5 2 f(ξ) = 0 ⇒ c1 = 5 2 f(ξ). Thus, u(2) (ξ, t) = 5 2 f(ξ) e3iξt − eiξt .
$u^{(1)}(x,t)$ and $u^{(2)}(x,t)$ are obtained by taking the inverse Fourier transform:
$$u^{(1)}(x,t) = \big(\hat{u}^{(1)}(\xi,t)\big)^{\vee} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{ix\xi}\,\hat{f}(\xi)\,e^{i\xi t}\, d\xi,$$
$$u^{(2)}(x,t) = \big(\hat{u}^{(2)}(\xi,t)\big)^{\vee} = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{ix\xi}\,\frac{5}{2}\hat{f}(\xi)\big(e^{3i\xi t} - e^{i\xi t}\big)\, d\xi.$$
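With the symmetric transform convention used here, $\hat{f}(\xi)e^{ic\xi t}$ inverts to $f(x+ct)$, so the inverse transforms above can be evaluated in closed form as $u^{(1)}(x,t) = f(x+t)$ and $u^{(2)}(x,t) = \tfrac{5}{2}\big[f(x+3t) - f(x+t)\big]$. This step is not spelled out in the text, so the SymPy sketch below simply verifies that these expressions solve the original first-order system and its initial conditions.

# Sketch: check that u1 = f(x+t), u2 = (5/2)(f(x+3t) - f(x+t)) solve
#   u1_t - u1_x = 0,   u2_t - 5 u1_x - 3 u2_x = 0,   u1(x,0) = f(x),  u2(x,0) = 0.
import sympy as sp

x, t = sp.symbols('x t', real=True)
f = sp.Function('f')                         # arbitrary smooth profile

u1 = f(x + t)
u2 = sp.Rational(5, 2) * (f(x + 3*t) - f(x + t))

print(sp.simplify(sp.diff(u1, t) - sp.diff(u1, x)))                          # 0
print(sp.simplify(sp.diff(u2, t) - 5*sp.diff(u1, x) - 3*sp.diff(u2, x)))     # 0
print(u1.subs(t, 0), sp.simplify(u2.subs(t, 0)))                             # f(x), 0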
  • 367. Partial Differential Equations Igor Yanovsky, 2005 367 Problem (S’02, #4). Use the Fourier transform on L2 (R) to show that du dx + cu(x) + u(x − 1) = f (30.1) has a unique solution u ∈ L2(R) for each f ∈ L2(R) when |c| > 1 - you may assume that c is a real number. Proof. u ∈ L2(R). Define its Fourier transform u by u(ξ) = 1 √ 2π R e−ixξ u(x) dx for ξ ∈ R. du dx (ξ) = iξu(ξ). We can find u(x − 1)(ξ) in two ways. • Let u(x − 1 y ) = v(x), and determinte v(ξ): u(x − 1)(ξ) = v(ξ) = 1 √ 2π R e−ixξ v(x) dx = 1 √ 2π R e−i(y+1)ξ u(y) dy = 1 √ 2π R e−iyξ e−iξ u(y) dy = e−iξ u(ξ). • We can also write the definition for u(ξ) and substitute x − 1 later in calculations: u(ξ) = 1 √ 2π R e−iyξ u(y) dy = 1 √ 2π R e−i(x−1)ξ u(x − 1) dx = 1 √ 2π R e−ixξ eiξ u(x − 1) dx = eiξ u(x − 1)(ξ), ⇒ u(x − 1)(ξ) = e−iξ u(ξ). Substituting into (30.1), we obtain iξu(ξ) + cu(ξ) + e−iξ u(ξ) = f(ξ), u(ξ) = f(ξ) iξ + c + e−iξ . u(x) = f(ξ) iξ + c + e−iξ ∨ = f B ∨ = 1 √ 2π f ∗ B, where B = 1 iξ + c + e−iξ , ⇒ B = 1 iξ + c + e−iξ ∨ = 1 √ 2π R eixξ iξ + c + e−iξ dξ. For |c| > 1, u(ξ) exists for all ξ ∈ R, so that u(x) = (u(ξ))∨ and this is unique by the Fourier Inversion Theorem. Note that in Rn , becomes u(x − 1)(ξ) = v(ξ) = 1 (2π) n 2 Rn e−ix·ξ v(x) dx = 1 (2π) n 2 Rn e−i(y+1)·ξ u(y) dy = 1 (2π) n 2 Rn e−iy·ξ e−i1·ξ u(y) dy = e−i1·ξ u(ξ) = e(−i j ξj) u(ξ).
  • 368. Partial Differential Equations Igor Yanovsky, 2005 368 Problem (F’96, #3). Find the fundamental solution for the equation ut = uxx − xux. (30.2) Hint: The Fourier transform converts this problem into a PDE which can be solved using the method of characteristics. Proof. u ∈ L2 (R). Define its Fourier transform u by u(ξ) = 1 √ 2π R e−ixξ u(x) dx for ξ ∈ R. ux(ξ) = iξ u(ξ), uxx(ξ) = (iξ)2 u(ξ) = −ξ2 u(ξ). We find xux(ξ) in two steps: ➀ Multiplication by x: −ixu(ξ) = 1 √ 2π R e−ixξ − ixu(x) dx = d dξ u(ξ). ⇒ xu(x)(ξ) = i d dξ u(ξ). ➁ Using the previous result, we find: xux(x)(ξ) = 1 √ 2π R e−ixξ xux(x) dx = 1 √ 2π e−ixξ xu ∞ −∞ = 0 − 1 √ 2π R (−iξ)e−ixξ x + e−ixξ u dx = 1 √ 2π iξ R e−ixξ x u dx − 1 √ 2π R e−ixξ u dx = iξ xu(x)(ξ) − u(ξ) = iξ i d dξ u(ξ) − u(ξ) = −ξ d dξ u(ξ) − u(ξ). ⇒ xux(x)(ξ) = −ξ d dξ u(ξ) − u(ξ). Plugging these into (30.2), we get: ∂ ∂t u(ξ, t) = −ξ2 u(ξ, t) − − ξ d dξ u(ξ, t) − u(ξ, t) , ut = −ξ2 u + ξuξ + u, ut − ξuξ = −(ξ2 − 1)u. We now solve the above equation by characteristics. We change the notation: u → u, t → y, ξ → x. We have uy − xux = −(x2 − 1)u. dx dt = −x ⇒ x = c1e−t , (c1 = xet ) dy dt = 1 ⇒ y = t + c2, dz dt = −(x2 − 1)z = −(c2 1e−2t − 1)z ⇒ dz z = −(c2 1e−2t − 1)dt ⇒ log z = 1 2 c2 1e−2t + t + c3 = x2 2 + t + c3 = x2 2 + y − c2 + c3 ⇒ z = ce x2 2 +y .
  • 369. Partial Differential Equations Igor Yanovsky, 2005 369 Changing the notation back, we have u(ξ, t) = ce ξ2 2 +t . Thus, we have u(ξ, t) = ce ξ2 2 +t . We use Inverse Fourier Tranform to get u(x, t): 82 u(x, t) = 1 √ 2π R eixξ u(ξ, t) dξ = 1 √ 2π R eixξ ce ξ2 2 +t dξ = c √ 2π et R eixξ e ξ2 2 dξ = c √ 2π et R eixξ+ξ2 2 dξ = c √ 2π et R e 2ixξ+ξ2 2 dξ = c √ 2π et R e (ξ+ix)2 2 dξ e x2 2 = c √ 2π et e x2 2 R e y2 2 dy = c √ 2π et e x2 2 √ 2π = c et e x2 2 . u(x, t) = c et e x2 2 . Check: ut = c et e x2 2 , ux = c et xe x2 2 , uxx = c et e x2 2 + x2 e x2 2 . Thus, ut = uxx − xux, c et e x2 2 = c et e x2 2 + x2 e x2 2 − x c et xe x2 2 . 82 We complete the square for powers of exponentials.
  • 370. Partial Differential Equations Igor Yanovsky, 2005 370 Problem (W’02, #4). a) Solve the initial value problem ∂u ∂t + n k=1 ak(t) ∂u ∂xk + a0(t)u = 0, x ∈ Rn , u(0, x) = f(x) where ak(t), k = 1, . . ., n, and a0(t) are continuous functions, and f is a continuous function. You may assume f has compact support. b) Solve the initial value problem ∂u ∂t + n k=1 ak(t) ∂u ∂xk + a0(t)u = f(x, t), x ∈ Rn , u(0, x) = 0 where f is continuous in x and t. Proof. a) Use the Fourier transform to solve this problem. u(ξ, t) = 1 (2π) n 2 Rn e−ix·ξ u(x, t) dx for ξ ∈ R. ∂u ∂xk = iξku. Thus, the equation becomes: ut + i n k=1 ak(t)ξku + a0(t)u = 0, u(ξ, 0) = f(ξ), or ut + i a(t) · ξ u + a0(t)u = 0, ut = − i a(t) · ξ + a0(t) u. This is an ODE in u with solution: u(ξ, t) = ce− t 0 (ia(s)·ξ+a0(s)) ds , u(ξ, 0) = c = f(ξ). Thus, u(ξ, t) = f(ξ) e− t 0 (i a(s)·ξ+a0(s)) ds . Use the Inverse Fourier transform to get u(x, t): u(x, t) = u(ξ, t)∨ = f(ξ) e− t 0 (i a(s)·ξ+a0(s)) ds ∨ = (f ∗ g)(x) (2π) n 2 , where g(ξ) = e− t 0 (i a(s)·ξ+a0(s)) ds . g(x) = 1 (2π) n 2 Rn eix·ξ g(ξ) dξ = 1 (2π) n 2 Rn eix·ξ e− t 0 (ia(s)·ξ+a0(s)) ds dξ. u(x, t) = (f ∗ g)(x) (2π) n 2 = 1 (2π)n Rn Rn ei(x−y)·ξ e− t 0 (ia(s)·ξ+a0(s)) ds dξ f(y) dy. b) Use Duhamel’s Principle and the result from (a). u(x, t) = t 0 U(x, t − s, s) ds, where U(x, t, s) solves ∂U ∂t + n k=1 ak(t) ∂U ∂xk + a0(t)U = 0, U(x, 0, s) = f(x, s).
$$u(x,t) = \int_0^t U(x, t-s, s)\, ds = \frac{1}{(2\pi)^n}\int_0^t\!\!\int_{\mathbb{R}^n}\!\!\int_{\mathbb{R}^n} e^{i(x-y)\cdot\xi}\, e^{-\int_0^{t-s}\left(i\,a(\tau)\cdot\xi + a_0(\tau)\right) d\tau}\, d\xi\; f(y,s)\, dy\, ds.$$
  • 372. Partial Differential Equations Igor Yanovsky, 2005 372 Problem (S’93, #2). a) Define the Fourier transform 83 f(ξ) = ∞ −∞ eixξ f(x) dx. State the inversion theorem. If f(ξ) = ⎧ ⎪⎨ ⎪⎩ π, |ξ| < a, 1 2 π, |ξ| = a, 0, |ξ| > a, where a is a real constant, what f(x) does the inversion theorem give? b) Show that f(x − b) = eiξb f(x), where b is a real constant. Hence, using part (a) and Parseval’s theorem, show that 1 π ∞ −∞ sin a(x + z) x + z sina(x + ξ) x + ξ dx = sina(z − ξ) z − ξ , where z and ξ are real constants. Proof. a) • The inverse Fourier transform for f ∈ L1 (Rn ): f∨ (ξ) = 1 2π ∞ −∞ e−ixξ f(x) dx for ξ ∈ R. Fourier Inversion Theorem: Assume f ∈ L2 (R). Then f(x) = 1 2π ∞ −∞ e−ixξ f(ξ) dξ = 1 2π ∞ −∞ ∞ −∞ ei(y−x)ξ f(y) dy dξ = (f)∨ (x). • Parseval’s theorem (Plancherel’s theorem) (for this definition of the Fourier transform). Assume f ∈ L1(Rn) ∩ L2(Rn). Then f, f∨ ∈ L2(Rn) and 1 2π ||f||L2(Rn) = ||f∨ ||L2(Rn) = ||f||L2(Rn), or ∞ −∞ |f(x)|2 dx = 1 2π ∞ −∞ |f(ξ)|2 dξ. Also, ∞ −∞ f(x) g(x)dx = 1 2π ∞ −∞ f(ξ) g(ξ) dξ. • We can write f(ξ) = π, |ξ| < a, 0, |ξ| > a. 83 Note that the Fourier transform is defined incorrectly here. There should be ‘-’ sign in e−ixξ . Need to be careful, since the consequences of this definition propagate throughout the solution.
  • 373. Partial Differential Equations Igor Yanovsky, 2005 373 f(x) = (f(ξ))∨ = 1 2π ∞ −∞ e−ixξ f(ξ) dξ = 1 2π −a −∞ 0 dξ + 1 2π a −a e−ixξ π dξ + 1 2π ∞ a 0 dξ = 1 2 a −a e−ixξ dξ = − 1 2ix e−ixξ ξ=a ξ=−a = − 1 2ix e−iax − eiax = sinax x . b) • Let f(x − b y ) = g(x), and determinte g(ξ): f(x − b)(ξ) = g(ξ) = R eixξ g(x) dx = R ei(y+b)ξ f(y) dy = R eiyξ eibξ f(y) dy = eibξ f(ξ). • With f(x) = sin ax x (from (a)), we have 1 π ∞ −∞ sina(x + z) x + z sin a(x + s) x + s dx = 1 π ∞ −∞ f(x + z)f(x + s) dx (x = x + s, dx = dx) = 1 π ∞ −∞ f(x + z − s)f(x ) dx (Parseval’s) = 1 π 1 2π ∞ −∞ f(x + z − s)f(x ) dξ part (b) = 1 2π2 ∞ −∞ f(ξ) e−i(z−s)ξ f(ξ) dξ = 1 2π2 a −a f(ξ) 2 e−i(z−s)ξ dξ = 1 2π2 a −a π2 e−i(z−s)ξ dξ = 1 2 a −a e−i(z−s)ξ dξ = 1 −2i(z − s) e−i(z−s)ξ ξ=a ξ=−a = ei(z−s)a − e−i(z−s)a 2i(z − s) = sin a(z − s) z − s .
  • 374. Partial Differential Equations Igor Yanovsky, 2005 374 Problem (F’03, #5). ❶ State Parseval’s relation for Fourier transforms. ❷ Find the Fourier transform ˆf(ξ) of f(x) = eiαx/2 √ πy, |x| ≤ y 0, |x| > y, in which y and α are constants. ❸ Use this in Parseval’s relation to show that ∞ −∞ sin2 (α − ξ)y (α − ξ)2 dξ = πy. What does the transform ˆf(ξ) become in the limit y → ∞? ❹ Use Parseval’s relation to show that sin(α − β)y (α − β) = 1 π ∞ −∞ sin(α − ξ)y (α − ξ) sin(β − ξ)y (β − ξ) dξ. Proof. • f ∈ L2(R). Define its Fourier transform u by f(ξ) = 1 √ 2π R e−ixξ f(x) dx for ξ ∈ R. ❶ Parseval’s theorem (Plancherel’s theorem): Assume f ∈ L1 (Rn ) ∩ L2 (Rn ). Then f, f∨ ∈ L2 (Rn ) and ||f||L2(Rn) = ||f∨ ||L2(Rn) = ||f||L2(Rn), or ∞ −∞ |f(x)|2 dx = ∞ −∞ |f(ξ)|2 dξ. Also, ∞ −∞ f(x) g(x)dx = ∞ −∞ f(ξ) g(ξ) dξ. ❷ Find the Fourier transform of f: f(ξ) = 1 √ 2π R e−ixξ f(x) dx = 1 √ 2π y −y e−ixξ eiαx 2 √ πy dx = 1 2π √ 2y y −y ei(α−ξ)x dx = 1 2π √ 2y 1 i(α − ξ) ei(α−ξ)x x=y x=−y = 1 2iπ √ 2y(α − ξ) ei(α−ξ)y − e−i(α−ξ)y = siny(α − ξ) π √ 2y(α − ξ) . ❸ Parseval’s theorem gives: ∞ −∞ |f(ξ)|2 dξ = ∞ −∞ |f(x)|2 dx, ∞ −∞ sin2 y(α − ξ) π22y(α − ξ)2 dξ = y −y e2iαx 4πy dx, ∞ −∞ sin2 y(α − ξ) (α − ξ)2 dξ = π 2 y −y dx, ∞ −∞ sin2 y(α − ξ) (α − ξ)2 dξ = πy.
  • 375. Partial Differential Equations Igor Yanovsky, 2005 375 ❹ We had f(ξ) = siny(α − ξ) π √ 2y(α − ξ) . • We make change of variables: α − ξ = β − ξ . Then, ξ = ξ + α − β. We have f(ξ) = f(ξ + α − β) = siny(β − ξ ) (β − ξ ) , or f(ξ + α − β) = siny(β − ξ) (β − ξ) . • We will also use the following result. Let f(ξ + a ξ ) = g(ξ), and determinte g(ξ)∨ : f(ξ + a)∨ = g(ξ)∨ = 1 √ 2π R eixξ g(ξ) dξ = 1 √ 2π R eix(ξ −a) f(ξ ) dξ = e−ixa f(x). • Using these results, we have 1 π ∞ −∞ sin(α − ξ)y (α − ξ) sin(β − ξ)y (β − ξ) dξ = 1 π (π 2y)2 ∞ −∞ f(ξ) f(ξ + α − β) dξ = 2πy ∞ −∞ f(x) e−(α−β)ix f(x) dx = 2πy ∞ −∞ f(x)2 e−(α−β)ix dx = 2πy y −y e2iαx 4πy e−(α−β)ix dx = 1 2 y −y e−(α−β)ix dx = 1 −2i(α − β) e−(α−β)ix x=y x=−y = 1 −2i(α − β) e−(α−β)iy − e(α−β)iy = sin(α − β)y α − β .
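A quick numerical check of the identity in step ❸ is sketched below (not part of the original solution). It assumes the sample values $\alpha = 0.7$ and $y = 2$; since the integrand decays only like $1/\xi^2$, the integral is truncated to a wide symmetric interval, which introduces an $O(1/R)$ error.

# Sketch: check numerically that \int sin^2((alpha - xi) y) / (alpha - xi)^2 dxi = pi*y.
import numpy as np

alpha, y = 0.7, 2.0                          # sample values (an assumption)
R, N = 2000.0, 2_000_000                     # truncation radius and number of grid points
xi = np.linspace(alpha - R, alpha + R, N)    # grid centered at the removable singularity
u = (alpha - xi) * y
integrand = y**2 * np.sinc(u / np.pi)**2     # = sin^2((alpha - xi) y)/(alpha - xi)^2, finite at xi = alpha
val = np.sum(integrand) * (xi[1] - xi[0])    # simple Riemann sum

print(val, np.pi * y)                        # agree up to the O(1/R) truncation error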
  • 376. Partial Differential Equations Igor Yanovsky, 2005 376 Problem (S’95, #5). For the Laplace equation f ≡ ∂2 ∂x2 + ∂2 ∂y2 f = 0 (30.3) in the upper half plane y ≥ 0, consider • the Dirichlet problem f(x, 0) = g(x); • the Neumann problem ∂ ∂y f(x, 0) = h(x). Assume that f, g and h are 2π periodic in x and that f is bounded at infinity. Find the Fourier transform N of the Dirichlet-Neumann map. In other words, find an operator N taking the Fourier transform of g to the Fourier transform of h; i.e. Ngk = hk. Proof. We solve the problem by two methods. ❶ Fourier Series. Since f is 2π-periodic in x, we can write f(x, y) = ∞ n=−∞ an(y) einx . Plugging this into (30.3), we get the ODE: ∞ n=−∞ − n2 an(y)einx + an(y)einx = 0, an(y) − n2 an(y) = 0. Initial conditions give: (g and h are 2π-periodic in x) f(x, 0) = ∞ n=−∞ an(0)einx = g(x) = ∞ n=−∞ gneinx ⇒ an(0) = gn. fy(x, 0) = ∞ n=−∞ an(0)einx = h(x) = ∞ n=−∞ hneinx ⇒ an(0) = hn. Thus, the problems are: an(y) − n2 an(y) = 0, an(0) = gn, (Dirichlet) an(0) = hn. (Neumann) ⇒ an(y) = bneny + cne−ny , n = 1, 2, . . .; a0(y) = b0y + c0. an(y) = nbneny − ncne−ny , n = 1, 2, . . .; a0(y) = b0. Since f is bounded at y = ±∞, we have: bn = 0 for n > 0, cn = 0 for n < 0, b0 = 0, c0 arbitrary.
  • 377. Partial Differential Equations Igor Yanovsky, 2005 377 • n > 0: an(y) = cne−ny , an(0) = cn = gn, (Dirichlet) an(0) = −ncn = hn. (Neumann) ⇒ −ngn = hn. • n < 0: an(y) = bneny , an(0) = bn = gn, (Dirichlet) an(0) = nbn = hn. (Neumann) ⇒ ngn = hn. −|n|gn = hn, n = 0. • n = 0 : a0(y) = c0, a0(0) = c0 = g0, (Dirichlet) a0(0) = 0 = h0. (Neumann) Note that solution f(x, y) may be written as f(x, y) = ∞ n=−∞ an(y) einx = a0(y) + −1 n=−∞ an(y) einx + ∞ n=1 an(y) einx = c0 + −1 n=−∞ bneny einx + ∞ n=1 cne−ny einx = g0 + −1 n=−∞ gneny einx + ∞ n=1 gne−ny einx , (Dirichlet) c0 + −1 n=−∞ hn n eny einx + ∞ n=1 −hn n e−ny einx . (Neumann) ❷ Fourier Transform. The Fourier transform of f(x, y) in x is: f(ξ, y) = 1 √ 2π ∞ −∞ e−ixξ f(x, y) dx, f(x, y) = 1 √ 2π ∞ −∞ eixξ f(ξ, y) dξ. (iξ)2 f(ξ, y) + fyy(ξ, y) = 0, fyy − ξ2 f = 0. The solution to this ODE is: f(ξ, y) = c1eξy + c2e−ξy . For ξ > 0, c1 = 0; for ξ < 0, c2 = 0. • ξ > 0 : f(ξ, y) = c2e−ξy , fy(ξ, y) = −ξc2e−ξy , c2 = f(ξ, 0) = 1 √ 2π ∞ −∞ e−ixξ f(x, 0) dx = 1 √ 2π ∞ −∞ e−ixξ g(x) dx = g(ξ), (Dirichlet) −ξc2 = fy(ξ, 0) = 1 √ 2π ∞ −∞ e−ixξ fy(x, 0) dx = 1 √ 2π ∞ −∞ e−ixξ h(x) dx = h(ξ). (Neumann) ⇒ −ξg(ξ) = h(ξ).
  • 378. Partial Differential Equations Igor Yanovsky, 2005 378 • ξ < 0 : f(ξ, y) = c1eξy , fy(ξ, y) = ξc1eξy , c1 = f(ξ, 0) = 1 √ 2π ∞ −∞ e−ixξ f(x, 0) dx = 1 √ 2π ∞ −∞ e−ixξ g(x) dx = g(ξ), (Dirichlet) ξc1 = fy(ξ, 0) = 1 √ 2π ∞ −∞ e−ixξ fy(x, 0) dx = 1 √ 2π ∞ −∞ e−ixξ h(x) dx = h(ξ). (Neumann) ⇒ ξg(ξ) = h(ξ). −|ξ|g(ξ) = h(ξ).
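The Dirichlet-Neumann relation $\hat{h} = -|\xi|\,\hat{g}$ (equivalently $h_n = -|n|\,g_n$) can be checked on a single Fourier mode; the SymPy sketch below does this for the assumed sample boundary data $g(x) = \cos nx$, whose bounded harmonic extension to the upper half plane is $f(x,y) = e^{-ny}\cos nx$ for $n > 0$.

# Sketch: the Dirichlet-to-Neumann symbol -|n| for the upper half plane, checked on one mode.
import sympy as sp

x, y = sp.symbols('x y', real=True)
n = sp.symbols('n', positive=True)

f = sp.exp(-n * y) * sp.cos(n * x)           # bounded, harmonic in y > 0, f(x,0) = cos(n x)
print(sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2)))    # 0: f is harmonic
h = sp.diff(f, y).subs(y, 0)                 # Neumann data at y = 0
print(sp.simplify(h + n * sp.cos(n * x)))    # 0: h = -n*g, i.e. h_n = -|n| g_n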
  • 379. Partial Differential Equations Igor Yanovsky, 2005 379 Problem (F’97, #3). Consider the Dirichlet problem in the half-space xn > 0, n ≥ 2: u + a ∂u ∂xn + k2 u = 0, xn > 0 u(x , 0) = f(x ), x = (x1, . . ., xn−1). Here a and k are constants. Use the Fourier transform to show that for any f(x ) ∈ L2(Rn−1) there exists a solution u(x , xn) of the Dirichlet problem such that Rn |u(x , xn)|2 dx ≤ C for all 0 < xn < +∞. Proof. 84 Denote ξ = (ξ , ξn). Transform in the first n − 1 variables: −|ξ |2 u(ξ , xn) + ∂2u ∂x2 n (ξ , xn) + a ∂u ∂xn (ξ , xn) + k2 u(ξ , xn) = 0. Thus, the ODE and initial conditions of the transformed problem become: uxnxn + auxn + (k2 − |ξ |2 )u = 0, u(ξ , 0) = f(ξ ). With the anzats u = cesxn , we obtain s2 + as + (k2 − |ξ |2) = 0, and s1,2 = −a ± a2 − 4(k2 − |ξ |2) 2 . Choosing only the negative root, we obtain the solution: 85 u(ξ , xn) = c(ξ ) e −a− √ a2−4(k2−|ξ |2) 2 xn . u(ξ , 0) = c = f(ξ ). Thus, u(ξ , xn) = f(ξ ) e −a− √ a2−4(k2−|ξ |2) 2 xn . Parseval’s theorem gives: ||u||2 L2(Rn−1) = ||u||2 L2(Rn−1) = Rn−1 |u(ξ , xn)|2 dξ = Rn−1 f(ξ ) e −a− √ a2−4(k2−|ξ |2) 2 xn 2 dξ ≤ Rn−1 f(ξ ) 2 dξ = ||f||2 L2(Rn−1) = ||f||2 L2(Rn−1) ≤ C, since f(x ) ∈ L2 (Rn−1 ). Thus, u(x , xn) ∈ L2 (Rn−1 ). 84 Note that the last element of x = (x , xn) = (x1, . . . , xn−1, xn), i.e. xn, plays a role of time t. As such, the PDE may be written as u + utt + aut + k2 u = 0. 85 Note that a > 0 should have been provided by the statement of the problem.
  • 380. Partial Differential Equations Igor Yanovsky, 2005 380 Problem (F’89, #7). Find the following fundamental solutions a) ∂G(x, y, t) ∂t = a(t) ∂2G(x, y, t) ∂x2 + b(t) ∂G(x, y, t) ∂x + c(t)G(x, y, t) for t > 0 G(x, y, 0) = δ(x − y), where a(t), b(t), c(t) are continuous functions on [0, +∞], a(t) > 0 for t > 0. b) ∂G ∂t (x1, . . ., xn, y1, . . ., yn, t) = n k=1 ak(t) ∂G ∂xk for t > 0, G(x1, . . ., xn, y1, . . ., yn, 0) = δ(x1 − y1)δ(x2 − y2) . . .δ(xn − yn). Proof. a) We use the Fourier transform to solve this problem. Transform the equation in the first variable only. That is, G(ξ, y, t) = 1 √ 2π R e−ixξ G(x, y, t) dx. The equation is transformed to an ODE, that can be solved: Gt(ξ, y, t) = −a(t) ξ2 G(ξ, y, t) + i b(t) ξ G(ξ, y, t) + c(t) G(ξ, y, t), Gt(ξ, y, t) = − a(t) ξ2 + i b(t) ξ + c(t) G(ξ, y, t), G(ξ, y, t) = c e t 0 [−a(s)ξ2+i b(s)ξ+c(s)] ds . We can also transform the initial condition: G(ξ, y, 0) = δ(x − y)(ξ) = e−iyξ δ(ξ) = 1 √ 2π e−iyξ . Thus, the solution of the transformed problem is: G(ξ, y, t) = 1 √ 2π e−iyξ e t 0 [−a(s)ξ2+i b(s)ξ+c(s)] ds . The inverse Fourier transform gives the solution to the original problem: G(x, y, t) = G(ξ, y, t) ∨ = 1 √ 2π R eixξ G(ξ, y, t) dξ = 1 √ 2π R eixξ 1 √ 2π e−iyξ e t 0 [−a(s)ξ2+i b(s)ξ+c(s)] ds dξ = 1 2π R ei(x−y)ξ e t 0 [−a(s)ξ2+i b(s)ξ+c(s)] ds dξ. b) Denote x = (x1, . . ., xn), y = (y1, . . ., yn). Transform in x: G(ξ, y, t) = 1 (2π) n 2 Rn e−ix·ξ G(x, y, t) dx. The equation is transformed to an ODE, that can be solved: Gt(ξ, y, t) = n k=1 ak(t) iξk G(ξ, y, t), G(ξ, y, t) = c ei t 0 [ n k=1 ak(s) ξk] ds .
  • 381. Partial Differential Equations Igor Yanovsky, 2005 381 We can also transform the initial condition: G(ξ, y, 0) = δ(x1 − y1)δ(x2 − y2) . . .δ(xn − yn) (ξ) = e−iy·ξ δ(ξ) = 1 (2π) n 2 e−iy·ξ . Thus, the solution of the transformed problem is: G(ξ, y, t) = 1 (2π) n 2 e−iy·ξ ei t 0 [ n k=1 ak(s) ξk] ds . The inverse Fourier transform gives the solution to the original problem: G(x, y, t) = G(ξ, y, t) ∨ = 1 (2π) n 2 Rn eix·ξ G(ξ, y, t) dξ = 1 (2π) n 2 Rn eix·ξ 1 (2π) n 2 e−iy·ξ ei t 0 [ n k=1 ak(s) ξk] ds dξ = 1 (2π)n Rn ei(x−y)·ξ ei t 0 [ n k=1 ak(s) ξk] ds dξ.
  • 382. Partial Differential Equations Igor Yanovsky, 2005 382 Problem (W’02, #7). Consider the equation ∂2 ∂x2 1 + · · · + ∂2 ∂x2 n u = f in Rn , (30.4) where f is an integrable function (i.e. f ∈ L1(Rn)), satisfying f(x) = 0 for |x| ≥ R. Solve (30.4) by Fourier transform, and prove the following results. a) There is a solution of (30.4) belonging to L2(Rn) if n > 4. b) If Rn f(x) dx = 0, there is a solution of (30.4) belonging to L2 (Rn ) if n > 2. Proof. u = f, −|ξ|2 u(ξ) = f(ξ), u(ξ) = − 1 |ξ|2 f(ξ), ξ ∈ Rn , u(x) = − f(ξ) |ξ|2 ∨ . a) Then ||u||L2(Rn) = Rn |f(ξ)|2 |ξ|4 dξ 1 2 ≤ |ξ|<1 |f(ξ)|2 |ξ|4 dξ A + |ξ|≥1 |f(ξ)|2 |ξ|4 dξ B 1 2 . Notice, ||f||2 = ||f||2 ≥ B, so B < ∞. Use polar coordinates on A. A = |ξ|<1 |f(ξ)|2 |ξ|4 dξ = 1 0 Sn−1 |f|2 r4 rn−1 dSn−1 dr = 1 0 Sn−1 |f|2 rn−5 dSn−1 dr. If n > 4, A ≤ Sn−1 |f|2 dSn−1 = ||f||2 2 < ∞. ||u||L2(Rn) = ||u||L2(Rn) = (A + B) 1 2 < ∞.
  • 383. Partial Differential Equations Igor Yanovsky, 2005 383 b) We have u(x, t) = − f(ξ) |ξ|2 ∨ = − 1 (2π) n 2 ∞ −∞ eix·ξ f(ξ) |ξ|2 dξ = − 1 (2π) n 2 ∞ −∞ eix·ξ |ξ|2 1 (2π) n 2 ∞ −∞ e−iy·ξ f(y) dy dξ = − 1 (2π)n ∞ −∞ f(y) ∞ −∞ ei(x−y)·ξ |ξ|2 dξ dy = − 1 (2π)n ∞ −∞ f(y) 1 0 Sn−1 ei(x−y)r r2 rn−1 dSn−1 dr dy = − 1 (2π)n ∞ −∞ f(y) 1 0 Sn−1 ei(x−y)r rn−3 dSn−1 dr ≤ M < ∞, if n>2. dy. |u(x, t)| = 1 (2π)n ∞ −∞ M f(y) dy < ∞.
  • 384. Partial Differential Equations Igor Yanovsky, 2005 384 Problem (F’02, #7). For the right choice of the constant c, the function F(x, y) = c(x + iy)−1 is a fundamental solution for the equation ∂u ∂x + i ∂u ∂y = f in R2 . Find the right choice of c, and use your answer to compute the Fourier transform (in distribution sense) of (x + iy)−1 . Proof. 86 = ∂ ∂x + i ∂ ∂y ∂ ∂x − i ∂ ∂y . F1(x, y) = 1 2π log |z| is the fundamental solution of the Laplacian. z = x + iy. F1(x, y) = δ, ∂ ∂x + i ∂ ∂y ∂ ∂x − i ∂ ∂y F(x, y) = δ. hx + ihy = e−i(xξ1+yξ2) . Suppose h = h(xξ1 + yξ2) or h = ce−i(xξ1+yξ2). ⇒ c − iξ1 e−i(xξ1+yξ2) − i2 ξ2 e−i(xξ1+yξ2) = −ic(ξ1 − iξ2) e−i(xξ1+yξ2) ≡ e−i(xξ1+yξ2) , ⇒ −ic(ξ1 − iξ2) = 1, ⇒ c = − 1 i(ξ1 − iξ2) , ⇒ h(x, y) = − 1 i(ξ1 − iξ2) e−i(xξ1+yξ2) . Integrate by parts: 1 x + iy (ξ) = R2 e−i(xξ1+yξ2) 1 i(ξ1 − iξ2) ∂ ∂x + i ∂ ∂y 1 (x + iy) − 0 dxdy = 1 i(ξ1 − iξ2) = 1 i(ξ2 + iξ1) . 86 Alan solved in this problem in class.
  • 385. Partial Differential Equations Igor Yanovsky, 2005 385 31 Laplace Transform If u ∈ L1(R+), we define its Laplace transform to be L[u(t)] = u# (s) = ∞ 0 e−st u(t) dt (s > 0). In practice, for a PDE involving time, it may be useful to perform a Laplace transform in t, holding the space variables x fixed. The inversion formula for the Laplace transform is: u(t) = L−1 [u# (s)] = 1 2πi c+i∞ c−i∞ est u# (s) ds. Example: f(t) = 1. L[1] = ∞ 0 e−st · 1 dt = − 1 s e−st t=∞ t=0 = 1 s for s > 0. Example: f(t) = eat. L[eat ] = ∞ 0 e−st eat dt = ∞ 0 e(a−s)t dt = 1 a − s e(a−s)t t=∞ t=0 = 1 s − a for s > a. Convolution: We want to find an inverse Laplace transform of 1 s · 1 s2+1 . L−1 1 s L[f] · 1 s2 + 1 L[g] = f ∗ g = t 0 1 · sint dt = 1 − cos t. Partial Derivatives: u = u(x, t) L[ut] = ∞ 0 e−st ut dt = e−st u(x, t) t=∞ t=0 + s ∞ 0 e−st u dt = sL[u] − u(x, 0), L[utt] = ∞ 0 e−st utt dt = e−st ut t=∞ t=0 + s ∞ 0 e−st ut dt = −ut(x, 0) + sL[ut] = s2 L[u] − su(x, 0) − ut(x, 0), L[ux] = ∞ 0 e−st ux dt = ∂ ∂x L[u], L[uxx] = ∞ 0 e−st uxx dt = ∂2 ∂x2 L[u]. Heat Equation: Consider ut − u = 0 in U × (0, ∞) u = f on U × {t = 0}, and perform a Laplace transform with respect to time: L[ut] = ∞ 0 e−st ut dt = sL[u] − u(x, 0) = sL[u] − f(x), L[ u] = ∞ 0 e−st u dt = L[u].
  • 386. Partial Differential Equations Igor Yanovsky, 2005 386 Thus, the transformed problem is: sL[u] − f(x) = L[u]. Writing v(x) = L[u], we have − v + sv = f in U. Thus, the solution of this equation with RHS f is the Laplace transform of the solution of the heat equation with initial data f.
  • 387. Partial Differential Equations Igor Yanovsky, 2005 387 Table of Laplace Transforms: L[f] = f# (s) L[sinat] = a s2 + a2 , s > 0 L[cos at] = s s2 + a2 , s > 0 L[sinhat] = a s2 − a2 , s > |a| L[coshat] = s s2 − a2 , s > |a| L[eat sinbt] = b (s − a)2 + b2 , s > a L[eat cos bt] = s − a (s − a)2 + b2 , s > a L[tn ] = n! sn+1 , s > 0 L[tn eat ] = n! (s − a)n+1 , s > a L[H(t − a)] = e−as s , s > 0 L[H(t − a) f(t − a)] = e−as L[f], L[af(t) + bg(t)] = aL[f] + bL[g], L[f(t) ∗ g(t)] = L[f] L[g], L t 0 g(t − t) f(t ) dt = L[f] L[g], L df dt = sL[f] − f(0), L d2 f dt2 = s2 L[f] − sf(0) − f (0), f = df dt L dnf dtn = sn L[f] − sn−1 f(0) − . . . − fn−1 (0), L[f(at)] = 1 a f# s a , L[ebt f(t)] = f# (s − b), L[tf(t)] = − d ds L[f], L f(t) t = ∞ s f# (s ) ds , L t 0 f(t ) dt = 1 s L[f], L[J0(at)] = (s2 + a2 )−1 2 , L[δ(t − a)] = e−sa . Example: f(t) = sint. After integrating by parts twice, we obtain: L[sint] = ∞ 0 e−st sint dt = 1 − s2 ∞ 0 e−st sin t dt, ⇒ ∞ 0 e−st sint dt = 1 1 + s2 .
Example: $f(t) = t^n$.
\[ \mathcal L[t^n] = \int_0^\infty e^{-st}t^n\,dt = -\frac{t^n e^{-st}}{s}\Big|_0^\infty + \frac ns\int_0^\infty e^{-st}t^{n-1}\,dt
= \frac ns\,\mathcal L[t^{n-1}] = \frac ns\cdot\frac{n-1}{s}\,\mathcal L[t^{n-2}] = \cdots = \frac{n!}{s^n}\,\mathcal L[1] = \frac{n!}{s^{n+1}}. \]
Problem (F'00, #6). Consider the initial-boundary value problem
\[ \begin{cases} u_t - u_{xx} + au = 0, & t > 0,\ x > 0, \\ u(x,0) = 0, & x > 0, \\ u(0,t) = g(t), & t > 0, \end{cases} \]
where $g(t)$ is a continuous function with compact support and $a$ is a constant. Find the explicit solution of this problem.

Proof. We solve this problem using the Laplace transform in $t$:
\[ \mathcal L[u(x,t)] = u^{\#}(x,s) = \int_0^\infty e^{-st}u(x,t)\,dt \qquad (s>0). \]
\[ \mathcal L[u_t] = \int_0^\infty e^{-st}u_t\,dt = e^{-st}u(x,t)\Big|_{t=0}^{t=\infty} + s\int_0^\infty e^{-st}u\,dt = s\,u^{\#}(x,s) - u(x,0) = s\,u^{\#}(x,s) \quad (\text{since } u(x,0)=0), \]
\[ \mathcal L[u_{xx}] = \int_0^\infty e^{-st}u_{xx}\,dt = \frac{\partial^2}{\partial x^2}u^{\#}(x,s), \qquad
\mathcal L[u(0,t)] = u^{\#}(0,s) = \int_0^\infty e^{-st}g(t)\,dt = g^{\#}(s). \]
Plugging these into the equation, we obtain an ODE in $x$ for $u^{\#}$:
\[ s\,u^{\#}(x,s) - \frac{\partial^2}{\partial x^2}u^{\#}(x,s) + a\,u^{\#}(x,s) = 0, \qquad\text{i.e.}\qquad u^{\#}_{xx} - (s+a)\,u^{\#} = 0, \qquad u^{\#}(0,s) = g^{\#}(s). \]
The general solution of this ODE is
\[ u^{\#}(x,s) = c_1 e^{\sqrt{s+a}\,x} + c_2 e^{-\sqrt{s+a}\,x}. \]
Since we want $u$ to remain bounded as $x \to \infty$, we take $c_1 = 0$, so $u^{\#}(x,s) = c_2 e^{-\sqrt{s+a}\,x}$; and $u^{\#}(0,s) = c_2 = g^{\#}(s)$, thus
\[ u^{\#}(x,s) = g^{\#}(s)\,e^{-\sqrt{s+a}\,x}. \]
To obtain $u(x,t)$, we take the inverse Laplace transform of $u^{\#}(x,s)$. Since $u^{\#}$ is the product of $\mathcal L[g] = g^{\#}(s)$ and $\mathcal L[f] = e^{-\sqrt{s+a}\,x}$, the convolution rule gives
\[ u(x,t) = g * f = g * \mathcal L^{-1}\big[e^{-\sqrt{s+a}\,x}\big]
= \int_0^t g(t-t')\,\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{st'}\,e^{-\sqrt{s+a}\,x}\,ds\,dt'. \]
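For reference (a sketch, not part of the original solution): the inner contour integral can be evaluated using the classical pair $\mathcal L^{-1}\big[e^{-x\sqrt s}\big] = \frac{x}{2\sqrt{\pi t^3}}\,e^{-x^2/(4t)}$ together with the shift rule $\mathcal L[e^{bt}f(t)] = f^{\#}(s-b)$ from the table above (with $b = -a$), giving the kernel $e^{-at}\,\frac{x}{2\sqrt{\pi t^3}}\,e^{-x^2/(4t)}$ and hence $u(x,t) = \int_0^t g(t-\tau)\,e^{-a\tau}\,\frac{x}{2\sqrt{\pi \tau^3}}\,e^{-x^2/(4\tau)}\,d\tau$. The snippet below compares this kernel with a numerical Bromwich inversion; it assumes mpmath ≥ 1.0, which provides invertlaplace.

```python
import mpmath as mp

mp.mp.dps = 25
x, a = mp.mpf(1), mp.mpf('0.5')                  # sample parameter values (arbitrary)

K_hat = lambda s: mp.exp(-x*mp.sqrt(s + a))      # transform-side kernel in u#(x,s) = g#(s) K_hat(s)
K = lambda t: mp.exp(-a*t) * x * mp.exp(-x**2/(4*t)) / (2*mp.sqrt(mp.pi*t**3))

for t in (mp.mpf('0.3'), mp.mpf('1.0'), mp.mpf('2.5')):
    print(mp.invertlaplace(K_hat, t, method='talbot'), K(t))   # the two columns should agree
```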
Problem (F'04, #8). The function $y(x,t)$ satisfies the partial differential equation
\[ x\,\frac{\partial y}{\partial x} + \frac{\partial^2 y}{\partial x\,\partial t} + 2y = 0 \]
and the boundary conditions
\[ y(x,0) = 1, \qquad y(0,t) = e^{-at}, \]
where $a \ge 0$. Find the Laplace transform, $y^{\#}(x,s)$, of the solution, and hence derive an expression for $y(x,t)$ in the domain $x \ge 0$, $t \ge 0$.

Proof. We change the notation $y \to u$. We have
\[ x u_x + u_{xt} + 2u = 0, \qquad u(x,0) = 1, \qquad u(0,t) = e^{-at}. \]
The Laplace transform is defined as
\[ \mathcal L[u(x,t)] = u^{\#}(x,s) = \int_0^\infty e^{-st}u(x,t)\,dt \qquad (s>0). \]
\[ \mathcal L[x u_x] = \int_0^\infty e^{-st}\,x\,u_x\,dt = x\int_0^\infty e^{-st}u_x\,dt = x\,(u^{\#})_x, \]
\[ \mathcal L[u_{xt}] = \int_0^\infty e^{-st}u_{xt}\,dt = e^{-st}u_x(x,t)\Big|_{t=0}^{t=\infty} + s\int_0^\infty e^{-st}u_x\,dt = s\,(u^{\#})_x - u_x(x,0) = s\,(u^{\#})_x \]
(since $u(x,0) = 1$ is independent of $x$, we have $u_x(x,0) = 0$),
\[ \mathcal L[u(0,t)] = u^{\#}(0,s) = \int_0^\infty e^{-st}e^{-at}\,dt = \int_0^\infty e^{-(s+a)t}\,dt = -\frac{1}{s+a}\,e^{-(s+a)t}\Big|_{t=0}^{t=\infty} = \frac{1}{s+a}. \]
Plugging these into the equation, we obtain the ODE in $u^{\#}$:
\[ (x+s)\,(u^{\#})_x + 2u^{\#} = 0, \qquad u^{\#}(0,s) = \frac{1}{s+a}, \]
which can be solved:
\[ \frac{(u^{\#})_x}{u^{\#}} = -\frac{2}{x+s} \;\Rightarrow\; \log u^{\#} = -2\log(x+s) + c_1(s) \;\Rightarrow\; u^{\#} = \frac{c_2(s)}{(x+s)^2}. \]
From the boundary condition at $x = 0$:
\[ u^{\#}(0,s) = \frac{c_2(s)}{s^2} = \frac{1}{s+a} \;\Rightarrow\; c_2(s) = \frac{s^2}{s+a},
\qquad\text{so}\qquad u^{\#}(x,s) = \frac{s^2}{(s+a)(x+s)^2}. \]
To obtain $u(x,t)$, we take the inverse Laplace transform of $u^{\#}(x,s)$:
\[ u(x,t) = \mathcal L^{-1}[u^{\#}(x,s)] = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} e^{st}\,\frac{s^2}{(s+a)(x+s)^2}\,ds. \]
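The remaining inversion can also be done by partial fractions, since the table above contains $\mathcal L[t^n e^{at}] = n!/(s-a)^{n+1}$. The sketch below is not part of the original solution; the helper invert_term and the positivity assumptions on the symbols are illustrative. It lets SymPy do the decomposition, inverts term by term, and verifies that the result satisfies the PDE and both boundary conditions.

```python
import sympy as sp

x, t, s, a = sp.symbols('x t s a', positive=True)

# transformed solution found above: u#(x,s) = s^2 / ((s+a)(x+s)^2)
U = s**2 / ((s + a)*(x + s)**2)

def invert_term(term):
    """Invert c/(s - p)^m using L[t^(m-1) e^(pt) / (m-1)!] = 1/(s - p)^m."""
    num, den = sp.fraction(sp.together(term))
    (p, m), = sp.roots(den, s).items()           # single pole p of multiplicity m
    c = num / sp.LC(den, s)
    return c * t**(m - 1) / sp.factorial(m - 1) * sp.exp(p*t)

u = sp.simplify(sum(invert_term(f) for f in sp.Add.make_args(sp.apart(U, s))))

print(sp.simplify(x*sp.diff(u, x) + sp.diff(u, x, t) + 2*u))   # 0   (the PDE)
print(sp.simplify(u.subs(x, 0)))                               # exp(-a*t)
print(sp.simplify(u.subs(t, 0)))                               # 1
```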
Problem (F'90, #1). Using the Laplace transform, or any other convenient method, solve the Volterra integral equation
\[ u(x) = \sin x + \int_0^x \sin(x-y)\,u(y)\,dy. \]

Proof. Rewrite the equation (with $t$ as the variable):
\[ u(t) = \sin t + \int_0^t \sin(t-t')\,u(t')\,dt' = \sin t + (\sin t) * u. \]
Taking the Laplace transform of each of the terms:
\[ \mathcal L[u(t)] = u^{\#}(s) = \int_0^\infty e^{-st}u(t)\,dt, \qquad
\mathcal L[\sin t] = \frac{1}{1+s^2}, \qquad
\mathcal L[(\sin t) * u] = \mathcal L[\sin t]\,\mathcal L[u] = \frac{u^{\#}}{1+s^2}. \]
Plugging these into the equation:
\[ u^{\#} = \frac{1}{1+s^2} + \frac{u^{\#}}{1+s^2} = \frac{u^{\#}+1}{1+s^2} \quad\Rightarrow\quad u^{\#}(s) = \frac{1}{s^2}. \]
To obtain $u(t)$, we take the inverse Laplace transform of $u^{\#}(s)$:
\[ u(t) = \mathcal L^{-1}[u^{\#}(s)] = \mathcal L^{-1}\Big[\frac{1}{s^2}\Big] = t. \]
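A quick symbolic check that $u(t) = t$ does satisfy the integral equation (a sketch; the symbol names are arbitrary):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)
u = lambda t: t                                              # candidate solution
rhs = sp.sin(x) + sp.integrate(sp.sin(x - y)*u(y), (y, 0, x))
print(sp.simplify(rhs - u(x)))                               # 0, so u(t) = t solves the equation
```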
Problem (F'91, #5). In what follows, the Laplace transform of $x(t)$ is denoted either by $\bar x(s)$ or by $\mathcal L\,x(t)$.
❶ Show that, for integer $n \ge 0$, $\mathcal L(t^n) = \dfrac{n!}{s^{n+1}}$.
❷ Hence show that $\mathcal L\,J_0(2\sqrt{ut}) = \dfrac1s\,e^{-u/s}$, where
\[ J_0(z) = \sum_{n=0}^\infty \frac{(-1)^n\,(\tfrac12 z)^{2n}}{n!\,n!} \]
is a Bessel function.
❸ Hence show that
\[ \mathcal L\int_0^\infty J_0(2\sqrt{ut})\,x(u)\,du = \frac1s\,\bar x\Big(\frac1s\Big). \tag{31.1} \]
❹ Assuming that $\mathcal L\,J_0(at) = \dfrac{1}{\sqrt{a^2+s^2}}$, prove with the help of (31.1) that if $t \ge 0$,
\[ \int_0^\infty J_0(au)\,J_0(2\sqrt{ut})\,du = \frac1a\,J_0\Big(\frac ta\Big). \]
Hint: For the last part, use the uniqueness of the Laplace transform.

Proof.
❶
\[ \mathcal L[t^n] = \int_0^\infty e^{-st}\,t^n\,dt = -\frac{t^n e^{-st}}{s}\Big|_0^\infty + \frac ns\int_0^\infty e^{-st}t^{n-1}\,dt
= \frac ns\,\mathcal L[t^{n-1}] = \frac ns\cdot\frac{n-1}{s}\,\mathcal L[t^{n-2}] = \cdots = \frac{n!}{s^n}\,\mathcal L[1] = \frac{n!}{s^{n+1}}. \]
❷
\[ \mathcal L\,J_0(2\sqrt{ut}) = \mathcal L\Big[\sum_{n=0}^\infty \frac{(-1)^n u^n t^n}{n!\,n!}\Big]
= \sum_{n=0}^\infty \frac{(-1)^n u^n}{n!\,n!}\,\mathcal L[t^n]
= \sum_{n=0}^\infty \frac{(-1)^n u^n}{n!\,s^{n+1}}
= \frac1s\sum_{n=0}^\infty \frac{1}{n!}\Big(-\frac us\Big)^n = \frac1s\,e^{-u/s}. \]
❸
\[ \mathcal L\int_0^\infty J_0(2\sqrt{ut})\,x(u)\,du = \int_0^\infty \mathcal L[J_0(2\sqrt{ut})]\,x(u)\,du
= \frac1s\int_0^\infty e^{-u/s}\,x(u)\,du = \frac1s\,x^{\#}\Big(\frac1s\Big),
\qquad\text{where } x^{\#}(s) = \int_0^\infty e^{-us}x(u)\,du. \]
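Part ❷ can be spot-checked numerically; this is a sketch using mpmath, with arbitrary values chosen for $u$ and $s$.

```python
import mpmath as mp

mp.mp.dps = 30
u, s = mp.mpf(3), mp.mpf(2)

lhs = mp.quad(lambda t: mp.exp(-s*t) * mp.besselj(0, 2*mp.sqrt(u*t)), [0, mp.inf])
rhs = mp.exp(-u/s) / s
print(lhs, rhs)   # the two values should agree to high precision
```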
32 Linear Functional Analysis

32.1 Norms
$\|\cdot\|$ is a norm on a vector space $X$ if
i) $\|x\| = 0$ iff $x = 0$;
ii) $\|\alpha x\| = |\alpha|\,\|x\|$ for all scalars $\alpha$;
iii) $\|x+y\| \le \|x\| + \|y\|$ (the triangle inequality).
The norm induces the distance function $d(x,y) = \|x-y\|$, so that $X$ is a metric space; $X$ is then called a normed vector space.

32.2 Banach and Hilbert Spaces
A Banach space is a normed vector space that is complete in that norm's metric; i.e., a complete normed linear space is a Banach space.
A Hilbert space is an inner product space for which the corresponding normed space is complete; i.e., a complete inner product space is a Hilbert space.
Examples:
1) Let $K$ be a compact subset of $\mathbb R^n$ and let $C(K)$ denote the space of continuous functions on $K$. Since every $u \in C(K)$ achieves maximum and minimum values on $K$, we may define $\|u\|_\infty = \max_{x\in K}|u(x)|$. Then $\|\cdot\|_\infty$ is indeed a norm on $C(K)$, and since a uniform limit of continuous functions is continuous, $C(K)$ is a Banach space. However, this norm cannot be derived from an inner product, so $C(K)$ is not a Hilbert space.
2) $C(K)$ is not a Banach space with the $\|\cdot\|_2$ norm: bell-shaped continuous functions on $[0,1]$ can form a Cauchy sequence in $\|\cdot\|_2$ whose limit is discontinuous. In general, the space of continuous functions on $[0,1]$, with the norm $\|\cdot\|_p$, $1 \le p < \infty$, is not a Banach space, since it is not complete.
3) $\mathbb R^n$ and $\mathbb C^n$ are real and complex Banach spaces (with the Euclidean norm).
4) The $L^p$ spaces are Banach spaces (with the $\|\cdot\|_p$ norm).
5) The space of bounded real-valued functions on a set $S$, with the sup norm $\|\cdot\|_S$, is a Banach space.
6) The space of bounded continuous real-valued functions on a metric space $X$ is a Banach space.

32.3 Cauchy-Schwarz Inequality
$|(u,v)| \le \|u\|\,\|v\|$ for any inner product and its induced norm; for example,
\[ \int |uv|\,dx \le \Big(\int u^2\,dx\Big)^{\frac12}\Big(\int v^2\,dx\Big)^{\frac12}, \qquad
|a(u,v)| \le a(u,u)^{\frac12}\,a(v,v)^{\frac12} \quad(\text{for a symmetric nonnegative bilinear form } a), \]
\[ \int |v|\,dx = \int |v|\cdot 1\,dx \le \Big(\int |v|^2\,dx\Big)^{\frac12}\Big(\int 1^2\,dx\Big)^{\frac12}. \]

32.4 Hölder Inequality
\[ \int_\Omega |uv|\,dx \le \|u\|_p\,\|v\|_q, \]
which holds for $u \in L^p(\Omega)$ and $v \in L^q(\Omega)$, where $\frac1p + \frac1q = 1$. In particular, this shows $uv \in L^1(\Omega)$.
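The Hölder inequality is easy to test numerically on discretized functions; the sketch below uses Riemann sums on $[0,1]$ (the grid, the sample functions, and the exponents $p = 3$, $q = 3/2$ are arbitrary choices).

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
dx = xs[1] - xs[0]
u = np.sin(3*np.pi*xs) * np.exp(xs)          # sample functions on [0, 1]
v = np.cos(5*np.pi*xs) + 0.3*xs**2

p, q = 3.0, 1.5                              # conjugate exponents: 1/p + 1/q = 1
lhs = np.sum(np.abs(u*v)) * dx               # approximates the integral of |uv|
rhs = (np.sum(np.abs(u)**p)*dx)**(1/p) * (np.sum(np.abs(v)**q)*dx)**(1/q)
print(lhs, rhs, lhs <= rhs)                  # Hölder: lhs <= rhs
```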
32.5 Minkowski Inequality
\[ \|u+v\|_p \le \|u\|_p + \|v\|_p, \]
which holds for $u, v \in L^p(\Omega)$. In particular, it shows $u+v \in L^p(\Omega)$.
Using the Minkowski inequality, we find that $\|\cdot\|_p$ is a norm on $L^p(\Omega)$. The Riesz-Fischer theorem asserts that $L^p(\Omega)$ is complete in this norm, so $L^p(\Omega)$ is a Banach space under the norm $\|\cdot\|_p$. If $p = 2$, then $L^2(\Omega)$ is a Hilbert space with inner product $(u,v) = \int_\Omega uv\,dx$.
Example: Let $\Omega \subset \mathbb R^n$ be a bounded domain, and let $C^1(\bar\Omega)$ denote the functions that, along with their first-order derivatives, extend continuously to the compact set $\bar\Omega$. Then $C^1(\bar\Omega)$ is a Banach space under the norm
\[ \|u\|_{1,\infty} = \max_{x\in\bar\Omega}\big(|\nabla u(x)| + |u(x)|\big). \]
Note that $C^1(\Omega)$ is not a Banach space, since $\|u\|_{1,\infty}$ need not be finite for $u \in C^1(\Omega)$.

32.6 Sobolev Spaces
A Sobolev space is a space of functions whose distributional derivatives (up to some fixed order) lie in an $L^p$-space. Let $\Omega$ be a domain in $\mathbb R^n$, and introduce
\[ \langle u,v\rangle_1 = \int_\Omega (\nabla u\cdot\nabla v + uv)\,dx, \tag{32.1} \]
\[ \|u\|_{1,2} = \sqrt{\langle u,u\rangle_1} = \Big(\int_\Omega (|\nabla u|^2 + |u|^2)\,dx\Big)^{\frac12} \tag{32.2} \]
when these expressions are defined and finite. For example, (32.1) and (32.2) are defined for functions in $C^1_0(\Omega)$. However, $C^1_0(\Omega)$ is not complete under the norm (32.2), and so does not form a Hilbert space.

Divergence Theorem: $\displaystyle \int_{\partial\Omega} A\cdot n\,dS = \int_\Omega \operatorname{div} A\,dx$.

Trace Theorem: $\|u\|_{L^2(\partial\Omega)} \le C\,\|u\|_{H^1(\Omega)}$, for $\Omega$ smooth or a square.

Poincaré Inequality: $\|u\|_p \le C\,\|\nabla u\|_p$, $1 \le p \le \infty$, for $u \in C^1_0(\Omega)$ (equivalently $H^{1,p}_0(\Omega)$); in particular, for $p = 2$,
\[ \int_\Omega |u(x)|^2\,dx \le C\int_\Omega |\nabla u(x)|^2\,dx, \qquad u \in C^1_0(\Omega),\ H^{1,2}_0(\Omega); \]
and in the mean-value form, $\|u - u_\Omega\|_p \le C\,\|\nabla u\|_p$ for $u \in H^{1,p}(\Omega)$, where
\[ u_\Omega = \frac{1}{|\Omega|}\int_\Omega u(x)\,dx \quad (\text{average value of } u \text{ over } \Omega), \qquad |\Omega| \text{ is the volume of } \Omega. \]
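For $\Omega = (0,1)$ the best constant in the $p = 2$ Poincaré inequality is $1/\lambda_1$, where $\lambda_1 = \pi^2$ is the first Dirichlet eigenvalue of $-d^2/dx^2$; this standard fact is not stated in the text, and the finite-difference sketch below (grid size and sample function are arbitrary) is only a numerical illustration.

```python
import numpy as np

n = 400
h = 1.0 / n

# discrete Dirichlet Laplacian on (0,1); its eigenvalues approximate lambda_k = (k*pi)^2
A = (np.diag(2.0*np.ones(n-1)) - np.diag(np.ones(n-2), 1) - np.diag(np.ones(n-2), -1)) / h**2
print(np.linalg.eigvalsh(A)[0], np.pi**2)    # smallest eigenvalue ~ pi^2, so C = 1/pi^2 works

# check  int |u|^2 <= (1/pi^2) int |u'|^2  for a sample u with u(0) = u(1) = 0
xg = np.linspace(0.0, 1.0, n + 1)
ug = xg*(1 - xg)*np.exp(xg)
dug = np.gradient(ug, xg)
print(np.sum(ug**2)*h <= np.sum(dug**2)*h / np.pi**2)   # True
```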
Notes:
\[ \frac{\partial u}{\partial n} = \nabla u\cdot n = n_1\frac{\partial u}{\partial x_1} + n_2\frac{\partial u}{\partial x_2}, \qquad |\nabla u|^2 = u_{x_1}^2 + u_{x_2}^2 \quad (\text{in two dimensions}), \]
\[ \int_\Omega \nabla|u|\,dx = \int_\Omega \frac{u}{|u|}\,\nabla u\,dx \quad (\text{a.e. where } u \ne 0), \qquad u\nabla u = \nabla\Big(\frac{u^2}{2}\Big), \]
\[ \sqrt{ab} \le \frac{a+b}{2} \;\Rightarrow\; ab \le \frac{a^2+b^2}{2} \;\Rightarrow\; \|\nabla u\|\,\|u\| \le \frac{\|\nabla u\|^2 + \|u\|^2}{2}, \]
\[ \int_\Omega (u_{xy})^2\,dx = \int_\Omega u_{xx}u_{yy}\,dx \quad \forall u \in H^2_0(\Omega),\ \Omega \text{ a square}. \]

Problem (F'04, #6). Let $q \in C^1_0(\mathbb R^3)$. Prove that the vector field
\[ u(x) = \frac{1}{4\pi}\int_{\mathbb R^3} \frac{q(y)\,(x-y)}{|x-y|^3}\,dy \]
enjoys the following properties: ⁸⁷
a) $u(x)$ is conservative;
b) $\operatorname{div} u(x) = q(x)$ for all $x \in \mathbb R^3$;
c) $|u(x)| = O(|x|^{-2})$ for large $|x|$.
Furthermore, prove that properties (a), (b), and (c) above determine the vector field $u(x)$ uniquely.
⁸⁷ McOwen, pp. 138-140.

Proof. a) To show that $u(x)$ is conservative, we need to show that $\operatorname{curl} u = 0$. The curl of a vector field $V$ is defined by
\[ \operatorname{curl} V = \nabla\times V = \det\begin{pmatrix} e_1 & e_2 & e_3 \\ \partial_1 & \partial_2 & \partial_3 \\ V_1 & V_2 & V_3 \end{pmatrix}
= \Big(\frac{\partial V_3}{\partial x_2} - \frac{\partial V_2}{\partial x_3},\ \frac{\partial V_1}{\partial x_3} - \frac{\partial V_3}{\partial x_1},\ \frac{\partial V_2}{\partial x_1} - \frac{\partial V_1}{\partial x_2}\Big). \]
Consider
\[ V(x) = \frac{x}{|x|^3} = \frac{(x_1,x_2,x_3)}{(x_1^2+x_2^2+x_3^2)^{\frac32}}. \]
Then
\[ u(x) = \frac{1}{4\pi}\int_{\mathbb R^3} q(y)\,V(x-y)\,dy, \qquad
\operatorname{curl} u(x) = \frac{1}{4\pi}\int_{\mathbb R^3} q(y)\,\operatorname{curl}_x V(x-y)\,dy. \]
For $x \ne 0$, writing $r = |x|$,
\[ \operatorname{curl} V(x) = \Big(-\frac{3x_2x_3}{r^5} + \frac{3x_3x_2}{r^5},\ -\frac{3x_3x_1}{r^5} + \frac{3x_1x_3}{r^5},\ -\frac{3x_1x_2}{r^5} + \frac{3x_2x_1}{r^5}\Big) = (0,0,0). \]
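The component computation of $\operatorname{curl} V$ (and the fact that $\operatorname{div} V$ also vanishes away from the origin, which is what underlies part (b)) can be confirmed symbolically; a short sketch:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
V = [x1/r**3, x2/r**3, x3/r**3]                      # V(x) = x/|x|^3

curlV = [sp.diff(V[2], x2) - sp.diff(V[1], x3),
         sp.diff(V[0], x3) - sp.diff(V[2], x1),
         sp.diff(V[1], x1) - sp.diff(V[0], x2)]
print([sp.simplify(c) for c in curlV])               # [0, 0, 0] away from the origin

divV = sum(sp.diff(V[i], xi) for i, xi in enumerate((x1, x2, x3)))
print(sp.simplify(divV))                             # 0 away from the origin (distributionally, 4*pi*delta)
```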
Thus,
\[ \operatorname{curl} u = \frac{1}{4\pi}\int_{\mathbb R^3} q(y)\cdot 0\,dy = 0, \]
and $u(x)$ is conservative.

b) Note that the fundamental solution of the Laplacian in $\mathbb R^3$ is $-\frac{1}{4\pi r}$. Set
\[ F(x) = -\frac{1}{4\pi}\int_{\mathbb R^3} \frac{q(y)}{|x-y|}\,dy, \]
the Newtonian potential of $q$, so that $\Delta F = q$. Differentiating under the integral sign and using $\nabla_x\frac{1}{|x-y|} = -\frac{x-y}{|x-y|^3}$,
\[ \nabla F(x) = \frac{1}{4\pi}\int_{\mathbb R^3} \frac{q(y)\,(x-y)}{|x-y|^3}\,dy = u(x). \]
Hence $\operatorname{div} u = \operatorname{div}\nabla F = \Delta F = q(x)$ for all $x \in \mathbb R^3$.

c) Since $q$ has compact support, say $\operatorname{supp} q \subset B_R(0)$, for $|x| \ge 2R$ and $y \in \operatorname{supp} q$ we have $|x-y| \ge |x|/2$, so
\[ |u(x)| \le \frac{1}{4\pi}\int_{\mathbb R^3} \frac{|q(y)|}{|x-y|^2}\,dy \le \frac{1}{\pi |x|^2}\int_{\mathbb R^3} |q(y)|\,dy = O(|x|^{-2}) \]
for large $|x|$. (Similarly, $F(x) = O(|x|^{-1})$ as $|x| \to \infty$.)