MPA2
PPT of Digital Morphing Topic in Digital Image Processing
Uploaded by Shubham Mittal

Multimedia Processing and Applications
Dr. Aloke Datta

Digital Image Processing Fundamentals
Structure of the Human Eye
Discrete light receptors are distributed over the surface of the retina.
There are 2 classes of receptors: cones and rods.
• Cones: 6-7 million in each eye, mainly located in the fovea. Highly sensitive to colour and fine detail. "Photopic" or bright-light vision.
• Rods: 75-150 million, distributed over the retinal surface. Sensitive to low levels of illumination and not involved in colour vision. "Scotopic" or dim-light vision.
Image Formation in the Eye

Photo camera: the lens has a fixed focal length. Focusing at various distances is achieved by varying the distance between the lens and the imaging plane (the location of the film or chip).

Human eye: the converse. The distance between the lens and the imaging region (the retina) is fixed; the focal length needed for proper focus is obtained by varying the shape of the lens.
Weber Ratio
• Brightness discrimination is the ability of the eye to discriminate between changes in light intensity at any specific adaptation level.
• The quantity Ic/I, where Ic is the increment of illumination discriminable 50% of the time against background illumination I, is called the Weber ratio. A small value of the Weber ratio means good brightness discrimination.
• Brightness discrimination is poor at low levels of illumination. The two branches in the curve indicate that at low levels of illumination vision is carried out by the rods, whereas at high levels it is carried out by the cones.
Perceived Brightness

• Mach bands
• Perceived intensity is not a simple function of actual intensity.
• The visual system tends to undershoot or overshoot around the boundaries of regions of different intensities.

Perceived Brightness

• Simultaneous contrast phenomenon: a region's perceived brightness does not depend simply on its intensity.
• In this second phenomenon, called simultaneous contrast, a spot may appear to the eye to become darker as the background gets lighter.
Optical Illusion

• Optical illusions occur when the eye fills in nonexistent information or wrongly perceives geometrical properties of objects.
Image Sensing and Acquisition
 An electromagnetic (EM) energy source and a sensor that can detect the energy of that source are needed to generate an image. The EM source illuminates the objects to be imaged, and the sensor detects the energy reflected from the objects.

 Different objects have different degrees of reflection and absorption of the electromagnetic energy. These differences in reflection and absorption are what make objects appear distinct in images.

 Transformation of illumination energy into digital images:
  The incoming energy is transformed into a voltage by the combination of input electrical power and sensor material.
  A digital quantity is obtained from each sensor by digitizing its response.
• Ex: photodiode
• Made of silicon
• Filter in front: increases selectivity
A Simple Image Formation Model
Mathematical representation of monochromatic images:
– A two-dimensional function f(x,y), where f is the gray level of the pixel at location (x,y).
– The values of the function f at different locations are proportional to the energy radiated from the imaged object.
A Simple Image Formation Model

0 < f(x,y) < ∞

f(x,y) = i(x,y) · r(x,y)  (reflectivity)
f(x,y) = i(x,y) · t(x,y)  (transmissivity)

0 < i(x,y) < ∞
0 ≤ r(x,y), t(x,y) ≤ 1
A Simple Image Formation Model

i(x,y): Sun on clear day 90,000 lm/m²
        Sun on cloudy day 10,000 lm/m²
        Full moon 0.1 lm/m²
        Commercial office 1,000 lm/m²

r(x,y): Black velvet 0.01
        Stainless steel 0.65
        Flat-white wall paint 0.80
        Silver-plated metal 0.90
        Snow 0.93
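The model f = i · r can be tried directly on the typical values above. This is a small sketch (not part of the slides); the dictionary keys are just labels for the tabulated quantities.

```python
# Forming f(x, y) = i(x, y) * r(x, y) from the typical values on this slide
# (illumination in lm/m^2, reflectance is dimensionless in [0, 1]).
illumination = {
    "sun, clear day": 90_000,
    "sun, cloudy day": 10_000,
    "full moon": 0.1,
    "commercial office": 1_000,
}
reflectance = {
    "black velvet": 0.01,
    "stainless steel": 0.65,
    "flat-white wall paint": 0.80,
    "snow": 0.93,
}

# e.g. snow on a clear day vs. black velvet in an office:
f_snow = illumination["sun, clear day"] * reflectance["snow"]
f_velvet = illumination["commercial office"] * reflectance["black velvet"]
print(f_snow, f_velvet)  # about 83700 and 10
```

The spread between the two values illustrates why real scenes need a wide dynamic range.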
Why Digitization
• Theory of real numbers: between any two given points there is an infinite number of points.
• An image would therefore have to be represented by an infinite number of points.
• Each such image point may contain one of infinitely many possible intensity/colour values, needing an infinite number of bits.
• Obviously, such a representation is not possible.
Image Sampling and Quantization

• Converting an analog image to a digital image requires sampling and quantization.
• Sampling: digitizing the coordinate values.
• Quantization: digitizing the amplitude values.

Image Sampling and Quantization
• The quality of a digital image is determined to a large degree by the number of samples and discrete intensity levels used in sampling and quantization.
• However, image content is also an important consideration in choosing these parameters.
Representing Digital Images

The pixel intensity levels (gray-scale levels) are in the interval [0, L-1]:

0 ≤ a(i,j) ≤ L-1, where L = 2^k

The dynamic range of an image is the range of values spanned by the gray scale.

The number of bits, b, required to store a digitized image of size M by N is

b = M × N × k
Representing Digital Images

An Elaine image of size 512 by 512 pixels (5 by 5 inches) has dynamic range [0, 255]. Find the following:
• The number of bits required to represent a pixel
• The size of the image in bits

8 bits, and 512 × 512 × 8 = 2,097,152 bits = 262,144 bytes = 256 KB
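The b = M × N × k storage formula from the previous slide can be checked with a few lines of Python (a sketch, not part of the slides):

```python
def storage_bits(M, N, k):
    """Bits needed for an M x N image with 2**k gray levels: b = M * N * k."""
    return M * N * k

b = storage_bits(512, 512, 8)       # the Elaine example above
print(b, b // 8, b // 8 // 1024)    # 2097152 bits, 262144 bytes, 256 KB
```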
• A common measure of transmission for digital data is baud rate, defined as the number of bits transmitted per second. Generally, transmission is accomplished in packets consisting of a start bit, a byte (8 bits) of information, and a stop bit. Using this fact, how many minutes would it take to transmit a 1024 × 1024 image with 256 gray levels using a 56K baud modem?

• 1024 × 1024 × (8+2) / (56 × 1024) = 1024 × 10 / 56 s ≈ 182.9 s ≈ 3.05 min ≈ 3 min
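The same packet arithmetic, written out as a small function (a sketch; "56K" is taken as 56 × 1024 baud, as on the slide):

```python
def transmission_minutes(width, height, bits_per_pixel, baud):
    """Time to send an image when each byte travels in a 10-bit packet
    (start bit + 8 data bits + stop bit)."""
    packets = width * height * bits_per_pixel // 8   # one packet per byte
    seconds = packets * 10 / baud
    return seconds / 60

t = transmission_minutes(1024, 1024, 8, 56 * 1024)
print(round(t, 3))  # 3.048 minutes, i.e. about 3 min
```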
Spatial and Gray-Level Resolution

• Spatial resolution = a measure of the smallest discernible detail in an image.
• Quantitatively (most common measures): line pairs per unit distance, or dots (pixels) per unit distance (printing and publishing industry). In the US: dots per inch (dpi).
• e.g. newspapers: 75 dpi, magazines: 133 dpi, glossy brochures: 175 dpi, the DIP book: 2400 dpi.

• Gray-level resolution (intensity resolution) = the smallest discernible change in intensity level.
• Most common: 8 bits; 16 bits when needed; 32 bits is rare. Exceptions: 10 or 12 bits.
• An image is 2400 pixels wide and 1800 pixels high. The image was scanned at 300 dpi. What is the physical size of the image?

• Physical size = (width / resolution) × (height / resolution)
  = (2400 / 300) × (1800 / 300)
  = 8 inches × 6 inches
Spatial Resolution

An image of size 1024 × 1024 is printed on paper of size 2.75 × 2.75 inches.

Resolution = 1024 / 2.75 ≈ 372 pixels/inch (dots per inch, dpi)
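Both dpi calculations above are one-line conversions; a small sketch (not part of the slides):

```python
def physical_size(pixels_w, pixels_h, dpi):
    """Physical print size in inches at a given scanning resolution."""
    return pixels_w / dpi, pixels_h / dpi

def resolution_dpi(pixels, inches):
    """Dots per inch when `pixels` span `inches` on paper."""
    return pixels / inches

print(physical_size(2400, 1800, 300))     # (8.0, 6.0) inches
print(round(resolution_dpi(1024, 2.75)))  # 372 dpi
```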
Spatial Resolution
Gray-Level Resolution

Figure: the same image displayed at gray-level resolutions k = 8 (L = 256), k = 7 (L = 128), k = 6 (L = 64), and k = 5 (L = 32).

Gray-Level Resolution

Figure: the same image at k = 4 (L = 16), k = 3 (L = 8), k = 2 (L = 4), and k = 1 (L = 2).
• The pixel values of a 5 × 5 image are represented by 8 bits.

• Determine f at a gray-level resolution of 2^k levels, with k = 5.
• Dividing the image by 2 (integer division) reduces its gray-level resolution by one bit.
• To reduce the gray-level resolution from 8 bits to 5 bits, we have to remove 3 bits.
• Thus, we divide the 8-bit image by 2³ = 8 to get the 5-bit image.
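The divide-by-2^(dropped bits) rule can be sketched with numpy (the sample values are illustrative, not from the slide):

```python
import numpy as np

def reduce_gray_levels(img, from_bits, to_bits):
    """Drop (from_bits - to_bits) bits of intensity resolution by integer
    division, as on the slide (8-bit -> 5-bit divides by 2**3 = 8)."""
    return img // (2 ** (from_bits - to_bits))

img8 = np.array([[0, 31, 64, 128, 255]], dtype=np.uint8)
img5 = reduce_gray_levels(img8, 8, 5)
print(img5)  # [[ 0  3  8 16 31]] -- values now span 0..31, i.e. 2**5 levels
```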
Zooming
• Zooming increases the number of pixels in an image so that the image appears larger.

• Zooming requires two steps:
– Creation of new pixel locations
– Assignment of gray levels at those new locations

• There are various ways to assign gray levels:
• Nearest-neighbour interpolation
• Bilinear interpolation
Nearest Neighbour Interpolation
• Interpolation = the process of using known data to estimate values at unknown locations.

• Intensity assignment for a new pixel: look for its closest pixel in the original image and assign that pixel's intensity (nearest-neighbour interpolation).

• Problem with this approach: undesirable effects such as distortion of straight edges, checkerboard effect, etc.
Example of Nearest Neighbor Interpolation
• The example below shows 8-bit image zooming by 2 times using nearest-neighbour interpolation.
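For an integer zoom factor, nearest-neighbour interpolation amounts to repeating each row and column. A minimal sketch with numpy (the 2 × 2 sample image is illustrative, not the slide's image):

```python
import numpy as np

def zoom_nearest(img, factor):
    """Nearest-neighbour zoom: every new pixel copies the closest original
    pixel, implemented by repeating rows and columns `factor` times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
print(zoom_nearest(img, 2))
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```

The blocky 2 × 2 patches of identical values are exactly the checkerboard effect mentioned above.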
Bilinear Interpolation
• It is performed by linear interpolation of adjacent pixels.
• It may create a blurring effect.
• Bilinear interpolation: use the 4 nearest neighbours to estimate the intensity at a given location (x,y):
• v(x,y) = ax + by + cxy + d, with a, b, c, d determined from the 4 equations written using the 4 neighbours.
Image values:
125 170 129
172 170 175
125 128 128

v(x,y) = ax + by + cxy + d

Known neighbours at (2,2), (2,4), (4,2), (4,4); estimate v(3,3):

125 = a(2) + b(2) + c(4) + d
170 = a(2) + b(4) + c(8) + d
172 = a(4) + b(2) + c(8) + d
170 = a(4) + b(4) + c(16) + d

v(3,3) = a(3) + b(3) + c(9) + d
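The four equations above form a small linear system that can be solved directly. A sketch with numpy (not part of the slides):

```python
import numpy as np

# The four neighbour equations from the slide, v = a*x + b*y + c*x*y + d:
#   v(2,2) = 125, v(2,4) = 170, v(4,2) = 172, v(4,4) = 170
A = np.array([[2, 2, 2 * 2, 1],
              [2, 4, 2 * 4, 1],
              [4, 2, 4 * 2, 1],
              [4, 4, 4 * 4, 1]], dtype=float)
rhs = np.array([125, 170, 172, 170], dtype=float)

a, b, c, d = np.linalg.solve(A, rhs)
v33 = a * 3 + b * 3 + c * 9 + d
print(v33)  # 159.25 -- at the centre this equals the average of the 4 corners
```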
Zooming of Digital Images
Shrinking Digital Images
• Image shrinking is done in a manner similar to the zooming just described. For example, to shrink an image by half, we delete every other row and column.

• Shrinking reduces the number of pixels, which means an irrecoverable loss of information.
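Deleting every other row and column is a single slicing operation in numpy. A sketch (the 4 × 4 sample array is illustrative):

```python
import numpy as np

def shrink_by_half(img):
    """Shrink by deleting every other row and column, as on the slide."""
    return img[::2, ::2]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(shrink_by_half(img))
# [[ 0  2]
#  [ 8 10]]
```

The discarded rows and columns cannot be reconstructed from the result, which is the irrecoverable loss mentioned above.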
Zooming & Shrinking Digital Images
• Zooming may be viewed as oversampling, whereas shrinking may be viewed as undersampling.

• Zooming and shrinking are applied to digital images, while sampling and quantization are applied to analog images.
Basic Relationship of Pixels

(0,0) → x
↓
y

(x-1,y-1)  (x,y-1)  (x+1,y-1)
(x-1,y)    (x,y)    (x+1,y)
(x-1,y+1)  (x,y+1)  (x+1,y+1)

Conventional indexing method


Basic Relationship of Pixels

(0,0) → y
↓
x

(x-1,y-1)  (x-1,y)  (x-1,y+1)
(x,y-1)    (x,y)    (x,y+1)
(x+1,y-1)  (x+1,y)  (x+1,y+1)

Conventional indexing method


Neighbors of a Pixel
The neighborhood relation identifies adjacent pixels. It is useful for analyzing regions.

           (x,y-1)
(x-1,y)      p      (x+1,y)
           (x,y+1)

4-neighbors of p:
N4(p) = { (x-1,y), (x+1,y), (x,y-1), (x,y+1) }

The 4-neighborhood relation considers only vertical and horizontal neighbors.
Note: q ∈ N4(p) implies p ∈ N4(q).
Neighbors of a Pixel (cont.)

(x-1,y-1)  (x,y-1)  (x+1,y-1)
(x-1,y)      p      (x+1,y)
(x-1,y+1)  (x,y+1)  (x+1,y+1)

8-neighbors of p:
N8(p) = { (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x+1,y), (x-1,y+1), (x,y+1), (x+1,y+1) }

The 8-neighborhood relation considers all neighboring pixels.


Neighbors of a Pixel (cont.)

(x-1,y-1)           (x+1,y-1)
             p
(x-1,y+1)           (x+1,y+1)

Diagonal neighbors of p:
ND(p) = { (x-1,y-1), (x+1,y-1), (x-1,y+1), (x+1,y+1) }

The diagonal-neighborhood relation considers only diagonal neighbor pixels.
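The three neighborhood sets translate directly into set-valued functions. A sketch (not part of the slides):

```python
def n4(x, y):
    """4-neighbors: horizontal and vertical."""
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(x, y):
    """Diagonal neighbors."""
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def n8(x, y):
    """8-neighbors: N8(p) = N4(p) U ND(p)."""
    return n4(x, y) | nd(x, y)

print(sorted(n4(1, 1)))  # [(0, 1), (1, 0), (1, 2), (2, 1)]
print(len(n8(1, 1)))     # 8
# Symmetry noted on the slide: q in N4(p) implies p in N4(q)
assert all((1, 1) in n4(*q) for q in n4(1, 1))
```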
Connectivity
Connectivity is adapted from the neighborhood relation. Two pixels are connected if they are in the same class (i.e. the same color or the same range of intensity) and they are neighbors of one another.

For p and q from the same class:
• 4-connectivity: p and q are 4-connected if q ∈ N4(p)
• 8-connectivity: p and q are 8-connected if q ∈ N8(p)
• mixed connectivity (m-connectivity): p and q are m-connected if
  (i) q ∈ N4(p), or
  (ii) q ∈ ND(p) and N4(p) ∩ N4(q) contains no pixel from the same class
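The m-connectivity test can be sketched as a predicate. This is illustrative code, not part of the slides; `in_class` is a hypothetical helper that says whether a pixel belongs to the class V:

```python
def m_adjacent(p, q, in_class):
    """m-adjacency for two same-class pixels p and q:
    (i)  q in N4(p), or
    (ii) q in ND(p) and N4(p) & N4(q) contains no same-class pixel."""
    def n4(x, y):
        return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}
    def nd(x, y):
        return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}
    if q in n4(*p):
        return True
    if q in nd(*p):
        return not any(in_class(r) for r in n4(*p) & n4(*q))
    return False

# Toy class of pixels: a diagonal pair sharing a same-class 4-neighbor is
# NOT m-adjacent, which is exactly what removes the path ambiguity.
ones = {(0, 0), (0, 1), (1, 1)}
print(m_adjacent((0, 0), (1, 1), lambda r: r in ones))  # False: (0,1) is shared
print(m_adjacent((0, 0), (0, 1), lambda r: r in ones))  # True: 4-neighbors
```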
Adjacency
A pixel p is adjacent to a pixel q if they are connected.
Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.

We can define the type of adjacency (4-adjacency, 8-adjacency or m-adjacency) depending on the type of connectivity.
Path
A path from pixel p at (x,y) to pixel q at (s,t) is a sequence of distinct pixels

(x0,y0), (x1,y1), (x2,y2), …, (xn,yn)

such that

(x0,y0) = (x,y) and (xn,yn) = (s,t),

and (xi,yi) is adjacent to (xi-1,yi-1) for i = 1, …, n.

We can define the type of path (4-path, 8-path or m-path) depending on the type of adjacency.
Path (cont.)

An 8-path from p to q results in some ambiguity (two diagonal routes may both be valid); the m-path from p to q resolves this ambiguity.
Digital Path (or Curve)

00000011100
00110100010
00100011100
00100010000
00011100000

Path from p to q:
(1,3), (1,2), (2,2), (3,2), (4,3), (4,4), (4,5), (3,6), (2,6), (1,5)
(considering (0,0) as the origin, i.e. the 1st pixel)
Length of path = 9
(a) Let V = {0,1} and compute the lengths of the shortest 4-, 8-, and m-paths between p and q. If a particular path does not exist between these two points, explain why.
(b) Repeat for V = {1,2}.

3 1 2 1 (q)
2 2 0 2
1 2 1 1
(p) 1 0 1 2
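The exercise can be checked with a small breadth-first search. This is a sketch, not part of the slides; the grid is transcribed from the slide, with p at the bottom-left 1 and q at the top-right 1:

```python
from collections import deque

GRID = [[3, 1, 2, 1],
        [2, 2, 0, 2],
        [1, 2, 1, 1],
        [1, 0, 1, 2]]
P, Q = (3, 0), (0, 3)  # (row, col) of p and q

def n4(x, y): return {(x-1, y), (x+1, y), (x, y-1), (x, y+1)}
def nd(x, y): return {(x-1, y-1), (x-1, y+1), (x+1, y-1), (x+1, y+1)}

def shortest_path(kind, V):
    """Length of the shortest 4-, 8-, or m-path from P to Q through
    pixels whose values lie in V; None if no such path exists."""
    in_v = lambda r: 0 <= r[0] < 4 and 0 <= r[1] < 4 and GRID[r[0]][r[1]] in V
    def moves(p):
        if kind == "4":
            return [r for r in n4(*p) if in_v(r)]
        if kind == "8":
            return [r for r in n4(*p) | nd(*p) if in_v(r)]
        # m-adjacency: 4-neighbors, plus diagonal neighbors whose shared
        # 4-neighborhood contains no pixel of the class V
        out = [r for r in n4(*p) if in_v(r)]
        out += [r for r in nd(*p)
                if in_v(r) and not any(in_v(s) for s in n4(*p) & n4(*r))]
        return out
    dist, frontier = {P: 0}, deque([P])
    while frontier:                       # standard BFS
        p = frontier.popleft()
        for r in moves(p):
            if r not in dist:
                dist[r] = dist[p] + 1
                frontier.append(r)
    return dist.get(Q)

for kind in ("4", "8", "m"):
    print(kind, shortest_path(kind, {0, 1}), shortest_path(kind, {1, 2}))
# 4 None 6   -- no 4-path exists for V = {0, 1}
# 8 4 4
# m 5 6
```

For V = {0,1} no 4-path exists because the 0/1-valued pixels do not form a 4-connected chain between p and q.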
Distance
For pixels p, q, and z with coordinates (x,y), (s,t) and (u,v), D is a distance function or metric if

• D(p,q) ≥ 0 (D(p,q) = 0 if and only if p = q)
• D(p,q) = D(q,p)
• D(p,q) ≤ D(p,z) + D(z,q)

Example: Euclidean distance

De(p,q) = sqrt((x - s)² + (y - t)²)
Distance (cont.)
The D4 distance (city-block distance) is defined as

D4(p,q) = |x - s| + |y - t|

    2
  2 1 2
2 1 0 1 2
  2 1 2
    2

Pixels with D4(p) = 1 are the 4-neighbors of p.


Distance (cont.)
The D8 distance (chessboard distance) is defined as

D8(p,q) = max(|x - s|, |y - t|)

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

Pixels with D8(p) = 1 are the 8-neighbors of p.
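The three distance measures are one-liners; a sketch (not part of the slides):

```python
import math

def d_e(p, q):
    """Euclidean distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):
    """City-block distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))  # 5.0 7 4
# D4 = 1 gives the 4-neighbors of p; D8 = 1 gives the 8-neighbors.
```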


Some Basic Relationships Between Pixels

• 4-neighbors N4(p): a pixel p at (x,y) has 4 horizontal and vertical neighbours, whose coordinates are
(x-1, y), (x+1, y), (x, y-1), (x, y+1)

• Diagonal neighbors ND(p):
(x-1, y-1), (x-1, y+1), (x+1, y+1), (x+1, y-1)

• 8-neighbors N8(p) = N4(p) ∪ ND(p):
all pixels in N4(p) and in ND(p)
Thank You