Digital Image Processing: Week # 2 Lecture # 4-6
Uploaded by Aamir Chohan

Digital Image Processing

Week # 2
Lecture # 4-6

A Simple Image Formation Model
• Light that is void of color is called achromatic or monochromatic light.
The only attribute of such light is its intensity, or amount. The term gray
level is generally used to describe monochromatic intensity because it
ranges from black through grays to white. We will mostly be dealing
with grayscale images in this course.
• We shall denote images by two-dimensional functions of the form f(x, y).
The value or amplitude of ‘f’ at spatial coordinates (x, y) is a positive
scalar quantity whose physical meaning is determined by the source of
the image.
• The function f(x, y) may be characterized by two components: (1) the
amount of source illumination incident on the scene being viewed, and
(2) the amount of illumination reflected by the objects in the scene.
Appropriately, these are called the illumination and reflectance
components and are denoted by i(x, y) and r(x, y), respectively. The two
functions combine as a product to form f(x, y): f(x, y)=i(x, y)r(x, y)
• The nature of i(x, y) is determined by the illumination source, and r(x, y) is determined by the
characteristics of the imaged objects.
• Note that these expressions also apply to images formed via transmission of the
illumination through a medium, such as a chest X-ray. In that case, we deal with a
transmissivity instead of a reflectivity function.
• Typical values of r(x,y)? 0.01 for black velvet, 0.65 for stainless steel, 0.80 for flat-white wall
paint, 0.90 for silver-plated metal, and 0.93 for snow.
• Typical ranges of i(x,y)? On a clear day, the sun may produce in excess of 90,000 lm/m^2 of
illumination on the surface of the Earth. This figure decreases to less than 10,000 lm/m^2 on
a cloudy day. On a clear evening, a full moon yields about 0.1 lm/m^2 of illumination. The
typical illumination level in a commercial office is about 1000 lm/m^2.
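The product model above can be sketched numerically. A minimal illustration (NumPy assumed; the reflectance and illumination values are the typical figures quoted above):

```python
import numpy as np

# Illumination-reflectance model: f(x, y) = i(x, y) * r(x, y)
def form_image(illumination, reflectance):
    """Combine illumination (lm/m^2) and reflectance (0..1) into f(x, y)."""
    return illumination * reflectance

# A 2x2 scene: snow (r = 0.93) on top, black velvet (r = 0.01) below,
# under typical office illumination (~1000 lm/m^2).
i = np.full((2, 2), 1000.0)
r = np.array([[0.93, 0.93],
              [0.01, 0.01]])
f = form_image(i, r)
print(f)   # snow pixels ~930, black-velvet pixels ~10
```

Note that f(x, y) inherits its physical units from i(x, y); it is later scaled and quantized into gray levels.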
Image Sampling and Quantization
• An image f(x,y) may be continuous with respect to the x- and y-coordinates, and also
in amplitude. To convert it to digital form, we have to sample the function in both
coordinates and in amplitude. Digitizing the coordinate values is called sampling.
Digitizing the amplitude values is called quantization.
• Starting from the top, we digitize the image line by line until we reach the bottom.

• A lower sampling rate results in degradation (block artifacts).

• Is there a guiding principle that resolves this dilemma? Yes; we will return to it
when we discuss the frequency domain.
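Coordinate digitization can be sketched as simple subsampling. A toy example (NumPy assumed); a larger step means a lower sampling rate and more lost detail:

```python
import numpy as np

def sample(img, factor):
    """Digitize coordinates by keeping every `factor`-th pixel along each axis."""
    return img[::factor, ::factor]

img = np.arange(64).reshape(8, 8)   # stand-in for a continuous scene
print(sample(img, 2).shape)         # (4, 4)
print(sample(img, 4).shape)         # (2, 2) -- coarser grid, block artifacts on zoom
```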
Representing Digital Images
• A digital image f(x,y) can be seen as a 2D-array with M rows (0,1,…,M-1) and N columns
(0,1,..,N-1). The coordinates x and y are referred to as spatial coordinates.

• M and N can be any positive integers.

• The number of intensity levels L is typically a power of 2, i.e. L = 2^k (due to hardware considerations).
• Quantization levels are equally spaced over the interval [0, L-1].
• Dynamic range: the ratio of the maximum measurable intensity to the minimum
detectable intensity level in the system.
• Image contrast: the difference between the highest and lowest intensity levels.
• High dynamic range -> high contrast; low dynamic range -> low contrast -> dull, washed-out gray.
• Number of bits required to store a digitized image: b = M x N x k, or b = N^2 x k if M = N.
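The storage formula b = M x N x k can be checked directly. A small sketch:

```python
def storage_bits(M, N, k):
    """Bits needed for an M x N image with L = 2**k intensity levels."""
    return M * N * k

# A 1024 x 1024 image with 256 gray levels (k = 8):
b = storage_bits(1024, 1024, 8)
print(b)        # 8388608 bits
print(b // 8)   # 1048576 bytes (1 MB)
```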
Spatial Resolution
• Smallest discernible detail in an image
• Defined as dots (pixels) per unit distance
• dpi (dots per inch)
• Typical resolutions: newspapers 75 dpi, magazines 133
dpi, books 2400 dpi
Important: resolution should be stated together with the spatial
dimensions of the image. For instance, saying that the resolution
of an image is 1024 x 1024 is meaningless without mentioning
its spatial dimensions; it merely reflects the size, not the
resolution. A high sampling rate results in high spatial resolution.

Spatial resolution is related to SAMPLING


Effect of Varying Intensity Levels (Quantization)
• We keep the number of samples constant (i.e., no degradation in spatial resolution)
and decrease the number of intensity levels in powers of 2: 256, 128, 64, 32, 16, 8,
4, 2.
• In images (d)-(h) we note very fine ridge-like structures in areas of smooth gray levels
(particularly in the skull). This effect, caused by the use of an insufficient number
of gray levels in smooth areas of a digital image, is called false contouring.
• We have so far studied the effects of varying N and k independently.

• Detail in an image corresponds to transitions in pixel intensity. Plotting pixel
intensities would show these transitions; the greater the transition, the greater
the detail, so detail relates to high-frequency components.
• Left image: low detail; middle image: medium detail; right image: high detail.
• Isopreference curves: points lying on a curve in the N-k plane (N = image resolution;
k = number of bits for quantization) have the same subjective quality.
• Images with more detail tend to have a vertical isopreference curve. This shows that
for high-detail images, only a small number of intensity levels is enough to
maintain good quality for a fixed N.
• For images with low detail, we note that to maintain the same quality, as N
increases we need fewer intensity levels.
• A decrease in intensity levels means an increase in apparent contrast. This is
perceived as improved quality by humans.
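Reducing the number of intensity levels (decreasing k) can be sketched as integer requantization. Applying it to a smooth ramp shows how few levels survive, which is what produces false contouring in smooth regions (NumPy assumed):

```python
import numpy as np

def requantize(img, k):
    """Requantize an 8-bit image onto L = 2**k equally spaced levels."""
    step = 256 // (2 ** k)
    return (img // step) * step

ramp = np.arange(256, dtype=np.uint8)   # a smooth gray ramp
coarse = requantize(ramp, 3)            # only 2**3 = 8 levels remain
print(np.unique(coarse))                # [  0  32  64  96 128 160 192 224]
```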
Image Interpolation
• Interpolation: Estimating unknown values from the known values
• Image interpolation is used to estimate intensity of pixels
• Applications: image reshaping/resizing, rotation, etc.
• Nearest Neighbor Image interpolation: Suppose that we have an image of size
500*500 pixels and we want to enlarge it 1.5 times to 750*750 pixels.
Conceptually, one of the easiest ways to visualize zooming is laying an imaginary
750*750 grid over the original image. Obviously, the spacing in the grid would be
less than one pixel because we are fitting it over a smaller image. In order to
perform gray-level assignment for any point in the overlay, we look for the closest
pixel in the original image and assign its gray level to the new pixel in the grid.
When we are done with all points in the overlay grid, we simply expand it to the
original specified size to obtain the zoomed image. This method of gray-level
assignment is called nearest neighbor interpolation
• Bilinear interpolation: uses the 4 nearest neighbors
• Bicubic interpolation: uses the 16 nearest neighbors
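The grid-overlay description above can be sketched with a floor-of-scaled-index nearest-neighbor assignment (a common simplification of "closest pixel"; NumPy assumed):

```python
import numpy as np

def nearest_neighbor_zoom(img, new_h, new_w):
    """Lay an imaginary new_h x new_w grid over img; each grid point takes
    the gray level of the nearest original pixel (floor of the scaled index)."""
    h, w = img.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[np.ix_(rows, cols)]

img = np.array([[1, 2],
                [3, 4]])
print(nearest_neighbor_zoom(img, 4, 4))   # each pixel replicated into a 2x2 block
```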
Images to Discuss Interpolation
(Figure: original image and its interpolated versions)
Some Basic Relationships between the Pixels
• Neighbors of a pixel: a pixel p at coordinates (x, y) has four horizontal and vertical
neighbors whose coordinates are given by (x+1, y), (x-1, y), (x, y+1), (x, y-1). This
set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each is a unit
distance from (x, y), and some of the neighbors of p lie outside the digital image if
(x, y) is on the border of the image. The four diagonal neighbors of p have
coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND(p).
These points, together with the 4-neighbors, are called the 8-neighbors of p,
denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside
the image if (x, y) is on the border of the image.
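The neighbor sets N4(p), ND(p) and N8(p) translate directly into code. A minimal sketch:

```python
def n4(p):
    """4-neighbors of p: horizontal and vertical."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors of p: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

print(sorted(n4((5, 5))))   # [(4, 5), (5, 4), (5, 6), (6, 5)]
print(len(n8((5, 5))))      # 8
```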
Adjacency

• Let V be the set of intensity values used to define adjacency and connectivity.
• In a binary image, V = {1} if we are referring to adjacency of
pixels with value 1.
• In a gray-scale image the idea is the same, but V typically
contains more elements, for example V = {180, 181, 182, …,
200}.
• If the possible intensity values are 0-255, V can be any
subset of these 256 values.
Types of Adjacencies
1. 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is
in the set N4(p).
2. 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is
in the set N8(p).
3. m-adjacency (mixed): a modification of 8-adjacency, introduced to
eliminate the ambiguities that often arise when 8-adjacency is used.
Two pixels p and q with values from V are m-adjacent if:
• q is in N4(p), or
• q is in ND(p) and the set N4(p) ∩ N4(q) has no pixel whose value
is from V (empty intersection).
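The two-clause definition of m-adjacency can be sketched directly. Here `value_at` is an assumed helper mapping coordinates to intensities (0 outside the image), and the tiny grid is illustrative:

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, V, value_at):
    """True if p and q (with values in V) are m-adjacent."""
    if value_at(p) not in V or value_at(q) not in V:
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        # diagonal neighbors count only if no shared 4-neighbor is in V
        return all(value_at(s) not in V for s in n4(p) & n4(q))
    return False

grid = {(0, 0): 1, (0, 1): 1, (1, 1): 1}        # a tiny binary image
value_at = lambda p: grid.get(p, 0)
print(m_adjacent((0, 0), (0, 1), {1}, value_at))  # True  (4-adjacent)
print(m_adjacent((0, 0), (1, 1), {1}, value_at))  # False (shared 4-neighbor (0,1) is in V)
```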
Types of Adjacency
• In this example, note how we can connect two pixels (i.e., find a
path between them):
– With 8-adjacency, you can find multiple paths
between two pixels
– With m-adjacency, you can find only one path
between two pixels
• So m-adjacency eliminates the multiple-path
connections that 8-adjacency generates.
• Two subsets S1 and S2 are adjacent, if some pixel in S1 is
adjacent to some pixel in S2. Adjacent means, either 4-, 8-
or m-adjacency.
A Digital Path
• A digital path (or curve) from pixel p with coordinate
(x,y) to pixel q with coordinate (s,t) is a sequence of
distinct pixels with coordinates (x0,y0), (x1,y1), …, (xn,
yn) where (x0,y0) = (x,y) and (xn, yn) = (s,t) and pixels
(xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n
• n is the length of the path
• If (x0,y0) = (xn, yn), the path is closed.
• We can specify 4-, 8- or m-paths depending on the
type of adjacency specified.
A Digital Path
• Return to the previous example:

In figure (b), the paths between the top-right and
bottom-right pixels are 8-paths, while the path
between the same two pixels in figure (c) is an m-path.
Connectivity
• Let S represent a subset of pixels in an image,
two pixels p and q are said to be connected in
S if there exists a path between them
consisting entirely of pixels in S.
• For any pixel p in S, the set of pixels that are
connected to it in S is called a connected
component of S. If it only has one connected
component, then set S is called a connected
set.
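Connectivity as defined above can be checked with a breadth-first traversal. A sketch using 4-adjacency over a set S of foreground pixel coordinates:

```python
from collections import deque

def connected_component(S, p):
    """The connected component of S containing p, using 4-adjacency."""
    seen = {p}
    queue = deque([p])
    while queue:
        x, y = queue.popleft()
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in S and q not in seen:
                seen.add(q)
                queue.append(q)
    return seen

S = {(0, 0), (0, 1), (5, 5)}
print(connected_component(S, (0, 0)))   # {(0, 0), (0, 1)} -- (5, 5) is unreachable,
                                        # so S is not a connected set
```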
Region and Boundary
• Region
Let R be a subset of pixels in an image. We call R a
region of the image if R is a connected set.
• Boundary
The boundary (also called border or contour)
of a region R is the set of pixels in the region
that have one or more neighbors that are not
in R.
Region and Boundary
If R happens to be an entire image, then its boundary is
defined as the set of pixels in the first and last rows and
columns in the image.

This extra definition is required because an image has no
neighbors beyond its borders.

Normally, when we refer to a region, we are referring to a
subset of an image, and any pixels in the boundary of the
region that happen to coincide with the border of the image
are included implicitly as part of the region boundary.
Distance Measures
Distance: Sample Problems
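The slide content for these two headings is not reproduced here. As a sketch, assuming the standard measures from this chapter (Euclidean distance, city-block distance D4, and chessboard distance D8):

```python
import math

def d_e(p, q):
    """Euclidean distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):
    """City-block (D4) distance: |x1-x2| + |y1-y2|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard (D8) distance: max(|x1-x2|, |y1-y2|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4
```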
Mathematical Tools
• An operator H is linear if H[a_i f_i(x,y) + a_j f_j(x,y)] = a_i H[f_i(x,y)] + a_j H[f_j(x,y)],
where a_i and a_j are constants and f_i(x,y) and f_j(x,y) are images of the same size.

• Examples of linear and nonlinear operations: pages 95-96.
Arithmetic Operations
• Sum: s(x,y) = f(x,y) + g(x,y)
• Difference: d(x,y) = f(x,y) - g(x,y)
• Product: p(x,y) = f(x,y) x g(x,y)
• Quotient: v(x,y) = f(x,y) / g(x,y)
• Application: Image averaging for noise reduction (noise is uncorrelated and has
zero mean)
• Images used in averaging and subtraction should be registered (aligned)
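Image averaging for noise reduction can be sketched with synthetic frames. Averaging K registered frames of zero-mean uncorrelated noise reduces the noise standard deviation by a factor of sqrt(K) (NumPy assumed; the scene and noise parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((4, 4), 100.0)                                 # noise-free scene
frames = [truth + rng.normal(0, 10, truth.shape) for _ in range(100)]

avg = np.mean(frames, axis=0)   # average of 100 registered noisy frames

# Noise std drops from ~10 to ~1, so the average is much closer to the truth.
print(abs(frames[0] - truth).mean())   # roughly 8
print(abs(avg - truth).mean())         # roughly 0.8
```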
Set and Logical Operations
• Discussion page 82-83
Logical Operations
• For binary images we think of foreground (1-valued) and background (0-valued)
set of pixels.
• In binary images we refer to union as OR, intersection as AND, and complement as
NOT logical operations.
• In binary images we deal with regions (foreground or background), usually we
take foreground as a reference (loosely speaking)
• Gray-scale set operations are array operations and need the images to have the same
spatial dimensions (registered).
• Binary operations are based on regions and therefore the resultant regions can
vary in sizes
• AND, OR and NOT logical operations are considered functionally complete
• Other operations can be performed using these basic logical operations.
• Fuzzy sets discussion page 106-107
Fuzzy Sets
• Preceding operations (logical and set) are crisp in the sense that elements either
belong to or don’t belong to a set. This poses a serious limitation.
• Let U be a set of all people and let A be a “set of young people”
• Based on a membership function, we assign a value '1' or '0' to the elements of U
• 1 - young; 0 - not young
• The membership function is simply a thresholding function
• Let's set a threshold of 20: people with age < 20 are designated as young and the
rest are "not young"
• Consider age 21 - is it fair to call that person "not young"?
• What we need is a gradual transition between two states (definitely young and
definitely not young)
• We need a membership function which depicts this gradual transition between
definitely young and definitely not young.
• When we say a person is 50% young we know that his age lies middle way
between definitely young and definitely not young.
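The gradual transition can be modeled by a ramp membership function. A minimal sketch; the thresholds 20 and 50 below are illustrative assumptions, not values from the slides:

```python
def young(age, full=20, none=50):
    """Degree of membership in 'young': 1 below `full`, 0 above `none`,
    linear ramp in between (thresholds are illustrative)."""
    if age <= full:
        return 1.0
    if age >= none:
        return 0.0
    return (none - age) / (none - full)

print(young(21))   # ~0.97 -- no longer abruptly "not young"
print(young(35))   # 0.5  -- "50% young", midway between the two states
print(young(60))   # 0.0  -- definitely not young
```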
Spatial Operations
• Single-pixel operations: we perform the transformation on a pixel-by-pixel basis.
• The operation on one pixel is independent of the others.
Neighborhood Operations

• The net effect of averaging is local blurring. This is used to eliminate small
details and thus render blobs corresponding to the larger regions of an image.
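The local-blurring effect of neighborhood averaging can be sketched with a simple mean filter (NumPy assumed; zero padding at the borders is an assumption):

```python
import numpy as np

def mean_filter(img, size=3):
    """Replace each pixel by the average of its size x size neighborhood
    (zero-padded borders). Small details are averaged away."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad)
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

img = np.zeros((5, 5))
img[2, 2] = 9.0                     # a single bright small detail
print(mean_filter(img)[2, 2])       # 1.0 -- the detail has been blurred away
```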
Geometric Spatial Transformation and Image Registration
• Two basic operations are involved: (1) a spatial transformation of coordinates, and
(2) intensity calculations for the spatially transformed pixels
• This framework allows us to concatenate various operations.
• If we want to resize an image, rotate it, and move the result to some location, we
simply form a 3x3 matrix equal to the product of the scaling, rotation, and
translation matrices shown in the above table.
• (2) Intensity interpolation of spatially transformed pixels -> nearest neighbor,
bilinear and bicubic interpolations as discussed earlier.
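Concatenating scaling, rotation and translation into one 3x3 matrix (homogeneous coordinates) can be sketched as follows; the parameter values are illustrative (NumPy assumed):

```python
import numpy as np

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1.0]])

# Resize by 2, rotate 90 degrees, then translate: one combined matrix.
A = translate(10, 5) @ rotate(np.pi / 2) @ scale(2, 2)

p = np.array([1, 0, 1])   # the point (1, 0) in homogeneous form
print(A @ p)              # (1,0) -> scaled to (2,0) -> rotated to (0,2) -> moved to (10,7)
```

Because matrix multiplication is not commutative, the order of the factors matters: the rightmost matrix is applied to the point first.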
Image Registration
THE END
