S. Anu H Nair, Int. Journal of Engineering Research and Applications, www.ijera.com
ISSN: 2248-9622, Vol. 5, Issue 1 (Part 3), January 2015, pp. 77-83
Analysis of Image Fusion Techniques for Fingerprint-Palmprint
Multimodal Biometric System
S. Anu H Nair (a), Dr. P. Aruna (b)
(a) Asst. Professor, Department of CSE, Annamalai University, Tamil Nadu, 608002, India
(b) Professor, Department of CSE, Annamalai University, Tamil Nadu, 608002, India
Abstract
The use of multiple sources of information in a multimodal biometric system has been widely
recognized. However, computational models for multimodal biometric recognition have only recently
received attention. In this paper, fingerprint and palmprint images are chosen and fused together using
image fusion methods. The biometric images are first subjected to modality extraction. Different fusion
methods, such as average fusion, minimum fusion, maximum fusion, discrete wavelet transform (DWT)
fusion and stationary wavelet transform (SWT) fusion, are applied to the extracted modalities. The best
fused template is identified by applying various fusion metrics. Here the DWT fused image provided the
best results.
Keywords: Multimodal biometrics, feature fusion, average fusion, discrete wavelet, stationary wavelet,
minimum fusion, maximum fusion
I. Introduction
Biometrics acts as a source for identifying a
human being and is used for authentication and
identification purposes. Multimodal biometrics came
into existence to overcome the limitations of
unimodal biometric systems. A multimodal
biometric system combines the recognition results of
two or more biometric traits, such as a combination of a
subject's fingerprint, face, iris and voice. This helps
to increase the reliability of a personal identification
system that discriminates between an authorized
person and a fraudulent one.
A multimodal biometric system addresses
several issues associated with unimodal systems: (a) non-
universality or insufficient population coverage (it
reduces the failure-to-enroll rate, which increases
population coverage); (b) it becomes far more
difficult for an impostor to imitate the multiple
biometric traits of a legitimately enrolled individual;
(c) multimodal biometric systems offer growing
evidence in solving the problem of noisy data (illness
affecting voice, a scar affecting a fingerprint).
In this paper, a novel approach for creating a
multimodal biometric system is proposed. The
multimodal biometric system is implemented using
different fusion schemes, namely Average Fusion,
Minimum Fusion, Maximum Fusion, DWT Fusion
and SWT Fusion. At the modality extraction level, the
information extracted from the different biometric traits is
stored in vectors on the basis of their modality. These
modalities are then blended to produce a joint
template. Fusion at the feature extraction level generates
a homogeneous template from the fingerprint and
palmprint features.
II. Literature review
[1] proposed a fingerprint classification method
where types of singular points and the number of
each type of point are chosen as features. [2]
designed an orientation diffusion model for
fingerprint extraction in which core points and ridgeline
flow are used. [3] created a novel minutiae-based
fingerprint matching system which creates a feature
vector template from the extracted core points and
ridges. [4] modeled a palmprint based recognition
system which uses texture and dominant orientation
pixels as features. [5] identified a palmprint
recognition method which uses blanket dimension for
extracting image texture information. [6] presented a
typical palmprint identification system which
constructed a pattern from the orientation and
response features. [7] designed a new palmprint
matching system based on the extraction of feature
points identified by the intersection of creases and
lines. [8] proposed an efficient representation method
which can be used for classification. [9] created a
model that fused voice and iris biometric features.
This model acted as a novel representation of existing
biometric data. [10] proposed user specific and
selective fusion strategy for an enrolled user. [11]
identified a new geometrical feature Width Centroid
Contour Distance for finger geometry biometric. [12]
developed a face and ear biometric system which
uses a feature weighing scheme called Sparse Coding
error ratio. [13] proposed a fusion method based on
compressive sensing theory, which comprises an
overcomplete dictionary, an algorithm for sparse
vector approximation and a fusion rule. [14] identified
feature extraction techniques for three modalities, viz.
fingerprint, iris and face. The extracted data is stored
as a template which can be fused using density-based
score-level fusion.
III. Proposed work
The proposed work describes the fusion of
multimodal biometric images such as fingerprint and
palmprint. The fingerprint and palmprint images are
subjected to modality extraction. The extracted
modalities are fused together using several fusion
methods. The best fused template is identified by
applying different metrics. The proposed work is
shown in Figure 1.
Fig. 1: Structure of proposed work
IV. Biometric modality extraction
4.1 Modality extraction from a Fingerprint image:
The fingerprint image is fed as the input. The
first step is to apply the adaptive histogram
equalization technique to increase the contrast of the
grayscale image. The next step is the orientation
process, which finds the direction of the ridges
in the fingerprint image; this is achieved by
using the Sobel filter to detect the edges of the
image. The next step, ROI selection, picks the region
of maximum convolution magnitude around the core
point. A fingerprint mask is then used to select the
region where the fingerprint ridges are present.
Thinning operations reduce the connected
patterns to a width of a single pixel while maintaining
their topology. Once this is done, the feature of the
fingerprint is successfully extracted. The process of
feature extraction from the fingerprint image is
represented in Figure 2.
Fig. 2: Modality extraction from fingerprint image
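As a hedged illustration of the orientation step, the sketch below estimates block-wise ridge direction from Sobel gradients using plain numpy. The function names, the 8×8 block size and the least-squares orientation formula are illustrative choices, not taken from the paper.

```python
import numpy as np

def sobel_gradients(img):
    """Estimate x/y gradients with 3x3 Sobel kernels (edge/ridge response)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return gx, gy

def block_orientation(img, block=8):
    """Dominant ridge orientation per block via the gradient method."""
    gx, gy = sobel_gradients(img)
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            sx = gx[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            sy = gy[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            # least-squares ridge angle: 0.5 * atan2(2*sum(gx*gy), sum(gx^2 - gy^2))
            theta[bi, bj] = 0.5 * np.arctan2(2 * np.sum(sx * sy),
                                             np.sum(sx ** 2 - sy ** 2))
    return theta
```

For a synthetic image of vertical stripes, every block reports an orientation of zero radians, which is the expected ridge direction under this convention.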
4.2 Modality extraction from a Palmprint image:
The palmprint image is fed as the input. The
contrast of the grayscale image is enhanced by using
the Adaptive histogram equalization technique. The
noise from the image is removed by applying a
diffusion filter. Edge detection is performed using a
Sobel filter to identify the ridges. Thinning
algorithms reduce connected patterns to a width of a
single pixel while maintaining their topology. Once
this is done, the modality of the Palmprint is
successfully extracted. The above steps are depicted in Figure 3.
Fig.3: Modality extraction from palmprint image
V. Implementation of Image Fusion Algorithms
Image fusion is proposed for creating a fused
template which further serves as an input to the
watermarking system.
5.1 Simple Average
It is a well-documented fact that regions of
images that are in focus tend to have higher pixel
intensity. Thus this algorithm is a simple way of
obtaining an output image with all regions in focus.
The value of the pixel P(i, j) of each image is taken
and added. This sum is then divided by 2 to obtain
the average. The average value is assigned to the
corresponding pixel of the output image, as given in
the equation below. This is repeated for all pixel
values [15].
K(i, j) = (X(i, j) + Y(i, j)) / 2 (1)
where X(i, j) and Y(i, j) are the two input images.
(a) (b) (c)
Fig. 4 (a) – Extracted Fingerprint Modality,(b)-Extracted Palmprint Modality,(c)-Fused Image
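Equation (1) can be sketched in a few lines of numpy. This is an illustrative snippet (the function name is an assumption), and it assumes the two source images are registered and of equal size.

```python
import numpy as np

def average_fusion(x, y):
    """Pixel-wise average fusion: K(i, j) = (X(i, j) + Y(i, j)) / 2."""
    return (x.astype(float) + y.astype(float)) / 2.0
```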
5.2 Select Maximum
Greater pixel values correspond to more in-
focus regions of the image. Thus this algorithm
chooses the in-focus regions from each input image
by selecting the greater value for each pixel,
resulting in a highly focused output. The value of the
pixel P(i, j) of each image is taken and the two are
compared. The greater pixel value is assigned to the
corresponding output pixel [15].
(a) (b) (c)
Fig.4 (a) – Extracted Fingerprint Modality,(b)-Extracted Palmprint Modality,(c)-Fused Image
5.3 Select Minimum
Under this rule, lower pixel values indicate the
regions of interest. Thus this algorithm selects
regions from each input image by choosing the
smallest value for each pixel, resulting in a lower-
intensity output. The value of the pixel P(i, j) of each
image is taken and the two are compared. The
smaller pixel value is assigned to the corresponding
output pixel [15].
(a) (b) (c)
Fig.5 (a) – Extracted Fingerprint Modality,(b)-Extracted Palmprint Modality,(c)-Fused Image
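The select-maximum and select-minimum rules of Sections 5.2 and 5.3 reduce to element-wise comparisons; a minimal numpy sketch (function names illustrative):

```python
import numpy as np

def max_fusion(x, y):
    """Select-maximum rule: keep the greater pixel at each location."""
    return np.maximum(x, y)

def min_fusion(x, y):
    """Select-minimum rule: keep the smaller pixel at each location."""
    return np.minimum(x, y)
```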
5.4 Discrete Wavelet Transform (DWT)
The wavelet-based approach is appropriate for
performing fusion tasks for the following reasons:
(1) It is a multiscale (multiresolution) approach well
suited to managing the different image resolutions,
useful in a number of image processing applications
including image fusion. (2) The discrete wavelet
transform (DWT) allows the decomposition of the
image into different kinds of coefficients while
preserving the image information. (3) Once the
coefficients are merged, the final fused image is
obtained through the inverse discrete wavelet
transform (IDWT), where the information in the
merged coefficients is also preserved [16].
y[n] = (x * g)[n] = Σ_{k=-∞}^{∞} x[k] g[n-k] (2)
y_low[n] = Σ_{k=-∞}^{∞} x[k] g[2n-k] (3)
y_high[n] = Σ_{k=-∞}^{∞} x[k] h[2n-k] (4)
where x is the input signal, g is the low-pass filter and
h is the high-pass filter. The 2×2 Haar matrix that is
associated with the Haar wavelet is
H = | 1  1 |
    | 1 -1 |
Fig 6: DWT Flow Chart
The wavelet transform decomposes the image
into low-high, high-low and high-high spatial frequency
bands at different scales, and the low-low band at the
coarsest scale. The L-L band contains
the average image information, whereas the other
bands contain directional information due to spatial
orientation. Higher absolute values of wavelet
coefficients in the high bands correspond to salient
features such as edges or lines. The basic steps
performed in image fusion are given below.
The method is as follows:
 Perform independent wavelet decomposition of
the two images up to level L.
 The DWT coefficients from the two input images
are fused pixel by pixel.
 The fused approximation is obtained by choosing
the average of the approximation coefficients.
 The inverse DWT is performed to obtain the fused
image.
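The steps above can be sketched with a one-level Haar decomposition in plain numpy. The paper fixes only the rule for the approximation band (averaging); the max-absolute-value selection used here for the detail bands is a common assumption, all function names are illustrative, and the image dimensions are assumed even.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT; returns (LL, (LH, HL, HH))."""
    a = img.astype(float)
    # column pairs: average (low-pass) and difference (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # row pairs on each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    lh, hl, hh = bands
    lo = np.zeros((ll.shape[0] * 2, ll.shape[1]))
    hi = np.zeros_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.zeros((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def dwt_fusion(x, y):
    """Average the approximation coefficients, select details by magnitude, invert."""
    llx, (lhx, hlx, hhx) = haar_dwt2(x)
    lly, (lhy, hly, hhy) = haar_dwt2(y)
    ll = (llx + lly) / 2.0
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return haar_idwt2(ll, (pick(lhx, lhy), pick(hlx, hly), pick(hhx, hhy)))
```

Because the Haar pair is perfectly invertible, fusing an image with itself returns the image unchanged, which is a useful sanity check for any fusion rule.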
(a) (b) (c)
Fig.7 (a) – Extracted Fingerprint Modality, (b)-Extracted Palmprint Modality,(c)-Fused Image
5.5 Stationary Wavelet Transform
This method fuses two multi-focused images by
means of wavelets, but instead of using the Discrete
Wavelet Transform (DWT) to decompose the images
into the frequency domain it uses the Discrete
Stationary Wavelet Transform (DSWT or SWT). The
Stationary Wavelet Transform is a wavelet transform
algorithm designed to overcome the lack of translation
invariance of the Discrete Wavelet Transform.
Translation invariance is achieved by removing the
down-samplers and the up-samplers in the DWT and
up-sampling the filter coefficients by a factor of
2^(j-1) at the j-th level of the algorithm. The SWT is an
inherently redundant scheme, as the output of each
level of the SWT contains the same number of samples
as the input; thus, for a decomposition of N levels
there is a redundancy of N in the wavelet
coefficients. In summary, the SWT
method can be described as follows [16]:
• Decompose the two source images using
the SWT at one level, resulting in three detail
subbands and one approximation subband
(HL, LH, HH and LL bands).
• Take the average of the approximation parts
of the images.
• Take the absolute values of the horizontal
details of the images and subtract the second
from the first:
D = (abs(H1L2) - abs(H2L2)) >= 0 (5)
For the fused horizontal part, perform
element-wise multiplication of D with the
horizontal detail of the first image, and then
subtract the horizontal detail of the second
image multiplied by the logical NOT of D.
• Find D for the vertical and diagonal parts and
obtain the fused vertical and diagonal details
of the image.
• The fused image is obtained by taking the
inverse stationary wavelet transform.
(a) (b) (c)
Fig.8 (a) – Extracted Fingerprint Modality,(b)-Extracted Palmprint Modality,(c)-Fused Image
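The decision map D of eq. (5) can be read as a select-by-magnitude rule over each detail band. The numpy sketch below follows that reading; the function names are illustrative, and treating each SWT level this way (with an averaged approximation band) is an assumption about the intended rule.

```python
import numpy as np

def fuse_details(h1, h2):
    """Eq. (5): D = (|h1| - |h2|) >= 0; keep h1 where D holds, else h2."""
    d = (np.abs(h1) - np.abs(h2)) >= 0
    # equivalent to D * h1 + (not D) * h2, written with np.where for clarity
    return np.where(d, h1, h2)

def swt_fuse_level(a1, a2, details1, details2):
    """Average the approximation bands; fuse each detail band via eq. (5)."""
    fused_a = (a1 + a2) / 2.0
    fused_d = tuple(fuse_details(d1, d2) for d1, d2 in zip(details1, details2))
    return fused_a, fused_d
```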
VI. Performance Metrics Used for the
Analysis of Fused Images
6.1 Xydeas and Petrovic Metric - Qabf
A normalized weighted performance metric of a
given process p that fuses A and B into F is given as
[12]:
Q^{AB/F} = [ Σ_{m=1}^{M} Σ_{n=1}^{N} ( Q^{AF}(m,n) w^{A}(m,n) + Q^{BF}(m,n) w^{B}(m,n) ) ] /
[ Σ_{m=1}^{M} Σ_{n=1}^{N} ( w^{A}(m,n) + w^{B}(m,n) ) ] (6)
where A, B and F represent the two input images and the
fused image respectively. Q^{AF} and Q^{BF} are defined in
the same way; for example,
Q^{AF}(m,n) = Q_g^{AF}(m,n) Q_α^{AF}(m,n) (7)
where Q_g^{AF} and Q_α^{AF} are the edge strength and
orientation preservation values at location (m,n) for the
image pair (A, F). The dynamic range of Q^{AB/F} is
[0, 1], and it should be close to one for better fusion.
6.2 Visual Information Fidelity (VIF)
VIF first decomposes the natural image into
several sub-bands and parses each sub-band into
blocks [13]. Then, VIF measures the visual
information by computing the mutual information between the
different models in each block and each sub-band.
Finally, the image quality value is measured by
integrating the visual information over all the blocks and
all the sub-bands. This relies on modeling the
statistical image source, the image distortion channel
and the human visual distortion channel. Images are
assumed to come from a common class: the class of
natural scenes. Image quality assessment is based on
information fidelity, where the channel imposes
fundamental limits on how much information can
flow from the source (the reference image), through
the channel (the image distortion process), to the
receiver (the human observer).
VIF = Distorted Image Information / Reference Image Information (8)
6.3 Fusion Mutual Information
It measures the degree of dependence between two
images [6]. If the joint histogram of I1(x,y) and If(x,y)
is defined as h_{I1,If}(i,j), and that of I2(x,y) and If(x,y)
is defined as h_{I2,If}(i,j), then the Fusion Mutual
Information (FMI) is given as
FMI = MI_{I1,If} + MI_{I2,If} (9)
where
MI_{I1,If} = Σ_{i=1}^{M} Σ_{j=1}^{N} h_{I1,If}(i,j) log2( h_{I1,If}(i,j) / ( h_{I1}(i,j) h_{If}(i,j) ) ) (10)
MI_{I2,If} = Σ_{i=1}^{M} Σ_{j=1}^{N} h_{I2,If}(i,j) log2( h_{I2,If}(i,j) / ( h_{I2}(i,j) h_{If}(i,j) ) ) (11)
The value is high for a well-fused image.
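Equations (9)-(11) can be sketched with numpy's 2-D histogram, normalized into a joint probability table; the 32-bin default and the function names are illustrative assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images from their normalized joint histogram (eqs. 10-11)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                  # joint probability table
    px = p.sum(axis=1, keepdims=True)        # marginal of a
    py = p.sum(axis=0, keepdims=True)        # marginal of b
    nz = p > 0                               # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def fusion_mutual_information(i1, i2, fused, bins=32):
    """Eq. (9): FMI = MI(I1, F) + MI(I2, F)."""
    return (mutual_information(i1, fused, bins)
            + mutual_information(i2, fused, bins))
```

For an image with four equally likely gray levels, the MI with itself equals its entropy of 2 bits, so the FMI of that image with itself as both sources and fused result is 4 bits.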
6.4 Average Gradient
The average gradient is applied to measure the
detailed information in the images:
g = (1 / ((M-1)(N-1))) Σ_{x=1}^{M-1} Σ_{y=1}^{N-1} sqrt( (∂f(x,y)/∂x)^2 + (∂f(x,y)/∂y)^2 ) (12)
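A sketch of eq. (12) in numpy, approximating the partial derivatives with forward differences (the function name and the finite-difference choice are illustrative):

```python
import numpy as np

def average_gradient(img):
    """Eq. (12): mean gradient magnitude over the (M-1) x (N-1) grid."""
    f = img.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]  # forward difference along the row index
    dy = f[:-1, 1:] - f[:-1, :-1]  # forward difference along the column index
    return float(np.mean(np.sqrt(dx ** 2 + dy ** 2)))
```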
6.5 Entropy
Entropy is defined as the amount of information
contained in a signal. The entropy of an image can
be evaluated as
H = - Σ_{i=1}^{G} P(d_i) log2(P(d_i)) (13)
where G is the number of possible gray levels and P(d_i)
is the probability of occurrence of a particular gray
level d_i. If the entropy of the fused image is higher
than that of the parent images, it indicates that the
fused image contains more information.
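Equation (13) can be sketched as follows, assuming gray levels in [0, G) with G = 256 by default for 8-bit images (the function name is illustrative):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Eq. (13): Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```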
Table 1: Quality of Fingerprint and Palmprint Fused Template

Fusion Methods    Qabf   VIF    MI     Average Gradient   Entropy
DWT Fusion        0.61   0.27   3.82   30.70              7.81
Average Fusion    0.34   0.28   2.86   18.67              7.30
Minimum Fusion    0.34   0.28   2.87   18.66              7.31
SWT Fusion        0.24   0.20   1.98   16.57              7.06
Maximum Fusion    0.20   0.08   2.52   10.43              4.36
From the above table, the DWT fusion method provided better results than the other fusion methods.
VII. Conclusion
In this paper a novel feature-level fusion
algorithm for multimodal biometric images, namely
fingerprint and palmprint, is proposed. Each biometric
feature is individually extracted and the obtained
modalities are fused together. As a result, the fusion
mechanism has successfully produced the fused
template, and the best fused template has been
identified using several metrics. The CASIA database
is chosen for the biometric images. All the images are
8-bit gray-level JPEG images with a resolution of
320×280. The experimental results show that the
DWT-fused template provided better results than the
other fused templates.
References
[1] Jing-Ming Guo, Yun-Fu Liu, Jia-Yu Chang, Jiann-Der Lee, "Fingerprint classification based on decision tree from singular points and orientation field", Expert Systems with Applications, Elsevier, 41(2), 2014, 752-764.
[2] Kai Cao, Liaojun Pang, Jimin Liang, Jie Tian, "Fingerprint classification by a hierarchical classifier", Pattern Recognition, Elsevier, 46(12), 2013, 3186-3197.
[3] Ayman Mohammad Bahaa-Eldin, "A medium resolution fingerprint matching system", Ain Shams Engineering Journal, Elsevier, 4(3), 2013, 393-408.
[4] Kamlesh Tiwari, Devendra K. Arya, G.S. Badrinath, Phalguni Gupta, "Designing palmprint based recognition system using local structure tensor and force field transformation for human identification", Neurocomputing, Elsevier, 116, 2013, 222-230.
[5] Xiumei Guo, Weidong Zhou, Yu Wang, "Palmprint recognition algorithm with horizontally expanded blanket dimension", Neurocomputing, Elsevier, 127, 2014, 152-160.
[6] Feng Yue, Wangmeng Zuo, "Consistency analysis on orientation features for fast and accurate palmprint identification", Information Sciences, Elsevier, 268, 2013, 78-90.
[7] O. Nibouche, J. Jiang, "Palmprint matching using feature points and SVD factorisation", Digital Signal Processing, Elsevier, 23(4), 2013, 1154-1162.
[8] Jing Li, Jian Cao, Kaixuan Lu, "Improve the two-phase test samples representation method for palmprint recognition", Optik - International Journal for Light and Electron Optics, Elsevier, 124(24), 2013, 6651-6656.
[9] Anne M.P. Canuto, Fernando Pintro, João C. Xavier-Junior, "Investigating fusion approaches in multi-biometric cancellable recognition", Expert Systems with Applications, Elsevier, 40(6), 2013, 1971-1980.
[10] Norman Poh, Arun Ross, Weifeng Lee, Josef Kittler, "A user-specific and selective multimodal biometric fusion strategy by ranking subjects", Pattern Recognition, Elsevier, 46(12), 2013, 3341-3357.
[11] Mohd Shahrimie Mohd Asaari, Shahrel A. Suandi, Bakhtiar Affendi Rosdi, "Fusion of Band Limited Phase Only Correlation and Width Centroid Contour Distance for finger based biometrics", Expert Systems with Applications, Elsevier, 41(7), 2014, 3367-3382.
[12] Zengxi Huang, Yiguang Liu, Chunguang Li, Menglong Yang, Liping Chen, "A robust face and ear based multimodal biometric system using sparse representation", Pattern Recognition, Elsevier, 46(8), 2013, 2156-2168.
[13] Meng Ding, Li Wei, Bangfeng Wang, "Research on fusion method for infrared and visible images via compressive sensing", Infrared Physics & Technology, Elsevier, 57, 2013, 56-67.
[14] J. Aravinth, S. Valarmathy, "A Novel Feature Extraction Technique for Multimodal Score Fusion Using Density Based Gaussian Mixture Model Approach", International Journal of Emerging Technology and Advanced Engineering, 2(1), 2012, 189-197.
[15] Deepak Kumar Sahu, M.P. Parsai, "Different Image Fusion Techniques - A Critical Review", International Journal of Modern Engineering Research (IJMER), 2(5), 2012, 4298-4301.
[16] Sukhpreet Singh, Rachna Rajput, "A Comparative Study of Classification of Image Fusion Techniques", International Journal of Engineering and Computer Science, 3(7), 2014, 7350-7353.

More Related Content

What's hot (18)

PDF
Az33298300
IJERA Editor
 
PDF
[IJET-V2I2P6] Authors:Atul Ganbawle , Prof J.A. Shaikh
IJET - International Journal of Engineering and Techniques
 
PDF
DETECTION OF CONCEALED WEAPONS IN X-RAY IMAGES USING FUZZY K-NN
IJCSEIT Journal
 
PDF
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
IRJET Journal
 
PDF
A Novel Approach To Detection and Evaluation of Resampled Tampered Images
CSCJournals
 
PDF
Land Boundary Detection of an Island using improved Morphological Operation
CSCJournals
 
PDF
IRJET-Vision Based Occupant Detection in Unattended Vehicle
IRJET Journal
 
PDF
Ijarcet vol-2-issue-7-2262-2267
Editor IJARCET
 
PDF
The Effects of Segmentation Techniques in Digital Image Based Identification ...
TELKOMNIKA JOURNAL
 
PDF
Enhancing Security and Privacy Issue in Airport by Biometric based Iris Recog...
idescitation
 
PDF
F0342032038
ijceronline
 
PDF
IRJET- Fake Paper Currency Recognition
IRJET Journal
 
PDF
1304.2109
S.M. Zamshad Farhan
 
PDF
PERFORMANCE ANALYSIS USING SINGLE SEEDED REGION GROWING ALGORITHM
AM Publications
 
PDF
IRJET - A Systematic Observation in Digital Image Forgery Detection using MATLAB
IRJET Journal
 
PDF
Image Processing Algorithm for Fruit Identification
IRJET Journal
 
PDF
Dp34707712
IJERA Editor
 
PDF
IRJET- Image Segmentation Techniques: A Review
IRJET Journal
 
Az33298300
IJERA Editor
 
[IJET-V2I2P6] Authors:Atul Ganbawle , Prof J.A. Shaikh
IJET - International Journal of Engineering and Techniques
 
DETECTION OF CONCEALED WEAPONS IN X-RAY IMAGES USING FUZZY K-NN
IJCSEIT Journal
 
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
IRJET Journal
 
A Novel Approach To Detection and Evaluation of Resampled Tampered Images
CSCJournals
 
Land Boundary Detection of an Island using improved Morphological Operation
CSCJournals
 
IRJET-Vision Based Occupant Detection in Unattended Vehicle
IRJET Journal
 
Ijarcet vol-2-issue-7-2262-2267
Editor IJARCET
 
The Effects of Segmentation Techniques in Digital Image Based Identification ...
TELKOMNIKA JOURNAL
 
Enhancing Security and Privacy Issue in Airport by Biometric based Iris Recog...
idescitation
 
F0342032038
ijceronline
 
IRJET- Fake Paper Currency Recognition
IRJET Journal
 
PERFORMANCE ANALYSIS USING SINGLE SEEDED REGION GROWING ALGORITHM
AM Publications
 
IRJET - A Systematic Observation in Digital Image Forgery Detection using MATLAB
IRJET Journal
 
Image Processing Algorithm for Fruit Identification
IRJET Journal
 
Dp34707712
IJERA Editor
 
IRJET- Image Segmentation Techniques: A Review
IRJET Journal
 

Viewers also liked (8)

PDF
Crankshaft_OneSheet
Kevin DeSouza
 
DOCX
Phòng ngừa xơ cứng động mạch bằng thực phẩm
shizuko497
 
PPT
Niukat resurssit viisaasti käyttöön - Sääntelystä biotalouden edistäjä
Linnunmaa Oy
 
PDF
Perion - NOAH14 London
NOAH Advisors
 
PPTX
Create a Custom Internet Business - #LoveWhatYouDo
pureecommerce
 
PDF
159139028 povesti-copii-fratii-grimm
lcosteiu2005
 
PPTX
LEÇON 271 – C’est la vision du Christ que j’utiliserai aujourd’hui.
Pierrot Caron
 
Crankshaft_OneSheet
Kevin DeSouza
 
Phòng ngừa xơ cứng động mạch bằng thực phẩm
shizuko497
 
Niukat resurssit viisaasti käyttöön - Sääntelystä biotalouden edistäjä
Linnunmaa Oy
 
Perion - NOAH14 London
NOAH Advisors
 
Create a Custom Internet Business - #LoveWhatYouDo
pureecommerce
 
159139028 povesti-copii-fratii-grimm
lcosteiu2005
 
LEÇON 271 – C’est la vision du Christ que j’utiliserai aujourd’hui.
Pierrot Caron
 
Ad

Similar to Analysis of Image Fusion Techniques for fingerprint Palmprint Multimodal Biometric System (20)

PDF
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
INFOGAIN PUBLICATION
 
PDF
Review of three categories of fingerprint recognition 2
prjpublications
 
PDF
Review of three categories of fingerprint recognition
prjpublications
 
PDF
Review of three categories of fingerprint recognition 2
prj_publication
 
PDF
International Journal of Computational Engineering Research(IJCER)
ijceronline
 
PDF
Enhanced Thinning Based Finger Print Recognition
IJCI JOURNAL
 
PDF
E017443136
IOSR Journals
 
PDF
Fingerprint Recognition Using Minutiae Based and Discrete Wavelet Transform
AM Publications
 
PDF
FINGERPRINT CLASSIFICATION MODEL BASED ON NEW COMBINATION OF PARTICLE SWARM O...
IAEME Publication
 
PDF
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMS
ijcsit
 
PDF
D56021216
IJERA Editor
 
PDF
Performance Enhancement Of Multimodal Biometrics Using Cryptosystem
IJERA Editor
 
PDF
1834 1840
Editor IJARCET
 
PDF
1834 1840
Editor IJARCET
 
PDF
A Review Paper on Personal Identification with An Efficient Method Of Combina...
IRJET Journal
 
PDF
Review on Fingerprint Recognition
EECJOURNAL
 
PDF
50120130405034
IAEME Publication
 
PDF
E0543135
IOSR Journals
 
PDF
Introduction To Palmprint Recognition
IRJET Journal
 
PDF
G0333946
iosrjournals
 
Ijaems apr-2016-1 Multibiometric Authentication System Processed by the Use o...
INFOGAIN PUBLICATION
 
Review of three categories of fingerprint recognition 2
prjpublications
 
Review of three categories of fingerprint recognition
prjpublications
 
Review of three categories of fingerprint recognition 2
prj_publication
 
International Journal of Computational Engineering Research(IJCER)
ijceronline
 
Enhanced Thinning Based Finger Print Recognition
IJCI JOURNAL
 
E017443136
IOSR Journals
 
Fingerprint Recognition Using Minutiae Based and Discrete Wavelet Transform
AM Publications
 
FINGERPRINT CLASSIFICATION MODEL BASED ON NEW COMBINATION OF PARTICLE SWARM O...
IAEME Publication
 
COMPARATIVE ANALYSIS OF MINUTIAE BASED FINGERPRINT MATCHING ALGORITHMS
ijcsit
 
D56021216
IJERA Editor
 
Performance Enhancement Of Multimodal Biometrics Using Cryptosystem
IJERA Editor
 
1834 1840
Editor IJARCET
 
1834 1840
Editor IJARCET
 
A Review Paper on Personal Identification with An Efficient Method Of Combina...
IRJET Journal
 
Review on Fingerprint Recognition
EECJOURNAL
 
50120130405034
IAEME Publication
 
E0543135
IOSR Journals
 
Introduction To Palmprint Recognition
IRJET Journal
 
G0333946
iosrjournals
 
Ad

Recently uploaded (20)

PPTX
Farrell_Programming Logic and Design slides_10e_ch02_PowerPoint.pptx
bashnahara11
 
PPTX
What-is-the-World-Wide-Web -- Introduction
tonifi9488
 
PDF
RAT Builders - How to Catch Them All [DeepSec 2024]
malmoeb
 
PDF
Research-Fundamentals-and-Topic-Development.pdf
ayesha butalia
 
PPTX
AI in Daily Life: How Artificial Intelligence Helps Us Every Day
vanshrpatil7
 
PPTX
AI Code Generation Risks (Ramkumar Dilli, CIO, Myridius)
Priyanka Aash
 
PPTX
Agile Chennai 18-19 July 2025 | Emerging patterns in Agentic AI by Bharani Su...
AgileNetwork
 
PPTX
AVL ( audio, visuals or led ), technology.
Rajeshwri Panchal
 
PDF
MASTERDECK GRAPHSUMMIT SYDNEY (Public).pdf
Neo4j
 
PPTX
Agentic AI in Healthcare Driving the Next Wave of Digital Transformation
danielle hunter
 
PDF
Researching The Best Chat SDK Providers in 2025
Ray Fields
 
PDF
Responsible AI and AI Ethics - By Sylvester Ebhonu
Sylvester Ebhonu
 
PDF
Tea4chat - another LLM Project by Kerem Atam
a0m0rajab1
 
PDF
Trying to figure out MCP by actually building an app from scratch with open s...
Julien SIMON
 
PPTX
Agile Chennai 18-19 July 2025 | Workshop - Enhancing Agile Collaboration with...
AgileNetwork
 
PDF
OFFOFFBOX™ – A New Era for African Film | Startup Presentation
ambaicciwalkerbrian
 
PDF
AI Unleashed - Shaping the Future -Starting Today - AIOUG Yatra 2025 - For Co...
Sandesh Rao
 
PDF
How Open Source Changed My Career by abdelrahman ismail
a0m0rajab1
 
PDF
introduction to computer hardware and sofeware
chauhanshraddha2007
 
PDF
The Future of Mobile Is Context-Aware—Are You Ready?
iProgrammer Solutions Private Limited
 
Farrell_Programming Logic and Design slides_10e_ch02_PowerPoint.pptx
bashnahara11
 
What-is-the-World-Wide-Web -- Introduction
tonifi9488
 
RAT Builders - How to Catch Them All [DeepSec 2024]
malmoeb
 
Research-Fundamentals-and-Topic-Development.pdf
ayesha butalia
 
AI in Daily Life: How Artificial Intelligence Helps Us Every Day
vanshrpatil7
 
AI Code Generation Risks (Ramkumar Dilli, CIO, Myridius)
Priyanka Aash
 
Agile Chennai 18-19 July 2025 | Emerging patterns in Agentic AI by Bharani Su...
AgileNetwork
 
AVL ( audio, visuals or led ), technology.
Rajeshwri Panchal
 
MASTERDECK GRAPHSUMMIT SYDNEY (Public).pdf
Neo4j
 
Agentic AI in Healthcare Driving the Next Wave of Digital Transformation
danielle hunter
 
Researching The Best Chat SDK Providers in 2025
Ray Fields
 
Responsible AI and AI Ethics - By Sylvester Ebhonu
Sylvester Ebhonu
 
Tea4chat - another LLM Project by Kerem Atam
a0m0rajab1
 
Trying to figure out MCP by actually building an app from scratch with open s...
Julien SIMON
 
Agile Chennai 18-19 July 2025 | Workshop - Enhancing Agile Collaboration with...
AgileNetwork
 
OFFOFFBOX™ – A New Era for African Film | Startup Presentation
ambaicciwalkerbrian
 
AI Unleashed - Shaping the Future -Starting Today - AIOUG Yatra 2025 - For Co...
Sandesh Rao
 
How Open Source Changed My Career by abdelrahman ismail
a0m0rajab1
 
introduction to computer hardware and sofeware
chauhanshraddha2007
 
The Future of Mobile Is Context-Aware—Are You Ready?
iProgrammer Solutions Private Limited
 

Analysis of Image Fusion Techniques for fingerprint Palmprint Multimodal Biometric System

  • 1. S. Anu H Nair Int. Journal of Engineering Research and Applications www.ijera.com ISSN : 2248-9622, Vol. 5, Issue 1( Part 3), January 2015, pp.77-83 83www.ijera.com 77 | P a g e Analysis of Image Fusion Techniques for fingerprint Palmprint Multimodal Biometric System S. Anu H Naira , Dr. P. Arunab a Asst. Professor, Department of CSE, Annamalai University, Tamil Nadu, 608002, India b Professor, Department of CSE, Annamalai University, Tamil Nadu, 608002, India Abstract The multimodal Biometric System using multiple sources of information has been widely recognized. However computational models for multimodal biometrics recognition have only recently received attention. In this paper the fingerprint and palmprint images are chosen and fused together using image fusion methods. The biometric features are subjected to modality extraction. Different fusion methods like average fusion, minimum fusion, maximum fusion, discrete wavelet transform fusion and stationary wavelet transformfusion are implemented for the fusion of extracting modalities. The best fused template is analyzed by applying various fusion metrics. Here the DWT fused image provided better results. Keywords: Multimodal biometrics, feature fusion, average fusion, discrete wavelet, stationary wavelet, minimum fusion, maximum fusion I. Introduction Biometrics acts as a source for identifying a human being. This is used for authentication and identification purposes. In order to overcome the limitations of unimodal biometric system multimodal biometrics came into existence. A multimodal biometric system combines two or more biometric data recognition results such as a combination of a subject's fingerprint, face, iris and voice. This helps to increase the reliability of personal identification system that discriminates between an authorized person and a fraudulent person. 
A multimodal biometric system addresses several issues of unimodal systems: (a) non-universality or insufficient population coverage (it reduces the failure-to-enroll rate, which increases population coverage); (b) it becomes extremely difficult for an impostor to imitate multiple biometric traits of a legitimately enrolled individual; (c) multimodal biometric systems offer compelling evidence in solving the problem of noisy data (illness affecting voice, a scar affecting a fingerprint). In this paper, a novel approach for creating a multimodal biometric system is proposed. The system is implemented using different fusion schemes, namely Average Fusion, Minimum Fusion, Maximum Fusion, DWT Fusion and SWT Fusion. At the modality extraction level, the information extracted from the different modalities is stored in vectors on the basis of their modality. These modalities are then blended to produce a joint template. Fusion at the feature extraction level generates a homogeneous template from the fingerprint and palmprint features.

II. Literature review
[1] proposed a fingerprint classification method where the types of singular points and the number of each type of point are chosen as features. [2] designed an orientation diffusion model for fingerprint extraction where core points and ridgeline flow are used. [3] created a novel minutiae-based fingerprint matching system which creates a feature vector template from the extracted core points and ridges. [4] modeled a palmprint-based recognition system which uses texture and dominant orientation pixels as features. [5] identified a palmprint recognition method which uses blanket dimension for extracting image texture information. [6] presented a typical palmprint identification system which constructed a pattern from the orientation and response features. [7] designed a new palmprint matching system based on the extraction of feature points identified by the intersection of creases and lines.
[8] proposed an efficient representation method which can be used for classification. [9] created a model that fused voice and iris biometric features; this model acted as a novel representation of existing biometric data. [10] proposed a user-specific and selective fusion strategy for an enrolled user. [11] identified a new geometrical feature, Width Centroid Contour Distance, for finger geometry biometrics. [12] developed a face and ear biometric system which uses a feature weighing scheme called Sparse Coding error ratio. [13] proposed a fusion method based on compressive sensing theory which contains an over-complete dictionary, an algorithm for sparse vector approximation and a fusion rule. [14] identified feature extraction techniques for three modalities, viz.
fingerprint, iris and face; the extracted data is stored as a template which can be fused using density-based score level fusion.

III. Proposed work
The proposed work describes the fusion of multimodal biometric images, namely fingerprint and palmprint. The fingerprint and palmprint images are subjected to modality extraction. The extracted modalities are fused together using several fusion methods, and the best fused template is identified by applying different metrics. The proposed work is shown in Figure 1.

Fig. 1: Structure of proposed work

IV. Biometric modality extraction
4.1 Modality extraction from a fingerprint image:
The fingerprint image is fed as the input. The first step is to apply the adaptive histogram equalization technique to increase the contrast of the grayscale image. The next step is the orientation process, which finds the direction of the ridges in the fingerprint image; this is achieved by using the Sobel filter to detect the edges of the image. The next step, ROI selection, selects the region of maximum convolution magnitude around the core point. A fingerprint mask is used to select the region where the fingerprint is present. Thinning operations then reduce the connected patterns to a width of a single pixel while maintaining their topology. Once this is done, the fingerprint modality has been successfully extracted. The process of feature extraction from the fingerprint image is represented in Figure 2.

Fig. 2: Modality extraction from fingerprint image

4.2 Modality extraction from a palmprint image:
The palmprint image is fed as the input. The contrast of the grayscale image is enhanced by using the adaptive histogram equalization technique. The noise in the image is removed by applying a diffusion filter.
Edge detection is performed using a Sobel filter to identify the ridges. Thinning algorithms reduce the connected patterns to a width of a single pixel while maintaining their topology. Once this is done, the modality of the palmprint is
successfully extracted. The above steps are depicted in Figure 3.

Fig. 3: Modality extraction from palmprint image

V. Implementation of Image Fusion Algorithms
Image fusion is used to create a fused template which further serves as an input to the watermarking system.

5.1 Simple Average
It is a well-documented fact that regions of an image that are in focus tend to have higher pixel intensity. This algorithm is thus a simple way of obtaining an output image with all regions in focus. The values of the pixel P(i, j) of the two images are added, and the sum is divided by 2 to obtain the average. The average value is assigned to the corresponding pixel of the output image, as given in the equation below. This is repeated for all pixels [15].

K(i, j) = [X(i, j) + Y(i, j)] / 2   (1)

where X(i, j) and Y(i, j) are the two input images.

Fig. 4: (a) Extracted fingerprint modality, (b) extracted palmprint modality, (c) fused image

5.2 Select Maximum
The greater the pixel value, the more in focus the region. This algorithm therefore chooses the in-focus regions from each input image by choosing the greatest value for each pixel, resulting in a highly focused output. The value of the pixel P(i, j) of each image is compared, and the greatest pixel value is assigned to the corresponding output pixel [15].

Fig. 4: (a) Extracted fingerprint modality, (b) extracted palmprint modality, (c) fused image
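These pixel-wise rules — averaging, taking the maximum, and the companion minimum rule described next — can be sketched in a few lines of NumPy. This is an illustrative sketch; the function name and the toy 2×2 arrays are hypothetical, and both extracted modality images are assumed to have the same size:

```python
import numpy as np

def fuse_pixelwise(x, y, rule="average"):
    """Pixel-wise fusion of two equal-size grayscale images.

    rule: "average" implements K(i,j) = [X(i,j) + Y(i,j)] / 2,
          "maximum" keeps the larger of the two pixel values,
          "minimum" keeps the smaller.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    if rule == "average":
        return (x + y) / 2.0
    if rule == "maximum":
        return np.maximum(x, y)
    if rule == "minimum":
        return np.minimum(x, y)
    raise ValueError("unknown rule: " + rule)

# Toy arrays standing in for the extracted fingerprint and
# palmprint modality images (hypothetical values).
X = np.array([[10, 200], [40, 90]])
Y = np.array([[30, 100], [80, 50]])
print(fuse_pixelwise(X, Y, "average"))  # [[ 20. 150.] [ 60.  70.]]
print(fuse_pixelwise(X, Y, "maximum"))  # [[ 30. 200.] [ 80.  90.]]
```

All three rules run in a single vectorized pass over the images, which is why they are the cheapest of the fusion schemes compared here.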
5.3 Select Minimum
The lower the pixel value, the more in focus the region in this scheme. This algorithm therefore chooses regions from each input image by selecting the smallest value for each pixel. The value of the pixel P(i, j) of each image is compared, and the lowest pixel value is assigned to the corresponding output pixel [15].

Fig. 5: (a) Extracted fingerprint modality, (b) extracted palmprint modality, (c) fused image

5.4 Discrete Wavelet Transform (DWT)
The wavelet-based approach is appropriate for performing fusion tasks for the following reasons: (1) it is a multiscale (multiresolution) approach well suited to managing the different image resolutions, useful in a number of image processing applications including image fusion; (2) the discrete wavelet transform (DWT) allows the image to be decomposed into different kinds of coefficients while preserving the image information; (3) once the coefficients are merged, the final fused image is obtained through the inverse discrete wavelet transform (IDWT), where the information in the merged coefficients is also preserved [16].

y[n] = (x * g)[n] = \sum_{k=-\infty}^{\infty} x[k] \, g[n-k]   (2)

y_{low}[n] = \sum_{k=-\infty}^{\infty} x[k] \, g[2n-k]   (3)

y_{high}[n] = \sum_{k=-\infty}^{\infty} x[k] \, h[2n-k]   (4)

where x is the input signal, g is the low-pass filter and h is the high-pass filter. The 2×2 Haar matrix that is associated with the Haar wavelet is \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.

Fig. 6: DWT flow chart

The wavelet transform decomposes the image into low-high, high-low and high-high spatial frequency bands at different scales, and the low-low band at the coarsest scale. The L-L band contains the average image information, whereas the other bands contain directional information due to spatial orientation.
Higher absolute values of wavelet coefficients in the high bands correspond to salient features such as edges or lines. The basic steps performed in image fusion are as follows:
• Perform independent wavelet decomposition of the two images up to level L.
• The DWT coefficients from the two input images are fused pixel by pixel; the approximation coefficients of the fused image are obtained by averaging the approximation coefficients of the inputs.
• The inverse DWT is performed to obtain the fused image.
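A minimal single-level sketch of this scheme, with a Haar wavelet written directly in NumPy, follows. It assumes even-sized images; and since the paper only specifies averaging the approximation coefficients, the rule used here for the detail bands (keep the larger-magnitude coefficient, following the "salient features" observation above) is an assumption, not the paper's stated method:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D orthonormal Haar DWT of an even-sized image."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    cA = (a + b + c + d) / 2.0   # low-low: average information
    cH = (a - b + c - d) / 2.0   # horizontal detail
    cV = (a + b - c - d) / 2.0   # vertical detail
    cD = (a - b - c + d) / 2.0   # diagonal detail
    return cA, cH, cV, cD

def haar_idwt2(cA, cH, cV, cD):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    m, n = cA.shape
    out = np.empty((2 * m, 2 * n))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2.0
    out[0::2, 1::2] = (cA - cH + cV - cD) / 2.0
    out[1::2, 0::2] = (cA + cH - cV - cD) / 2.0
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2.0
    return out

def dwt_fuse(x, y):
    """One-level DWT fusion: average the approximation coefficients,
    keep the larger-magnitude detail coefficient (assumed rule)."""
    Xa, Xh, Xv, Xd = haar_dwt2(np.asarray(x, dtype=float))
    Ya, Yh, Yv, Yd = haar_dwt2(np.asarray(y, dtype=float))
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return haar_idwt2((Xa + Ya) / 2.0,
                      pick(Xh, Yh), pick(Xv, Yv), pick(Xd, Yd))
```

Fusing an image with itself reproduces it exactly, which is a quick sanity check that the decomposition and reconstruction are consistent.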
Fig. 7: (a) Extracted fingerprint modality, (b) extracted palmprint modality, (c) fused image

5.5 Stationary Wavelet Transform
This method fuses two multi-focused images by means of wavelets, but instead of using the Discrete Wavelet Transform (DWT) to decompose the images into the frequency domain, it uses the Discrete Stationary Wavelet Transform (DSWT or SWT). The Stationary Wavelet Transform is a wavelet transform algorithm designed to overcome the lack of translation invariance of the Discrete Wavelet Transform. Translation invariance is achieved by removing the downsamplers and upsamplers in the DWT and upsampling the filter coefficients by a factor of 2^(j-1) at the jth level of the algorithm. The SWT is an inherently redundant scheme, as the output of each level of the SWT contains the same number of samples as the input; thus, for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. In summary, the SWT fusion method can be described as follows [16]:
• Decompose the two source images using the SWT at one level, resulting in three detail subbands and one approximation subband (HL, LH, HH and LL bands).
• Take the average of the approximation parts of the images.
• Take the absolute values of the horizontal details of the images and subtract the second from the first:

D = (abs(H1L2) - abs(H2L2)) >= 0   (5)

For the fused horizontal part, multiply D element-wise with the horizontal detail of the first image, and subtract the horizontal detail of the second image multiplied by the logical NOT of D.
• Find D for the vertical and diagonal parts in the same way and obtain the fused vertical and diagonal details of the image.
• The fused image is obtained by taking the inverse stationary wavelet transform.

Fig. 8: (a) Extracted fingerprint modality, (b) extracted palmprint modality, (c) fused image

VI. Performance Metrics Used for the Analysis of Fused Images
6.1 Xydeas and Petrovic Metric (Qabf)
A normalized weighted performance metric of a given process p that fuses A and B into F is given as [12]:

Q^{AB/F} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left[ Q^{AF}(m,n) \, w^{A}(m,n) + Q^{BF}(m,n) \, w^{B}(m,n) \right]}{\sum_{m=1}^{M} \sum_{n=1}^{N} \left[ w^{A}(m,n) + w^{B}(m,n) \right]}   (6)

where A, B and F represent the two input images and the fused image respectively. Q^{AF} and Q^{BF} are defined in the same way:

Q^{AF}(m,n) = Q_g^{AF}(m,n) \, Q_{\alpha}^{AF}(m,n)   (7)

where Q_g^{xF} and Q_{\alpha}^{xF} are the edge strength and orientation preservation values at location (m, n) for images A and
B. The dynamic range of Q^{AB/F} is [0, 1], and it should be close to one for better fusion.

6.2 Visual Information Fidelity (VIF)
VIF first decomposes the natural image into several subbands and parses each subband into blocks [13]. Then, VIF measures the visual information by computing the mutual information between the different models in each block and each subband. Finally, the image quality value is measured by integrating the visual information over all the blocks and all the subbands. This relies on modeling the statistical image source, the image distortion channel and the human visual distortion channel. Images are assumed to come from a common class: the class of natural scenes. Image quality assessment is based on information fidelity, where the channel imposes fundamental limits on how much information can flow from the source (the reference image), through the channel (the image distortion process), to the receiver (the human observer).

VIF = Distorted Image Information / Reference Image Information   (8)

6.3 Fusion Mutual Information
This metric measures the degree of dependence between two images [6]. If the joint histogram between I1(x, y) and If(x, y) is defined as h_{I_1 I_f}(i, j), and that between I2(x, y) and If(x, y) as h_{I_2 I_f}(i, j), then the Fusion Mutual Information (FMI) is given as

FMI = MI_{I_1 I_f} + MI_{I_2 I_f}   (9)

where

MI_{I_1 I_f} = \sum_{i=1}^{M} \sum_{j=1}^{N} h_{I_1 I_f}(i,j) \log_2 \frac{h_{I_1 I_f}(i,j)}{h_{I_1}(i,j) \, h_{I_f}(i,j)}   (10)

MI_{I_2 I_f} = \sum_{i=1}^{M} \sum_{j=1}^{N} h_{I_2 I_f}(i,j) \log_2 \frac{h_{I_2 I_f}(i,j)}{h_{I_2}(i,j) \, h_{I_f}(i,j)}   (11)

The value is high for a good fused image.

6.4 Average Gradient
The average gradient is applied to measure the detail information in the images:

\bar{g} = \frac{1}{(M-1)(N-1)} \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \sqrt{ \left( \frac{\partial f(x,y)}{\partial x} \right)^2 + \left( \frac{\partial f(x,y)}{\partial y} \right)^2 }   (12)

6.5 Entropy
Entropy is defined as the amount of information contained in a signal.
The entropy of an image can be evaluated as

H = -\sum_{i=0}^{G-1} P(d_i) \log_2 P(d_i)   (13)

where G is the number of possible gray levels and P(d_i) is the probability of occurrence of the gray level d_i. If the entropy of the fused image is higher than that of the parent images, it indicates that the fused image contains more information.

Table 1: Quality of fingerprint and palmprint fused template

Fusion Method    | Qabf | VIF  | MI   | Average Gradient | Entropy
DWT Fusion       | 0.61 | 0.27 | 3.82 | 30.70            | 7.81
Average Fusion   | 0.34 | 0.28 | 2.86 | 18.67            | 7.30
Minimum Fusion   | 0.34 | 0.28 | 2.87 | 18.66            | 7.31
SWT Fusion       | 0.24 | 0.20 | 1.98 | 16.57            | 7.06
Maximum Fusion   | 0.20 | 0.08 | 2.52 | 10.43            | 4.36

From the above table, the DWT fusion method provided better results than the other fusion methods.
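Two of the metrics above, entropy (equation 13) and average gradient (equation 12), are simple enough to compute directly with NumPy. This is a sketch; it assumes integer gray levels in [0, G-1] for the entropy histogram, and uses forward differences for the partial derivatives:

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy H = -sum P(d_i) * log2 P(d_i) over gray levels."""
    hist = np.bincount(np.asarray(img, dtype=np.int64).ravel(),
                       minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean local-gradient magnitude, a measure of detail/sharpness."""
    f = np.asarray(img, dtype=np.float64)
    gx = f[1:, :-1] - f[:-1, :-1]     # forward difference along rows
    gy = f[:-1, 1:] - f[:-1, :-1]     # forward difference along columns
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```

A constant image has zero entropy and zero average gradient, while an image split evenly between two gray levels has entropy exactly 1 bit, which makes these easy metrics to sanity-check.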
VII. Conclusion
In this paper a novel feature-level fusion algorithm for multimodal biometric images, namely fingerprint and palmprint, is proposed. Each biometric modality is individually extracted, and the obtained modalities are fused together. The fusion mechanism successfully produced fused templates, and the best fused template was identified using several metrics. The CASIA database was chosen for the biometric images; all images are 8-bit gray-level JPEG images with a resolution of 320×280. The experimental results show that the DWT fused template provided better results than the other fused templates.

References
[1] Jing-Ming Guo, Yun-Fu Liu, Jia-Yu Chang, Jiann-Der Lee, "Fingerprint classification based on decision tree from singular points and orientation field", Expert Systems with Applications, Elsevier, 41(2), 2014, 752-764.
[2] Kai Cao, Liaojun Pang, Jimin Liang, Jie Tian, "Fingerprint classification by a hierarchical classifier", Pattern Recognition, Elsevier, 46(12), 2013, 3186-3197.
[3] Ayman Mohammad Bahaa-Eldin, "A medium resolution fingerprint matching system", Ain Shams Engineering Journal, Elsevier, 4(3), 2013, 393-408.
[4] Kamlesh Tiwari, Devendra K. Arya, G.S. Badrinath, Phalguni Gupta, "Designing palmprint based recognition system using local structure tensor and force field transformation for human identification", Neurocomputing, Elsevier, 116, 2013, 222-230.
[5] Xiumei Guo, Weidong Zhou, Yu Wang, "Palmprint recognition algorithm with horizontally expanded blanket dimension", Neurocomputing, Elsevier, 127, 2014, 152-160.
[6] Feng Yue, Wangmeng Zuo, "Consistency analysis on orientation features for fast and accurate palmprint identification", Information Sciences, Elsevier, 268, 2013, 78-90.
[7] O. Nibouche, J.
Jiang, "Palmprint matching using feature points and SVD factorisation", Digital Signal Processing, Elsevier, 23(4), 2013, 1154-1162.
[8] Jing Li, Jian Cao, Kaixuan Lu, "Improve the two-phase test samples representation method for palmprint recognition", Optik - International Journal for Light and Electron Optics, Elsevier, 124(24), 2013, 6651-6656.
[9] Anne M.P. Canuto, Fernando Pintro, João C. Xavier-Junior, "Investigating fusion approaches in multi-biometric cancellable recognition", Expert Systems with Applications, Elsevier, 40(6), 2013, 1971-1980.
[10] Norman Poh, Arun Ross, Weifeng Li, Josef Kittler, "A user-specific and selective multimodal biometric fusion strategy by ranking subjects", Pattern Recognition, Elsevier, 46(12), 2013, 3341-3357.
[11] Mohd Shahrimie Mohd Asaari, Shahrel A. Suandi, Bakhtiar Affendi Rosdi, "Fusion of Band Limited Phase Only Correlation and Width Centroid Contour Distance for finger based biometrics", Expert Systems with Applications, Elsevier, 41(7), 2014, 3367-3382.
[12] Zengxi Huang, Yiguang Liu, Chunguang Li, Menglong Yang, Liping Chen, "A robust face and ear based multimodal biometric system using sparse representation", Pattern Recognition, Elsevier, 46(8), 2013, 2156-2168.
[13] Meng Ding, Li Wei, Bangfeng Wang, "Research on fusion method for infrared and visible images via compressive sensing", Infrared Physics & Technology, Elsevier, 57, 2013, 56-67.
[14] J. Aravinth, S. Valarmathy, "A Novel Feature Extraction Techniques for Multimodal Score Fusion Using Density Based Gaussian Mixture Model Approach", International Journal of Emerging Technology and Advanced Engineering, 2(1), 2012, 189-197.
[15] Deepak Kumar Sahu, M.P. Parsai, "Different Image Fusion Techniques - A Critical Review", International Journal of Modern Engineering Research (IJMER), 2(5), 2012, 4298-4301.
[16] Sukhpreet Singh, Rachna Rajput, "A Comparative Study of Classification of Image Fusion Techniques", International Journal of Engineering and Computer Science, 3(7), 2014, 7350-7353.