IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE)
e-ISSN: 2278-1676,p-ISSN: 2320-3331, Volume 10, Issue 3 Ver. I (May – Jun. 2015), PP 53-56
www.iosrjournals.org
DOI: 10.9790/1676-10315356 www.iosrjournals.org 53 | Page
Image Compression using DWT and Principal Component
Analysis
Gurpreet Kaur, Kamaljeet Kaur
Abstract: A block-wise implementation of the principal component analysis algorithm was suggested in the base work. The main disadvantage of that approach is that its running time grows with the size of the image, and additional looping is introduced by the blocking structure of the algorithm. Compression performance also degrades as the block size decreases, i.e., as the number of blocks increases. To address these problems, the images are first decomposed with the discrete wavelet transform using the Haar wavelet, and the global principal component analysis algorithm is then applied to the LH and HL frequency sub-band images. This enables compression while preserving the critical boundaries and contours of the image, so that a minimum of information is lost during the thresholding process.
Index Terms: PCA (Principal Component Analysis), DWT (Discrete Wavelet Transform)
I. Introduction
Pictures are representations of a particular scene. Images can be of many types, such as graphic, optical, and mental. Digital images are those that can be stored on a hard disk. Pictures are the most common and convenient means of conveying or transmitting information; they portray spatial information that can be recognized as objects.
II. Related Works
The paper entitled “Combined Sparse Representation Based on Curvelet Transform and Local DCT for Multi-layered Image Compression” proposed a new multi-layered representation technique for image compression that combines the curvelet transform and a local DCT in order to benefit from the advantages of each. The authors used the morphological component analysis (MCA) method to separate the image into two layers, a piecewise-smooth layer and a textured-structure layer, associated with the curvelet transform and the local DCT respectively. Each layer is encoded independently, with a different transform at a different bit rate. [1]
The paper entitled “A Novel Image Deblocking Method Based on Curvelet Transform” described how the image blocking effect arises from the quantization step of Discrete Cosine Transform (DCT) compression coding, which discards some frequencies and leads to noticeable discontinuous jumps. A deblocking algorithm based on the curvelet transform is proposed in that paper. [2]
The paper entitled “Laplacian Pyramid Versus Wavelet Decomposition for Image Sequence Coding” noted that multiresolution representations have found many applications in image processing and data compression. Several approaches have been developed in this domain; the most widely used are sub-band decompositions with filter banks and pyramid transforms. [3]
Another paper reviews the different techniques for image compression and presents a tabular comparison. Spatial-domain as well as frequency-domain techniques are discussed in detail, and different transforms such as the curvelet, wavelet and DCT are examined as image compression algorithms. [4]
The paper entitled “Image Compression using Digital Curvelet Transform” presented a novel approach to digital image compression using a new mathematical transform, the curvelet transform, which has shown promising results over the wavelet transform for 2-D signals. Wavelets, though well suited to point singularities, have limited orientation selectivity and therefore do not represent two-dimensional singularities (e.g., smooth curves) effectively. [5]
The paper entitled “Curvelet Transform Based Embedded Lossy Image Compression” described the curvelet transform as one of the recently developed multiscale transforms; it possesses directional features and provides an optimally sparse representation of objects with edges. The author proposed an algorithm for lossy image compression based on the second-generation digital curvelet transform. [6]
The paper entitled “Curvelet-based Image Compression with SPIHT” proposed a new compression methodology that uses curvelet coefficients with the SPIHT (Set Partitioning in Hierarchical Trees) encoding scheme. The first phase deals with the transformation of the input image into curvelet coefficients. [7]
The discrete cosine transform in combination with the principal component analysis algorithm has also been discussed for hyperspectral image compression; appreciable improvement in PSNR and compression ratio is observed with that algorithm. [8]
The paper entitled “A New Lossless Image Compression Technique based on Bose, Chaudhuri and Hocquenghem (BCH) Codes” presented an efficient lossless image compression approach that utilizes BCH coding together with compression. The use of BCH codes improves the results of the Huffman algorithm by increasing the compression ratio. The experiments confirm that the proposed technique is suitable for compressing text, image and video files and would be useful in numerous network applications. [9]
The paper entitled “Fast and Efficient Medical Image Compression Using Contourlet Transform” presented a wavelet-based contourlet image compression algorithm. For the diagnosis of medical images, the significant part (the ROI) is separated from the rest of the image using the fuzzy C-means algorithm, and an optimized contourlet transform is then applied to the resulting region to enhance its visual quality. The regions of less significance are compressed using the discrete wavelet transform, and Huffman coding is applied to the resultant image to obtain the compressed image. [10]
The paper entitled “A Novel Image Fusion Method Using Contourlet Transform” introduced a novel image fusion algorithm based on the contourlet transform. The principle of the contourlet and its good performance in expressing singularities in two or higher dimensions are studied. Experiments show that the proposed method preserves edge and texture information in image fusion better than the wavelet transform and Laplacian pyramid methods do. [11]
The paper entitled “Fusion of Multimodality Medical Images using Combined Activity Level Measurement and Contourlet Transform” described a novel combination of Activity Level Measurement (ALM) and the Contourlet Transform (CNT) for spatially registered, multi-sensor, multi-resolution medical images. [12]
The paper entitled “A Novel Image Deblocking Method Based on Curvelet Transform” presented an algorithm that separately processes the curvelet coefficients obtained from the curvelet transform of the degraded images in order to recover them. The coefficients corresponding to the blocking effect of the original image can be found for every layer, with different methods used for different layers. [13]
The paper entitled “SAR and Panchromatic Image Fusion Based on Region Features in Nonsubsampled Contourlet Transform Domain” presented a novel fusion algorithm based on the imaging characteristics of the SAR image. The algorithm applies a different fusion rule to each particular region independently. [14]
The paper entitled “Performance Analysis of Multi Source Fused Medical Images using Multi Resolution Transforms” described how image fusion combines information from multiple images of the same scene to obtain a composite image that is more suitable for human visual perception or further image processing tasks. The fused output is obtained after the inverse transform of the fused sub-band coefficients. [15]
III. Image Acquisition and Preprocessing
Principal Component Analysis (PCA) is a mathematical formulation used to reduce the dimensionality of data. Once patterns are found, they can be compressed, i.e., their dimensions can be reduced without much loss of information. In short, the PCA formulation may be used as a digital image compression algorithm with a low level of loss. The reduced-dimension computational structure is selected so that relevant data characteristics are identified with little loss of information. Such a reduction is advantageous in several instances: image compression, data representation, reduction of the calculations needed in subsequent processing, etc.
The use of the PCA technique for data dimension reduction is justified by the easy representation of multidimensional data using the information contained in the data covariance matrix. Principal Component Analysis is described by means of the eigenvalues and eigenvectors of a matrix.
The following steps are carried out to compress the given input image using PCA:
Step-1: Convert the RGB image into a grayscale image, i.e., 8-bit format. The gray image is a row × column matrix of 8-bit intensity values.
Step-2: Center the image by computing the mean image intensity and subtracting the mean gray value from each pixel's gray value; call the result the centered image CI.
Step-3: Compute the covariance of CI by the following expression:
CovImg = CI(r,c) * CI(r,c)T
where CI(r,c)T is the transpose of the matrix CI(r,c).
Step-4: Compute the eigenvalues and eigenvectors of the covariance matrix using the expression:
A.V = λ.V
where A is the m × m covariance matrix, V is an m × 1 non-zero vector (eigenvector), and λ is a scalar (eigenvalue). Any value of λ for which this equation has a solution is called an eigenvalue of A, and the vector V corresponding to that value is called an eigenvector of A.
Step-5: From A.V = λ.V:
A.V - λ.I.V = 0
(A - λ.I).V = 0
Finding the roots of |A - λ.I| = 0 gives the eigenvalues, and for each eigenvalue there is a corresponding eigenvector.
Step-6: Select a threshold and discard the eigenvalues below it; in other words, the largest eigenvalues and their corresponding eigenvectors are retained based on the threshold. These are the principal components of the (centered) image.
Step-7: Project the centered image onto the retained eigenvectors. The projections onto the largest eigenvectors are stored as the principal image components and constitute the compressed image.
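The steps above can be sketched as follows in Python with NumPy (a minimal illustration, not the authors' MATLAB implementation; the function names and the choice of k retained components are our own):

```python
import numpy as np

def pca_compress(gray, k):
    """Compress a grayscale image by keeping k principal components."""
    # Step-2: center the image (subtract the mean intensity).
    mean = gray.mean()
    ci = gray.astype(float) - mean
    # Step-3: covariance matrix of the centered image.
    cov = ci @ ci.T
    # Steps 4-5: eigenvalues/eigenvectors (eigh suits symmetric matrices).
    vals, vecs = np.linalg.eigh(cov)
    # Step-6: keep the k largest eigenvalues and their eigenvectors.
    order = np.argsort(vals)[::-1][:k]
    pcs = vecs[:, order]                  # m x k principal directions
    # Step-7: project onto the principal components (the compressed data).
    scores = pcs.T @ ci                   # k x n coefficients
    return pcs, scores, mean

def pca_reconstruct(pcs, scores, mean):
    """Approximate the original image from its principal components."""
    return pcs @ scores + mean

img = np.random.rand(64, 64) * 255
pcs, scores, mean = pca_compress(img, k=16)
approx = pca_reconstruct(pcs, scores, mean)
print(approx.shape)  # (64, 64)
```

Storing `pcs`, `scores` and `mean` requires m·k + k·n + 1 values instead of m·n, which is the source of the compression when k is small.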
IV. Quality Metrics for Performance Evaluation
For judging the performance of image compression techniques and comparing them with the proposed work, the following quality measures are used:
Peak Signal-to-Noise Ratio (PSNR):
The peak signal-to-noise ratio (PSNR) is used to evaluate the reconstructed image quality. It is defined as follows:
PSNR = 10 log10 (255^2 / MSE),  where MSE = (1/(m·n)) Σi Σj (f(i,j) - f'(i,j))^2    (1.5)
Here m × n is the size of the original image, and f(i,j) and f'(i,j) are the gray-level pixel values of the original and reconstructed images, respectively.
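A direct transcription of Eq. (1.5) in Python (a hedged sketch; the function name is our own):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between an original and a reconstructed image (Eq. 1.5)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)   # uniform error of 10 gray levels -> MSE = 100
print(round(psnr(a, b), 2))  # 28.13
```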
Standard Deviation (SD): The standard deviation of an image is given by:
σ^2 = (1/(M·N)) Σi Σj (x(i,j) - µ)^2    (1.6)
where µ is the mean gray level and M × N is the image size. This corresponds to the degree of deviation of the gray levels from their mean value over the whole image.
Entropy (E): The information entropy of an image is given by:
E = - Σ (i = 0 to L-1) p_i log2 p_i
where L denotes the number of gray levels and p_i is the ratio between the number of pixels whose gray value equals i (0 to L - 1) and the total number of pixels in the image. The information entropy measures the richness of information in an image.
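The two remaining metrics can be computed as below (a minimal sketch; the function names and the 256-level assumption are ours):

```python
import numpy as np

def image_sd(img):
    """Standard deviation of the gray levels over the whole image (Eq. 1.6)."""
    x = img.astype(float)
    return np.sqrt(np.mean((x - x.mean()) ** 2))

def image_entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                 # skip empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

img = np.array([[0, 0], [255, 255]], dtype=np.uint8)
print(image_sd(img))       # 127.5
print(image_entropy(img))  # 1.0 (two equally likely gray levels)
```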
V. Results
The presented algorithm is implemented in MATLAB version 7.5. A database of 100 pairs of ear images is prepared in JPEG format. The results show accuracy above 95% in identifying the correct query image among the database images. The standard deviation and entropy are fair performance measures on which the distinction can be based. However, more statistical features could be added to increase uniqueness when the database becomes large.
Table 1: Standard deviation and entropy of the query image with respect to the database images

Query Image   Data Base Image   SD      Entropy
1             15                0.021   1.203
2             20                0.023   1.142
3             18                0.009   1.112
4             5                 0.011   1.283
Keeping the same query image, the processing time is computed for different database sizes and is tabulated below:
Table 2: Processing time estimate

Query Image   Data Base Size   Processing Time (Secs.)
1             25               10
1             50               25
1             75               60
1             100              95
VI. Conclusion
The results show fair accuracy, about 95%, in matching the query image to its respective database image. However, it is observed that as the database size increases, the processing time increases proportionately. This is due to the large amount of matrix reshaping, arithmetic and looping involved. The processing time may be improved by developing a parallel algorithm for the same task.
References
[1]. Rajesh M. Bodade, Maj Jayesh Nayyar, “Shift Invariant Ear Feature Extraction using Dual Tree Complex Wavelet Transform for Ear Recognition”, ACEEE Int. J. on Information Technology, Vol. 02, No. 02, April 2012
[2]. Michał Choraś, Image Processing Group, Institute of Telecommunications, “Image Feature Extraction Methods for Ear Biometrics - A Survey”, 6th International Conference on Computer Information, 2007
[3]. Bahattin Kocaman, Mürvet Kırcı, Ece Olcay Güneş, Yüksel Çakır, Özlem Özbudak, “On Ear Biometrics”, IEEE, 2009
[4]. A. Pflug, C. Busch, “Ear Biometrics: A Survey of Detection, Feature Extraction and Recognition Methods”, IET Biometrics, April 2012
[5]. Md. Mahbubur Rahman, Md. Rashedul Islam, Nazmul Islam Bhuiyan, Bulbul Ahmed, Md. Aminul Islam, “Person Identification Using Ear Biometrics”, International Journal of The Computer, the Internet and Management, Vol. 15, No. 2 (May - August 2007), pp. 1-8
[6]. Hanna-Kaisa Lammi, “Ear Biometrics”, Laboratory of Information Processing, 2004
[7]. D. J. Hurley, B. Arbab-Zavar, M. S. Nixon, “The Ear as a Biometric”, EUSIPCO, 2007
[8]. Ernő Jeges, László Máté, “Model-Based Human Ear Localization and Feature Extraction”, ICMED, April 2007
[9]. Mark Burge, Wilhelm Burger, “Ear Biometrics in Computer Vision”, IEEE, 2000

More Related Content

PDF
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM...
PDF
An efficient image compression algorithm using dct biorthogonal wavelet trans...
PDF
C1802050815
PDF
Ijetcas14 504
PDF
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
PDF
A Novel Algorithm for Watermarking and Image Encryption
PDF
QUALITY ASSESSMENT OF PIXEL-LEVEL IMAGE FUSION USING FUZZY LOGIC
PDF
Performance analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolutio...
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM...
An efficient image compression algorithm using dct biorthogonal wavelet trans...
C1802050815
Ijetcas14 504
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
A Novel Algorithm for Watermarking and Image Encryption
QUALITY ASSESSMENT OF PIXEL-LEVEL IMAGE FUSION USING FUZZY LOGIC
Performance analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolutio...

What's hot (17)

PDF
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES
PDF
Multiexposure Image Fusion
PDF
Performance Analysis of Compression Techniques Using SVD, BTC, DCT and GP
PDF
Kv3419501953
PDF
Automatic Determination Number of Cluster for NMKFC-Means Algorithms on Image...
PDF
Content Based Image Retrieval Using 2-D Discrete Wavelet Transform
PDF
A novel approach to Image Fusion using combination of Wavelet Transform and C...
PDF
ROI Based Image Compression in Baseline JPEG
PDF
An improved image compression algorithm based on daubechies wavelets with ar...
PDF
Medial axis transformation based skeletonzation of image patterns using image...
PDF
Jl2516751681
PDF
11.0003www.iiste.org call for paper_d_discrete cosine transform for image com...
PDF
An efficient fusion based up sampling technique for restoration of spatially ...
PDF
An Efficient Approach for Image Enhancement Based on Image Fusion with Retine...
PDF
A systematic image compression in the combination of linear vector quantisati...
PDF
3 d discrete cosine transform for image compression
PDF
H017416670
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES
Multiexposure Image Fusion
Performance Analysis of Compression Techniques Using SVD, BTC, DCT and GP
Kv3419501953
Automatic Determination Number of Cluster for NMKFC-Means Algorithms on Image...
Content Based Image Retrieval Using 2-D Discrete Wavelet Transform
A novel approach to Image Fusion using combination of Wavelet Transform and C...
ROI Based Image Compression in Baseline JPEG
An improved image compression algorithm based on daubechies wavelets with ar...
Medial axis transformation based skeletonzation of image patterns using image...
Jl2516751681
11.0003www.iiste.org call for paper_d_discrete cosine transform for image com...
An efficient fusion based up sampling technique for restoration of spatially ...
An Efficient Approach for Image Enhancement Based on Image Fusion with Retine...
A systematic image compression in the combination of linear vector quantisati...
3 d discrete cosine transform for image compression
H017416670
Ad

Viewers also liked (20)

DOCX
Gunja Sinha
PDF
O1102019296
PPTX
Perifericos de_entrada
PPTX
Diferencia entre estrategia, campaña y planificación publicitaria slideshare
DOCX
Diplomado iava proyecto final
PDF
OpenStack Upstream開発におけるCI品質向上施策
PPTX
Elemental cost analysis
PDF
J1103035766
PDF
H017524854
PDF
F010513844
DOCX
Presidentsw de mexico
PDF
G1803024452
PDF
G1102034148
PDF
G1102025055
PDF
I012135157
PDF
HOS_Baseline_Report_Cambodia_English_version_2014_1st_edition
PDF
M017369095
PDF
N017318992
DOCX
Russell Kirkman Mate 198B Final Design Report
PDF
I011138286
Gunja Sinha
O1102019296
Perifericos de_entrada
Diferencia entre estrategia, campaña y planificación publicitaria slideshare
Diplomado iava proyecto final
OpenStack Upstream開発におけるCI品質向上施策
Elemental cost analysis
J1103035766
H017524854
F010513844
Presidentsw de mexico
G1803024452
G1102034148
G1102025055
I012135157
HOS_Baseline_Report_Cambodia_English_version_2014_1st_edition
M017369095
N017318992
Russell Kirkman Mate 198B Final Design Report
I011138286
Ad

Similar to H010315356 (20)

PDF
A Review on Image Compression using DCT and DWT
PDF
B017120611
PDF
Wavelet-Based Warping Technique for Mobile Devices
PDF
Jl2516751681
PDF
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...
PDF
A Comprehensive lossless modified compression in medical application on DICOM...
PDF
H1802054851
PDF
Survey paper on image compression techniques
PDF
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE...
PDF
Image compression using Hybrid wavelet Transform and their Performance Compa...
PDF
An efficient image segmentation approach through enhanced watershed algorithm
PDF
Comparative Study between DCT and Wavelet Transform Based Image Compression A...
PDF
I017125357
PDF
Paper id 27201451
PDF
41 9147 quantization encoding algorithm based edit tyas
PDF
An efficient color image compression technique
PDF
Review of Diverse Techniques Used for Effective Fractal Image Compression
PDF
International Journal of Engineering Research and Development (IJERD)
PDF
0 nidhi sethi_finalpaper--1-5
PDF
An approach for color image compression of bmp and tiff images using dct and dwt
A Review on Image Compression using DCT and DWT
B017120611
Wavelet-Based Warping Technique for Mobile Devices
Jl2516751681
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...
A Comprehensive lossless modified compression in medical application on DICOM...
H1802054851
Survey paper on image compression techniques
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE...
Image compression using Hybrid wavelet Transform and their Performance Compa...
An efficient image segmentation approach through enhanced watershed algorithm
Comparative Study between DCT and Wavelet Transform Based Image Compression A...
I017125357
Paper id 27201451
41 9147 quantization encoding algorithm based edit tyas
An efficient color image compression technique
Review of Diverse Techniques Used for Effective Fractal Image Compression
International Journal of Engineering Research and Development (IJERD)
0 nidhi sethi_finalpaper--1-5
An approach for color image compression of bmp and tiff images using dct and dwt

More from IOSR Journals (20)

PDF
A011140104
PDF
M0111397100
PDF
L011138596
PDF
K011138084
PDF
J011137479
PDF
I011136673
PDF
G011134454
PDF
H011135565
PDF
F011134043
PDF
E011133639
PDF
D011132635
PDF
C011131925
PDF
B011130918
PDF
A011130108
PDF
I011125160
PDF
H011124050
PDF
G011123539
PDF
F011123134
PDF
E011122530
PDF
D011121524
A011140104
M0111397100
L011138596
K011138084
J011137479
I011136673
G011134454
H011135565
F011134043
E011133639
D011132635
C011131925
B011130918
A011130108
I011125160
H011124050
G011123539
F011123134
E011122530
D011121524

Recently uploaded (20)

PDF
Co-training pseudo-labeling for text classification with support vector machi...
PDF
EIS-Webinar-Regulated-Industries-2025-08.pdf
PDF
“The Future of Visual AI: Efficient Multimodal Intelligence,” a Keynote Prese...
PDF
Transform-Your-Supply-Chain-with-AI-Driven-Quality-Engineering.pdf
PPTX
MuleSoft-Compete-Deck for midddleware integrations
PDF
Auditboard EB SOX Playbook 2023 edition.
PDF
Introduction to MCP and A2A Protocols: Enabling Agent Communication
PDF
Advancing precision in air quality forecasting through machine learning integ...
PDF
LMS bot: enhanced learning management systems for improved student learning e...
PDF
Lung cancer patients survival prediction using outlier detection and optimize...
PDF
Connector Corner: Transform Unstructured Documents with Agentic Automation
PPTX
Module 1 Introduction to Web Programming .pptx
PDF
4 layer Arch & Reference Arch of IoT.pdf
PPTX
agenticai-neweraofintelligence-250529192801-1b5e6870.pptx
PDF
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
PDF
Data Virtualization in Action: Scaling APIs and Apps with FME
PDF
Accessing-Finance-in-Jordan-MENA 2024 2025.pdf
PPTX
future_of_ai_comprehensive_20250822032121.pptx
PDF
AI.gov: A Trojan Horse in the Age of Artificial Intelligence
PDF
Transform-Quality-Engineering-with-AI-A-60-Day-Blueprint-for-Digital-Success.pdf
Co-training pseudo-labeling for text classification with support vector machi...
EIS-Webinar-Regulated-Industries-2025-08.pdf
“The Future of Visual AI: Efficient Multimodal Intelligence,” a Keynote Prese...
Transform-Your-Supply-Chain-with-AI-Driven-Quality-Engineering.pdf
MuleSoft-Compete-Deck for midddleware integrations
Auditboard EB SOX Playbook 2023 edition.
Introduction to MCP and A2A Protocols: Enabling Agent Communication
Advancing precision in air quality forecasting through machine learning integ...
LMS bot: enhanced learning management systems for improved student learning e...
Lung cancer patients survival prediction using outlier detection and optimize...
Connector Corner: Transform Unstructured Documents with Agentic Automation
Module 1 Introduction to Web Programming .pptx
4 layer Arch & Reference Arch of IoT.pdf
agenticai-neweraofintelligence-250529192801-1b5e6870.pptx
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
Data Virtualization in Action: Scaling APIs and Apps with FME
Accessing-Finance-in-Jordan-MENA 2024 2025.pdf
future_of_ai_comprehensive_20250822032121.pptx
AI.gov: A Trojan Horse in the Age of Artificial Intelligence
Transform-Quality-Engineering-with-AI-A-60-Day-Blueprint-for-Digital-Success.pdf

H010315356

  • 1. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE) e-ISSN: 2278-1676,p-ISSN: 2320-3331, Volume 10, Issue 3 Ver. I (May – Jun. 2015), PP 53-56 www.iosrjournals.org DOI: 10.9790/1676-10315356 www.iosrjournals.org 53 | Page Image Compression using DWT and Principal Component Analysis Gurpreet Kaur, Kamaljeet Kaur Abstract: A block wise implementation of principal component algorithm is suggested din the base work. The main disadvantage with the earlier work is that it takes time as the size of the image increases and further looping due to blocking effect of the algorithm. The compression performance reduces as the block size reduces or no. of block increases. In order to solve the mentioned problems, the images are decomposed using the discrete wavelet transform using haar wavelet. The global principal component analysis algorithm is applied on LH and HL frequency sub-band images. This enables the compression by preserving the critical boundaries or contours that are to be preserved while compressing the image so that minimum information is lost in compression during thresholding process. Index Terms: PCA Principal Component Algorithm, PCA Principal Component Analysis, DWT Discrete Wavelet Transform I. Introduction PICTURES are the representation of a particular scene. Images can be of many types such as graphic, optical, mental etc. If we talk about digital images then these are the images that can be stored on hard disk. Pictures are the most common and convenient means of conveying or transmitting information. They portray spatial information that can be recognized as an object. II. Related Works In the paper entitled “Combined Sparse Representation Based on Curvelet Transform and Local DCT for Multi-layered Image Compression” proposed a new multi-layered representation technique for image compression which combine curvelet transform and local DCT in order to benefit from the advantages of each. 
He proposed morphological component analysis (MCA) method to separate the image into two layers, piecewise smooth layer and textured structure layer, respectively associated to curvelet transform and local DCT. Each layer is encoded independently with a different transform at a different bit rate. [1] In the paper entitled “A Novel Image Deblocking Method Based on Curvelet Transform” described that an Image block effect is due to the quantification process using Discrete Cosine Transform (DCT) to compression coding, which dumps some frequency, and leads to noticeable discontinuous leaps. A deblocking algorithm based on curvelet transform is proposed in this paper. [2] In the paper entitled “Laplacian Pyramid Versus Wavelet Decomposition for Image Sequence Coding” presented that there have been many applications in the multiresolution representations for image processing and data compression. Several approaches have been developed on this domain using different ways, the most widely used are sub-band decompositions with filter banks and pyramid transforms. [3] The presented paper reviews the different techniques for image compression and presents a comparison in a tabular way. Spatial as well as frequency domain techniques are discussed here in detail. Further, different transform like curvelet, wavelet and dct are discussed in detail as image compression algorithm. [4] In the paper entitled “Image Compression using Digital Curvelet Transform” presented a novel approach to digital image compression using a new mathematical transform, the curvelet transform. The transform has shown promising results over wavelet transform for 2-D signals. Wavelets, though well suited to point singularities have limitations with orientation selectivity, and therefore, do not represent two-dimensional singularities (e.g. smooth curves) effectively. 
[5] In the paper entitled “Curvelet Transform Based Embedded Lossy Image Compression” described that Curvelet transform is one of the recently developed multiscale transform, which possess directional features and provides optimally sparse representation of objects with edges. He proposed an algorithm for lossy image compression based on the second generation digital curvelet transform. [6] In the paper entitled “Curvelet-based Image Compression with SPIHT” proposed a new compression methodology, which uses curvelet coefficients with SPIHT (Set Partitioning in Hierarchical Trees) encoding scheme. The first phase deals with the transformation of the stimulus image into the curvelet coefficients. [7] Discrete cosine transform in combination with principal component analysis algorithm is discussed here for hyper spectral image compression. Appreciable improvement in psnr and compression ratio is observed using the proposed algorithm. [8]
  • 2. Image Compression using DWT and Principal Component Analysis DOI: 10.9790/1676-10315356 www.iosrjournals.org 54 | Page In the paper entitled “A New lossless Image Compression Technique based on Bose, Chandhuri and Hocquengham (BCH) Codes” presented an efficient lossless image compression approach that utilizes BCH coding with compression. The use of BCH code improves the results of the Huffman algorithm in terms of increasing the compression ratio. Therefore, the experiment confirms that the proposed technique is suitable for compression of text, image and video files and would be useful in numerous network applications. [9] In the paper entitled “Fast and Efficient Medical Image Compression Using Contourlet Transform” presented a Wavelet based contourlet image compression algorithm. In the diagnosis of medical images, the significant part (ROI) is separate out from the rest of the image using fuzzy C- means algorithm and then to the resultant image optimized contourlet transform is applied to enhance the visual quality. The regions of less significance are compressed using Discrete Wavelet transform. To the resultant image Huffman coding is applied to get the compressed image. [10] In the paper entitled “A Novel image fusion method using Contourlet Transform” introduced a novel image fusion algorithm based on contourlet transform. The principal of contourlet and its good performance in expressing the singularity of two or higher dimensional are studied. Experiments show that the proposed method works better in preserving the edge and texture information than wavelet transform method and laplacian pyramid methods do in image fusion. [11] In the paper entitled “Fusion of Multimodality Medical Images using Combined Activity Level Measurement and Contourlet Transform” described a novel combined Activity Level Measurement (ALM) and Contourlet Transform (CNT) for spatially registered, multi-sensor, multi- resolution medical images. 
[12] In the paper entitled “A Novel image deblocking method based on Curvelet transform” presented the algorithm which processes the curvelet coefficients separately which are obtained by curvelet transform of the degraded images to recovery the images. The coefficients corresponding to block effect of the original image can be found for every layer and different layers using different methods. [13] In the paper entitled “SAR and panchromatic image fusion based on region features in nonsubsampled contourlet transform Domain” presented a novel fusion algorithm based on the imaging characteristic of the SAR image. The algorithm performs the different fusion rules for each particular region independently. [14] In the paper entitled “Performance Analysis of Multi Source Fused Medical Images using Multi resolution transforms” described that the image fusion combines information from multiple images of the same scene to get a composite image that is more suitable for human visual perception or further image processing tasks. The fused output obtained after the inverse transform of fused sub band coefficients. [15] III. Image Acquisition and Preprocessing Principal Components Analysis (PCA) is a mathematical formulation used in the reduction of data dimensions. Once patterns are found, they can be compressed, i.e., their dimensions can be reduced without much loss of information. In summary, the PCA formulation may be used as a digital image compression algorithm with a low level of loss. The reduced dimension computational structure is selected so that relevant data characteristics are identified with little loss of information. Such a reduction is advantageous in several instances: for image compression, data representation, calculation reduction necessary in subsequent processing, etc. Use of the PCA technique in data dimension reduction is justified by the easy representation of multidimensional data, using the information contained in the data covariance matrix. 
The description of Principal Component Analysis is made by means of the eigenvalues and eigenvectors of a matrix. The following steps are carried out to compress a given input image using PCA:

Step-1: Convert the RGB image into a grayscale image, i.e. 8-bit format. The gray image is a row × column matrix of color intensities, each stored in 8 bits.
Step-2: Center the image by computing the mean of the image intensities and then subtracting the mean gray value from each pixel's gray value, giving the centered image CI.
Step-3: Compute the covariance of the centered image CI by the following expression:
CovImg = CI(r,c) · CI(r,c)T
where CI(r,c)T is the transpose of the matrix CI(r,c).
Step-4: Compute the eigenvalues and eigenvectors of the covariance matrix by
using the following expression:
A·V = λ·V
where
A: m × m matrix (the gray image matrix),
V: m × 1 non-zero vector (eigenvector),
λ: scalar (eigenvalue).
Any value of λ for which this equation has a solution is called an eigenvalue of A, and the vector V which corresponds to this value is called the eigenvector of A.
Step-5: From A·V = λ·V it follows that A·V − λ·I·V = 0, i.e. (A − λ·I)·V = 0. Finding the roots of |A − λ·I| gives the eigenvalues, and for each of these eigenvalues there is an eigenvector.
Step-6: A threshold is selected and the eigenvalues smaller than the threshold are removed. In other words, the largest eigenvalues and their corresponding eigenvectors are extracted based on some threshold value. These are called the principal components of the (centered) image.
Step-7: Based on the principal components, the image is projected onto the retained eigenvectors; the images corresponding to the largest eigenvectors are stored as the principal image components and constitute the compressed image.

IV. Quality Metrics for Performance Evaluation
To judge the performance of image compression techniques and to compare them with the proposed work, the following quality measures are used:
Peak Signal-to-Noise Ratio (PSNR): The PSNR is used to evaluate the quality of the reconstructed image. It is defined as follows:
PSNR = 10 log10( 255² / MSE ),  MSE = (1 / (m·n)) Σi,j ( f(i,j) − f′(i,j) )²   (1.5)
where m × n is the size of the original image and f(i,j) and f′(i,j) are the gray-level pixel values of the original and reconstructed images, respectively.
Standard Deviation (SD): The variance of an image is given by:
σ² = (1 / (m·n)) Σi,j ( x(i,j) − µ )²   (1.6)
and the standard deviation σ is its square root. This corresponds to the degree of deviation between the gray levels and their mean value µ over the whole image.
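The compression steps and the PSNR metric above can be sketched together in NumPy (an illustrative sketch, not the authors' MATLAB implementation; the image size, the number k of retained components, and the use of a column-wise covariance CIᵀ·CI in place of the paper's CI·CIᵀ are assumptions for demonstration):

```python
import numpy as np

def pca_compress(gray, k):
    """Steps 1-7 sketch: keep the k largest principal components of a
    grayscale image (illustrative; column-wise covariance is assumed)."""
    mean = gray.mean(axis=0)                          # Step-2: column means
    ci = gray - mean                                  # Step-2: centered image CI
    cov = ci.T @ ci                                   # Step-3: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # Step-4/5: eigendecomposition
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # Step-6: keep largest eigenvalues
    return ci @ top, top, mean                        # Step-7: principal components

def pca_reconstruct(scores, top, mean):
    """Rebuild an approximation of the image from the stored components."""
    return scores @ top.T + mean

def psnr(f, f_rec):
    """Eq. (1.5): PSNR = 10 log10(255^2 / MSE)."""
    mse = np.mean((np.asarray(f, dtype=float) - f_rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
scores, top, mean = pca_compress(img, 32)             # keep half the components
print(round(psnr(img, pca_reconstruct(scores, top, mean)), 1))
```

Retaining more components discards less eigenvalue energy, so the PSNR of the reconstruction rises monotonically with k.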
Entropy (E): The information entropy of an image is given by:
E = − Σi=0..L−1 pi log2( pi )
where L denotes the number of gray levels and pi is the ratio between the number of pixels whose gray value equals i (0 to L − 1) and the total number of pixels in the image. The information entropy measures the richness of information in an image.

V. Results
The presented algorithm is implemented in MATLAB version 7.5. A database of 100 pairs of ear images is prepared in JPEG format. The results show an accuracy above 95% in identifying the correct query image among the database images. The standard deviation and entropy are fair performance measures based on which
the distinction could be achieved. However, more statistical features could be added to increase the uniqueness when the database becomes large.

Table 1: Standard Deviation and Entropy of Query Image with respect to Data Base Images
Query Image    Data Base Image    SD       Entropy
1              15                 0.021    1.203
2              20                 0.023    1.142
3              18                 0.009    1.112
4              5                  0.011    1.283

Keeping the same query image, the processing time is computed for databases of different sizes and is tabulated below:

Table 2: Processing Time Estimate
Query Image    Data Base Size    Processing Time (Secs.)
1              25                10
1              50                25
1              75                60
1              100               95

VI. Conclusion
The results show a fair accuracy in matching the query image to its respective database image and are 95% accurate. However, it is observed that as the database size increases, the processing time increases proportionately. This is due to the large amount of matrix reshaping, arithmetic calculation, looping, etc. The processing time may be improved by developing a parallel algorithm for the same.

References
[1]. Rajesh M. Bodade, Maj Jayesh Nayyar, “Shift Invariant Ear Feature Extraction using Dual Tree Complex Wavelet Transform for Ear Recognition”, ACEEE Int. J. on Information Technology, Vol. 02, No. 02, April 2012.
[2]. Michał Choraś, Image Processing Group, Institute of Telecommunications, “Image Feature Extraction Methods for Ear Biometrics - A Survey”, 6th International Conference on Computer Information, 2007.
[3]. Bahattin Kocaman, Mürvet Kırcı, Ece Olcay Güneş, Yüksel Çakır, Özlem Özbudak, “On Ear Biometrics”, IEEE, 2009.
[4]. A. Pflug, C. Busch, “Ear Biometrics: A Survey of Detection, Feature Extraction and Recognition Methods”, IET Biometrics, April 2012.
[5]. Md. Mahbubur Rahman, Md. Rashedul Islam, Nazmul Islam Bhuiyan, Bulbul Ahmed, Md.
Aminul Islam, “Person Identification Using Ear Biometrics”, International Journal of The Computer, the Internet and Management, Vol. 15, No. 2 (May - August 2007), pp. 1-8.
[6]. Hanna-Kaisa Lammi, “Ear Biometrics”, Laboratory of Information Processing, 2004.
[7]. D. J. Hurley, B. Arbab-Zavar, M. S. Nixon, “The Ear as a Biometric”, EUSIPCO, 2007.
[8]. Ernő Jeges, László Máté, “Model-Based Human Ear Localization and Feature Extraction”, ICMED, April 2007.
[9]. Mark Burge, Wilhelm Burger, “Ear Biometrics in Computer Vision”, IEEE, 2000.