SEGMENTATION OF FEATURES USING NEURAL NETWORK WITH CARDIAC DATASET


ABSTRACT

Accurate segmentation of the cardiac bi-ventricle (CBV) from magnetic resonance (MR) images is of great significance for analyzing and evaluating the function of the cardiovascular system. However, the majority of cardiac MR images show similar intensity distributions across different regions and therefore provide little edge information. In the proposed system, the color input images are first converted to grayscale. Image features such as color, weight, depth, and pixel information are then extracted and applied to the classifier (a neural network). An ROI (region of interest) segmentation algorithm is used to detect and segment the defective areas. A backpropagation neural network is used for training and testing the images with the help of a weight-estimating classifier.

TABLE OF CONTENTS

ABSTRACT

LIST OF ABBREVIATIONS

LIST OF FIGURES

1. INTRODUCTION
   1.1 IMAGE PROCESSING
   1.2 TYPES OF DIGITAL IMAGE
   1.3 COLOR MODEL
       1.3.1 RGB COLOR MODEL
       1.3.2 CONVERTING COLOR TO GRAYSCALE
       1.3.3 SINGLE CHANNEL COLOR IMAGES
       1.3.4 HISTOGRAM EQUALIZATION
   1.4 DIGITAL IMAGE PROCESSING
       1.4.1 IMAGE REPRESENTATION
       1.4.2 IMAGE ENHANCEMENT
       1.4.3 IMAGE SEGMENTATION
   1.5 FUNDAMENTALS OF DIGITAL IMAGE PROCESSING
       1.5.1 IMAGE ACQUISITION
       1.5.2 IMAGE PREPROCESSING
       1.5.3 IMAGE SEGMENTATION
       1.5.4 IMAGE REPRESENTATION
       1.5.5 IMAGE RECOGNITION

2. LITERATURE SURVEY

3. AIM AND SCOPE
   3.1 AIM
   3.2 SCOPE
   3.3 EXISTING SYSTEM
   3.4 PROPOSED SYSTEM
   3.5 MODULE DESCRIPTION
   3.6 BLOCK DIAGRAM
   3.7 SEGMENTATION
       3.7.1 EDGE-BASED SEGMENTATION
       3.7.2 REGION-BASED SEGMENTATION
   3.8 ACTIVE CONTOURS
       3.8.1 EDGE-BASED ACTIVE CONTOURS
       3.8.2 REGION-BASED ACTIVE CONTOURS
   3.9 NEURAL NETWORK
       3.9.1 HISTORICAL BACKGROUND
       3.9.2 BASICS OF NEURAL NETWORK
       3.9.3 NEURAL VS CONVENTIONAL
       3.9.4 NETWORK LAYERS

4. SYSTEM REQUIREMENTS
   4.1 HARDWARE REQUIREMENTS
   4.2 SOFTWARE REQUIREMENTS
   4.3 INTRODUCTION TO MATLAB
       4.3.1 TOOLS AND DEVELOPMENT
       4.3.2 MATHEMATICAL FUNCTION LIBRARY
       4.3.3 THE LANGUAGE
       4.3.4 GRAPHICS
       4.3.5 EXTERNAL INTERFACES
       4.3.6 IMAGE PROCESSING TOOLBOX
       4.3.7 COMPUTER VISION TOOLBOX
       4.3.8 IMAGE ACQUISITION TOOLBOX

5. RESULTS AND DISCUSSION
   5.1 RESULTS
   5.2 DISCUSSION

6. SUMMARY AND CONCLUSION
   6.1 SUMMARY
   6.2 CONCLUSION

REFERENCES

APPENDICES
LIST OF ABBREVIATIONS

RGB – Red Green Blue
CMYK – Cyan Magenta Yellow Black
LV – Left Ventricle
RV – Right Ventricle
ACDC – Automated Cardiac Diagnosis Challenge
LVC, RVC – Left and Right Ventricular Cavity
LVM – Left Ventricular Myocardium
MLP – Multi-Layer Perceptron
CAD – Computer-Assisted Diagnosis
ESV – End-Systolic Volume
EDV – End-Diastolic Volume
EF – Ejection Fraction
SV – Stroke Volume
VM – Ventricular Mass
ICC – Intraclass Correlation Coefficient
NCD – Non-Communicable Disease
ROI – Region of Interest
CVD – Cardiovascular Disease
DALY – Disability-Adjusted Life Year
ISBI – International Symposium on Biomedical Imaging
DIC – Differential Interference Contrast
CNN – Convolutional Neural Network
CBV – Cardiac Bi-Ventricle
MSE – Mean Square Error

LIST OF FIGURES

FIGURE  TITLE

1.1  RGB color model
1.2  Composition of RGB from 3 grayscale images
1.3  Original image
1.4  Color-equalized image
1.5  Image representation
1.6  Fundamental steps in digital image processing
3.1  Segmentation and classification
3.2  Neural network
5.1  Preprocessing and segmentation
5.2  Performance, training state and error histogram graph

CHAPTER 1

INTRODUCTION

INTRODUCTION TO IMAGE PROCESSING

Image processing is a method to enhance raw images received from cameras or sensors placed on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life, for various applications. Many techniques have been developed in image processing during the last four to five decades, most of them for enhancing images obtained from unmanned spacecraft, space probes, and military reconnaissance flights. Image processing systems are becoming popular due to the easy availability of powerful personal computers, large memory devices, graphics software, and so on. Image processing is a physical process used to convert an image signal, which can be either digital or analog, into a physical image; the output can be an actual physical image or the characteristics of an image.

A familiar example of image processing is photography. In this process, an image is captured or scanned using a camera to create a digital or analog image. To produce a physical picture, the image is processed using the appropriate technology for the input source type. In digital photography, the image is stored as a computer file, which photographic software translates into an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken, and the software translates this information into an image. In analog photography, the image is burned into film by a chemical reaction triggered by controlled exposure to light, and the image is then processed in a darkroom using special chemicals to create the actual picture. This process is decreasing in popularity due to the advent of digital photography, which requires less effort and special training to produce images. The field of digital imaging has created a whole range of new applications and tools that were previously impossible: face recognition software, medical image processing, and remote sensing are all possible due to the development of digital image processing, with specialized computer programs used to enhance these images.

TYPES OF DIGITAL IMAGE

For photographic purposes, there are two important types of digital images: color and grayscale. Color images are made up of colored pixels, while grayscale images are made up of pixels in different shades of gray. Binary images form a third, simpler type; the storage cost of each type is illustrated in the sketch after the following list.

 Grayscale Images: A grayscale image is made up of pixels, each of which holds a


single number corresponding to the gray level of the image at a particular location.
These gray levels span the full range from black to white in a series of very fine
steps, normally 256 different grays. Assuming 256 gray levels, each black and
white pixel can be stored in a single byte (8 bits) of memory.

 Color Images: A color image is made up of pixels, each of which holds three numbers corresponding to the red, green, and blue levels of the image at a particular location. Assuming 256 levels, each color pixel can be stored in three bytes (24 bits) of memory. Note that for images of the same size, a black-and-white version will use one-third the memory of a color version.

 Binary Images: Binary images use only a single bit to represent each pixel. Since a bit can only exist in two states, ON or OFF, every pixel in a binary image must be one of two colors, usually black or white. This inability to represent intermediate shades of gray is what limits their usefulness in dealing with photographic images.
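As a minimal sketch of these storage differences (assuming MATLAB with the Image Processing Toolbox and its bundled demo image peppers.png; imbinarize requires R2016a or later, older releases use im2bw):

    rgb  = imread('peppers.png');   % 24-bit color image: M-by-N-by-3 uint8, 3 bytes per pixel
    gray = rgb2gray(rgb);           % 8-bit grayscale image: M-by-N uint8, 1 byte per pixel
    bw   = imbinarize(gray);        % binary image: M-by-N logical, each pixel ON or OFF
    whos rgb gray bw                % the Bytes column shows the 3:1 color-to-grayscale ratio

Note that MATLAB stores a logical value in a full byte, so the one-bit-per-pixel saving of binary images shows up in file formats rather than in workspace memory.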

COLOR MODEL
RGB Color Model

The RGB color model is a representation of additive color mixing: projecting primary-color lights on a screen shows secondary colors where two overlap, and the combination of all three of red, green, and blue in appropriate intensities makes white. RGB is an additive color model in which red, green, and blue light is added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green, and blue.

Fig 1.1 RGB color model

The main purpose of the RGB color model is for the sensing, representation,
and display of images in electronic systems, such as televisions and computers,
though it has also been used in conventional photography.
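As a small sketch of additive mixing (assuming MATLAB; the 64-pixel patch size is arbitrary), summing full-intensity red, green, and blue patches yields white:

    red   = cat(3, ones(64), zeros(64), zeros(64));  % pure red patch (values in [0, 1])
    green = cat(3, zeros(64), ones(64), zeros(64));  % pure green patch
    blue  = cat(3, zeros(64), zeros(64), ones(64));  % pure blue patch
    white = red + green + blue;                      % additive combination of all three
    imshow([red green blue white])                   % the fourth patch displays as white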

Converting Color to Grayscale

Conversion of a color image to grayscale is not unique; different weightings of the color channels effectively represent the effect of shooting black-and-white film with different-colored photographic filters on the camera.

To convert any color to a grayscale representation of its luminance, first obtain the values of its red, green, and blue (RGB) primaries in linear intensity encoding, by gamma expansion. Then add together 30% of the red value, 59% of the green value, and 11% of the blue value (these weights depend on the exact choice of RGB primaries, but are typical). Regardless of the scale employed (0.0 to 1.0, 0 to 255, 0% to 100%, etc.), the resulting number is the desired linear luminance value; it typically needs to be gamma compressed to get back to a conventional grayscale representation.

This is not the method used to obtain the luma in the YUV and related color models, used in standard color TV and video systems such as PAL and NTSC, as well as in the Lab color model. These systems directly compute a gamma-compressed luma as a linear combination of gamma-compressed primary intensities, rather than using linearization via gamma expansion and compression. To convert a gray intensity value to RGB, simply set all three primary color components, red, green, and blue, to the gray value, correcting to a different gamma if necessary.
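A minimal sketch of this linear-luminance conversion (assuming MATLAB, the bundled demo image peppers.png, and a simple power-law gamma of 2.2 as an approximation):

    rgb = im2double(imread('peppers.png'));   % RGB values scaled to the range [0, 1]
    lin = rgb .^ 2.2;                         % gamma expansion to linear intensities
    lum = 0.30*lin(:,:,1) + 0.59*lin(:,:,2) + 0.11*lin(:,:,3);  % weighted sum of R, G, B
    gray = lum .^ (1/2.2);                    % gamma compression back to display encoding
    imshow(gray)

By contrast, MATLAB's built-in rgb2gray applies similar luma weights directly to the stored (gamma-compressed) values, in the spirit of the YUV-style systems described above.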

Single Channel Color Images

Color images are often built of several stacked color channels, each of them
representing value levels of the given channel. For example, RGB images are
composed of three independent channels for red, green and blue primary color
components. CMYK images have four channels for cyan, magenta, yellow and black
ink plates, etc.

Here is an example of color channel splitting of a full RGB color image. The column at left shows the isolated color channels in natural colors, while at right are their grayscale equivalents:

The reverse is also possible: a full color image can be built from its separate grayscale channels. By manipulating the channels, using offsets, rotations, and other transformations, artistic effects can be achieved instead of accurately reproducing the original image.

Fig 1.2 Composition of RGB from 3 Grayscale images
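A minimal sketch of splitting and recombining channels (assuming MATLAB and the bundled demo image peppers.png; the 25-pixel offset is arbitrary):

    rgb = imread('peppers.png');
    R = rgb(:,:,1);                                % each extracted channel is a grayscale image
    G = rgb(:,:,2);
    B = rgb(:,:,3);
    rebuilt = cat(3, R, G, B);                     % stacking the channels reproduces the original
    shifted = cat(3, circshift(R, [0 25]), G, B);  % offsetting one channel gives an artistic effect
    imshow([rebuilt shifted])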

Image Enhancement

Images obtained from satellites and from conventional and digital cameras sometimes lack contrast and brightness because of the limitations of the imaging subsystems and of the illumination conditions while capturing the image. Images may also contain different types of noise. In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for display. Examples include contrast and edge enhancement, pseudo-coloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis, and image display. The enhancement process itself does not increase the inherent information content in the data; it simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application-dependent.
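A minimal sketch of a few of these operations (assuming MATLAB with the Image Processing Toolbox and its bundled low-contrast demo image pout.tif; imsharpen requires R2013a or later):

    gray      = imread('pout.tif');       % low-contrast grayscale demo image
    stretched = imadjust(gray);           % stretch contrast to the full intensity range
    equalized = histeq(gray);             % histogram equalization
    denoised  = medfilt2(gray);           % median filter to suppress impulse noise
    sharp     = imsharpen(stretched);     % unsharp-mask edge enhancement
    imshow([gray stretched equalized denoised sharp])   % compare the variants side by side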

Image Segmentation

Image segmentation is the process that subdivides an image into its constituent parts or objects. The level to which this subdivision is carried out depends on the problem being solved; the segmentation should stop when the objects of interest in an application have been isolated. For example, in autonomous air-to-ground target acquisition, if our interest lies in identifying vehicles on a road, the first step is to segment the road from the image and then to segment the contents of the road down to potential vehicles.
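As an illustrative sketch of a basic threshold-based segmentation (assuming MATLAB with the Image Processing Toolbox and the bundled demo image coins.png; labeloverlay requires R2017b or later; the proposed system's ROI-based method is more elaborate than this):

    gray = imread('coins.png');                  % grayscale coins on a dark background
    mask = imbinarize(gray, graythresh(gray));   % global Otsu threshold separates objects
    mask = imfill(mask, 'holes');                % fill holes inside the segmented objects
    cc = bwconncomp(mask);                       % find connected components (the objects)
    fprintf('Found %d objects\n', cc.NumObjects);
    imshow(labeloverlay(gray, bwlabel(mask)))    % overlay labeled regions on the image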

FUNDAMENTALS OF DIGITAL IMAGE PROCESSING

The various basic steps are as follows:

 Image Acquisition
 Image Preprocessing
 Image Segmentation
 Image Representation and Description
 Image Recognition and Interpretation, and the knowledge base
