SUBJECT 20EEE523T
INTELLIGENT CONTROLLER
UNIT 2 PATTERN ASSOCIATION
Presented by
Mrs. R.SATHIYA
Reg no :PA2313005023001
Research Scholar, Department of Electrical and Electronics Engineering
SRM Institute of Technology & Science, Chennai
HEBB RULE – PATTERN ASSOCIATION
Contd ..
• It learns associations between input patterns and output patterns.
• A pattern associator can be trained to respond with a certain output pattern when presented with an input pattern.
• The connection weights can be adjusted in order to change the input/output behaviour.
• It specifies how a network changes its weights for a given input/output association.
• The most commonly used learning rules with pattern associators are the Hebb rule and the Delta rule.
Training Algorithms For Pattern Association
• Used for finding the weights for an associative memory NN.
• The patterns are represented in binary or bipolar form.
• A similar algorithm, with a slight extension, finds the weights by outer products.
• We want to consider examples in which the input to the net after training is a pattern that is similar to, but not the same as, one of the training inputs.
• Each association is an input-output vector pair, s:t.
Training Algorithms For Pattern Association
Contd ..
• To store a set of associations s(p):t(p), p = 1, …, P, where
• The weight matrix W is given by
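The weight formula itself was an image on the original slide; in the usual textbook (Hebbian) form, with s(p) of length n and t(p) of length m, it is:

$$ w_{ij} \;=\; \sum_{p=1}^{P} s_i(p)\, t_j(p), \qquad i = 1,\dots,n,\;\; j = 1,\dots,m $$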
Outer product
• Instead of obtaining W by iterative updates, it can be computed from the training set by calculating the outer product of s and t.
• The weights are initially zero.
• The outer product of two vectors:
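As a concrete illustration (a minimal NumPy sketch, not taken from the slides; the training pairs are made up), the weight matrix can be built directly as a sum of outer products of bipolar training pairs:

```python
import numpy as np

# Illustrative bipolar training pairs s(p) -> t(p); any values could be used.
S = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1]])        # inputs, shape (P, n)
T = np.array([[ 1, -1],
              [-1,  1]])                # targets, shape (P, m)

# Hebb rule by outer products: W = sum_p outer(s(p), t(p)), starting from zero weights.
W = sum(np.outer(s, t) for s, t in zip(S, T))

# Recall: net input followed by the sign (bipolar step) function.
y = np.sign(S[0] @ W)
print(W)
print(y)   # reproduces T[0] here because the two stored inputs are orthogonal
```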
example
contd..
Example
Perfect recall versus cross talk
• The suitability of the Hebb rule for a particular problem depends on the correlation among the input training vectors.
• If the input vectors are uncorrelated (orthogonal), the Hebb rule will produce the correct weights, and the response of the net when tested with one of the training vectors will be perfect recall of the input vector's associated target.
• If the input vectors are not orthogonal, the response will include a portion of each of their target values. This is commonly called cross talk.
Contd ..
• Two vectors are orthogonal if their dot product is 0.
• Orthogonality between the input patterns can be checked only for binary or bipolar patterns.
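A quick way to check the condition above (a small NumPy sketch with made-up bipolar vectors):

```python
import numpy as np

s1 = np.array([1, -1,  1, -1])
s2 = np.array([1,  1, -1, -1])
s3 = np.array([1,  1,  1, -1])

print(np.dot(s1, s2))   # 0 -> orthogonal: perfect recall expected
print(np.dot(s1, s3))   # 2 -> not orthogonal: cross talk expected
```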
Delta rule
• In its original form, as introduced in Chapter 2, the delta rule assumed that the activation function for the output unit was the identity function.
• A simple extension allows for the use of any differentiable activation function; we shall call this the extended delta rule.
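The update rule of the extended delta rule was not reproduced on the slide; stated in the usual textbook notation (learning rate α, differentiable activation f, net input y_in_j to output unit j), it is:

$$ \Delta w_{ij} \;=\; \alpha\,\bigl(t_j - y_j\bigr)\, f'\!\bigl(y_{in_j}\bigr)\, x_i $$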
Associative Memory Network
• These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, at the time of giving an output, they can produce one of the stored patterns by matching it with the given input pattern.
• These types of memories are also called Content-Addressable Memory (CAM). Associative memory makes a parallel search with the stored patterns as data files.
• Example
Contd ..
The two types of associative memories:
• Auto Associative Memory
• Hetero Associative Memory
Auto associative Memory
• Training input and output vectors are the same.
• Determination of the weights is called storing of the vectors.
• The weights are initially set to zero.
• Auto associative net with no self-connections.
• Its performance is judged by its ability to reproduce a stored pattern from a noisy input.
• Its performance is in general better for bipolar vectors than for binary vectors.
Architecture
• Input and output vectors are the same.
• The input vector has n components and the output vector has n components.
• The input and output units are connected through weighted connections.
Training Algorithm
Testing Algorithm
• An auto associative net can be used to determine whether a given vector is a 'known' or an 'unknown' vector.
• The net is said to recognize a 'known' vector if it produces a pattern of activation on the output which is the same as one of the stored patterns.
• The testing procedure is as follows:
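The step-by-step procedure itself appeared as a figure on the slide; purely as an illustration (a minimal NumPy sketch with made-up vectors), one bipolar pattern can be stored with the Hebb rule, the diagonal zeroed to remove self-connections, and recall tested with a noisy copy:

```python
import numpy as np

s = np.array([1, 1, -1, -1])         # stored pattern (illustrative)

W = np.outer(s, s)                   # Hebb rule: W = s^T s
np.fill_diagonal(W, 0)               # no self-connections

noisy = np.array([1, -1, -1, -1])    # one component flipped
y = np.sign(noisy @ W)               # recall

print(y)                             # [ 1  1 -1 -1]: the input is recognized as 'known'
```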
Example
Hetero Associative Memory
• The input training vectors and the output target vectors are not the same.
• The weights are determined by the Hebb rule or the delta rule.
• The weights are determined so that the network stores a set of patterns.
• A hetero associative network is static in nature; hence, there are no non-linear or delay operations.
Architecture
• The input layer has 'n' units and the output layer has 'm' units.
• There are weighted interconnections between the input and output layers.
• Associative memory neural networks are nets in which the weights are determined in such a way that the net can store a set of P pattern associations.
• Each association is a pair of vectors (s(p), t(p)), with p = 1, 2, ….., P.
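A hetero associative sketch under the same Hebbian scheme (NumPy; the pairs are made up, with n = 4 inputs and m = 2 outputs):

```python
import numpy as np

# Illustrative input/target pairs s(p) -> t(p) with different dimensions.
S = np.array([[ 1, -1, -1, -1],
              [-1,  1,  1, -1]])      # shape (P, n)
T = np.array([[ 1, -1],
              [-1,  1]])              # shape (P, m)

W = S.T @ T                           # Hebbian weight matrix, shape (n, m)

x = np.array([1, -1, -1, 1])          # test vector close to S[0]
print(np.sign(x @ W))                 # [ 1 -1], i.e. the target T[0]
```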
Testing Algorithm
Example
Contd..
Artificial Neural Network - Hopfield
Networks
• Hopfield neural network was invented by Dr. John J. Hopfield in
1982.
• It consists of a single layer which contains one or more fully
connected recurrent neurons.
• The Hopfield network is commonly used for auto-association and
optimization tasks
• Two types of network:
1. Discrete Hopfield network
2. Continuous Hopfield network
Discrete Hopfield network
• A Hopfield network which operates in a discrete-time fashion; in other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, -1) in nature.
• The network has symmetrical weights with no self-connections, i.e. wij = wji and wii = 0.
• Only one unit updates its activation at a time.
• The asynchronous updating of the units allows a function, known as an energy or Lyapunov function, to be found for the net.
Architecture
Following are some important points to keep in mind about the discrete Hopfield network:
• This model consists of neurons with one inverting and one non-inverting output.
• The output of each neuron should be the input of other neurons but not the input of itself.
• Weight/connection strength is represented by Wij.
• Connections can be excitatory as well as inhibitory: a connection is excitatory if the output of the neuron is the same as the input, otherwise inhibitory.
• Weights should be symmetrical, i.e. Wij = Wji.
• The outputs from Y1 going to Y2, Yi and Yn have the weights W12, W1i and W1n respectively. Similarly, the other arcs have their weights on them.
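A compact sketch of asynchronous recall in a discrete Hopfield net (NumPy; the stored patterns and the corrupted probe are made up): weights come from the bipolar Hebb rule with a zero diagonal, and one randomly chosen unit updates at a time:

```python
import numpy as np

rng = np.random.default_rng(0)

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])    # stored patterns (illustrative)

W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                             # symmetric weights, w_ii = 0

y = np.array([1, -1, 1, 1, 1, -1])                 # corrupted copy of pattern 0

for _ in range(20):                                # asynchronous updating
    i = rng.integers(len(y))                       # only one unit updates at a time
    net = W[i] @ y
    if net != 0:
        y[i] = 1 if net > 0 else -1

print(y)                                           # settles on the stored pattern [1 -1 1 -1 1 -1]
```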
Algorithm
Testing algorithm
Example – recalling of corrupted
pattern
Example
Contd..
Energy function
• An energy function is defined as a function that is bounded and is a non-increasing function of the state of the system.
• The energy function Ef, also called a Lyapunov function, determines the stability of the discrete Hopfield network, and is characterized as follows:
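The expression itself was an image on the slide; in the form usually given for a discrete Hopfield net with external inputs xi, activations yi, weights wij and thresholds θi (stated here as an assumption about the intended formula), it reads:

$$ E_f \;=\; -\tfrac{1}{2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq i}}^{n} w_{ij}\, y_i\, y_j \;-\; \sum_{i=1}^{n} x_i\, y_i \;+\; \sum_{i=1}^{n} \theta_i\, y_i $$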
Contd ..
The change in energy depends on the fact that only one unit can update its
activation at a time.
Storage capacity
• The number of binary patterns that can be stored and recalled in a net with reasonable accuracy is given approximately by
• For bipolar patterns:
where n is the number of neurons in the net.
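The capacity estimates themselves were not captured in the extracted text; the figures usually quoted for the Hopfield net (stated here as an assumption about the missing formulas) are:

$$ C \approx 0.15\, n \;\; \text{(binary patterns)}, \qquad C \approx \frac{n}{2\log_2 n} \;\; \text{(bipolar patterns)} $$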
Continuous Hopfield network
• In comparison with the discrete Hopfield network, the continuous network has time as a continuous variable.
• It is also used in auto association and optimization problems such as the travelling salesman problem.
• Nodes have a continuous, graded output.
• Energy decreases continuously with time.
• Realized as an electrical circuit which uses non-linear amplifiers and resistors.
• Used in building Hopfield networks with VLSI technology.
Energy function
Iterative Autoassociative networks
• The net may not respond to the input signal immediately with the stored target pattern.
• It may respond with something that merely resembles a stored pattern.
• Use the first response as input to the net again.
• An iterative autoassociative network recovers the original stored vector when presented with a test vector close to it.
• Also called recurrent autoassociative networks.
Example
Contd ..
Linear Autoassociative Memory
• Proposed by James Anderson, 1977.
• Based on the Hebbian rule.
• Linear algebra is used for analyzing the performance of the net.
• Each stored vector is an eigenvector of the weight matrix.
• The corresponding eigenvalue is the number of times the vector was presented.
• When the input vector is X, the output response is XW, where W is the weight matrix.
Brain In The Box Network
• An activity pattern inside the box receives positive feedback on certain components, which will force it outward.
• When it hits the walls, it moves to the corner of the box, where it remains.
• The box represents the saturation limit of each state.
• Activity is restricted between -1 and +1.
• Self-connections exist.
Training Algorithm
Autoassociative Net with Threshold Unit
• If a threshold unit is set, then a threshold function is used as the activation function.
• Training algorithm
EXAMPLE
Contd ..
Temporal Associative Memory
Network
• Stores sequences of patterns as dynamic transitions.
• An associative memory with the capacity to store and recall temporal patterns is called a temporal associative memory network.
Bidirectional associative memory (BAM)
• It was first proposed by Bart Kosko in the year 1988.
• Performs backward and forward search.
• It associates patterns, say from set A to patterns from set B, and vice versa.
• Encodes bipolar/binary patterns using the Hebbian learning rule.
• Human memory is necessarily associative.
• It uses a chain of mental associations to recover a lost memory, e.g. if we have lost an umbrella.
BAM Architecture
• Weights are bidirectional.
• The X layer has 'n' input units.
• The Y layer has 'm' output units.
• The weight matrix from X to Y is W and from Y to X is WT.
• The process is repeated until the input and output vectors become unchanged (reach a stable state).
• Two types:
1. Discrete BAM
2. Continuous BAM
DISCRETE BIDIRECTIONAL ASSOCIATIVE MEMORY
• Here the weights are found as the sum of the outer products of the bipolar form of the training vector pairs.
• The activation function is a step function with a non-zero threshold.
• Determination of weights:
1. Let the input vectors be denoted by s(p) and the target vectors by t(p).
2. The weight matrix stores a set of input and target vectors, where s(p) = (s1(p), .. , si(p), ... , sn(p)) and t(p) = (t1(p), .. , tj(p), ... , tm(p)).
3. It can be determined by the Hebb rule training algorithm.
4. If the input is binary, the weight matrix W = {wij} is given by
Contd..
• If the input vectors are bipolar, the weight matrix W = {wij} can be defined as
• Activation function for BAM
• The activation function is based on whether the input-target vector pairs used are binary or bipolar.
• The activation function for the Y-layer:
1. With binary input vectors:
2. With bipolar input vectors:
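A minimal discrete BAM sketch (NumPy; the bipolar pairs and the noisy probe are made up): the weights are the sum of outer products, and recall alternates between the X and Y layers until the pair stops changing:

```python
import numpy as np

# Illustrative bipolar association pairs s(p) <-> t(p).
S = np.array([[ 1,  1, -1, -1],
              [-1, -1,  1,  1]])
T = np.array([[ 1, -1],
              [-1,  1]])

W = S.T @ T                                    # n x m weights; W^T is used in the reverse direction

def bam_recall(x, W, steps=10):
    """Bidirectional recall: forward through W, backward through W^T."""
    y = np.sign(x @ W)
    for _ in range(steps):
        x_new = np.sign(y @ W.T)
        y_new = np.sign(x_new @ W)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                              # stable state reached
        x, y = x_new, y_new
    return x, y

print(bam_recall(np.array([1, 1, -1, 1]), W))  # noisy S[0] -> (S[0], T[0])
```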
Testing Algorithm for Discrete
Bidirectional Associative Memory
Continuous BAM
• A continuous BAM [Kosko, 1988] transforms input smoothly and continuously into output in the range [0, 1], using the logistic sigmoid function as the activation function for all units.
• For binary input vectors, the weights are
• The activation function is the logistic sigmoid.
• With bias included, the net input is
Hamming distance, analysis of energy function and storage capacity
• Hamming distance: the number of mismatched components of two given bipolar/binary vectors.
• Denoted by
• Average distance =
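For bipolar or binary vectors this is straightforward to compute; a small sketch (the vectors are illustrative):

```python
import numpy as np

x = np.array([ 1, -1,  1, -1,  1])
y = np.array([ 1,  1,  1, -1, -1])

hd = int(np.sum(x != y))        # number of mismatched components
print(hd, hd / len(x))          # Hamming distance = 2, average distance = 0.4
```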
Contd..
• Energy function
• Stability is determined by a Lyapunov function (energy function).
Storage capacity
• The memory capacity is min(m, n),
• where "n" is the number of units in the X layer and "m" is the number of units in the Y layer.
• A more conservative capacity estimate is as follows:
Application of BAM
• Fault Detection
• Pattern Association
• Real Time Patient Monitoring
• Medical Diagnosis
• Pattern Mapping
• Pattern Recognition systems
• Optimization problems
• Constraint satisfaction problem
Example
Competitive learning network
• It is concerned with unsupervised training in which the output nodes compete with each other to represent the input pattern.
• Basic concept of a competitive network:
• This network is just like a single-layer feed-forward network, except that it has feedback connections between the outputs.
• The connections between the outputs are of the inhibitory type (shown by dotted lines), which means the competitors never support themselves.
Contd..
• Example:
• Considering a set of students, if you want to classify them on the basis of evaluation performance, their scores may be calculated, and the one whose score is higher than the others should be the winner.
• This is called a competitive net. The extreme form of these competitive nets is called winner-take-all,
• i.e., only one neuron in the competing group will possess a non-zero output signal at the end of the competition.
• Only one neuron is active at a time. Only the winner has its weights updated; the rest remain unchanged (see the sketch after this list).
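A single winner-take-all step can be sketched as follows (NumPy; the input, weights and learning rate are made up): only the unit with the largest net input stays active, and only its weights move toward the input:

```python
import numpy as np

x = np.array([0.8, 0.2, 0.6])                 # input pattern (illustrative)
W = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.7, 0.9],
              [0.9, 0.1, 0.5]])               # one weight row per competing output unit
alpha = 0.5                                   # learning rate

net = W @ x                                   # net input of each unit
winner = np.argmax(net)                       # competition: the largest net input wins

y = np.zeros_like(net)
y[winner] = 1.0                               # only the winner is active (non-zero output)

W[winner] += alpha * (x - W[winner])          # only the winner's weights are updated
print(winner, y, W[winner])
```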
Contd..
• Some of the neural networks that come under this category:
1. Kohonen self-organizing feature maps
2. Learning vector quantization
3. Adaptive resonance theory
Kohonen self organizing feature map
• The Self Organizing Map (or Kohonen Map or SOM) is a type of Artificial Neural Network.
• It follows an unsupervised learning approach and trains its network through a competitive learning algorithm.
• SOM is used for clustering and mapping (or dimensionality reduction), mapping multidimensional data onto a lower-dimensional space, which allows people to reduce complex problems for easy interpretation.
• SOM has two layers:
1. Input layer
2. Output layer
Operation
• SOM operates in two modes: (1) Training, (2) Mapping.
• Training process: develops the map using a competitive procedure (vector quantization).
• Mapping process: classifies a newly supplied input based on the training outcomes.
• Basic competitive learning implies that the competition process takes place before the cycle of learning.
• The competition process suggests that some criterion selects a winning processing element.
• After the winning processing element is selected, its weight vector is adjusted according to the learning law used.
• Feature mapping is a process which converts patterns of arbitrary dimensionality into a response of a one- or two-dimensional array of neurons.
• A network performing such a mapping is called a feature map. The reason for reducing the higher dimensionality is the ability to preserve the neighbour topology. A minimal training step is sketched below.
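One SOM training step can be sketched as follows (NumPy; the map size, input dimension, learning rate and neighbourhood radius are made up): find the best-matching unit by Euclidean distance, then pull it and its grid neighbours toward the input:

```python
import numpy as np

rng = np.random.default_rng(1)
grid_w, grid_h, dim = 5, 5, 3                    # 5x5 output map, 3-D inputs (illustrative)
W = rng.random((grid_w, grid_h, dim))            # one weight vector per map node

def som_step(x, W, alpha=0.3, sigma=1.0):
    # Best-matching unit: the node whose weight vector is closest to x.
    d = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighbourhood on the grid around the winner.
    ii, jj = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    # Move every node toward x, scaled by its neighbourhood strength.
    W += alpha * h[..., None] * (x - W)
    return W

W = som_step(np.array([0.2, 0.9, 0.4]), W)
```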
Training algorithm
Application - speech recognition
• Short segments of the speech waveform are given as input.
• The net maps the same kinds of phonemes to the same region of the output array; this is called a feature extraction technique.
• After extracting the features, with the help of some acoustic models as back-end processing, it will recognize the utterance.
Learning vector quantization (LVQ)
• Purpose: dimensionality reduction and data compression.
• A self-organizing map (SOM) encodes a large set of input vectors {x} by finding a smaller set of representatives/prototypes/clusters.
• LVQ is a supervised version of vector quantization that can be used when we have labelled input data.
• It is a two-stage process: a SOM is followed by LVQ.
• The first step is feature selection: the unsupervised identification of a reasonably small set of features.
• The second step is classification, where the feature domains are assigned to individual classes.
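The supervised step can be sketched as an LVQ1 update (NumPy; the prototypes, labels and learning rate are made up): the nearest prototype is pulled toward the input if its class label agrees with the input's label, and pushed away otherwise:

```python
import numpy as np

prototypes = np.array([[0.2, 0.8],
                       [0.9, 0.1]])        # one prototype per class (illustrative)
labels = np.array([0, 1])                  # class label of each prototype
alpha = 0.3                                # learning rate

def lvq1_step(x, x_label, prototypes, labels, alpha):
    d = np.linalg.norm(prototypes - x, axis=1)
    k = np.argmin(d)                                   # nearest prototype wins
    if labels[k] == x_label:
        prototypes[k] += alpha * (x - prototypes[k])   # attract: correct class
    else:
        prototypes[k] -= alpha * (x - prototypes[k])   # repel: wrong class
    return prototypes

prototypes = lvq1_step(np.array([0.25, 0.7]), 0, prototypes, labels, alpha)
print(prototypes)
```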
Architecture
Example
• The first step is to train the machine with all the different fruits one by one, like this:
• If the shape of the object is rounded, has a depression at the top and is red in color, then it will be labeled as Apple.
• If the shape of the object is a long curving cylinder with a green-yellow color, then it will be labeled as Banana.
• After training on the data, you are given a new separate fruit, say a banana from the basket, and asked to identify it.
• It will first classify the fruit by its shape and color, confirm the fruit name as BANANA, and put it in the Banana category. Thus the machine learns from the training data (the basket containing fruits) and then applies that knowledge to the test data (the new fruit).
Flowchart
Adaptive resonance theory
• Adaptive - the networks are open to new learning.
• Resonance - without discarding the previous or old information.
• ART networks are known to solve the stability-plasticity dilemma:
• stability refers to their nature of memorizing the learning, and
• plasticity refers to the fact that they are flexible enough to gain new information.
• Due to this nature, ART networks are always able to learn new input patterns without forgetting the past.
Contd..
• Invented by Grossberg in 1976 and based on an unsupervised learning model.
• Resonance means a target vector matches the input vector closely enough.
• ART matching leads to resonance, and only in the resonance state does the ART network learn.
• Suitable for problems that use online, dynamic, large databases.
• Types: (1) ART 1 - classifies binary input vectors. (2) ART 2 - clusters real-valued (continuous-valued) input vectors.
• Used to solve the plasticity-stability dilemma.
Architecture
• It consists of:
1. A comparison field
2. A recognition field - composed of neurons
3. A vigilance parameter
4. A reset module
• Comparison phase − In this phase, the input vector is compared to the comparison layer vector.
• Recognition phase − The input vector is compared with the classification presented at every node in the output layer. The output of a neuron becomes "1" if it best matches the classification applied, otherwise it becomes "0".
• Vigilance parameter − After the input vectors are classified, a reset module compares the strength of the match to the vigilance parameter (defined by the user).
• Higher vigilance produces fine, detailed memories, and a lower vigilance value gives more general memories.
• Reset module - compares the strength of the recognition phase. When the vigilance threshold is met, training starts; otherwise the neuron is inhibited until a new input is provided.
• There are two sets of weights:
(1) Bottom-up weights - from the F1 layer to the F2 layer
(2) Top-down weights - from the F2 layer to the F1 layer
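The vigilance test at the heart of ART 1 can be sketched as follows (NumPy; the binary input, the winning unit's top-down template and ρ are made up): the match is the fraction of the input preserved by the winning category's top-down template, compared against the vigilance parameter ρ:

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0])          # binary input pattern (illustrative)
t_j = np.array([1, 0, 1, 0, 0])        # top-down template of the winning F2 unit
rho = 0.7                              # vigilance parameter set by the user

match = np.sum(x & t_j) / np.sum(x)    # |x AND t_j| / |x|
if match >= rho:
    print("resonance: learn - update the bottom-up and top-down weights")
else:
    print("reset: inhibit this F2 unit and search for another category")
```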
Notation used in algorithm
Training algorithm
Contd..
Application
• ART neural networks, used for fast, stable learning and prediction, have been applied in different areas.
• Applications of ART:
• target recognition, face recognition, medical diagnosis, signature verification, mobile robot control.
• Signature verification:
• Signature verification is used in bank check confirmation, ATM access, etc.
• The training of the network is done using ART 1, which uses global features as the input vector.
• The testing phase has two steps: 1. the verification phase and 2. the recognition phase.
• In the first step, the input vector is matched with the stored reference vector, which was used as the training set, and in the second step, cluster formation takes place.
Signature verification -flowchart