ADAPTIVE CHANNEL EQUALIZATION

College of Technology, Pantnagar
G.B. Pant University of Agriculture and Technology, Pantnagar

Kamal Bhatt
M.Tech-Electronics & Communication Engg.
ID-44036
NEURAL NETWORK
Neural networks are simplified models of biological neuron systems.

Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'.

Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'.

The hidden layers then link to an 'output layer' where the answer is output.
MODEL OF ARTIFICIAL NEURON
   An appropriate model/simulation of the nervous system should be
    able to produce similar responses and behaviours in artificial
    systems.
   The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
LEARNING IN A SIMPLE NEURON

   Perceptron Learning Algorithm:

1. Initialize weights
2. Present a pattern and target output
3. Compute output: $y = f\left[\sum_{i=0}^{2} w_i x_i\right]$


4. Update weights :           wi(t 1 wi(t)
                                    )                   wi

Repeat from step 2 until an acceptable level of error is reached.
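As a minimal illustration of these four steps, here is a sketch in Python (the bipolar step activation, learning rate `eta`, and the AND-function training set are illustrative assumptions, not from the slides):

```python
import numpy as np

def train_perceptron(patterns, targets, eta=0.1, max_epochs=100):
    """Perceptron learning: steps 1-4 above, repeated until the error is acceptable."""
    w = np.zeros(patterns.shape[1] + 1)           # step 1: initialize weights (+ bias)
    for _ in range(max_epochs):
        errors = 0
        for x, d in zip(patterns, targets):       # step 2: present pattern and target
            x_aug = np.append(1.0, x)             # bias input x_0 = 1
            y = 1 if w @ x_aug >= 0 else -1       # step 3: y = f[ sum_i w_i x_i ]
            if y != d:
                w += eta * (d - y) * x_aug        # step 4: w_i(t+1) = w_i(t) + Δw_i
                errors += 1
        if errors == 0:                           # acceptable level of error reached
            return w
    return w

# Usage: learn the linearly separable AND function on bipolar inputs
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
d = np.array([-1, -1, -1, 1])
print(train_perceptron(X, d))
```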
NEURAL NETWORK ARCHITECTURE
   An artificial Neural Network is defined as a data
    processing system consisting of a large number of
    interconnected processing elements or artificial
    neurons.
    There are three fundamentally different classes of
neural networks:

            Single layer feedforward Networks.

            Multilayer feedforward Networks.

            Recurrent Networks.
Applications
The tasks to which artificial neural networks are applied
tend to fall within the following broad categories:

•Function approximation, or regression analysis,
including time series prediction and modeling.

•Classification, including pattern and sequence
recognition, novelty detection and sequential
decision making.

•Data processing, including filtering, clustering,
blind signal separation and compression.
Equalization History
 The LMS algorithm by Widrow and Hoff in 1960
paved the way for the development of adaptive filters
used for equalisation.

Lucky used this algorithm in 1965 to design adaptive channel equalisers. The Maximum Likelihood Sequence Estimator (MLSE) equaliser and its Viterbi implementation followed in the 1970s.

The multilayer perceptron (MLP) based symbol-by-symbol equalisers were developed in 1990.
During 1989 to 1995, several efficient nonlinear artificial neural network equalizer structures for channel equalization were proposed; these include the Chebyshev Neural Network and the Functional Link ANN.

In 2002, Kevin M. Passino described optimization based on foraging theory in the article "Biomimicry of Bacterial Foraging".

More recently, in 2008, a rank-based statistics approach known as the Wilcoxon learning method was proposed for signal processing applications to mitigate linear and nonlinear learning problems.
Digital Communication Systems
Equalizers

Adaptive channel equalizers have played an important role in
digital communication systems.

An equalizer works like an inverse filter placed at the front end of the receiver. Its transfer function is the inverse of the transfer function of the associated channel, so it is able to reduce the error between the desired and estimated signals.

This is achieved through a process of training. During this
period the transmitter transmits a fixed data sequence and the
receiver has a copy of the same.
We use equalizers to compensate received signals that are corrupted by the noise, interference and signal power attenuation introduced by communication channels during transmission.

Linear transversal filters (LTF) are commonly used in the
design of channel equalizers. The linear equalizers fail to work
well when transmitted signals have encountered severe
nonlinear distortion.

A neural network (NN) can realize complex mappings between input and output signals, which makes NN-based equalizers a potentially suitable solution for dealing with nonlinear channel distortion.
Adaptive equalization
The problem of equalization may be treated as a problem of signal classification, so neural networks (NN) are quite promising candidates because they can produce arbitrarily complex decision regions.

Studies performed during the last decade have established the superiority of neural equalizers over traditional equalizers under high nonlinear distortion and rapidly varying signals.

Several different neural equalizer architectures have been developed, mostly combinations of a conventional linear transversal equalizer (LTE) and a neural network.

The LTE eliminates the linear distortions, such as ISI, so the NN can focus on compensating the nonlinearities. There have been studies on the following structures: an LTE and a multilayer perceptron (MLP), an LTE and a radial basis function network (RBF), and an LTE and a recurrent neural network.
MLP networks are sometimes plagued by long training
times and may be trapped at bad local minima.

RBF networks often provide a faster and more robust solution to the equalization problem. In addition, the RBF neural network has a structure similar to the optimal Bayesian symbol decision; the RBF is therefore an ideal processing structure for implementing the optimal Bayesian equalizer.

The RBF performances are better than those of the LTE and MLP equalizers. Several learning algorithms have been proposed to update the RBF parameters. However, the most popular algorithm consists of an unsupervised learning rule for the centers of the hidden neurons and a supervised learning rule for the weights of the output neurons.
The centers are generally updated using the k-means clustering
algorithm which consists of computing the squared distance
between the input vector and the centers, choosing a minimum
squared distance, and moving the corresponding center closer to
the input vector.
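A minimal sketch of this center update in Python (the online, sample-by-sample variant and the learning rate `alpha` are assumptions):

```python
import numpy as np

def update_centers(x, centers, alpha=0.05):
    """One online k-means step: move the nearest RBF center toward input x."""
    d2 = np.sum((centers - x) ** 2, axis=1)    # squared distance to every center
    k = np.argmin(d2)                          # center with minimum squared distance
    centers[k] += alpha * (x - centers[k])     # move the winning center closer to x
    return centers
```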

The k-means algorithm has some potential problems: the classification depends on the initial values of the centers of the RBF, on the type of distance chosen, and on the number of classes. If a center is inappropriately chosen it may never be updated, so it may never represent a class.

 A new competitive method is proposed here to update the RBF centers, one that rewards the winning neuron and penalizes the second winner, named the rival.
Gradient Based Adaptive Algorithm
An adaptive algorithm is a procedure for adjusting the
parameters of an adaptive filter to minimize a cost function
chosen for the task at hand.
In this case, the parameters in ω(t) correspond to the impulse response values of the filter at time t. We can write the output signal y(t) as

$y(t) = \omega^{T}(t)\, s(t)$
 The general form of an adaptive FIR filtering algorithm is

$\omega(t+1) = \omega(t) + \mu(t)\, G\!\left(e(t),\, s(t),\, \Phi(t)\right)$

where G(·) is a particular vector-valued nonlinear function (its form depends on the cost function chosen), μ(t) is a step size parameter, e(t) and s(t) are the error signal and input signal vector, respectively, and Φ(t) is a vector of states that stores pertinent information about the characteristics of the input and error signals.
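A sketch of this general form in Python (the state vector Φ(t) is dropped for brevity, and the filter length, step size, and test signals are assumed values):

```python
import numpy as np

def adapt_fir(x, d, G, L=8, mu=0.01):
    """General adaptive FIR skeleton: w(t+1) = w(t) + mu * G(e(t), s(t))."""
    w = np.zeros(L)
    for t in range(L, len(x)):
        s = x[t - L:t][::-1]          # input signal vector s(t)
        e = d[t] - w @ s              # error signal e(t) = d(t) - y(t)
        w = w + mu * G(e, s)          # generic adaptation step
    return w

# Choosing G(e, s) = e*s gives the LMS algorithm;
# G(e, s) = sign(e)*s would give the sign-error variant.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
d = np.roll(x, 1)                     # target: one-sample-delayed input
w = adapt_fir(x, d, lambda e, s: e * s)   # w converges toward [1, 0, ..., 0]
```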
The Mean-Squared Error (MSE) cost function can be defined as

$J_{MSE}(t) = \tfrac{1}{2}\, E\!\left[e^{2}(t)\right]$

The MSE-optimal parameter vector $W_{MSE}(t)$ can be found from the solution to the system of equations

$\frac{\partial J_{MSE}(t)}{\partial W(t)} = 0$

The method of steepest descent is an optimization procedure for minimizing the cost function J(t) with respect to a set of adjustable parameters W(t). This procedure adjusts each parameter of the system according to the relationship

$W(t+1) = W(t) - \mu(t)\, \frac{\partial J(t)}{\partial W(t)}$
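A toy illustration of this steepest-descent rule on a quadratic MSE-style cost (the matrix R, vector p, and step size are made-up values; for J(W) = ½WᵀRW − pᵀW the gradient is RW − p):

```python
import numpy as np

R = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed input autocorrelation matrix
p = np.array([1.0, 0.5])                 # assumed cross-correlation vector
W = np.zeros(2)
mu = 0.1
for _ in range(200):
    grad = R @ W - p                     # dJ(t)/dW(t)
    W = W - mu * grad                    # W(t+1) = W(t) - mu * dJ/dW
print(W, np.linalg.solve(R, p))          # steepest descent reaches the Wiener solution
```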
Linear Equalization Algorithms
LMS ALGORITHM

• In the family of stochastic gradient algorithms
• Approximation of the steepest – descent method
• Based on the MMSE (Minimum Mean Square Error) criterion.
• Adaptive process containing two input signals:
•      1.) Filtering process, producing output signal.
•      2.) Desired signal (Training sequence)
• Adaptive process: recursive adjustment of filter tap
  weights
LMS ALGORITHM STEPS

 Filter output: $y(n) = \sum_{k=0}^{M-1} u(n-k)\, w_k^{*}(n)$

 Estimation error: $e(n) = d(n) - y(n)$

 Tap-weight adaptation: $w_k(n+1) = w_k(n) + \mu\, u(n-k)\, e^{*}(n)$

In words: [updated value of tap-weight vector] = [old value of tap-weight vector] + [learning-rate parameter] × [tap-input vector] × [error signal].
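A minimal NumPy sketch of these three steps used to train a channel equalizer (the channel taps, BPSK training sequence, noise level, decision delay, filter length, and step size are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training period: the transmitter sends a fixed sequence; the receiver holds a copy d.
d = rng.choice([-1.0, 1.0], size=2000)            # desired signal (BPSK training sequence)
u = np.convolve(d, [0.35, 0.87, 0.35])[:len(d)]   # received signal through an assumed channel
u += 0.05 * rng.standard_normal(len(d))           # additive channel noise

M, mu, delay = 11, 0.01, 1                        # equalizer length, step size, decision delay
w = np.zeros(M)
for n in range(M, len(d)):
    u_vec = u[n - M + 1:n + 1][::-1]              # tap inputs u(n-k), k = 0..M-1
    y = w @ u_vec                                 # filter output y(n)
    e = d[n - delay] - y                          # estimation error against the known copy
    w += mu * u_vec * e                           # tap-weight adaptation (real-valued signals)
```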
Recursive Least Square Algorithm

The recursive least squares (RLS) algorithm is another
algorithm for determining the coefficients of an adaptive filter.
In contrast to the LMS algorithm, the RLS algorithm uses
information from all past input samples (and not only from the
current tap-input samples) to estimate the (inverse of the)
autocorrelation matrix of the input vector.

To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used. This cost function can be represented as

$J(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\, e^{2}(i), \qquad 0 < \lambda \le 1$

where λ is the forgetting factor.
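A compact sketch of the RLS recursion that minimizes this cost (the forgetting factor `lam` and the initialization constant `delta` are standard but assumed choices):

```python
import numpy as np

def rls(u, d, M=11, lam=0.99, delta=100.0):
    """RLS: recursively track the inverse autocorrelation matrix P and the weights."""
    w = np.zeros(M)
    P = delta * np.eye(M)                            # initial inverse-autocorrelation estimate
    for n in range(M, len(d)):
        u_vec = u[n - M + 1:n + 1][::-1]             # tap-input vector
        k = P @ u_vec / (lam + u_vec @ P @ u_vec)    # gain vector
        e = d[n] - w @ u_vec                         # a priori estimation error
        w += k * e                                   # weight update
        P = (P - np.outer(k, u_vec @ P)) / lam       # inverse-autocorrelation update
    return w
```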
Adaptive equalization
Non Linear Equalizers
Multilayer Perceptron Network

In 1958, Rosenblatt demonstrated some practical applications using the perceptron. The perceptron, a single-level connection of McCulloch-Pitts neurons, is called a single-layer feedforward network.

The network is capable of linearly separating the input vectors into pattern classes by a hyperplane. Similarly, many perceptrons can be connected in layers to form an MLP network; the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.
MLP Neural Network Using BP Algorithm
Generally, the MLP is trained using the popular error back-propagation algorithm. Let s1, s2, …, sn represent the inputs to the network, and yk the output of the final layer of the neural network. The connecting weights between the input and the first hidden layer, the first and second hidden layers, and the second hidden layer and the output layer are represented by three sets of weights, respectively.
The final output yk(t) at the output of neuron k is compared with the desired output d(t), and the resulting error signal e(t) is obtained as

$e(t) = d(t) - y_k(t)$

The instantaneous value of the total error energy is obtained by summing the error signals over all neurons in the output layer, that is

$\xi(t) = \frac{1}{2} \sum_{k} e_k^{2}(t)$

This error signal is used to update the weights and thresholds of the hidden layers as well as the output layer; a sketch of the resulting updates follows.
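A minimal one-hidden-layer back-propagation step consistent with this description (the sigmoid activation, layer sizes, and learning rate `eta` are assumptions; thresholds are omitted for brevity):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bp_step(s, d, W1, W2, eta=0.1):
    """One BP update: forward pass, total error energy, then weight updates."""
    h = sigmoid(W1 @ s)                        # hidden-layer outputs
    y = sigmoid(W2 @ h)                        # final outputs y_k(t)
    e = d - y                                  # error signals e_k(t) = d_k(t) - y_k(t)
    xi = 0.5 * np.sum(e ** 2)                  # instantaneous total error energy
    delta2 = e * y * (1.0 - y)                 # local gradients, output layer
    delta1 = (W2.T @ delta2) * h * (1.0 - h)   # gradients back-propagated to hidden layer
    W2 += eta * np.outer(delta2, h)            # updated output-layer weights
    W1 += eta * np.outer(delta1, s)            # updated hidden-layer weights
    return xi
```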
Adaptive equalization
Functional Link Artificial Neural Network
FLANN is a novel single-layer ANN in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, which provides arbitrarily complex decision regions by generating nonlinear decision boundaries.

The functional expansion block is the main enhancement used for the channel equalization process.

Each element undergoes nonlinear expansion to form M elements such that the resultant matrix has dimension N×M. The functional expansion of the element xk is carried out using a power series expansion, as sketched below.
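A sketch of this power-series functional expansion (the expansion order M is an assumed parameter):

```python
import numpy as np

def flann_expand(x, M=5):
    """Expand each element x_k into M power-series terms x_k, x_k^2, ..., x_k^M,
    so an input of length N becomes an N x M matrix of nonlinear features."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([x ** m for m in range(1, M + 1)])

# Usage: a 3-element input pattern expanded to a 3x5 feature matrix
print(flann_expand([0.5, -0.2, 0.9]).shape)   # (3, 5)
```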
Adaptive equalization
At the t-th iteration the error signal e(t) can be computed as

$e(t) = d(t) - y(t)$

The weight vector can be updated by the least mean square (LMS) algorithm as

$w(t+1) = w(t) + 2\mu\, e(t)\, x(t)$
BER performance of the FLANN equalizer compared with LMS and RLS based equalizers
Chebyshev Artificial Neural Network
The Chebyshev artificial neural network (ChNN) is similar to the FLANN. The difference is that in a FLANN the input signal is expanded to a higher dimension using functional expansion, whereas in the ChNN the input is expanded using Chebyshev polynomials. As in the FLANN, the ChNN weights are updated by the LMS algorithm. The Chebyshev polynomials are generated using the recursive formula

$T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x), \qquad T_0(x) = 1, \; T_1(x) = x$
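A sketch of this Chebyshev expansion in Python (the expansion order M and the T_1(x) = x convention are assumptions):

```python
import numpy as np

def chebyshev_expand(x, M=5):
    """Expand each input element with Chebyshev polynomials T_1..T_M generated by
    the recursion T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), T_0(x) = 1, T_1(x) = x."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x]                  # T_0 and T_1
    for _ in range(2, M + 1):
        T.append(2.0 * x * T[-1] - T[-2])     # recursive formula
    return np.column_stack(T[1:M + 1])        # N x M feature matrix

print(chebyshev_expand([0.5, -0.2, 0.9]).shape)   # (3, 5)
```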
Adaptive equalization
BER performance of the ChNN equalizer compared with FLANN, LMS, and RLS based equalizers
Radial Basis Function Equalizer
The centres of the RBF network are updated using the k-means clustering algorithm. This RBF structure can be extended for multidimensional output as well. The Gaussian kernel is the most popular form of kernel function for equalization applications; it can be represented as

$\varphi_i(x) = \exp\!\left(-\frac{\lVert x - C_i \rVert^{2}}{2\sigma_r^{2}}\right)$

This network can implement a mapping $F_{rbf}: \mathbb{R}^m \to \mathbb{R}$ by the function

$F_{rbf}(x) = \sum_{i=1}^{N} \omega_i\, \varphi_i(x)$

Training of the RBF network involves setting the parameters for the centres $C_i$, the spread $\sigma_r$ and the linear weights $\omega_i$. The RBF spread parameter $\sigma_r^2$ is set to the channel noise variance $\sigma_n^2$; this provides the optimum RBF network as an equaliser.
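A sketch of this RBF mapping with Gaussian kernels (the toy centers and weights below are assumptions; in the text the centers come from k-means and σr² from the channel noise variance):

```python
import numpy as np

def rbf_output(x, centers, weights, sigma_r):
    """F_rbf(x) = sum_i w_i * exp(-||x - C_i||^2 / (2 * sigma_r^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)     # squared distances to the centers C_i
    phi = np.exp(-d2 / (2.0 * sigma_r ** 2))    # Gaussian kernel responses
    return weights @ phi

# Usage: 4 centers in R^2 with hand-picked weights
C = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
w = np.array([1.0, -1.0, -1.0, 1.0])
print(rbf_output(np.array([0.9, 0.8]), C, w, sigma_r=0.3))
```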
BER performance of the RBF equalizer compared with ChNN, FLANN, LMS, and RLS equalizers
Conclusion
We observed that RLS provides a faster convergence rate than the LMS equalizer.

We observed that the MLP equalizer, a feed-forward network trained using the BP algorithm, performed better than the linear equalizers, but it has the drawback of a slow convergence rate, which depends on the number of nodes and layers.

An optimal equalizer based on the maximum a posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network.

The RBF equalizer mitigates all the ISI, CCI and BN interference and provides the minimum BER plot. But it has one drawback: if the input order is increased, the number of centres of the network increases and makes the network more complicated.
REFERENCES
• Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005.
• Haykin, S., "Neural Networks", PHI, 2003.
• Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization Using Neural Networks: A Review", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, May 2010.
• Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and Ganapati Panda, "Nonlinear Channel Equalization for QAM Constellation Using Artificial Neural Network", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29, No. 2, April 1999.
• Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra, Christos G. Christodoulou, and J. C. Lyke, "Applications of Neural Networks in Wireless Communications", IEEE Antennas and Propagation Magazine, Vol. 46, No. 3, June 2004.
• R. Rojas, "Neural Networks", Springer-Verlag, Berlin, 1996.
• http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm
