History of Neural Network

History: The 1940's to the 1970's

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts
wrote a paper on how neurons might work. To describe how neurons in the brain
might operate, they modeled a simple neural network using electrical circuits.

In 1949, Donald Hebb wrote The Organization of Behavior, a work pointing out
that neural pathways are strengthened each time they are used, a concept
fundamentally essential to the ways in which humans learn. If two nerves fire at
the same time, he argued, the connection between them is enhanced.

As computers became more advanced in the 1950s, it finally became possible to
simulate a hypothetical neural network. The first step towards this was made by
Nathaniel Rochester of the IBM research laboratories. Unfortunately for him, the
first attempt to do so failed.

In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models called
"ADALINE" and "MADALINE." In a typical display of Stanford's love for acronyms,
the names come from their use of Multiple ADAptive LINear Elements. ADALINE
was developed to recognize binary patterns so that if it was reading streaming
bits from a phone line, it could predict the next bit. MADALINE was the first neural
network applied to a real-world problem, using an adaptive filter that eliminates
echoes on phone lines. Like air traffic control systems, it is ancient, yet it remains
in commercial use.

In 1962, Widrow & Hoff developed a learning procedure that examines the value
on the line before the weight (i.e. 0 or 1) and adjusts the weight according to the
rule: Weight Change = (Pre-Weight Line Value) * (Error / (Number of Inputs)). It is
based on the idea that while one active perceptron may have a big error, one can
adjust the weight values to distribute it across the network, or at least to adjacent
perceptrons. Applying this rule still leaves an error if the line before the weight is 0,
although this will eventually correct itself. If the error is conserved so that all of it is
distributed across all of the weights, then the error is eliminated.
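The rule above can be sketched in a few lines of Python; the function name, the training loop, and the single training sample are illustrative assumptions, not Widrow and Hoff's original notation:

```python
# Sketch of the 1962 rule: distribute the error evenly across the input lines.
def widrow_hoff_update(weights, inputs, target):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    n = len(inputs)
    # Weight Change = (Pre-Weight Line Value) * (Error / (Number of Inputs))
    return [w + x * (error / n) for w, x in zip(weights, inputs)]

weights = [0.0, 0.0, 0.0]
inputs = [1, 0, 1]               # binary line values, as in ADALINE
for _ in range(50):              # repeated updates shrink the error
    weights = widrow_hoff_update(weights, inputs, target=1.0)
```

Note that a 0 on an input line leaves that weight unchanged, which matches the observation above that an error can persist when the line before a weight is 0 and only corrects itself through the other weights.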

Despite the later success of the neural network, traditional von Neumann
architecture took over the computing scene, and neural research was left
behind. Ironically, John von Neumann himself suggested the imitation of neural
functions by using telegraph relays or vacuum tubes.

In the same time period, a paper was written that suggested there could not be
an extension from the single layered neural network to a multiple layered neural
network. In addition, many people in the field were using a learning function that
was fundamentally flawed because it was not differentiable across the entire line.
As a result, research and funding declined drastically.

This was coupled with the fact that the early successes of some neural networks
led to an exaggeration of the potential of neural networks, especially considering
the practical technology at the time. Promises went unfulfilled, and at times
greater philosophical questions led to fear. Writers pondered the effect that the
so-called "thinking machines" would have on humans, ideas which are still around
today.

The idea of a computer which programs itself is very appealing. If Microsoft's
Windows 2000 could reprogram itself, it might be able to repair the thousands of
bugs that the programming staff made. Such ideas were appealing but very
difficult to implement. In addition, von Neumann architecture was gaining in
popularity. There were a few advances in the field, but for the most part research
was scarce.

In 1972, Kohonen and Anderson developed a similar network independently of
one another, which we will discuss later. They both used matrix mathematics to
describe their ideas but did not realize that what they were doing was creating
an array of analog ADALINE circuits. The neurons are supposed to activate a set
of outputs instead of just one.

The first multilayered network, an unsupervised network, was developed in 1975.

Conventional computing versus artificial neural networks
There are fundamental differences between conventional computing and the
use of neural networks. In order to best illustrate these differences one must
examine two different types of learning, the top-down approach and the bottom-
up approach. Then we'll look at what it means to learn and finally compare
conventional computing with artificial neural networks.

Top-down learning
With the advent of neural networks, there are several tradeoffs between the von
Neumann architecture and the architecture used in neural networks. One of the
fundamental differences between the two is that traditional computing methods
work well for problems that have a definite algorithm, problems that can be
solved by a set of rules. Problems such as creating graphs of equations, keeping
track of orders on amazon.com, and even to some extent algorithms of simple
games, are not difficult for a conventional computer.
In order to examine more deeply the benefits and tradeoffs of conventional
computing and think about how a computer learns, we need to introduce the
notion of top-down versus bottom-up learning.

The notion of top-down learning is best described in programming lingo as a set
of sequential, cascading if/else statements which are not self-modifying. In other
words, the computer is programmed to perform certain actions at pre-defined
decision points. Let's say you want to write an algorithm to decide whether a
human should go to sleep. The program may be simple:

if (IsTired()) {
    GoToSleep();
} else {
    StayAwake();
}

But in the real world, it is not that simple. If you are in class, you do not want to sleep since you
would be missing valuable information. Or if you are working on an important programming
assignment, you may be determined to finish it before you go to sleep. A revised version may
thus look like:

if (IsTired()) {
    if (!IsInClass() && !WorkingOnProject()) {
        GoToSleep();
    } else {
        if (IsInClass()) {
            StayAwake();
        } else {
            if (WorkingOnProject()) {
                if (AssignmentIsDueTomorrow()) {
                    if (AssignmentIsCompleted()) {
                        GoToSleep();
                    } else {
                        StayAwake();
                    }
                } else {
                    GoToSleep();
                }
            } else {
                StayAwake();
            }
        }
    }
} else {
    StayAwake();
}

This "simple" program only looks at a few possible things which may affect the
decision--whether you're tired, you're in class, or you have an assignment due
tomorrow.

This is a decision that human beings make with ease (although college students
don't always make the correct decision), yet to program it into a computer takes
much more code than the lines presented above. There are so many different
aspects which might affect the ultimate decision that programming them all into
the computer using this top-down method would be impossible.

This is the fundamental limitation of the top-down method: in order to program a
complex task, the programmer may have to spend years developing a correct
top-down approach. This doesn't even begin to include the subtle possibilities
which the programmer may not think of. Complex tasks may never be
programmed using the top-down approach.

Bottom-up learning
The bottom-up approach learns more by example or by doing than by a complex
set of cascading if/else statements. It tries to do something in a particular fashion,
and if that doesn't work, it tries something else. By keeping track of the actions
that didn't work and the ones that did, one can learn. Moreover, the program is
inherently self-modifying. One can make a program in C that modifies itself, but
self-modifying code is avoided in the commercial world because of the difficulty
of debugging it.

This is the way in which Arthur Samuel programmed his checkers-playing machine.
The machine started out incompetent at playing checkers. But as it went through
several games, it learned a little each time, adjusting its own program, so that in
the end it could beat the individual who programmed it.

Another reason for using a bottom-up approach in neural networks is that there is
already an existing example which solves the problems associated with the
approach: humans.

What does it mean to learn?


This brings us into the overriding philosophical question: what does it mean to
learn? Is learning in humans simply a bunch of cascading if/else statements that,
when we learn, modify themselves to create new combinations? Or does
interacting with the environment change the arrangement and intensity of
neurons to form a bottom-up approach?
It seems as though the human brain has a combination of each. We do not have
to learn that if we're hungry we have to eat or if we're tired, we need sleep. These
things seem to be hardwired into the human brain. Other things, such as reading
a book, come only with interacting with the environment and the society around
an individual.

It seems likely that many applications of neural networks in the future would have
to use a combination of instructions hardwired into the system and a neural
network. For example, the checkers machine that Arthur Samuel built did not start
with a blank slate and learn how to play checkers over time; the machine was
provided with the knowledge of a checkers book--things like which moves are
ideal and which moves fail miserably.

Comparison between conventional computers and neural networks
Parallel processing

One of the major advantages of the neural network is its ability to do many things
at once. With traditional computers, processing is sequential--one task, then the
next, then the next, and so on. The idea of threading makes it appear to the
human user that many things are happening at one time. For instance, the
Netscape throbber is shooting meteors at the same time that the page is loading.
However, this is only an appearance; processes are not actually happening
simultaneously.

The artificial neural network is an inherently multiprocessor-friendly architecture.
Without much modification, it goes beyond one or even two processors of the
von Neumann architecture. The artificial neural network is designed from the
onset to be parallel. Humans can listen to music at the same time they do their
homework--at least, that's what we try to convince our parents in high school.
With a massively parallel architecture, the neural network can accomplish a lot in
less time. The tradeoff is that processors have to be specifically designed for the
neural network.

The ways in which they function


Another fundamental difference between traditional computers and artificial
neural networks is the way in which they function. While computers function
logically with a set of rules and calculations, artificial neural networks can function
via images, pictures, and concepts.

Based upon the way they function, traditional computers have to learn by rules,
while artificial neural networks learn by example, by doing something and then
learning from it. Because of these fundamental differences, the applications to
which we can tailor them are extremely different. We will explore some of the
applications later in the presentation.

Self-programming

The "connections" or concepts learned by each type of architecture are different
as well. Von Neumann computers are programmed in higher-level languages like
C or Java, which are then translated down to the machine's assembly language.
Because of their style of learning, artificial neural networks can, in essence,
"program themselves." While conventional computers must learn only by doing
different sequences or steps in an algorithm, neural networks are continuously
adaptable by truly altering their own programming. It could be said that
conventional computers are limited by their parts, while neural networks can
work to become more than the sum of their parts.

Speed

The speed of each computer is dependent upon different aspects of the
processor. Von Neumann machines require either big processors or the tedious,
error-prone use of parallel processors, while neural networks require multiple
chips custom-built for the application.

Applications of neural networks

Character Recognition - The idea of character recognition has become very
important as handheld devices like the Palm Pilot are becoming increasingly
popular. Neural networks can be used to recognize handwritten characters.

Image Compression - Neural networks can receive and process vast amounts
of information at once, making them useful in image compression. With the
Internet explosion and sites using ever more images, using neural networks for
image compression is worth a look.

Stock Market Prediction - The day-to-day business of the stock market is
extremely complicated. Many factors weigh in whether a given stock will go up
or down on any given day. Since neural networks can examine a lot of
information quickly and sort it all out, they can be used to predict stock prices.

Traveling Salesman Problem - Interestingly enough, neural networks can solve
the traveling salesman problem, but only to a certain degree of approximation.

Medicine, Electronic Nose, Security, and Loan Applications - These are
some applications that are in their proof-of-concept stage, with the exception of
a neural network that will decide whether or not to grant a loan, something that
has already been used more successfully than many humans.

Miscellaneous Applications - These are some very interesting (albeit at times a
little absurd) applications of neural networks.

History: The 1980's to the present

In 1982, interest in the field was renewed. John Hopfield of Caltech presented a
paper to the National Academy of Sciences. His approach was to create more
useful machines by using bidirectional lines. Previously, the connections
between neurons were only one way.

That same year, Reilly and Cooper used a "Hybrid network" with multiple layers,
each layer using a different problem-solving strategy.

Also in 1982, there was a joint US-Japan conference on Cooperative/Competitive
Neural Networks. Japan announced a new Fifth Generation effort on neural
networks, and US papers generated worry that the US could be left behind in the
field. (Fifth generation computing involves artificial intelligence. The first generation
used switches and wires, the second generation used the transistor, the third
generation used solid-state technology like integrated circuits and higher-level
programming languages, and the fourth generation is code generators.) As a
result, there was more funding and thus more research in the field.
In 1986, with multiple layered neural networks in the news, the problem was how
to extend the Widrow-Hoff rule to multiple layers. Three independent groups of
researchers, one of which included David Rumelhart, a former member of
Stanford's psychology department, came up with similar ideas which are now
called back-propagation networks because they distribute pattern recognition
errors throughout the network. Hybrid networks used just two layers; these back-
propagation networks use many. The result is that back-propagation networks are
"slow learners," needing possibly thousands of iterations to learn.

Now, neural networks are used in several applications, some of which we will
describe later in our presentation. The fundamental idea behind the nature of
neural networks is that if it works in nature, it must be able to work in computers.
The future of neural networks, though, lies in the development of hardware. Much
like the advanced chess-playing machines like Deep Blue, fast, efficient neural
networks depend on hardware designed specifically for their eventual use.

Research that concentrates on developing neural networks is relatively slow. Due
to the limitations of processors, neural networks take weeks to learn. Some
companies are trying to create what is called a "silicon compiler" to generate a
specific type of integrated circuit that is optimized for the application of neural
networks. Digital, analog, and optical chips are the different types of chips being
developed. One might immediately discount analog signals as a thing of the past.
However, neurons in the brain actually work more like analog signals than digital
signals. While digital signals have two distinct states (1 or 0, on or off), analog
signals vary between minimum and maximum values. It may be a while, though,
before optical chips can be used in commercial applications.

In classification and prediction problems, we are provided with training sets with
desired outputs, so backpropagation together with feed-forward networks are
useful in modeling the input-output relationship. However, sometimes we have to
analyze raw data of which we have no prior knowledge. The only possibility is to
find special features of the data and arrange the data in clusters so that elements
that are similar to each other are grouped together. Such a process can be readily
performed using simple competitive networks.

Simple competitive networks are composed of two networks: the Hamming net
and the Maxnet. Each of them specializes in a different function:

1. The Hamming net measures how much the input vector resembles the
weight vector of each perceptron.
2. The Maxnet finds the perceptron with the maximum value.

In order to understand how these two seemingly unrelated networks function


together, we need to examine more closely the details of each one.
The Hamming net:

(Fig.1) A Hamming net.

Each perceptron at the top layer of the Hamming net calculates a weighted sum
of the input values. This weighted sum can be interpreted as the dot product of
the input vector and the weight vector.

In symbols, s = i · w = |i||w| cos(theta), where i and w are the input vector and the
weight vector respectively.

If w and i are of unit length, then the dot product depends only on cos(theta).
Since the cosine function increases as the angle decreases, the dot product gets
bigger when the two vectors are close to each other (the angle between them
is small). Hence the weighted sum each perceptron calculates is a measure of
how closely its weight vector resembles the input vector.
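A tiny numerical check of this claim (Python; the vectors are arbitrary illustrative choices):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    # scale a vector to unit length
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

i = unit([1.0, 2.0])          # input vector
w_close = unit([1.1, 2.1])    # small angle to i
w_far = unit([-2.0, 1.0])     # about 90 degrees from i

# For unit vectors the weighted sum is exactly cos(theta), so the
# weight vector nearest the input scores highest.
print(dot(i, w_close), dot(i, w_far))
```

The nearby weight vector scores almost 1 (angle near zero), while the perpendicular one scores near 0.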

The Maxnet:

(Fig.2) A Maxnet.

The Maxnet is a fully connected network with each node connecting to every
other node, including itself. The basic idea is that the nodes compete against
each other by sending out inhibiting signals to each other.
This is done by setting the weights of the connections between different nodes to
be negative and applying the following algorithm:

Algorithm

Using the above algorithm, all nodes converge to 0 except for the node with the
maximum initial value. In this way the Maxnet finds the node with the maximum value.
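Since the algorithm itself appears here only as a figure, the following Python sketch uses the commonly cited Maxnet update, in which every node subtracts a small fraction (epsilon) of the other nodes' activations and is clamped at zero; treat the constants as illustrative assumptions rather than the figure's exact values:

```python
# Maxnet sketch: mutual inhibition drives every node except the
# largest one down to zero.
def maxnet(values, eps=0.1, steps=200):
    x = list(values)
    for _ in range(steps):
        total = sum(x)
        # each node is inhibited by the sum of the *other* activations
        x = [max(0.0, xi - eps * (total - xi)) for xi in x]
    return x

print(maxnet([0.2, 0.5, 0.9, 0.4]))   # only the third node stays positive
```

For this to converge to a single winner, eps must be smaller than 1/(n-1), where n is the number of competing nodes.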

Putting them together:


(Fig.3) A Simple Competitive network.

In a simple competitive network, a Maxnet connects the top nodes of the Hamming net.
Whenever an input is presented, the Hamming net computes the "distance" of the weight
vector of each node from the input vector via the dot product, while the Maxnet selects
the node with the greatest dot product. In this way, the whole network selects the node
whose weight vector is closest to the input vector, i.e. the winner.

The network learns by moving the winning weight vector towards the input vector:

w_new = w_old + alpha * (i - w_old), where alpha is a small learning rate,

while the other weight vectors remain unchanged.

(Fig.4) The winner learns by moving towards the input vector.

This process is repeated over all the samples many times. If the samples are in clusters,
then each time the winning weight vector moves towards a particular sample in one of the
clusters. Eventually each of the weight vectors converges to the centroid of one cluster.
At this point, the training is complete.

(Fig.5) After training, the weight vectors become centroids of various clusters.
When a new sample is presented to a trained net, it is compared to the weight vectors,
which are the centroids of each cluster. By measuring the distance from the weight vectors
using the Hamming net, the sample is grouped into the cluster to which it is closest.
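Putting all of this together, here is a compact Python sketch of the whole simple competitive network; the two-cluster data, the learning rate, and the random initialization are illustrative assumptions, not values from the text:

```python
import random

def train_competitive(samples, n_nodes=2, alpha=0.1, epochs=50, seed=0):
    rng = random.Random(seed)
    # one weight vector per top-layer node
    weights = [[rng.uniform(-1, 1) for _ in samples[0]]
               for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in samples:
            # Hamming-net step: score each node by the dot product
            scores = [sum(w * s for w, s in zip(wv, x)) for wv in weights]
            winner = scores.index(max(scores))        # Maxnet step
            # move only the winning weight vector towards the input
            weights[winner] = [w + alpha * (s - w)
                               for w, s in zip(weights[winner], x)]
    return weights

# two obvious clusters; each weight vector should end near one centroid
samples = [(2.0, 2.0), (2.2, 1.8), (-2.0, -2.0), (-1.8, -2.2)]
centroids = train_competitive(samples)
```

After training, the two weight vectors sit near the cluster centroids, roughly (2.1, 1.9) and (-1.9, -2.1), which is the behaviour Fig.5 describes.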


To Neural Networks and Beyond!


Neural Networks and Consciousness

So, neural networks are very good at a wide variety of problems, most of which
involve finding trends in large quantities of data. They are better suited than
traditional computer architecture to problems that humans are naturally good at
and which computers are traditionally bad at: image recognition, making
generalizations, that sort of thing. And researchers are continually constructing
networks that are better at these problems.

But will neural networks ever fully simulate the human brain? Will they be as
complex and as functional? Will a machine ever be conscious of its own
existence?

Simulating human consciousness and emotion is still the realm of science fiction.
It may happen one day, or it may not; this is an issue we won't delve into here,
because, of course, there are huge philosophical arguments about what
consciousness is, and whether it can possibly be simulated by a machine... do we
have souls or some special life-force that is impossible to simulate in a machine?
If not, how do we make the jump from, as one researcher puts it, "an electrical
reaction in the brain to suddenly seeing the world around one with all its distances,
its colors and chiaroscuro?" Well, like I said, we won't delve into it here; the issue
is far too deep, and, in the end, perhaps irresolvable... (if you want to delve, check
out http://www.culture.com.au/brain_proj/neur_net.htm and
http://www.iasc-bg.org.yu/Papers/Work-97/work-97.html)

Perhaps NNs can, though, give us some insight into the "easy problems" of
consciousness: how does the brain process environmental stimulation? How does
it integrate information? But, the real question is, why and how is all of this
processing, in humans, accompanied by an experienced inner life, and can a
machine achieve such a self-awareness?

Of course, the whole future of neural networks does not reside in attempts to
simulate consciousness. Indeed, that is of relatively small concern at the moment;
more pressing are issues of how to improve the systems we have.

Want to find out more? Try these sites:

Neural Networks and the Computational Brain, a somewhat technical and somewhat
philosophical treatise on the potential of modeling consciousness:
http://www.culture.com.au/brain_proj/neur_net.htm

ThoughtTreasure, a database of 25,000 concepts, 55,000 English and French
words and phrases, 50,000 assertions, and 100 scripts, which is attempting to bring
natural language and commonsense capabilities to computers:
http://www.signiform.com/tt/htm/tt.htm. The conclusions page is especially clear
and useful: http://www.signiform.com/tt/book/Concl.html

Hierarchical Neural Networks and Brainwaves: Towards a Theory of
Consciousness: This paper gives "a comparative biocybernetical analysis of the
possibilities in modeling consciousness and other psychological functions
(perception, memorizing, learning, emotions, language, creativity, thinking, and
transpersonal interactions!), by using biocybernetical models of hierarchical
neural networks and brainwaves."
http://www.iasc-bg.org.yu/Papers/Work-97/work-97.html

Fuzzy Logic Information: FuzzyTech: http://www.fuzzytech.com/: a vast resource
for information, simulations, and applications of fuzzy logic.

Neural Networking Hardware: Neural Networks in Hardware: Architectures,
Products and Applications:
http://www.particle.kth.se/~lindsey/HardwareNNWCourse/
Neural Network Resources
Note: One of the easiest ways to find good resources on neural networks is to go to
www.about.com or www.infind.com and enter search query: "neural networks."
Another GREAT list of Neural Network Resources ("If it's on the web, it's listed here") is
http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm

• Introductions to Neural Networks
• Applications of Neural Networks (and some fun applets!)
• Neural Networks and Simulated Consciousness
• Books

Introductory Material

• FAQs: Frequently Asked Questions Newsgroup: This site contains detailed but
comprehensible answers to common questions about NNs. Particularly useful for me was
the question "What can you do with an ANN?" It also contains an excellent set of links to
online NN resources. Highly recommended for content, if not for beauty of site design.

• Fabulous NN introduction: Artificial Neural Networks Technology (from the
Department of Defense Information Analysis Center): An excellent introduction to the
basic principles of neural networks, this article has many clear graphics and non-
mathematical, but thorough, explanations. It investigates the basic architecture of a neural
network, including the various configurations and learning mechanisms, and also
discusses the history and future applications of the neural network.

• A textbook and network simulator in one: Brain Wave: This introductory, online
textbook also contains an interactive applet for making one's own simple networks. This
text contains many excellent, clear graphics, and non-technical explanations of the
various neural net architectures.

• Fun, simple applets: Web Applets for Interactive Tutorials on Artificial Neural Learning

• Animated neuron and NN Introduction: An Introduction To Neural Networks: This
page contains a great animated gif of a biological neuron, and also includes a general
introduction to neural networks.

• General Introduction: Generation 5's Introduction to Neural Networks: This article
provides an introduction to neural networks for a non-technical audience.

• Introduction to Kohonen networks: Kohonen Networks: This article deals with one
specific type of neural network, the Kohonen network, which is used to execute
unsupervised learning. In unsupervised learning, there is no comparison of the network's
answer to specific desired output, and the simulated neurons have the property of self-
organization.

• Introduction and comparison with Von Neumann: An Introduction to Neural
Networks: Dr Leslie Smith presents a good comparison between the advantages and
disadvantages of the von Neumann architecture and neural networks, and also provides a
quick survey of different kinds of networks. These include the BP network, RBF
network, simple perceptron network, and Kohonen network. The article also examines
where neural networks are applicable and where they may possibly be applied in the
future.

• Somewhat Technical NN Description: Statsoft's Neural Networks: This article provides
a lengthy and somewhat technical description of neural networks. It looks at RBF
networks, probabilistic neural networks, generalized regression neural networks, linear
networks, and Kohonen networks. It looks at the artificial model of neural networks and
how the human brain is modeled with neural networks. It also examines feedforward
structures and the structures most useful in solving problems.

• Applets for XOR and Kohonen: The HTML Neural Net Consulter.

• Intro from ZSolutions, a NN-provider: Introduction to Neural Networks: Using a
simple example of a neural network developed to predict the number of runs scored by a
baseball team, this article investigates the architecture and potential uses of neural
networks. It also serves as promotional material: Z Solutions provides neural networks
for corporations.

• Intro 2 from ZSolutions, a NN-provider: Want to Try Neural Nets? A somewhat
propaganda-esque document for a neural networking company, this article nonetheless
provides a good introductory survey of neural networks and the potential for
implementing them in the real world.

• Fuzzy Logic Information: FuzzyTech: a vast resource for information, simulations, and
applications of fuzzy logic.

Applications of Neural Networks (and some fun applets!)

• Information on over 50 NN applications: BrainMaker Neural Network Software: Great
list of examples of specific NN applications regarding stocks, business, medicine, and
manufacturing.

• Applet for 8-queens problem: 8-queens problem and neural networks

• Applet for Travelling Salesman: Elastic Net Method for Travelling Salesman
Problem: The elastic net is a kind of neural network used for optimization problems;
this applet demonstrates its use applied to the Travelling Salesman Problem.

• Handwriting Recognition Applet: Artificial Neural Network Handwriting
Recognizer: This applet demonstrates neural networks applied to handwriting recognition.
Users train a neural network on 10 digits of their handwriting and then test its recognition
accuracy.

• A few applets: Takefuji Labs Neural demo by Java.

• NNs as Information Analyzers: Technology Brief: Artificial Neural Networks as
Information Analysis Tools: The article discusses how neural networks, as information
analysis tools, can be used in the medical industry, the environment, security and law
enforcement, and mechanical system diagnosis.

• NNs for Medical Diagnosis: Technology Brief: Computer Assisted Medical
Diagnosis: This article looks at the ways in which a neural network can assist doctors in
medical diagnoses. It investigates uses in the cardiovascular system, diagnosing coronary
artery disease, and chemical analysis.

• Applet for Travelling Salesman: Travelling Salesman Problem: This applet demonstrates
a neural network applied to a 2-D travelling salesman problem.

• Applet for Image Compression: Image Compression using Backprop.

• Fuzzy Logic Information: FuzzyTech: a vast resource for information, simulations, and
applications of fuzzy logic.

Neural Networks and Simulated Consciousness

• Technical / Philosophical Paper: Neural Networks and the Computational Brain

• Database of Common Sense: ThoughtTreasure: ThoughtTreasure is a database of 25,000
concepts, 55,000 English and French words and phrases, 50,000 assertions, and 100
scripts, which is attempting to bring natural language and commonsense capabilities to
computers. The conclusions page is especially clear and concise.

• Long and thorough paper from Yugoslavia: Hierarchical Neural Networks and
Brainwaves: Towards a Theory of Consciousness: This paper gives "a comparative
biocybernetical analysis of the possibilities in modeling consciousness and other
psychological functions (perception, memorizing, learning, emotions, language,
creativity, thinking, and transpersonal interactions!), by using biocybernetical models of
hierarchical neural networks and brainwaves."

Books

• Mehrotra, Kishan, Chilukuri Mohan, and Sanjay Ranka. Elements of Artificial Neural
Networks. Boston: MIT Press, 1997.

• Skapura, David M. Building Neural Networks. Menlo Park, CA: Addison-Wesley
Publishing Company, 1996. This is an introductory textbook for the construction of
actual neural networks. It investigates many of the potential uses of neural networks in a
manner aimed at allowing students themselves to create these networks. Sample C code is
also provided.
