(Approved by AICTE & affiliated to UPTU, Lucknow)
A
Seminar Report On
Artificial Neural Network
Submitted in partial fulfillment of the requirement for the award of
the degree of
B.Tech in
Information Technology
DEPARTMENT OF COMPUTER SCIENCE &
INFORMATION TECHNOLOGY ENGG.
Submitted By:-
ANJALI
Branch- IT
Semester- 6th
Dr. Anand Sharma (HOD, CS/IT Dept.)
Mr. Konark Sharma (Seminar-in-Charge)
2015-2016
CERTIFICATE
This is to certify that the Seminar Report entitled “Artificial Neural Network”
submitted by Ms. ANJALI is a record of the student’s own work, carried out
individually under my guidance, in partial fulfillment of the degree of Bachelor
of Technology in Information Technology of Aligarh College of Engineering &
Technology during the 6th semester.
It is further certified, to the best of my knowledge and belief, that this work has
not been submitted elsewhere for the award of any other degree.
___________________
Mr. Konark Sharma
(Seminar In-charge)
ACKNOWLEDGEMENT
All praise to the Almighty, the most beneficent, the most merciful, who bestowed
upon us the courage, patience and strength to embark upon this work and carry
it to completion.
I feel privileged to express my deep sense of gratitude and highest appreciation to
Mr. Konark Sharma,
Asst. Professor,
Dept. of CS/IT Engg.
for his constant support and for providing me with invaluable suggestions and
guidance. I sincerely thank him for his help with the literature, his critical
comments and the moral support he rendered at all stages of the discussion, all
of which were deeply helpful.
I also thank my friends and parents for their moral support and timely ideas
during the completion of this seminar. I promise to repay their help and guidance
by supporting others in similar or even better ways throughout my life.
___________________
Anjali
INDEX
1) Introduction
2) ANN’s Basic Structure
3) Types of ANNs
4) Machine Learning
5) Comparisons
6) Properties of ANNs
7) Applications of ANNs
8) Advantages
9) Disadvantages
10) Conclusion
11) References
INTRODUCTION
In machine learning and cognitive science, artificial neural networks (ANNs)
are a family of models inspired by biological neural networks (the central nervous
systems of animals, in particular the brain) and are used to estimate or
approximate functions that can depend on a large number of inputs and are
generally unknown. Artificial neural networks are generally presented as systems
of interconnected "neurons" which exchange messages between each other. The
connections have numeric weights that can be tuned based on experience, making
neural nets adaptive to inputs and capable of learning.
For example, a neural network for handwriting recognition is defined by a set of
input neurons which may be activated by the pixels of an input image. After being
weighted and transformed by a function (determined by the network's designer),
the activations of these neurons are then passed on to other neurons. This process
is repeated until finally, an output neuron is activated. This determines which
character was read.
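To make this concrete, here is a toy sketch (not part of the original report) of how weighted, transformed pixel activations could select an output character. The pixel values, character set and weights are all made up for illustration.

```python
# Toy handwriting-readout sketch: pixel inputs are weighted, squashed by a
# function, and the most strongly activated output neuron "reads" the character.
import math

def activation(x):
    return 1.0 / (1.0 + math.exp(-x))      # logistic squashing function

pixels = [0.0, 1.0, 1.0, 0.0]               # a tiny 4-pixel "image"
characters = ["A", "B", "C"]

# One weight vector per output character (training or the designer would set these).
weights = {
    "A": [0.2, 0.8, 0.7, 0.1],
    "B": [0.9, 0.1, 0.2, 0.8],
    "C": [0.1, 0.2, 0.1, 0.3],
}

# Each output neuron's activation is a weighted, squashed sum of the pixels.
outputs = {c: activation(sum(w * p for w, p in zip(weights[c], pixels)))
           for c in characters}
print(max(outputs, key=outputs.get))        # the winning neuron determines the character
```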
Like other machine learning methods – systems that learn from data – neural
networks have been used to solve a wide variety of tasks that are hard to solve
using ordinary rule-based programming, including computer vision and speech
recognition.
Background
Examinations of humans' central nervous systems inspired the concept of
artificial neural networks. In an artificial neural network, simple artificial nodes,
known as "neurons", "neurodes", "processing elements" or "units", are connected
together to form a network which mimics a biological neural network.
There is no single formal definition of what an artificial neural network is.
However, a class of statistical models may commonly be called "neural" if it
possesses the following characteristics:
1. Contains sets of adaptive weights, i.e. numerical parameters that are tuned
by a learning algorithm, and
2. Capability of approximating non-linear functions of their inputs.
The adaptive weights can be thought of as connection strengths between neurons,
which are activated during training and prediction.
Neural networks resemble biological neural networks in that functions are
performed collectively and in parallel by the units, rather than there being a clear
delineation of subtasks to which individual units are assigned. The term "neural
network" usually refers to models employed in statistics, cognitive psychology
and artificial intelligence. Neural network models that emulate the central
nervous system and the rest of the brain are part of theoretical neuroscience and
computational neuroscience.
In modern software implementations of artificial neural networks, the approach
inspired by biology has been largely abandoned for a more practical approach
based on statistics and signal processing. In some of these systems, neural
networks or parts of neural networks (like artificial neurons) form components in
larger systems that combine both adaptive and non-adaptive elements. While the
more general approach of such systems is more suitable for real-world problem
solving, it has little to do with the traditional, artificial intelligence connectionist
models. What they do have in common, however, is the principle of non-linear,
distributed, parallel and local processing and adaptation. Historically, the use of
neural network models marked a directional shift in the late eighties from high-
level (symbolic) artificial intelligence, characterized by expert systems with
knowledge embodied in if-then rules, to low-level (sub-symbolic) machine
learning, characterized by knowledge embodied in the parameters of a dynamical
system.
The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a
neural network as −
"...a computing system made up of a number of simple, highly interconnected
processing elements, which process information by their dynamic state response
to external inputs.”
ANN’s BASIC STRUCTURE
The idea of ANNs is based on the belief that the working of the human brain,
which makes the right connections, can be imitated using silicon and wires in
place of living neurons and dendrites.
The human brain is composed of roughly 100 billion nerve cells called neurons.
Each of them is connected to thousands of other cells by axons. Stimuli from the
external environment, or inputs from sensory organs, are accepted by dendrites.
These inputs create electric impulses, which quickly travel through the neural
network. A neuron can then either pass the message on to other neurons or
withhold it. The working of the human neural system is shown below:
ANNs are composed of multiple nodes, which imitate the biological neurons of
the human brain. The nodes are connected by links and interact with each other.
Each node can take input data and perform simple operations on it. The result of
these operations is passed to other nodes. The output at each node is called its
activation or node value.
Each link is associated with a weight. ANNs are capable of learning, which takes
place by altering the weight values. The following illustration shows a simple
ANN.
The basic artificial neuron weights each of its inputs, sums them, and transforms
the sum with an activation function, as illustrated below.
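A minimal sketch of such a basic artificial neuron, assuming a sigmoid activation function (one common choice; the report does not prescribe a particular function):

```python
# Basic artificial neuron: multiply each input by the weight of its link,
# sum the results, and pass the sum through an activation function.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias=0.0):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)            # the node's activation / node value

print(neuron([0.5, 0.3, 0.9], [0.4, -0.6, 0.2]))
```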
TYPES OF ANN
There are two Artificial Neural Network topologies − FeedForward and
FeedBack.
FeedForward ANN
The information flow is unidirectional. A unit sends information only to units
from which it does not receive any information. There are no feedback loops.
Such networks are used in pattern generation, recognition and classification.
They have fixed inputs and outputs.
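The sketch below illustrates the unidirectional flow of a FeedForward ANN; the layer sizes, random weights and sigmoid activation are assumptions made for illustration only.

```python
# FeedForward sketch: information flows strictly input -> hidden -> output,
# layer by layer, with no feedback loops.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_inputs, n_units):
    # one weight list per unit in the layer
    return [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_units)]

def forward(layer, inputs):
    return [sigmoid(sum(w * x for w, x in zip(unit, inputs))) for unit in layer]

hidden = make_layer(3, 4)    # 3 inputs  -> 4 hidden units
output = make_layer(4, 2)    # 4 hidden units -> 2 outputs

x = [0.2, 0.7, 0.1]
print(forward(output, forward(hidden, x)))
```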
FeedBack ANN
Here, feedback loops are allowed. Such networks are used in content-addressable
memories; a classic example is sketched below.
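As a concrete illustration of a feedback network used as a content-addressable memory, here is a minimal Hopfield-style sketch. The report does not name a specific architecture; the bipolar (+1/−1) pattern, its size and the update schedule are assumptions made for the example.

```python
# Hopfield-style content-addressable memory: a stored pattern is recalled from a
# corrupted copy by repeatedly feeding the network's output back in as its input.
def train_hopfield(patterns):
    n = len(patterns[0])
    # Hebbian outer-product rule, no self-connections
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(weights, state, steps=5):
    for _ in range(steps):                  # feedback loop: output re-enters as input
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

stored = [[1, -1, 1, -1, 1, -1]]
W = train_hopfield(stored)
noisy = [1, 1, 1, -1, 1, -1]                # one flipped element
print(recall(W, noisy))                      # converges back to the stored pattern
```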
Working of ANNs
In the topology diagrams shown, each arrow represents a connection between two
neurons and indicates the pathway for the flow of information. Each connection
has a weight, a number that controls the signal between the two neurons.
If the network generates a “good or desired” output, there is no need to adjust the
weights. However, if the network generates a “poor or undesired” output, i.e. an
error, then the system alters the weights in order to improve subsequent results.
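A minimal sketch of this error-driven weight adjustment; the learning rate and all numbers are illustrative assumptions rather than values from the report.

```python
# If the output is poor, nudge each weight in proportion to the error and to the
# input that flowed through it, so the next output is closer to the target.
def adjust_weights(weights, inputs, target, output, learning_rate=0.1):
    error = target - output
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.5, -0.2, 0.1]
inputs  = [1.0, 0.4, 0.7]
output  = sum(w * x for w, x in zip(weights, inputs))
print(adjust_weights(weights, inputs, target=1.0, output=output))
```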
MACHINE LEARNING
ANNs are capable of learning and they need to be trained. There are several
learning strategies −
 Supervised Learning − It involves a teacher that is more knowledgeable than
the ANN itself. The teacher feeds the network example data for which the
teacher already knows the answers.
Consider pattern recognition, for example. The ANN comes up with guesses
while recognizing. The teacher then provides the ANN with the answers, and
the network compares its guesses with the teacher’s “correct” answers and
makes adjustments according to the errors.
In supervised training, both the inputs and the outputs are provided. The
network processes the inputs and compares its resulting outputs against the
desired outputs. Errors are then propagated back through the system, causing
the system to adjust the weights which control the network. This process
occurs over and over as the weights are continually tweaked. The set of data
which enables the training is called the "training set." During the training of a
network, the same set of data is processed many times as the connection
weights are progressively refined (a minimal sketch follows this list).
 Unsupervised Learning − It is required when there is no example data set
with known answers, for example when searching for a hidden pattern. In this
case, clustering, i.e. dividing a set of elements into groups according to some
unknown pattern, is carried out based on the existing data sets.
At the present time, unsupervised learning is not well understood. This
adaptation to the environment is the promise that would enable science-fiction
types of robots to continually learn on their own as they encounter new
situations and new environments. Life is filled with situations where exact
training sets do not exist. Some of these situations involve military action,
where new combat techniques and new weapons might be encountered.
Because of this unexpected aspect of life and the human desire to be prepared,
there continues to be research into, and hope for, this field. Yet, at the present
time, the vast bulk of neural network work is in systems with supervised
learning, and supervised learning is achieving results. Unsupervised learning
is also called Adaptive Learning.
 Reinforcement Learning − This strategy is built on observation. The ANN
makes a decision by observing its environment. If the observation is negative,
the network adjusts its weights so that it can make a different, better decision
the next time.
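The sketch below (referenced in the Supervised Learning item above) shows a minimal supervised training loop: a training set pairs inputs with known answers, the network's guesses are compared against them, and the weights are tweaked over many passes. A single linear neuron trained with the delta rule stands in for a full back-propagation network, and the small AND-like data set is made up for illustration.

```python
# Minimal supervised training loop (delta rule on one linear neuron).
training_set = [
    ([0.0, 0.0], 0.0),
    ([0.0, 1.0], 0.0),
    ([1.0, 0.0], 0.0),
    ([1.0, 1.0], 1.0),
]

weights, bias, lr = [0.0, 0.0], 0.0, 0.2

for epoch in range(50):                      # the same set is processed many times
    for inputs, desired in training_set:
        guess = sum(w * x for w, x in zip(weights, inputs)) + bias
        error = desired - guess              # compare the guess with the known answer
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error                   # the error adjusts the weights and bias

print(weights, bias)
print([round(sum(w * x for w, x in zip(weights, xs)) + bias, 2)
       for xs, _ in training_set])
```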
COMPARISONS
A comparison of the computing approaches is given in the table below:

CHARACTERISTICS | TRADITIONAL COMPUTING (including Expert Systems) | ARTIFICIAL NEURAL NETWORKS
Processing style | Sequential | Parallel
Functions | Logical (left-brained), via rules, concepts, calculations | Gestalt (right-brained), via images, pictures, controls
Learning method | By rules (didactically) | By example (Socratically)
Applications | Accounting, word processing, math, inventory, digital communications | Sensor processing, speech recognition, pattern recognition, text recognition
A comparison of artificial intelligence's expert systems and neural networks is
given in the table below:

Characteristics | Von Neumann Architecture Used for Expert Systems | Artificial Neural Networks
Processors | VLSI (traditional processors) | Artificial neural networks; variety of technologies; hardware development is ongoing
Memory and processing | Separate | The same
Processing approach | Processes the problem one rule at a time; sequential | Multiple, simultaneously
Connections | Externally programmable | Dynamically self-programming
Self-learning | Only algorithmic parameters modified | Continuously adaptable
Fault tolerance | None without special processors | Significant, in the very nature of the interconnected neurons
Neurobiology in design | None | Moderate
Programming | Through rule-based, complicated programming | Self-programming, but the network must be set up properly
Ability to be fast | Requires big processors | Requires multiple custom-built chips
PROPERTIES OF ANNs
Computational power
The multilayer perceptron is a universal function approximator, as proven by the
universal approximation theorem. However, the proof is not constructive
regarding the number of neurons required or the settings of the weights.
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a
specific recurrent architecture with rational valued weights (as opposed to full
precision real number-valued weights) has the full power of a Universal Turing
Machine using a finite number of neurons and standard linear connections.
Further, it has been shown that the use of irrational values for weights results in
a machine with super-Turing power.
Capacity
Artificial neural network models have a property called 'capacity', which roughly
corresponds to their ability to model any given function. It is related to the amount
of information that can be stored in the network and to the notion of complexity.
Convergence
Nothing can be said in general about convergence since it depends on a number
of factors. Firstly, there may exist many local minima. This depends on the cost
function and the model. Secondly, the optimization method used might not be
guaranteed to converge when far away from a local minimum. Thirdly, for a very
large amount of data or parameters, some methods become impractical. In
general, it has been found that theoretical guarantees regarding convergence are
an unreliable guide to practical application.
Generalization and statistics
In applications where the goal is to create a system that generalizes well to unseen
examples, the problem of over-training has emerged. This arises in convoluted or
over-specified systems when the capacity of the network significantly exceeds
the needed free parameters. There are two schools of thought for avoiding this
problem: the first is to use cross-validation and similar techniques to check for
the presence of over-training and to select hyperparameters so as to minimize the
generalization error. The second is to use some form of regularization. This is a
concept that emerges naturally in a probabilistic (Bayesian) framework, where
regularization can be performed by placing a larger prior probability on simpler
models; but also in statistical learning theory, where the goal is to minimize two
quantities: the 'empirical risk' and the 'structural risk', which roughly correspond
to the error over the training set and the predicted error on unseen data due to
overfitting.
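As a small illustration of these two ideas, the sketch below adds an L2 (weight-decay) penalty to the empirical risk and uses a held-out example in the spirit of cross-validation. The candidate models and data are invented purely for the example.

```python
# "Empirical risk" = error on the training set; regularization adds a penalty
# that favours simpler models; a held-out example stands in for cross-validation.
def empirical_risk(weights, data):
    return sum((y - sum(w * x for w, x in zip(weights, xs))) ** 2
               for xs, y in data) / len(data)

def regularized_risk(weights, data, lam):
    return empirical_risk(weights, data) + lam * sum(w * w for w in weights)

train   = [([1.0, 0.1], 1.1), ([2.0, -0.3], 1.9), ([3.0, 0.2], 3.2)]
heldout = [([4.0, 1.0], 4.0)]

candidate_weights = [[1.0, 0.5], [1.0, 0.0]]   # a more complex vs. a simpler model
for w in candidate_weights:
    print(w,
          round(regularized_risk(w, train, lam=0.1), 4),   # training objective
          round(empirical_risk(w, heldout), 4))            # held-out error
```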
APPLICATIONS OF ANNs
They can perform tasks that are easy for a human but difficult for a machine −
 Aerospace − Aircraft autopilots, aircraft fault detection.
 Automotive − Automobile guidance systems.
 Military − Weapon orientation and steering, target tracking, object
discrimination, facial recognition, signal/image identification.
 Electronics − Code sequence prediction, IC chip layout, chip failure
analysis, machine vision, voice synthesis.
 Financial − Real estate appraisal, loan advisor, mortgage screening,
corporate bond rating, portfolio trading program, corporate financial
analysis, currency value prediction, document readers, credit application
evaluators.
 Industrial − Manufacturing process control, product design and analysis,
quality inspection systems, welding quality analysis, paper quality
prediction, chemical product design analysis, dynamic modeling of
chemical process systems, machine maintenance analysis, project bidding,
planning, and management.
 Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design,
transplant time optimizer.
 Speech − Speech recognition, speech classification, text to speech
conversion.
 Telecommunications − Image and data compression, automated
information services, real-time spoken language translation.
 Transportation − Truck Brake system diagnosis, vehicle scheduling,
routing systems.
 Software − Pattern Recognition in facial recognition, optical character
recognition, etc.
 Time Series Prediction − ANNs are used to make predictions on stocks
and natural calamities.
 Signal Processing − Neural networks can be trained to process an audio
signal and filter it appropriately in the hearing aids.
 Control − ANNs are often used to make steering decisions of physical
vehicles.
 Anomaly Detection − As ANNs are expert at recognizing patterns, they
can also be trained to generate an output when something unusual occurs
that misfits the pattern.
ADVANTAGES
 They involve human-like thinking.
 They can handle noisy or missing data.
 They can work with large numbers of variables or parameters.
 They provide general solutions with good predictive accuracy.
 The system has the property of continuous learning.
 They deal with the non-linearity of the world in which we live.
 A neural network can perform tasks that a linear program cannot.
 When an element of the neural network fails, the network can continue
without any problem because of its parallel nature.
 A neural network learns and does not need to be reprogrammed.
 They can be implemented in a wide range of applications without difficulty.
DISADVANTAGES
 The neural network needs training to operate.
 The architecture of a neural network is different from the architecture of
microprocessors and therefore needs to be emulated.
 Large neural networks require high processing time.
CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn
by example makes them very flexible and powerful. Furthermore, there is no need
to devise an algorithm in order to perform a specific task, i.e. there is no need to
understand the internal mechanisms of that task. They are also very well suited
to real-time systems because of their fast response and computation times, which
are due to their parallel architecture.
Neural networks also contribute to other areas of research such as neurology and
psychology. They are regularly used to model parts of living organisms and to
investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that
someday 'conscious' networks might be produced. A number of scientists argue
that consciousness is a 'mechanical' property and that 'conscious' neural networks
are a realistic possibility.
Finally, we can say that even though neural networks have huge potential, we
will only get the best out of them when they are integrated with computing, AI,
fuzzy logic and related subjects.
REFERENCES
1) https://en.wikipedia.org/wiki/Artificial_neural_network
2) http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/neural3.html
3) http://www.slideshare.net/nilmani14/neural-network-3019822
4) http://studymafia.org/artificial-neural-network-seminar-ppt-with-pdf-report/
5) http://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_issues.htm