UNIT I
Concept of AI
History
Current Status
Scope
Agents, Environments
Problem formulation
Review of Tree and Graph Structures
State Space Representation
Search Graph and Search Tree
Textbook
Artificial Intelligence: A Modern Approach (AIMA)
(Second Edition) by Stuart Russell and Peter Norvig
Concept of AI
What is Artificial Intelligence?
 Artificial Intelligence:
 build and understand intelligent entities
 Intelligence:
 “the capacity to learn and solve problems”
 the ability to act rationally
Two main dimensions:
 Thought processes vs behavior
 Human-like vs rational-like
Views of AI fall into four categories/approaches:
Thinking humanly Thinking rationally
Acting humanly Acting rationally
Acting Humanly: Turing Test
(Can machines think? A. M. Turing, 1950)
AI system passes if interrogator cannot tell
which one is the machine.
Acting humanly: Turing Test
To pass the test, the computer needs to possess
 Natural Language Processing – to communicate
successfully;
 Knowledge Representation – to store and manipulate
information;
 Automated reasoning – to use the stored information
to answer questions and draw new conclusions;
 Machine Learning – to adapt to new circumstances
and to detect and extrapolate patterns.
The Turing test thus identified key research areas in AI.
Total Turing Test:
 To pass the Total Turing Test, the computer
needs:
 Computer vision – to perceive objects
 Robotics – to manipulate objects and move about.
Thinking humanly: cognitive modeling
Requires scientific theories of internal activities
of the brain; How to validate?
1) Cognitive Science (top-down):
Predicting and testing behavior of human
subjects
– computer models + experimental
techniques from psychology
2) Cognitive Neuroscience (bottom-up):
Direct identification from neurological data
Thinking rationally: "laws of thought"
Proposed by Aristotle;
Given the correct premises, it yields the correct
conclusion
Socrates is a man
All men are mortal
--------------------------
Therefore Socrates is mortal
Logic → making the right inferences!
Acting rationally: rational agent
An agent is anything that can be viewed
as perceiving its environment through
sensors and acting upon that
environment through actuators.
Rational behavior: doing the right thing;
that which is expected to maximize goal
achievement, given the available
information;
Foundations of AI
 Philosophy logic, methods of reasoning, mind vs. matter,
foundations of learning and knowledge
 Mathematics logic, probability, computation
 Economics utility, decision theory
 Neuroscience biological basis of intelligence (how does the brain
process information?)
 Psychology computational models of human intelligence (how
humans and animals think and act)
 Computer engineering how to build efficient computers?
 Linguistics rules of language, language acquisition (how does
language relate to thought?)
 Control theory design of dynamical systems that use controller to
achieve desired behavior
History
 1943 McCulloch & Pitts “Boolean circuit model of brain”
 1950 Turing’s “Computing Machinery and Intelligence”
 1951 Minsky and Edmonds
• Built a neural net computer SNARC
• Used 3000 vacuum tubes and 40 neurons
The Birthplace of
“Artificial Intelligence”, 1956
 1956 Dartmouth meeting: “Artificial
Intelligence” adopted
 1956 Newell and Simon's Logic Theorist
(LT) – proves theorems.
Early enthusiasm, great
expectations (1952-1969)
 GPS – Newell and Simon – thinks like humans (1957)
 Samuel's checkers program that learns (1952)
 McCarthy – Lisp (1958)
 Geometry theorem prover – Gelernter (1959)
 Robinson's resolution (1965)
 Slagle's SAINT solves calculus problems (1963)
 Daniel Bobrow's STUDENT program solved algebra story
problems (1964)
 1968 – Tom Evans's ANALOGY program solved geometric
analogy problems that appear in IQ tests.
 1966-1974 a dose of reality
 Problems with computation
 1969 :Minsky and Papert Published the book Perceptrons,
demonstrating the limitations of neural networks.
 1969-1979 Knowledge-based systems
 1969:Dendral:Inferring molecular structures
Mycin: diagnosing blood infections
Prolog Language PLANNER became popular
Minsky developed frames as a representation and reasoning
language.
 1980-present: AI becomes an industry
 The Japanese government announced the Fifth Generation project to build
intelligent computers
 AI Winter – companies failed to deliver on extravagant promises
 1986-present: return of neural networks
Much of the research on neural networks was done by psychologists
 1987-present: AI becomes a science
 HMMs, planning, belief networks
Emergence of intelligent agents (1995-present)
o The agent architecture SOAR was developed
o The agents' environment is the Internet
o Web-based applications, search engines, recommender systems,
websites
Current Status
Scope
Agents, Environments
Intelligent Agents
 Agents and environments
 Rationality
 Nature of Environments
 Structure of Agents
Agents
 An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators
 Human agent:
sensors- eyes, ears, and other organs
actuators- hands, legs, mouth, and
other body parts
 Robotic agent:
Sensors - cameras and infrared range
finders
Actuators - motors
 an agent perceives its environment through
sensors
 the complete set of inputs at a given time is
called a percept
 the current percept, or a sequence of
percepts, may influence the actions of an
agent – the percept sequence
 The agent function maps from percept histories to
actions: f: P* → A. The agent function is an
abstract mathematical description.
 The agent function will be implemented by an agent
program. The agent program is a concrete
implementation running on the agent
architecture.
Vacuum-cleaner world
 Percepts:
Location and status,
e.g., [A,Dirty]
 Actions:
Left, Right, Suck, NoOp
Example vacuum agent program:
function Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
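For concreteness, the same program can be written as a short runnable Python function. This is a minimal sketch; the string names for the locations, status values, and actions are illustrative conventions, not fixed by the slides.

# Minimal Python sketch of the vacuum agent program.
def vacuum_agent(percept):
    location, status = percept       # percept = [location, status]
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                            # location == "B"
        return "Left"

print(vacuum_agent(("A", "Dirty")))  # -> Suck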
Rationality
 A rational agent is one that does the right
thing. Every entry in the table for the agent
function is filled out correctly.
 It is based on
 performance measure
 percept sequence
 background knowledge
 feasible actions
Omniscience, Learning and
Autonomy
 an omniscient agent deals with the actual
outcome of its actions
 a rational agent deals with the expected
outcome of actions
 a rational agent not only gathers information
but also learns as much as possible from the
percepts it receives.
 a rational agent should be autonomous – it
should learn what it can to compensate
for partial or incorrect prior knowledge.
Nature of Environments
Specifying the task environment
 Before we design an intelligent agent, we must specify its “task
environment”:
 Problem specification: Performance measure, Environment,
Actuators, Sensors (PEAS)
Example of Agent Types and their PEAS description:
 Example: automated taxi driver
 Performance measure
• Safe, fast, legal, comfortable trip, maximize profits
 Environment
• Roads, other traffic, pedestrians, customers
 Actuators
• Steering wheel, accelerator, brake, signal, horn
 Sensors
• Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard
 Example: Agent = Medical diagnosis system
Performance measure: Healthy patient, minimize costs,
lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings,
patient's answers)
 Example: Agent = Part-picking robot
Performance measure: Percentage of parts in correct
bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
 Example: Agent = Interactive English tutor
Performance measure: Maximize student's score on test
Environment: Set of students
Actuators: Screen display (exercises, suggestions, corrections)
Sensors: Keyboard
Example: Agent = Satellite image system
Performance measure: Correct image categorization
Environment: Downlink from satellite
Actuators: Display categorization of scene
Sensors: Color pixel array
Properties of Task Environment
 Fully observable (vs. partially observable): The agent's sensors
give it access to the complete state of the environment at each point
in time
e.g., an automated taxi does not have sensors to see what other drivers
are doing/thinking.
 Deterministic (vs. stochastic): The next state of the environment is
completely determined by the current state and the agent’s action
 Strategic: the environment is deterministic except for the actions
of other agents
e.g., the vacuum world is deterministic, while taxi driving is stochastic –
one cannot exactly predict the behaviour of traffic
 Episodic (vs. sequential): The agent's experience is divided into
atomic “episodes,” and the choice of action in each episode depends
only on the episode itself
 E.g., an agent sorting defective parts on an assembly line is episodic,
while a taxi-driving agent or a chess-playing agent is sequential.
 Static (vs. dynamic): The environment is unchanged while an agent is
deliberating
 Semidynamic: the environment does not change with the passage
of time, but the agent's performance score does
e.g., taxi driving is dynamic, a crossword-puzzle solver is static, and chess
played with a clock is semidynamic
 Discrete (vs. continuous): The environment provides a fixed number of
distinct percepts, actions, and environment states
e.g. chess game has finite number of states
• Taxi Driving is continuous-state and continuous-time problem …
 Single agent (vs. multi-agent): An agent operating by itself in an
environment
e.g. An agent solving a crossword puzzle is in a single agent
environment
• Agent in chess playing is in two-agent environment
Task environment           Observable  Determ./    Episodic/   Static/   Discrete/   Agents
                                       stochastic  sequential  dynamic   continuous
Crossword puzzle           fully       determ.     sequential  static    discrete    single
Chess with a clock         fully       strategic   sequential  semi      discrete    multi
Poker                      partial     stochastic  sequential  static    discrete    multi
Backgammon                 fully       stochastic  sequential  static    discrete    multi
Taxi driving               partial     stochastic  sequential  dynamic   continuous  multi
Medical diagnosis          partial     stochastic  sequential  dynamic   continuous  single
Image analysis             fully       determ.     episodic    semi      continuous  single
Part-picking robot         partial     stochastic  episodic    dynamic   continuous  single
Refinery controller        partial     stochastic  sequential  dynamic   continuous  single
Interactive English tutor  partial     stochastic  sequential  dynamic   discrete    multi
Structure of Agents
 An agent is completely specified by the agent function
mapping percept sequences to actions.
 The agent program implements the agent function,
mapping percept sequences to actions.
Agent = architecture + program.
Architecture = some sort of computing device with physical
sensors and actuators.
 Aim of AI is to design the agent program
Table-Driven agent
function Table-Driven-Agent(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept
          sequences, initially fully specified
  append percept to the end of percepts
  action <- Lookup(percepts, table)
  return action
The table agent program is invoked for each new percept and returns an action
each time. It keeps track of percept sequences using its own private data structure.
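As a rough Python sketch of the same idea (assuming the table is supplied as a dictionary keyed by tuples of percepts; the factory function is just one way to keep the percept history private):

# Sketch of a table-driven agent; `table` maps percept-sequence tuples to actions.
def make_table_driven_agent(table):
    percepts = []                      # private percept history
    def agent(percept):
        percepts.append(percept)       # append percept to the sequence
        return table[tuple(percepts)]  # look up the entire history
    return agent

agent = make_table_driven_agent({(("A", "Dirty"),): "Suck"})
print(agent(("A", "Dirty")))           # -> Suck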
Table-lookup agent
 Drawbacks:
 Huge table
 Takes a long time to build the table
 No autonomy
 Even with learning, it needs a long time to learn
the table entries.
 Example: let P be the set of possible percepts and T be the
lifetime of the agent (the total number of percepts it will receive);
then the lookup table will contain |P|^T entries.
 The table of the vacuum agent (VA) will contain more than 4^T
entries (the VA has 4 possible percepts) – even for a short lifetime
of T = 10 percepts, that is already about a million entries.
 Four basic kinds of agent program are
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
All of these can be turned into learning agents
Simple reflex agents
 Single current percept: the agent selects an action on the
basis of the current percept, ignoring the rest of the percept history.
 Example: The vacuum agent (VA) is a simple reflex agent,
because its decision is based only on the current location and
on whether that location contains dirt.
 Rules relate
 "State" based on percept
 "action" for agent to perform
 "Condition-action" rule:
If a then b: e.g.
vacuum agent (VA): if in(A) and dirty(A), then vacuum
taxi driving agent (TA): if car-in-front-is-braking then initiate-braking.
Agent program for a simple reflex agent
The vacuum agent program is very small compared to the corresponding
table: it cuts down the number of possibilities from 4^T to 4. This reduction
comes from ignoring the percept history.
Simple reflex agent Program
function Simple-Reflex-Agent(percept) returns an action
  static: rules, a set of condition-action rules
  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A simple reflex agent. It acts according to the rule whose condition matches
the current state, as defined by the percept.
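A minimal Python sketch of this structure follows; representing rules as (condition, action) pairs and returning NoOp when nothing matches are assumptions made for illustration.

# Sketch of a simple reflex agent. Each condition is a predicate on the state.
def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)   # Interpret-Input
        for condition, action in rules:    # Rule-Match
            if condition(state):
                return action              # Rule-Action
        return "NoOp"                      # assumed fallback
    return agent

rules = [(lambda s: s[1] == "Dirty", "Suck"),
         (lambda s: s[0] == "A", "Right"),
         (lambda s: s[0] == "B", "Left")]
vacuum = make_simple_reflex_agent(rules, lambda p: p)
print(vacuum(("B", "Clean")))              # -> Left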
Schematic diagram of a simple reflex agent
(example: vacuum-cleaner world). The rules map the current state of the
decision process to an action. Such an agent has limited intelligence and
fails if the environment is only partially observable.
Simple reflex agents
 Simple but very limited intelligence.
 Action does not depend on the percept history, only on
the current percept.
 Therefore no memory requirements.
 Infinite loops
 Suppose the vacuum cleaner cannot observe its location.
What should it do given percept = Clean? Moving Left (at A)
or Right (at B) leads to an infinite loop.
 Possible solution: randomize the action.
Model-based reflex agents
 Solution to partial observability problems
 Maintain state
• Keep track of the parts of the world it can't see now
• Maintain internal state that depends on the percept
history
 Update the previous state based on
• Knowledge of how the world changes, e.g. TA: an overtaking car
generally will be closer behind than it was a moment ago.
• Knowledge of the effects of its own actions, e.g. TA: when the agent
turns the steering wheel clockwise, the car turns to the right.
• => This "model of the world" implements the
knowledge about how the world works.
Schematic diagram of a model-based reflex agent.
It models the world by modeling how the world changes and how its
actions change the world, maintaining a description of the
current world state. Without a clear goal, it is sometimes unclear
what to do.
Model-based reflex agents
function Model-Based-Reflex-Agent(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- Update-State(state, action, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A model-based reflex agent. It keeps track of the current state of the world using
an internal model. It then chooses an action in the same way as the reflex agent.
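A Python sketch along the same lines; the update_state signature mirrors the pseudocode's Update-State, and all names are illustrative rather than prescribed.

# Sketch of a model-based reflex agent. `update_state` folds the last action
# and the new percept into the internal model of the world.
def make_model_based_agent(rules, update_state, initial_state=None):
    state, last_action = initial_state, None
    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # Update-State
        for condition, action in rules:                    # Rule-Match
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"                               # assumed fallback
        return last_action
    return agent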
Goal-based agents
• Is knowing the state and the environment enough?
– A taxi can go left, right, or straight
• Have a goal
 A destination to get to
 Uses knowledge about a goal to guide its
actions
 E.g., Search, planning
 Goal-based Agents are much more flexible in
responding to a changing environment;
accepting different goals.
Goal-based agents
Goals provide a reason to prefer one action over another.
We need to predict the future: we need to plan & search
• A reflex agent brakes when it sees brake lights. A goal-based agent
reasons:
– Brake light → car in front is stopping → I should stop → I should apply the brake
Utility-based agents
 Goals are not always enough
 Many action sequences get taxi to destination
 Consider other things. How fast, how safe…..
 A utility function maps a state onto a real
number which describes the associated degree
of “happiness”, “goodness”, “success”.
 Where does the utility measure come from?
 Economics: money.
 Biology: number of offspring.
 Your life?
Utility-based agents
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
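In code, the idea reduces to choosing the action whose predicted outcome has the highest utility. A sketch, assuming the agent's model supplies result(state, action) and the designer supplies utility(state):

# Pick the action that maximizes the utility of the predicted next state.
def choose_action(state, actions, result, utility):
    # result(state, action): the model's prediction of the next state
    # utility(state): a real number (degree of "happiness" of that state)
    return max(actions, key=lambda a: utility(result(state, a)))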
Learning agents
How does an agent improve over time?
By monitoring its performance and suggesting
better modeling, new action rules, etc.
Schematic diagram of a learning agent: a critic evaluates the current
world state; the learning element changes the action rules and suggests
explorations; and the "old agent" (the performance element) models the
world and decides on the actions to be taken.
Learning Agents can be divided into 4 conceptual
components:
1. Learning elements are responsible for
improvements
2. Performance elements are responsible for
selecting external actions (previous knowledge)
3. Critic tells the learning elements how well the
agent is doing with respect to a fixed performance
standard.
4. Problem generator is responsible for suggesting
actions that will lead to new and informative
experience.
Example :Automated Taxi driving
•The performance element consists of whatever collection of knowledge and
procedures the TA has for selecting its driving actions.
•The critic observes the world and passes information along to the learning
element. For example after the taxi makes a quick left turn across three lanes
the critic observes the shocking language used by other drivers. From this
experience the learning element is able to formulate a rule saying this was a
bad action, and the performance element is modified by installing this new rule.
•The problem generator may identify certain areas of behavior in need of
improvement and suggest experiments : such as testing the brakes on different
road surfaces under different conditions.
•The learning element can make changes to any of the knowledge used by the
previous agent types: observation of transitions between two states (how the
world evolves), observation of the results of actions (what my actions do).
(Learn from what happens when the brake is applied hard on a wet road ...)
Problem Formulation
Problem Solving agents
Example problems
Searching for solutions
Problem Solving agents:
1. Goal Formulation: Set of one or more (desirable) world
states.
2. Problem formulation: What actions and states to
consider given a goal and an initial state.
3. Search for solution: Given the problem, search for a
solution --- a sequence of actions to achieve the goal
starting from the initial state.
4. Execution of the solution
Example: Path Finding problem
 Formulate goal:
 be in Bucharest
(Romania)
 Formulate problem:
 action: drive between
pair of connected
cities (direct road)
 state: be in a city
(20 world states)
 Find solution:
 sequence of cities
leading from start to
goal state, e.g., Arad,
Sibiu, Fagaras,
Bucharest
 Execution
 drive from Arad to
Bucharest according
to the solution
(Map figure: Initial State = Arad, Goal State = Bucharest.)
Environment: fully observable (map),
deterministic, and the agent knows effects
of each action.
Well-defined problems and
solutions
A problem can be defined by four components:
1. Initial state: the starting point from which the agent sets
out
2. Operators: descriptions of the available actions
State space: all states reachable from the initial
state by any sequence of actions
Path: a sequence of actions leading from one state
to another
3. Goal test: determines whether a given state is a goal
state
4. Path cost function: assigns a numeric cost to
each path.
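These four components map naturally onto a small Python interface. The class and method names below are our own (a hypothetical sketch, not from the slides); it is reused by the search code later in this section.

# A hypothetical problem interface mirroring the four components above.
class Problem:
    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state
        self.goal_state = goal_state
    def successors(self, state):
        # Operators: yield (action, next_state) pairs reachable from state.
        raise NotImplementedError
    def goal_test(self, state):
        # Goal test: is this state a goal state?
        return state == self.goal_state
    def path_cost(self, cost_so_far, state, action, next_state):
        # Path cost function: by default, one unit per action.
        return cost_so_far + 1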
Example Problems
 Toy problems
 Illustrate/test various problem-solving methods
 Concise, exact description
 Can be used to compare performance
 Examples: 8-puzzle, 8-queens problem, Cryptarithmetic,
Vacuum world, Missionaries and cannibals.
 Real-world problems
 More difficult
 No single, agreed-upon specification (state, successor function,
edge cost)
 Examples: Route finding, VLSI layout, Robot navigation,
Assembly sequencing
Toy problems:
Simple Vacuum World
 states
 two locations
 dirty, clean
 initial state
 any legitimate state
 successor function (operators)
 left, right, suck
 goal test
 all squares clean
 path cost
 one unit per action
Properties: discrete locations, discrete dirt (binary), deterministic
The 8-puzzle
[Note: optimal solution of n-Puzzle family is NP-hard]
8-Puzzle
 states
 location of tiles (including blank tile)
 initial state
 any legitimate configuration
 successor function (operators)
 move tile
 alternatively: move blank
 goal test
 state matches the specified goal configuration
 path cost
 one unit per move
Properties: abstraction leads to discrete configurations, discrete moves, deterministic
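With the "move blank" formulation, the successor function fits in a few lines of Python. Encoding a state as a row-major 9-tuple with 0 for the blank is an assumption of this sketch.

# Successor function for the 8-puzzle: slide the blank Up/Down/Left/Right.
def successors_8puzzle(state):
    moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
    blank = state.index(0)                    # position of the blank tile
    for action, delta in moves.items():
        target = blank + delta
        if not 0 <= target <= 8:
            continue                          # would leave the board
        if delta in (-1, 1) and blank // 3 != target // 3:
            continue                          # no wrapping across rows
        s = list(state)
        s[blank], s[target] = s[target], s[blank]
        yield action, tuple(s)

print(list(successors_8puzzle((1, 2, 3, 4, 0, 5, 6, 7, 8))))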
8-Queens
 incremental formulation
 states
• arrangement of up to 8 queens
on the board
 initial state
• empty board
 successor function (operators)
• add a queen to any square
 goal test
• all queens on board
• no queen attacked
 path cost
• irrelevant (all solutions equally
valid)
 complete-state formulation
 states
 arrangement of 8 queens on the board
 initial state
 all 8 queens on board
 successor function (operators)
 move a queen to a different square
 goal test
 no queen attacked
 path cost
 irrelevant (all solutions equally valid)
Real-world problems
 Route finding
 Defined in terms of locations and transitions along links between
them
 Applications: routing in computer networks, automated travel
advisory systems, airline travel planning systems
 states
 locations
 initial state
 starting point
 successor function (operators)
 move from one location to another
 goal test
 arrive at a certain location
 path cost
 may be quite complex
• money, time, travel comfort, scenery, ...
 Touring and traveling salesperson problems
 “Visit every city on the map at least once”
 Needs information about the visited cities
 Goal: Find the shortest tour that visits all cities
 NP-hard, but a lot of effort has been spent on improving the
capabilities of TSP algorithms
 Applications: planning movements of automatic circuit board drills
 VLSI layout
 positioning millions of components and connections on a chip to
minimize area, circuit delays, etc.
 Place cells on a chip so they don’t overlap and there is room for
connecting wires to be placed between the cells
 Robot navigation
 Generalization of the route finding problem
• No discrete set of routes
• Robot can move in a continuous space
• Infinite set of possible actions and states
 Assembly sequencing
 Automatic assembly of complex objects
 The problem is to find an order in which to assemble
the parts of some object
 Protein design
Find a sequence of amino acids that will fold into a
3-dimensional protein with the right properties to cure some
disease.
Searching for Solutions/
Search Graph & Search Tree
Search through the state space.
We will consider search techniques that use an
explicit search tree that is generated by the initial state and
the successor function.
Search tree example: a node is selected for expansion, and its successor
nodes are added to the tree; one of those is then selected for expansion,
and its successors are added in turn.
Note: Arad is added (again) to the tree, since it is reachable from Sibiu.
This is not necessarily a problem, but in Graph-Search we will avoid
it by maintaining an "explored" list.
An informal description of the
general Tree search algorithm:
initialize (initial node)
loop
  choose a node for expansion according to strategy
  goal node? → done
  expand node with successor function
states vs. nodes
 A state is a (representation of) a physical configuration
 A node is a data structure with 5 components: state, parent node, action,
path cost, depth
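A direct Python rendering of such a node, using the hypothetical Problem interface sketched earlier (field and method names are illustrative); solution() recovers the action sequence by following parent links back to the root.

# A search-tree node with the five components listed above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state, self.parent, self.action = state, parent, action
        self.path_cost, self.depth = path_cost, depth
    def expand(self, problem):
        # Generate child nodes via the problem's successor function.
        return [Node(s, self, a,
                     problem.path_cost(self.path_cost, self.state, a, s),
                     self.depth + 1)
                for a, s in problem.successors(self.state)]
    def solution(self):
        # Walk parent links back to the root, collecting the actions taken.
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))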
General Tree Search Algorithm
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node := REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe := INSERT-ALL(EXPAND(node, problem), fringe)
 generate the node from the initial state of the problem
 repeat
 return failure if there are no more nodes in the fringe
 examine the current node; if it’s a goal, return the solution
 expand the current node, and add the new nodes to the fringe
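Combined with the Node and Problem sketches above, TREE-SEARCH becomes a short Python loop. Using a deque popped from the front makes this particular fringe first-in-first-out (i.e., breadth-first); that choice is ours for illustration, not fixed by the pseudocode.

from collections import deque

# Generic tree search: the fringe discipline determines the search strategy.
def tree_search(problem):
    fringe = deque([Node(problem.initial_state)])  # MAKE-NODE + INSERT
    while fringe:                                  # empty fringe -> failure
        node = fringe.popleft()                    # REMOVE-FIRST
        if problem.goal_test(node.state):          # GOAL-TEST
            return node.solution()                 # SOLUTION
        fringe.extend(node.expand(problem))        # INSERT-ALL(EXPAND(...))
    return None                                    # failure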
Measuring problem-solving performance
An algorithm's performance can be evaluated in four ways:
1. Completeness: does it always find a solution if one exists?
2. Time complexity: how long does it take to find a solution?
3. Space complexity: how much memory does it need to perform the
search?
4. Optimality: does the strategy find the optimal solution?
 Time and space complexity are measured in terms of
 b: branching factor (maximum number of successors of any node) of the
search tree
 d: depth of the shallowest goal node
 m: maximum length of any path in the state space (may be ∞)
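As a quick sanity check on what b and d imply for cost: a uniform tree with branching factor b contains 1 + b + b^2 + ... + b^d nodes down to depth d, which is why time and space bounds are typically written in terms of b^d.

# Total nodes in a uniform tree of branching factor b, down to depth d.
def nodes_up_to_depth(b, d):
    return sum(b ** k for k in range(d + 1))

print(nodes_up_to_depth(10, 5))  # 111111 nodes already for b=10, d=5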