Artificial Intelligence
Prepared by:
Ataklti Nguse
Chapter Two: Intelligent Agent
2.1. Intelligent Agent
• An intelligent agent is an autonomous entity that acts upon an environment using
sensors and actuators to achieve its goals.
• An intelligent agent may learn from the environment in order to achieve its goals.
• The following are the main four rules for an AI agent:
– Rule 1: An AI agent must have the ability to perceive the environment.
– Rule 2: The observation must be used to make decisions.
– Rule 3: Decision should result in an action.
– Rule 4: The action taken by an AI agent must be a rational action.
• AI is the science of building machines (agents) that act rationally with respect to a
goal.
• A thermostat is an example of an intelligent agent.
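To make the four rules concrete, here is a minimal Python sketch of the thermostat agent mentioned above; the target temperature, the dictionary-based environment, and the heater action names are assumptions made for this example, not part of the original slide.

```python
class ThermostatAgent:
    """A minimal thermostat agent: perceive -> decide -> act."""

    def __init__(self, target_temp=20.0):
        self.target_temp = target_temp      # goal: keep the room at the target temperature

    def perceive(self, environment):
        # Rule 1: perceive the environment (read the temperature sensor).
        return environment["temperature"]

    def decide(self, percept):
        # Rules 2 and 3: the observation is used to make a decision that results in an action.
        return "heater_on" if percept < self.target_temp else "heater_off"

    def act(self, environment):
        # Rule 4: the chosen action is rational with respect to the goal.
        action = self.decide(self.perceive(environment))
        environment["heater"] = (action == "heater_on")
        return action


room = {"temperature": 17.5, "heater": False}
agent = ThermostatAgent(target_temp=20.0)
print(agent.act(room))  # -> "heater_on"
```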
2.2. Agent and Environment
• An agent can be anything that perceives its environment through sensors and acts
upon that environment through actuators.
• Percept: the agent’s perceptual input at any given instant.
• Sensor: a device that detects changes in the environment and sends the
information to other electronic devices.
• Actuator: a component responsible for moving and controlling parts of the
system.
• Effector: a device that actually affects the environment.
• Every agent has a goal.
…Continued
• An AI system is composed of an agent and its environment.
• The agents act in their environment.
• The environment is not part of the agent and may contain other agents.
Human Agent and Robot Agent

            Human Agents          Robot Agents
Sensors     Eyes, ears, nose      Camera, microphone, scanners, recorder, infrared (range finders)
Effectors   Hands, legs, mouth    Various motors (artificial legs, artificial hands, …), speaker, light
Exercise
• List the sensors and effectors of the following agents:
1. Software agent
2. Vacuum cleaner agent
2.2.1. Rational Agent:
• A rational agent is an agent that has clear preferences, models
uncertainty, and acts so as to maximize its performance measure over
all possible actions.
• It acts on the basis of:
• its percept sequence
• its built-in knowledge base
• Rationality can be judged on the basis of the following four points:
– The performance measure, which defines the criterion of success.
– The agent’s prior knowledge of its environment.
– The best possible actions that the agent can perform.
– The agent’s percept sequence to date.
• Here the performance measure is the objective criterion for the success of an agent's
behavior.
PEAS for self-driving cars:
…Continued
• For a self-driving car, the PEAS (Performance measure, Environment, Actuators,
Sensors) representation would be:
• Performance: Safety, time, legal driving, comfort, …
• Environment: Roads, other vehicles, road signs, pedestrians, …
• Actuators: Steering, accelerator, brake, signal, horn, …
• Sensors: Camera, GPS, speedometer, odometer, microphone,
keyboard, …
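As a hedged sketch only, the PEAS description above can also be written down as a small Python data structure; the class and field names are illustrative, not a standard API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

# PEAS description of the self-driving car from the slide above.
self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "microphone"],
)
print(self_driving_car.sensors)
```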
Develop a PEAS description for each of the following
task environments:
1. Vacuum cleaner agent
2. Medical diagnosis system
3. Robot soccer player
4. Shopping for used AI books on the Internet
2.2.2.Agent Environment in AI
•An environment is everything in the world which surrounds the agent.
•The environment is where the agent lives and operates; it provides the agent with
something to sense and act upon.
•Agent perceives and acts in an environment.
•Properties of Environments:
–Fully observable vs. partially observable
–Deterministic vs. stochastic
–Episodic vs. sequential
–Static vs. Dynamic
–Discrete vs. continuous
Fully observable vs. partially
observable
• If the agent's sensors can sense or access the complete state of the
environment at each point in time, the environment is fully observable;
otherwise it is partially observable.
• A fully observable environment is easy to handle because there is no need to
maintain an internal state to keep track of the history of the world.
• Chess and checkers are fully observable.
• Automated taxi driving and poker are partially observable.
Deterministic vs. Stochastic
• If an agent's current state and selected action can completely determine the
next state of the environment, then such environment is called a
deterministic environment.
• A stochastic environment is random in nature and cannot be determined
completely by an agent.
• In a deterministic, fully observable environment, the agent does not need to
worry about uncertainty.
• Taxi driving is non-deterministic (i.e., stochastic).
Episodic vs. Sequential
• In an episodic environment, there is a series of one-shot actions, and only
the current percept is required for the action.
• However, in Sequential environment, an agent requires memory of past
actions to determine the next best actions.
• Taxi driving is sequential, while a mushroom-picking
robot is episodic.
Static vs. Dynamic
• If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static
environment.
• Static environments are easy to deal with because the agent does not need to
keep observing the world while deciding on an action.
• In a dynamic environment, however, the agent needs to keep observing the world
before each action.
• Taxi driving is an example of a dynamic environment whereas Crossword
puzzles are an example of a static environment.
Discrete vs. Continuous
• If there is a finite number of distinct percepts and actions that can be
performed in an environment, it is called a discrete environment;
otherwise it is called a continuous environment.
• A chess game is a discrete environment, as there is a finite number
of moves that can be performed.
• A self-driving car is an example of a continuous environment.
• Taxi driving is continuous: speed and location range over continuous
values.
• Chess is discrete: there is a fixed number of possible moves in each turn.
Examples for Environment Types
Below are the properties of a number of familiar environments:

Problem                  Observable  Deterministic  Episodic  Static  Discrete
Crossword puzzle         Yes         Yes            No        Yes     Yes
Mushroom-picking robot   No          No             Yes       No      No
Web shopping program     No          No             No        No      Yes
Tutor                    No          No             No        Yes     Yes
Medical diagnosis        No          No             No        No      No
Taxi driving             No          No             No        No      No
The real world is (of course) partially observable, stochastic, sequential, dynamic,
continuous,…
2.3. Rationality vs. Omniscience
•Rationality maximizes expected performance, while perfection maximizes
actual performance.
•A rational agent acts so as to achieve its goals, given its beliefs (it is one that
does the right thing).
–What does the right thing mean? The action that will cause the agent to be most
successful, i.e., that is expected to maximize goal achievement given the
available information.
•An omniscient agent knows the actual outcome of its actions and can act
accordingly, but in reality omniscience is impossible.
•A rational agent is autonomous if it can learn to compensate for partial or
incorrect prior knowledge.
2.4.The Structure of Intelligent
Agents
• An agent’s structure can be viewed as:
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.
– Feeds the program’s action choices to the effectors.
• An agent program executes on the physical architecture to produce function
f.
• Agent Function − a mapping from the percept sequence to an action.
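The following is a minimal, hedged Python sketch of the idea that the agent program implements the agent function f (a map from the percept sequence to an action) while the architecture runs it in a sense-act loop; the table-lookup style and all names are illustrative assumptions, not the only way to realize f.

```python
class TableDrivenAgentProgram:
    """Agent program: maps the percept sequence seen so far to an action."""

    def __init__(self, table):
        self.table = table       # the agent function f, given here as a lookup table
        self.percepts = []       # percept sequence observed so far

    def __call__(self, percept):
        self.percepts.append(percept)
        # f: percept sequence -> action (default to a no-op for unknown sequences)
        return self.table.get(tuple(self.percepts), "NoOp")


# The architecture would repeatedly read the sensors, call the program,
# and feed the chosen action to the effectors.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
program = TableDrivenAgentProgram(table)
print(program(("A", "Dirty")))  # -> "Suck"
```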
2.5. Types of AI Agents
• Agents can be grouped into five classes based on their degree of perceived
intelligence and capability.
• These are given below:
– Simple Reflex Agent
– Model-based reflex agent
– Goal-based agents
– Utility-based agent
– Learning agent
2.5.1. Simple Reflex Agent
• These agents take decisions on the basis of the current percepts and ignore
the rest of the percept history.
• They choose actions only based on the current percept.
• They are rational only if a correct decision can be made on the basis of the
current percept alone.
• Their environment must be fully observable.
• The simple reflex agent works on condition-action rules.
• For example, a room-cleaner agent acts only if there is dirt in the room.
• They have very limited intelligence and are not adaptive to changes in the
environment.
Structure of simple reflex agent
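A minimal sketch of a simple reflex agent in Python, assuming a two-location vacuum world where each percept is a (location, status) pair; the condition-action rules below are illustrative.

```python
def simple_reflex_vacuum_agent(percept):
    """Decides using condition-action rules on the current percept only (no history)."""
    location, status = percept
    if status == "Dirty":        # condition: dirt present   -> action: suck
        return "Suck"
    if location == "A":          # condition: at location A  -> action: move right
        return "Right"
    return "Left"                # condition: at location B  -> action: move left


print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> "Suck"
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> "Left"
```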
2.5.2. Model-Based Reflex Agent
•The model-based agent can work in a partially observable environment and keep
track of the situation.
•A model-based agent has two important factors:
•Model: It is knowledge about "how things happen in the world," so it is called
a Model-based agent.
•Internal State: It is a representation of the current state based on percept
history.
• Updating the agent's internal state requires knowledge of how the world evolves
and how the agent's actions affect the world.
Structure of model-based reflex agent
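A hedged sketch of a model-based reflex agent for the same two-location vacuum world; the internal state (the agent's beliefs about each square) and the very simple model of how actions affect the world are assumptions made for this example.

```python
class ModelBasedVacuumAgent:
    """Keeps an internal state based on percept history, not just the current percept."""

    def __init__(self):
        # Internal state: what the agent believes about each square.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept, last_action):
        # Model: how the world evolves and how the agent's actions affect it.
        location, status = percept
        self.model[location] = status
        if last_action == "Suck":
            self.model[location] = "Clean"

    def __call__(self, percept, last_action=None):
        self.update_state(percept, last_action)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"        # the agent believes the whole world is clean
        return "Right" if location == "A" else "Left"


agent = ModelBasedVacuumAgent()
print(agent(("A", "Dirty")))                       # -> "Suck"
print(agent(("A", "Clean"), last_action="Suck"))   # -> "Right"
```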
2.5.3. Goal-Based Agents
•Knowledge of the current state of the environment is not always sufficient for
an agent to decide what to do.
•The agent needs to know its goal, which describes desirable situations.
•Goal-based agents expand the capabilities of the model-based agent by having
the "goal" information.
•They choose an action, so that they can achieve the goal.
•Uses knowledge about a goal to guide its actions
• E.g., Search, planning
Structure of goal-based agent
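A hedged sketch of the goal-based idea in Python: the agent searches for a sequence of actions that reaches the goal state. The tiny road map and the breadth-first search are illustrative assumptions, not the only way to plan.

```python
from collections import deque

def goal_based_plan(start, goal, transitions):
    """Search for a sequence of actions that achieves the goal (desirable situation)."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:                      # goal reached: return the action sequence
            return plan
        for action, next_state in transitions.get(state, []):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None


# Illustrative mini road map: state -> [(action, next_state), ...]
transitions = {
    "Home": [("drive_to_market", "Market")],
    "Market": [("drive_to_campus", "Campus")],
}
print(goal_based_plan("Home", "Campus", transitions))
# -> ['drive_to_market', 'drive_to_campus']
```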
2.5.4. Utility-Based Agents
•These agents are similar to goal-based agents but add an extra
component, a utility measurement, which provides a
measure of success at a given state.
•Goals alone are not enough to generate high-quality behavior in most
environments.
•Utility-based agents act based not only on goals but also on the best way to achieve
the goal.
•They take into account how efficiently an action achieves the goals.
•A utility function maps a state onto a real number which describes the
associated degree of happiness.
Structure of utility-based agent
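A hedged sketch of a utility-based choice in Python: among candidate actions that reach the goal, the agent prefers the one whose predicted resulting state has the highest utility. The utility function, its weights, and the candidate routes are invented for this example.

```python
def utility(state):
    """Maps a state onto a real number describing its degree of 'happiness'."""
    # Illustrative trade-off: strongly prefer safety, mildly penalize travel time.
    return 100.0 * state["safety"] - state["travel_time"]

def choose_action(candidate_actions, predict):
    """Picks the action whose predicted outcome has the highest utility."""
    return max(candidate_actions, key=lambda action: utility(predict(action)))


# Predicted outcomes of two routes that both achieve the goal of reaching the destination.
outcomes = {
    "highway_route":  {"safety": 0.9, "travel_time": 30},
    "shortcut_route": {"safety": 0.6, "travel_time": 20},
}
best = choose_action(list(outcomes), predict=lambda action: outcomes[action])
print(best)  # -> "highway_route" (utility 60.0 vs 40.0)
```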
2.5.5. Learning Agents
• All agents can improve their performance through learning.
Structure of learning agent
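A minimal sketch, assuming the standard decomposition of a learning agent into a performance element (chooses actions), a critic (provides feedback as a reward), and a learning element (updates the performance element); all names and the simple averaging update are illustrative assumptions rather than the slide's own notation.

```python
import random

class LearningVacuumAgent:
    """Improves its performance by learning which action works best for each percept."""

    def __init__(self, actions=("Suck", "Right", "Left")):
        self.actions = actions
        self.value = {}        # learned estimates of how good (percept, action) pairs are

    def performance_element(self, percept):
        # Choose the action currently believed to be best (ties broken at random).
        scores = {a: self.value.get((percept, a), 0.0) for a in self.actions}
        best = max(scores.values())
        return random.choice([a for a, s in scores.items() if s == best])

    def learning_element(self, percept, action, reward, rate=0.5):
        # The critic's feedback (reward) is used to improve future action choices.
        key = (percept, action)
        old = self.value.get(key, 0.0)
        self.value[key] = old + rate * (reward - old)


agent = LearningVacuumAgent()
percept = ("A", "Dirty")
action = agent.performance_element(percept)
agent.learning_element(percept, action, reward=1.0 if action == "Suck" else -1.0)
```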