LECTURE 3:
AGENT PROPERTIES


  Artificial Intelligence II – Multi-Agent Systems
      Introduction to Multi-Agent Systems
            URV, Winter-Spring 2010




                                                     1
Overview
 What properties should an agent have?
 What kinds of things are not agents
  Objects
  Expert Systems
 Common problems in agent development




                                         2
Defining agents
 Many possible definitions of “agents”
 Each author provides a set of
 characteristics/properties that are considered
 important to the notion of agenthood
 They can be divided into
   Internal: determine the actions within an agent
   External: affect the interaction of the agent with
   other (computational/human) agents




                                                        3
1-Flexibility

 An intelligent agent is a
 computer system
 capable of flexible
 action in some dynamic
 environment
 By flexible, we mean:
   reactive
   proactive
   social




                             4
2-Reactivity
 If a program’s environment is guaranteed to be fixed, the
 program does not need to worry about its own success or
 failure – it just executes blindly
   Example of fixed environment: compiler
 The real world is not like that: things change, information
 is incomplete. Many (most?) interesting environments are
 dynamic
   Multi-agent world
 Software is hard to build for dynamic domains: programs
 must take into account possibility of failure
 A reactive system is one that maintains an ongoing
 interaction with its environment, and responds to changes
 that occur in it (in time for the response to be useful)




                                                               5
Ways to achieve reactivity
 [Recall lecture from last week]
 Reactive architectures
   [Situation – Action] rules
   Layered, behaviour-based architectures
 Deliberative architectures
   Symbolic world model, long-term goals
   Reasoning, planning
 Hybrid architectures
   Reactive layer + deliberative layer
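
The [Situation – Action] approach can be illustrated with a minimal sketch; the percept fields, rule conditions and action names below are hypothetical examples, not part of any particular architecture:

```python
# Minimal sketch of a reactive agent driven by [situation -> action] rules.
# The percept fields and action names are invented for illustration.

RULES = [
    (lambda percept: percept["obstacle"], lambda: "turn"),
    (lambda percept: percept["dirty"],   lambda: "clean"),
    (lambda percept: True,               lambda: "move-forward"),  # default
]

def react(percept):
    """Fire the first rule whose situation matches the current percept."""
    for condition, action in RULES:
        if condition(percept):
            return action()

print(react({"obstacle": False, "dirty": True}))   # -> clean
```

Note there is no world model and no planning: the response depends only on the current percept, which is what makes purely reactive systems fast but short-sighted.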




                                            6
3-Proactiveness
  Reacting to an environment
  is relatively easy
  (e.g., stimulus → response
  rules)
  But we generally want agents
  to do things for us, act on our
  behalf
  Hence they must exhibit
  goal-directed behaviour
  Agents should be proactive




                                    7
Aspects of proactiveness
 Generating and attempting to achieve goals
 Behaviour not driven solely by events
 Taking the initiative when appropriate
 Executing actions/giving advice/making
 recommendations/making suggestions
 without an explicit user request
 Recognizing opportunities on the fly
   Available resources
   Chances of cooperation




                                              8
Example of proactiveness (I)
 Personal Assistant Agent, running
 continuously on our mobile phone
 Location tracking (e.g. GPS)
 Knows our preferences
   Cultural activities
   Food
 Can proactively warn us when we are close
 to an interesting cultural activity, or if it is
 lunch time and we are close to a restaurant
 that offers our favourite food [Turist@]




                                                    9
Example of proactiveness (II)
   Set of agents embedded in the home of an old
   or disabled person
   Detects the movement of the person around
   the house and the actions he/she performs
   Learns the usual daily patterns of behaviour
   Can detect abnormal situations, and
   proactively send warnings/alarms to
   family/health services
     E.g. Too much time in the same position, long time
     in the bathroom, whole day without going into the
     kitchen, ...




                                                          11
Balancing Reactive and
Goal-Oriented Behaviour
 We want our agents to be reactive, responding to
 changing conditions in an appropriate (timely)
 fashion
 We want our agents to systematically work towards
 long-term goals
 These two considerations can be at odds with one
 another
 Designing an agent that can balance the two
 remains an open research problem
      [recall hybrid architectures from last week]
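
One simple way to combine the two behaviours in a control loop is to let urgent events pre-empt the current plan, and otherwise execute the next planned step; the event names and plan steps below are hypothetical:

```python
# Sketch of balancing reactive and goal-oriented behaviour in one cycle:
# urgent events pre-empt the plan, otherwise the agent follows the plan.
# Event names and plan steps are invented for illustration.

def control_step(event, plan):
    """Return the action for this cycle: react if urgent, else follow the plan."""
    if event == "obstacle-ahead":      # timely response to a change
        return "avoid"
    if plan:                           # systematic progress towards a goal
        return plan.pop(0)
    return "idle"

plan = ["gather-data", "analyse", "report"]
print(control_step("obstacle-ahead", plan))   # -> avoid (reactive)
print(control_step(None, plan))               # -> gather-data (proactive)
```

The hard open problem mentioned above is choosing *when* an event should interrupt the plan; a fixed priority test like this one is only the crudest possible policy.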




                                                     13
4-Social Ability
  The real world is a multi-agent environment:
  we cannot go around attempting to achieve
  goals without taking others into account
  Some goals can only be achieved with the
  cooperation of others
  Similarly for many computer environments:
  witness the Internet
  Social ability in agents is the ability to interact
  with other agents (and possibly humans) via
  some kind of agent-communication language,
  and perhaps cooperate with others




                                                        14
Requirements for communication
  Agent communication language
    FIPA-ACL
    Message types
    Message attributes
  Agent communication protocols
  Languages to represent the content of the
  messages between agents
  Shared ontologies
  World-wide standards
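
The main attributes of a FIPA-ACL message can be sketched as a simple record; the field names follow the FIPA-ACL message structure, while the concrete values are invented examples:

```python
from dataclasses import dataclass

# Sketch of the core attributes of a FIPA-ACL message.
# Field names follow the FIPA-ACL specification; values are invented.

@dataclass
class ACLMessage:
    performative: str   # message type, e.g. "inform", "request", "propose"
    sender: str
    receiver: str
    content: str        # expressed in the content language below
    language: str       # content language, e.g. "fipa-sl"
    ontology: str       # shared ontology that gives meaning to the content

msg = ACLMessage("request", "buyer-agent", "seller-agent",
                 "(price book-123)", "fipa-sl", "e-commerce-ontology")
print(msg.performative)   # -> request
```

The `language` and `ontology` attributes are exactly the "languages to represent the content" and "shared ontologies" requirements listed above: both agents must interpret the content string the same way.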




                                              15
High-level activities
 Communication is a first step towards sophisticated
 activities:
   Coordination
     How to divide a task between a group of agents
     Distributed planning
   Cooperation
     Share intermediate results
     Share resources
     Distributed problem solving
   Negotiation [e-commerce]
     Conditions in an economic transaction
     Find the agent that can provide a service with the best
     conditions
                   [Second part of the course]




                                                               16
Other aspects related to communication
  Security issues
    Authentication
    Encryption
  Finding other agents that provide services,
  matchmaking
    Quite difficult in open systems
  Trust
    To what extent can we trust the other agents of the
    system?
    Reputation models




                                                          17
5-Rationality

 An agent will act in order to achieve its
 goals
 It will not act in such a way as to prevent
 its goals being achieved
   At least insofar as its beliefs permit
 For instance, it will not apply deductive
 procedures without a purpose, as a
 CLIPS-style system might




                                               18
6-Reasoning capabilities
 Essential aspect for intelligent/rational
 behaviour
 Knowledge base with beliefs on the world
 Ability to infer and extrapolate based on
 current knowledge and experiences
 Capacity to make plans
 This is the characteristic that distinguishes an
 intelligent agent from a more “robotic”
 reactive-like agent




                                                    19
Kinds of reasoning in AI (I)
  Knowledge-based systems /
  expert systems
    Reasoning techniques
    specialised in the system’s
    domain
    Forward-chaining,
    backward-chaining, hybrid
  Rule-based systems
    Knowledge is represented
    as a set of rules
    Detect – Select – Apply
    execution cycle
    CLIPS
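
The Detect – Select – Apply cycle of a forward-chaining rule-based system can be sketched as follows; the facts and rules are invented toy examples, not CLIPS syntax:

```python
# Sketch of the Detect-Select-Apply cycle of a forward-chaining,
# CLIPS-style rule-based system. Facts and rules are invented examples.

facts = {"bird"}
rules = [
    ({"bird"}, "can-fly"),            # if bird then can-fly
    ({"can-fly", "bird"}, "animal"),  # if can-fly and bird then animal
]

changed = True
while changed:                        # repeat the cycle until quiescence
    changed = False
    # Detect: rules whose conditions hold and whose conclusion is new
    applicable = [(cond, concl) for cond, concl in rules
                  if cond <= facts and concl not in facts]
    if applicable:
        # Select: a trivial conflict-resolution strategy (first match)
        cond, concl = applicable[0]
        # Apply: add the conclusion to working memory
        facts.add(concl)
        changed = True

print(sorted(facts))   # -> ['animal', 'bird', 'can-fly']
```

Real systems such as CLIPS differ mainly in the Select step, where conflict-resolution strategies (salience, recency, specificity) decide among many applicable rules.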




                                   20
Kinds of reasoning in AI (II)

 Case-based reasoning
   Using similarity to solved
   problems
 Approximate reasoning
   Fuzzy logic, Bayesian
   networks, probabilities,
   etc
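
The retrieval step of case-based reasoning can be sketched as a nearest-neighbour lookup; the case base, features and solutions below are invented for illustration:

```python
# Sketch of case-based retrieval: reuse the solution of the most similar
# solved problem. Cases, features and solutions are invented examples.

case_base = [
    ({"fever": 1, "cough": 1, "rash": 0}, "flu"),
    ({"fever": 0, "cough": 0, "rash": 1}, "allergy"),
]

def similarity(a, b):
    """Count the features on which two problem descriptions agree."""
    return sum(a[f] == b[f] for f in a)

def retrieve(problem):
    """Return the solution of the nearest stored case."""
    _case, solution = max(case_base, key=lambda c: similarity(c[0], problem))
    return solution

print(retrieve({"fever": 1, "cough": 0, "rash": 0}))   # -> flu
```

A full CBR cycle would then *adapt* the retrieved solution to the new problem and *retain* the solved case, steps omitted here for brevity.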




                                21
7-Learning
 Basic component for an intelligent
 behaviour
 It can be considered as the [automatic]
 improvement of the performance of the
 agent over time
 Machine Learning: large area within AI




                                           22
Ways to improve
 Make less mistakes
 Do not repeat computations performed in
 the past
 Find solutions more quickly
 Find better solutions
 Solve a wider range of problems
 Learn user preferences and adapt the
 behaviour accordingly




                                           23
Learning tourist profiles in Turist@
 The tourist may fill in an initial questionnaire
 Analyze the tourist’s queries
   E.g. Science-fiction films
 Analyze the tourist’s votes
   Museum of Modern Art: very good
 Cluster tourists with similar preferences
   Recommend activities highly valued by tourists in
   the same class
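
The recommendation step above can be sketched as follows; the clusters, ratings and activity names are invented, whereas a real system such as Turist@ would learn the clusters from questionnaires, queries and votes:

```python
# Sketch of recommending activities highly valued by tourists in the same
# cluster. Clusters, ratings and activity names are invented examples.

clusters = {"art-lovers": ["alice", "bob"], "foodies": ["carol"]}
ratings = {
    "alice": {"modern-art-museum": 5},
    "bob":   {"modern-art-museum": 4, "sci-fi-film": 5},
    "carol": {"tapas-tour": 5},
}

def recommend(tourist, cluster, threshold=4):
    """Activities rated >= threshold by other tourists in the same cluster."""
    seen = set(ratings.get(tourist, {}))
    recs = set()
    for peer in clusters[cluster]:
        if peer != tourist:
            recs |= {a for a, s in ratings[peer].items() if s >= threshold}
    return sorted(recs - seen)

print(recommend("alice", "art-lovers"))   # -> ['sci-fi-film']
```

This is the essence of cluster-based collaborative filtering: tourists are recommended what similar tourists liked, minus what they have already rated.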




                                                       24
Advantages of learning systems
 Can adapt better to a dynamic environment,
 or to unknown situations
   Without the need for an exhaustive set of rules
   defined at design time
 Can leverage previous positive/negative
 experiences to act more intelligently in the
 future




                                                    25
8-Autonomy

 Very important difference between agents
 and traditional programs
 Ability to pursue goals in an autonomous
 way, without direct continuous
 interaction/commands from the user
 Given a vague/imprecise goal, the agent
 must determine the best way to attain it




                                            26
Autonomous decisions
 Given a certain goal ...
   Which actions should I perform?
   How should I perform these actions?
   Should I seek/request/(buy !) help/collaboration
   from other agents?


 Less work for the human user !!!




                                                      27
Autonomy requirements
 To have autonomy, it is necessary for an
 agent …
  To have control over its own actions
    An agent cannot be obliged to do anything
  To have control over its internal state
    The agent’s state cannot be externally modified by
    another agent
  To have the appropriate access to the resources
  and capabilities needed to perform its tasks
    E.g. access to Internet, communication channels with
    other agents




                                                           28
Autonomy limitations
 Sometimes the user may restrict the
 autonomy of the agent
   For instance, the agent could have autonomy to
   search the Internet for the best place to buy a
   given book ...
   … but the agent could not have the autonomy to
   actually buy the book, using the credit card
   details of the user




                                                    29
Issues
 Autonomy also raises complex issues:
   Legal issues
     Who is responsible for the agent’s actions?
   Ethical issues
     To what extent should decisions be delegated to
     computational agents?
     E.g. agents in medical decision support systems




                                                       30
9-Temporal continuity
  Agents are continuously running
  processes
    Active in the foreground, or sleeping/passive
    in the background until a certain message
    arrives
  Not once-only computations or scripts
  that map a single input to a single output
  and then terminate




                                                 31
10-Mobility
  Mobile agents can be executing in a given
  computer and, at some point in time,
  move physically through a network (e.g.
  Internet) to another computer, and
  continue their execution there
  In most applications the idea is to go
  somewhere to perform a given task and
  then come back to the initial host with the
  obtained results




                                                32
Example of use: access to a DB

 Imagine there is a DB in Australia with
 thousands of images, and we need to
 select some images with specific
 properties
 We have to make some computations on
 the images to decide whether to select
 them or not




                                           33
Option 1: remote requests

 The agent in our computer makes hundreds
 of requests to the agent managing the DB in
 Australia
   Continuous connection required
   Heavy use of the bandwidth
   All computations made on our computer




                                               34
Option 2: local access
1. Establish a connection
2. Send a specialised agent to the Australian
   computer holding the DB
   <the connection can be dismissed here>
3. Make local accesses to the DB, analysing the
   images there
4. Re-establish the connection
5. Our agent comes back with the selected
   images
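
The bandwidth difference between the two options can be sketched in miniature; the database, the selection criterion and the record counts are invented, and a real mobile-agent platform would also serialise the agent's code and state:

```python
# Sketch contrasting the two options for the remote image database.
# The database and the filtering criterion are invented examples.

DB = [{"id": i, "brightness": i * 10} for i in range(100)]   # "in Australia"

def option1():
    """Pull every record over the network, then filter locally."""
    transferred = list(DB)                      # whole DB crosses the network
    kept = [img for img in transferred if img["brightness"] > 900]
    return kept, len(transferred)

def option2():
    """Ship the selection criterion to the data, return only the results."""
    kept = [img for img in DB if img["brightness"] > 900]   # runs remotely
    return kept, len(kept)                      # only the selection crosses back

_, sent1 = option1()
_, sent2 = option2()
print(sent1, sent2)   # -> 100 9
```

Both options select the same images; the difference is that option 2 moves 9 records over the network instead of 100.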




                                               35
Another example: tourist travel




                                  36
Problems of mobile agents
 Security
   How can I accept mobile agents in my computer? (virus !!!)
 Privacy
   Is it secure to send an agent with the details of my credit
   card, or with my personal preferences?
 Technical management
   Each computer has to be able to “pack” an agent, send it to
   another machine, receive agents from other machines,
    “validate” them, and let them execute locally.




                                                                 37
Kinds of mobility
   Do not confuse
    Mobile agents
      Agents that can move from one computer to
      another
    Agents running in mobile devices
      Agents executing in portable devices such as
      PDAs, Tablet PCs, portable computers or mobile
      phones




                                                       38
11- Other properties …
 Benevolence
  An agent will always try to do what is asked of it
 Veracity
  An agent will not knowingly communicate false
  information
 Character
  Agents must seem honest, trustworthy, …
 Emotion
  Agents must exhibit emotional states, such as
  happiness, sadness, frustration, …




                                                       39
Relationships between properties

  More learning => more reactivity
  More reasoning => more proactivity
  More learning => more autonomy
  Less autonomy => less proactivity
  More reasoning => more rationality




                                       40
Conclusions
 It is almost impossible for an agent to have
 all those properties !!!
 Most basic properties:
   Autonomy
   Reactiveness
   Reasoning and learning
   Communication

  Task for the practical exercise: think about the properties you want
  your agents to have !!!




                                                                    41
Agents versus related technologies
  If agents are autonomous entities that
  display an intelligent behaviour, what
  makes them so different from other well
  known techniques, like object-oriented
  programming or intelligent (knowledge-
  based) systems?




                                            42
Agents and Objects
 Are agents just objects by another
 name?
 Object:
   encapsulates some state
   communicates via message passing
   has methods, corresponding to
   operations that may be performed on
   the state




                                         43
Agents vs Objects (I)
   Agents are autonomous
    Agents embody a stronger notion of
    autonomy than objects
    They decide for themselves whether or not
    to perform an action on request from another
    agent
    When a method is invoked on an object, it is
    always executed
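
The autonomy difference can be made concrete with a small sketch; the class names and the authorisation policy are invented for illustration:

```python
# Sketch of the autonomy difference: an object's method always runs when
# invoked, while an agent decides whether to honour a request.
# Class names and the authorisation policy are invented examples.

class PrinterObject:
    def print_doc(self, doc):
        return f"printed {doc}"          # an invocation always executes

class PrinterAgent:
    def __init__(self, authorised):
        self.authorised = authorised     # internal state drives the decision

    def request_print(self, requester, doc):
        if requester not in self.authorised:
            return "refuse"              # the agent may decline the request
        return f"printed {doc}"

print(PrinterObject().print_doc("report"))               # -> printed report
agent = PrinterAgent(authorised={"alice"})
print(agent.request_print("bob", "report"))              # -> refuse
```

The caller of `request_print` is asking, not commanding: whether the action happens is decided by the agent's own state, which is precisely the stronger notion of autonomy described above.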




                                                   44
Agents vs Objects (II)
   Agents are smart, intelligent
    Capable of flexible (reactive, proactive,
    social) behaviour
    The standard object model has nothing to
    say about such types of behaviour
   Agents are active
    A multi-agent system is inherently multi-
    threaded, in that each agent is assumed to
    have at least one thread of active control




                                                 45
Agents vs Objects (III)
       Objects                Agents
  Encapsulate state     Encapsulate state,
  and control over it   control over it via
  via methods           actions and goals
  Passive – have no     Active – can decide
  control over a        when to act and how
  method execution
  Non autonomous        Autonomous
  Reactive to events    Proactive




                                              46
In summary …
  Objects do it for free
  An object cannot refuse a method invocation
  Agents do it because they want to
  The service requester is authorised, the agent
  has enough resources available, the action is
  convenient for the agent, …
  Agents do it for money
  The agent can get an economic profit




                                                   47
Agents and Expert Systems
  Aren’t agents just expert systems by another
  name?
  Expert systems contain typically disembodied
  ‘expertise’ about some (abstract) domain of
  discourse (e.g. blood diseases)
  Example: MYCIN knows about blood diseases in
  humans
    It has a wealth of knowledge about blood diseases, in
    the form of rules
    A doctor can obtain expert advice about blood diseases
    by giving MYCIN facts, answering questions, and
    posing queries




                                                             48
Agents and Expert Systems
  Main differences:
   agents are situated in an environment:
   MYCIN is not aware of the world — the only
   information that it obtains is by asking
   questions to the user
   agents act:
   MYCIN does not operate on patients
  Sometimes an expert system is agentified
  and included in a MAS




                                                49
Development of agent-oriented systems
  Agents have many interesting and positive
  properties, but …
   It is difficult to design, implement, deploy and
   maintain a MAS
   There aren’t any firmly established agent-
   oriented software engineering methodologies




                                                      50
Pitfalls of Agent Development
 There are several potential problems that
 should be carefully considered when starting
 an agent-based approach
 The main problem categories are:
  conceptual
  analysis and design
  micro (agent) level
  macro (society) level
  implementation




                                                51
Overselling Agents (I)
  Agents are not magic!
  If you can’t do it with ordinary
  software, you probably can’t do
  it with agents
  No evidence that any system
  developed using agent
  technology could not have
  been built just as easily using
  non-agent techniques




                                     52
Overselling agents (II)
   Agents may make it easier to solve certain
   classes of problems…but they do not
   make the impossible possible
   Agents are not AI by a back door
   Don’t completely equate agents and AI




                                                53
Universal Solution?
 Agents have been used in a wide range of
 applications, but they are not a universal
 solution
 For many applications, conventional software
 paradigms (e.g., OO) can be more
 appropriate
 Given a problem for which an agent and a
 non-agent approach appear equally good,
 prefer the non-agent solution!
 In summary: danger of believing that agents
 are the right solution to every problem




                                                54
Don’t Know Why You Want Agents
 Often, projects appear to be going well
 (“We have agents!”), but there is no clear
 vision about where to go with them.
 The lesson: understand your reasons for
 attempting an agent development project,
 and what you expect to gain from it
 Ask yourself: do we really need agent
 technology to solve this problem?




                                               55
Don’t Know What Agents Are Good For

 Having developed some agent
 technology, you search for an
 application to use them
 Putting the cart before the horse!
 The lesson: be sure you understand how and where
 your new technology may be most usefully applied
 Do not attempt to apply it to arbitrary problems and
 resist temptation to apply it to every problem




                                                        56
Confuse Prototypes with Systems
 Prototypes are easy (particularly with nice
 GUI builders!)
 Field-tested production systems are hard
   For instance, how will the agent-based software
   be maintained? (e.g. in a hospital)
 The process of scaling up from a single-
 machine multi-threaded Java application to a
 multi-user distributed system is much harder
 than it appears




                                                     57
A Silver Bullet for Soft. Eng. (I)
  The holy grail of software engineering is a “silver
  bullet”: an order of magnitude improvement in
  software development
  Technologies promoted as the silver bullet:
    COBOL
    Automatic programming
    Expert systems
    Graphical/visual programming
    Object Oriented Programming
    Java




                                                        58
A Silver Bullet for Soft. Eng. (II)

  Agent technology is not a silver bullet
  It is true that there are good reasons to
  believe that agents are a useful way of
  tackling some problems …
  … but these arguments remain largely
  untested in practice




                                              59
Don’t Forget - It’s Distributed (I)

  Distributed systems = one of the most
  complex classes of computer systems to
  design and implement
    Not only different modules, but the
    interconnection between them
  Multi-agent systems tend to be distributed!
  Problems of distribution do not go away, just
  because a system is agent-based




                                                  60
Don’t Forget - It’s Distributed (II)
  A typical multi-agent system will
  be more complex than a typical
  distributed system
    Autonomous entities
    Conflicts between entities
    Dynamic forms of cooperation
    Emergent behaviour
  Recognize distributed systems
  problems
   Make use of distributed systems (DS) and
   distributed AI (DAI) expertise




                                       61
Don’t exploit concurrency (I)
 Many ways of cutting up any problem
  Functional decomposition
  Organizational decomposition
  Physical decomposition
  Resource-based decomposition
 One of the most obvious features of a poor
 multi-agent design is that the amount of
 concurrent problem solving is comparatively
 small or even in extreme cases non-existent




                                               62
Don’t exploit concurrency (II)
  Serial processing in
  distributed system!
      Agent A performs a task,
      sends results to B
      Agent B performs another
      task, sends results to C ...


 Only ever a single thread of control: concurrency,
one of the most important potential advantages of
multi-agent solutions, is not exploited
  If you don’t exploit concurrency, why have an
agent solution?
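
The contrast with a serial pipeline can be sketched with threads; the subtasks, names and timings below are invented, and each "agent" here is simply an independent thread of control:

```python
import threading
import time

# Sketch of exploiting concurrency: three independent subtasks run in
# parallel threads instead of as a serial A -> B -> C pipeline.
# Subtask names and the simulated workload are invented examples.

results = {}

def agent(name, workload):
    time.sleep(workload)          # stand-in for independent problem solving
    results[name] = f"{name}-done"

threads = [threading.Thread(target=agent, args=(n, 0.1)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Running concurrently takes roughly one workload (~0.1 s) instead of
# three in sequence (~0.3 s), because the subtasks do not depend on
# each other's results.

print(sorted(results))   # -> ['A', 'B', 'C']
```

The point of the sketch is the decomposition, not the threads themselves: if B needs A's output and C needs B's, no amount of threading recovers the lost concurrency.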




                                                      63
Want Your Own Architecture (I)
 Agent architectures: designs for building
 agents
 Many agent architectures have been
 proposed over the years
 Great temptation to imagine you need
 your own
 Driving forces behind this belief:
    “Not invented here” mindset
   Intellectual property




                                             64
Want Your Own Architecture (II)
 Problems:
  Architecture development takes years
  No clear payback (too much effort to reinvent
  the wheel)
 Some options:
  Buy one
  Take one off the shelf
  Do without (!)




                                                  65
Use Too Much AI
 Temptation to focus on the agent-specific
 aspects of the application
 Result: an agent framework so overburdened
 with experimental AI techniques that it is unusable
 Fuelled by “feature envy”, where one reads
 about agents that have the ability to learn, plan,
 talk, sing, dance…
 Resist the temptation to believe such features
 are essential in your agent system
 The lesson: build agents with a minimum of AI;
 as success is obtained with such systems,
 progressively evolve them into richer systems




                                                      66
Not Enough AI
 Don’t call your on-off switch an agent!
 Be realistic: it is becoming common to find
 everyday distributed systems referred to as
 MAS
 Problems:
   leads to the term “agent” losing any meaning
   raises expectations of software recipients
   leads to cynicism on the part of software
   developers




                                                  67
See agents everywhere
 “Pure” Agent-Oriented system = everything is an
 agent!
   Agents for addition, subtraction,…
 Naively viewing everything as an agent is
 inappropriate
   Combine agent and non-agent parts of an application
 Choose the right grain size
 More than 10 agents could already be a big system,
 in some circumstances




                                                         68
Too Many Agents
  Individual agents don’t have to be very
  complex to generate complex behaviour at
  the system level
  A large number of agents:
   Can lead to interesting and unexpected
   emergent functionality, but …
   … also to chaotic behaviour
  Lessons
   Keep interactions to a minimum
   Keep protocols simple




                                             69
Too few agents
 Some designers imagine a separate agent
 for every possible task
 Others don’t recognize the value of a
 multi-agent approach at all
 One “all powerful” centralised planning and
 controller agent
 The result is like an OO program with a single class
 Lesson: choose agents of the right size




                                               70
System is anarchic
 Cannot simply bundle a group of agents
 together (except possibly in simulations)
 Most agent systems require system-level
 engineering
 For large systems, or for systems in which
 the society is supposed to act with some
 commonality of purpose, this is particularly
 important
 Organization structure (even in the form of
 formal communication channels) is essential




                                                71
Ignore Available Standards
 There are no established agent standards
 Developers often believe they have no choice
 but to design and build all agent-specific
 components from scratch
 But there are some de facto standards
 Examples:
   CORBA – communication middleware
   OWL – ontology language
   FIPA agent architecture
   FIPA-ACL – agent communication language




                                                72
Readings for this week

   M. Wooldridge: An Introduction to MultiAgent
   Systems – beginning of chapter 2, section 10.3
   Pitfalls of Agent-Oriented Development
   (M.Wooldridge, N.Jennings)
   PFC Turist@ (Alex Viejo)




                                                 73

Microsoft User Copilot Training Slide Deck
AI-driven Assurance Across Your End-to-end Network With ThousandEyes
Advancing precision in air quality forecasting through machine learning integ...
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
Transform-Your-Factory-with-AI-Driven-Quality-Engineering.pdf
“The Future of Visual AI: Efficient Multimodal Intelligence,” a Keynote Prese...
LMS bot: enhanced learning management systems for improved student learning e...
The-2025-Engineering-Revolution-AI-Quality-and-DevOps-Convergence.pdf
NewMind AI Weekly Chronicles – August ’25 Week IV
Rapid Prototyping: A lecture on prototyping techniques for interface design
Transform-Quality-Engineering-with-AI-A-60-Day-Blueprint-for-Digital-Success.pdf
giants, standing on the shoulders of - by Daniel Stenberg
Auditboard EB SOX Playbook 2023 edition.
Aug23rd - Mulesoft Community Workshop - Hyd, India.pdf
5-Ways-AI-is-Revolutionizing-Telecom-Quality-Engineering.pdf
Accessing-Finance-in-Jordan-MENA 2024 2025.pdf
AI.gov: A Trojan Horse in the Age of Artificial Intelligence

Agent properties

  • 1. LECTURE 3: AGENT PROPERTIES
    Artificial Intelligence II – Multi-Agent Systems
    Introduction to Multi-Agent Systems
    URV, Winter-Spring 2010
  • 2. Overview
    What properties should an agent have?
    What kinds of things are not agents?
      Objects
      Expert Systems
    Common problems in agent development
  • 3. Defining agents
    Many possible definitions of “agents”
    Each author provides a set of characteristics/properties that are considered important to the notion of agenthood
    They can be divided into
      Internal: determine the actions within an agent
      External: affect the interaction of the agent with other (computational/human) agents
  • 4. 1-Flexibility
    An intelligent agent is a computer system capable of flexible action in some dynamic environment
    By flexible, we mean:
      reactive
      proactive
      social
  • 5. 2-Reactivity
    If a program’s environment is guaranteed to be fixed, the program does not need to worry about its own success or failure – it just executes blindly
      Example of a fixed environment: a compiler
    The real world is not like that: things change, information is incomplete. Many (most?) interesting environments are dynamic
      Multi-agent world
    Software is hard to build for dynamic domains: programs must take into account the possibility of failure
    A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful)
  • 6. Ways to achieve reactivity
    [Recall lecture from last week]
    Reactive architectures
      [Situation – Action] rules
      Layered, behaviour-based architectures
    Deliberative architectures
      Symbolic world model, long-term goals
      Reasoning, planning
    Hybrid architectures
      Reactive layer + deliberative layer
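As a minimal sketch of the first option, a purely reactive agent can be written as an ordered list of situation–action rules: the first rule whose situation matches the current percept fires. All names and the rule set below are illustrative, not taken from any particular framework.

```python
# A purely reactive agent: behaviour is a priority-ordered list of
# (situation predicate, action) rules, with no internal state or planning.

def reactive_agent(rules, percept):
    """Return the action of the first rule whose situation matches."""
    for situation, action in rules:
        if situation(percept):
            return action
    return "do-nothing"  # default when no rule fires

# Example rule set for a simple vacuum-like agent (invented for the sketch).
rules = [
    (lambda p: p["obstacle"], "turn"),          # highest priority
    (lambda p: p["dirty"],    "clean"),
    (lambda p: True,          "move-forward"),  # catch-all
]

print(reactive_agent(rules, {"obstacle": False, "dirty": True}))
```

Note that the rule order encodes priorities, much like the suppression links of a layered, behaviour-based architecture.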
  • 7. 3-Proactiveness
    Reacting to an environment is relatively easy (e.g., stimulus → response rules)
    But we generally want agents to do things for us, to act on our behalf
    Hence they must exhibit goal-directed behaviour
    Agents should be proactive
  • 8. Aspects of proactiveness
    Generating and attempting to achieve goals
      Behaviour not driven solely by events
    Taking the initiative when appropriate
      Executing actions/giving advice/making recommendations/making suggestions without an explicit user request
    Recognizing opportunities on the fly
      Available resources
      Chances of cooperation
  • 9. Example of proactiveness (I)
    Personal Assistant Agent, running continuously on our mobile phone
      Location tracking (e.g. GPS)
      Knows our preferences
        Cultural activities
        Food
    Can proactively warn us when we are close to an interesting cultural activity, or if it is lunch time and we are close to a restaurant that offers our favourite food
    [Turist@]
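The core of such a proactive assistant can be sketched in a few lines: periodically combine the user's position with a preference profile and push a suggestion without being asked. Everything below (data, function names, the crude distance approximation) is invented for illustration; it is not the actual Turist@ implementation.

```python
import math

def distance_km(a, b):
    # Crude planar approximation (~111 km per degree); fine for a sketch.
    return math.hypot(a[0] - b[0], a[1] - b[1]) * 111

def proactive_suggestions(position, preferences, activities, radius_km=1.0):
    """Activities near the user that match a stated preference."""
    return [a["name"] for a in activities
            if a["category"] in preferences
            and distance_km(position, a["location"]) <= radius_km]

activities = [
    {"name": "Modern Art Museum", "category": "culture",
     "location": (41.118, 1.245)},
    {"name": "Sushi House", "category": "japanese food",
     "location": (41.119, 1.247)},
    {"name": "Football stadium", "category": "sports",
     "location": (41.160, 1.300)},
]

# Called from a timer, not from a user request: that is what makes it proactive.
print(proactive_suggestions((41.118, 1.246),
                            {"culture", "japanese food"}, activities))
```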
  • 10. [image slide]
  • 11. Example of proactiveness (II)
    Set of agents embedded in the home of an elderly or disabled person
    Detects the movement of the person around the house and the actions he/she performs
    Learns the usual daily patterns of behaviour
    Can detect abnormal situations, and proactively send warnings/alarms to family/health services
      E.g. too much time in the same position, a long time in the bathroom, a whole day without going into the kitchen, ...
  • 12. [image slide]
  • 13. Balancing Reactive and Goal-Oriented Behaviour
    We want our agents to be reactive, responding to changing conditions in an appropriate (timely) fashion
    We want our agents to systematically work towards long-term goals
    These two considerations can be at odds with one another
    Designing an agent that can balance the two remains an open research problem [recall hybrid architectures from last week]
  • 14. 4-Social Ability
    The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account
      Some goals can only be achieved with the cooperation of others
    Similarly for many computer environments: witness the Internet
    Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others
  • 15. Requirements for communication
    Agent communication language
      FIPA-ACL
        Message types
        Message attributes
    Agent communication protocols
    Languages to represent the content of the messages between agents
    Shared ontologies
    World-wide standards
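To make these elements concrete, a FIPA-ACL message pairs a performative (the message type, e.g. `request` or `inform`) with attribute/value pairs naming the sender, receiver, content language, ontology and protocol. The sketch below follows the FIPA-ACL string encoding; the agent names, content and ontology are hypothetical.

```
(request
  :sender    (agent-identifier :name tourist-assistant@platform)
  :receiver  (set (agent-identifier :name museum-info@platform))
  :content   "((action (agent-identifier :name museum-info@platform)
                 (opening-hours modern-art-museum)))"
  :language  fipa-sl
  :ontology  tourism-ontology
  :protocol  fipa-request
  :reply-with query-17)
```

The `:language` and `:ontology` attributes are what tie the message to the shared content language and ontology mentioned above.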
  • 16. High-level activities
    Communication is a first step towards sophisticated activities:
      Coordination
        How to divide a task between a group of agents
        Distributed planning
      Cooperation
        Share intermediate results
        Share resources
        Distributed problem solving
      Negotiation [e-commerce]
        Conditions in an economic transaction
        Find the agent that can provide a service with the best conditions
    [Second part of the course]
  • 17. Other aspects related to communication
    Security issues
      Authentication
      Encryption
    Finding other agents that provide services, matchmaking
      Quite difficult in open systems
    Trust
      To what extent can we trust the other agents of the system?
      Reputation models
  • 18. 5-Rationality
    An agent will act in order to achieve its goals
    It will not act in such a way as to prevent its goals from being achieved
      At least insofar as its beliefs permit
    For instance, it will not apply deductive procedures without a purpose, as a CLIPS-style system would
  • 19. 6-Reasoning capabilities
    An essential aspect of intelligent/rational behaviour
    Knowledge base with beliefs about the world
    Ability to infer and extrapolate based on current knowledge and experiences
    Capacity to make plans
    This is the characteristic that distinguishes an intelligent agent from a more “robotic”, reactive-like agent
  • 20. Kinds of reasoning in AI (I)
    Knowledge-based systems / expert systems
      Reasoning techniques specialised in the system’s domain
      Forward-chaining, backward-chaining, hybrid
    Rule-based systems
      Knowledge is represented as a set of rules
      Detect – Select – Apply execution cycle
      CLIPS
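The detect–select–apply cycle of a forward-chaining rule engine can be sketched as follows. This is a toy Python analogue for illustration only (CLIPS itself uses its own rule language and a far more sophisticated matching algorithm); the rules and facts are invented.

```python
# Toy forward-chaining engine: rules are (set-of-conditions, conclusion)
# pairs, and the engine loops detect -> select -> apply until no rule
# can add a new fact.

def forward_chain(facts, rules):
    """Apply rules until no rule adds a new fact; return the final facts."""
    facts = set(facts)
    while True:
        # Detect: rules whose conditions all hold and whose conclusion is new
        applicable = [(cond, concl) for cond, concl in rules
                      if cond <= facts and concl not in facts]
        if not applicable:
            return facts
        # Select: a real engine uses a conflict-resolution strategy;
        # here we simply take the first applicable rule.
        _, conclusion = applicable[0]
        # Apply: assert the rule's conclusion as a new fact.
        facts.add(conclusion)

rules = [
    ({"has-fever", "has-cough"}, "has-flu"),
    ({"has-flu"}, "needs-rest"),
]

print(forward_chain({"has-fever", "has-cough"}, rules))
```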
  • 21. Kinds of reasoning in AI (II)
    Case-based reasoning
      Using similarity to previously solved problems
    Approximate reasoning
      Fuzzy logic, Bayesian networks, probabilities, etc.
  • 22. 7-Learning
    A basic component of intelligent behaviour
    It can be considered as the [automatic] improvement of the performance of the agent over time
    Machine Learning: a large area within AI
  • 23. Ways to improve
    Make fewer mistakes
    Do not repeat computations performed in the past
    Find solutions more quickly
    Find better solutions
    Solve a wider range of problems
    Learn user preferences and adapt the behaviour accordingly
  • 24. Learning tourist profiles in Turist@
    The tourist may fill in an initial questionnaire
    Analyze the tourist’s queries
      E.g. science-fiction films
    Analyze the tourist’s votes
      Museum of Modern Art: very good
    Cluster tourists with similar preferences
    Recommend activities highly valued by tourists in the same class
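The last two steps can be sketched as a tiny similarity-based recommender: group the tourist with the most similar existing profile and suggest what that profile rated highly. The profiles, similarity measure and function names below are invented for the example; the real Turist@ system is considerably richer.

```python
# Illustrative similarity-based recommendation over preference profiles.

def jaccard(a, b):
    """Overlap between two sets of liked categories (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target_likes, profiles):
    """Suggest the highly rated items of the most similar other tourist."""
    best = max(profiles, key=lambda p: jaccard(target_likes, p["likes"]))
    return sorted(best["rated_high"])

profiles = [
    {"likes": {"modern-art", "museums"}, "rated_high": ["Modern Art Museum"]},
    {"likes": {"football", "beach"},     "rated_high": ["Stadium tour"]},
]

print(recommend({"museums", "sci-fi-films"}, profiles))
```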
  • 25. Advantages of learning systems
    Can adapt better to a dynamic environment, or to unknown situations
      Without the need for an exhaustive set of rules defined at design time
    Can leverage previous positive/negative experiences to act more intelligently in the future
  • 26. 8-Autonomy
    A very important difference between agents and traditional programs
    The ability to pursue goals in an autonomous way, without direct continuous interaction/commands from the user
    Given a vague/imprecise goal, the agent must determine the best way to attain it
  • 27. Autonomous decisions
    Given a certain goal ...
      Which actions should I perform?
      How should I perform these actions?
      Should I seek/request/(buy !) help/collaboration from other agents?
    Less work for the human user !!!
  • 28. Autonomy requirements
    To have autonomy, it is necessary for an agent …
      To have control over its own actions
        An agent cannot be obliged to do anything
      To have control over its internal state
        The agent’s state cannot be externally modified by another agent
      To have the appropriate access to the resources and capabilities needed to perform its tasks
        E.g. access to the Internet, communication channels with other agents
  • 29. Autonomy limitations
    Sometimes the user may restrict the autonomy of the agent
    For instance, the agent could have the autonomy to search the Internet for the best place to buy a given book ...
    … but not the autonomy to actually buy the book, using the credit card details of the user
  • 30. Issues
    Autonomy also raises complex issues:
      Legal issues
        Who is responsible for the agent’s actions?
      Ethical issues
        To what extent should decisions be delegated to computational agents?
        E.g. agents in medical decision support systems
  • 31. 9-Temporal continuity
    Agents are continuously running processes
      Running active in the foreground, or sleeping/passive in the background until a certain message arrives
    Not once-only computations or scripts that map a single input to a single output and then terminate
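A minimal sketch of this "continuously running, sleeps until a message arrives" shape is an agent thread blocked on a message queue. The message names and shutdown convention below are invented for the example.

```python
import queue
import threading

# The agent is a long-lived loop: it sleeps on its inbox and wakes up
# only when a message arrives, rather than computing one result and exiting.

def agent_loop(inbox, log):
    while True:
        msg = inbox.get()        # blocks (agent "sleeps") until a message arrives
        if msg == "shutdown":    # illustrative termination convention
            break
        log.append(f"handled:{msg}")

inbox, log = queue.Queue(), []
agent = threading.Thread(target=agent_loop, args=(inbox, log))
agent.start()

for m in ("ping", "request-price", "shutdown"):
    inbox.put(m)
agent.join()
print(log)
```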
  • 32. 10-Mobility
    Mobile agents can be executing in a given computer and, at some point in time, move physically through a network (e.g. the Internet) to another computer, and continue their execution there
    In most applications the idea is to go somewhere to perform a given task and then come back to the initial host with the obtained results
  • 33. Example of use: access to a DB
    Imagine there is a DB in Australia with thousands of images, and we need to select some images with specific properties
    We have to make some computations on the images to decide whether to select them or not
  • 34. Option 1: remote requests
    The agent in our computer makes hundreds of requests to the agent managing the DB in Australia
      Continuous connection required
      Heavy use of the bandwidth
      All computations made on our computer
  • 35. Option 2: local access
    Establish connection
    Send a specialised agent to the Australian computer holding the DB
      <connection can be dismissed here>
    Make local accesses to the DB, analysing the images there
    Re-establish connection
    Our agent comes back with the selected images
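A back-of-the-envelope comparison shows why option 2 can pay off: option 1 moves every image over the network, while option 2 moves only a small agent plus the selected images. All sizes below are invented for illustration.

```python
# Rough bandwidth model of the two options (sizes in KB, purely illustrative).

IMAGE_KB = 500   # assumed average image size
AGENT_KB = 50    # assumed size of the serialised mobile agent

def remote_requests(n_images):
    """Option 1: every image crosses the network to be analysed locally."""
    return n_images * IMAGE_KB

def mobile_agent(n_images, n_selected):
    """Option 2: ship the agent to the DB, bring back only the matches."""
    return AGENT_KB + n_selected * IMAGE_KB

print(remote_requests(10_000))      # KB transferred by option 1
print(mobile_agent(10_000, 25))     # KB transferred by option 2
```

The advantage obviously depends on the selectivity of the query: if nearly all images are selected, option 2 loses most of its benefit.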
  • 37. Problems of mobile agents
    Security
      How can I accept mobile agents in my computer? (virus !!!)
    Privacy
      Is it safe to send an agent with the details of my credit card, or with my personal preferences?
    Technical management
      Each computer has to be able to “pack” an agent, send it to another machine, receive agents from other machines, “validate” them, and let them execute locally
  • 38. Kinds of mobility
    Do not confuse
      Mobile agents
        Agents that can move from one computer to another
      Agents running in mobile devices
        Agents executing in portable devices such as PDAs, Tablet PCs, portable computers or mobile phones
  • 39. 11-Other properties …
    Benevolence
      An agent will always try to do what is asked of it
    Veracity
      An agent will not knowingly communicate false information
    Character
      Agents must seem honest, trustworthy, …
    Emotion
      Agents must exhibit emotional states, such as happiness, sadness, frustration, …
  • 40. Relationships between properties
    More learning => more reactivity
    More reasoning => more proactivity
    More learning => more autonomy
    Less autonomy => less proactivity
    More reasoning => more rationality
  • 41. Conclusions
    It is almost impossible for an agent to have all those properties !!!
    Most basic properties:
      Autonomy
      Reactiveness
      Reasoning and learning
      Communication
    Task in the practical exercise: think about the properties you want your agents to have !!!
  • 42. Agents versus related technologies
    If agents are autonomous entities that display an intelligent behaviour, what makes them so different from other well-known techniques, like object-oriented programming or intelligent (knowledge-based) systems?
  • 43. Agents and Objects
    Are agents just objects by another name?
    Object:
      encapsulates some state
      communicates via message passing
      has methods, corresponding to operations that may be performed on the state
  • 44. Agents vs Objects (I)
    Agents are autonomous
      Agents embody a stronger notion of autonomy than objects
      They decide for themselves whether or not to perform an action on request from another agent
      When a method is invoked on an object, it is always executed
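The autonomy difference can be made concrete with a small sketch (class names and the refusal policy are invented for the example): a method invoked on an object always runs, while an agent checks its own constraints before deciding whether to honour a request.

```python
class Counter:
    """A plain object: an invoked method always executes."""
    def __init__(self):
        self.n = 0
    def increment(self):
        self.n += 1          # no say in the matter

class CounterAgent:
    """An agent-like wrapper: it decides whether to honour a request."""
    def __init__(self, limit):
        self.n, self.limit = 0, limit
    def request_increment(self, requester):
        # The agent consults its own goals/constraints before acting.
        if requester != "owner" or self.n >= self.limit:
            return False     # request refused
        self.n += 1
        return True          # request honoured

obj = Counter()
obj.increment()              # always succeeds

ag = CounterAgent(limit=1)
print(ag.request_increment("owner"), ag.request_increment("stranger"))
```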
  • 45. Agents vs Objects (II)
    Agents are smart, intelligent
      Capable of flexible (reactive, proactive, social) behaviour
      The standard object model has nothing to say about such types of behaviour
    Agents are active
      A multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control
  • 46. Agents vs Objects (III)
    Objects                                      Agents
    Encapsulate state, and control over it       Encapsulate state, and control over it
    via methods                                  via actions and goals
    Passive – have no control over a             Active – can decide when to act
    method execution                             and how
    Non-autonomous                               Autonomous
    Reactive to events                           Proactive
  • 47. In summary …
    Objects do it for free
      An object cannot refuse a method invocation
    Agents do it because they want to
      The service requester is authorised, the agent has enough resources available, the action is convenient for the agent, …
    Agents do it for money
      The agent can get an economic profit
  • 48. Agents and Expert Systems
    Aren’t agents just expert systems by another name?
    Expert systems typically contain disembodied ‘expertise’ about some (abstract) domain of discourse (e.g. blood diseases)
    Example: MYCIN knows about blood diseases in humans
      It has a wealth of knowledge about blood diseases, in the form of rules
      A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries
  • 49. Agents and Expert Systems
    Main differences:
      Agents are situated in an environment: MYCIN is not aware of the world — the only information it obtains is by asking questions to the user
      Agents act: MYCIN does not operate on patients
    Sometimes an expert system is agentified and included in a MAS
  • 50. Development of agent-oriented systems
    Agents have many interesting and positive properties, but …
      It is difficult to design, implement, deploy and maintain a MAS
      There aren’t any firmly established agent-oriented software engineering methodologies
  • 51. Pitfalls of Agent Development
    There are several potential problems that should be carefully considered when starting an agent-based approach
    The main problem categories are:
      conceptual
      analysis and design
      micro (agent) level
      macro (society) level
      implementation
  • 52. Overselling Agents (I)
    Agents are not magic!
      If you can’t do it with ordinary software, you probably can’t do it with agents
    No evidence that any system developed using agent technology could not have been built just as easily using non-agent techniques
  • 53. Overselling agents (II)
    Agents may make it easier to solve certain classes of problems… but they do not make the impossible possible
    Agents are not AI by a back door
      Don’t completely equate agents and AI
  • 54. Universal Solution?
    Agents have been used in a wide range of applications, but they are not a universal solution
    For many applications, conventional software paradigms (e.g., OO) can be more appropriate
      Given a problem for which an agent and a non-agent approach appear equally good, prefer the non-agent solution!
    In summary: there is a danger of believing that agents are the right solution to every problem
  • 55. Don’t Know Why You Want Agents
    Often, projects appear to be going well (“We have agents!”), but there is no clear vision about where to go with them
    The lesson: understand your reasons for attempting an agent development project, and what you expect to gain from it
    Ask yourself: do we really need agent technology to solve this problem?
  • 56. Don’t Know What Agents Are Good For
    Having developed some agent technology, you search for an application to use it
      Putting the cart before the horse!
    The lesson: be sure you understand how and where your new technology may be most usefully applied
    Do not attempt to apply it to arbitrary problems, and resist the temptation to apply it to every problem
  • 57. Confuse Prototypes with Systems
    Prototypes are easy (particularly with nice GUI builders!)
    Field-tested production systems are hard
      For instance, how will the agent-based software be maintained? (e.g. in a hospital)
    The process of scaling up from a single-machine multi-threaded Java application to a multi-user distributed system is much harder than it appears
  • 58. A Silver Bullet for Soft. Eng. (I)
    The holy grail of software engineering is a “silver bullet”: an order-of-magnitude improvement in software development
    Technologies promoted as the silver bullet:
      COBOL
      Automatic programming
      Expert systems
      Graphical/visual programming
      Object Oriented Programming
      Java
  • 59. A Silver Bullet for Soft. Eng. (II)
    Agent technology is not a silver bullet
    It is true that there are good reasons to believe that agents are a useful way of tackling some problems …
    … but these arguments remain largely untested in practice
  • 60. Don’t Forget - It’s Distributed (I)
    Distributed systems = one of the most complex classes of computer systems to design and implement
      Not only the different modules, but also the interconnections between them
    Multi-agent systems tend to be distributed!
    Problems of distribution do not go away just because a system is agent-based
  • 61. Don’t Forget - It’s Distributed (II)
    A typical multi-agent system will be more complex than a typical distributed system
      Autonomous entities
      Conflicts between entities
      Dynamic forms of cooperation
      Emergent behaviour
    Recognize distributed systems problems
    Make use of DS-DAI expertise
  • 62. Don’t exploit concurrency (I)
    Many ways of cutting up any problem
      Functional decomposition
      Organizational decomposition
      Physical decomposition
      Resource-based decomposition
    One of the most obvious features of a poor multi-agent design is that the amount of concurrent problem solving is comparatively small, or even, in extreme cases, non-existent
  • 63. Don’t exploit concurrency (II)
    Serial processing in a distributed system!
      Agent A performs a task, sends results to B
      Agent B performs another task, sends results to C
      ...
    Only ever a single thread of control: concurrency, one of the most important potential advantages of multi-agent solutions, is not exploited
    If you don’t exploit concurrency, why have an agent solution?
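The contrast can be sketched as follows: instead of a serial A → B → C chain, several worker agents process disjoint parts of the problem at the same time and their partial results are combined. The decomposition (each worker sums squares over a slice of the data) and all names are invented for illustration.

```python
import threading

def worker(chunk, results, idx):
    """One agent's subtask: solve its own slice of the problem."""
    results[idx] = sum(x * x for x in chunk)

data = list(range(100))
chunks = [data[i::4] for i in range(4)]   # resource-based decomposition
results = [0] * 4

# Four agents working concurrently, rather than one thread passing
# results along a chain.
threads = [threading.Thread(target=worker, args=(c, results, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))    # combined result of the concurrent subtasks
```

(For CPU-bound work CPython threads do not run in parallel because of the GIL, but the decomposition pattern is the same for processes or distributed agents.)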
  • 64. Want Your Own Architecture (I)
    Agent architectures: designs for building agents
    Many agent architectures have been proposed over the years
    Great temptation to imagine you need your own
    Driving forces behind this belief:
      “Not designed here” mindset
      Intellectual property
  • 65. Want Your Own Architecture (II)
    Problems:
      Architecture development takes years
      No clear payback (too much effort to reinvent the wheel)
    Some options:
      Buy one
      Take one off the shelf
      Do without (!)
  • 66. Use Too Much AI
    Temptation to focus on the agent-specific aspects of the application
      Result: an agent framework too overburdened with experimental AI techniques to be usable
    Fuelled by “feature envy”, where one reads about agents that have the ability to learn, plan, talk, sing, dance…
      Resist the temptation to believe such features are essential in your agent system
    The lesson: build agents with a minimum of AI; as success is obtained with such systems, progressively evolve them into richer systems
  • 67. Not Enough AI
    Don’t call your on-off switch an agent!
    Be realistic: it is becoming common to find everyday distributed systems referred to as MAS
    Problems:
      leads to the term “agent” losing any meaning
      raises the expectations of software recipients
      leads to cynicism on the part of software developers
  • 68. See agents everywhere
    “Pure” agent-oriented system = everything is an agent!
      Agents for addition, subtraction, …
    Naively viewing everything as an agent is inappropriate
      Combine agent and non-agent parts of an application
    Choose the right grain size
      More than 10 agents could already be a big system, in some circumstances
  • 69. Too Many Agents
    Individual agents don’t have to be very complex to generate complex behaviour at the system level
    A large number of agents:
      Can lead to interesting and unexpected emergent functionality, but …
      … also to chaotic behaviour
    Lessons:
      Keep interactions to a minimum
      Keep protocols simple
  • 70. Too few agents
    Some designers imagine a separate agent for every possible task
    Others don’t recognize the value of a multi-agent approach at all
      One “all powerful” centralised planning and controller agent
      The result is like an OO program with 1 class
    Lesson: choose agents of the right size
  • 71. System is anarchic
    You cannot simply bundle a group of agents together (except possibly in simulations)
    Most agent systems require system-level engineering
    For large systems, or for systems in which the society is supposed to act with some commonality of purpose, this is particularly important
    Organization structure (even in the form of formal communication channels) is essential
  • 72. Ignore Available Standards
    There are no fully established agent standards
      Developers often believe they have no choice but to design and build all agent-specific components from scratch
    But there are some de facto standards; examples:
      CORBA – communication middleware
      OWL – ontology language
      FIPA agent architecture
      FIPA-ACL – agent communication language
  • 73. Readings for this week
    M. Wooldridge: An Introduction to MultiAgent Systems – beginning of chapter 2, section 10.3
    Pitfalls of Agent-Oriented Development (M. Wooldridge, N. Jennings)
    PFC Turist@ (Alex Viejo)