ARTIFICIAL INTELLIGENCE
AND
MACHINE LEARNING
21CS52
By Prof. Sanjay Kumar N V
KIT, Tiptur
Module 1:
Chapter 1: Introduction:
‐ What is AI?
‐ Foundations and History of AI
Chapter 3: Problem solving:
‐ Problem solving agents
‐ Example problems
‐ Searching for Solutions
‐ Uninformed Search Strategies: Breadth First Search, Depth First Search
Chapter 1: Introduction
• This chapter provides an introduction to AI which will help you to understand the concepts behind Artificial Intelligence.
• In this session, we will see various popular topics such as the history of AI, applications of AI, etc.
• In today's world, technology is growing very fast, and we come into contact with new technologies day by day.
What is AI?
• Artificial Intelligence is composed of two words
Artificial and Intelligence
• where Artificial defines "man-made," and intelligence
defines "thinking power", hence AI means "a man-made
thinking power”. So, we can define AI as:
• "It is a branch of computer science by which we can create
intelligent machines which can behave like a human, think
like humans, and able to make decisions."
Introduction
• One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines.
• Artificial Intelligence is now all around us.
• It currently spans a variety of subfields, ranging from the general to the specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.
Definition: A branch of computer science dealing with the
simulation of intelligent behavior in computers.
Objective:
• The capability of a machine to imitate intelligent human
behavior.
What are the tasks it can do:
• A computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Approaches to AI
• The field of artificial intelligence, or AI, goes further still: it attempts not just to
understand but also to build intelligent entities.
• AI is one of the newest fields in science and engineering. Work started in earnest
soon after World War II, and the name itself was coined in 1956.
• In Figure 1.1 we see eight definitions of AI, laid out along two dimensions. The
definitions on top are concerned with thought processes and reasoning, whereas
the ones on the bottom address behavior.
• The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows.
• Historically, all four approaches to AI have been followed, each by different people
with different methods.
• Let us look at the four approaches in more detail.
• A human-centered approach must be in part an empirical science, involving
observations and hypotheses about human behavior. A rationalist approach involves
a combination of mathematics and engineering.
1. Acting humanly: The Turing Test approach:
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence.
• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
• The computer would need to possess the following capabilities:
• natural language processing: to enable it to communicate successfully in English
• knowledge representation: to store what it knows or hears
• automated reasoning: to use the stored information to answer questions and to draw new
conclusions
• machine learning: to adapt to new circumstances and to detect and extrapolate
patterns
• To pass the total Turing Test, the computer will also need:
1] computer vision: to perceive objects, and
2] robotics: to manipulate objects and move about.
2. Thinking humanly: The cognitive modeling approach:
• To say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds.
• There are three ways to do this:
1) through introspection: trying to catch our own thoughts as they go by;
2) through psychological experiments: observing a person in action; and
3) through brain imaging: observing the brain in action.
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
3. Thinking rationally: The “laws of thought” approach
• The Greek philosopher Aristotle was one of the first to attempt to codify
“right thinking,” that is, irrefutable reasoning processes.
• There are two main obstacles to this approach.
• First, it is not easy to take informal knowledge and state it in the formal terms
required by logical notation, particularly when the knowledge is less than
100% certain.
• Second, there is a big difference between solving a problem “in principle” and
solving it in practice.
4. Acting rationally: The rational agent approach
• An agent is just something that acts.
• All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when
there is uncertainty, the best expected outcome.
Foundations of AI
1.Philosophy
2.Mathematics
3.Economics
4.Neuroscience
5.Psychology
6.Computer engineering
7.Control theory and cybernetics
8.Linguistics
1.Philosophy:
• Descartes, a prominent philosopher of the 17th century, played a crucial role in shaping the
foundations of modern philosophy.
• He was a staunch advocate of rationalism and he was also a proponent of dualism.
• He held that there is a part of the human mind (or soul or spirit) that is outside of nature,
exempt from physical laws.
• Animals, on the other hand, did not possess this dual quality; they could be treated as machines (materialism).
2.Mathematics:
• The three fundamental areas are logic, computation, and probability.
• Philosophers staked out some of the fundamental ideas of AI, but the leap to formal science required a level of mathematical formalization in these three areas:
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
3.Economics:
• AI takes the following ideas from economics:
• How should we make decisions so as to maximize payoff (utility)?
• Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions made under uncertainty. This is suitable for large economies, where each agent need pay no attention to the actions of other agents as individuals.
• How should we do this when others may not go along?
• For small economies, the situation is much more like a game: the action of one player can significantly affect the utility of another. Unlike decision theory, game theory does not offer an unambiguous prescription for selecting actions.
• How should we do this when the payoff may be far in the future?
• For the most part, economists did not address this question. The topic was pursued in the field of operations research (OR).
4.Neuroscience
• Neuroscience is the study of the nervous system, particularly the brain.
• The study of how the brain processes information directly helps in the development of AI.
• It has also long been known that human brains are somehow different; in about 335 B.C. Aristotle wrote, “Of all the animals, man has the largest brain in proportion to his size.”
5.Psychology
• Psychology tries to answer the following question: How do humans and animals think and act?
• Modern research shows that computer models can be used to address the psychology of memory, language, and logical thinking.
• It is now a common view among psychologists that "a cognitive theory should be like a computer program."
6.Computer engineering
• How can we build an efficient computer?
• For artificial intelligence to succeed, we need two things: intelligence and an artifact.
• The computer has been the artifact of choice.
• The modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries.
7.Control theory and cybernetics.
• How can artifacts operate under their own control?
• Control theory and cybernetics deal with self-controlling machines.
• Modern control theory has as its goal designing systems that maximize an objective function over time.
• This roughly matches the AI view: designing systems that behave optimally.
8. Linguistics:
• Language is directly related to thought. Modern linguistics and AI were born at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.
History of AI
• Artificial Intelligence is not a new term or a new technology for researchers; it is much older than you might imagine.
• There are even myths of mechanical men in ancient Greek and Egyptian mythology.
• The following milestones in the history of AI trace the journey from AI's beginnings to present-day developments.
Maturation of Artificial Intelligence (1943-1952)
Year 1943:
• The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
Year 1949:
• Donald Hebb demonstrated an updating rule for modifying the connection strength between
neurons. His rule is now called Hebbian learning.
Year 1950:
• Alan Turing, an English mathematician, pioneered machine learning in 1950.
• Turing published "Computing Machinery and Intelligence," in which he proposed a test to check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)
Year 1955:
• Allen Newell and Herbert A. Simon created the first artificial intelligence program, named the "Logic Theorist".
• This program proved 38 of 52 mathematics theorems and found new, more elegant proofs for some of them.
Year 1956:
• The word "Artificial Intelligence" first adopted by American Computer scientist John
McCarthy at the Dartmouth Conference.
• For the first time, AI coined as an academic field.
Year 1966:
• Researchers emphasized developing algorithms that could solve mathematical problems.
• Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
Year 1972:
• The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
• The period between 1974 and 1980 was the first AI winter. An AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.
A boom of AI (1980-1987)
• Year 1980: After the AI winter, AI came back with "expert systems": programs that emulate the decision-making ability of a human expert.
• In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
The second AI winter (1987-1993)
• The period between 1987 and 1993 was the second AI winter.
• Investors and governments again stopped funding AI research because of high costs and limited results; even successful expert systems such as XCON proved expensive to maintain.
The emergence of intelligent agents (1993-2011)
• Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a reigning world chess champion.
• Year 2002: For the first time, AI entered the home, in the form of Roomba, a vacuum cleaner.
• Year 2006: By 2006, AI had reached the business world. Companies like Facebook, Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence
(2011-present)
• Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
• Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.
• Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test."
• Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
• Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not notice she was talking to a machine.
THE STATE OF THE ART : AI
• Robotic vehicles: Driverless Cars
• Speech recognition: Communication Guidelines
• Autonomous planning and scheduling: Robots to Space
• Game playing: Computer chess
• Spam fighting: filtering spam-generated messages
• Logistics planning: Transportation
• Robotics: Military Applications
• Machine Translation: Language Conversion
Chapter 3: Problem solving
Contents:
‐ Problem solving agents
‐ Example problems
‐ Searching for Solutions
‐ Uninformed Search Strategies:
1. Breadth First Search
2. Depth First Search
• PEAS: Performance measure, Environment, Actuators, Sensors
• Example, for an automated taxi: performance measure (safe, fast, legal, comfortable trip), environment (roads, other traffic, pedestrians, customers), actuators (steering, accelerator, brake, horn), sensors (cameras, speedometer, GPS).
Types of Environment in AI
Contents:
• Fully Observable / Partially Observable
• Deterministic / Non-deterministic
• Episodic / Non-episodic (Sequential)
• Static / Dynamic
• Discrete / Continuous
• Single agent / Multiple agents
Fully Observable / Partially Observable
If an agent’s sensors give it access to the complete state of the environment at each point in time, then the task environment is fully observable; otherwise it is only partially observable.
Chess is fully observable: a player gets to see the whole board.
Poker is partially observable: a player sees only his own cards, not the cards of everyone else in the game.
Deterministic/Non-Deterministic
If the next state of the environment is completely determined by the current state and
the actions of the agent, then the environment is deterministic; otherwise it is non-
deterministic.
Examples:
Deterministic environment: Tic Tac Toe game
Self-driving vehicles are a classic example of Non-Deterministic AI
processes.
Episodic / Non-episodic(Sequential)
The agent's experience is divided into atomic "episodes" (each episode consists
of the agent perceiving and then performing a single action) and the choice of
action in each episode depends only on the episode itself. Episodic
environments are much simpler because the agent does not need to think
ahead.
An environment is sequential if current decisions affect future decisions or rely on previous ones.
Examples:
Episodic environment: mail sorting system
Non-episodic environment: chess game
Static / Dynamic
If the environment does not change while an agent is acting, then it is static;
otherwise it is dynamic.
OR
An environment is static if only the actions of an agent modify it. It is dynamic on
the other hand if other processes are operating on it.
• Dynamic if the environment may change over time.
• Static if nothing (other than the agent) in the environment changes.
Examples:
Static: a crossword puzzle; the puzzle does not change while the agent deliberates.
Dynamic: a football game; the other players keep acting, so the environment changes with every action.
Discrete / Continuous
If there are a limited number of distinct, clearly defined, states of the environment, the
environment is discrete.
E.g. A game of chess or checkers where there are a set number of moves
Continuous = signals constantly coming into sensors, actions continually changing.
E.g. taxi driving: there could be a route from anywhere to anywhere else, and speed and location take continuous values.
Single agent / Multiple agents
• An agent operating by itself in an environment is single agent!
• Multi agent is when other agents are present!
Examples:
Single agent: solving a crossword puzzle.
Multi-agent: an automated taxi among other vehicles, or the other players in a football team (or the opposing team).
Condition–action rule (situation–action rules, productions, if–then rules)
Ex: blinking when something approaches the eye
Behavior: These agents respond directly to percepts (i.e., what they sense) without
considering the history of their previous actions. They follow simple "condition-
action" rules where a specific condition triggers a predefined action.
Example: A thermostat in a room.
It senses the current temperature (percept) and compares it with the desired
setpoint. If the temperature is too low, it turns on the heater, and if it's too high, it
turns off the heater.
Justification: The thermostat does not need to consider its past states or future
goals. It just reacts to the current temperature using predefined rules.
Simple Reflex Agents
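To make the condition-action idea concrete, here is a minimal Python sketch of the thermostat example above; the setpoint, dead band, sensor readings, and action names are illustrative assumptions, not part of the slides.

```python
# A minimal simple reflex agent: the thermostat example above.
# Setpoint, dead band, and percept values are illustrative assumptions.

def thermostat_agent(current_temp, setpoint=20.0, dead_band=0.5):
    """Map the current percept (temperature) directly to an action."""
    if current_temp < setpoint - dead_band:
        return "heater_on"    # condition: too cold -> action: start heating
    if current_temp > setpoint + dead_band:
        return "heater_off"   # condition: too warm -> action: stop heating
    return "no_op"            # within the dead band: do nothing

# The agent keeps no history: each percept is handled in isolation.
for temp in [18.2, 19.8, 20.4, 21.1]:
    print(temp, "->", thermostat_agent(temp))
```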
Model-based reflex agents
• Keep track of the part of the world it can’t see now
• The agent should maintain some sort of internal state that depends
on the percept history and thereby reflects at least some of the
unobserved aspects of the current state.
• Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program: how the world evolves independently of the agent, and how the agent's own actions affect the world.
• This knowledge about “how the world works”—whether implemented in simple Boolean circuits or in complete scientific theories—is called a model of the world.
• An agent that uses such a model is called a model-based agent.
Behavior: These agents maintain some internal state of the world based on a model
of how the world works. They use this model to keep track of unobservable aspects
of the environment and make decisions based on both the current percept and the
internal state.
Example: A robot vacuum cleaner.
It uses a map (model) of the environment to navigate through a room. It senses
obstacles (current percept) and remembers where it has already cleaned (internal
state) to avoid re-cleaning the same areas.
Justification: The vacuum cleaner relies on a model of the room's layout and its internal state (i.e., areas that have been cleaned) to decide its next move, rather than only reacting to the current state.
Model-Based Reflex Agents
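As a rough sketch of the robot-vacuum example above, the agent below keeps an internal set of already-cleaned cells (its state) and combines it with the current percept to pick an action; the grid layout, percept format, and movement preference are assumptions made for illustration.

```python
# A model-based reflex agent sketch: a vacuum that remembers cleaned cells.

class VacuumAgent:
    def __init__(self):
        self.cleaned = set()     # internal state: cells already cleaned
        self.position = (0, 0)

    def act(self, percept):
        """percept = (position, is_dirty); decision uses percept + state."""
        self.position, is_dirty = percept
        if is_dirty:
            self.cleaned.add(self.position)
            return "suck"
        # Model of the world: prefer a neighbouring cell not yet cleaned.
        x, y = self.position
        for nxt in [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]:
            if nxt not in self.cleaned and min(nxt) >= 0:
                return ("move", nxt)
        return "no_op"

agent = VacuumAgent()
print(agent.act(((0, 0), True)))   # suck
print(agent.act(((0, 0), False)))  # ('move', (1, 0))
```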
Behavior: These agents act to achieve specific goals by considering different
actions and their effects. They evaluate their actions based on whether they move
them closer to achieving a goal.
Example: A GPS navigation system.
It has a goal to reach a specific destination. It takes into account the current location,
available routes, and updates (such as traffic conditions) to plan the best path.
Justification: The GPS system chooses actions (turn left, turn right) by considering
whether they help achieve the goal of reaching the destination, rather than just
reacting to immediate sensory inputs.
Goal-Based Agents
Behavior: These agents not only aim to achieve goals but also try to maximize
some measure of utility (i.e., satisfaction or performance). They choose actions
based on which ones provide the highest expected utility.
Example: A self-driving car.
It has the goal of reaching a destination, but it also considers other factors such as
safety, speed, fuel efficiency, and comfort. It selects routes and maneuvers that
balance these factors to maximize overall utility.
Justification: The self-driving car doesn't just aim to reach the destination but does
so in a way that optimizes various performance metrics like minimizing travel time
and ensuring safety, considering trade-offs between them.
Utility-Based Agents
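A minimal sketch of utility-based action selection for the self-driving-car example above: each candidate route gets a weighted utility score and the agent picks the maximum. The routes, attribute scores, and weights are invented for illustration.

```python
# Utility-based selection: maximize a weighted sum of normalized attributes.

routes = {
    "highway": {"time": 0.9, "safety": 0.6, "fuel": 0.5},
    "city":    {"time": 0.4, "safety": 0.8, "fuel": 0.7},
    "scenic":  {"time": 0.2, "safety": 0.9, "fuel": 0.6},
}
weights = {"time": 0.5, "safety": 0.3, "fuel": 0.2}

def utility(attrs):
    """Higher is better; the weights encode the trade-offs mentioned above."""
    return sum(weights[k] * v for k, v in attrs.items())

best = max(routes, key=lambda r: utility(routes[r]))
print(best, round(utility(routes[best]), 2))  # highway 0.73
```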
Problem solving agents
• Intelligent agents are supposed to maximize their performance measure.
• Goal formulation, based on the current situation and the agent’s performance measure,
is the first step in problem solving.
• The agent’s task is to find out how to act, now and in the future, so that it reaches a
goal state.
• Before it can do this, it needs to decide what sorts of actions and states it should
consider.
• Problem formulation is the process of deciding what actions and states to consider,
given a goal.
• If the agent has no additional information—i.e., if the environment is unknown—then it has no choice but to try one of the actions at random.
• The process of looking for a sequence of actions that reaches the goal is called search.
• A search algorithm takes a problem as input and returns a solution in the form of an
action sequence
Problem solving agents
• Once a solution is found, the actions it recommends can be
carried out. This is called the execution phase.
• We also assume the environment is discrete, so at any given
state there are only finitely many actions to choose from.
• We will assume the environment is known, so the agent
knows which states are reached by each action.
• Finally, we assume that the environment is deterministic, so
each action has exactly one outcome.
Problem solving agents
• Intelligent Agents:
1. Step 1:
– Goal Formulation: What is the agent's goal? The agent adopts a goal and aims at satisfying it.
2. Step 2:
– Problem Formulation: What actions and states should the agent consider, given the goal?
Properties of the Environment
For examining future actions:
• If the environment is observable, the agent always knows the current state.
• If the environment is discrete, there are only finitely many actions to choose from.
• If the environment is known, the agent knows which states are reached by each action.
• If the environment is deterministic, each action has exactly one outcome.
Well-defined problems and solutions:
• A problem can be defined formally by five components:
• Goal Formulation: The travelling agent (TA) has to visit all cities and return to the starting city with minimum cost/distance. Ex: cities {A, B, C, D}.
• Problem Formulation: The problem must be clearly defined so that the agent can find a solution.
– Initial State: Start from city A.
– Actions: Move from A to B, or move from A to C, or move from A to D.
– Transition Model: Updating the state of the environment.
– Goal Test: The agent has visited all the cities and returned to city A.
– Path Cost: Sum of the distances traveled between cities.
• Search for a Solution: Find, among various algorithms, the one that yields the optimal solution, adopting the best search strategies: DFS, BFS, Greedy, etc.
• Execution: Execute the optimal solution found by the search algorithm to reach the goal state from the current state.
Problem Solving Agent designed for Travelling Agent (Tourism)
2. Formulating problems
• The process of removing detail from a representation is called abstraction.
• A driving action has many effects. Besides changing the location of the
vehicle and its occupants, it takes up time, consumes fuel, generates
pollution, and changes the agent.
• The abstraction is valid if we can expand any abstract solution into a solution
in the more detailed world
• The abstraction is useful if carrying out each of the actions in the solution is
easier than the original problem
• The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out.
• Search: It identifies the possible sequences of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as its output.
• Solution: It selects, from the various candidate action sequences, the one that may be proven to be the optimal solution.
• Execution: It executes the optimal solution from the search algorithm to reach the goal state from the current state.
EXAMPLE PROBLEMS
• The problem-solving approach has been applied to a vast array of task
environments. We list some of the best known here, distinguishing between
toy and real-world problems.
• A toy problem in AI is a simplified problem that serves as a starting point for testing and understanding various algorithms or approaches.
• A real-world problem is one whose solutions people actually care about.
• A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s.
• A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from
doing action a in state s.
• Together, the initial state, actions, and transition model implicitly define the state
space of the problem—the set of all states reachable from the initial state by any
sequence of actions. The state space forms a directed network or graph in which the
nodes are states and the links between nodes are actions.
• The goal test, which determines whether a given state is a goal state. Sometimes there
is an explicit set of possible goal states, and the test simply checks whether the given
state is one of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving
agent chooses a cost function that reflects its own performance measure.
Problem formulation is a crucial step in the problem-solving process and involves specifying
various elements that characterize the problem.
These elements typically include:
• Initial State: Describing the starting point or the current state of the system.
• Actions: Identifying the set of possible actions or operations that can be performed
• Transition Model: Defining the rules or mechanisms that govern how the system moves
from one state to another based on the chosen actions.
• Goal Test: Establishing the conditions that determine whether the problem has been
solved or the goal has been reached.
• Path Cost: Assigning a cost or value to each action, representing the effort, time, or
resources required to perform the action. This is important for optimizing the solution.
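The five components above map naturally onto a small interface. Below is a sketch in Python whose method names mirror ACTIONS(s), RESULT(s, a), the goal test, and the path cost from the text; the exact class shape is an illustrative choice, not a fixed API.

```python
# A sketch of the five-component problem definition described above.

class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state      # component 1: initial state

    def actions(self, state):
        """ACTIONS(s): the set of actions executable in state s."""
        raise NotImplementedError

    def result(self, state, action):
        """RESULT(s, a): the transition model."""
        raise NotImplementedError

    def goal_test(self, state):
        """True if state is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum of step costs."""
        return 1
```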
Toy problems
• Some of the toy problems are:
• The first example we examine is the vacuum world:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
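As a concrete instance, the two-cell vacuum world above can be encoded against the Problem sketch given earlier; the state encoding (agent location plus one dirt flag per square) is one possible choice.

```python
# The two-cell vacuum world as a Problem: state = (location, dirt_A, dirt_B),
# which gives the 2 x 2^2 = 8 states counted above.

class VacuumWorld(Problem):
    def actions(self, state):
        return ["Left", "Right", "Suck"]

    def result(self, state, action):
        loc, dirt_a, dirt_b = state
        if action == "Left":
            return ("A", dirt_a, dirt_b)   # no effect if already leftmost
        if action == "Right":
            return ("B", dirt_a, dirt_b)   # no effect if already rightmost
        if loc == "A":                     # Suck: clean the current square
            return ("A", False, dirt_b)
        return ("B", dirt_a, False)

    def goal_test(self, state):
        _, dirt_a, dirt_b = state
        return not dirt_a and not dirt_b   # all squares clean

problem = VacuumWorld(("A", True, True))
print(problem.result(problem.initial_state, "Suck"))  # ('A', False, True)
```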
• The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3×3 board with
eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the
space.
• The object is to reach a specified goal state, such as the one shown on the right of the
figure. The standard formulation is as follows:
Sliding-block puzzles
• States: A state description specifies the location of each of the eight tiles and the blank in one
of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down.
• Transition model: Given a state and action, this returns the resulting state;
• Goal test: This checks whether the state matches the goal configuration
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
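A sketch of this 8-puzzle formulation: a state is a 9-tuple read row by row with 0 standing for the blank, and each action slides the blank. The tuple encoding is an assumption made for illustration.

```python
# 8-puzzle actions and transition model over 9-tuple states (0 = blank).

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    i = state.index(0)                  # blank position 0..8 on the 3x3 board
    acts = []
    if i >= 3: acts.append("Up")        # blank not in the top row
    if i <= 5: acts.append("Down")      # blank not in the bottom row
    if i % 3 != 0: acts.append("Left")  # blank not in the left column
    if i % 3 != 2: acts.append("Right") # blank not in the right column
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]               # the square the blank slides into
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
print(actions(start))                   # ['Up', 'Down', 'Left', 'Right']
print(result(start, "Up"))              # (1, 0, 3, 4, 2, 5, 6, 7, 8)
```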
The 8-queens problem is to place eight queens on a chessboard such that no queen attacks any other. For this problem, there are two main kinds of formulations:
An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state.
A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts. The first incremental formulation one might try is the following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified
square.
• Goal test: 8 queens are on the board, none attacked.
• A better formulation would prohibit placing a queen in any square that is
already attacked:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in
the leftmost n columns, with no queen attacking another.
• Actions: Add a queen to any square in the leftmost empty column such that it
is not attacked by any other queen.
This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions are easy to find.
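The improved incremental formulation above can be sketched as follows: a state is a tuple of queen rows, one per filled column, and an action adds a queen to the next column only if it is not attacked (the standard row and diagonal check).

```python
# Incremental 8-queens: states grow one non-attacked queen at a time.

def attacks(rows, new_row):
    """Would a queen in the next column at new_row be attacked?"""
    col = len(rows)
    return any(r == new_row or abs(r - new_row) == abs(c - col)
               for c, r in enumerate(rows))

def actions(state):
    # Add a queen to any non-attacked square in the leftmost empty column.
    return [row for row in range(8) if not attacks(state, row)]

def result(state, row):
    return state + (row,)

def goal_test(state):
    return len(state) == 8    # eight queens placed, none attacking

print(actions(()))     # empty board: all 8 rows are safe
print(actions((0,)))   # row 0 (same row) and row 1 (diagonal) now attacked
```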
Real-world problems
1) Route finding
• Route-finding algorithms are used in a variety of applications.
• Some, such as Web sites and in-car systems, provide driving directions; others route video streams in computer networks, plan military operations, and power airline travel-planning systems.
• Consider the airline travel problems that must be solved by a travel-planning Web site:
• States: Each state obviously includes a location (e.g., an airport) and the current time. Since the cost of an action (flight) may depend on previous segments, their fare bases, and their status as domestic or international, the state must record extra information about these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough
time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s destination as the current
location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat
quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
2) Travelling salesperson problem
• The travelling salesperson problem (TSP) is a touring problem in which each city must be visited exactly once.
• The aim is to find the shortest tour. The problem is known to be NP-hard, but an enormous amount of effort has been expended to improve the capabilities of TSP algorithms.
3) VLSI layout: The VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield.
4) Robot navigation
• It is a generalization of the route-finding problem described earlier.
• Rather than following a discrete set of routes, a robot can move in a continuous space with
(in principle) an infinite set of possible actions and states.
5) Automatic assembly sequencing
• Automatic assembly sequencing of complex objects by a robot was first demonstrated by FREDDY (Michie, 1972).
• In assembly problems, the aim is to find an order in which to assemble the parts of some
object. If the wrong order is chosen, there will be no way to add some part later in the
sequence without undoing some of the work already done.
• Checking a step in the sequence for feasibility is a difficult geometrical search problem closely related to
robot navigation. Thus, the generation of legal actions is the expensive part of assembly sequencing
• Another important assembly problem is protein design, in which the goal is to find a sequence of amino
acids that will fold into a three-dimensional protein with the right properties to cure some disease.
Searching for solutions:
• Having formulated some problems, we now need to solve them.
• A solution is an action sequence, so search algorithms work by considering various possible
action sequences.
• The possible action sequences starting at the initial state form a search tree with the initial state at
the root; the branches are actions and the nodes correspond to states in the state space of the
problem.
• The root of the search tree is the initial state (in the Romania route-finding example, In(Arad)). The first step is to test whether this is a goal state. Then we need to consider taking various actions.
• We do this by expanding the current state; that is, applying each legal action to the current state, thereby generating a new set of states. In this case, we add three branches from the parent node In(Arad) leading to three new child nodes: In(Sibiu), In(Timisoara), and In(Zerind).
• Now we must choose which of these three possibilities to consider further: follow up one option now and put the others aside for later, in case the first choice does not lead to a solution.
• Suppose we choose Sibiu first. We check to see whether it is a goal state (it is not) and then
expand it to get In(Arad), In(Fagaras), In(Oradea), and In(RimnicuVilcea).
• We can then choose any of these four or go back and choose Timisoara or Zerind. Each of
these six nodes is a leaf node, that is, a node with no children in the tree. The set of all leaf
nodes available for expansion at any given point is called the frontier.
• The process of expanding nodes on the frontier continues until either a solution is found or there are no more states to expand. The general TREE-SEARCH algorithm is shown informally in Figure 3.7.
• Loopy paths are a special case of the more general concept of redundant paths, which exist whenever there is more than one way to get from one state to another.
• Consider the paths Arad–Sibiu (140 km long) and Arad–Zerind–Oradea–Sibiu (297 km long).
Obviously, the second path is redundant—it’s just a worse way to get to the same state. If you are
concerned about reaching the goal, there’s never any reason to keep more than one path to any
given state, because any goal state that is reachable by extending one path is also reachable by
extending the other.
• In some cases, it is possible to define the problem itself so as to eliminate redundant paths.
• In other cases, redundant paths are unavoidable. This includes all problems where the actions are
reversible, such as route-finding problems and sliding-block puzzles.
• Route finding on a rectangular grid (like the one used later for Figure 3.9) is a particularly
important example in computer games.
1. Infrastructure for search algorithms
• Search algorithms require a data structure to keep track of the search tree that is
being constructed. For each node n of the tree, we have a structure that contains four
components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial
state to the node, as indicated by the parent pointers.
• Given the components for a parent node, it is easy to see how to compute the necessary
components for a child node. The function CHILD-NODE takes a parent node and an action and
returns the resulting child node:
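Since the figure with the CHILD-NODE function is not reproduced here, the sketch below approximates it; it assumes a problem object with result() and step_cost() methods as in the formulation sketches earlier, and adds a solution() helper that follows parent pointers back to the root.

```python
# The four-component node structure and CHILD-NODE, as described above.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, traditionally g(n)

def child_node(problem, parent, action):
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    """Recover the action sequence by following parent pointers."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```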
• The node data structure is depicted in Figure 3.10. Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths.
• Now that we have nodes, we need somewhere to put them. The frontier needs to be stored in such a way
that the search algorithm can easily choose the next node to expand according to its preferred strategy.
The appropriate data structure for this is a queue.
• The operations on a queue are as follows:
• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
Three common variants are:
1) the first-in, first-out or FIFO queue, which pops the oldest element of the queue;
2) the last-in, first-out or LIFO queue (also known as a stack), which pops the newest element of the queue;
3) the priority queue, which pops the element of the queue with the highest priority according to some ordering function.
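In Python, the three variants correspond directly to standard-library structures; the priority key used below (a plain number) is an illustrative choice of ordering function.

```python
# The three frontier queue variants, using Python's standard library.

from collections import deque
import heapq

fifo = deque()                       # FIFO queue: pops the oldest (BFS)
fifo.append("a"); fifo.append("b")
print(fifo.popleft())                # a

lifo = []                            # LIFO queue / stack: pops newest (DFS)
lifo.append("a"); lifo.append("b")
print(lifo.pop())                    # b

pq = []                              # priority queue: pops lowest key first
heapq.heappush(pq, (3, "c"))
heapq.heappush(pq, (1, "a"))
print(heapq.heappop(pq))             # (1, 'a')
```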
2. Measuring problem-solving performance
• Before we get into the design of specific search algorithms, we need to consider the criteria that
might be used to choose among them. We can evaluate an algorithm’s performance in four ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
• Time and space complexity are always considered with respect to some measure of the problem
difficulty.
• In AI, the graph is often represented implicitly by the initial state, actions, and transition model and is
frequently infinite.
• For these reasons, complexity is expressed in terms of three quantities:
• b, the branching factor or maximum number of successors of any node;
• d, the depth of the shallowest goal node (i.e., the number of steps along the path from the root);
• and m, the maximum length of any path in the state space.
• Time is often measured in terms of the number of nodes generated during the search, and space in
terms of the maximum number of nodes stored in memory.
• For the most part, we describe time and space complexity for search on a tree; for a graph, the answer
depends on how “redundant” the paths in the state space are.
Uninformed search strategies:
• Here we will study several search strategies that come under the heading of uninformed search
(also called blind search). Strategies that know whether one non-goal state is “more promising”
than another are called informed search or heuristic search strategies
1)Breadth first search
Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of
the root node are expanded next, then their successors, and so on.
• BFS expands the shallowest (least deep) unexpanded node first, using FIFO (first-in, first-out) order. Thus, new nodes (the children of a parent node) go to the back of the queue, and old unexpanded nodes, which are shallower than the new nodes, get expanded first.
• In BFS, the goal test (a check of whether the current state is a goal state) is applied to each node at the time of its generation rather than when it is selected for expansion.
• In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next
level are expanded.
• Breadth-first search is an instance of the general graph-search algorithm (Figure 3.7) in which the
shallowest unexpanded node is chosen for expansion.
• This is achieved very simply by using a FIFO queue for the frontier. Thus, new nodes go to the back of
the queue, and old nodes, which are shallower than the new nodes, get expanded first. There is one slight
tweak on the general graph-search algorithm, which is that the goal test is applied to each node when it is
generated rather than when it is selected for expansion.
• Pseudocode is given in Figure 3.11. Figure 3.12 shows the progress of the search on a simple binary tree.
• How does breadth-first search rate according to the four criteria from the previous section? We can easily see that it is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after generating all shallower nodes.
• Breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the node. The most common such scenario is that all actions have the same cost.
• So far, the news about breadth-first search has been good. The news about time and space is not so good.
• Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
• Then the total number of nodes generated is b + b^2 + b^3 + ··· + b^d = O(b^d).
• As for space complexity: for any kind of graph search, which stores every expanded node in the explored set, the space complexity is always within a factor of b of the time complexity.
• The space complexity is O(b^d), i.e., it is dominated by the size of the frontier.
• In the figure, the nodes are expanded level by level, starting from the root node A down to the last node I in the tree. Therefore, the BFS sequence followed is: A->B->C->D->E->F->G->I.
BFS Algorithm
• Set a variable NODE to the initial state, i.e., the root node.
• Set a variable GOAL which contains the value of the goal state.
• Loop over the nodes, traversing level by level, until the goal state is found.
• While looping, remove elements from the queue in FIFO order.
• If the goal state is found, return it; otherwise continue the search.
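Putting the steps above together, here is a sketch of breadth-first graph search using the Node and child_node helpers from earlier; as the text notes, the goal test is applied when a node is generated, not when it is expanded.

```python
# Breadth-first graph search: FIFO frontier, explored set, goal test at
# generation time. Assumes the Node/child_node/Problem sketches above.

from collections import deque

def breadth_first_search(problem):
    node = Node(problem.initial_state)
    if problem.goal_test(node.state):
        return node
    frontier = deque([node])            # FIFO queue of unexpanded nodes
    explored = set()
    while frontier:
        node = frontier.popleft()       # shallowest unexpanded node
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            in_frontier = any(child.state == n.state for n in frontier)
            if child.state not in explored and not in_frontier:
                if problem.goal_test(child.state):   # test when generated
                    return child
                frontier.append(child)
    return None                         # failure: no more states to expand
```

For example, on the vacuum world sketched earlier, breadth_first_search(VacuumWorld(("A", True, True))) returns a goal node whose solution() is ['Suck', 'Right', 'Suck'].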
The performance measure of BFS is as follows:
• Completeness: It is a complete strategy, as it definitely finds the goal state if one exists at a finite depth.
• Optimality: It gives an optimal solution if the cost of each step is the same.
• Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth of the shallowest goal in the tree.
• Time Complexity: The time complexity is likewise O(b^d), so BFS consumes much time to reach the goal node for large instances.
Disadvantages of BFS
• The biggest disadvantage of BFS is that it requires a lot of memory space; it is therefore a memory-bounded strategy.
• BFS is a time-consuming search strategy because it expands the nodes breadth-wise.
Depth-first search
• This search strategy explores the deepest node first, then backtracks to explore other nodes.
• Depth-first search always expands the deepest node in the current frontier of the search tree. The
progress of the search is illustrated in Figure 3.16.
• The search proceeds immediately to the deepest level of the search tree, where the nodes have no
successors.
• As those nodes are expanded, they are dropped from the frontier, so then the search “backs up” to
the next deepest node that still has unexplored successors.
• It uses LIFO (Last in First Out) order, which is based on the stack, in order to expand the
unexpanded nodes in the search tree.
• The search proceeds to the deepest level of the tree, where a node has no successors; on an unbounded tree, this can continue to the full depth of the tree without limit.
• The properties of depth-first search depend strongly on whether the graph-search or tree-search version is
used.
• The graph-search version, which avoids repeated states and redundant paths, is complete in finite state
spaces because it will eventually expand every node. The tree-search version, on the other hand, is not
complete.
• The time complexity of depth-first graph search is bounded by the size of the state space
• A depth-first tree search, on the other hand, may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node; this can be much greater than the size of the state space. Note that m itself can be much larger than d (the depth of the shallowest solution) and is infinite if the tree is unbounded.
DFS search tree
• In the figure, DFS starts from the initial node A (the root node), traverses deeply in one direction down to node I, then backtracks to B, and so on. Therefore, the sequence is A->B->D->I->E->C->F->G.
DFS Algorithm
• Set a variable NODE to the initial state, i.e., the root node.
• Set a variable GOAL which contains the value of the goal state.
• Loop each node by traversing deeply in one direction/path in search of the goal node.
• While performing the looping, start removing the elements from the stack in LIFO order.
• If the goal state is found, return goal state otherwise backtrack to expand nodes in other direction.
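A matching sketch of depth-first search with a LIFO stack. The depth limit is an added safeguard (an assumption, not part of plain DFS) against the infinite descent mentioned in the disadvantages below.

```python
# Depth-first tree search: LIFO frontier, deepest node expanded first.
# Reuses the Node/child_node sketches above; depth_limit is an assumption.

def depth_first_search(problem, depth_limit=50):
    frontier = [(Node(problem.initial_state), 0)]   # stack of (node, depth)
    while frontier:
        node, depth = frontier.pop()                # deepest node first
        if problem.goal_test(node.state):
            return node
        if depth < depth_limit:
            for action in problem.actions(node.state):
                frontier.append((child_node(problem, node, action), depth + 1))
    return None
```

Because this is the tree-search version, repeated states are not detected; without the depth limit, it could descend forever on problems with reversible actions.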
The performance measure of DFS
• Completeness: DFS does not guarantee to reach the goal state.
• Optimality: It does not give an optimal solution as it expands nodes in one direction deeply.
• Space complexity: It needs to store only a single path from the root node to the leaf node.
Therefore, DFS has O(bm) space complexity where b is the branching factor(i.e., total no. of child
nodes, a parent node have) and m is the maximum length of any path.
• Time complexity: DFS has O(b^m) time complexity.
Disadvantages of DFS
• It may get trapped in an infinite loop.
• It is also possible that it may not reach the goal state.
• DFS does not give an optimal solution.
• Note: DFS uses the concept of backtracking to explore each node in a search tree.
THANK YOU

More Related Content

Similar to Artificial Intelligence problem solving agents (20)

PPT
ai.ppt
AlfonsoDAdams1
 
PPT
ai.ppt
securework
 
PPT
ai.ppt
KhanKhaja1
 
PPTX
Unit 1 AI.pptx
jsjsjeusjsj
 
PPTX
PPT ON INTRODUCTION TO AI- UNIT-1-PART-1.pptx
RaviKiranVarma4
 
PPT
M1 intro
Arslanbek Turdaliev
 
PPTX
Artificial intelligence introduction
BHAGYAPRASADBUGGE
 
PPTX
1 Introduction to Articial Intelliegence.pptx
gadisaAdamu
 
PPTX
1.Introductiontrdrdtrresrrzrexrextrtrc.pptx
gkavitha5225
 
PPTX
Module 01 IS & MLA.pptx for 22 scheme notes in a ppt form
chandushivamurthy4
 
PPT
M1 intro
Yasir Khan
 
PPT
m1-intro artificial intelligence tec.ppt
Iftikhar70
 
PPT
m1-intro.ppt
SarfarazChoudhury1
 
PPTX
Intro artificial intelligence
Fraz Ali
 
PDF
Week 1.pdf
ZamshedForman1
 
PPTX
Artificial Intelligences -CHAPTER 1_1.pptx
wesenderbe
 
PPTX
AI Slides till 27-Mar.pptx
MuhammadRiaz237
 
PPTX
Module-I -Final Copy (1).pptx xcvbgnhjmcvb
prasadhiragond45
 
PDF
Introduction of Artificial Intelligence related to BIT course.pdf
Home
 
ai.ppt
securework
 
ai.ppt
KhanKhaja1
 
Unit 1 AI.pptx
jsjsjeusjsj
 
PPT ON INTRODUCTION TO AI- UNIT-1-PART-1.pptx
RaviKiranVarma4
 
Artificial intelligence introduction
BHAGYAPRASADBUGGE
 
1 Introduction to Articial Intelliegence.pptx
gadisaAdamu
 
1.Introductiontrdrdtrresrrzrexrextrtrc.pptx
gkavitha5225
 
Module 01 IS & MLA.pptx for 22 scheme notes in a ppt form
chandushivamurthy4
 
M1 intro
Yasir Khan
 
m1-intro artificial intelligence tec.ppt
Iftikhar70
 
m1-intro.ppt
SarfarazChoudhury1
 
Intro artificial intelligence
Fraz Ali
 
Week 1.pdf
ZamshedForman1
 
Artificial Intelligences -CHAPTER 1_1.pptx
wesenderbe
 
AI Slides till 27-Mar.pptx
MuhammadRiaz237
 
Module-I -Final Copy (1).pptx xcvbgnhjmcvb
prasadhiragond45
 
Introduction of Artificial Intelligence related to BIT course.pdf
Home
 

More from Sanjay Kumar (7)

PDF
Event Handling in Java as per university
Sanjay Kumar
 
PPTX
Module 3 - internet of things vtu university
Sanjay Kumar
 
PPTX
Internet of Things prescribed by University
Sanjay Kumar
 
PPTX
Data science for BE subject code is 2cs642
Sanjay Kumar
 
PPTX
Basics of artificial intelligence and machine learning
Sanjay Kumar
 
PDF
opaerational system states in an operating
Sanjay Kumar
 
PPTX
icmctt PPT Template (1).pptx
Sanjay Kumar
 
Event Handling in Java as per university
Sanjay Kumar
 
Module 3 - internet of things vtu university
Sanjay Kumar
 
Internet of Things prescribed by University
Sanjay Kumar
 
Data science for BE subject code is 2cs642
Sanjay Kumar
 
Basics of artificial intelligence and machine learning
Sanjay Kumar
 
opaerational system states in an operating
Sanjay Kumar
 
icmctt PPT Template (1).pptx
Sanjay Kumar
 
Ad

Recently uploaded (20)

PPTX
care of patient with elimination needs.pptx
Rekhanjali Gupta
 
PPTX
How to Handle Salesperson Commision in Odoo 18 Sales
Celine George
 
PPTX
PPT-Q1-WK-3-ENGLISH Revised Matatag Grade 3.pptx
reijhongidayawan02
 
PDF
CONCURSO DE POESIA “POETUFAS – PASSOS SUAVES PELO VERSO.pdf
Colégio Santa Teresinha
 
PPTX
grade 5 lesson matatag ENGLISH 5_Q1_PPT_WEEK4.pptx
SireQuinn
 
PDF
Generative AI: it's STILL not a robot (CIJ Summer 2025)
Paul Bradshaw
 
PPTX
Cultivation practice of Litchi in Nepal.pptx
UmeshTimilsina1
 
PPTX
A PPT on Alfred Lord Tennyson's Ulysses.
Beena E S
 
PPTX
How to Manage Large Scrollbar in Odoo 18 POS
Celine George
 
PPTX
QUARTER 1 WEEK 2 PLOT, POV AND CONFLICTS
KynaParas
 
PPTX
How to Create Odoo JS Dialog_Popup in Odoo 18
Celine George
 
PDF
Aprendendo Arquitetura Framework Salesforce - Dia 03
Mauricio Alexandre Silva
 
PPTX
CATEGORIES OF NURSING PERSONNEL: HOSPITAL & COLLEGE
PRADEEP ABOTHU
 
PDF
Exploring the Different Types of Experimental Research
Thelma Villaflores
 
PDF
Reconstruct, Restore, Reimagine: New Perspectives on Stoke Newington’s Histor...
History of Stoke Newington
 
PDF
0725.WHITEPAPER-UNIQUEWAYSOFPROTOTYPINGANDUXNOW.pdf
Thomas GIRARD, MA, CDP
 
PPTX
MENINGITIS: NURSING MANAGEMENT, BACTERIAL MENINGITIS, VIRAL MENINGITIS.pptx
PRADEEP ABOTHU
 
PPTX
Neurodivergent Friendly Schools - Slides from training session
Pooky Knightsmith
 
PDF
ARAL-Orientation_Morning-Session_Day-11.pdf
JoelVilloso1
 
PDF
Chapter-V-DED-Entrepreneurship: Institutions Facilitating Entrepreneurship
Dayanand Huded
 
care of patient with elimination needs.pptx
Rekhanjali Gupta
 
How to Handle Salesperson Commision in Odoo 18 Sales
Celine George
 
PPT-Q1-WK-3-ENGLISH Revised Matatag Grade 3.pptx
reijhongidayawan02
 
CONCURSO DE POESIA “POETUFAS – PASSOS SUAVES PELO VERSO.pdf
Colégio Santa Teresinha
 
grade 5 lesson matatag ENGLISH 5_Q1_PPT_WEEK4.pptx
SireQuinn
 
Generative AI: it's STILL not a robot (CIJ Summer 2025)
Paul Bradshaw
 
Cultivation practice of Litchi in Nepal.pptx
UmeshTimilsina1
 
A PPT on Alfred Lord Tennyson's Ulysses.
Beena E S
 
How to Manage Large Scrollbar in Odoo 18 POS
Celine George
 
QUARTER 1 WEEK 2 PLOT, POV AND CONFLICTS
KynaParas
 
How to Create Odoo JS Dialog_Popup in Odoo 18
Celine George
 
Aprendendo Arquitetura Framework Salesforce - Dia 03
Mauricio Alexandre Silva
 
CATEGORIES OF NURSING PERSONNEL: HOSPITAL & COLLEGE
PRADEEP ABOTHU
 
Exploring the Different Types of Experimental Research
Thelma Villaflores
 
Reconstruct, Restore, Reimagine: New Perspectives on Stoke Newington’s Histor...
History of Stoke Newington
 
0725.WHITEPAPER-UNIQUEWAYSOFPROTOTYPINGANDUXNOW.pdf
Thomas GIRARD, MA, CDP
 
MENINGITIS: NURSING MANAGEMENT, BACTERIAL MENINGITIS, VIRAL MENINGITIS.pptx
PRADEEP ABOTHU
 
Neurodivergent Friendly Schools - Slides from training session
Pooky Knightsmith
 
ARAL-Orientation_Morning-Session_Day-11.pdf
JoelVilloso1
 
Chapter-V-DED-Entrepreneurship: Institutions Facilitating Entrepreneurship
Dayanand Huded
 
Ad

Artificial Intelligence problem solving agents

  • 20. A rationalist approach involves a combination of mathematics and engineering. 1. Acting humanly: The Turing Test approach: • The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. • A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. • The computer would need to possess the following capabilities: • natural language processing: to enable it to communicate successfully in English • knowledge representation: to store what it knows or hears • automated reasoning: to use the stored information to answer questions and to draw new conclusions
  • 22. • machine learning: to adapt to new circumstances and to detect and extrapolate patterns. • To pass the total Turing Test, the computer will also need 1] computer vision to perceive objects, and 2] robotics to manipulate objects and move about. 2. Thinking humanly: The cognitive modeling approach: • To say that a given program thinks like a human, we must have some way of determining how humans think; we need to get inside the actual workings of human minds. • There are three ways to do this: 1) introspection: trying to catch our own thoughts as they go by; 2) psychological experiments: observing a person in action; 3) brain imaging: observing the brain in action. • Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
  • 23. 3. Thinking rationally: The “laws of thought” approach • The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,” that is, irrefutable reasoning processes. • There are two main obstacles to this approach. • First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. • Second, there is a big difference between solving a problem “in principle” and solving it in practice. 4. Acting rationally: The rational agent approach • An agent is just something that acts. • All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. • A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
  • 25. 1. Philosophy: • Descartes, a prominent philosopher of the 17th century, played a crucial role in shaping the foundations of modern philosophy. • He was a staunch advocate of rationalism and also a proponent of dualism. • He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. • Animals, on the other hand, did not possess this dual quality; they could be treated as machines. (The alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind.) 2. Mathematics: • The three fundamental areas are logic, computation, and probability. • Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science required mathematical formalization in these areas: • What are the formal rules to draw valid conclusions? • What can be computed? • How do we reason with uncertain information?
  • 26. 3. Economics: • AI takes the following ideas from economics. • How should we make decisions so as to maximize payoff (utility)? • Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions made under uncertainty. This is suitable for large economies, where each agent need pay no attention to the actions of other agents as individuals. • How should we do this when others may not go along? • For small economies, the situation is much more like a game: the actions of one player can significantly affect the utility of another. Unlike decision theory, game theory does not offer an unambiguous prescription for selecting actions. • How should we do this when the payoff may be far in the future? • For the most part, economists did not address this question. The topic was pursued in the field of operations research (OR).
  • 27. 4. Neuroscience • Neuroscience is the study of the nervous system, particularly the brain. • Studying how the brain processes information directly helps the development of AI. • It has also long been known that human brains are somehow different; in about 335 B.C. Aristotle wrote, “Of all the animals, man has the largest brain in proportion to his size.” 5. Psychology • Psychology tries to answer the following question: • How do humans and animals think and act? • Computer models have been used to address the psychology of memory, language, and logical thinking. • It is now a common view among psychologists that “a cognitive theory should be like a computer program.”
  • 28. 6. Computer engineering • How can we build an efficient computer? • For artificial intelligence to succeed, we need two things: intelligence and an artifact. • The computer has been the artifact of choice. • The modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries. 7. Control theory and cybernetics • How can artifacts operate under their own control? • Control theory and cybernetics deal with self-controlling machines. • Modern control theory has as its goal the design of systems that maximize an objective function over time. • This roughly matches the AI view: designing systems that behave optimally.
  • 29. 8. Linguistics: • Language is directly related to thought. Modern linguistics and AI were born at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. History of AI
  • 30. History of AI • Artificial Intelligence is not a new word and not a new technology for researchers. • This technology is much older than you would imagine: there are even myths of mechanical men in ancient Greek and Egyptian mythology. • Following are some milestones in the history of AI, tracing the journey from AI's origins to its development to date.
  • 31. Maturation of Artificial Intelligence (1943-1952) Year 1943: • The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons. Year 1949: • Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning. Year 1950: • Alan Turing, an English mathematician, pioneered machine learning in 1950. • Turing published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, called the Turing test.
  • 32. The birth of Artificial Intelligence (1952-1956) Year 1955: • Allen Newell and Herbert A. Simon created the first artificial intelligence program, named the "Logic Theorist". • This program proved 38 of 52 mathematical theorems and found new and more elegant proofs for some of them. Year 1956: • The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. • For the first time, AI was coined as an academic field. Year 1966: • Researchers emphasized developing algorithms that can solve mathematical problems. • Joseph Weizenbaum created the first chatbot in 1966, named ELIZA. Year 1972: • The first intelligent humanoid robot, named WABOT-1, was built in Japan.
  • 33. The first AI winter (1974-1980) • The period from 1974 to 1980 was the first AI winter. An AI winter refers to a period when computer scientists dealt with a severe shortage of government funding for AI research. • During AI winters, public interest in artificial intelligence declined. A boom of AI (1980-1987) • Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert. • In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
  • 34. The second AI winter (1987-1993) • The period from 1987 to 1993 was the second AI winter. • Investors and governments again stopped funding AI research due to high costs and a lack of efficient results. Although expert systems such as XCON had initially been very cost effective, they proved expensive to update and maintain. The emergence of intelligent agents (1993-2011) • Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a reigning world chess champion. • Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner. • Year 2006: AI entered the business world; companies like Facebook, Twitter, and Netflix started using AI.
  • 35. Deep learning, big data and artificial general intelligence (2011-present) • Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly. • Year 2012: Google launched the Android feature "Google Now", which could provide information to the user as predictions. • Year 2014: The chatbot "Eugene Goostman" won a competition in the famous "Turing test". • Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well. • Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone while the person on the other end did not notice she was talking to a machine.
  • 36. THE STATE OF THE ART: AI • Robotic vehicles: driverless cars • Speech recognition: communication guidelines • Autonomous planning and scheduling: robots in space • Game playing: computer chess • Spam fighting: filtering spam messages • Logistics planning: transportation • Robotics: military applications • Machine translation: language conversion
  • 37. Chapter 3: Problem solving: ‐ Contents: Problem solving agents Example problems Searching for Solutions Uninformed Search Strategies: 1.Breadth First search, 2.Depth First Search,
  • 43. •PEAS: Performance measure, Environment, Actuators, Sensors
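To make the acronym concrete, here is a minimal sketch of a PEAS description for an automated taxi, written as a plain Python dictionary; the component values are illustrative assumptions, not taken from the slides:

    # PEAS description for a hypothetical automated taxi (illustrative values).
    peas_taxi = {
        "Performance": ["safety", "speed", "legality", "comfort", "profit"],
        "Environment": ["roads", "other traffic", "pedestrians", "customers"],
        "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
        "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer"],
    }
    for component, examples in peas_taxi.items():
        print(component + ":", ", ".join(examples))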
  • 46. CONTENT • Fully Observable / Partially Observable – Deterministic / Non-deterministic – Episodic / Non-episodic(Sequential) – Static / Dynamic – Discrete / Continuous – Single agent / Multiple agents
  • 47. Fully Observable/Partially Observable If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable; otherwise it is only partially observable. Chess is fully observable: a player gets to see the whole board. Poker is partially observable: a player gets to see only his own cards, not the cards of everyone else in the game.
  • 48. Deterministic/Non-Deterministic If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic. Examples: Deterministic environment: Tic-Tac-Toe game. Self-driving vehicles are a classic example of non-deterministic AI processes.
  • 49. Episodic / Non-episodic(Sequential) The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action) and the choice of action in each episode depends only on the episode itself. Episodic environments are much simpler because the agent does not need to think ahead. Sequential if current decisions affect future decisions, or rely on previous ones. Examples: Episodic environment: mail sorting system Non-episodic environment: chess game
  • 50. Static / Dynamic If the environment does not change while an agent is acting, then it is static; otherwise it is dynamic. OR An environment is static if only the actions of an agent modify it. It is dynamic, on the other hand, if other processes are operating on it.  Dynamic if the environment may change over time.  Static if nothing (other than the agent) in the environment changes. Examples: (Static): 2 + 2 = 4 will always remain the same; it never changes. (Dynamic): A football game; other players make it dynamic, and every action produces a new reaction.
  • 51. Discrete / Continuous If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete. E.g., a game of chess or checkers, where there are a set number of moves. Continuous = signals constantly coming into sensors, actions continually changing. E.g., taxi driving: there could be a route from anywhere to anywhere else, or simply driving a car.
  • 52. Single agent / Multiple agents • An agent operating by itself in an environment is a single-agent case. • Multi-agent is when other agents are present. Examples: (Single agent): Crossword puzzle. (Multi agent): An automated taxi car among other traffic, or the other players in a football team (or the opposing team).
  • 54. Condition–action rule (situation–action rules, productions, if–then rules) Ex: blinking when something approaches the eye
  • 58. Behavior: These agents respond directly to percepts (i.e., what they sense) without considering the history of their previous actions. They follow simple "condition-action" rules where a specific condition triggers a predefined action. Example: A thermostat in a room. It senses the current temperature (percept) and compares it with the desired setpoint. If the temperature is too low, it turns on the heater, and if it's too high, it turns off the heater. Justification: The thermostat does not need to consider its past states or future goals. It just reacts to the current temperature using predefined rules. Simple Reflex Agents Summary
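A minimal sketch of that thermostat as a simple reflex agent in Python; the setpoint, hysteresis band, and action names are assumptions made here for illustration:

    def thermostat_agent(current_temp, setpoint=20.0, band=0.5):
        """Simple reflex agent: condition-action rules on the current percept only."""
        if current_temp < setpoint - band:
            return "heater_on"
        if current_temp > setpoint + band:
            return "heater_off"
        return "no_op"              # within the comfort band: do nothing

    print(thermostat_agent(18.0))   # -> heater_on
    print(thermostat_agent(22.0))   # -> heater_off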
  • 59. Model-based reflex agents • Keep track of the part of the world the agent can't see now. • The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. • Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program. • This knowledge about “how the world works”—whether implemented in simple Boolean circuits or in complete scientific theories—is called a model of the world. • An agent that uses such a model is called a model-based agent.
  • 62. Behavior: These agents maintain some internal state of the world based on a model of how the world works. They use this model to keep track of unobservable aspects of the environment and make decisions based on both the current percept and the internal state. Example: A robot vacuum cleaner. It uses a map (model) of the environment to navigate through a room. It senses obstacles (current percept) and remembers where it has already cleaned (internal state) to avoid re-cleaning the same areas. Justification: The vacuum cleaner relies on a model of the room's layout and its internal state (i.e., areas that have been cleaned) to decide its next move, rather than only reacting to the current state. Model-Based Reflex Agents Summary
  • 65. Behavior: These agents act to achieve specific goals by considering different actions and their effects. They evaluate their actions based on whether they move them closer to achieving a goal. Example: A GPS navigation system. It has a goal to reach a specific destination. It takes into account the current location, available routes, and updates (such as traffic conditions) to plan the best path. Justification: The GPS system chooses actions (turn left, turn right) by considering whether they help achieve the goal of reaching the destination, rather than just reacting to immediate sensory inputs. Goal-Based Agents Summary
  • 69. Behavior: These agents not only aim to achieve goals but also try to maximize some measure of utility (i.e., satisfaction or performance). They choose actions based on which ones provide the highest expected utility. Example: A self-driving car. It has the goal of reaching a destination, but it also considers other factors such as safety, speed, fuel efficiency, and comfort. It selects routes and maneuvers that balance these factors to maximize overall utility. Justification: The self-driving car doesn't just aim to reach the destination but does so in a way that optimizes various performance metrics like minimizing travel time and ensuring safety, considering trade-offs between them. Utility-Based Agents Summary
  • 74. Problem solving agents • Intelligent agents are supposed to maximize their performance measure. • Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. • The agent's task is to find out how to act, now and in the future, so that it reaches a goal state. • Before it can do this, it needs to decide what sorts of actions and states it should consider. • Problem formulation is the process of deciding what actions and states to consider, given a goal. • If the agent has no additional information—i.e., if the environment is unknown—then it has no choice but to try one of the actions at random. • The process of looking for a sequence of actions that reaches the goal is called search. • A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
  • 75. Problem solving agents • Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. • We also assume the environment is discrete, so at any given state there are only finitely many actions to choose from. • We will assume the environment is known, so the agent knows which states are reached by each action. • Finally, we assume that the environment is deterministic, so each action has exactly one outcome.
  • 76. Problem solving agents • Intelligent Agents: 1. Step-1: – Goal Formulation: What is the agent's task? 2. Step-2: – Problem Formulation: What actions and states should the agent consider, given the goal? – The agent adopts a goal and aims at satisfying it.
  • 77. Properties of the Environment • For examining future actions:  If the Environment is Observable – the agent always knows the current state.  If the Environment is Discrete – there are only finitely many actions to choose from.  If the Environment is Known – the agent knows which states are reached by each action.  If the Environment is Deterministic – each action has exactly one outcome.
  • 78. Well-defined problems and solutions: • A problem can be defined formally by five components:
  • 79. • Goal Formulation: The travelling agent (TA) has to visit all cities and return to the starting city with minimum cost/distance. Ex: Cities {A, B, C, D} • Problem Formulation: The problem must be clearly defined so that the agent can find a solution. – Initial State: Start from city A. – Actions: Move from A to B, or move from A to C, or move from A to D. – Transition Model: Updating the state of the environment. – Goal Test: The agent has visited all the cities and returned to city A. – Path Cost: Sum of the distances traveled between cities. • Search for a Solution: It finds the best algorithm out of various algorithms, which may be proven as the best optimal solution, adopting the best search strategies: DFS, BFS, Greedy, etc. • Execution: It executes the best optimal solution from the searching algorithms to reach the goal state from the current state. Problem Solving Agent designed for a Travelling Agent (Tourism)
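A minimal sketch of this travelling-agent formulation in Python; the distance values are made-up assumptions, and brute-force enumeration is used only because four cities keep the tour space tiny:

    from itertools import permutations

    # Hypothetical symmetric distances between the four cities.
    dist = {("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
            ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30}

    def d(x, y):
        return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

    def tour_cost(tour):
        """Path cost: distances along the tour, returning to the start city."""
        return sum(d(x, y) for x, y in zip(tour, tour[1:] + tour[:1]))

    # Goal test is implicit: every generated tour visits all cities once.
    best = min((("A",) + p for p in permutations(("B", "C", "D"))), key=tour_cost)
    print(best, tour_cost(best))   # -> ('A', 'B', 'D', 'C') 80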
  • 80. 2. Formulating problems • The process of removing detail from a representation is called abstraction. • A driving action has many effects. Besides changing the location of the vehicle and its occupants, it takes up time, consumes fuel, generates pollution, and changes the agent. • The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world. • The abstraction is useful if carrying out each of the actions in the solution is easier than in the original problem. • The choice of a good abstraction thus involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out.
  • 81. • Search: It identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as its output. • Solution: It finds the best algorithm out of various algorithms, which may be proven as the best optimal solution. • Execution: It executes the best optimal solution from the searching algorithms to reach the goal state from the current state.
  • 82. EXAMPLE PROBLEMS • The problem-solving approach has been applied to a vast array of task environments. We list some of the best known here, distinguishing between toy and real-world problems. • A toy problem in AI is a simplified problem that serves as a starting point for testing and understanding various algorithms or approaches. • A real-world problem is one whose solutions people actually care about.
  • 83. • A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. • A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. • Together, the initial state, actions, and transition model implicitly define the state space of the problem—the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. • The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. • A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
  • 84. Problem formulation is a crucial step in the problem-solving process and involves specifying various elements that characterize the problem. These elements typically include: • Initial State: Describing the starting point or the current state of the system. • Actions: Identifying the set of possible actions or operations that can be performed • Transition Model: Defining the rules or mechanisms that govern how the system moves from one state to another based on the chosen actions. • Goal Test: Establishing the conditions that determine whether the problem has been solved or the goal has been reached. • Path Cost: Assigning a cost or value to each action, representing the effort, time, or resources required to perform the action. This is important for optimizing the solution.
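These five elements map naturally onto a small abstract class. The sketch below is written in the spirit of the well-known AIMA code repository, but it is not that repository's exact API; the method names are assumptions made here:

    class Problem:
        """Formal problem definition: the five components listed above."""
        def __init__(self, initial, goal=None):
            self.initial = initial            # initial state
            self.goal = goal                  # optional explicit goal state

        def actions(self, state):
            raise NotImplementedError         # ACTIONS(s)

        def result(self, state, action):
            raise NotImplementedError         # RESULT(s, a): transition model

        def goal_test(self, state):
            return state == self.goal         # goal test

        def path_cost(self, c, state1, action, state2):
            return c + 1                      # default: each step costs 1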
  • 87. Toy problems • Some of the toy problems are: • The first example we examine is the vacuum world: • States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states. • Initial state: Any state can be designated as the initial state. • Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down. • Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. • Goal test: This checks whether all the squares are clean. • Path cost: Each step costs 1, so the path cost is the number of steps in the path.
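Under the same assumed Problem interface sketched above, the two-square vacuum world might be encoded as follows; the state encoding (agent location plus one dirt flag per square) is my own choice:

    class VacuumWorld(Problem):
        """State = (agent location, dirt in A, dirt in B); 2 x 2^2 = 8 states."""
        def __init__(self):
            super().__init__(initial=("A", True, True))

        def actions(self, state):
            return ["Left", "Right", "Suck"]

        def result(self, state, action):
            loc, dirt_a, dirt_b = state
            if action == "Left":
                return ("A", dirt_a, dirt_b)      # no effect if already leftmost
            if action == "Right":
                return ("B", dirt_a, dirt_b)      # no effect if already rightmost
            if loc == "A":
                return ("A", False, dirt_b)       # Suck: cleaning a clean square
            return ("B", dirt_a, False)           # has no effect

        def goal_test(self, state):
            return not state[1] and not state[2]  # all squares clean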
  • 104. • The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3×3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. • The object is to reach a specified goal state, such as the one shown on the right of the figure. The standard formulation is as follows: Sliding-block puzzles
  • 108. • States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares. • Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states. • Actions: The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down. • Transition model: Given a state and action, this returns the resulting state. • Goal test: This checks whether the state matches the goal configuration. • Path cost: Each step costs 1, so the path cost is the number of steps in the path.
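A sketch of the 8-puzzle actions and transition model, assuming the state is a 9-tuple read row by row with 0 standing for the blank:

    def puzzle_actions(state):
        """Legal moves of the blank (the 0) on the 3x3 board."""
        row, col = divmod(state.index(0), 3)
        moves = []
        if col > 0: moves.append("Left")
        if col < 2: moves.append("Right")
        if row > 0: moves.append("Up")
        if row < 2: moves.append("Down")
        return moves

    def puzzle_result(state, action):
        """Slide the tile adjacent to the blank into the blank's square."""
        i = state.index(0)
        j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
        s = list(state)
        s[i], s[j] = s[j], s[i]
        return tuple(s)

    print(puzzle_result((1, 2, 3, 4, 0, 5, 6, 7, 8), "Up"))
    # -> (1, 0, 3, 4, 2, 5, 6, 7, 8): the blank moved up, tile 2 moved down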
  • 109. The 8-queens problem asks us to place eight queens on a chessboard so that no queen attacks any other. For this problem, there are two main kinds of formulations. An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts. The first incremental formulation one might try is the following:
  • 112. • States: Any arrangement of 0 to 8 queens on the board is a state. • Initial state: No queens on the board. • Actions: Add a queen to any empty square. • Transition model: Returns the board with a queen added to the specified square. • Goal test: 8 queens are on the board, none attacked. • A better formulation would prohibit placing a queen in any square that is already attacked: • States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another. • Actions: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen. This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions are easy to find.
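A sketch of the improved incremental formulation, assuming a state is a tuple of queen rows, one per filled column from the left; counting the reachable states reproduces the 2,057 figure quoted above:

    def queens_actions(state, n=8):
        """Safe rows for a queen in the next (leftmost empty) column."""
        col = len(state)
        return [row for row in range(n)
                if all(row != r and abs(row - r) != col - c    # row / diagonal
                       for c, r in enumerate(state))]

    def queens_result(state, row):
        return state + (row,)

    def queens_goal_test(state, n=8):
        return len(state) == n    # non-attacking by construction

    def count_states(state=()):
        """Count every reachable state, including the empty board."""
        return 1 + sum(count_states(queens_result(state, r))
                       for r in queens_actions(state))

    print(count_states())  # -> 2057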
  • 127. 2) Real-world problems • Route-finding algorithms are used in a variety of applications. • Some, such as Web sites and in-car systems that provide driving directions, are relatively straightforward; others, such as routing video streams in computer networks, military operations planning, and airline travel-planning systems, involve much more complex specifications. • Consider the airline travel problems that must be solved by a travel-planning Web site: • States: Each state obviously includes a location (e.g., an airport) and the current time. Since the cost of an action (flight) may depend on previous segments, their fare bases, and their status as domestic or international, the state must record extra information about these “historical” aspects. • Initial state: This is specified by the user's query. • Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed. • Transition model: The state resulting from taking a flight will have the flight's destination as the current location and the flight's arrival time as the current time. • Goal test: Are we at the final destination specified by the user? • Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
  • 128. 1) Travelling salesperson problem • The TSP is a touring problem in which each city must be visited exactly once. • The aim is to find the shortest tour. The problem is known to be NP-hard, but an enormous amount of effort has been expended to improve the capabilities of TSP algorithms. 2) VLSI layout: The VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. 3) Robot navigation • It is a generalization of the route-finding problem described earlier. • Rather than following a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions and states.
  • 129. 4) Automatic assembly sequencing • Automatic assembly sequencing of complex objects by a robot was first demonstrated by FREDDY (Michie, 1972). • In assembly problems, the aim is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later in the sequence without undoing some of the work already done. • Checking a step in the sequence for feasibility is a difficult geometrical search problem closely related to robot navigation. Thus, the generation of legal actions is the expensive part of assembly sequencing. • Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.
  • 130. Searching for solutions: • Having formulated some problems, we now need to solve them. • A solution is an action sequence, so search algorithms work by considering various possible action sequences. • The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem. • The first step is to test whether the root state (In(Arad) in the Romania route-finding example) is a goal state. Then we need to consider taking various actions. • We do this by expanding the current state; that is, applying each legal action to the current state, thereby generating a new set of states. In this case, we add three branches from the parent node In(Arad) leading to three new child nodes: In(Sibiu), In(Timisoara), and In(Zerind). • Now we must choose which of these three possibilities to consider further: we follow up one option now and put the others aside for later, in case the first choice does not lead to a solution.
  • 147. • Suppose we choose Sibiu first. We check to see whether it is a goal state (it is not) and then expand it to get In(Arad), In(Fagaras), In(Oradea), and In(RimnicuVilcea). • We can then choose any of these four or go back and choose Timisoara or Zerind. Each of these six nodes is a leaf node, that is, a node with no children in the tree. The set of all leaf nodes available for expansion at any given point is called the frontier. • The process of expanding nodes on the frontier continues until either a solution is found or there are no more states to expand. The general TREE-SEARCH algorithm is shown informally in Figure 3.7. • Loopy paths are a special case of the more general concept of redundant paths, which exist whenever there is more than one way to get from one state to another. • Consider the paths Arad–Sibiu (140 km long) and Arad–Zerind–Oradea–Sibiu (297 km long). Obviously, the second path is redundant: it's just a worse way to get to the same state. If you are concerned about reaching the goal, there's never any reason to keep more than one path to any given state, because any goal state that is reachable by extending one path is also reachable by extending the other. • In some cases, it is possible to define the problem itself so as to eliminate redundant paths. • In other cases, redundant paths are unavoidable. This includes all problems where the actions are reversible, such as route-finding problems and sliding-block puzzles. • Route finding on a rectangular grid (like the one used later for Figure 3.9) is a particularly important example in computer games.
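Since Figure 3.7 is not reproduced in this transcript, here is a rough sketch of the general TREE-SEARCH scheme it describes, reusing the hypothetical Problem interface from the earlier sketch; the pop strategy is left to the caller, which is what distinguishes breadth-first from depth-first behavior:

    def tree_search(problem, pop):
        """Generic TREE-SEARCH; `pop` picks which frontier entry to expand."""
        frontier = [(problem.initial, [])]          # (state, action path)
        while frontier:
            state, path = pop(frontier)
            if problem.goal_test(state):
                return path                          # solution found
            for action in problem.actions(state):
                frontier.append((problem.result(state, action), path + [action]))
        return None                                  # no more states to expand

    # pop=lambda f: f.pop(0) expands the oldest entry first (breadth-first);
    # pop=lambda f: f.pop() expands the newest entry first (depth-first).
    # As a pure tree search it does not detect redundant paths, so it can
    # loop forever on problems with reversible actions.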
  • 150. 1. Infrastructure for search algorithms • Search algorithms require a data structure to keep track of the search tree that is being constructed. For each node n of the tree, we have a structure that contains four components: • n.STATE: the state in the state space to which the node corresponds; • n.PARENT: the node in the search tree that generated this node; • n.ACTION: the action that was applied to the parent to generate the node; • n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.
  • 151. • Given the components for a parent node, it is easy to see how to compute the necessary components for a child node. The function CHILD-NODE takes a parent node and an action and returns the resulting child node:
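The figure defining CHILD-NODE is not reproduced here; a sketch of the four-component node structure and the function, matching the component list above, might look like this (the Problem.path_cost signature is the assumption from the earlier sketch):

    class Node:
        """Search-tree node: the four components listed above."""
        def __init__(self, state, parent=None, action=None, path_cost=0.0):
            self.state = state          # n.STATE
            self.parent = parent        # n.PARENT
            self.action = action        # n.ACTION
            self.path_cost = path_cost  # n.PATH-COST, g(n)

    def child_node(problem, parent, action):
        """CHILD-NODE: the node reached from `parent` by doing `action`."""
        state = problem.result(parent.state, action)
        cost = problem.path_cost(parent.path_cost, parent.state, action, state)
        return Node(state, parent, action, cost)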
  • 152. • The node data structure is depicted in Figure 3.10. Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths. • Now that we have nodes, we need somewhere to put them. The frontier needs to be stored in such a way that the search algorithm can easily choose the next node to expand according to its preferred strategy. The appropriate data structure for this is a queue. • The operations on a queue are as follows: • EMPTY?(queue) returns true only if there are no more elements in the queue. • POP(queue) removes the first element of the queue and returns it. • INSERT(element, queue) inserts an element and returns the resulting queue. Three common variants are: 1) the first-in, first-out or FIFO queue, which pops the oldest element of the queue; 2) the last-in, first-out or LIFO queue (also known as a stack), which pops the newest element of the queue; 3) the priority queue, which pops the element of the queue with the highest priority according to some ordering function.
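In Python, the three variants can be sketched with the standard library: collections.deque for the FIFO queue, a plain list for the LIFO stack, and heapq for the priority queue:

    from collections import deque
    import heapq

    fifo = deque(["a", "b"])
    print(fifo.popleft())        # 'a': pops the oldest element (FIFO)

    stack = ["a", "b"]
    print(stack.pop())           # 'b': pops the newest element (LIFO)

    pq = []
    heapq.heappush(pq, (5, "low priority"))
    heapq.heappush(pq, (1, "high priority"))
    print(heapq.heappop(pq))     # (1, 'high priority'): best priority first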
  • 153. 2. Measuring problem-solving performance • Before we get into the design of specific search algorithms, we need to consider the criteria that might be used to choose among them. We can evaluate an algorithm's performance in four ways: • Completeness: Is the algorithm guaranteed to find a solution when there is one? • Optimality: Does the strategy find the optimal solution, as defined on page 68? • Time complexity: How long does it take to find a solution? • Space complexity: How much memory is needed to perform the search? • Time and space complexity are always considered with respect to some measure of the problem difficulty.
  • 154. • In AI, the graph is often represented implicitly by the initial state, actions, and transition model and is frequently infinite. • For these reasons, complexity is expressed in terms of three quantities: • b, the branching factor or maximum number of successors of any node; • d, the depth of the shallowest goal node (i.e., the number of steps along the path from the root); • and m, the maximum length of any path in the state space. • Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory. • For the most part, we describe time and space complexity for search on a tree; for a graph, the answer depends on how “redundant” the paths in the state space are.
  • 155. Uninformed search strategies: • Here we will study several search strategies that come under the heading of uninformed search (also called blind search); these strategies have no additional information about states beyond that provided in the problem definition. Strategies that know whether one non-goal state is “more promising” than another are called informed search or heuristic search strategies.
  • 156. 1) Breadth-first search • Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. • BFS expands the shallowest (i.e., least deep) node first using FIFO (first in, first out) order. Thus, new nodes (i.e., children of a parent node) remain in the queue and old unexpanded nodes, which are shallower than the new nodes, get expanded first. • In BFS, the goal test (a test to check whether the current state is a goal state or not) is applied to each node at the time of its generation rather than when it is selected for expansion. • In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded. • Breadth-first search is an instance of the general graph-search algorithm (Figure 3.7) in which the shallowest unexpanded node is chosen for expansion. • This is achieved very simply by using a FIFO queue for the frontier. Thus, new nodes go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first. There is one slight tweak on the general graph-search algorithm: the goal test is applied to each node when it is generated rather than when it is selected for expansion. • Pseudocode is given in Figure 3.11. Figure 3.12 shows the progress of the search on a simple binary tree.
  • 158. • How does breadth-first search rate according to the four criteria from the previous section? We can easily see that it is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after generating all shallower nodes. • Breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the node. The most common such scenario is that all actions have the same cost. • So far, the news about breadth-first search has been good. The news about time and space is not so good. • Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. • Then the total number of nodes generated is b + b^2 + b^3 + ... + b^d = O(b^d). • As for space complexity: for any kind of graph search, which stores every expanded node in the explored set, the space complexity is always within a factor of b of the time complexity. • The space complexity is O(b^d), i.e., it is dominated by the size of the frontier.
  • 159. • In the figure, it is seen that the nodes are expanded level by level, starting from the root node A till the last node I in the tree. Therefore, the BFS sequence followed is: A->B->C->D->E->F->G->I. BFS Algorithm • Set a variable NODE to the initial state, i.e., the root node. • Set a variable GOAL which contains the value of the goal state. • Loop over the nodes, traversing level by level, until the goal state is found. • While performing the looping, remove the elements from the queue in FIFO order. • If the goal state is found, return the goal state; otherwise continue the search.
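Putting the pieces together, a sketch of breadth-first graph search as described above: a FIFO frontier, the goal test applied at generation time, and an explored set to avoid repeated states. It reuses the hypothetical Node and child_node helpers from the earlier sketch:

    from collections import deque

    def breadth_first_search(problem):
        node = Node(problem.initial)
        if problem.goal_test(node.state):
            return node
        frontier = deque([node])                    # FIFO queue
        explored = set()
        while frontier:
            node = frontier.popleft()               # shallowest node first
            explored.add(node.state)
            for action in problem.actions(node.state):
                child = child_node(problem, node, action)
                if child.state not in explored and \
                   all(n.state != child.state for n in frontier):
                    if problem.goal_test(child.state):   # test at generation
                        return child
                    frontier.append(child)
        return None                                 # failure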
  • 160. The performance measure of BFS is as follows: • Completeness: It is a complete strategy, as it definitely finds the goal state if one exists at finite depth. • Optimality: It gives an optimal solution if the cost of each step is the same. • Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth/level of the tree. • Time Complexity: BFS consumes much time to reach the goal node for large instances; its time complexity is likewise O(b^d). Disadvantages of BFS • The biggest disadvantage of BFS is that it requires a lot of memory space; therefore it is a memory-bounded strategy. • BFS is a time-consuming search strategy because it expands the nodes breadthwise.
  • 161. Depth-first search • This search strategy explores the deepest node first, then backtracks to explore other nodes. • Depth-first search always expands the deepest node in the current frontier of the search tree. The progress of the search is illustrated in Figure 3.16. • The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. • As those nodes are expanded, they are dropped from the frontier, so then the search “backs up” to the next deepest node that still has unexplored successors. • It uses LIFO (last in, first out) order, which is based on the stack, in order to expand the unexpanded nodes in the search tree. • The search proceeds to the deepest level of the tree, where there are no successors; on an unbounded tree, this search can expand nodes indefinitely, i.e., to the full depth of the tree.
  • 163. • The properties of depth-first search depend strongly on whether the graph-search or tree-search version is used. • The graph-search version, which avoids repeated states and redundant paths, is complete in finite state spaces because it will eventually expand every node. The tree-search version, on the other hand, is not complete. • The time complexity of depth-first graph search is bounded by the size of the state space. • A depth-first tree search, on the other hand, may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node; this can be much greater than the size of the state space. Note that m itself can be much larger than d (the depth of the shallowest solution) and is infinite if the tree is unbounded. DFS search tree • In the figure, DFS works starting from the initial node A (root node), traversing in one direction deeply till node I, and then backtracking to B and so on. Therefore, the sequence will be A->B->D->I->E->C->F->G. DFS Algorithm • Set a variable NODE to the initial state, i.e., the root node. • Set a variable GOAL which contains the value of the goal state. • Loop over the nodes, traversing deeply in one direction/path in search of the goal node. • While performing the looping, remove the elements from the stack in LIFO order. • If the goal state is found, return the goal state; otherwise backtrack to expand nodes in another direction.
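A matching sketch of depth-first search: the same structure as the BFS sketch, but with a LIFO stack so the deepest node is expanded first. This is the tree-search version, so there is no explored set; the depth limit is an assumption added here so the sketch cannot run forever on an unbounded tree:

    def depth_first_search(problem, depth_limit=50):
        frontier = [(Node(problem.initial), 0)]    # LIFO stack of (node, depth)
        while frontier:
            node, depth = frontier.pop()           # deepest node first
            if problem.goal_test(node.state):
                return node
            if depth < depth_limit:                # guard against infinite paths
                for action in problem.actions(node.state):
                    frontier.append((child_node(problem, node, action), depth + 1))
        return None                                # failure (or cut off)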
  • 164. The performance measure of DFS • Completeness: DFS does not guarantee reaching the goal state. • Optimality: It does not give an optimal solution, as it expands nodes in one direction deeply. • Space complexity: It needs to store only a single path from the root node to a leaf node. Therefore, DFS has O(bm) space complexity, where b is the branching factor (i.e., the maximum number of children of any node) and m is the maximum length of any path. • Time complexity: DFS has O(b^m) time complexity. Disadvantages of DFS • It may get trapped in an infinite loop. • It is also possible that it may not reach the goal state. • DFS does not give an optimal solution. • Note: DFS uses the concept of backtracking to explore each node in a search tree.