Reinforcement Learning
Tutorial
Peter Bodík
RAD Lab, UC Berkeley
Previous Lectures
• Supervised learning
– classification, regression
• Unsupervised learning
– clustering
• Reinforcement learning
– more general than supervised/unsupervised learning
– learn from interaction w/ environment to achieve a goal
[diagram: agent-environment loop: the agent takes an action; the environment returns a reward and the new state]
Today
• examples
• defining an RL problem
– Markov Decision Processes
• solving an RL problem
– Dynamic Programming
– Monte Carlo methods
– Temporal-Difference learning
Robot in a room
[grid world diagram: terminal reward +1 at [4,3], terminal reward -1 at [4,2], agent starts at START]
actions: UP, DOWN, LEFT, RIGHT
actions are noisy: e.g., UP moves up 80% of the time, left 10%, right 10%
• reward +1 at [4,3], -1 at [4,2]
• reward -0.04 for each step
• what’s the strategy to achieve max reward?
• what if the actions were deterministic?
Other examples
• pole-balancing
• TD-Gammon [Gerry Tesauro]
• helicopter [Andrew Ng]
• no teacher who would say “good” or “bad”
– is reward “10” good or bad?
– rewards could be delayed
• similar to control theory
– more general, fewer constraints
• explore the environment and learn from experience
– not just blind search, try to be smart about it
Resource allocation in datacenters
• A Hybrid Reinforcement Learning Approach to
Autonomic Resource Allocation
– Tesauro, Jong, Das, Bennani (IBM)
– ICAC 2006
[diagram: load balancer routing requests to applications A, B, and C]
Outline
• examples
• defining an RL problem
– Markov Decision Processes
• solving an RL problem
– Dynamic Programming
– Monte Carlo methods
– Temporal-Difference learning
Robot in a room
[grid world diagram: same setup as before; +1 at [4,3], -1 at [4,2], agent starts at START]
actions: UP, DOWN, LEFT, RIGHT (noisy: 80% intended direction, 10% to each side)
reward +1 at [4,3], -1 at [4,2]
reward -0.04 for each step
• states
• actions
• rewards
• what is the solution?
Is this a solution?
[grid diagram with the +1 and -1 terminal states]
• only if actions deterministic
– not in this case (actions are stochastic)
• solution/policy
– mapping from each state to an action
Optimal policy
[grid diagrams: the optimal policy arrows for different per-step rewards]
• Reward for each step: -2
• Reward for each step: -0.1
• Reward for each step: -0.04
• Reward for each step: -0.01
• Reward for each step: +0.01
Markov Decision Process (MDP)
• set of states S, set of actions A, initial state S0
• transition model P(s,a,s’)
– P( [1,1], up, [1,2] ) = 0.8
• reward function r(s)
– r( [4,3] ) = +1
• goal: maximize cumulative reward in the long run
• policy: mapping from S to A
– π(s) or π(s,a) (deterministic vs. stochastic)
• reinforcement learning
– transitions and rewards usually not available
– how to change the policy based on experience
– how to explore the environment
[diagram: agent-environment loop: the agent takes an action; the environment returns a reward and the new state]
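To make the MDP definition concrete, here is a minimal sketch in Python of how S, A, P, and r could be stored; the two-state toy example and the container names are illustrative, not taken from the slides.

    # Toy MDP: states, actions, transition model P(s, a, s') and per-state reward r(s).
    S = ["s0", "s1"]
    A = ["stay", "go"]
    P = {  # P[(s, a)] lists (next_state, probability) pairs, summing to 1
        ("s0", "stay"): [("s0", 1.0)],
        ("s0", "go"):   [("s1", 0.8), ("s0", 0.2)],
        ("s1", "stay"): [("s1", 1.0)],
        ("s1", "go"):   [("s0", 1.0)],
    }
    r = {"s0": 0.0, "s1": 1.0}  # reward depends only on the state, matching r(s) above
    gamma = 0.9                 # discount factor (introduced on the next slide)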
Computing return from rewards
• episodic (vs. continuing) tasks
– “game over” after N steps
– optimal policy depends on N; harder to analyze
• additive rewards
– V(s0, s1, …) = r(s0) + r(s1) + r(s2) + …
– infinite value for continuing tasks
• discounted rewards
– V(s0, s1, …) = r(s0) + γ·r(s1) + γ²·r(s2) + …
– value bounded if rewards bounded
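As a quick sketch, the discounted return of a finite episode can be computed directly from the formula above (the function name is illustrative):

    def discounted_return(rewards, gamma):
        """r(s0) + gamma*r(s1) + gamma^2*r(s2) + ... for a finite list of rewards."""
        return sum((gamma ** t) * rew for t, rew in enumerate(rewards))

    # e.g. discounted_return([0, 0, 1], gamma=0.9) == 0.81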
Value functions
• state value function: Vπ(s)
– expected return when starting in s and following π
• state-action value function: Qπ(s,a)
– expected return when starting in s, performing a, and following π
• useful for finding the optimal policy
– can estimate from experience
– pick the best action using Qπ(s,a)
• Bellman equation
[backup diagram: state s, action a, reward r, successor state s’]
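The Bellman equation referenced above appears as a figure in the original deck; with this deck’s reward convention (reward depending only on the state) it reads Vπ(s) = r(s) + γ · Σa π(s,a) · Σs’ P(s,a,s’) · Vπ(s’). A hedged sketch of one such backup in Python, reusing the assumed containers from the MDP sketch and a stochastic policy table pi[(s, a)]:

    def bellman_backup(s, V, pi, A, P, r, gamma):
        """One Bellman backup for state s: reward plus discounted expected successor value."""
        expected_next = 0.0
        for a in A:
            for s_next, prob in P[(s, a)]:
                expected_next += pi[(s, a)] * prob * V[s_next]
        return r[s] + gamma * expected_next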
Optimal value functions
• there’s a set of optimal policies
– Vπ defines a partial ordering on policies
– they share the same optimal value function
• Bellman optimality equation
– system of n non-linear equations
– solve for V*(s)
– easy to extract the optimal policy
• having Q*(s,a) makes it even simpler
[backup diagram: state s, action a, reward r, successor state s’]
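As a sketch of why Q* makes extraction even simpler: the optimal action in each state is just an argmax over the Q table, with no model needed (the names follow the earlier sketches):

    def greedy_policy(Q, S, A):
        """Extract a deterministic policy by picking the best action in every state."""
        return {s: max(A, key=lambda a: Q[(s, a)]) for s in S}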
Outline
• examples
• defining an RL problem
– Markov Decision Processes
• solving an RL problem
– Dynamic Programming
– Monte Carlo methods
– Temporal-Difference learning
Dynamic programming
• main idea
– use value functions to structure the search for good policies
– need a perfect model of the environment
• two main components
– policy evaluation: compute Vπ from π
– policy improvement: improve π based on Vπ
– start with an arbitrary policy
– repeat evaluation/improvement until convergence
Policy evaluation/improvement
• policy evaluation: π -> Vπ
– Bellman eqn’s define a system of n eqn’s
– could solve, but will use iterative version
– start with an arbitrary value function V0, iterate until Vk converges
• policy improvement: Vπ -> π’
– π’ is either strictly better than π, or π’ is optimal (if π = π’)
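A hedged sketch of iterative policy evaluation for a deterministic policy pi (a dict mapping state to action); it reuses the assumed P, r, gamma containers from the MDP sketch and sweeps until the value function stops changing:

    def policy_evaluation(pi, S, P, r, gamma, theta=1e-6):
        V = {s: 0.0 for s in S}                    # arbitrary initial value function V0
        while True:
            delta = 0.0
            for s in S:
                v_new = r[s] + gamma * sum(p * V[s2] for s2, p in P[(s, pi[s])])
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < theta:                      # Vk has (approximately) converged
                return V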
Policy/Value iteration
• Policy iteration
– two nested iterations; too slow
– don’t need to converge to Vπk
• just move towards it
• Value iteration
– use Bellman optimality equation as an update
– converges to V*
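A hedged sketch of value iteration: the same sweep as policy evaluation, but using the Bellman optimality equation as the update (a max over actions instead of following a fixed policy):

    def value_iteration(S, A, P, r, gamma, theta=1e-6):
        V = {s: 0.0 for s in S}
        while True:
            delta = 0.0
            for s in S:
                v_new = r[s] + gamma * max(
                    sum(p * V[s2] for s2, p in P[(s, a)]) for a in A)
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < theta:
                return V    # approximates V*; act greedily w.r.t. it to get the policy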
Using DP
• need complete model of the environment and rewards
– robot in a room
• state space, action space, transition model
• can we use DP to solve
– robot in a room?
– backgammon?
– helicopter?
Outline
• examples
• defining an RL problem
– Markov Decision Processes
• solving an RL problem
– Dynamic Programming
– Monte Carlo methods
– Temporal-Difference learning
• miscellaneous
– state representation
– function approximation
– rewards
Monte Carlo methods
• don’t need full knowledge of environment
– just experience, or
– simulated experience
• but similar to DP
– policy evaluation, policy improvement
• averaging sample returns
– defined only for episodic tasks
Monte Carlo policy evaluation
• want to estimate Vπ(s) = expected return starting from s and following π
– estimate as average of observed returns in state s
• first-visit MC
– average returns following the first visit to state s
• example
[diagram: four sample episodes starting at s0 and passing through state s, with per-step rewards]
– returns following the first visit to s: R1(s) = +2, R2(s) = +1, R3(s) = -5, R4(s) = +4
– Vπ(s) ≈ (2 + 1 – 5 + 4)/4 = 0.5
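A hedged sketch of first-visit MC evaluation; the episode format is an assumption (each episode is a list of (state, reward) pairs, with the reward received on leaving that state):

    from collections import defaultdict

    def first_visit_mc(episodes, gamma=1.0):
        """Estimate V(s) as the average return following the first visit to s."""
        returns = defaultdict(list)
        for episode in episodes:                 # episode = [(s0, r0), (s1, r1), ...]
            G, first_returns = 0.0, {}
            for s, rew in reversed(episode):     # accumulate returns backwards
                G = rew + gamma * G
                first_returns[s] = G             # later overwrites keep the FIRST visit
            for s, G in first_returns.items():
                returns[s].append(G)
        return {s: sum(g) / len(g) for s, g in returns.items()}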
Monte Carlo control
• Vπ not enough for policy improvement
– need exact model of environment
• estimate Qπ(s,a)
• MC control
– update after each episode
• non-stationary environment
• a problem
– greedy policy won’t explore all actions
Maintaining exploration
• deterministic/greedy policy won’t explore all actions
– don’t know anything about the environment at the beginning
– need to try all actions to find the optimal one
• maintain exploration
– use soft policies instead: π(s,a) > 0 (for all s,a)
• ε-greedy policy
– with probability 1-ε perform the optimal/greedy action
– with probability ε perform a random action
– will keep exploring the environment
– slowly move it towards greedy policy: ε -> 0
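A minimal sketch of the ε-greedy choice over a Q table (the names are illustrative):

    import random

    def epsilon_greedy(Q, s, A, epsilon):
        if random.random() < epsilon:
            return random.choice(A)                  # explore: random action
        return max(A, key=lambda a: Q[(s, a)])       # exploit: greedy action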
Simulated experience
• 5-card draw poker
– s0: A, A, 6, A, 2
– a0: discard 6, 2
– s1: A, A, A, A, 9 + dealer takes 4 cards
– return: +1 (probably)
• DP
– list all states, actions, compute P(s,a,s’)
• P( [A,A,6,A,2], [6,2], [A,9,4] ) = 0.00192
• MC
– all you need are sample episodes
– let MC play against a random policy, or itself, or another
algorithm
Summary of Monte Carlo
• don’t need model of environment
– averaging of sample returns
– only for episodic tasks
• learn from sample episodes or simulated experience
• can concentrate on “important” states
– don’t need a full sweep
• need to maintain exploration
– use soft policies
Outline
• examples
• defining an RL problem
– Markov Decision Processes
• solving an RL problem
– Dynamic Programming
– Monte Carlo methods
– Temporal-Difference learning
• miscellaneous
– state representation
– function approximation
– rewards
Temporal Difference Learning
• combines ideas from MC and DP
– like MC: learn directly from experience (don’t need a model)
– like DP: learn from values of successors
– works for continuing tasks, usually faster than MC
• constant-alpha MC
– have to wait until the end of the episode to update
– V(st) <- V(st) + α [ Rt - V(st) ], where the target Rt is the actual return from time t
• simplest TD (TD(0))
– update after every step, based on the successor
– V(st) <- V(st) + α [ rt + γ V(st+1) - V(st) ], where the target is rt + γ V(st+1)
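A one-step sketch of the TD(0) update written out in code (V is an assumed dict of state values, alpha the step size):

    def td0_update(V, s, reward, s_next, alpha, gamma):
        """Move V(s) toward the TD target r + gamma * V(s')."""
        td_target = reward + gamma * V[s_next]
        V[s] += alpha * (td_target - V[s])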
MC vs. TD
• observed the following 8 episodes:
A – 0, B – 0;   B – 1;   B – 1;   B – 1;   B – 1;   B – 1;   B – 1;   B – 0
• MC and TD agree on V(B) = 3/4
• MC: V(A) = 0
– converges to values that minimize the error on the training data
• TD: V(A) = 3/4
– converges to the ML estimate of the Markov process
[diagram of that estimate: A -> B with r = 0 (100%); from B, r = 1 with probability 75% and r = 0 with probability 25%]
Sarsa
• again, need Q(s,a), not just V(s)
• control
– start with a random policy
– update Q and π after each step
– again, need ε-soft policies
[diagram: trajectory st, at, rt, st+1, at+1, rt+1, st+2, at+2, …]
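A hedged sketch of one Sarsa step, matching the TD(0) sketch above but on a Q table keyed by (state, action); the target uses the action actually taken in the next state:

    def sarsa_update(Q, s, a, reward, s_next, a_next, alpha, gamma):
        """On-policy TD control update."""
        td_target = reward + gamma * Q[(s_next, a_next)]
        Q[(s, a)] += alpha * (td_target - Q[(s, a)])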
The RL Intro book
Richard Sutton, Andrew Barto
Reinforcement Learning,
An Introduction
http://www.cs.ualberta.ca/~sutton/book/the-book.html
Backup slides
Q-learning
• before: on-policy algorithms
– start with a random policy, iteratively improve
– converge to optimal
• Q-learning: off-policy
– use any policy to estimate Q
– Q directly approximates Q* (Bellman optimality eqn)
– independent of the policy being followed
– only requirement: keep updating each (s,a) pair
• compare with Sarsa: the Sarsa target uses the next action actually taken, while the Q-learning target maxes over next actions
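For contrast, a hedged sketch of the Q-learning update; the only change from the Sarsa sketch is the max over next actions in the target, independent of the behavior policy:

    def q_learning_update(Q, s, a, reward, s_next, A, alpha, gamma):
        """Off-policy TD control update."""
        td_target = reward + gamma * max(Q[(s_next, a2)] for a2 in A)
        Q[(s, a)] += alpha * (td_target - Q[(s, a)])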
Outline
• examples
• defining an RL problem
– Markov Decision Processes
• solving an RL problem
– Dynamic Programming
– Monte Carlo methods
– Temporal-Difference learning
• miscellaneous
– state representation
– function approximation
– rewards
State representation
• pole-balancing
– move car left/right to keep the pole balanced
• state representation
– position and velocity of car
– angle and angular velocity of pole
• what about Markov property?
– would need more info
– noise in sensors, temperature, bending of pole
• solution
– coarse discretization of 4 state variables
• left, center, right
– totally non-Markov, but still works
Function approximation
• represent Vt as a parameterized function
– linear regression, decision tree, neural net, …
– linear regression: represent Vt(s) as a weighted sum of state features
• update parameters instead of entries in a table
– better generalization
• fewer parameters and updates affect “similar” states as well
• TD update
– treat as one data point for regression
– want method that can learn on-line (update after each step)
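A hedged sketch of the TD update with a linear value function (semi-gradient TD(0)); the feature function phi and weight vector w are assumptions, not names from the slides:

    import numpy as np

    def linear_td0_update(w, phi, s, reward, s_next, alpha, gamma):
        """V(s) = w . phi(s); treat the TD target as one regression data point and nudge w."""
        td_error = reward + gamma * (w @ phi(s_next)) - (w @ phi(s))
        w += alpha * td_error * phi(s)     # gradient of V(s) w.r.t. w is phi(s)
        return w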
Features
• tile coding, coarse coding
– binary features
• radial basis functions
– typically a Gaussian
– between 0 and 1
[ Sutton & Barto, Reinforcement Learning ]
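A small sketch of a single Gaussian radial-basis feature (the center and width are illustrative parameters); its value lies between 0 and 1 and peaks at the center:

    import numpy as np

    def rbf_feature(s, center, width):
        """Gaussian RBF feature of state s."""
        diff = np.asarray(s, dtype=float) - np.asarray(center, dtype=float)
        return float(np.exp(-np.dot(diff, diff) / (2.0 * width ** 2)))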
Splitting and aggregation
• want to discretize the state space
– learn the best discretization during training
• splitting of state space
– start with a single state
– split a state when different parts of that state have different
values
• state aggregation
– start with many states
– merge states with similar values
Designing rewards
• robot in a maze
– episodic task, not discounted, +1 when out, 0 for each step
• chess
– GOOD: +1 for winning, -1 for losing
– BAD: +0.25 for taking opponent’s pieces
• can accumulate high reward even while losing the game
• rewards
– rewards indicate what we want to accomplish
– NOT how we want to accomplish it
• shaping
– positive reward often very “far away”
– rewards for achieving subgoals (domain knowledge)
– also: adjust initial policy or initial value function
Case study: Backgammon
• rules
– 30 pieces, 24 locations
– roll 2, 5: move 2, 5
– hitting, blocking
– branching factor: 400
• implementation
– use TD(λ) and neural nets
– 4 binary features for each position on board (# white pieces)
– no BG expert knowledge
• results
– TD-Gammon 0.0: trained against itself (300,000 games)
• as good as best previous BG computer program (also by Tesauro)
• that program relied on a lot of expert input and hand-crafted features
– TD-Gammon 1.0: add special features
– TD-Gammon 2 and 3 (2-ply and 3-ply search)
• 1.5M games, beat human champion
Summary
• Reinforcement learning
– use when you need to make decisions in an uncertain environment
• solution methods
– dynamic programming
• need complete model
– Monte Carlo
– temporal-difference learning (Sarsa, Q-learning)
• where most of the work goes
– the algorithms themselves are simple
– designing features, the state representation, and the rewards is the hard part
Editor's Notes

• #4: Want to learn a policy (what’s the solution?). Can we learn it using (un)supervised learning? Why not? So how do we learn it? Any ideas? Let the robot explore the environment.
• #8: Want to learn a policy (what’s the solution?). Can we learn it using (un)supervised learning? Why not? So how do we learn it? Any ideas? Let the robot explore the environment.
• #44: Maybe show the pendulum?