Policy Gradient Explained (with Code)

This post explains the policy gradient method in reinforcement learning: its basic idea, mathematical derivation, improvements, and common variants, together with a PyTorch implementation.


1 Introduction

The policy gradient is a policy-based (probabilistic) method in reinforcement learning. At each step the agent observes the current state of the environment and directly outputs a probability for each possible next action, then samples its action from that distribution; every action can be chosen, just with different probabilities. The agent therefore learns the state-conditioned action distribution directly. In practice this distribution is represented by a neural network: given a state, the network outputs a distribution over actions. The algorithm optimizes the policy directly, so that the policy obtains the maximum reward.

2 The Policy Gradient Method

Consider a stochastic parameterized policy $\pi_\theta$. The main goal of reinforcement learning is to maximize the expected return

$$J(\pi_\theta)=\mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$$

where $\tau=(s_0,a_0,s_1,a_1,\cdots,s_{T+1})$ is a trajectory, with $s_t$ and $a_t$ the state and action at step $t$. The return of a trajectory over $T$ steps is $R(\tau)=\sum_{t=0}^{T} r_t$, where $r_t$ is the reward at step $t$. The policy is optimized by gradient ascent:

$$\theta_{k+1}=\theta_k+\alpha \,\nabla_\theta J(\pi_\theta)\big|_{\theta_k}$$

where $\nabla_\theta J(\pi_\theta)$ is the policy gradient. Its explicit form can be derived as follows. Given a policy $\pi_\theta$, the probability of a trajectory $\tau$ is

$$P(\tau|\theta)=\rho_0(s_0)\prod_{t=0}^{T} P(s_{t+1}|s_t,a_t)\,\pi_\theta(a_t|s_t)$$

where $\rho_0(\cdot)$ is the initial-state distribution. By the log-derivative trick,

$$\nabla_\theta P(\tau|\theta)=P(\tau|\theta)\,\nabla_\theta \log P(\tau|\theta)$$

The log-probability of a trajectory is

$$\log P(\tau|\theta)=\log \rho_0(s_0)+\sum_{t=0}^{T}\left(\log P(s_{t+1}|s_t,a_t)+\log \pi_\theta(a_t|s_t)\right)$$

Because $\rho_0(s_0)$ and $P(s_{t+1}|s_t,a_t)$ do not depend on the policy parameters $\theta$, their gradients are $0$, so the gradient of the trajectory log-probability is

$$\begin{aligned}\nabla_\theta \log P(\tau|\theta)&=\nabla_\theta \log \rho_0(s_0)+\sum_{t=0}^{T}\left(\nabla_\theta \log P(s_{t+1}|s_t,a_t)+\nabla_\theta \log \pi_\theta(a_t|s_t)\right)\\&=\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)\end{aligned}$$

Putting everything together:

$$\begin{aligned}\nabla_\theta J(\pi_\theta)&=\nabla_\theta \mathbb{E}_{\tau\sim\pi_\theta}[R(\tau)]\\&=\nabla_\theta \int_\tau P(\tau|\theta)R(\tau)\\&=\int_\tau \nabla_\theta P(\tau|\theta)R(\tau)\\&=\int_\tau P(\tau|\theta)\,\nabla_\theta \log P(\tau|\theta)R(\tau)\\&=\mathbb{E}_{\tau\sim\pi_\theta}[\nabla_\theta \log P(\tau|\theta)R(\tau)]\\&=\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)\right]\end{aligned}$$

This is an expectation, so it can be estimated by a Monte Carlo sample mean. Given a set of trajectories $\mathcal{D}=\{\tau_i\}_{i=1,\cdots,N}$, each collected by running the policy $\pi_\theta$ in the environment, the estimated policy gradient is

$$\hat{g}=\frac{1}{|\mathcal{D}|}\sum_{\tau\in\mathcal{D}}\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)$$

where $|\mathcal{D}|$ is the number of trajectories in $\mathcal{D}$.
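As a concrete sanity check (a sketch, not part of the original implementation), $\hat{g}$ can be computed with automatic differentiation: the gradient of the surrogate objective $\frac{1}{|\mathcal{D}|}\sum_\tau \log\pi_\theta(a)\,R(\tau)$ is exactly $\hat{g}$. Here a "trajectory" is assumed to be a single action in a toy two-armed bandit, so the true gradient is easy to verify by hand.

```python
import torch

# Toy setup (an assumption for this sketch): a one-step "trajectory" is a
# single action from a 2-action softmax policy; action 0 yields reward 1.0,
# action 1 yields reward 0.0. With theta = 0, pi = [0.5, 0.5] and the true
# gradient of J = sum_a pi(a) R(a) works out to [0.25, -0.25].
torch.manual_seed(0)
theta = torch.zeros(2, requires_grad=True)
rewards = torch.tensor([1.0, 0.0])

N = 5000  # |D|, the number of sampled trajectories
dist = torch.distributions.Categorical(logits=theta)
actions = dist.sample((N,))

# Surrogate objective whose gradient is the Monte Carlo estimate g_hat.
surrogate = (dist.log_prob(actions) * rewards[actions]).mean()
surrogate.backward()

g_hat = theta.grad
print(g_hat)  # close to [0.25, -0.25]
```

Sampling is not differentiated; only the `log_prob` terms carry gradients, which is exactly the structure of the estimator above.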

3 The EGLP Lemma

An intermediate result from the derivation above is the Expected Grad-Log-Prob (EGLP) lemma.

EGLP lemma: Let $P_\theta$ be a parameterized probability distribution over a random variable $x$. Then

$$\mathbb{E}_{x\sim P_\theta}[\nabla_\theta \log P_\theta(x)]=0$$

Proof: Since $P_\theta$ is a probability distribution,

$$\int_x P_\theta(x)=1$$

Taking the gradient of both sides,

$$\nabla_\theta \int_x P_\theta(x)=\nabla_\theta 1 = 0$$

By the log-derivative trick,

$$\begin{aligned}0&=\nabla_\theta \int_x P_\theta(x)\\&=\int_x \nabla_\theta P_\theta(x)\\&=\int_x P_\theta(x)\,\nabla_\theta \log P_\theta(x)\end{aligned}$$

Therefore

$$\mathbb{E}_{x\sim P_\theta}[\nabla_\theta \log P_\theta(x)]=0$$
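The lemma is easy to verify numerically. The sketch below (illustrative only) takes $P_\theta$ to be a 3-way categorical distribution and checks that the sample mean of $\nabla_\theta \log P_\theta(x)$ vanishes:

```python
import torch

# P_theta: a categorical distribution over 3 outcomes, parameterized by logits
# (arbitrary values chosen for this check).
torch.manual_seed(0)
theta = torch.tensor([0.2, -0.1, 0.5], requires_grad=True)
dist = torch.distributions.Categorical(logits=theta)

# Sample x ~ P_theta and average grad_theta log P_theta(x) over the samples.
N = 20000
xs = dist.sample((N,))
dist.log_prob(xs).mean().backward()

print(theta.grad)  # every component close to 0, as the lemma predicts
```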

4 An Improved Policy Gradient

Recall the policy gradient

$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)\right]$$

In this formula, every action in a trajectory is reinforced by the same fixed weight, the total return $R(\tau)$, which runs against intuition. An agent should update its behavior according to the consequences that follow an action, independently of whatever rewards were collected before the action was taken. This gives the improved form of the policy gradient:

$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)\sum_{t'=t}^{T}R(s_{t'},a_{t'},s_{t'+1})\right]$$

Here an action is reinforced only by the rewards obtained after it is taken. The quantity

$$\hat{R}_t=\sum_{t'=t}^{T}R(s_{t'},a_{t'},s_{t'+1})$$

is the reward accumulated from step $t$ onward (the "reward-to-go").
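The reward-to-go weights $\hat{R}_t$ are just suffix sums of the reward sequence and can be computed in one backward pass. A minimal sketch of the undiscounted case (the implementation in Section 6 uses a discounted variant of the same loop):

```python
def reward_to_go(rewards):
    """R_hat[t] = sum of rewards from step t to the end (undiscounted)."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += rewards[t]
        rtg[t] = running
    return rtg

# Each step is credited only with the rewards that come after it.
print(reward_to_go([1.0, 2.0, 3.0]))  # [6.0, 5.0, 3.0]
```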

By the EGLP lemma, for any function $b(\cdot)$ that depends only on the state,

$$\mathbb{E}_{a_t\sim\pi_\theta}[\nabla_\theta \log \pi_\theta(a_t|s_t)\,b(s_t)]=0$$

so such a function can be added to or subtracted from the improved policy gradient without changing its expectation:

$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)\left(\sum_{t'=t}^{T}R(s_{t'},a_{t'},s_{t'+1})-b(s_t)\right)\right]$$

Here $b(s)$ is called the baseline. The most common choice of baseline is the state-value function $V^\pi(s)$.
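A small numerical illustration of why the baseline helps (a sketch with made-up numbers, not part of the original code): subtracting a baseline leaves the mean of the gradient samples unchanged, but can shrink their variance dramatically when all rewards share a large common offset.

```python
import torch

# One-state, 2-action bandit where both actions get large positive rewards.
torch.manual_seed(0)
probs = torch.tensor([0.5, 0.5])
rewards = torch.tensor([10.0, 12.0])

N = 10000
a = torch.distributions.Categorical(probs=probs).sample((N,))

# d/d theta_0 log pi(a) for a softmax policy is 1{a=0} - pi_0 (by hand).
grad_log = (a == 0).float() - probs[0]

baseline = rewards.mean()  # a constant baseline here; V^pi(s) in general
g_no_base = grad_log * rewards[a]
g_base = grad_log * (rewards[a] - baseline)

# Same mean (the estimate stays unbiased), far smaller variance with baseline.
print(g_no_base.mean().item(), g_base.mean().item())
print(g_no_base.var().item(), g_base.var().item())
```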

5 Other Forms of the Policy Gradient

The general form of the policy gradient is

$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)\,\Phi_t\right]$$

Different choices of the weight $\Phi_t$ give the common variants:

  • Total return of the trajectory $\tau$: $\Phi_t=R(\tau)$
  • Reward-to-go of $\tau$ from step $t$: $\Phi_t=\sum\limits_{t'=t}^{T} R(s_{t'},a_{t'},s_{t'+1})$
  • State-value function: $\Phi_t=V^\pi(s)=\mathbb{E}_{\tau\sim\pi}[R(\tau)\mid s_0=s]$
  • Action-value function: $\Phi_t=Q^\pi(s,a)=\mathbb{E}_{\tau\sim\pi}[R(\tau)\mid s_0=s,a_0=a]$
  • Advantage function: $\Phi_t=A^\pi(s_t,a_t)=Q^\pi(s_t,a_t)-V^\pi(s_t)$
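For intuition on the last variant (a toy illustration with made-up numbers): in a one-step problem the advantage simply measures how much better each action is than the policy's average.

```python
# One-step bandit with two actions under a uniform policy (assumed values).
pi = [0.5, 0.5]
Q = [10.0, 12.0]                          # Q(s, a): expected return per action
V = sum(p * q for p, q in zip(pi, Q))     # V(s): policy-weighted average = 11.0
A = [q - V for q in Q]                    # advantage of each action

print(A)  # [-1.0, 1.0]: action 1 is better than average, action 0 worse
```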

6 Code

The PyTorch implementation below, in the file RL_template.py, implements the policy gradient in the reward-to-go form

$$\nabla_\theta J(\pi_\theta)=\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(a_t|s_t)\sum_{t'=t}^{T}R(s_{t'},a_{t'},s_{t'+1})\right]$$

with an additional discount factor $\gamma$ and return normalization, as is common in practice.

import numpy as np  
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical


class Policy(nn.Module):
    """Policy network: maps a state to a probability distribution over actions."""

    def __init__(self, in_features=4, hid_features=128, out_features=2, pro=0.6):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(in_features, hid_features)
        self.dropout = nn.Dropout(p=pro)
        self.fc2 = nn.Linear(hid_features, out_features)

    def forward(self, x):
        x = self.fc1(x)
        x = self.dropout(x)
        x = F.relu(x)
        action_scores = self.fc2(x)
        return F.softmax(action_scores, dim=1)


class PolicyGradient(object):
    def __init__(
        self,
        policy_net,
        learning_rate=0.01,
        reward_decay=0.95
    ):
        self.policy_net = policy_net
        self.lr = learning_rate
        self.gamma = reward_decay
        # Build the optimizer once, with the configured learning rate,
        # instead of recreating it (with a hardcoded lr) on every episode.
        self.optimizer = optim.Adam(self.policy_net.parameters(), lr=self.lr)

        self.ep_ss = []        # states of the current episode
        self.ep_as = []        # actions of the current episode
        self.ep_rs = []        # rewards of the current episode
        self.ep_log_pros = []  # log pi(a_t | s_t) of the current episode

    def choose_action(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.policy_net(state)
        m = Categorical(probs)
        action = m.sample()
        # m.log_prob(action) <===> probs.log()[0][action.item()].unsqueeze(0)
        self.ep_log_pros.append(m.log_prob(action))
        return action.item()

    def store_transition(self, s, a, r):
        self.ep_ss.append(s)
        self.ep_as.append(a)
        self.ep_rs.append(r)

    def eposide_learning(self):
        eps = np.finfo(np.float32).eps.item()
        R = 0
        policy_loss = []
        returns = []

        # Discounted reward-to-go, computed backwards over the episode.
        for r in self.ep_rs[::-1]:
            R = r + self.gamma * R
            returns.insert(0, R)

        # Normalize the returns to zero mean and unit variance for stability.
        returns = torch.tensor(returns)
        returns = (returns - returns.mean()) / (returns.std() + eps)

        # Surrogate loss: -log pi(a_t | s_t) * R_hat_t, summed over the episode.
        for log_prob, R in zip(self.ep_log_pros, returns):
            policy_loss.append(-log_prob * R)

        policy_loss = torch.cat(policy_loss).sum()

        self.optimizer.zero_grad()
        policy_loss.backward()
        self.optimizer.step()

        # Clear the episode buffers.
        del self.ep_rs[:]
        del self.ep_log_pros[:]
        del self.ep_as[:]
        del self.ep_ss[:]

The following script runs the algorithm against different game environments (it uses the classic Gym API, where env.reset() returns the state and env.step() returns four values).

import argparse

import gym
import torch

from RL_template import PolicyGradient, Policy


parser = argparse.ArgumentParser(description='PyTorch REINFORCE example')
# store_true: rendering is off by default and enabled by passing --render
# (the original store_false inverted the flag's meaning).
parser.add_argument('--render', action='store_true')
parser.add_argument('--episodes', type=int, default=1000)
parser.add_argument('--steps_per_episode', type=int, default=100)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', type=int, default=543)
args = parser.parse_args()

# env = gym.make('CartPole-v1')
env = gym.make('MountainCar-v0')
env.seed(args.seed)
torch.manual_seed(args.seed)

# Size the network to match the chosen environment.
policy_net = Policy(
    in_features=env.observation_space.shape[0],
    out_features=env.action_space.n
)

print(env.action_space)
print(env.observation_space)
print(env.observation_space.high)
print(env.observation_space.low)


RL = PolicyGradient(
    policy_net=policy_net,
    learning_rate=0.002,
    reward_decay=args.gamma)

for episode in range(args.episodes):

    state, ep_reward = env.reset(), 0

    while True:
    # for t in range(args.steps_per_episode):
        if args.render:
            env.render()

        action = RL.choose_action(state)

        next_state, reward, done, info = env.step(action)

        # Store the transition that was just experienced (state before the
        # action, not the state it led to).
        RL.store_transition(state, action, reward)
        state = next_state

        ep_reward += reward

        if done:
            break

    RL.eposide_learning()

    print('Episode {}\tLast reward: {:.2f}'.format(episode, ep_reward))