Before discussing methods based on a Stochastic Policy, we need to review Policy Gradient methods. At the heart of Policy Gradient lies a very important theorem: the Policy Gradient Theorem.
- Theorem:
For any differentiable policy $\pi_{\theta}(a|s)$ and for any of the policy objective functions $J = J_{1}, J_{avR}, J_{avV}$, the policy gradient is:
$$ \frac{\partial J(\theta)}{\partial \theta} = \mathbb{E}_{\pi_{\theta}}\left[\frac{\partial \log \pi_{\theta}(a|s)}{\partial \theta}Q^{\pi_{\theta}}(s,a)\right] $$
This is also the core gradient formula of the Stochastic Policy setting. You do not need to know how to prove it, but you should understand the idea behind it.
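To make the theorem concrete, here is a minimal PyTorch sketch of the sampled gradient estimate it suggests, with the sampled return standing in for $Q^{\pi_{\theta}}(s,a)$. The network architecture, tensor names, and dummy batch are illustrative assumptions, not anything prescribed by the text.

```python
import torch
import torch.nn as nn

# A minimal sketch: estimate the policy gradient from sampled
# (state, action, return) tuples, with the sampled return G_t standing in
# for Q^{pi_theta}(s, a). Sizes and names are illustrative.
obs_dim, n_actions = 4, 2
policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))

def policy_gradient_loss(states, actions, returns):
    log_probs = torch.log_softmax(policy(states), dim=-1)           # log pi_theta(.|s)
    log_pi_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)  # log pi_theta(a|s)
    # Minimizing the negative ascends E[log pi_theta(a|s) * Q(s,a)]
    return -(log_pi_a * returns).mean()

# Dummy batch (in practice these come from rollouts of pi_theta):
states = torch.randn(8, obs_dim)
actions = torch.randint(0, n_actions, (8,))
returns = torch.randn(8)
policy_gradient_loss(states, actions, returns).backward()
```

After `backward()`, the parameter gradients hold a Monte-Carlo estimate of the expectation in the theorem.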
Policy Network Gradients
If our Policy is approximated by a Neural Network whose last layer is a Softmax, the output action probabilities take the following functional form:
$$ \pi_{\theta}(a | s)=\frac{e^{f_{\theta}(s, a)}}{\sum_{a^{\prime}} e^{f_{\theta}\left(s, a^{\prime}\right)}} $$
where $f_{\theta}(s,a)$ is the score function of a state-action pair parametrized by $\theta$, which can be implemented with a neural net.
The gradient of its log-form can be written as:
$$ \begin{aligned} \frac{\partial \log \pi_{\theta}(a | s)}{\partial \theta} &=\frac{\partial f_{\theta}(s, a)}{\partial \theta}-\frac{1}{\sum_{a^{\prime}} e^{f_{\theta}\left(s, a^{\prime}\right)}} \sum_{a^{\prime \prime}} e^{f_{\theta}\left(s, a^{\prime \prime}\right)} \frac{\partial f_{\theta}\left(s, a^{\prime \prime}\right)}{\partial \theta} \\ &=\frac{\partial f_{\theta}(s, a)}{\partial \theta}-\mathbb{E}_{a^{\prime} \sim \pi_{\theta}\left(a^{\prime} | s\right)}\left[\frac{\partial f_{\theta}\left(s, a^{\prime}\right)}{\partial \theta}\right] \end{aligned} $$
The last equality is interesting: it takes the gradient of the score function for one particular action $a$ and subtracts the expected gradient over all actions. (It is worth pausing to think about what this means.)
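As a quick sanity check of this identity, the sketch below takes the simplest case where the scores $f_{\theta}(s,\cdot)$ are the parameters themselves (one logit per action for a fixed state), so $\partial f_{\theta}(s,a)/\partial \theta_{j} = \mathbf{1}[a=j]$, and compares the analytic expression with a finite-difference gradient. All numbers are made up for illustration.

```python
import numpy as np

# Check: grad log softmax(f)_a = grad f(s,a) - E_{a'~pi}[grad f(s,a')],
# in the case f_theta(s,.) = theta (one logit per action, fixed state).
rng = np.random.default_rng(0)
logits = rng.normal(size=5)                  # f_theta(s, .) for one state
pi = np.exp(logits) / np.exp(logits).sum()   # pi_theta(.|s) via softmax
a = 2                                        # one particular action

analytic = np.eye(5)[a] - pi                 # one-hot(a) minus expectation term

# Finite-difference gradient of log pi_theta(a|s) w.r.t. the logits
log_pi = lambda th: th[a] - np.log(np.exp(th).sum())
eps, numeric = 1e-6, np.zeros(5)
for j in range(5):
    bumped = logits.copy()
    bumped[j] += eps
    numeric[j] = (log_pi(bumped) - log_pi(logits)) / eps

print(np.allclose(analytic, numeric, atol=1e-4))   # True
```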
Looking into Policy Gradient
In Policy Network Gradients we pushed the policy through a Softmax and differentiated it, obtaining a gradient. That gradient still has to be multiplied by the advantage function before it is used for updates, so let us first revisit the Policy Gradient method.
- Let $R(\pi)$ denote the expected return of $\pi$:
$$ R(\pi) = \mathbb{E}_{s_{0} \sim \rho_{0},a_{t} \sim \pi(\cdot|s_{t})} [\sum_{t=0}^{\infty} \gamma^{t}r_{t}] $$
- We collect experience data with another policy $\pi_{old}$, and want to optimize some objective to obtain a new, better policy $\pi$.
Because reinforcement learning needs a large amount of data, we must improve data efficiency; data collected under past policies should therefore still be reused, which also makes this an off-policy approach.
- Note the following useful identity:
$$ R(\pi) = R(\pi_{old}) + \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty}\gamma^{t} A^{\pi_{old}}(s_{t},a_{t})] $$
Here $\mathbb{E}_{\tau \sim \pi}$ denotes an expectation over trajectories sampled from $\pi$, and $A^{\pi_{old}}$ is the advantage function.
The advantage function expands as follows:
$$ A^{\pi_{old}}(s,a) = \mathbb{E}_{s^{\prime} \sim \rho(s^{\prime}|s,a)}[r(s) + \gamma V^{\pi_{old}}(s^{\prime} )-V^{\pi_{old}}(s)] $$
You will often see more compact expressions such as $A(s,a) = Q(s,a) - V(s)$, where $V(s)$ plays the role of a baseline and $A(s,a)$ measures how good each action choice is, with $V^{\pi}(s) = \sum_{a}\pi(a|s)Q^{\pi}(s,a)$.
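To connect the expanded form to data, here is a small PyTorch sketch that computes one-step advantage estimates $r + \gamma V(s') - V(s)$ from a batch of transitions, using a value network as the baseline. The value network, tensor names, and dummy batch are illustrative assumptions; in practice $V$ would be trained on observed returns.

```python
import torch
import torch.nn as nn

# One-step advantage estimate A(s,a) ~= r + gamma * V(s') - V(s),
# with a (hypothetical, untrained) value network as the baseline V.
obs_dim, gamma = 4, 0.99
value_net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, 1))

def one_step_advantage(states, rewards, next_states, dones):
    with torch.no_grad():
        v = value_net(states).squeeze(-1)
        v_next = value_net(next_states).squeeze(-1)
    # Terminal transitions bootstrap from 0 instead of V(s')
    return rewards + gamma * (1.0 - dones) * v_next - v

# Dummy batch of transitions (s, r, s', done):
states, next_states = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
rewards, dones = torch.randn(8), torch.zeros(8)
advantages = one_step_advantage(states, rewards, next_states, dones)  # shape (8,)
```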
Proof
$$ R(\pi) = R(\pi_{old}) + \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty}\gamma^{t} A^{\pi_{old}}(s_{t},a_{t})] $$
Proof of the above identity:
$$ \begin{aligned} \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^{t} A^{\pi_{\mathrm{old}}}\left(s_{t}, a_{t}\right)\right] &=\mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^{t}\left(r\left(s_{t}\right)+\gamma V^{\pi_{\mathrm{old}}}\left(s_{t+1}\right)-V^{\pi_{\mathrm{old}}}\left(s_{t}\right)\right)\right] \\ &=\mathbb{E}_{\tau \sim \pi}\left[-V^{\pi_{\mathrm{old}}}\left(s_{0}\right)+\sum_{t=0}^{\infty} \gamma^{t} r\left(s_{t}\right)\right] \\ &=-\mathbb{E}_{s_{0}}\left[V^{\pi_{\mathrm{old}}}\left(s_{0}\right)\right]+\mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r\left(s_{t}\right)\right]\\ &=-R\left(\pi_{\mathrm{old}}\right)+R(\pi) \end{aligned} $$
What is the intuition behind this? We make a small improvement on top of the old policy: the advantage is computed under the old policy, but the actions are chosen by the new policy $\pi$, so the immediate rewards correct the value estimate $V$ obtained under the old policy.
- S. Kakade and J. Langford. Approximately optimal approximate reinforcement learning. ICML, 2002.
More on the Policy Expected Return
Let us take another look at the advantage function:
$$ A^{\pi_{old}}(s,a) = \mathbb{E}_{s^{\prime} \sim \rho(s^{\prime}|s,a)}[r(s) + \gamma V^{\pi_{old}}(s^{\prime} )-V^{\pi_{old}}(s)] $$
We want to manipulate $R(\pi)$ into an objective that can be estimated from data:
$$ \begin{aligned} R(\pi) &=R\left(\pi_{\text {old }}\right)+\mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^{t} A^{\pi_{\text {old }}}\left(s_{t}, a_{t}\right)\right] \\ &=R\left(\pi_{\text {old }}\right)+\sum_{t=0}^{\infty} \sum_{s} P\left(s_{t}=s | \pi\right) \sum_{a} \pi(a | s) \gamma^{t} A^{\pi_{\text {old }}}(s, a) \\ &=R\left(\pi_{\text {old }}\right)+\sum_{s} \sum_{t=0}^{\infty} \gamma^{t} P\left(s_{t}=s | \pi\right) \sum_{a} \pi(a | s) A^{\pi_{\text {old }}}(s, a) \\ &=R\left(\pi_{\text {old }}\right)+\sum_{s} \rho_{\pi}(s) \sum_{a} \pi(a | s) A^{\pi_{\text {old }}}(s, a) \end{aligned} $$
Here $\rho_{\pi}(s) = \sum_{t=0}^{\infty}\gamma^{t}P(s_{t}=s|\pi)$ is the discounted visitation frequency of state $s$ under the current policy $\pi$: the probability that $s$ is visited at step $t$, weighted by $\gamma^{t}$ and summed over all $t$.
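A small tabular sketch of this quantity, under an assumed toy transition matrix induced by a fixed policy: propagate the state distribution forward one step at a time and accumulate the discounted sum.

```python
import numpy as np

# rho_pi(s) = sum_t gamma^t P(s_t = s | pi) for a 3-state toy MDP.
# P_pi[s, s'] is the transition matrix induced by a fixed policy pi;
# the numbers are made up purely for illustration.
gamma = 0.9
rho0 = np.array([1.0, 0.0, 0.0])        # initial state distribution
P_pi = np.array([[0.1, 0.6, 0.3],
                 [0.2, 0.2, 0.6],
                 [0.5, 0.0, 0.5]])      # each row sums to 1

rho, d_t = np.zeros(3), rho0.copy()
for t in range(1000):                   # truncate the infinite sum
    rho += (gamma ** t) * d_t           # add gamma^t * P(s_t = . | pi)
    d_t = d_t @ P_pi                    # distribution of s_{t+1}
print(rho, rho.sum())                   # total mass is ~1 / (1 - gamma) = 10
```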
However, the expression for $R(\pi)$ above still requires sampling data from the new policy $\pi$: every time the policy changes, we would have to collect fresh samples before updating. To get around this, we introduce importance sampling.
$$ \begin{aligned} R(\pi) &=R\left(\pi_{\text {old }}\right)+\sum_{s} \rho_{\pi}(s) \sum_{a} \pi(a | s) A^{\pi_{\text {old }}}(s, a) \\ &=R\left(\pi_{\text {old }}\right)+\mathbb{E}_{s \sim \pi, a \sim \pi}\left[A^{\pi_{\text {old }}}(s, a)\right] \\ &=R\left(\pi_{\text {old }}\right)+\mathbb{E}_{s \sim \pi, a \sim \pi_{\text {old }}}\left[\frac{\pi(a | s)}{\pi_{\text {old }}(a | s)} A^{\pi_{\text {old }}}(s, a)\right] \end{aligned} $$
That is, the states $s$ are sampled from the new policy while the actions $a$ are sampled from the old policy. This step is an exact identity, but the states $s$ still have to be sampled from the new policy.
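The importance-sampling step is easy to verify exactly for one fixed state and a small discrete action space; the sketch below uses made-up probabilities and advantages purely for illustration.

```python
import numpy as np

# Exact check of E_{a ~ pi}[A(s,a)] = E_{a ~ pi_old}[(pi/pi_old) * A(s,a)]
# for a single fixed state s. All numbers are toy values.
rng = np.random.default_rng(0)
pi_old = np.array([0.5, 0.3, 0.2])        # behavior policy pi_old(.|s)
pi_new = np.array([0.2, 0.3, 0.5])        # new policy pi(.|s)
adv = rng.normal(size=3)                  # A^{pi_old}(s, a) for each action

lhs = np.sum(pi_new * adv)                          # E_{a ~ pi}[A]
rhs = np.sum(pi_old * (pi_new / pi_old) * adv)      # E_{a ~ pi_old}[ratio * A]
print(np.isclose(lhs, rhs))                         # True: the identity is exact
```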
Surrogate Loss Function
- Define a surrogate loss function based on sampled data that ignores the change in state distribution:
$$ L(\pi) = \mathbb{E}_{s \sim \pi_{old}, a \sim \pi_{\text {old }}}\left[\frac{\pi(a | s)}{\pi_{\text {old }}(a | s)} A^{\pi_{\text {old }}}(s, a)\right] $$
Now the data can be obtained entirely by sampling from the old policy. The only difference between this surrogate loss function and the previous objective is which policy the states are sampled from. The substitution is justified only when the old policy and the new policy are not too far apart.
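Here is a minimal PyTorch sketch of this surrogate loss on a batch collected under $\pi_{old}$; the policy network, tensor names, and dummy batch are illustrative assumptions. Methods such as TRPO and PPO start from exactly this objective and additionally constrain (or clip) the ratio, which enforces the condition above that $\pi$ stays close to $\pi_{old}$.

```python
import torch
import torch.nn as nn

# Surrogate loss L(pi) = E_{s,a ~ pi_old}[ (pi(a|s) / pi_old(a|s)) * A ],
# estimated on a batch collected under pi_old. Names and sizes are illustrative.
obs_dim, n_actions = 4, 2
policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))

def surrogate_loss(states, actions, old_log_probs, advantages):
    log_probs = torch.log_softmax(policy(states), dim=-1)
    log_pi_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    ratio = torch.exp(log_pi_a - old_log_probs)   # pi(a|s) / pi_old(a|s)
    return -(ratio * advantages).mean()           # negated for gradient descent

# Dummy batch; old_log_probs and advantages are assumed to be stored at
# collection time (here they are faked for illustration).
states = torch.randn(8, obs_dim)
actions = torch.randint(0, n_actions, (8,))
with torch.no_grad():
    old_logp = torch.log_softmax(policy(states), dim=-1)
old_log_probs = old_logp.gather(1, actions.unsqueeze(1)).squeeze(1)
advantages = torch.randn(8)
surrogate_loss(states, actions, old_log_probs, advantages).backward()
```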
- Summary:
Let us briefly summarize the process above. We started with a target function:
$$ R(\pi) =R\left(\pi_{\text {old }}\right)+\mathbb{E}_{s \sim \pi, a \sim \pi}\left[ A^{\pi_{\text {old }}}(s, a) \right] $$
Then, using importance sampling, we turned it into an expectation sampled under the old policy:
$$ L(\pi) = \mathbb{E}_{s \sim \pi_{old}, a \sim \pi_{\text {old }}}\left[\frac{\pi(a | s)}{\pi_{\text {old }}(a | s)} A^{\pi_{\text {old }}}(s, a)\right] $$
Then we differentiate it, and verify that at $\theta_{old}$ the surrogate loss has the same gradient as the true return $R(\pi_{\theta})$:
$$ \begin{aligned} \left.\nabla_{\theta} L\left(\pi_{\theta}\right)\right|_{\theta_{\text {old }}} &=\left.\mathbb{E}_{s \sim \pi_{\text {old }}, a \sim \pi_{\text {old }}}\left[\frac{\nabla_{\theta} \pi_{\theta}(a | s)}{\pi_{\text {old }}(a | s)} A^{\pi_{\text {old }}}(s, a)\right]\right|_{\theta_{\text {old }}} \\ &=\left.\mathbb{E}_{s \sim \pi_{\text {old }}, a \sim \pi_{\text {old }}}\left[\frac{\pi_{\theta}(a | s) \nabla_{\theta} \log \pi_{\theta}(a | s)}{\pi_{\text {old }}(a | s)} A^{\pi_{\text {old }}}(s, a)\right]\right|_{\theta_{\text {old }}} \\ &=\left.\mathbb{E}_{s \sim \pi_{\text {old }}, a \sim \pi_{\theta}}\left[\nabla_{\theta} \log \pi_{\theta}(a | s) A^{\pi_{\text {old }}}(s, a)\right]\right|_{\theta_{\text {old }}} \\ &=\left.\nabla_{\theta} R\left(\pi_{\theta}\right)\right|_{\theta_{\text {old }}} \end{aligned} $$
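The last equality can also be checked numerically: at $\theta = \theta_{old}$, the gradient of the importance-weighted surrogate agrees with the plain $\nabla_{\theta}\log\pi_{\theta}(a|s)\,A$ estimator on the same samples from $\pi_{old}$. The sketch below compares the two gradients with autograd; the logits and advantages are made up for illustration.

```python
import torch

torch.manual_seed(0)
n_actions = 4
theta_old = torch.randn(n_actions)              # old policy logits (made up)
pi_old = torch.softmax(theta_old, dim=0)

N = 10_000
actions = torch.multinomial(pi_old, N, replacement=True)   # a ~ pi_old
adv = torch.randn(N)                            # hypothetical advantage estimates

def grad_at_theta_old(objective):
    # Gradient of `objective` evaluated at theta = theta_old
    theta = theta_old.clone().requires_grad_(True)
    log_pi = torch.log_softmax(theta, dim=0)[actions]
    objective(log_pi).backward()
    return theta.grad

# Surrogate: E_{a ~ pi_old}[ (pi_theta / pi_old) * A ]
g_surrogate = grad_at_theta_old(
    lambda log_pi: (torch.exp(log_pi) / pi_old[actions] * adv).mean())
# Score-function form: E_{a ~ pi_old}[ log pi_theta * A ]
g_score = grad_at_theta_old(lambda log_pi: (log_pi * adv).mean())

print(torch.allclose(g_surrogate, g_score, atol=1e-5))   # True at theta_old
```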