Reinforcement Learning (4): Actor-Critic (with code)
Published: 2019-04-30


Actor-Critic is a hybrid reinforcement learning method: it combines value-based algorithms (such as Q-learning) with policy-based algorithms (such as policy gradient).

 

The actor side descends from policy gradient, so it can pick suitable actions even in continuous action spaces.

The critic side descends from Q-learning (or other function approximation methods), so it can update after every single step.

The actor picks an action according to its action probabilities, the critic scores that action, and the actor then adjusts its action probabilities according to the critic's score.
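In code, that loop looks like the following minimal tabular sketch. The toy chain environment, the learning rates, and every name here (toy_step, theta, V, and so on) are illustrative assumptions for exposition only; they are not part of the CartPole script further down.

import numpy as np

n_states, n_actions = 5, 2
gamma, lr_actor, lr_critic = 0.9, 0.01, 0.1

theta = np.zeros((n_states, n_actions))  # actor: per-state action preferences
V = np.zeros(n_states)                   # critic: per-state value estimates

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def toy_step(s, a):
    # toy chain dynamics, purely illustrative
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

s = 0
for _ in range(2000):
    probs = softmax(theta[s])
    a = np.random.choice(n_actions, p=probs)        # actor: sample an action from its policy
    s_next, r = toy_step(s, a)
    td_error = r + gamma * V[s_next] - V[s]         # critic: score the transition (TD error)
    V[s] += lr_critic * td_error                    # critic: move V(s) toward the TD target
    grad_log_pi = -probs                            # gradient of log pi(a|s) w.r.t. theta[s]
    grad_log_pi[a] += 1.0
    theta[s] += lr_actor * td_error * grad_log_pi   # actor: shift probability toward well-scored actions
    s = 0 if s_next == n_states - 1 else s_next     # restart the toy episode at the goal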

 

Advantage: it can update at every step, which makes it faster than traditional policy gradient (which has to wait until the end of an episode).

Disadvantage: learning hinges on the critic's value estimates, but the critic itself is hard to get to converge, and with the actor updating at the same time convergence becomes even harder.
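Concretely, the single-step update that the script below implements can be written as a TD error plus a TD-error-weighted log-probability gradient, where $\alpha_a$ and $\alpha_c$ are the actor and critic learning rates (LR_A and LR_C in the code):

\[
\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t), \qquad
\theta \leftarrow \theta + \alpha_a \, \delta_t \, \nabla_\theta \log \pi_\theta(a_t \mid s_t), \qquad
w \leftarrow w - \alpha_c \, \nabla_w \, \delta_t^2
\]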

Google DeepMind later proposed an upgraded actor-critic, Deep Deterministic Policy Gradient (DDPG), which incorporates the strengths of DQN and addresses the convergence difficulty.


Hands-on

When the actor updates its behavior it is like someone driving forward blindfolded; the critic is the passenger holding the steering wheel and correcting the actor's direction.

"""Actor-Critic using TD-error as the Advantage, Reinforcement Learning.The cart pole example. Policy is oscillated.View more on my tutorial page: https://morvanzhou.github.io/tutorials/Using:tensorflow 1.0gym 0.8.0"""import numpy as npimport tensorflow as tfimport gymnp.random.seed(2)tf.set_random_seed(2)  # reproducible# SuperparametersOUTPUT_GRAPH = FalseMAX_EPISODE = 3000DISPLAY_REWARD_THRESHOLD = 200  # renders environment if total episode reward is greater then this thresholdMAX_EP_STEPS = 1000   # maximum time step in one episodeRENDER = False  # rendering wastes timeGAMMA = 0.9     # reward discount in TD errorLR_A = 0.001    # learning rate for actorLR_C = 0.01     # learning rate for criticenv = gym.make('CartPole-v0')env.seed(1)  # reproducibleenv = env.unwrappedN_F = env.observation_space.shape[0]N_A = env.action_space.nclass Actor(object):    def __init__(self, sess, n_features, n_actions, lr=0.001):        self.sess = sess        self.s = tf.placeholder(tf.float32, [1, n_features], "state")        self.a = tf.placeholder(tf.int32, None, "act")        self.td_error = tf.placeholder(tf.float32, None, "td_error")  # TD_error        with tf.variable_scope('Actor'):            l1 = tf.layers.dense(                inputs=self.s,                units=20,    # number of hidden units                activation=tf.nn.relu,                kernel_initializer=tf.random_normal_initializer(0., .1),    # weights                bias_initializer=tf.constant_initializer(0.1),  # biases                name='l1'            )            self.acts_prob = tf.layers.dense(                inputs=l1,                units=n_actions,    # output units                activation=tf.nn.softmax,   # get action probabilities                kernel_initializer=tf.random_normal_initializer(0., .1),  # weights                bias_initializer=tf.constant_initializer(0.1),  # biases                name='acts_prob'            )        with tf.variable_scope('exp_v'):            log_prob = tf.log(self.acts_prob[0, self.a])            self.exp_v = tf.reduce_mean(log_prob * self.td_error)  # advantage (TD_error) guided loss        with tf.variable_scope('train'):            self.train_op = tf.train.AdamOptimizer(lr).minimize(-self.exp_v)  # minimize(-exp_v) = maximize(exp_v)    def learn(self, s, a, td):        s = s[np.newaxis, :]        feed_dict = {self.s: s, self.a: a, self.td_error: td}        _, exp_v = self.sess.run([self.train_op, self.exp_v], feed_dict)        return exp_v    def choose_action(self, s):        s = s[np.newaxis, :]        probs = self.sess.run(self.acts_prob, {self.s: s})   # get probabilities for all actions        return np.random.choice(np.arange(probs.shape[1]), p=probs.ravel())   # return a intclass Critic(object):    def __init__(self, sess, n_features, lr=0.01):        self.sess = sess        self.s = tf.placeholder(tf.float32, [1, n_features], "state")        self.v_ = tf.placeholder(tf.float32, [1, 1], "v_next")        self.r = tf.placeholder(tf.float32, None, 'r')        with tf.variable_scope('Critic'):            l1 = tf.layers.dense(                inputs=self.s,                units=20,  # number of hidden units                activation=tf.nn.relu,  # None                # have to be linear to make sure the convergence of actor.                # But linear approximator seems hardly learns the correct Q.                
kernel_initializer=tf.random_normal_initializer(0., .1),  # weights                bias_initializer=tf.constant_initializer(0.1),  # biases                name='l1'            )            self.v = tf.layers.dense(                inputs=l1,                units=1,  # output units                activation=None,                kernel_initializer=tf.random_normal_initializer(0., .1),  # weights                bias_initializer=tf.constant_initializer(0.1),  # biases                name='V'            )        with tf.variable_scope('squared_TD_error'):            self.td_error = self.r + GAMMA * self.v_ - self.v            self.loss = tf.square(self.td_error)    # TD_error = (r+gamma*V_next) - V_eval        with tf.variable_scope('train'):            self.train_op = tf.train.AdamOptimizer(lr).minimize(self.loss)    def learn(self, s, r, s_):        s, s_ = s[np.newaxis, :], s_[np.newaxis, :]        v_ = self.sess.run(self.v, {self.s: s_})        td_error, _ = self.sess.run([self.td_error, self.train_op],                                          {self.s: s, self.v_: v_, self.r: r})        return td_errorsess = tf.Session()actor = Actor(sess, n_features=N_F, n_actions=N_A, lr=LR_A)critic = Critic(sess, n_features=N_F, lr=LR_C)     # we need a good teacher, so the teacher should learn faster than the actorsess.run(tf.global_variables_initializer())if OUTPUT_GRAPH:    tf.summary.FileWriter("logs/", sess.graph)for i_episode in range(MAX_EPISODE):    s = env.reset()    t = 0    track_r = []    while True:        if RENDER: env.render()        a = actor.choose_action(s)        s_, r, done, info = env.step(a)        if done: r = -20        track_r.append(r)        td_error = critic.learn(s, r, s_)  # gradient = grad[r + gamma * V(s_) - V(s)]        actor.learn(s, a, td_error)     # true_gradient = grad[logPi(s,a) * td_error]        s = s_        t += 1        if done or t >= MAX_EP_STEPS:            ep_rs_sum = sum(track_r)            if 'running_reward' not in globals():                running_reward = ep_rs_sum            else:                running_reward = running_reward * 0.95 + ep_rs_sum * 0.05            if running_reward > DISPLAY_REWARD_THRESHOLD: RENDER = True  # rendering            print("episode:", i_episode, "  reward:", int(running_reward))            break
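A few design notes that the script itself states: the critic's learning rate (LR_C = 0.01) is ten times the actor's (LR_A = 0.001), following the comment that the "teacher" should learn faster than the actor; the reward is overridden to -20 when the pole falls, so the critic receives a clear negative signal at episode end; and the printed value is an exponentially smoothed running reward (0.95 old + 0.05 new), with rendering switched on once it exceeds DISPLAY_REWARD_THRESHOLD.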

 

Reposted from: http://elygf.baihongyu.com/
