
18 Key Problems in Deep Reinforcement Learning | PaperDaily #30

In an era dominated by fragmented reading, fewer and fewer people pay attention to the exploration and thinking behind each paper.

In this column, you will quickly get the highlights and pain points of each selected paper and keep up with the latest AI research.


Click "Read the original" at the bottom of this article to join the community right away and see more of the latest paper recommendations.

This is the 30th article in the PaperDaily series.

About the author: Wang Lingxiao (王凌霄, community ID @Nevertiree), an intern at the Institute of Automation, Chinese Academy of Sciences, whose research focuses on reinforcement learning and multi-agent systems.


Over the past couple of days I read two heavyweight papers, A Brief Survey of Deep Reinforcement Learning and Deep Reinforcement Learning: An Overview. Their authors cite a staggering 200+ references to lay out the future directions of reinforcement learning.

■ Paper | A Brief Survey of Deep Reinforcement Learning

■ Link | http://www.paperweekly.site/papers/922

■ Author | Nevertiree


■ Paper | Deep Reinforcement Learning: An Overview

■ Link | http://www.paperweekly.site/papers/1372

■ Author | Nevertiree


The original papers summarize the common scientific problems in deep reinforcement learning and list current solutions and related surveys. I have organized that material here and pulled out the relevant papers. This article selects 18 key problems, covering topics such as search space, exploration vs. exploitation, policy evaluation, memory, network design, and reward signals.


This article curates 73 papers (27 from 2017 and 21 from 2016). For ease of reading, the original titles are collected at the end of the article and can be looked up by index.


Problem 1: Prediction and Policy Evaluation


prediction, policy evaluation 


No matter how the field changes, Temporal Difference methods remain the core philosophy of policy evaluation [Sutton 1988]. TD's extensions are every bit as famous as TD itself: Q-learning in 1992 and DQN in 2015.


One fly in the ointment: TD learning is prone to the over-estimation problem, for the following reason:


The max operator in standard Q-learning and DQN uses the same values both to select and to evaluate an action. — van Hasselt 


van Hasselt is a formidable researcher with a taste for the over-estimation problem: he first shook up NIPS with Double Q-learning [van Hasselt 2010], and six years later delivered its deep learning counterpart, Double DQN [van Hasselt 2016a].
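
To make the difference concrete, here is a minimal NumPy sketch (function and variable names are illustrative, not taken from the papers) contrasting the standard DQN target, which selects and evaluates the next action with the same network, against the Double DQN target, which selects with the online network but evaluates with the target network:

```python
import numpy as np

def dqn_target(reward, q_target_next, gamma=0.99):
    # Standard DQN / Q-learning target: the same network both selects
    # and evaluates the next action, which tends to over-estimate.
    return reward + gamma * np.max(q_target_next)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN target: select the action with the online network,
    # but evaluate it with the (frozen) target network.
    best_action = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[best_action]

# Toy numbers: the two targets disagree whenever the two networks
# disagree about which next action is best.
q_online_next = np.array([1.0, 2.5, 0.3])   # Q_online(s', .)
q_target_next = np.array([2.0, 1.5, 0.4])   # Q_target(s', .)
print(dqn_target(1.0, q_target_next))                        # 1 + 0.99 * 2.0
print(double_dqn_target(1.0, q_online_next, q_target_next))  # 1 + 0.99 * 1.5
```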


Problem 2: Control and Finding the Optimal Policy


control, finding optimal policy 


Current solutions fall into three schools of thought; a picture is worth a thousand words:

△ Figure 1: A slide from Prof. Hung-yi Lee (NTU)

1. The most traditional approach is value-based: pick the action with the best value. The classic methods are Q-learning [Watkins and Dayan 1992] and SARSA [Sutton and Barto 2017].


2. Later, policy-based methods attracted attention, beginning with the REINFORCE algorithm [Williams 1992] and followed by policy gradient methods [Sutton 2000].


3. The most popular approach, Actor-Critic [Barto et al 1983], combines the two. David Silver, one of Sutton's star students and the chief architect of AlphaGo, proposed the Deterministic Policy Gradient: it looks like policy gradient on the surface but is really actor-critic underneath, and this improvement is known as DPG [Silver 2014]. (A minimal actor-critic sketch follows Figure 2 below.)

△ Figure 2: The mutually reinforcing loop between actor and critic
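
To illustrate that loop (a minimal tabular sketch; the step sizes and variable names are made up for illustration and are not from the cited papers), a one-step actor-critic update uses the critic's TD error both to improve the value estimate and to adjust the policy:

```python
import numpy as np

def actor_critic_step(theta, v, s, a, r, s_next, done,
                      alpha_actor=0.01, alpha_critic=0.1, gamma=0.99):
    """One-step tabular actor-critic with a softmax policy.

    theta: (n_states, n_actions) policy logits (the actor)
    v:     (n_states,) state-value estimates (the critic)
    """
    # Critic: the TD error measures how much better or worse the
    # outcome was than the critic expected.
    td_target = r + (0.0 if done else gamma * v[s_next])
    td_error = td_target - v[s]
    v[s] += alpha_critic * td_error

    # Actor: nudge the log-probability of the taken action in the
    # direction of the critic's feedback (gradient of log softmax).
    probs = np.exp(theta[s] - np.max(theta[s]))
    probs /= probs.sum()
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta[s] += alpha_actor * td_error * grad_log_pi
    return theta, v
```

The critic evaluates how the actor is doing, and the actor shifts probability toward actions the critic scores above expectation, which is exactly the loop the figure depicts.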


Problem 3: Instability and Divergence


Instability and divergence when combining off-policy learning, function approximation, and bootstrapping 


As early as 1997, Tsitsiklis proved that if the function approximator is a nonlinear black box such as a neural network, convergence and stability cannot be guaranteed.


The watershed Deep Q-Network paper [Mnih et al 2013] concedes that although the results look great, there is no theoretical guarantee (the original paper slyly phrases it the other way around):


This suggests that, despite lacking any theoretical convergence guarantees, our method is able to train large neural networks using a reinforcement learning signal and stochastic gradient descent in a stable manner.

△ Figure 3: DQN conquering Atari games


DQN's improvements rely mainly on two tricks:


1. Experience replay [Lin 1993]


Even though the samples can never be perfectly i.i.d., we should still do our best to reduce the correlation between them.


2. Target network [Mnih 2015]


The estimated (online) network and the target network should not update their parameters at the same time; a separate target network should be kept to ensure stability.


Since the network Q being updated is also used in calculating the target value, the Q update is prone to divergence. (This is why we need a target network.) 
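
Both tricks are short in code. Below is a minimal sketch (class and function names are illustrative, not from the DQN paper) of a uniform replay buffer plus a hard target-network update; the networks, environment, and training loop are assumed to live elsewhere:

```python
import random
from collections import deque

import numpy as np

class ReplayBuffer:
    """Fixed-size buffer; uniform sampling reduces correlation between samples."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s_next, done = map(np.array, zip(*batch))
        return s, a, r, s_next, done

    def __len__(self):
        return len(self.buffer)

def sync_target(target_weights, online_weights, step, sync_every=1000):
    """Hard update: copy the online weights into the frozen target network
    only every `sync_every` steps, so the TD target stays stable in between."""
    if step % sync_every == 0:
        for t, o in zip(target_weights, online_weights):
            t[...] = o  # in-place copy, assuming matching numpy arrays
```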

The following papers are all on DQN-related topics: 


1. An upgraded experience replay: Prioritized Experience Replay [Schaul 2016] 


2. Better exploration strategies [Osband 2016] 


3. Speeding up DQN [He 2017a] 


4. Reducing variance and instability through averaging: Averaged-DQN [Anschel 2017] 


Moving beyond DQN:


Dueling DQN [Wang 2016c] (an ICML 2016 Best Paper) 


Tip: to read this paper, first get familiar with three pieces of background: DQN, Double DQN, and Prioritized Experience Replay. 


  • The asynchronous algorithm A3C [Mnih 2016]

  • TRPO (Trust Region Policy Optimization) [Schulman 2015]

  • Distributed Proximal Policy Optimization [Heess 2017] 


  • Combining policy gradient and Q-learning [O'Donoghue 2017, Nachum 2017, Gu 2017, Schulman 2017] 


  • GTD [Sutton 2009a, Sutton 2009b, Mahmood 2014] 


  • Emphatic-TD [Sutton 2016]

Problem 4: Training Perception and Control Jointly, End-to-End


train perception and control jointly end-to-end 


The existing solution is Guided Policy Search [Levine et al 2016a].


Problem 5: Data/Sample Efficiency


data/sample efficiency 


Existing solutions include: 


  • Q-learning and Actor-Critic 


  • Actor-critic with experience replay [Wang et al 2017b] 


  • PGQ, policy gradient and Q-learning [O'Donoghue et al 2017] 


  • Q-Prop, policy gradient with off-policy critic [Gu et al 2017] 


  • Return-based off-policy control: Retrace [Munos et al 2016], Reactor [Gruslys et al 2017] 


  • Learning to learn [Duan et al 2017, Wang et al 2016a, Lake et al 2015]

Problem 6: Reward Function Not Available


reward function not available 


Existing solutions essentially revolve around imitation learning:


  • Andrew Ng's inverse reinforcement learning [Ng and Russell 2000] 


  • Learn from demonstration [Hester et al 2017] 


  • Imitation learning with GANs [Ho and Ermon 2016, Stadie et al 2017] (with a TensorFlow implementation [1]) 


  • Train a dialogue policy jointly with a reward model [Su et al 2016b]

Problem 7: The Exploration-Exploitation Tradeoff


exploration-exploitation tradeoff 


Existing solutions include: 


  • Unify count-based exploration and intrinsic motivation [Bellemare et al 2017] 


  • Under-appreciated reward exploration [Nachum et al 2017] 


  • Deep exploration via bootstrapped DQN [Osband et al 2016] 


  • Variational information maximizing exploration [Houthooft et al 2016]

Problem 8: Model-Based Learning


model-based learning 


Existing solutions: 


  • The classic recommendation from Sutton's textbook: Dyna-Q [Sutton 1990] (a minimal sketch follows this list) 


  • Combining model-free and model-based updates [Chebotar et al 2017]
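
As a refresher, here is a minimal tabular Dyna-Q sketch (assuming a Gymnasium-style discrete environment with hashable states; hyperparameters are illustrative): every real transition feeds both a direct Q-learning update and a simple deterministic model, which is then replayed for a few extra planning updates.

```python
import random
from collections import defaultdict

import numpy as np

def dyna_q(env, n_episodes=200, n_planning=10,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Dyna-Q: interleave real updates with model-based planning."""
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    model = {}                                   # (s, a) -> (r, s_next, done)

    for _ in range(n_episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy behaviour policy
            if random.random() < epsilon:
                a = env.action_space.sample()    # explore
            else:
                a = int(np.argmax(Q[s]))         # exploit
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated

            # (1) direct RL: Q-learning update from the real transition
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s][a] += alpha * (target - Q[s][a])

            # (2) model learning: remember what the environment did
            model[(s, a)] = (r, s_next, done)

            # (3) planning: replay simulated transitions from the model
            for _ in range(n_planning):
                (ps, pa), (pr, ps_next, pdone) = random.choice(list(model.items()))
                ptarget = pr + (0.0 if pdone else gamma * np.max(Q[ps_next]))
                Q[ps][pa] += alpha * (ptarget - Q[ps][pa])

            s = s_next
    return Q
```

On a small discrete task such as FrozenLake, those planning updates typically let Dyna-Q reach a good policy with far fewer real environment steps than plain Q-learning.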


Problem 9: Model-Free Planning


model-free planning 


There are two relatively new solutions: 


1. Value Iteration Networks [Tamar et al 2016], a powerhouse paper that won the NIPS 2016 Best Paper award.


There is a dedicated explainer on Zhihu: Value Iteration Network [2], an interview with the authors: NIPS 2016 Best Paper authors on building a new view of reinforcement learning [3], and a TensorFlow implementation of VIN [4].

△ Figure 4: The Value Iteration Network framework

2. The Predictron method published by DeepMind's David Silver [Silver et al 2016b], with a TensorFlow implementation [5].

Problem 10: Stones from Other Hills Can Polish Jade (Learning from Other Fields)


focus on salient parts 


@賈揚清 (Yangqing Jia) once said: 


PhD students in AI at Berkeley take a qualifying exam after their first year that covers: reinforcement learning and robotics; statistics and probabilistic graphical models; computer vision and image processing; speech and natural language processing; kernel methods and their theory; and search, CSP, logic, planning, and so on.


If you really want to work on AI, I suggest getting familiar with all of these. You don't have to master every one, but you should at least be able to chat in front of a poster at a conference without getting things wrong.


Therefore, a good strategy is to draw inspiration from computer vision and natural language processing; for example, the unsupervised auxiliary learning methods mentioned below borrow many operations from RNNs and LSTMs.


Here are a few pointers from CV and NLP: object detection [Mnih 2014], machine translation [Bahdanau 2015], image captioning [Xu 2015], replacing CNNs and RNNs with attention [Vaswani 2017], and so on.

Problem 11: Long-Term Data Storage


data storage over long time, separating from computation 


The best-known solution is the Differentiable Neural Computer, which made a splash in Nature [Graves et al 2016].

Problem 12: Training Without Reward Signals


benefit from non-reward training signals in environments 


Existing solutions revolve around unsupervised learning:


Horde [Sutton et al 2011] 


Some outstanding work in this area:


Unsupervised reinforcement and auxiliary learning [Jaderberg et al 2017] 


Learn to navigate with unsupervised auxiliary learning [Mirowski et al 2017] 


The famous GANs [Goodfellow et al 2014]


Problem 13: Cross-Domain Learning


learn knowledge from different domains 


Existing solutions all revolve around transfer learning [Taylor and Stone 2009, Pan and Yang 2010, Weiss et al 2016], for example learning invariant features to transfer skills [Gupta et al 2017].

Problem 14: Learning from a Mix of Labelled and Unlabelled Data


benefit from both labelled and unlabelled data 


Existing solutions all revolve around semi-supervised learning:


  • [Zhu and Goldberg 2009] 


  • Learn with MDPs both with and without reward functions [Finn et al 2017] 


  • Learn from expert trajectories as well as trajectories that may not come from experts [Audiffren et al 2015]


Problem 15: Learning, Planning, and Representation with Multi-Level Spatio-Temporal Abstraction


learn, plan, and represent knowledge with spatio-temporal abstraction at multiple levels 


Existing solutions:


  • Hierarchical reinforcement learning [Barto and Mahadevan 2003] 


  • Strategic attentive writer to learn macro-actions [Vezhnevets et al 2016] 


  • Integrate temporal abstraction with intrinsic motivation [Kulkarni et al 2016] 


  • Stochastic neural networks for hierarchical RL [Florensa et al 2017] 


  • Lifelong learning with hierarchical RL [Tessler et al 2017]

Problem 16: Rapid Adaptation to New Tasks


adapt rapidly to new tasks 


Existing solutions are essentially learning to learn:


  • A flexible RNN model to handle a family of RL tasks [Duan et al 2017, Wang et al 2016a] 


  • One/few/zero-shot learning [Duan et al 2017, Johnson et al 2016, Kaiser et al 2017b, Koch et al 2015, Lake et al 2015, Li and Malik 2017, Ravi and Larochelle 2017, Vinyals et al 2016]


Problem 17: Gigantic Search Space


gigantic search space 


The existing solution is still Monte Carlo tree search; for details, see the implementation of the original AlphaGo [Silver et al 2016a].

Problem 18: Neural Network Architecture Design


neural networks architecture design


Existing neural architecture search methods include [Baker et al 2017, Zoph and Le 2017], among which Zoph's work carries substantial weight.


New architectures include [Kaiser et al 2017a, Silver et al 2016b, Tamar et al 2016, Vaswani et al 2017, Wang et al 2016c].


Related Links


[1] Implementation of imitation learning with GANs

https://github.com/openai/imitation

[2] Value Iteration Network

https://zhuanlan.zhihu.com/p/24478944

[3] NIPS 2016 Best Paper authors: how to build a new view of reinforcement learning (interview)

http://www.sohu.com/a/121100017_465975

[4] Value Iteration Networks implementation

https://github.com/TheAbhiKumar/tensorflow-value-iteration-networks

[5] Predictron implementation

https://github.com/zhongwen/predictron


References

[1] Anschel, O., Baram, N., and Shimkin, N. (2017). Averaged-DQN: Variance reduction and stabilization for deep reinforcement learning. In the International Conference on Machine Learning (ICML).

[2] Audiffren, J., Valko, M., Lazaric, A., and Ghavamzadeh, M. (2015). Maximum entropy semisupervised inverse reinforcement learning. In the International Joint Conference on Artificial Intelligence (IJCAI).

[3] Bahdanau, D., Brakel, P., Xu, K., Goyal, A., Lowe, R., Pineau, J., Courville, A., and Bengio, Y. (2017). An actor-critic algorithm for sequence prediction. In the International Conference on Learning Representations (ICLR).

[4] Baker, B., Gupta, O., Naik, N., and Raskar, R. (2017). Designing neural network architectures using reinforcement learning. In the International Conference on Learning Representations (ICLR).

[5] Barto, A. G. and Mahadevan, S. (2003). Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379.

[6] Barto, A. G., Sutton, R. S., and Anderson, C. W. (1983). Neuronlike elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13:835–846

[7] Bellemare, M. G., Danihelka, I., Dabney, W., Mohamed, S.,Lakshminarayanan, B., Hoyer, S., and Munos, R. (2017). The Cramer Distance as a Solution to Biased Wasserstein Gradients. ArXiv e-prints.

[8] Chebotar, Y., Hausman, K., Zhang, M., Sukhatme, G., Schaal, S., and Levine, S. (2017). Combining model-based and model-free updates for trajectory-centric reinforcement learning. In the International Conference on Machine Learning (ICML)

[9] Duan, Y., Andrychowicz, M., Stadie, B. C., Ho, J., Schneider, J.,Sutskever, I., Abbeel, P., and Zaremba, W. (2017). One-Shot Imitation Learning. ArXiv e-prints.

[10] Finn, C., Christiano, P., Abbeel, P., and Levine, S. (2016a). A connection between GANs, inverse reinforcement learning, and energy-based models. In NIPS 2016 Workshop on Adversarial Training.

[11] Florensa, C., Duan, Y., and Abbeel, P. (2017). Stochastic neural networks for hierarchical reinforcement learning. In the International Conference on Learning Representations (ICLR)

[12] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In the Annual Conference on Neural Information Processing Systems (NIPS), pages 2672–2680.

[13] Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S. G., Grefenstette, E., Ramalho, T., Agapiou, J., Puigdomènech Badia, A., Hermann, K. M., Zwols, Y., Ostrovski, G., Cain, A., King, H., Summerfield, C., Blunsom, P., Kavukcuoglu, K., and Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538:471–476.

[14] Gruslys, A., Gheshlaghi Azar, M., Bellemare, M. G., and Munos, R. (2017). The Reactor: A Sample-Efficient Actor-Critic Architecture. ArXiv e-prints

[15] Gu, S., Lillicrap, T., Ghahramani, Z., Turner, R. E., and Levine, S. (2017). Q-Prop: Sampleefficient policy gradient with an off-policy critic. In the International Conference on Learning Representations (ICLR).

[16] Gupta, A., Devin, C., Liu, Y., Abbeel, P., and Levine, S. (2017). Learning invariant feature spaces to transfer skills with reinforcement learning. In the International Conference on Learning Representations (ICLR).

[17] He, F. S., Liu, Y., Schwing, A. G., and Peng, J. (2017a). Learning to play in a day: Faster deep reinforcement learning by optimality tightening. In the International Conference on Learning Representations (ICLR)

[18] Heess, N., TB, D., Sriram, S., Lemmon, J., Merel, J., Wayne, G., Tassa, Y., Erez, T., Wang, Z., Eslami, A., Riedmiller, M., and Silver, D. (2017). Emergence of Locomotion Behaviours in Rich Environments. ArXiv e-prints

[19] Hester, T. and Stone, P. (2017). Intrinsically motivated model learning for developing curious robots. Artificial Intelligence, 247:170–86.

[20] Ho, J. and Ermon, S. (2016). Generative adversarial imitation learning. In the Annual Conference on Neural Information Processing Systems (NIPS).

[21] Houthooft, R., Chen, X., Duan, Y., Schulman, J., Turck, F. D., and Abbeel, P. (2016). Vime: Variational information maximizing exploration. In the Annual Conference on Neural Information Processing Systems (NIPS).

[22] Jaderberg, M., Mnih, V., Czarnecki, W., Schaul, T., Leibo, J. Z., Silver, D., and Kavukcuoglu, K. (2017). Reinforcement learning with unsupervised auxiliary tasks. In the International Conference on Learning Representations (ICLR).

[23] Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., Hughes, M., and Dean, J. (2016). Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. ArXiv e-prints.

[24] Kaiser, L., Gomez, A. N., Shazeer, N., Vaswani, A., Parmar, N., Jones, L., and Uszkoreit, J. (2017a). One Model To Learn Them All. ArXiv e-prints.

[25] Kaiser, Ł., Nachum, O., Roy, A., and Bengio, S. (2017b). Learning to Remember Rare Events. In the International Conference on Learning Representations (ICLR).

[26] Koch, G., Zemel, R., and Salakhutdinov, R. (2015). Siamese neural networks for one-shot image recognition. In the International Conference on Machine Learning (ICML).

[27] Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., and Tenenbaum, J. B. (2016). Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In the Annual Conference on Neural Information Processing Systems (NIPS)

[28] Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338.

[29] Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2016a). End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17:1–40.

[30] Li, K. and Malik, J. (2017). Learning to optimize. In the International Conference on Learning Representations (ICLR).

[31] Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., et al. (2015). Continuous control with deep reinforcement learning. ArXiv e-prints.

[32] Lin, L. J. (1993). Reinforcement learning for robots using neural networks.

[33] Mahmood, A. R., van Hasselt, H., and Sutton, R. S. (2014). Weighted importance sampling for off-policy learning with linear function approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).

[34] Mirowski, P., Pascanu, R., Viola, F., Soyer, H., Ballard, A., Banino, A., Denil, M., Goroshin, R., Sifre, L., Kavukcuoglu, K., Kumaran, D., and Hadsell, R. (2017). Learning to navigate in complex environments. In the International Conference on Learning Representations (ICLR).

[35] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.

[36] Mnih, V., Heess, N., Graves, A., and Kavukcuoglu, K. (2014). Recurrent models of visual attention. In the Annual Conference on Neural Information Processing Systems (NIPS).

[37] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529–533.

[38] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Harley, T., Lillicrap, T. P., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In the International Conference on Machine Learning (ICML)

[39] Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. G.(2016). Safe and efficient offpolicy reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).

[40] Nachum, O., Norouzi, M., and Schuurmans, D. (2017). Improving policy gradient by exploring under-appreciated rewards. In the International Conference on Learning Representations (ICLR).

[41] Nachum, O., Norouzi, M., Xu, K., and Schuurmans, D. (2017). Bridging the Gap Between Value and Policy Based Reinforcement Learning. ArXiv e-prints.

[42] Ng, A. and Russell, S. (2000).Algorithms for inverse reinforcement learning. In the International Conference on Machine Learning (ICML).

[43] O’Donoghue, B., Munos, R., Kavukcuoglu, K., and Mnih, V. (2017). PGQ: Combining policy gradient and q-learning. In the International Conference on Learning Representations (ICLR).

[44] Osband, I., Blundell, C., Pritzel, A., and Roy, B. V. (2016). Deep exploration via bootstrapped DQN. In the Annual Conference on Neural Information Processing Systems (NIPS).

[45] Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345 – 1359.

[46] Ravi, S. and Larochelle, H. (2017). Optimization as a model for few-shot learning. In the International Conference on Learning Representations (ICLR).

[47] Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2016). Prioritized experience replay. In the International Conference on Learning Representations (ICLR).

[48] Schulman, J., Levine, S., Moritz, P., Jordan, M. I., and Abbeel, P. (2015). Trust region policy optimization. In the International Conference on Machine Learning (ICML).

[49] Schulman, J., Abbeel, P., and Chen, X. (2017). Equivalence Between Policy Gradients and Soft Q-Learning. ArXiv e-prints.

[50] Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014). Deterministic policy gradient algorithms. In the International Conference on Machine Learning (ICML), pages 387–395.

[51] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016a). Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489.

[52] Silver, D., van Hasselt, H., Hessel, M., Schaul, T., Guez, A., Harley, T., Dulac-Arnold, G., Reichert, D., Rabinowitz, N., Barreto, A., and Degris, T. (2016b). The predictron: End-to-end learning and planning. In NIPS 2016 Deep Reinforcement Learning Workshop.

[53] Stadie, B. C., Abbeel, P., and Sutskever, I. (2017).Third person imitation learning. In the International Conference on Learning Representations (ICLR).

[54] Sutton, R. S. and Barto, A. G. (2017). Reinforcement Learning: An Introduction (2nd Edition, in preparation). MIT Press.

[55] Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).

[56] Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., and Wiewiora, E. (2009a). Fast gradient-descent methods for temporal-difference learning with linear function approximation. In the International Conference on Machine Learning (ICML).

[57] Sutton, R. S., Szepesvári, C., and Maei, H. R. (2009b). A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).

[58] Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In the International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

[59] Sutton, R. S., Mahmood, A. R., and White, M. (2016). An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 17:1–29

[60] Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning,3(1):9–44.

Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In the International Conference on Machine Learning (ICML).

[61] Tamar, A., Wu, Y., Thomas, G., Levine, S., and Abbeel, P. (2016). Value iteration networks. In the Annual Conference on Neural Information Processing Systems (NIPS).

[62] Taylor, M. E. and Stone, P. (2009). Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633–1685.

[63] Tessler, C., Givony, S., Zahavy, T., Mankowitz, D. J., and Mannor, S. (2017). A deep hierarchical approach to lifelong learning in minecraft. In the AAAI Conference on Artificial Intelligence (AAAI).

[64] van Hasselt, H. (2010). Double Q-learning. In the Annual Conference on Neural Information Processing Systems (NIPS).

[65] van Hasselt, H., Guez, A., and Silver, D. (2016a). Deep reinforcement learning with double Q-learning. In the AAAI Conference on Artificial Intelligence (AAAI).

[66] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention Is All You Need. ArXiv e-prints.

[67] Vezhnevets, A. S., Mnih, V., Agapiou, J., Osindero, S., Graves, A., Vinyals, O., and Kavukcuoglu, K. (2016). Strategic attentive writer for learning macro-actions. In the Annual Conference on Neural Information Processing Systems (NIPS).

[68] Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., and Wierstra, D. (2016). Matching networks for one shot learning. In the Annual Conference on Neural Information Processing Systems (NIPS).

[69] Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M. (2016a). Learning to reinforcement learn. arXiv:1611.05763v1.

[70] Wang, S. I., Liang, P., and Manning, C. D. (2016b). Learning language games through interaction. In the Association for Computational Linguistics annual meeting (ACL)

[71] Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., and de Freitas, N. (2016c). Dueling network architectures for deep reinforcement learning. In the International Conference on Machine Learning (ICML).

[72] Watkins, C. J. C. H. and Dayan, P. (1992). Q-learning. Machine Learning, 8:279–292

[73] Weiss, K., Khoshgoftaar, T. M., and Wang, D. (2016). A survey of transfer learning. Journal of Big Data, 3(9)

Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256.

[74] Xu, K., Ba, J. L., Kiros, R., Cho, K., Courville, A.,Salakhutdinov, R., Zemel, R. S., and Bengio,Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In the International Conference on Machine Learning (ICML).

[75] Zhu, X. and Goldberg, A. B. (2009). Introduction to semi-supervised learning. Morgan & Claypool

Zoph, B. and Le, Q. V. (2017). Neural architecture search with reinforcement learning. In the International Conference on Learning Representations (ICLR)


This article is a featured recommendation from the AI academic community PaperWeekly. The community now covers research areas such as natural language processing, computer vision, artificial intelligence, machine learning, data mining, and information retrieval. Click "Read the original" to join the community right away!


About PaperWeekly


PaperWeekly is an academic platform that recommends, interprets, discusses, and reports on cutting-edge AI research. If you study or work in AI, tap "交流群" (discussion group) in the official account menu and our assistant will add you to the PaperWeekly discussion group.

