Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning (presented by 최진우)
Page Information
Author: Administrator | Date: 2022-04-25 11:24
Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning
Operating in the real world often requires agents to learn about a complex environment and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled this problem by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using it to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space. We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces and outperforms state-of-the-art baselines on long-horizon GCRL tasks.
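As a rough orientation for the talk, the sketch below (not the authors' implementation; names such as SuccessorFeatureLandmarks, novelty_threshold, and frontier_landmarks are made up for illustration) shows the two roles the abstract assigns to successor features ψ(s) ≈ E[Σ_t γ^t φ(s_t)]: distances in SF space serve both as a novelty signal that decides when a state becomes a new landmark and as edge costs of the non-parametric landmark graph. Learning ψ itself and the low-level goal-conditioned policy are omitted here.

```python
# Minimal, illustrative sketch of SF-based exploration and landmark-graph
# construction, under the assumptions stated above. psi(s) is assumed to be
# produced elsewhere by a learned successor-feature network.
import numpy as np

class SuccessorFeatureLandmarks:
    def __init__(self, novelty_threshold=1.0):
        self.novelty_threshold = novelty_threshold
        self.landmarks = []   # stored SF vectors psi(s) for each landmark
        self.edges = {}       # adjacency: landmark index -> {neighbor: cost}

    def sf_distance(self, psi_a, psi_b):
        # Distance in SF space is used as a proxy for traversal cost,
        # since psi encodes discounted future feature occupancies.
        return float(np.linalg.norm(psi_a - psi_b))

    def novelty(self, psi):
        # A state is novel if it is far (in SF space) from every landmark.
        if not self.landmarks:
            return np.inf
        return min(self.sf_distance(psi, lm) for lm in self.landmarks)

    def maybe_add_landmark(self, psi):
        # Add the state as a new landmark when its novelty exceeds the
        # threshold, and connect it to its nearest existing landmark.
        if self.novelty(psi) > self.novelty_threshold:
            idx = len(self.landmarks)
            self.landmarks.append(psi)
            self.edges[idx] = {}
            if idx > 0:
                nearest = min(range(idx),
                              key=lambda j: self.sf_distance(psi, self.landmarks[j]))
                cost = self.sf_distance(psi, self.landmarks[nearest])
                self.edges[idx][nearest] = cost
                self.edges[nearest][idx] = cost
            return idx
        return None

    def frontier_landmarks(self, visit_counts):
        # "Frontier" landmarks: the least-visited landmarks at the edge of the
        # explored region, used as targets for exploratory plans.
        order = np.argsort([visit_counts.get(i, 0) for i in range(len(self.landmarks))])
        return order[: max(1, len(order) // 4)]

# Hypothetical usage with random vectors standing in for learned SF embeddings.
sfl = SuccessorFeatureLandmarks(novelty_threshold=3.0)
for psi in np.random.randn(100, 16):
    sfl.maybe_add_landmark(psi)
targets = sfl.frontier_landmarks(visit_counts={})
```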
Attachments
- seminar_220303_SFL.pptx (74.3M) | 1 download | DATE : 2022-04-25 11:24:35