Transformers are Sample-Efficient World Models (presented by 최진우)
Author: 최고관리자 | Date: 2024-01-02 03:54
Deep reinforcement learning agents are notoriously sample inefficient, which
considerably limits their application to real-world problems. Recently, many
model-based methods have been designed to address this issue, with learning in
the imagination of a world model being one of the most prominent approaches.
However, while virtually unlimited interaction with a simulated environment sounds
appealing, the world model has to be accurate over extended periods of time.
Motivated by the success of Transformers in sequence modeling tasks, we introduce
IRIS, a data-efficient agent that learns in a world model composed of a discrete
autoencoder and an autoregressive Transformer. With the equivalent of only two
hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human
normalized score of 1.046, and outperforms humans on 10 out of 26 games, setting a
new state of the art for methods without lookahead search.
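The two components named above can be illustrated with a minimal sketch: a codebook lookup that maps continuous encoder outputs to discrete tokens (the vector-quantization step of a discrete autoencoder), and an autoregressive rollout loop that "imagines" future tokens with a dynamics model. This is not the IRIS implementation; `quantize`, `imagine_rollout`, the toy codebook, and the stub dynamics function are illustrative assumptions.

```python
import numpy as np

def quantize(z, codebook):
    """Map continuous latents to nearest-codebook indices (VQ step of a
    discrete autoencoder). z: (N, D) latents, codebook: (K, D) entries."""
    # Squared Euclidean distance from every latent to every codebook entry.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    return dists.argmin(axis=1)  # token index per latent

def imagine_rollout(tokens, dynamics, horizon):
    """Autoregressively extend a token sequence: at each step the dynamics
    model (standing in for the Transformer) predicts the next token from
    the sequence so far. Returns the full imagined trajectory."""
    traj = list(tokens)
    for _ in range(horizon):
        traj.append(dynamics(traj))
    return traj

# Toy usage: a 4-entry codebook in 2-D latent space and a stub dynamics
# model that just cycles through token ids (a real agent would use a
# trained Transformer here).
codebook = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
tokens = quantize(np.array([[0.1, 0.0], [0.9, 1.0]]), codebook)
trajectory = imagine_rollout(list(tokens), lambda t: (t[-1] + 1) % 4, horizon=3)
```

In the actual method, the policy is then trained entirely on such imagined token trajectories, which is what makes the limited real-environment budget (two hours of gameplay) sufficient.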
Attachment
- seminar230531_IRIS.pptx (35.6M) | DATE : 2024-01-02 03:54:07