Exploring Self-attention for Image Recognition (presented by Jihoon Hwang)
Page Information
Posted by: Administrator | Date: 22-02-03 13:20
Body
Exploring Self-attention for Image Recognition
Recent work has shown that self-attention can serve as a basic building block for image recognition models. We explore variations of self-attention and assess their effectiveness for image recognition. We consider two forms of self-attention. One is pairwise self-attention, which generalizes standard dot-product attention and is fundamentally a set operator. The other is patchwise self-attention, which is strictly more powerful than convolution. Our pairwise self-attention networks match or outperform their convolutional counterparts, and the patchwise models substantially outperform the convolutional baselines. We also conduct experiments that probe the robustness of learned representations and conclude that self-attention networks may have significant benefits in terms of robustness and generalization.
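To make the two forms concrete, below is a minimal PyTorch sketch of pairwise and patchwise self-attention over a local 3x3 footprint, written from the abstract's description (the paper presented here is Zhao et al., CVPR 2020). The module names, channel sizes, the subtraction relation, and the softmax normalization are illustrative assumptions, not the authors' released code, which differs in details such as channel grouping and weight sharing.

```python
# Sketch of the two self-attention forms from the abstract; all names and
# hyperparameters below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairwiseSelfAttention(nn.Module):
    # Pairwise form: the weight for neighbor j is computed from a relation
    # delta(x_i, x_j) between the center feature and that neighbor alone
    # (here: subtraction), so the operator acts on R(i) as a set.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.beta = nn.Conv2d(channels, channels, 1)   # value transform beta
        self.gamma = nn.Sequential(                    # gamma: relation -> weights
            nn.Conv2d(channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        pad = self.k // 2
        # Gather each pixel's k*k footprint R(i): (b, c, k*k, h*w)
        values = F.unfold(self.beta(x), self.k, padding=pad).view(b, c, self.k ** 2, h * w)
        neighbors = F.unfold(x, self.k, padding=pad).view(b, c, self.k ** 2, h * w)
        relation = x.view(b, c, 1, h * w) - neighbors  # delta = subtraction
        weights = self.gamma(relation).softmax(dim=2)  # normalize over footprint
        return (weights * values).sum(dim=2).view(b, c, h, w)


class PatchwiseSelfAttention(nn.Module):
    # Patchwise form: the weight vector for every neighbor is computed from
    # the entire footprint x_R(i) at once, which is what makes this form
    # strictly more expressive than a fixed convolution kernel.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.beta = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Conv2d(channels * kernel_size ** 2,
                               channels * kernel_size ** 2, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        pad = self.k // 2
        values = F.unfold(self.beta(x), self.k, padding=pad).view(b, c, self.k ** 2, h * w)
        patch = F.unfold(x, self.k, padding=pad).unsqueeze(-1)  # (b, c*k*k, h*w, 1)
        weights = self.gamma(patch).view(b, c, self.k ** 2, h * w).softmax(dim=2)
        return (weights * values).sum(dim=2).view(b, c, h, w)


if __name__ == "__main__":
    x = torch.randn(2, 16, 8, 8)
    print(PairwiseSelfAttention(16)(x).shape)   # torch.Size([2, 16, 8, 8])
    print(PatchwiseSelfAttention(16)(x).shape)  # torch.Size([2, 16, 8, 8])
```

The contrast the abstract draws is visible in the two forward passes: the pairwise weights depend only on (x_i, x_j) pairs, while the patchwise weights are a function of the whole patch, so the latter can realize content-dependent kernels that a convolution with fixed weights cannot.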
Attachments
- 랩세미나_22_01_20.pptx (3.3M), 21 downloads | Date: 2022-02-03 13:20:34