Self-supervised video transformers for isolated sign language recognition

M Sandoval-Castaneda, Y Li, D Brentari… - arXiv preprint arXiv …, 2023 - arxiv.org
arXiv preprint arXiv:2309.02450, 2023
This paper presents an in-depth analysis of various self-supervision methods for isolated sign language recognition (ISLR). We consider four recently introduced transformer-based approaches to self-supervised learning from videos and four pre-training data regimes, and study all sixteen combinations on the WLASL2000 dataset. Our findings reveal that MaskFeat achieves performance superior to pose-based and supervised video models, with a top-1 accuracy of 79.02% on gloss-based WLASL2000. Furthermore, we analyze these models' ability to produce representations of ASL signs using linear probing on diverse phonological features. This study underscores the value of architecture and pre-training task choices in ISLR. Specifically, our results on WLASL2000 highlight the power of masked reconstruction pre-training, and our linear probing results demonstrate the importance of hierarchical vision transformers for sign language representation.
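
The linear-probing evaluation mentioned in the abstract can be summarized as: freeze the pretrained video transformer and train only a linear classifier on its features, one probe per phonological feature (e.g., handshape or movement). The following is a minimal PyTorch sketch of that protocol, not the authors' code; the `backbone` module, the `probe_epoch` helper, and the assumption that the backbone returns fixed-size clip embeddings are all illustrative.

    import torch
    import torch.nn as nn

    class LinearProbe(nn.Module):
        """Single linear layer trained on top of frozen backbone features."""
        def __init__(self, feat_dim: int, num_classes: int):
            super().__init__()
            self.fc = nn.Linear(feat_dim, num_classes)

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            return self.fc(feats)

    def probe_epoch(backbone, probe, loader, optimizer, device="cuda"):
        # Hypothetical training loop: `backbone` is any frozen pretrained
        # video transformer assumed to map clips of shape (B, C, T, H, W)
        # to embeddings of shape (B, feat_dim).
        backbone.eval()
        probe.train()
        criterion = nn.CrossEntropyLoss()
        for clips, labels in loader:
            clips, labels = clips.to(device), labels.to(device)
            with torch.no_grad():        # no gradients flow into the backbone
                feats = backbone(clips)  # (B, feat_dim) clip embeddings
            logits = probe(feats)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()              # updates only the linear layer
            optimizer.step()

Because the backbone stays frozen, probe accuracy directly measures how linearly decodable each phonological feature is from the learned representations, which is what makes it a useful comparison across architectures and pre-training tasks.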