Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition

1KAIST / Daejeon, South Korea. 2Hanyang University / Seoul, South Korea.
British Machine Vision Conference (BMVC 2022)

Examples of our Scene-PHOENIX with diverse backgrounds for evaluating the background robustness of CSLR models. To the best of our knowledge, we are the first to construct a benchmark dataset with various backgrounds that can evaluate the background robustness of CSLR models, and it has a reasonable construction cost since it reuses existing CSLR datasets and scene datasets.

Abstract

The goal of this work is background-robust continuous sign language recognition. Most existing Continuous Sign Language Recognition (CSLR) benchmarks are filmed in studios with a fixed, static monochromatic background. In the real world, however, signing is not limited to studios.

In order to analyze the robustness of CSLR models under background shifts, we first evaluate existing state-of-the-art CSLR models on diverse backgrounds. To synthesize sign videos with a variety of backgrounds, we propose a pipeline that automatically generates a benchmark dataset from existing CSLR benchmarks. Our newly constructed benchmark dataset consists of diverse scenes to simulate real-world environments. We observe that even the most recent CSLR method cannot recognize glosses well on our new dataset with changed backgrounds.

In this regard, we also propose a simple yet effective training scheme including (1) background randomization and (2) feature disentanglement for CSLR models. The experimental results on our dataset demonstrate that our method generalizes well to other unseen background data with minimal additional training images.

Introduction

Most publicly available CSLR benchmarks are curated from either studio or TV broadcast footage, where background images are fixed and monochromatic. A naïve solution would be to construct a new dataset outside the studio, but the cost of extensive gloss annotation, as well as of collecting sign videos from skilled signers, presents a significant challenge.

To tackle this issue, we create variants of the development and test splits of PHOENIX-2014 [1] with our automated pipeline, and we name the resulting benchmark with diverse backgrounds Scene-PHOENIX.

Background Attack to CSLR models

Based on our Scene-PHOENIX dataset, we find that current CSLR approaches are not robust to background shifts. Both the baseline (ResNet-18 [2] + 1D-CNN) and VAC [3], the state-of-the-art model in CSLR, severely degrade when tested on Scene-PHOENIX.

Word Error Rate (WER) scores on the test benchmarks. We attack the state-of-the-art model VAC by changing the background images in the test split of the PHOENIX-2014 dataset.
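For reference, WER is the minimum number of substitutions, deletions, and insertions needed to turn the predicted gloss sequence into the reference, normalized by the reference length. The following is a minimal sketch of this standard computation; the gloss tokens in the example are illustrative and not taken from PHOENIX-2014.

# Minimal Word Error Rate (WER) sketch: edit distance between predicted and
# reference gloss sequences, normalized by the reference length.
def wer(reference, hypothesis):
    """reference, hypothesis: lists of gloss tokens."""
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[n][m] / max(n, 1)

# One substitution over four reference glosses -> WER = 0.25
print(wer(["MORGEN", "REGEN", "NORD", "WIND"],
          ["MORGEN", "SONNE", "NORD", "WIND"]))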

Background Agnostic Framework

Our framework comprises (1) Background Randomization (BR), which simply generates a sign video with a new background via mixup [4] to simulate background shift, and (2) a Disentangling Auto-Encoder (DAE), which aims to disentangle the signer from the background in the latent space.

Background Randomization

Data generation. (a) For Scene-PHOENIX, background matting is performed with scene images using person masks. (b) For the training set, we apply mixup between a sign video and a scene image without person masks, which avoids additional labeling cost during training.
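Both composition modes reduce to simple per-pixel blending. Below is a minimal sketch under the assumption that frames, scene images, and soft person masks are float arrays in [0, 1]; the Beta-distributed mixup coefficient follows the original mixup recipe [4], and the function names are illustrative rather than the authors' exact implementation.

# Sketch of the two composition modes: mask-based matting for evaluation data
# and mask-free mixup for training-time Background Randomization (BR).
import numpy as np

def matte_background(frame, person_mask, scene):
    """Scene-PHOENIX construction: keep the signer, replace the background.
    frame, scene: (H, W, 3) in [0, 1]; person_mask: (H, W, 1) in [0, 1]."""
    return person_mask * frame + (1.0 - person_mask) * scene

def background_randomization(frame, scene, alpha=2.0):
    """Training-time BR: mixup between a sign frame and a scene image,
    no person mask required. The Beta(alpha, alpha) sampling is an assumption
    following the standard mixup formulation."""
    lam = np.random.beta(alpha, alpha)
    return lam * frame + (1.0 - lam) * scene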

Disentangling Auto-Encoder

The overall architecture of the proposed model. The original video passes through the Teacher Network, and the background-randomized video passes through the Student Network. In the latent space, the signer features are swapped between the two branches. Then, the swapped features are fed to the shared DAE decoder to reconstruct the original features.
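The core of the DAE is the swap of signer codes between the teacher and student branches. The following PyTorch sketch illustrates the idea under simplifying assumptions: linear encoders and decoder, per-frame features of fixed dimension, and a plain MSE reconstruction loss. Module names, dimensions, and the exact reconstruction targets are illustrative, not the authors' precise design.

# Sketch of the feature-swapping idea behind the Disentangling Auto-Encoder.
import torch
import torch.nn as nn

class DAESketch(nn.Module):
    def __init__(self, dim=512, z_dim=256):
        super().__init__()
        self.enc_signer = nn.Linear(dim, z_dim)      # signer (content) code
        self.enc_background = nn.Linear(dim, z_dim)  # background code
        self.decoder = nn.Linear(2 * z_dim, dim)     # shared decoder

    def forward(self, feat_teacher, feat_student):
        s_t, b_t = self.enc_signer(feat_teacher), self.enc_background(feat_teacher)
        s_s, b_s = self.enc_signer(feat_student), self.enc_background(feat_student)
        # Swap the signer codes across branches: if disentanglement succeeds,
        # each branch can still reconstruct its feature because only the
        # background differs between the original and randomized videos.
        rec_teacher = self.decoder(torch.cat([s_s, b_t], dim=-1))
        rec_student = self.decoder(torch.cat([s_t, b_s], dim=-1))
        # Reconstruction targets here are an assumption for illustration.
        return (nn.functional.mse_loss(rec_teacher, feat_teacher)
                + nn.functional.mse_loss(rec_student, feat_student))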

Experimental Results

Main Results

Experimental results on PHOENIX-2014 and Scene-PHOENIX. VAC-Oracle is a VAC model trained on all LSUN [5] background-matted images. While the performance of the baselines severely degrades on Scene-PHOENIX, the proposed Background Randomization (BR) yields significant performance improvements. Our final model (BR + DAE) shows the best performance among all compared models. Note that our final model with K = 1 outperforms all VAC w/ BR models. Moreover, ours with K = 1000 surpasses VAC-Oracle and VAC on both datasets without any off-the-shelf human segmentation masks.

Ablation on Additional Training Data

Using DAE is more efficient in terms of annotation cost than using pose, which requires extra annotation. We emphasize that using an additional 100 scene images for BR is much cheaper than annotating poses for training.

Different Backbone Network

Comparison of performance with different feature extractors: GoogLeNet [6] and ResNet-18. Our framework consistently works well with different feature extractors.

Qualitative Results

Grad-CAM Visualization

By virtue of our Disentangling Auto-Encoder, the disentangled latent features consistently focus on the signer and the background area, respectively.

Gloss Predictions

We visualize the frame-level gloss predictions from the models and show the difference when the background is shifted. We observe that VAC fails to predict the correct glosses under different backgrounds, while our method consistently recognizes glosses regardless of the background.
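For reference, frame-level predictions are typically converted into a gloss sequence with CTC-style greedy decoding (collapse repeated predictions, then drop blanks), as is common in CTC-trained CSLR models such as VAC [3]. The sketch below is a minimal illustration and may differ from the exact decoding used by the models.

# Minimal CTC-style greedy decoding of frame-level gloss predictions.
import torch

def greedy_decode(frame_logits, blank=0):
    """frame_logits: (T, num_glosses) tensor of per-frame gloss scores.
    Returns the decoded gloss index sequence."""
    frame_ids = frame_logits.argmax(dim=-1).tolist()
    glosses, prev = [], None
    for idx in frame_ids:
        if idx != blank and idx != prev:  # drop blanks, collapse repeats
            glosses.append(idx)
        prev = idx
    return glosses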

References

[1] Koller, Oscar, Jens Forster, and Hermann Ney. "Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers." Computer Vision and Image Understanding 141 (2015): 108-125.

[2] He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.

[3] Min, Yuecong, et al. "Visual alignment constraint for continuous sign language recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.

[4] Zhang, Hongyi, et al. "mixup: Beyond empirical risk minimization." arXiv preprint arXiv:1710.09412 (2017).

[5] Yu, Fisher, et al. "LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop." arXiv preprint arXiv:1506.03365 (2015).

[6] Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

BibTeX

If you find our work useful for your research, please cite it with the following BibTeX:

@inproceedings{jang2022signing, 
   title={Signing Outside the Studio: Benchmarking Background Robustness for Continuous Sign Language Recognition},
   author={Jang, Youngjoon and Oh, Youngtaek and Cho, Jae Won and Kim, Dong-Jin and Chung, Joon Son and Kweon, In So},
   booktitle={British Machine Vision Conference (BMVC)},
   year={2022}
}