Natural language video description (NLVD) has recently received strong interest in the computer vision, natural language processing (NLP), multimedia, and autonomous robotics communities. State-of-the-art (SotA) approaches obtain remarkable results when tested on benchmark datasets; however, they generalize poorly to new datasets. In addition, none of the existing works focuses on processing the input to NLVD systems, which is both visual and textual. In this paper, an extensive study is presented on the role of the visual input, evaluated with respect to the overall NLP performance. This is achieved by performing data augmentation of the visual component, applying common transformations that model the camera distortions, noise, lighting, and camera positioning typical of real-world operative scenarios. A t-SNE-based analysis is proposed to evaluate the effects of the considered transformations on the overall visual data distribution. For this study, the English subset of the Microsoft Research Video Description (MSVD) dataset, commonly used for NLVD, is considered. It was observed that this dataset contains a significant number of syntactic and semantic errors. These errors were amended manually, and the resulting version of the dataset (called MSVD-v2) is used in the experimentation. The MSVD-v2 dataset is released to help gain insight into the NLVD problem.
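A minimal sketch of the kind of per-frame visual data augmentation the abstract describes, using OpenCV and NumPy. The function names, transformations, and parameter ranges here are illustrative assumptions; the paper's exact augmentation pipeline may differ.

```python
import cv2
import numpy as np

def add_gaussian_noise(frame: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Model sensor noise with additive Gaussian noise."""
    noise = np.random.normal(0.0, sigma, frame.shape)
    return np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def adjust_lighting(frame: np.ndarray, gamma: float = 1.5) -> np.ndarray:
    """Model lighting changes with a gamma correction applied via a LUT."""
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(frame, table)

def blur(frame: np.ndarray, ksize: int = 7) -> np.ndarray:
    """Model defocus-style camera distortion with a Gaussian blur."""
    return cv2.GaussianBlur(frame, (ksize, ksize), 0)

def reposition(frame: np.ndarray, angle: float = 5.0, scale: float = 1.1) -> np.ndarray:
    """Model a different camera positioning with a rotation and zoom."""
    h, w = frame.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    return cv2.warpAffine(frame, m, (w, h), borderMode=cv2.BORDER_REFLECT)

def augment_video(frames, transform):
    """Apply the same transformation to every frame of a clip."""
    return [transform(f) for f in frames]
```

Applying one transformation consistently across all frames of a clip keeps the perturbation coherent in time, so the NLVD model sees what looks like a single degraded camera rather than per-frame flicker.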
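A minimal sketch of the t-SNE-based distribution analysis mentioned in the abstract, assuming each clip is already represented by a feature vector (e.g., pooled CNN features). The `features` and `labels` inputs are hypothetical placeholders for the clip descriptors and their augmentation type.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, perplexity: float = 30.0):
    """Project clip features to 2-D and color them by augmentation type."""
    embedded = TSNE(n_components=2, perplexity=perplexity,
                    random_state=0).fit_transform(features)
    for label in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == label]
        plt.scatter(embedded[idx, 0], embedded[idx, 1], s=8, label=label)
    plt.legend()
    plt.title("t-SNE of visual features: original vs. augmented clips")
    plt.show()
```

If an augmentation shifts its clips away from the original cluster in the embedding, it has visibly altered the visual data distribution, which is the effect the analysis is meant to expose.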
Publication details
2020, IEEE TRANSACTIONS ON MULTIMEDIA, vol. 22, pp. 271-283
The Role of the Input in Natural Language Video Description (01a Journal article)
Cascianelli Silvia, Costante Gabriele, Devo Alessandro, Ciarfuglia Thomas A., Valigi Paolo, Fravolini Mario L.
Research group: Artificial Intelligence and Robotics