Learning audio-visual correlations from variational cross-modal generation

Y Zhu, Y Wu, H Latapie, Y Yang… - ICASSP 2021-2021 IEEE …, 2021 - ieeexplore.ieee.org
People can easily imagine the potential sound of an event while seeing it. This natural synchronization between audio and visual signals reveals their intrinsic correlations. To this end, we propose to learn audio-visual correlations from the perspective of cross-modal generation in a self-supervised manner; the learned correlations can then be readily applied to multiple downstream tasks such as audio-visual cross-modal localization and retrieval. We introduce a novel Variational AutoEncoder (VAE) framework that consists of Multiple encoders and a Shared decoder (MS-VAE), with an additional Wasserstein distance constraint, to tackle the problem. Extensive experiments demonstrate that the optimized latent representation of the proposed MS-VAE effectively captures the audio-visual correlations and can be readily applied to multiple audio-visual downstream tasks, achieving competitive performance even without any label information during training.
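The abstract only names the components of MS-VAE. As a rough illustration, not the authors' implementation, the sketch below shows a multi-encoder, shared-decoder VAE in PyTorch with a closed-form 2-Wasserstein term aligning the two diagonal-Gaussian posteriors. All layer sizes, feature dimensions (audio_dim, visual_dim, latent_dim), and loss weights (beta, gamma) are assumptions chosen for demonstration, and the exact Wasserstein formulation in the paper may differ.

```python
# Minimal sketch of a multi-encoder, shared-decoder VAE (MS-VAE-style).
# Not the authors' code; dimensions and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Encodes one modality's features into a diagonal Gaussian (mu, logvar)."""
    def __init__(self, in_dim, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)


class MSVAE(nn.Module):
    """Multiple encoders (audio, visual) feeding a single shared decoder."""
    def __init__(self, audio_dim=128, visual_dim=512, latent_dim=128):
        super().__init__()
        self.enc_audio = ModalityEncoder(audio_dim, latent_dim)
        self.enc_visual = ModalityEncoder(visual_dim, latent_dim)
        # Shared decoder maps the common latent space back toward both modalities.
        self.dec_shared = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU())
        self.out_audio = nn.Linear(512, audio_dim)
        self.out_visual = nn.Linear(512, visual_dim)

    @staticmethod
    def reparameterize(mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, audio, visual):
        mu_a, lv_a = self.enc_audio(audio)
        mu_v, lv_v = self.enc_visual(visual)
        z_a = self.reparameterize(mu_a, lv_a)
        z_v = self.reparameterize(mu_v, lv_v)
        # Cross-modal generation: each modality's latent reconstructs both modalities.
        h_a, h_v = self.dec_shared(z_a), self.dec_shared(z_v)
        return (mu_a, lv_a, mu_v, lv_v,
                self.out_audio(h_a), self.out_visual(h_a),
                self.out_audio(h_v), self.out_visual(h_v))


def gaussian_w2(mu1, lv1, mu2, lv2):
    """Closed-form squared 2-Wasserstein distance between diagonal Gaussians."""
    s1, s2 = torch.exp(0.5 * lv1), torch.exp(0.5 * lv2)
    return ((mu1 - mu2) ** 2 + (s1 - s2) ** 2).sum(dim=1).mean()


def loss_fn(model, audio, visual, beta=1.0, gamma=1.0):
    mu_a, lv_a, mu_v, lv_v, a_from_a, v_from_a, a_from_v, v_from_v = model(audio, visual)
    # Reconstruction of both modalities from both latents (cross-modal generation).
    recon = (F.mse_loss(a_from_a, audio) + F.mse_loss(v_from_a, visual) +
             F.mse_loss(a_from_v, audio) + F.mse_loss(v_from_v, visual))
    # Standard VAE KL terms toward a unit Gaussian prior.
    kl = -0.5 * torch.mean(1 + lv_a - mu_a ** 2 - lv_a.exp()) \
         - 0.5 * torch.mean(1 + lv_v - mu_v ** 2 - lv_v.exp())
    # Wasserstein-style constraint aligning the audio and visual posteriors.
    w2 = gaussian_w2(mu_a, lv_a, mu_v, lv_v)
    return recon + beta * kl + gamma * w2
```

The closed-form 2-Wasserstein distance is used here because it has a simple expression for diagonal Gaussians; once trained, the aligned latent representations could be compared directly (e.g., by distance in latent space) for cross-modal localization or retrieval.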