Hao Zhu, Man-Di Luo, Rui Wang, Ai-Hua Zheng, Ran He. Deep Audio-visual Learning: A Survey[J]. Machine Intelligence Research, 2021, 18(3): 351-376. DOI: 10.1007/s11633-021-1293-0

Deep Audio-visual Learning: A Survey

Audio-visual learning, which aims to exploit the relationship between the audio and visual modalities, has drawn considerable attention since deep learning began to be applied successfully. Researchers tend to leverage these two modalities either to improve the performance of previously considered single-modality tasks or to address new and challenging problems. In this paper, we provide a comprehensive survey of recent developments in audio-visual learning. We divide current audio-visual learning tasks into four subfields: audio-visual separation and localization, audio-visual correspondence learning, audio-visual generation, and audio-visual representation learning. State-of-the-art methods, as well as the remaining challenges of each subfield, are further discussed. Finally, we summarize the commonly used datasets and challenges.
