Na Luo, Weiyang Shi, Zhengyi Yang, Ming Song, Tianzi Jiang. Multimodal Fusion of Brain Imaging Data: Methods and Applications[J]. Machine Intelligence Research, 2024, 21(1): 136-152. DOI: 10.1007/s11633-023-1442-8

Multimodal Fusion of Brain Imaging Data: Methods and Applications

Neuroimaging data typically include multiple modalities, such as structural or functional magnetic resonance imaging, diffusion tensor imaging, and positron emission tomography, which provide complementary views for observing and analyzing the brain. To leverage the complementary representations of different modalities, multimodal fusion is needed to extract both inter-modality and intra-modality information. With this rich information, it has become increasingly popular to combine data from multiple modalities to explore the structural and functional characteristics of the brain in both health and disease. In this paper, we first review a wide spectrum of advanced machine learning methodologies for fusing multimodal brain imaging data, broadly categorized into unsupervised and supervised learning strategies. We then discuss representative applications, including how multimodal fusion helps us understand brain arealization, improves the prediction of behavioral phenotypes and brain aging, and accelerates the discovery of biomarkers for brain diseases. Finally, we discuss emerging trends and important future directions. Collectively, we intend to offer a comprehensive overview of brain imaging fusion methods and their successful applications, along with the challenges posed by multi-scale and big data, which raise an urgent demand for new models and platforms.
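To give a concrete flavor of the unsupervised fusion strategies surveyed in the paper, the sketch below applies canonical correlation analysis (CCA), a classical unsupervised approach to linking two modalities, to synthetic data. This is a minimal illustration, not the paper's own method; all names and data shapes (n_subjects, the feature counts, the two synthetic "modality" matrices) are assumptions made for the example.

```python
# Minimal sketch of unsupervised multimodal fusion via CCA.
# Data are synthetic; shapes and variable names are illustrative
# assumptions, not taken from the reviewed paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects, n_feat_smri, n_feat_fmri = 100, 500, 400

# Shared latent sources drive both modalities, plus modality-specific noise.
latent = rng.standard_normal((n_subjects, 2))
X_smri = latent @ rng.standard_normal((2, n_feat_smri)) \
    + 0.5 * rng.standard_normal((n_subjects, n_feat_smri))
X_fmri = latent @ rng.standard_normal((2, n_feat_fmri)) \
    + 0.5 * rng.standard_normal((n_subjects, n_feat_fmri))

# CCA finds maximally correlated projections of the two modalities,
# exposing the inter-modality (shared) information.
cca = CCA(n_components=2)
U, V = cca.fit_transform(X_smri, X_fmri)

# Canonical correlations quantify the coupling between modalities.
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.3f}")
```

In this toy setting the first canonical correlations are high because both matrices share the same latent sources; in real studies the projected components would be inspected for anatomically or functionally meaningful cross-modal patterns.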
