Multimodal Contrastive Learning for Alzheimer’s Disease Prediction in Imaging Genetics
Abstract
Alzheimer’s disease (AD) is an inherently multimodal pathology driven by complex genetic and phenotypic interactions, making reliable early detection a critical challenge. Existing multimodal approaches often struggle to effectively align static baseline genetic risk with longitudinal physical changes. In this work, we introduce a novel two-stage contrastive learning framework integrating Single Nucleotide Polymorphisms (SNPs) and structural MRI volumes. To overcome the bottleneck of single-timepoint genetic measurements, we propose an age-conditioned augmentation strategy that generates time-aware genetic embeddings for longitudinal contrastive pairing. Utilizing a dynamic Gated Fusion mechanism for downstream classification, our approach effectively weights modality contributions. Evaluated on the ADNI database, our framework consistently outperforms strong classical baselines and state-of-the-art generative models, demonstrating particularly significant improvements in early-stage cognitive decline detection.
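The abstract describes weighting modality contributions via a Gated Fusion mechanism, but does not specify the architecture. Below is a minimal sketch of one common form of gated fusion, assuming a per-dimension sigmoid gate computed from the concatenated genetic and imaging embeddings; the function name, weight shapes, and gating formula are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(genetic_emb, imaging_emb, W, b):
    """Illustrative gated fusion (assumed form, not the paper's exact model).

    gate = sigmoid(W @ [g; i] + b)   -> per-dimension weight in (0, 1)
    fused = gate * g + (1 - gate) * i -> convex combination of modalities
    """
    concat = np.concatenate([genetic_emb, imaging_emb])
    gate = sigmoid(W @ concat + b)
    return gate * genetic_emb + (1.0 - gate) * imaging_emb

# Toy usage with random embeddings of dimension 8.
rng = np.random.default_rng(0)
d = 8
g = rng.standard_normal(d)   # genetic (SNP) embedding
i = rng.standard_normal(d)   # imaging (MRI) embedding
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(g, i, W, b)
```

Because the gate lies in (0, 1), each fused coordinate is a convex combination of the two modality embeddings, which is what lets the model softly down-weight a less informative modality per sample.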
How to Cite:
Fallmann, J., & Kobler, E. (2026) “Multimodal Contrastive Learning for Alzheimer’s Disease Prediction in Imaging Genetics”, Proceedings of the Austrian Symposium on AI, Robotics, and Vision 3(1), 33-38.