Graph Neural Networks and Deep Attention Fusion for Multi-Organ Medical Image Analysis

Elric Winslow

Abstract

This paper proposes a hybrid deep learning framework that integrates Graph Neural Networks (GNNs) with deep attention fusion mechanisms to address the challenges of multi-organ medical image analysis. Traditional convolutional neural networks (CNNs) struggle to capture inter-organ spatial dependencies and cross-regional contextual relationships, which are crucial for accurate diagnosis in complex medical imaging tasks. Our proposed model constructs a medical graph representation in which each node corresponds to a distinct organ or tissue region and the edges represent anatomical or functional correlations. The GNN module propagates high-level relational information among organs, while the deep attention fusion module adaptively combines features across multiple imaging modalities (CT, MRI, and ultrasound) and hierarchical layers. Extensive experiments on public datasets such as the CHAOS and BTCV benchmarks demonstrate that our approach achieves superior segmentation and classification accuracy compared to conventional CNN- and Transformer-based methods. The proposed framework achieves an average Dice coefficient improvement of 3.8% and a sensitivity gain of 4.5% across multi-organ datasets. Furthermore, the attention fusion module enhances interpretability by highlighting clinically relevant regions, making it valuable for real-world diagnostic support systems. The contributions of this paper lie in the unified design of relational reasoning and attention-guided fusion for robust, interpretable, and generalizable medical image analysis.
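The abstract's two core components, message passing over an organ graph and adaptive attention fusion across modalities, can be sketched in miniature as follows. This is an illustrative sketch only: the function names, the toy adjacency matrix, and the mean-activation attention scoring rule are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def gnn_layer(node_feats, adj, weight):
    """One message-passing step: each organ node averages features
    from its anatomically connected neighbours (adjacency matrix),
    then applies a shared linear transform with a ReLU."""
    deg = adj.sum(axis=1, keepdims=True)            # node degrees
    agg = (adj / np.maximum(deg, 1.0)) @ node_feats # mean over neighbours
    return np.maximum(agg @ weight, 0.0)            # linear + ReLU

def attention_fusion(modality_feats):
    """Adaptively weight per-modality feature maps (e.g. CT, MRI,
    ultrasound) with softmax scores; here the score is simply the
    mean activation of each modality (an illustrative choice)."""
    scores = np.array([f.mean() for f in modality_feats])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over modalities
    return sum(w * f for w, f in zip(weights, modality_feats))

# Toy example: 4 organ nodes, 8-dim features, 3 imaging modalities.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)         # organ connectivity
per_modality = [rng.standard_normal((4, 8)) for _ in range(3)]
fused = attention_fusion(per_modality)              # (4, 8) fused features
out = gnn_layer(fused, adj, rng.standard_normal((8, 8)))
print(out.shape)
```

In the full framework, learned projections would replace the random weight matrix and the attention scores would themselves be trained; the sketch only shows how graph aggregation and modality fusion compose.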
