

Title

Interpretable self-supervised contrastive learning for colorectal cancer histopathology: GRADCAM visualization

 

Authors

Tarun Jain1 & Andrew M. Lynn2,*

 

Affiliation

School of Computational and Integrative Sciences (SCIS), Jawaharlal Nehru University, New Delhi, India; *Corresponding author

 

Email

Tarun Jain - E-mail: tarun32_sit@jnu.ac.in

Andrew M. Lynn - E-mail: andrew@jnu.ac.in

 

Article Type

Research Article

 

Date

Received July 1, 2025; Revised July 31, 2025; Accepted July 31, 2025; Published July 31, 2025

 

Abstract

Accurate colorectal cancer diagnosis from histopathological images is crucial for effective treatment. Therefore, it is of interest to describe a novel framework that combines self-supervised contrastive learning (SSCL) with Grad-CAM-based interpretability for classifying hyperplastic polyps (HP) and sessile serrated adenomas (SSA). A ResNet50 encoder is first pre-trained with SSCL to learn rich feature representations from unlabeled images, minimizing the need for manual annotations; the learned representations are then fine-tuned in a supervised setting, achieving a classification accuracy of 85.86%. Grad-CAM is used to generate visual explanations that highlight the regions most influential in the model's decisions. This interpretable, data-efficient approach outperforms conventional CNN methods, offering improved diagnostic accuracy and enhanced trust in automated pathology.
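
The abstract outlines a three-stage pipeline: SSCL pre-training of a ResNet50 encoder, supervised fine-tuning for HP versus SSA classification, and Grad-CAM visualization. The sketch below illustrates that pipeline in PyTorch under stated assumptions; the SimCLR-style NT-Xent loss, the projection-head sizes, and the choice of layer4 as the Grad-CAM target layer are illustrative assumptions, not details taken from the article.

# Minimal sketch of the described pipeline (assumptions: PyTorch/torchvision;
# SimCLR-style NT-Xent objective stands in for the paper's SSCL loss).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class ContrastiveEncoder(nn.Module):
    """ResNet50 backbone with a small projection head for contrastive pre-training."""

    def __init__(self, proj_dim: int = 128):
        super().__init__()
        backbone = resnet50(weights=None)
        feat_dim = backbone.fc.in_features           # 2048 for ResNet50
        backbone.fc = nn.Identity()                  # keep pooled features
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True), nn.Linear(512, proj_dim)
        )

    def forward(self, x):
        h = self.backbone(x)                         # representation reused for fine-tuning
        z = F.normalize(self.projector(h), dim=1)    # embedding used by the contrastive loss
        return h, z


def nt_xent_loss(z1, z2, temperature: float = 0.5):
    """NT-Xent (SimCLR) loss over two augmented views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                   # (2N, d), already L2-normalized
    sim = z @ z.t() / temperature                    # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)             # positive pair = the other view


def grad_cam(model, target_layer, x, class_idx):
    """Grad-CAM heatmap for an image batch x with respect to class_idx."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    logits[:, class_idx].sum().backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)            # GAP over gradients
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))  # weighted feature maps
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)


# Fine-tuning and visualization (hypothetical usage):
# encoder = ContrastiveEncoder()                                   # pre-train with nt_xent_loss
# classifier = nn.Sequential(encoder.backbone, nn.Linear(2048, 2)) # HP vs SSA head
# heatmap = grad_cam(classifier, encoder.backbone.layer4, batch, class_idx=1)

In this sketch the projection head is discarded after pre-training and only the backbone representations are fine-tuned, which is the usual practice for SimCLR-style contrastive learning; the actual augmentations, optimizer, and hyperparameters used by the authors are described in the full article.
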

 

Keywords

Self-supervised contrastive learning, colorectal cancer histopathology, deep learning, interpretable AI, Grad-CAM

 

Citation

Jain & Lynn, Bioinformation 21(7): 1836-1842 (2025)

 

Edited by

P Kangueane

 

ISSN

0973-2063

 

Publisher

Biomedical Informatics

 

License

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.