An Interpretable Classifier for High-Resolution Breast Cancer Screening Images Utilizing Weakly Supervised Localization

  • Shaik Roshna, Dr. Prasanthi Boyapati

Abstract

Medical images differ substantially from natural images in their much higher resolution and much smaller regions of interest. As a result, neural network architectures that perform well on natural images may not be suitable for interpreting medical images. In this study, we propose a new neural network model that addresses these specific characteristics of medical images. The model uses a low-capacity but memory-efficient network to identify the most informative regions across the entire image. It then applies a higher-capacity network to collect details from the selected regions. Finally, a fusion module combines the global and local information to produce a prediction. While existing techniques typically require lesion segmentation during training, our model is trained with only image-level labels and can still produce pixel-level maps that indicate potential malignant findings. We apply this model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art methods. Compared to ResNet-34, our model is 4.1x faster at inference while using 78.4% less GPU memory. Moreover, in a reader study we show that our model surpasses radiologist-level performance by an AUC margin of 0.11.
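The three-stage pipeline described above (global network → region selection → local network → fusion) can be illustrated with a toy sketch. Here simple intensity heuristics stand in for the learned global and local CNNs, and all function names and the 50/50 fusion weighting are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def global_saliency(image, patch=32):
    # Stand-in for the memory-efficient global network: a coarse
    # saliency map computed as the mean intensity of each patch.
    h, w = image.shape
    gh, gw = h // patch, w // patch
    return image[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch).mean(axis=(1, 3))

def top_k_patches(image, saliency, k=3, patch=32):
    # Select the k most salient regions from the full-resolution image.
    flat = np.argsort(saliency.ravel())[::-1][:k]
    coords = [divmod(i, saliency.shape[1]) for i in flat]
    return [image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            for r, c in coords]

def predict(image, k=3, patch=32):
    # Global stage: coarse saliency over the whole image.
    sal = global_saliency(image, patch)
    global_score = sal.max()
    # Local stage: examine only the selected high-resolution patches
    # (stand-in for the higher-capacity local network).
    patches = top_k_patches(image, sal, k, patch)
    local_score = np.mean([p.mean() for p in patches])
    # Fusion stage: combine global and local evidence into one score.
    fused = 0.5 * global_score + 0.5 * local_score
    return 1.0 / (1.0 + np.exp(-fused))  # probability-like output in (0, 1)
```

The key design point, preserved in this sketch, is that the expensive computation runs only on a few selected patches, which is why the full model needs far less GPU memory than applying a deep network to the entire high-resolution image.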

Published
2021-08-07
How to Cite
Shaik Roshna, Dr. Prasanthi Boyapati. (2021). An Interpretable Classifier for High-Resolution Breast Cancer Screening Images Utilizing Weakly Supervised Localization. Design Engineering, 9870-9882. Retrieved from http://www.thedesignengineering.com/index.php/DE/article/view/3578
Section
Articles