This research surveys existing Computer-Aided Diagnosis (CAD) systems for mammography and presents a CAD system for breast cancer lesions based on an advanced object detection architecture, validated on an experimental dataset. Mammography is an important technique for the early identification of breast cancer because it can reveal changes in the breast long before they can be felt. The CAD system examines digital mammography images for abnormalities such as well-defined or circumscribed masses, calcification, architectural distortion, and asymmetry of the breast. Classical machine learning approaches to feature extraction require domain expertise, which makes them difficult and time-consuming, whereas deep learning algorithms learn from expert-annotated input data and extract features adaptively. Convolutional neural networks (CNNs) and other deep learning methods have been highly successful in imaging tasks such as image detection, recognition, and classification. This research offers a CAD system based on Faster R-CNN, a sophisticated and effective object detection framework with region proposal generation and classifier layers, for automatic detection and classification of breast cancer lesions in mammograms. On the test set, the system achieved a mAP (mean Average Precision) of 0.857, reflecting its combined classification and detection accuracy.
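As a concrete illustration of the architecture described above, the sketch below builds a two-stage Faster R-CNN detector with torchvision. The chapter does not specify a framework, backbone, or class list, so the ResNet-50 + FPN backbone and the lesion classes shown here are assumptions for illustration only, not the authors' implementation.

```python
# A minimal sketch (not the authors' code): a Faster R-CNN detector for
# mammographic lesions built with torchvision. Backbone and class names
# are assumptions for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical lesion classes; index 0 is reserved for the background class.
CLASSES = ["background", "mass", "calcification",
           "architectural_distortion", "asymmetry"]

def build_detector(num_classes: int = len(CLASSES)):
    # Start from a COCO-pretrained Faster R-CNN and replace the box classifier
    # head so it predicts the mammographic lesion classes instead.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_detector()
    model.eval()
    # A dummy grayscale mammogram replicated to 3 channels, 1024 x 1024 pixels.
    image = torch.rand(3, 1024, 1024)
    with torch.no_grad():
        predictions = model([image])
    # Each prediction holds 'boxes', 'labels', and 'scores' for detected lesions.
    print(predictions[0]["boxes"].shape, predictions[0]["scores"][:5])
```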
Aims: The goal of this study is to use a Faster R-CNN deep learning network to detect and classify breast cancer lesions in mammograms.
Methods and Materials: Based on Faster R-CNN, a sophisticated and effective object detection framework with region proposal generation and classifier layers, this research offers a CAD system for the detection and classification of breast cancer lesions in mammograms. The proposed CAD system trains the model on 115 images from the mini-MIAS database (mini-Mammographic Image Analysis Society). The original MIAS database contained 330 digitised mammograms at a resolution of 50 microns; these were downsampled to 200 microns and stored as Portable Gray Map (PGM) files of 1024 x 1024 pixels.
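Since training relies on the mini-MIAS data described above, the following sketch shows how a PGM mammogram and a circle-style ground-truth annotation (centre coordinates and approximate radius, with the origin at the bottom-left corner, as in the standard mini-MIAS convention) might be loaded and converted into a bounding box for the detector. The file name and annotation values are hypothetical.

```python
# A minimal sketch, assuming standard mini-MIAS conventions: 1024 x 1024 PGM
# images and annotations given as (centre_x, centre_y, radius). Paths and
# numbers below are hypothetical examples.
from PIL import Image
import numpy as np

IMAGE_SIZE = 1024  # mini-MIAS images are 1024 x 1024 at 200-micron resolution

def load_pgm(path: str) -> np.ndarray:
    # PGM files open directly with Pillow; convert to an 8-bit grayscale array.
    return np.asarray(Image.open(path).convert("L"))

def circle_to_box(cx: int, cy: int, radius: int) -> list:
    # mini-MIAS marks an abnormality by centre and approximate radius measured
    # from the bottom-left, so flip y into image (top-left origin) coordinates.
    cy = IMAGE_SIZE - cy
    x_min = max(cx - radius, 0)
    y_min = max(cy - radius, 0)
    x_max = min(cx + radius, IMAGE_SIZE)
    y_max = min(cy + radius, IMAGE_SIZE)
    return [x_min, y_min, x_max, y_max]  # [x1, y1, x2, y2] box for the detector

if __name__ == "__main__":
    image = load_pgm("mdb184.pgm")                   # hypothetical file name
    box = circle_to_box(cx=352, cy=624, radius=114)  # hypothetical annotation
    print(image.shape, box)
```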
Conclusions: In this work, the mini-MIAS database was used because it requires only a few data pre-processing steps and modest hardware. The CAD system will be improved further using the CBIS-DDSM database, a curated subset of the original DDSM (Digital Database for Screening Mammography) annotated by a professional mammographer. The CBIS-DDSM database stores mammograms that have been decompressed and converted to DICOM format.
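For the planned extension to CBIS-DDSM, a minimal sketch of reading one of its DICOM mammograms is given below. The use of pydicom and the file name are assumptions; the chapter only states that the images are decompressed and stored in DICOM format.

```python
# A minimal sketch (assumption: pydicom is used, which the chapter does not
# specify) for reading a decompressed CBIS-DDSM mammogram stored as DICOM
# and scaling its pixel data to 8 bits for a detection pipeline.
import numpy as np
import pydicom

def load_dicom_mammogram(path: str) -> np.ndarray:
    ds = pydicom.dcmread(path)                 # parse the DICOM file
    pixels = ds.pixel_array.astype(np.float32)  # raw (typically 16-bit) pixels
    # Normalise to the 0-255 range expected by common detection pipelines.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels /= pixels.max()
    return (pixels * 255).astype(np.uint8)

if __name__ == "__main__":
    image = load_dicom_mammogram("Mass-Training_P_00001_LEFT_CC.dcm")  # hypothetical path
    print(image.shape, image.dtype)
```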
Author(s) Details
Rubi Devika
R & D Department, Keshav Memorial Institute of Technology, 3-5-1026 Narayanaguda, Hyderabad, Telangana 500029, India.
Subramanian Rajasekaran
R & D Department, Keshav Memorial Institute of Technology, 3-5-1026 Narayanaguda, Hyderabad, Telangana 500029, India.
R. Lakshmi Gayathri
R & D Department, Keshav Memorial Institute of Technology, 3-5-1026 Narayanaguda, Hyderabad, Telangana 500029, India.
Jain Priyal
R & D Department, Keshav Memorial Institute of Technology, 3-5-1026 Narayanaguda, Hyderabad, Telangana 500029, India.
Kanneganti Sai Rohith
R & D Department, Keshav Memorial Institute of Technology, 3-5-1026 Narayanaguda, Hyderabad, Telangana 500029, India.
View Book: https://stm.bookpi.org/IDMMR-V6/article/view/5594