Abstract:
Breast cancer is the most common cancer among women worldwide, and microscopic examination is essential for its diagnosis. During a pathology examination, finding clinical assessment cues to reach an accurate diagnosis requires laboriously reviewing tissue images at various magnification levels, and experts may disagree when examining breast cancer cases individually. Advances in digital imaging allow pathology images to be evaluated with computer vision and deep learning techniques, potentially automating a number of tasks in the diagnostic pathology workflow. Such automation could reduce observer variability, increase objectivity, and provide fast and accurate quantification. While deep learning techniques achieve remarkable results on breast cancer histopathology image classification tasks, current state-of-the-art algorithms are either computationally costly or handle only binary or only multiclass classification. Models that combine binary and multiclass classification do not match our model's multiclass accuracy. Furthermore, the few existing models that achieve high accuracy depend on specific magnification factors, which makes them magnification-dependent.
Our primary contribution in this work is the implementation of a YOLOv5 (You Only Look Once) model with a ResNet feature extractor: a ResNet block is incorporated after the CSP-Darknet53 backbone to extract complex hierarchical features from histopathological images for breast cancer classification. Additionally, we trained the model on images from all magnification factors, which makes it magnification-independent and increases its generalizability: it can detect breast cancer in images acquired at various magnifications (10X, 100X, 200X, and 400X). Experiments indicate that an 80%/20% train-test split yields the best accuracy. The proposed model achieves 99% accuracy for binary classification, 98% for the malignant class, and 97% for the benign class.
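To make the architectural idea concrete, the sketch below illustrates only the residual ("skip") connection that a ResNet block adds after a backbone such as CSP-Darknet53. This is a minimal, illustrative sketch, not the authors' implementation: the function names and scalar "layers" are hypothetical stand-ins for convolutional layers, used solely to show that the block's output is transform(x) + x.

```python
def conv_stand_in(x, weight):
    # Hypothetical stand-in for a convolutional layer: an elementwise
    # scaling of the feature vector (real layers are learned convolutions).
    return [v * weight for v in x]

def residual_block(x, w1=0.5, w2=0.5):
    # Two stacked "conv" layers followed by the identity shortcut,
    # i.e. the defining structure of a ResNet block: f(x) + x.
    h = conv_stand_in(x, w1)
    h = conv_stand_in(h, w2)
    return [a + b for a, b in zip(h, x)]  # skip connection

# Hypothetical backbone output (e.g. features from CSP-Darknet53):
features = [1.0, 2.0, 3.0]
print(residual_block(features))  # [1.25, 2.5, 3.75]
```

The shortcut lets gradients flow directly to earlier layers, which is why stacking such blocks after the backbone can deepen feature extraction without the degradation that plain deep stacks suffer.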