dc.description.abstract |
Attribute and class noise is a pervasive issue in software quality assessment that has
received considerable attention because of its substantial impact on classification algorithms.
This study examines the complex interplay between attribute and class noise in
software quality datasets and demonstrates the improvements in model performance that
result from identifying effective means of reducing specific forms of noise. It conducts a
broad empirical study, applying random forest as the key classification approach
and combining it with various data sampling methods, to evaluate the significance of attribute and class
noise. The study investigates the characteristics of attribute noise alongside class noise,
examining their effects on model performance. Corresponding changes in accuracy,
precision, recall, and F-score are observed as attribute noise levels increase. The
experimental data quantify the benefits of careful noise reduction when assessing
software quality. In particular, the study shows that significant gains in recall,
accuracy, precision, and F-score are closely correlated with noise reduction. Notably,
marked improvements are observed when moving from uncleaned data to class-noise-cleaned
data. The results demonstrate the importance of noise handling approaches and the effect
of noise on the accuracy and dependability of machine learning models. The proposed
algorithm achieves 94.59%, 97.74%, 94.79%, and 96.24% in accuracy,
precision, recall, and F-score, respectively, which demonstrates how necessary noise reduction
strategies are and how extensive an effect they have on the performance of an ML model. |
en_US |