Abstract:
Incremental learning is desirable for future learning systems: it can deliver fast, high-quality outcomes with less computational power, even though large amounts of data and complex, advanced algorithms are available. We propose a Bisecting K-means technique, optimized with Adaptive Aggregation Networks, for class-incremental learning. Whenever an artificial neural network is trained iteratively and new data arrives, two major problems emerge. First, the model's behavior on new classes begins to override what it learned from earlier ones. Second, it is impractical for the model to replay the entire previous dataset. Our proposed technique addresses these problems by training multiple incoming classes step by step: at each stage, the classifier is trained on the new class data and then evaluated on test data from both the previously and the currently trained classes. The significant issue is the severe memory limitation, which leads to an unbalanced distribution of old and new classes and also affects the stability and plasticity of the system. This research combines our technique with the updated Adaptive Aggregation Networks approach, which yields the best results and the least catastrophic forgetting relative to previous studies. We ran several experiments on three different datasets, including FISH (UWA) and ImageNet-Subset. On the FISH (UWA) dataset, we compared our proposed technique with regular K-means and achieved improved accuracy, testing loss, and forgetting: with N = 5 and N = 10, accuracies of 78.57% and 76.6%, with catastrophic forgetting of 0.88 and 11.88, respectively. The results of our proposed technique on the FISH (UWA) dataset are outstanding.
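The bisecting K-means step named above can be sketched as follows. This is a minimal, hypothetical illustration in plain NumPy, not the paper's actual code: the function names, the 2-means initialization, and the "split the largest cluster" policy are our assumptions. It shows the generic algorithm, which repeatedly bisects the largest remaining cluster with 2-means until the requested number of clusters is reached; in class-incremental settings such clusters are often used to select representative exemplars of old classes under a memory budget.

```python
import numpy as np

def kmeans_2(X, iters=20, seed=0):
    """Plain 2-means: partition the rows of X into two clusters."""
    rng = np.random.default_rng(seed)
    # Initialize the two centers from two distinct data points.
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned points.
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def bisecting_kmeans(X, n_clusters):
    """Repeatedly split the largest cluster with 2-means until
    n_clusters clusters remain. Returns a list of index arrays."""
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        # Pick the largest current cluster to bisect.
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        members = clusters.pop(idx)
        labels = kmeans_2(X[members])
        clusters.append(members[labels == 0])
        clusters.append(members[labels == 1])
    return clusters
```

For example, running `bisecting_kmeans(features, 5)` on the feature vectors of one old class would yield five index groups, from which one exemplar per group (e.g. the point nearest each group mean) could be kept in the memory buffer.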