NUST Institutional Repository

Deep Learning with Adaptive Aggregation technique and Optimization for Class Incremental Learning

Show simple item record

dc.contributor.author Saeed, Fahad
dc.date.accessioned 2022-09-16T04:29:59Z
dc.date.available 2022-09-16T04:29:59Z
dc.date.issued 2022-08-30
dc.identifier.other RCMS003349
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/30490
dc.description.abstract Incremental learning is desirable for future learning systems: it can deliver fast, better outcomes using less computational power, even though large amounts of data and complex, advanced algorithms are available. We propose a Bisecting K-Means technique, optimized using Adaptive Aggregation Networks, for class-incremental learning. Whenever an artificial neural network is trained iteratively and new data arrives, two major problems arise: first, the model's behavior on the new data begins to override what it learned from old data; second, it is impractical for the model to replay the entire dataset. Our proposed technique overcomes these problems by training on incoming classes step by step. At each stage, new class data is used to train the classifier, which is then evaluated on test data from both previously trained and currently trained classes. A significant issue is the severe memory limitation, which results in an unbalanced distribution of old and new classes and affects the stability and plasticity of the system. This research applies an updated Adaptive Aggregation Networks approach and achieves the best results, with the least catastrophic forgetting, compared to previous studies. We ran several experiments on three different datasets. Comparing our proposed technique with regular K-Means on the FISH (UWA) dataset, we achieved improved results in terms of accuracy, testing loss, and forgetting: with N = 5 and N = 10, accuracies of 78.57% and 76.6%, with catastrophic forgetting of 0.88 and 11.88, respectively. We conducted several experiments on the FISH (UWA) and ImageNet-Subset datasets, and the results of our proposed technique on the FISH (UWA) dataset are outstanding. en_US
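The record does not include the thesis code, so the sketch below only illustrates the standard bisecting K-Means clustering procedure the abstract names (repeatedly splitting the largest cluster with 2-means until the desired number of clusters is reached); it is a minimal NumPy illustration, not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

def kmeans_2(points, n_iter=20, seed=0):
    """Plain 2-means, used as the splitting step of bisecting K-Means."""
    rng = np.random.default_rng(seed)
    # initialize the two centers from two distinct data points
    centers = points[rng.choice(len(points), 2, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        # assign each point to its nearest center
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for c in range(2):
            if (labels == c).any():
                centers[c] = points[labels == c].mean(axis=0)
    return labels

def bisecting_kmeans(points, k, seed=0):
    """Split the largest remaining cluster in two until k clusters exist."""
    clusters = [points]
    while len(clusters) < k:
        # pick the largest cluster to bisect (SSE is another common criterion)
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        target = clusters.pop(idx)
        labels = kmeans_2(target, seed=seed)
        clusters.append(target[labels == 0])
        clusters.append(target[labels == 1])
    return clusters
```

Because each split is a cheap 2-means run, bisecting K-Means tends to be faster and more stable than directly running k-means with a large k, which is one plausible reason to prefer it in an incremental setting.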
dc.description.sponsorship Dr. Faisal Shafait en_US
dc.language.iso en_US en_US
dc.publisher SINES NUST en_US
dc.subject Deep Learning with Adaptive Aggregation technique en_US
dc.title Deep Learning with Adaptive Aggregation technique and Optimization for Class Incremental Learning en_US
dc.type Thesis en_US


Files in this item

This item appears in the following Collection(s)

  • MS [234]