Abstract:
Artificial Neural Networks have revolutionized the field of representation
learning over the past decade. It has been shown that, by jointly training with multiple
objective functions, it is possible to learn general representations that are effective
for a multitude of tasks. One key limitation, however, is that simultaneous access to
the data of all tasks is required during training. Attempts to learn tasks
one at a time, a setting known as incremental learning, result in poor performance
on older tasks, a phenomenon called catastrophic forgetting. This is unlike humans, who can easily
learn new tasks without forgetting previously learned ones. Several techniques have been
proposed in recent years to address this problem; however, the results remain far from
ideal. Furthermore, there is no single agreed-upon benchmark, which makes
it difficult to compare existing methods. In this paper, we propose a general framework
to compare the prominent existing methods. We analyze their strengths and
weaknesses, and investigate how they work in unison. Furthermore, we propose a
conditional GAN-based rehearsal method, a privacy-preserving incremental learning
method, and a dynamic threshold-moving algorithm. We demonstrate that our proposed
methods are effective at mitigating catastrophic forgetting, and point to promising
directions for future work. We also release a framework implementing our methods as well as existing
ones to facilitate future research in this direction.