Two complementary perspectives to continual learning: ask not only what to optimize, but also how
Abstract: Continually learning from a stream of non-stationary data is difficult for deep neural networks. When these networks are trained on something new, they tend to quickly forget what was learned before. In recent years, considerable progress has been made towards overcoming such “catastrophic forgetting”, predominantly thanks to approaches that add replay or regularization terms …