The Need for Budgeted Computation in Continual Learning - Adel Bibi (University of Oxford)
posted on 5 September 2023


Abstract: The continual learning literature focuses on learning from streams with limited access to previously seen data but with no restriction on the computational budget. In contrast, in this talk we study continual learning under a budgeted computation in both offline and online settings. In the offline setting, we study, at scale, various previously proposed components, e.g., distillation, sampling strategies, and novel loss functions, when the computational budget per time step is restricted. In the online setting, we account for the computational budget through delayed real-time evaluation. That is, a continual learner that is twice as expensive to train has its model parameters updated half as many times, while still being evaluated on every stream sample. Our experiments suggest that the majority of existing evaluations are not carried out fairly, since they do not normalize for computation. Surprisingly, simple efficient methods outperform the majority of recently proposed, but computationally involved, algorithms in both the online and offline settings. This observation holds across several datasets and experimental settings, i.e., class-incremental, data-incremental, and time-distributed settings. It hints that evaluations which do not factor in the relative computation between methods can mislead to incorrect conclusions about performance.
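
To make the delayed real-time evaluation protocol concrete, below is a minimal Python sketch under stated assumptions: a learner whose training step is `cost_factor` times more expensive only gets to update its parameters once every `cost_factor` stream samples, yet is evaluated on every incoming sample. The names `MajorityClassLearner`, `cost_factor`, and `delayed_online_evaluation` are illustrative stand-ins, not code from the talk or its associated papers.

```python
from collections import Counter

class MajorityClassLearner:
    """Toy stand-in for a continual learner: predicts the most frequent
    label seen so far. `cost_factor` models its relative training cost."""
    def __init__(self, cost_factor=1):
        self.cost_factor = cost_factor
        self.counts = Counter()

    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else 0

    def update(self, batch):
        for _, y in batch:
            self.counts[y] += 1


def delayed_online_evaluation(stream, learner):
    """Compute-normalized online evaluation: every sample is evaluated,
    but parameter updates are delayed in proportion to training cost."""
    correct, total, pending = 0, 0, []
    for x, y in stream:
        correct += int(learner.predict(x) == y)   # evaluate before training
        total += 1
        pending.append((x, y))
        if len(pending) >= learner.cost_factor:   # costlier learners update less often
            learner.update(pending)
            pending = []
    return correct / max(total, 1)


# Example: a learner that is twice as expensive updates half as often,
# yet both learners are scored on every stream sample.
stream = [(i, i % 3) for i in range(300)]
cheap = MajorityClassLearner(cost_factor=1)
costly = MajorityClassLearner(cost_factor=2)
print(delayed_online_evaluation(stream, cheap), delayed_online_evaluation(stream, costly))
```

Under this protocol, two methods with different per-step costs consume the same total compute over the stream, which is what makes their accuracies directly comparable.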