In the continual learning literature, several evaluation benchmark suites have been proposed, each offering a variety of tasks on which to evaluate algorithms. Continuum aims to reproduce some of their main settings so that results are easy to reproduce.

  • CORe50

  • Stream-51

  • Synbols

  • CTRL

  • ALMA

  • MNIST-360

  • DomainBed

  • FluentSpeech

  • HuggingFace’s NLP datasets

  • And soon many others
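As a minimal sketch of how one of these benchmarks might be loaded, assuming the `continuum` package is installed and using its class-incremental scenario API (the dataset path and increment size below are illustrative, and the exact constructor parameters may differ across versions):

```python
def build_core50_scenario(data_path="path/to/core50", increment=10):
    """Sketch: build a class-incremental scenario over CORe50.

    Assumes the `continuum` package is installed; class and parameter
    names follow its documented API but are illustrative here.
    """
    # Imports are deferred so the sketch can be read without the package.
    from continuum import ClassIncremental
    from continuum.datasets import Core50

    dataset = Core50(data_path, train=True)
    # Split the dataset's classes into tasks of `increment` new classes each.
    return ClassIncremental(dataset, increment=increment)
```

The returned scenario can then be iterated task by task, e.g. `for task_id, taskset in enumerate(scenario): ...`, with each task set usable as a regular PyTorch-style dataset.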