The CLEAR Benchmark: Continual LEArning on Real-World Imagery
By Carnegie Mellon University and CMU Argo AI Center
CLEAR is a novel continual/lifelong learning benchmark that captures real-world distribution shifts in an Internet image collection (YFCC100M) spanning 2004 to 2014.
For a long time, researchers in the continual learning (CL) community have worked with artificial CL benchmarks such as "Permuted-MNIST" and "Split-CIFAR", which do not align with practical applications. In the real world, distribution shifts are smooth, e.g., the natural temporal evolution of visual concepts.
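To give a concrete sense of the sequential setup, below is a minimal sketch of training over the temporal buckets in order. It assumes the images are arranged on disk as `<root>/<bucket>/<class>/<image>.jpg` (one bucket per time period); the local path `CLEAR-10/train` and the preprocessing choices are illustrative, not part of the official release.

```python
# Minimal sketch: iterate over CLEAR's time buckets in chronological order.
# Folder layout and paths below are assumptions for illustration only.
from pathlib import Path

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

root = Path("CLEAR-10/train")            # hypothetical local path
buckets = sorted(p for p in root.iterdir() if p.is_dir())

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

for bucket in buckets:                   # one bucket per time period, in order
    loader = DataLoader(datasets.ImageFolder(bucket, transform=preprocess),
                        batch_size=64, shuffle=True)
    for images, labels in loader:
        ...                              # update the model on the current bucket
```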
We are also extending CLEAR to an ImageNet-scale benchmark. If you have feedback or insights, feel free to reach out to us!
Please cite our paper if it is useful for your research:
Given the top teams' promising performance on the CLEAR-10/-100 benchmarks using methods that improve generalization, such as sharpness-aware minimization, supervised contrastive loss, strong data augmentation, and experience replay (a minimal replay sketch follows the list below), we believe there is still a wealth of problems in CLEAR for the community to explore, such as:
Improving Forward Transfer and Next-Domain Accuracy
Unsupervised/Online Domain Generalization
Self-Supervised/Semi-Supervised Continual Learning
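As referenced above, here is a minimal sketch of experience replay with a reservoir-sampled buffer, one of the generalization-oriented baselines used by top teams. The buffer capacity and the half-and-half batch mixing are illustrative choices, not values prescribed by the benchmark.

```python
# Minimal experience-replay sketch with reservoir sampling.
# Capacity and mixing ratio are illustrative assumptions.
import random


class ReplayBuffer:
    def __init__(self, capacity=5000):
        self.capacity = capacity
        self.data = []        # stored (image, label) pairs
        self.seen = 0         # number of stream examples seen so far

    def add(self, example):
        """Reservoir sampling keeps a uniform sample over the whole stream."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))


# Usage idea: mix new and replayed examples in every training batch.
buffer = ReplayBuffer(capacity=5000)
# for batch in current_bucket_loader:
#     replayed = buffer.sample(len(batch))
#     train_step(batch + replayed)
#     for example in batch:
#         buffer.add(example)
```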
In the following pages, we explain the motivation behind the CLEAR benchmark, how it is curated via a visio-linguistic approach, its evaluation protocols, and a walk-through of the 1st CLEAR Challenge at CVPR'22.
You can also jump to the links below for downloading the CLEAR dataset:
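As a preview of the evaluation protocols discussed later, here is a minimal sketch of how summary metrics can be computed from an accuracy matrix `R`, where `R[i][j]` is the test accuracy on bucket `j` of the model trained up through bucket `i`. The exact averaging conventions below reflect our reading of the paper and should be checked against the official evaluation code.

```python
# Sketch of summary metrics over an accuracy matrix R (assumed indexing:
# rows = training bucket, columns = test bucket). Conventions are assumptions.
import numpy as np


def summarize(R: np.ndarray) -> dict:
    n = R.shape[0]
    in_domain = np.mean([R[i, i] for i in range(n)])
    next_domain = np.mean([R[i, i + 1] for i in range(n - 1)])
    backward = np.mean([R[i, j] for i in range(n) for j in range(i)])
    # Forward transfer here averages all future buckets, including next-domain.
    forward = np.mean([R[i, j] for i in range(n) for j in range(i + 1, n)])
    return {"in_domain": in_domain, "next_domain": next_domain,
            "backward_transfer": backward, "forward_transfer": forward}
```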
Below are examples of classes in CLEAR that changed over the past decade:
The CLEAR benchmark and dataset were first introduced in our paper.
In the spirit of the famous CIFAR benchmarks for static image classification tasks, we also collected a more challenging CLEAR-100 with a diverse set of 100 classes.
We hope our benchmarks can serve as the new "CIFAR", a touchstone for the continual/lifelong learning community.
In June 2022, the 1st CLEAR Challenge was hosted at CVPR'22, with a total of 15 teams from 21 different countries and regions participating. You may find a quick summary of the workshop on the page below: