
The CLEAR Benchmark: Continual LEArning on Real-World Imagery

By Carnegie Mellon University and CMU Argo AI Center

CLEAR is a novel continual/lifelong learning benchmark that captures real-world distribution shifts in an Internet image collection (YFCC100M) spanning 2004 to 2014.

For a long time, researchers in the continual learning (CL) community have worked with artificial CL benchmarks such as "Permuted-MNIST" and "Split-CIFAR", which do not align with practical applications. In reality, distribution shifts are smooth, such as the natural temporal evolution of visual concepts.

About CLEAR Benchmark

The CLEAR Benchmark and the dataset were first introduced in our NeurIPS 2021 paper (Datasets and Benchmarks Track): The CLEAR Benchmark: Continual LEArning on Real-World Imagery (arXiv.org).

Below are examples of classes in CLEAR-10 that changed over the past decade. Back in 2004, we had bulky desktops, old-fashioned analog watches, and 2D pixel-art games. Visual concepts gradually evolved from 2004 to 2014, e.g., fancier-looking MacBook Pros, digital watches, and games with realistic 3D graphics.

In the spirit of the famous CIFAR-10/CIFAR-100 benchmarks for static image classification tasks, we also collected a more challenging CLEAR-100 with a diverse set of 100 classes.

We hope our benchmarks can become the new "CIFAR", a touchstone for the continual/lifelong learning community.

We are also extending CLEAR to an ImageNet-scale benchmark. If you have feedback and insights, feel free to reach out to us!

Please cite our paper if it is useful for your research:

@inproceedings{lin2021clear,
  title={The CLEAR Benchmark: Continual LEArning on Real-World Imagery},
  author={Lin, Zhiqiu and Shi, Jia and Pathak, Deepak and Ramanan, Deva},
  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2021}
}

1st CLEAR Challenge at CVPR 2022

In June 2022, the 1st CLEAR Challenge was hosted at the CVPR 2022 Open World Vision Workshop, with a total of 15 teams from 21 different countries and regions participating. You may find a quick summary of the workshop on the 🚀1st CLEAR Challenge (CVPR'22) page.

Given the top teams' promising performance on the CLEAR-10/-100 benchmarks using methods that improve generalization, such as sharpness-aware minimization, supervised contrastive loss, strong data augmentation, and experience replay, we believe CLEAR still offers a wealth of open problems for the community to explore, such as:

  • Improving Forward Transfer and Next-Domain Accuracy (see the metric sketch after this list)

  • Unsupervised/Online Domain Generalization

  • Self-Supervised/Semi-Supervised Continual Learning
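
For intuition, below is a minimal Python sketch (assuming NumPy) of how such stream metrics can be computed from an accuracy matrix R, where R[i, j] is the test accuracy on time bucket j of the model trained up through bucket i. The helper name clear_metrics and the exact averaging conventions are illustrative assumptions, not the official scoring code; see the 📊Evaluation Protocol on CLEAR page for the definitions the benchmark uses.

import numpy as np

def clear_metrics(R: np.ndarray) -> dict:
    # R[i, j]: accuracy on time bucket j of the model trained up to bucket i.
    n = R.shape[0]
    in_domain = np.mean([R[i, i] for i in range(n)])            # test on the current bucket
    next_domain = np.mean([R[i, i + 1] for i in range(n - 1)])  # test on the next bucket
    # Backward transfer: accuracy on buckets already seen (j < i).
    bwt = np.mean([R[i, j] for i in range(n) for j in range(i)])
    # Forward transfer: accuracy on buckets not yet seen (j > i).
    fwt = np.mean([R[i, j] for i in range(n) for j in range(i + 1, n)])
    return {"in_domain": in_domain, "next_domain": next_domain,
            "backward_transfer": bwt, "forward_transfer": fwt}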

In the following pages, we explain the motivation of the CLEAR benchmark, how it is curated via a visio-linguistic approach, its evaluation protocols, and a walk-through of the 1st CLEAR Challenge at CVPR'22.

You can also jump to the Download CLEAR-10/CLEAR-100 page for direct links to the CLEAR datasets.
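
If you prefer loading the data programmatically, below is a minimal sketch using the Avalanche integration (see the Avalanche Integration page). It assumes a recent avalanche-lib release that ships a CLEAR benchmark class; the exact argument names (e.g., data_name, dataset_root) may differ across versions.

from avalanche.benchmarks.classic import CLEAR

# "clear10" or "clear100"; images are downloaded to dataset_root on first use.
benchmark = CLEAR(data_name="clear10", dataset_root="./data/clear10")

# The train stream yields one experience per time bucket, in temporal order.
for experience in benchmark.train_stream:
    print(experience.current_experience, len(experience.dataset))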

