# The CLEAR Benchmark: Continual LEArning on Real-World Imagery

![](https://2411580087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FiPLWAhemH9JTpCCJxZ3p%2Fuploads%2F6WL3EcdpSaIbAboRNW6z%2Fbanner_white.png?alt=media\&token=9ed7f60a-a157-4255-bd3f-08519e6a158e)

CLEAR is a novel continual/lifelong learning benchmark that captures real-world distribution shifts in an Internet image collection (YFCC100M) spanning 2004 to 2014.

> For a long time, researchers in the continual learning (CL) community have worked with artificial CL benchmarks such as "Permuted-MNIST" and "Split-CIFAR", which do not align with practical applications. In reality, distribution shifts tend to be smooth, e.g., the natural temporal evolution of visual concepts.

Below are examples of classes in [CLEAR-100](https://linzhiqiu.gitbook.io/the-clear-benchmark/documentation/download-clear-10-clear-100) that changed over the past decade:

![Back in 2004, we had bulky desktops, old-fashioned analog watches, and 2D pixel-art games.
Visual concepts gradually evolved from 2004 to 2014, e.g., fancier-looking MacBook Pros, digital watches, and 3D games with realistic graphics.](https://2411580087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FiPLWAhemH9JTpCCJxZ3p%2Fuploads%2F1MkaOq7SfImBZyRTki9t%2Fexamples.png?alt=media\&token=af9361ce-502b-4974-a2da-dd76f754b305)

## About CLEAR Benchmark

The CLEAR Benchmark and the [CLEAR-10](https://linzhiqiu.gitbook.io/the-clear-benchmark/documentation/download-clear-10-clear-100#clear-10-s3-download-links) dataset were first introduced in our [NeurIPS 2021 paper](https://arxiv.org/abs/2201.06289).

{% embed url="https://arxiv.org/abs/2201.06289" %}
NeurIPS'21 Datasets and Benchmarks Track
{% endembed %}

In the spirit of the famous [CIFAR-10/CIFAR-100 benchmarks](https://www.cs.toronto.edu/~kriz/cifar.html) for static image classification, we also collected a more challenging [CLEAR-100](https://linzhiqiu.gitbook.io/the-clear-benchmark/documentation/download-clear-10-clear-100#clear-100-s3-download-links) with a diverse set of 100 classes.

{% hint style="info" %}
We hope our [CLEAR-10/-100](https://linzhiqiu.gitbook.io/the-clear-benchmark/documentation/download-clear-10-clear-100) benchmarks can serve as the new "CIFAR", a touchstone for the continual/lifelong learning community.
{% endhint %}

We are also extending CLEAR to an ImageNet-scale benchmark. If you have feedback or insights, feel free to reach out to us!

{% content-ref url="introduction/about-us" %}
[about-us](https://linzhiqiu.gitbook.io/the-clear-benchmark/introduction/about-us)
{% endcontent-ref %}

Please cite our paper if it is useful for your research:

```markup
@inproceedings{lin2021clear,
  title={The CLEAR Benchmark: Continual LEArning on Real-World Imagery},
  author={Lin, Zhiqiu and Shi, Jia and Pathak, Deepak and Ramanan, Deva},
  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2021}
}
```

## 1st CLEAR Challenge at CVPR 2022

In June 2022, the [1st CLEAR Challenge](https://www.aicrowd.com/challenges/cvpr-2022-clear-challenge) was hosted at the [CVPR 2022 Open World Vision Workshop](https://www.cs.cmu.edu/~shuk/vplow.html), with a total of 15 teams from 21 different countries and regions participating. You can find a quick summary of the workshop on the page below:

{% content-ref url="introduction/1st-clear-challenge-cvpr22" %}
[1st-clear-challenge-cvpr22](https://linzhiqiu.gitbook.io/the-clear-benchmark/introduction/1st-clear-challenge-cvpr22)
{% endcontent-ref %}

Given the top teams' promising performance on the CLEAR-10/-100 benchmarks using methods that **improve generalization**, such as sharpness-aware minimization, supervised contrastive loss, strong data augmentation, and experience replay, we believe there is still a wealth of problems in CLEAR for the community to explore, such as:

* Improving Forward Transfer and Next-Domain Accuracy
* Unsupervised/Online Domain Generalization
* Self-supervised/Semi-supervised Continual Learning
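Metrics such as in-domain accuracy, next-domain accuracy, and forward/backward transfer can all be read off an accuracy matrix, where entry `acc[i, j]` is the test accuracy on time bucket `j` of a model trained through bucket `i`. Below is a minimal sketch of this bookkeeping (the function name `clear_metrics` and the exact averaging are our own illustration, not an official implementation):

```python
import numpy as np

def clear_metrics(acc: np.ndarray) -> dict:
    """Summarize an N x N accuracy matrix.

    acc[i, j] = test accuracy on time bucket j of the model
    trained through bucket i (rows: train step, cols: test bucket).
    """
    n = acc.shape[0]
    in_domain = float(np.mean(np.diag(acc)))         # acc[i, i]
    next_domain = float(np.mean(np.diag(acc, k=1)))  # acc[i, i+1]
    upper = acc[np.triu_indices(n, k=1)]             # j > i: future buckets
    lower = acc[np.tril_indices(n, k=-1)]            # j < i: past buckets
    return {
        "in_domain": in_domain,
        "next_domain": next_domain,
        "forward_transfer": float(upper.mean()),
        "backward_transfer": float(lower.mean()),
    }

# Toy example: a model that is 10 points worse on unseen future buckets.
acc = np.array([[0.7, 0.6, 0.6],
                [0.7, 0.7, 0.6],
                [0.7, 0.7, 0.7]])
print(clear_metrics(acc))
```

Averaging the superdiagonal separately (next-domain accuracy) matters for CLEAR because, under smooth temporal shift, performance on the immediately following bucket is the most deployment-relevant number.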

In the following pages, we will explain the motivation behind the CLEAR benchmark, how it is curated via a visio-linguistic approach, and its evaluation protocols, and walk through the 1st CLEAR Challenge at CVPR'22.

You can also jump to the links for downloading the CLEAR datasets:

{% content-ref url="documentation/download-clear-10-clear-100" %}
[download-clear-10-clear-100](https://linzhiqiu.gitbook.io/the-clear-benchmark/documentation/download-clear-10-clear-100)
{% endcontent-ref %}
