# 1st CLEAR Challenge (CVPR'22)

The [1st CLEAR challenge](https://www.aicrowd.com/challenges/cvpr-2022-clear-challenge) was hosted at the [CVPR'22 Open World Vision workshop](https://www.cs.cmu.edu/~shuk/vplow.html) on June 19th, 2022. Through the challenge, we hope to encourage the continual learning (CL) community to embrace real-world CL problems by:

* Working on scenarios with **natural distribution shifts**, e.g., CLEAR-10 and CLEAR-100.
* Addressing the **generalization bottleneck** of practical CL systems, especially the train-test domain gap and forward transfer.
* **Not limiting the replay buffer size** (unlike most prior work), because memory cost is usually not the bottleneck. Instead, we use a more realistic proxy for total training cost: limiting the **training time per bucket** to 12 hours (on a single 2080 Ti; see this [chart](https://mtli.github.io/gpubench/) for conversion to other GPU types).
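The time-budget rule above can be sketched as a simple wall-clock check around the training loop. This is an illustrative helper, not the official evaluation harness; the function name and arguments are assumptions made for this example.

```python
import time

def train_with_time_budget(train_step, num_steps, budget_seconds):
    """Run up to num_steps training steps, stopping once the wall-clock
    budget is exhausted (a sketch of the per-bucket 12-hour rule).

    Illustrative only: the actual challenge measured total training time
    per bucket on a single 2080 Ti; this helper just shows the idea.
    """
    start = time.monotonic()
    steps_done = 0
    for _ in range(num_steps):
        # Check the elapsed time before each step so we never start a
        # step after the budget has run out.
        if time.monotonic() - start >= budget_seconds:
            break
        train_step()
        steps_done += 1
    return steps_done
```

For example, `train_with_time_budget(one_gradient_step, num_steps=10_000, budget_seconds=12 * 3600)` would cap a bucket's training at roughly 12 hours regardless of how many steps that allows.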

By the end of the challenge, 79 participants from 21 different countries and regions had joined, and together they pushed the state of the art on CLEAR-10/CLEAR-100 to another level.

![Demographics for 1st CLEAR challenge](https://2411580087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FiPLWAhemH9JTpCCJxZ3p%2Fuploads%2FaXMGTYHSyhOErhm0AZn9%2Fworkshop.png?alt=media\&token=79c59194-b1ae-4e26-8c9b-5a03b7dcd267)

Top-performing teams in this challenge adopted various strategies to address the generalization bottleneck, such as:

* *Experience Replay* for more efficient usage of diverse training samples.
* *Data augmentation*, e.g., CutMix, Mixup, etc., to improve data efficiency.
* *Special losses to improve generalization*, e.g., sharpness-aware minimization, contrastive representation learning, etc.
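As one concrete example of the augmentation strategies above, Mixup blends pairs of training examples and their labels. The following is a minimal stdlib-only sketch; the exact helper and its signature are assumptions for illustration, not any team's actual code.

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two examples (given as lists of floats) with a coefficient
    drawn from Beta(alpha, alpha), as in standard Mixup.

    Returns the mixed input plus both labels and the mixing weight, so
    the caller can compute lam * loss(y1) + (1 - lam) * loss(y2).
    Illustrative sketch only; real implementations operate on tensors.
    """
    # Beta-distributed mixing coefficient in (0, 1).
    lam = random.betavariate(alpha, alpha)
    # Convex combination of the two inputs, elementwise.
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    return mixed, (y1, y2, lam)

# Toy usage: mix two 2-dimensional "images" with labels 0 and 1.
mixed, (y_a, y_b, lam) = mixup([0.0, 1.0], 0, [1.0, 0.0], 1)
```

In a training loop, the mixed input is fed through the model once and the loss is the `lam`-weighted sum of the losses against both labels.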

Please check out the [slides](https://linzhiqiu.github.io/papers/clear/clear_cvpr.pdf) for a quick summary of the workshop.

## Workshop results

![Top-4 and Innovation Prize on 1st CLEAR challenge](https://2411580087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FiPLWAhemH9JTpCCJxZ3p%2Fuploads%2Fp5rEWflGah5uIEve5rdg%2Fwinner.png?alt=media\&token=11756c2a-d31d-405c-aa54-2ed74575fcc4)

1st Place (Youtu Lab) summary video ([link to repo](https://github.com/TencentYoutuResearch/VPLOW_CLEAR22)):

{% file src="<https://2411580087-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FiPLWAhemH9JTpCCJxZ3p%2Fuploads%2FpJyQjQ0hZr1tfFq7OC3j%2Fshennong3_clear.mp4?alt=media&token=c492feda-1cb1-4afd-8596-4368711e9605>" %}
1st Place Summary Video
{% endfile %}

2nd Place (AI Prime) | [solution code](https://github.com/eashenyang/CLEAR-AIPrime)

