CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning
This article investigates the problem of continual learning (CL) of vision-language models (VLMs) in open domains, where models must perform continual updating and inference on a stream of datasets from diverse seen and unseen domains with novel classes. Such a capability is crucial for various applications in open environments, e.g., AI assistants, autonomous driving systems, and robotics. Current CL studies mostly focus on closed-set scenarios in a single domain with known classes. Large pretrained VLMs such as CLIP have demonstrated exceptional zero-shot recognition capabilities, and several recent studies leverage the unique characteristics of VLMs to mitigate catastrophic forgetting in CL; however, they likewise focus on closed-set CL within a single-domain dataset. Open-domain CL of large VLMs is significantly more challenging due to 1) large class correlations and domain gaps across the datasets, and 2) the forgetting of both the zero-shot knowledge in the pretrained VLMs and the knowledge learned from previously adapted datasets. In this work, we introduce a novel approach, termed CoLeCLIP, that learns an open-domain CL model based on CLIP. It addresses these challenges through the joint learning of a set of task prompts and a cross-domain class vocabulary. Extensive experiments on 11 domain datasets show that CoLeCLIP achieves new state-of-the-art performance for open-domain CL under both task-incremental learning (TIL) and class-incremental learning (CIL) settings.
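To make the two named components concrete, the following is a minimal PyTorch-style sketch, not the paper's implementation: a frozen CLIP-like image/text encoder pair, one learnable prompt per task, and a shared class vocabulary that caches one text embedding per class name so classes recurring across domains are stored once. The class name `OpenDomainCLContinualLearner`, the `add_task` method, and the additive prompt-fusion scheme are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the authors' code) of joint task-prompt
# and cross-domain vocabulary learning on top of frozen CLIP-style encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OpenDomainCLContinualLearner(nn.Module):
    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module,
                 embed_dim: int = 512, prompt_len: int = 4):
        super().__init__()
        self.image_encoder = image_encoder.eval()   # frozen pretrained encoder
        self.text_encoder = text_encoder.eval()     # frozen pretrained encoder
        for p in self.parameters():                 # freeze all pretrained weights;
            p.requires_grad_(False)                 # only prompts added later train
        self.embed_dim, self.prompt_len = embed_dim, prompt_len
        self.task_prompts = nn.ParameterDict()      # one learnable prompt per task
        self.vocab: dict[str, torch.Tensor] = {}    # class name -> cached text embedding

    def add_task(self, task_id: str, class_names: list[str]) -> None:
        """Register a new task: allocate its prompt and grow the shared vocabulary."""
        self.task_prompts[task_id] = nn.Parameter(
            torch.randn(self.prompt_len, self.embed_dim) * 0.02)
        with torch.no_grad():
            for name in class_names:
                if name not in self.vocab:          # reuse entries for already-seen classes
                    self.vocab[name] = F.normalize(self.text_encoder(name), dim=-1)

    def forward(self, images: torch.Tensor, task_id: str,
                class_names: list[str]) -> torch.Tensor:
        """Score images against the vocabulary entries of the given classes."""
        img = F.normalize(self.image_encoder(images), dim=-1)     # (B, D)
        prompt = self.task_prompts[task_id].mean(dim=0)           # pool prompt tokens
        img = F.normalize(img + prompt, dim=-1)                   # additive fusion (assumption)
        text = torch.stack([self.vocab[n] for n in class_names])  # (C, D)
        return 100.0 * img @ text.t()                             # cosine-similarity logits
```

The design choice this sketch illustrates: because the vocabulary is keyed by class name rather than by task, overlapping classes across domain datasets map to a single entry, and zero-shot inference on unseen classes remains possible by embedding their names with the frozen text encoder at test time.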