Multi-modal Attribute Prompting for Vision-Language Models
Large pre-trained Vision-Language Models (VLMs), like CLIP, exhibit strong generalization ability to downstream tasks but struggle in few-shot scenarios. Existing prompting techniques primarily focus on global text and image representations but overlook multi-modal attribute characteristics. This limitation hinders the model's ability to perceive fine-grained visual details and restricts its generalization ability to a broader range of unseen classes. To address this issue, we propose a Multi-modal Attribute Prompting method (MAP) by jointly exploring textual attribute prompting, visual attribute prompting, and attribute-level alignment. The proposed MAP enjoys several merits. First, we introduce learnable visual attribute prompts enhanced by textual attribute semantics to adaptively capture visual attributes for images from unknown categories, boosting fine-grained visual perception capabilities for CLIP. Second, the proposed attribute-level alignment complements the global alignment to enhance the robustness of cross-modal alignment for open-vocabulary objects. To our knowledge, this is the first work to establish cross-modal attribute-level alignment for CLIP-based few-shot adaptation. Extensive experimental results on 11 datasets demonstrate that our method performs favorably against state-of-the-art approaches.
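The abstract describes combining a global image-text alignment score with an attribute-level one, but gives no formula here. A minimal sketch of one plausible combination is below; the function names, the mean-over-best-match attribute scoring, and the mixing weight `alpha` are all illustrative assumptions, not the paper's actual method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def class_score(global_img, global_txt, visual_attrs, textual_attrs, alpha=0.5):
    """Hypothetical class score: weighted sum of global alignment and an
    attribute-level term that matches each textual attribute embedding to
    its best-aligned visual attribute prompt (a stand-in for whatever
    matching scheme MAP actually uses)."""
    g = cosine(global_img, global_txt)
    a = sum(
        max(cosine(v, t) for v in visual_attrs)
        for t in textual_attrs
    ) / len(textual_attrs)
    return alpha * g + (1 - alpha) * a
```

With perfectly aligned global and attribute embeddings the score is 1.0; a mismatched attribute set pulls it down, which is the intuition behind attribute-level alignment complementing the global term.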
Media type: Preprint
Year of publication: 2024
Published: 2024
Contained in: arXiv.org - (2024), dated 29 Feb.
Language: English
Contributors: Liu, Xin [author]
Links: Full text [open access]
Subjects: 000
PPN (catalog ID): XCH04280230X
LEADER 01000naa a22002652 4500
001 XCH04280230X
003 DE-627
005 20240306114526.0
007 cr uuu---uuuuu
008 240306s2024 xx |||||o 00| ||eng c
035 |a (DE-627)XCH04280230X
035 |a (chemrXiv)2403.00219
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Liu, Xin |e verfasserin |4 aut
245 1 0 |a Multi-modal Attribute Prompting for Vision-Language Models
264 1 |c 2024
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a Large pre-trained Vision-Language Models (VLMs), like CLIP, exhibit strong generalization ability to downstream tasks but struggle in few-shot scenarios. Existing prompting techniques primarily focus on global text and image representations but overlook multi-modal attribute characteristics. This limitation hinders the model's ability to perceive fine-grained visual details and restricts its generalization ability to a broader range of unseen classes. To address this issue, we propose a Multi-modal Attribute Prompting method (MAP) by jointly exploring textual attribute prompting, visual attribute prompting, and attribute-level alignment. The proposed MAP enjoys several merits. First, we introduce learnable visual attribute prompts enhanced by textual attribute semantics to adaptively capture visual attributes for images from unknown categories, boosting fine-grained visual perception capabilities for CLIP. Second, the proposed attribute-level alignment complements the global alignment to enhance the robustness of cross-modal alignment for open-vocabulary objects. To our knowledge, this is the first work to establish cross-modal attribute-level alignment for CLIP-based few-shot adaptation. Extensive experimental results on 11 datasets demonstrate that our method performs favorably against state-of-the-art approaches.
650 4 |a Computer Science - Computer Vision and Pattern Recognition |7 (dpeaa)DE-84
650 4 |a 000 |7 (dpeaa)DE-84
700 1 |a Wu, Jiamin |4 aut
700 1 |a Zhang, Tianzhu |4 aut
773 0 8 |i Enthalten in |t arXiv.org |g (2024) vom: 29. Feb.
773 1 8 |g year:2024 |g day:29 |g month:02
856 4 0 |u https://arxiv.org/abs/2403.00219 |z kostenfrei |3 Volltext
912 |a GBV_XCH
951 |a AR
952 |j 2024 |b 29 |c 02