Commit

update index
ys-zong committed Mar 21, 2024
1 parent 511d9bc commit 98f62dd
Showing 1 changed file with 3 additions and 3 deletions.
index.md (3 additions, 3 deletions)
@@ -18,7 +18,7 @@ data: https://huggingface.co/datasets/ys-zong/VL-ICL
<div class="column is-four-fifths">
<h2>Abstract</h2>
<div class="content has-text-justified">
- Large language models (LLMs) famously exhibit emergent in-context learning (ICL) -- the ability to rapidly adapt to new tasks using few-shot examples provided as a prompt, without updating the model's weights. Built on top of LLMs, vision large language models (VLLMs) have advanced significantly in areas such as recognition, reasoning, and grounding. However, investigations into \emph{multimodal ICL} have predominantly focused on few-shot visual question answering (VQA), and image captioning, which we will show neither exploit the strengths of ICL, nor test its limitations. The broader capabilities and limitations of multimodal ICL remain under-explored. In this study, we introduce a comprehensive benchmark VL-ICL for multimodal in-context learning, encompassing a broad spectrum of tasks that involve both images and text as inputs and outputs, and different types of challenges, from {perception to reasoning and long context length}. We evaluate the abilities of state-of-the-art VLLMs against this benchmark suite, revealing their diverse strengths and weaknesses, and showing that even the most advanced models, such as GPT-4, find the tasks challenging. By highlighting a range of new ICL tasks, and the associated strengths and limitations of existing models, we hope that our dataset will inspire future work on enhancing the in-context learning capabilities of VLLMs, as well as inspire new applications that leverage VLLM ICL.
+ Large language models (LLMs) famously exhibit emergent in-context learning (ICL) -- the ability to rapidly adapt to new tasks using few-shot examples provided as a prompt, without updating the model's weights. Built on top of LLMs, vision large language models (VLLMs) have advanced significantly in areas such as recognition, reasoning, and grounding. However, investigations into multimodal ICL have predominantly focused on few-shot visual question answering (VQA) and image captioning, which we will show neither exploit the strengths of ICL nor test its limitations. The broader capabilities and limitations of multimodal ICL remain under-explored. In this study, we introduce a comprehensive benchmark, VL-ICL Bench, for multimodal in-context learning, encompassing a broad spectrum of tasks that involve both images and text as inputs and outputs, and different types of challenges, from perception to reasoning and long context length. We evaluate state-of-the-art VLLMs on this benchmark suite, revealing their diverse strengths and weaknesses and showing that even the most advanced models, such as GPT-4, find the tasks challenging. By highlighting a range of new ICL tasks and the associated strengths and limitations of existing models, we hope that our dataset will inspire future work on enhancing the in-context learning capabilities of VLLMs, as well as new applications that leverage VLLM ICL.
</div>
</div>
</div>
@@ -28,7 +28,7 @@ Large language models (LLMs) famously exhibit emergent in-context learning (ICL)

![Dataset](static/image/dataset.png)

- Figure: Illustration of the different tasks in \bench{}. Image-to-text tasks are in the first three rows, while text-to-image tasks are in the bottom row. Image-to-text tasks in the third row do reasoning on interleaved image-text inputs.
+ Figure: Illustration of the different tasks in VL-ICL Bench. Image-to-text tasks are in the first three rows, while text-to-image tasks are in the bottom row. The image-to-text tasks in the third row reason over interleaved image-text inputs.



@@ -63,7 +63,7 @@ The main results for VL-ICL Bench are presented in the figure above including a
```
@article{zong2024vlicl,
title={VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning},
- author={Zong, Yongshuo and Bohdal, Ondrej and Hospedales Timothy},
+ author={Zong, Yongshuo and Bohdal, Ondrej and Hospedales, Timothy},
journal={arXiv preprint arXiv:2403.13164},
year={2024}
}
```
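
To make the few-shot setting described in the abstract concrete, below is a minimal, hypothetical sketch of how an interleaved image-text ICL prompt could be assembled: a handful of support examples (image plus question and answer) followed by a query that the model must answer from context alone, with no weight updates. The message schema, field names, and file names are illustrative assumptions, not the benchmark's actual data format or any particular VLLM API.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Shot:
    """One in-context example: an image plus its textual question and answer."""
    image_path: str
    question: str
    answer: str


def build_icl_prompt(shots: List[Shot], query_image: str, query_question: str) -> List[Dict[str, str]]:
    """Interleave the support shots with the final query.

    The model is expected to infer the task from the demonstrations in its
    context window and answer the query without any parameter updates.
    """
    messages: List[Dict[str, str]] = []
    for shot in shots:
        messages.append({"type": "image", "path": shot.image_path})
        messages.append({"type": "text", "text": f"Question: {shot.question}\nAnswer: {shot.answer}"})
    # The query follows the same pattern but leaves the answer blank.
    messages.append({"type": "image", "path": query_image})
    messages.append({"type": "text", "text": f"Question: {query_question}\nAnswer:"})
    return messages


if __name__ == "__main__":
    # Placeholder support set and query (hypothetical file names and labels).
    support = [
        Shot("support_0.png", "How many red objects are in the image?", "2"),
        Shot("support_1.png", "How many red objects are in the image?", "4"),
    ]
    prompt = build_icl_prompt(support, "query.png", "How many red objects are in the image?")
    for part in prompt:
        print(part)
```

Text-to-image tasks would reverse the output modality, but the demonstrations-then-query structure of the prompt stays the same.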
