Recent advances in vision-language pre-training (VLP) have demonstrated impressive performance in a range of vision-language (VL) tasks. However, there exist several challenges for measuring the community's progress in building general multi-modal intelligence. First, most of the downstream VL datasets are annotated using raw images that are already seen during pre-training, which may result in an overestimation of current VLP models' generalization ability. Second, recent VLP work mainly focuses on absolute performance but overlooks the efficiency-performance trade-off, which is also an important indicator for measuring progress. To this end, we introduce the Vision-Language Understanding Evaluation (VLUE) benchmark, a multi-task multi-dimension benchmark for evaluating the generalization capabilities and the efficiency-performance trade-off ("Pareto SOTA") of VLP models. We demonstrate that there is a sizable generalization gap for all VLP models when testing on out-of-distribution test sets annotated on images from a more diverse distribution that spreads across cultures. Moreover, we find that measuring the efficiency-performance trade-off of VLP models leads to complementary insights for several design choices of VLP. We release the VLUE benchmark to promote research on building vision-language models that generalize well to images unseen during pre-training and are practical in terms of the efficiency-performance trade-off.

To address these problems and promote research on truly generalizable and practical VLP, we introduce the Vision-Language Understanding Evaluation (VLUE) benchmark. VLUE is the first multi-task benchmark focusing on vision-language understanding that covers a set of fundamental VL tasks, including image-text retrieval, visual question answering, visual reasoning, and visual grounding, and maintains a leaderboard tracking the performance of representative studies and new methods on VLP. More importantly, VLUE includes a newly annotated private out-of-distribution (OOD) test set for each representative VL task. In contrast to standard datasets for these tasks that are annotated on COCO/VG images, our private OOD test sets are annotated on images from the MaRVL (Liu et al., 2021a) dataset, where images are manually collected across cultures by native speakers from different countries. This ensures that the image distribution in our OOD test sets differs from that of COCO/VG images. Moreover, we carefully control the annotation protocol for our OOD test sets to be identical to that of the original in-domain datasets. As such, the label distribution in our OOD test sets is roughly the same as in the original test sets, but the image distribution differs. This enables us to better measure the true generalization and transferability of VLP models.

In contrast to conventional benchmarks that only capture a single performance metric, VLUE is a multi-dimension benchmark that takes multiple dimensions into account, including performance, generalization ability, and efficiency. To facilitate this, we measure the efficiency-performance trade-off of representative VLP models in VLUE to track a Pareto SOTA landscape for VLP research, as sketched below. In addition, we encourage researchers to measure and compare the efficiency-performance trade-off when reporting new studies in the field of VLP.
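To make the "Pareto SOTA" notion concrete, here is a minimal sketch of how a leaderboard might select Pareto-optimal models from (performance, efficiency) pairs. This is not code from the paper: the model names, scores, and latencies are hypothetical placeholders, and latency stands in for whatever efficiency measure a benchmark adopts. A model sits on the Pareto frontier if no other model is both at least as fast and at least as accurate, with a strict improvement in one of the two.

```python
# Minimal sketch of Pareto-frontier selection for an efficiency-performance
# leaderboard. All entries below are hypothetical placeholders, not results
# reported in the VLUE paper.
from typing import NamedTuple


class Entry(NamedTuple):
    name: str
    score: float    # task performance (higher is better)
    latency: float  # inference latency in ms (lower is better)


def pareto_frontier(entries: list[Entry]) -> list[Entry]:
    """Return the entries not dominated by any other entry.

    An entry is dominated if some other entry has latency <= its latency
    and score >= its score, with at least one strict inequality.
    """
    frontier = []
    for e in entries:
        dominated = any(
            o.latency <= e.latency and o.score >= e.score
            and (o.latency < e.latency or o.score > e.score)
            for o in entries
        )
        if not dominated:
            frontier.append(e)
    # Sort for display: fastest models first.
    return sorted(frontier, key=lambda e: e.latency)


if __name__ == "__main__":
    # Hypothetical leaderboard entries, for illustration only.
    leaderboard = [
        Entry("model-A", score=78.1, latency=120.0),
        Entry("model-B", score=75.4, latency=45.0),
        Entry("model-C", score=74.9, latency=60.0),   # dominated by model-B
        Entry("model-D", score=80.2, latency=300.0),
    ]
    for e in pareto_frontier(leaderboard):
        print(f"{e.name}: score={e.score}, latency={e.latency} ms")
```

Under this view, a new method "advances the Pareto SOTA" only if it lands outside the current frontier, either by improving accuracy at comparable cost or by cutting cost at comparable accuracy, rather than by improving a single absolute metric.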