VLE (Vision-Language Encoder) is an image-text multimodal understanding model built on pre-trained text and image encoders. It can be applied to multimodal discriminative tasks such as visual question answering and image-text retrieval. In particular, VLE achieves the best performance among publicly available models on the Visual Commonsense Reasoning (VCR) task, which places stronger demands on language understanding and reasoning ability.

Online demo: https://huggingface.co/spaces/hfl/VQA_VLE_LLM

The VLE model adopts a dual-stream architecture similar to that of the METER model, consisting of two single-modal encoders (an image encoder and a text encoder) and a cross-modal fusion module; a schematic sketch of this layout follows the list below. The structural differences between VLE and METER are:

  • VLE uses DeBERTa-v3 as the text encoder, which outperforms the RoBERTa-base used in METER.
  • In VLE-large, the hidden dimension of the cross-modal fusion module is increased to 1024 to enlarge the model's capacity.
  • In the fine-tuning stage, VLE introduces additional token-type embeddings.
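
For illustration only, the snippet below sketches how such a dual-stream layout can be wired up in PyTorch. All class and module names (DualStreamSketch, the projection layers, the fusion stack) are assumptions made for this sketch and do not correspond to VLE's actual implementation; in particular, METER/VLE use co-attention-based fusion, whereas the plain self-attention stack here is only a placeholder.

```python
import torch
import torch.nn as nn

class DualStreamSketch(nn.Module):
    """Illustrative sketch of a METER/VLE-style dual-stream layout.

    The encoders and the fusion module here are placeholders; the real model
    uses DeBERTa-v3 (text), CLIP-ViT (image) and a co-attention-based
    cross-modal fusion module.
    """

    def __init__(self, text_encoder: nn.Module, image_encoder: nn.Module,
                 text_dim: int, image_dim: int, fusion_dim: int = 1024):
        super().__init__()
        self.text_encoder = text_encoder      # e.g. DeBERTa-v3
        self.image_encoder = image_encoder    # e.g. CLIP-ViT
        # Project both modalities into the fusion hidden size
        # (1024 for VLE-large, as described above).
        self.text_proj = nn.Linear(text_dim, fusion_dim)
        self.image_proj = nn.Linear(image_dim, fusion_dim)
        # Placeholder fusion stack; the real model uses co-attention blocks.
        fusion_layer = nn.TransformerEncoderLayer(
            d_model=fusion_dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, num_layers=6)

    def forward(self, text_inputs, image_inputs):
        # Each encoder is assumed to return a sequence of hidden states.
        text_feats = self.text_proj(self.text_encoder(text_inputs))      # (B, L_t, fusion_dim)
        image_feats = self.image_proj(self.image_encoder(image_inputs))  # (B, L_i, fusion_dim)
        # Let the fusion module attend across the concatenated sequences.
        fused = self.fusion(torch.cat([text_feats, image_feats], dim=1))
        return fused
```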

Pre-training

VLE is pre-trained on image-text pair data. In the pre-training phase, VLE uses four pre-training tasks (a rough sketch of how such objectives can be combined follows the list):

  • MLM (Masked Language Modeling): Masked-word prediction task. Given an image-text pair, some words in the text are randomly masked and the model is trained to restore the masked text.
  • ITM (Image-Text Matching): Image-text matching prediction task. Given an image-text pair, the model is trained to determine whether the image and text match.
  • MPC (Masked Patch-box Classification): Masked patch classification task. Given an image-text pair, the patches covering a specific object in the image are masked out and the model is trained to predict the class of the masked object.
  • PBC (Patch-box Classification): Patch classification task. Given an image-text pair, the model predicts which patches in the image are related to the text description.
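
As a rough, hedged illustration of how objectives like MLM and ITM are typically combined on top of the fused representation, consider the sketch below. The head shapes, vocabulary size, and equal loss weighting are assumptions for illustration, not VLE's actual training code; MPC and PBC heads would be patch-level classifiers added in the same way.

```python
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes for illustration only (not VLE's actual configuration).
fusion_dim, vocab_size = 768, 128100

mlm_head = nn.Linear(fusion_dim, vocab_size)  # predicts the identity of masked tokens (MLM)
itm_head = nn.Linear(fusion_dim, 2)           # binary matched / not-matched decision (ITM)

def pretraining_losses(fused_text_tokens, pooled_pair, mlm_labels, itm_labels):
    """fused_text_tokens: (B, L, fusion_dim) fused representations at text positions.
    pooled_pair: (B, fusion_dim) pooled representation of the image-text pair.
    mlm_labels: (B, L) token ids, with -100 at positions that were not masked.
    itm_labels: (B,) 1 if the image and text match, else 0.
    """
    mlm_logits = mlm_head(fused_text_tokens)
    mlm_loss = F.cross_entropy(mlm_logits.reshape(-1, vocab_size),
                               mlm_labels.reshape(-1), ignore_index=-100)
    itm_loss = F.cross_entropy(itm_head(pooled_pair), itm_labels)
    # MPC/PBC would add patch-level classification heads and losses analogously.
    return mlm_loss + itm_loss
```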

VLE was pre-trained for 25,000 steps on 14M English image-text pairs with a batch size of 2048. The figure below shows the model structure of VLE and some of the pre-training tasks (MLM, ITM and MPC).

Downstream Task Adaptation

Visual Question Answering (VQA)

  • Following standard practice, the model is trained on the VQA training and validation sets and evaluated on the test-dev set. The pooler output of the fusion module is used to train the classification head (see the sketch below).
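
The sketch below shows what such a classification head over the pooled fusion output can look like. The 3,129-answer vocabulary and the soft-label binary cross-entropy objective are common VQAv2 fine-tuning conventions, assumed here for illustration rather than taken from VLE's released code.

```python
import torch.nn as nn
import torch.nn.functional as F

# Assumed sizes: fusion hidden size and the usual VQAv2 answer vocabulary.
fusion_dim, num_answers = 768, 3129

vqa_head = nn.Sequential(
    nn.Linear(fusion_dim, fusion_dim * 2),
    nn.GELU(),
    nn.LayerNorm(fusion_dim * 2),
    nn.Linear(fusion_dim * 2, num_answers),
)

def vqa_loss(pooled_output, soft_answer_targets):
    """pooled_output: (B, fusion_dim) pooler output of the fusion module.
    soft_answer_targets: (B, num_answers) soft scores from annotator agreement.
    """
    logits = vqa_head(pooled_output)
    # Standard VQA fine-tuning treats this as multi-label soft classification.
    return F.binary_cross_entropy_with_logits(logits, soft_answer_targets)
```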

Visual Commonsense Reasoning (VCR)

  • VCR is formatted as a RACE-like multiple-choice task. For each object in an image, the representations of the patches covering that object are average-pooled and appended to the sequence of image features before the fusion module. Additional token_type_ids are also added for objects appearing in the image and the text, injecting alignment information between the two modalities and improving the model's alignment performance (see the sketch below).
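
The following is a minimal sketch of the object-level average pooling described above, assuming a precomputed boolean mask that marks which patches overlap each object's bounding box; the function and argument names are illustrative.

```python
import torch

def add_object_features(patch_feats: torch.Tensor, patch_mask: torch.Tensor):
    """patch_feats: (num_patches, dim) image features entering the fusion module.
    patch_mask: (num_objects, num_patches) boolean, True where a patch overlaps
                the object's bounding box (assumed to be precomputed).
    Returns the image feature sequence with one averaged vector per object appended.
    """
    mask = patch_mask.float()
    # Average the representations of all patches covering each object.
    object_feats = (mask @ patch_feats) / mask.sum(dim=1, keepdim=True).clamp(min=1.0)
    # Append object vectors after the patch sequence, before cross-modal fusion.
    extended = torch.cat([patch_feats, object_feats], dim=0)
    # Illustrative token_type_ids: 0 for ordinary patches, 1 + object index so that
    # mentions of the same object in the text can be given the same type id.
    token_type_ids = torch.cat([
        torch.zeros(patch_feats.size(0), dtype=torch.long),
        torch.arange(1, object_feats.size(0) + 1),
    ])
    return extended, token_type_ids
```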

Model download

Two versions of the pre-trained model, VLE-base and VLE-large, are released. The model weights are in PyTorch format. You can download the weights and configuration files manually from the 🤗 Transformers model hub, or load the model automatically in code with from_pretrained(model_name); a minimal loading sketch is given below. For detailed instructions, see Model Usage.
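
A minimal loading sketch is shown below. It assumes the project repository exposes VLEModel and VLEProcessor classes with the usual from_pretrained interface; the exact module path and class names may differ, in which case the same pattern applies with the names the repository actually provides.

```python
from PIL import Image
# Assumption: these classes are provided by the VLE project repository
# (the exact module path and class names may differ).
from models.VLE import VLEModel, VLEProcessor

model_name = "hfl/vle-base"
model = VLEModel.from_pretrained(model_name)
processor = VLEProcessor.from_pretrained(model_name)

image = Image.open("example.jpg")          # any RGB image
text = "a dog playing in the park"

# Tokenize the text and preprocess the image into patches, then run the model.
inputs = processor(text=text, images=image, return_tensors="pt")
outputs = model(**inputs)
```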

Pre-trained weights

| Model | Text encoder | Image encoder | Parameters* | MODEL_NAME | Link |
|---|---|---|---|---|---|
| VLE-base | DeBERTa-v3-base | CLIP-ViT-base-patch16 | 378M | hfl/vle-base | link |
| VLE-large | DeBERTa-v3-large | CLIP-ViT-large-patch14 | 930M | hfl/vle-large | link |

* : Only the parameters of the encoders and embeddings are counted; the parameters of the task-specific prediction heads are not included.

Fine-tuned weights

| Model | Text encoder | Image encoder | MODEL_NAME | Link |
|---|---|---|---|---|
| VLE-base-for-VQA | DeBERTa-v3-base | CLIP-ViT-base-patch16 | hfl/vle-base-for-vqa | link |
| VLE-large-for-VQA | DeBERTa-v3-large | CLIP-ViT-large-patch14 | hfl/vle-large-for-vqa | link |
| VLE-base-for-VCR-q2a | DeBERTa-v3-base | CLIP-ViT-base-patch16 | hfl/vle-base-for-vcr-q2a | link |
| VLE-large-for-VCR-q2a | DeBERTa-v3-large | CLIP-ViT-large-patch14 | hfl/vle-large-for-vcr-q2a | link |
| VLE-base-for-VCR-qa2r | DeBERTa-v3-base | CLIP-ViT-base-patch16 | hfl/vle-base-for-vcr-qa2r | link |
| VLE-large-for-VCR-qa2r | DeBERTa-v3-large | CLIP-ViT-large-patch14 | hfl/vle-large-for-vcr-qa2r | link |

Model comparison

The table below compares the parameter counts, pre-training data, and downstream task results of VLE, METER, and other multimodal models. VQA results are reported on the test-dev set; VCR results are reported on the dev set.

| Model | VQA | VCR (QA2R) | VCR (Q2A) | Parameters | Pre-training data volume* |
|---|---|---|---|---|---|
| CoCa | 82.3 | - | - | 2.1B | unknown |
| BeiT-3 | 84.2 | - | - | 1.9B | 21M(IT) + 14M(I) + 160G(T) |
| OFA | 82.0 | - | - | 930M | 20M(IT) + 39M(I) + 140G(T) |
| BLIP | 78.3 | - | - | 385M | ~130M(IT) |
| METER-base | 77.7 (76.8†‡) | 79.8§ | 77.6§ | 345M | 9M(IT) |
| METER-Huge | 80.3 | - | - | 878M | 20M(IT) |
| VLE-base | 77.6‡ | 83.7§ | 79.9§ | 378M | 15M(IT) |
| VLE-large | 79.3‡ | 87.5§ | 84.3§ | 930M | 15M(IT) |

† : Reproduced results

‡ : Fine-tuning hyperparameters: lr=7e-6, batch_size={256, 512}, num_epochs=10

§ : Fine-tuning hyperparameters: lr=1e-5, batch_size=128, num_epochs=5

* : IT: image-text pairs. I: images. T: text.

From the table above, we can see that:

  • VLE pre-training is more efficient: compared with models of similar size, VLE uses less pre-training data yet achieves comparable or better results on visual question answering.
  • VLE has stronger reasoning ability: in particular, VLE significantly outperforms the similarly structured METER on the Visual Commonsense Reasoning (VCR) task, which requires stronger reasoning ability.

