vblagoje/dpr-question_encoder-single-lfqa-wiki
Introduction

A question encoder model based on the DPRQuestionEncoder architecture. It uses the transformer's pooler output as the question representation. See the accompanying blog post for more details.


Training

We trained vblagoje/dpr-question_encoder-single-lfqa-wiki using FAIR's dpr-scale in two stages.

In the first stage, we started from a PAQ-based pretrained checkpoint and fine-tuned the retriever on question-answer pairs from the LFQA dataset. As dpr-scale requires DPR-formatted training input with positive, negative, and hard negative samples, we created a training file where a question's own answer was the positive, negatives were answers to unrelated questions, and hard negatives were chosen from answers to questions with a cosine similarity between 0.55 and 0.65 to the given question.

In the second stage, we created a new DPR training set using positives, negatives, and hard negatives drawn from the Wikipedia/Faiss index built in the first stage, instead of LFQA dataset answers. More precisely, for each dataset question we queried the first-stage Wikipedia Faiss index with topk=50 and then used an SBERT cross-encoder to score the question/passage pairs. The cross-encoder's highest-scoring passage was selected as the positive, the bottom seven passages were selected as hard negatives, and negatives were again chosen to be answers unrelated to the given question. After creating a DPR-formatted training file with these Wikipedia-sourced positive, negative, and hard negative passages, we trained DPR question/passage encoders using dpr-scale.
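The second-stage selection rule described above (score topk=50 candidates with a cross-encoder, take the highest-scoring passage as the positive and the bottom seven as hard negatives) can be sketched as follows. This is a minimal illustration, not the actual dpr-scale pipeline; the `select_dpr_samples` helper and the toy scores are hypothetical, and a real run would obtain the scores from an SBERT cross-encoder over Faiss hits.

```python
def select_dpr_samples(scored_passages, num_hard_negatives=7):
    """Given (passage, cross_encoder_score) pairs for one question,
    pick the top-scored passage as the positive and the lowest-scored
    ones as hard negatives, per the second-stage recipe above."""
    ranked = sorted(scored_passages, key=lambda p: p[1], reverse=True)
    positive = ranked[0][0]
    hard_negatives = [p for p, _ in ranked[-num_hard_negatives:]]
    return positive, hard_negatives

# Toy example with made-up scores (a real pipeline would score
# topk=50 Faiss hits with the cross-encoder):
scored = [("p1", 0.9), ("p2", 0.1), ("p3", 0.5), ("p4", 0.05)]
pos, hard = select_dpr_samples(scored, num_hard_negatives=2)
# pos == "p1"; hard == ["p2", "p4"]
```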


Performance

The LFQA DPR-based retriever (vblagoje/dpr-question_encoder-single-lfqa-wiki paired with vblagoje/dpr-ctx_encoder-single-lfqa-wiki) slightly underperforms the state-of-the-art REALM-based retriever of Krishna et al., "Hurdles to Progress in Long-form Question Answering", which has a KILT benchmark performance of 11.2 R-precision and 19.5 Recall@5.


Usage

import torch
from transformers import AutoTokenizer, DPRQuestionEncoder

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DPRQuestionEncoder.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-wiki").to(device)
tokenizer = AutoTokenizer.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-wiki")
input_ids = tokenizer("Why do airplanes leave contrails in the sky?", return_tensors="pt")["input_ids"].to(device)
embeddings = model(input_ids).pooler_output
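In a full retrieval setup, these question embeddings are compared against passage embeddings produced by the companion context encoder (vblagoje/dpr-ctx_encoder-single-lfqa-wiki) by inner product, typically via a Faiss index. A minimal sketch of that scoring step with toy vectors (the values are made up for illustration; real DPR embeddings are 768-dimensional):

```python
import numpy as np

# Hypothetical toy embeddings standing in for real encoder outputs.
question_emb = np.array([0.2, 0.9, 0.1])
passage_embs = np.array([
    [0.1, 0.8, 0.0],   # passage 0
    [0.9, 0.0, 0.3],   # passage 1
])

# DPR ranks passages by inner product with the question embedding.
scores = passage_embs @ question_emb
best = int(scores.argmax())  # index of the highest-scoring passage
```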


Author

  • Vladimir Blagojevic: dovlex [at] gmail.com
