| datasets | license |
|---|---|
| | mit |
This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question answering task.

The Longformer model was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan at AllenAI. As the paper explains it, Longformer is a BERT-like model for long documents. The pre-trained model can handle sequences with up to 4096 tokens.

This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here.
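Longformer handles long sequences by replacing full self-attention with sliding-window local attention, so each token attends only to a fixed-size neighborhood. A minimal sketch of that attention pattern (the window size and function name here are illustrative, not the library's API):

```python
# Hypothetical sketch of sliding-window local attention.
# Each token attends to itself plus `window // 2` tokens on each side,
# so attention cost grows linearly with sequence length, not quadratically.
def local_attention_positions(seq_len, window=4):
    half = window // 2
    return [
        list(range(max(0, i - half), min(seq_len, i + half + 1)))
        for i in range(seq_len)
    ]

positions = local_attention_positions(10, window=4)
print(positions[5])  # [3, 4, 5, 6, 7]
```

Global attention (described below for question tokens) additionally lets selected positions attend to, and be attended by, every token in the sequence.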
A few things to keep in mind while training Longformer for the QA task:

1. By default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details, please refer to the paper. The `LongformerForQuestionAnswering` model does this for you automatically. To allow it to do that, the input sequence must contain three separator tokens, i.e. it should be encoded as `<s> question</s></s> context</s>`. If you encode the question and context as an input pair, the tokenizer already takes care of this, so you shouldn't worry about it.
2. `input_ids` should always be a batch of examples.

| Metric | Value |
|---|---|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
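The three-separator layout required above can be illustrated without the library. This is only a sketch of the token arrangement (the helper name is hypothetical, and the real tokenizer operates on token IDs, not strings):

```python
# Illustrative sketch of the Longformer/RoBERTa pair encoding:
# <s> question</s></s> context</s>  -- note the three </s> separators.
def encode_pair(question_tokens, context_tokens):
    return ["<s>"] + question_tokens + ["</s>", "</s>"] + context_tokens + ["</s>"]

seq = encode_pair(["What", "is", "it", "?"], ["It", "is", "a", "model", "."])
assert seq.count("</s>") == 3  # three sep tokens, as required
```

Encoding the question and context as a pair with the tokenizer produces this layout automatically.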
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]

# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]

outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits

all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
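Conceptually, the forward method identifies the question tokens as everything up to the first `</s>` and gives them global attention. A rough sketch of that logic, assuming RoBERTa's convention that `</s>` has token id 2 (the helper name is hypothetical):

```python
# Sketch: mark question tokens (everything before the first </s>)
# for global attention; all later tokens keep local attention only.
def question_global_attention(input_ids, sep_token_id=2):
    mask, seen_sep = [], False
    for tid in input_ids:
        mask.append(0 if seen_sep else 1)  # 1 = global, 0 = local
        if tid == sep_token_id:
            seen_sep = True
    return mask

ids = [0, 100, 101, 2, 2, 200, 201, 2]  # <s> q q </s></s> c c </s>
print(question_global_attention(ids))  # [1, 1, 1, 1, 0, 0, 0, 0]
```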
The `LongformerForQuestionAnswering` model isn't yet supported in `pipeline`. I'll update this card once support has been added.