A repository comparing the inference accuracy of MindNLP and Transformers on the GLUE QNLI dataset, located in the `mindnlp/benchmark/GLUE-QNLI/` directory.

To get started with this project, follow these steps:
```shell
conda create -n mindnlp python==3.9
conda activate mindnlp
pip install -r requirements.txt
```
```shell
# Evaluate a specific model using the default dataset (dev.tsv)
python model_QNLI.py --model bart

# Evaluate with a custom dataset
python model_QNLI.py --model bart --data ./QNLI/test.tsv
```
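Conceptually, the evaluation script reads a QNLI TSV split, runs the chosen model over each question–sentence pair, and reports accuracy. A minimal sketch of those two pieces is below; the helper names are hypothetical and the real internals of `model_QNLI.py` may differ:

```python
# Sketch of the evaluation loop (hypothetical helpers, not the script's actual code).
import csv

def load_qnli_tsv(path):
    """Yield (question, sentence, label) rows from a GLUE QNLI TSV file.

    QNLI splits are tab-separated with 'question', 'sentence', and 'label'
    columns; labels are 'entailment' or 'not_entailment'.
    """
    with open(path, encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            yield row["question"], row["sentence"], row["label"]

def accuracy(predictions, labels):
    """Fraction of predictions matching the gold labels."""
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Tiny demo of the metric itself:
print(accuracy(["entailment", "not_entailment"], ["entailment", "entailment"]))
```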
The `--model` argument accepts one of: `bart`, `bert`, `roberta`, `xlm-roberta`, `gpt2`, `t5`, `distilbert`, `albert`, `llama`, `opt`.
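The model names above map onto the script's CLI. A hedged sketch of how such a `--model`/`--data` interface could be defined with `argparse` (the `choices` validation is an assumption, not necessarily what `model_QNLI.py` does):

```python
# Hypothetical CLI definition matching the usage examples above.
import argparse

SUPPORTED_MODELS = ["bart", "bert", "roberta", "xlm-roberta", "gpt2",
                    "t5", "distilbert", "albert", "llama", "opt"]

def build_parser():
    parser = argparse.ArgumentParser(description="Evaluate a model on GLUE QNLI")
    parser.add_argument("--model", required=True, choices=SUPPORTED_MODELS,
                        help="Which fine-tuned model to evaluate")
    parser.add_argument("--data", default="./QNLI/dev.tsv",
                        help="Path to a QNLI TSV split")
    return parser

args = build_parser().parse_args(["--model", "bart"])
```

Using `choices` makes the parser reject unsupported model names with a clear error instead of failing later during model loading.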
Our reproduced model performance on QNLI/dev.tsv is reported below. Experiments were run on an Ascend 910* with MindSpore 2.3.1 in graph mode. All fine-tuned models are derived from open-source checkpoints provided by Hugging Face.
Model Name | bart | bert | roberta | xlm-roberta | gpt2 | t5 | distilbert | albert | opt | llama |
---|---|---|---|---|---|---|---|---|---|---|
Base Model | facebook/bart-base | google-bert/bert-base-uncased | FacebookAI/roberta-large | FacebookAI/xlm-roberta-large | openai-community/gpt2 | google-t5/t5-small | distilbert/distilbert-base-uncased | albert/albert-base-v2 | facebook/opt-125m | JackFram/llama-160m |
Fine-tuned Model (HF) | ModelTC/bart-base-qnli | Li/bert-base-uncased-qnli | howey/roberta-large-qnli | tmnam20/xlm-roberta-large-qnli-1 | tanganke/gpt2_qnli | lightsout19/t5-small-qnli | anirudh21/distilbert-base-uncased-finetuned-qnli | orafandina/albert-base-v2-finetuned-qnli | utahnlp/qnli_facebook_opt-125m_seed-1 | Cheng98/llama-160m-qnli |
Transformers accuracy (GPU) | 92.29 | 67.43 | 94.50 | 92.50 | 88.15 | 89.71 | 59.21 | 55.14 | 86.10 | 50.97 |
MindNLP accuracy (NPU) | 92.29 | 67.43 | 94.51 | 92.50 | 88.15 | 89.71 | 59.23 | 55.13 | 86.10 | 50.97 |
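The two rows track each other closely. A quick check of the table's numbers confirms the largest GPU-vs-NPU gap (the values below are copied from the table above):

```python
# Largest absolute accuracy difference between Transformers (GPU) and MindNLP (NPU).
gpu = {"bart": 92.29, "bert": 67.43, "roberta": 94.50, "xlm-roberta": 92.50,
       "gpt2": 88.15, "t5": 89.71, "distilbert": 59.21, "albert": 55.14,
       "opt": 86.10, "llama": 50.97}
npu = {"bart": 92.29, "bert": 67.43, "roberta": 94.51, "xlm-roberta": 92.50,
       "gpt2": 88.15, "t5": 89.71, "distilbert": 59.23, "albert": 55.13,
       "opt": 86.10, "llama": 50.97}

max_gap = max(abs(gpu[m] - npu[m]) for m in gpu)  # 0.02 points (distilbert)
```

The maximum deviation is 0.02 percentage points, i.e. MindNLP on NPU reproduces the Transformers GPU results essentially exactly.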