
GLUE-QNLI

A repository comparing the inference accuracy of MindNLP and Hugging Face Transformers on the GLUE QNLI dataset.

  • Dataset

    The QNLI (Question Natural Language Inference) dataset is part of the GLUE benchmark. It is derived from the Stanford Question Answering Dataset (SQuAD): the task is binary classification, where the goal is to determine whether a given context sentence contains the answer to a given question.

  • Getting the Dataset

    1. Visit GLUE Benchmark Tasks
    2. Register/log in to download the GLUE data
    3. Download and extract the QNLI dataset
    4. Place the following files in the mindnlp/benchmark/GLUE-QNLI/ directory:
       • dev.tsv (development set)
       • test.tsv (test set)
       • train.tsv (training set)
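
Each split is a tab-separated file. In the public GLUE release the header is index, question, sentence, label, with labels entailment / not_entailment (these column names come from the standard GLUE distribution, not from this repository's code, so treat them as an assumption). A minimal parsing sketch:

```python
import csv
import io

# A two-row sample in the QNLI TSV layout. The header follows the public
# GLUE release; this repo's scripts may name columns differently.
sample = (
    "index\tquestion\tsentence\tlabel\n"
    "0\tWhat is QNLI?\tQNLI is a GLUE task derived from SQuAD.\tentailment\n"
    "1\tWho wrote SQuAD?\tThe sky is blue.\tnot_entailment\n"
)

# csv.DictReader with a tab delimiter maps each row to {column: value}.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
for row in rows:
    print(row["index"], row["label"])
```

The same reader works on dev.tsv or test.tsv by replacing the in-memory sample with `open("dev.tsv", encoding="utf-8")`.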

Quick Start

Installation

To get started with this project, follow these steps:

  1. Create a conda environment (optional but recommended):
    conda create -n mindnlp python=3.9
    conda activate mindnlp
    
  2. Install the dependencies. Note that mindnlp runs in the Ascend environment while transformers runs in the GPU environment; the dependencies for each are listed in the requirements file of the corresponding folder.
    pip install -r requirements.txt
    
  3. Usage: once the installation is complete, you can choose different models to run inference. Here's how:
    # Evaluate specific model using default dataset (dev.tsv)
    python model_QNLI.py --model bart
    
    # Evaluate with custom dataset
    python model_QNLI.py --model bart --data ./QNLI/test.tsv
    
    Supported model options: bart, bert, roberta, xlm-roberta, gpt2, t5, distilbert, albert, llama, opt
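
The reported metric is plain classification accuracy: the fraction of dev examples where the predicted class matches the gold label. The helper below is a hypothetical sketch of that scoring step (model_QNLI.py's internals may differ); the label-to-id mapping is the conventional one for QNLI:

```python
# Assumed label mapping: 0 = entailment, 1 = not_entailment.
LABELS = {"entailment": 0, "not_entailment": 1}

def qnli_accuracy(predictions, gold_labels):
    """Percentage of examples where the predicted class id matches the gold label."""
    correct = sum(
        int(pred == LABELS[gold])
        for pred, gold in zip(predictions, gold_labels)
    )
    return 100.0 * correct / len(gold_labels)

# Toy example: 3 of 4 predictions match the gold labels.
preds = [0, 1, 1, 0]
golds = ["entailment", "not_entailment", "entailment", "entailment"]
print(f"{qnli_accuracy(preds, golds):.2f}")  # 75.00
```

In the real script the `predictions` list would come from an argmax over the model's per-example logits.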

Accuracy Comparison

Our reproduced model performance on QNLI/dev.tsv is reported below. Experiments were run on Ascend 910* with MindSpore 2.3.1 in graph mode. All fine-tuned models are derived from open-source checkpoints on the Hugging Face Hub.

| Model       | Base Model                          | Fine-tuned Model (HF)                            | transformers accuracy (GPU) | mindnlp accuracy (NPU) |
|-------------|-------------------------------------|--------------------------------------------------|-----------------------------|------------------------|
| bart        | facebook/bart-base                  | ModelTC/bart-base-qnli                           | 92.29                       | 92.29                  |
| bert        | google-bert/bert-base-uncased       | Li/bert-base-uncased-qnli                        | 67.43                       | 67.43                  |
| roberta     | FacebookAI/roberta-large            | howey/roberta-large-qnli                         | 94.50                       | 94.51                  |
| xlm-roberta | FacebookAI/xlm-roberta-large        | tmnam20/xlm-roberta-large-qnli-1                 | 92.50                       | 92.50                  |
| gpt2        | openai-community/gpt2               | tanganke/gpt2_qnli                               | 88.15                       | 88.15                  |
| t5          | google-t5/t5-small                  | lightsout19/t5-small-qnli                        | 89.71                       | 89.71                  |
| distilbert  | distilbert/distilbert-base-uncased  | anirudh21/distilbert-base-uncased-finetuned-qnli | 59.21                       | 59.23                  |
| albert      | albert/albert-base-v2               | orafandina/albert-base-v2-finetuned-qnli         | 55.14                       | 55.13                  |
| opt         | facebook/opt-125m                   | utahnlp/qnli_facebook_opt-125m_seed-1            | 86.10                       | 86.10                  |
| llama       | JackFram/llama-160m                 | Cheng98/llama-160m-qnli                          | 50.97                       | 50.97                  |
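
As a quick sanity check on framework parity, the per-model gap between the two runs can be computed directly from the accuracy values above (copied verbatim from the table):

```python
# Accuracy values from the comparison table above.
transformers_acc = {
    "bart": 92.29, "bert": 67.43, "roberta": 94.50, "xlm-roberta": 92.50,
    "gpt2": 88.15, "t5": 89.71, "distilbert": 59.21, "albert": 55.14,
    "opt": 86.10, "llama": 50.97,
}
mindnlp_acc = {
    "bart": 92.29, "bert": 67.43, "roberta": 94.51, "xlm-roberta": 92.50,
    "gpt2": 88.15, "t5": 89.71, "distilbert": 59.23, "albert": 55.13,
    "opt": 86.10, "llama": 50.97,
}

# Largest absolute difference between the GPU and NPU runs, in points.
max_gap = max(abs(transformers_acc[m] - mindnlp_acc[m]) for m in transformers_acc)
print(f"max gap: {max_gap:.2f}")  # max gap: 0.02
```

The maximum divergence across all ten models is 0.02 accuracy points (on distilbert), i.e. the two backends agree to within rounding.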