These scripts perform vocabulary pruning on a classification model (`XLMRobertaForSequenceClassification`) and evaluate the pruned model's performance.
We use a subset of the XNLI English training set as the vocabulary file.
Download the fine-tuned model or train your own model on the PAWS-X dataset, and save the files to `../models/xlmr_pawsx`.
Download links:
* Google Drive
* Hugging Face Models
Run `bash vocabulary_pruning.sh`, which executes:

```bash
VOCABULARY_FILE=../datasets/xnli/en.tsv
MODEL_PATH=../models/xlmr_pawsx
python vocabulary_pruning.py $MODEL_PATH $VOCABULARY_FILE
```
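The core idea of vocabulary pruning can be sketched as follows: collect the token ids that occur in the vocabulary file, slice the embedding matrix down to those rows (plus special tokens), and build an old-id to new-id mapping. This is a minimal, self-contained sketch with a toy vocabulary and embeddings; the real script operates on the XLM-R tokenizer and model, and `prune_vocabulary` is a hypothetical helper, not the script's actual API:

```python
import numpy as np

def prune_vocabulary(embeddings, vocab, corpus_tokens, special_tokens):
    """Slice the embedding matrix down to tokens seen in the corpus.

    embeddings:     (V, H) array, one row per vocabulary entry
    vocab:          dict mapping token -> old id
    corpus_tokens:  set of tokens observed in the vocabulary file
    special_tokens: tokens that must always be kept

    Returns (pruned_embeddings, old2new), where old2new maps each kept
    old id to its new, contiguous id.
    """
    keep_tokens = [t for t in vocab if t in special_tokens or t in corpus_tokens]
    keep_ids = sorted(vocab[t] for t in keep_tokens)
    old2new = {old: new for new, old in enumerate(keep_ids)}
    return embeddings[keep_ids], old2new

# Toy example: 6-token vocabulary, hidden size 4.
vocab = {"<s>": 0, "</s>": 1, "<unk>": 2, "hello": 3, "world": 4, "rare": 5}
emb = np.arange(24, dtype=float).reshape(6, 4)
pruned, old2new = prune_vocabulary(
    emb, vocab,
    corpus_tokens={"hello", "world"},
    special_tokens=("<s>", "</s>", "<unk>"),
)
print(pruned.shape)   # (5, 4): the row for "rare" was dropped
```

Rows of the pruned matrix are copied unchanged, so any token that survives pruning keeps exactly its original embedding.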
Set `$PRUNED_MODEL_PATH` to the directory where the pruned model is stored, then evaluate:

```bash
python measure_performance.py $PRUNED_MODEL_PATH
```
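The reason performance can survive pruning is that, for any input whose tokens all remain in the reduced vocabulary, an embedding lookup through the remapped ids returns exactly the same vectors as before. A small self-contained illustration of that invariant (toy matrices, not the script's actual code):

```python
import numpy as np

# A pruned embedding table must give identical lookups for surviving tokens.
old_emb = np.arange(20.0).reshape(5, 4)   # original table: 5 tokens, hidden 4
keep_ids = [0, 2, 4]                      # ids that survive pruning
new_emb = old_emb[keep_ids]               # pruned table
old2new = {old: new for new, old in enumerate(keep_ids)}

sentence_old_ids = [2, 0, 4, 2]           # a sentence using only kept tokens
sentence_new_ids = [old2new[i] for i in sentence_old_ids]

# The pruned model sees the same embedding sequence as the original model.
assert np.array_equal(old_emb[sentence_old_ids], new_emb[sentence_new_ids])
print("lookups identical")
```

Inputs containing dropped tokens fall back to `<unk>`, which is where any accuracy loss comes from.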
This script prunes a pre-trained model for MLM with a vocabulary limited to the SST-2 training set. Set `$MODEL_PATH` to the directory where the pre-trained model (BERT, RoBERTa, etc.) is stored:

```bash
python MaskedLM_vocabulary_pruning.py $MODEL_PATH
```
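MLM pruning has one extra constraint: in BERT-style models the MLM output head shares (or mirrors) the input embedding matrix and carries a per-token bias, so the decoder weights and bias must be sliced with the same kept ids as the input embeddings. A hedged numpy sketch of that consistency requirement (toy shapes; a real implementation would go through the model's input/output embedding accessors):

```python
import numpy as np

V, H = 8, 4
rng = np.random.default_rng(0)
emb = rng.normal(size=(V, H))      # input embeddings (tied to the MLM decoder)
decoder_bias = rng.normal(size=V)  # per-token bias of the MLM head
hidden = rng.normal(size=H)        # one hidden state from the encoder

keep_ids = [0, 1, 5, 7]            # tokens kept after pruning

# Prune input embeddings, tied decoder weights, and bias with the SAME ids.
emb_p = emb[keep_ids]
bias_p = decoder_bias[keep_ids]

logits_full = emb @ hidden + decoder_bias   # original MLM logits over V tokens
logits_pruned = emb_p @ hidden + bias_p     # pruned MLM logits over kept tokens

# Pruned logits are exactly the kept slice of the original logits.
assert np.allclose(logits_pruned, logits_full[keep_ids])
print("tied head pruned consistently")
```

Slicing the weights and the bias with different id lists would silently scramble the head's predictions, which is why both must come from one old-to-new mapping.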