BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
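As a concrete illustration of the "one additional output layer" idea, the minimal sketch below loads bert-base-chinese and attaches a token-classification head for NER via the Hugging Face transformers API. The BIO label set is an assumption matching the People's Daily corpus, not something taken from this repository's code.

```python
# Minimal sketch (not this repo's code): bert-base-chinese + one token-classification head.
# The label set is an assumption typical of the People's Daily NER corpus.
from transformers import BertForTokenClassification, BertTokenizerFast

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForTokenClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
)

# One forward pass: per-token logits of shape (batch, seq_len, num_labels).
inputs = tokenizer("海钓比赛地点在厦门与金门之间的海域。", return_tensors="pt")
print(model(**inputs).logits.shape)
```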
| Iluvatar GPU | IXUCA SDK |
|--------------|-----------|
| MR-V100      | 4.2.0     |
Pretrained model: https://huggingface.co/bert-base-chinese
Dataset: http://s3.bmio.net/kashgari/china-people-daily-ner-corpus.tar.gz
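If the corpus is not already on disk, a small sketch like the following can fetch and unpack it; the extracted directory name is assumed from the archive name, and DATASETS_DIR (set below) should then point at it.

```python
# Hypothetical helper (not part of this repo): download and extract the NER corpus.
import tarfile
import urllib.request

URL = "http://s3.bmio.net/kashgari/china-people-daily-ner-corpus.tar.gz"
archive = "china-people-daily-ner-corpus.tar.gz"

urllib.request.urlretrieve(URL, archive)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(".")  # assumed to create ./china-people-daily-ner-corpus/
```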
pip3 install -r requirements.txt
export DATASETS_DIR=/Path/to/china-people-daily-ner-corpus/
# Get pytorch weights
python3 get_weights.py
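get_weights.py is the repository's own helper and is not reproduced here. If you prefer to fetch the checkpoint manually, a rough stand-in could look like the sketch below; the test/ target directory is only inferred from the --model_dir ../test/ argument used in the next step, and huggingface_hub may not be listed in requirements.txt.

```python
# Hypothetical stand-in for get_weights.py (the real script may differ):
# pull the bert-base-chinese checkpoint from the Hugging Face Hub into a local directory.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="bert-base-chinese", local_dir="test/")  # assumed target directory
```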
# Do QAT (quantization-aware training) for the INT8 test; this takes a long time (see the conceptual sketch below)
cd Int8QAT/
python3 run_qat.py --model_dir ../test/ --datasets_dir ${DATASETS_DIR}
python3 export_hdf5.py --model quant_base/pytorch_model.bin
cd ../
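run_qat.py performs the quantization-aware training and export_hdf5.py serializes the resulting weights; neither script is reproduced here. Purely as a conceptual illustration of QAT, the toy PyTorch sketch below inserts fake-quant observers so that a few training steps adapt the weights to INT8 rounding before conversion.

```python
# Conceptual QAT illustration on a toy layer (NOT this repo's run_qat.py).
import torch
import torch.nn as nn
from torch.ao.quantization import (DeQuantStub, QuantStub, convert,
                                   get_default_qat_qconfig, prepare_qat)

class ToyBlock(nn.Module):
    """Stand-in for a single linear sub-layer of BERT."""
    def __init__(self):
        super().__init__()
        self.quant, self.dequant = QuantStub(), DeQuantStub()
        self.fc = nn.Linear(768, 768)

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = ToyBlock().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
prepare_qat(model, inplace=True)          # wrap weights/activations with fake-quant observers

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):                       # stands in for fine-tuning on the NER data
    loss = model(torch.randn(8, 768)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model.eval()
int8_model = convert(model)               # fold observers into real INT8 weights
```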
# Accuracy
bash scripts/infer_bert_base_ner_int8_accuracy.sh
# Performance
bash scripts/infer_bert_base_ner_int8_performance.sh
| Model         | BatchSize | SeqLength | Precision | FPS      | F1 Score |
|---------------|-----------|-----------|-----------|----------|----------|
| BERT Base NER | 8         | 256       | INT8      | 2067.252 | 96.2     |
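The F1 score above is produced by the accuracy script; NER results of this kind are usually reported as entity-level F1 over BIO tag sequences. Whether this repository uses seqeval is not stated, but the common computation looks like the sketch below (placeholder tag sequences, not real model output).

```python
# Entity-level F1 over BIO tags with seqeval (placeholder data, not this repo's output).
from seqeval.metrics import f1_score

y_true = [["B-LOC", "I-LOC", "O", "B-PER", "I-PER"]]
y_pred = [["B-LOC", "I-LOC", "O", "B-PER", "O"]]

print(f1_score(y_true, y_pred))  # an entity counts only if its span and type both match
```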