ReadMe Language | 中文版 | English |
If you have any questions while working with this project, feel free to open an issue in this repo and I will respond as soon as possible.
Please check the FAQ Page (Chinese) before asking, to avoid duplicate questions.
A post about ASRT's introduction
For how to train and deploy ASRT, see:
For frequently asked questions about the principles of the statistical language model, see:
For questions about CTC, see:
For more information, please refer to the author's blog website: AILemon Blog (Chinese)
This project is implemented with TensorFlow (tf.keras), combining a deep convolutional neural network, a long short-term memory (LSTM) network, an attention mechanism, and CTC.
First, clone the project to your computer with Git, then download the datasets required for training. For the download links, see the end of this document.
$ git clone https://github.com/nl8590687/ASRT_SpeechRecognition.git
Alternatively, you can use the "Fork" button to make a copy of the project and then clone it locally with your own SSH key.
After cloning the repository, go to the project root directory and create a dataset/ subdirectory (a soft link works too), then extract the downloaded datasets directly into it.
$ cd ASRT_SpeechRecognition
$ mkdir dataset
$ tar zxf <dataset zip files name> -C dataset/
Then copy all the files in the 'datalist' directory into the dataset directory, so that they sit alongside the datasets.
Note that in the current version, two datasets, THCHS30 and ST-CMDS, are enabled by default in the configuration file; remove them if you don't need them. To use other datasets, add their data configuration yourself, and organize the data in advance in the standard format supported by ASRT.
$ cp -rf datalist/* dataset/
Currently available models are 24, 25, and 251.
Before running this project, please install the required Python 3 dependency libraries.
To start training this project, please execute:
$ python3 train_speech_model.py
To test this project, please execute:
$ python3 evaluate_speech_model.py
Before testing, make sure the model file paths referenced in the code files exist.
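As a quick sanity check before evaluating, something like the following sketch can verify the weights file is in place (the path below is a hypothetical placeholder, not necessarily the one used in the code files):

```python
import os

def model_file_ready(path):
    """Return True if a trained model weights file exists at `path`."""
    return os.path.isfile(path)

# Hypothetical path; replace with the path your code files actually reference.
print(model_file_ready("save_models/example_speech_model.h5"))
```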
To start the ASRT API server, please execute:
$ python3 asrserver.py
Please note that after starting the API server, you need to use client software matching this ASRT project for speech recognition. For details, see the Wiki documentation to download an ASRT client demo.
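Conceptually, a client packs audio samples into a request body and posts it to the server. The sketch below only illustrates that packing step; the field names are illustrative assumptions, not the exact schema of the ASRT API server, so check the Wiki and client demo for the real protocol:

```python
import base64
import json

def build_request_payload(wav_bytes, sample_rate=16000, channels=1):
    """Pack raw WAV sample bytes into a JSON body for an HTTP speech API.

    Field names here are invented for illustration; the actual ASRT
    server protocol is documented in the project Wiki.
    """
    return json.dumps({
        "channels": channels,
        "sample_rate": sample_rate,
        "samples": base64.b64encode(wav_bytes).decode("ascii"),
    })

payload = build_request_payload(b"\x00\x01" * 4)
print(payload)
```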
If you want to train and use another model (not model 251), change the import from speech_model_zoo at the corresponding position in the code files.
If any problem arises while running or using the program, please raise it promptly in the issues, and I will reply as soon as possible.
Deploy ASRT with Docker:
$ docker pull ailemondocker/asrt_service:1.1.0
$ docker run --rm -it -p 20000:20000 --name asrt-server -d ailemondocker/asrt_service:1.1.0
This starts an API server for recognition, not for training.
CNN/LSTM/GRU + CTC
The maximum length of the input audio is 16 seconds, and the output is the corresponding Chinese pinyin sequence.
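The final CTC stage can be illustrated with a minimal greedy (best-path) decoder: collapse consecutive repeated labels, then drop the blank. This is a sketch of the standard CTC decoding rule, not the project's exact decoder, and the toy label-to-pinyin vocabulary is invented:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Standard CTC best-path rule: collapse repeats, then remove blanks."""
    decoded = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# Toy vocabulary mapping label ids to pinyin syllables (invented for illustration).
vocab = {1: "ni3", 2: "hao3"}
frames = [0, 1, 1, 0, 0, 2, 2, 2, 0]   # per-frame argmax labels, 0 = blank
print([vocab[i] for i in ctc_greedy_decode(frames)])  # ['ni3', 'hao3']
```

Note that the blank label is what lets CTC represent genuinely repeated syllables: a blank between two identical labels prevents them from being collapsed.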
Released finished software, including trained model weights, can be downloaded from the ASRT download page.
The GitHub Releases page includes archives of the released versions of the software and their release notes. Each version includes a zip file containing the trained model weight files.
Maximum Entropy Hidden Markov Model Based on Probability Graph.
The input is a Chinese pinyin sequence, and the output is the corresponding Chinese character text.
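The pinyin-to-character step can be sketched as Viterbi decoding over a probability graph. The example below is a plain toy Viterbi decoder with invented probabilities, not the project's maximum-entropy model; all states and numbers are made up for illustration:

```python
def viterbi_decode(pinyin_seq, emit, trans, start):
    """Toy Viterbi decoding: most likely character sequence for a pinyin
    sequence under HMM-style start/transition/emission probabilities."""
    states = list(start)
    # best[s] = (path probability ending in state s, path of states)
    best = {s: (start[s] * emit[s].get(pinyin_seq[0], 0.0), [s]) for s in states}
    for p in pinyin_seq[1:]:
        new_best = {}
        for s in states:
            score, path = max(
                ((best[t][0] * trans[t].get(s, 0.0), best[t][1]) for t in states),
                key=lambda x: x[0],
            )
            new_best[s] = (score * emit[s].get(p, 0.0), path + [s])
        best = new_best
    return max(best.values(), key=lambda x: x[0])[1]

# Invented toy model: homophones 中/种 for "zhong1", with 中→国 favored.
emit = {"中": {"zhong1": 0.6}, "种": {"zhong1": 0.4}, "国": {"guo2": 1.0}}
trans = {"中": {"国": 0.9}, "种": {"国": 0.1}, "国": {}}
start = {"中": 0.5, "种": 0.5, "国": 0.0}

print("".join(viterbi_decode(["zhong1", "guo2"], emit, trans, start)))  # 中国
```

The transition probabilities are what disambiguate homophones here: both 中 and 种 can emit "zhong1", but the 中→国 transition dominates.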
At present, the best model can reach roughly 80% pinyin accuracy on the test set.
However, as leading domestic and international teams can achieve 98%, the accuracy still needs further improvement.
If you have trouble installing these packages, run the following, provided you have a GPU and CUDA 11.2 with cuDNN 8.1 installed:
$ pip install -r requirements.txt
Tsinghua University THCHS30 Chinese voice data set
Free ST Chinese Mandarin Corpus
AIShell-1 Open Source Dataset
Note: to unzip this dataset:
$ tar xzf data_aishell.tgz
$ cd data_aishell/wav
$ for tar in *.tar.gz; do tar xvf $tar; done
Primewords Chinese Corpus Set 1
Special thanks! Thanks to the predecessors for their public speech datasets.
If a provided dataset link cannot be opened or downloaded, click this link: OpenSLR.
@nl8590687 (repo owner)
- Code commit frequency
- Responsiveness to issues and PRs
- Well-balanced team membership and collaboration
- Recent popularity of the project
- Star counts, download counts, etc.