# SimulTransBaseline

**Repository Path**: gongel/SimulTransBaseline

## Basic Information

- **Project Name**: SimulTransBaseline
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-05-31
- **Last Updated**: 2021-05-31

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# SimulTransBaseline

This is sample code for the AutoSimulTrans Workshop (https://autosimtrans.github.io), built on PaddlePaddle (https://github.com/paddlepaddle/paddle) with dynamic graph. The code implements Transformer-based wait-k training and decoding, as proposed in the paper STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency (https://arxiv.org/abs/1810.08398). A minimal sketch of the wait-k policy is given at the end of this README.

The code structure is as follows:

```text
.
├── utils               # Utilities
├── gen_data.sh         # Script to download and BPE-preprocess the WMT18 zh-en corpus
├── predict.py          # Inference code
├── reader.py           # Data reader
├── stream_reader.py    # Stream data reader
├── README.md           # Documentation
├── train.py            # Training
├── model.py            # Transformer model and beam (greedy) search
└── transformer.yaml    # Configuration
```

## Dependencies

1. jieba==0.37
2. sacremoses==0.0.38

Both can be installed with pip; see the one-line command after the Quick Start section.

## Quick Start

### Installation

1. Paddle

   This project depends on the PaddlePaddle 1.7 develop version. Please refer to the [Installation Manual](http://www.paddlepaddle.org/#quick-start) to install it; a typical pip command is sketched in the note after this section.

2. Download the code

   ```shell
   git clone https://github.com/PaddlePaddle/models.git
   cd models/dygraph/transformer
   ```
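The install command itself is missing from the original README. As a minimal sketch, assuming a 1.7-series CPU release from PyPI is an acceptable stand-in for the develop build mentioned above (the develop version may instead require a nightly wheel from the installation manual):

```shell
# Assumption: a 1.7-series CPU release from PyPI; for GPU or the
# develop (nightly) build, follow the installation manual instead.
pip install paddlepaddle==1.7.2
```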
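The two pinned dependencies listed earlier can be installed the same way; this one-liner simply uses the exact versions from the Dependencies section:

```shell
# Install the pinned versions from the Dependencies section.
pip install jieba==0.37 sacremoses==0.0.38
```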
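Finally, for readers new to the wait-k policy mentioned in the introduction, the sketch below shows its read/write schedule in plain Python. It is framework-free, and `translate_step` is a hypothetical stand-in for the real decoder call in model.py, not part of this repository's API.

```python
def wait_k_decode(source_stream, translate_step, k):
    """Minimal sketch of wait-k decoding (STACL, Ma et al., 2019).

    Reads k source tokens before emitting the first target token,
    then alternates read-one/write-one. `translate_step` is a
    hypothetical callable (src_prefix, tgt_prefix) -> next target
    token, or None at end of sentence.
    """
    src, tgt = [], []
    source_iter = iter(source_stream)
    finished_reading = False
    while True:
        # READ: keep the source prefix k tokens ahead of the target.
        while not finished_reading and len(src) < len(tgt) + k:
            try:
                src.append(next(source_iter))
            except StopIteration:
                finished_reading = True
        # WRITE: emit one target token given the current prefixes.
        token = translate_step(src, tgt)
        if token is None:
            break
        tgt.append(token)
    return tgt
```

With k=1 the decoder writes after every read, giving the lowest latency; as k approaches the source length the policy degenerates to full-sentence translation, which is how STACL trades latency against quality.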