# eval_cap

**Repository Path**: esingz/eval_cap

## Basic Information

- **Project Name**: eval_cap
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-03-04
- **Last Updated**: 2022-03-04

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Visual Caption Evaluation

Improved evaluation code for common visual captioning metrics (based on [coco-caption](https://github.com/tylin/coco-caption)):

1. BLEU, METEOR, ROUGE, CIDEr, and SPICE metrics
2. Python 3 support
3. Faster CIDEr, with a cache for reference captions

To use the SPICE metric, download [stanford-corenlp-3.6.0-models.jar](http://stanfordnlp.github.io/CoreNLP/index.html) into the `spice/lib` directory:

```
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2015-12-09.zip
unzip stanford-corenlp-full-2015-12-09.zip
mv stanford-corenlp-full-2015-12-09/stanford-corenlp-3.6.0-models.jar spice/lib
mv stanford-corenlp-full-2015-12-09/stanford-corenlp-3.6.0.jar spice/lib
```
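The "cache for reference captions" speedup mentioned above rests on a simple idea: CIDEr compares n-gram statistics of each candidate caption against a fixed set of reference captions, so the reference n-gram counts can be computed once and reused across every evaluation call (e.g. each training epoch). The sketch below is a minimal illustration of that caching pattern, not this repository's actual implementation; the function names and data layout are assumptions for the example.

```python
from collections import Counter


def ngrams(tokens, n):
    """Return a Counter of the n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def build_reference_cache(references, max_n=4):
    """Precompute 1..max_n n-gram counts for every reference caption.

    `references` maps an image id to a list of reference caption strings
    (hypothetical layout for this example). The returned cache maps each
    image id to a list, one entry per caption, where each entry is a list
    of Counters for n = 1..max_n. Scoring many candidate captions against
    the same references then reuses these Counters instead of re-tokenizing
    and re-counting the references on every call.
    """
    cache = {}
    for img_id, caps in references.items():
        cache[img_id] = [
            [ngrams(cap.split(), n) for n in range(1, max_n + 1)]
            for cap in caps
        ]
    return cache


# Example usage with toy data:
refs = {"img1": ["a dog runs on grass", "the dog is running"]}
cache = build_reference_cache(refs)
# cache["img1"][0][1] holds the bigram counts of the first reference,
# e.g. the bigram ("a", "dog") appears once.
```

A real CIDEr implementation additionally TF-IDF-weights these counts across the whole reference corpus; the document-frequency table can be cached in exactly the same way.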