---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
license: cc-by-nc-4.0
---

# ZephRP-m7b

This is a Mistral-based model consisting of a merge between HuggingFaceH4/zephyr-7b-alpha and a PEFT adapter trained on the LimaRP dataset.

The goal was to combine the message length instruction training of LimaRPv3 and additional stylistic elements with the superior knowledge and instruction-following capabilities of the Zephyr model.
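The exact merge procedure is not included in this card. As a rough sketch of how a LoRA adapter can be folded into the Zephyr base using the peft library (the adapter path below is a placeholder, not a published artifact):

```python
# Hedged sketch: folding a LoRA adapter into HuggingFaceH4/zephyr-7b-alpha with peft.
# "path/to/limarp-lora-adapter" is a placeholder, not the published adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-alpha"
adapter_path = "path/to/limarp-lora-adapter"  # placeholder

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()  # bake the LoRA weights into the base weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
model.save_pretrained("ZephRP-m7b")
tokenizer.save_pretrained("ZephRP-m7b")
```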

## Usage

The intended prompt format is the Alpaca instruction format of LimaRP v3:

```
### Instruction:
Character's Persona: {bot character description}

User's Persona: {user character description}

Scenario: {what happens in the story}

Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.

### Input:
User: {utterance}

### Response:
Character: {utterance}

### Input:
User: {utterance}

### Response:
Character: {utterance}

(etc.)
```
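A minimal generation sketch with transformers, assuming the model is loaded from the upstream Hub id royallab/ZephRP-m7b; the persona, scenario, and sampling settings are illustrative placeholders:

```python
# Minimal sketch: generating a reply with the LimaRP v3 Alpaca-style prompt.
# Persona/scenario text and sampling settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "royallab/ZephRP-m7b"  # upstream of this mirror
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "### Instruction:\n"
    "Character's Persona: A stoic knight guarding the northern gate.\n\n"
    "User's Persona: A traveling merchant seeking passage.\n\n"
    "Scenario: The merchant arrives at the gate at dusk.\n\n"
    "Play the role of Character. You must engage in a roleplaying chat with User "
    "below this line. Do not write dialogues and narration for User.\n\n"
    "### Input:\n"
    "User: Good evening. May I enter the city?\n\n"
    "### Response:\n"
    "Character:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```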

## Message length control

Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:

```
### Input:
User: {utterance}

### Response: (length = medium)
Character: {utterance}
```

This has an immediately noticeable effect on bot responses. The available lengths are: micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited. The recommended starting length is medium. Keep in mind that the AI may ramble or impersonate the user with very long messages.
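If prompts are assembled programmatically, the modifier can simply be spliced into the response header; the helper below is a hypothetical illustration, not part of any published tooling:

```python
# Hypothetical helper (not part of this repo): build a response header
# with an optional length modifier appended to "### Response:".
def response_header(length=None):
    header = "### Response:"
    if length is not None:
        header += f" (length = {length})"
    return header + "\nCharacter:"

print(response_header("medium"))
# ### Response: (length = medium)
# Character:
```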

## Bias, Risks, and Limitations

The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.

## Training Details

The LimaRP PEFT adapter was trained as an 8-bit LoRA using axolotl.

The following hyperparameters were used during training of the adapter on the original mistralai/Mistral-7B-v0.1 model using a single L40 GPU:

  • learning_rate: 0.00015
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 2
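The actual axolotl configuration is not reproduced here. For orientation, the hyperparameters above map roughly onto transformers.TrainingArguments as sketched below; this is an approximation, not the training setup that was used:

```python
# Rough mapping of the listed hyperparameters onto transformers.TrainingArguments.
# NOT the axolotl config actually used; it only restates the numbers above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="limarp-lora",
    learning_rate=1.5e-4,            # learning_rate: 0.00015
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=4,   # 2 * 4 = total train batch size 8 on one GPU
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    seed=42,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the AdamW defaults.
)
```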

Mirror of https://huggingface.co/royallab/ZephRP-m7b