# Stanford-Alpaca

**Repository Path**: thinkerai/Stanford-Alpaca

## Basic Information

- **Project Name**: Stanford-Alpaca
- **Description**: Stanford Alpaca is an instruction-tuned LLaMA model, fine-tuned from Meta's LLaMA 7B large language model
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: https://www.oschina.net/p/stanford-alpaca
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 6
- **Created**: 2023-04-08
- **Last Updated**: 2025-10-15

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README
# Stanford Alpaca: An Instruction-following LLaMA Model

[Code License](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) [Data License](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE) [Python 3.9+](https://www.python.org/downloads/release/python-390/) [Code style: black](https://github.com/psf/black)

This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model. The repo contains:

- The [52K data](#data-release) used for fine-tuning the model.
- The code for [generating the data](#data-generation-process).
- The code for [fine-tuning the model](#fine-tuning).

Note: We thank the community for feedback on Stanford-Alpaca and for supporting our research. Our live demo is suspended until further notice.

**Usage and License Notices**: Alpaca is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

## Overview

The current Alpaca model is fine-tuned from a 7B LLaMA model [1] on 52K instruction-following data generated by the techniques in the Self-Instruct [2] paper, with some modifications that we discuss in the next section. In a preliminary human evaluation, we found that the Alpaca 7B model behaves similarly to the `text-davinci-003` model on the Self-Instruct instruction-following evaluation suite [2].

Alpaca is still under development, and there are many limitations that have to be addressed. Importantly, we have not yet fine-tuned the Alpaca model to be safe and harmless. We thus encourage users to be cautious when interacting with Alpaca, and to report any concerning behavior to help improve the safety and ethical considerations of the model.

Our initial release contains the data generation procedure, dataset, and training recipe. We intend to release the model weights if we are given permission to do so by the creators of LLaMA. For now, we have chosen to host a live demo to help readers better understand the capabilities and limits of Alpaca, as well as to help us better evaluate Alpaca's performance with a broader audience.

**Please read our release [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) for more details about the model, our discussion of the potential harm and limitations of Alpaca models, and our thought process for releasing a reproducible model.**

[1]: LLaMA: Open and Efficient Foundation Language Models. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. https://arxiv.org/abs/2302.13971v1

[2]: Self-Instruct: Aligning Language Models with Self-Generated Instructions. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi. https://arxiv.org/abs/2212.10560

## Data Release

[`alpaca_data.json`](./alpaca_data.json) contains 52K instruction-following data we used for fine-tuning the Alpaca model. This JSON file is a list of dictionaries; each dictionary contains the following fields:

- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`.

We used the following prompts for fine-tuning the Alpaca model:

- for examples with a non-empty input field:

  ```
  Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

  ### Instruction:
  {instruction}

  ### Input:
  {input}

  ### Response:
  ```

- for examples with an empty input field:

  ```
  Below is an instruction that describes a task. Write a response that appropriately completes the request.

  ### Instruction:
  {instruction}

  ### Response:
  ```

During inference (e.g., for the web demo), we use the user instruction with an empty input field (second option).
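The snippet below is a minimal illustration, not part of the released training code, of how the dataset and the two templates above fit together; the constant names and the `format_prompt` helper are introduced here purely for the example.

```python
import json

# The two fine-tuning prompt templates quoted above, written as Python format strings.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)


def format_prompt(example: dict) -> str:
    """Choose a template based on whether the example's `input` field is empty."""
    if example.get("input", "").strip():
        return PROMPT_WITH_INPUT.format(
            instruction=example["instruction"], input=example["input"]
        )
    return PROMPT_NO_INPUT.format(instruction=example["instruction"])


with open("alpaca_data.json") as f:
    data = json.load(f)  # a list of {"instruction", "input", "output"} dictionaries

print(format_prompt(data[0]))  # the `output` field is what the model is trained to produce
```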
## Data Generation Process

![parse_analysis](./assets/parse_analysis.png)
## Fine-tuning
We fine-tune our models using standard Hugging Face training code.
We fine-tune LLaMA-7B and LLaMA-13B with the following hyperparameters:
| Hyperparameter | LLaMA-7B | LLaMA-13B |
|----------------|----------|-----------|
| Batch size | 128 | 128 |
| Learning rate | 2e-5 | 1e-5 |
| Epochs | 3 | 5 |
| Max length (tokens) | 512 | 512 |
| Weight decay | 0 | 0 |
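For orientation, here is a rough sketch of how the LLaMA-7B column might map onto Hugging Face `TrainingArguments`; the per-device batch size and gradient accumulation values are assumptions chosen so that four GPUs reach an effective batch size of 128, not settings taken from the table.

```python
from transformers import TrainingArguments

# Sketch only: "Batch size 128" is read here as the effective (global) batch size,
# reached as 4 GPUs x per-device batch size 4 x gradient accumulation 8.
training_args = TrainingArguments(
    output_dir="./alpaca-7b",          # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    weight_decay=0.0,
    per_device_train_batch_size=4,     # assumption, see note above
    gradient_accumulation_steps=8,     # assumption, see note above
    bf16=True,                         # assumption: A100 GPUs support bfloat16
)
```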
We have also fine-tuned larger variants of LLaMA and are in the process of evaluating those models.
Given that Hugging Face has not yet officially supported the LLaMA models, we fine-tuned LLaMA with Hugging Face's transformers library by installing it from a particular fork (i.e., this [PR](https://github.com/huggingface/transformers/pull/21955) that is yet to be merged).
The hash of the specific commit we installed was `68d640f7c368bcaaaecfc678f11908ebbd3d6176`.
To reproduce our fine-tuning runs for LLaMA, first install the requirements:
```bash
pip install -r requirements.txt
```
Then, install the particular fork of Hugging Face's transformers library.
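One possible way to pin that commit is sketched below; this is illustrative rather than the exact command we used, and if pip cannot fetch the bare commit from the upstream repository, clone the PR branch and install from your local checkout instead.

```bash
# Illustrative only: install transformers pinned to the commit noted above.
pip install "git+https://github.com/huggingface/transformers.git@68d640f7c368bcaaaecfc678f11908ebbd3d6176"
```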
Below is a command that fine-tunes LLaMA-7B with our dataset on a machine with 4 A100 80G GPUs in FSDP `full_shard` mode.
We were able to reproduce a model of quality similar to the one we hosted in our demo with the following command using **Python 3.10**.
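A representative invocation is sketched below rather than reproduced verbatim: the angle-bracket placeholders are yours to fill in, `train.py` is assumed to be the training script shipped in this repo, and the flags beyond those listed in the hyperparameter table (warmup ratio, scheduler, FSDP wrapping class, etc.) are plausible assumptions rather than confirmed settings.

```bash
# Sketch of a 4-GPU FSDP fine-tuning run; adjust paths, port, and flags to your setup.
# Note: later transformers releases rename the decoder layer class to LlamaDecoderLayer.
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
    --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
    --data_path ./alpaca_data.json \
    --output_dir <your_output_dir> \
    --bf16 True \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --model_max_length 512 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \
    --tf32 True
```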
Replace `