# chatglm-finetune

**Repository Path**: aizpy/chatglm-finetune

## Basic Information

- **Project Name**: chatglm-finetune
- **Description**: Fine-tuning ChatGLM-6B
- **Primary Language**: Python
- **License**: BSD-3-Clause
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 4
- **Forks**: 3
- **Created**: 2023-04-03
- **Last Updated**: 2023-06-06

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# chatglm-finetune

Fine-tuning based on [THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b). The first fine-tuning method is [LoRA](https://arxiv.org/abs/2106.09685); more methods will be made available if possible.

## LoRA

Please check out the [Tutorial](https://aizpy.com/2023/03/30/chatglm-6b-lora/) if you can read Chinese.

## Dataset

An AI-generated QA dataset about AI explorer ([AI探险家](https://aizpy.com/about/)) is provided. The trained model saved in the output folder was also trained on this dataset.

## Inference test

![image](https://user-images.githubusercontent.com/127382813/229401551-3819c3f5-0795-4927-b88e-8fc554cf5ed2.png)
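To illustrate the LoRA idea behind this repo, here is a minimal NumPy sketch (not the repo's actual training code): instead of updating the full pretrained weight `W`, LoRA freezes `W` and trains a low-rank pair `B @ A`, scaled by `alpha / r`. All names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 64, 64, 8, 16   # r << d: rank of the low-rank update

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init, so the
                                            # adapter adds nothing at the start

def lora_forward(x):
    # frozen base path plus scaled low-rank adapter path
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# with B = 0, the adapted output equals the frozen model's output
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` (here 2 × 8 × 64 values instead of 64 × 64) would receive gradients during fine-tuning, which is why LoRA fits ChatGLM-6B on modest GPUs.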