SDXL is now supported. The sdxl branch has been merged into the main branch. If you update the repository, please follow the upgrade instructions. Also, the version of accelerate has been updated, so please run accelerate config again. The documentation for SDXL training is here.
This repository contains training, generation and utility scripts for Stable Diffusion.
Change History is moved to the bottom of the page.
For easier use (GUI and PowerShell scripts etc...), please visit the repository maintained by bmaltais. Thanks to @bmaltais!
This repository contains the scripts for:
These files do not include requirements for PyTorch, because the required version depends on your environment. Please install PyTorch first (see the installation guide below).
The scripts are tested with PyTorch 2.0.1. PyTorch 1.12.1 is not tested but should work.
Most of the documents are written in Japanese.
English translation by darkstorm2150 is here. Thanks to darkstorm2150!
Python 3.10.6 and Git:
Give unrestricted script access to PowerShell so venv can work: open an administrator PowerShell window, run `Set-ExecutionPolicy Unrestricted`, and answer `A`. Then close the admin window.

Open a regular PowerShell terminal and type the following inside:
```powershell
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

python -m venv venv
.\venv\Scripts\activate

pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118
pip install --upgrade -r requirements.txt
pip install xformers==0.0.20

accelerate config
```
Note: bitsandbytes is now optional. Install any version of bitsandbytes as needed; installation instructions are in the following section.
Answers to accelerate config:
- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16
Note: some users report that `ValueError: fp16 mixed precision requires a GPU` occurs during training. In this case, answer `0` to the 6th question (`What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:`). The single GPU with id 0 will then be used.
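The answer to that question is a comma-separated list of device ids (or `all`). As an illustration of how such an answer maps onto devices, here is a small hypothetical helper (not part of accelerate itself):

```python
def parse_gpu_ids(answer: str):
    """Interpret an accelerate-style GPU answer: 'all' or a list like '0,1'."""
    answer = answer.strip()
    if answer in ("", "all"):
        return None  # None means "use every visible GPU"
    return [int(part) for part in answer.split(",")]
```

Answering `0` therefore restricts training to the single GPU with id 0, which avoids the fp16 error on machines where other entries in the device list are not usable.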
bitsandbytes (8bit optimizer)

For the 8bit optimizers, you need to install bitsandbytes. For Linux, install bitsandbytes as usual (0.41.1 or later is recommended).

For Windows, there are several versions of bitsandbytes:

- bitsandbytes 0.35.0: stable version. AdamW8bit is available. `full_bf16` is not available.
- bitsandbytes 0.41.1: Lion8bit, PagedAdamW8bit and PagedLion8bit are available. `full_bf16` is available.

Note: bitsandbytes versions above 0.35.0 and up to 0.41.0 seem to have an issue: https://github.com/TimDettmers/bitsandbytes/issues/659

Follow the instructions below to install bitsandbytes for Windows.
Open a regular Powershell terminal and type the following inside:
```powershell
cd sd-scripts
.\venv\Scripts\activate
pip install bitsandbytes==0.35.0

cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
```
This will install bitsandbytes 0.35.0 and copy the necessary files to the bitsandbytes directory.
Install the Windows version whl file from here or other sources, like:

```powershell
python -m pip install bitsandbytes==0.41.1 --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui
```
When a new release comes out you can upgrade your repo with the following command:
```powershell
cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
```
Once the commands have completed successfully you should be ready to use the new version.
The implementation for LoRA is based on cloneofsimo's repo. Thank you for the great work!
The LoRA expansion to Conv2d 3x3 was initially released by cloneofsimo and its effectiveness was demonstrated at LoCon by KohakuBlueleaf. Thank you so much KohakuBlueleaf!
The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers, cloneofsimo's repo and LoCon); however, portions of the project are available under separate license terms:
Memory Efficient Attention Pytorch: MIT
bitsandbytes: MIT
BLIP: BSD-3-Clause
The documentation in this section will be moved to a separate document later.
`sdxl_train.py` is a script for SDXL fine-tuning. The usage is almost the same as `fine_tune.py`, but it also supports DreamBooth datasets.

- The `--full_bf16` option is added. Thanks to KohakuBlueleaf!
- The `--block_lr` option is added. Specify 23 values separated by commas, like `--block_lr 1e-3,1e-3 ... 1e-3`. The 23 values correspond to: 0: time/label embed, 1-9: input blocks 0-8, 10-12: mid blocks 0-2, 13-21: output blocks 0-8, 22: out.
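The 23-entry layout can be made concrete with a small sketch. The helper names below are hypothetical and only illustrate the block ordering described above:

```python
# Hypothetical helpers illustrating the 23-entry --block_lr layout:
# 0: time/label embed, 1-9: input blocks, 10-12: mid blocks,
# 13-21: output blocks, 22: out.
def block_names():
    names = ["time/label embed"]
    names += [f"input block {i}" for i in range(9)]
    names += [f"mid block {i}" for i in range(3)]
    names += [f"output block {i}" for i in range(9)]
    names.append("out")
    return names  # 23 entries in total

def parse_block_lr(arg: str):
    """Map a comma-separated --block_lr value onto the block names."""
    lrs = [float(v) for v in arg.split(",")]
    if len(lrs) != len(block_names()):
        raise ValueError(f"--block_lr expects 23 values, got {len(lrs)}")
    return dict(zip(block_names(), lrs))
```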
`prepare_buckets_latents.py` now supports SDXL fine-tuning.
`sdxl_train_network.py` is a script for LoRA training for SDXL. The usage is almost the same as `train_network.py`.

Both scripts have the following additional options:

- `--cache_text_encoder_outputs` and `--cache_text_encoder_outputs_to_disk`: cache the outputs of the text encoders. This option is useful for reducing GPU memory usage. It cannot be used with options for shuffling or dropping the captions.
- `--no_half_vae`: disable the half-precision (mixed-precision) VAE. The VAE for SDXL seems to produce NaNs in some cases; this option is useful to avoid them.

The `--weighted_captions` option is not supported yet for either script.
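The idea behind `--cache_text_encoder_outputs_to_disk` can be sketched as follows. This is a simplified illustration, not the repository's implementation; the hash-keyed file layout and the `encode_fn` callback are assumptions:

```python
import hashlib
import pickle
from pathlib import Path

def cache_path(cache_dir: Path, caption: str) -> Path:
    # Key each caption by a stable hash so later epochs reuse the same file.
    digest = hashlib.sha256(caption.encode("utf-8")).hexdigest()
    return cache_dir / f"{digest}.pkl"

def get_text_encoder_outputs(caption, encode_fn, cache_dir=Path("te_cache")):
    cache_dir.mkdir(exist_ok=True)
    path = cache_path(cache_dir, caption)
    if path.exists():  # cache hit: skip the expensive encoder forward pass
        return pickle.loads(path.read_bytes())
    outputs = encode_fn(caption)  # e.g. hidden states from both SDXL text encoders
    path.write_bytes(pickle.dumps(outputs))
    return outputs
```

Such a cache is also why the option cannot be combined with caption shuffling or dropout: the cached outputs would no longer match the augmented captions.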
`sdxl_train_textual_inversion.py` is a script for Textual Inversion training for SDXL. The usage is almost the same as `train_textual_inversion.py`.

- `--cache_text_encoder_outputs` is not supported.
- There are `--use_object_template` and `--use_style_template` options. The captions are generated from the template; the existing captions are ignored.

`--min_timestep` and `--max_timestep` options are added to each training script. These options can be used to train the U-Net with different timesteps. The default values are 0 and 1000.
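Assuming the usual uniform sampling of integer timesteps, restricting the range with these options can be sketched as:

```python
import random

def sample_timesteps(batch_size, min_timestep=0, max_timestep=1000):
    # Uniformly sample integer timesteps in [min_timestep, max_timestep).
    return [random.randrange(min_timestep, max_timestep) for _ in range(batch_size)]
```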
`tools/cache_latents.py` is added. This script can be used to cache the latents to disk in advance. Launch it as follows:

```
accelerate launch --num_cpu_threads_per_process 1 tools/cache_latents.py ...
```

`tools/cache_text_encoder_outputs.py` is added. This script can be used to cache the text encoder outputs to disk in advance. The options are almost the same as `cache_latents.py` and `sdxl_train.py`. See the help message for the usage.

`sdxl_gen_img.py` is added. This script can be used to generate images with SDXL, including LoRA, Textual Inversion and ControlNet-LLLite. See the help message for the usage.

Tips for SDXL training:

- To reduce GPU memory usage, use the `--cache_text_encoder_outputs` option and cache latents.
- The `--network_train_unet_only` option is highly recommended for SDXL LoRA. Because SDXL has two text encoders, the result of training them will be unexpected.
- `--bucket_reso_steps` can be set to 32 instead of the default value 64. Values smaller than 32 will not work for SDXL training.

Example of the optimizer settings for Adafactor with a fixed learning rate:
```toml
optimizer_type = "adafactor"
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False" ]
lr_scheduler = "constant_with_warmup"
lr_warmup_steps = 100
learning_rate = 4e-7 # SDXL original learning rate
```
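The `--bucket_reso_steps` recommendation above can be illustrated with a rough sketch of aspect-ratio bucket generation (simplified, not the repository's actual bucketing logic; the size limits are assumptions):

```python
# Illustrative sketch: enumerate aspect-ratio buckets whose sides are
# multiples of `step` (e.g. --bucket_reso_steps 32) and whose pixel area
# does not exceed the 1024x1024 SDXL base resolution.
def make_buckets(max_area=1024 * 1024, step=32, min_size=256, max_size=2048):
    buckets = set()
    w = min_size
    while w <= max_size:
        h = (max_area // w) // step * step  # largest height within the area budget
        if min_size <= h <= max_size:
            buckets.add((w, h))
            buckets.add((h, w))  # also keep the portrait orientation
        w += step
    return sorted(buckets)
```

A smaller step yields more buckets, so images are cropped less when snapped to the nearest bucket.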
Textual Inversion embeddings for SDXL store the embeddings for both text encoders:

```python
from safetensors.torch import save_file

state_dict = {"clip_g": embs_for_text_encoder_1280, "clip_l": embs_for_text_encoder_768}
save_file(state_dict, file)
```
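The two keys correspond to SDXL's two text encoders: `clip_g` with embedding width 1280 and `clip_l` with width 768. A torch-free sanity check of such a dict might look like this (hypothetical helper, using plain nested lists in place of tensors):

```python
EXPECTED_WIDTHS = {"clip_g": 1280, "clip_l": 768}

def validate_sdxl_embedding(state_dict) -> int:
    """Check keys/widths and return the number of embedding vectors per token."""
    if set(state_dict) != set(EXPECTED_WIDTHS):
        raise ValueError(f"expected keys {sorted(EXPECTED_WIDTHS)}, got {sorted(state_dict)}")
    num_vectors = {len(embs) for embs in state_dict.values()}
    if len(num_vectors) != 1:
        raise ValueError("clip_g and clip_l must hold the same number of vectors")
    for key, embs in state_dict.items():
        for row in embs:
            if len(row) != EXPECTED_WIDTHS[key]:
                raise ValueError(f"{key} vectors must have width {EXPECTED_WIDTHS[key]}")
    return num_vectors.pop()
```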
ControlNet-LLLite, a novel method for ControlNet with SDXL, is added. See documentation for details.
Diffusers, Accelerate, Transformers and other related libraries have been updated. Please update the libraries as described in the Upgrade section.
`torch.compile` is supported (experimental). PR #1024 Thanks to p1atdev!

- Specify the `--torch_compile` option in each training script.
- The backend can be selected with the `--dynamo_backend` option. The default is `"inductor"`. `inductor` or `eager` seems to work.
- This option is not compatible with `--xformers`. Use the `--sdpa` option instead.

The session name for wandb can be specified with the `--wandb_run_name` option. PR #1032 Thanks to hopl1t!
IPEX library is updated. PR #1030 Thanks to Disty0!
Fixed a bug where models could not be saved in the Diffusers format.
Please read Releases for recent updates.
The LoRA supported by train_network.py
has been named to avoid confusion. The documentation has been updated. The following are the names of LoRA types in this repository.
- LoRA-LierLa: (LoRA for Linear Layers) LoRA for Linear layers and Conv2d layers with a 1x1 kernel.
- LoRA-C3Lier: (LoRA for Convolutional layers with a 3x3 Kernel and Linear layers) In addition to the above, LoRA for Conv2d layers with a 3x3 kernel.
LoRA-LierLa is the default LoRA type for `train_network.py` (without the `conv_dim` network arg). LoRA-LierLa can be used with our extension for AUTOMATIC1111's Web UI, or with the built-in LoRA feature of the Web UI.
To use LoRA-C3Lier with Web UI, please use our extension.
A prompt file might look like this, for example:
```
# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28

# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy,bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
```
Lines beginning with `#` are comments. You can specify options for the generated image with options like `--n` after the prompt. The following can be used:

- `--n` Negative prompt up to the next option.
- `--w` Specifies the width of the generated image.
- `--h` Specifies the height of the generated image.
- `--d` Specifies the seed of the generated image.
- `--l` Specifies the CFG scale of the generated image.
- `--s` Specifies the number of steps in the generation.

Prompt weighting such as `( )` and `[ ]` works.
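The option syntax above can be parsed roughly like this (an illustrative sketch, not the generation script's actual parser; option values are kept as strings):

```python
import re

# Map the single-letter prompt-file options to descriptive names.
OPTION_KEYS = {"n": "negative_prompt", "w": "width", "h": "height",
               "d": "seed", "l": "scale", "s": "steps"}

def parse_prompt_line(line: str) -> dict:
    """Split 'prompt --w 768 --n bad quality' into prompt text and options."""
    line = line.strip()
    if not line or line.startswith("#"):
        return {}  # blank lines and comments carry no prompt
    # Split on ' --<letter> ' while capturing the option letter.
    parts = re.split(r"\s--([a-z])\s+", line)
    result = {"prompt": parts[0].strip()}
    for key, value in zip(parts[1::2], parts[2::2]):
        result[OPTION_KEYS.get(key, key)] = value.strip()
    return result
```

Note that `--n` naturally captures everything up to the next option, matching the behavior described above.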