# AnimeGAN Pytorch
PyTorch implementation of AnimeGAN for fast photo animation
* Paper: *AnimeGAN: a novel lightweight GAN for photo animation* - [Semantic scholar](https://www.semanticscholar.org/paper/AnimeGAN%3A-A-Novel-Lightweight-GAN-for-Photo-Chen-Liu/10a9c5d183e7e7df51db8bfa366bc862262b37d7#citing-papers) or from [Yoshino repo](https://github.com/TachibanaYoshino/AnimeGAN/blob/master/doc/Chen2020_Chapter_AnimeGAN.pdf)
* Original implementation in [Tensorflow](https://github.com/TachibanaYoshino/AnimeGAN) by [Tachibana Yoshino](https://github.com/TachibanaYoshino)
* [Try it on Hugging Face](https://huggingface.co/spaces/ptran1203/pytorchAnimeGAN)
* [Demo and Docker image on Replicate](https://replicate.ai/ptran1203/pytorch-animegan)
* Sample anime video: https://www.youtube.com/watch?v=45ASFOR3rNU
---
* 09/06/2024: Integrated on Hugging Face Spaces, [try it here](https://huggingface.co/spaces/ptran1203/pytorchAnimeGAN)
* 02/06/2024: Released Arcane ([results here](#arcane)) and Shinkai styles
* 05/05/2024: Added the [color_transfer](https://github.com/ptran1203/color_transfer) module to retain the original colors of generated images, [see here](#with-color-transfer-module)
* 23/04/2024: Added DDP training
* 16/04/2024: Released **AnimeGANv2** (Hayao style) with training code
---
## Quick start
```bash
git clone https://github.com/ptran1203/pytorch-animeGAN.git
cd pytorch-animeGAN
```
Run inference on your local machine:
> `--src` can be a directory or a single image file
```bash
python3 inference.py --weight hayao:v2 --src /your/path/to/image_dir --out /path/to/output_dir
```
* Or from Python:
```python
from inference import Predictor

predictor = Predictor(
    'hayao:v2',
    # If True, the generated image retains the original colors of the input
    retain_color=True,
)

url = 'https://github.com/ptran1203/pytorch-animeGAN/blob/master/example/result/real/1%20(20).jpg?raw=true'
predictor.transform_file(url, "anime.jpg")
```
## Pretrained weights
| Model name | Model | Dataset | Weight |
|--|--|--|--|
| Hayao | AnimeGAN | train_photo + Hayao style | [generator_hayao.pth](https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/generator_hayao.pth) |
| Shinkai | AnimeGAN | train_photo + Shinkai style | [generator_shinkai.pth](https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/generator_shinkai.pth) |
| Hayao:v2 | AnimeGANv2 | Google Landmark v2 + Hayao style | [GeneratorV2_gldv2_Hayao.pt](https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.2/GeneratorV2_gldv2_Hayao.pt) |
| Shinkai:v2 | AnimeGANv2 | Google Landmark v2 + Shinkai style | [GeneratorV2_gldv2_Shinkai.pt](https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.2/GeneratorV2_gldv2_Shinkai.pt) |
| Arcane:v2 | AnimeGANv2 | Face ffhq + Arcane style | [GeneratorV2_ffhq_Arcane_210624_e350.pt](https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.2/GeneratorV2_ffhq_Arcane_210624_e350.pt) |
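If you prefer to fetch a weight file programmatically rather than clicking the release links, the URLs from the table above can be collected in a small helper. Note that `WEIGHT_URLS` and `download_weight` below are illustrative, not part of the repo's API; the short names mirror the `--weight` aliases used by `inference.py`.

```python
import os
import urllib.request

# Release URLs copied from the pretrained-weights table above.
RELEASES = "https://github.com/ptran1203/pytorch-animeGAN/releases/download"
WEIGHT_URLS = {
    "hayao": f"{RELEASES}/v1.0/generator_hayao.pth",
    "shinkai": f"{RELEASES}/v1.0/generator_shinkai.pth",
    "hayao:v2": f"{RELEASES}/v1.2/GeneratorV2_gldv2_Hayao.pt",
    "shinkai:v2": f"{RELEASES}/v1.2/GeneratorV2_gldv2_Shinkai.pt",
    "arcane:v2": f"{RELEASES}/v1.2/GeneratorV2_ffhq_Arcane_210624_e350.pt",
}

def download_weight(name, dest_dir="weights"):
    """Download a pretrained generator if it is not already cached locally."""
    url = WEIGHT_URLS[name]
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, url.rsplit("/", 1)[-1])
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest
```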
## Train on custom dataset
- Training notebook on [Google colab](https://colab.research.google.com/github/ptran1203/pytorch-animeGAN/blob/master/notebooks/animeGAN.ipynb)
- Inference notebook on [Google colab](https://colab.research.google.com/github/ptran1203/pytorch-animeGAN/blob/master/notebooks/animeGAN_inference.ipynb)
### 1. Prepare dataset
#### 1.1 To download the dataset used in the paper, run the command below
```bash
wget -O anime-gan.zip https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip
unzip anime-gan.zip
```
The dataset will be extracted into a folder named `dataset` in your current directory.
#### 1.2 Create custom data from anime video
You need an anime video file on your local machine.
**Step 1.** Create anime images from the video
```bash
python3 script/video_to_images.py --video-path /path/to/your_video.mp4 \
    --save-path dataset/MyCustomData/style \
    --image-size 256
```
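The exact frame-sampling policy of `script/video_to_images.py` isn't documented here, but the underlying idea, keeping roughly every k-th frame so a 30 fps video doesn't produce thousands of near-duplicate style images, can be sketched as follows (the helper name and rounding policy are assumptions):

```python
def frame_indices(total_frames, src_fps, target_fps):
    """Indices of the frames to keep when downsampling a video in time.

    E.g. keeping 10 frames/second from a 30 fps clip means saving
    roughly every 3rd frame; rounding keeps the sampling evenly spaced.
    """
    step = src_fps / target_fps
    indices = []
    i = 0
    while True:
        idx = int(round(i * step))
        if idx >= total_frames:
            break
        indices.append(idx)
        i += 1
    return indices
```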
**Step 2.** Create edge-smooth version of dataset from **Step 1.**
```bash
python3 script/edge_smooth.py --dataset MyCustomData --image-size 256
```
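For intuition, `edge_smooth.py` implements the edge-promoting step from the AnimeGAN paper: detect strong edges in each style image, then blur only a small neighbourhood around them, so the discriminator learns to penalize blurry edges. A minimal NumPy-only sketch of that idea follows; the threshold and kernel size here are illustrative, and the repo's script uses its own settings (typically via OpenCV):

```python
import numpy as np

def _box_blur(channel, k):
    # Naive k x k box blur with edge padding (same output size).
    pad = k // 2
    padded = np.pad(channel.astype(float), pad, mode="edge")
    h, w = channel.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def edge_smooth(img, threshold=30, k=5):
    """Blur only the pixels near strong edges of an RGB uint8 image."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > threshold  # binary edge mask
    # Dilate the mask so the blur covers a small neighbourhood of each edge.
    pad = k // 2
    mask = np.zeros_like(edges)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            mask |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
    blurred = np.stack([_box_blur(img[..., c], k) for c in range(3)], axis=-1)
    out = img.astype(float).copy()
    out[mask] = blurred[mask]
    return out.astype(np.uint8)
```

Pixels far from any edge are left untouched; only the edge neighbourhood is softened.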
### 2. Train animeGAN
To train the animeGAN from the command line, run `train.py` as follows:
```bash
python3 train.py --anime_image_dir dataset/Hayao \
                 --real_image_dir dataset/photo_train \
                 --model v2 \
                 --batch 8 \
                 --amp \
                 --init_epochs 10 \
                 --exp_dir runs \
                 --save-interval 1 \
                 --gan-loss lsgan \
                 --init-lr 1e-4 \
                 --lr-g 2e-5 \
                 --lr-d 4e-5 \
                 --wadvd 300.0 \
                 --wadvg 300.0 \
                 --wcon 1.5 \
                 --wgra 3.0 \
                 --wcol 30.0 \
                 --use_sn
```
Key options:
* `--model`: animeGAN version, `v1` or `v2`
* `--amp`: enable Automatic Mixed Precision training
* `--gan-loss`: one of `lsgan`, `hinge`, `bce`
* `--wadvd` / `--wadvg`: adversarial loss weight for the discriminator / generator
* `--wcon`: content loss weight
* `--wgra`: Gram loss weight
* `--wcol`: color loss weight
* `--use_sn`: use spectral normalization (off by default)
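For reference, the `--gan-loss lsgan` option corresponds to the least-squares GAN objective, and the `--wadvg`/`--wcon`/`--wgra`/`--wcol` flags weight the individual terms of the total generator loss. A NumPy sketch of the adversarial part (the exact reduction and target values in the repo's implementation may differ):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss: push real scores toward 1, fake toward 0.
    return 0.5 * (np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))

def lsgan_g_loss(d_fake):
    # Generator adversarial loss: push the discriminator's fake scores toward 1.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# The total generator loss is then a weighted sum, e.g. with the flags above:
# loss_g = wadvg * lsgan_g_loss(d_fake) + wcon * content + wgra * gram + wcol * color
```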
### 3. Transform images
To convert a folder of images or a single image, run `inference.py`, for example:
> `--src` and `--out` can each be a directory or a file
```bash
python3 inference.py --weight path/to/Generator.pt \
--src dataset/test/HR_photo \
--out inference_images
```
### 4. Transform video
To convert a video to an anime version:
> Choose `--batch-size` carefully: with high-resolution videos, large batches can cause CUDA out-of-memory errors
```bash
python3 inference.py --weight hayao:v2 \
    --src test_vid_3.mp4 \
    --out test_vid_3_anime.mp4 \
    --batch-size 4
```
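As a rough rule of thumb when picking `--batch-size`, you can estimate the size of the input tensor alone; intermediate activations typically need several times more memory, so treat this only as a lower bound (the helper below is illustrative, not part of the repo):

```python
def input_batch_mb(batch_size, height, width, channels=3, bytes_per_value=4):
    """Size in MiB of one float32 input batch of shape (B, C, H, W)."""
    return batch_size * channels * height * width * bytes_per_value / 2**20

# A batch of four 1080p frames is already ~95 MiB before any activations.
```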
#### Result of AnimeGAN v2
##### Hayao
*Side-by-side comparisons of input photos and Hayao style v2 outputs (images unavailable).*
##### Arcane
*Side-by-side comparisons of input photos and Arcane style outputs (images unavailable).*
More results - Hayao V2

*(Additional Hayao v2 result images unavailable.)*