A TensorFlow implementation of AnimeGAN for fast photo animation!
The paper can be accessed here or on the website.
Online access: Thanks to @TonyLianLong for developing an online demo. You can run photo animation in a browser without installing anything; click here to try it.
Good news: tensorflow-1.15.0 is compatible with the code in this repository. With that version you can run the code without any modification, provided the CUDA and cuDNN releases matching that TF version are installed correctly. Versions between tf-1.8.0 and tf-1.15.0 may also work with this repository, but I have not tested them extensively.
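As a minimal sketch of the version constraint above, a helper like the following (the function name is my own, not part of this repo) can check whether an installed TensorFlow version string falls in the tested 1.8.0–1.15.0 range before running the code:

```python
def in_supported_range(version, low=(1, 8, 0), high=(1, 15, 0)):
    """Return True if a dotted version string falls in [low, high].

    Hypothetical helper: compares the first three numeric components
    of e.g. tensorflow.__version__ against the range this repo was
    tested with (tf 1.8.0 through tf 1.15.0).
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    return low <= parts <= high
```

In practice you would pass `tensorflow.__version__` to this check; tuple comparison handles the component-wise ordering.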
This is the open-source code for the paper <AnimeGAN: a novel lightweight GAN for photo animation>, which uses the GAN framework to transform real-world photos into anime-style images.
Some suggestions:
News: AnimeGAN+ is expected to be released this summer. (TBD)
e.g. python edge_smooth.py --dataset Hayao --img_size 256
e.g. python main.py --phase train --dataset Hayao --epoch 101 --init_epoch 1
e.g. python main.py --phase test --dataset Hayao
or python test.py --checkpoint_dir checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_3_10 --test_dir dataset/test/real --style_name H
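The edge_smooth.py preprocessing step above prepares the style dataset by blurring the pixels around drawn edges (the edge-smoothing idea inherited from CartoonGAN). As a hedged, self-contained sketch of that technique in plain NumPy (the actual script uses its own image I/O and edge detection; the function names and toy data here are illustrative only):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smooth_edges(gray, edge_mask, size=5, sigma=1.0):
    """Blur only the pixels flagged in edge_mask; leave the rest untouched.

    Illustrative stand-in for the edge_smooth.py preprocessing: in the real
    script the mask comes from an edge detector run on the anime frames.
    """
    pad = size // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    kernel = gaussian_kernel(size, sigma)
    out = gray.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(edge_mask)):
        patch = padded[y:y + size, x:x + size]
        out[y, x] = (patch * kernel).sum()
    return out

# Toy example: a hard vertical step edge, with the columns around the
# edge marked for smoothing
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mask = np.zeros_like(img, dtype=bool)
mask[:, 3:5] = True
smoothed = smooth_edges(img, mask)
```

After this step the masked edge pixels take intermediate gray values while everything outside the mask is unchanged, which is what gives the style images their softened outlines before training.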
Pictures from the paper 'AnimeGAN: a novel lightweight GAN for photo animation'
Photo to Hayao Style
This code is based on CartoonGAN-Tensorflow and Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet. Thanks to the contributors of these projects.