A TensorFlow implementation of AnimeGAN for fast photo animation!
This is the open source code of the paper "AnimeGAN: a novel lightweight GAN for photo animation", which uses the GAN framework to transform real-world photos into anime-style images.
Some suggestions:
News: AnimeGAN+ is expected to be released this summer. It adds some simple tricks to AnimeGAN and achieves better animation effects. Once I return to school, more pre-trained models and the video animation test code will also be released in this repository.
Edge smoothing, e.g. python edge_smooth.py --dataset Hayao --img_size 256
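For intuition, here is a minimal sketch of this kind of edge-smoothing preprocessing (an assumed re-implementation, not the repository's edge_smooth.py): the regions around detected edges in the anime training images are Gaussian-blurred, which typically (as in CartoonGAN) gives the discriminator examples of anime images with blurry edges.

```python
import cv2
import numpy as np

def smooth_edges(img_bgr, kernel_size=5, canny_low=100, canny_high=200):
    """Blur only the neighbourhood of detected edges in an anime frame."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Find edges and dilate them so the blur covers a small band around each edge.
    edges = cv2.Canny(gray, canny_low, canny_high)
    edge_band = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8))
    # Blur the whole image once, then copy the blurred pixels back only inside
    # the edge band; everything else keeps its original crisp colours.
    blurred = cv2.GaussianBlur(img_bgr, (kernel_size, kernel_size), 0)
    result = img_bgr.copy()
    result[edge_band > 0] = blurred[edge_band > 0]
    return result

# Example usage (hypothetical paths):
# cv2.imwrite("dataset/Hayao/smooth/0.jpg",
#             smooth_edges(cv2.imread("dataset/Hayao/style/0.jpg")))
```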
Train, e.g. python main.py --phase train --dataset Hayao --epoch 101 --init_epoch 1
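The --init_epoch flag corresponds to the generator initialization stage described in the paper. Below is a minimal sketch of such a two-stage schedule (an assumed tf.keras structure, not the repository's main.py; the simple MSE terms are stand-ins for the paper's full content, color and style losses).

```python
import tensorflow as tf

def train(generator, discriminator, dataset, epochs=101, init_epochs=1):
    """Two-stage schedule: content-only pre-training, then LSGAN training."""
    g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
    d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
    mse = tf.keras.losses.MeanSquaredError()

    for epoch in range(epochs):
        for photo, anime in dataset:
            if epoch < init_epochs:
                # Initialization stage: pre-train the generator with a
                # reconstruction loss only (a stand-in for the VGG content loss).
                with tf.GradientTape() as tape:
                    g_loss = mse(photo, generator(photo, training=True))
                g_opt.apply_gradients(zip(
                    tape.gradient(g_loss, generator.trainable_variables),
                    generator.trainable_variables))
                continue

            # Adversarial stage with LSGAN objectives and a simplified
            # content term standing in for the paper's full loss.
            with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
                fake = generator(photo, training=True)
                d_real = discriminator(anime, training=True)
                d_fake = discriminator(fake, training=True)
                d_loss = (mse(tf.ones_like(d_real), d_real)
                          + mse(tf.zeros_like(d_fake), d_fake))
                g_loss = mse(tf.ones_like(d_fake), d_fake) + 10.0 * mse(photo, fake)
            d_opt.apply_gradients(zip(
                d_tape.gradient(d_loss, discriminator.trainable_variables),
                discriminator.trainable_variables))
            g_opt.apply_gradients(zip(
                g_tape.gradient(g_loss, generator.trainable_variables),
                generator.trainable_variables))
```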
Test, e.g. python main.py --phase test --dataset Hayao
or python test.py --checkpoint_dir checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_3_10 --test_dir dataset/test/real --style_name H
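As a rough guide, a test pass like the one above restores the trained generator from --checkpoint_dir and translates every photo in --test_dir. The sketch below shows that shape (assumed structure, not the repository's test.py; build_generator is a hypothetical stand-in for the real network so the example stays self-contained).

```python
import glob
import os

import cv2
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def build_generator(x):
    # Hypothetical placeholder so the sketch runs end to end; the real
    # generator is the lightweight network described in the paper.
    return tf.layers.conv2d(x, 3, 3, padding="same", activation=tf.nn.tanh)

def preprocess(path):
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB).astype(np.float32)
    return img[np.newaxis, ...] / 127.5 - 1.0      # scale pixels to [-1, 1]

def postprocess(batch):
    img = (np.squeeze(batch, axis=0) + 1.0) * 127.5
    return cv2.cvtColor(img.clip(0, 255).astype(np.uint8), cv2.COLOR_RGB2BGR)

def run(checkpoint_dir, test_dir, result_dir="results"):
    os.makedirs(result_dir, exist_ok=True)
    inputs = tf.placeholder(tf.float32, [1, None, None, 3], name="test_input")
    with tf.variable_scope("generator"):
        stylized = build_generator(inputs)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore the latest checkpoint and stylize each photo in test_dir.
        saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
        for path in glob.glob(os.path.join(test_dir, "*.*")):
            out = sess.run(stylized, feed_dict={inputs: preprocess(path)})
            cv2.imwrite(os.path.join(result_dir, os.path.basename(path)),
                        postprocess(out))
```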
Pictures from the paper "AnimeGAN: a novel lightweight GAN for photo animation"
Photo to Hayao Style
This code is based on CartoonGAN-Tensorflow and Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet. Thanks to the contributors of these projects.