# fake-photos

**Repository Path**: az13js/fake-photos

## Basic Information

- **Project Name**: fake-photos
- **Description**: A web UI and service for AI image generation, built with PHP and the Laravel framework. Must be used together with stable-diffusion.cpp. The repository lacks product documentation, is under development, and is for personal use. If you are looking for a finished product, don't waste your time here.
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2024-05-12
- **Last Updated**: 2025-01-04

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Image Generation

*Current status: everything is under development. The features work, but a few places still need to be adjusted for the actual use case, configuration and deployment are cumbersome, and documentation is missing...*

*The main branch `main` is the development branch — and currently the only branch; the project has not reached a release state yet...*

![UI](./ui.jpg)

Generates fake images from prompts. This is a web service built with Laravel. Text-to-image relies on SDXL Turbo, which runs on top of stable-diffusion.cpp or Python's diffusers package. SDXL Turbo is case-insensitive and only supports English plus a small set of special characters; **Chinese is not supported**. To make Chinese prompts usable, deploy RWKV and let it rewrite the prompt submitted from the page into English first.

Laravel framework version: 8.x. For database and queue configuration, see the official documentation:

- [Database configuration](https://laravel.com/docs/8.x/database)
- [Queues](https://laravel.com/docs/8.x/queues#generating-job-classes)

PHP version: 7.4. Local development is done on 7.4; deployment also works on PHP 8.3.

TODO: complete the deployment instructions.

## Deploying the PHP project

```bash
git clone --depth 1 'https://gitee.com/az13js/fake-photos.git' /var/fake-photos/code
cd /var/fake-photos/code
composer install --optimize-autoloader --no-dev
```

Copy the configuration file:

```bash
cd /var/fake-photos/code
cp .env.example .env
```

Generate the caches:

```bash
php artisan config:cache
php artisan route:cache
php artisan view:cache
```

Initialize the database:

```bash
touch /var/fake-photos/code/database.sqlite
php artisan migrate
```

Check the service status:

```bash
systemctl status fake_photos.service
```

Nginx configuration:

```conf
# Fake photos image generation
server {
    listen 8910;
    listen [::]:8910;
    error_log /var/fake-photos/nginx-error.log notice;
    access_log /var/fake-photos/nginx-access.log;
    server_name _;
    root /var/fake-photos/code/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    index index.html index.php;
    charset utf-8;
    location ~* \.png$ {
        try_files $uri =404;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    error_page 404 /404.html;
    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.socket;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
    location ~* \.(txt)$ {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
        if ($request_uri ~* (^\.txt$)) {
            add_header Pragma "no-cache";
            add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
            expires off;
            break;
        }
    }
}
```

## CPU deployment

This setup uses stable-diffusion.cpp and a 1.6B-parameter RWKV model. Running SDXL Turbo's fp16 weights entirely on CPU uses too much memory, so that approach was abandoned in favor of stable-diffusion.cpp, which in my tests uses noticeably less memory. Download the SDXL Turbo model:

```sh
nohup wget -o fake_photo_wget_download_main.log -O "sd_xl_turbo_1.0_fp16.safetensors" "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors?download=true" &>/dev/null &
nohup wget -o fake_photo_wget_download_vae.log -O "sdxl.vae.safetensors" "https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl.vae.safetensors?download=true" &>/dev/null &
```

The model files are large, so run the downloads in the background. The VAE model is there to fix SDXL Turbo generating all-black images; it is unnecessary when deploying with PyTorch. Build the `sd` executable:

```sh
git clone --recursive https://gitee.com/az13js/fake-photos-stable-diffusion.cpp.git stable-diffusion.cpp
cd stable-diffusion.cpp
rm -rf build
mkdir build
cd build
cmake ..
cmake --build . --config Release
```

`az13js/fake-photos-stable-diffusion.cpp` is a fork of mine; search GitHub for `stable-diffusion.cpp` to find the original author's repository.

Deploying RWKV: if you want to use RWKV, you need to download an RWKV model file. In my tests the 1.6B model (which runs on Python 3.7 at minimum) performs well enough; you can of course download a larger model if needed.

Assume the model is saved at `/var/rwkv/RWKV-x060-World-1B6-v2.1-20240328-ctx4096.pth`, the PHP project lives at `/var/fake-photos/code/`, and stable-diffusion.cpp at `/var/fake-photos/`.

Set the environment variables:

```sh
export FAKE_PHOTOS_RWKV_MODEL_PATH="/var/rwkv/RWKV-x060-World-1B6-v2.1-20240328-ctx4096"
export FAKE_PHOTOS_RWKV_WORKER="/var/fake-photos/code/rwkv_worker.py"
export FAKE_PHOTOS_TMP_PROMPT="/var/fake-photos/code/public/images/prompt.txt"
export FAKE_PHOTOS_SD_EXE="/var/fake-photos/stable-diffusion.cpp/bin/sd"
```

If running under systemd, set them inside the `[Service]` section instead:

```ini
Environment=FAKE_PHOTOS_RWKV_MODEL_PATH="/var/rwkv/RWKV-x060-World-1B6-v2.1-20240328-ctx4096"
Environment=FAKE_PHOTOS_RWKV_WORKER="/var/fake-photos/code/rwkv_worker.py"
Environment=FAKE_PHOTOS_TMP_PROMPT="/var/fake-photos/code/public/images/prompt.txt"
Environment=FAKE_PHOTOS_SD_EXE="/var/fake-photos/stable-diffusion.cpp/bin/sd"
```

Update the `.env` configuration:

```ini
IMAGES_SD_EXE=python3.7
IMAGES_SD_PARAMS="/var/fake-photos/code/rwkv_client.py -m /var/fake-photos/sd_xl_turbo_1.0_fp16.safetensors --vae /var/fake-photos/sdxl.vae.safetensors --cfg-scale 1"
```

## [TODO] Deployment notes

Download the model files:

```bash
mkdir -p /var/fake-photos
nohup wget -o fake_photo_wget_download_main.log -O "/var/fake-photos/sd_xl_turbo_1.0_fp16.safetensors" "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors?download=true" &>/dev/null &
nohup wget -o fake_photo_wget_download_vae.log -O "/var/fake-photos/sdxl.vae.safetensors" "https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl.vae.safetensors?download=true" &>/dev/null &
```

Deploy stable-diffusion.cpp (the steps below do not cover GPU support; if you need GPU support, see [here](https://gitee.com/az13js/fake-photos-stable-diffusion.cpp)):

```bash
git clone --recursive https://gitee.com/az13js/fake-photos-stable-diffusion.cpp.git /var/fake-photos/stable-diffusion.cpp
cd /var/fake-photos/stable-diffusion.cpp
rm -rf build
mkdir build
cd build
cmake ..
cmake --build . --config Release
```

## Optional

Installing and testing the GPU build of Torch:

```bash
cd /var/fake-photos/code
mkdir -p local_packages
export PYTHONPATH="$(pwd)/local_packages"
python -m pip install -t "$PYTHONPATH" diffusers transformers accelerate requests --upgrade
echo 'Import torch and show version;'
python -c 'import torch ; print(torch.__version__)'
```

Configuration change:

```ini
IMAGES_SD_EXE="python /var/fake-photos/code/python_diffusers_client.py"
```

## RWKV

```sh
FILE_NAME="RWKV-x060-World-1B6-v2.1-20240328-ctx4096.pth"
FILE_URL="https://huggingface.co/BlinkDL/rwkv-6-world/resolve/main/RWKV-x060-World-1B6-v2.1-20240328-ctx4096.pth?download=true"
wget -O "$FILE_NAME" "$FILE_URL"
python -m pip install rwkv ninja --upgrade

# Depending on your setup
export RWKV_CUDA_ON=1
```
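## Appendix: SQLite `.env` settings (sketch)

The database initialization step earlier creates `/var/fake-photos/code/database.sqlite`, but Laravel will only use it if `.env` points at it. A minimal sketch using Laravel 8's standard SQLite keys — the exact contents of this project's `.env.example` are an assumption:

```ini
; Standard Laravel 8 database settings for SQLite; an absolute path is
; required because artisan commands may run from a different directory.
DB_CONNECTION=sqlite
DB_DATABASE=/var/fake-photos/code/database.sqlite
```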
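## Appendix: systemd unit (sketch)

The deployment steps check `systemctl status fake_photos.service`, but the unit file itself is not shown in this README. A hypothetical sketch, assuming the service runs Laravel's queue worker and reusing the `Environment=` lines from above; the user, paths, and `ExecStart` command are guesses to adapt to your setup:

```ini
[Unit]
Description=fake-photos queue worker
After=network.target

[Service]
; User and ExecStart are assumptions; adjust to your deployment.
User=www-data
WorkingDirectory=/var/fake-photos/code
Environment=FAKE_PHOTOS_RWKV_MODEL_PATH="/var/rwkv/RWKV-x060-World-1B6-v2.1-20240328-ctx4096"
Environment=FAKE_PHOTOS_RWKV_WORKER="/var/fake-photos/code/rwkv_worker.py"
Environment=FAKE_PHOTOS_TMP_PROMPT="/var/fake-photos/code/public/images/prompt.txt"
Environment=FAKE_PHOTOS_SD_EXE="/var/fake-photos/stable-diffusion.cpp/bin/sd"
ExecStart=/usr/bin/php artisan queue:work --sleep=3 --tries=1
Restart=always

[Install]
WantedBy=multi-user.target
```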
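## Appendix: checking the model downloads (sketch)

Because the `wget` commands above run detached in the background, it is easy to end up with a missing or truncated file. A small sketch to confirm both model files landed and are non-empty, using the paths from this README:

```shell
# Report whether a model file exists and is non-empty.
check_model() {
    if [ -s "$1" ]; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

check_model /var/fake-photos/sd_xl_turbo_1.0_fp16.safetensors
check_model /var/fake-photos/sdxl.vae.safetensors
```

If either line prints `MISSING`, re-run the corresponding download and check its `fake_photo_wget_download_*.log` file.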
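## Appendix: the generation command (sketch)

For orientation, the `.env` values above end up expanding into a stable-diffusion.cpp invocation roughly like the following. This is a sketch only: `-m`, `--vae`, and `--cfg-scale` come from the `.env` example in this README, while the `-p` (prompt) and `-o` (output) options, the example prompt, and the output path are assumptions about stable-diffusion.cpp's CLI:

```shell
# Assemble (but do not execute) the image-generation command, so this
# sketch is safe to run anywhere. Paths are the defaults from this README.
SD_EXE="${FAKE_PHOTOS_SD_EXE:-/var/fake-photos/stable-diffusion.cpp/bin/sd}"
MODEL="/var/fake-photos/sd_xl_turbo_1.0_fp16.safetensors"
VAE="/var/fake-photos/sdxl.vae.safetensors"
PROMPT="a photo of a cat on a wooden table"
OUTPUT="/var/fake-photos/code/public/images/result.png"

CMD="$SD_EXE -m $MODEL --vae $VAE --cfg-scale 1 -p \"$PROMPT\" -o $OUTPUT"
echo "$CMD"
```

Running the printed command by hand is a quick way to test the CPU pipeline before wiring up the Laravel queue worker.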