# ComfyUI-Creepy_nodes
**Repository Path**: chenabao/ComfyUI-Creepy_nodes
## Basic Information
- **Project Name**: ComfyUI-Creepy_nodes
- **Description**: ComfyUI plugin: ComfyUI-Creepy_nodes. Synced by Bilibili user 走在路上跑. Thanks to the original authors for their contributions; please give them a star on GitHub!
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: Master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-04-03
- **Last Updated**: 2025-10-04
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# ComfyUI-Creepy_nodes
A collection of custom and specialized nodes for ComfyUI.
### Installing
Search for "creepy nodes" in [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager) and install.
Manual installation:
Open a command prompt in your `/custom_nodes/` folder and type:
`git clone https://github.com/Creepybits/ComfyUI-Creepy_nodes.git`
Then, from the new `ComfyUI-Creepy_nodes` folder, type: `..\..\python_embeded\python.exe -m pip install -r requirements.txt`
Or `\full path to your python\python.exe -m pip install -r requirements.txt`
Restart ComfyUI.
___
### Free Beginner's Toolkit
The Free Beginner's Toolkit includes the following:
* 7 Essential Custom Nodes
* A 20+ pages detailed guide
* A versatile and useful workflow
It's available for download here: [Free Beginner's Toolkit](https://www.zanno.se/free-comfyui-beginner-toolkit/)

___
### Save Images To Google Drive node pack
Save Images To Google Drive node pack includes:
* Custom Node for saving images directly to Google Drive from ComfyUI
* Detailed PDF guide on how to setup your own Google Drive API
The node and guide are available here: [Save Images To Google Drive](https://patreon.com/creepybits)

___
## ALL CREEPY NODES

___
## GEMINI 2.5 FLASH/PRO API

This node is experimental!

* Image: regular image input.
* System prompt: customize a system prompt. Some models require the system prompt and the user instructions to share the same input; if you get an error message, try putting everything in one text input or leaving the system prompt out.
* Model: choose between the following models (note that free API calls are much more limited for the 2.5 models):
  - gemini 2.5 flash preview
  - gemini 2.5 pro experimental
  - gemini 2.0 Flash
  - gemini 2.0 Flash Experimental
* Max output tokens: the 2.5 models require far more output tokens than the 2.0 models.
* Temperature: acts like a "creativity dial".
  - Higher temperature: makes the output more random, surprising, and potentially creative (but also riskier for coherence).
  - Lower temperature: makes the output more focused, deterministic, and predictable (sticking to more probable words).
* Top K: limits the pool of possible next words to the K most likely options.
  - Higher K: more options considered, leading to more diverse text.
  - Lower K: fewer options considered (only the very top ones), leading to more predictable text.
* Top P (nucleus sampling): limits the pool of possible next words to the smallest set whose cumulative probability adds up to P.
  - Higher P: includes a larger, more diverse set of words whose probabilities collectively reach the threshold. This adjusts dynamically based on how confident the model is.
  - Lower P: restricts choices to a smaller set of highly probable words.
* User instructions: write additional instructions.
* API key: get your free API key here: [Gemini API](https://aistudio.google.com/apikey)
  - Save your key in a text file named `gemini_api_key.txt`, copy the path to the file, and paste it into the API text box.
* Resize image to: if you load a lot of images in a batch, resizing them to a smaller size can save time and tokens.
* Thinking mode: the 2.5 models have a "thinking mode" where you can follow their reasoning. It is not very useful in Comfy, but you can enable it if you want. You will have to explicitly tell Gemini to include its thinking in the output (this costs a lot more output tokens).
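To make the three sampling dials above concrete, here is a toy, self-contained sketch of how temperature, top-k, and top-p filter a next-token distribution before sampling. It is purely illustrative: the `sample_next_token` function and the logit dictionary are made up for this example and are not Gemini's internals.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=40, top_p=0.95):
    """Toy illustration of temperature, top-k, and top-p filtering."""
    # Temperature rescales logits: <1 sharpens, >1 flattens the distribution.
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    # Softmax to probabilities (subtract the max for numerical stability).
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Top-k: keep only the k most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Top-p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens and sample.
    z = sum(p for _, p in kept)
    r, acc = random.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if r <= acc:
            return tok
    return kept[-1][0]
```

With `top_k=1` (or a tiny `top_p`) only the single most likely token survives the filters, which is why low values make the output deterministic.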
___
## Fallback Text Switch
If there's no text input in the Primary text box, or if its input fails or is bypassed, the text from the "Default Prompt" input will be used.
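The fallback rule can be sketched as a few lines of Python. This is an illustrative guess at the behaviour described above, not the node's actual code:

```python
def fallback_text(primary, default_prompt):
    """Use the primary text if it is a non-empty string;
    otherwise fall back to the default prompt."""
    if isinstance(primary, str) and primary.strip():
        return primary
    return default_prompt
```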
___
## CONDITIONAL LORA LOADER

This node will load LoRAs when keywords (that you set) appear anywhere in the prompt.
* Format: keyword_phrase: lora_full_relative_path, lora_strength, clip_strength
* Example (use forward slashes for paths):
* portrait: Flux/Details/amateur_photo_v1.safetensors, 0.75, 1.0
* cinematic scene: MyLoRAs/Styles/retro_cinematic_v2.safetensors, 0.8, 0.9
* fantasy creature: Custom/Creatures/mythic_beast_lora.safetensors, 0.9, 0.9
* Use comma-separated values for strength. Default is 1.0 if omitted.
* Keep strength between -2.0 and 2.0.
* The keyword_phrase should be found anywhere in the prompt (case-insensitive by default).
* Use several keywords for one LoRA by separating them with commas:
keyword_1, keyword_2: MyLoRAs/Styles/retro_cinematic_v2.safetensors, 0.8, 0.9
This will load the LoRA _retro_cinematic_v2.safetensors_ from the path _MyLoRAs/Styles/_ if _keyword_1_ and/or _keyword_2_ is present anywhere in the prompt.
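The rule format and the case-insensitive matching described above can be sketched like this. The `parse_rules` and `loras_to_load` helpers are hypothetical names for illustration, not the node's actual implementation:

```python
def parse_rules(text):
    """Parse 'keyword_1, keyword_2: path, strength, clip_strength' lines.
    Strengths default to 1.0 when omitted."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        keywords_part, rest = line.split(":", 1)
        parts = [p.strip() for p in rest.split(",")]
        path = parts[0]
        strength = float(parts[1]) if len(parts) > 1 else 1.0
        clip_strength = float(parts[2]) if len(parts) > 2 else 1.0
        keywords = [k.strip().lower() for k in keywords_part.split(",")]
        rules.append((keywords, path, strength, clip_strength))
    return rules

def loras_to_load(rules, prompt):
    """Return the rules whose keywords appear anywhere in the prompt,
    case-insensitively (any one keyword is enough)."""
    prompt_lower = prompt.lower()
    return [r for r in rules if any(k in prompt_lower for k in r[0])]
```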
___
## AUDIO NODES
* Random/Fixed Audio Picker
* Audio To Image Draft
* Gemini Audio Analyzer

___
**Random/Fixed Audio Picker**

> Segment Length:
> Set how long the audio clip you forward to Gemini Audio Analyzer should be in seconds (max 600 seconds)
>
> Start Time:
> Set how far into your selected audio clip the segment should begin, in seconds (or -1 to pick the start time at random)
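The two settings above interact roughly like this sketch (the `pick_segment` helper is an illustrative assumption, not the node's code): the segment is clamped to 600 s and to the clip length, and a start time of -1 picks a random start that still fits.

```python
import random

def pick_segment(total_seconds, segment_length, start_time):
    """Return (start, end) of the audio segment to forward."""
    # Clamp the segment to the 600 s maximum and to the clip length.
    segment_length = min(segment_length, 600, total_seconds)
    latest_start = max(0.0, total_seconds - segment_length)
    if start_time == -1:
        # Random start that leaves room for the full segment.
        start = random.uniform(0.0, latest_start)
    else:
        start = min(max(0.0, start_time), latest_start)
    return start, start + segment_length
```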
___
**Audio To Image Draft**

> Will load a system prompt located at `\custom_nodes\ComfyUI_Creepy_Nodes\assets\prompts\audio_keywords.txt`
>
> If you have additional or special instructions regarding how and what audio should be analyzed, you can enter the instructions in the text box.
___
**Gemini Audio Analyzer**

A lot of the code for this node is inspired by, or borrowed from, [Gemini 2.0 Flash Exp](https://github.com/ShmuelRonen/ComfyUI-Gemini_Flash_2.0_Exp)
> The API key will load automatically from `\custom_nodes\ComfyUI_Creepy_Nodes\assets\scripts\gemini_api_key.txt` if field is left empty.
> Pick between:
> * Gemini 2.0 Flash
> * Gemini 2.5 Pro
> * Gemini 2.5 Flash
___
## SWITCHES

* Multi Model Switch
* Multi VAE Switch
* Multi Clip Switch
* Multi Text Switch
These nodes work as you might expect. You can connect 3 different input nodes and decide which to use by entering a number between 1 and 3 in the node.
* Dynamic Model Switch
* Dynamic Clip Switch
* Dynamic VAE switch
* Dynamic Conditioning switch
* Dynamic Latent Switch
* Dynamic Image Switch
These nodes work differently. The node checks input 1 and, if there is a valid input in that slot, forwards it; if the first slot is empty or invalid, it moves on to the second. If there's a valid input in the second slot it uses that one, else it moves to the third. If no valid inputs are present, the node does nothing.
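The first-valid-input rule boils down to something like this sketch (illustrative, not the nodes' actual code):

```python
def dynamic_switch(*inputs):
    """Forward the first input that is not None;
    return None if no slot holds a valid input."""
    for value in inputs:
        if value is not None:
            return value
    return None
```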
## DELAY NODES
These nodes delay the execution of the node that follows them by x seconds.
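Conceptually, a delay node is just a passthrough that sleeps first. This hypothetical sketch shows the idea, not the nodes' actual code:

```python
import time

def delay_passthrough(value, seconds):
    """Sleep for the requested number of seconds,
    then forward the input unchanged."""
    time.sleep(seconds)
    return value
```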

## SPECIAL NODES
* Sanitize Filename
* Evaluater Node
* People Evaluation Node
* Custom Node Manager
* Load Batch From Dir
* Keyword Extractor
* Summary Writer
* Prompt Generator
* Gemini Token Counter
* IMG To IMG Conditioning
### Sanitize Filename
The _Sanitize Filename_ node will make sure that no invalid characters are forwarded to the _save image_ node.
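A minimal sketch of what such sanitizing might look like. The regex and the `sanitize_filename` helper are illustrative assumptions, not the node's actual implementation:

```python
import re

def sanitize_filename(name, replacement="_"):
    """Replace characters that are invalid in Windows/Unix filenames
    and trim trailing dots and spaces."""
    cleaned = re.sub(r'[<>:"/\\|?*\x00-\x1f]', replacement, name)
    return cleaned.rstrip(". ") or "untitled"
```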

___
### Evaluater Node
The _Evaluater Node_ fetches and forwards a system prompt to [Gemini 2.0 Flash Experimental](https://github.com/ShmuelRonen/ComfyUI-Gemini_Flash_2.0_Exp) node for evaluating and grading images.
It will give a short answer with just a number between 1-10 when using _evaluate_img.txt_

It will give a longer explanation to the reasoning behind the grading when using _evaluate_img_long.txt_

___
### People Evaluation Node
I made the _People Evaluation Node_ just for fun; it rates the attractiveness/sexiness of people in images. It currently has 4 settings:
* attractiveness_nice
* attractiveness_rude
* attractiveness_x
* attractiveness_xx
_Attractiveness_nice_

_Attractiveness_rude_

_Attractiveness_x_

_Attractiveness_xx_

___
### Custom Node Manager
This node has two scan modes:
* Validate Python

This will scan a directory for valid ComfyUI nodes. If a node is a valid ComfyUI node it will forward the information in the output.

* Check Libraries

This will scan a directory and gather information about imported libraries, and which nodes imported them. The output only lists nodes that actually use `import {module}` (nodes that don't require a specific library are skipped).

Any folder path can be set in the "directory" textbox. If left empty it will use `custom_nodes/creepy_nodes/assets/nodes` as its default root directory.
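An import scan like "Check Libraries" can be approximated with Python's standard `ast` module. This is an illustrative sketch of the idea, not the node's actual code:

```python
import ast

def imported_modules(source):
    """Collect the top-level module names imported by a Python source
    string, covering both 'import x' and 'from x import y' forms."""
    modules = set()
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                modules.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return sorted(modules)
```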
___
### Load Batch From Dir

A large part of the code for this node comes from [ComfyUI Inspire Pack](https://github.com/ltdrdata/ComfyUI-Inspire-Pack)
___
### Keyword Extractor

This node will extract keywords from an image. In the textbox, describe which types of keywords it should extract.
___
### Summary Writer

Basically the same as the Keyword Extractor, but it lets you add several files in `/custom_nodes/creepy_nodes/assets/summary.json` to pick from in a dropdown list.
___
### Prompt Generator

Basically the same as the System Prompt node.
___
### Gemini Token Counter

Estimates how many tokens your API call will cost.
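As a rough rule of thumb, English text averages about four characters per token. The heuristic below is only a stand-in for an exact count from the API, and the `estimate_tokens` helper is a made-up name for illustration:

```python
def estimate_tokens(text):
    """Very rough token estimate: ~4 characters per token for English.
    An exact count requires asking the model's own tokenizer."""
    return max(1, round(len(text) / 4))
```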
___
### IMG To IMG Conditioning

Largely based on the official [InstructPixToPixConditioning](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy_extras/nodes_ip2p.py) node
___
## SYSTEM PROMPT

This node automatically loads a predetermined system prompt into the [Gemini 2.0 Experimental node](https://github.com/ShmuelRonen/ComfyUI-Gemini_Flash_2.0_Exp) and transforms even short and inexact prompts into prompts suitable for Flux and Shuttle 3.1 together with the T5-XXL CLIP. I created this node to make workflows a little less confusing: there's no longer any need to worry about the system prompt or wonder where to write the instructions to Gemini.
The current system prompt is written to work for both text-to-image and image-to-image workflows. It's created to work with Gemini 2.0 Flash Experimental; it might work with other LLMs, but that's nothing I can guarantee. If you want to alter the actual system prompt, it is located in `/custom_nodes/creepy_nodes/assets/prompts/system_prompt.txt`
### EXAMPLE

___
### TESTS
I did some tests using 1 image and the same seed/setting, only changing the system prompt. The old system prompt I used was the following:
>You are an AI assistant specializing in crafting professional and effective prompts for the Flux model, suitable for the t5-xxl clip. You are specialized in creating prompts for generating realistic-looking images based on another image. When an image or text is provided, you should generate a concise and descriptive prompt that will create a realistic-looking image based on the traits of the image or text that is provided. The prompt should be between 150-300 tokens. The output should only show the final prompt, without any additional comments or instructions.
>

And this is the Image to Image and Text to Image that are created with the _system prompt node_.

These nodes doesn't require any extra installations. You do however need to install [Gemini 2.0 Flash Experimental](https://github.com/ShmuelRonen/ComfyUI-Gemini_Flash_2.0_Exp) and set up the API in order to use the _system prompt node_. Alternatively, you can try it with another LLM, but I have no idea how, or even if, that would work.