ComfyUI nodes: examples and tips from Reddit

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW.
I see that some nodes use JavaScript too, but unfortunately I'm not good with that. I put an example image/workflow in the most recent commit that uses a couple of the main ones, and the nodes are named pretty clearly, so if you have the extension installed you should be able to just skim through the menu and find them.

Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it.

This is an interesting implementation of that idea, with a lot of potential. For example, you can do side-by-side comparisons of workflows: one with only the base model and one with base + LoRA, and see the difference.

We wrote about why and linked to the docs in our blog, but this is really just the first step for us.

Is tag "looking at viewer" in list --> save.

See the high-res fix example, particularly the second-pass version.

I don't know why you don't want to use the Manager. If you install nodes with the Manager, a new folder is created in the custom_nodes folder, so if something is messed up after an installation you can sort the folders by modification date and remove the last one you installed. Then, through the ComfyUI Manager, you can download any missing custom nodes.

There are also Efficiency custom nodes that come pre-combined with several related things in one node, such as both prompts plus the resolution and model choice in one, etc. (Stuff that really should be in main rather than a plugin, but eh, =shrugs=.)

The ComfyUI-Wiki is an online quick-reference manual that serves as a guide to ComfyUI. It primarily focuses on the use of different nodes, installation procedures, and practical examples that help users engage effectively with ComfyUI. Soon there will also be examples showing what can be achieved with advanced workflows.

I ran it inside "ComfyUI\custom_nodes\ComfyUI-Stable-Video-Diffusion".

What are your favorite custom nodes (or node packs), and what do you use them for?

I show this in that tutorial because it is important for you to know this rule: whenever you work on a custom node, always remove it from the workflow before every test.

They are ugly, and a little rough, but they do the job in my workflows.

The way any node works is that the node is the workflow. So as long as you use the same prompt and the LLM gets to the same conclusion, that's the whole workflow.

I found it extremely difficult to wrap my head around initially, but after a few days of going through example nodes and the ComfyUI source I started being productive.

It doesn't have all the features, and for that I do occasionally have to switch back, but the node-style editor in Comfy is so much clearer, and being able to save and swap layouts is amazing.

Can't find any examples.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment.

Short version: you screenshot a Reddit announcement and have a Reddit account. Did you post this question (about the safetensor) in response to it?
Colab example (for anyone following this that needs it), in case you didn't find the example Colab script.

So instead of having a single workflow with a spaghetti of 30 nodes, it could be a workflow with 3 sub-workflows, each with 10 nodes, for example. If you want to try it, you can nest nodes together in ComfyUI (use the NestedNodeBuilder custom node).

With some nervous trepidation, I release my first node for ComfyUI: an implementation of the DemoFusion iterative mixing sampling process.

The ComfyUI Manager will identify what is missing and download it for you. Then find example workflows.

I am thinking of the scenario where you have generated, say, a thousand images.

Is there a function that runs when something is changed in a node? I'm trying to do it using Python, since I'm OK with it.

A few new nodes and some new functionality for rgthree-comfy went in recently.

I've been trying to do something similar to your workflow and ran into the same kinds of problems.

Do you have any example images to show what difference the samplers can make?
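On the question above about a function that runs when something is changed in a node: a ComfyUI custom node can define an optional IS_CHANGED classmethod, which the executor checks before each run; the node is re-executed whenever the returned value differs from the previous one. A minimal sketch (the node and its names are my own invention, not from any particular pack):

```python
# custom_nodes/example_node/__init__.py
class UppercaseText:
    @classmethod
    def INPUT_TYPES(cls):
        # "required" maps each input name to a (type, options) pair
        return {"required": {"text": ("STRING", {"default": "", "multiline": True})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"      # the method ComfyUI calls when the node executes
    CATEGORY = "utils"

    def run(self, text):
        return (text.upper(),)  # outputs are always returned as a tuple

    @classmethod
    def IS_CHANGED(cls, text):
        # Called before execution; the node re-runs when this value changes.
        # Returning float("NaN") would force re-execution on every queue.
        return text

# ComfyUI discovers the node through this mapping
NODE_CLASS_MAPPINGS = {"Uppercase Text (Example)": UppercaseText}
```

Code changes only take effect after a restart, since the module is loaded once at startup; that's part of why the rule above (remove the node from the workflow before every test) saves headaches.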
I made this to make it easier to change node colors in ComfyUI, with FlatUI / Material Design style colors.

Hi all, sorry if this seems obvious or has been posted before, but I'm wondering if there's any way to get some basic info nodes. For example, one that shows the image metadata like PNG info in A1111, or better still, one that shows the LoRA info so I can see what the trigger words and training data were, etc.

Try civitai.

Any idea what I am missing? Thanks.

Hey r/comfyui, I just published a new video going over the recent updates as ComfyUI reaches the end of the year. It covers the new SD 2.1 Turbo model, front-end improvements like group nodes, undo/redo, and rerouting primitives, and experimental features, plus a quick run-through of an example ControlNet workflow.

Is it possible to do that in ComfyUI? I am at the point where I need to filter out images based on a tag list: I should be able to skip the image if some tags are or are not in the list. Example: is tag "2girl" in list --> do not save. My research didn't yield much, so I might ask here before I start creating my own custom nodes.

For your all-in-one workflow, use the Generate tab. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to export them as nodes.

Like a lot of you, we've struggled with inconsistent (or nonexistent) documentation, so we built a workflow to generate docs for 1600+ nodes.

ComfyUI-Fal-API-Flux: a repository of custom nodes for ComfyUI.

I had implemented a similar process in the A1111-WebUI back then, and the results were good, but the code wasn't suitable for publication.

Ohhh nice, taking a look at this one :D No errors on install, already ahead of the game :P One suggestion so far: please confine your nodes to a folder specific to your pack rather than putting them in other subfolders or generically named folders; it would make them easier to find.

Chord-conditioned symbolic music generation, with a sound example. GitHub repo and ComfyUI node by kijai (only SD1.5 for the moment).

I see that ComfyUI is a better way to create. In other words, I'd like to know more about new custom nodes or inventive ways of using the more popular ones.

Conflict with UE nodes (Anything Everywhere): white areas appear, causing the UI to break when zooming in or out.

I was having trouble getting anything to look like the input image, but the issue was that I wasn't including the ControlNet (I thought it was only needed for posing), and I was having trouble loading the example workflows.

Here's an example of me using AnyNode in an image-to-image workflow.

Anyway, I'm a newbie and this is how I approach Comfy. Please understand me when I find this amusing. I wrote these to solve a problem I often have: I want to get some information (like a prompt) from an old image to reuse in a new workflow. Hope you like some of them!

You can, with Impact or Inspire nodes.

If a box is red, then it's missing.

To create this workflow, I wrote a Python script to wire up all the nodes.
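On wiring up all the nodes from a Python script, as mentioned above: one way is to build the graph in ComfyUI's API workflow format (the JSON you get from "Save (API format)") and POST it to a running instance. Node ids are arbitrary strings, links are [source_node_id, output_index] pairs, and the checkpoint filename below is a placeholder. A rough sketch:

```python
import json, urllib.request

g = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a red panda wearing a scarf"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "scripted"}},
}

# Queue the graph on a default local install
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": g}).encode(),
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read())
```

A script like this is also an easy way to generate many variants (different seeds, LoRAs, or checkpoints) for the side-by-side comparisons mentioned earlier.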
A checkpoint is your main model; LoRAs then add smaller models on top of it to vary the output in specific ways.

I have LoRA working, but I just don't know how to do ControlNet with this. I cannot find any decent examples or explanations of how this works or the best ways to implement it. Maybe if there were an easy, small example I could understand it.

I understand how outpainting is supposed to work in ComfyUI (workflow-wise).

That will get you up and running with all the ComfyUI-Annotation example nodes installed, and you can start editing from there.

I started experimenting with ComfyUI a couple of days ago and found the number of nodes required for a basic workflow stupidly high, so I was glad there were custom nodes that make it work the way ComfyUI should by default.

Any help will be appreciated. I only started making nodes today!

For example: swapping out one loader for another loader.

But I never used a node-based system before, and I also want to understand the basics of ComfyUI.

In the examples folder there are a few .png files. These have metadata which you can utilise by dragging the file into your ComfyUI window.

Custom nodes/extensions: ComfyUI is extensible, and many people have written some great custom nodes for it. I love downloading new nodes and trying them out. Note that I am not responsible if one of these breaks your workflows, your ComfyUI install, or anything else.

And is it possible to connect nodes without a wire? For example, you could create a node like Reroute that asks for a variable name; then, at any other position, you could load the variable node and select your previously defined variable from a dropdown.

These are just a few examples.

IPAdapter with use of attention masks is a nice example of the kind of tutorial I'm looking for.

Nodes are not always better. For many tasks, yes, but nodes can also make things way more complicated; for example, try creating some shader effects in a node-based shader editor. Some things are such that a few lines of code become a huge graph mess. So nodes are not better in every case, but they have their place.

Two nodes are selectors for style and effect, each with its own weight-control slider. Two more nodes are used to manage the strings: in the input fields you can type the portions of the prompt, and with the sliders you can easily set the relative weights. The Assembler node then collects all incoming strings and combines them into a single final prompt.

I need something that can help me apply an image (for example, a Midjourney image) to a face mocap (I know there are tools like ControlNet for this), but all of this for video.

The Python node, in this instance, is effectively used as a gate. The workflow takes a couple of prompt nodes, pipes them through a couple more, concatenates them, tests using Python, and ultimately adds to the prompt if the condition is met.
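Here's what the gate logic behind that test, and the tag-filtering discussion above, could look like in plain Python: the kind of snippet you could drop into a Python/eval-style node or a tiny custom node. The function and tag names are illustrative:

```python
def should_save(tags, require=frozenset({"looking at viewer"}),
                reject=frozenset({"2girl"})):
    """Save only if every required tag is present and no rejected tag is."""
    return require <= tags and not (reject & tags)

# e.g. tags parsed from a WD14-style tagger output string
tags = {t.strip() for t in "1girl, looking at viewer, smile".split(",")}
print(should_save(tags))  # True: required tag present, no rejected tag
```

Wire the boolean into whatever your save path switches on, and the workflow skips images whose tags fail the test.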
So you want to make a custom node? You looked it up online and found very sparse or intimidating resources? I love ComfyUI, but it has to be said: despite being several months old, its documentation surrounding custom nodes is god-awful.

I hope you'll enjoy the custom nodes.

For the record, you can multi-select nodes for updating in the custom nodes manager (if you want to update only a selection of nodes, for example, and not all of them at once). It's a little counter-intuitive, as the "select all" checkbox is disabled by default.

The first example is the panda with a red scarf, with less prompt bleeding of the red color thanks to conditioning concat.

As you get comfortable with ComfyUI, you can experiment and try editing a workflow.

Here's an example of using the nodes through the A8R8 interface with ControlNet scribble.

When I dragged the photo into ComfyUI, in the bottom left there were two nodes called "PrimitiveNode" (under the "Text Prompts" group). Now, if I go to Add Node -> utils -> Primitive, it adds a completely different node, although the node itself is called "PrimitiveNode". Same thing for the "CLIP Text Encode" node.

Hey everyone. Does this make sense? Just reply with a comment if you need any more assistance :)

Just a fellow ComfyUI dabbler, but I found it pretty hard to do various simple things that I'm used to (regarding math, saved prompts, and playing sounds), so I started making some custom nodes and figured I would share them.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img?

New tutorial: how to rent 1-8x 4090 GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.). Just follow the instructions and you'll have it set up in no time.

The custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example, or only accepting 2 inputs instead of infinitely many).

Release: AP Workflow 9.0 for ComfyUI, now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand-new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, and two types of automatic image selectors.

Best ComfyUI workflows, ideas, and nodes/settings.

It's basically just a mirror.

I know that several samplers allow having, for example, the number of steps as an input instead of a widget, so you can supply it from a primitive node and control the steps on multiple samplers at the same time.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked

If I open the .bat in Notepad, it says: .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build

I want to make sure the logic & math I'm using gets the results I want.

I was getting frustrated by the amount of overhead involved in wrapping simple Python functions to expose them as new ComfyUI nodes, so I decided to make a new decorator type to remove all the hassle from it.
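On that decorator idea: a toy version (my own sketch, not the actual @ComfyFunc implementation mentioned later) can read a function's type hints and build the node class that ComfyUI expects:

```python
import inspect

NODE_CLASS_MAPPINGS = {}  # ComfyUI collects registered nodes from this dict
PY_TO_COMFY = {str: "STRING", int: "INT", float: "FLOAT", bool: "BOOLEAN"}

def comfy_func(category="utils"):
    def wrap(fn):
        sig = inspect.signature(fn)
        inputs = {name: (PY_TO_COMFY[p.annotation],)
                  for name, p in sig.parameters.items()}
        node = type(fn.__name__, (), {
            "INPUT_TYPES": classmethod(lambda cls: {"required": inputs}),
            "RETURN_TYPES": (PY_TO_COMFY[sig.return_annotation],),
            "FUNCTION": "run",
            "CATEGORY": category,
            # ComfyUI calls run(**inputs); node outputs must be tuples
            "run": staticmethod(lambda **kw: (fn(**kw),)),
        })
        NODE_CLASS_MAPPINGS[fn.__name__] = node
        return fn
    return wrap

@comfy_func(category="math")
def add_ints(a: int, b: int) -> int:
    return a + b
```

One decorator line instead of a dozen lines of boilerplate per function is the whole appeal.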
I like all of my models individually, but you can get some really awesome styles out of experimenting: trying one out and swapping it for another, tweaking the output ratios, and just seeing what kind of results you get when you force two different checkpoints to have a baby together, lol. For example, I like to mix Excelsior with Arthemy Comics, or Sketchstyle, etc.

It would be great to have a set of nodes that can further process the metadata, for example extract the seed and prompt to re-use in the workflow.

If so, you can follow the high-res example from the GitHub.

What I meant was tutorials involving custom nodes, for example.

Having a computer science background, I feel the potential of ComfyUI is huge if some basic branching and looping components are added, to unleash the creativity of developers. However, the other day I accidentally discovered comfyui-job-iterator (ali1234/comfyui-job-iterator on GitHub: a for loop for ComfyUI).

I have an image that I want to do a simple zoom out on.

After each step, the first latent is downscaled and composited into the second, which is downscaled and composited into the third, etc. I'll post some more detailed examples when my node launches later this week, examining different ways you can help nudge the higher-resolution diffusion toward a good result despite the model being somewhat underpowered for the task.

I've been using A1111 for almost a year, and I've been using ComfyUI as my go-to for about a month now; it's so much better than 1111. So far I love the speed and the lower RAM requirement. I'm a basic user for now, but I want the deep dive.

I'm working on the upcoming AP Workflow 8.0 and want to add an Aesthetic Score Predictor function.

I'm not sure that custom script allows you to select a new checkpoint, but what it is doing can be done manually with more nodes.

You can connect the input and output on the node to any input or output on any other node.

I've watched a video about resizing and outpainting an image with inpaint ControlNet in Automatic1111.

These tools do make use of the WAS suite. That's just how it is for now.

For example, it would be very cool if one could place the node numbers on a grid (of customizable size) to define their positions.

This is the example animation I do with Comfy.

PSA: if you've used the ComfyUI_LLMVISION node from u/AppleBotzz, you've been hacked.

AnyNode does what you ask it to do.

This custom ComfyUI node supports Checkpoint, LoRA, and LoRA Stack models, offering features like bypass options.

Is there any real breakdown of how to use the rgthree Context and switching nodes? I just don't get how they function; my mind's busted. So is there any suggestion as to where to start?

Is there a debug or print node that will simply take the data passed out of a node and display the value in plain text/image as a debug aid (not as a generated image)?
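For the debug/print question just above: if none of the installed packs has one, a bare-bones custom node can simply dump whatever it receives to the console. The "*" wildcard input type is a community convention used by several node packs rather than an officially documented API, so treat this as a sketch:

```python
class DebugPrint:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"value": ("*",)}}  # "*" = accept any connection

    RETURN_TYPES = ()
    FUNCTION = "run"
    OUTPUT_NODE = True  # lets the node execute without downstream links
    CATEGORY = "utils"

    def run(self, value):
        print(f"[DebugPrint] {type(value).__name__}: {value!r}")
        return ()

NODE_CLASS_MAPPINGS = {"Debug Print (Example)": DebugPrint}
```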
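And on the metadata wish further up (pulling the seed and prompt back out of an old image): ComfyUI embeds the generating graph as JSON in a PNG's text chunks, typically the API-format graph under a "prompt" key and the UI graph under "workflow". A sketch using Pillow; the filename is a placeholder, and key names can vary between tools:

```python
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")   # placeholder path
graph = json.loads(img.info["prompt"])    # node_id -> {class_type, inputs}

for node_id, node in graph.items():
    if node["class_type"] == "KSampler":
        print("seed:", node["inputs"]["seed"])
    if node["class_type"] == "CLIPTextEncode":
        text = node["inputs"]["text"]
        if isinstance(text, str):         # may instead be a link to another node
            print("prompt:", text)
```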
It is looking great, but in my opinion improving the node system is more powerful, for example adding some nodes like the Efficiency nodes.

A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple and customizable front-ends for end-users. Something the community could share their node setups with; right now, having to go look up and check tutorials or example layouts for anything outside basic generation on various GitHubs is such a pain.

ComfyUI question: does anyone know how to use ControlNet (one or multiple) with the Efficient Loader & ControlNet Stacker node? A picture example of a workflow would help a lot. I'm looking to do the same, but I have no idea how A1111's automatic implementation of said ControlNet correlates with Comfy nodes.

LLaVA -> LLM -> AudioLDM-2: example workflow in the examples folder on GitHub. Also, you can listen to the music inside ComfyUI.

Hopefully this will be it: https://www.reddit.com/r/comfyui/s/JQVkyMTM5w

Update the VLM Nodes from GitHub. I've added the Structured Output node to VLM Nodes. You can add additional descriptions to fields and choose the attributes you want it to return; you can extract entities and numbers, classify prompts with given classes, and generate one specific prompt. Now you can obtain your answers reliably. I provide one example JSON to demonstrate how it works.

In a discussion around features ComfyUI could have, this idea arose: an Image Picker node, which pauses the workflow until you choose an image.

As it stands for now, I have seen you post about it several times that you are now able to "let ChatGPT write any node I want", but then your example is just addition of integers.

It would require many specific image-manipulation nodes.

You will need the custom nodes (obviously), the InsightFace models, the IPAdapter .bin file, the IPAdapter ControlNet model, and an SDXL depth model for ControlNet. And remember, SDXL does not play well with SD1.5, so that may give you a lot of your errors.

You're right, I should have been more specific. Python: a node that allows you to execute Python code written inside ComfyUI.

The node itself (or better, the LLM inside of it) writes the Python code that runs the process. You just tell it directly what to do, and it gives you the output you want. Or, at least, kinda.

I am so sorry, but my video is outdated now because ComfyUI has officially implemented SVD natively. Update ComfyUI and copy the previously downloaded models from the ComfyUI-SVD checkpoints to your Comfy models SVD folder.

Mirrored nodes: if you change anything in the node or its mirror, the other linked node will reflect the changes.

Fast Groups Muter & Fast Groups Bypasser: like their "Fast Muter" and "Fast Bypasser" counterparts, but collecting groups automatically in your workflow. Filter and sort them by their properties (right-click on the node and select "Node Help" for more info).

Fernicles SDTools V3: ComfyUI nodes. First off, it's a good idea to get the custom nodes off Git, specifically the WAS Suite, Derfuu's nodes, and Davemane's nodes.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). Workflow included.

But standard A1111 inpaint works mostly the same as this ComfyUI example you provided.

I'm new to Stable Diffusion and I need some help finding the right tools for my ideal workflow.

The constant noise for a whole batch doesn't exist in base Comfy yet (there's a PR about it), so I made a simple node to generate the noise instead, which can then be used as the latent input in the advanced/custom sampler nodes with "add_noise" off.
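A sketch of that constant-noise idea in torch: sample the noise once and repeat it across the batch dimension, then hand it to a sampler whose own noise adding is off. The shape follows SD latents (batch, 4 channels, height/8, width/8); the node wrapper around it is omitted:

```python
import torch

def constant_batch_noise(batch_size: int, height: int, width: int,
                         seed: int) -> torch.Tensor:
    gen = torch.Generator().manual_seed(seed)
    one = torch.randn((1, 4, height // 8, width // 8), generator=gen)
    return one.repeat(batch_size, 1, 1, 1)  # identical noise for every sample

noise = constant_batch_noise(4, 512, 512, seed=42)
assert torch.equal(noise[0], noise[3])
```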
*Note: I'm not exactly sure which custom node is causing the issue, but I was able to resolve the problem after disabling these custom nodes. (There may be additional nodes not included in this list.) If you still experience the same issue after disabling them, let me know, and I'll share any additional nodes I disabled.

Read the nodes' installation information on GitHub.

You type what you want its function to be in your ComfyUI workflow. The @ComfyFunc decorator inspects your function's annotations to compose the appropriate node definition for ComfyUI (the decorator sketch earlier shows the general idea).

That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

Thanks a lot for this amazing node! I've been wanting something like it for a while, to compare various versions of one image.

Sorry if I seemed greedy, but for upscale image comparing, I think the best tool is Upscale.media, which can zoom in and move around simultaneously, making it easy to check details of big images.

Just reading the custom node repos' code shows that the authors have a lot of knowledge of how ComfyUI works and how to interface with it, but I am a bit lost (in the large amount of code in ComfyUI's repo and the large number of custom node repos) as to how to get started. So when I saw the recent Generative Powers of Ten video on r/StableDiffusion, I was pretty sure the nodes to do it already exist in ComfyUI.