ComfyUI workflow examples from Reddit
Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the Preliminary, Base and Refiner setups. Welcome to the unofficial ComfyUI subreddit.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic.

https://youtu.be/ppE1W0-LJas - the tutorial. That's exactly what I ended up planning: I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass into the node whatever image I like. I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel. It works by converting your workflow .json files into an executable Python script that can run without launching the ComfyUI server.

This guide is about how to set up ComfyUI on your Windows computer to run Flux. The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, but the default workflow CLIPText on the right. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it. Hi everyone, I'm working on a project to generate furnished interiors from images of empty rooms using ComfyUI and Stable Diffusion, but I want to avoid using inpainting. The idea of this workflow is to sample different parts of the sigma_min, cfg_scale, and steps space with a fixed prompt and seed.
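The "executable Python script" approach works because a workflow saved in ComfyUI's API format is just a JSON graph: node ids map to a class_type plus inputs, and a link is encoded as [source_node_id, output_index]. As a rough illustration (the three-node graph below is a hand-written stand-in, not a complete runnable ComfyUI graph), you can recover a valid execution order from the link structure alone:

```python
import json
from graphlib import TopologicalSorter

# A tiny hand-written workflow in ComfyUI's "API format" (what the
# "Save (API Format)" menu entry exports). A link is [node_id, output_index].
workflow = json.loads("""
{
  "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
  "2": {"class_type": "CLIPTextEncode", "inputs": {"text": "a castle", "clip": ["1", 1]}},
  "3": {"class_type": "KSampler", "inputs": {"model": ["1", 0], "positive": ["2", 0], "seed": 42}}
}
""")

def execution_order(wf):
    # A dependency is any input value shaped like [node_id, output_index]
    deps = {
        nid: {v[0] for v in node["inputs"].values()
              if isinstance(v, list) and len(v) == 2 and v[0] in wf}
        for nid, node in wf.items()
    }
    return list(TopologicalSorter(deps).static_order())

print(execution_order(workflow))  # loaders first, sampler last
```

Topological order is exactly what a standalone script needs: run each node's operation once all of its upstream nodes have produced their outputs.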
Here is an example of 3 characters, each with its own pose, outfit, features, and expression:
Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious
Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use - img2img, controlnet, upscale and all. Create animations with AnimateDiff. That's where I'd gotten the second workflow I posted from, which got me going. But mine do include workflows, for the most part, in the video description. The WAS suite has some workflow stuff in its GitHub links somewhere as well. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

I recently switched from A1111 to ComfyUI to mess around with AI image generation. It would require many specific image-manipulation nodes to cut out an image region, pass it through the model, and paste it back. Just base sampler and upscaler. If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes". I think the perfect place for them is the Wiki on GitHub. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

[If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] Warning: inside the workflow, you will find a box with a note containing instructions and specifications on the settings to optimize its use.
LoRA selector (for example, download the SDXL LoRA example from StabilityAI and put it into ComfyUI\models\lora\). VAE selector (download the default VAE from StabilityAI and put it into \ComfyUI\models\vae\) - just in case in the future there's a better VAE, or a mandatory VAE for some models, use this selector. Restart ComfyUI.

Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4. Civitai has a few workflows as well. Workflow image with generated image.

But standard A1111 inpaint works mostly the same as this ComfyUI example you provided. ComfyUI could have workflow screenshots like the example repo has, to demonstrate possible usage and also the variety of extensions. Only the LCM Sampler extension is needed, as shown in this video. In addition, I provide some sample images that can be imported into the program. If you have any of those generated images in the original PNG, you can just drop them into ComfyUI and the workflow will load. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Still working on the whole thing, but I got the idea down. And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Ignore the prompts and setup. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. I think it was 3DS Max. The sample prompt as a test shows a really great result. A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. Flux Schnell is a distilled 4-step model.
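The "drop the PNG and the workflow loads" behaviour works because ComfyUI embeds the workflow JSON in the image's PNG metadata (text chunks; the graph is stored under a "workflow" key, generation inputs under "prompt" - treat the exact keys as an assumption here). A stdlib-only sketch of reading that chunk; the stand-in "PNG" below contains just the signature and a text chunk, no real image data:

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype, data):
    # PNG chunk layout: 4-byte length, 4-byte type, data, 4-byte CRC
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", zlib.crc32(ctype + data))

def extract_workflow(png_bytes):
    # Scan chunks for a tEXt chunk whose keyword is "workflow";
    # ComfyUI embeds the node graph there as JSON.
    pos = len(PNG_SIG)
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            if key == b"workflow":
                return json.loads(value)
        pos += 12 + length  # length + type + data + CRC
    return None

# Build a tiny stand-in PNG containing an embedded workflow
graph = {"nodes": [{"id": 1, "type": "KSampler"}]}
png = PNG_SIG + chunk(b"tEXt", b"workflow\x00" + json.dumps(graph).encode())
print(extract_workflow(png))  # {'nodes': [{'id': 1, 'type': 'KSampler'}]}
```

This also explains the "metadata is not complete" failure mode mentioned below: images re-saved or re-compressed by other tools typically strip these text chunks, so the dragged file no longer carries a workflow.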
My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. And it got very good results. Second pic. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. The examples were generated with the RealisticVision 5.1 checkpoint.

Table of contents. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. are welcome. You can find the workflow here and the full image with metadata here. Starting workflow. SDXL default ComfyUI workflow. I then just sort of pasted them together. AP Workflow 9.0 for ComfyUI. Flux.1 ComfyUI install guidance, workflow and example. 6 min read. Everything else is the same (for 12 GB VRAM, max is about 720p resolution). Hopefully this will be useful to you.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. It's SD1.5 with LCM, at 4 steps and 0.2 denoise. You can then load or drag the following image in ComfyUI to get the workflow. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. It's a simpler setup than u/Ferniclestix uses, but I think he likes to generate and inpaint in one session, whereas I generate several images, then import them and inpaint later (like this). Workflow. Please share your tips, tricks, and workflows for using this software to create your AI art. Upscaling ComfyUI workflow.
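Several snippets above come down to "put this file in that folder" (flux1-dev.sft into models/unet, LoRAs into the lora folder, and so on). Here is a small hypothetical helper that lists model files the way ComfyUI's loader dropdowns discover them; the folder names follow a default install layout (models/checkpoints, models/loras, models/vae, models/unet) and the extension set is an assumption:

```python
from pathlib import Path

def scan_models(root):
    # Folders a default ComfyUI install scans on startup (assumed layout);
    # a file in the wrong folder simply never appears in the matching
    # loader node's dropdown.
    subdirs = ("checkpoints", "loras", "vae", "unet")
    exts = {".safetensors", ".sft", ".ckpt", ".pt"}
    found = {}
    for sub in subdirs:
        folder = Path(root) / "models" / sub
        if folder.is_dir():
            found[sub] = sorted(p.name for p in folder.iterdir() if p.suffix in exts)
    return found
```

Running `scan_models("/path/to/ComfyUI")` before launching is a quick sanity check that a freshly downloaded file actually landed where the loader will look for it.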
To make the differences somewhat easier to see, the above image is at 512x512. Seems very hit and miss; most of what I'm getting looks like 2D camera pans. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. No LoRAs, no fancy detailing (apart from face detailing). You can encode then decode back to a normal KSampler with a 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is. Run any ComfyUI workflow with zero setup (free & open source).

This by Nathan Shipley didn't use this exact workflow, but it is a great example of how powerful and beautiful prompt scheduling can be. Share, discover, & run thousands of ComfyUI workflows. EDIT: For example, this workflow shows the use of the other prompt windows. Step 2: Download this sample image. 86s/it on a 4070 with the 25-frame model, 2.75s/it with the 14-frame model. But let me know if you need help replicating some of the concepts in my process. Ending workflow. This is just a simple node build off what's given and some of the newer nodes that have come out. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.
The 0.2 denoise fixes the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise. Please keep posted images SFW. You can construct an image generation workflow by chaining different blocks (called nodes) together. Or through searching Reddit - the ComfyUI manual needs updating, imo. That being said, here's a 1024x1024 comparison also. Merging 2 images together. But as a base to start from, it'll work. It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but this can be changed to whatever. I found it very helpful.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Img2Img ComfyUI workflow. ControlNet Depth ComfyUI workflow. Comfy's inpainting and masking aren't perfect. Surprisingly, I got the most realistic images of all so far. You can then load or drag the following image in ComfyUI to get the workflow. If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well. (Same seed, etc., etc., of course.) It's nothing spectacular, but it gives good consistent results.

Breakdown of workflow content. You can find the Flux Dev diffusion model weights here. The video is just a screenshot of the workflow I used in ComfyUI to get the output files. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt. This workflow by Antzu is a good example of prompt scheduling, which is working well in Comfy thanks to Fitzdorf's great work.
The workflow posted here relies heavily on useless third-party nodes from unknown extensions. Just my two cents. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values. A higher clip skip in A1111 (lower in ComfyUI's terms, i.e. more negative) equates to LESS detail in CLIP (not to be confused with detail in the image). I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). I put the workflow to the test by creating people with hands, etc. A1111 has great categories like Features and Extensions that simply show what the repo can do, what addons are out there, and all that stuff.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. For your all-in-one workflow, use the Generate tab. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.
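The A1111-to-ComfyUI clip-skip relationship described above can be written down directly. This assumes the commonly cited mapping (A1111 clip skip N corresponds to ComfyUI's CLIPSetLastLayer node set to -N); treat it as a rule of thumb rather than a guarantee:

```python
def a1111_to_comfy_clip_skip(clip_skip: int) -> int:
    # A1111 counts skipped layers from the end (1 = use the final CLIP layer);
    # ComfyUI's CLIPSetLastLayer takes a negative layer index instead, so a
    # higher A1111 value maps to a more negative ComfyUI value (less detail).
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip

print(a1111_to_comfy_clip_skip(2))  # -2, the usual anime-model setting
```

So when a model card says "clip skip 2", the equivalent ComfyUI setup is a CLIPSetLastLayer node at -2 between the checkpoint loader and the text encoders.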
Jul 28, 2024 · Over the last few months I have been working on a project with the goal of allowing users to run ComfyUI workflows from devices other than a desktop, as ComfyUI isn't well suited to devices with smaller screens.

4 - The best workflow examples are through the GitHub examples pages. Is there a workflow with all features and options combined together that I can simply load and use? 2/ Run the step 1 workflow ONCE - all you need to change is where the original frames are and the dimensions of the output that you wish to have. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well. If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Aug 2, 2024 · Flux Dev. It covers the following topics: introduction to Flux.1; overview of the different versions of Flux.1; Flux hardware requirements; how to install and use Flux.1 with ComfyUI. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. You can't change clip skip and get anything useful from some models (SD2.0 and Pony, for example - Pony, I think, always needs 2) because of how their CLIP is encoded.
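Running workflows from non-desktop devices usually means talking to the ComfyUI server over HTTP instead of through the browser UI. A minimal sketch, assuming the default server address 127.0.0.1:8188 and a workflow exported via "Save (API Format)"; the client id string is a hypothetical placeholder:

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # ComfyUI's default listen address

def build_payload(workflow, client_id="phone-client"):
    # The server expects the API-format graph wrapped under a "prompt" key
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def queue_workflow(workflow):
    # POST /prompt queues the graph for execution; the JSON response carries
    # a prompt_id that can be polled via /history/<prompt_id> for outputs.
    req = urllib.request.Request(
        SERVER + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because this is plain HTTP, any device on the same network can queue jobs - which is essentially what remote-control front-ends for ComfyUI build on.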