
ComfyUI inpainting tutorial (Reddit). Mar 19, 2024 · Tips for inpainting: work on one small area at a time.

It will automatically load the correct checkpoint each time you generate an image, without you having to do it manually. Installing A1111 is complicated and annoying to set up; most people would have to watch YouTube tutorials just to get it installed properly.

ControlNet, on the other hand, conveys your intent in the form of images rather than text.

I loaded it up, fed an image (the same image, FYI) into the two image loaders, and pointed the batch loader at a folder of random images; it produced an interesting but not usable result. The center image flashes through the 64 random images it pulled from the batch loader, and the outpainted portion seems to correlate to them.

With ComfyUI leading the way and an empty canvas in front of us, we set off on this adventure.

Hi, I've been using both ComfyUI and Fooocus, and the inpainting feature in Fooocus is crazy good, whereas in ComfyUI I was never able to create a workflow that removes or changes clothing and jewelry in real-world images without altering the skin tone. EDIT: Fix Hands - Basic Inpainting Tutorial | Civitai (Workflow Included). It's not perfect, but definitely much better than before. Thanks!

Don't install ALL the suggested nodes from ComfyUI Manager's "install missing nodes" feature! It will lead to conflicting nodes with the same name and a crash.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting. Hey hey, super long video for you this time; it covers how you can use external programs to do inpainting. And yes, it's long-winded; I ramble. Also, check who you're responding to; just saying, I'm not the OP of this question. Try Civitai.

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. Here is a version of what you were thinking: prediffusion with an inpainting step.

The first image is the original background, from which the background remover crappily removed the background, right? The others look much worse: inpainting is not really capable of filling in an entire background without it looking like a cheap background replacement, plus unwanted artifacts appear. It might help to check out the advanced masking tutorial, where I do a bunch of things with masks, but I haven't really covered upscale processes in conjunction with inpainting yet.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion. The workflow works well with high-resolution images + SDXL + SDXL Lightning + FreeU v2 + Self-Attention Guidance + Fooocus inpainting + SAM + manual mask composition + LaMa models + upscaling, IPAdapter, and more. The resources for inpainting workflows are scarce and riddled with errors. I will record the tutorial ASAP.

This will open the live painting thing you are looking for. Add a 'Load Mask' node and a 'VAE Encode (for Inpainting)' node, and plug the mask into the latter.
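To make that last tip concrete, here is a minimal sketch of the mask-plus-'VAE Encode (for Inpainting)' wiring written as a ComfyUI API-format prompt. The node class names (CheckpointLoaderSimple, LoadImage, VAEEncodeForInpaint, KSampler, VAEDecode, SaveImage) are the stock ComfyUI ones as far as I know, but the checkpoint name, image file, prompt text, and sampler settings are placeholders, so treat this as an outline rather than a drop-in workflow.

```python
# Sketch of a bare-bones inpaint graph in ComfyUI's API ("prompt") format.
# File names and prompt text are placeholders; connections are ["node_id", output_index].
inpaint_prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_inpainting.safetensors"}},   # placeholder checkpoint
    "2": {"class_type": "LoadImage",                                  # outputs: IMAGE (0), MASK (1)
          "inputs": {"image": "source.png"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a calico cat sitting on the sofa", "clip": ["1", 1]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "5": {"class_type": "VAEEncodeForInpaint",                        # 'VAE Encode (for Inpainting)'
          "inputs": {"pixels": ["2", 0], "mask": ["2", 1],
                     "vae": ["1", 2], "grow_mask_by": 6}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},                                # VAE-for-inpaint wants full denoise
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "inpaint"}},
}
```

The same dict can be queued over HTTP, as sketched in the next snippet.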
Inpainting a cat with the v2 inpainting model. Inpainting a woman with the v2 inpainting model. It also works with non-inpainting models. So, the work begins. The Clipdrop "uncrop" gave really good results.

Zero to Hero ControlNet Extension Tutorial - Easy QR Codes - Generative Fill (inpainting / outpainting) - 90 Minutes - 74 Video Chapters - Tips - Tricks - How To.

In fact, there's a lot of inpainting stuff you can do with ComfyUI that you can't do with Automatic1111. This was not an issue with the WebUI, where I can say: inpaint a certain area. Try the SD.Next fork of the A1111 WebUI, by Vladmandic.

Updated: Inpainting only on masked area in ComfyUI, + outpainting, + seamless blending (includes custom nodes, workflow, and video tutorial). Here is a little demonstration/tutorial of how I use Fooocus inpainting. Thank you for this interesting workflow. In addition to whole-image inpainting and mask-only inpainting, I also have other workflows. ComfyUI basics tutorial. Thank you, here.

You must be mistaken; I will reiterate, I am not the OP of this question. But basically, if you are doing manual inpainting, make sure the seed of the sampler producing your inpainting source image is set to fixed; that way the inpainting runs on the same image you used for masking. The other inpainting workflows have too many nodes and are too messy.

However, due to its more stringent requirements, ControlNet should be used carefully: while it can generate the intended images, conflicts between the AI model's interpretation and ControlNet's enforcement can degrade quality. If you have any questions, please feel free to leave a comment here or on my Civitai article.

The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt from an image, generating a color gradient, or batch-loading images. In part two I'll cover compositing and external image manipulation, following on from this tutorial.

I just recorded this video tutorial that explains, in just ten minutes, how to do very fast inpainting only on masked areas in ComfyUI. I've seen a lot of comments about people having trouble with inpainting, and some saying that inpainting is useless.

Using text has its limitations in conveying your intentions to the AI model. Initiating a workflow in ComfyUI: some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. Both are quick-and-dirty tutorials without too much rambling; no workflows are included because of how basic they are. Stable Diffusion ComfyUI Face Inpainting Tutorial (part 1). ComfyUI Master Tutorial - Stable Diffusion XL (SDXL) - Install On PC, Google Colab (Free) & RunPod, SDXL LoRA, SDXL InPainting.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. The goal of this tutorial is to give an overview of a method I'm working on to simplify the process of creating manga, or comics. In this case, I am trying to create Medusa, but the base generation leaves much to be desired. I want to inpaint at 512px (for SD1.5). The tools are hidden. Masquerade nodes are awesome; I use some of them in my compositing tutorial.
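Since "initiating a workflow in ComfyUI" came up above: when you drive ComfyUI from a script instead of the browser, queueing a workflow is just a POST to the /prompt endpoint of the running server, and the "fixed seed" advice simply means hard-coding the seed value in the KSampler inputs. This is a minimal sketch assuming a default local install on port 8188; queue_prompt is a hypothetical helper name, not part of ComfyUI itself.

```python
# Sketch: queue an API-format workflow against a locally running ComfyUI instance.
import json
import urllib.request

def queue_prompt(prompt: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST a workflow dict to ComfyUI's /prompt endpoint and return its JSON response."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (with a workflow dict like the inpaint sketch above):
# response = queue_prompt(inpaint_prompt)
# print(response.get("prompt_id"))   # id you can use to poll /history for the result
```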
Not only was I able to recover a 176x144-pixel, 20-year-old video with this; it also supports the brand-new SD15 model for the Modelscope nodes by ExponentialML, an SDXL Lightning upscaler (in addition to the AD LCM one), and a SUPIR second stage, for a gorgeous 4K native output from ComfyUI!

It's a good idea to use the 'Set Latent Noise Mask' node instead of the VAE-for-inpainting node. Alternatively, use a 'Load Image' node and connect both of its outputs (image and mask), so the Set Latent Noise Mask path uses your image and your masking from the same node.

Detailed ComfyUI Face Inpainting Tutorial (Part 1). Again, I would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice. The Inpaint Crop and Stitch nodes can be downloaded using ComfyUI-Manager; just look for "Inpaint-CropAndStitch".

In this step we need to choose the model for inpainting. In A1111, when you change the checkpoint, it changes for all the active tabs. The following images can be loaded in ComfyUI to get the full workflow. ComfyUI Manager will identify what is missing and download it for you; if a box is red, then that node is missing. Play with masked content to see which option works best. And remember, SDXL does not play well with SD1.5, so that may be the source of a lot of your errors.

Node-based editors are unfamiliar to lots of people, so even with the ability to load images as workflows, people might get lost or overwhelmed to the point where it turns them off, even though they could handle it (like how people have an "ugh" reaction to math). Basically, it doesn't open after downloading (v.22, the latest one available).

Or you could use a photo editor like GIMP (free), Photoshop, or Photopea, make a rough fix of the fingers, and then do an img2img pass in ComfyUI at low denoise (0.3-0.6); you can then run it through another sampler if you want to try to get more detail.

It works with any SDXL model. In Automatic1111, we could control how much to change the source image by setting the denoising strength. It is actually faster for me to load a LoRA in ComfyUI than in A1111. I wanted to share my approach for generating multiple hand-fix options and then choosing the best one. It also lets us customize the process, making sure each step is tailored to our inpainting objectives.

The most direct method in ComfyUI is using prompts. No ControlNet, no ADetailer, no LoRAs, no inpainting, no editing, no face restoring, not even Hires Fix (and obviously no spaghetti nightmare). I don't think a lot of people realize how well it works (I didn't until recently).

I have a ComfyUI inpaint workflow set up based on SDXL, but it seems to go for maximum deviation from the source image. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. There are tutorials covering upscaling.

Hi, I am struggling to find any help or tutorials on how to connect inpainting using the Efficiency Loader. I'm new to Stable Diffusion, so it's all a bit confusing. Does anyone have a screenshot of how it is connected? I just want to see what nodes go where.
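Here is what the 'Set Latent Noise Mask' alternative mentioned above looks like as an API-format sketch: the whole image is encoded normally, the mask from the same Load Image node is attached to the latent, and the sampler can then run at a partial denoise. Node class names are the stock ComfyUI ones to my knowledge; the model, file names, prompts, and the 0.5 denoise are placeholder choices.

```python
# Sketch of inpainting via 'Set Latent Noise Mask' (partial denoise allowed).
latent_mask_prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "any_sd15_model.safetensors"}},    # a normal (non-inpaint) checkpoint
    "2": {"class_type": "LoadImage",                                  # both outputs are used:
          "inputs": {"image": "source.png"}},                         # IMAGE -> encode, MASK -> noise mask
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "SetLatentNoiseMask",
          "inputs": {"samples": ["3", 0], "mask": ["2", 1]}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a silver necklace, detailed", "clip": ["1", 1]}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "deformed, artifacts", "clip": ["1", 1]}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["5", 0], "negative": ["6", 0],
                     "latent_image": ["4", 0], "seed": 7, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.5}},                                # partial denoise keeps more of the original
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "latent_mask_inpaint"}},
}
```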
This post hopes to bridge the gap by providing the following bare-bones inpainting examples with detailed instructions for ComfyUI. I just recently added the inpainting function to it; I was working on drawing versus rectangles, lol.

The normal inpainting flow diffuses the whole image but pastes only the inpainted part back on top of the uninpainted one (mainly because, to avoid size mismatches, it's a good idea to keep the processes separate).

Link to my setup. I am now just setting up ComfyUI and I have issues (already, LOL) with opening the ComfyUI Manager from CivitAI. Tutorial 7 - LoRA Usage. Jan 10, 2024 · This method simplifies the process.

Thanks for the guide! What is your experience with how image resolution affects inpainting? I'm finding images must be 512 or 768 pixels (the resolution of the training data) for the best img2img results if you're trying to retain a lot of the structure of the original image, but maybe that doesn't matter as much when you're making broad changes.

As we delve deeper into the application and potential of ComfyUI in the field of interior design, you may have developed a strong interest in this innovative AI tool for generating images.

Currently I am following the inpainting workflow from the GitHub example workflows. You can construct an image generation workflow by chaining different blocks (called nodes) together. I WILL NOT respond to private messages.

What works: it successfully identifies the hands and creates a mask for inpainting. What does not work: it does not create anything close to the desired result. All suggestions are welcome. If there is anything you would like me to cover in a ComfyUI tutorial, let me know. Below I have set up a basic workflow.

I create a mask by erasing the part of the image that I want inpainted, using Krita. It took me hours to get a workflow I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want, so I convert the mask to an image, blur the image, then convert it back to a mask, as sketched below) and use 'only masked area', where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

There are several ways to do it. In my understanding, their implementation of the SDXL Refiner isn't exactly as recommended by Stability AI, but if you are happy using just the base model (or you are happy with their approach to the Refiner), you can use it today to generate SDXL images. You can achieve the same flow with the Detailer from the Impact Pack. I have a wide range of tutorials with both basic and advanced workflows. (I will be sorting out workflows for the tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.)

The problem with it is that the inpainting is performed on the whole-resolution image, which makes the model perform poorly on already-upscaled images. I believe Fooocus has its own inpainting engine for SDXL. Midjourney may not be as flexible as ComfyUI in controlling interior design styles, making ComfyUI a better choice. Make sure you use an inpainting model. A1111 is REALLY unstable compared to ComfyUI. Then find example workflows.
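The mask-feathering trick described above (mask to image, blur, image back to mask) maps onto three stock ComfyUI nodes. Below is a minimal fragment, assuming node "2" is the Load Image node whose mask you want to soften; the blur radius and sigma are arbitrary placeholder values.

```python
# Fragment (API format) of the feathering chain: MaskToImage -> ImageBlur -> ImageToMask.
feather_nodes = {
    "10": {"class_type": "MaskToImage",
           "inputs": {"mask": ["2", 1]}},                  # the hard-edged mask
    "11": {"class_type": "ImageBlur",
           "inputs": {"image": ["10", 0], "blur_radius": 8, "sigma": 4.0}},
    "12": {"class_type": "ImageToMask",
           "inputs": {"image": ["11", 0], "channel": "red"}},
    # ["12", 0] is the softened mask; feed it to SetLatentNoiseMask or
    # VAEEncodeForInpaint in place of the original mask.
}
```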
I decided to do a short tutorial about how I use it. ComfyUI also has a mask editor, which can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor". https://openart.ai/workflows/-/-/qbCySVLlwIuD9Ov7AmQZFlux

Flux Inpaint is a feature related to image generation models, particularly those developed by Black Forest Labs. Jan 20, 2024 · Inpainting in ComfyUI has not been as easy and intuitive as in AUTOMATIC1111. I don't think "if you're too newb to figure it out, try again later" is a productive way to introduce a technique. If it doesn't, here's a link to download the PNG config image. Tutorial-wise, there are a bunch of images that can be loaded as a workflow by ComfyUI: you download the PNG and load it.

In this tutorial I compare all the main inpainting solutions in ComfyUI: BrushNet, PowerPaint, Fooocus, UNet inpaint checkpoints, SDXL ControlNet inpaint, and SD1.5 inpaint checkpoints, plus a normal checkpoint with and without Differential Diffusion. Just install these nodes: Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors, Derfuu's Derfuu_ComfyUI_ModdedNodes, EllangoK's ComfyUI-post-processing-nodes, and BadCafeCode's Masquerade Nodes.

I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. SD1.5 inpainting tutorial. Successful inpainting requires patience and skill; it's a bit annoying in ComfyUI. For some reason, it struggles to create decent results. I just created my first upscale layout last night and it's working (slow on my 8 GB card, but the results are pretty), and I'm eager to see what your approaches to such things, LoRAs, inpainting, etc. look like.

I created a mask using Photoshop (you could just as easily google one or sketch a scribble, white on black, and tell it to use a channel other than the alpha channel, because if you are half-assing it you won't have one). I am creating a workflow that allows me to fix hands easily using ComfyUI. In the positive prompt, I described that I want an interior design image with a bright living room and rich details. Mine do include workflows, for the most part, in the video description.

You want to use VAE-for-inpainting OR Set Latent Noise Mask, not both. It may be possible with some ComfyUI plugins, but it would still require a very complex pipe of many nodes. While I'd personally like to generate rough sketches that I can use as a frame of reference when drawing later, we will work on creating full images that you could use to create entire working pages.

What do you mean by "change the masked area not very drastically"? Maybe change the CFG or number of steps, try a different sampler, and finally make sure you're using an inpainting model. ComfyUI's inpainting and masking aren't perfect. Here's what I've got going on; I'll probably open-source it eventually. All you need to do is link your ComfyUI URL, internal or external, as long as it's a ComfyUI URL. TLDR, workflow: link.

Does anyone have links to tutorials for "outpainting" or "stretch and fill" (expanding a photo by generating noise via prompt but matching the photo)? I've done it in Automatic1111, but it hasn't given the best results; I could spend more time and get better, but I've been trying to switch to ComfyUI. And yes, this is arcane as fk, and I have no idea why some of the workflows are shared this way. Tutorial 6 - upscaling.
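If you would rather script the "scribble white on black" mask than paint it in Photoshop or Photopea, something like the following works. It is just a sketch using Pillow, and the ellipse coordinates are placeholders for the region you want regenerated.

```python
# Sketch: build a rough white-on-black mask outside ComfyUI (white = area to inpaint).
from PIL import Image, ImageDraw

src = Image.open("source.png")                    # the image you plan to inpaint
mask = Image.new("L", src.size, 0)                # start fully black (keep everything)
draw = ImageDraw.Draw(mask)
draw.ellipse((220, 140, 380, 300), fill=255)      # white blob over the region to redo
mask.save("source_mask.png")                      # load this next to the image in ComfyUI
```

The saved PNG can then be brought into ComfyUI with a mask-loading node, or converted from a chosen color channel with Image To Mask, which is the "use a channel other than the alpha channel" idea from above.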
Setting the crop_factor to 1 considers only the masked area for inpainting, while increasing the crop_factor incorporates context around the mask as well. I talk a bunch about the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow.

VAE inpainting needs to be run at 1.0 denoising, but Set Latent Noise Mask denoising can use the original background image, because it masks with noise instead of starting from an empty latent.

I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. Lol, that's silly; it's a chance to learn stuff you don't know, and that's always worth a look. Great video! I've gotten this far up to speed with ComfyUI, but I'm looking forward to your more advanced videos. I am fairly new to ComfyUI and have a question about inpainting, but hopefully this will be useful to you.

Here are some take-homes for using inpainting. Link: Tutorial: Inpainting only on masked area in ComfyUI. The only references I've been able to find mention this inpainting model being used with raw Python or Auto1111. Tutorials on inpainting in ComfyUI: inpainting with an inpainting model; inpainting with a standard Stable Diffusion model. A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. After 10 days, my new inpainting workflow is finally ready to run in ComfyUI.

I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced, without much success. From my limited knowledge, you could try to mask the hands and inpaint afterwards (it will either take longer or you'll get lucky). And now for part two of my "not SORA" series.

One of the strengths of ComfyUI is that it doesn't share the checkpoint with all the tabs. A checkpoint is your main model, and LoRAs add smaller models to vary the output in specific ways.

For "only masked" inpainting, using the Impact Pack's Detailer simplifies the process: you can mask the area to inpaint and use MaskToSEGS with DetailerForEach to crop only the masked area plus the surrounding area specified by crop_factor, as illustrated in the sketch below.

I'd especially like to just make it an image loader instead of generating a new one. Could I get some help with this? I'd appreciate it very much; my config is inside the flower picture, though I don't know if Reddit keeps it.

Hi, is there an analogous workflow or custom node for the WebUI's "Masked Only" inpainting option in ComfyUI? I am trying to experiment with AnimateDiff + inpainting, but inpainting in ComfyUI always generates on a subset of the pixels of my original image, so the inpainted region always ends up low quality. Keeping masked content at Original and adjusting the denoising strength works 90% of the time. Start with simple workflows. Load the image to be inpainted into the mask node, then right-click on it and go to Edit Mask.
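To make the crop_factor idea more tangible, here is a plain-Python sketch of the crop-and-stitch pattern those nodes implement: take the mask's bounding box, grow it by crop_factor for context, inpaint only that crop, and paste the result back. The inpainting call itself is stubbed out, and the file names are placeholders.

```python
# Sketch of crop-and-stitch around a mask; crop_factor=1.0 means the masked bbox only.
import numpy as np
from PIL import Image

def crop_box(mask: np.ndarray, crop_factor: float = 1.5):
    """Bounding box of the mask, expanded by crop_factor around its centre."""
    ys, xs = np.nonzero(mask)                      # assumes the mask is not empty
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    hh, hw = (y1 - y0) * crop_factor / 2, (x1 - x0) * crop_factor / 2
    return (max(int(cy - hh), 0), min(int(cy + hh), mask.shape[0]),
            max(int(cx - hw), 0), min(int(cx + hw), mask.shape[1]))

image = np.array(Image.open("source.png").convert("RGB"))
mask = np.array(Image.open("source_mask.png").convert("L")) > 127

y0, y1, x0, x1 = crop_box(mask, crop_factor=1.5)
crop = image[y0:y1, x0:x1]
# ... run `crop` (and the matching mask crop) through your inpainting sampler here ...
inpainted_crop = crop                              # placeholder for the real result
stitched = image.copy()
stitched[y0:y1, x0:x1] = inpainted_crop            # paste the crop back where it came from
Image.fromarray(stitched).save("stitched.png")
```

Working on the crop instead of the full frame is also why this approach holds up better on large or already-upscaled images: the model only ever sees a region near its training resolution.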
I learned about MeshGraphormer from that YouTube video by Scott Detweiler, but felt like simple inpainting doesn't do the trick for me, especially with SDXL. Here is a quick tutorial on how I use Fooocus for SDXL inpainting. My rule of thumb: if I need to completely replace a feature of my image, I use VAE-for-inpainting with an inpainting model.

Invoke just released 3.0, which adds ControlNet and a node-based backend you can use for plugins, so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node-plugin support gives them serious potential… I wonder if Comfy and Invoke will somehow work together or if things will stay fragmented between all the various tools.

Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended. ComfyUI Manager issue. Whenever I mention that Fooocus inpainting/outpainting is indispensable in my workflow, people often ask me why. ControlNet inpainting. The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface (comfyanonymous/ComfyUI).

Note that if force_inpaint is turned off, inpainting might not occur because of the guide_size (see the sketch below). I really like the CyberRealistic inpainting model. VAE-for-inpainting requires 1.0 denoise to work correctly; since you are running it at 0.3, it is still wrecking the image even though you have Set Latent Noise Mask applied.

ComfyUI - SDXL basic to advanced workflow tutorial - 4 - upgrading your workflow. Heya, tutorial 4 from my series is up; it covers the creation of an input selector switch and the use of some math nodes, and has a few tips and tricks. May 9, 2024 · Hello everyone, in this video I will guide you step by step on how to set up and perform the inpainting and outpainting process in ComfyUI using a new method with Fooocus, which is quite useful. A tutorial that covers some of the processes and techniques used for making art in SD, but specifically how to do them in ComfyUI using third-party programs in the workflow. I've written a beginner's tutorial on how to inpaint in ComfyUI. Hi, amazing ComfyUI community.

Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure. With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed. Raw output, pure and simple txt2img. You can move, resize, and do whatever you like to the boxes. I teach you how to build workflows rather than just use them; I ramble a bit, and damn if my tutorials aren't a little long-winded, but I go into a fair amount of detail, so maybe you like that kind of thing. To learn more about ComfyUI and to experience how it revolutionizes the design process, please visit Comflowy.

I got a workflow working for inpainting (the tutorial that shows the inpaint encoder should be removed because it's misleading).
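On the force_inpaint/guide_size note above: my understanding (which may differ between Impact Pack versions) is that the Detailer is built around upscaling small regions to guide_size before resampling, so a region that is already at least guide_size gets skipped unless force_inpaint is on. The snippet below is only an illustration of that gating logic, not Impact Pack code.

```python
# Illustration of why a detailer can silently skip a segment when force_inpaint is off.
def should_detail(segment_size: int, guide_size: int, force_inpaint: bool) -> bool:
    if force_inpaint:
        return True                      # always resample the segment
    return segment_size < guide_size     # only small segments get upscaled and detailed

# A 768 px hand crop with guide_size=512 would be skipped unless force_inpaint is enabled:
print(should_detail(768, 512, force_inpaint=False))   # False -> nothing happens
print(should_detail(768, 512, force_inpaint=True))    # True  -> the crop gets resampled
```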