
Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Also: changed to the Image -> Save Image WAS node. TYVM.

While the idea is the same, IMHO if you name the thing "clip skip", the best range would be 0-11: you skip the last 0 to 11 layers, where 0 means "do nothing" and 11 means "use only the first layer"; like you said, going from right to left and removing N layers. I might open an issue in ComfyUI about that.

YouTube playback is very choppy if I use SD locally for anything serious.

I've updated the ComfyUI Stable Video Diffusion repo to resolve the installation issues people were facing earlier (sorry to everyone that had installation issues!).

This is a node pack for ComfyUI, primarily dealing with masks.

It's super easy to get it to grab random words from a list each time; getting it to step through them one by one is more difficult.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling.

The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face.

I am looking to remove specific details in images, inpaint with what is behind them, and then the holy grail will be to replace them with specific other details using clipseg and masking.

Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and clipseg awesomeness, and many more.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file. Open the .bat file with notepad, make your changes, then save it. For example, this is mine:

Yup, it also seems all the interfaces use a different approach to the topic. Started with A1111, but now solely ComfyUI.
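The random-versus-sequential wordlist point above can be sketched outside ComfyUI. This is a plain-Python illustration, not a ComfyUI node; the word list and the `run_index` counter (standing in for ComfyUI's batch counter) are assumptions for the example:

```python
import random

words = ["red hair", "blue hair", "green hair"]

def pick_random(words, seed):
    # "grab random words each time": a fresh seeded RNG per run
    return random.Random(seed).choice(words)

def pick_sequential(words, run_index):
    # "step through them one by one": index by a run counter, wrapping at the end
    return words[run_index % len(words)]

print(pick_sequential(words, 0))  # red hair
print(pick_sequential(words, 4))  # blue hair
```

The sequential case is the harder one inside ComfyUI precisely because it needs that persistent counter between queued runs.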
Florence2 is more precise when it works, but it often selects all or most of a person when only asked for the face / head / hand etc.

However, due to the more stringent requirements, while it can generate the intended images, it should be used carefully, as conflicts between the AI model's interpretation and ControlNet's enforcement can lead to a degradation in quality.

ComfyUI is a powerful and modular GUI for diffusion models with a graph interface.

Dec 2, 2023: Hey! Great package.

If we look at comfyui\comfy\sd2_clip_config.json, SDXL seems to operate at clip skip 2 by default, so overriding with skip 1 goes to an empty layer or something.

But I don't have bmad4ever's comfyui_bmad_nodes installed. In Manager, ComfyLiterals shows a conflict with comfyui_bmad_nodes.

I created some custom nodes that allow you to use the CLIPSeg model inside ComfyUI to dynamically mask areas of an image based on a text prompt.

Look into clipseg; it lets you define masked regions using a keyword.

How to use SDXL locally with ComfyUI (How to install SDXL 0.9)

I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones.

Exploring "generative AI" technologies to empower game devs and benefit humanity.

clipseg_model: this output provides the loaded CLIPSeg model, ready for image segmentation tasks. It represents the result of the node's operation and encapsulates the model's capability for downstream use. This output is important because it enables further processing and analysis, acting as the bridge between loading the model and actually using it. Comfy dtype: CLIPSEG_MODEL.
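Those CLIPSeg nodes score every pixel against the text prompt, producing a heatmap rather than a hard mask; a threshold step turns one into the other. A minimal plain-Python sketch of just that step (the 0.4 threshold and the tiny 2x2 heatmap are arbitrary assumptions for illustration, not values taken from the nodes):

```python
def heatmap_to_mask(heatmap, threshold=0.4):
    # heatmap: rows of floats in [0, 1], e.g. a sigmoid over segmentation logits
    # returns a binary mask: 1.0 where the score clears the threshold, else 0.0
    return [[1.0 if v >= threshold else 0.0 for v in row] for row in heatmap]

heat = [[0.1, 0.8],
        [0.5, 0.2]]
print(heatmap_to_mask(heat))  # [[0.0, 1.0], [1.0, 0.0]]
```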
For ComfyUI there should be license information for each node, in my opinion: "Commercial use: yes, no, needs license", and a workflow using a non-commercial node should show some warning in red.

Explore its features, templates and examples on GitHub.

The browser opens a new tab with 127.0.0.1:8188 in its address, but the page itself remains dark and blank: no grid, no modules, no floating menu.

ControlNet, on the other hand, conveys it in the form of images.

File "F:\Tools\ComfyUI\custom_nodes\masquerade-nodes-comfyui\MaskNodes.py", line 136, in get_mask: model = self.load_model()
File "F:\Tools\ComfyUI\custom_nodes\masquerade-nodes-comfyui\MaskNodes.py", line 183, in load_model: from clipseg.clipseg import CLIPDensePredT

I found that the clipseg directory doesn't have an __init__.py file in it.

Comfy uses -1 to -infinity, A1111 uses 1-12, InvokeAI uses 0-12.

Set the mode to incremental_image and then set the Batch count of ComfyUI to the number of images in the batch.

Need help with FaceDetailer in ComfyUI?
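The three clip-skip conventions mentioned above differ only by an index shift. A plain-Python sketch of the conversion, assuming A1111's 1 and ComfyUI's -1 both mean "use the last layer" (which matches the penultimate-layer observations elsewhere on this page):

```python
def a1111_to_comfy(clip_skip):
    # A1111 clip skip 1 = last layer = ComfyUI -1; 2 = penultimate = -2; ...
    return -clip_skip

def invoke_to_comfy(clip_skip):
    # InvokeAI 0 = no skip = ComfyUI -1, so shift by one before negating
    return -(clip_skip + 1)

print(a1111_to_comfy(2))   # -2 (the usual "penultimate layer" setting)
print(invoke_to_comfy(0))  # -1
```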
Join the discussion and find solutions from other users in r/StableDiffusion.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

It needs a better quick start to get people rolling.

Newcomers should familiarize themselves with easier-to-understand workflows, as a workflow with so many nodes can be somewhat complex to understand in detail, despite the attempt at a clear structure.

biegert/ComfyUI-CLIPSeg - This is a custom node that enables the use of CLIPSeg technology, which can find segments through prompts, in ComfyUI.

Think there are different colored polka dots and stars on clothing and I need to remove them.

Yep, that's me. I tend to use ReActor, then I'll do a pass at like 0.15 with the faces masked using clipseg, but that's me.

A1111 is probably easier to start with: everything is siloed, easy to get results.

That said, this workflow still has some issues and the results may need more tweaking: CLIPSeg's heatmap isn't necessarily a good fit for this kind of face-swap workflow, because the boundary transition is too wide; sometimes a hard edge works better.

Links to different 3D models, images, articles, and videos related to 3D photogrammetry are highly encouraged, e.g. articles on new photogrammetry software or techniques.

However, the "-1" setting significantly changes the output, whereas "-2" yields images that are indistinguishable from those produced with the node disabled, as verified through pixel-by-pixel comparison in Photoshop.

This could lead users to put increased pressure on developers.

sd-v1-5-inpainting.ckpt: resumed from sd-v1-2.ckpt.

We use clipseg to mask the 'horse' in each frame separately. We use a mask subtract to remove the masked area #86 from #111, then we blend the resulting #110 with #86 to get #113; this creates a masked area with highlights on all areas that change between those two images.
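The subtract-then-blend step in the horse example above boils down to two pixelwise operations. A plain-Python sketch on flat mask lists; the sample values, the 0.5 blend factor, and the variable names echoing the quoted node numbers are all assumptions for illustration:

```python
def mask_subtract(a, b):
    # remove mask b from mask a, clamping at 0 (a Mask Subtract-style op)
    return [max(x - y, 0.0) for x, y in zip(a, b)]

def mask_blend(a, b, factor=0.5):
    # weighted blend of two masks (a Blend-style op)
    return [x * factor + y * (1.0 - factor) for x, y in zip(a, b)]

m111 = [1.0, 1.0, 0.5, 0.0]   # stands in for mask #111
m86  = [1.0, 0.0, 0.25, 0.0]  # stands in for mask #86
sub = mask_subtract(m111, m86)  # -> the #110 intermediate
out = mask_blend(sub, m86)      # -> the #113 result
print(sub)  # [0.0, 1.0, 0.25, 0.0]
print(out)  # [0.5, 0.5, 0.25, 0.0]
```

Pixels that differ between the two source masks survive the subtract, which is why the result highlights the areas that change between the frames.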
This would probably fix GFPGAN, although if you are doing this at mid distances you have to do some upscaling in the process, which is why lots of people use Impact Pack's face detailer.

Reproducing the behavior of the most popular SD implementation (and then surpassing it) would be a very compelling goal, I would think.

Mainly using the WAS suite (ignore the multiple CLIPs thing I'm doing; the screenshot is just one I had hanging around).

I am using this with the Masquerade-Nodes for ComfyUI, but on install it complains: "clipseg is not a module".

Is there an Android app to connect to my local A1111 for the times when I want to be lazy and lie on the sofa with my phone, generating images? 😁

In the Quickstart.ipynb notebook we provide the code for using a pre-trained CLIPSeg model. If you run the notebook locally, make sure you downloaded the rd64-uni.pth weights, either manually or via the git lfs extension.

Basically using clipseg for the image and applying IPAdapter.

This might be useful, for example, in batch processing with inpainting so you don't have to manually mask every image.

I have an Nvidia GeForce GTX Titan with 12GB VRAM and 128GB of normal RAM.

Trained from 1.2 with a modified UNet: sd-v1-5-inpainting.ckpt.

Contribute to biegert/ComfyUI-CLIPSeg development by creating an account on GitHub. CLIPSeg Plugin for ComfyUI.

For seven months now.

A ComfyUI node documentation plugin; enjoy~~.

Much Python installing with the server restart.

#! python
# myByways simplified Stable Diffusion v0.3 - add clipseg
import os, sys, time
import torch
import numpy as np
from omegaconf import OmegaConf
from PIL import Image
from einops import rearrange
from pytorch_lightning import seed_everything
from contextlib import nullcontext
from ldm.util import instantiate_from_config

I am trying to use this workflow: Easy Theme Photo 简易主题摄影 | ComfyUI Workflow | OpenArt.

I use clipseg to select the shirt.
For now ClipSeg still appears to be the most reliable solution for proposing regions for inpainting.

Here's the GitHub issue if you want to follow it when the fix comes out:

Use case (simplified) - using Impact nodes.

I've used ComfyUI to rotoscope the actor and modify the background to look like a different style of living room, so it doesn't look like we're shooting in the same location for every video.

Contribute to CavinHuang/comfyui-nodes-docs development by creating an account on GitHub.

I'm looking for an updated (or better) version of…

Hello, clipseg stopped working! Error occurred when executing CLIPSeg: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:3699: error: (-215:Assertion failed) !dsize.empty() in function 'cv::hal::resize'

I played with denoise/cfg/sampler (fixed seed).

And I run ComfyUI locally via Stability Matrix on my workstation in my home office.

Any help would be appreciated, thank you so much!

And I ran into an issue with one node pack, comfyui-mixlab-nodes: the pack is installed but cannot load clipseg. When loading some graph that used CLIPSeg, it says the following node types were not found: comfyui-mixlab-nodes [WIP] 🔗

If you are just wanting to loop through a batch of images for nodes that don't take an array of images, like clipSeg, I use Add Node -> WAS Suite -> IO -> Load Image Batch.

This is a community to share and discuss 3D photogrammetry modeling.

Via the ComfyUI custom node manager, I searched for WAS and installed it.

Basically the SD portion does not know or have any way to know what a "woman" is, but it knows what [0.78, 0, .3, 0, 0, 0.5]* means, and it uses that vector to generate the image.
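The "clipseg is not a module" complaint is consistent with the missing __init__.py reported earlier on this page: without that file, Python may not treat the clipseg directory as an importable package. A hedged sketch of the usual workaround (the path is hypothetical; point it at your actual custom_nodes checkout):

```python
from pathlib import Path

# Hypothetical location of the vendored clipseg directory.
pkg = Path("custom_nodes/masquerade-nodes-comfyui/clipseg")
pkg.mkdir(parents=True, exist_ok=True)

# An empty __init__.py marks the directory as a regular, importable package.
(pkg / "__init__.py").touch()
print((pkg / "__init__.py").exists())  # True
```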
The detailed explanation of the workflow structure will be provided.

May 19, 2024: By integrating the CLIPSeg model, JagsClipseg allows you to generate precise masks, heatmaps, and black-and-white masks from images, making it an invaluable tool for AI artists looking to manipulate and analyze visual content.

It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

I can generate 3 images with text2img in about 60 seconds, but for whatever reason img2img (which has always been *faster* with any other program/UI I've used) is taking several minutes (5-7 minutes) to produce one image.

Before realising this, I understood the comment 'Not using that node should not pose any issues' as meaning 'don't use a conflicted node from an installed custom-node in the node-graph'.

The idea is that sometimes the area to be masked may differ from the semantic segment found by clipseg, and the area may not be properly fixed by automatic segmentation.

In the SDXL paper, they stated that the model uses the penultimate layer; I was never sure what that meant exactly*.

But no matter what, I never ever get a white shirt; I sometimes get a white shirt with a black bolero.

ComfyUI is not supposed to reproduce A1111 behaviour.

I found the documentation for ComfyUI to be quite poor when I was learning it.

Using text has its limitations in conveying your intentions to the AI model.

Aug 8, 2023: This video is a demonstration of a workflow that showcases how to change hairstyles using Impact Pack and custom CLIPSeg nodes.

ComfyUI is meant for people who like node-based editors (and are rigorous enough not to get lost in their own architecture).

I also modified the model to a 1.5-inpainting model.

Some example workflows this pack enables are: (Note that all examples use the default 1.5 and 1.5-inpainting models.)
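"Penultimate layer" just means stopping one layer before the end of the text encoder's stack. A toy plain-Python sketch of how a CLIP Set Last Layer-style negative index selects hidden states; the layer contents are stand-in strings, not real tensors, and the 12-layer count is an assumption:

```python
# toy stand-ins for the text encoder's per-layer hidden states
hidden_states = [f"layer_{i}" for i in range(12)]

def clip_set_last_layer(hidden_states, stop_at):
    # stop_at = -1 keeps everything; -2 stops at the penultimate layer, etc.
    return hidden_states[: len(hidden_states) + stop_at + 1]

print(clip_set_last_layer(hidden_states, -1)[-1])  # layer_11 (last)
print(clip_set_last_layer(hidden_states, -2)[-1])  # layer_10 (penultimate)
```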
Clipseg makes segmentation so easy I could cry.

ComfyUI-WD14-Tagger, ComfyUI_UltimateSDUpscale, ComfyUI-Advanced-ControlNet, ComfyUI-KJNodes, ComfyUI-Frame-Interpolation, ComfyUI-AnimateDiff-Evolved, rgthree-comfy, comfyui_controlnet_aux, ComfyUI_Dave_CustomNode, ComfyUI-Flowty-LDSR, ComfyUI_InstantID, ComfyUI-VideoHelperSuite, ComfyUI-Manager, clipseg.py

In this workflow we try to merge two masks, one from clipseg and another from mask inpainting, so that the combined mask acts as a placeholder for image generation.

I've also used ComfyUI to do a style transfer to videos and images with our brand style.

BlenderNeok/ComfyUI-TiledKSampler - The tile sampler allows high-resolution sampling even in places with low GPU VRAM.

1.5 with inpaint, deliberate (1.5), SDXL 1.0.

First 595k steps regular training, then 440k steps of inpainting training at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling.

Cannot import G:\ComfyUI\custom_nodes\SeargeSDXL module for custom nodes: No module named 'cv2'. Import times for custom nodes: 0.2 seconds (IMPORT FAILED): G:\ComfyUI\custom_nodes\SeargeSDXL. This is what I get when I start it with main.py.

Here's an example of building a prompt from a randomly assembled string.

A while ago, after loading the server using run_nvidia_gpu.bat, ComfyUI's interface stopped appearing, more often than not.

And Masquerade, which has some great masking tools.

Combined with multi-composite conditioning from davemane, those would be the kind of tools you are after.
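Merging the clipseg mask with the hand-painted inpaint mask, as in the workflow above, is a pixelwise union. A plain-Python sketch on flat mask lists (the sample values are made up for illustration):

```python
def mask_union(a, b):
    # combined mask: a pixel is masked if either source masks it
    return [max(x, y) for x, y in zip(a, b)]

clipseg_mask = [0.0, 0.9, 0.4, 0.0]  # values hypothetical
manual_mask  = [1.0, 0.0, 0.6, 0.0]
print(mask_union(clipseg_mask, manual_mask))  # [1.0, 0.9, 0.6, 0.0]
```

Taking the per-pixel max (rather than adding) keeps the result a valid mask in [0, 1] even where the two sources overlap.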
File "C:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get…

Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended.

Hello! I'm new to ComfyUI and I'm having an issue with how long an image takes to generate when using a simple img2img setup.

First: added an IO -> Save Text File WAS node and hooked it up to the prompt.

Restarted the ComfyUI server and refreshed the web page.

Also, if this is new and exciting to you, feel free to post.

CLIP and its variants are language embedding models that take text inputs and generate a vector that the ML algorithm can understand.
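The CLIP point above (the model consumes a vector, never the word "woman") can be illustrated with a toy lookup. The table, the 2-dimensional vectors, and the averaging are all made-up assumptions; real CLIP uses learned high-dimensional token embeddings and a transformer, nothing like this:

```python
# toy embedding table; the numbers are invented for illustration only
embeddings = {
    "a": [0.25, 0.0],
    "woman": [0.75, 0.5],
    "smiling": [0.5, 0.25],
}

def embed(prompt):
    # the "SD portion" only ever sees the resulting vector, never the words
    vecs = [embeddings[w] for w in prompt.split()]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(2)]

print(embed("a woman"))  # [0.5, 0.25]
```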