ComfyUI command-line arguments: a Reddit roundup


Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they have created. Belittling their efforts will get you banned. And above all, BE NICE. (Aug 2, 2024) All posts must be related to open-source and/or local AI image generation; comparisons and discussions across different platforms are encouraged, and these include Stable Diffusion as well as other platforms like Flux, AuraFlow, PixArt, etc.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion: you construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own. Workflows are much more easily reproducible and versionable.

Here are some examples I generated using ComfyUI + SDXL 1.0 with the refiner. These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

SD1.5, SD2.1, and SDXL are all trained on different resolutions, so models for one will not work with the others.

Hopefully some of the most important extensions, such as ADetailer, will be ported to ComfyUI — most of them already are if you are using the dev branch, by the way. And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111 WebUI. Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually.

Are there any guides that explain all the possible COMMANDLINE_ARGS that can be set in webui-user.bat, and what they do? After having issues from the last update, I realized my args are just thrown together from random thread suggestions and troubleshooting, but I really have no full understanding of what all the possible args are and what they do. There should be a file called webui-user.bat: right-click and open the .bat file with Notepad, add --xformers to the set COMMANDLINE_ARGS line, make your changes, then save it.

Learning the intricacies of Web UI launch arguments, addressing xformers errors, delving into CUDA, and more became my daily routine. Then, a miraculous moment unfolded when I installed Stable Diffusion Forge.

After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality).

Kosinkadink, developer of ComfyUI-AnimateDiff-Evolved, has updated the custom node with new functionality in the AnimateDiff Loader Advanced node that can reach a higher number of frames. It can now also save the animations in formats other than GIF.

I somehow got it to magically run with AMD, despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it.

Image Chooser is described on Reddit here, but the GitHub link is gone (?!). GitHub - gokayfem/ComfyUI-Texture-Simple: Visualize your textures inside ComfyUI.

Command-line arguments can be put in the .bat files used to run ComfyUI, separated by a space after each one. (Jul 28, 2023) Or, if you cd into your ComfyUI directory, just run python main.py (or python3 main.py) with the options you want — e.g. python C:\Users\***\ComfyUI\main.py. I use an 8GB GTX 1070 without any ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.
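For illustration, a plain launch with a couple of the options mentioned across these posts might look like this — the flags are real ComfyUI arguments, but the combination is a sketch, not any one poster's setup:

```sh
cd ComfyUI
python main.py --listen --port 8188 --preview-method auto
# --listen exposes the UI beyond localhost, --port picks the port,
# --preview-method auto turns on sampler previews (see the note further below)
```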
FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt…

I ran some tests this morning. On Linux with the latest ComfyUI I am getting 3.53 it/s for SDXL and approximately 4.55 it/s for SD1.5 while creating a 896x1152 image via the Euler-A sampler. While I primarily utilize PyTorch cross-attention (SDP), I also tested xformers, to no avail.

I am trying out using SDXL in ComfyUI. Using ComfyUI was a better experience: the images took around 1:50 to 2:25 min (1024x1024 / 1024x768), all with the refiner. I tested with different SDXL models, and tested without the LoRA, but the result is always the same. I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI.

Update ComfyUI and all your custom nodes, and make sure you are using the correct models. Replace ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py with the following code: load_images_nodes.py — and ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py with the following code: nodes.py. (By the way, you can and should, if you understand Python, do a git diff inside ComfyUI-VideoHelperSuite to review what's changed.)

Also, I don't know when it was changed, but ComfyUI is not a conda environment anymore; it depends on a python_embeded package, and generating a venv from it results in no tkinter.

The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. TLDR, workflow: link.

Hi r/comfyui, we worked with Dr.Lt.Data to create a command-line tool to improve the ergonomics of using ComfyUI. Some main features are: automatically install ComfyUI dependencies; launch and run workflows from the command line; install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI). Options: --install-completion: install completion for the current shell; --show-completion: show completion for the current shell, to copy it or customize the installation.
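A sketch of what a session with that tool might look like — the subcommands below are assumptions recalled from comfy-cli's docs rather than quoted from the post, so verify them against comfy --help:

```sh
pip install comfy-cli                  # assumed package name for the CLI above
comfy --install-completion             # the completion option listed above
comfy install                          # fetch ComfyUI and its dependencies
comfy launch -- --listen --port 8188   # everything after "--" is passed to ComfyUI itself
comfy node install <custom-node-name>  # cm-cli style node management (placeholder name)
```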
(Aug 2, 2024) You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM.

It seems that the path always looks at the root of ComfyUI, not relative to the custom_node folder.

I have read the section of the GitHub page on installing ComfyUI. It said to follow the instructions for manually installing for Windows. I downloaded the Windows 7-Zip file and ended up, once unzipped, with a large folder of files. (Jul 28, 2023) The first was installed using the ComfyUI_windows_portable install, and it runs through Automatic1111: I had installed the ComfyUI extension in Automatic1111 and was running it within Automatic1111. I noticed that in the Terminal window you could launch ComfyUI directly in a browser with a URL link. So that is how I was running ComfyUI.

…5. add --xformers to the webui-user.bat command arguments; 6. add a model; 7. run webui-user.bat. Other things: I used Firefox with hardware acceleration disabled in settings; on previous attempts I also tried --opt-channelslast --force-enable-xformers, but in this last run I got 28 it/s without them, for some reason.

On the front page it says "Use --preview-method auto to enable previews," so I spent ages online looking at how and where to enter this command. Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --listen --use-split-cross-attention.

You can encode then decode back to a normal KSampler with a 1.5 model with LCM, 4 steps and 0.2 denoise, to fix the blur and soft details. You can just use the latent, without decoding and encoding, to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

For some reason, when I launch Comfy it now sets my VRAM to normal_vram; before, it automatically set it to high_vram. I don't know why this changed — nothing changed on my PC. Are there additional arguments to use (e.g. python main.py --normalvram)?

Did Update All in ComfyUI Manager, but when it restarted I just got a blank screen and nothing loaded, not even the Manager. Had the same issue. Went to Updates as suggested below and ran the ComfyUI and Python Dependencies batch files, but that didn't work for me. I tried installing the dependencies by running the pip install in the terminal window in the ComfyUI folder; that did not work either. Go to your FizzNodes folder ("D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes" for me) and run this, making sure to adapt the beginning to match where you put your ComfyUI folder: "D:\Comfy\python_embeded\python.exe -s -m pip install -r requirements.txt". It is actually written on the FizzNodes GitHub here.

ComfyUI — an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything — now supports ControlNets. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I have ComfyUI installed with a lot of custom nodes, Loras, and ControlNets, and I would like to have Automatic1111 also installed to be able to use it. Unfortunately, I don't have much space left on my computer, so I am wondering if I could install a version of Automatic1111 that uses the Loras and ControlNet models from ComfyUI.
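ComfyUI's stock answer to model-sharing is its extra_model_paths.yaml.example file, usually used in the opposite direction: keep one copy of the models in the A1111 folder layout and point ComfyUI at it. Copy the file to extra_model_paths.yaml and edit the paths — a trimmed sketch, with an illustrative base_path:

```yaml
# extra_model_paths.yaml — makes ComfyUI also scan an A1111-style model tree.
# base_path is an example; point it at your own webui install.
a111:
    base_path: C:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
```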
…86s/it on a 4070 with the 25-frame model, 2.75s/it with the 14-frame model. Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (what I did before switching to ComfyUI). That helped the speed a bit.

Although ComfyUI and A1111 ultimately do the same thing, they are not targeting the same audience. A1111 is probably easier to start with: everything is siloed, easy to get results. ComfyUI was written with experimentation in mind, and so it's easier to do different things in it; it is meant for people who like node-based editors (and are rigorous enough not to get lost in their own architecture). VFX artists are also typically very familiar with node-based UIs, as they are very common in that space. ComfyUI is much better suited for studio use than other GUIs available now, and it is also trivial to extend with custom nodes.

I think a function must always have "self" as its first argument. I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python. (It is general Python: instance methods receive the instance as their first parameter, conventionally named self.) Anyway, whenever you define a function, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!

The biggest tip for Comfy: you can turn most node settings into an input by right-clicking → "convert to input", then connecting a primitive node to that input. You can then connect the same primitive node to 5 other nodes, to change them all in one place instead of in each node.

Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin. This is a plugin that allows users to run their favorite features from ComfyUI while, at the same time, being able to work on a canvas. Supports: basic txt2img, basic img2img, and inpainting (with auto-generated transparency masks).

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

I just moved my ComfyUI machine to my IoT VLAN. It has --listen and --port, but since the move Auto1111 works and Koyha works, yet Comfy has been unreachable (now at x.x.x.10:7862, previously x.x.x.10:8188).

Hey all, is there a way to set a command-line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1. But with ComfyUI this doesn't seem to work! Thanks! — You can prefix the start command with CUDA_VISIBLE_DEVICES=0 to force ComfyUI to use that specific card; you have to run two instances of it and use the --port argument to set a different port for each. I have 2 instances, 1 for each graphics card.

Wondering if there are ways to speed up Comfy generations? I use it on an RTX 4090, mostly with SD1.5 models, and my comfy .bat just contains… Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file; every time you run the .bat file, it will load the arguments. For example, this is mine:
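A reconstruction in the portable install's format — the set line and the individual flags are quoted in the posts above, but the exact combination is illustrative:

```bat
rem run_nvidia_gpu.bat (portable install) -- a sketch, not any one poster's file.
rem Pins ComfyUI to the second GPU and gives this instance its own port.
set CUDA_VISIBLE_DEVICES=1
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --listen --port 8189
pause
```

Newer ComfyUI builds also accept a --cuda-device N argument that does the same pinning without the environment variable; check python main.py -h for it.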
There are some important arguments about how artists could be exploited — but that's also not a new problem. Anyone that has made art "for exposure", had their work ripped off, or received a pittance while the gallery/auction house/art collector enjoys the lion's share of the profits, knows this.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

I know SDXL and ComfyUI are the hotness right now, but I'm still kind of in the kiddie pool in terms of SD. I'm just getting comfortable with Automatic1111, using different models, VAEs, upscalers, etc. So, for those of us behind the curve, I have a question about arguments.

I'm running a GTX 1660 Ti, 6GB of VRAM. However, I kept getting a black image. Scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram. But these arguments did not work for me: --xformers gave me a minor bump in performance (8s/it vs 11s/it), but it's still taking about 10 minutes per image.

I want to set up a new PC build — is this config sufficient for ComfyUI and A1111? CPU: AMD Ryzen 7 5700X, 8 cores / 16 threads @ 3.8 GHz; RAM: 32GB DDR4 @ 3200MHz.

I didn't quite understand the part where you can use the venv folder from another webui, like A1111, to launch it instead and bypass all the requirements to launch ComfyUI.

Open your ComfyUI Manager and activate "Nickname" on the "Badge" dropdown list. If the author set their package nickname, you will see it on the top-right of each node — e.g. for CR Seamless Checker. This is only possible if the node's author set the nickname parameter in their info.

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node-plugin support give them serious potential… I wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various… And the new interface is also an improvement, as it's cleaner and tighter. Any idea why the quality is much better in Comfy? I like InvokeAI — it's more user-friendly, and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.

The example pictures do load a workflow, but they don't have a label or text that indicates which version it is.

I don't find ComfyUI faster: I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.

On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM).

GitHub - Suzie1/ComfyUI_Comfyroll_CustomNodes: custom nodes for SDXL and SD1.5, including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more nodes.

Run the help flag and you should see all the arguments that can be passed on the command line, i.e.: ~/ComfyUI$ python3 main.py -h. And with ComfyUI my command-line arguments are: "--directml --use-split-cross-attention --lowvram". The most important thing is to use tiled VAE for decoding, which ensures no out-of-memory at that step. With this combo it now rarely gives out-of-memory (unless you try crazy things); before, I couldn't even generate with SDXL on ComfyUI at all.
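Putting that together, an AMD/low-VRAM launch in the spirit of that post (the flags are quoted above; the tiled-VAE part happens via nodes inside the workflow, not on the command line):

```sh
cd ~/ComfyUI
python3 main.py -h                                                # list every supported argument
python3 main.py --directml --use-split-cross-attention --lowvram  # the combo quoted above
```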
Hi amazing ComfyUI community. I learned about MeshGraphormer from this YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL. Wanted to share my approach to generate multiple hand-fix options and then choose the best.

Finally, I gave up on ComfyUI nodes and wanted my extensions back in A1111. This has been driving me crazy, trying to figure it out.

"The training requirements of our approach consist of 24,602 A100-GPU hours — compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion. That's a cost of about …
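A quick check of that ratio — the GPU-hour figures come from the quote above, while the per-hour price is purely an assumed number for illustration:

$$\frac{24{,}602}{200{,}000} \approx 0.12 \quad (\text{about an eighth}), \qquad 24{,}602\ \text{h} \times \$2/\text{h} \approx \$49{,}000.$$

So the compute saving is in the same ballpark as the post's "about 1/10th", and at an assumed \$2 per A100-hour the training run would land somewhere around \$50k.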