ComfyUI workflows tutorial (Reddit roundup)


Mar 20, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion.

EDIT: the website https://comfyui.

A lot of people are just discovering this technology and want to show off what they created.

…0.01 or so, with begin 0 and end 1. The other ControlNet, used for face alignment, can be left at its default values.

This is a tutorial on creating a live paint module compatible with most graphics editing packages; movies, video files, and games can also be sent through it into ComfyUI.

(w/o worrying about model downloads, custom nodes, etc.)

The power prompt node replaces your positive and negative prompts in a Comfy workflow. It allows you to put LoRAs and embeddings in them. Will upload the workflow to OpenArt soon. Thank you :).

Custom node support is critical IMO, because any significantly complex workflow will be…

Welcome to the unofficial ComfyUI subreddit.

Thanks for sharing. That being said, I wish there was better sorting for the workflows on comfyworkflows. It's not that big workflows are better.

All four of these in one workflow, including the mentioned preview, changed, and final image displays.

Can someone guide me to the best all-in-one workflow that includes the base model, refiner model, hi-res fix, and one LoRA?

ComfyUI basics tutorial: install ComfyUI Manager.

Tutorial: Master Inpainting on Large Images with Stable Diffusion & ComfyUI.

Heya, I've been working on a few tutorials for ComfyUI over the past couple of weeks. If you are new to ComfyUI and want a good grounding in how to use it, this tutorial might help you out.

Bring back old backgrounds! I finally found a workflow that does good 3440 x 1440 generations in a single go, got it working with IP-Adapter, and realised I could recreate some of my favourite backgrounds from the past 20 years.
ComfyUI - SDXL basic to advanced workflow tutorial - 4 - upgrading your workflow.

Best ComfyUI workflows, ideas, and nodes/settings.

Then open your destination workflow and Ctrl-V.

Next, install RGThree's custom node pack from the Manager.

Best part: since I moved to ComfyUI (AnimateDiff), I can still use my PC without any lag, browsing and watching movies while it's generating in the background.

What would people recommend as a good step-by-step starter tutorial? I'm already searching on YouTube and following some, but there are too many. I like to do photo portraits: nothing crazily complex, but as realistic as possible.

I've also simplified the spaghetti with set/get nodes to better visualize the system.

It seems I may have made a mistake in my setup, as the results for the faces after Adetailer are not…
And remember: SDXL does not play well with 1.5, so that mix may give you a lot of your errors.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

(I will be sorting out workflows for tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.)

Release: AP Workflow 9.0 for ComfyUI, now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, two types of automatic image selectors, and the capability to automatically generate captions for an image directory.

ComfyUI doesn't have a mechanism to help you map your paths and models against my paths and models.

It's super useful and very flexible.

Detailer (with before-detail and after-detail preview images) and Upscaler.

ComfyUI Modular Tutorial - Prompt Module.

Heya, tutorial 4 from my series is up. It covers the creation of an input selector switch and the use of some math nodes, and it has a few tips and tricks.

Also, if this is new and exciting to you, feel free to post…

The ControlNet input is just 16 FPS in the portal scene and rendered in Blender, and my ComfyUI workflow is just your single ControlNet Video example, modified to swap the ControlNet used for QR Code Monster, and using my own input video frames and a different SD model + VAE, etc.

Copy-paste that line, then add 16 spaces before it, in your code. If your node turned red and the software throws an error, you didn't add enough spaces, or you didn't copy the line in the required zone.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion.

People want to find workflows that are based on SDXL or SD1.5 (or maybe SD2.1, if people still use that?).

I've been super busy getting a Discord community built and learning a whole bunch of stuff about ComfyUI. Two workflows included.

I am just curious whether people still want to learn the basics of ComfyUI.

Hi! I just made the move from A1111 to ComfyUI a few days ago.

It took me hours to get an inpainting workflow I'm more or less happy with: I feather the mask (feather nodes usually don't work how I want, so I use mask2image, blur the image, then image2mask), use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and…

New Tutorial: How to rent 1-8x GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.).

I know there is the ComfyAnonymous workflow, but it's lacking.

Cannot wait to finish the course!

TLDR of the video: in the first part he uses RevAnimated to generate an anime picture with Rev's styling, then it…

I watched the first 5 videos of this course already, and this is definitely the most complete and detailed tutorial I have ever watched about ComfyUI.

In a few weeks I have understood a little about images and videos, and I want to work on the quality of my generations. I want my workflows to be under 19GB of session storage, so please guide me as to what…

In this hands-on tutorial, I cover: downloading the code and dependencies.

For example, a faceswap with a decent detailer and upscaler should contain no more than 20 nodes.

Hi, I'm looking for input and suggestions on how I can improve my output image results, using tips and tricks as well as various workflow setups. Would love to hear your feedback!

Then find example workflows. I see examples with 200+ nodes on that site.

Hey, I make tutorials for ComfyUI. They ramble and go on for a bit, but unlike some other tutorials I focus on the mechanics of building workflows.

It's still in the beta phase, but there are some ready-to-use workflows you can try.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner…

You use a prompt in the prediffusion process to create a simple image, so you might say 'A simple grey room with pillars', for example. Using a second prompt (your main one), you add features to this first image using a sampler set to an appropriate noise level for the influence you want.

SVD Anime Workflow - need help.

It includes literally everything possible with AI image generation.

More to share soon! 👀 Discover and follow interesting creators on the site; upload multiple output images/videos per workflow.
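The advice above about pasting a line with 16 leading spaces into the "required" zone refers to a custom node's Python source: a ComfyUI node class declares its inputs in an `INPUT_TYPES` classmethod, and new input lines must land, correctly indented, inside the `"required"` dictionary. A minimal sketch, with illustrative names (the node and its parameters are hypothetical, not from any specific tutorial):

```python
# Hypothetical ComfyUI custom node; the class/parameter names are
# made up, but the INPUT_TYPES / NODE_CLASS_MAPPINGS structure
# follows ComfyUI's custom-node convention.
class ExampleBrightness:
    @classmethod
    def INPUT_TYPES(cls):
        # New input lines go inside this "required" dictionary --
        # the "brackets related to the word 'required'" from the post.
        return {
            "required": {
                "image": ("IMAGE",),
                "strength": ("FLOAT", {"default": 1.0, "min": 0.0,
                                       "max": 2.0, "step": 0.01}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "example"

    def apply(self, image, strength):
        # Scale the image tensor by the strength value.
        return (image * strength,)

NODE_CLASS_MAPPINGS = {"ExampleBrightness": ExampleBrightness}
```

If the dictionary's indentation or commas are off, Python fails to load the file and the node shows up red in the graph, which matches the error described above.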
It'll be perfect if it includes upscaling too (though I can upscale in an extra step in the Extras tab of Automatic1111).

And now for part two of my "not SORA" series.

ControlNet (thanks u/y90210).

And above all, BE NICE.

These courses are designed to help you master ComfyUI and build your own workflows, from basic concepts of ComfyUI, txt2img, and img2img to LoRAs, ControlNet, Facedetailer, and much more! Each course is about 10 minutes long, with a cloud-runnable workflow for you to run and practice with, completely free!

Loading full workflows (with seeds) from generated PNG files.

These are the timestamps: 00:00:00 Introduction, 00:00:23 Installation, 00:01:54 PhotoMaker Nodes, 00:05:01 Basic workflow, 00:07:23 SDXL workflow, 00:10:57 Community workflow, 00:14:30 Conclusion, 00:15:13 Thank you for watching.

So I explain it in the video, and all the links to HF are in the workflow.

Tutorial 6 - upscaling.

The images look better than most 1.5-based models, with greater detail in SDXL 0.9, but it looks like I need to switch my upscaling method.

Automate your ComfyUI workflows with the ComfyUI to Python Extension.

Please share your opinions in the comments. Bonus would be adding one for video.

Prompt: epic movie poster, holding a diamond, in blizzard snow, with the face of an old man with glowing eyes.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai.

Please keep posted images SFW.

ComfyUI Inpaint Color Shenanigans (workflow attached): in a minimal inpainting workflow, I've found that the color of the area inside the inpaint mask does not match the rest of the 'no-touch' (not masked) rectangle (the mask edge is noticeable due to color shift even though content is consistent), and the rest of the 'untouched' rectangle's…

You can construct an image generation workflow by chaining different blocks (called nodes) together.

Using the text-to-image, image-to-image, and upscaling tabs.

Also, I forgot to ask.

Following this, I utilize Facedetailer to enhance faces (similar to Adetailer for A1111).
If you copy your nodes from one workflow, they will still be in memory, so you can paste them into a new workflow.

This is a normal SVD workflow, and my objective is to make animated short films, so I am learning ComfyUI.

If there is anything you would like me to cover for a ComfyUI tutorial, let me know.

With a higher config it seems to have decent results.

I recently started to learn ComfyUI and found this workflow from Olivio, and I'm looking for something that does a similar thing but can instead start with an SD or real image as an input.

I want to post videos about my learning and turn them into a series (learn-with-me kind of stuff).

Edit: I realized that the workflow loads just fine, but the prompts are sometimes not as expected.

I tend to use Fooocus for SDXL and Auto1111 for 1.5, but enough folk have sworn by Comfy to encourage me.

ComfyUI breaks down a workflow into rearrangeable elements, so you can easily make your own.

If you want to follow my tutorials from part one onwards, you can learn to build a complex multi-use workflow from the…

Not only was I able to recover a 176x144-pixel, 20-year-old video with this; in addition, it supports the brand-new SD15-to-Modelscope nodes by exponentialML, an SDXL Lightning upscaler (in addition to the AD LCM one), and a SUPIR second…

#2 is especially common: when these 3rd-party node suites change and you update them, the existing nodes stop working because they don't preserve backward compatibility.

Thanks for this. I'll make this more clear in the documentation.

Wish there was some #hashtag system or…

It started from the very foundation and slowly progressed.

Thanks for sharing this setup.
Plus a quick run-through of an example ControlNet workflow.

Save it, then restart ComfyUI.

Yes, this is the way to go.

We've got some exciting things coming up for this extension, such as making it easier for you to run workflows from the site in your local ComfyUI, etc.

Thanks tons! That's the one I'm referring to.

Loras (multiple, positive, negative).

Saving/loading workflows as JSON files.

This pack includes a node called "power prompt".

CFG indeed quite low, at max 3.

Interesting idea, but I'd hope bullets 2 and 3 could be solved by something that leverages the API, preferably by injecting variables anywhere in the GUI-loaded or API-provided workflow.

Simply load / drag the PNG into ComfyUI and it will load the workflow.

Looking for a ComfyUI workflow that transforms IRL images.

If you have questions, feel free to…

I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow. I do like to go in depth and ramble a bit, so maybe that's not for you, or maybe you like that kind of thing.

Base settings: DPM 2M SDE Karras; Stage C 25 steps, 4 CFG; Stage B 10 steps, 1 CFG.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

I've also started a weekly 2-minute tutorial series, so if there is anything you want covered that I can fit into 2 minutes, please post it!

The prediffusion sampler uses DDIM at 10 steps, so…

In this tutorial, I cover how to install PhotoMaker and show workflows.

So if you are interested in actually building your own systems for ComfyUI and creating your own bespoke awesome images without relying on a workflow you don't…

ComfyUI Basic to advanced tutorials.
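The prediffusion trick above depends on the second sampler's denoise setting: instead of starting from pure noise, it resumes the schedule partway so the first image still shows through. A hedged sketch of the usual mapping from denoise strength to a resume step (this helper is illustrative, not an actual ComfyUI function):

```python
def resume_step(total_steps: int, denoise: float) -> int:
    """Map an img2img-style denoise strength to the step index at
    which a sampler resumes its schedule.

    denoise=1.0 -> start at step 0 (full diffusion from pure noise);
    denoise=0.3 -> skip roughly the first 70% of steps, only lightly
    reshaping the input image instead of replacing it.
    """
    if not 0.0 < denoise <= 1.0:
        raise ValueError("denoise must be in (0, 1]")
    return round(total_steps * (1.0 - denoise))
```

For the 10-step DDIM prediffusion pass described above, a denoise of 0.3 on the second sampler would resume around step 7 of 10, which is why the first prompt's rough composition survives into the final image.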
Great tutorial for any artists wanting to integrate live AI painting into their workflows.

Opening the image in stable-diffusion-webui's PNG-info, I can see that there are indeed two different sets of prompts in that file, and for some reason the wrong one is being chosen.

Belittling their efforts will get you banned.

Tutorial 7 - LoRA usage.

The ComfyUI workflow uses the latent upscaler (nearest/exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9.

Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting.

I have been looking for some tutorials, but I couldn't find any good ones.

It must be between the brackets related to the word "required".

The nodes interface can be used to create complex workflows, like one for hires fix or much more advanced ones.

Thanks in advance!

Just open one workflow, Ctrl-A, Ctrl-C.

I wanted to provide an easy-to-follow guide for anyone interested in using my open-sourced Gradio app to generate AI…

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

Help with Facedetailer workflow (SDXL).

This is an example of an image that I generated with the advanced workflow.

Making Horror Films with ComfyUI - Tutorial + full Workflow.

A checkpoint is your main model, and then LoRAs add smaller models to vary the output in specific ways.

If you really want the JSON, you can save it after loading the PNG into ComfyUI.
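The checkpoint-plus-LoRA relationship described above is visible in ComfyUI's API-format JSON: a LoRA loader node takes the checkpoint's MODEL and CLIP outputs and passes patched versions downstream, so everything after it sees the varied model. A sketch of that wiring (node ids, filenames, and prompts are made up; the class and input names follow ComfyUI's built-in nodes as I understand them, so verify against your install):

```python
# Each node output is referenced as ["node_id", output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    # LoraLoader patches the MODEL and CLIP coming out of node 1.
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "my_style_lora.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1], "text": "photo portrait, realistic"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1], "text": "blurry, lowres"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    # The sampler consumes the LoRA-patched model, not node 1 directly.
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["3", 0],
                     "negative": ["4", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 30, "cfg": 7.0,
                     "sampler_name": "dpmpp_2m_sde",
                     "scheduler": "karras", "denoise": 1.0}},
}
```

Stacking a second LoRA would mean inserting another loader between node "2" and its consumers, which is why LoRAs compose so naturally in the node graph.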
Beginners' guide to ComfyUI 😊 We discussed the fundamental ComfyUI workflow in this post 😊 You can express your creativity with ComfyUI. #ComfyUI #CreativeDesign #ImaginativePictures #Jarvislabs

If a box is red, then it's missing.

I've built a cloud environment with everything ready: preloaded nodes, models, and it runs smoothly.

Keep up the good work.

Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler.

A lot.

Please share your tips, tricks, and workflows for using this software to create your AI art.

I set up a workflow for a first pass and a highres pass.

Now you can manage custom nodes within the app.

The trick is adding these workflows without deep-diving into how to install…

The first one is very similar to the old workflow and is just called "simple".

Saw lots of folks struggling with workflow setups and manual tasks.

Usually the smaller workflows are more efficient or make use of specialized nodes.

For now, I have to manually copy the right prompts.

Forgot to mention: you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "Unet" folder, which can be found in the models folder.

Hi hi, I make tutorials. I try to help people who want to learn to harness the power of ComfyUI, not just by using other people's workflows but by building their own unique creations, so that whatever crazy idea you dream up can become a reality :D.

Start with simple workflows.

Set the IP-Adapter Instant XL ControlNet to 0.…

ControlNet and T2I-Adapter…

Try Civitai. The ComfyUI Manager will identify what is missing and download it for you. Fetch Updates in the ComfyUI Manager to be sure that you have the latest version.

This info is from the GitHub issues/forum regarding the A1111 plugin.

diffusers/stable-diffusion-xl-1.0-inpainting-0.1
THE LAB EVOLVED is an intuitive, ALL-IN-ONE workflow.

So, when you download the AP Workflow (or any other workflow), you have to review each and every node to be sure that they point to your version of the model that you see in the picture.

There's an SD1.5 and an SDXL version.

Configuring file paths.

Remove the node from the workflow and re-add it.

I want to get into ComfyUI, starting from a blank screen.

I have a wide range of tutorials with both basic and advanced workflows.

Area Composition; inpainting with both regular and inpainting models.

Explaining the Python code so you can customize it.

Thanks for sharing, I did not know that site before.

Running the app.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

People want to find workflows that use AnimateDiff (and AnimateDiff Evolved!) to make animation, do txt2vid, vid2vid, animated ControlNet, IP-Adapter, etc.

The second workflow is called "advanced", and it uses an experimental way to combine prompts for the sampler.

…with CLI, Auto1111, and now I've moved over to ComfyUI, where it's very smooth and I can even go higher in resolution.
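Reviewing every node by hand, as described above, can be scripted: export the workflow in API format, patch the inputs that differ on your machine (model filenames, seeds, prompts), and queue it against the local server. The patching helper below is hypothetical; the POST to `/prompt` on port 8188 matches ComfyUI's commonly documented local API, but treat the endpoint details as an assumption to verify:

```python
import copy
import json
import urllib.request

def apply_overrides(workflow: dict, overrides: dict) -> dict:
    """Return a copy of an API-format workflow with selected inputs
    replaced. `overrides` maps (node_id, input_name) -> new value,
    e.g. {("1", "ckpt_name"): "my_local_model.safetensors"}."""
    patched = copy.deepcopy(workflow)
    for (node_id, input_name), value in overrides.items():
        patched[node_id]["inputs"][input_name] = value
    return patched

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188"):
    """Queue a patched workflow on a locally running ComfyUI server
    (assumes the default /prompt endpoint)."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

With this, "map your paths and models against my paths and models" becomes a one-line override per mismatched node instead of a manual hunt through the graph.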