ComfyUI workflow PNG examples: a Reddit digest

These notes are distilled from threads on the unofficial ComfyUI subreddit about workflows embedded in PNG files. Hope you like some of them :)

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You construct an image generation workflow by chaining different blocks (called nodes) together; some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own. (Whether these graphs should be called "workflows" at all came up repeatedly: one commenter argued that if the term had only ever described ComfyUI's node graphs, "node graphs" or just "nodes" would be the better name; another replied that node graphs have been called workflows for a long time, 3DS Max being one early example, so the usage is unfortunately entrenched.)

The key convenience is that every PNG ComfyUI saves automatically embeds the complete workflow that produced it, including the seeds. Drag a generated PNG onto the ComfyUI window, or open it with the Load button, and the full graph is restored. Two caveats: the image has to actually contain a workflow (one you generated yourself, for example), and you need to drag it onto an empty spot of the canvas, not onto a Load Image node. This is why you can just glance at your saved renders, pick the one you want, drag and drop it into ComfyUI, and be ready to go, and it is what YouTubers such as Sebastian Kamph and Olivio Sarikas are doing when they simply drop PNGs into an empty ComfyUI. All the images in the ComfyUI Examples repo contain this metadata and can be loaded the same way, and most workflow images you see on GitHub can be downloaded and used too. (One exception came up: an "example_workflow.png" in a repo's file list would not load even via Download Raw File; apparently the dev had uploaded a version with trimmed data.)
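If you want to check whether a PNG still carries its workflow without opening ComfyUI, you can inspect the metadata directly. ComfyUI writes the graph into PNG text chunks, normally under the keys "workflow" (the editable graph) and "prompt" (the executable node map). A minimal sketch using Pillow; the filename is a placeholder:

```python
import json
from PIL import Image  # pip install pillow

def embedded_workflow(path):
    """Return the workflow JSON embedded in a ComfyUI PNG, or None if absent."""
    info = Image.open(path).info
    # Pillow exposes PNG text chunks through .info; ComfyUI stores both the
    # editable graph ("workflow") and the executable form ("prompt") as JSON.
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

wf = embedded_workflow("my_render.png")  # placeholder filename
print("workflow found" if wf else "no workflow metadata (perhaps stripped on upload)")
```

If this reports nothing for an image you saved from Reddit, the upload pipeline stripped the chunks, which is exactly the problem discussed next.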
Embedding the workflow in the PNG makes it potentially very convenient to share workflows with others, but that convenience breaks down the moment Reddit gets involved. The problem several posters describe is the same: Reddit strips this information out of PNG files on upload. OP probably thinks ComfyUI includes the workflow with the PNG, and it does, but Reddit will strip it away. The workarounds that come up:

- Upload the original PNG to a host that preserves it, such as civitai.com or https://imgur.com, and post a link back if you are willing to share it. If the image was generated in ComfyUI and the metadata is intact (some users and websites remove it), anyone can drag it straight into their ComfyUI window; on civitai the image page should also show a "Workflow: xx Nodes" box you can click and paste into Comfy. Better still, send a PNG people can load into Comfy directly. Otherwise, the subreddit convention is to change the post flair to "Workflow not included".
- Share the workflow .json alongside the image. One user keeps a Google Drive folder updated with both the .json and the .png of their workflows (for example @midjourney_man's img2vid, no refiner).
- Place the image inside a zip before uploading, as one poster did after being told that Reddit strips PNGs of metadata (see the sketch below).

Loading also fails in subtler ways. One user can load workflows from the example images through localhost:8188, but not from a second computer: ComfyUI itself opens fine over the LAN at 192.168.x.x:8188, yet dragging an example image onto it just does nothing. Another cannot drag any PNG/JPG files that include workflows into ComfyUI at all, be it examples or their own renders. A third noticed a weird bug where dragging an image in loads the wrong positive prompt, and someone running the ComfyUI notebook from the repo remotely on Paperspace wondered whether reading embedded workflows depends on the workspace configuration. The example pictures do load a workflow, but nothing labels them, so you cannot tell whether a given one is version 3.1 or not.
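The zip trick works because an archive travels as opaque bytes, so the PNG inside keeps its text chunks, while a bare image upload typically gets re-encoded. A trivial sketch with placeholder filenames:

```python
import zipfile

# Wrap the render in an archive so the PNG (and its embedded workflow)
# is stored bit-for-bit rather than re-encoded by the image host.
with zipfile.ZipFile("workflow_share.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("my_render.png")
```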
On to concrete techniques. A LoRA-plus-ControlNet recipe, TL;DR version: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. Judging from the ControlNet and T2I-Adapter examples, this allows setting both a character's pose and its position in the composition. For multi-character scenes: remove 3/4 of the stick figures in the pose image, generate one character at a time, and cut the background with the Rembg Background Removal node for ComfyUI.

Inpainting gets similar treatment. A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make; Comfy's inpainting and masking aren't perfect. It took one user hours to get a setup they were more or less happy with: feathering the mask (the feather nodes usually don't behave as wanted, so they go mask2image, blur the image, then image2mask) and making 'only masked area' also apply to the ControlNet (applying it to the ControlNet was probably the worst part). There are three tutorials that teach you how to set up a decent ComfyUI inpaint workflow; https://youtu.be/ppE1W0-LJas is the tutorial. It's the kind of thing that's a bit fiddly, so someone else's workflow might be of limited use to you.

One iterative loop workflow loads the latest generated image, encodes it to latent space, samples with 0.5 noise, decodes, then saves. The catch: each step visibly shifts the colors, and the quality and color coherence of the newly generated pictures are hard to maintain. Any ideas on this?

Several larger, packaged workflows circulate as well. An All-in-One FluxDev workflow combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img, and can use LoRAs, ControlNets, negative prompting with KSampler, dynamic thresholding, inpainting, and more. There is a ComfyUI Ultimate Starter Workflow plus tutorial ("I've been working on this workflow for like a month and it's finally ready"), a guide meant to give beginners starting workflows to work with. Another is a node group that lets the user perform a multitude of blends between image sources and add custom effects through a central control panel. A workflow for SDXL with Refiner, upscaled to 4k x 4k, stays super simple, and one shared image was called the most well organised and easy to use ComfyUI workflow seen so far for showing the difference between preliminary, base and refiner setups. All of these are put together entirely in the ComfyUI interface from open-source nodes people have contributed. As a programmer, the workflow logic is usually easy to follow, but the function of each node cannot be inferred from its name alone, and the documentation's highly technical language, with no examples, makes it worse.

Batch generation is where the node granularity shines, as sketched below: give a workflow a folder of outfit images (for example outfit1.png with an outfit1.txt containing a prompt describing the outfit), give it a folder of OpenPose poses to iterate over, and create a list of emotion expressions; a text file with multiple lines in the format "emotionName|prompt for emotion" will be used.
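As a sketch of how such a batch could be enumerated outside ComfyUI (the outfit1.png / outfit1.txt pairing comes from the post; emotions.txt and the folder name are assumed):

```python
from pathlib import Path

# Parse "emotionName|prompt for emotion" lines into a dict.
emotions = {}
for line in Path("emotions.txt").read_text(encoding="utf-8").splitlines():
    if "|" in line:
        name, prompt = line.split("|", 1)
        emotions[name.strip()] = prompt.strip()

# Pair each outfit image with its .txt prompt and cross it with every emotion.
jobs = []
for img in sorted(Path("outfits").glob("*.png")):
    outfit_prompt = img.with_suffix(".txt").read_text(encoding="utf-8").strip()
    for name, emotion_prompt in emotions.items():
        jobs.append((img.name, name, f"{outfit_prompt}, {emotion_prompt}"))

print(f"{len(jobs)} outfit/emotion combinations to queue")
```

Each tuple could then be fed into a queued ComfyUI prompt, one generation per combination.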
Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, plus upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and clipseg awesomeness, and many more; these videos do include workflows, for the most part in the video description.

Upscaling deserves that attention: one user a few weeks into SD / ComfyUI reported feeling overwhelmed by the number of ways to do it. A test workflow for comparing methods of improving image resolution is kept deliberately simple (ignore the prompts and setup): load image, upscale, save image, taking SD 1.5 output from 512x512 to 2048x2048, with no attempt to fix JPEG artifacts. It is a simple way to compare these methods, if a bit messy ("I have no artistic cell in my body"); this was really a test of ComfyUI. From the ComfyUI_examples there are two different 2-pass (hires fix) methods, one latent scaling and one non-latent, and now there is also a `PatchModelAddDownscale` node; one user is trying to reproduce hires fix with a second model at a low weight. Rather than jumping straight to 4k, another built a simplified 2048x2048 workflow, then takes the final image into A1111 for refining, photobashing in some features, and re-rendering with a second model. (PS: if someone has access to Magnific AI, please upscale and post results for 256x384 at JPEG quality 5 and quality 0.)

On sampling, increasing the sample count leads to more stable and consistent results. One experiment ran a single image through SDXL 1.0 in ComfyUI to explore how doubling the sample count affects the output, especially at higher counts, watching where the image changes relative to the sampling steps. A related gotcha: the base 1.0 version of the SDXL model already has its VAE embedded, which is why using just the base model in AUTOMATIC with no VAE produces the same result.

Experiments like that are easier to run programmatically. One tool converts your workflow .json files into an executable Python script that can run without launching the ComfyUI server; potential use cases include streamlining a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values. Hopefully this will be useful to you.
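For instance, the doubling-steps experiment can be driven against a running server by editing an API-format export (the "Save (API Format)" button is covered below). A sketch under assumptions: the file name is a placeholder and the graph is assumed to contain a stock KSampler node:

```python
import copy
import json
import urllib.request

with open("workflow_api.json", encoding="utf-8") as f:  # API-format export
    base = json.load(f)

# API-format JSON maps node ids to {"class_type": ..., "inputs": ...};
# locate the sampler so its step count can be varied.
sampler_id = next(k for k, v in base.items() if v["class_type"] == "KSampler")

for steps in (10, 20, 40, 80, 160):  # double the sample count each run
    run = copy.deepcopy(base)
    run[sampler_id]["inputs"]["steps"] = steps
    run[sampler_id]["inputs"]["seed"] = 42  # fixed seed keeps runs comparable
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
        data=json.dumps({"prompt": run}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```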
Prompting advice shows up in fragments. You can use () to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8). One author documents their workflow's prompt structure like this:

How to prompt this workflow:
- Main Prompt: the subject of the image in natural language. Example: a cat with a hat in a grass field.
- Secondary Prompt: a list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name].
- Style and References

For a concrete prompt, one user just writes: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, build by Tesla, Tesla factory in the background; no breathtaking, professional, award winning, and so on, because that is already handled by "sai-enhance". The sample prompt as a test shows a really great result. On one tool's Freedom setting, u/wolowhatever explains that 5 is the default but it really depends on the image and style: most images work well around Freedom of 3, while really chaotic images, or images that actually benefit from added detail from the prompt, can look exceptionally good at ~8. Another shared render is dead simple:

- model: dreamshaper_7
- positive prompt: sexy ginger heroine in leather armor, anime
- negative prompt: ugly
- sampler: euler, steps: 20, cfg: 8
- seed: 674367638536724

That's it. One caution on SDXL: an example workflow showing the use of the other prompt windows probably isn't the fully recommended setup, since it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.

If you would rather not build graphs by hand, Swarm's Generate tab covers the all-in-one case: it adds nodes as needed if you enable LoRAs or ControlNet or want the image refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. That way the Comfy Workflow tab in Swarm becomes your version of ComfyUI, with your custom nodes.

Where can one get ready-made, elaborate workflows, for example ones that do Tile Upscale the way we're used to in AUTOMATIC1111 to produce huge images? Several places. The Comfy Workflows site lets you share, discover, and run thousands of ComfyUI workflows with zero setup (free and open source): download and drop any image from the site into ComfyUI and it will load that image's entire workflow, and you can easily upload and share your own so others can build on them (its builder says the idea came from liking how ComfyUI saves the workflow info in each image it generates). There are approximately 150 workflow examples, made with ComfyUI and AI models from Civitai, hosted at https://openart.ai/profile/neuralunk?sort=most_liked. One developer, tired of how Google image search only shows similar-looking pictures when you try to reproduce a cool image, added reverse image search that queries a workflow catalog to find workflows that produce similar-looking results. A wiki would help too: the perfect place for workflows is the GitHub wiki, the way A1111 has great categories like Features and Extensions that simply show what the repo and its addons can do; ComfyUI could show workflow screenshots like the example repo does, to demonstrate possible usage and the variety of extensions. Common starting points that circulate as shareable workflows include: merging two images together, ControlNet Depth (to enhance SDXL images), an animation workflow as a great starting point for AnimateDiff (an amazing way to generate AI videos in ComfyUI), a general ControlNet workflow, and an inpainting workflow. For FaceDetailer help, join the discussion in r/StableDiffusion.

Finally, the API side. API workflows are not the same format as an image workflow: you create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button you've probably used before; if you can't see that button, check 'enable dev mode options'. Note that ComfyUI itself only loads workflows saved with "Save", not with "Save API Format". The comfy_api_simplified package was updated to send images, run workflows, and receive images from a running ComfyUI server; its author uses it as a layer between a Telegram bot and ComfyUI, running different workflows from users' text and image input. This is also how client work tends to look: one freelancer was asked to produce a ComfyUI workflow as the backend for a mobile app front-end (which someone else is developing in React), a basic faceswap workflow.
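Under the hood, wrappers like comfy_api_simplified talk to the server's HTTP API. A raw sketch of the same round trip, assuming a default local server and a placeholder API-format file (this is not the package's own interface):

```python
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default local ComfyUI address

# A workflow exported via "Save (API Format)", not a regular Save .json.
with open("workflow_api.json", encoding="utf-8") as f:
    prompt = json.load(f)

# Queue the prompt; the response carries a prompt_id to track the job.
req = urllib.request.Request(
    f"{SERVER}/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    prompt_id = json.loads(resp.read())["prompt_id"]

# Poll history until the job finishes, then list the output images.
while True:
    with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
        history = json.loads(resp.read())
    if prompt_id in history:
        break
    time.sleep(1)

for node_id, output in history[prompt_id]["outputs"].items():
    for image in output.get("images", []):
        print(node_id, image["filename"])  # fetchable via the /view endpoint
```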
To close with a worked example: drag the Flux Schnell example image into ComfyUI to get its workflow; the PNG at https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_schnell_example.png contains the whole thing. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Flux Schnell is a distilled 4-step model, and its diffusion model weights file should go in your ComfyUI/models/unet/ folder. None of this is the final word on any of these graphs, but for a base to start at, it'll work.