Inpainting in ComfyUI

Requirements: WAS Suite [Text List, Text Concatenate]. Related guides: Master Tutorial - Stable Diffusion XL (SDXL) - Install On PC, Google Colab (Free) & RunPod; SDXL LoRA; SDXL Inpainting.

Inpainting with both regular and inpainting models

In addition to whole-image inpainting and mask-only inpainting, there are workflows that upscale the masked region, inpaint it at that higher resolution, and then downscale it back to the original resolution when pasting it back in. With the right workflow, ComfyUI can produce 2K and even 8K images without needing a lot of memory.

Some basics first. Inpainting replaces or edits specific areas of an image. The Load Image (as Mask) node can be used to load a channel of an image to use as a mask, and both simple upscaling and upscaling with a model (like UltraSharp) are supported. ComfyUI runs Stable Diffusion models and parameters through a workflow system: users drag and drop nodes to design advanced pipelines and can draw on libraries of existing workflows. If a ComfyUI server is already running locally before starting Krita, the Krita plugin will automatically try to connect to it. If you have previously generated images you want to upscale, you can modify the HiRes workflow to include an img2img stage. To install node packs such as SeargeSDXL, unpack the folder from the latest release into ComfyUI/custom_nodes and overwrite existing files; a config file can be used to set the search paths for models. Note that --force-fp16 will only work if you installed the latest PyTorch nightly.

When inpainting with SDXL 1.0 in ComfyUI, three methods are commonly used: the base model with a Set Latent Noise Mask, the base model with VAE Encode (for Inpainting), and the dedicated "diffusion_pytorch" UNET inpaint model from Hugging Face. The Inpaint Examples page in ComfyUI_examples (comfyanonymous.github.io) has reference workflows. A typical inpainting run looks like this: Step 1: create an inpaint mask. Step 2: open the inpainting workflow. Step 3: upload the image. Step 4: adjust parameters. Step 5: generate.

A common question is how ControlNet 1.1 inpainting works in ComfyUI; putting a black-and-white mask into the ControlNet image input, or encoding it into the latent input, often does not behave as expected. "VAE Encode (for Inpainting)" should be used with a denoise of 100%: it is meant for true inpainting and works best with inpaint models, though it will work with all models. For comparison, what Auto1111 does with "only masked" inpainting is inpaint the masked area at the resolution you set (1024x1024, for example) and then downscale the result to stitch it back into the picture; a plain-Python sketch of that idea follows below.
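The following is a minimal sketch of that crop, upscale, inpaint, downscale, and paste-back idea outside of ComfyUI, using Pillow and the diffusers inpainting pipeline. The checkpoint name, prompt, padding, and working resolution are assumptions to adapt, not values taken from any particular workflow.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed inpaint checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("input.png").convert("RGB")
mask = Image.open("mask.png").convert("L")   # white = area to repaint

# 1. Bounding box of the masked area, padded for context.
left, top, right, bottom = mask.getbbox()
pad = 32
box = (max(left - pad, 0), max(top - pad, 0),
       min(right + pad, image.width), min(bottom + pad, image.height))

# 2. Crop that region and upscale it to the model's native resolution.
work_res = (512, 512)
region = image.crop(box).resize(work_res, Image.LANCZOS)
region_mask = mask.crop(box).resize(work_res, Image.LANCZOS)

# 3. Inpaint only the cropped region.
repainted = pipe(prompt="a detailed wooden door",
                 image=region, mask_image=region_mask).images[0]

# 4. Downscale back to the crop size and paste it in, using the mask as the
#    paste alpha so unmasked pixels stay untouched.
crop_size = (box[2] - box[0], box[3] - box[1])
repainted = repainted.resize(crop_size, Image.LANCZOS)
paste_mask = mask.crop(box).resize(crop_size, Image.LANCZOS)
image.paste(repainted, box[:2], paste_mask)
image.save("output.png")
```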
ComfyUI Inpaint Color Shenanigans: in a minimal inpainting workflow, the color of the area inside the inpaint mask can fail to match the rest of the untouched rectangle, so the mask edge remains noticeable due to the color shift even when the content is consistent. A common fix is to take the new image, feed it back into inpainting with a fresh mask, and run it again at a low noise level so the seam blends in.

Getting started is straightforward: run run_nvidia_gpu in the ComfyUI folder (the first run may take a while to download and install a few things), place any helper .bat files in the same directory as your ComfyUI installation, copy models to the corresponding Comfy folders as described in the manual installation instructions, and run git pull to update. When a loaded workflow needs nodes you don't have, click "Install Missing Custom Nodes" and install or update each of them.

ComfyUI is a powerful and modular Stable Diffusion GUI and backend, and its big current advantage over Automatic1111 is that it appears to handle VRAM much better. It is a power-user tool: a casual enthusiast may be put off quickly by its complexity, while many Automatic1111 users are drawn to it precisely because of the node-based approach, which is more technically challenging but allows for unprecedented flexibility. A series of tutorials about fundamental ComfyUI skills covers masking, inpainting, and image manipulation; part 1 implements the simplest SDXL base workflow, and later parts cover simple LoRA workflows, multiple LoRAs, and an exercise comparing results with and without a LoRA. ComfyUI also comes with keyboard shortcuts to speed up your workflow.

For masking, the Conditioning (Set Mask) node can be used to limit a conditioning to a specified mask, and ComfyI2I adds new inpainting tools, notably a "Mask by Text" node that allows dynamic creation of a mask. In the base UI you draw a mask, save the image with the mask, and upload it again to inpaint; alternatively you can combine blend nodes and image-level adjustments to get the mask and outline you want, then run the workflow. For SDXL, results should stay in the resolution space of SDXL (around 1024x1024), and when adding a refiner stage, select sd_xl_refiner_1.0 in the added loader. The origin of the coordinate system in ComfyUI is the top-left corner, which matters when building masks programmatically, as in the sketch below.
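As a small illustration of that coordinate convention, here is a sketch that builds a rectangular inpaint mask in code rather than painting it by hand; the image size and rectangle coordinates are placeholder values.

```python
from PIL import Image, ImageDraw

width, height = 1024, 1024
mask = Image.new("L", (width, height), 0)        # black = keep as-is
draw = ImageDraw.Draw(mask)

# Origin is the top-left corner: x grows to the right, y grows downward.
draw.rectangle((400, 300, 700, 650), fill=255)   # white = region to repaint
mask.save("mask.png")
```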
Support for FreeU has been added and is included in v4.1 of the workflow; to use FreeU, load the new version. ComfyUI is a powerful and modular GUI for Stable Diffusion that lets you create advanced workflows using a node/graph interface. You can copy images from the Save Image node to the Load Image node by right-clicking the Save Image node, choosing "Copy (Clipspace)", then right-clicking the Load Image node and choosing "Paste (Clipspace)". For the portable build, the extracted folder is called ComfyUI_windows_portable and contains the ComfyUI, python_embeded, and update folders; to add node packs, open a command line window in the custom_nodes directory. For reference, one user measured A1111 generating an image with identical settings in 41 seconds versus 54 seconds in ComfyUI.

Whether inpainting works well with SDXL is still debated, but ComfyUI inpainting with SD 1.5 gives consistently strong results, and a dedicated SD 1.5 inpainting checkpoint is better than trying to turn a regular model into an inpainting one through ControlNet. There is also a stable-diffusion-xl-inpainting model (sdxl-1.0-inpainting-0.1) for SDXL. Stable Diffusion inpainting checkpoints are latent text-to-image diffusion models capable of generating photo-realistic images from any text input, with the extra capability of repainting regions defined by a mask. More recently, IP-Adapter and a corresponding ComfyUI node allow guiding Stable Diffusion with images rather than text. Larger node packs bundle much of this: Searge-SDXL v4.0 for ComfyUI includes a Hand Detailer, Face Detailer, Free Lunch (FreeU), Image Chooser, XY Plot, ControlNet/Control-LoRAs, fine-tuned SDXL models, SDXL Base+Refiner, ReVision, upscalers, a prompt builder, and debugging tools, while ComfyShop has been introduced to the ComfyI2I family to build complex scenes by combining and modifying multiple images in a stepwise fashion. If you want a workflow that generates a low-resolution image and then upscales it immediately, the HiRes examples are exactly that.

In the workflow itself, press Send to Inpainting to send a newly generated image to the inpainting tab and make sure the Inpaint tab is selected. The denoise value controls how much noise is added to the image, and a center-crop option controls whether the image is cropped to maintain the aspect ratio of the original latents. The area of the mask can be increased using grow_mask_by to give the inpainting process some context around the masked region; a rough equivalent outside ComfyUI is sketched below.
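The sketch below mimics grow_mask_by with Pillow: dilate the mask outward by a few pixels and feather its edge so the repainted region blends into its surroundings. The grow distance and blur radius are arbitrary starting points, not recommended values.

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")

grow_px = 16
# MaxFilter needs an odd kernel size; it pushes white (masked) areas outward.
grown = mask.filter(ImageFilter.MaxFilter(2 * grow_px + 1))

# Optional: soften the edge so the seam is less visible after pasting back.
feathered = grown.filter(ImageFilter.GaussianBlur(radius=8))
feathered.save("mask_grown.png")
```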
For hands-on mask editing inside the graph, ComfyShop opens by right-clicking any image node that outputs an image and a mask; the ComfyShop option appears in the same place you would normally see MaskEditor. The Krita plugin offers a similar experience, letting you use ComfyUI's best features while working on a canvas: if you uncheck and hide a layer, it is excluded from the inpainting process, and layers give intuitive control over composition. For outpainting, SD-infinity and the auto-sd-krita extension are options.

Inpainting with the standard "v1-5-pruned.ckpt" model works fine, so if results are poor the problem is usually the specific model being used. For SD 1.5, some users find the inpainting ControlNet more useful than the fine-tuned inpainting models, which is why many are waiting for ControlNet-XL ComfyUI nodes to open the same possibilities for SDXL; use global_inpaint_harmonious when you want to set the inpainting denoising strength high. Dedicated inpaint models are good for removing objects from an image, and generally better at it than using higher denoising strengths or latent noise. Conceptually, inpainting masks a region as a "hole" and has the model (e.g., Stable Diffusion) fill that hole according to the text prompt; the LaMa paper by Suvorov et al. and "Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model" are useful background reading.

To get going: follow the ComfyUI manual installation instructions for Windows and Linux, start ComfyUI by running the run_nvidia_gpu.bat file, select a workflow, and hit the Render button. You can also download an example image and drag and drop it into ComfyUI to load its workflow, and drag and drop images onto the Load Image node to load them quickly. Other features include embeddings/textual inversion, area composition, inpainting with both regular and inpainting models, ControlNet and T2I-Adapter, upscale models, unCLIP models, and more, and one community pack so far includes four custom nodes that perform masking functions such as blur, shrink, grow, and mask from prompt. Remember that VAE inpainting needs to be run at a denoise of 1.0; if you want to preserve more of the original content, use inpainting models at low denoise with a latent noise mask instead. Here is a basic example of how you might code the core operation with an inpainting pipeline.
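This is only a minimal sketch of that "mask a region and have Stable Diffusion redraw it" step, written against the diffusers inpainting pipeline rather than any ComfyUI node; the checkpoint, prompts, and file names are assumptions. The strength argument plays roughly the same role as the denoise value discussed above.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))   # white = redraw

result = pipe(
    prompt="a wooden park bench",        # what to draw inside the mask
    negative_prompt="blurry, deformed",
    image=image,
    mask_image=mask,
    strength=1.0,            # like denoise: 1.0 = full repaint, lower keeps more original
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```

Dropping strength to something like 0.35 behaves much like inpainting with a latent noise mask at low denoise: the composition under the mask is largely preserved and only details change.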
Remember to use a checkpoint made specifically for inpainting with "VAE Encode (for Inpainting)", otherwise it won't work well. That said, the two approaches trade places depending on the job: sometimes you get better results replacing "VAE Encode" plus "Set Latent Noise Mask" with "VAE Encode (for Inpainting)", and when you want to keep most of the original image it is usually better to use the Set Latent Noise Mask node instead of the VAE inpainting node. Either way the flow is the same: first create a mask on a pixel image, then encode it into a latent image. For small features like pupils, where the mask is generated at nearly point level, growing the mask is necessary to give the inpainting process enough area to work with. For higher-quality inpaints, the Impact Pack's SEGSDetailer node is recommended, and the FaceDetailer node from ComfyUI-Impact can improve faces further. Outpainting just uses a normal model, and the UNETLoader node is used to load the "diffusion_pytorch_model" inpaint UNET directly.

Related tooling includes dynamic layer manipulation for intuitive image synthesis in ComfyUI, a GIMP plugin that turns GIMP into a ComfyUI front end, a Photoshop integration for generating directly on the canvas, and the AnimateDiff guide and workflows (including prompt scheduling) for generating AI video. A recent change in ComfyUI conflicted with one custom inpainting implementation, but that is now fixed and inpainting should work again.

The create-a-pixel-mask-then-encode flow described above maps directly onto ComfyUI's API format, as in the sketch below.
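Here is a rough sketch of driving that flow through ComfyUI's HTTP API instead of the graph editor. The node class names and input keys are written from memory of the core nodes (LoadImage, VAEEncodeForInpaint, KSampler, and so on) and should be checked against your ComfyUI version, for example by exporting a working graph with "Save (API Format)"; the checkpoint and image file names are placeholders.

```python
import json
import urllib.request

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd-v1-5-inpainting.ckpt"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a wooden bench in a park", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, deformed", "clip": ["1", 1]}},
    # LoadImage outputs the pixels (index 0) and a mask (index 1).
    "4": {"class_type": "LoadImage",
          "inputs": {"image": "input_with_mask.png"}},
    "5": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["4", 0], "vae": ["1", 2],
                     "mask": ["4", 1], "grow_mask_by": 6}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "denoise": 1.0}},   # true inpainting: full denoise
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "inpaint"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                      # default local ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```

Swapping node 5 for a plain VAEEncode followed by SetLatentNoiseMask, and lowering the KSampler denoise, gives the keep-most-of-the-image variant discussed above.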
There are also custom nodes that let you use the CLIPSeg model inside ComfyUI to dynamically mask areas of an image based on a text prompt. In ComfyUI and Stable Diffusion generally, you can think of the nodes as a set of "machines" that each do one job. ComfyUI was created in January 2023 by comfyanonymous, who built the tool to learn how Stable Diffusion works; while it can do regular txt2img and img2img, it really shines when filling in missing regions. Use the paintbrush tool to create a mask, or download one of the example images and load it into ComfyUI via the menu on the right, which sets up all the nodes for you. If you're running on Linux, or under a non-admin account on Windows, make sure ComfyUI/custom_nodes, ComfyUI_I2I, and ComfyI2I are writable, and open a command line window in the custom_nodes directory when installing node packs. A minimal sketch of the text-prompt-to-mask idea, outside ComfyUI, follows below.
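This sketch uses the CLIPSeg model from Hugging Face transformers to turn a text description into an inpainting mask and then feeds that mask to a standard inpainting pipeline. It is not the custom node mentioned above, just the same idea in plain Python; the threshold, prompts, and file names are assumptions to tune.

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("input.png").convert("RGB")

# 1. Text prompt -> low-resolution heatmap of where that thing is.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
inputs = processor(text=["the dog"], images=[image], return_tensors="pt", padding=True)
with torch.no_grad():
    logits = seg_model(**inputs).logits

# 2. Heatmap -> binary mask at the original image size.
heat = torch.sigmoid(logits).squeeze().numpy()
mask = Image.fromarray(((heat > 0.35) * 255).astype(np.uint8)).resize(image.size)

# 3. Inpaint the masked region with a new prompt.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt="a golden retriever wearing a red scarf",
              image=image.resize((512, 512)),
              mask_image=mask.resize((512, 512))).images[0]
result.save("clipseg_inpaint.png")
```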
The "VAE Encode (for Inpainting)" node, found under latent > inpaint, works just like the regular VAE encoder except that you need to connect the mask output from Load Image to it. Don't use it when you want to keep most of the original content: it is designed to be sampled at a denoise of 1.0. The dedicated SDXL inpaint model is trained for 40k steps at a resolution of 1024x1024; you have to download it from Hugging Face and put it in the "unet" folder inside your ComfyUI models folder. The inpaint ControlNet is just another ControlNet, one trained to fill in masked parts of images, and as long as you are running the latest ControlNet and models it should just work.

Right off the bat, ComfyUI does all the Automatic1111 staples, including textual inversions/embeddings, LoRAs, and inpainting, and it stitches the keywords, seed, and settings into the PNG metadata, so loading a generated image retrieves the entire workflow. Note that in ComfyUI txt2img and img2img are the same node, and it helps to use simple prompts without "fake" enhancers like "masterpiece, photorealistic, 4k, 8k, super realistic, realism". To share a seed, drag the output of one random-number node to each sampler so they all use the same value. A common SD 1.5 workflow used to be: 1) img2img upscale (this corrects a lot of details), 2) inpainting with ControlNet, 3) ControlNet tile for upscaling, 4) a final pass with an upscaler model; that exact chain does not carry over to SDXL unchanged. For automatic hands or face fixes, the Detailer from the ComfyUI Impact Pack and adetailer-style tools auto-detect, mask, and inpaint with a detection model.

One more practical point about masks: when you erase the region to repaint in an editor and save the image, the mask travels with the file as transparency, and that is what the Load Image node's mask output picks up; the sketch below shows the same extraction in plain Python.
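A small sketch of recovering that mask from the alpha channel of a PNG saved with a transparent (erased) region. As I understand the Load Image node's behaviour, transparent pixels become the masked area; file names here are placeholders.

```python
from PIL import Image, ImageOps

rgba = Image.open("edited_with_transparency.png").convert("RGBA")
alpha = rgba.getchannel("A")

# Transparent pixels (alpha = 0) become white in the mask: the area to repaint.
mask = ImageOps.invert(alpha)
mask.save("mask_from_alpha.png")

# The RGB part is what gets encoded as pixels.
rgba.convert("RGB").save("pixels.png")
```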
A few closing notes. Q: Why not use ComfyUI for inpainting? A: ComfyUI has had issues with some inpainting models (see the relevant issue tracker for details), and the sd-webui-comfyui extension can embed ComfyUI workflows inside sections of the normal A1111 pipeline if you want both. The Set Latent Noise Mask node adds a mask to the latent images for inpainting; at a denoise of 1.0 it should essentially ignore the original image under the masked area, while lower values preserve more of it. You can edit the mask directly on the Load Image node, and you can add the mask yourself, but the inpainting is still done with only the pixels currently inside the masked area. The VAE Encode (Tiled) node encodes images in tiles, allowing it to handle larger images than the regular VAE Encode node. Occasionally, when an update introduces a new parameter, the values of nodes created in a previous version can shift into different fields; this can cause unintended results or errors, so check node values after updating. For SDXL, resolutions like 896x1152 or 1536x640 work well, and remember to save the workflow when you are done. ComfyUI doesn't have every feature Automatic1111 has, but it opens up a ton of custom workflows and generates substantially faster without the accumulated bloat. For improving faces and hands, the usual approach is a detection-based mask plus a low-denoise inpaint over each detected region; a rough sketch of that loop follows.
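The sketch below shows the shape of that detect-mask-inpaint loop using OpenCV's bundled Haar cascade for face detection; dedicated detectors (YOLO-based, MediaPipe, or the Impact Pack's own) work better, and the prompt, padding, and strength values are assumptions.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

img_bgr = cv2.imread("input.png")
gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

# 1. Detect faces and paint their (padded) boxes into a mask.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

mask = np.zeros(gray.shape, dtype=np.uint8)
for (x, y, w, h) in faces:
    pad = int(0.2 * w)   # a little context around each face
    cv2.rectangle(mask, (x - pad, y - pad), (x + w + pad, y + h + pad), 255, -1)

# 2. Low-strength inpaint over the detected regions to refine them.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.fromarray(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)).resize((512, 512))
mask_img = Image.fromarray(mask).resize((512, 512))
result = pipe(prompt="detailed face, sharp eyes",
              image=image, mask_image=mask_img, strength=0.5).images[0]
result.save("face_detailed.png")
```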