Stable Diffusion UI
Stable Diffusion on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.
The easiest way to install and use. Join the Discord server for support and development discussion | See the troubleshooting guide for common problems
Step 1: Download the installer
Step 2: Run the program
- On Windows: Double-click `Start Stable Diffusion UI.cmd`
- On Linux: Run `./start.sh` in a terminal
Step 3: There is no step 3!
It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.
The installer will take care of whatever is needed. A friendly Discord community will help you if you face any problems.
Easy for new users, powerful features for advanced users
Features:
- No Dependencies or Technical Knowledge Required: 1-click install for Windows 10/11 and Linux. No dependencies, no need for WSL or Docker or Conda or technical setup. Just download and run!
- Clutter-free UI: a friendly and simple UI, while providing a lot of powerful features
- Supports "Text to Image" and "Image to Image"
- Custom Models: Use your own `.ckpt` file, by placing it inside the `models/stable-diffusion` folder!
- Live Preview: See the image as the AI is drawing it
- Task Queue: Queue up all your ideas, without waiting for the current task to finish
- In-Painting: Specify areas of your image to paint into
- Face Correction (GFPGAN) and Upscaling (RealESRGAN)
- Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
- Loopback: Use the output image as the input image for the next img2img task
- Negative Prompt: Specify aspects of the image to remove.
- Attention/Emphasis: () in the prompt increases the model's attention to enclosed words, and [] decreases it
- Weighted Prompts: Use weights for specific words in your prompt to change their importance, e.g. `red:2.4 dragon:1.2` (see the parsing sketch after this list)
- Prompt Matrix: (in beta) Quickly create multiple variations of your prompt, e.g. `a photograph of an astronaut riding a horse | illustration | cinematic lighting`
- Lots of Samplers: ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
- Multiple Prompts File: Queue multiple prompts by entering one prompt per line, or by running a text file
- NSFW Setting: A setting in the UI to control NSFW content
- JPEG/PNG output
- Save generated images to disk
- Use CPU setting: Run on your CPU if you don't have a compatible graphics card.
- Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- Low Memory Usage: Creates 512x512 images with less than 4GB of VRAM!
- Developer Console: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
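As a rough illustration of how the weighted-prompt syntax above can be interpreted, here is a minimal parsing sketch (purely illustrative, not the UI's actual implementation):

```python
def parse_weighted_prompt(prompt: str):
    """Split a prompt into (word, weight) pairs; words without ':' default to weight 1.0."""
    parts = []
    for token in prompt.split():
        if ":" in token:
            word, weight = token.rsplit(":", 1)
            parts.append((word, float(weight)))
        else:
            parts.append((token, 1.0))
    return parts

print(parse_weighted_prompt("red:2.4 dragon:1.2"))
# -> [('red', 2.4), ('dragon', 1.2)]
```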
Easy for new users:
Powerful features for advanced users:
Live Preview
Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.
Task Queue
System Requirements
- Windows 10/11, or Linux. Experimental support for Mac is coming soon.
- An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
- Minimum 8 GB of RAM and 25 GB of disk space.
You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.
Installation
- Download for Windows or for Linux.
- Extract:
  - For Windows: After unzipping the file, please move the `stable-diffusion-ui` folder to your `C:` drive (or any drive like `D:`, at the top root level), e.g. `C:\stable-diffusion-ui`. This will avoid a common problem with Windows (file path length limits).
  - For Linux: After extracting the .tar.xz file, please open a terminal, and go to the `stable-diffusion-ui` directory.
- Run:
  - For Windows: Double-click `Start Stable Diffusion UI.cmd`.
  - For Linux: In the terminal, run `./start.sh` (or `bash start.sh`).

This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.

To Uninstall: Just delete the `stable-diffusion-ui` folder to uninstall all the downloaded packages.
How to use?
Please use our guide to understand how to use the features in this UI.
Bug reports and code contributions welcome
If there are any problems or suggestions, please feel free to ask on the discord server or file an issue.
We could really use help on these aspects (click to view tasks that need your help):
If you have any code contributions in mind, please feel free to say Hi to us on the discord server. We use the Discord server for development-related discussions, and for helping users.
Disclaimer
The authors of this project are not responsible for any content generated using this interface.
The license of this software forbids you from sharing any content that violates any laws, produces harm to a person, disseminates any personal information that would be meant for harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read the license. You agree to these terms by using this software.
"Potential NSFW content" on the default prompt.
Configuration:
- OS: Windows 11 (WSL2 + Ubuntu 22.04.1)
- CPU: AMD Ryzen 5 5600X
- Memory: 64GB
- GPU: GeForce GTX 1660 SUPER
- GPU Memory: 6GB
Error message:
I get the error with the default prompt: "a photograph of an astronaut riding a horse". I tried with a 256x256 image size, and I get the same error.
Version 2 - Development
A development version of v2 is available for Windows 10/11 and Linux. Experimental support for Mac will be added soon.
The instructions for installing are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation
It is not a binary, and the source code used for building this is open at https://github.com/cmdr2/stable-diffusion-ui/tree/v2
What is this?
This version is a 1-click installer. You don't need WSL or Docker or Python or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all.
It'll download the necessary files from the original Stable Diffusion git repository, and set it up. It'll then start the browser-based interface like before.
An NSFW option is present in the interface, for users whose prompts incorrectly trigger the NSFW filter.
Is it stable?
It has run successfully for a number of users, but I would love to know if it works on more computers. Please let me know if it works or fails in this thread, it'll be really helpful! Thanks :)
PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.
cannot start up docker container
The build was successful using Windows 10, docker-compose version 1.29.2, build 5becea4c.
After running `docker-compose up`, I get ERR_EMPTY_RESPONSE on port 9000.
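ERR_EMPTY_RESPONSE generally means the browser reached port 9000 but received no data back from whatever is bound there (often a port mapping pointing at the wrong service). A quick, illustrative way to check from the host whether anything is accepting connections on that port:

```python
import socket

# Returns 0 if something on the host accepts a TCP connection on port 9000.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    result = s.connect_ex(("127.0.0.1", 9000))

print("port 9000 is accepting connections" if result == 0 else "nothing is listening on port 9000")
```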
I can't reach the UI after updating. Port 8000 works fine, it displays the redirect notice, but port 9000 returns nothing at all. I'm running Windows, so I was not (easily) able to execute the `server` file. But I opened it and executed the code below (`start_server()`) as a troubleshooting step. Without luck though.
ModuleNotFoundError: No module named 'cv2'
Python is installed and updated, and so is OpenCV.
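A `ModuleNotFoundError` for `cv2` even though OpenCV is installed usually means the server is running under a different Python environment than the one OpenCV was installed into. A small illustrative check of which interpreter is actually in use:

```python
import sys

# cv2 must be installed for the interpreter the server runs with,
# not just for the system-wide Python.
print("Interpreter:", sys.executable)

try:
    import cv2
    print("cv2 version:", cv2.__version__)
except ModuleNotFoundError:
    print("cv2 is missing from this environment (pip install opencv-python)")
```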
The following is the output:
Exception in ASGI application
First time run of this... I have a laptop Nvidia 3060 GPU, running Ubuntu in WSL on Windows 10. I tried my first prompt from the web page but got this error below. I didn't install the Nvidia driver within Ubuntu because a) it didn't recognise my GPU and b) I had all sorts of other problems. Do I need to install the Nvidia driver within WSL, or does it use the host driver?
Installing Stable Diffusion on Linux Mint Error
I tried installing Stable Diffusion v2 on Linux Mint again and again, and here's the error I got. I'm a BIG noob, so explain like I'm five!
Stable Diffusion UI
Stable Diffusion UI's git repository was already installed. Updating..
HEAD is now at 051ef56 Merge pull request #79 from iJacqu3s/patch-1
Already up to date.
Stable Diffusion's git repository was already installed. Updating..
HEAD is now at c56b493 Merge pull request #117 from neonsecret/basujindal_attn
Already up to date.
Downloading packages necessary for Stable Diffusion..
***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..
WARNING: A space was detected in your requested environment path '/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env'
Spaces in paths can sometimes be problematic.
Collecting package metadata (repodata.json): done
Solving environment: done
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
ERROR conda.core.link:_execute(730): An error occurred while installing package 'defaults::cudatoolkit-11.3.1-h2bc3f7f_2'.
Rolling back transaction: done

LinkError: post-link script failed for package defaults::cudatoolkit-11.3.1-h2bc3f7f_2
location of failed script: /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh
==> script messages <==
==> script output <==
stdout:
stderr: Traceback (most recent call last):
File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
Traceback (most recent call last):
File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh: line 3: $PREFIX/.messages.txt: ambiguous redirect
return code: 1
()
Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues
Hope this helps someone!
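For anyone hitting the same thing: the `ambiguous redirect` in `.cudatoolkit-post-link.sh`, combined with conda's warning about a space in the environment path, suggests the space in `Desktop/Stable Diffusion` is the likely culprit; moving the folder to a path without spaces is the usual fix. A trivial pre-flight check of the kind an installer could run (illustrative sketch, not part of the project):

```python
import sys

install_path = "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui"  # example path from the log above

# Conda post-link scripts redirect to an unquoted $PREFIX, so a space in the
# install path produces "ambiguous redirect" and the package install fails.
if " " in install_path:
    sys.exit("Please move stable-diffusion-ui to a folder whose path contains no spaces.")
```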
ModuleNotFoundError: No module named 'torch'
I installed and ran v2 on Windows using `Start Stable Diffusion UI.cmd`, and encountered an error running the server:
Is anyone else getting this?
start_server() in ./server not working
I am not familiar with shell scripts, but I think line 10 in `server` did not run. I can run `docker-compose up stability-ai stable-diffusion-ui` in the console manually.
It was working but suddenly config.json error
File "D:\stable-diffusion-ui\ui\server.py", line 144, in getAppConfig with open(config_json_path, 'r') as f: FileNotFoundError: [Errno 2] No such file or directory: 'D:\stable-diffusion-ui\ui\..\scripts\config.json'
Website doesn't show
I've had some trouble executing server.sh in WSL; it said something about no permission even with sudo, but with some chmod magic I got it working eventually. After executing it, Docker showed that it is using port 5000 instead of 8000 as shown in the tutorial.
When opening localhost:5000 in a browser, all the site contains is
{"docs_url":"/docs","openapi_url":"/openapi.json"}
Prevent flooding the log with warnings for GPU<3GB
Show the warning that GPUs with less than 3 GB of VRAM aren't supported only once, instead of every 5 seconds.
Fixes https://discord.com/channels/1014774730907209781/1059869326313799700
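A common way to implement this kind of rate-limited warning is to remember that it has already been logged, for example (a generic sketch, not the actual patch):

```python
import logging

log = logging.getLogger(__name__)
_warned_low_vram = False  # module-level flag: the warning is emitted at most once

def warn_if_low_vram(total_vram_gb: float):
    global _warned_low_vram
    if total_vram_gb < 3 and not _warned_low_vram:
        log.warning("GPUs with less than 3 GB of VRAM are not supported")
        _warned_low_vram = True
```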
ED2.5 (BETA): img2img fails when actual_inference_steps==0
Describe the bug When using a very small prompt strength, actual_inference_steps becomes 0 and sdkit fatally fails. This wasn't the case in 2.4 with stable-diffusion-kit.
After the failure, EasyDiffusion becomes unusable and complains that cards with less than 3GB are not supported - regardless of the GPU's actual VRAM size.
To Reproduce
Steps to reproduce the behavior: Use as input on a previously generated image.

Expected behavior
Image generation like in SDUI2.4, as used in the tips for "How to use the upscalers to upscale an image without passing it through SD".

Severity: Blocker

Logfiles: prompts-strength-0.01.txt
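One way to guard against this, assuming `actual_inference_steps` is derived from the requested step count and the img2img prompt strength (the names below are illustrative, not sdkit's actual internals), is to clamp the result to at least one step:

```python
def actual_steps(num_inference_steps: int, prompt_strength: float) -> int:
    # A very small prompt strength (e.g. 0.01) would otherwise truncate to 0 steps.
    return max(1, int(num_inference_steps * prompt_strength))

print(actual_steps(25, 0.01))  # 1 instead of 0
```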
No option for full precision mode on beta channel? Also, "CUDA: 0"
I wondered at first if maybe full precision would just be built into the balanced/fast/slow render speed/VRAM usage setting, but even on the highest setting it is still half-precision. I'm loving the beta channel right now because it has what are IMO the best samplers, and the speeds it hits with the "fast" mode selected are unlike anything I've seen before (so thank you for adding those, and kudos on achieving such speeds without sacrificing quality; it's all very impressive). But it makes it very hard to know which channel to use when one of them has all of that but no full-precision mode, and the other has none of that but does have full-precision mode.
So yeah; I'm not sure whether it's a bug or a mistake, or if it's just taking time to code for compatibility between the "half/full precision" toggle and the new "balanced/fast/slow" option. But I didn't see anyone else acknowledging it in the issues, and it's definitely been an issue for me, so I wanted to bring it to your attention in case it was an oversight. I am having some other issues on and off, but I think it's necessary to make a separate issue for them if they persist.
One other issue! [Writing the below conclusion reminded me]. From the time I started using this UI
Thanks! And thanks again for implementing the additional samplers in the beta channel; I don't know if it was in response to my post about them or not but it's awesome that you all got to that quickly! I've already gotten some fantastic results with DPM ++2M in particular, but also the Stability AI solver. I'd rank them up there with DPM 2, DPM 2a, and perhaps DDIM or Heun. Still need to experiment more with Fast, Adaptive, SDE, & 2s ancestral before I can judge against them.
Anyway, I wasn't sure whether to post this as a bug report or a feature request, because I don't know why the full precision option disappears in beta, but I would greatly appreciate the option. Just got a new GPU and I'm very excited about what it can do. OH, PS: that actually reminds me of another issue I keep meaning to ask about. From the first version of this UI I ever used, under Settings where it shows the GPU information, it has always said "CUDA: 0." Before, I was using an Nvidia GTX 1050 Ti (which had something like 700 or 800 CUDA cores). Now I'm using an RTX 3080, with more like 8,000 CUDA cores! But for some reason, with both GPUs, it has always said "CUDA: 0." Is it supposed to read how many cores it's utilizing? Or is it just a binary indicator where 0 = yes and 1 = no or something? It has always read like something showing up wrong to me, but maybe that's just what it is.
Cheers! Happy New Year.
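On the "CUDA: 0" question: that value most likely refers to the device identifier (`cuda:0`, i.e. the first GPU), not a count of CUDA cores. A small illustrative check of what PyTorch reports for the active device:

```python
import torch

if torch.cuda.is_available():
    idx = torch.cuda.current_device()  # this index is what "cuda:0" refers to
    props = torch.cuda.get_device_properties(idx)
    print(f"Active device: cuda:{idx} ({torch.cuda.get_device_name(idx)})")
    print(f"Multiprocessors: {props.multi_processor_count}")  # not the same as the CUDA core count
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected")
```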
Images not saving automatically
Since the most recent update, images are no longer saving. Even when the process is complete, when I go to close the browser window it asks me if I want to leave the page because there is still activity, even though the task is complete.
Enforce an autosave directory
Add a config.bat/sh setting `FORCE_SAVE_PATH` that can be used by server admins to restrict auto-save to a specific directory. Also useful for users who use different end devices and want to centrally configure the auto-save option. If `FORCE_SAVE_PATH` is set, the auto-save options in the UI are disabled.

Fixes #597
Fixes https://discord.com/channels/1014774730907209781/1052691036981428255
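A minimal sketch of how the server side might honor such a setting (hypothetical helper and variable names, not the PR's actual code):

```python
import os

def resolve_save_path(requested_path: str) -> str:
    # If the admin has set FORCE_SAVE_PATH, ignore the path requested by the UI
    # and always save into the enforced directory.
    forced = os.environ.get("FORCE_SAVE_PATH")
    if forced:
        os.makedirs(forced, exist_ok=True)
        return forced
    return requested_path
```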