Easiest 1-click way to install and use Stable Diffusion on your own computer. Provides a browser UI for generating images from text prompts and images: just enter your text prompt and see the generated image.

  • Last update: Nov 22, 2022
  • Comments: 16

Stable Diffusion UI

Easiest way to install and use Stable Diffusion on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.

Discord Server (for support, and development discussion) | Troubleshooting guide for common problems


Step 1: Download the installer

Step 2: Run the program

  • On Windows: Double-click Start Stable Diffusion UI.cmd
  • On Linux: Run ./start.sh in a terminal

Step 3: There is no step 3!

It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.

The installer will take care of whatever is needed. A friendly Discord community will help you if you face any problems.


Easy for new users, powerful features for advanced users

Features:

  • No Dependencies or Technical Knowledge Required: 1-click install for Windows 10/11 and Linux. No dependencies, no need for WSL or Docker or Conda or technical setup. Just download and run!
  • Clutter-free UI: a friendly and simple UI, while providing a lot of powerful features
  • Supports "Text to Image" and "Image to Image"
  • Custom Models: Use your own .ckpt file, by placing it inside the models/stable-diffusion folder!
  • Live Preview: See the image as the AI is drawing it
  • Task Queue: Queue up all your ideas, without waiting for the current task to finish
  • In-Painting: Specify areas of your image to paint into
  • Face Correction (GFPGAN) and Upscaling (RealESRGAN)
  • Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
  • Loopback: Use the output image as the input image for the next img2img task
  • Negative Prompt: Specify aspects of the image to remove.
  • Attention/Emphasis: () in the prompt increases the model's attention to enclosed words, and [] decreases it
  • Weighted Prompts: Use weights for specific words in your prompt to change their importance, e.g. red:2.4 dragon:1.2
  • Prompt Matrix: (in beta) Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut riding a horse | illustration | cinematic lighting
  • Lots of Samplers: ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
  • Multiple Prompts File: Queue multiple prompts by entering one prompt per line, or by running a text file
  • NSFW Setting: A setting in the UI to control NSFW content
  • JPEG/PNG output
  • Save generated images to disk
  • Use CPU Setting: Runs on your CPU if you don't have a compatible graphics card
  • Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
  • Low Memory Usage: Creates 512x512 images with less than 4GB of VRAM!
  • Developer Console: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
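The prompt matrix feature above can be sketched in a few lines of Python. This assumes the common interpretation (the first `|`-separated segment is the base prompt, each further segment is optional, and every combination is generated); `expand_prompt_matrix` is an illustrative name, not part of this project's code:

```python
from itertools import combinations

def expand_prompt_matrix(prompt):
    """Expand 'base | opt1 | opt2' into every base-plus-options combination."""
    parts = [p.strip() for p in prompt.split("|")]
    base, options = parts[0], parts[1:]
    variants = []
    for r in range(len(options) + 1):           # pick 0..N optional segments
        for combo in combinations(options, r):  # keeps the original order
            variants.append(", ".join((base,) + combo))
    return variants

for p in expand_prompt_matrix(
    "a photograph of an astronaut riding a horse | illustration | cinematic lighting"
):
    print(p)  # 4 variants: 2^2 combinations of the two optional segments
```

Two optional segments yield four prompts, so a single line in the prompt box queues four related images.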

Easy for new users:

Screenshot of the initial UI

Powerful features for advanced users:

Screenshot of advanced settings

Live Preview

Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.

live-512

Task Queue

Screenshot of task queue

System Requirements

  1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
  2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
  3. Minimum 8 GB of RAM and 25GB of disk space.

You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.

Installation

  1. Download for Windows or for Linux.

  2. Extract:

  • For Windows: After unzipping the file, please move the stable-diffusion-ui folder to your C: (or any drive like D:, at the top root level), e.g. C:\stable-diffusion-ui. This will avoid a common problem with Windows (file path length limits).
  • For Linux: After extracting the .tar.xz file, please open a terminal, and go to the stable-diffusion-ui directory.
  3. Run:
  • For Windows: Start Stable Diffusion UI.cmd by double-clicking it.
  • For Linux: In the terminal, run ./start.sh (or bash start.sh)

This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.

To uninstall: just delete the stable-diffusion-ui folder. This removes all the downloaded packages.

How to use?

Please use our guide to understand how to use the features in this UI.

Bug reports and code contributions welcome

If there are any problems or suggestions, please feel free to ask on the discord server or file an issue.

We could really use help on several aspects of this project.

If you have any code contributions in mind, please feel free to say Hi to us on the discord server. We use the Discord server for development-related discussions, and for helping users.

Disclaimer

The authors of this project are not responsible for any content generated using this interface.

The license of this software forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information meant for harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions, please read the license. You agree to these terms by using this software.

Github

https://github.com/cmdr2/stable-diffusion-ui

Comments (16)

  • 1

    "Potential NSFW content" on the default prompt.

    Configuration:

    • OS: Windows 11 (WSL2 + Ubuntu 22.04.1)
    • CPU: AMD Ryzen 5 5600X
    • Memory: 64GB
    • GPU: GeForce GTX 1660 SUPER
    • GPU Memory: 6GB

    docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
    > Windowed mode
    > Simulation data stored in video memory
    > Single precision floating point simulation
    > 1 Devices used for simulation
    GPU Device 0: "Turing" with compute capability 7.5
    
    > Compute 7.5 CUDA device: [NVIDIA GeForce GTX 1660 SUPER]
    22528 bodies, total time for 10 iterations: 32.767 ms
    = 154.884 billion interactions per second
    = 3097.676 single-precision GFLOP/s at 20 flops per interaction
    

    Error message:

    sd                                    | Using seed: 922
    50it [00:32,  1.56it/s]               |
    sd                                    | Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.
    sd                                    | INFO:     172.18.0.4:36142 - "POST /predictions HTTP/1.1" 500 Internal Server Error
    sd                                    | ERROR:    Exception in ASGI application
    sd                                    | Traceback (most recent call last):
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    sd                                    |     result = await app(self.scope, self.receive, self.send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    sd                                    |     return await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
    sd                                    |     await super().__call__(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    sd                                    |     await self.middleware_stack(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    sd                                    |     raise exc
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    sd                                    |     await self.app(scope, receive, _send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
    sd                                    |     raise exc
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
    sd                                    |     await self.app(scope, receive, sender)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    sd                                    |     raise e
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    sd                                    |     await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
    sd                                    |     await route.handle(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
    sd                                    |     await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
    sd                                    |     response = await func(request)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
    sd                                    |     raw_response = await run_endpoint_function(
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    sd                                    |     return await run_in_threadpool(dependant.call, **values)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    sd                                    |     return await anyio.to_thread.run_sync(func, *args)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    sd                                    |     return await get_asynclib().run_sync_in_worker_thread(
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    sd                                    |     return await future
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    sd                                    |     result = context.run(func, *args)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
    sd                                    |     output = predictor.predict(**request.input.dict())
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd                                    |     return func(*args, **kwargs)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    sd                                    |     return func(*args, **kwargs)
    sd                                    |   File "/src/predict.py", line 113, in predict
    sd                                    |     raise Exception("NSFW content detected, please try a different prompt")
    sd                                    | Exception: NSFW content detected, please try a different prompt
    sd-ui                                 | INFO:     172.18.0.1:34184 - "POST /image HTTP/1.1" 500 Internal Server Error
    

    I get the error with the default prompt, "a photograph of an astronaut riding a horse". I tried with a 256x256 image size and get the same error.
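    For context, the traceback above ends in an explicit raise in predict.py (line 113): when the safety checker flags any generated image, the whole request fails with a 500 instead of returning results. A self-contained sketch of that control flow (the function name and flag list are illustrative, not the project's actual code):

```python
def check_safety(images, nsfw_flags):
    """Sketch of the server-side filter seen in the traceback:
    one flagged image fails the entire request."""
    if any(nsfw_flags):
        raise Exception("NSFW content detected, please try a different prompt")
    return images

# A false positive on a harmless prompt (as in the report above)
# turns the whole request into a 500 error:
try:
    check_safety(["astronaut.png"], nsfw_flags=[True])
except Exception as e:
    print(e)  # NSFW content detected, please try a different prompt
```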

  • 2

    Version 2 - Development

    A development version of v2 is available for Windows 10/11 and Linux. Experimental support for Mac will be added soon.

    The instructions for installing are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation

    It is not a binary, and the source code used for building this is open at https://github.com/cmdr2/stable-diffusion-ui/tree/v2

    What is this?

    This version is a 1-click installer. You don't need WSL or Docker or Python or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all.

    It'll download the necessary files from the original Stable Diffusion git repository, and set it up. It'll then start the browser-based interface like before.

    An NSFW option is present in the interface, for users whose prompts incorrectly trigger the NSFW filter.

    Is it stable?

    It has run successfully for a number of users, but I would love to know if it works on more computers. Please let me know in this thread whether it works or fails; that'll be really helpful! Thanks :)

    PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.

  • 3

    cannot start up docker container

    The build was successful on Windows 10 with docker-compose version 1.29.2, build 5becea4c.

    After running docker-compose up:

    Starting sd ... error
    
    ERROR: for sd  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
    
    ERROR: for stability-ai  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
    ERROR: Encountered errors while bringing up the project.
    
  • 4

    ERR_EMPTY_RESPONSE on port 9000

    I can't reach the UI after the update. Port 8000 works fine (it displays the redirect notice), but port 9000 returns nothing at all. I'm running Windows, so I was not (easily) able to execute the server file, but I opened it and executed the code below (start_server()) as a troubleshooting step. Without luck, though.

    docker-compose up -d stable-diffusion-old-port-redirect
    docker-compose up stability-ai stable-diffusion-ui
    
  • 5

    ModuleNotFoundError: No module named 'cv2'

    Python is installed and up to date, and so is OpenCV.

    The following is the output:

    "Ready to rock!"
    
    started in  C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
    ←[32mINFO←[0m:     Started server process [←[36m16544←[0m]
    ←[32mINFO←[0m:     Waiting for application startup.
    ←[32mINFO←[0m:     Application startup complete.
    ←[32mINFO←[0m:     Uvicorn running on ←[1mhttp://127.0.0.1:9000←[0m (Press CTRL+C to quit)
    Traceback (most recent call last):
      File "C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 56, in ping
        from sd_internal import runtime
      File "C:\Users\atomica\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
        import cv2
    ModuleNotFoundError: No module named 'cv2'
    
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /favicon.ico HTTP/1.1←[0m" ←[31m404 Not Found←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:52859 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
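
    The ModuleNotFoundError means the Python environment the server runs in can't import cv2, even if another Python installation on the machine has OpenCV. A quick stdlib-only way to check what the running interpreter can actually see:

```python
import importlib.util
import sys

def module_available(name):
    """True if the *current* interpreter can import `name`."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)            # which Python is actually running
print(module_available("cv2"))   # False reproduces the error above
```

    Running this from the project's own environment (e.g. via the Developer Console mentioned in the features list) shows whether OpenCV needs installing into that specific environment rather than the system Python.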
    
  • 6

    Exception in ASGI application

    First time run of this... I have a laptop Nvidia 3060 GPU, running Ubuntu in WSL on Windows 10. I tried my first prompt from the web page but got this error below. I didn't install the Nvidia driver within Ubuntu because a) it didn't recognise my GPU and b) I had all sorts of other problems. Do I need to install the Nvidia driver within WSL, or does it use the host driver?

    sd     | ERROR:    Exception in ASGI application
    sd     | Traceback (most recent call last):
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    sd     |     result = await app(self.scope, self.receive, self.send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    sd     |     return await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
    sd     |     await super().__call__(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    sd     |     await self.middleware_stack(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    sd     |     raise exc
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    sd     |     await self.app(scope, receive, _send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
    sd     |     raise exc
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
    sd     |     await self.app(scope, receive, sender)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    sd     |     raise e
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    sd     |     await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
    sd     |     await route.handle(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
    sd     |     await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
    sd     |     response = await func(request)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
    sd-ui  | INFO:     172.18.0.1:59682 - "POST /image HTTP/1.1" 500 Internal Server Error
    sd     |     raw_response = await run_endpoint_function(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    sd     |     return await run_in_threadpool(dependant.call, **values)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    sd     |     return await anyio.to_thread.run_sync(func, *args)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    sd     |     return await get_asynclib().run_sync_in_worker_thread(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    sd     |     return await future
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    sd     |     result = context.run(func, *args)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
    sd     |     output = predictor.predict(**request.input.dict())
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd     |     return func(*args, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    sd     |     return func(*args, **kwargs)
    sd     |   File "/src/predict.py", line 88, in predict
    sd     |     output = self.pipe(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd     |     return func(*args, **kwargs)
    sd     |   File "/src/image_to_image.py", line 156, in __call__
    sd     |     noise_pred = self.unet(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 168, in forward
    sd     |     sample = upsample_block(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_blocks.py", line 1037, in forward
    sd     |     hidden_states = attn(hidden_states, context=encoder_hidden_states)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 168, in forward
    sd     |     x = block(x, context=context)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 196, in forward
    sd     |     x = self.attn1(self.norm1(x)) + x
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 254, in forward
    sd     |     attn = sim.softmax(dim=-1)
    sd     | RuntimeError: CUDA error: unknown error
    sd     | CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
    sd     | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    
  • 7

    Installing Stable Diffusion on Linux Mint Error

    I tried installing Stable Diffusion v2 on Linux Mint again and again, and here's the error I got. I'm a BIG noob, so explain like I'm five!


    Stable Diffusion UI

    Stable Diffusion UI's git repository was already installed. Updating..
    HEAD is now at 051ef56 Merge pull request #79 from iJacqu3s/patch-1
    Already up to date.
    Stable Diffusion's git repository was already installed. Updating..
    HEAD is now at c56b493 Merge pull request #117 from neonsecret/basujindal_attn
    Already up to date.

    Downloading packages necessary for Stable Diffusion..

    ***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..

    WARNING: A space was detected in your requested environment path '/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env'
    Spaces in paths can sometimes be problematic.
    Collecting package metadata (repodata.json): done
    Solving environment: done
    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    ERROR conda.core.link:_execute(730): An error occurred while installing package 'defaults::cudatoolkit-11.3.1-h2bc3f7f_2'.
    Rolling back transaction: done

    LinkError: post-link script failed for package defaults::cudatoolkit-11.3.1-h2bc3f7f_2
    location of failed script: /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh
    ==> script messages <==
    ==> script output <==
    stdout:
    stderr:
    Traceback (most recent call last):
      File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in <module>
        from conda.cli import main
    ModuleNotFoundError: No module named 'conda'
    Traceback (most recent call last):
      File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in <module>
        from conda.cli import main
    ModuleNotFoundError: No module named 'conda'
    /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh: line 3: $PREFIX/.messages.txt: ambiguous redirect

    return code: 1

    ()

    Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues


    Hope this helps someone!

  • 8

    ModuleNotFoundError: No module named 'torch'

    I installed and ran v2 on Windows using Start Stable Diffusion UI.cmd, and encountered an error running the server:

    started in  C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
    ←[32mINFO←[0m:     Started server process [←[36m12336←[0m]
    ←[32mINFO←[0m:     Waiting for application startup.
    ←[32mINFO←[0m:     Application startup complete.
    ←[32mINFO←[0m:     Uvicorn running on ←[1mhttp://127.0.0.1:9000←[0m (Press CTRL+C to quit)
    ←[32mINFO←[0m:     127.0.0.1:51205 - "←[1mGET / HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:51205 - "←[1mGET /modifiers.json HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:51205 - "←[1mGET /output_dir HTTP/1.1←[0m" ←[32m200 OK←[0m
    Traceback (most recent call last):
      File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 63, in ping
        from sd_internal import runtime
      File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
        import torch
    ModuleNotFoundError: No module named 'torch'
    
    ←[32mINFO←[0m:     127.0.0.1:51205 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:51205 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    ←[32mINFO←[0m:     127.0.0.1:51205 - "←[1mGET /ping HTTP/1.1←[0m" ←[32m200 OK←[0m
    

    Is anyone else getting this?

  • 9

    start_server() in ./server not working

    I am not familiar with shell scripts, but I think line 10 in server did not run. I can run docker-compose up stability-ai stable-diffusion-ui manually in a console.

  • 10

    It was working but suddenly config.json error

    File "D:\stable-diffusion-ui\ui\server.py", line 144, in getAppConfig
      with open(config_json_path, 'r') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'D:\stable-diffusion-ui\ui\..\scripts\config.json'
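    The traceback comes from an unguarded open() in getAppConfig. A defensive sketch (not the project's actual code) would fall back to defaults when the file is missing instead of crashing:

```python
import json
import os

def get_app_config(config_json_path, defaults=None):
    """Load a JSON config file, returning defaults if it doesn't exist."""
    if not os.path.exists(config_json_path):
        return dict(defaults or {})
    with open(config_json_path, "r") as f:
        return json.load(f)

# A missing scripts/config.json no longer raises FileNotFoundError:
config = get_app_config("scripts/config.json", defaults={})
```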

  • 11

    Website doesn't show

    I had some trouble executing server.sh in WSL; it said something about no permissions even with sudo, but with some chmod magic I eventually got it working. After executing it, Docker showed that it is using port 5000 instead of 8000 as shown in the tutorial.

    When opening localhost:5000 in a browser, all the site contains is

    {"docs_url":"/docs","openapi_url":"/openapi.json"}

  • 12

    Image Editor

    Implemented an image editor that replaces the old inpainting editor (and our dependency on drawingboard.js) and adds functionality for drawing on top of existing images.

    Here's what the new main screen UI looks like: image

    And here's what it looks like with an image selected: image

    • The "Browse" button will let you open an image from your machine
    • The "Draw" button will edit an existing image or create a new one
    • The "Inpainting" button is for editing the inpainting

    Here's the image editor with some examples of the kinds of drawing you can do: image

    Here's the inpainting editor with a spot painted as an example: image

    Mobile works OK, but could probably be made a bit better by someone who knows more. The main issue is that when you touch and drag, it tries to scroll; I'm not sure how to disable that.

    As always, let me know if you'd like any changes! A lot of adjustments and tweaks could be made, since a lot of stuff was added for this, so feel free to let me know if there's something in particular that should be changed.

  • 13

    Beta: OSError: Windows requires Developer Mode to be activated

    Describe the bug: SD UI beta halts with this message in the console: OSError: Windows requires Developer Mode to be activated

    Steps to reproduce the behavior:

    1. Switch to beta and stop
    2. Copy sd2 models into models stable diffusion folder
    3. Start SD and switch to sd2 and restart it

    Expected behavior: for it to accept the sd2_*.ckpt models. It only works if none are present. I only tried with the official sd2 models (512 and 768).

    Desktop (please complete the following information):

    • OS: Win 10
    • Browser: Edge
    • Version: Latest

    Additional context

    D:\stable-diffusion-ui-beta\installer\Library\bin\git.exe
    d:\Program Files\Git\cmd\git.exe
    git version 2.34.1.windows.1
    D:\stable-diffusion-ui-beta\installer\Library\bin\conda.bat
    D:\stable-diffusion-ui-beta\installer\Scripts\conda.exe
    conda 4.14.0
    
    "Stable Diffusion UI - v2"
    
    "Stable Diffusion UI's git repository was already installed. Updating from beta.."
    HEAD is now at cb02b5b Merge pull request #567 from madrang/tabs-css
    Already on 'beta'
    Your branch is up to date with 'origin/beta'.
    remote: Enumerating objects: 24, done.
    remote: Counting objects: 100% (24/24), done.
    remote: Compressing objects: 100% (4/4), done.
    remote: Total 24 (delta 20), reused 24 (delta 20), pack-reused 0
    Unpacking objects: 100% (24/24), 3.13 KiB | 2.00 KiB/s, done.
    From https://github.com/cmdr2/stable-diffusion-ui
       cb02b5b..3d0cdc1  beta       -> origin/beta
    Updating cb02b5b..3d0cdc1
    Fast-forward
     CHANGES.md                     |   1 +
     scripts/on_sd_start.bat        |   8 +--
     scripts/on_sd_start.sh         |   8 +--
     ui/index.html                  |   2 +-
     ui/sd_internal/runtime.py      | 110 +++++++++++++++++++++++------------------
     ui/sd_internal/task_manager.py |  45 +++++------------
     ui/server.py                   |   9 +++-
     7 files changed, 87 insertions(+), 96 deletions(-)
    463 File(s) copied
            1 file(s) copied.
            1 file(s) copied.
            1 file(s) copied.
            1 file(s) copied.
            1 file(s) copied.
            1 file(s) copied.
    A subdirectory or file tmp already exists.
    
    Hotfixed broken JSON file from OpenAI
    "Stable Diffusion's git repository was already installed. Updating.."
    HEAD is now at 6e2f821 Update ddim.py
    remote: Enumerating objects: 7, done.
    remote: Counting objects: 100% (7/7), done.
    remote: Compressing objects: 100% (3/3), done.
    remote: Total 7 (delta 4), reused 7 (delta 4), pack-reused 0
    Unpacking objects: 100% (7/7), 698 bytes | 1024 bytes/s, done.
    From https://github.com/easydiffusion/diffusion-kit
       6e2f821..8878d67  v2         -> origin/v2
    You are not currently on a branch.
    Please specify which branch you want to merge with.
    See git-pull(1) for details.
    
        git pull <remote> <branch>
    
    Previous HEAD position was 6e2f821 Update ddim.py
    HEAD is now at 8878d67 Image callback in DDIM.decode()
    "Packages necessary for Stable Diffusion were already installed"
    "Packages necessary for GFPGAN (Face Correction) were already installed"
    "Packages necessary for ESRGAN (Resolution Upscaling) were already installed"
    "Packages necessary for Stable Diffusion UI were already installed"
    "Data files (weights) necessary for Stable Diffusion were already downloaded. Using the HuggingFace 4 GB Model."
    "Data files (weights) necessary for GFPGAN (Face Correction) were already downloaded"
    "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus were already downloaded"
    "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus_anime were already downloaded"
    "Data files (weights) necessary for the default VAE (sd-vae-ft-mse-original) were already downloaded"
    Requirement already satisfied: open_clip_torch==2.0.2 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (2.0.2)
    Requirement already satisfied: ftfy in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from open_clip_torch==2.0.2) (6.1.1)
    Requirement already satisfied: tqdm in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from open_clip_torch==2.0.2) (4.64.1)
    Requirement already satisfied: huggingface-hub in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from open_clip_torch==2.0.2) (0.9.1)
    Requirement already satisfied: torch>=1.9 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from open_clip_torch==2.0.2) (1.11.0)
    Requirement already satisfied: torchvision in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from open_clip_torch==2.0.2) (0.12.0)
    Requirement already satisfied: regex in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from open_clip_torch==2.0.2) (2022.9.13)
    Requirement already satisfied: typing_extensions in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from torch>=1.9->open_clip_torch==2.0.2) (4.3.0)
    Requirement already satisfied: wcwidth>=0.2.5 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from ftfy->open_clip_torch==2.0.2) (0.2.5)
    Requirement already satisfied: filelock in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from huggingface-hub->open_clip_torch==2.0.2) (3.8.0)
    Requirement already satisfied: pyyaml>=5.1 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from huggingface-hub->open_clip_torch==2.0.2) (6.0)
    Requirement already satisfied: requests in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from huggingface-hub->open_clip_torch==2.0.2) (2.28.1)
    Requirement already satisfied: packaging>=20.9 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from huggingface-hub->open_clip_torch==2.0.2) (21.3)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from packaging>=20.9->huggingface-hub->open_clip_torch==2.0.2) (3.0.9)
    Requirement already satisfied: idna<4,>=2.5 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from requests->huggingface-hub->open_clip_torch==2.0.2) (3.3)
    Requirement already satisfied: charset-normalizer<3,>=2 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from requests->huggingface-hub->open_clip_torch==2.0.2) (2.0.4)
    Requirement already satisfied: certifi>=2017.4.17 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from requests->huggingface-hub->open_clip_torch==2.0.2) (2022.9.14)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from requests->huggingface-hub->open_clip_torch==2.0.2) (1.26.11)
    Requirement already satisfied: numpy in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from torchvision->open_clip_torch==2.0.2) (1.23.3)
    Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from torchvision->open_clip_torch==2.0.2) (9.2.0)
    Requirement already satisfied: colorama in d:\stable-diffusion-ui-beta\stable-diffusion\env\lib\site-packages (from tqdm->open_clip_torch==2.0.2) (0.4.5)
    
    "Stable Diffusion is ready!"
    
    PYTHONPATH=D:\stable-diffusion-ui-beta\stable-diffusion;D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages
    D:\stable-diffusion-ui-beta\stable-diffusion\env\python.exe
    D:\stable-diffusion-ui-beta\installer\python.exe
    C:\Users\shaol\AppData\Local\Programs\Python\Python310\python.exe
    C:\Users\shaol\AppData\Local\Microsoft\WindowsApps\python.exe
    Python 3.8.5
    started in  D:\stable-diffusion-ui-beta\stable-diffusion
    requesting for render_devices auto
    devices_to_start {'cuda:0'}
    devices_to_stop set()
    Start new Rendering Thread on device cuda:0
    Setting cuda:0 as active
    loading D:\stable-diffusion-ui-beta\models\stable-diffusion\sd2_512-base-ema.ckpt to device cuda:0 using precision autocast
    Loading model from D:\stable-diffusion-ui-beta\models\stable-diffusion\sd2_512-base-ema.ckpt
    active devices {'cuda:0': {'name': 'NVIDIA GeForce RTX 3060', 'mem_free': 11.818500096, 'mem_total': 12.884246528}}
    INFO:     Started server process [18404]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://0.0.0.0:9000 (Press CTRL+C to quit)
    INFO:     127.0.0.1:53002 - "GET / HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /css/fonts.css HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /css/themes.css HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /css/main.css HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /css/auto-save.css HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /css/fontawesome-all.min.css HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /css/drawingboard.min.css HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /js/jquery-3.6.1.min.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /js/drawingboard.min.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /js/marked.min.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /js/utils.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /js/parameters.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /js/plugins.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53007 - "GET /js/inpainting-editor.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /js/auto-save.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53007 - "GET /js/main.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /js/themes.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /js/dnd.js HTTP/1.1" 200 OK
    Scan D:\stable-diffusion-ui-beta\models\stable-diffusion\sd-v1-4-full-ema.ckpt: 1 scanned, 0 issue, 0 infected.
    Scan D:\stable-diffusion-ui-beta\models\stable-diffusion\sd2_512-base-ema.ckpt: 1 scanned, 0 issue, 0 infected.
    Scan D:\stable-diffusion-ui-beta\models\stable-diffusion\v1-5-pruned-emaonly.ckpt: 1 scanned, 0 issue, 0 infected.
    Scan D:\stable-diffusion-ui-beta\models\stable-diffusion\v1-5-pruned.ckpt: 1 scanned, 0 issue, 0 infected.
    Scan D:\stable-diffusion-ui-beta\models\vae\vae-ft-mse-840000-ema-pruned.ckpt: 1 scanned, 0 issue, 0 infected.
    INFO:     127.0.0.1:53003 - "GET /get/models HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /get/app_config HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /get/modifiers HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /get/ui_plugins HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53008 - "GET /get/devices HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53003 - "GET /Modifiers-dnd.plugin.js?t=1669715340187 HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53007 - "GET /release-notes.plugin.js?t=1669715340187 HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /Modifiers-wheel.plugin.js?t=1669715340187 HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /js/marked.min.js HTTP/1.1" 200 OK
    INFO:     127.0.0.1:53002 - "GET /get/app_config HTTP/1.1" 200 OK
    Global Step: 875000
    No module 'xformers'. Proceeding without it.
    LatentDiffusion: Running in v-prediction mode
    DiffusionWrapper has 865.91 M params.
    making attention of type 'vanilla' with 512 in_channels
    Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
    making attention of type 'vanilla' with 512 in_channels
    Traceback (most recent call last):
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\huggingface_hub\file_download.py", line 837, in _create_relative_symlink
        os.symlink(relative_src, dst)
    OSError: [WinError 1314] A required privilege is not held by the client: '..\\..\\blobs\\9a78ef8e8c73fd0df621682e7a8e8eb36c6916cb3c16b291a082ecd52ab79cc4' -> 'C:\\Users\\shaol/.cache\\huggingface\\hub\\models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K\\snapshots\\58a1e03a7acfacbe6b95ebc24ae0394eda6a14fc\\open_clip_pytorch_model.bin'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "D:\stable-diffusion-ui-beta\ui\sd_internal\task_manager.py", line 197, in preload_model
        runtime.load_model_ckpt()
      File "D:\stable-diffusion-ui-beta\ui\sd_internal\runtime.py", line 113, in load_model_ckpt
        load_model_ckpt_sd2()
      File "D:\stable-diffusion-ui-beta\ui\sd_internal\runtime.py", line 218, in load_model_ckpt_sd2
        thread_data.model = instantiate_from_config(config.model)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\ldm\util.py", line 79, in instantiate_from_config
        return get_obj_from_str(config["target"])(**config.get("params", dict()))
      File "D:\stable-diffusion-ui-beta\stable-diffusion\ldm\models\diffusion\ddpm.py", line 563, in __init__
        self.instantiate_cond_stage(cond_stage_config)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
        model = instantiate_from_config(config)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\ldm\util.py", line 79, in instantiate_from_config
        return get_obj_from_str(config["target"])(**config.get("params", dict()))
      File "D:\stable-diffusion-ui-beta\stable-diffusion\ldm\modules\encoders\modules.py", line 147, in __init__
        model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\open_clip\factory.py", line 151, in create_model_and_transforms
        model = create_model(
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\open_clip\factory.py", line 113, in create_model
        checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\open_clip\pretrained.py", line 295, in download_pretrained
        target = download_pretrained_from_hf(model_id, cache_dir=cache_dir)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\open_clip\pretrained.py", line 265, in download_pretrained_from_hf
        cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\huggingface_hub\file_download.py", line 1218, in hf_hub_download
        _create_relative_symlink(blob_path, pointer_path)
      File "D:\stable-diffusion-ui-beta\stable-diffusion\env\Lib\site-packages\huggingface_hub\file_download.py", line 841, in _create_relative_symlink
        raise OSError(
    OSError: Windows requires Developer Mode to be activated, or to run Python as an administrator, in order to create symlinks.
    In order to activate Developer Mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
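    A quick way to check whether the current Python process can create the symlinks that huggingface_hub relies on is a small diagnostic like the one below (a hypothetical helper for troubleshooting, not part of the UI):

    ```python
    import os
    import tempfile

    def symlinks_supported() -> bool:
        """Return True if this process can create symlinks.

        On Windows, os.symlink raises OSError unless Developer Mode is
        enabled or Python runs as an administrator -- the same condition
        behind the huggingface_hub error above.
        """
        with tempfile.TemporaryDirectory() as tmp:
            target = os.path.join(tmp, "target.txt")
            link = os.path.join(tmp, "link.txt")
            open(target, "w").close()
            try:
                os.symlink(target, link)
            except OSError:
                return False
            return True
    ```

    If this returns False on Windows, enabling Developer Mode (per the Microsoft article linked in the error message) or running as administrator should resolve the download failure.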
    
  • 14

    A built-in translator

    It would be great if it were possible to enter text in other languages.

    DeepL or Google Translate can of course be used beforehand, but it would be handy to simply enter the text in the language of your choice and have Stable Diffusion automatically translate it into a language it can understand and interpret.

    Text in German is handled very poorly, and the results are correspondingly off.

    Translated with www.DeepL.com/Translator (free version)

  • 15

    Tweak the seed behavior

    Update the seed before starting processing, so that interrupting a run retains the seed used for the batch currently being processed.

    The idea is that if I like the generation I'm currently seeing and want to build on top of it, I can create a new task with the same seed without having to wait for the current task to complete.
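    The requested behavior could be sketched roughly like this (hypothetical names; the actual task manager in ui/sd_internal differs):

    ```python
    import random

    def start_task(task: dict) -> int:
        # Resolve the seed *before* rendering begins, so the UI can show it
        # immediately. Interrupting the task then still leaves the seed
        # known and reusable for a follow-up task with the same seed.
        if task.get("seed") is None:
            task["seed"] = random.randint(0, 2**32 - 1)
        return task["seed"]
    ```

    An explicitly requested seed passes through unchanged, while a random seed is fixed up front instead of at render time.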

  • 16

    Visual feedback on button click

    When there are too many tasks and the top of the list is not visible, there is no visual feedback that a task has been successfully added to the queue.

    Add subtle visual feedback to buttons on click, to show that the mouse event was registered.