Easiest 1-click way to install and use Stable Diffusion on your own computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image.

  • Last update: Jan 4, 2023
  • Comments: 16

Stable Diffusion UI

Easiest way to install and use Stable Diffusion on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.

Discord Server (for support and development discussion) | Troubleshooting guide for common problems


Step 1: Download the installer

Step 2: Run the program

  • On Windows: Double-click Start Stable Diffusion UI.cmd
  • On Linux: Run ./start.sh in a terminal

Step 3: There is no step 3!

It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.

The installer will take care of whatever is needed. A friendly Discord community will help you if you face any problems.


Easy for new users, powerful features for advanced users

Features:

  • No Dependencies or Technical Knowledge Required: 1-click install for Windows 10/11 and Linux. No dependencies, no need for WSL or Docker or Conda or technical setup. Just download and run!
  • Clutter-free UI: a friendly, simple UI that still offers a lot of powerful features
  • Supports "Text to Image" and "Image to Image"
  • Custom Models: Use your own .ckpt file by placing it inside the models/stable-diffusion folder! (See the folder layout after this list.)
  • Live Preview: See the image as the AI is drawing it
  • Task Queue: Queue up all your ideas, without waiting for the current task to finish
  • In-Painting: Specify areas of your image to paint into
  • Face Correction (GFPGAN) and Upscaling (RealESRGAN)
  • Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
  • Loopback: Use the output image as the input image for the next img2img task
  • Negative Prompt: Specify aspects of the image to remove.
  • Attention/Emphasis: () in the prompt increases the model's attention to the enclosed words, and [] decreases it (see the prompt-syntax examples after this list)
  • Weighted Prompts: Use weights for specific words in your prompt to change their importance, e.g. red:2.4 dragon:1.2
  • Prompt Matrix: (in beta) Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut riding a horse | illustration | cinematic lighting
  • Lots of Samplers: ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
  • Multiple Prompts File: Queue multiple prompts by entering one prompt per line, or by loading them from a text file (example after this list)
  • NSFW Setting: A setting in the UI to control NSFW content
  • JPEG/PNG output
  • Save generated images to disk
  • Use CPU setting: Run on your CPU if you don't have a compatible graphics card.
  • Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
  • Low Memory Usage: Creates 512x512 images with less than 4GB of VRAM!
  • Developer Console: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
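
A few concrete examples of the features above. The prompt contents and filenames are illustrative, not taken from the project docs:

    Emphasis:         a (photograph) of an astronaut riding a horse
    De-emphasis:      a photograph of an astronaut riding a [horse]
    Explicit weights: red:2.4 dragon:1.2

A prompts file for the multiple-prompts queue is plain text with one prompt per line, e.g.:

    a photograph of an astronaut riding a horse
    a pencil sketch of a castle on a hill
    a dragon flying over mountains, cinematic lighting

A custom model goes inside the models folder (assuming the folder lives inside the install directory):

    stable-diffusion-ui/
    └── models/
        └── stable-diffusion/
            └── my-custom-model.ckpt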

Easy for new users:

Screenshot of the initial UI

Powerful features for advanced users:

Screenshot of advanced settings

Live Preview

Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.

live-512

Task Queue

Screenshot of task queue

System Requirements

  1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
  2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
  3. Minimum 8 GB of RAM and 25 GB of disk space.

You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.

Installation

  1. Download for Windows or for Linux.

  2. Extract:

  • For Windows: After unzipping the file, please move the stable-diffusion-ui folder to the root of your C: drive (or any other drive, like D:), e.g. C:\stable-diffusion-ui. This avoids a common Windows problem with file-path length limits.
  • For Linux: After extracting the .tar.xz file, please open a terminal, and go to the stable-diffusion-ui directory.
  3. Run:
  • For Windows: Start Stable Diffusion UI.cmd by double-clicking it.
  • For Linux: In the terminal, run ./start.sh (or bash start.sh); see the example session below.
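
For example, on Linux the whole flow from download to first launch looks like this (the archive filename is illustrative; use the actual name of the file you downloaded):

    # extract the archive, enter the folder, and launch the UI
    tar -xf stable-diffusion-ui-linux.tar.xz
    cd stable-diffusion-ui
    ./start.sh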

This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.

To Uninstall: Just delete the stable-diffusion-ui folder. This removes all the downloaded packages.

How to use?

Please use our guide to understand how to use the features in this UI.

Bug reports and code contributions welcome

If there are any problems or suggestions, please feel free to ask on the discord server or file an issue.

If you have any code contributions in mind, please feel free to say Hi to us on the discord server. We use the Discord server for development-related discussions, and for helping users.

Disclaimer

The authors of this project are not responsible for any content generated using this interface.

The license of this software forbids you from sharing any content that violates any laws, causes harm to a person, disseminates personal information with intent to harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions, please read the license. You agree to these terms by using this software.

Github

https://github.com/cmdr2/stable-diffusion-ui

Comments (16)

  • 1

    "Potential NSFW content" on the default prompt.

    Configuration:

    OS: Windows 11 (WSL2 + Ubuntu 22.04.1)
    CPU: AMD Ryzen 5 5600X
    Memory: 64 GB
    GPU: GeForce GTX 1660 SUPER
    GPU Memory: 6 GB

    docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
    > Windowed mode
    > Simulation data stored in video memory
    > Single precision floating point simulation
    > 1 Devices used for simulation
    GPU Device 0: "Turing" with compute capability 7.5
    
    > Compute 7.5 CUDA device: [NVIDIA GeForce GTX 1660 SUPER]
    22528 bodies, total time for 10 iterations: 32.767 ms
    = 154.884 billion interactions per second
    = 3097.676 single-precision GFLOP/s at 20 flops per interaction
    

    Error message:

    sd                                    | Using seed: 922
    50it [00:32,  1.56it/s]               |
    sd                                    | Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.
    sd                                    | INFO:     172.18.0.4:36142 - "POST /predictions HTTP/1.1" 500 Internal Server Error
    sd                                    | ERROR:    Exception in ASGI application
    sd                                    | Traceback (most recent call last):
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    sd                                    |     result = await app(self.scope, self.receive, self.send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    sd                                    |     return await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
    sd                                    |     await super().__call__(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    sd                                    |     await self.middleware_stack(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    sd                                    |     raise exc
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    sd                                    |     await self.app(scope, receive, _send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
    sd                                    |     raise exc
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
    sd                                    |     await self.app(scope, receive, sender)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    sd                                    |     raise e
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    sd                                    |     await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
    sd                                    |     await route.handle(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
    sd                                    |     await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
    sd                                    |     response = await func(request)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
    sd                                    |     raw_response = await run_endpoint_function(
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    sd                                    |     return await run_in_threadpool(dependant.call, **values)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    sd                                    |     return await anyio.to_thread.run_sync(func, *args)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    sd                                    |     return await get_asynclib().run_sync_in_worker_thread(
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    sd                                    |     return await future
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    sd                                    |     result = context.run(func, *args)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
    sd                                    |     output = predictor.predict(**request.input.dict())
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd                                    |     return func(*args, **kwargs)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    sd                                    |     return func(*args, **kwargs)
    sd                                    |   File "/src/predict.py", line 113, in predict
    sd                                    |     raise Exception("NSFW content detected, please try a different prompt")
    sd                                    | Exception: NSFW content detected, please try a different prompt
    sd-ui                                 | INFO:     172.18.0.1:34184 - "POST /image HTTP/1.1" 500 Internal Server Error
    

    I get the error with the default prompt: "a photograph of an astronaut riding a horse". I also tried with a 256x256 image size and got the same error.

  • 2

    Version 2 - Development

    A development version of v2 is available for Windows 10/11 and Linux. Experimental support for Mac will be added soon.

    The instructions for installing are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation

    It is not a binary, and the source code used for building this is open at https://github.com/cmdr2/stable-diffusion-ui/tree/v2

    What is this?

    This version is a 1-click installer. You don't need WSL or Docker or Python or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all.

    It'll download the necessary files from the original Stable Diffusion git repository, and set it up. It'll then start the browser-based interface like before.

    An NSFW option is present in the interface, for users whose prompts incorrectly trip the NSFW filter.

    Is it stable?

    It has run successfully for a number of users, but I would love to know if it works on more computers. Please let me know in this thread whether it works or fails; that would be really helpful! Thanks :)

    PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.

  • 3

    cannot start up docker container

    The build was successful on Windows 10, with docker-compose version 1.29.2 (build 5becea4c).

    After running docker-compose up:

    Starting sd ... error
    
    ERROR: for sd  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
    
    ERROR: for stability-ai  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
    ERROR: Encountered errors while bringing up the project.
    
  • 4

    ERR_EMPTY_RESPONSE on port 9000

    I can't reach the UI after the update. Port 8000 works fine (it displays the redirect notice), but port 9000 returns nothing at all. I'm running Windows, so I was not (easily) able to execute the server file, but I opened it and ran the commands below (from start_server()) as a troubleshooting step. Without luck, though.

    docker-compose up -d stable-diffusion-old-port-redirect
    docker-compose up stability-ai stable-diffusion-ui
    
  • 5

    ModuleNotFoundError: No module named 'cv2'

    Python is installed and up to date, and so is OpenCV.

    The following is the output:

    "Ready to rock!"
    
    started in  C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
    INFO:     Started server process [16544]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
    Traceback (most recent call last):
      File "C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 56, in ping
        from sd_internal import runtime
      File "C:\Users\atomica\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
        import cv2
    ModuleNotFoundError: No module named 'cv2'
    
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /favicon.ico HTTP/1.1" 404 Not Found
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    
  • 6

    Exception in ASGI application

    First time run of this... I have a laptop Nvidia 3060 GPU, running Ubuntu in WSL on Windows 10. I tried my first prompt from the web page but got this error below. I didn't install the Nvidia driver within Ubuntu because a) it didn't recognise my GPU and b) I had all sorts of other problems. Do I need to install the Nvidia driver within WSL, or does it use the host driver?

    sd     | ERROR:    Exception in ASGI application
    sd     | Traceback (most recent call last):
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    sd     |     result = await app(self.scope, self.receive, self.send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    sd     |     return await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
    sd     |     await super().__call__(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    sd     |     await self.middleware_stack(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    sd     |     raise exc
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    sd     |     await self.app(scope, receive, _send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
    sd     |     raise exc
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
    sd     |     await self.app(scope, receive, sender)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    sd     |     raise e
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    sd     |     await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
    sd     |     await route.handle(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
    sd     |     await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
    sd     |     response = await func(request)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
    sd-ui  | INFO:     172.18.0.1:59682 - "POST /image HTTP/1.1" 500 Internal Server Error
    sd     |     raw_response = await run_endpoint_function(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    sd     |     return await run_in_threadpool(dependant.call, **values)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    sd     |     return await anyio.to_thread.run_sync(func, *args)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    sd     |     return await get_asynclib().run_sync_in_worker_thread(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    sd     |     return await future
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    sd     |     result = context.run(func, *args)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
    sd     |     output = predictor.predict(**request.input.dict())
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd     |     return func(*args, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    sd     |     return func(*args, **kwargs)
    sd     |   File "/src/predict.py", line 88, in predict
    sd     |     output = self.pipe(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd     |     return func(*args, **kwargs)
    sd     |   File "/src/image_to_image.py", line 156, in __call__
    sd     |     noise_pred = self.unet(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 168, in forward
    sd     |     sample = upsample_block(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_blocks.py", line 1037, in forward
    sd     |     hidden_states = attn(hidden_states, context=encoder_hidden_states)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 168, in forward
    sd     |     x = block(x, context=context)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 196, in forward
    sd     |     x = self.attn1(self.norm1(x)) + x
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 254, in forward
    sd     |     attn = sim.softmax(dim=-1)
    sd     | RuntimeError: CUDA error: unknown error
    sd     | CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
    sd     | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    
  • 7

    Installing Stable Diffusion on Linux Mint Error

    I tried installing Stable Diffusion v2 on Linux Mint again and again, and here's the error I got. I'm a BIG noob, so explain like I'm five!


    Stable Diffusion UI

    Stable Diffusion UI's git repository was already installed. Updating..
    HEAD is now at 051ef56 Merge pull request #79 from iJacqu3s/patch-1
    Already up to date.
    Stable Diffusion's git repository was already installed. Updating..
    HEAD is now at c56b493 Merge pull request #117 from neonsecret/basujindal_attn
    Already up to date.

    Downloading packages necessary for Stable Diffusion..

    ***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..

    WARNING: A space was detected in your requested environment path '/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env'
    Spaces in paths can sometimes be problematic.
    Collecting package metadata (repodata.json): done
    Solving environment: done
    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    ERROR conda.core.link:_execute(730): An error occurred while installing package 'defaults::cudatoolkit-11.3.1-h2bc3f7f_2'.
    Rolling back transaction: done

    LinkError: post-link script failed for package defaults::cudatoolkit-11.3.1-h2bc3f7f_2
    location of failed script: /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh
    ==> script messages <==
    ==> script output <==
    stdout:
    stderr: Traceback (most recent call last):
      File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in <module>
        from conda.cli import main
    ModuleNotFoundError: No module named 'conda'
    Traceback (most recent call last):
      File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in <module>
        from conda.cli import main
    ModuleNotFoundError: No module named 'conda'
    /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh: line 3: $PREFIX/.messages.txt: ambiguous redirect

    return code: 1


    Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues


    Hope this helps someone!

  • 8

    ModuleNotFoundError: No module named 'torch'

    I installed and ran v2 on Windows using Start Stable Diffusion UI.cmd, and encountered an error running the server:

    started in  C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
    INFO:     Started server process [12336]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
    INFO:     127.0.0.1:51205 - "GET / HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /modifiers.json HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /output_dir HTTP/1.1" 200 OK
    Traceback (most recent call last):
      File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 63, in ping
        from sd_internal import runtime
      File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
        import torch
    ModuleNotFoundError: No module named 'torch'
    
    INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
    

    Is anyone else getting this?

  • 9

    start_server() in ./server not working

    I am not familiar with shell scripts, but I think line 10 in server did not run. I can run docker-compose up stability-ai stable-diffusion-ui in a console manually.

  • 10

    It was working but suddenly config.json error

    File "D:\stable-diffusion-ui\ui\server.py", line 144, in getAppConfig with open(config_json_path, 'r') as f: FileNotFoundError: [Errno 2] No such file or directory: 'D:\stable-diffusion-ui\ui\..\scripts\config.json'

  • 11

    Website doesn't show

    I had some trouble executing server.sh in WSL; it said something about missing permissions even with sudo, but with some chmod magic I eventually got it working. After executing it, Docker showed that it is using port 5000 instead of 8000 as shown in the tutorial.

    When opening localhost:5000 in a browser, all the site contains is

    {"docs_url":"/docs","openapi_url":"/openapi.json"}

  • 12

    Prevent flooding the log with warnings for GPU<3GB

    Show the warning that GPUs with less than 3 GB of VRAM aren't supported only once, instead of every 5 seconds.

    Fixes https://discord.com/channels/1014774730907209781/1059869326313799700

  • 13

    ED2.5 (BETA): img2img fails when actual_inference_steps==0

    Describe the bug: When using a very small prompt strength, actual_inference_steps becomes 0 and sdkit fails fatally. This wasn't the case in 2.4 with stable-diffusion-kit.

    After the failure, EasyDiffusion becomes unusable and complains that cards with less than 3GB are not supported - regardless of the GPU's actual VRAM size.

    To Reproduce: Steps to reproduce the behavior:

    1. Upload an image for img2img, or use Use as input on a previously generated image
    2. Set the iteration steps to 20
    3. Set the input strength to e.g. 0.01
    4. Click "Make image"

    Expected behavior: Image generation like in SDUI 2.4, as used in the tips for "How to use the upscalers to upscale an image without passing it through SD"

    Severity: Blocker

    Logfiles: prompts-strength-0.01.txt

  • 14

    No option for full precision mode on beta channel? Also, "CUDA: 0"

    I wondered at first if maybe full precision would just be built into the balanced/fast/slow render-speed/VRAM-usage setting, but even on the highest setting it is still half precision. I'm loving the beta channel right now because it has what are, IMO, the best samplers, and the speeds it hits with "fast" mode selected are unlike anything I've seen before. So thank you for adding those, and kudos on achieving such speeds without sacrificing quality; it's all very impressive. But it makes it very hard to know which channel to use when one of them has all of that but no full-precision mode, and the other has none of that but does have full-precision mode.

    So yeah; I'm not sure whether it's a bug or a mistake or if it's just taking time to code for compatibility between the "half/full precision" toggle and the new "balanced/fast/slow" option. But I didn't see anyone else acknowledging it in issues, & it's definitely been an issue for me, so I wanted to bring it to your attention in case it was an oversight. I am having some other issues on and off, but I think it's necessary to make a separate issue for them if they persist.

    One other issue! [Writing the below conclusion reminded me]. From the time I started using this UI

    Thanks! And thanks again for implementing the additional samplers in the beta channel; I don't know if it was in response to my post about them or not, but it's awesome that you all got to it so quickly! I've already gotten some fantastic results with DPM++ 2M in particular, but also the Stability AI solver. I'd rank them up there with DPM 2, DPM 2a, and perhaps DDIM or Heun. I still need to experiment more with Fast, Adaptive, SDE, and 2S ancestral before I can judge against them.

    Anyway, I wasn't sure whether to post this as a bug report or a feature request, because I don't know why the full-precision option disappears in beta, but I would greatly appreciate the option. I just got a new GPU and I'm very excited about what it can do. Oh, PS: that actually reminds me of another issue I keep meaning to ask about. From the first version of this UI I ever used, under the settings where it shows the GPU information, it has always said "CUDA: 0". Before, I was using an Nvidia GTX 1050 Ti (which had something like 700 or 800 CUDA cores). Now I'm using an RTX 3080, with more like 8,000 CUDA cores! But for some reason, with both GPUs, it has always said "CUDA: 0". Is it supposed to show how many cores it's utilizing? Or is it just a binary indicator where 0 = yes and 1 = no, or something? It has always read to me as something showing up wrong, but maybe that's all it is.

    Cheers! Happy New Year.

    Desktop:

    • OS: Windows 10 Home
    • Browser: Firefox
    • Version: 2.5.4 beta (2.4.23 is what shows up in the non-beta channel, but it's been crashing, which is one more reason I haven't been using it)

  • 15

    Images not saving automatically

    Since the most recent update, images are no longer saving automatically. Even when the process is complete, when I go to close the browser window it asks me if I want to leave the page because there is still activity, even though the task is complete.

  • 16

    Enforce an autosave directory

    Add a config.bat/sh setting FORCE_SAVE_PATH that server admins can use to restrict auto-save to a specific directory. It is also useful for users who use different end devices and want to configure the auto-save option centrally. If FORCE_SAVE_PATH is set, the auto-save options in the UI are disabled (see the sketch below).
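
    A minimal sketch of what this could look like, assuming config.sh and config.bat are sourced by the start scripts (the FORCE_SAVE_PATH name comes from this PR; the paths are made-up examples):

        # config.sh (Linux): force all auto-saved images into one directory
        export FORCE_SAVE_PATH="/srv/easy-diffusion/output"

    On Windows, the config.bat equivalent would presumably be set FORCE_SAVE_PATH=D:\easy-diffusion\output.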

    Fixes #597 Fixes https://discord.com/channels/1014774730907209781/1052691036981428255