Easiest 1-click way to install and use Stable Diffusion on your own computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image.

Overview

Stable Diffusion UI

Easiest way to install and use Stable Diffusion on your own computer. No dependencies or technical knowledge required. 1-click install, powerful features.

Discord Server (for support and development discussion) | Troubleshooting guide for common problems


Step 1: Download the installer

Step 2: Run the program

  • On Windows: Double-click Start Stable Diffusion UI.cmd
  • On Linux: Run ./start.sh in a terminal

Step 3: There is no step 3!

It's simple to get started. You don't need to install or struggle with Python, Anaconda, Docker etc.

The installer will take care of whatever is needed. A friendly Discord community will help you if you face any problems.


Easy for new users, powerful features for advanced users

Features:

  • No Dependencies or Technical Knowledge Required: 1-click install for Windows 10/11 and Linux. No dependencies, no need for WSL or Docker or Conda or technical setup. Just download and run!
  • Clutter-free UI: a friendly and simple UI, while providing a lot of powerful features
  • Supports "Text to Image" and "Image to Image"
  • Custom Models: Use your own .ckpt file, by placing it inside the models/stable-diffusion folder!
  • Live Preview: See the image as the AI is drawing it
  • Task Queue: Queue up all your ideas, without waiting for the current task to finish
  • In-Painting: Specify areas of your image to paint into
  • Face Correction (GFPGAN) and Upscaling (RealESRGAN)
  • Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
  • Loopback: Use the output image as the input image for the next img2img task
  • Negative Prompt: Specify aspects of the image to remove.
  • Attention/Emphasis: () in the prompt increases the model's attention to enclosed words, and [] decreases it
  • Weighted Prompts: Use weights for specific words in your prompt to change their importance, e.g. red:2.4 dragon:1.2
  • Prompt Matrix: (in beta) Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut riding a horse | illustration | cinematic lighting
  • Lots of Samplers: ddim, plms, heun, euler, euler_a, dpm2, dpm2_a, lms
  • Multiple Prompts File: Queue multiple prompts by entering one prompt per line, or by loading them from a text file
  • NSFW Setting: A setting in the UI to control NSFW content
  • JPEG/PNG output
  • Save generated images to disk
  • Use CPU Setting: Run on your CPU if you don't have a compatible graphics card
  • Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
  • Low Memory Usage: Creates 512x512 images with less than 4GB of VRAM!
  • Developer Console: A developer-mode for those who want to modify their Stable Diffusion code, and edit the conda environment.
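To make the prompt-matrix and weighted-prompt syntax above concrete, here is a minimal sketch in Python. It is based only on the descriptions in this README, not on the project's actual parser; in particular, whether the matrix expands to every combination of the `|` segments is an assumption.

```python
import itertools
import re

def expand_prompt_matrix(prompt):
    """Expand 'base | opt1 | opt2' into one prompt per combination of options.

    Assumption: the first '|' segment is the base prompt and every combination
    of the remaining segments yields one variation.
    """
    parts = [p.strip() for p in prompt.split("|")]
    base, options = parts[0], parts[1:]
    prompts = []
    for n in range(len(options) + 1):
        for combo in itertools.combinations(options, n):
            prompts.append(", ".join((base,) + combo))
    return prompts

def parse_weighted_prompt(prompt):
    """Split 'red:2.4 dragon:1.2' into (word, weight) pairs, defaulting to 1.0."""
    tokens = []
    for tok in prompt.split():
        match = re.fullmatch(r"(.+):(\d+(?:\.\d+)?)", tok)
        if match:
            tokens.append((match.group(1), float(match.group(2))))
        else:
            tokens.append((tok, 1.0))
    return tokens
```

For example, expanding "a photograph of an astronaut riding a horse | illustration | cinematic lighting" produces four prompts: the base alone, the base with each modifier, and the base with both.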

Easy for new users:

Screenshot of the initial UI

Powerful features for advanced users:

Screenshot of advanced settings

Live Preview

Useful for judging (and stopping) an image quickly, without waiting for it to finish rendering.

Screenshot of live preview

Task Queue

Screenshot of task queue

System Requirements

  1. Windows 10/11, or Linux. Experimental support for Mac is coming soon.
  2. An NVIDIA graphics card, preferably with 4GB or more of VRAM. If you don't have a compatible graphics card, it'll automatically run in the slower "CPU Mode".
  3. Minimum 8 GB of RAM.

You don't need to install or struggle with Python, Anaconda, Docker etc. The installer will take care of whatever is needed.

Installation

  1. Download for Windows or for Linux.

  2. Extract:

  • For Windows: After unzipping the file, please move the stable-diffusion-ui folder to your C: (or any drive like D:, at the top root level), e.g. C:\stable-diffusion-ui. This will avoid a common problem with Windows (file path length limits).
  • For Linux: After extracting the .tar.xz file, please open a terminal, and go to the stable-diffusion-ui directory.
  3. Run:
  • For Windows: Start Stable Diffusion UI.cmd by double-clicking it.
  • For Linux: In the terminal, run ./start.sh (or bash start.sh)

This will automatically install Stable Diffusion, set it up, and start the interface. No additional steps are needed.

To Uninstall: Just delete the stable-diffusion-ui folder to uninstall all the downloaded packages.

How to use?

Please use our guide to understand how to use the features in this UI.

Bug reports and code contributions welcome

If there are any problems or suggestions, please feel free to ask on the discord server or file an issue.

Also, please feel free to submit a pull request, if you have any code contributions in mind. Join the discord server for development-related discussions, and for helping other users.

Disclaimer

The authors of this project are not responsible for any content generated using this interface.

The license of this software forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information meant to cause harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read the license. You agree to these terms by using this software.

Comments
  • "Potential NSFW content" on the default prompt.

    Configuration:

    • OS: Windows 11 (WSL2 + Ubuntu 22.04.1)
    • CPU: AMD Ryzen 5 5600X
    • Memory: 64GB
    • GPU: GeForce GTX 1660 SUPER (6GB VRAM)

    docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
    > Windowed mode
    > Simulation data stored in video memory
    > Single precision floating point simulation
    > 1 Devices used for simulation
    GPU Device 0: "Turing" with compute capability 7.5
    
    > Compute 7.5 CUDA device: [NVIDIA GeForce GTX 1660 SUPER]
    22528 bodies, total time for 10 iterations: 32.767 ms
    = 154.884 billion interactions per second
    = 3097.676 single-precision GFLOP/s at 20 flops per interaction
    

    Error message:

    sd                                    | Using seed: 922
    50it [00:32,  1.56it/s]               |
    sd                                    | Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.
    sd                                    | INFO:     172.18.0.4:36142 - "POST /predictions HTTP/1.1" 500 Internal Server Error
    sd                                    | ERROR:    Exception in ASGI application
    sd                                    | Traceback (most recent call last):
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    sd                                    |     result = await app(self.scope, self.receive, self.send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    sd                                    |     return await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
    sd                                    |     await super().__call__(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    sd                                    |     await self.middleware_stack(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    sd                                    |     raise exc
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    sd                                    |     await self.app(scope, receive, _send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
    sd                                    |     raise exc
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
    sd                                    |     await self.app(scope, receive, sender)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    sd                                    |     raise e
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    sd                                    |     await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
    sd                                    |     await route.handle(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
    sd                                    |     await self.app(scope, receive, send)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
    sd                                    |     response = await func(request)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
    sd                                    |     raw_response = await run_endpoint_function(
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    sd                                    |     return await run_in_threadpool(dependant.call, **values)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    sd                                    |     return await anyio.to_thread.run_sync(func, *args)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    sd                                    |     return await get_asynclib().run_sync_in_worker_thread(
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    sd                                    |     return await future
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    sd                                    |     result = context.run(func, *args)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
    sd                                    |     output = predictor.predict(**request.input.dict())
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd                                    |     return func(*args, **kwargs)
    sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    sd                                    |     return func(*args, **kwargs)
    sd                                    |   File "/src/predict.py", line 113, in predict
    sd                                    |     raise Exception("NSFW content detected, please try a different prompt")
    sd                                    | Exception: NSFW content detected, please try a different prompt
    sd-ui                                 | INFO:     172.18.0.1:34184 - "POST /image HTTP/1.1" 500 Internal Server Error
    

    I get the error with the default prompt: "a photograph of an astronaut riding a horse". I tried with a 256x256 image size, and I get the same error.

    opened by UrielCh 26
  • Version 2 - Development

    A development version of v2 is available for Windows 10/11 and Linux. Experimental support for Mac will be added soon.

    The instructions for installing are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation

    It is not a binary, and the source code used for building this is open at https://github.com/cmdr2/stable-diffusion-ui/tree/v2

    What is this?

    This version is a 1-click installer. You don't need WSL or Docker or Python or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all.

    It'll download the necessary files from the original Stable Diffusion git repository, and set it up. It'll then start the browser-based interface like before.

    An NSFW option is present in the interface, for users whose prompts incorrectly trip the NSFW filter.

    Is it stable?

    It has run successfully for a number of users, but I would love to know if it works on more computers. Please let me know in this thread if it works or fails; it'll be really helpful! Thanks :)

    PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.

    opened by cmdr2 24
  • cannot start up docker container

    build was successful using windows 10, docker-compose version 1.29.2, build 5becea4c

    after running docker-compose up

    Starting sd ... error
    
    ERROR: for sd  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
    
    ERROR: for stability-ai  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
    ERROR: Encountered errors while bringing up the project.
    
    opened by BasicWalker 15
  • ERR_EMPTY_RESPONSE on port 9000

    I can't reach the UI after the update. Port 8000 works fine (it displays the redirect notice), but port 9000 returns nothing at all. I'm running Windows, so I was not (easily) able to execute the server file. But I opened it and executed the code below (start_server()) as a troubleshooting step, without luck though.

    docker-compose up -d stable-diffusion-old-port-redirect
    docker-compose up stability-ai stable-diffusion-ui
    
    opened by ChrisAcrobat 12
  • ModuleNotFoundError: No module named 'cv2'

    python is installed and updated and so is opencv

    The following is the output:

    "Ready to rock!"
    
    started in  C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
    INFO:     Started server process [16544]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
    Traceback (most recent call last):
      File "C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 56, in ping
        from sd_internal import runtime
      File "C:\Users\atomica\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
        import cv2
    ModuleNotFoundError: No module named 'cv2'
    
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /favicon.ico HTTP/1.1" 404 Not Found
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
    
    opened by Atomica1 11
  • Exception in ASGI application

    First time run of this... I have a laptop Nvidia 3060 GPU, running Ubuntu in WSL on Windows 10. I tried my first prompt from the web page but got this error below. I didn't install the Nvidia driver within Ubuntu because a) it didn't recognise my GPU and b) I had all sorts of other problems. Do I need to install the Nvidia driver within WSL, or does it use the host driver?

    sd     | ERROR:    Exception in ASGI application
    sd     | Traceback (most recent call last):
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    sd     |     result = await app(self.scope, self.receive, self.send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    sd     |     return await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
    sd     |     await super().__call__(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    sd     |     await self.middleware_stack(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    sd     |     raise exc
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    sd     |     await self.app(scope, receive, _send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
    sd     |     raise exc
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
    sd     |     await self.app(scope, receive, sender)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    sd     |     raise e
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    sd     |     await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
    sd     |     await route.handle(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
    sd     |     await self.app(scope, receive, send)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
    sd     |     response = await func(request)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
    sd-ui  | INFO:     172.18.0.1:59682 - "POST /image HTTP/1.1" 500 Internal Server Error
    sd     |     raw_response = await run_endpoint_function(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
    sd     |     return await run_in_threadpool(dependant.call, **values)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    sd     |     return await anyio.to_thread.run_sync(func, *args)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    sd     |     return await get_asynclib().run_sync_in_worker_thread(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    sd     |     return await future
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    sd     |     result = context.run(func, *args)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
    sd     |     output = predictor.predict(**request.input.dict())
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd     |     return func(*args, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    sd     |     return func(*args, **kwargs)
    sd     |   File "/src/predict.py", line 88, in predict
    sd     |     output = self.pipe(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    sd     |     return func(*args, **kwargs)
    sd     |   File "/src/image_to_image.py", line 156, in __call__
    sd     |     noise_pred = self.unet(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 168, in forward
    sd     |     sample = upsample_block(
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_blocks.py", line 1037, in forward
    sd     |     hidden_states = attn(hidden_states, context=encoder_hidden_states)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 168, in forward
    sd     |     x = block(x, context=context)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 196, in forward
    sd     |     x = self.attn1(self.norm1(x)) + x
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    sd     |     return forward_call(*input, **kwargs)
    sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 254, in forward
    sd     |     attn = sim.softmax(dim=-1)
    sd     | RuntimeError: CUDA error: unknown error
    sd     | CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
    sd     | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    
    opened by AndrWeisR 11
  • Installing Stable Diffusion on Linux Mint Error

    I tried installing Stable Diffusion v2 on Linux Mint again and again, and here's the error I got. I'm a BIG Noob, so explain like I'm Five!


    Stable Diffusion UI

    Stable Diffusion UI's git repository was already installed. Updating..
    HEAD is now at 051ef56 Merge pull request #79 from iJacqu3s/patch-1
    Already up to date.
    Stable Diffusion's git repository was already installed. Updating..
    HEAD is now at c56b493 Merge pull request #117 from neonsecret/basujindal_attn
    Already up to date.

    Downloading packages necessary for Stable Diffusion..

    ***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..

    WARNING: A space was detected in your requested environment path '/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env'
    Spaces in paths can sometimes be problematic.
    Collecting package metadata (repodata.json): done
    Solving environment: done
    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    ERROR conda.core.link:_execute(730): An error occurred while installing package 'defaults::cudatoolkit-11.3.1-h2bc3f7f_2'.
    Rolling back transaction: done

    LinkError: post-link script failed for package defaults::cudatoolkit-11.3.1-h2bc3f7f_2
    location of failed script: /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh
    ==> script messages <==
    ==> script output <==
    stdout:
    stderr: Traceback (most recent call last):
      File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in <module>
        from conda.cli import main
    ModuleNotFoundError: No module named 'conda'
    Traceback (most recent call last):
      File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in <module>
        from conda.cli import main
    ModuleNotFoundError: No module named 'conda'
    /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh: line 3: $PREFIX/.messages.txt: ambiguous redirect

    return code: 1

    ()

    Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues


    Hope this helps someone!

    opened by SharkyGoChompChomp21 10
  • ModuleNotFoundError: No module named 'torch'

    I installed and ran v2 on Windows using Start Stable Diffusion UI.cmd, and encountered an error running the server:

    started in  C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
    INFO:     Started server process [12336]
    INFO:     Waiting for application startup.
    INFO:     Application startup complete.
    INFO:     Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
    INFO:     127.0.0.1:51205 - "GET / HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /modifiers.json HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /output_dir HTTP/1.1" 200 OK
    Traceback (most recent call last):
      File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 63, in ping
        from sd_internal import runtime
      File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
        import torch
    ModuleNotFoundError: No module named 'torch'
    
    INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
    INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
    

    Is anyone else getting this?

    opened by westquote 10
  • start_server() in ./server not working

    I am not familiar with shell script, but I think line 10 in server did not run. I can run docker-compose up stability-ai stable-diffusion-ui in console manually.

    opened by zaqxs123456 10
  • It was working but suddenly config.json error

    File "D:\stable-diffusion-ui\ui\server.py", line 144, in getAppConfig
      with open(config_json_path, 'r') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'D:\stable-diffusion-ui\ui\..\scripts\config.json'

    opened by tamal777 9
  • Website doesn't show

    I've had some troubles executing server.sh in WSL, it said something about no permission even with sudo, but with some chmod magic I got that working eventually. After executing it, docker showed that it is using port 5000 instead of 8000 like shown in the Tutorial.

    When opening localhost:5000 in a browser, all the site contains is

    {"docs_url":"/docs","openapi_url":"/openapi.json"}

    opened by LeWuDISE 9
  • Enforce an autosave directory

    Add a config.bat/sh setting FORCE_SAVE_PATH that can be used by server admins to restrict auto save to a specific directory. Also useful for users who use different end devices and want to centrally configure the auto save option. If FORCE_SAVE_PATH is set, the auto save options in the UI are disabled.

    Fixes #597 Fixes https://discord.com/channels/1014774730907209781/1052691036981428255
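    A minimal sketch of the behaviour this PR describes. The function name and logic below are assumptions based on the description, not the actual implementation: when the admin sets FORCE_SAVE_PATH in config.bat/config.sh, the path requested by the UI is ignored.

```python
import os

def resolve_save_path(requested_path, force_save_path=None):
    """Pick the autosave directory.

    force_save_path stands in for a FORCE_SAVE_PATH value read from
    config.bat/config.sh; when it is set, the UI's requested path is
    ignored, mirroring the restriction described above.
    """
    if force_save_path:
        return os.path.normpath(force_save_path)
    return os.path.normpath(requested_path)
```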

    opened by JeLuF 0
  • Error while trying merging 2 models

    Error_Trying_Merge_Models.txt

    Hello folks!

    Trying the model-merging feature as described in the "What's new" section of Easy Diffusion, I always get the error "positional argument follows keyword argument", as attached in this message.

    Any help would be priceless...

    Thanks a lot...

    Vincent

    bug 
    opened by ololiyuki 5
  • Clicking Stop button doesn't stop the task right away

    Describe the bug When you click the 'Stop' button, it doesn't stop right away; instead it tries to finish the current task. I am using CPU mode (AMD card on Ubuntu).

    To Reproduce Steps to reproduce the behaviour:

    1. Start generating some image
    2. You can hear processor fan is spinning up right away
    3. Click 'Stop' button
    4. It takes about a minute to stop fan from spinning

    Expected behaviour I would expect the task to stop right away

    Desktop (please complete the following information):

    • OS: Ubuntu 22.04
    • Browser: Firefox
    • Version: 108.0.1 (64-bit)
    bug 
    opened by iegoshin 2
  • Preview is not working after the first image

    Describe the bug The new release stops updating preview images. When I am generating large images with img2img, e.g. 1024 x 1536, it displays the first image and then doesn't add new ones. The batch counter shows 3 of 10, but the preview shows only 1 image. It auto-saves images into the specified folder though. I am using CPU mode.

    To Reproduce Steps to reproduce the behavior:

    1. select large image e.g. 1024 x 1536
    2. enter prompt
    3. number of images: 10
    4. Inference Steps: 30
    5. w: 1024 h: 1536
    6. click Enqueue next 10 images
    7. wait

    Behavior It displays the 1st image and then nothing changes any more

    Expected behavior It should display the 2nd, 3rd, ... and 10th images as well

    Desktop (please complete the following information):

    • OS: Ubuntu 22.04
    • Browser: Firefox
    • Version: 108.0.1 (64-bit)
    bug 
    opened by iegoshin 0
Releases(v2.4.13)
  • v2.4.13(Nov 22, 2022)

    Major Changes

    • Automatic scanning for malicious model files - using picklescan. Thanks @JeLuf
    • Support for custom VAE models. You can place your VAE files in the models/vae folder, and refresh the browser page to use them. More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/VAE-Variational-Auto-Encoder
    • Experimental support for multiple GPUs! It should work automatically. Just open one browser tab per GPU, and spread your tasks across your GPUs. For example, open our UI in two browser tabs if you have two GPUs. You can customize which GPUs it should use in the "Settings" tab, or let it automatically pick the best GPUs. Thanks @madrang . More info: https://github.com/cmdr2/stable-diffusion-ui/wiki/Run-on-Multiple-GPUs
    • Cleaner UI design - Show settings and help in new tabs, instead of dropdown popups (which were buggy). Thanks @mdiller
    • Progress bar. Thanks @mdiller
    • Custom Image Modifiers - You can now save your custom image modifiers! Your saved modifiers can include special characters like {}, (), [], |
    • Drag and Drop text files generated from previously saved images, and copy settings to clipboard. Thanks @madrang
    • Paste settings from clipboard. Thanks @JeLuf
    • Bug fixes to reduce the chances of tasks crashing during long multi-hour runs (Chrome can put long-running background tabs to sleep). Thanks @JeLuf and @madrang
    • Improved documentation. Thanks @JeLuf and @jsuelwald
    • Improved the codebase for dealing with system settings and UI settings. Thanks @mdiller
    • Help instructions next to some settings, and in the tab
    • Show system info in the settings tab
    • Keyboard shortcut: Ctrl+Enter to start a task
    • Configuration to prevent the browser from opening on startup
    • Lots of minor bug fixes
    • A What's New? tab in the UI

    Detailed changelog

    • 2.4.12 - 21 Nov 2022 - Another fix for improving how long images take to generate. Reduces the time taken for an enqueued task to start processing.
    • 2.4.11 - 21 Nov 2022 - Installer improvements: avoid crashing if the username contains a space or special characters, allow moving/renaming the folder after installation on Windows, whitespace fix on git apply
    • 2.4.11 - 21 Nov 2022 - Validate inputs before submitting the Image request
    • 2.4.11 - 19 Nov 2022 - New system settings to manage the network config (port number and whether to only listen on localhost)
    • 2.4.11 - 19 Nov 2022 - Address a regression in how long images take to generate. Use the previous code for moving a model to CPU. This improves things by a second or two per image, but we still have a regression (investigating).
    • 2.4.10 - 18 Nov 2022 - Textarea for negative prompts. Thanks @JeLuf
    • 2.4.10 - 18 Nov 2022 - Improved design for Settings, and rounded toggle buttons instead of checkboxes for a more modern look. Thanks @mdiller
    • 2.4.9 - 18 Nov 2022 - Add Picklescan - a scanner for malicious model files. If it finds a malicious file, it will halt the web application and alert the user. Thanks @JeLuf
    • 2.4.8 - 18 Nov 2022 - A Use these settings button to use the settings from a previously generated image task. Thanks @patriceac
    • 2.4.7 - 18 Nov 2022 - Don't crash if a VAE file fails to load
    • 2.4.7 - 17 Nov 2022 - Fix a bug where Face Correction (GFPGAN) would fail on cuda:N (i.e. GPUs other than cuda:0), as well as fail on CPU if the system had an incompatible GPU.
    • 2.4.6 - 16 Nov 2022 - Fix a regression in VRAM usage during startup, which caused 'Out of Memory' errors when starting on GPUs with 4 GB (or less) of VRAM
    • 2.4.5 - 16 Nov 2022 - Add checkbox for "Open browser on startup".
    • 2.4.5 - 16 Nov 2022 - Add a directory for core plugins that ship with Stable Diffusion UI by default.
    • 2.4.5 - 16 Nov 2022 - Add a "What's New?" tab as a core plugin, which fetches the contents of CHANGES.md from the app's release branch.
    Source code(tar.gz)
    Source code(zip)
    stable-diffusion-ui-linux.zip(11.87 KB)
    stable-diffusion-ui-windows.zip(10.98 KB)
  • v2.3.5(Oct 26, 2022)

    Lots of features since the previous version:

    • Full support for Custom Models (UI selection)

    • Choose JPEG or PNG for output format

    • Don't reload the model when switching between img2img and txt2img

    • Reduced RAM memory usage for txt2img

    • Task Queue

    • Negative Prompts

    • Thumbnails for the image modifiers, written by @Haka and @Manny

    • Specify Multiple Prompts - choose a text file, or enter one prompt per line

    • Early version of Prompt Matrix - Separate your prompt with the | character to explore variations quickly. E.g. girl holding a rose | illustration | cinematic lighting automatically creates four prompt combinations: "girl holding a rose", "girl holding a rose, illustration", "girl holding a rose, cinematic lighting" and "girl holding a rose, illustration, cinematic lighting"

    • Use curly brackets in prompts to try different words. E.g. man riding a {horse,motorcycle} results in man riding a horse and man riding a motorcycle being created automatically.

    • New Image Buttons: Make Similar Images, Draw another 25 steps, Upscale and Fix Faces - you can run these after an image has been generated. For example, you can now upscale or fix your images after they have been generated! (thanks @Madrang)

    • Use our project simultaneously across multiple browser tabs/PCs/tablets/phones without errors (written by @Madrang)

    • Choose between 7 themes for the UI, Mobile-friendly UI, Cleaner styling of image settings (written by @Bilbo's Last Clean Doily)

    • Write your own custom plugins for the UI - you can now write custom buttons for the UI in a JavaScript file, and put the your_filename.plugin.js file inside the plugins/ui folder. For now, you can make custom buttons for images.

    • Auto-save image settings across browser restarts - (written by @Bilbo's Last Clean Doily)

    • Aspect Ratio preserved in the in-painting editor

    • Custom Image Modifiers

    • Custom themes (thanks @Bilbo's Last Clean Doily)

    • Use micromamba to install git/conda (if required), instead of bundling it in the installer

    Source code(tar.gz)
    Source code(zip)
    stable-diffusion-ui-linux.zip(11.87 KB)
    stable-diffusion-ui-windows.zip(10.98 KB)
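
    The two prompt-expansion rules above (the | prompt matrix and {a,b} alternatives) can be sketched in Python. This is a rough illustration of the described behaviour under assumed semantics, not the project's actual code; the function names are hypothetical:

    ```python
    import itertools
    import re


    def expand_braces(prompt):
        """Expand the first {a,b,...} group into one prompt per option (recursively)."""
        m = re.search(r"\{([^{}]+)\}", prompt)
        if not m:
            return [prompt]
        results = []
        for option in m.group(1).split(","):
            expanded = prompt[:m.start()] + option.strip() + prompt[m.end():]
            results.extend(expand_braces(expanded))
        return results


    def prompt_matrix(prompt):
        """Keep the first |-separated part; append every subset of the remaining parts."""
        parts = [p.strip() for p in prompt.split("|")]
        base, optional = parts[0], parts[1:]
        return [
            ", ".join([base, *combo])
            for n in range(len(optional) + 1)
            for combo in itertools.combinations(optional, n)
        ]
    ```

    With girl holding a rose | illustration | cinematic lighting, prompt_matrix produces the four combinations (base alone, each modifier, and both), and expand_braces turns man riding a {horse,motorcycle} into the two horse/motorcycle prompts.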
  • v2.16(Sep 24, 2022)

    • No Dependencies or Technical Knowledge Required: 1-click install for Windows 10/11 and Linux. No dependencies, no need for WSL or Docker or Conda or technical setup. Just download and run!
    • Face Correction (GFPGAN) and Upscaling (RealESRGAN)
    • In-Painting
    • Live Preview: See the image as the AI is drawing it
    • Lots of Samplers
    • Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
    • New UI: with cleaner design
    • Supports "Text to Image" and "Image to Image"
    • NSFW Setting: A setting in the UI to control NSFW content
    • Use CPU setting: If you don't have a compatible graphics card, but still want to run it on your CPU.
    • Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
    • Low Memory Usage: Creates 512x512 images with less than 4GB of VRAM!
    Source code(tar.gz)
    Source code(zip)
    basicsr-win64.zip(458.81 KB)
    stable-diffusion-ui-linux.tar.xz(87.02 MB)
    stable-diffusion-ui-win64.zip(166.03 MB)
  • v2.05(Sep 5, 2022)

    Major new release for Windows 10/11 and Linux!

    Features in the new v2 Version:

    • No Dependencies or Technical Knowledge Required: 1-click install for Windows 10/11 and Linux. No dependencies, no need for WSL or Docker or Conda or technical setup. Just download and run!
    • Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
    • New UI: with cleaner design
    • Supports "Text to Image" and "Image to Image"
    • NSFW Setting: A setting in the UI to control NSFW content
    • Use CPU setting: If you don't have a compatible graphics card, but still want to run it on your CPU.
    • Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
    Source code(tar.gz)
    Source code(zip)
    stable-diffusion-ui-linux.tar.xz(87.02 MB)
    stable-diffusion-ui-win64.zip(221.59 MB)
  • v1.25(Sep 1, 2022)

    1. New UI with a cleaner interface.
    2. Image Modifier tags (aka Prompt Tags), to browse and select tags like "Realistic", "ArtStation", "Pencil Sketch" etc.
    Source code(tar.gz)
    Source code(zip)
  • v1.24(Aug 27, 2022)

    The server script was causing problems on some platforms, so rolling it back until a permanent solution is found.

    Apologies for the inconvenience.

    Source code(tar.gz)
    Source code(zip)
  • v1.22(Aug 26, 2022)

  • v1.21(Aug 26, 2022)

  • v1.2(Aug 26, 2022)

    Added:

    1. Support for inpainting (image mask)
    2. a server executable that replaces directly running docker-compose up or docker-compose down. Instead, use ./server, ./server stop or ./server restart. This will help in the future to manage the underlying runtime and installation without being tied to Docker.
    Source code(tar.gz)
    Source code(zip)
  • v1.1(Aug 26, 2022)

    Changes:

    1. img2img is now supported! You can provide an image to generate new images based on it (and an optional text prompt). You can also use the generated image as the new input image in 1-click, to refine it further.
    2. Upgrade to the latest stable-diffusion docker image on replicate.com.
    3. Dark mode
    4. An option to disable the ping sound, on completion of a task
    Source code(tar.gz)
    Source code(zip)
    micromamba.exe(7.84 MB)