This is a web interface frontend for generating images with the Automatic1111 fork of Stable Diffusion.
The documentation is available here.
Diffusion UI features:
- Text-to-image
- Image-to-image:
  - from an uploaded image
  - from a drawing made in the interface
- Inpainting
  - including the ability to draw inside an inpainting region
- Outpainting (scroll out with the mouse wheel)
- Modification of model parameters in the left tab
- Gallery of previously generated images in the right tab
- Variations of and inpainting edits to previously generated images
- Use the mouse wheel to zoom in or out of an image
- Use the Shift key to draw straight lines or make straight inpainting zones
- Use Ctrl-Z to undo an action in the image editor
- Use the arrow keys (left, right, up, and down) to navigate the image gallery; the Home key takes you back to the first image of the batch
The frontend is available at diffusionui.com (note: you still need a local backend for it to work with Stable Diffusion).
Alternatively, you can run it locally.
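For example, here is a minimal sketch of running the frontend locally, assuming the repository URL below and a standard Node.js/Vite setup (the default Vite port 5173 matches the CORS origin used in the backend parameters further down):

```sh
# Sketch only; the repository URL is assumed, not taken from this document.
git clone https://github.com/leszekhanusz/diffusion-ui.git
cd diffusion-ui
npm install
npm run dev   # serves the interface at http://localhost:5173 by default
```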
To connect diffusion-ui to the Automatic1111 fork of Stable Diffusion on your own PC, you need to run the backend with the following parameters: `--no-gradio-queue --cors-allow-origins=http://localhost:5173,https://diffusionui.com`.
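For instance, a sketch assuming the standard `webui.sh` launch script from the Automatic1111 repository:

```sh
# On Linux/macOS; on Windows, setting COMMANDLINE_ARGS in webui-user.bat
# to the same flags should have the equivalent effect.
./webui.sh --no-gradio-queue --cors-allow-origins=http://localhost:5173,https://diffusionui.com
```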
See the instructions here.
If you can't run it locally, it is also possible to use the Automatic1111 fork of Stable Diffusion with diffusion-ui online for free via this Google Colab notebook.
MIT License for the code here.
CreativeML Open RAIL-M license for the Stable Diffusion model.