Age | Commit message | Author
---|---|---
2023-03-25 | Merge pull request #8780 from Brawlence/master: Unload and re-load checkpoint to VRAM on request (API & Manual) | AUTOMATIC1111
2023-03-23 | fix variable typo | carat-johyun
2023-03-21 | Unload checkpoints on request to free VRAM. New action buttons in the settings manually free and reload checkpoints, essentially juggling models between RAM and VRAM | Φφ
2023-03-14 | fix an error loading Lora with empty values in metadata | AUTOMATIC
2023-03-14 | Add view metadata button for Lora cards | AUTOMATIC
2023-02-19 | when exists | w-e-w
2023-02-19 | fix auto sd download issue | w-e-w
2023-02-15 | Add ".vae.ckpt" to ext_blacklist | missionfloyd
2023-02-14 | Download model if none are found | missionfloyd
2023-02-05 | make it possible to load SD1 checkpoints without CLIP | AUTOMATIC
2023-02-04 | fix issue with switching back to checkpoint that had its checksum calculated during runtime, mentioned in #7506 | AUTOMATIC
2023-02-04 | Merge pull request #7470 from cbrownstein-lambda/update-error-message-no-checkpoint: update error message WRT missing checkpoint file | AUTOMATIC1111
2023-02-04 | add --no-hashing | AUTOMATIC
2023-02-01 | Update error message WRT missing checkpoint file (the Safetensors format is also supported) | Cody Brownstein
2023-01-29 | support for searching subdirectory names for extra networks | AUTOMATIC
2023-01-28 | fixed a bug where after switching to a checkpoint with unknown hash you'd get empty space instead of the checkpoint name in the UI; fixed a bug where, if you update the selected checkpoint on disk and then restart the program, a different checkpoint loads but the name shown is the old one | AUTOMATIC
2023-01-27 | add data-dir flag and set all user data directories based on it | Max Audron
2023-01-27 | support detecting midas model; fix broken api for checkpoint list | AUTOMATIC
2023-01-27 | remove the need to place configs near models | AUTOMATIC
2023-01-25 | Merge pull request #6510 from brkirch/unet16-upcast-precision: add upcast options, full-precision sampling from a float16 UNet, and upcasting attention for inference with SD 2.1 models without --no-half | AUTOMATIC1111
2023-01-25 | Add instruct-pix2pix hijack: allows loading instruct-pix2pix models via the same method as inpainting models in sd_models.py and sd_hijack_ip2p.py; adds ddpm_edit.py, which is necessary for instruct-pix2pix | Kyle
2023-01-25 | Add option for float32 sampling with float16 UNet. Also handles type casting so that ROCm and MPS torch devices work correctly without --no-half: one cast is required for deepbooru in deepbooru_model.py, and some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS), so in sd_models.py model.depth_model is removed before model.half() is called | brkirch
2023-01-19 | bring back short hashes to sd checkpoint selection | AUTOMATIC
2023-01-14 | fix bug with "Ignore selected VAE for..." option completely disabling VAE selection; rework VAE resolving code to be simpler | AUTOMATIC
2023-01-14 | load hashes from cache for checkpoints that have them; add checkpoint hash to footer | AUTOMATIC
2023-01-14 | update key to use with checkpoints' sha256 in cache | AUTOMATIC
2023-01-14 | change hypernets to use sha256 hashes | AUTOMATIC
2023-01-14 | change hash to sha256 | AUTOMATIC
2023-01-11 | fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType | AUTOMATIC
2023-01-11 | possible fix for fallback for fast model creation from config, attempt 2 | AUTOMATIC
2023-01-11 | possible fix for fallback for fast model creation from config | AUTOMATIC
2023-01-10 | add support for transformers==4.25.1; add fallback for when quick model creation fails | AUTOMATIC
2023-01-10 | add more stuff to ignore when creating model from config; prevent .vae.safetensors files from being listed as stable diffusion models | AUTOMATIC
2023-01-10 | disable torch weight initialization and CLIP downloading/reading checkpoint to speed up creating sd model from config | AUTOMATIC
2023-01-09 | allow model load if previous model failed | Vladimir Mandic
2023-01-04 | use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything | AUTOMATIC
2023-01-04 | Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' | AUTOMATIC
2023-01-04 | fix broken inpainting model | AUTOMATIC
2023-01-04 | find configs for models at runtime rather than when starting | AUTOMATIC
2023-01-04 | helpful error message when trying to load 2.0 without config; failing to load model weights from settings no longer breaks generation for the currently loaded model | AUTOMATIC
2023-01-03 | call script callbacks for reloaded model after loading embeddings | AUTOMATIC
2023-01-02 | fix the issue with training on SD2.0 | AUTOMATIC
2022-12-31 | validate textual inversion embeddings | Vladimir Mandic
2022-12-27 | Attempting to solve slow loads for `safetensors` (fixes #5893) | Nicolas Patry
2022-12-24 | fix F541 f-string without any placeholders | Yuval Aboulafia
2022-12-24 | Removed length check in sd_model at line 115. Commit eba60a4 is what was causing the error; deleting the length check in sd_model starting at line 115 fixes it (https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379) | linuxmobile ( リナックス )
2022-12-24 | Merge pull request #5627 from deanpress/patch-1: fix: fallback model_checkpoint if it's empty | AUTOMATIC1111
2022-12-11 | unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) | MrCheeze
2022-12-11 | fix: fallback model_checkpoint if it's empty. Fixes a startup crash when SD attempts to start with a deleted checkpoint: select_checkpoint() in modules/sd_models.py (line 117) raised TypeError: unhashable type: 'list' from checkpoints_list.get(model_checkpoint, None) | Dean van Dugteren
2022-12-10 | fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model | MrCheeze
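The 2023-03-21 and 2023-03-25 entries add on-demand unloading and reloading of the checkpoint between RAM and VRAM. In torch terms the pattern is `model.to("cpu")` / `model.to(device)` followed by `torch.cuda.empty_cache()`; the sketch below is device-agnostic, and `Model`, `unload_model_weights`, and `reload_model_weights` are illustrative stand-ins, not the webui's actual classes or functions:

```python
class Model:
    """Stand-in for a torch module; real code would move actual weight tensors."""
    def __init__(self):
        self.device = "cuda"

    def to(self, device):
        self.device = device
        return self


def unload_model_weights(model):
    """Move weights to system RAM so the GPU memory can be reclaimed."""
    model.to("cpu")
    # With torch, follow up with torch.cuda.empty_cache() to actually free VRAM.
    return model


def reload_model_weights(model, device="cuda"):
    """Move weights back onto the GPU for inference."""
    return model.to(device)
```

The settings buttons described in the commit would simply call these two functions, letting the user juggle a large model out of VRAM while another application needs it.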
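Several 2023-01-14 entries switch checkpoint identification to sha256 hashes loaded from a cache, and the 2023-01-19 entry derives short hashes from them. A minimal sketch of that pattern, assuming a JSON cache file keyed by path and mtime; the function names and cache layout here are illustrative, not the webui's actual API:

```python
import hashlib
import json
import os


def calculate_sha256(path, chunk_size=1 << 20):
    """Hash a (possibly multi-gigabyte) checkpoint file in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def sha256_with_cache(path, cache_file="hashes.json"):
    """Return the file's sha256, consulting a JSON cache to skip re-hashing."""
    cache = {}
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            cache = json.load(f)

    key = f"checkpoint/{path}"
    mtime = os.path.getmtime(path)
    entry = cache.get(key)
    if entry and entry["mtime"] == mtime:  # file unchanged since last hash
        return entry["sha256"]

    digest = calculate_sha256(path)
    cache[key] = {"mtime": mtime, "sha256": digest}
    with open(cache_file, "w") as f:
        json.dump(cache, f)
    return digest
```

A short hash for UI display is then just a prefix of the full digest, e.g. `digest[:10]`.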
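The 2023-02-15 entry adds ".vae.ckpt" to `ext_blacklist`, and a 2023-01-10 entry keeps .vae.safetensors files out of the checkpoint list: VAE weights share the .ckpt/.safetensors extensions but are not full models. A sketch of that filter; the helper name is hypothetical:

```python
# Extensions that look like checkpoints but should not be listed as models.
ext_blacklist = [".vae.ckpt", ".vae.safetensors"]
model_extensions = [".ckpt", ".safetensors"]


def is_checkpoint_file(filename):
    """True for .ckpt/.safetensors files that are not blacklisted VAE files."""
    name = filename.lower()
    if any(name.endswith(ext) for ext in ext_blacklist):
        return False
    return any(name.endswith(ext) for ext in model_extensions)
```

The key detail is checking the blacklist first, since every ".vae.ckpt" file also ends with ".ckpt".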
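The 2023-01-27 "remove the need to place configs near models" and 2023-01-04 "find configs for models at runtime" entries both concern locating a .yaml inference config for a checkpoint. The classic lookup, sketched below, prefers a config sitting next to the model file; the function name and the `v1-inference.yaml` default are illustrative assumptions, not what the webui necessarily resolves to:

```python
import os


def find_checkpoint_config(checkpoint_path, default_config="v1-inference.yaml"):
    """Prefer a .yaml next to the checkpoint; otherwise fall back to a default.

    For "model.safetensors", look for "model.yaml" in the same directory.
    """
    config = os.path.splitext(checkpoint_path)[0] + ".yaml"
    if os.path.exists(config):
        return config
    return default_config
```

Deferring this lookup to load time (rather than scan time) means renaming or adding a config takes effect without restarting.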
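The 2023-01-25 float32-sampling entry upcasts float16 activations for numerically sensitive steps: float16 overflows at about 65504, so attention score matmuls can saturate to infinity. The idea, sketched with NumPy rather than torch (the function is an illustration of the upcast pattern, not the webui's implementation):

```python
import numpy as np


def attention_scores(q, k, upcast=True):
    """Softmax attention scores, optionally upcasting fp16 inputs to fp32.

    The q @ k.T products can exceed float16's ~65504 max, so the matmul and
    softmax are done in float32, then the result is cast back to float16.
    """
    if upcast:
        q, k = q.astype(np.float32), k.astype(np.float32)
    scores = q @ k.T
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    return (weights / weights.sum(axis=-1, keepdims=True)).astype(np.float16)
```

Casting back to float16 at the end keeps memory usage low while only the sensitive intermediate runs in full precision.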
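The 2023-01-27 data-dir entry roots all user data directories at a single flag. A sketch of that layout, assuming argparse; the flag name matches the commit, but the subdirectory set and function name are illustrative:

```python
import argparse
import os


def make_paths(argv):
    """Derive user-data directories from a single --data-dir flag."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--data-dir", default=".", help="base directory for user data")
    args, _ = parser.parse_known_args(argv)
    base = args.data_dir
    return {
        "models": os.path.join(base, "models"),
        "embeddings": os.path.join(base, "embeddings"),
        "extensions": os.path.join(base, "extensions"),
    }
```

Centralizing the base path lets users keep models and embeddings on a separate disk from the program itself.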
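The 2022-12-11 "fallback model_checkpoint" entry fixes a startup crash: the saved checkpoint setting could be stale or even a non-string (a list), making `checkpoints_list.get()` raise `TypeError: unhashable type: 'list'`. A hedged sketch of the fallback logic with hypothetical names, not the webui's actual `select_checkpoint`:

```python
def select_checkpoint(model_checkpoint, checkpoints_list):
    """Pick the configured checkpoint, falling back to the first available one.

    checkpoints_list maps checkpoint title -> info; model_checkpoint is the
    saved setting, which may be stale, empty, or even a non-string value.
    """
    if isinstance(model_checkpoint, str) and model_checkpoint in checkpoints_list:
        return checkpoints_list[model_checkpoint]
    if not checkpoints_list:
        raise RuntimeError("No checkpoints found; place a model file in the models directory")
    # Fall back deterministically to the first checkpoint by title.
    fallback_title = next(iter(sorted(checkpoints_list)))
    return checkpoints_list[fallback_title]
```

Validating the setting's type before using it as a dict key is what turns the original hard crash into a graceful fallback.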