path: root/extensions-builtin/Lora/scripts/lora_script.py
author    brkirch <brkirch@users.noreply.github.com>  2023-01-25 00:23:10 -0500
committer brkirch <brkirch@users.noreply.github.com>  2023-01-25 01:13:04 -0500
commit    e3b53fd295aca784253dfc8668ec87b537a72f43 (patch)
tree      6fb26afd730c0561a2506ead2d2c8295d326de40 /extensions-builtin/Lora/scripts/lora_script.py
parent    84d9ce30cb427759547bc7876ed80ab91787d175 (diff)
Add UI setting for upcasting attention to float32
Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers. In order to make upcasting cross attention layer optimizations possible it is necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, unfortunately my findings were that most of the cross attention layer optimizations could not function unless v is upcast also.
Diffstat (limited to 'extensions-builtin/Lora/scripts/lora_script.py')
0 files changed, 0 insertions, 0 deletions