path: root/modules/sd_hijack_optimizations.py
Age        | Commit message | Author
2023-08-13 | Make sub-quadratic the default for MPS | brkirch
2023-08-13 | Use fixed size for sub-quadratic chunking on MPS | brkirch
2023-08-02 | update doggettx cross attention optimization to not use an unreasonable amoun... | AUTOMATIC1111
2023-07-13 | get attention optimizations to work | AUTOMATIC1111
2023-07-12 | SDXL support | AUTOMATIC1111
2023-06-07 | Merge pull request #11066 from aljungberg/patch-1 | AUTOMATIC1111
2023-06-06 | Fix upcast attention dtype error. | Alexander Ljungberg
2023-06-04 | Merge pull request #10990 from vkage/sd_hijack_optimizations_bugfix | AUTOMATIC1111
2023-06-04 | fix the broken line for #10990 | AUTOMATIC
2023-06-03 | torch.cuda.is_available() check for SdOptimizationXformers | Vivek K. Vasishtha
2023-06-01 | revert default cross attention optimization to Doggettx | AUTOMATIC
2023-06-01 | revert default cross attention optimization to Doggettx | AUTOMATIC
2023-05-31 | rename print_error to report, use it with together with package name | AUTOMATIC
2023-05-29 | Add & use modules.errors.print_error where currently printing exception info ... | Aarni Koskela
2023-05-21 | Add a couple `from __future__ import annotations`es for Py3.9 compat | Aarni Koskela
2023-05-19 | Apply suggestions from code review | AUTOMATIC1111
2023-05-19 | fix linter issues | AUTOMATIC
2023-05-18 | make it possible for scripts to add cross attention optimizations | AUTOMATIC
2023-05-11 | Autofix Ruff W (not W605) (mostly whitespace) | Aarni Koskela
2023-05-10 | ruff auto fixes | AUTOMATIC
2023-05-10 | autofixes from ruff | AUTOMATIC
2023-05-08 | Fix for Unet NaNs | brkirch
2023-03-24 | Update sd_hijack_optimizations.py | FNSpd
2023-03-21 | Update sd_hijack_optimizations.py | FNSpd
2023-03-10 | sdp_attnblock_forward hijack | Pam
2023-03-10 | argument to disable memory efficient for sdp | Pam
2023-03-07 | scaled dot product attention | Pam
2023-01-25 | Add UI setting for upcasting attention to float32 | brkirch
2023-01-23 | better support for xformers flash attention on older versions of torch | AUTOMATIC
2023-01-21 | add --xformers-flash-attention option & impl | Takuma Mori
2023-01-21 | extra networks UI | AUTOMATIC
2023-01-06 | Added license | brkirch
2023-01-06 | Change sub-quad chunk threshold to use percentage | brkirch
2023-01-06 | Add Birch-san's sub-quadratic attention implementation | brkirch
2022-12-20 | Use other MPS optimization for large q.shape[0] * q.shape[1] | brkirch
2022-12-10 | cleanup some unneeded imports for hijack files | AUTOMATIC
2022-12-10 | do not replace entire unet for the resolution hack | AUTOMATIC
2022-11-23 | Patch UNet Forward to support resolutions that are not multiples of 64 | Billy Cao
2022-10-19 | Remove wrong self reference in CUDA support for invokeai | Cheka
2022-10-18 | Update sd_hijack_optimizations.py | C43H66N12O12S2
2022-10-18 | readd xformers attnblock | C43H66N12O12S2
2022-10-18 | delete xformers attnblock | C43H66N12O12S2
2022-10-11 | Use apply_hypernetwork function | brkirch
2022-10-11 | Add InvokeAI and lstein to credits, add back CUDA support | brkirch
2022-10-11 | Add check for psutil | brkirch
2022-10-11 | Add cross-attention optimization from InvokeAI | brkirch
2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filenam... | AUTOMATIC
2022-10-11 | fixes related to merge | AUTOMATIC
2022-10-11 | replace duplicate code with a function | AUTOMATIC
2022-10-10 | remove functorch | C43H66N12O12S2