path: root/modules/sd_hijack.py
Age | Commit message | Author
2022-12-03 | move #5216 to the extension | AUTOMATIC
2022-12-03 | Merge remote-tracking branch 'wywywywy/autoencoder-hijack' | AUTOMATIC
2022-12-03 | Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes | AUTOMATIC1111
    Use devices.autocast() and fix MPS randn issues
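For context on the randn part of this fix: seeded torch.randn on the MPS backend was unreliable at the time, and the usual workaround is to draw the noise on the CPU and move it to the device. A minimal sketch of that pattern, with an invented helper name (not the repo's devices API):

    import torch

    def randn_for_device(seed, shape, device):
        # Illustrative workaround: draw seeded noise on the CPU when the
        # target device is MPS, then move it; otherwise sample in place.
        if device.type == "mps":
            generator = torch.Generator(device="cpu").manual_seed(seed)
            return torch.randn(shape, generator=generator, device="cpu").to(device)
        generator = torch.Generator(device=device).manual_seed(seed)
        return torch.randn(shape, generator=generator, device=device)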
2022-12-02 | Fixed AttributeError where openaimodel is not found | SmirkingFace
2022-11-30 | fix bugs | zhaohu xing
    Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-30 | Merge branch 'master' into master | zhaohu xing
2022-11-29 | Add autoencoder to sd_hijack | wywywywy
2022-11-29 | add AltDiffusion to webui | zhaohu xing
    Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-28 | Refactor and instead check if mps is being used, not availability | brkirch
2022-11-27 | Merge remote-tracking branch 'flamelaw/master' | AUTOMATIC
2022-11-27 | Merge branch 'master' into support_any_resolution | Billy Cao
2022-11-26 | restore hypernetworks to seemingly working state | AUTOMATIC
2022-11-26 | Add support for Stable Diffusion 2.0 | AUTOMATIC
2022-11-23 | Patch UNet Forward to support resolutions that are not multiples of 64 | Billy Cao
    Also modified the UI so width and height no longer step in increments of 64
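The standard way to let a convolutional UNet accept sizes outside its downsampling grid is to pad the input up to the nearest supported multiple and crop the output back afterwards. A hedged sketch of that pattern, not the commit's actual patch (the helper name is invented):

    import torch.nn.functional as F

    def pad_to_multiple(x, multiple=64):
        # Pad an NCHW tensor on the right/bottom so height and width become
        # multiples of `multiple`; return the pad amounts so the caller can
        # crop the network's output back to the original size.
        h, w = x.shape[-2:]
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        return F.pad(x, (0, pad_w, 0, pad_h), mode="replicate"), (pad_h, pad_w)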
2022-11-20 | Gradient accumulation, autocast fix, new latent sampling method, etc. | flamelaw
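Gradient accumulation in general (a generic PyTorch pattern, not this commit's exact training loop): scale each micro-batch loss down and step the optimizer only every few batches, emulating a larger effective batch size.

    def train_epoch(model, loss_fn, optimizer, batches, accum_steps=4):
        # Generic gradient-accumulation pattern: gradients from several
        # micro-batches are summed before a single optimizer step.
        optimizer.zero_grad()
        for i, (inputs, targets) in enumerate(batches, start=1):
            loss = loss_fn(model(inputs), targets) / accum_steps
            loss.backward()
            if i % accum_steps == 0:
                optimizer.step()
                optimizer.zero_grad()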
2022-11-18 | cleanly undo circular hijack #4818 | killfrenzy96
2022-11-12 | use the new devices.has_mps() function in register_buffer for DDIM/PLMS fix for OSX | AUTOMATIC
2022-11-11 | move DDIM/PLMS fix for OSX out of the file with inpainting code | AUTOMATIC
2022-11-01 | Unload sd_model before loading the other | Jairo Correa
2022-10-22 | removed aesthetic gradients as built-in | AUTOMATIC
    added support for extensions
2022-10-21 | make aesthetic embedding compatible with prompts longer than 75 tokens | AUTOMATIC
2022-10-21 | Merge branch 'ae' | AUTOMATIC
2022-10-18 | Update sd_hijack.py | C43H66N12O12S2
2022-10-18 | use legacy attnblock | C43H66N12O12S2
2022-10-16 | ui fix, reorganization of the code | MalumaDev
2022-10-16 | ui fix | MalumaDev
2022-10-16 | ui fix | MalumaDev
2022-10-16 | Merge remote-tracking branch 'origin/test_resolve_conflicts' into test_resolve_conflicts | MalumaDev
2022-10-16 | fixed dropbox update | MalumaDev
2022-10-16 | Merge branch 'master' into test_resolve_conflicts | MalumaDev
2022-10-15 | Update sd_hijack.py | C43H66N12O12S2
2022-10-15 | fix to token length, added embs generator, add new features to edit the embedding before generation using text | MalumaDev
2022-10-14 | init | MalumaDev
2022-10-12 | fix iterator bug for #2295 | AUTOMATIC
2022-10-12 | Account for when lines are mismatched | hentailord85ez
2022-10-11 | Add check for psutil | brkirch
2022-10-11 | Add cross-attention optimization from InvokeAI | brkirch
    * Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
    * Add command line option for it
    * Make it default when CUDA is unavailable
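The InvokeAI optimization belongs to the sliced-attention family: compute softmax(QK^T)V over chunks of query rows so the full attention matrix never materializes at once. A rough sketch of that idea; shapes and the slice size are assumptions, not the ported code:

    import torch

    def sliced_attention(q, k, v, slice_size=1024):
        # Process queries in chunks so the (tokens x tokens) attention
        # matrix only ever exists slice_size rows at a time.
        # Assumed shapes: (batch*heads, tokens, dim_head).
        scale = q.shape[-1] ** -0.5
        out = torch.empty_like(q)
        for i in range(0, q.shape[1], slice_size):
            attn = torch.softmax(q[:, i:i + slice_size] @ k.transpose(-1, -2) * scale, dim=-1)
            out[:, i:i + slice_size] = attn @ v
        return out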
2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have | AUTOMATIC
2022-10-11 | Merge branch 'master' into hypernetwork-training | AUTOMATIC
2022-10-11 | Comma backtrack padding (#2192) | hentailord85ez
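Comma backtrack padding addresses prompts that straddle the 75-token chunk boundary: instead of cutting mid-phrase, the chunk ends at the most recent comma when one is close enough, and the tail carries into the next chunk. A sketch of the idea; the comma token id and backtrack window shown are assumptions:

    def split_with_comma_backtrack(tokens, chunk_size=75, comma_id=267, backtrack=20):
        # When a chunk fills up, break at the last comma instead if it falls
        # within `backtrack` tokens of the boundary, carrying the tail into
        # the next chunk (which is padded up to chunk_size downstream).
        chunks, current, last_comma = [], [], -1
        for tok in tokens:
            current.append(tok)
            if tok == comma_id:
                last_comma = len(current) - 1
            if len(current) == chunk_size:
                if last_comma >= 0 and len(current) - 1 - last_comma < backtrack:
                    current, tail = current[:last_comma + 1], current[last_comma + 1:]
                else:
                    tail = []
                chunks.append(current)
                current = tail
                last_comma = max((i for i, t in enumerate(current) if t == comma_id), default=-1)
        if current:
            chunks.append(current)
        return chunks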
2022-10-10 | allow pascal onwards | C43H66N12O12S2
2022-10-10 | Add back in output hidden states parameter | hentailord85ez
2022-10-10 | Pad beginning of textual inversion embedding | hentailord85ez
2022-10-10 | Unlimited Token Works | hentailord85ez
    Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
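The usual mechanism behind unlimited tokens: split the prompt into 75-token chunks, wrap each with CLIP's begin/end-of-text ids, encode each chunk separately, and concatenate the hidden states along the token axis. A sketch under those assumptions; `encode_fn` stands in for the text encoder and is not the repo's API:

    import torch

    def encode_long_prompt(encode_fn, token_ids, chunk_size=75, bos=49406, eos=49407):
        # Split into 75-token chunks, pad the last chunk, wrap each with
        # BOS/EOS, encode separately, and concatenate the hidden states.
        chunks = [token_ids[i:i + chunk_size] for i in range(0, len(token_ids), chunk_size)]
        encoded = []
        for chunk in chunks:
            chunk = chunk + [eos] * (chunk_size - len(chunk))  # pad short chunk
            ids = torch.tensor([[bos] + chunk + [eos]])
            encoded.append(encode_fn(ids))  # expected shape: (1, 77, hidden)
        return torch.cat(encoded, dim=1)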
2022-10-09 | Removed unnecessary tmp variable | Fampai
2022-10-09 | Updated code for legibility | Fampai
2022-10-09 | Optimized code for ignoring last CLIP layers | Fampai
2022-10-08 | Added ability to ignore last n layers in FrozenCLIPEmbedder | Fampai
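Ignoring the last n CLIP layers ("CLIP skip") is typically done by requesting all hidden states from the text transformer, taking the output n layers before the end, and still applying the final layer norm. A sketch assuming `text_model` is the transformer submodule of a Hugging Face CLIPTextModel (i.e. model.text_model); the indexing convention is illustrative:

    def encode_stopping_early(text_model, input_ids, stop_at=2):
        # stop_at=1 is the normal last layer; stop_at=2 takes the
        # penultimate layer's output, and so on. The final layer norm is
        # still applied so downstream shapes and scales match.
        outputs = text_model(input_ids=input_ids, output_hidden_states=True)
        hidden = outputs.hidden_states[-stop_at]
        return text_model.final_layer_norm(hidden)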
2022-10-08 | add --force-enable-xformers option and also add messages to console regarding cross attention optimizations | AUTOMATIC
2022-10-08 | check for ampere without destroying the optimizations. again. | C43H66N12O12S2