path: root/modules/sd_hijack.py
Age         Commit message  (Author)
2022-10-11  Merge branch 'master' into hypernetwork-training  (AUTOMATIC)
2022-10-11  Comma backtrack padding (#2192)  (hentailord85ez)
2022-10-10  allow pascal onwards  (C43H66N12O12S2)
2022-10-10  Add back in output hidden states parameter  (hentailord85ez)
2022-10-10  Pad beginning of textual inversion embedding  (hentailord85ez)
2022-10-10  Unlimited Token Works  (hentailord85ez)
2022-10-09  Removed unnecessary tmp variable  (Fampai)
2022-10-09  Updated code for legibility  (Fampai)
2022-10-09  Optimized code for Ignoring last CLIP layers  (Fampai)
2022-10-08  Added ability to ignore last n layers in FrozenCLIPEmbedder  (Fampai)
2022-10-08  add --force-enable-xformers option and also add messages to console regarding...  (AUTOMATIC)
2022-10-08  check for ampere without destroying the optimizations. again.  (C43H66N12O12S2)
2022-10-08  check for ampere  (C43H66N12O12S2)
2022-10-08  why did you do this  (AUTOMATIC)
2022-10-08  restore old opt_split_attention/disable_opt_split_attention logic  (AUTOMATIC)
2022-10-08  simplify xfrmers options: --xformers to enable and that's it  (AUTOMATIC)
2022-10-08  Merge pull request #1851 from C43H66N12O12S2/flash  (AUTOMATIC1111)
2022-10-08  Update sd_hijack.py  (C43H66N12O12S2)
2022-10-08  default to split attention if cuda is available and xformers is not  (C43H66N12O12S2)
2022-10-08  fix bug where when using prompt composition, hijack_comments generated before...  (MrCheeze)
2022-10-08  fix bugs related to variable prompt lengths  (AUTOMATIC)
2022-10-08  do not let user choose his own prompt token count limit  (AUTOMATIC)
2022-10-08  let user choose his own prompt token count limit  (AUTOMATIC)
2022-10-08  use new attnblock for xformers path  (C43H66N12O12S2)
2022-10-08  delete broken and unnecessary aliases  (C43H66N12O12S2)
2022-10-07  hypernetwork training mk1  (AUTOMATIC)
2022-10-07  make it possible to use hypernetworks without opt split attention  (AUTOMATIC)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2)
2022-10-02  Merge branch 'master' into stable  (Jairo Correa)
2022-10-02  fix for incorrect embedding token length calculation (will break seeds that u...  (AUTOMATIC)
2022-10-02  initial support for training textual inversion  (AUTOMATIC)
2022-09-30  Merge branch 'master' into fix-vram  (Jairo Correa)
2022-09-30  add embeddings dir  (AUTOMATIC)
2022-09-29  fix for incorrect model weight loading for #814  (AUTOMATIC)
2022-09-29  new implementation for attention/emphasis  (AUTOMATIC)
2022-09-29  Move silu to sd_hijack  (Jairo Correa)
2022-09-27  switched the token counter to use hidden buttons instead of api call  (Liam)
2022-09-27  added token counter next to txt2img and img2img prompts  (Liam)
2022-09-25  potential fix for embeddings no loading on AMD cards  (AUTOMATIC)
2022-09-25  Fix token max length  (guaneec)
2022-09-21  --opt-split-attention now on by default for torch.cuda, off for others (cpu a...  (AUTOMATIC)
2022-09-21  fix for too large embeddings causing an error  (AUTOMATIC)
2022-09-20  fix a off by one error with embedding at the start of the sentence  (AUTOMATIC)
2022-09-20  add the part that was missing for word textual inversion checksums  (AUTOMATIC)
2022-09-18  Making opt split attention the default. Are you upset about this? Sorry.  (AUTOMATIC)
2022-09-18  .....  (C43H66N12O12S2)
2022-09-18  Move scale multiplication to the front  (C43H66N12O12S2)