path: root/modules/textual_inversion/textual_inversion.py
Age | Commit message | Author
2023-01-25 | allow symlinks in the textual inversion embeddings folder | Alex "mcmonkey" Goodwin
2023-01-21 | extra networks UI | AUTOMATIC
    rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
2023-01-18 | add option to show/hide warnings | AUTOMATIC
    removed hiding warnings from LDSR
    fixed/reworked few places that produced warnings
2023-01-15 | big rework of progressbar/preview system to allow multiple users to prompt at the same time without getting previews of each other | AUTOMATIC
2023-01-14 | change hash to sha256 | AUTOMATIC
2023-01-13 | fix a bug caused by merge | AUTOMATIC
2023-01-13 | Merge branch 'master' into tensorboard | AUTOMATIC1111
2023-01-13 | print bucket sizes for training without resizing images #6620 | AUTOMATIC
    fix an error when generating a picture with embedding in it
2023-01-12 | Allow creation of zero vectors for TI | Shondoit
2023-01-11 | set descriptions | Vladimir Mandic
2023-01-10 | Support loading textual inversion embeddings from safetensors files | Lee Bousfield
2023-01-09 | make a dropdown for prompt template selection | AUTOMATIC
2023-01-09 | remove/simplify some changes from #6481 | AUTOMATIC
2023-01-09 | Merge branch 'master' into varsize | AUTOMATIC1111
2023-01-08 | make it possible for extensions/scripts to add their own embedding directories | AUTOMATIC
2023-01-08 | skip images in embeddings dir if they have a second .preview extension | AUTOMATIC
2023-01-08 | Add checkbox for variable training dims | dan
2023-01-08 | Allow variable img size | dan
2023-01-07 | CLIP hijack rework | AUTOMATIC
2023-01-06 | rework saving training params to file #6372 | AUTOMATIC
2023-01-06 | Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised | AUTOMATIC1111
    Save hypernet and textual inversion settings to text file, revised.
2023-01-06 | allow loading embeddings from subdirectories | Faber
2023-01-05 | typo in TI | Kuma
2023-01-05 | Include model in log file. Exclude directory. | timntorres
2023-01-05 | Clean up ti, add same behavior to hypernetwork. | timntorres
2023-01-05 | Add option to save ti settings to file. | timntorres
2023-01-04 | Merge branch 'master' into gradient-clipping | AUTOMATIC1111
2023-01-04 | use shared function from processing for creating dummy mask when training inpainting model | AUTOMATIC
2023-01-04 | fix the merge | AUTOMATIC
2023-01-04 | Merge branch 'master' into inpaint_textual_inversion | AUTOMATIC1111
2023-01-04 | Merge pull request #6253 from Shondoit/ti-optim | AUTOMATIC1111
    Save Optimizer next to TI embedding
2023-01-03 | add job info to modules | Vladimir Mandic
2023-01-03 | Save Optimizer next to TI embedding | Shondoit
    Also add check to load only .PT and .BIN files as embeddings (since we add .optim files in the same directory)
2023-01-02 | feat(api): return more data for embeddings | Philpax
2023-01-02 | fix the issue with training on SD2.0 | AUTOMATIC
2022-12-31 | changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 | AUTOMATIC
2022-12-31 | validate textual inversion embeddings | Vladimir Mandic
2022-12-24 | fix F541 f-string without any placeholders | Yuval Aboulafia
2022-12-14 | Fix various typos | Jim Hays
2022-12-03 | Merge branch 'master' into racecond_fix | AUTOMATIC1111
2022-11-30 | Use devices.autocast instead of torch.autocast | brkirch
2022-11-27 | Merge remote-tracking branch 'flamelaw/master' | AUTOMATIC
2022-11-27 | set TI AdamW default weight decay to 0 | flamelaw
2022-11-26 | Add support for Stable Diffusion 2.0 | AUTOMATIC
2022-11-23 | small fixes | flamelaw
2022-11-21 | fix pin_memory with different latent sampling method | flamelaw
2022-11-20 | Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw
2022-11-19 | change StableDiffusionProcessing to internally use sampler name instead of sampler index | AUTOMATIC
2022-11-05 | Simplify grad clip | Muhammad Rizqi Nur
2022-11-04 | Fixes race condition in training when VAE is unloaded | Fampai
    set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
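
The 2023-01-21 entry notes that hypernetworks are now attached through a <hypernet:name:weight> tag in the prompt rather than through settings. As a rough illustration of that tag syntax only, here is a minimal sketch; it is not the webui's actual parser, and the extract_hypernet_tags helper is hypothetical.

```python
import re

# Minimal sketch, not the webui's actual parser; the <hypernet:name:weight>
# tag format comes from the 2023-01-21 log entry above.
HYPERNET_TAG = re.compile(r"<hypernet:([^:>]+):([0-9]*\.?[0-9]+)>")


def extract_hypernet_tags(prompt: str):
    """Return (cleaned_prompt, [(name, weight), ...]) for every hypernet tag found."""
    tags = [(name, float(weight)) for name, weight in HYPERNET_TAG.findall(prompt)]
    cleaned = HYPERNET_TAG.sub("", prompt).strip()
    return cleaned, tags


# Example (hypothetical): extract_hypernet_tags("a castle <hypernet:anime:0.6>")
# -> ("a castle", [("anime", 0.6)])
```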
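
Several entries above concern how embedding files are discovered and loaded: symlinks are allowed in the embeddings folder (2023-01-25), embeddings may live in subdirectories (2023-01-06), .preview images are skipped (2023-01-08), only .PT/.BIN files are treated as embeddings because .optim files share the directory (2023-01-03), and safetensors files are supported (2023-01-10). The sketch below is a hypothetical approximation of that kind of directory scan and load, not the file's real implementation; every name in it is invented, and it assumes the safetensors package is installed.

```python
import os

import torch
from safetensors.torch import load_file  # assumes the safetensors package is available

# Hypothetical extension filter: keep .pt/.bin/.safetensors, skip .optim files
# and .preview images mentioned in the log entries above.
EMBEDDING_EXTS = {".pt", ".bin", ".safetensors"}


def scan_embedding_files(root: str):
    """Walk an embeddings directory, following symlinks and subdirectories,
    and yield paths whose extension looks like an embedding file."""
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=True):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EMBEDDING_EXTS:
                yield os.path.join(dirpath, name)


def load_embedding_tensors(path: str):
    """Load raw tensors from one embedding file; the key layout inside the
    file is format-specific and deliberately not modelled here."""
    if path.lower().endswith(".safetensors"):
        return load_file(path, device="cpu")
    return torch.load(path, map_location="cpu")
```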