Age | Commit message | Author
---|---|---
2023-01-10 | Variable dropout rate: implements variable dropout rate from #4549; fixes the hypernetwork multiplier being modifiable during training; guards against user error by setting the multiplier to lower values for training; renames a function to match the torch.nn.Module convention; fixes an RNG reset issue when generating previews by restoring the RNG state | aria1th
2023-01-09 | make a dropdown for prompt template selection | AUTOMATIC
2023-01-08 | Move batchsize check | dan
2023-01-08 | Add checkbox for variable training dims | dan
2023-01-06 | rework saving training params to file #6372 | AUTOMATIC
2023-01-05 | Include model in log file. Exclude directory. | timntorres
2023-01-05 | Clean up ti, add same behavior to hypernetwork. | timntorres
2023-01-04 | Merge branch 'master' into gradient-clipping | AUTOMATIC1111
2023-01-03 | add job info to modules | Vladimir Mandic
2022-12-25 | Merge pull request #5992 from yuvalabou/F541: fix F541 (f-string without any placeholders) | AUTOMATIC1111
2022-12-24 | implement train api | Vladimir Mandic
2022-12-24 | fix F541 f-string without any placeholders | Yuval Aboulafia
2022-12-03 | Merge branch 'master' into racecond_fix | AUTOMATIC1111
2022-11-30 | Use devices.autocast instead of torch.autocast | brkirch
2022-11-23 | last_layer_dropout default to False | flamelaw
2022-11-23 | fix dropout, implement train/eval mode | flamelaw
2022-11-23 | small fixes | flamelaw
2022-11-21 | fix pin_memory with different latent sampling method | flamelaw
2022-11-20 | Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw
2022-11-19 | change StableDiffusionProcessing to internally use sampler name instead of sampler index | AUTOMATIC
2022-11-07 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur
2022-11-05 | rework the code to not use the walrus operator because colab's 3.7 does not support it | AUTOMATIC
2022-11-05 | Merge pull request #4273 from Omegastick/ordered_hypernetworks: sort hypernetworks list | AUTOMATIC1111
2022-11-05 | Simplify grad clip | Muhammad Rizqi Nur
2022-11-04 | Sort straight out of the glob | Isaac Poulton
2022-11-04 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur
2022-11-04 | Sort hypernetworks | Isaac Poulton
2022-11-04 | Fixes race condition in training when VAE is unloaded: set_current_image can attempt to use the VAE when it is unloaded to the CPU while training | Fampai
2022-11-04 | only save if option is enabled | aria1th
2022-11-04 | split before declaring file name | aria1th
2022-11-04 | apply | aria1th
2022-11-04 | Merge branch 'master' into hn-activation | AUTOMATIC1111
2022-10-31 | Fix merge conflicts | Muhammad Rizqi Nur
2022-10-31 | Fix merge conflicts | Muhammad Rizqi Nur
2022-10-30 | Merge master | Muhammad Rizqi Nur
2022-10-30 | Merge pull request #3928 from R-N/validate-before-load: optimize training a little | AUTOMATIC1111
2022-10-30 | Fix dataset still being loaded even when training will be skipped | Muhammad Rizqi Nur
2022-10-30 | Add missing info on hypernetwork/embedding model log: mentioned in https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513; also groups the saving into one | Muhammad Rizqi Nur
2022-10-30 | Revert "Add cleanup after training": reverts commit 3ce2bfdf95bd5f26d0f6e250e67338ada91980d1 | Muhammad Rizqi Nur
2022-10-29 | Add cleanup after training | Muhammad Rizqi Nur
2022-10-29 | Add input validations before loading dataset for training | Muhammad Rizqi Nur
2022-10-29 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur
2022-10-29 | Merge branch 'AUTOMATIC1111:master' into 3825-save-hypernet-strength-to-info | timntorres
2022-10-29 | Merge pull request #3858 from R-N/log-csv: fix log off by 1 (#3847) | AUTOMATIC1111
2022-10-28 | Fix log off by 1 | Muhammad Rizqi Nur
2022-10-28 | Learning rate sched syntax support for grad clipping | Muhammad Rizqi Nur
2022-10-28 | Always ignore "None.pt" in the hypernet directory. | timntorres
2022-10-28 | Add missing support for linear activation in hypernetwork | benkyoujouzu
2022-10-28 | Gradient clipping in train tab | Muhammad Rizqi Nur
2022-10-27 | Revert unresolved changes in Bias initialization: it should be zeros_ or parameterized properly in future | AngelBottomless
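Several of the commits above revolve around two training-loop techniques: gradient accumulation (2022-11-20) and gradient clipping (the gradient-clipping branch and train-tab commits). A minimal, dependency-free sketch of the two combined is shown below; this is not the webui's actual implementation, and the function names, the toy loss, and all parameters are hypothetical, chosen only to illustrate the idea.

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale gradients in place so their global L2 norm is at most max_norm.

    Mirrors the behaviour of torch.nn.utils.clip_grad_norm_, but on a plain
    list of floats so the sketch stays dependency-free. Returns the norm
    measured before clipping.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads[:] = [g * scale for g in grads]
    return total_norm

def train_step(param, batches, lr, accum_steps, clip_value):
    """One optimizer step: accumulate gradients over `accum_steps`
    micro-batches, clip the accumulated gradient, then apply plain SGD."""
    grads = [0.0]
    for x in batches[:accum_steps]:
        # Toy loss 0.5 * (param - x)^2, so d(loss)/d(param) = param - x.
        # Dividing by accum_steps averages the micro-batch gradients.
        grads[0] += (param - x) / accum_steps
    clip_grad_norm(grads, clip_value)
    return param - lr * grads[0]
```

In the webui itself the clip value can additionally follow the same schedule syntax as learning rates ("Learning rate sched syntax support for grad clipping"); this sketch uses a fixed `clip_value` for simplicity.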