path: root/modules/sd_models.py
Date        Commit message  (Author)
2023-12-06  Fix forced reload  (Kohaku-Blueleaf)
2023-12-02  Ensure the cached weight will not be affected  (Kohaku-Blueleaf)
2023-12-02  Merge branch 'dev' into test-fp8  (Kohaku-Blueleaf)
2023-12-01  Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load  (MrCheeze)
2023-11-25  Fix pre-fp8  (Kohaku-Blueleaf)
2023-11-21  Option for using fp16 weight when applying lora  (Kohaku-Blueleaf)
2023-11-19  Use options instead of cmd_args  (Kohaku-Blueleaf)
2023-11-16  Merge branch 'dev' into test-fp8  (Kohaku-Blueleaf)
2023-11-05  More changes for #13865: fix formatting, rename the function, add a comment and a readme entry  (AUTOMATIC1111)
2023-11-05  Linter fixes  (AUTOMATIC1111)
2023-11-05  Merge branch 'dev' into master  (AUTOMATIC1111)
2023-11-05  Use devices.torch_gc() instead of empty_cache()  (Ritesh Gangnani)
2023-11-05  Add SSD-1B as a supported model  (Ritesh Gangnani)
2023-10-28  ManualCast for 10/16 series GPUs  (Kohaku-Blueleaf)
2023-10-25  Ignore mps for fp8  (Kohaku-Blueleaf)
2023-10-25  Fix alphas_cumprod  (Kohaku-Blueleaf)
2023-10-25  Fix alphas_cumprod dtype  (Kohaku-Blueleaf)
2023-10-25  fp8 for TE  (Kohaku-Blueleaf)
2023-10-24  Fix lint  (Kohaku-Blueleaf)
2023-10-24  Add CPU fp8 support  (Kohaku-Blueleaf)
Since norm layers need fp32, only the linear operation layers (Conv2d/Linear) are converted. The TE also uses some PyTorch functions that do not support bf16 autocast on CPU, so a condition was added to indicate whether the autocast is for the unet.
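The selective-conversion rule described in that commit body can be sketched as follows. This is a hypothetical illustration, not the webui code: the layer classes stand in for `torch.nn` modules, and the dtype strings are only labels for the policy (fp8 for matmul-heavy layers, fp32 for norms).

```python
# Stand-in classes for torch.nn.Linear / Conv2d / norm layers,
# so the sketch is self-contained and runnable without torch.
class Linear: pass
class Conv2d: pass
class LayerNorm: pass
class GroupNorm: pass

# Only "linear operation" layers are eligible for fp8 storage.
FP8_ELIGIBLE = (Linear, Conv2d)

def storage_dtype(module):
    """Pick the dtype a module's weights should be stored in."""
    if isinstance(module, FP8_ELIGIBLE):
        return "float8_e4m3fn"   # fp8 saves memory on matmul-heavy layers
    return "float32"             # norm layers need full precision

print(storage_dtype(Linear()))     # float8_e4m3fn
print(storage_dtype(LayerNorm()))  # float32
```

The point of the predicate is that the conversion walks the module tree and casts per layer type, rather than casting the whole model with a single `.to(dtype)`.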
2023-10-19  Add sdxl-only arg  (Kohaku-Blueleaf)
2023-10-19  Add fp8 for sd unet  (Kohaku-Blueleaf)
2023-10-15  Repair the unload sd checkpoint button  (AUTOMATIC1111)
2023-10-14  Use shallow copy for #13535  (AUTOMATIC1111)
2023-10-14  Merge pull request #13535 from chu8129/dev  (AUTOMATIC1111)
Fix checkpoints_loaded {checkpoint: state_dict}: model.load_state_dict issue when the dict value is empty
2023-10-07  Revert  (wangqiuwen)
2023-10-07  Up  (wangqiuwen)
2023-09-30  Merge pull request #13139 from AUTOMATIC1111/ckpt-dir-path-separator  (AUTOMATIC1111)
Fix `--ckpt-dir` path separator and option to use `short name` for the checkpoint dropdown
2023-09-30  Add missing import, simplify code, use the patches module for #13276  (AUTOMATIC1111)
2023-09-30  Merge pull request #13276 from woweenie/patch-1  (AUTOMATIC1111)
Patch DDPM.register_betas so that users can put given_betas in the model yaml
2023-09-18  Fix  (王秋文/qwwang)
2023-09-15  Patch DDPM.register_betas so that users can put given_betas in the model yaml  (woweenie)
2023-09-15  dict[key] = model does not update the OrderedDict order; use move_to_end instead  (qiuwen.wang)
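The OrderedDict pitfall that commit describes can be shown in a few lines of plain Python (the cache variable here is illustrative, not the webui's actual `checkpoints_loaded` structure):

```python
from collections import OrderedDict

cache = OrderedDict()
cache["a"] = 1
cache["b"] = 2

# Re-assigning an existing key does NOT move it to the end,
# so an LRU-style cache updated this way keeps stale ordering:
cache["a"] = 10
print(list(cache))  # ['a', 'b']

# move_to_end refreshes the entry's position explicitly:
cache.move_to_end("a")
print(list(cache))  # ['b', 'a']
```

This matters for a model cache that evicts the oldest entry: without `move_to_end`, the most recently used checkpoint can be evicted first.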
2023-09-08  Parse string to path  (w-e-w)
2023-08-30  Keep order in the list of checkpoints when loading a model that doesn't have a checksum  (AUTOMATIC1111)
2023-08-23  Set devices.dtype_unet correctly  (AUTOMATIC1111)
2023-08-22  Add --medvram-sdxl  (AUTOMATIC1111)
2023-08-21  Fix for consistency with shared.opts.sd_vae of the UI  (Uminosachi)
2023-08-20  Change where the VAE state is stored in the model  (Uminosachi)
2023-08-20  Change to access the sd_model attribute with dot notation  (Uminosachi)
2023-08-20  Store base_vae and loaded_vae_file in sd_model  (Uminosachi)
2023-08-20  Fix SD VAE switch error after model reuse  (Uminosachi)
2023-08-17  Resolve the issue with loading fp16 checkpoints while using --no-half  (AUTOMATIC1111)
2023-08-16  Send weights to target device instead of CPU memory  (AUTOMATIC1111)
2023-08-16  Revert "send weights to target device instead of CPU memory"  (AUTOMATIC1111)
This reverts commit 0815c45bcdec0a2e5c60bdd5b33d95813d799c01.
2023-08-16  Send weights to target device instead of CPU memory  (AUTOMATIC1111)
2023-08-12  Put refiner into main UI, into the new accordions section  (AUTOMATIC1111)
Add VAE from the main model into infotext, not from the refiner model; option to make scripts UI without gr.Group; fix inconsistencies with refiner when using samplers that do more denoising than steps
2023-08-10  Resolve merge issues  (AUTOMATIC1111)
2023-08-10  Merge branch 'dev' into refiner  (AUTOMATIC1111)