path: root/modules/processing.py
Age         Commit message                                          Author
2023-02-10  Working UniPC (for batch size 1)  [space-nuko]
2023-02-03  txt2img Hires Fix  [Kyle]
2023-02-03  Image CFG Added (Full Implementation)  [Kyle]
    Uses a separate denoiser for edit (instruct-pix2pix) models. No impact
    to txt2img or regular img2img. "Image CFG Scale" will only apply to
    instruct-pix2pix models, and metadata will only be added when using
    such a model.
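The separate "Image CFG Scale" combines three noise predictions (unconditional, image-conditioned, and image-plus-instruction-conditioned), following the InstructPix2Pix paper's guidance formula. A minimal sketch of that combination; the function and argument names are illustrative, not the repo's actual code:

```python
def ip2p_cfg(e_uncond, e_img, e_txt_img, text_cfg_scale, image_cfg_scale):
    """Combine three noise predictions with separate text/image CFG scales.

    e_uncond:  prediction with neither condition
    e_img:     prediction conditioned on the input image only
    e_txt_img: prediction conditioned on both the image and the instruction
    """
    return (e_uncond
            + image_cfg_scale * (e_img - e_uncond)
            + text_cfg_scale * (e_txt_img - e_img))
```

With both scales at work, raising `image_cfg_scale` pulls the result toward the input image, while raising `text_cfg_scale` pushes it toward the edit instruction.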
2023-02-02  Processing only, no CFGDenoiser change  [Kyle]
    Allows instruct-pix2pix.
2023-02-02  Revert "instruct-pix2pix support"  [Kyle]
    This reverts commit 269833067de1e7d0b6a6bd65724743d6b88a133f.
2023-02-02  instruct-pix2pix support  [Kyle]
2023-01-30  make the program read Eta and Eta DDIM from generation parameters  [AUTOMATIC]
2023-01-29  remove Batch size and Batch pos from textinfo (goodbye)  [AUTOMATIC]
2023-01-28  Merge pull request #7309 from brkirch/fix-embeddings  [AUTOMATIC1111]
    Fix embeddings, upscalers, and refactor `--upcast-sampling`
2023-01-28  Refactor conditional casting, fix upscalers  [brkirch]
2023-01-27  add data-dir flag and set all user data directories based on it  [Max Audron]
2023-01-26  add an option to enable sections from extras tab in txt2img/img2img  [AUTOMATIC]
    fix some style inconsistencies
2023-01-26  Fix full previews, --no-half-vae  [brkirch]
2023-01-25  add edit_image_conditioning from my earlier edits in case there's an attempt to integrate pix2pix properly  [AUTOMATIC]
    This allows using the pix2pix model in img2img, though it won't work well this way.
2023-01-25  Merge pull request #6510 from brkirch/unet16-upcast-precision  [AUTOMATIC1111]
    Add upcast options, full precision sampling from float16 UNet, and upcasting attention for inference using SD 2.1 models without --no-half
2023-01-25  change the code for the live preview fix on OSX to be a bit more obvious  [AUTOMATIC]
2023-01-25  Add UI setting for upcasting attention to float32  [brkirch]
    Adds an "Upcast cross attention layer to float32" option in Stable
    Diffusion settings. This allows generating images using SD 2.1 models
    without --no-half or xFormers.

    To make upcasting the cross attention layer optimizations possible, it
    was necessary to indent several sections of code in
    sd_hijack_optimizations.py so that a context manager can be used to
    disable autocast. Also, even though Stable Diffusion (and Diffusers)
    only upcast q and k, my finding was that most of the cross attention
    layer optimizations could not function unless v is upcast as well.
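The upcast-inside-disabled-autocast pattern described here can be sketched as follows, assuming PyTorch; the function name and shapes are illustrative and this is not the actual sd_hijack_optimizations.py code:

```python
import torch

def upcast_cross_attention(q, k, v):
    # Compute attention in float32 even when the model runs in float16.
    # Autocast is disabled so the upcast is not silently undone; v is
    # upcast too, per the finding described in the commit message.
    out_dtype = q.dtype
    with torch.autocast(device_type=q.device.type, enabled=False):
        q, k, v = q.float(), k.float(), v.float()
        scale = q.shape[-1] ** -0.5
        attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
        out = attn @ v
    return out.to(out_dtype)
```

The final cast back to the caller's dtype keeps the rest of the float16 pipeline unchanged; only the matmuls and softmax run in float32.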
2023-01-25  Add option for float32 sampling with float16 UNet  [brkirch]
    This also handles type casting so that ROCm and MPS torch devices work
    correctly without --no-half. One cast is required for deepbooru in
    deepbooru_model.py, and some explicit casting is required for img2img
    and inpainting. depth_model can't be converted to float16 or it won't
    work correctly on some systems (it's known to have issues on MPS), so
    in sd_models.py model.depth_model is removed for model.half().
2023-01-23  Fix different first gen with Approx NN previews  [brkirch]
    The loading of the model for approx nn live previews can change the
    internal state of PyTorch, resulting in a different image. This can be
    avoided by preloading the approx nn model in advance.
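The fix as described amounts to filling a module-level cache once before generation begins, so the lazy load's side effects happen outside the sampling loop. A hedged sketch with hypothetical names, not the repo's actual API:

```python
# Sketch of the preloading fix described above: cache the approx NN
# preview model at startup so its first (lazy) load, which can perturb
# PyTorch's internal state, happens before any sampling starts.
_approx_nn_model = None

def load_approx_nn_model():
    """Load the approx NN model once and reuse it afterwards (stubbed)."""
    global _approx_nn_model
    if _approx_nn_model is None:
        _approx_nn_model = object()  # stand-in for the real checkpoint load
    return _approx_nn_model

def preload_approx_nn():
    # Called once during startup, before the first generation, so that
    # loading cannot change results mid-run.
    load_approx_nn_model()
```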
2023-01-22  enable compact view for train tab  [AUTOMATIC]
    prevent previews from ruining hypernetwork training
2023-01-21  extract extra network data from prompt earlier  [AUTOMATIC]
2023-01-21  make it so that extra networks are not removed from infotext  [AUTOMATIC]
2023-01-21  extra networks UI  [AUTOMATIC]
    rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
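The `<hypernet:name:weight>` syntax above can be pulled out of a prompt with a small regex-based parser. This is an illustrative sketch, not the webui's actual extra-networks module:

```python
import re

# Matches directives like <hypernet:anime:0.8> or <hypernet:anime>.
_RE_EXTRA_NETWORK = re.compile(r"<(\w+):([^>]+)>")

def parse_extra_networks(prompt):
    """Strip extra-network directives from a prompt and return them."""
    found = []

    def _collect(match):
        kind, args = match.group(1), match.group(2).split(":")
        weight = float(args[1]) if len(args) > 1 else 1.0  # default weight
        found.append((kind, args[0], weight))
        return ""  # remove the directive from the prompt text

    cleaned = _RE_EXTRA_NETWORK.sub(_collect, prompt).strip()
    return cleaned, found
```

Parsing the directives out early (as the 2023-01-21 commit above does) lets the rest of the pipeline see a clean prompt while the network names and weights are applied separately.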
2023-01-18  Merge pull request #6854 from EllangoK/master  [AUTOMATIC1111]
    Saves Extra Generation Parameters to params.txt
2023-01-18  use DDIM in hires fix if the sampler is PLMS  [AUTOMATIC]
2023-01-17  Changed params.txt save to after manual init call  [EllangoK]
2023-01-16  make StableDiffusionProcessing class not hold a reference to shared.sd_model object  [AUTOMATIC]
2023-01-16  Add a check and explanation for tensor with all NaNs.  [AUTOMATIC]
2023-01-14  change hypernets to use sha256 hashes  [AUTOMATIC]
2023-01-12  Fix extension parameters not being saved to last used parameters  [space-nuko]
2023-01-09  add an option to use old hiresfix width/height behavior  [AUTOMATIC]
    add a visual effect to inactive hires fix elements
2023-01-07  Merge branch 'AUTOMATIC1111:master' into img2img-api-scripts  [noodleanon]
2023-01-07  rework hires fix preview for #6437: move it to where it takes less space, make it actually account for all relevant sliders, and calculate dimensions correctly  [AUTOMATIC]
2023-01-05  allow img2img api to run scripts  [noodleanon]
2023-01-05  experimental optimization  [AUTOMATIC]
2023-01-05  move sd_model assignment to the place where we change the sd_model  [AUTOMATIC]
2023-01-05  Merge branch 'AUTOMATIC1111:master' into fix-sd-arch-switch-in-override-settings  [Philpax]
2023-01-05  make hires fix not do anything if the user chooses the second pass resolution to be the same as first pass resolution  [AUTOMATIC]
2023-01-04  fix incorrect display/calculation for number of steps for hires fix in progress bars  [AUTOMATIC]
2023-01-04  added the option to specify target resolution with possibility of truncating for hires fix; also sampling steps  [AUTOMATIC]
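One plausible reading of "target resolution with truncation" is cover-then-crop: upscale the first pass so it covers the target while preserving aspect ratio, then crop the overhang. A hedged sketch of that arithmetic (illustrative, not the repo's exact code):

```python
def hires_target_with_crop(src_w, src_h, dst_w, dst_h):
    # Scale so the image covers the target (max ratio, not min),
    # then center-crop the excess in whichever dimension overshoots.
    ratio = max(dst_w / src_w, dst_h / src_h)
    scaled_w, scaled_h = round(src_w * ratio), round(src_h * ratio)
    crop_x = (scaled_w - dst_w) // 2
    crop_y = (scaled_h - dst_h) // 2
    return scaled_w, scaled_h, crop_x, crop_y
```

For example, upscaling a 512x512 first pass to a 1024x768 target scales by 2x and crops 128 pixels from the top and bottom.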
2023-01-04  add XY plot parameters to grid image and do not add them to individual images  [AUTOMATIC]
2023-01-04  use shared function from processing for creating dummy mask when training inpainting model  [AUTOMATIC]
2023-01-04  add infotext to "-before-highres-fix" images  [AUTOMATIC]
2023-01-04  Merge pull request #6299 from stysmmaker/feat/latent-upscale-modes  [AUTOMATIC1111]
    Add more latent upscale modes
2023-01-04  Update processing.py  [MMaker]
2023-01-04  fix: Save full res of intermediate step  [MMaker]
2023-01-03  fix hires fix not working in API when user does not specify upscaler  [AUTOMATIC]
2023-01-02  Hires fix rework  [AUTOMATIC]
2022-12-31  make it so that memory/embeddings info is displayed in a separate UI element from generation parameters, and is preserved when you change the displayed infotext by clicking on gallery images  [AUTOMATIC]
2022-12-26  make it so that blank ENSD does not break image generation  [AUTOMATIC]