Age | Commit message | Author
---|---|---
2023-06-27 | Merge pull request #10823 from akx/model-loady: Upscaler model loading cleanup | AUTOMATIC1111
2023-06-13 | Fix up `if "http" in ...:` checks to use more sensible startswith | Aarni Koskela
2023-06-13 | Use os.makedirs(..., exist_ok=True) | Aarni Koskela
2023-05-31 | rename print_error to report, use it together with package name | AUTOMATIC
2023-05-29 | Add & use modules.errors.print_error where currently printing exception info by hand | Aarni Koskela
2023-05-10 | F401 fixes for ruff | AUTOMATIC
2023-01-27 | add data-dir flag and set all user data directories based on it | Max Audron
2022-11-12 | Set device for facelib/facexlib and gfpgan: FaceXLib/FaceLib doesn't pass the device argument to RetinaFace but instead chooses one itself and sets it to a global, so to use a device other than its internally chosen default it is necessary to manually replace the default value; the GFPGAN constructor needs the device argument to work with MPS or a CUDA device ID that differs from the default | brkirch
2022-10-04 | Merge branch 'master' into cpu-cmdline-opt | brkirch
2022-10-04 | send all three of GFPGAN's and codeformer's models to CPU memory instead of just one for #1283 | AUTOMATIC
2022-10-04 | Merge branch 'master' into master | brkirch
2022-10-03 | use existing function for gfpgan | AUTOMATIC
2022-09-30 | When device is MPS, use CPU for GFPGAN instead: GFPGAN will not work if the device is MPS, so default to CPU instead | brkirch
2022-09-30 | remove unwanted formatting/functionality from the PR | AUTOMATIC
2022-09-29 | Holy $hit. Yep. Fix gfpgan_model_arch requirement(s). Add Upscaler base class, move from images. Add a lot of methods to Upscaler. Re-work all the child upscalers to be proper classes. Add BSRGAN scaler. Add ldsr_model_arch class, removing the dependency for another repo that just uses regular latent-diffusion stuff. Add one universal method that will always find and load new upscaler models without having to add new "setup_model" calls. Still need to add command line params, but that could probably be automated. Add a "self.scale" property to all Upscalers so the scalers themselves can do "things" in response to the requested upscaling size. Ensure LDSR doesn't get stuck in a longer loop of "upscale/downscale/upscale" as we try to reach the target upscale size. Add typehints for IDE sanity. PEP-8 improvements. Moar. | d8ahazard
2022-09-26 | Re-implement universal model loading | d8ahazard
2022-09-23 | gfpgan: just download the damn model | AUTOMATIC
2022-09-12 | Instance of CUDA out of memory on a low-res batch, even with --opt-split-attention-v1 (found cause) #255 | AUTOMATIC
2022-09-07 | codeformer support | AUTOMATIC
2022-09-03 | option to unload GFPGAN after using | AUTOMATIC
2022-09-03 | split codebase into multiple files; to anyone this affects negatively: sorry | AUTOMATIC
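
The 2022-09-30 entry describes a device fallback: GFPGAN cannot run on Apple's MPS backend, so the code defaults to CPU in that case. A minimal sketch of that logic, using a hypothetical `pick_gfpgan_device` helper over device strings (not the repo's actual code):

```python
def pick_gfpgan_device(preferred: str) -> str:
    """Return the device GFPGAN should use.

    Hypothetical helper illustrating the 2022-09-30 fix: GFPGAN does not
    work on the MPS backend, so fall back to CPU there; any other device
    (e.g. "cuda:1", "cpu") is passed through unchanged.
    """
    return "cpu" if preferred == "mps" else preferred
```

For example, `pick_gfpgan_device("mps")` yields `"cpu"`, while `pick_gfpgan_device("cuda:1")` is returned unchanged.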