
Add simplified model manager install API to InvocationContext #6132

Open · wants to merge 55 commits into main from lstein/feat/simple-mm2-api

Conversation

@lstein (Collaborator) commented Apr 4, 2024

Summary

This adds two model manager-related methods to the InvocationContext uniform API. They are accessible via context.models.*:

  1. load_and_cache_model(source: Path | str | AnyHttpUrl, loader: Optional[Callable[[Path], Dict[str, Tensor]]] = None) -> LoadedModel

Load the model located at the indicated path, URL or repo_id.

This will download the model from the indicated location, cache it locally, and load it into the model manager RAM cache if needed. If the optional loader argument is provided, the loader will be invoked to load the model into memory. Otherwise the method will call safetensors.torch.load_file() or torch.load() (with a pickle scan) as appropriate to the file suffix. Diffusers models are supported via HuggingFace repo_ids.

Be aware that the LoadedModel object will have a config attribute of None.

Here is an example of usage:

def invoke(self, context: InvocationContext) -> ImageOutput:
    model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
    # Download (if needed), cache, and load into the RAM cache; returns a LoadedModel.
    loadnet = context.models.load_and_cache_model(model_url)
    with loadnet as loadnet_model:
        upscaler = RealESRGAN(loadnet=loadnet_model, ...)
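
If a checkpoint needs special handling, a custom loader callable can be supplied rather than relying on the default safetensors/torch loading. A minimal sketch, reusing model_url and context from the example above, and assuming a hypothetical checkpoint format that stores its state dict under a "params" key:

from pathlib import Path
from typing import Dict

import torch
from torch import Tensor

def unwrap_params(path: Path) -> Dict[str, Tensor]:
    # Hypothetical checkpoint layout: weights nested under a "params" key.
    checkpoint = torch.load(path, map_location="cpu")
    return checkpoint["params"]

loaded = context.models.load_and_cache_model(model_url, loader=unwrap_params)
with loaded as state_dict:
    model = build_model_from_state_dict(state_dict)  # hypothetical constructor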

  2. download_and_cache_model(source: str | AnyHttpUrl, access_token: Optional[str] = None, timeout: Optional[int] = 0) -> Path

Download the model file located at source to the models cache and return its Path.

This will check models/.download_cache for the desired model file and download it from the indicated source if not already present. The local Path to the downloaded file is then returned.
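
For invocations that only need the file on disk (no RAM-cache entry), usage is a single call. A minimal sketch, reusing the Real-ESRGAN URL from the example above:

import torch

model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
# Returns the local Path under models/.download_cache, downloading the file first if it is not already there.
model_path = context.models.download_and_cache_model(model_url)
state_dict = torch.load(model_path, map_location="cpu")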

Other Changes

This PR performs a migration, in which it renames models/.cache to models/.convert_cache, and migrates previously-downloaded ESRGAN, openpose, DepthAnything and Lama inpaint models from the models/core directory into models/.download_cache.

There are a number of legacy model files in models/core, such as GFPGAN, which are no longer used. This PR deletes them and tidies up the models/core directory.

Related Issues / Discussions

I have systematically replaced all the calls to download_with_progress_bar(). This function is no longer used elsewhere and has been removed.

QA Instructions

I have added unit tests for the three new calls. You may test that the load_and_cache_model() call is working by running the upscaler within the web app. On first try, you will see the model file being downloaded into the models .cache directory. On subsequent tries, the model will either load from RAM (if it hasn't been displaced) or will be loaded from the filesystem.

Merge Plan

Squash merge when approved.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)

@github-actions github-actions bot added python PRs that change python files services PRs that change app services labels Apr 4, 2024
@lstein lstein force-pushed the lstein/feat/simple-mm2-api branch from 9cc1f20 to af1b57a on April 12, 2024 01:46
@github-actions github-actions bot added invocations PRs that change invocations backend PRs that change backend files python-tests PRs that change python tests labels Apr 12, 2024
@lstein lstein marked this pull request as ready for review April 12, 2024 05:17
@lstein (Collaborator, Author) commented Apr 14, 2024

I have added a migration script that tidies up the models/core directory and removes unused models such as GFPGAN. In addition, I have renamed models/.cache to models/.convert_cache to distinguish it from models/.download_cache, the directory into which just-in-time models are downloaded. While the size of models/.convert_cache is capped so that less-used models are cleared periodically, files in models/.download_cache are not removed unless the user does so manually.

@lstein lstein force-pushed the lstein/feat/simple-mm2-api branch from 537a626 to 3ddd7ce on April 14, 2024 19:57
@lstein lstein force-pushed the lstein/feat/simple-mm2-api branch from 3ddd7ce to fa6efac on April 14, 2024 20:10
@psychedelicious (Collaborator) left a comment

I'm not sure what I was expecting the implementation to be, but it definitely wasn't as simple as this - great work.

I've requested a few changes and there's one discussion item that I'd like to marinate on before we change the public invocation API.

invokeai/app/invocations/upscale.py (review thread: outdated, resolved)
invokeai/app/services/shared/invocation_context.py (review thread: outdated, resolved)
@github-actions github-actions bot added the docs PRs that change docs label May 18, 2024
@lstein lstein marked this pull request as ready for review May 18, 2024 02:33
@lstein (Collaborator, Author) commented May 18, 2024

@psychedelicious This is ready for your review now. There are now just two calls: load_and_cache_model() and download_and_cache_model(), which return a LoadedModel and a locally cached Path respectively. In addition, the model source can now be a URL, a local Path, or a repo_id. Support for the latter involved refactoring the way that multifile downloads work.
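
For illustration, the three accepted source forms look like this (the local path and repo_id below are placeholders, not files shipped with the PR):

from pathlib import Path

# URL: downloaded into models/.download_cache on first use
context.models.load_and_cache_model('https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth')

# Local path: loaded directly from disk (placeholder path)
context.models.load_and_cache_model(Path('/some/local/model.safetensors'))

# HuggingFace repo_id: fetched as a diffusers model (example repo_id)
context.models.load_and_cache_model('stabilityai/sd-vae-ft-mse')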

@lstein (Collaborator, Author) commented May 28, 2024

@psychedelicious I just updated the whole thing to work properly with the new (and very nice) Pydantic-based events. I've also added a new migration. Please review when you can. I'm having to resolve merge conflicts fairly regularly!

Lincoln Stein and others added 4 commits May 28, 2024 19:30
- Any mypy issues are a misconfiguration of mypy
- Use simple conditionals instead of ternaries
- Consistent & standards-compliant docstring formatting
- Use `dict` instead of `typing.Dict`
Comment on lines -26 to +27

-    config: AnyModelConfig
     _locker: ModelLockerBase
+    config: Optional[AnyModelConfig] = None
Collaborator comment:

This is a notable API change. This means we cannot rely on a LoadedModel to have a config. Maybe this should be two separate classes, LoadedModelWithoutConfig and LoadedModel(LoadedModelWithoutConfig)...
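
A rough sketch of the split being suggested, using the field names from the diff above (not the PR's actual code; ModelLockerBase and AnyModelConfig are the existing types shown there):

from dataclasses import dataclass

@dataclass
class LoadedModelWithoutConfig:
    # Context-manager wrapper around a model in the RAM cache; no config record attached.
    _locker: ModelLockerBase

@dataclass
class LoadedModel(LoadedModelWithoutConfig):
    # Same behaviour, but guaranteed to carry the model's config record.
    config: AnyModelConfig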

@psychedelicious (Collaborator) commented Jun 2, 2024

I removed a number of unnecessary changes in invocation_context.py, mostly extraneous type annotations. If mypy is complaining about these, then that's a mypy problem, because all the methods are annotated correctly.

I also moved load_model_from_url from the main model manager class into the invocation context.

- Set `self._context=context` instead of changing the type signature of `run_processor`
- Tidy a few typing things
- Set `self._context=context` instead of passing it as an arg
Just a bit of typo protection in lieu of full type safety for these methods, which is difficult due to the typing of `DownloadEventHandler`.
It's inherited from the ABC.
Comment on lines +104 to +112
def diffusers_load_directory(directory: Path) -> AnyModel:
    load_class = GenericDiffusersLoader(
        app_config=self._app_config,
        logger=self._logger,
        ram_cache=self._ram_cache,
        convert_cache=self.convert_cache,
    ).get_hf_load_class(directory)
    result: AnyModel = load_class.from_pretrained(model_path, torch_dtype=TorchDevice.choose_torch_dtype())
    return result
Collaborator comment:

This function is unused. I think the logic that picks the loader should be checking whether the source is a directory? I'm not sure how to fix this myself because the diffusers_load_directory function has a different type signature than the other loader function options.
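
One possible shape for that check, as a sketch only (the single-file branches below just stand in for whatever loaders the PR actually uses):

import safetensors.torch
import torch

if model_path.is_dir():
    # Directories are diffusers folders; delegate to the loader defined above.
    loaded_model = diffusers_load_directory(model_path)
elif model_path.suffix == ".safetensors":
    loaded_model = safetensors.torch.load_file(model_path)
else:
    loaded_model = torch.load(model_path, map_location="cpu")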

@psychedelicious psychedelicious self-requested a review June 3, 2024 01:54
@psychedelicious (Collaborator) left a comment

Sorry for the delay in reviewing. I've tidied a few things and tested everything, working great!

Two minor issues noted.

Labels
api, backend, docs, invocations, python, python-tests, services
Development

Successfully merging this pull request may close these issues.

[bug]: CUDA out of memory error when upscaling x4 (or x2 twice)