
[bug]: Error Dependencies #6369

Closed · 1 task done
SaucierMusic opened this issue May 15, 2024 · 11 comments
Labels
bug Something isn't working

Comments

@SaucierMusic

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

GeForce GTX 1060

GPU VRAM

?

Version number

HP Omen

Browser

Google

Python dependencies

Installing backend dependencies ... error
error: subprocess-exited-with-error

× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting cython
Using cached Cython-3.0.10-cp310-cp310-win32.whl.metadata (3.2 kB)
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Error: Unexpected exit code: 1
Command line: | 'C:\Users\M249_\invokeai.venv\Scripts\pip.exe' install --require-virtualenv --force-reinstall --use-pep517 'invokeai[xformers,onnx-cuda]' --extra-index-url https://download.pytorch.org/whl/cu121
Could not install InvokeAI. Please try downloading the latest version of the installer and install again.

What happened

..

What you expected to happen

..

How to reproduce the problem

..

Additional context

..

Discord username

sauciermusic

@SaucierMusic SaucierMusic added the bug Something isn't working label May 15, 2024
@psychedelicious
Collaborator

Hi, can you please confirm that you have installed by following the install instructions on the latest release to download and run the installer?

@psychedelicious
Collaborator

Also, please paste the output of python --version.

@SaucierMusic
Author

Hi, thank you so much for responding!!

C:\Users\M249_>py --version
Python 3.10.9

Yes, I followed them, and it fails at the very last stage of the install. ChatGPT said something about torch?? I am so new to all of this, I'm so sorry lol

@psychedelicious
Collaborator

It's unexpected that the installer is attempting to install cython:

 Collecting cython
Using cached Cython-3.0.10-cp310-cp310-win32.whl.metadata (3.2 kB)

I don't think that is in our dependency graph at all. I just did a fresh install to double-check - no cython. Do you have any ideas why it might be attempting to install cython (e.g. you manually installed a specific build of python)?

Is this a new install or an update of an existing install?

@SaucierMusic
Author

SaucierMusic commented May 15, 2024

Hello! This is a new install, following the link from the Helium X guide... I did install both 32-bit and 64-bit because I wasn't sure which I needed, from this link: https://www.python.org/downloads/release/python-3109/
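(The earlier log shows pip caching a `Cython-3.0.10-cp310-cp310-win32.whl`, which suggests the 32-bit interpreter is the one being picked up; PyTorch only publishes 64-bit Windows wheels, which would explain `No matching distribution found for torch`. A quick diagnostic sketch, not part of the installer, to check which architecture the active Python is:)

```python
import platform
import struct

# Pointer size in bytes: 8 on a 64-bit CPython, 4 on a 32-bit one.
bits = struct.calcsize("P") * 8
print(f"Python {platform.python_version()} is {bits}-bit")

if bits != 64:
    # torch ships no 32-bit Windows wheels, so pip reports "from versions: none".
    print("32-bit interpreter detected: pip cannot find a torch wheel for it.")
```

Running this with the interpreter the installer uses would confirm whether the 32-bit install is the culprit before reinstalling.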

@SaucierMusic
Author

SaucierMusic commented May 15, 2024

After using ChatGPT (which recommended I try to update some stuff or something), I now see I went from 5 lines of output to 3? lol, not sure if this whole thing helps...

Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting invokeai[onnx-cuda,xformers]
Using cached InvokeAI-4.2.1-py3-none-any.whl.metadata (23 kB)
Collecting accelerate==0.29.2 (from invokeai[onnx-cuda,xformers])
Using cached accelerate-0.29.2-py3-none-any.whl.metadata (18 kB)
Collecting clip-anytorch==2.5.2 (from invokeai[onnx-cuda,xformers])
Using cached clip_anytorch-2.5.2-py3-none-any.whl.metadata (8.1 kB)
Collecting compel==2.0.2 (from invokeai[onnx-cuda,xformers])
Using cached compel-2.0.2-py3-none-any.whl.metadata (12 kB)
Collecting controlnet-aux==0.0.7 (from invokeai[onnx-cuda,xformers])
Using cached controlnet_aux-0.0.7.tar.gz (202 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting diffusers==0.27.2 (from diffusers[torch]==0.27.2->invokeai[onnx-cuda,xformers])
Using cached diffusers-0.27.2-py3-none-any.whl.metadata (18 kB)
Collecting invisible-watermark==0.2.0 (from invokeai[onnx-cuda,xformers])
Using cached invisible_watermark-0.2.0-py3-none-any.whl.metadata (8.2 kB)
INFO: pip is looking at multiple versions of invokeai[onnx-cuda,xformers] to determine which version is compatible with other requirements. This could take a while.
Collecting invokeai[onnx-cuda,xformers]
Using cached InvokeAI-4.2.0-py3-none-any.whl.metadata (23 kB)
Using cached InvokeAI-4.1.0-py3-none-any.whl.metadata (34 kB)
Using cached InvokeAI-4.0.4-py3-none-any.whl.metadata (34 kB)
Collecting accelerate==0.28.0 (from invokeai[onnx-cuda,xformers])
Using cached accelerate-0.28.0-py3-none-any.whl.metadata (18 kB)
Collecting invokeai[onnx-cuda,xformers]
Using cached InvokeAI-4.0.3-py3-none-any.whl.metadata (34 kB)
Using cached InvokeAI-4.0.2-py3-none-any.whl.metadata (34 kB)
Using cached InvokeAI-4.0.1-py3-none-any.whl.metadata (34 kB)
Using cached InvokeAI-4.0.0-py3-none-any.whl.metadata (34 kB)
INFO: pip is still looking at multiple versions of invokeai[onnx-cuda,xformers] to determine which version is compatible with other requirements. This could take a while.
Using cached InvokeAI-3.7.0-py3-none-any.whl.metadata (33 kB)
Collecting accelerate==0.27.2 (from invokeai[onnx-cuda,xformers])
Using cached accelerate-0.27.2-py3-none-any.whl.metadata (18 kB)
Collecting diffusers==0.26.3 (from diffusers[torch]==0.26.3->invokeai[onnx-cuda,xformers])
Using cached diffusers-0.26.3-py3-none-any.whl.metadata (19 kB)
Collecting invokeai[onnx-cuda,xformers]
Using cached InvokeAI-3.6.3-py3-none-any.whl.metadata (33 kB)
Collecting accelerate==0.26.1 (from invokeai[onnx-cuda,xformers])
Using cached accelerate-0.26.1-py3-none-any.whl.metadata (18 kB)
Collecting diffusers==0.26.2 (from diffusers[torch]==0.26.2->invokeai[onnx-cuda,xformers])
Using cached diffusers-0.26.2-py3-none-any.whl.metadata (18 kB)
Collecting invokeai[onnx-cuda,xformers]
Using cached InvokeAI-3.6.2-py3-none-any.whl.metadata (33 kB)
Collecting basicsr==1.4.2 (from invokeai[onnx-cuda,xformers])
Using cached basicsr-1.4.2.tar.gz (172 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... error
error: subprocess-exited-with-error

× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> [3 lines of output]
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Error: Unexpected exit code: 1
Command line: | 'C:\Users\M249_\invokeai.venv\Scripts\pip.exe' install --require-virtualenv --force-reinstall --use-pep517 'invokeai[xformers,onnx-cuda]' --extra-index-url https://download.pytorch.org/whl/cu121
Could not install InvokeAI. Please try downloading the latest version of the installer and install again.
Press any key to continue . . .

@psychedelicious
Collaborator

Hello! This is a new install following the link from the Helium X guide...I did install 32 bit and 64 bit because I wasn't sure which I needed : from this link https://www.python.org/downloads/release/python-3109/

What is the Helium X guide?

You don't need both 32-bit and 64-bit Pythons - who knows what trouble that might cause.

I'd suggest uninstalling everything you've installed so far to get a fresh start, then following the simple instructions from our own docs to install Python and Invoke. You'll need to delete the invokeai folder in your home folder too.

Installation guide: https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/

Be sure to review the requirements and follow the instructions there.

@SaucierMusic
Author

Sorry, I'm not seeing a download for Python?

@psychedelicious
Collaborator

The installation requirements are here: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_REQUIREMENTS/

@SaucierMusic
Author

Thank you so much, I believe it's working!! Got to the stage of installing models now! Thank you SO MUCH!

@psychedelicious
Collaborator

Sweet, glad you are up and running. Have fun!
