
How to load a model from local disk path? #4494

Closed
quzhixue-Kimi opened this issue May 17, 2024 · 2 comments

Labels
feature request New feature or request

Comments


quzhixue-Kimi commented May 17, 2024

Hi there,

I have two Ubuntu 20.04 servers (one is a local machine, the other is a production server), both with the latest Ollama binary installed following the documentation at https://github.com/ollama/ollama/blob/main/docs/linux.md

My local Ubuntu 20.04 machine has internet access, so I ran the commands to download the llama3 and llama3:70b models, which are stored in /usr/share/ollama/.ollama/models

The other Ubuntu 20.04 server is a production server and has no internet access.
I simply copied all model files from my local machine to the production server, using the same model path: /usr/share/ollama/.ollama/models
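
Roughly, the copy was something like the following (a sketch only; "prod-server" is a placeholder hostname, and the chown assumes the default ollama service user created by the install script):

rsync -a /usr/share/ollama/.ollama/models/ prod-server:/usr/share/ollama/.ollama/models/
ssh prod-server 'sudo chown -R ollama:ollama /usr/share/ollama/.ollama/models'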

After starting the Ollama process on the production server, I ran 'ollama list' and no models were listed. And when I ran 'ollama run llama3', the following error occurred: 'pulling manifest Error: pull model manifest: Get https://registry.ollama.ai/v2/library/llama3/manifests/latest: dial tcp: lookup registry.ollama.ai on 127.0.0.53:53 server misbehaving'

The above error is caused by the lack of internet access on my production server.

I would appreciate it if you could tell me whether there is an environment variable I can set so the models can be loaded without internet access.

BR
Kimi

quzhixue-Kimi added the feature request New feature or request label May 17, 2024

amonpaike commented May 17, 2024

Transforming the names of the .gguf files into hash names is a poor approach. LLM models are large and take up a lot of space, so at a certain point it is not practical to duplicate them just to be able to use them with other LLM runners, and the hashed names also make the models very difficult to identify.

Furthermore, the same hashed files (or the .ollama folder) cannot be shared between Windows and Linux, because the blob for a given model is named, for example, "sha256-b9a918323fcb82484b5a51ecd08c251821a16920c4b57263dc8a2f8fc3348923" on Windows and "sha256:b9a918323fcb82484b5a51ecd08c251821a16920c4b57263dc8a2f8fc3348923" on Linux.

This makes it complicated to share models from a single external disk.
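
If the only difference really is the separator in the blob file names, a rename loop like the one below might be enough to adapt a Linux-style blobs folder to the Windows-style naming before copying it (untested, just a sketch; the path is a placeholder):

cd ~/.ollama/models/blobs
for f in sha256:*; do mv "$f" "${f/sha256:/sha256-}"; done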


quzhixue-Kimi commented May 20, 2024

The issue has been fixed by editing the /etc/systemd/system/ollama.service file:

[Service]
Environment="OLLAMA_MODELS=my_model_path"

and then reloading systemd and restarting the service:

systemctl daemon-reload
systemctl restart ollama.service
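
The same result can be achieved without touching the unit file directly, by adding a drop-in override (a sketch; the Environment line is the same as above, and my_model_path is a placeholder):

sudo systemctl edit ollama.service
# in the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_MODELS=my_model_path"
sudo systemctl daemon-reload
sudo systemctl restart ollama.service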

my_model_path is just /home/kimi/.ollama/models, and this models folder contains only two subfolders, named blobs and manifests.
The blobs folder contains the sha256-XXXXXXXXXX files; do not add any other model folders!

Once the configuration is correct, running 'ollama list' will show the models.
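
For illustration, the layout under OLLAMA_MODELS looks roughly like this on my machine (the digests are placeholders, and the manifest path is what pulling llama3 created here; check your own tree before copying):

/home/kimi/.ollama/models
  blobs/
    sha256-<digest of the GGUF blob>
    sha256-<digests of the other layers>
  manifests/
    registry.ollama.ai/
      library/
        llama3/
          latest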

BR
Kimi
