Import a model:latest aborted (core dumped) #4485
Comments
Can you post the Modelfile and the logs? What GGUF were you using? |
Import from PyTorch or Safetensors |
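For context, Ollama's import guide describes converting a PyTorch or Safetensors checkpoint to GGUF with llama.cpp's conversion script, then building the model from a Modelfile. A rough sketch of that flow (script names and paths vary by llama.cpp version and are assumptions here, not taken from this thread):

```shell
# Sketch of the PyTorch/Safetensors import flow; paths and the exact
# converter script name are placeholders that depend on your llama.cpp checkout.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the HF-format checkpoint directory to a single GGUF file
python llama.cpp/convert-hf-to-gguf.py /path/to/model --outfile model.gguf

# Point a Modelfile at the converted weights and create the model
echo 'FROM ./model.gguf' > Modelfile
ollama create example -f Modelfile
ollama run example
```

A conversion that silently produces an incompatible GGUF (for example, from an older converter) is one common way to end up with the llama runner aborting at load time.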
I ran ollama serve (as root@autodl-container-c438119a3c-80821c25:~/autodl-tmp#); this is the error, together with the logs: |
@pdevine Is this information enough to determine the cause of the error? |
Can you include the Modelfile as well? |
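For anyone following along, a minimal Modelfile for a local GGUF import usually looks like the following. This is a sketch based on Ollama's Modelfile reference; the path and parameters are placeholders, not the reporter's actual file:

```
# Hypothetical minimal Modelfile; ./model.gguf is a placeholder path
FROM ./model.gguf
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
```

Only the FROM line is required; TEMPLATE and PARAMETER lines are optional overrides.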
@Anorid Did you find out the cause? I have the same error. |
My Modelfile:
|
@ZhangZangQian can you update to the latest version of ollama?
|
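On Linux, the documented way to update Ollama is to re-run the official install script, which upgrades an existing installation in place (this assumes the standard Linux install method; other platforms update differently):

```shell
# Re-running the install script upgrades an existing Linux install
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the installed version afterwards
ollama -v
```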
Yes, that fixed it. |
I'll close out the issue. |
What is the issue?
I carefully followed the README documentation and tried to import a model, but running it fails:
root@autodl-container-36e51198ae-c4ed76b0:~/autodl-tmp/model# ollama create example -f Modelfile
transferring model data
using existing layer sha256:8c7d76a23837d1b07ca3c3aa497d90ffafdfc2fd417b93e4e06caeeabf4f1526
using existing layer sha256:dbc2ca980bfce0b44450f42033a51513616ac71f8b5881efbaa81d8f5e9b253e
using existing layer sha256:be7c61fea675f5a89b441192e604c0fcc8806a19e235421f17dda66e5fc67b2d
writing manifest
success
root@autodl-container-36e51198ae-c4ed76b0:~/autodl-tmp/model# ollama run example "What is your favourite condiment?"
Error: llama runner process has terminated: signal: aborted (core dumped)
root@autodl-container-36e51198ae-c4ed76b0:~/autodl-tmp/model# nivdia-smi
bash: nivdia-smi: command not found
root@autodl-container-36e51198ae-c4ed76b0:~/autodl-tmp/model# nvidia-smi
Fri May 17 10:02:03 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A40 On | 00000000:C1:00.0 Off | Off |
| 0% 25C P8 20W / 300W | 2MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
root@autodl-container-36e51198ae-c4ed76b0:~/autodl-tmp/model# ollama run example
Error: llama runner process has terminated: signal: aborted (core dumped)
OS
Linux
GPU
Nvidia
CPU
Intel
Ollama version
No response