Locally deployed model fails at startup: Register to controller raises socket.gaierror: Name or service not known #4025
For the specific case of Langchain-Chatchat, ensure you have correctly configured local paths for model files (in this setup, the `MODEL_PATH` dictionary in model_config.py). If you are using models such as Qwen-1_8B-Chat, point the configuration at a local copy rather than a remote repository id. Lastly, the application documentation provides instructions for pre-downloading the necessary models for offline use; follow those guidelines so that all required resources are available locally before starting the application.
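One such resource that is easy to miss is tiktoken's tokenizer data: the traceback in this issue shows a download of `gpt-2/encodings/main/vocab.bpe`. A common offline workaround is to pre-seed tiktoken's cache on a machine that has internet access and point the `TIKTOKEN_CACHE_DIR` environment variable at it. The sketch below is an assumption-laden illustration: the `/mnt/tiktoken_cache` directory is hypothetical, and it relies on tiktoken naming cached blobs after the SHA-1 hex digest of the source URL, which you should verify against the tiktoken version you actually run.

```python
import hashlib
import os
import shutil

# Hypothetical local directory that will hold pre-downloaded encoding
# files (e.g. vocab.bpe fetched on a machine with internet access).
CACHE_DIR = "/mnt/tiktoken_cache"  # assumed path, adjust to your setup

# The URL from the traceback in this issue.
VOCAB_URL = "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe"

def cache_key(url: str) -> str:
    """Filename tiktoken is expected to look for when caching `url`
    (SHA-1 hex digest of the URL; verify for your tiktoken version)."""
    return hashlib.sha1(url.encode()).hexdigest()

def seed_cache(local_vocab: str, cache_dir: str = CACHE_DIR) -> str:
    """Copy a pre-downloaded vocab.bpe into the cache under the
    expected name and point TIKTOKEN_CACHE_DIR at the cache."""
    os.makedirs(cache_dir, exist_ok=True)
    target = os.path.join(cache_dir, cache_key(VOCAB_URL))
    shutil.copy(local_vocab, target)
    # Must be set before any module imports tiktoken.
    os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
    return target
```

The environment variable has to be exported before the process imports tiktoken, e.g. in the shell that launches startup.py.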
The issue has been resolved.
How was it resolved?
问题描述 / Problem Description
Startup at http://127.0.0.1:20000 fails; Register to controller raises:
```
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f505ddc7e80>: Failed to establish a new connection: [Errno -2] Name or service not known'))
```
The runtime environment has no internet access and all models are installed offline, so in principle nothing should try to reach an internet domain. Could someone please point out where my steps or configuration are wrong? Many thanks.
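`socket.gaierror` with `[Errno -2]` means hostname resolution failed, which is exactly what happens in an air-gapped environment when some component still tries to reach an external host. As a quick diagnostic, one can reproduce the check in isolation; a minimal sketch:

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves to an address. In an offline
    environment this is the lookup that fails with socket.gaierror
    ([Errno -2] Name or service not known)."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False
```

Running `can_resolve("openaipublic.blob.core.windows.net")` in the deployment environment should return `False`, confirming the failure is DNS/offline related rather than a Langchain-Chatchat bug per se.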
复现问题的步骤 / Steps to Reproduce
环境信息 / Environment Information
### model_config.py configuration:
```python
import os

MODEL_ROOT_PATH = "/mnt"
EMBEDDING_MODEL = "bge-large-zh"
EMBEDDING_DEVICE = "cuda"
RERANKER_MODEL = "bge-reranker-large"
USE_RERANKER = False
RERANKER_MAX_LENGTH = 1024
EMBEDDING_KEYWORD_FILE = "keywords.txt"
EMBEDDING_MODEL_OUTPUT_PATH = "output"
LLM_MODELS = ["Qwen-1_8B-Chat"]
Agent_MODEL = None
LLM_DEVICE = "cuda"
HISTORY_LEN = 3
MAX_TOKENS = 2048
TEMPERATURE = 0.7
ONLINE_LLM_MODEL = {
}
MODEL_PATH = {
    "embed_model": {
        "bge-large-zh": "/mnt/bge-large-zh",
    },
    "llm_model": {
        "Qwen-1_8B-Chat": "/mnt/Qwen-1_8B-Chat",
    },
    "reranker": {
        "bge-reranker-large": "/mnt/bge-reranker-large",
    }
}
NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data")
VLLM_MODEL_DICT = {
    "Qwen-1_8B-Chat": "/mnt/Qwen-1_8B-Chat",
}
SUPPORT_AGENT_MODEL = [
    "Qwen",  # all local Qwen-series models
]
```
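When a configured local path does not exist, some loaders fall back to treating the value as a hub repository id and attempt a download, so in an offline setup it helps to fail fast on missing directories before launch. A hypothetical pre-flight sketch (the `MODEL_PATH` literal below simply mirrors the configuration above; this helper is not part of Langchain-Chatchat):

```python
import os

# Mirrors the MODEL_PATH from model_config.py above.
MODEL_PATH = {
    "embed_model": {"bge-large-zh": "/mnt/bge-large-zh"},
    "llm_model": {"Qwen-1_8B-Chat": "/mnt/Qwen-1_8B-Chat"},
    "reranker": {"bge-reranker-large": "/mnt/bge-reranker-large"},
}

def missing_paths(model_path):
    """Return every configured model path that is not a directory on
    disk, so missing downloads are caught before startup."""
    return [
        path
        for group in model_path.values()
        for path in group.values()
        if not os.path.isdir(path)
    ]
```

An empty return value means every configured model directory is present locally.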
### server_config.py configuration:
```python
OPEN_CROSS_DOMAIN = False
DEFAULT_BIND_HOST = "127.0.0.1"
WEBUI_SERVER = {
    "host": "127.0.0.1",
    "port": 8501,
}
API_SERVER = {
    "host": "127.0.0.1",
    "port": 7861,
}
FSCHAT_OPENAI_API = {
    "host": "127.0.0.1",
    "port": 20000,
}
FSCHAT_MODEL_WORKERS = {
    "default": {
        "host": "127.0.0.1",
        "port": 20002,
        "device": LLM_DEVICE,
    },
    "Qwen-1_8B-Chat": {
        "device": "cuda",
    },
}
FSCHAT_CONTROLLER = {
    "host": "127.0.0.1",
    "port": 20001,
    "dispatch_method": "shortest_queue",
}
```
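Since every service here binds 127.0.0.1, resolving the bind host itself should never fail; to rule out port conflicts or a broken hosts file, a small pre-flight bind check can help. A sketch (this helper is an illustration, not part of the project; the host/port pairs are copied from the configuration above):

```python
import socket

# Host/port pairs from server_config.py above.
SERVERS = {
    "webui": ("127.0.0.1", 8501),
    "api": ("127.0.0.1", 7861),
    "openai_api": ("127.0.0.1", 20000),
    "controller": ("127.0.0.1", 20001),
    "model_worker": ("127.0.0.1", 20002),
}

def check_bindable(host, port):
    """Try binding a TCP socket to confirm the address is valid and
    the port is free. socket.gaierror is a subclass of OSError, so a
    non-resolvable host is also reported as False."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

Running `check_bindable(*SERVERS["openai_api"])` before startup would surface a conflicting process on port 20000 immediately.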
### kb_config.py configuration:
```python
kbs_config = {
    "pg": {
        "connection_uri": "postgresql://postgres:postgres@10.7.212.217:5432/langchain_chatchat",
    },
}
```
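The PostgreSQL URI points at a remote host (10.7.212.217), which is unrelated to the tiktoken download in the traceback but is another place where a connectivity failure can surface at startup. An offline-safe sanity check is to parse the URI and inspect its components; a sketch using only the standard library:

```python
from urllib.parse import urlparse

def parse_pg_uri(uri):
    """Split a postgresql:// connection URI into its components for a
    quick sanity check, without opening a database connection."""
    u = urlparse(uri)
    return {
        "scheme": u.scheme,
        "user": u.username,
        "host": u.hostname,
        "port": u.port,
        "database": u.path.lstrip("/"),
    }
```

If the parsed host is a bare IP, as here, no DNS lookup is involved and the database URI cannot be the source of a `Name or service not known` error.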