To address the issue of receiving garbled text as output after importing a fine-tuned model, consider the following steps:
Verify Model Path Configuration: Double-check that the MODEL_PATH for chatglm3-6b is correctly set. Incorrect paths or misplaced model files can lead to loading issues.
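A bad path can be caught before the server even tries to load the model. The sketch below assumes the dict-of-dicts `MODEL_PATH` layout used in the project's `model_config.py`; the path shown is a placeholder, not a real location:

```python
import os

# Hypothetical excerpt of the MODEL_PATH mapping from model_config.py;
# replace the path with your actual chatglm3-6b checkpoint directory.
MODEL_PATH = {
    "llm_model": {
        "chatglm3-6b": "/data/models/chatglm3-6b",
    },
}

def check_model_path(name: str) -> bool:
    """Return True if the configured path exists and holds HF weight files."""
    path = MODEL_PATH["llm_model"].get(name, "")
    if not os.path.isdir(path):
        return False
    files = os.listdir(path)
    # A usable Hugging Face checkpoint has a config.json plus weight shards.
    has_config = "config.json" in files
    has_weights = any(f.endswith((".bin", ".safetensors")) for f in files)
    return has_config and has_weights

print(check_model_path("chatglm3-6b"))
```

Running this once at startup (or in a scratch shell) rules out the most common cause of silent model-loading failures.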
PEFT File Path in startup.py: Ensure the PEFT file path you've added is correct and the file is accessible. Mistakes here could interfere with model performance.
PEFT_SHARE_BASE_WEIGHTS Compatibility: Confirm that setting PEFT_SHARE_BASE_WEIGHTS to true is compatible with the chatglm3-6b model. This setting makes multiple PEFT adapters share a single copy of the base model's weights, which not all models support seamlessly.
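One subtle pitfall here: environment variables are strings, so a naive truthiness test would treat the string "false" as enabled. A minimal sketch of robust parsing (the helper name is mine, not the project's):

```python
import os

def peft_share_base_weights() -> bool:
    """Parse the PEFT_SHARE_BASE_WEIGHTS flag from the environment.

    Environment variables arrive as strings, so "false", "0", and ""
    must all map to False; a bare bool(os.environ[...]) would treat
    the non-empty string "false" as True.
    """
    raw = os.environ.get("PEFT_SHARE_BASE_WEIGHTS", "false")
    return raw.strip().lower() in ("1", "true", "yes", "on")

os.environ["PEFT_SHARE_BASE_WEIGHTS"] = "true"
print(peft_share_base_weights())  # → True
```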
conv_template Configuration: The setting of conv_template to None might be affecting input processing. Verify this setting does not conflict with the chatglm3-6b model's requirements.
Model Compatibility: Ensure the chatglm3-6b model is compatible with your configuration, including any customizations in startup.py and adapter_config.json.
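Garbled output often means the adapter was trained against a different base checkpoint than the one being served. PEFT records the training-time base model in adapter_config.json under the standard `base_model_name_or_path` key, so a cheap consistency check is possible (the path in the comment is hypothetical):

```python
import json

def adapter_matches_base(adapter_config_path: str, base_model: str) -> bool:
    """Check that the PEFT adapter was built for the base model we serve.

    PEFT writes the base checkpoint into the adapter's config as
    base_model_name_or_path; a mismatch (e.g. an adapter trained on
    chatglm2-6b loaded onto chatglm3-6b) typically produces garbage tokens.
    """
    with open(adapter_config_path, encoding="utf-8") as f:
        cfg = json.load(f)
    recorded = cfg.get("base_model_name_or_path", "")
    # Compare on the final path component so local paths and hub ids match.
    return recorded.rstrip("/").split("/")[-1] == base_model

# Example (hypothetical path):
# adapter_matches_base("peft_output/adapter_config.json", "chatglm3-6b")
```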
Device Configuration: Check the device configuration (e.g., cuda, cpu) to ensure it's suitable for the chatglm3-6b model. Incompatibilities here can degrade output quality.
Inspect Error Logs: Review application logs for any errors or warnings that might hint at the problem's root cause.
Test with Default Configuration: Try reverting to a default configuration that previously worked, then incrementally apply your changes to isolate the issue.
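This bisection can be mechanized: start from the known-good config and apply your overrides one at a time, stopping at the first one that breaks output. A minimal sketch with hypothetical setting names and a stand-in checker (in practice the checker would query the model and test the response for mojibake):

```python
def isolate_breaking_change(default_cfg, overrides, is_broken):
    """Apply overrides one at a time to a working config and return the
    first key whose introduction makes is_broken(cfg) report failure."""
    cfg = dict(default_cfg)
    for key, value in overrides.items():
        cfg[key] = value
        if is_broken(cfg):
            return key
    return None

# Hypothetical settings; the lambda stands in for a real output check.
defaults = {"conv_template": "chatglm3", "peft_share_base_weights": False}
changes = {"conv_template": None, "peft_share_base_weights": True}
print(isolate_breaking_change(defaults, changes,
                              lambda c: c["conv_template"] is None))  # → conv_template
```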
For further guidance, you might want to consult the documentation or support resources for the models and libraries you're using.
Modified adapter_config.json and changed the base model to chatglm3-6b
Also set PEFT_SHARE_BASE_WEIGHTS=true in the environment configuration
Added the PEFT file path to args.model_names in startup.py
But the output is garbled text
MODEL_PATH configuration