Reminder
Reproduction
CUDA_VISIBLE_DEVICES=1,2 deepspeed --master_port=9902 src/train_bash.py \
    --deepspeed ds_config_qwen.json \
    --stage sft \
    --do_train \
    --model_name_or_path "qwen1.5-1.8b" \
    --dataset_dir data \
    --dataset glaive_toolcall \
    --template qwen \
    --finetuning_type full \
    --cutoff_len 1024 \
    --overwrite_cache True \
    --output_dir path_to_sft_qwen_test \
    --max_samples 100000 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 2 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 10 \
    --save_steps 1000 \
    --warmup_steps 0 \
    --learning_rate 1e-6 \
    --num_train_epochs 2.0 \
    --plot_loss \
    --fp16 True \
    --overwrite_output_dir True
Expected behavior
Full-parameter fine-tuning. I have run into two problems so far:
1. After interrupting training midway with Ctrl-C, some directories become inaccessible and report "input/output error".
2. In multi-GPU training, after the run is interrupted midway (Ctrl-C, killing the process, or an OOM), the memory on one of the GPUs is never released (see the diagnostic sketch below).
Has anyone run into similar problems? Thanks.
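Not a confirmed fix, but a diagnostic sketch for problem 2: after a Ctrl-C or OOM in a distributed run, an orphaned worker process often survives and keeps its CUDA context (and therefore the GPU memory) alive. The commands below assume a Linux host with nvidia-smi, fuser, and pkill available; the process pattern train_bash.py is taken from the reproduction command above.

# List the processes that still hold memory on each visible GPU
nvidia-smi --query-compute-apps=pid,used_memory --format=csv
# Cross-check which PIDs have open handles on the NVIDIA device files
fuser -v /dev/nvidia*
# Kill any orphaned worker left over from the interrupted run
pkill -9 -f train_bash.py

For problem 1, an input/output error on a directory usually comes from the filesystem rather than the trainer; checking dmesg right after the failure should show whether the kernel logged disk errors or remounted the volume read-only.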
System Info
No response
Others
No response
Comments
I'm running into the same problem.