Hi, I want to fine-tune the Llama 3 70B model using the AutoTrain library. How do I calculate the hardware requirements for this? Does the AutoTrain library use QLoRA optimization? With other libraries, 48 or 80 GB of VRAM is sufficient.

Replies: 1 comment

- AutoTrain uses QLoRA by default. You can find all the parameters here: https://huggingface.co/docs/autotrain/llm_finetuning
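For a rough sense of why 48 GB of VRAM can be enough for a 70B model under QLoRA, here is a back-of-envelope sketch. All the numbers below (4-bit NF4 weights, the adapter fraction, the activation budget) are illustrative assumptions, not AutoTrain's actual accounting; real usage depends heavily on batch size, sequence length, and which modules get LoRA adapters.

```python
# Back-of-envelope VRAM estimate for QLoRA fine-tuning.
# All defaults are rough rules of thumb, not exact requirements.

def estimate_qlora_vram_gb(
    params_billion: float,
    weight_bits: int = 4,          # QLoRA quantizes base weights to 4-bit NF4
    lora_fraction: float = 0.005,  # assumed trainable adapter params as a fraction of the base model
    activation_gb: float = 8.0,    # assumed activation/KV budget; grows with batch size and sequence length
) -> float:
    # Quantized, frozen base weights: params * (bits / 8) bytes -> GB
    base_weights = params_billion * weight_bits / 8
    # Trainable LoRA adapters in bf16, plus Adam optimizer state and gradients
    trainable = params_billion * lora_fraction
    adapters = trainable * 2   # bf16 weights, 2 bytes/param
    optimizer = trainable * 8  # two fp32 Adam moments, 8 bytes/param
    gradients = trainable * 2  # bf16 gradients, 2 bytes/param
    return base_weights + adapters + optimizer + gradients + activation_gb

print(f"Llama 3 70B QLoRA, rough estimate: ~{estimate_qlora_vram_gb(70):.0f} GB")
```

Under these assumptions the base weights alone take about 35 GB (70B params at 0.5 bytes each), and the total lands in the high-40s GB range, which matches the observation that a 48 GB card can be workable while 80 GB gives comfortable headroom.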