DeepSpeedCheckpoint: support custom final ln idx #5506

Merged: 7 commits merged into microsoft:master on May 29, 2024

Conversation

@nelyahu (Contributor) commented May 8, 2024

Until now, only the last layer (idx=-1) was considered, via FINAL_LAYER_NORM_INDEX, which is set to -1. This PR allows the user to pass a custom value for models where this default does not apply. See an example of usage in the HabanaAI/Megatron-DeepSpeed fork repository:
https://github.com/HabanaAI/Megatron-DeepSpeed/blob/c9feb8cacabc6dd4da4266cff08db555a21122e2/tools/verify_checkpoint_non_tp_consistency.py#L296
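
A minimal sketch of how the new option might be used; the keyword name final_layer_norm_idx and the exact constructor signature are assumptions based on this PR's description and the linked example, not a verified copy of the merged API:

```python
# Illustrative sketch only: passing a custom final layer-norm index when
# loading a DeepSpeed checkpoint. The keyword name `final_layer_norm_idx`
# is assumed from this PR's description and may differ in the actual API.
from deepspeed.checkpoint import DeepSpeedCheckpoint

# Default behavior: the final layer norm is taken to be the last layer (idx=-1).
ckpt = DeepSpeedCheckpoint('/path/to/checkpoint', tp_degree=1, pp_degree=1)

# LLaMA-style models have an extra RMSNorm before the projection layer,
# so the final layer norm sits at idx=-2 instead of the default -1.
llama_ckpt = DeepSpeedCheckpoint(
    '/path/to/llama_checkpoint',
    tp_degree=1,
    pp_degree=1,
    final_layer_norm_idx=-2,  # assumed keyword; see the linked usage example
)
```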

@nelyahu nelyahu requested a review from tjruwase as a code owner May 8, 2024 09:17
Until now, only the last layer (idx=-1) was considered, via FINAL_LAYER_NORM_INDEX, which is set to -1. This commit allows the user to pass a custom value for models where this default value does not apply.
@nelyahu (Contributor, Author) commented May 27, 2024

@loadams, can you please re-run the "nv-torch-latest-v100" validation? I think it failed on a setup issue.

@loadams (Contributor) commented May 28, 2024

> @loadams, can you please re-run the "nv-torch-latest-v100" validation? I think it failed on a setup issue.

Yes, re-running now and will work on getting this merged.

@loadams loadams enabled auto-merge May 28, 2024 16:28
@loadams loadams added this pull request to the merge queue May 28, 2024
@loadams loadams removed this pull request from the merge queue due to a manual request May 28, 2024
@loadams loadams enabled auto-merge May 28, 2024 21:37
@loadams loadams added this pull request to the merge queue May 28, 2024
Merged via the queue into microsoft:master with commit 2fc702e May 29, 2024
12 checks passed
@jinyouzhi (Contributor) commented
Sorry to disturb you here, but could you explain why FINAL_LAYER_NORM_INDEX is set to -2 rather than -1 for LLaMA? @nelyahu Thanks.

@nelyahu (Contributor, Author) commented Jun 4, 2024

@jinyouzhi The previous code assumed that the model is built of an embedding layer + transformer layers + a projection layer. Therefore, to get all transformer layers it took layers[1:-1] (i.e., excluding the embedding and the projection layers). However, LLaMA has an RMSNorm layer before the projection layer, so the above assumption is incorrect, and the second-to-last layer needs to be excluded as well (i.e., embedding, RMSNorm, and projection); a small illustration follows below.

Also, this approach of fetching layers based on their indices is not good practice; more effort should be invested in selecting layers based on their type so that the logic is generic and robust.
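
The following is an illustrative sketch (not DeepSpeed code; the layer names are made up) of the indexing assumption described above, showing why a LLaMA-style layout needs a final layer-norm index of -2 instead of the default -1:

```python
# Illustrative only: how position-based slicing behaves for the two layouts.
gpt_style = ["embedding", "transformer_0", "transformer_1", "projection"]
llama_style = ["embedding", "transformer_0", "transformer_1", "rmsnorm", "projection"]

# Original assumption: everything between the embedding and the last layer
# is a transformer layer.
print(gpt_style[1:-1])    # ['transformer_0', 'transformer_1']            -> correct

# The same slice on a LLaMA-style model wrongly includes the final RMSNorm.
print(llama_style[1:-1])  # ['transformer_0', 'transformer_1', 'rmsnorm'] -> wrong

# With a custom final layer-norm index of -2, the slice excludes it as intended.
final_layer_norm_idx = -2
print(llama_style[1:final_layer_norm_idx])  # ['transformer_0', 'transformer_1']
```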

@jinyouzhi (Contributor) commented
@nelyahu got it. Thank you very much!
