Fix Llama3.1/Llama3.2 Exporter #12497

Merged

cuichenx merged 2 commits into main from aot/llama31-exporter on Mar 10, 2025

Conversation

@suiyoubi (Collaborator) commented Mar 5, 2025

Important

The Update branch button should be pressed only on very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do?

Fixes the Llama3.1/Llama3.2 exporter so that rope_scaling is written correctly to the Hugging Face config when converting a NeMo checkpoint to HF format. The weight files are unchanged.

Collection: llm

Changelog

  • Fix rope_scaling export in the Llama3.1/Llama3.2 HF exporter config.
  • Update token ids.

Usage

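A minimal usage sketch: exporting a NeMo 2.0 Llama 3.1 checkpoint to Hugging Face format via llm.export_ckpt (the paths below are illustrative placeholders, not taken from this PR).

```python
# Sketch only: convert a NeMo 2.0 checkpoint to HF format.
# Paths are illustrative placeholders.
from pathlib import Path

from nemo.collections import llm

if __name__ == "__main__":
    llm.export_ckpt(
        path=Path("/checkpoints/llama31_8b_nemo2"),  # NeMo 2.0 checkpoint directory
        target="hf",                                 # export to Hugging Face format
        output_path=Path("/checkpoints/llama31_8b_hf"),
    )
```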

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI runs automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g. Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

Signed-off-by: Ao Tang <aot@nvidia.com>
@aflah02 commented Mar 5, 2025

Hi @suiyoubi
I just saw this PR. I have also been facing issues when converting .distcp checkpoints saved during pretraining to HF-compatible ones. Does this PR intend to address that?

@suiyoubi (Collaborator, Author) commented Mar 5, 2025

Hi @aflah02, this PR fixes the converted HF config only, not the weight files.

Specifically, rope_scaling is not exported properly when converting the model to HF. The weight files are unchanged.
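For reference, one quick way to sanity-check the fix is to inspect the exported config.json (a sketch; the path is illustrative, and the expected values in the comment follow the upstream Llama 3.1 HF reference config rather than this PR):

```python
# Sketch: verify that rope_scaling survived the export. Path is illustrative.
import json

with open("/checkpoints/llama31_8b_hf/config.json") as f:
    cfg = json.load(f)

# For Llama 3.1, HF expects a "llama3"-type rope_scaling block along these lines:
# {"rope_type": "llama3", "factor": 8.0, "low_freq_factor": 1.0,
#  "high_freq_factor": 4.0, "original_max_position_embeddings": 8192}
print(cfg.get("rope_scaling"))
```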

@aflah02 commented Mar 5, 2025

ah gotcha, thanks!

@aflah02 commented Mar 5, 2025

@suiyoubi Is there any way right now to convert a distcp checkpoint of a Llama 3.1-config-based model (an intermediate training checkpoint) to NeMo format? I can't see one. Was wondering if you knew of any, since you're working on this issue.

I have raised an issue here - #12381

@suiyoubi suiyoubi requested a review from cuichenx March 6, 2025 03:34
Signed-off-by: Ao Tang <aot@nvidia.com>
@suiyoubi suiyoubi added Run CICD and removed Run CICD labels Mar 7, 2025
@ko3n1g ko3n1g added Run CICD and removed Run CICD labels Mar 7, 2025
@suiyoubi (Collaborator, Author) commented Mar 7, 2025

> @suiyoubi Is there any way right now to convert a distcp checkpoint of a Llama 3.1-config-based model (an intermediate training checkpoint) to NeMo format? I can't see one. Was wondering if you knew of any, since you're working on this issue.
>
> I have raised an issue here - #12381

What distcp checkpoint do you have? Currently we only officially support NeMo 1/2 <=> HF checkpoint conversion.
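For the supported direction, a minimal sketch of the HF -> NeMo 2.0 import path (assuming the NeMo 2.0 llm.import_ckpt entry point; the model id and output path are illustrative):

```python
# Sketch: HF -> NeMo 2.0 conversion via the officially supported import path.
# Identifiers below are illustrative, not taken from this thread.
from pathlib import Path

from nemo.collections import llm

if __name__ == "__main__":
    llm.import_ckpt(
        model=llm.LlamaModel(llm.Llama31Config8B()),  # target NeMo model/config
        source="hf://meta-llama/Llama-3.1-8B",        # HF model id (illustrative)
        output_path=Path("/checkpoints/llama31_8b_nemo2"),
    )
```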

@aflah02 commented Mar 7, 2025

Hi @suiyoubi

I basically used a variant of this script, which saves intermediate checkpoints in distcp format - https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/llm/recipes/llama32_1b.py
These are the checkpoints from one of the runs - https://huggingface.co/aflah/llama32_1b_dclm-SL-2048-PGBS-16-GAS-4-NGPU-8-NNODES-1-TW-PERF/tree/main

I also tried to use the NeMoModelCheckpoint callback (https://github.com/NVIDIA/NeMo/blob/8d6b58c16e5568cb0755aab163d5d191cc23475a/nemo/utils/callbacks/nemo_model_checkpoint.py#L37), but it crashes due to an attribute error.

So essentially the checkpoints cannot be used at all, which makes the training useless since they are not shareable.

@github-actions bot (Contributor) commented Mar 8, 2025

[🤖]: Hi @suiyoubi 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

I'm just a bot, so I'll leave it to you to decide what to do next.

//cc @pablo-garay @ko3n1g

@cuichenx (Collaborator) left a comment

LGTM

@cuichenx cuichenx merged commit 76435ce into main Mar 10, 2025
294 of 1481 checks passed
@cuichenx cuichenx deleted the aot/llama31-exporter branch March 10, 2025 20:39
BoxiangW pushed a commit that referenced this pull request Mar 10, 2025
* FIx Exporter

Signed-off-by: Ao Tang <aot@nvidia.com>

* token id update

Signed-off-by: Ao Tang <aot@nvidia.com>

---------

Signed-off-by: Ao Tang <aot@nvidia.com>
yuanzhedong pushed a commit to yuanzhedong/NeMo that referenced this pull request Mar 10, 2025
* FIx Exporter

Signed-off-by: Ao Tang <aot@nvidia.com>

* token id update

Signed-off-by: Ao Tang <aot@nvidia.com>

---------

Signed-off-by: Ao Tang <aot@nvidia.com>