Add skills: automodel-expert-lora and megatron-bridge-lora-sft #47
Closed
Doondi-Ashlesh wants to merge 3 commits into
Conversation
- Add automodel-expert-lora to skills/NeMo-AutoModel/ with component entry
- Add megatron-bridge-lora-sft to skills/Megatron-Bridge/
- Each skill includes SKILL.md, card.yaml, and evals/evals.json

Signed-off-by: Doondi-Ashlesh <doondiashlesh@gmail.com>
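Based on the paths and filenames listed above, the added layout is presumably:

```
skills/
├── NeMo-AutoModel/
│   └── automodel-expert-lora/
│       ├── SKILL.md
│       ├── card.yaml
│       └── evals/
│           └── evals.json
└── Megatron-Bridge/
    └── megatron-bridge-lora-sft/
        ├── SKILL.md
        ├── card.yaml
        └── evals/
            └── evals.json
```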
sayalinvidia pushed a commit that referenced this pull request on May 5, 2026
The DCO check (#38) verifies every commit carries a Signed-off-by trailer but doesn't validate that the author email matches an NVIDIA-affiliated address. Catalog content is published externally under NVIDIA's name — accepting commits from arbitrary personal/external email addresses creates IP-traceability gaps that are hard to clean up after the fact.

PR #47 surfaced this gap concretely: an external contributor opened a catalog onboarding PR with commits authored from gmail.com and eduquencher.com addresses. Detection happened during human review only; this workflow makes it an automated gate.

The check walks every non-merge commit between base and head and fails if any commit's author OR committer email isn't @nvidia.com or @users.noreply.github.com (github-noreply covers NVIDIA-org members who hide their personal email). The automated/sync-skills branch is exempt — same rationale as in the DCO check: it's the bot mirror, not a contributor.

Companion change: the catalog-pr-reviewer skill is updated with the same check inline, so reviewers see the violation locally before opening the PR rather than after CI fails.

Signed-off-by: Moshe Abramovitch <moshea@nvidia.com>
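For illustration, a minimal sketch of how such a gate could be scripted. This is not the actual workflow added by the referenced commit; the allowed email suffixes, the exempt branch name, and the base..head walk over non-merge commits come from the commit message above, while the script itself, its file name, and its argument handling are assumptions.

```python
import subprocess
import sys

ALLOWED_SUFFIXES = ("@nvidia.com", "@users.noreply.github.com")
EXEMPT_BRANCH = "automated/sync-skills"


def check_commit_emails(base: str, head: str, branch: str) -> int:
    """Return 1 if any non-merge commit between base and head has a
    non-allowlisted author or committer email, else 0."""
    if branch == EXEMPT_BRANCH:
        print("Bot mirror branch, skipping author-email check.")
        return 0

    # One line per non-merge commit: "<sha> <author email> <committer email>"
    log = subprocess.run(
        ["git", "log", "--no-merges", "--format=%H %ae %ce", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    ).stdout

    failures = []
    for line in log.splitlines():
        sha, author_email, committer_email = line.split()
        for email in (author_email, committer_email):
            if not email.lower().endswith(ALLOWED_SUFFIXES):
                failures.append(f"{sha[:12]}: {email}")

    for entry in failures:
        print(f"non-allowlisted email on commit {entry}", file=sys.stderr)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(check_commit_emails(sys.argv[1], sys.argv[2], sys.argv[3]))
```

In CI this would run against the PR's base SHA, head SHA, and source branch (invocation names hypothetical).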
Author

Hi @mosheabr and @sayalinvidia, since PR #49 references this PR as a test case for the new author check, I wanted to ask directly: is there a way for external contributions, or is this catalog limited to NVIDIA employees? Happy to go through whatever review process is needed. Thanks for your time.
Collaborator

Thanks for the contribution, @Doondi-Ashlesh! At the moment we're only accepting NVIDIA-authored skills, but stay tuned for updates on this in the future.
Onboarding type
- New product onboarding (adds `components.d/automodel.yml` for NeMo-AutoModel)
- New skill: adds the `megatron-bridge-lora-sft` skill to the existing Megatron-Bridge product

For new product onboarding — author affirmations
- Source repos: `NVIDIA-NeMo/Automodel`, `NVIDIA-NeMo/Megatron-Bridge` (org: `NVIDIA-NeMo`)
- `skills/` path used for new entries

What this PR adds
`skills/NeMo-AutoModel/automodel-expert-lora`
Covers applying LoRA to fused MoE expert layers in NeMo AutoModel via `PeftConfig` with `target_modules` and `moe_rank_scaling`. Documents the `GroupedExpertsTE` limitation and the `apply_lora_to_linear_modules` API. Confirmed from `nemo_automodel/components/_peft/lora.py` and `tests/unit_tests/_peft/test_lora_experts.py`.
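As background for what the skill documents, the core mechanism is a frozen base projection plus a trainable low-rank update. The sketch below is a generic, self-contained PyTorch illustration of that idea, not NeMo AutoModel's `PeftConfig`/`apply_lora_to_linear_modules` implementation; the class name and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Generic LoRA adapter around a frozen linear layer (illustrative only)."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)    # base projection stays frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)         # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


# Wrapping one expert projection, analogous to what target_modules selection does.
expert_fc1 = nn.Linear(1024, 4096)
adapted = LoRALinear(expert_fc1, rank=16, alpha=32)
print(adapted(torch.randn(2, 1024)).shape)  # torch.Size([2, 4096])
```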
`skills/Megatron-Bridge/megatron-bridge-lora-sft`
Covers LoRA, DoRA, and adapter export in Megatron-Bridge via the `LoRA`/`DoRA` dataclasses, `normalize_moe_lora` for MoE rank normalization, and `AutoBridge.export_adapter_ckpt` for HuggingFace PEFT export (see the generic DoRA sketch after the checklist below). Confirmed from `src/megatron/bridge/peft/lora.py`, `dora.py`, and `examples/conversion/adapter/export_adapter.py`.

All PRs
- Commits are signed off (Signed-off-by: Doondi-Ashlesh <doondiashlesh@gmail.com>)
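For the DoRA variant the skill covers, the general technique is weight-decomposed LoRA: a trainable per-output magnitude applied to the normalized base-plus-low-rank direction. The sketch below is a generic PyTorch illustration of that idea, not the Megatron-Bridge `DoRA` dataclass or its field names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoRALinear(nn.Module):
    """Generic weight-decomposed LoRA (DoRA) on a single linear layer."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: int = 32):
        super().__init__()
        # Frozen base weight/bias; weight shape is [out_features, in_features].
        self.weight = base.weight.detach()
        self.bias = base.bias.detach() if base.bias is not None else None
        # Low-rank update B @ A, starting as a no-op (B initialized to zero).
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank
        # Trainable per-output magnitude, initialized to the base weight's row norms.
        self.magnitude = nn.Parameter(self.weight.norm(dim=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        merged = self.weight + self.scale * (self.lora_b @ self.lora_a)
        direction = merged / merged.norm(dim=1, keepdim=True)
        w = self.magnitude.unsqueeze(1) * direction
        return F.linear(x, w, self.bias)


layer = DoRALinear(nn.Linear(1024, 4096), rank=16, alpha=32)
print(layer(torch.randn(2, 1024)).shape)  # torch.Size([2, 4096])
```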