fix: patch pytorch aten.alias.default shard strategy #1728
Conversation
Force-pushed: 0d69746 to 6e54053
ℹ️ File Consistency Check (based on commit 047ce79, PR #1728)

✅ DTensor Policy Worker Synchronization Check: both DTensor policy worker files were modified in this PR. Please ensure that the changes are consistent between both files where applicable. This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.
Force-pushed: a6ae6d6 to 61c106f
📝 Walkthrough

This PR introduces a patch for PyTorch 2.9.0 to handle a NotImplementedError raised because `aten.alias.default` has no sharding strategy registered for DTensor.
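A minimal repro of the failure being patched, assuming a 2-GPU tensor-parallel mesh; the mesh shape and `Shard(0)` placement are illustrative, not taken from this PR:

```python
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# Run under torchrun with 2 ranks, e.g.:
#   torchrun --nproc_per_node=2 repro_alias.py
dist.init_process_group("nccl")
mesh = init_device_mesh("cuda", (2,), mesh_dim_names=("tp",))

# Shard a plain tensor across the tensor-parallel dimension.
dtensor = distribute_tensor(torch.randn(8, 4), mesh, placements=[Shard(0)])

# On torch==2.9.0 (without the patch) this raises:
#   NotImplementedError: Operator aten.alias.default does not have a
#   sharding strategy registered.
alias_dtensor = torch.ops.aten.alias.default(dtensor)
```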
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (4 passed)
Actionable comments posted: 0
🧹 Nitpick comments (3)
nemo_rl/models/policy/workers/patches.py (1)
113-128: Narrow the exception handling around `register_op_strategy` or make failures louder

Catching bare `Exception` here risks masking real programming errors in the DTensor stack. Consider either:

- Restricting to the expected failure types (e.g., `RuntimeError` / registration errors), or
- Re-raising after logging, or at least logging with the full traceback, so unexpected failures don't silently degrade behavior; see the sketch after this list.
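A minimal sketch of the narrower handler, assuming the patch wraps its registration logic in a helper; `_register_alias_strategy` is a hypothetical stand-in for the real call site in `patches.py`:

```python
import logging
import traceback

logger = logging.getLogger(__name__)


def _register_alias_strategy() -> None:
    # Hypothetical stand-in for the real DTensor register_op_strategy call
    # made by apply_torch_aten_alias_tensor_patch.
    raise RuntimeError("registration failed")


def apply_torch_aten_alias_tensor_patch() -> None:
    """Best-effort registration of a sharding strategy for aten.alias.default."""
    try:
        _register_alias_strategy()
    except (RuntimeError, NotImplementedError) as err:
        # Expected, recoverable failures: log loudly, including the traceback.
        logger.warning("alias patch not applied: %s\n%s", err, traceback.format_exc())
    # Any other exception (TypeError, AttributeError, ...) now propagates,
    # so real programming errors in the DTensor stack are not masked.
```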
nemo_rl/models/policy/workers/dtensor_policy_worker.py (1)
79-79: Keep the `__init__` docstring as the first statement

Calling `apply_torch_aten_alias_tensor_patch()` before the triple-quoted string means that string is no longer treated as the method docstring; it's just a dead expression. Recommend:

```python
def __init__(...):
    """Initialize the DTensorPolicyWorker."""
    # Apply patch to work around 'NotImplementedError: Operator aten.alias.default
    # does not have a sharding strategy registered'
    apply_torch_aten_alias_tensor_patch()
    ...
```

to preserve proper docstring semantics while still applying the patch early.
Also applies to: 161-165
tests/unit/models/policy/test_patches.py (1)
21-27: Tighten the test helper: avoid `assert False` and mark the unused `rank`

The new alias sharding test is structured well, but two small cleanups will make it more robust and linter-friendly:

- Instead of `assert False` on line 470, raise explicitly so behavior isn't affected by `python -O`:

```python
alias_dtensor = torch.ops.aten.alias.default(dtensor)
raise AssertionError(
    "Torch==2.9 should raise 'NotImplementedError: Operator aten.alias.default does not have a sharding strategy registered', "
    "but it didn't. You can:\n "
    "1. Check if you bumped your torch version which contains the fix "
    "https://github.com/pytorch/pytorch/pull/166867\n"
    "2. If yes, remove patch apply_torch_aten_alias_tensor_patch in "
    "nemo_rl/models/policy/workers/patches.py\n"
    "3. Remove the patching call in nemo_rl/models/policy/workers/dtensor_policy_worker.py and "
    "nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py\n"
    "4. Delete this test\n"
)
```

- The `rank` parameter is intentionally unused; consider renaming it to `_rank` or adding `del rank` after the docstring to silence ARG001 without changing behavior (a short sketch follows this list).

Also applies to: 452-483, 485-490
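A sketch of the suggested rename; the helper name and signature here are hypothetical, not the actual test code:

```python
def _alias_patch_worker(_rank: int, world_size: int) -> None:
    """Per-rank test body; the leading underscore marks the rank as intentionally unused (silences ARG001)."""
    assert world_size > 0
```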
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- nemo_rl/models/policy/workers/dtensor_policy_worker.py
- nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
- nemo_rl/models/policy/workers/patches.py
- tests/unit/models/policy/test_dtensor_worker.py
- tests/unit/models/policy/test_patches.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code
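An illustrative snippet for several of the naming and docstring rules above (all names hypothetical):

```python
# Upper snake_case constant.
MAX_RETRIES = 3

# Global variables take an upper snake_case name with a 'G' prefix.
G_PATCH_APPLIED = False

# Names that would start with a digit get a 'k' prefix instead.
k_99th_percentile = 0.99


def compute_retry_delay(attempt_index: int) -> float:
    """Return an exponential backoff delay in seconds (Google-style docstring)."""
    base_delay_s = 0.5  # snake_case local variable
    return base_delay_s * (2**attempt_index)
```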
Files:
- nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
- nemo_rl/models/policy/workers/dtensor_policy_worker.py
- nemo_rl/models/policy/workers/patches.py
- tests/unit/models/policy/test_patches.py
- tests/unit/models/policy/test_dtensor_worker.py
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes
Files:
- nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
- nemo_rl/models/policy/workers/dtensor_policy_worker.py
- nemo_rl/models/policy/workers/patches.py
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
Files:
- nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
- nemo_rl/models/policy/workers/dtensor_policy_worker.py
- nemo_rl/models/policy/workers/patches.py
- tests/unit/models/policy/test_patches.py
- tests/unit/models/policy/test_dtensor_worker.py
**/*.{py,sh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)
Files:
- nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
- nemo_rl/models/policy/workers/dtensor_policy_worker.py
- nemo_rl/models/policy/workers/patches.py
- tests/unit/models/policy/test_patches.py
- tests/unit/models/policy/test_dtensor_worker.py
🧬 Code graph analysis (3)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)
nemo_rl/models/policy/workers/patches.py (2)
- apply_torch_aten_alias_tensor_patch (113-128)
- apply_transformer_engine_patch (51-110)
nemo_rl/models/policy/workers/dtensor_policy_worker.py (1)
nemo_rl/models/policy/workers/patches.py (1)
- apply_torch_aten_alias_tensor_patch (113-128)
tests/unit/models/policy/test_patches.py (2)
nemo_rl/models/policy/workers/patches.py (2)
- _get_transformer_engine_file (23-48)
- apply_torch_aten_alias_tensor_patch (113-128)

tests/unit/conftest.py (1)

- distributed_test_runner (376-414)
🪛 Ruff (0.14.10)
nemo_rl/models/policy/workers/patches.py
127-127: Do not catch blind exception: Exception
(BLE001)
tests/unit/models/policy/test_patches.py
452-452: Unused function argument: rank
(ARG001)
470-470: Do not assert False (python -O removes these calls), raise AssertionError()
Replace assert False
(B011)
🔇 Additional comments (2)
tests/unit/models/policy/test_dtensor_worker.py (1)
584-587: Good addition of TP=2, SP=True coverage

The new TP=2, CP=1, SP=True cases fit the existing parameter pattern and explicitly exercise the TP+SP path the patch is targeting.
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)
89-92: Patch ordering and integration in V2 look good

Applying the Transformer Engine patch first and then `apply_torch_aten_alias_tensor_patch()` at the top of `__init__` keeps DTensor/TE compatibility fixes centralized and consistent with the v1 worker; a sketch follows.

Also applies to: 130-134
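A sketch of that ordering at the top of the V2 worker's `__init__`. The import path comes from this PR's code graph; the class name is assumed from the file name, and the real signature (elided here) takes the worker's configuration:

```python
from nemo_rl.models.policy.workers.patches import (
    apply_torch_aten_alias_tensor_patch,
    apply_transformer_engine_patch,
)


class DTensorPolicyWorkerV2:
    def __init__(self) -> None:
        """Initialize the worker, applying compatibility patches first."""
        # TE patch first, then the aten.alias.default sharding patch,
        # matching the ordering called out for the v1 and v2 workers.
        apply_transformer_engine_patch()
        apply_torch_aten_alias_tensor_patch()
```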
Signed-off-by: ruit <[email protected]>
Force-pushed: 4e961e5 to a68eefc
Signed-off-by: ruit <[email protected]>
Signed-off-by: NeMo Bot <[email protected]>
What does this PR do?
WANDB result: https://wandb.ai/nvidia/slicing?nw=nwuserruit
Fixes the sft-qwen2.5-32b-4n8g-fsdp2tp8sp-actckpt.v3 nightly test failure.
WANDB Result Link: https://wandb.ai/nvidia/slicing/runs/o8jkwpkc?nw=nwuserruit
Issues
Usage
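A hedged usage sketch for the new patch entry point; the import path comes from this PR, and calling it standalone like this is illustrative (the workers invoke it at the top of `__init__`):

```python
from nemo_rl.models.policy.workers.patches import apply_torch_aten_alias_tensor_patch

# Apply before any DTensor op dispatch so aten.alias.default has a sharding
# strategy registered on torch==2.9.0; intended to run once at worker startup.
apply_torch_aten_alias_tensor_patch()
```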
Before your PR is "Ready for review"
Pre checks:
Additional Information
Summary by CodeRabbit
Bug Fixes

- Patched the missing `aten.alias.default` sharding strategy on PyTorch 2.9.0, which previously raised a NotImplementedError in the DTensor policy workers.

New Features

- Added `apply_torch_aten_alias_tensor_patch` to `nemo_rl/models/policy/workers/patches.py`, applied at worker initialization in both DTensor policy workers.

Tests

- Added a unit test for the alias patch and new TP=2, CP=1, SP=True DTensor worker cases.