
Conversation

@RayenTian
Contributor

@RayenTian RayenTian commented Jan 6, 2026

What does this PR do?

Fixes the sft-qwen2.5-32b-4n8g-fsdp2tp8sp-actckpt.v3 nightly test failure.

WANDB result: https://wandb.ai/nvidia/slicing?nw=nwuserruit
WANDB Result Link: https://wandb.ai/nvidia/slicing/runs/o8jkwpkc?nw=nwuserruit

Issues

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 463, in forward

    logits = self.lm_head(hidden_states[:, slice_indices, :])

                          ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/_compile.py", line 53, in inner

    return disable_fn(*args, **kwargs)

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 1044, in _fn

    return fn(*args, **kwargs)

           ^^^^^^^^^^^^^^^^^^^

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/distributed/tensor/_api.py", line 349, in __torch_dispatch__

    return DTensor._op_dispatcher.dispatch(

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/distributed/tensor/_dispatch.py", line 156, in dispatch

    self.sharding_propagator.propagate(op_info)

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 327, in propagate

    OutputSharding, self.propagate_op_sharding(op_info.schema)

                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 46, in __call__

    return self.cache(*args, **kwargs)

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/ray_venvs/nemo_rl.models.policy.workers.dtensor_policy_worker_v2.DTensorPolicyWorkerV2/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 554, in propagate_op_sharding_non_cached

    raise NotImplementedError(

NotImplementedError: Operator aten.alias.default does not have a sharding strategy registered.
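
For context, a minimal repro sketch independent of the Qwen2 model, along the lines of the new unit test (the mesh size, backend, and tensor shapes here are illustrative, and a multi-rank process group launched via torchrun is assumed):

```python
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# Assumes `torchrun --nproc-per-node=2 repro.py` so that rank/world-size env vars are set.
dist.init_process_group("gloo")
mesh = init_device_mesh("cpu", (dist.get_world_size(),))

# Shard a small activation-like tensor along dim 0, mimicking a TP-sharded hidden state.
dtensor = distribute_tensor(torch.randn(8, 16), mesh, [Shard(0)])

# On torch 2.9.0 this dispatch fails with:
#   NotImplementedError: Operator aten.alias.default does not have a sharding strategy registered.
torch.ops.aten.alias.default(dtensor)
```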

Usage

  • The patch is applied automatically early in DTensorPolicyWorker and DTensorPolicyWorkerV2 initialization, so no user action is required; a hedged sketch of manual use follows below.
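A minimal sketch, assuming nothing beyond the function that patches.py exports (both workers already call it for you in __init__, so explicit use is optional):

```python
from nemo_rl.models.policy.workers.patches import apply_torch_aten_alias_tensor_patch

# Register a sharding strategy for aten.alias.default before any DTensor model code
# slices sharded activations; on torch 2.9.0 this avoids the NotImplementedError above.
apply_torch_aten_alias_tensor_patch()
```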

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Bug Fixes

    • Fixed a sharding issue that prevented certain tensor operations from executing correctly during initialization.
    • Improved patch application flow to address compatibility issues early in the process.
  • New Features

    • Enabled support for previously restricted tensor parallelism configurations.
  • Tests

    • Expanded test coverage for tensor sharding and patch application behavior.


@github-actions

github-actions bot commented Jan 6, 2026

⚠️ File Consistency Check

Check based on commit: 0d69746 (PR #1728 from ruit/slice_issue)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@github-actions

github-actions bot commented Jan 8, 2026

⚠️ File Consistency Check

Check based on commit: 6e54053 (PR #1728 from ruit/slice_issue)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@RayenTian RayenTian changed the title from "fix: patch transformer qwen2 forward" to "fix: patch pytorch aten.alias.default shard strategy" Jan 8, 2026
@github-actions

github-actions bot commented Jan 8, 2026

ℹ️ File Consistency Check

Check based on commit: 047ce79 (PR #1728 from ruit/slice_issue)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@github-actions

github-actions bot commented Jan 8, 2026

ℹ️ File Consistency Check

Check based on commit: a6ae6d6 (PR #1728 from ruit/slice_issue)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@github-actions

github-actions bot commented Jan 8, 2026

ℹ️ File Consistency Check

Check based on commit: 61c106f (PR #1728 from ruit/slice_issue)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@RayenTian RayenTian marked this pull request as ready for review January 8, 2026 09:08
@RayenTian RayenTian requested review from a team as code owners January 8, 2026 09:08
@RayenTian RayenTian added the CI:L1 Run doctests, unit tests, and functional tests label Jan 8, 2026
@coderabbitai
Contributor

coderabbitai bot commented Jan 8, 2026

📝 Walkthrough

Walkthrough

This PR introduces a patch for PyTorch 2.9.0 to handle a NotImplementedError related to aten.alias.default sharding in distributed tensor operations. The patch is applied during DTensorPolicyWorker initialization, and a previously strict error condition preventing sequence_parallel with tp_size > 1 is removed, allowing that configuration.
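
A rough sketch of what such a registration could look like. Only the function name, the use of propagate_single_input_strategy, the PyTorch 2.9.0 version assertion, and the exception logging come from this PR's summary; the internal import paths are assumptions and may differ between PyTorch builds:

```python
import torch
# Internal DTensor helpers; the exact module paths are assumptions and may move between releases.
from torch.distributed.tensor._ops._tensor_ops import propagate_single_input_strategy
from torch.distributed.tensor._ops.utils import register_op_strategy


def apply_torch_aten_alias_tensor_patch() -> None:
    """Sketch: register a sharding strategy for aten.alias.default on PyTorch 2.9.0."""
    assert torch.__version__.startswith("2.9.0"), (
        "This patch targets torch 2.9.0 only; newer releases ship the upstream fix "
        "(pytorch/pytorch#166867), so the workaround should be removed."
    )
    try:
        # aten.alias produces a view of a single input, so the single-input strategy applies.
        register_op_strategy(torch.ops.aten.alias.default)(propagate_single_input_strategy)
    except Exception as e:  # broad catch mirrors the described exception logging; see the review below
        print(f"Failed to register aten.alias.default sharding strategy: {e}")
```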

Changes

Cohort / File(s) / Change Summary

Patch Implementation: nemo_rl/models/policy/workers/patches.py
New public function apply_torch_aten_alias_tensor_patch() that registers a sharding strategy for torch.ops.aten.alias.default using propagate_single_input_strategy, with version assertion for PyTorch 2.9.0 and exception logging.

Worker Initialization Updates: nemo_rl/models/policy/workers/dtensor_policy_worker.py, nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
Added import and early invocation of apply_torch_aten_alias_tensor_patch() during initialization; removed runtime error branch that previously blocked sequence_parallel when tp_size > 1.

Unit Tests: tests/unit/models/policy/test_dtensor_worker.py, tests/unit/models/policy/test_patches.py
Added parameterized test cases for TP=2, CP=1, SP=True combinations; expanded patch test coverage for aten.alias sharding behavior and transformer engine patch idempotence and integration scenarios.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • NVIDIA-NeMo/RL#1689: Alternative approach to the aten.alias.default issue—avoids triggering the operation by switching from slicing to tensor.narrow instead of registering a sharding strategy.

Suggested labels

CI, CI:L2

Suggested reviewers

  • terrykong
  • yfw
🚥 Pre-merge checks: ✅ 4 passed

  • Title check: ✅ Passed. The title 'fix: patch pytorch aten.alias.default shard strategy' accurately describes the main change in the pull request, which is to add a patch for the PyTorch aten.alias.default operator's sharding strategy to fix a NotImplementedError.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 88.89% which is sufficient. The required threshold is 80.00%.
  • Test Results For Major Changes: ✅ Passed. PR implements targeted bug fix for DTensor sharding issue with limited scope, comprehensive tests, and documented failure reference.
  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
nemo_rl/models/policy/workers/patches.py (1)

113-128: Narrow the exception handling around register_op_strategy or make failures louder

Catching bare Exception here risks masking real programming errors in the DTensor stack. Consider either:

  • Restricting to the expected failure types (e.g., RuntimeError / registration errors), or
  • Re-raising after logging, or at least logging with full traceback, so unexpected failures don't silently degrade behavior.
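
For instance, a minimal sketch of the narrower variant (the logger name and import locations are assumptions):

```python
import logging

import torch
from torch.distributed.tensor._ops._tensor_ops import propagate_single_input_strategy
from torch.distributed.tensor._ops.utils import register_op_strategy

logger = logging.getLogger(__name__)

try:
    register_op_strategy(torch.ops.aten.alias.default)(propagate_single_input_strategy)
except (RuntimeError, NotImplementedError):
    # Log the full traceback rather than swallowing it, then surface the failure to the caller.
    logger.exception("Failed to register a sharding strategy for aten.alias.default")
    raise
```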
nemo_rl/models/policy/workers/dtensor_policy_worker.py (1)

79-79: Keep the __init__ docstring as the first statement

Calling apply_torch_aten_alias_tensor_patch() before the triple-quoted string means that string is no longer treated as the method docstring; it’s just a dead expression.

Recommend:

def __init__(...):
    """Initialize the DTensorPolicyWorker."""
    # Apply patch to work around 'NotImplementedError: Operator aten.alias.default does not have a sharding strategy registered'
    apply_torch_aten_alias_tensor_patch()
    ...

to preserve proper docstring semantics while still applying the patch early.

Also applies to: 161-165

tests/unit/models/policy/test_patches.py (1)

21-27: Tighten test helper: avoid assert False and mark unused rank

The new alias sharding test is structured well, but two small cleanups will make it more robust and linter-friendly:

  • Instead of assert False in Line 470, raise explicitly so behavior isn’t affected by python -O:
    alias_dtensor = torch.ops.aten.alias.default(dtensor)
    raise AssertionError(
        "Torch==2.9 should raise 'NotImplementedError: Operator aten.alias.default does not have a sharding strategy registered', "
        "but it didn't. You can:\n "
        "1. Check if you bumped your torch version which contains the fix "
        "https://github.com/pytorch/pytorch/pull/166867\n"
        "2. If yes, remove patch apply_torch_aten_alias_tensor_patch in "
        "nemo_rl/models/policy/workers/patches.py\n"
        "3. Remove the patching call in nemo_rl/models/policy/workers/dtensor_policy_worker.py and "
        "nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py\n"
        "4. Delete this test\n"
    )
  • The rank parameter is intentionally unused; consider renaming to _rank or adding del rank after the docstring to silence ARG001 without changing behavior.

Also applies to: 452-483, 485-490

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba46741 and 61c106f.

📒 Files selected for processing (5)
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • nemo_rl/models/policy/workers/patches.py
  • tests/unit/models/policy/test_dtensor_worker.py
  • tests/unit/models/policy/test_patches.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/patches.py
  • tests/unit/models/policy/test_patches.py
  • tests/unit/models/policy/test_dtensor_worker.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/patches.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/patches.py
  • tests/unit/models/policy/test_patches.py
  • tests/unit/models/policy/test_dtensor_worker.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/patches.py
  • tests/unit/models/policy/test_patches.py
  • tests/unit/models/policy/test_dtensor_worker.py
🧬 Code graph analysis (3)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)
nemo_rl/models/policy/workers/patches.py (2)
  • apply_torch_aten_alias_tensor_patch (113-128)
  • apply_transformer_engine_patch (51-110)
nemo_rl/models/policy/workers/dtensor_policy_worker.py (1)
nemo_rl/models/policy/workers/patches.py (1)
  • apply_torch_aten_alias_tensor_patch (113-128)
tests/unit/models/policy/test_patches.py (2)
nemo_rl/models/policy/workers/patches.py (2)
  • _get_transformer_engine_file (23-48)
  • apply_torch_aten_alias_tensor_patch (113-128)
tests/unit/conftest.py (1)
  • distributed_test_runner (376-414)
🪛 Ruff (0.14.10)
nemo_rl/models/policy/workers/patches.py

127-127: Do not catch blind exception: Exception

(BLE001)

tests/unit/models/policy/test_patches.py

452-452: Unused function argument: rank

(ARG001)


470-470: Do not assert False (python -O removes these calls), raise AssertionError()

Replace assert False

(B011)

🔇 Additional comments (2)
tests/unit/models/policy/test_dtensor_worker.py (1)

584-587: Good addition of TP=2, SP=True coverage

The new TP=2, CP=1, SP=True cases fit the existing parameter pattern and explicitly exercise the TP+SP path the patch is targeting.

nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)

89-92: Patch ordering and integration in V2 look good

Applying the Transformer Engine patch first and then apply_torch_aten_alias_tensor_patch() at the top of __init__ keeps DTensor/TE compatibility fixes centralized and consistent with the v1 worker.

Also applies to: 130-134

yuki-97
yuki-97 previously approved these changes Jan 8, 2026
@RayenTian RayenTian removed the CI:L1 Run doctests, unit tests, and functional tests label Jan 9, 2026
@github-actions

github-actions bot commented Jan 9, 2026

ℹ️ File Consistency Check

Check based on commit: 4e961e5 (PR #1728 from ruit/slice_issue)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@RayenTian RayenTian added the CI:L1 Run doctests, unit tests, and functional tests label Jan 9, 2026
@github-actions

github-actions bot commented Jan 9, 2026

ℹ️ File Consistency Check

Check based on commit: a68eefc (PR #1728 from ruit/slice_issue)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@RayenTian RayenTian requested review from terrykong and yuki-97 January 9, 2026 02:43
@terrykong terrykong enabled auto-merge (squash) January 9, 2026 04:24
@yuki-97 yuki-97 added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Jan 9, 2026
@terrykong terrykong merged commit e05eefe into main Jan 9, 2026
43 of 44 checks passed
@terrykong terrykong deleted the ruit/slice_issue branch January 9, 2026 09:23
chtruong814 pushed a commit that referenced this pull request Jan 9, 2026
