
Conversation

@peterdsharpe
Collaborator

PhysicsNeMo Pull Request

Description

Brings FullyConnected (both model and PN.nn variants) and UNet into compliance with new refactor standards.

Checklist

Dependencies

Review Process

All PRs are reviewed by the PhysicsNeMo team before merging.

Depending on which files are changed, GitHub may automatically assign a maintainer for review.

We are also testing AI-based code review tools (e.g., Greptile), which may add automated comments with a confidence score.
This score reflects the AI's assessment of merge readiness; it is not a qualitative judgment of your work, nor an indication that the PR will be accepted or rejected.

AI-generated feedback should be reviewed critically for usefulness.
You are not required to respond to every AI comment, but they are intended to help both authors and reviewers.
Please react to Greptile comments with 👍 or 👎 to provide feedback on their accuracy.

@peterdsharpe peterdsharpe requested a review from mnabian as a code owner January 16, 2026 16:37
@greptile-apps
Contributor

greptile-apps bot commented Jan 16, 2026

Greptile Summary

This PR brings the FullyConnected and UNet models into full compliance with the new refactor standards documented in CODING_STANDARDS/MODELS_IMPLEMENTATION.md.

Key improvements:

  • Added comprehensive NumPy-style docstrings with LaTeX math notation for all model classes and helper classes
  • Implemented jaxtyping annotations on forward() methods to explicitly document tensor shapes
  • Added input validation with torch.compiler.is_compiling() guards to catch shape mismatches at runtime
  • Enhanced test coverage with constructor tests, checkpoint attribute verification tests, and parameterized tests
  • Improved code documentation for physicsnemo.nn layer classes (FCLayer, ConvFCLayer, etc.)

The refactoring maintains backward compatibility while significantly improving code quality, documentation clarity, and developer experience. All changes align with the framework's best practices for model implementation.

Important Files Changed

  • physicsnemo/models/mlp/fully_connected.py: Refactored with improved docstrings, jaxtyping annotations, and input validation; good compliance with MOD-003 standards
  • physicsnemo/models/unet/unet.py: Comprehensive refactor with detailed docstrings for all classes, jaxtyping annotations, and validation; strong adherence to coding standards
  • physicsnemo/nn/fully_connected_layers.py: Enhanced with complete docstrings for all layer classes; excellent documentation quality and consistency
  • test/models/mlp/test_fully_connected.py: Comprehensive tests including constructor, attributes, checkpoint loading, and non-regression tests; meets MOD-008 requirements
  • test/models/unet/test_unet.py: Well-structured tests with constructor validation, checkpoint loading verification, and parameterized test cases

@peterdsharpe
Collaborator Author

Blocked by possible fixes in #1332

Collaborator

@CharlelieLrt CharlelieLrt left a comment


Mostly minor comments and questions. One bigger question though: many of the layers and submodules used in the UNet have similar counterparts defined in physicsnemo/nn (e.g., UNet block, convolutions, attention, etc.)

  • Would it be possible to reuse or consolidate them (maybe a long shot)?
  • If the ones defined in the unet.py module are useful enough and sufficiently general, would it make sense to move them to the appropriate module of physicsnemo.nn?

Collaborator

@CharlelieLrt CharlelieLrt left a comment


LGTM!

@peterdsharpe
Collaborator Author

/blossom-ci

# Torch script, need to save it as a separate model since the TS model doesn't have meta
if model.meta.jit:
-    fwd_model = torch.jit.script(model)
+    fwd_model = torch.compile(model)
Collaborator


I think this change could be confusing. Why are we replacing torch.jit.script with torch.compile under the same jit metadata flag? If the goal is to deprecate TorchScript support, it might be clearer to rename the metadata (e.g., compile or torch_compile) rather than reusing jit.

Also, not all models that are TorchScript-compatible are necessarily compatible with torch.compile, so I’m curious whether this change fails any tests.
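The reviewer's point can be made concrete with a minimal sketch. This is illustrative only and not PhysicsNeMo code: the flag names `use_torchscript` and `use_torch_compile` are hypothetical stand-ins for the metadata rename the reviewer suggests, and `backend="eager"` is used so the sketch does not require a working Inductor toolchain. Separate flags make the intent explicit instead of silently changing what `jit` means:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())

# Hypothetical metadata flags (e.g., meta.torch_compile), not real PhysicsNeMo
# attributes: distinct switches for the two incompatible compilation paths.
use_torchscript = False
use_torch_compile = True

if use_torchscript:
    # Legacy path: TorchScript requires statically analyzable Python, and a
    # model that scripts cleanly may still fail under torch.compile (or the
    # reverse), which is why reusing one flag for both is risky.
    fwd_model = torch.jit.script(model)
elif use_torch_compile:
    # Dynamo path; the default Inductor backend would also work where a
    # compiler toolchain is available.
    fwd_model = torch.compile(model, backend="eager")
else:
    fwd_model = model

out = fwd_model(torch.randn(2, 4))
print(tuple(out.shape))  # (2, 4)
```

Gating the two paths separately also lets test suites exercise each compilation mode independently, which would answer the reviewer's question about whether the switch breaks any TorchScript-compatible models.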

Collaborator

@mnabian mnabian left a comment


Left a comment about jit vs compile. Otherwise, LGTM!



3 participants