Conversation

@naanrdk commented Jan 29, 2025

Using torch.float16 directly, or the automatic mixed precision utilities in torch.cuda.amp, can significantly reduce memory usage and speed up training by running selected computations in lower precision while keeping numerically sensitive operations in float32.
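For concreteness, here is a minimal sketch of how the torch.cuda.amp pieces (autocast plus GradScaler) fit into a training loop. The model, optimizer, and data below are placeholders for illustration, not code from this repository:

```python
import torch
import torch.nn as nn

# Minimal mixed-precision training sketch using torch.cuda.amp.
# Model, optimizer, and data are stand-ins for illustration only.
device = "cuda"
model = nn.Linear(512, 10).to(device)           # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()            # scales the loss to avoid fp16 gradient underflow

for _ in range(10):                             # placeholder training loop
    inputs = torch.randn(32, 512, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad(set_to_none=True)
    # autocast runs eligible ops (e.g. matmuls) in float16 and keeps the rest in float32
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = criterion(outputs, targets)

    scaler.scale(loss).backward()               # backward pass on the scaled loss
    scaler.step(optimizer)                      # unscales gradients, then calls optimizer.step()
    scaler.update()                             # adjusts the loss scale for the next iteration
```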
