Conversation

@uael (Contributor) commented Dec 9, 2025

Description
Fix a deadlock in Device::maintain

Testing
Manually

Checklist

  • Run cargo fmt.
  • Run taplo format.
  • Run cargo clippy --tests. If applicable, add:
    • --target wasm32-unknown-unknown
  • Run cargo xtask test to run tests.
  • If this contains user-facing changes, add a CHANGELOG.md entry.

@cwfitzgerald self-assigned this Dec 9, 2025
Comment on lines +857 to +861
) = queue_result;
// Queue::drop is acquiring the snatch lock as well
drop(snatch_guard);
} else {
drop(snatch_guard);
Member

question: It seems like the potential for Queue::drop to get called here arises when:

  1. The user calls Device::maintain on this thread (let's call it thread 1) while holding the last strong ref. to the queue on another thread (let's call that thread 2).
  2. Execution on this thread (1) progresses until we have a strong ref. to the queue, and we follow the branch that calls queue.maintain(…).
  3. Thread (2) drops its strong ref., making the strong ref. here in thread 1 the last one.

Does that sound right? Are you using wgpu from multiple threads?
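
The hazard described above can be sketched with std primitives; SnatchLock, Device, Queue, and their fields below are illustrative stand-ins, not wgpu-core's actual types, and the lock choice (an RwLock whose write side is taken in Queue::drop) is an assumption for the sake of the example. The point is the drop ordering in maintain: release the snatch guard before the (possibly last) strong ref to the queue is dropped.

use std::sync::{Arc, RwLock, Weak};

// Stand-in for wgpu's snatch lock.
struct SnatchLock(RwLock<()>);

struct Queue {
    snatch: Arc<SnatchLock>,
}

impl Drop for Queue {
    fn drop(&mut self) {
        // Queue teardown takes the snatch lock, mirroring the
        // "Queue::drop is acquiring the snatch lock as well" comment.
        let _write = self.snatch.0.write().unwrap();
        // ... destroy queue-owned resources ...
    }
}

struct Device {
    snatch: Arc<SnatchLock>,
    queue: Weak<Queue>,
}

impl Device {
    fn maintain(&self) {
        let snatch_guard = self.snatch.0.read().unwrap();
        if let Some(queue) = self.queue.upgrade() {
            // ... poll fences, drain the life tracker, etc. ...

            // If another thread drops its strong ref while we run, `queue`
            // is now the last Arc. Dropping it with `snatch_guard` still
            // held would run Queue::drop, which blocks on the lock this
            // thread is read-holding: a self-deadlock.
            drop(snatch_guard); // release the snatch lock first ...
            drop(queue); // ... so Queue::drop can take it if this was the last ref
        } else {
            drop(snatch_guard);
        }
    }
}

fn main() {
    let snatch = Arc::new(SnatchLock(RwLock::new(())));
    let queue = Arc::new(Queue { snatch: Arc::clone(&snatch) });
    let device = Device { snatch, queue: Arc::downgrade(&queue) };
    device.maintain(); // safe: the guard is released before the strong ref is dropped
    drop(queue); // Queue::drop runs here with no snatch guard held on this thread
}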

Contributor Author

Correct - thread 1 is calling Instance::poll_all(true) in a loop, and other threads 2+ create/drop queues

Member

Could we add a comment (like Eric's) to say:

  • Why this is valid
  • Why holding it for longer would be a problem

This kind of thing is really hard to reason about, and the effects are deeply non-local, so retaining as much of the knowledge as we can about the decisions that were made is super important.

@ErichDonGubler (Member) left a comment

LGTM, minus some questions whose answers I'd like to fully understand before we narrow these lock regions.


// Don't hold the locks while calling release_gpu_resources.
drop(fence);
drop(snatch_guard);
Member

question: I think it's fine—but I'm not positive—to no longer be locking the snatch lock when we take the device lost closure. Tagging in @cwfitzgerald to double-check my conclusion here. Does that sound right, Connor?

Member

I don't see any reason why this would be problematic. The snatch lock doesn't guard the device lost closure in any way.
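
For reference, the general pattern under discussion (release internal locks before running user-visible work such as the device lost closure) can be sketched with std primitives. Device, DeviceInner, and handle_loss below are hypothetical names, not wgpu-core's API; the sketch only shows why calling the closure without holding the lock avoids re-entrancy problems.

use std::sync::Mutex;

// Hypothetical stand-ins; wgpu-core's real types and locking are more involved.
struct DeviceInner {
    lost_closure: Option<Box<dyn FnOnce(&str) + Send>>,
}

struct Device {
    inner: Mutex<DeviceInner>,
}

impl Device {
    fn handle_loss(&self, reason: &str) {
        // Take the closure out while holding the lock...
        let closure = {
            let mut guard = self.inner.lock().unwrap();
            guard.lost_closure.take()
            // ...and let the guard drop at the end of this block,
            // before any user code runs.
        };
        // The closure may call back into methods that lock `inner`;
        // since the guard is already released, that cannot self-deadlock.
        if let Some(cb) = closure {
            cb(reason);
        }
    }
}

fn main() {
    let device = Device {
        inner: Mutex::new(DeviceInner {
            lost_closure: Some(Box::new(|reason| println!("device lost: {reason}"))),
        }),
    };
    device.handle_loss("example");
}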

@ErichDonGubler self-assigned this Dec 9, 2025
