7 changes: 5 additions & 2 deletions wgpu-core/src/device/resource.rs
@@ -854,7 +854,11 @@ impl Device {
                 user_closures.mappings,
                 user_closures.blas_compact_ready,
                 queue_empty,
-            ) = queue_result
+            ) = queue_result;
+            // Queue::drop is acquiring the snatch lock as well
+            drop(snatch_guard);
+        } else {
+            drop(snatch_guard);
Comment on lines +857 to +861
Member:
question: It seems like the potential for Queue::drop getting called here is when:

  1. The user calls Device::maintain on this thread (let's call it thread 1) while holding a last strong ref. to the queue on another thread (let's call that thread 2).
  2. Execution on this thread (1) progresses until we have a strong ref. to queue, and follow the branch to call queue.maintain(…).
  3. Thread (2) drops its strong ref., making the strong ref. here in thread 1 the last one.

Does that sound right? Are you using wgpu from multiple threads?

Contributor Author:

Correct - thread 1 is calling Instance::poll_all(true) in a loop, and other threads 2+ create/drop queues
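The race described above can be sketched in isolation. This is a minimal stand-in, not wgpu's actual types: `SNATCH_LOCK`, `Queue`, and `maintain` are hypothetical names illustrating why the guard must be released before the last strong ref can be dropped.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for the snatch lock (not wgpu's actual type).
static SNATCH_LOCK: Mutex<()> = Mutex::new(());

struct Queue;

impl Drop for Queue {
    fn drop(&mut self) {
        // Like wgpu's Queue::drop, this acquires the snatch lock.
        let _guard = SNATCH_LOCK.lock().unwrap();
        // ... release queue resources under the lock ...
    }
}

// Sketch of the maintain path on thread 1.
fn maintain(queue: Arc<Queue>) {
    let snatch_guard = SNATCH_LOCK.lock().unwrap();
    // ... poll work while holding the guard ...

    // Release the lock BEFORE dropping our strong ref. If thread 2 dropped
    // its ref in the meantime, this is the last one, so Queue::drop runs
    // right here and re-acquires SNATCH_LOCK; holding snatch_guard across
    // that drop would self-deadlock on a non-reentrant mutex.
    drop(snatch_guard);
    drop(queue);
}

fn main() {
    maintain(Arc::new(Queue));
    println!("no deadlock");
}
```

With the two `drop` calls reversed, the same program would hang when `maintain` holds the last reference, which is exactly the ordering hazard the added `drop(snatch_guard)` lines guard against.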

Member:
Could we add a comment (like eric's) to say:

  • Why this is valid
  • Why holding it for longer would be a problem

This kind of thing is really hard to reason about and the effects are deeply non-local so retaining as much of the knowledge as we can about the decisions that were made is super important.

};

// Based on the queue empty status, and the current finished submission index, determine the result of the poll.
@@ -909,7 +913,6 @@ impl Device {

// Don't hold the locks while calling release_gpu_resources.
drop(fence);
-        drop(snatch_guard);
Member:

question: I think it's fine—but I'm not positive—to no longer be locking the snatch lock when we take the device lost closure. Tagging in @cwfitzgerald to double-check my conclusion here. Does that sound right, Connor?

Member:

I don't see any reason why this would be problematic. The snatch lock doesn't guard the device lost closure in any way.
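The conclusion above follows a common pattern: a user-supplied callback is taken out of its own lock and invoked with no locks held. A minimal sketch under assumed names (`DEVICE_LOST_CLOSURE` and `invoke_device_lost` are illustrative, not wgpu's real API):

```rust
use std::sync::Mutex;

// Hypothetical: the device-lost closure lives behind its own lock,
// separate from the snatch lock, so taking it does not require the
// snatch lock to be held.
static DEVICE_LOST_CLOSURE: Mutex<Option<Box<dyn FnOnce(&str) + Send>>> =
    Mutex::new(None);

fn invoke_device_lost(reason: &str) {
    // Take the closure while holding only its own lock...
    let closure = DEVICE_LOST_CLOSURE.lock().unwrap().take();
    // ...then call it with no locks held, so the user callback cannot
    // re-enter and deadlock on any of our internal locks.
    if let Some(f) = closure {
        f(reason);
    }
}

fn main() {
    *DEVICE_LOST_CLOSURE.lock().unwrap() =
        Some(Box::new(|r| println!("device lost: {r}")));
    invoke_device_lost("destroyed");
}
```

Since the closure is guarded by its own lock rather than the snatch lock, removing `drop(snatch_guard)` from this path (it was already dropped earlier) does not change what the closure's access is synchronized by.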


if should_release_gpu_resource {
self.release_gpu_resources();