perf: avoid arena lock for miniheap reuse fast path
Change lock ordering from arena->size-class to size-class->arena.
This allows allocSmallMiniheaps() to only acquire the size-class lock
when reusing existing miniheaps from freelists, deferring arena lock
acquisition to the slow path (allocating new miniheaps).
Previously, every call to allocSmallMiniheaps() grabbed the arena lock
even though the common case (reusing miniheaps from partial/empty
freelists) never touches arena state. The arena lock protects:
- _mhAllocator (miniheap metadata allocator)
- pageAlloc() (allocating pages from arena)
- trackMiniHeap() (updating page-to-miniheap mapping)
None of these are needed when simply taking a miniheap from a freelist
and marking it as attached.
Updated all lock acquisition sites for consistency:
- AllLocksGuard: size-classes[0..N-1] -> large -> arena
- allocSmallMiniheaps: size-class lock first, arena only if needed
- pageAlignedAlloc: large lock -> arena lock
- freeMiniheap: size-class/large lock -> arena lock
- freeFor (large): large lock -> arena lock
- lock()/unlock(): include arena lock for fork handling