⚡ perf: Improve usage of sync.Pool #3767
base: main
Conversation
/gemini review
Code Review
This pull request introduces sync.Pool across a wide range of the codebase to reduce allocations and improve performance. The changes are well-implemented, with pooling added for various data structures like maps, slices, and request/response-related objects. The acquire and release patterns are correctly applied, including logic to handle oversized objects to prevent memory bloat in the pools. Several areas have been refactored to be more allocation-friendly, such as string building and header processing. Additionally, some existing bugs, like a logic error in sorting Accept headers and potential resource leaks in CookieJar, have been fixed. I've found a minor redundancy in one of the new pooling implementations, which I've commented on. Overall, this is a solid performance improvement.
```go
if used > 0 {
	clear(m)
}
```
The map is cleared here before being put back into the pool, but it's also cleared in acquireStringSliceMap when retrieved from the pool. While this doesn't cause any harm, it's redundant. To improve clarity and have a single point of responsibility for resetting the object's state, it's conventional to clear/reset objects on acquisition from a pool.
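For reference, this is roughly what the clear-on-acquisition convention looks like. The helper names mirror the PR's acquireStringSliceMap/releaseStringSliceMap, but the constant, signatures, and body below are an illustrative sketch rather than the PR's actual code:

```go
package pool

import "sync"

const stringSliceMapDefaultCap = 8 // assumed default, not taken from the PR

var stringSliceMapPool = sync.Pool{
	New: func() any {
		return make(map[string][]string, stringSliceMapDefaultCap)
	},
}

// acquireStringSliceMap always hands out an empty map. Clearing happens here,
// on acquisition, so the release side does not need to touch the contents.
func acquireStringSliceMap() map[string][]string {
	m, ok := stringSliceMapPool.Get().(map[string][]string)
	if !ok || m == nil {
		return make(map[string][]string, stringSliceMapDefaultCap)
	}
	clear(m)
	return m
}

// releaseStringSliceMap returns the map to the pool without resetting it.
func releaseStringSliceMap(m map[string][]string) {
	if m == nil {
		return
	}
	stringSliceMapPool.Put(m)
}
```

Whichever side ends up owning the reset, it should only happen once.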
```go
*bufPtr = buf
return buf, bufPtr, nil
```
The logic for handling oversized buffers in the pool is correct, but it could be slightly simplified. Instead of creating a new copied slice, you can return buf directly since it's a new allocation and not part of the pool. The bufPtr is correctly released.
```diff
-	*bufPtr = buf
-	return buf, bufPtr, nil
+	return buf, nil, nil
```
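The pattern under discussion, sketched in isolation. The pool name, the cap constants, and the three-value signature are assumptions for illustration, not the PR's exact code:

```go
package pool

import "sync"

const (
	bufferDefaultCap = 4096  // assumed defaults, not the PR's constants
	bufferMaxCap     = 65536
)

var bufferPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, bufferDefaultCap)
		return &b
	},
}

// acquireBuffer returns a buffer of n bytes. If the pooled buffer is too
// small, a fresh slice is allocated and returned with a nil pointer, because
// it never belonged to the pool; the pooled slice goes straight back.
func acquireBuffer(n int) ([]byte, *[]byte, error) {
	bufPtr, ok := bufferPool.Get().(*[]byte)
	if !ok || bufPtr == nil {
		return make([]byte, n), nil, nil
	}
	if cap(*bufPtr) < n {
		bufferPool.Put(bufPtr)
		return make([]byte, n), nil, nil
	}
	buf := (*bufPtr)[:n]
	return buf, bufPtr, nil
}

// releaseBuffer returns a pooled buffer; oversized backing arrays are dropped
// so the pool does not retain large allocations.
func releaseBuffer(bufPtr *[]byte) {
	if bufPtr == nil || cap(*bufPtr) > bufferMaxCap {
		return
	}
	*bufPtr = (*bufPtr)[:0]
	bufferPool.Put(bufPtr)
}
```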
```go
for i := range r.files {
	if f := r.files[i]; f != nil {
		ReleaseFile(f)
		r.files[i] = nil
	}
}
r.files = r.files[:0]
```
```go
for i := range r.cookie {
	if c := r.cookie[i]; c != nil {
		fasthttp.ReleaseCookie(c)
		r.cookie[i] = nil
	}
}
if cap(r.cookie) > responseCookieSliceMaxCap {
	r.cookie = make([]*fasthttp.Cookie, 0, responseCookieSliceDefaultCap)
} else {
	r.cookie = r.cookie[:0]
}
```
```diff
 TagReqHeaders: func(output Buffer, c fiber.Ctx, _ *Data, _ string) (int, error) {
-	out := make(map[string][]string, 0)
-	if err := c.Bind().Header(&out); err != nil {
+	headerMapPtr := acquireReqHeaderMap()
+	headers := *headerMapPtr
+	defer func() {
+		*headerMapPtr = headers
+		releaseReqHeaderMap(headerMapPtr)
+	}()
+
+	if err := c.Bind().Header(&headers); err != nil {
 		return 0, err
 	}
-	reqHeaders := make([]string, 0)
-	for k, v := range out {
-		reqHeaders = append(reqHeaders, k+"="+strings.Join(v, ","))
-	}
-	return output.Write([]byte(strings.Join(reqHeaders, "&")))
+
+	var (
+		written   int
+		firstPair = true
+	)
+
+	for key, values := range headers {
+		if !firstPair {
+			if err := output.WriteByte('&'); err != nil {
+				return written, err
+			}
+			written++
+		}
+		firstPair = false
+
+		n, err := output.WriteString(key)
+		written += n
+		if err != nil {
+			return written, err
+		}
+
+		if err := output.WriteByte('='); err != nil {
+			return written, err
+		}
+		written++
+
+		for i, value := range values {
+			if i > 0 {
+				if err := output.WriteByte(','); err != nil {
+					return written, err
+				}
+				written++
+			}
+
+			n, err = output.WriteString(value)
+			written += n
+			if err != nil {
+				return written, err
+			}
+		}
+	}
+
+	return written, nil
 },
```
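For context on the change above: the old implementation built a []string of key=value pairs and joined them twice with strings.Join, allocating an intermediate slice, the joined string, and a final []byte per request. The rewrite streams each pair straight into the output buffer and takes the header map from a pool, so those per-request allocations disappear.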
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main    #3767      +/-   ##
==========================================
- Coverage   91.70%   89.43%   -2.27%
==========================================
  Files         113      114       +1
  Lines       11940    13177    +1237
==========================================
+ Hits        10949    11785     +836
- Misses        728      976     +248
- Partials      263      416     +153
```

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.50.

| Benchmark suite | Current: 98f436a | Previous: a3c54bb | Ratio |
|---|---|---|---|
| Benchmark_Ctx_GetRespHeaders | 994.9 ns/op, 835 B/op, 11 allocs/op | 372.1 ns/op, 448 B/op, 5 allocs/op | 2.67 |
| Benchmark_Ctx_GetRespHeaders - ns/op | 994.9 ns/op | 372.1 ns/op | 2.67 |
| Benchmark_Ctx_GetRespHeaders - B/op | 835 B/op | 448 B/op | 1.86 |
| Benchmark_Ctx_GetRespHeaders - allocs/op | 11 allocs/op | 5 allocs/op | 2.20 |

This comment was automatically generated by workflow using github-action-benchmark.
/gemini review
Pull Request Overview
This PR implements extensive sync.Pool usage improvements across the codebase to reduce allocations and improve memory management. The changes introduce object pooling for various frequently allocated data structures such as slices, maps, and buffers throughout the application.
Key changes include:
- Addition of sync.Pool implementations for commonly allocated data structures like string slices, header maps, and temporary buffers
- Enhanced pool management with capacity limits and proper cleanup procedures
- Validation of pool object releases to ensure proper state reset
Reviewed Changes
Copilot reviewed 49 out of 49 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| state.go | Adds sync.Pool for service key slices with proper acquire/release functions |
| state_test.go | Updates tests to use pooled slice management for service keys |
| services.go | Implements sync.Pool for error slices in service operations |
| router.go | Adds pooling for removed route sets in route deletion operations |
| res.go | Introduces complex pooling system for response headers with scratch management |
| req.go | Implements pooling for IP slices and encoding order slices |
| redirect.go | Adds pools for redirect message buffers and input maps |
| path.go | Implements pooling for route segments and parser components |
| helpers.go | Extensive pooling additions for accepted types, header values, and language tags |
| middleware/*/*.go | Various middleware-specific pooling implementations for their data structures |
| client/*.go | Client-side pooling for requests, responses, cookies, and core objects |
| binder/*.go | Pooling for binding operations using string slice and file header maps |
```go
	return tagsPtr, tags
}

func releaseLanguageTagSlice(tagsPtr *([]string)) {
```
Copilot AI · Oct 5, 2025
Invalid syntax for pointer to slice type. Should be *[]string instead of *([]string).
```diff
-func releaseLanguageTagSlice(tagsPtr *([]string)) {
+func releaseLanguageTagSlice(tagsPtr *[]string) {
```
```go
select {
case <-ch:
default:
}
```
Copilot AI · Oct 5, 2025
This select statement attempts to drain the channel but may cause a deadlock if the channel is unbuffered and no goroutine is sending. Consider using a non-blocking receive or ensure the channel is buffered.
```go
	return
}

clear(buf)
```
Copilot AI · Oct 5, 2025
Using clear() on a byte slice sets all elements to zero, but the slice length remains unchanged. This should be buf = buf[:0] to reset the length properly, or handle the clearing in releaseEncryptCookieBuffer.
```go
clear(buf)
```
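For context, a small standalone illustration of the difference between clearing and re-slicing a byte slice (not code from the PR):

```go
package main

import "fmt"

func main() {
	buf := []byte("secret")

	// clear() zeroes the elements but keeps the length.
	clear(buf)
	fmt.Println(len(buf), buf) // 6 [0 0 0 0 0 0]

	// Re-slicing resets the length; the old bytes remain in the backing
	// array until overwritten, so zeroing first can still matter for secrets.
	buf = buf[:0]
	fmt.Println(len(buf), cap(buf)) // 0 6
}
```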
Code Review
This pull request introduces significant performance improvements by leveraging sync.Pool across various parts of the codebase to reduce allocations. The changes are extensive and well-implemented, with new pooling mechanisms for maps, slices, and various structs. The addition of corresponding tests for the new pooling logic is also a great practice.
I've found one critical issue in client/cookiejar.go that could lead to a panic, and a couple of minor code quality suggestions in binder/binder.go and client/request.go.
Overall, this is a great contribution to improving the framework's performance.
```go
var matches []*fasthttp.Cookie
if matchesPtr != nil {
	matches = *matchesPtr
}

matches = matches[:0]
```
This logic can cause a panic. If collectCookiesForRequest is called with matchesPtr == nil (which getByHostAndPath does), matches will be nil. Then, matches = matches[:0] on line 191 will cause a panic.
You should ensure matches is not nil before slicing, or change the logic to handle the nil case. The suggested change correctly resets the slice from the pool if available, or starts with a nil slice that append can handle safely.
```diff
 var matches []*fasthttp.Cookie
 if matchesPtr != nil {
-	matches = *matchesPtr
+	matches = (*matchesPtr)[:0]
 }
-
-matches = matches[:0]
```
```go
if m == nil {
	return make(map[string][]string, stringSliceMapDefaultCap)
}
```
```go
if m == nil {
	return make(map[string][]*multipart.FileHeader, fileHeaderSliceMapDefaultCap)
}
```
```go
keysPtr, keys := acquireHeaderKeySlice(len(peekKeys))
if len(peekKeys) > 0 {
	keys = keys[:len(peekKeys)]
	copy(keys, peekKeys) // It is necessary to have immutable byte slice.
```
The comment // It is necessary to have immutable byte slice. is correct, but the implementation copy(keys, peekKeys) does not achieve this. It only performs a shallow copy of the slice headers. The underlying byte arrays of peekKeys are not copied, so the keys slice still contains references to fasthttp's internal buffers, which can be modified. This makes subsequent use of utils.UnsafeString unsafe if the buffers are modified.
While this might be an intentional performance trade-off, the comment is misleading as it suggests the data is being made immutable. Consider either performing a deep copy for safety or updating the comment to reflect the unsafe nature of this operation.
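If the safer route is taken, a deep copy would look roughly like this. The helper below is illustrative only; it mirrors the names in the snippet above but is not the PR's code:

```go
// copyHeaderKeys copies both the outer slice and each inner byte slice, so the
// result no longer aliases fasthttp's internal buffers.
func copyHeaderKeys(dst, src [][]byte) [][]byte {
	dst = dst[:0]
	for _, k := range src {
		dst = append(dst, append([]byte(nil), k...))
	}
	return dst
}
```

This restores true immutability at the cost of one allocation per header key, which is presumably the overhead the pooled shallow copy was meant to avoid; whichever trade-off is kept, the comment should describe it accurately.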